Let W be “world knowledge”, and assume that each additive increment in W results in a multiplicative improvement in miniaturization, and thus in computer memory and speed V.

So,

V = exp(W)

——————

In the old days, assume an essentially constant number of humans worked unassisted at a steady pace to increase W at a steady rate:

dW/dt = 1

So,

W = t

and

V = exp(t)

which is a regular Moore’s law.
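A quick numeric sketch of why this is Moore’s law: with W = t and V = exp(t), speed doubles every log(2) time units, a constant doubling time (the time scale here is arbitrary, not years):

```python
import math

# With dW/dt = 1 we get W = t, so V = exp(W) = exp(t): speed doubles
# every log(2) ~ 0.693 time units -- a constant doubling time, which is
# the usual statement of Moore's law.
def V(t):
    return math.exp(t)

doubling_time = math.log(2.0)
for t in (0.0, 5.0, 10.0):
    assert abs(V(t + doubling_time) / V(t) - 2.0) < 1e-9
print("doubling time:", round(doubling_time, 3))  # 0.693
```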

——————-

Now, suppose instead W is created solely by computers, and increases at a rate proportional to computer speed.

Then:

dW/dt = V

giving

dW/exp(W) = dt

This solves to

W = log(-1/t)

and

V = -1/t

W and V rise very slowly when t<<0, might be mistaken for exponential around t = -1, and have a glorious singularity at t = 0.
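The solution is easy to verify numerically; a minimal check in Python, comparing a central-difference derivative of W(t) = log(-1/t) against exp(W) for t < 0:

```python
import math

# W(t) = log(-1/t), valid for t < 0; check dW/dt = exp(W) = V = -1/t
# by central differences.
def W(t):
    return math.log(-1.0 / t)

h = 1e-6
for t in (-10.0, -1.0, -0.1):
    dW = (W(t + h) - W(t - h)) / (2.0 * h)
    assert abs(dW - math.exp(W(t))) < 1e-4, t
print("dW/dt = exp(W) holds at t = -10, -1, -0.1")
```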

——————-

Most realistically, assume humans keep working at a steady pace, but are gradually overtaken by contributions from growing computer power:

dW/dt = 1+V, giving dW/(1+exp(W)) = dt

Which solves to

W = log(1/(exp(-t)-1)) and V = 1/(exp(-t)-1)

Unsurprisingly, with this equation, W increases linearly when t<<0, curves up like an exponential around t = -1, and rises to a singularity at t = 0.
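This solution can be checked the same way; a small Python sketch, also confirming the near-linear growth far in the past:

```python
import math

# W(t) = log(1/(exp(-t) - 1)), valid for t < 0; check that it satisfies
# dW/dt = 1 + exp(W), the humans-plus-computers equation.
def W(t):
    return math.log(1.0 / (math.exp(-t) - 1.0))

h = 1e-6
for t in (-20.0, -1.0, -0.1):
    dW = (W(t + h) - W(t - h)) / (2.0 * h)
    assert abs(dW - (1.0 + math.exp(W(t)))) < 1e-3, t

# Far in the past the human term dominates and growth is nearly linear:
print(round(W(-19.0) - W(-20.0), 3))  # ~1 unit of W per unit time
```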

——————–

The assumption that

V = exp(W)

is surely too optimistic.

I was thinking in terms of independent innovations. For instance, one might be an algorithmic discovery (like N log N sorting) that lets you get the same result with half the computation. Another might be a computer organization (like RISC) that lets you get twice the computation with the same number of gates.

Another might be a circuit advance (like CMOS) that lets you get twice the gates in a given space. Others might be independent speed-increasing advances, like size-reducing copper interconnects and capacitance-reducing silicon-on-insulator channels. Each of those increments of knowledge more or less multiplies the effect of all of the others, and computation would grow exponentially in their number.
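The independence assumption amounts to a product of factors; a toy sketch in Python (the uniform 2x figures are made up for illustration, not measurements):

```python
# Toy version of the independence assumption: each advance multiplies
# overall computation by some factor (the 2x figures are illustrative),
# so computation is exponential in the number of advances, i.e. in W.
advances = {
    "N log N sorting": 2.0,       # same result, half the computation
    "RISC organization": 2.0,     # twice the computation per gate
    "CMOS circuits": 2.0,         # twice the gates in a given space
    "copper interconnect": 2.0,   # illustrative speed factor
    "silicon-on-insulator": 2.0,  # illustrative speed factor
}
speedup = 1.0
for factor in advances.values():
    speedup *= factor
print(len(advances), "advances ->", speedup, "x")  # 5 advances -> 32.0 x
```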

But, of course, a lot of new knowledge steps on the toes of other knowledge, by making it obsolete, or diluting its effect, so the simple independent model doesn’t work in general. Also, simply searching through an increasing amount of knowledge may take increasing amounts of computation.

I played with the

V = exp(W)

assumption to weaken it, and observed that the singularity remains if you assume processing increases more slowly, for instance

V = exp(sqrt(W))

or

V = exp(W^(1/4)).

Only when

V = exp(log(W)) (i.e. V = W)

does the progress curve subside to an exponential.
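The contrast can be probed numerically. A crude forward-Euler sketch in Python, timing how long W takes to reach large thresholds under dW/dt = exp(sqrt(W)) (finite-time blow-up: the time saturates) versus dW/dt = W (merely exponential: the time keeps growing like log of the threshold). The step size and the overflow cap are arbitrary choices:

```python
import math

# Crude forward-Euler probe of dW/dt = rate(W): time for W to climb
# from w0 to a huge threshold.
def time_to_reach(rate, threshold, w0=1.0, h=1e-4):
    w, t = w0, 0.0
    while w < threshold:
        w += h * rate(w)
        t += h
    return t

# exp argument capped at 700 only to avoid float overflow once the
# blow-up is already unmistakable.
fast = lambda w: math.exp(min(math.sqrt(w), 700.0))  # V = exp(sqrt(W))
slow = lambda w: w                                   # V = W

for thr in (1e3, 1e6, 1e9):
    # fast saturates near a fixed time; slow grows with log(threshold)
    print(thr, round(time_to_reach(fast, thr), 2), round(time_to_reach(slow, thr), 2))
```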

V = W and V = W^2 (!)

——————

Suppose computing power per computer simply grows linearly with total world knowledge, but that the number of computers also grows the same way, so that the total amount of computational power in the world grows as the square of knowledge:

V = W*W

also

dW/dt = V+1

As before, this solves to

W = tan(t) and V = tan(t)^2,

which has lots of singularities (I like the one at t = pi/2).
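A quick finite-difference check in Python that W = tan(t) indeed satisfies dW/dt = 1 + W^2:

```python
import math

# Check that W = tan(t) solves dW/dt = 1 + V with V = W*W,
# i.e. dW/dt = 1 + tan(t)^2, via central differences.
h = 1e-6
for t in (-1.0, 0.0, 1.0, 1.5):
    dW = (math.tan(t + h) - math.tan(t - h)) / (2.0 * h)
    assert abs(dW - (1.0 + math.tan(t) ** 2)) < 1e-3, t
print("dW/dt = 1 + W^2 holds; W = tan(t) diverges at t = pi/2")
```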

——————–

Unfortunately the transitional territory between the merely exponential V=W and the singularity-causing V=W^2 is analytically hard to deal with. I assume just before a singularity appears, you get non-computably rapid growth!

Simple Equations for Vinge’s Technological Singularity

Hans Moravec, February 1999

http://www.frc.ri.cmu.edu/~hpm/project.archive/robot.papers/1999/singularity.html
