[–]gwern 32 points (12 children)

So von Neumann invented the Singularity after all! Toward the bottom of the Life obit (25 February 1957, pg89-104):

Machines creating new machines

After the last visitor had departed Von Neumann would retire to his second-floor study to work on the paper which he knew would be his last contribution to science. It was an attempt to formulate a concept shedding new light on the workings of the human brain. He believed that if such a concept could be stated with certainty, it would also be applicable to electronic computers and would permit man to make a major step forward in using these “automata.” In principle, he reasoned, there was no reason why some day a machine might not be built which not only could perform most of the functions of the human brain but could actually reproduce itself, i.e., create more supermachines like it. He proposed to present this paper at Yale, where he had been invited to give the 1956 Silliman Lectures.

This is interesting. As described, it can't be a von Neumann machine, because it's almost a decade too late; the 'paper' could only have been the short work published as The Computer and the Brain, which doesn't cover replicators but rather how the brain works, how a Turing machine can emulate it, and how much computation is necessary*; and it sounds all wrong for a simple self-replicator anyway. There's a quote from von Neumann about a 'singularity in the future of humanity'; Vinge in 1993 says:

...In the 1950s very few saw it: Stan Ulam [1958] paraphrased John von Neumann as saying:

One conversation centered on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

Von Neumann even uses the term singularity, though it appears he is thinking of normal progress, not the creation of superhuman intellect. (For me, the superhumanity is the essence of the Singularity. Without that we would get a glut of technical riches, never properly absorbed.)

* Incidentally, Kurzweil wrote the preface to the third edition of The Computer and the Brain, specifically says that von Neumann saw the Singularity coming and that this was the motivation for the book, and notes the Ulam quote's use of 'singularity'! I've skimmed the book, since it's so short; unfortunately, it is radically incomplete, just sort of stopping in the middle of a digression about the arbitrariness of human languages and the parallelism of thought, and lacks any kind of conclusion or speculation section, and the earlier material doesn't provide any smoking gun restating the Life quote.

As Vinge points out, as worded it's not clear whether he's thinking of a Good 1959/1965-like argument about intelligent machines making more intelligent machines, i.e. the Singularity-with-a-capital-S. It could just as easily refer to something like what he discussed in his 1955 essay "Can We Survive Technology?" about the need for a world-government due to nukes, which are 'technology' as well, and the timing is ambiguous since both nukes & computing were major focuses at the time. Ulam provides no particular context (other than, perhaps, previously noting von Neumann's intense interest in history & long-term trends, epitomized by a near-photographic memory of Gibbon's Decline and Fall of the Roman Empire), and it's in a rather miscellaneous section. But this Life obit quote, 8 years before Good, makes that interpretation untenable - he's clearly thinking about brain-like computers making 'more supermachines', and was discussing the concept with enough people that they would tell it to a journalist writing an obit.

lukeprog points out that in his Singularity history anthology, von Neumann in 1948 & 1949 clearly believes automatons could construct more powerful automatons than themselves, which shows he did believe in the missing piece at that point, so it seems peculiar to suggest that he had abandoned or forgotten it despite all the successes of computers & his research program by 1957:

1948 ("The general and logical theory of automata"):

..."complication" on its lower levels is probably degenerative, that is, that every automaton that can produce other automata will only be able to produce less complicated ones. There is, however, a certain minimum level where this degenerative characteristic ceases to be universal. At this point automata which can reproduce themselves, or even construct higher entities, become possible.

1949 ("Theory of self-reproducing automata", pg80):

There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.

You can try to argue that maybe Good still has priority, because Good explicitly wrote about a recursive loop of improvements while the Life summary, strictly read, is missing the piece of recursive improvement (merely more supermachines 'like it'). But given von Neumann's attitude on 'ever-accelerating progress' to Ulam and elsewhere and the breakdown of 'human' affairs, the confident extrapolations in The Computer and the Brain, the belief that automation would be a big part of boosting science/tech ("Can We Survive Technology?" etc), and the remarkable echo of von Neumann's 'explosive' in Good's 'intelligence explosion' (on top of Vinge's echo of 'singularity'), that seems like a very big stretch to me.

I wonder if somewhere in his unpublished papers or letters there is a more detailed description of what he thought. It seems improbable that some weekly magazine journalist would be the only person to hear about his speculations - posthumously at that. It wouldn't be surprising if there's something there but no one has noticed: 'Singularity' as an idea or phrase only began taking off around 2000 (I noticed this strongly while doing some searches in LexisNexis to see if I could find any other old media articles about von Neumann mentioning either "singularity" or "supermachines"), and of course, I only just noticed this Life obituary while reading it for amusement value.

[–][deleted] 4 points (1 child)

Interesting. As you note, it's crucial whether he meant the machines would create improved machines or just copies. I wonder who the author (Clay Blair Jr.) talked to for that paragraph. Too bad that journalism doesn't have the same citation norms as academia, or we'd know.

I wonder, if he had lived to today, whether von Neumann would still subscribe to 'ever-accelerating progress'. It seems kind of hard to claim that progress was faster in 1960-2020 than it was in 1900-1960 (roughly von Neumann's life)... if anything, maybe it was slower. I think a belief today in a coming singularity requires an inflection point first, after which technological progress accelerates again.

[–]gwern 9 points (0 children)

I wonder too. He must've talked to the family, because where else would he have gotten all those historical and recent photos? But he also directly quotes a number of other people - at least two IAS faculty, friends, his second wife, possibly his brother, some others. Ulam, or maybe even I.J. Good? (Good hadn't yet moved to the USA & Virginia Tech in 1957, but WP informs me that around then he did have some sort of consulting arrangement with IBM and an associate status with Princeton - which of course hosts the IAS, and the computing world was a small one, not that physical distance was any bar for von Neumann or a journalist with an expense account.)

I think he would be deeply relieved that the Cold War didn't end in nuclear war, and that would override any disappointment that many of the things he foresaw didn't come true (for a mix of technical barriers and very changed attitudes towards risk & the environment) - we definitely don't have energy too cheap to meter, nuclear control of the weather, or widespread use of nuclear transmutation. (See some of his expectations in "Can We Survive Technology?") Computing power, however, has advanced as rapidly as he or Turing or Good could & did hope. So turning his expectations towards AI seems like something entirely possible for him. It truly is a pity that he didn't live as long as, say, Freeman Dyson...

[–]disumbrationist 3 points (1 child)

In this speech from late 1955, von Neumann thinks it's unlikely that intelligent machines will be created anytime soon:

There have been developed, especially in the last decade, theories of decision-making – the first step in its mechanization. However, the indications are that in this area, the best that mechanization will do for a long time is to supply mechanical aids for decision-making while the process itself must remain human. The human intellect has many qualities for which no automatic approximation exists. The kind of logic involved, usually described by the word "intuitive", is such that we do not even have a decent description of it. The best we can do is to divide all processes into those things which can be better done by machines and those which can be better done by humans and then invent methods by which to pursue the two. We are still at the very beginning of this process.

(Source: "The Impact of Recent Developments in Science on the Economy and on Economics". Couldn't find a link, but it's in his Collected Works Volume 6, which is available on libgen)

Not sure how to square this with the Life quote. I guess it's possible he changed his mind in the last few months of his life.

[–]gwern 8 points (0 children)

I don't see any particular contradiction between the Life obit's "some day" and his speech's "for a long time".

As he points out in The Computer and the Brain, human-equivalent computing power would require something like another >10 orders of magnitude more power & miniaturization, which is not something he could have expected soon (and IIRC, Turing suggested that human-equivalence wouldn't come until at least the 1990s, half a century later, which is surely a 'long time'). So far, that is indeed how it has played out: computers as decision-making support, with difficulties in automating 'intuitive' thinking (such as, say, evaluating Go boards?).

[–]zergling_Lester 2 points (7 children)

lukeprog points out that in his Singularity history anthology, von Neumann in 1948 & 1949 clearly believes automatons could construct more powerful automatons than themselves, which shows he did believe in the missing piece at that point, so it seems peculiar to suggest that he had abandoned or forgotten it despite all the successes of computers & his research program by 1957:

1948 ("The general and logical theory of automata"):

..."complication" on its lower levels is probably degenerative, that is, that every automaton that can produce other automata will only be able to produce less complicated ones. There is, however, a certain minimum level where this degenerative characteristic ceases to be universal. At this point automata which can reproduce themselves, or even construct higher entities, become possible.

1949 ("Theory of self-reproducing automata", pg80):

There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative, but above which the phenomenon of synthesis, if properly arranged, can become explosive, in other words, where syntheses of automata can proceed in such a manner that each automaton will produce other automata which are more complex and of higher potentialities than itself.

Those two quotes seem to be making a much lower-level claim: basically, he's grasping for the idea that you need some minimal size to achieve a quine; below that you can only output simpler stuff, above that you can add whatever you want to the output.

This has very little to do with the concept of the Singularity, since this threshold is pretty low (though von Neumann possibly didn't know that).
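To make the quine reading concrete, here's a minimal sketch in Python (purely illustrative, and obviously not anything von Neumann wrote): below the critical size a program can't quote its own source; at it, you get exact self-reproduction; past it, an arbitrary payload rides along for free.

```python
# A quine: a program just big enough to quote its own source.
# Comment lines aside, these two lines print themselves back exactly:
src = 'src = %r\nprint(src %% src)'
print(src % src)

# Past that threshold, an arbitrary payload can be carried along unchanged;
# the payload string is a stand-in for "whatever you want in the output":
payload = 'anything at all'
src = 'payload = %r\nsrc = %r\nprint(src %% (payload, src))'
print(src % (payload, src))
```

Note that the payload is copied verbatim, but nothing in the output is any more complex than the program that produced it - which is exactly the point in dispute below.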

Btw, I was surprised that the word "quine" was coined in 2000 by Douglas Hofstadter in GEB. Also, how do I read the full text of https://onlinelibrary.wiley.com/doi/pdf/10.1002/spe.4380020411 ? edit: I mean, on second thought, quines were invented by Goedel (and I think there's truly nothing like that earlier), but I'd like to check out that paper too.

[–]gwern 2 points (5 children)

Those two quotes seem to be making a much lower-level claim: basically, he's grasping for the idea that you need some minimal size to achieve a quine; below that you can only output simpler stuff, above that you can add whatever you want to the output.

No, he's making a much stronger claim than simply 'you can add something'. A quine which can have something arbitrary added as a payload isn't 'explosive', nor does 'each automaton' repeatedly construct 'higher entities' in an indefinitely exploding loop of automatons & complexity. That is very Singularity-style, especially read in conjunction with the other two quotes and considering how close in time they are to Good.

Also, how do I read the full text of https://onlinelibrary.wiley.com/doi/pdf/10.1002/spe.4380020411 ?

Libgen.

[–]zergling_Lester 1 point (4 children)

No, he's making a much stronger claim than simply 'you can add something'.

The threshold he is talking about is definitely that of a quine, judging by the lower bound description.

The stuff above it describes a negative rather than a positive liberty: the degenerative nature of copying no longer has a hold on you, and you can do anything... that you can manage to do.

It does seem that he followed it in the obvious direction of making positive use of this liberty, sure. But I think it's appropriate to make a point of order: in that comment, the quine stuff was the speculation, and the Singularity stuff is a speculation stacked on top of that speculation.

Libgen.

Thanks, they went the "figure out how to print a quote by abusing quoting" route with no interesting closing thoughts. Goedel did better, lol.

[–]gwern 2 points (3 children)

The threshold he is talking about is definitely that of a quine, judging by the lower bound description.

I don't think it is. People say the same thing about intelligence and self-improvement. Again, it makes no sense to describe a quine as an 'explosive' process in which 'each automaton will produce other automata which are more complex and of higher potentialities than itself' (I'm going to keep quoting it at you because it's very clear). A quine does nothing like that. It reproduces itself, once, and if you tuck in a payload or easter egg, it reproduces that too; it doesn't become 'more complex' or create a new quine of 'higher potentialities than itself', which could then create a new quine of higher potentialities, and so on explosively. Your reading of it as a simple quine just doesn't work and requires ignoring almost the entirety of the second quote in addition to the clause in the first one.

[–]zergling_Lester 1 point (2 children)

The threshold he is talking about is definitely that of a quine, judging by the lower bound description.

I don't think it is.

Ok, so I'll keep quoting these at you:

..."complication" on its lower levels is probably degenerative, that is, that every automaton that can produce other automata will only be able to produce less complicated ones. There is, however, a certain minimum level where this degenerative characteristic ceases to be universal.

There is thus this completely decisive property of complexity, that there exists a critical size below which the process of synthesis is degenerative

The description that bounds that threshold/level/size from below is that of a quine. Like, this is just truth, do you disagree even?

The question is how speculative he was regarding the consequences of the possibility of getting over that threshold. And I consider that to be speculation, like, I don't really care much.

A quine does nothing like that. It reproduces itself, once, and if you tuck in a payload or easter egg, it reproduces that too

👉😏👉 here's where you're wrong, kiddo: it can also execute the payload.
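In the same illustrative Python vein as above, a sketch of a quine whose payload is not just reproduced in the output but also run as code:

```python
# A quine whose payload is executed, not merely copied.
# Running it prints the four code lines below, then "payload executed: 4".
payload = "print('payload executed:', 2 + 2)"
src = 'payload = %r\nsrc = %r\nprint(src %% (payload, src))\nexec(payload)'
print(src % (payload, src))
exec(payload)
```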

edit: though I was wrong, he wasn't speculating about that level of complexity existing, he was absolutely certain, as he should have been.

[–]gwern 1 point (1 child)

The description that bounds that threshold/level/size from below is that of a quine. Like, this is just truth, do you disagree even?

No. He doesn't even say it has to recreate itself. Why can't it create different automata, 'other automata', specifically 'higher entities'? It's an 'or', not an 'and'. Logically, you can have an intelligence explosion where no entity is capable of copying itself, as long as they are able to create another entity smarter than themselves (or I should say, 'more complex and of higher potentialities'); imagine a superintelligent automaton with the additional rule 'thou shalt not copy thyself', which can make a more intelligent automaton but not itself - an intelligence explosion without quining. Self-replication or quining is sufficient to prevent degeneration, by construction, but not sufficient to create 'explosive' processes (otherwise, you wouldn't need to 'properly arrange' anything), and not obviously necessary. It is a limit, of sorts, but not the relevant one, and again, it makes little sense to force what he says into referring solely to quines.
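A toy sketch of that hypothetical (Python for concreteness; 'more complex' is reduced to a bare counter, so this shows only the logical shape of the argument, nothing more): each automaton can construct a strictly more capable successor while having no copy-thyself operation at all.

```python
# Toy model: the automaton of generation n can 'solve' problems only up to
# size n, and the one thing it can construct is its generation-(n+1) successor.
def make_automaton(n):
    def solve(size):
        if size > n:
            raise ValueError(f"generation {n} can't handle size {size}")
        return sum(range(size))  # stand-in for useful cognitive work
    return {
        "generation": n,
        "solve": solve,
        # no self-copy operation is provided - only a strictly stronger successor:
        "build_successor": lambda: make_automaton(n + 1),
    }

a = make_automaton(1)
while a["generation"] < 10:  # the chain ascends without any quining
    a = a["build_successor"]()
print(a["generation"], a["solve"](10))  # generation 10 handles what 1 couldn't
```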

And I consider that to be speculation, like, I don't really care much.

Given that the whole question here is whether von Neumann was the first to speculate about a Singularity...

[–]zergling_Lester 1 point (0 children)

No. He doesn't even say it has to recreate itself.

No, as in you disagree with that or you disagree with my point?

Logically, you can have an intelligence explosion where no entity is capable of copying itself, as long as they are able to create another entity smarter than themselves

What. How is it possible to be able to create an entity more complicated than oneself, but not be able to create an entity that's less complicated than that?

And while I wait with bated breath for some clever scheme of yours, I have to point out that John von Neumann was certainly familiar with the concept of Turing-completeness, so to imply something cleverly outside of that domain, in quotes for pop-sci journals - no, that couldn't have happened.

I edited my comment above to admit that I was wrong: he wasn't speculating about that level of complexity existing, he was absolutely certain, as he should have been. So he could have been speculating about an intelligence, or at least complexity, explosion above the quine threshold.

But the limit/size John von Neumann was talking about in the first parts of the respective quotes was definitely that of a quine. I don't even know how we two can RATIONALLY DISCUSS it because it's so obviously true. Help me with that if you can, or concede that point.

[–]anatoly 2 points (0 children)

Btw, I was surprised that the word "quine" was coined in 2000 by Douglas Hofstadter in GEB.

GEB is from 1979, not 2000.