
[–]lazygraduatestudent 10 points11 points  (45 children)

I study complexity theory. I'm not 100% on board with Gwern here. Let me try a few clarifications.

Constant factors, worst-case vs. average-case behavior, unproven assumptions like P ≠ NP, etc. are all valid and well-known objections to complexity barriers. But having said that, many problems really are difficult in practice AND have complexity-theoretic reasons why they "ought to be" truly hard. Examples of such problems can often come from chaotic systems, so predicting chaotic systems really well might be something AIs can never do.

Still, that doesn't mean AI can't kill us all. Let's take a concrete example: evaluating who wins a Go game (and by how much) from a randomly chosen initial position. There is almost certainly no good algorithm for this task, and it wouldn't be too surprising if converting the entire galaxy into a giant quantum computer would not let a super-intelligent AI solve an instance of this problem before the heat death of the universe.
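
As a rough back-of-envelope for that scale claim (the figures below are commonly quoted order-of-magnitude estimates, assumed for illustration rather than taken from anywhere in this thread):

```python
# Naive full-width search of Go's game tree vs. an absurdly generous compute budget.
# Every number here is a crude, hedged assumption.
game_tree = 250 ** 150            # ~250 legal moves per turn over ~150 turns: ~10^360 leaves
atoms = 10 ** 80                  # rough count of atoms in the observable universe
ops_per_atom_per_sec = 10 ** 43   # one operation per Planck time, wildly optimistic
seconds = 10 ** 107               # ~10^100 years, a common heat-death-scale figure
budget = atoms * ops_per_atom_per_sec * seconds
print(f"tree ~1e{len(str(game_tree)) - 1}, budget ~1e{len(str(budget)) - 1}")
# -> tree ~1e359, budget ~1e230: brute force falls short by over 100 orders of magnitude,
#    though of course exhaustive search is not the only conceivable approach.
```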

But that doesn't matter: the AI may not need to fully solve Go; it may be satisfied with consistently crushing humans at it. See, if the task is hard, humans can't do it either.

The takeaways I'd draw from complexity theory are the following.

  1. Modeling super-intelligent AIs as omnipotent is probably a bad idea. Many tasks the AI might encounter - such as accurately predicting how the world will respond to its political maneuvers - might be fundamentally impossible.

  2. A result in computability theory says that all powerful enough machines can simulate each other, albeit some much slower than others (this is Turing completeness). It's possible that, by analogy, all smart enough agents can understand each other, just with some being much slower than others. If the analogy holds, it means no AI can be fundamentally incomprehensible to a human - it would merely have to be comprehended at a slowdown of (say) 1,000,000. (A toy sketch of what "simulate" means appears after this list.)

  3. Some things that we know should be possible are really, really, really hard for humans to do. This includes settling P vs. NP. It could also include building an AGI (it might be difficult on a scale of centuries or millennia).

  4. If the problem of designing intelligent systems is sufficiently hard, even an AGI might suck at it. Like, if humans take 50 years to build an AGI that thinks as fast as a human, then even if you give that AGI way more computing power, it might take it 100 years to improve its own design by a factor of 2. In other words, FOOM is not at all guaranteed.
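
A minimal sketch of what "simulate each other" means in point 2 (a toy illustration under assumed conventions, not anything from the argument above): a few lines of Python can step any Turing machine given as a transition table, and the only costs are time and tape.

```python
from collections import defaultdict

def run_tm(transitions, tape, state="start", blank="_", max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, new_symbol, move in {-1, 0, +1})."""
    cells = defaultdict(lambda: blank, enumerate(tape))   # infinite-ish tape
    head, steps = 0, 0
    while state != "halt" and steps < max_steps:
        state, cells[head], move = transitions[(state, cells[head])]
        head += move
        steps += 1
    out = "".join(cells[i] for i in range(min(cells), max(cells) + 1))
    return out, steps

# Example: a machine that flips every bit until it reaches a blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(flip, "0110"))   # -> ('1001_', 5)
```

The catch, as the replies below push on, is that "given enough time and memory" is doing all of the work.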

[–]gwern 3 points4 points  (20 children)

Examples of such problems can often come from chaotic systems, so predicting chaotic systems really well might be something AIs can never do.

I would consider that to fall under the environment-modification and control argument. "All processes that are stable we shall predict. All processes that are unstable we shall control." And if AIs can't control such systems, it's hard to see how humans can use those chaotic systems to protect themselves or control the AIs.

See, if the task is hard, humans can't do it either.

While a cute reversal, that doesn't really work because it leaves open the equivalency argument that Naam is trying to make in the first place: "AIs can't solve harder problems than humans can, so everyone is equally powerful".

A result in computability theory says that all powerful enough machines can simulate each other, albeit some much slower than others (this is Turing completeness).

I think you need to throw in some important caveats there, like having enough additional memory to model the other. There's no reason to think that a human with 80 billion neurons can exactly simulate an AI with 160 billion neurons. Don't these simulation results usually come with dire slowdowns as well? It is not very helpful to model something at a slowdown of 1 million, especially if you're trying to control that something.

It could also include building an AGI (it might be difficult on a scale of centuries or millennia).

I don't know, I think building an AGI has been going well. The whole field is, what, 60 years old?

[–]lazygraduatestudent 2 points3 points  (19 children)

"All processes that are stable we shall predict. All processes that are unstable we shall control."

I disagree with that intuition. There are some problems, like solving Go, that are almost certainly computationally intractable (even to an AI) but are not really "unstable". There is at least some intuition from CS that hard problems come up very naturally, and can't always be circumvented. Again, a super-intelligence will dominate humans even on hard problems, but it would not be omnipotent.

Let's try another concrete example: cryptography. Under plausible cryptographic assumptions, it should be possible for humans to use cryptographic schemes to prevent a super-intelligence from reading their communications. I know today's software security systems are a terrible mess, but that has more to do with the way computers and the internet are set up than with any mathematical barriers. If humans knew the stakes, and were willing to use computing devices that (e.g.) don't allow installation of third-party programs, computationally-unbreakable crypto should be achievable.

Given that an AGI might not be able to break humans' cryptography, it is very far from being omnipotent.

I think you need to throw in some important caveats there, like having enough additional memory to model the other. There's no reason to think that a human with 80 billion neurons can exactly simulate an AI with 160 billion neurons.

With some pencil and paper, they could, at least in theory. Of course, I agree that's not too interesting if the slowdown is 1 million, but it may still be better to model an AGI as a very sped-up human rather than a god. It gives better intuition to questions like can the AGI solve Go, can it deduce the laws of physics from a single frame of a video, can it perfectly predict how humans will react to its political moves, etc.

I don't know, I think building an AGI has been going well. The whole field is, what, 60 years old?

That's a matter of how you look at it. The whole field is 60 years old, and computers still can't do things that were considered easy at the beginning (conversation at a child's level, basic vision tasks, etc.). We've adjusted our expectations so far downwards that we now consider an AI that detects if an image has a cat - given millions of images for training - to be impressive.

[–]gwern 4 points5 points  (13 children)

There is at least some intuition from CS that hard problems come up very naturally, and can't always be circumvented.

But Go boards do not come up naturally. von Neumann's weather example is a good one: weather comes up naturally and is a chaotic system par excellence, but to the extent we can predict it (which has increased hugely since then), we can optimize around it, and to the extent we can't predict it, perhaps we can control it.

I know today's software security systems are a terrible mess, but that has more to do with the way computers and the internet are set up than with any mathematical barriers. If humans knew the stakes, and were willing to use computing devices that (e.g.) don't allow installation of third-party programs, computationally-unbreakable crypto should be achievable.

Aside from implementations being approximately as secure as cardboard, as the NSA has demonstrated very effectively via the Snowden leaks, the best way to defeat crypto is to redefine the problem and go after the end-points with a wide array of hacks, implanted devices, and infections. An AI cannot bruteforce an AES key, but sending a drone over to a house to do some van Eck phreaking or acoustic analysis of keyboards/processors...?

but it may still be better to model an AGI as a very sped-up human rather than a god.

Does it work to model a human as a very sped-up chimpanzee rather than a god?

The whole field is 60 years old, and computers still can't do things that were considered easy at the beginning (conversation at a child's level, basic vision tasks, etc.). We've adjusted our expectations so far downwards that we now consider an AI that detects if an image has a cat - given millions of images for training - to be impressive.

(Chatbots, both the old and new neural memory network ones, are already past a child's level, but that illustrates more that children can't speak very well and don't make a good Turing test.) Recognizing a cat given <1m images (Imagenet is never trained with 'millions of images') is the 2011 state of the art. In 2016, we have super-human image recognition performance on Imagenet which includes cats among the 1k categories; and 'impressive' is now localizing, segmenting, and labeling dozens of objects in an image in realtime, generating English descriptions from images (and vice versa), learning 3D object representations from just large sets of 2D images, hallucinating highly realistic images, approaching or exceeding human-level facial recognition, guessing location on the planet just from photographs of the outdoors, processing satellite imagery, self-driving cars etc. 'Basic vision tasks' have made huge strides and researchers are now setting their sights higher than 'detecting cats'.

[–]lazygraduatestudent 1 point2 points  (12 children)

Aside from implementations being approximately as secure as cardboard, as the NSA has demonstrated very effectively via the Snowden leaks

Actually, my main takeaway was that the NSA had to resort to asking Google for backdoors, because they really couldn't break RSA no matter how hard they tried.

An AI cannot bruteforce an AES key, but sending a drone over to a house to do some van Eck phreaking or acoustic analysis of keyboards/processors...?

That's only if the AI has the hardware capability (e.g. owns drones). That's almost like saying "yes, the AI can't break crypto, but maybe it can just look over your shoulder as you type in your original message". That's true, but not very interesting, and still leaves the AI far from omnipotent.

Does it work to model a human as a very sped-up chimpanzee rather than a god?

My entire point is that there is a phase shift at Turing universality. Chimps aren't Turing universal; they can't be taught to brute-force search for a proof of the Riemann hypothesis. Smart humans are Turing universal: they can be taught to brute force any specified task. The intuition from complexity theory is that such a phase shift really does happen, and turns machines/brains that can't do everything into machines/brains that can do everything.
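
A minimal sketch of what "brute-force search for a proof" means (a toy illustration; `is_valid_proof` is a hypothetical stand-in for a proof checker of some fixed formal system, not a real library call):

```python
from itertools import count, product

ALPHABET = "abcdefghijklmnopqrstuvwxyz ()=+*0123456789"

def find_proof(statement, is_valid_proof):
    """Enumerate every finite string in length order and return the first one the
    checker accepts as a proof of `statement`. Trivial to specify, absurdly slow to run."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            candidate = "".join(chars)
            if is_valid_proof(statement, candidate):
                return candidate

# Toy usage with a stand-in "checker" that only accepts two particular strings:
print(find_proof("4", lambda stmt, cand: cand in {"2+2", "1+3"}))   # -> 1+3
```

The recipe itself is mechanical; the disagreement below is about who can actually be taught to follow it, and for how long.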

(Chatbots, both the old and new neural memory network ones, are already past a child's level, but that illustrates more that children can't speak very well and don't make a good Turing test.)

No, I can distinguish a chatbot from a child in 30 minutes with 100% accuracy.

In 2016, we have super-human image recognition performance on Imagenet which includes cats among the 1k categories; and 'impressive' is now localizing, segmenting, and labeling dozens of objects in an image in realtime, generating English descriptions from images (and vice versa), learning 3D object representations from just large sets of 2D images, hallucinating highly realistic images, approaching or exceeding human-level facial recognition, guessing location on the planet just from photographs of the outdoors, processing satellite imagery, self-driving cars etc. 'Basic vision tasks' have made huge strides and researchers are now setting their sights higher than 'detecting cats'.

That's great, and there has indeed been a lot of progress. But you can still use "which of these images have sushi" as captcha, so perhaps you're exaggerating things a little.

[–]gwern 2 points3 points  (11 children)

Actually, my main takeaway was that the NSA had to resort to asking Google for backdoors, because they really couldn't break RSA no matter how hard they tried.

Wrong. They asked, and they also took that backdoor by hacking Google's datacenter links; to quote the most famous slide: ":)". The NSA did not 'resort'; they are maximalists who want all forms of access, both front door and back door. (NSA's slogan: "Collect it all.") They hack companies they already have legitimate access to - such as Google, which they could and do send NSLs to for whatever data they need. This is convenient for bulk ingestion & processing, frees them from legislative constraints (which they don't care about anyway), gives them options on how to parallel-construct evidence, and means that the more systems are compromised, the more potential connections can be further exploited ("I hunt sysadmins").

That's true, but not very interesting

That's extremely interesting! What good is crypto that can be bypassed‽‽‽ As the NSA routinely does by specifically going after endpoints? As Snowden says, the NSA can't crack PGP, but that does you jackshit when they crack your laptop instead. This is why everyone is putting all this effort into perfect forward secrecy and end-to-end crypto! So the NSA doesn't just hack a Gmail or iCloud datacenter and get all your HTTPS-connection-protected but unencrypted-at-rest emails!

Chimps aren't Turing universal; they can't be taught to brute-force search for a proof of the Riemann hypothesis.

Chimps (and primates in general) can be taught a lot of things. They can be taught some basic sign language and counting, they have some theory of mind and can plan, they are comparable to human children when tested, they can be taught experimental setups for all sorts of things, and they can have a longer memory span than yours. You think they can't be taught a cellular automaton's transitions or a Turing machine's rules? Please provide some citations for this.

No, I can distinguish a chatbot from a child in 30 minutes with 100% accuracy.

The people in the Turing tests that get run apparently can't.

and there has indeed been a lot of progress.

So maybe you should stop saying that anyone finds detecting cats impressive and representative of current progress.

But you can still use "which of these images have sushi" as captcha, so perhaps you're exaggerating things a little.

Google's CAPTCHAs, if that's what you're referring to, have already been broken. The authentication seems to be primarily account- and cookie-based. Also, that argument doesn't work: CAPTCHAs have always been an economic barrier, since solving them can be farmed out to humans; the question is not whether they can be solved by deep nets, but whether they can be solved economically and fast enough for spammers, and both deep learning experts & GPUs are expensive for spammers to hire. This is why text CAPTCHAs continued to be used long after they were broken: they were still enough of an economic deterrent to tamp down the spam.

[–]lazygraduatestudent 1 point2 points  (10 children)

That's extremely interesting! What good is crypto that can be bypassed‽‽‽ As the NSA routinely does by specifically going after endpoints? As Snowden says, the NSA can't crack PGP, but that does you jackshit when they crack your laptop instead. This is why everyone is putting all this effort into perfect forward secrecy and end-to-end crypto! So the NSA doesn't just hack a Gmail or iCloud datacenter and get all your HTTPS-connection-protected but unencrypted-at-rest emails!

Oh, come on, you know what I meant. Obviously it's important that cryptography can be surpassed in practice. But equally obviously, end-to-end crypto cannot be surpassed without either breaking the crypto or getting access to one of the ends.

Like, suppose that we throw out our computers and build new hardware that only implements a text editor plus RSA. We get one of these each, plus a wire connecting them. Then it is impossible to read our communication without either breaking RSA or getting physical access to one of our devices (equivalent to looking over the shoulder).
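
For concreteness, a textbook-RSA toy of what that single-purpose device would compute (illustrative only: the primes are laughably small, and a real device would use 2048-bit keys, padding, and authenticated encryption on top):

```python
# Textbook RSA with toy parameters; insecure as-is, but the structure is the point.
p, q = 61, 53                      # secret primes (toy-sized)
n, phi = p * q, (p - 1) * (q - 1)  # n = 3233 is public; phi stays secret
e = 17                             # public exponent, coprime to phi
d = pow(e, -1, phi)                # private exponent (modular inverse, Python 3.8+)

def encrypt(m): return pow(m, e, n)   # anyone holding (n, e) can do this
def decrypt(c): return pow(c, d, n)   # only the holder of d can undo it

c = encrypt(42)
print(c, decrypt(c))               # -> some ciphertext, then 42 again
# Recovering the message without d means factoring n: trivial for 3233, and (under the
# standard assumptions) infeasible for 2048-bit n no matter how clever the attacker is.
```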

The fact that this strategy can beat even super-intelligences means they are not omnipotent.

Chimps (and primates in general) can be taught a lot of things. They can be taught some basic sign language and counting, they have some theory of mind and can plan, they are comparable to human children when tested, they can be taught experimental setups for all sorts of things, and they can have a longer memory span than yours. You think they can't be taught a cellular automaton's transitions or a Turing machine's rules? Please provide some citations for this.

We're going to need a notion of "taught" that involves some understanding, I think. Silicon can be taught transition rules for a Turing machine, but silicon is not intelligent.

I only mean to use Turing universality as an analogy. Maybe the analogous cognitive milestone is something like "being able to engage in formal reasoning". Anyway, the very fact that a human can enumerate strategies means that a human with pencil and paper can beat a computer at chess given sufficient time to think (e.g. a million years), something a chimp cannot do.

The people in the Turing tests that get run apparently can't.

The Turing tests that get run kind of suck.

So maybe you should stop saying that anyone finds detecting cats impressive and representative of current progress.

My sincere apologies. I meant that it was super impressive in 2011, 50 years after it was thought to be probably trivial. When I said we're impressed by cat recognition, I literally meant that I remember people being impressed by cat recognition a few years ago. Naturally, since it's done now, it is no longer impressive.

[–]gwern 1 point2 points  (9 children)

Obviously it's important that cryptography can be surpassed in practice.

Just as it's important that apparent complexity classes can be surpassed in practice.

Then it is impossible to read our communication without either breaking RSA or getting physical access to one of our devices (equivalent to looking over the shoulder).

It's a good thing that it's also impossible to ever implement RSA wrong or get physical access.

Anyway, the very fact that a human can enumerate strategies means that a human with pencil and paper can beat a computer at chess given sufficient time to think (e.g. a million years), something a chimp cannot do.

That human is almost certainly going to have to do something like more deeply evaluate the game tree to try to find a better move; even a grandmaster must deeply evaluate lines of play and conduct a tree search. Which is a simple program that can be encoded into a Turing machine whose rules a chimp can follow, and we know this because we've already written Turing machines which can outplay a chess AI (other chess AIs). This is why I don't find Turing-completeness at all helpful. Once something passes a basic level of complexity, Turing-completeness is quite common. A chimp can execute transition rules, but probably so too could a raven or a mouse or an ant mound or something; given indefinite time/space resources, naturally they can compute anything computable. But without a specially set-up Turing machine or system, an immortal ant mound will never compute anything important. The additional intelligence humans have matters a lot; a human is not simply a sped-up chimpanzee because the human will think of things the chimpanzee never will. The human neural network can apparently express far more abstract and complicated programs than a merely chimp-sized neural network. These capacity differences don't fall neatly into a blunt Turing-complete/non-Turing-complete classification.
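
To make "a simple program that can be encoded into a Turing machine" concrete, here is a minimal negamax sketch (illustrative only; the `game` object with .over(), .score(), .moves(), and .play() is a hypothetical interface, not any particular chess library):

```python
def negamax(game, depth, color=1):
    """Full-width game-tree search: return the best score the side to move can force
    within `depth` plies. `game.score()` is a static evaluation from player 1's view."""
    if depth == 0 or game.over():
        return color * game.score()
    best = float("-inf")
    for move in game.moves():
        child = game.play(move)                          # assumed to return a new position
        best = max(best, -negamax(child, depth - 1, -color))
    return best
# The rules fit in a dozen lines; the difficulty is the astronomical number of positions
# those lines must visit, which is why "can follow the rules" says little about who will
# ever find a strong move in practice.
```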

The Turing tests that get run kind of suck.

They have successfully shown that a lot of plausible-seeming tasks which might seem like decent Turing tests, are not. Kind of a useful null result in the same sense that ELIZA was.

I meant that it was super impressive in 2011, 50 years after it was thought to be probably trivial.

The Dartmouth people did not actually think it would be trivial.

Naturally, since it's done now, it is no longer impressive.

Yes, as the saying goes, 'AI is whatever we can't solve yet'. But once the cat was recognized, a lot of progress was made, both relative and absolute.

[–]lazygraduatestudent 1 point2 points  (8 children)

It's a good thing that it's also impossible to ever implement RSA wrong or get physical access.

My point stands: if the AI needs physical access, it is not omnipotent. We know what we need to do to defend against its attack - prevent physical access.

a human is not simply a sped-up chimpanzee because the human will think of things the chimpanzee never will.

Right, but I'm arguing this maxes out at humans, since humans know how to think of everything (in the worst case, they can literally enumerate strategies, and they are smart enough to know they can do so). I don't have foolproof evidence - Turing completeness is merely an analogy - but neither do you.

Further evidence for my case: I think there's no mathematical proof that a decent math student can't understand if they work on it long enough. It seems a decent math student is "complete" for all known mathematical proofs, even those written by geniuses.

[–]gwern 2 points3 points  (7 children)

We know what we need to do to defend against its attack - prevent physical access.

We also know what we need to do to defend against losing money on Wall Street - buy low, sell high.

Right, but I'm arguing this maxes out at humans, since humans know how to think of everything (in the worst case, they can literally enumerate strategies, and they are smart enough to know they can do so).

But your suggestion, like with the chess AI, simply boils down to 'slowly simulate a more powerful AI using your postulated infinite space and time resources'. If I write down a Turing machine for AIXI and hand it to a chimpanzee, have I turned them into the equivalent of AlphaGo? No, because the chimpanzee doesn't actually have the space or time. The space of programs the chimp can think through is much smaller than that of humans, and humans themselves can think through only so many problems. A notepad is no replacement for having another hundred billion neurons.

Further evidence for my case: I think there's no mathematical proof that a decent math student can't understand if they work on it long enough.

Which is why no one ever drops out of math programs or fails to understand a proof, and why the ABC conjecture has been resolved?

[–]Noncomment 0 points1 point  (3 children)

Under plausible cryptographic assumptions, it should be possible for humans to use cryptographic schemes to prevent a super-intelligence from reading their communications.

Yes, but cryptography problems are carefully and intentionally constructed to be unsolvable. Of the set of problems that occur "naturally", I doubt most are anywhere near as hard as cryptography. And even cryptography has problems with unexpected and clever attacks, like insane side-channel attacks, brute-forcing weak passwords and hashing algorithms, etc.

Given that an AGI might not be able to break humans' cryptography, it is very far from being omnipotent.

No one is claiming an AI would actually be omnipotent. But it could be practically omnipotent, from a human point of view. A being hundreds, or thousands of times more intelligent than me is terrifying. It would be to us what we are to chimpanzees.

We've adjusted our expectations so far downwards that we now consider an AI that detects if an image has a cat - given millions of images for training - to be impressive.

That was true 5 years ago. Now not only can they recognize cats easily, but they can recognize thousands of common objects, and do so at higher accuracy and speed than humans. Progress in AI is rapid and scary.

[–]lazygraduatestudent 1 point2 points  (2 children)

Of the set of problems that occur "naturally", I doubt most are anywhere near as hard as cryptography.

No, this intuition is false. The reason is that cryptography can't use just any hard problem; it must use something that's hard for adversaries but easy for anyone who has the secret key. It's hard to construct such problems, so cryptographic tasks tend to be less astronomically impossible than many others.

Examples of naturally occurring really hard problems (hard both in theory and in practice):

  • Variants of flight scheduling: you're an airline wishing to optimally schedule flights to maximize revenue. You know some things about how many people want to go where, etc. Most ways of formalizing this problem are NP-hard, and in practice airlines spend a ton of effort on heuristic algorithms that probably don't return the optimal answer (a brute-force sketch appears after this list).

  • Theorem proving: given a mathematical statement, find a proof or a disproof.

  • Finding bugs in code: will a specified piece of code always perform a specified task correctly?

  • Protein folding and related problems
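
A toy sketch of why the flight-scheduling bullet above blows up (an illustrative model with assumed inputs, not an airline's real formulation):

```python
from itertools import product

def best_schedule(flights, aircraft, revenue):
    """Exhaustively assign each flight to an aircraft and keep the highest-revenue
    assignment. `revenue` is any scoring function; the search itself is k**n."""
    best, best_value = None, float("-inf")
    for choice in product(aircraft, repeat=len(flights)):   # k**n candidate schedules
        assignment = dict(zip(flights, choice))
        value = revenue(assignment)
        if value > best_value:
            best, best_value = assignment, value
    return best, best_value

# Toy revenue function: reward spreading flights across distinct aircraft.
print(best_schedule(["F1", "F2"], ["A", "B"], lambda a: len(set(a.values()))))
# -> ({'F1': 'A', 'F2': 'B'}, 2). With 30 flights and 5 aircraft it is already 5**30
#    (~9e20) candidates, which is why real schedulers settle for heuristics.
```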

There are many, many other problems that an AI will encounter that it will not be able to solve.

It would be to us what we are to chimpanzees.

No, it would not, unless you think chimps can design a cryptographic system you couldn't break.

Progress in AI is rapid and scary.

Progress in the last 5 years was rapid. Progress before was slower. Once we exhaust what we can do with neural nets, I predict progress will slow down again.

[–]Noncomment 0 points1 point  (1 child)

I think gwern's objections apply to all of your examples though.

For instance, flight scheduling need not find the global optimum to be useful, and computers can already do vastly better than humans at it. Protein folding may benefit from fast heuristic methods, as seen in the Foldit project. Theorem proving could be done probabilistically, not proving it definitely true, but showing it's very likely to be true.

I don't think there will ever be a situation where the AI's power depends entirely on solving an impossible mathematical problem. Its power comes from being relatively more powerful than humans.

E.g. humans didn't evolve to be good at engineering or math. It's really a lucky coincidence that our minds are capable of it. We are the very first intelligence to evolve that can do it at all. Surely an AI designed to be good at that would be vastly superior to humans. The same way that AIs designed to be good at Go or Chess are vastly superior to the best human players (let alone the average player.)

No, it would not, unless you think chimps can design a cryptographic system you couldn't break.

I fail to see your analogy. Chimps can't design anything more complicated than a pointy stick because their brains are so limited. But that's exactly the point I'm making. The AI's abilities could be beyond our ability to comprehend, the same way chimps can't comprehend how a car works.

[–]lazygraduatestudent 2 points3 points  (0 children)

For instance, flight scheduling need not find the global optimum to be useful, and computers can already do vastly better than humans at it. Protein folding may benefit from fast heuristic methods, as seen in the Foldit project.

I never claimed computers won't be better than humans at these tasks. I just claimed their solutions would still be extremely far from optimal - the optimum is fundamentally unreachable.

Theorem proving could be done probabilistically, not proving it definitely true, but showing it's very likely to be true.

This one is false. But your overall point stands: computers will still be able to prove more theorems than humans.

I don't think there will ever be a situation where the AI's power depends entirely on solving an impossible mathematical problem. Its power comes from being relatively more powerful than humans.

It will be more powerful than humans; it will not be omnipotent. I thought I was clear about this.

I fail to see your analogy. Chimps can't design anything more complicated than a pointy stick because their brains are so limited. But that's exactly the point I'm making. The AI's abilities could be beyond our ability to comprehend, the same way chimps can't comprehend how a car works.

Chimps can't fool us. We can "fool" AI in the sense that we can design a cryptographic system it can't break, and we can use it to prevent the AI from doing something it may want to do (read our communications). Do you not see a difference here?

[–][deleted] 0 points1 point  (0 children)

Again, a super-intelligence will dominate humans even on hard problems, but it would not be omnipotent.

Why is "omnipotence" an important question here?

With some pencil and paper, they could, at least in theory. Of course, I agree that's not too interesting if the slowdown is 1 million, but it may still be better to model an AGI as a very sped-up human rather than a god. It gives better intuition to questions like can the AGI solve Go, can it deduce the laws of physics from a single frame of a video, can it perfectly predict how humans will react to its political moves, etc.

Wellll if you're modeling the AI as running on a Turing Machine and lacking a Turing oracle, then what you've really got here is a series of unknowns, the quantities of prior knowledge and computational power necessary for a statistical learner to:

  • Play Go,
  • Discover the laws of physics from one or more frames of video footage (easier subtask: induce that the several frames of video footage are points in an open neighborhood, and the whole open set should be considered as a continuous object),
  • Predict how humans will react to its political moves (easier subtask: acquire knowledge of specific humans in order to predict their reactions),
  • Etc.

The sample complexity for each of these tasks is likely to be high individually, but transfer learning can ameliorate that. Also, once a single instance has been successfully learned (one good game of Go, one physics perception, one human manipulated), the remaining sample complexity on other instances is low.

[–][deleted] 1 point2 points  (0 children)

To strengthen your analogy to Go: it won't be an AI vs. human competition, it will be an AI vs. human-plus-all-human-computational-tools competition.

[–][deleted]  (22 children)

[deleted]

    [–]lazygraduatestudent 7 points8 points  (21 children)

    That's certainly evidence in the other direction, yes. But I'm not sure how far to push it. There is some selection bias effect, after all (if evolution did not produce intelligence, we wouldn't have this conversation).

    [–][deleted]  (20 children)

    [deleted]

      [–]lazygraduatestudent 2 points3 points  (15 children)

      Yes, anthropics dictates that intelligence had to appear, but not that it had to appear so quickly. Octopuses, dolphins and smart crows all indicate that some level of smarts (below human level) is reachable without anthropic cheating too. Add in a suitable set of hands to make tools, plus some other favourable conditions and it really took off.

      Yeah, again, I agree that this is evidence. But the anthropic principle is tricky and may still have caused some of this. If highly-intelligent life is really really difficult to design, the worlds where it exists would be extremely finely selected. Is there a hidden reason why such worlds are likely to have many intelligent species? I don't know, but it doesn't seem impossible.

      Also look at Ashkenazi Jews in Europe. Would they have been able to gain IQ so quickly if the local recalcitrance was very high and an anthropic cheat had got us to minimal intelligence for civilisation?

      I'm not sure if they did gain IQ quickly. My favored theory is that Ashkenazis are smarter because dumb Ashkenazis de-converted. That would be possible in any conceivable world - even one in which gaining IQ is all but impossible. The only requirement is that there is some non-zero initial variation in human IQ; then self-selection can concentrate that IQ in a particular demographic.

      Overall I don't see any evidence for high local recalcitrance, nor any prior reason for it to be high. I would guess that it will eventually rise up, but that's an instinct.

      Not sure where the terminology "local recalcitrance" came from, but I don't like it. Anyway, one reason to think AI is hard is that we haven't been able to do it after trying hard for a very long time.

      [–][deleted]  (14 children)

      [deleted]

        [–]lazygraduatestudent 2 points3 points  (12 children)

        I don't think the maths stacks up. Look at the number of Jewish Nobel prizes (~20%). If the only mechanism for Jewish IQ gain is dropping out of the dumb ones, you wouldn't expect such extreme numbers unless 20% of the world started off as Ashkenazi Jews, which is clearly false. Selection must be at work as far as I can see.

        We don't need 20% of the world, we need 20% of people of European descent (since they are primarily the ones with access to higher education, a necessary requirement for nobel prizes). The medieval European population was around 100 million, of which there were maybe 1 or 2 million Jews. To make the math work, it suffices to assume that Jews had more reproductive success - say, each generation the Jewish population would double but half of that generation would de-convert. Over 30 generations, that's a lot of selection power - more than genetic selection is likely to provide.
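
        A crude simulation of that doubling-and-de-converting story (a toy model whose population size, heritable spread, noise level, and 30-generation horizon are all assumptions, not data):

        ```python
        import random

        random.seed(0)
        GENERATIONS, NOISE_SD = 30, 9                # assume ~9 IQ points of non-heritable noise

        group = [random.gauss(100, 12) for _ in range(2000)]   # heritable values, mean 100

        for _ in range(GENERATIONS):
            children = [g for g in group for _ in (0, 1)]      # population doubles, zero mutations
            ranked = sorted(children, key=lambda g: g + random.gauss(0, NOISE_SD))
            group = ranked[len(ranked) // 2:]                  # the duller half de-converts

        print(round(sum(group) / len(group), 1))     # the group mean ends up well above 100
        ```

        Concentrating variation that already exists moves the subgroup mean a long way before it plateaus, with no new mutations required, which is the claim being made here.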

        Recalcitrance refers to the difficulty of increasing the intelligence of a system. If it is low, then a comparatively small amount of effort will yield a large marginal increase in intelligence. Marginal is key here.

        Good. Well, humanity has put a lot of effort into AI, and the marginal returns have been variable. In the "AI winter", it felt like no amount of effort really did anything impressive. Then the big data revolution happened, and we are getting steadier advancement, but in the absence of an objective measurement system it's hard to tell whether that's big or small. But the fact that the "winter" happened in the first place means we can't always use analogies to biology to predict what will happen.

        [–][deleted]  (8 children)

        [deleted]

          [–]lazygraduatestudent 2 points3 points  (7 children)

          Well, okay, but over such short timescales natural selection isn't making anyone smarter - it's entirely possible that the entire Ashkenazi Jew IQ gain was achieved with zero mutations, for example. If you take the smartest person in the world and make a bunch of clones of them, you'd get a really smart tribe, but that's not relevant to the question of how difficult it is to design smarter brains.

          [–][deleted]  (6 children)

          [deleted]

            [–][deleted]  (2 children)

            [deleted]

              [–]lazygraduatestudent 2 points3 points  (1 child)

              I think governments only stopped putting in money because there seemed to be no progress.

              [–]gwern 0 points1 point  (0 children)

              I don't think the maths stacks up. Look at the number of Jewish Nobel prizes (~20%). If the only mechanism for Jewish IQ gain is dropping out of the dumb ones, you wouldn't expect such extreme numbers unless 20% of the world started off as Ashkenazi Jews, which is clearly false. Selection must be at work as far as I can see.

              'Boiling off' without any additional sexual/natural selection can work because, while the overall mean stays the same (or falls slightly), the subgroup gets more concentrated; and since IQ is normally distributed, even a modest shift in the subgroup mean yields far more extreme outliers out in the thin tails. See

              However, I think this theory would have to explain why there are so few Ashkenazi genetic traces and why Ashkenazim look so distinct from Europeans if tons of Ashkenazim are outbreeding every generation. (For example, I have ~1.2% Ashkenazi ancestry; however, I can trace this precisely in my family tree to an Ashkenazi who converted to Catholicism like 6 generations back, with no additional apparent Ashkenazi further back.)

              [–][deleted] 1 point2 points  (3 children)

              I think it's a big assumption that IQ has any relevance to this discussion at all. IQ is a specifically human-calibrated measurement. In what sense would an AI have an IQ with any more meaning than it would have a BMI?

              [–][deleted]  (2 children)

              [deleted]

                [–][deleted] 2 points3 points  (1 child)

                What I meant was: take actual human ability, apply any linear function you want*, and people's IQs will stay the same. I'm sceptical of IQ-based arguments about AI, because they extrapolate from a difference that may be arbitrarily small.

                * positively sloped, big enough to measure

                [–]PM_ME_UR_OBSIDIAN (had a qualia once) -1 points0 points  (2 children)

                TL;DR: constant factors?

                I place great faith in the diminishing-returns argument against a singularity, if only because, paraphrasing Aaronson, we would live in a very different world if this weren't the case. Nature abhors a singularity; the only ones we might have found are black holes, and we're not even sure about those.

                [–]moyix 5 points6 points  (0 children)

                I think /u/gwern covers this point pretty well. Effectively, the mathematical term "singularity" is not what is meant when people like Vinge talk about it:

                But this doesn’t make any sense. ‘escape velocity’ is not a concept anyone has required to be true of the Singularity; if nothing else, there are physical limits to how much computation can be done in the observable universe, so it’s clear that there is no such thing as ‘infinite intelligence’. At no point do Good or Vinge say that the Singularity is important only if the increase of intelligence can continue eternally without bound; it is important because wherever the improvements terminate, it will terminate at an intelligence level above humanity, which will give it capabilities beyond humanity’s. (Good, for example, in his cost projections, appears to have a diminishing returns model in mind when he speculates that if human-level intelligence can be created, then twice the cost would give a greater-than-human level intelligence, and his later emphasis on ‘economy of meaning’; and Vinge says the Singularity is “the point where our old models must be discarded and a new reality rules”, without making claims about indefinite intelligence increase, just that control of events will have “intellectual runaway” from humans - but a runaway train doesn’t increase velocity exponentially until it attains the speed of light, it just escapes its operators’ control.)

                Runaway growth doesn't mean that it won't ever end – just that the place it ends up will be way beyond anything we can hope to understand and control.