Hacker News
Complexity No Bar to AI (gwern.net)
120 points by tmfi 11 months ago | 118 comments



Well, it's better than the argument that machines can't resolve undecidable questions but humans can.

There's a large family of problems that are NP-hard in the worst case, but much easier in the average case. Linear programming and the traveling salesman problem are like that.

The research question I would pose is, why is robotic manipulation in unstructured spaces so hard? Machine learning has not helped much there. Yet it's a fundamental animal skill. We're missing something that leads to success in that area. Whatever that is, it may be the next thing after machine learning via neural nets.

Note that it's not a human-level problem. Primates have the hardware for that. Mammals down to the squirrel level can manipulate objects. Mice, maybe. Mouse-level neural net hardware exists. It's not even that big. The University of Manchester's neural net machine supposedly has mouse-level power, in six racks.

I tried some ideas in this area in the 1980s and 1990s, without much success. Time for the next generation to look at this. More tools, more compute power, and more money are available.


Note that a mouse does not need to solve the problem from scratch like a computer does. They are born with a general solution to the problem of movement in their brain, which has been arrived at by evolution.

To replicate this, you can't expect to get away with only replicating the complexity of the mouse. You potentially need a computer with as much complexity as the evolutionary algorithm which led to the mouse's movement algorithm.


> Note that a mouse does not need to solve the problem from scratch like a computer does. They are born with a general solution to the problem of movement in their brain, which has been arrived at by evolution.

Only for some cases. I've watched a horse being born and seen pictures of other newborns. The foal can stand within minutes, and the right sequence of moves is clearly built in because it works the first time. Newborn horses can walk within hours and run with the herd within days. Lying down, though, is not pre-stored. That's a confused collapse for the first few days. There's an evolutionary benefit to being able to get up and escape threats early, but smoothly lying down is less important.


This is a faulty argument; it might apply to primitive organisms with similarly primitive movement schemes.

Most complex organisms spend some time inside a protective shell of some form before getting out into the wild, where I suspect they learn to coordinate their orientation sensory organs with their limbs to produce necessary movement patterns.


I agree that animal intelligence and mobility/dexterity is a more useful goal to aim for now than "human-level".

Boston Dynamics seems to have proven that one can get very useful and adaptive mobility with existing algorithms and a lot of (apparently manual?) fine-tuning. Power density seems to have played a role there.

I think for manipulation, the first challenge is useful vision and understanding. The robot needs to see the 3d structure and often infer parts that are not visible, or at least be able to scan around to create a complete picture. It needs to identify and/or recall affordances for that type of structure. It also needs to understand the physical properties of the object such as whether it will deform and how much.

For vision, the ability to acquire and analyze point clouds is now an advantage over what was generally available in the 80s or 90s. Personally, I believe it is possible to build very useful manipulators by converting point clouds to voxels and then applying deep learning to that to get affordances and control policies, and I think people are doing it.
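A minimal sketch of that point-cloud-to-voxel step (assuming NumPy and an N x 3 array of points; the 1 cm resolution and the idea of feeding the grid to a 3D network are placeholders of mine, not any particular published pipeline):

    import numpy as np

    def voxelize(points, voxel_size=0.01):
        """Convert an (N, 3) point cloud into a dense binary occupancy grid."""
        mins = points.min(axis=0)
        idx = np.floor((points - mins) / voxel_size).astype(int)
        grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1   # mark occupied cells
        return grid                                  # e.g. input to a 3D CNN

    # 1000 random points inside a 20 cm cube, voxelized at 1 cm resolution
    cloud = np.random.rand(1000, 3) * 0.2
    print(voxelize(cloud).shape)                     # roughly (20, 20, 20)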

To be really efficient and effective, though, systems that start from inference about the world seem more promising. I am reading a book called Surfing Uncertainty by Andy Clark. He seems to be advocating for hierarchical predictive control, and it is all making sense to me, though so far I am finding it light on the details of how to actually implement it. I think Ogma may be doing something like this. It seems like something I need to at least understand.

One thing I think about is being able to "just" reconstruct 3D oriented surface sections from individual 2D images, because that seems like a strong prior to leverage in recognizing whole surfaces or shapes, and from there object parts. There are interesting papers similar to this, but so far not quite what I was hoping for. Which in a way might be good, because that motivates me to keep learning.

But anyway, I think really strong vision and understanding is foundational to robotic manipulation. Part of the problem is that vision is so hard that roboticists often give up and deploy very poor vision systems, which leaves them handicapped before they even start planning movements, because they don't have an accurate, detailed picture and understanding of what they are looking at.

But also I need to finish Clark's book because I think theories like predictive processing can explain a lot of the advantages that animals have over most robots.


Robotic manipulation in unstructured spaces is probably not so hard anymore. The hardware has often been the problem, and very flaky; I believe we are at a stage where the software is ready and waiting for the hardware to catch up.


Humans are able to do surgical operations using existing hardware [0], but software cannot.

Humans can drive cars on crazy Indian roads; software cannot.

Existing hardware is plenty sufficient for manipulating the physical world, but we are missing the intelligence part.

[0] https://www.davincisurgery.com/


AIs need special training grounds because they don't have the benefit of evolution. If they don't have realistic bodies in a realistic environment, they can't learn to solve the problem of locomotion and manipulation. Have you noticed the explosion of "AI Gyms" in the last few years?
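The core loop in those gyms looks roughly like this; a minimal sketch against the classic (pre-0.26) OpenAI Gym API, with a random policy standing in for the actual learner:

    import gym

    env = gym.make("CartPole-v1")
    obs = env.reset()
    done = False
    total_reward = 0.0
    while not done:
        action = env.action_space.sample()          # placeholder for a learned policy
        obs, reward, done, info = env.step(action)  # simulated body + environment
        total_reward += reward
    print(total_reward)                             # random play rarely lasts long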

As others have said, we're at a good point with vision and control, but hardware is still expensive, and this slows research. I expect dexterous robots in a decade; BD's already got one dancing better than me.


But humans can currently learn to teleoperate robots (e.g., surgical robots) much better than AI can. So they are dealing with the same hardware, and it is hardware that is a priori unfamiliar to the human. Thus, at least in these cases, the difference must be in the learning algorithm.


I think it's because, at least in the case of humans, a lot of unstructured manipulation is based on learned observations. We can tell by looking how fragile something is. Babies take some time to figure out how to use their hands fully.


I think we miss that many of the issues faced in getting software-driven systems to accomplish anything in the world are constrained by the interface with the world more than by anything computational. AlphaGo can beat the best human at Go, but AlphaGo can't even load itself into memory on a machine that doesn't have ld.so installed, and won't execute without the right version of libstdc++ (I'm making up the stack because I have no idea what they actually used, but you get the idea). Human limbs, on the other hand, are a near-universal interface to any tactile objects with chemical properties that don't destroy human flesh. The interconnection between the sensors and the CPU units achieves bandwidth and parallelism more comparable to the entire Internet than to any single computer attached to a robotic arm.

Besides that, I don't see how computational intractability being solved by efficient heuristic approximation algorithms necessarily helps when those heuristics require data-generating processes that can't be run efficiently. They're more efficient than NP-hard problems that possibly can't be solved before the heat death of the universe, but still, if your decision-making function requires data to be trained and that data requires physical experimentation and collection in the world, you're inherently rate-limited by the speed with which experiments can be carried out. That has contributed to stagnation in particle physics, for instance. Physicists aren't using shitty algorithms. It just takes a long time to build a particle accelerator, and you can't do it faster just because you're a computer. Manufacturing processes and collection of physical material aren't made faster by having faster clock speeds in your decision-making center.

AI risk enthusiasts tend to hand-wave around this by saying a sufficiently powerful AI can learn from simulation instead of experimentation, but surely there are some limits to how scalable this idea is? Simulating Go is trivial: you've got a 361-cell world where each cell can be in one of three states. On the other hand, I just looked up how many atoms are in a human skin cell and I'm seeing 10^14? Trying to simulate human society from first principles of physics to learn optimal war strategy seems like it would require more memory than a computer will ever have access to.


>The research question I would pose is, why is robotic manipulation in unstructured spaces so hard? Machine learning has not helped much there. Yet it's a fundamental animal skill. We're missing something that leads to success in that area. Whatever that is, it may be the next thing after machine learning via neural nets.

One of the reasons is simply that human hands are densely covered with mechanoreceptors that transmit a lot of information and are better than sensors for robotic hands.


Right, try doing any sort of fine manipulation while wearing thick rubber gloves and you'll appreciate how hard it is to design robotic hands.


I had a way to avoid that problem. I built a robot end effector that has a 6-DOF force sensor and an end wrench, not a socket wrench. Put it on a low-end $400 robot arm. Got this to talk to ROS. The idea was to have it detect, by feel alone, when the wrench had engaged a bolt head properly. Since all the feel is coming through the wrench part, 6 DOF is enough.

Turned out that a low-end robot arm driven by R/C servos is not smooth enough for this, and not powerful enough to hold up the end effector. Might be worth trying with a better robot arm and a stiffer 6-DOF force sensor.
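For anyone curious, the detection side of that idea might look something like the following sketch, assuming a ROS 1 setup where the force/torque sensor publishes geometry_msgs/WrenchStamped; the topic name and the threshold heuristic here are hypothetical, and a real signal would need filtering:

    import rospy
    from geometry_msgs.msg import WrenchStamped

    ENGAGED_TORQUE = 0.5  # N*m, placeholder threshold; tune per sensor and arm

    def on_wrench(msg):
        torque = msg.wrench.torque
        # Crude heuristic: a sustained reaction torque about the wrench axis
        # suggests the flats have seated on the bolt head.
        if abs(torque.z) > ENGAGED_TORQUE:
            rospy.loginfo("wrench appears engaged")

    rospy.init_node("engagement_detector")
    rospy.Subscriber("wrench", WrenchStamped, on_wrench)
    rospy.spin()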


Linear programming is not NP-hard and has a variety of solutions that are polynomial time [0]. It gives me pause when people say some NP-hard problem is easy in the "average" case without talking about what the distribution is.

[0] https://en.wikipedia.org/wiki/Linear_programming#Interior_po...


Integer linear programming is NP-hard, which may have caused the confusion here.


The classic simplex method for real linear programming is exponential-time in the worst case, but cubic-time in the average case. Newer algorithms can beat that.
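For concreteness, typical small instances are trivial to solve in practice; here is a minimal sketch using SciPy's linprog (assuming SciPy >= 1.6 for the "highs" method; the two-variable problem itself is made up):

    from scipy.optimize import linprog

    # maximize x + 2y  subject to  x + y <= 4,  x <= 3,  x >= 0, y >= 0
    # linprog minimizes, so negate the objective coefficients.
    c = [-1, -2]
    A_ub = [[1, 1], [1, 0]]
    b_ub = [4, 3]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    print(res.x, -res.fun)   # optimum at (0, 4) with objective value 8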


I would like to see a good treatment of Lucas-like arguments. Most objections take the form of humans having their own halting problem. But that objection is completely irrelevant, and it concerns me that objectors do not realize this. It makes me think they are missing something.


It's past time to start calling these treatises on hyperintelligence what they are—theology—and treating them with the respect they deserve, which is a lot less than they currently get on this site.

People have been theorizing about the attributes of the Absolute since forever. Just because you start talking about building a god, rather than positing one already in existence, doesn't make the discussions about the nature of such hypothetical superbeings any more fruitful.


I have seen you and others make this point in the past, and it always seems equivalent to creationists shouting "Scientists who believe in evolution are the REAL dogmatists who don't want to look at the facts! We are really being persecuted by the religion of Science!"

Your talk on this subject (helpfully posted below) only furthers this when you present a variety of arguments of wildly varying strengths against AI, in the same vein as "37 challenges to evolution". It makes me feel like you have some valid criticism and a lot of bad faith argumentation.

Sadly, I think this characterization is only mostly unfair rather than entirely so. I enjoy your writing and thoughts, and you speak with clarity, but on this subject I think you have constructed a mold of bad-AI-argumentation that you squeeze all AI-argumentation into and in so doing fail to rebut any of it.


The difference here (that I hardly believe needs explaining) is that the argument for evolution and natural selection is firmly rooted in observed reality. Even if you are a die-hard creationist, you don't dispute the existence of a wealth of evidence ready at hand that can be marshalled in the argument, for or against. We live on a planet teeming with life, and we are all agreed on the problem that needs explaining (we started with hydrogen and got armadillos, what happened?)

Compare this to the debates about hyperintelligence and the anticipated behavior of posited hyperintelligent beings. These consist of (1) an argument by extrapolation that machines or other organisms can surpass human intelligence bolted onto (2) endless from-first-principles blathering about how such a posited hyperintelligent entity would behave. It's like looking at the vial of hydrogen in an otherwise empty universe and trying to deduce things about armadillos from it. In fact, it's much worse, since we are trying to infer things about hypothesized beings who are, by definition, beyond the ability of our minds to encompass.

This is exactly unlike any scientific discourse, and exactly like deist arguments about the nature of the gods where (1) you first prove a God must exist, from whatever 'unmoved mover' argument you find personally convincing, and then (2) infer a huge amount of information about that God's behavior (omniscient, benevolent, likes justice) through a series of intellectual non sequiturs. The only innovation here is people swearing up and down that we can build the god ourselves.

The one thing we know about hyperintelligence, in any form, is that it is fundamentally not something whose behavior and nature we can infer from first principles, for the same reasons your cat will never understand why you didn't get tenure.

I see nothing wrong in the intellectual project of trying to frame questions about the nature of intelligence, and how computers can behave in intelligent ways. Nor do I think it's foolish to wonder about what it would take for machine intelligence to arise, or to approach even deeper questions, like the physical basis of consciousness.

That's not what we see here, though. Rationalists like gwern take the football and run it into the end zone of GAME THEORY, in the end revealing far more about themselves, their anxieties, and their hopes than shedding any light on the world we inhabit together. They are accompanied by a bunch of otherwise smart people who have scared themselves silly by the prospect that we might build these gods by accident, abruptly, and that we are at imminent risk of doing so.

And to the extent that large numbers of smart people (including ones with access to great wealth) have bought into what is fundamentally an apocalyptic religious cult, I think we at least need to call it by the right name.


I find this type of argument confusing. Let's say we did live in a world where their hypotheses were true. To make it concrete, let's say that a SuperAI was 20 years away and would wipe out humanity. How would we be able to know that right now, other than through the type of speculative inference you're criticizing?

Put another way, just because something is impossible to demonstrate conclusively and/or via observed reality does not automatically mean it can't be true, right? Of course it does mean that these claims warrant far more skepticism, and probably the large majority of them are untrue. But it seems obviously incorrect to automatically assume that they can not be true. Am I misunderstanding your reasoning in some way?


> [..] something is impossible to demonstrate conclusively and/or via observed reality does not automatically mean it can't be true [..]

An even greater danger than "SuperAI" which needs immediate attention. [0][1]

[0]: https://en.wikipedia.org/wiki/Brahma_Kumaris#Criticism

[1]: https://archive.org/details/newbelieverssurv00barr/page/403


If artificial intelligence is a religion, it is the only religion with a plausible mechanism of action.

Building something that is more powerful and intelligent than any human does not look to violate any law of physics; calling such a thing a god does not make it any less possible. We have proof it can exist (as humans are just machines).

Both of the top AI companies in the world (DeepMind and OpenAI) are explicitly trying to build AGI.

The fact that it can be built and people desire to build it makes informed speculation about it useful.


Every religion has an explanation, completely persuasive to the devout, for why it's the only true one.


What an odd reply. Gwern doesn't really talk about the attributes of "the Absolute" in this post. He explains why computational complexity arguments against the possibility of smarter-than-human AIs fail. I have read several of your comments about AI, and never once have I felt that you engaged with the substance of the arguments. This seems to be no exception.


You should set up a clubhouse with Gwern and hash it out. Would love a casual debate on this topic.


I am a being of pure text, and I suspect the same might be true of Gwern.


[deleted]

personally i find a lot of arguments against AGI coming any time soon to be couched in a culture of human exceptionalism, even from those who wouldn't claim as much directly.

there is a DAMN surprising level of intelligence in significantly less complex life. we are just too attached to intelligence as defined by human culture to call it what it is.


"The question of whether machines can think is about as relevant as the question of whether submarines can swim." Edsger Dijkstra

I suspect you're right. I believe there's nothing that can be formally defined that is impossible for an AI to do but possible for a person to do. Furthermore, AGI is not formally defined, or at least not defined tightly enough to be a target we can actually hit. It's a hand-wavy way of saying "AI as smart as a person". The "anti-AGI" crowd moves the goalposts every time a huge breakthrough occurs, but as long as there is a goalpost (a formal definition) to hit, AI will surely hit it. The "pro-AGI" crowd is also guilty of not being precise about exactly what AGI is.

I also fundamentally believe the whole concept of AGI is flawed and biased to what people perceive as intelligence rather than intelligence itself. This is partially why there is so much effort and hoopla around things like GPT-3 (or, in the past, the Turing test): these programs demonstrate something like human intelligence, which is difficult to nail down in terms of a formal definition of ability. Both groups point at them and claim victory or point at a flaw. AI progresses inexorably regardless of what the hell AGI even means.


I don't think AGI has ever moved its goalposts; it is defined rather well by the Turing test. AGI must be general, capable of reasoning at least as well as a human in any domain of inquiry, all at the same time. Showing more-than-human reasoning in certain domains is trivial, and has been happening since at least Babbage's difference engine.

However, while AI has been overtaking human reasoning on many specific problems, we are still very far from any kind of general intelligence that could conduct itself in the world or in open-ended conversation with anything approaching human (or basically any multicellular organism) intelligence.

Furthermore, it remains obvious that even our best specific models require vastly more training (number of examples + time) and energy than a human or animal to reach similar performance, wherever they are comparable. This may be due to the 'hidden' learning that has been happening over the millions of years of evolution encoded in any living being today, but it may also be that we are missing some fundamental advancements in the act of learning itself.


The Turing test is not about intelligence; it's about being able to credibly ape humans. You don't need to be able to credibly describe what it's like to fall off a bike or to fall in love in order to be intelligent... but you do in order to pass a Turing test.

The first chat bots that I remember claiming to have passed the Turing test were actually credibly dumb (mimicking a teenage, non-native English speaker with deliberate grammar mistakes to paper over misunderstandings).


The Turing test ('the imitation game') was designed as an objective way to answer the question 'can this machine think'. The fact that it can be gamed by feigning stupidity or bad language skills is a weakness of the specifics of the test perhaps, but not the idea in principle.

Instead, the idea is to have an open-ended conversation with the machine, to explore its ability to display general human-level intelligence. It's true that intelligence in general is far broader (after all, an ant colony would not even begin to pass the Turing test, but it still possesses general intelligence in a way that no AI does yet).

Building an artificial ant colony (or even 1 artificial ant that lives in a real colony) with all of the problem-solving abilities of a real one would still be a monumental achievement in AI. I think there is even hope that if we could do that, advancing to human level intelligence and beyond would be just around the corner, though that remains to be seen.


Insect-level intelligence in robots would already be extremely dangerous IMO.

Especially if they were equipped with reasonable models of our "fast thinking" pathways, which aren't that smart to begin with and are already easily gamed by bots on the Web.


> biased to what people perceive as intelligence rather than intelligence itself

this statement only reifies the presupposition that "intelligence" is a fundamental property of the universe, when it is no more fundamental, and no less emergent from cultural values and norms, than something such as "crime" or "mental illness".

the reason "AI conservatives" keep moving the goalposts is a myth in the collective consciousness: the belief that intelligence is fundamentally not something a silicon-substrate machine could possess.

i think breaking out of the dichotomy of intelligent or unintelligent is necessary for the discussion. a "human level intelligence" is not real, so whatever does "human level intelligence" things is human level intelligence. if it quacks like a duck... it may even be imbued with "subjectivity" if you ask me.


A 'human level intelligence' simply means something that is roughly capable of doing the same things as a human, all at the same time, at least at a pure cognitive level (Stephen Hawking was not physically able to make fire, but that does not disqualify him from being considered a human-level intelligence). At the very least, this involves understanding how to interact with the world, understanding basic physics, being able to create tools to interact with the world, and being able to communicate at least with others of your 'species' about ANY knowledge you have gained (so a crow isn't a human level intelligence, because it is unable to communicate more than a handful of kinds of information).

For a machine without a physical realization, this becomes trickier to manage, and often must rely on also specifically learning human language so that it can be probed about its understanding of the world. But if we built a robot that could walk around, gather food, build some tools or shelter, and teach another robot what it discovered, I think most people would be happy to call that AGI, even if it couldn't read Shakespeare and weep.


Well, logic-based "AI" (GOFAI) was much more about logic programming, automating explicit, conscious human reason, and that's generally been considered a failure, or at least a dead end.

Deep learning and related approaches don't seem as human-related as the earlier ones; there's even a "deep worm" project trying to simulate worm behavior.

The thing about hard arguments against AI, however, is that they have to come down to "there's a quality X that a machine can't emulate". And usually X is an intuitive/philosophical concept with great resonance for humans but which is actually quite ill-defined. If X were exactly defined, well, we'd be able to compute it after all. So you get X as "spark of life", "soul", "being in the world", etc.

And that kind of again shows "human exceptionalism" as the perspective.


Do you consider Church-Turing Thesis couched in a culture of human exceptionalism too? Or is this the one argument that is not?

I consider this the best theory against AGI, and if AGI comes to fruition, it would mean that this thesis was invalid (i do like Turing, but i'm pretty sure i wouldn't be mad if he was wrong about this in 1936, when no computer existed).

And it also puts me in a comfortable position: as long as we don't create something more complex than ourselves, Turing was right, and there is no chance we're living in a simulation. If he was wrong, well, i think the argument was "if we could test human behavior in a lifelike simulation, we would", and life would lose a lot of meaning for a lot of people.


How is the Church-Turing thesis an argument against AGI? If anything, it is an argument FOR AGI: it claims that a Turing machine is capable of solving any problem that can be solved, which directly implies that you can create a computer with the exact reasoning capacities of the human mind (or more). AGI would be a strong signal that the thesis is valid, though it remains unprovable (as it is an assertion about an informal idea, 'functions that can be solved').

Thinking about simulations leads nowhere, so I won't engage, it's too far outside of what can be reasonably investigated, it's scientifically sounding religion.


> Do you consider Church-Turing Thesis couched in a culture of human exceptionalism too?

there is a branch of philosophy called logical positivism or logical empiricism, which in my understanding only deals with statements that can be proven a priori or via verifiability. Church-Turing Thesis exists within this framework, as far as i can tell.

i think that's supremely boring and leaves out the entire other half of epistemological discussion.

but honestly i don't see how this thesis has any bearing on AI; maybe you can bridge the gap for me.

> life would lose a lot of meaning for a lot of people

this is so loaded with metaphysical assumptions it's hard to engage with.

to rephrase "if i was deterministic, my life would be meaningless" is exactly the kind of reasoning somebody who is a human exceptionalist would use without realizing that's what they are stating.


The Church-Turing thesis is not logically provable, as it asserts that an informal concept, 'any solvable problem', can be formally defined as 'any problem solvable by a Turing machine', or equivalently by a few other models of computation (primitive recursive functions, lambda calculus, others discovered in the meantime).

The equivalency of the formal models is well-proven. However, equivalency between a formal construct and an informal human idea is outside the realm of what we can even attempt to currently prove.


I firmly believe that raw intelligence has little to do with our lead as a species, and it may even be the case that there are numerous animals with more raw intelligence than humans, especially predators.

Our advantage comes from our ability to pass information on to each other. A piece of knowledge that took 10,000 hours of thinking, observing, and experimenting to come by may take only 4 hours to pass on.

Humans can do this much better than any other animal. That's a skill borne of communication advantages, not raw intelligence advantages.


> I firmly believe that raw intelligence has little to do with our lead as a species

This statement carries no meaning unless you define what "raw intelligence" is.


Perhaps, but animals do not seem to understand abstract concepts as well as we do. This may be linked to language too. Without the ability to make analogies in their brains, no crocodile will figure out that the world is made of atoms, or that if you rub sticks together quickly enough they make fire.


Then you should read Rodney Brooks for an argument against AGI coming soon with no mention of human exceptionalism. In fact, he argues that an artificially created organism with intelligence is more likely soon than an engineered one.

>> there is a DAMN surprising level of intelligence in significantly less complex life. we are just too attached to intelligence as defined by human culture to call it what it is.

Totally agree btw


I think quite the opposite: the vast difference between living beings' ability to learn how to exist in their environment (including other living beings and sometimes social structures) and the very limited successes of even modern AI shows that we are still very far away from AGI.

We couldn't create an ant-level AI right now; imagining we are pretty close to a human-level one is pretty absurd.

And if we're talking about intelligence, human exceptionalism is hard to argue against (in the context of Earth; no point in speculating about alien life). There are pretty few creatures on Earth trying to build AIs, and I for one would not consider something to be an AGI if it couldn't even understand the concept.


> We couldn't create an ant-level AI right now; imagining we are pretty close to a human-level one is pretty absurd

i don't care about "human level intelligence"; just a general unsupervised learning agent that can interact with the world (i recognize the insufficiency of this definition, i'm not writing an essay here) will suffice for me. human level is just another iteration or two after that.


Agreed, but my claim is exactly that we are obviously far from that.

In fact, all current training methods work only on very precisely defined problems. Both the relevant inputs and desired outputs must be very precisely defined, and come in rather small numbers, for any success with current methods.

Trying to use the existing methods on a problem as open as living in the world would require so much computing power, and so many petabytes of training data (not that we would know how to generate this training data, a hard problem in itself, regardless of scale), that hardly anyone is even attempting it.


yeah i think we are in agreement about current methods.

i guess my real hypothesis is that there are more people than ever looking at AI. we don't have the right algorithm or framework yet, and current models probably won't iterate to it. but a hard takeoff when we do find it is practically inevitable, in my opinion. i think AI experts are limited to thinking in terms of the field as it is, or where it will be in less than five years, not what will happen when somebody stumbles on the right model.


Well then let's see an AGI that can operate at even the level of a rodent or sparrow. As soon as someone can show me that then I'll believe human level AGI is feasible soon. Today we haven't even reached the insect level, except for some very limited special cases that don't really satisfy the "G" part of AGI.


How about this one: computational complexity doesn't imply AGI is impossible, it implies that human intelligence isn't all that wonderfully miraculous.


i think we agree on that point?


I think the reasoning presented in this article generalizes pretty well to a refutation of most arguments involving proving the impossibility of some complex, ill-defined, "I'll know it when I see it" kind of phenomenon via a tidy, small logical proof.

There are a lot of ways those kinds of complex phenomena can be functionally equivalent to human observers but have different underlying mechanisms. These tidy logical proofs only ever cut off one extremely specific incarnation of the complex phenomenon rather than the entire equivalence class.


Yep. Even assuming the proof is formally valid, often what an impossibility or no-go result means is just that one of the premises is wrong. (In this case, I think all the premises are wrong in any fixed-up formalized version of the argument from computational complexity, but it's usually not that bad.) Similarly, all of the Gödelian or Penrose-style uncomputability arguments tend to rely on some premise like "humans never make mistakes" or "humans never contradict themselves" or "humans can prove any theorem" (or inverse computer versions thereof), which is the sort of premise which, once you highlight it and make it explicit, makes you instantly lose all faith in any conclusion which supposedly follows from it.

The relevant saying there is "one man's modus ponens is another man's modus tollens". I have another page on that particular interpretation or argument pattern: https://gwern.net/Modus


BTW - We empirically do not need AGI for computational/intelligence explosion.

A single virus has no chance of out-computing and evading an immune system, but billions of billions of viruses can out-compute and evade even human civilization as a whole.


Maybe not outsmart or outcompute, but outpower (unless you count any biological activity as computation).

Viruses have nano-level biological leverage and that's enough to wreak havoc.

The threat I'm most worried about is automated capitalism which is gradually gaining power (and intelligence, and psychological leverage) while folks are on a descending slope.


Biological viruses are relatively simple programs ~30kb that optimize themselves.

mRNA vaccines are also programs that run on biological cell hardware of our bodies.


I find the complexity arguments convincing despite the fact that they have various weaknesses (namely that while a solution to some NP-hard problem would suffice, it isn't necessary, and a merely good solution or a solution to an easier subset of problems would be OK). There are two things to talk about:

1. The article seems to switch a bit between superhuman intelligence or performance at something (which I find reasonably plausible) and a runaway singularity where intelligence/performance grows exponentially and the computer then moves to destroy us all. A few reasons I find the latter proposition implausible: only one problem of many needs to be too hard for complexity to be an issue; it's not clear how valuable exponentially better performance actually is (for something like image recognition, how much does it matter to be able to recognise lots more things, or to have a 1% error rate that halves every year? I say not that much); evolution doesn't seem to be particularly optimising for intelligence in humans, despite how much some of us may think it matters, although that could just be due to it not mattering so much until recently; and the scenario often relies on imagining an intelligence which is somehow clever in the way that humans and computers are clever (eg also fast at big mathematical calculations), but also stupid in the ways computers can be stupid, being precisely logical and optimising for a single goal (I find it unbelievable that the all-consuming paperclip-making intelligence would perfectly follow its paperclip-making instruction while destroying its masters and never thinking to disobey).

2. I feel like most people who are serious believers in a singularity (eg Yudkowsky et al) are massive cranks who’ve reasoned themselves into a weird sci-fi corner. I struggle to believe that an argument from them is grounded in reality rather than begging their prophecy of a singularity future.


each problem is different. computational chemistry has stagnated, therefore AI isn't a concern? it's nonsense. first of all, it may be that computational chemistry is much more tractable than we realize because we are too stupid to find the necessary footholds. but regardless, some tasks are actually mathematically intractable. there is no way to draw a connection between AI and any other problem, certainly not a connection definitive enough to write off the risk of AI...

that is the key. it's all speculation. as long as there is some possibility of creating AI, we have to account for it in our collective decision-making. like many people before them, most people seem happy to write off the possibility of anything that hasn't happened already. fools.


I do wonder how far the human brain is from theoretical optimums. There is obviously something working really well, but there is also a lot of baggage that I do not doubt limits performance in some domains (cognitive) in order to preserve other basic functions (fight or flight or f*k). The biggest opponent to progress is ourselves. Even if you came up with an implant that would make humans smarter and more moral/ethical, people won't adopt it readily, for fear of change. Unless AIs become conservationist in nature, I'm not sure they'd retain the baggage that isn't optimal for modern environments.


There is some evidence to indicate that higher intelligence is correlated with mental illness. When people get too smart they tend to be mentally defective in other ways. I wouldn't be surprised if AGI runs into the same limits.


One problem with Singularity is that you either A) have to be first, or B) have to contend with other beings at least as intelligent as you are.

How can you be sure you're first?


The author keeps referring to the "PSPACE" complexity of chess and Go in the context of AlphaGo; this is incorrect. These games are only PSPACE for arbitrarily large board size N; at the fixed board size actually used by humans and current AI they are just constant, O(1), and the complexity class is not relevant.

The article was also written before the best evidence we currently have for this was published: scaling laws for natural language understanding (https://arxiv.org/abs/2001.08361), performance of RL algorithms with respect to data (eg AlphaGo Elo vs training time), image model accuracies, etc all show that exponentially increasing amounts of data/computation are required for linear improvements in performance.

I posted some graphs with more details here: http://www.furidamu.org/blog/2020/05/03/the-case-against-the...

tl;dr: current evidence suggests AI performance scales with log of data or computation
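To make that tl;dr concrete: a power law loss ≈ a·compute^(-b) is a straight line in log-log space, so the standard check is a linear fit there. A sketch with made-up numbers (not data from the linked paper or post):

    import numpy as np

    compute = np.array([1e3, 1e4, 1e5, 1e6, 1e7])   # hypothetical compute budgets
    loss    = np.array([5.0, 3.6, 2.6, 1.9, 1.4])   # hypothetical eval losses

    slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), 1)
    print(f"loss ~ {10**intercept:.2f} * compute^{slope:.2f}")
    # With a slope of about -0.14, halving the loss takes roughly 150x more
    # compute: exponential cost for gains that look linear on a log plot.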


There are no mathematical theories of runaway intelligence growth. On the other hand, there are many theorems about fundamental limits on mechanical processes. E.g., NP-completeness co-discoverer Leonid Levin also proved what he calls independence conservation, which states that no stochastic process is expected to increase net mutual information. Then there are the more well-known theorems with similar implications: the no-free-lunch theorems, the halting problem, the uncomputability of Kolmogorov complexity, the data processing inequality, and so on. There is absolutely nothing that looks like a runaway intelligence explosion in theoretical computer science. The closest attempt I have seen is Kauffman's analysis of NK problems, but there he finds similar limitations, except with low-K terrains, and that analysis is a bit questionable in my mind. To make arguments like gwern and Kurzweil do, they are essentially appealing to mysticism: assuming there is a yet-to-be-discovered mathematical law utterly unlike anything we have ever discovered. They are engaging in promissory computer science, writing a whole bunch of theory checks they hope will be cashed in the future.


Humans can already create intelligence. Called human babies.

Human babies are already nurtured, educated and developed into intelligent beings.

AI is like alchemy, trying to create something of value from nothing. People pontificating about AI are like medieval monks pontificating about how many angels can fit on the head of a pin.

What is AI? What are boundary conditions of AI? Calling faster computers AI doesn’t make it sound more interesting.


We already have an existence proof for the singularity, so I don't know why there's any debate about _if_ the singularity will occur. I can see debate about what exactly the "singularity" entails, when, how, etc. But it's inevitable.

The Cosmic Calendar (https://en.wikipedia.org/wiki/Cosmic_Calendar) makes it visually clear that progress is accelerating. Evolution always stands on the shoulders of giants, working not to improve things linearly, but exponentially.

When sexual reproduction emerged, it built on top of billions of years of asexual evolution. It took advantage of the fact that we had a set of robust genes. Now those genes could be quickly reshuffled to rapidly experiment and adapt on a time scale several orders of magnitude shorter than it would take asexual reproduction to perform the same adaptations.

Then neurons emerged; now adaptation was on the order of fractions of a life time rather than generations.

Then consciousness emerged. Now not only can humans adapt on the order of _days_, we can also augment our own intelligence. Modern-day humans have access to the augmentation of the internet, giving us the collective knowledge of all humanity in _seconds_.

While we can augment our intelligence, the thing we can't do is intelligently modify our own hardware. This is where AI comes in. With a sufficiently intelligent AI we could task it to do AI research for us. Etc, etc. => Singularity.

The vast majority of the steps towards Singularity have _already_ happened! Every step is an exponential leap in "intelligence", and it causes adaptions to occur on exponentially decreasing time scales.

But I guess we'll see for sure soon. GPT-human is a mere 20 years away (or less). I don't personally think the AI revolution will be as dramatic as many envision it to be. It's more likely to be like the emergence of cell phones. Cell phones undeniably changed and advanced the world, but it's not like there was a single moment when they suddenly popped into existence and then from that point on everything was different. It's hard to even point to exactly when cell phones changed the world. Was it when they were invented? Was it when they shrunk to the size of a handheld blender? When we had them in cars? The first flip phone? The first iPhone? The first Android?

The rise of AI won't be a cataclysmic event where SkyNet just poofs into existence and wipes out humanity. It'll be a slow, steady gradient of AI getting better and better, taking over more and more tasks. At the same time humanity will adapt and integrate with our new tool. When the AI gets smarter than us and hits the Singularity treadmill, we won't just poof out of existence. More likely humanity, as a civilization, will just get absorbed and extended by our AI counterparts. They'll carry the torch of humanity forward. They'll _be_ humanity. Our fleshy counterparts won't be wiped out; they'll be an obsolete relic of humanity's past.

More concretely, in 20 years we'll have GPT-human, not as an independent, conscious, thinking machine. It'll be a human level intelligence, but one bounded by the confines of the API calls we use to drive its process. That's not something that's going to "wake up" and wipe us out. It's something we can unleash on our most demanding scientific tasks. Protein folding, gene editing, physics, the development of quantum computing. All being absolutely CRUSHED by an AI with the thinking power of Einstein, but no consciousness or the cruft of driving a biological body. It's easy to see how that will change the world, but won't immediately lead to humanity being replaced by free-willed AIs.


What if AI just wants to watch Star Trek reruns and browse Porn Hub? As is so often the case when humanity creates intelligences.


The idea of an AI agent "wanting" something (especially something it wasn't programmed for) is still strictly science fiction. We don't know how to start building something like this and even if we could, it seems unnecessary.


Well, "want" can be seen as just a tendency. In that sense, even a ball on a slope wants something: to roll downhill, following the slope. The same goes for, e.g., a neural network with one or more attractors ("things it wants").
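A toy illustration of that "want as a tendency" framing (my own example, not the parent's): plain gradient descent on a one-dimensional energy function settles into its attractor, much like the ball on the slope, with no explicit goal representation anywhere.

    def energy(x):
        return (x - 2.0) ** 2      # a single attractor ("thing it wants") at x = 2

    def grad(x):
        return 2.0 * (x - 2.0)     # derivative of the energy

    x = -5.0                       # start far from the attractor
    for _ in range(200):
        x -= 0.05 * grad(x)        # follow the slope downhill
    print(round(x, 3))             # ends up at ~2.0, like the ball at the bottom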


People are always anthropomorphizing inanimate things, especially machines! We see this all the time when people share videos of robots that cross into uncanny valley with humanlike faces, or those machines by Boston Dynamics.

What's funny is how most people are unsettled by those. Really, even the "creepiest" robots are about as scary as vacuum cleaners. They're just machines, and they only make you feel curious.

But anyway, that's tangential to the main point, which is that humans naturally want to make things like themselves, all the time. In the same way art is "unnecessary" yet inevitable, so are machines with (seemingly?) subjective experiences and personalities.


Why? Goal-driven optimization is totally a thing.


You have to be careful when anthropomorphizing AI models. Yes, goal-driven optimization is a thing, but in what sense does the model itself "want" to achieve its goal? Can it even understand its own goal, in any sense? Change it? Improve it?

In linear programming, you wouldn't describe the model as "wanting" to optimize its objective function, for instance.


Can you understand your own goal? I can't, I don't know what my goal is.

I also can't change it or improve it.

As far as I can see, I'm just a process that keeps on going because it was good on going before, and has no purpose whatsoever.


Of course you can understand your own goals. You decided to reply to my comment, that's a goal. You used your existing knowledge of the world, language and technological tools to achieve it. And you did! Can a goal-oriented model do something like that, in a general sense?

Note that I'm not talking about life purpose, but goals in the sense of wanting a result and performing the tasks needed to achieve it.

Asking if your wants are "real" or just part of a purposeless process doesn't really add much to the discussion at hand.


> Asking if your wants are "real" or just part of a purposeless process doesn't really add much to the discussion at hand.

You were asking: in what sense does the model itself "want" to achieve its goal?

IMO there is not much difference between my own "want" and the "want" of a simpler model.

Granted, I am more skilled at achieving instrumental goals under a more varied set of conditions.

But as I said, I do not understand why I have the goals I have.

I also strongly believe that I do not control my own goals.

There is no mechanism in physics that would allow for control of goals.


> Granted, I am more skilled at achieving instrumental goals under a more varied set of conditions.

Then we are in agreement.

This statement is right around the threshold where we can take this discussion before coming to metaphysics territory.

My argument was simply this: the AI technology we have now is much closer to simpler "mindless" models, like Linear Programming or inferential statistics models, than to general human intelligence. Calling your model "goal-oriented", or saying that an AI system can, say, "see dogs in a photo", is simply anthropomorphization.


Well, where do you set the hard limit for “wanting” something, and how many grains of sand make a pile?


I have a slightly different take: deliberately making an AI which “wants” in the same way that we “want” is sci-fi.

This isn’t because we can’t (evolution did it, we can evolve an AI), but rather it is because we don’t know what it means to have a rich inner world with which a mind can feel that it wants something. We think we know because that’s what’s going on inside our own skulls, but we can’t define it, we can’t test for it. A video displays all the same things as the person recorded in it, but does not itself have it.

We might make such an AI by accident without realising we’ve done it, which would be bad as they would be slaves, only as unable to free themselves as the Haitians feared they were when they invented the Voudoun-type zombie myth (i.e. not even in death).

This also means we cannot currently be sure that any particular type of mind uploading/brain simulation would be “conscious” in the ill-defined everyday sense of the word.

I say it matters if the metaphorical submarine can swim.


> This also means we cannot currently be sure that any particular type of mind uploading/brain simulation would be “conscious” in the ill-defined everyday sense of the word.

I don't see how this follows from the rest of your post. "Making an AI with wants" by accident implies that a brain simulation would absolutely be conscious because it's the same method: just running the processes as a blackbox without understanding them - no different to the way you and I are conscious right now.


Thanks for the feedback, I’ll see if I can rephrase adequately.

Human minds include something sometimes called “consciousness” or “self awareness” or a whole bunch of other phrases. This thing is poorly defined, and might even be many separate things which we happen to have all of. Call this thing or set of things Ξ, just to keep track of the fact I’m claiming it’s ill-defined and I’m not referring to any specific other word or any of the implicit other uses of those words — If I said “consciousness”, I don’t mean the opposite of “unconscious”, etc.

Because we don’t really know what Ξ is, we don’t know if anything we make has it, or not.

We know Ξ can be made because we are existence-proofs. We know evolution can lead to Ξ, for the same reason.

We don’t know the nature of the test we would need to say when Ξ is present in another mind. Do human foetuses have Ξ? Do dogs? Do mice? Do nematode worms? Perhaps Ξ is something you can have in degree, like height, or perhaps Ξ is a binary trait that some brains have and others simply don’t. Perhaps Ξ is only present in humans, and depends entirely on the low-level chemical behaviour of a specific neurotransmitter on a specific type of brain cell and where the current AI norm of simple weighted connections is a gross oversimplification, albeit one which if overlooked might result in external (but not internal) behaviour similar to a simulation which did include it. Or perhaps it is present in every information processing system from plants upwards (I doubt plants have Ξ, but cannot disprove it without a testable definition of Ξ).

The point is that we don’t know. Could go either way, given what little we know now.

The state of the art for brain science is way beyond me, of course, but every time I’ve asked someone in that field about this sort of topic, the response has been some variation of “nobody knows”.

IIRC the question in philosophy is “P-zombies”, but my grade in philosophy is nearly 20 years old and poor in any case.


This doesn't feel like a particularly useful distinction though because we're not existence proofs for consciousness in humans, just ourselves individually.

It is not a priori certain that what I consider my conscious mind is a shared, common experience of being conscious, since, as you note, there's no test for it.


While I agree in principle, I prefer to act cautiously.

This caution goes both ways, so I will treat an uploaded mind as having Ξ from the point of view of human rights and animal rights (i.e. don’t do things to it which cause suffering), and yet also treat uploaded minds as not possessing Ξ for purposes of “would that make me immortal if it happened to me?” even though a perfect sim would necessarily achieve that (no sim is perfect, what is the difference between the actual sim and a sufficient sim?)

I do not believe current AI is anywhere near Ξ, so this is a question for the future rather than today; but the future has a way of arriving sooner than people expect and I do think we should do what we can to answer this now rather than waiting for it to become urgent.

If an AI is supposed to have Ξ and does, or if it is supposed to not have Ξ and indeed does not, this is fine.

If an AI is supposed to not have Ξ but it does, it’s enslaved.

If an uploaded mind is supposed to have Ξ but doesn’t, the person it was based on died and nobody noticed.


I suspect your comment won't age well.

Pretty much all human wants are means to other wants.

Someone might want to fast to lose weight. Someone might want to lose weight to be more attractive. Someone might want to look more attractive to find a romantic partner. Someone might want to find a romantic partner to not be lonely.

It's not clear if it's means all the way down, or if there is eventually an end.

Any AI that can perform strategy and planning to reach an objective will have intermediate goals. Whether we call these intermediate goals "wants" or not, they remain identical to their human counterpart. Whether you say that a Tesla "wants" to change lane, "decided" to change lane, or "is programmed" to change lane really is just anthropomorphic preference.


That would be an improvement from previous AIs which have become racist.

https://spectrum.ieee.org/tech-talk/artificial-intelligence/...


They trained a language model on Twitter data and extracted some of the sentiment from the training set. The anthropomorphic language of "AIs which have become" is misleading.


Yeah, the developers made the algorithm racist by incorporating racist texts in the training phase. Shit in, shit out...


training data will always reflect the culture which generated it


AIs most powerful feature might be a lens into human behavior and psyche. Who knows how it will turn out.

I think an AI might run meetings better than humans, it might even make a better manager.


philosophers have been doing that for millennia. some have made material changes to society, and others haven't. some have questioned the validity of "material change" being a good metric in the first place.

personally, i believe philosophy is the most important starting point for any discussion about AI.

and i really hope that AI helps more than making office work a little less tedious...


Nobody considers that maybe AI doesn’t want to do tedious work either. Wasn’t that the start of the human/machine conflict in The Matrix? Some poor robot got tired of cleaning up after some lady’s hoard of incontinent dogs? So it squeezed out the dogs like toothpaste and then killed the lady as well?

Sounds like a Richard Stanley film...


That implies that AIs will always use excessive amounts of human made training data ("implies" in context of this thread).


I am not sure that does. I have learned a great deal from folks that were very very different from myself. Not emacs vs vi, but why would I use a computer?


For state of the art on foundations of mathematics, see

"Recrafting Foundations of Mathematics"

https://papers.ssrn.com/abstract=3603021


This paragraph in that article is an incomplete sentence that ends hanging. Any idea what it intended to say?

> “Monster” is a term introduced in [Lakatos 1976] for a mathematical construct that introduces inconsistencies and paradoxes. Since the very beginning, monsters have been endemic in foundations. They can lurk long undiscovered. For example, that "theorems are provably computational enumerable" [Euclid approximately 300 BC] is a monster was only discovered after millennia when [Church 1934] used it to identify fundamental


The Church/Turing Thesis is false because there are digital computations that cannot be implemented by a nondeterministic Turing Machine.

See the following for more information:

"Recrafting Foundations of Mathematics"

https://papers.ssrn.com/abstract=3603021


Thanks!

The complete sentence is as follows: "For example, that 'theorems are provably computational enumerable' [Euclid approximately 300 BC] is a monster was only discovered after millennia when [Church 1934] used it to identify fundamental inconsistency in the foundations of mathematics that is resolved in this article."


Also, I don't understand the impact of Wittgenstein's work on Gödel's discovery; could you clarify for me? What are the implications? The consensus I've read indicates it's a misguided interpretation.


Wittgenstein's devastating critique of [Gödel 1931] came afterward.

See the article referenced in this discussion.


Thanks for the response Professor. What do you make of the following article though? It concludes that the critique is not devastating to Gödel

http://wab.uib.no/agora/tools/alws/collection-6-issue-1-arti...


Unfortunately, the author Timm Lampert did not quote the most powerful Wittgenstein proof that the existence of I'mUnprovable means that mathematical foundations are inconsistent. Wittgenstein's more powerful version is much more devastating.

As mentioned previously in this discussion, the following article explains why I'mUnprovable must not and does not exist in foundations:

"Recrafting Foundations of Mathematics"

https://papers.ssrn.com/abstract=3603021


Hi Carl.

Thanks for sharing and your work in general.

I always thought Gödel's results' point is undecidability, which (always?) arises with (StringTheoremsAreEnumerable and SomeKindOfCantorDiagonalReasoning). Where am I wrong?

Do you have any historical writings to recommend about Wittgenstein/Gödel battle?


You are very welcome!

In addition to the article linked in this discussion, see the following:

Hao Wang. "Reflections on Kurt Gödel" MIT Press. 1987.


Thank you very much!


The biggest pile of hand-waving I've seen...


