Tool AIs want to be agent AIs (gwern.net)
153 points by ogennadi on Dec 21, 2016 | 58 comments



I highly recommend the book referenced in the article: Nick Bostrom's Superintelligence.

https://www.amazon.com/Superintelligence-Dangers-Strategies-...

It has helped me make informed, realistic judgments about the path AI research needs to take. It and related works should be familiar to anybody working toward AI.


Every time I encounter Bostrom's writing, I think of this Von Neumann quote:

"There’s no sense in being precise when you don’t even know what you’re talking about."

Bostrom is one of those medieval cartographers drawing fantastical beasts in the blank spots of continents which he has never visited.


Actually, there are at least two decades-old branches of computer science/mathematics that have formulated precise definitions of AI and proved many theoretical results that gave rise to lots of practical applications. These branches of CS are called "Reinforcement Learning" and "Universal AI".

While Gwern has already mentioned Reinforcement Learning, UAI is a less known (but even more rigorous and well-received) mathematical theory of general AI that arose from Marcus Hutter's work [1].

My point here is: how can one say that there is no definition of AI when there are several precise mathematical definitions available, with many theorems proven about them?

1. http://www.hutter1.net/ai/uaibook.htm


You are confusing narrow AI with AGI. None of those things has proved anything practical about what an actually achievable AGI would look like, as opposed to some theoretical construct that is provably incomputable.


No, he is not. Hutter's work on universal AI, his AIXI formulation, is specifically a model of application-generic AGI.

That said, it is also not computable with finite time or resources, so it is unclear what relevance it has to practical applications.


It's relevant because AIXI_tl has failure modes (it doesn't model itself as being embedded in its environment, so it can't ensure its own survival), which demonstrates that any approach which is just a weaker version of it will have those same problems.

> That said it is also not computable with finite time or resources, so it is unclear what relevance it has to practical applications.

You can define it as space- or time-bounded, and then it's finite but still intractable.


I agree with the first sentence, but I'd like to note that there are practical (though weak) approximations of AIXI that preserve some of its properties and, while not Turing-complete, prove to be more performant than other RL approaches on the Vetta benchmark. See [1].

Also, there is a Turing-complete implementation of OOPS, a search procedure related to AIXI that can solve toy problems, programmed by none other than Jürgen Schmidhuber ten years ago [2].

Even more important: there is a breadth of RL theory built around MDPs and POMDPs. There are asymptotic, convergence, bounded-regret, on-policy/off-policy results, etc. Modern practical deep RL agents (the ones DeepMind is researching) are developed on the same RL theory and inherit many of these results.
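To make the flavor of those convergence results concrete, here is a toy value-iteration sketch (a hypothetical 3-state MDP with made-up transition probabilities and rewards): because the Bellman backup is a gamma-contraction, the value estimates converge geometrically, which is one of the classical results referred to above.

  # Toy value iteration on a made-up 3-state, 2-action MDP.
  # The Bellman backup is a gamma-contraction, so V converges geometrically.
  import numpy as np

  gamma = 0.9
  # P[a, s, s'] = transition probability; R[s, a] = expected reward (illustrative)
  P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.9, 0.0], [0.0, 0.1, 0.9]],
                [[0.0, 0.5, 0.5], [0.3, 0.0, 0.7], [0.5, 0.5, 0.0]]])
  R = np.array([[0.0, 1.0], [0.5, 0.0], [0.0, 2.0]])

  V = np.zeros(3)
  for i in range(1000):
      Q = R + gamma * np.einsum('ast,t->sa', P, V)  # Q[s,a] = R[s,a] + g*E[V(s')]
      V_new = Q.max(axis=1)                         # greedy Bellman backup
      if np.max(np.abs(V_new - V)) < 1e-10:
          break
      V = V_new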

From my POV it looks unfair to the researchers who produced these results over decades of work when the grandparent (and great-grandparent) comments claim that there is no definition or theory of AI, and that AI is like alchemy.

1. https://www.jair.org/media/3125/live-3125-5397-jair.pdf 2. http://people.idsia.ch/~juergen/oops.html


Thanks for the quote and the metaphor. It's a good description of what's wrong with the "AI risk" community. Drives me nuts how much traction they've been able to get, and how many ardent defenders they have, when they're not doing any intellectual work, just facile speculation. Their dogma seems to infect every conversation on AGI, and it's a shame.


Bless you for saying it.

The analogy I like to use for our understanding of AI is alchemy. We threw Sir Isaac Goddamned Newton at chemistry and he couldn't make forward progress, because the tools were not precise enough. Similarly, we just don't understand minds enough yet to formulate sensible questions about AI.

This doesn't bother Bostrom. He builds castles of thought in the air, and then climbs up into them.


Quentin Hardy of the NYT on Bostrom: His career amounts to: "Assume hummingbirds will speak French. Let's discuss their novels." https://twitter.com/qhardy/status/806003812431319041


> Similarly, we just don't understand minds enough yet to formulate sensible questions about AI.

We most certainly do understand enough to formulate questions, and even answer some of them. The problem is that the people making the most noise (Bostrom et al) are not trained in neuroscience or computer science, nor do they have practical experience in deploying working systems. They have about as much training and expertise as science fiction writers, and the end result is similar.


> The problem is that the people making the most noise (Bostrom et al) are not trained in neuroscience or computer science…

This is incorrect. Among his degrees, Bostrom has a master's in computational neuroscience. His arguments have also convinced PhD neuroscientists (such as Sam Harris) and computer scientists (such as Stuart Russell) about the potential dangers of AI.


Degrees don't matter, publications do.


Particularly when that degree is from a single-year program and is now 20 years old, in a field that has been revolutionized multiple times in the interim. It's a bit like someone saying they are a web developer because they went through an App Academy-like boot camp in 1996, if such a thing had existed. King's College is a bit more prestigious than that, sure, but content-wise it is a fair comparison.

It would be a different story if he had published in the meantime, but he did not. Nor did he work on practical projects in industry or anything; he shifted gears to philosophical speculation, which he has done since.


Not only are you moving the goalposts, but you are again incorrect. Since 1999, Bostrom has authored four books and published over 30 articles in peer-reviewed journals.

There are good arguments to be made against Bostrom's Superintelligence, but personal attacks and surface analogies aren't appropriate. Please engage with the ideas, not the man.


Published in the field of neuroscience or computer science?

Read what I wrote again please. I think you misinterpreted.


Silly me, I thought it was results!


Not about AGI. We understand enough to formulate questions about narrow "AI," certainly, since that already exists in a narrow sense.


In case you were not aware, we have about a decade of conferences on the specific topic of Artificial General Intelligence. Many of the papers from those conferences provide valuable insights into the capabilities and limits of various approaches to solving specific general-intelligence problems. You might find articles from the past conferences interesting:

http://agi-conference.org/

There is also the Advances in Cognitive Systems journal and associated conference, which is AGI even if they prefer to avoid that specific acronym:

http://www.cogsys.org/

And there are always a small but growing number of papers related to AGI in each AAAI conference.


1. Just because something has been published as a paper does not mean it is applicable, says something interesting (even if only theoretical), or is actually correct (it just can't be obviously wrong).

2. Setting aside cogsys (CogSci is a whole different beast than AI/ML in computer science), the only impactful journal/conference you've listed is AAAI.

3. Papers are also typically incremental and all of the AGI papers I've seen in AAAI (and there have been very few) are no different, tackling some small theoretical subproblem.

4. I'm not saying the research is useless. It's very valuable. But it is pure theory right now, and to claim it has insights for us about what AGI would actually look like is very premature.


Cutting criticism is useless without examples. Name such a beast, and say why it is unlikely to be real.


Maybe I'm ignorant, but reading the abstract/introduction, I immediately got the sense that this guy (Gwern) was a crank. At the very least, I figured it was some tangentially related philosophical quote, not part of the main body.


Nick Bostrom also did an interview on EconTalk that covers quite a lot of the topics in the book from a high-level, if you want a shorter introduction to AI safety and the control problem: http://www.econtalk.org/archives/2014/12/nick_bostrom_on.htm...


Thank you! Now I have a way to introduce my book-averse friends to the control problem.


It's a book worth reading, as it seems to have captured quite a bit of interest from the movers and shakers in this field. However, one should be aware that it presents a one-sided viewpoint, and reasonable minds disagree. It's not clear yet what the future will bring, or whether this focus on AI safety will reduce real existential risk or delay life-saving technologies.


Check out Gwern's Reinforcement Learning subreddit. He's practically keeping the subreddit going by himself.

https://www.reddit.com/r/reinforcementlearning/


This is excellent. If you want to see what real discussion of an AGI alignment issue looks like, please read this.


What about the relative size of the available datasets? It seems like that would make offline learning much more valuable than learning directly from experience.

The largest publicly available research datasets for machine translation are 2-3 million sentences [1]. Google's internal datasets are "two to three decimal orders of magnitudes bigger than the WMT corpora for a given language pair" [2].

That's far more data than a cell phone's translation app would receive over its entire lifetime. Similarly, the amount of driving data collected by Tesla from all its cars will be much larger than the data received by any single car.

This suggests that most learning will happen as a batch process, ahead of time. There may be some minor adjustments for personalization, but it doesn't seem like it's enough for Agent AI to outcompete Tool AI.

At least so far, it seems far more important to be in a position to collect large amounts of data from millions of users than to learn directly from experience, which happens slowly and expensively.

This is not about having a human check every individual result. It's about putting a software development team in the loop. Each new release can go through a QA process where it's compared to the previous release.
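A minimal sketch of that kind of release gate (all names and thresholds here are hypothetical, just to illustrate the QA-in-the-loop idea): the candidate model only ships if it does not regress against the current release on a frozen evaluation set.

  # Hypothetical release gate: compare the candidate model against the
  # current release on a frozen evaluation set before shipping.
  def accuracy(model, eval_set):
      return sum(model(x) == y for x, y in eval_set) / len(eval_set)

  def should_ship(candidate, current, eval_set, max_regression=0.002):
      return accuracy(candidate, eval_set) >= accuracy(current, eval_set) - max_regression

  # Toy usage: the models are just callables standing in for real systems.
  eval_set = [(0.1, False), (0.9, True), (0.4, False), (0.7, True)]
  current = lambda x: x > 0.5
  candidate = lambda x: x > 0.45
  print(should_ship(candidate, current, eval_set))   # True here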

[1] https://github.com/bicici/ParFDAWMT14 [2] https://research.googleblog.com/2016/09/a-neural-network-for...


Offline batch learning is not in contradiction with reinforcement learning. It just requires an off-policy RL algorithm, which, happily, many (like DQN) are.
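A minimal sketch of that point, using a toy tabular setting rather than DQN itself (the states, actions, and logged data below are all made up): an off-policy update like Q-learning can be run entirely over a fixed batch of previously collected transitions, i.e. as offline batch learning.

  # Tabular Q-learning over a fixed batch of logged transitions
  # (state, action, reward, next_state): off-policy and fully offline.
  import random
  import numpy as np

  n_states, n_actions = 5, 2
  alpha, gamma = 0.1, 0.95

  # Pretend these transitions were logged by some other (behavior) policy.
  batch = [(random.randrange(n_states), random.randrange(n_actions),
            random.random(), random.randrange(n_states)) for _ in range(10000)]

  Q = np.zeros((n_states, n_actions))
  for epoch in range(20):            # replay the fixed batch many times
      random.shuffle(batch)
      for s, a, r, s2 in batch:
          # The max over next actions makes the update off-policy: it does
          # not matter which policy generated the logged data.
          target = r + gamma * Q[s2].max()
          Q[s, a] += alpha * (target - Q[s, a])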

> Similarly, the amount of driving data collected by Tesla from all its cars will be much larger than the data received by any single car.

Well, Tesla benefits from the fact that its cars are already agents, acting in the real world. So using it as a counter-example doesn't really work... You would need to imagine some sort of counterfactual Tesla which eschewed any real-world deployment or simulators. Which no one would ever do because the usefulness of trying to create a self-driving car database without agents or reinforcement learning is so obviously zero.

And anyway, most of Tesla's data will be totally useless. You can train a simple lane-following CNN with a few hours of footage (as Geohot demonstrated), but another million hours of highway driving is near-useless and doesn't get you much closer to self-driving cars (as perhaps Geohot also inadvertently demonstrated). What you need are the weird outliers which drive accident rates. So for example, the Google self-driving car team doesn't depend purely on collected data even from its agents, but simulates very rare scenarios as well. (If I may quote myself: "You need the right data, not more data.")

> This suggests that most learning will happen as a batch process, ahead of time. There may be some minor adjustments for personalization, but it doesn't seem like it's enough for Agent AI to outcompete Tool AI.

Even if we imagine that the dataset covers all possible sequences and doesn't need any kind of finetuning, the agent AIs in this scenario are still going to benefit from actions over (going by section): 'internal to a computation', 'internal to training', 'internal to design', and 'internal to data selection'. In fact, if you want to have any hope of running effectively over extremely large datasets like hundreds of millions of sentences, you are going to effectively require some of those to keep training time tractable and runtime cheap (even Google can't afford a million TPUs for Google Translate). For example, regular hyperparameter optimization using a few dozen trials doesn't look very good when it takes a month on a GPU cluster to train a single model, but using a meta-RL RNN to do some transfer learning and pick out near-optimal architectures & hyperparameters in just a few iterations looks pretty nice.
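As a very rough illustration of that last point (a toy stand-in, not anyone's actual production method; the candidate learning rates and the 'validation accuracy' function below are invented): an RL controller can learn which hyperparameter choices pay off from a handful of noisy evaluations, rather than exhaustively training one model per configuration.

  # Toy REINFORCE controller over a discrete hyperparameter choice.
  # reward() is a made-up stand-in for validation accuracy after training.
  import numpy as np

  rng = np.random.default_rng(0)
  LRS = [1e-4, 1e-3, 1e-2, 1e-1]          # candidate learning rates

  def reward(choice):
      return [0.6, 0.9, 0.7, 0.3][choice] + rng.normal(scale=0.02)

  logits = np.zeros(len(LRS))             # controller's policy parameters
  baseline, step = 0.0, 0.1
  for t in range(500):
      probs = np.exp(logits) / np.exp(logits).sum()
      a = rng.choice(len(LRS), p=probs)   # sample a hyperparameter setting
      r = reward(a)
      baseline += 0.05 * (r - baseline)   # moving-average baseline
      grad = -probs
      grad[a] += 1.0                      # d log pi(a) / d logits
      logits += step * (r - baseline) * grad   # REINFORCE update
  # After training, probs concentrates on the 1e-3 setting.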


I think you're missing a distinction that makes self-driving cars not really agents.

I haven't seen reports that any self-driving cars learn to drive better in real time. Instead the data is collected and used to improve the software. There might be frequent software releases (perhaps even every day) but this isn't quite the same thing.

An offline learning process seems considerably more predictable. The release engineers can test each release to see how it would behave under various extreme conditions.

It's not clear that there's a compelling reason to switch to true online learning. Self-driving cars will improve over time, but not to the point where new training data gathered at 9am should make a significant difference to how the car drives at 6pm. Probably this year's model will drive better than last year's, but the changes are likely to be gradual.

As you say, there are diminishing returns to gathering more data, which suggests that in many domains, updating the software in real time won't be a competitive advantage. Unless it's a system that's reacting to breaking news, the delta in training data gathered in the last 24 hours is unlikely to make a huge difference.


> As far as I know, self-driving cars don't learn to drive better in real time. Instead the data is collected and used to improve the software. There might be frequent software releases (perhaps even every day) but this isn't quite the same thing.

Most self-driving cars probably use some variant of local search[1]. In this case it probably doesn't matter if the agent learns offline or online.

[1] https://en.m.wikipedia.org/wiki/Local_search_(optimization)
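For readers who don't follow the link, a toy hill-climbing sketch of local search (the objective below is invented; a real system would score e.g. comfort and safety on logged or simulated trajectories): repeatedly perturb the current parameters and keep any change that improves the score, which works the same whether the evaluations come from offline data or a live system.

  # Minimal hill climbing: perturb the current parameters, keep improvements.
  import numpy as np

  rng = np.random.default_rng(0)
  target = np.array([1.0, -2.0, 0.5])     # unknown optimum (illustrative)

  def score(params):                      # stand-in objective to maximize
      return -np.sum((params - target) ** 2)

  params = np.zeros(3)
  best = score(params)
  for step in range(5000):
      candidate = params + rng.normal(scale=0.1, size=3)
      s = score(candidate)
      if s > best:                        # greedy: accept only improvements
          params, best = candidate, s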


I firmly believe that general AI will not be developed without agency for the AI. The "info only" helper AI (called Tool AI) means that information will have to be added manually by some intelligent agent (human or otherwise). No exploration of how actions and interactions affect results can take place.

Tool AIs will never "want" anything, because the meaning of "want" will be completely foreign to them.


You're not alone. Yann LeCun's famous cake analogy associates the cherry on top with reinforcement learning (agency: being part of an environment it can learn from and explore).

I especially liked this passage. It ties many ideas into one theme:

> CNNs with adaptive computations will be computationally faster for a given accuracy rate than fixed-iteration CNNs, CNNs with attention classify better than CNNs without attention, CNNs with focus over their entire dataset will learn better than CNNs which only get fed random images, CNNs which can ask for specific kinds of images do better than those querying their dataset, CNNs which can trawl through Google Images and locate the most informative one will do better still, CNNs which access rewards from their user about whether the result was useful will deliver more relevant results, CNNs whose hyperparameters are automatically optimized by an RL algorithm will perform better than CNNs with handwritten hyperparameters, CNNs whose architecture as well as standard hyperparameters are designed by RL agents will perform better than handwritten CNNs… and so on. (It’s actions all the way down.)

This is the general trend. Going meta, through RL. Optimize the optimizer, learn gradient descent by gradient descent.
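A toy numpy sketch of "optimize the optimizer" (much cruder than a learned RNN optimizer or an RL controller: here the outer loop just random-searches over the inner optimizer's own learning rate and momentum by scoring the loss after a short unrolled run; every number is illustrative):

  # Meta-optimization sketch: tune the inner optimizer's hyperparameters
  # by evaluating the loss after an unrolled inner training run.
  import numpy as np

  rng = np.random.default_rng(0)
  A = rng.normal(size=(20, 10))
  b = rng.normal(size=20)

  def inner_train(lr, momentum, steps=50):
      """Run SGD with momentum on a least-squares problem; return final loss."""
      w, v = np.zeros(10), np.zeros(10)
      for _ in range(steps):
          grad = A.T @ (A @ w - b)
          v = momentum * v - lr * grad
          w = w + v
          if not np.all(np.isfinite(w)):   # diverged under these settings
              return np.inf
      return float(np.sum((A @ w - b) ** 2))

  # Outer loop: search over the optimizer's own hyperparameters.
  best = (None, np.inf)
  for _ in range(200):
      lr = 10 ** rng.uniform(-4, -1)
      momentum = rng.uniform(0.0, 0.99)
      loss = inner_train(lr, momentum)
      if loss < best[1]:
          best = ((lr, momentum), loss)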


> learn gradient descent by gradient descent.

As far as 'learning gradient descent by gradient descent' goes, I think it's still an open question whether a tiny RNN actually can improve meaningfully over ADAM etc. :) I don't recall the paper showing any wallclock times, which suggested to me that the RNN was way slower even if it trained faster. The individual parameter adjustments are so low in the hierarchy of actions that the value may be minimal compared to changes higher up, like architecture design.


What paper are you referencing?


I assume this https://arxiv.org/abs/1606.04474 (since the title is basically the quoted text)


I'm not sure that trawling Google Images counts as active. It doesn't seem like it can be any more powerful than being given the entire Google Images database in advance.


If you have a decision process of some kind, you can get more value out of it if you can link it to a utility function so that the "tool" tries to maximize the value it creates.

It's tough enough to get value out of A.I. that this trick should not be left on the table. Thus Tool A.I.s need to be Agent A.I.s to maximize their potential.
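A tiny sketch of that trick (all option names, scores, and feedback values below are hypothetical): a "tool" that merely scores options gets wrapped in an agent loop that chooses options so as to maximize the utility it actually observes, updating its estimates from its own feedback.

  # Wrapping a "tool" (a static scoring function) in a small agent loop
  # that maximizes observed utility via epsilon-greedy feedback.
  import random

  OPTIONS = ["a", "b", "c"]

  def tool_score(option):
      # The tool's static prior (e.g. a ranking model's output).
      return {"a": 0.5, "b": 0.4, "c": 0.3}[option]

  def observed_utility(option):
      # Noisy feedback from the environment/user, unknown to the tool.
      return {"a": 0.2, "b": 0.9, "c": 0.5}[option] + random.gauss(0, 0.1)

  estimate = {o: tool_score(o) for o in OPTIONS}   # start from the tool's prior
  counts = {o: 1 for o in OPTIONS}
  for t in range(1000):
      if random.random() < 0.1:                    # explore occasionally
          choice = random.choice(OPTIONS)
      else:                                        # exploit the current estimate
          choice = max(OPTIONS, key=estimate.get)
      u = observed_utility(choice)
      counts[choice] += 1
      estimate[choice] += (u - estimate[choice]) / counts[choice]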


The question is what happens when "what we want" is replaced with "what you should want if you were more intelligent". Sorry Dave.


It seems most of these arguments apply equally well to the problem of solving AI value alignment, or of preventing the development of AI at all. (I.e., it's cheaper and faster to race ahead without worrying about value alignment.) But that doesn't make us conclude that value alignment is impossible, just hard to achieve soon enough in the real world.

Yes, we should be aware of the limitations and market instability of tool AI, but I think it's unjustified to suggest that tool AI is essentially impossible ("highly unstable equilibrium") and all we can hope to do is solve value alignment.


> It seems most of these arguments apply equally well to the problem of solving AI value alignment

Only somewhat. There are no repeated practical demonstrations, nor are there any really good theoretical reasons, to think that value alignment mechanisms would be seriously and systematically detrimental to performance & cost in the same way that we have for reinforcement learning/active learning/sequential trials/etc. You can reuse my little trivial proof to argue that UFAIs >= FAIs, but of course they could just be == on intelligence. Then FAIs can be a relatively stable equilibrium because they are not instantly outclassed by any fast-growing UFAI. Contrast that with Tool AIs, where everyone who cooperates is at an enormous disadvantage to one defector, and defectors have large and growing rewards.

> But that doesn't make us conclude that value alignment is impossible, just hard to achieve soon enough in the real world.

I'm not sure anyone seriously thinks that value alignment will be easy to achieve, much less at sufficient scale.

(It's weird to try to reason that 'value alignment looks hard for the same reason that Tool AI looks hard; but we can do value alignment, thus, we can do Tool AI as well'. Maybe we can't do either? We desperately want value alignment to work, but the universe doesn't owe us anything.)


> There are no repeated practical demonstrations, nor are there any really good theoretical reasons, to think that value alignment mechanisms would be seriously and systematically detrimental to performance & cost in the same way that we have for reinforcement learning/active learning/sequential trials/etc. You can reuse my little trivial proof to argue that UFAIs >= FAIs, but of course they could just be == on intelligence. Then FAIs can be a relatively stable equilibrium because they are not instantly outclassed by any fast-growing UFAI.

Some vaguely similar arguments:

1. The race by organizations to develop AI is slowed >10% by worrying about friendliness. In a big efficient global marketplace, the winning organizations will almost certainly be ones who ignored friendliness.

2. A seed AI which is amoral can take immoral actions not available to a moral AI. Even a tiny advantage amplifies over time because of the exponential nature of recursive self-improvement. So the first superintelligent AI will be immoral.

3. Nations who obtain nuclear weapons have immense power. Even if most nations decide not to obtain them, or only obtain them for peaceful purposes, at least a few will get them and then use them to take over the Earth. Therefore, the Earth will be taken over completely by a nuclear-armed nation.

All of these arguments have some merit, but none are strong/inevitable. They all require quantification and comparison against countervailing forces (e.g., the effect of laws, the moral motivations of nearby human programmers, the counterthreat provided by many allied nations/AIs).

> (It's weird to try to reason that 'value alignment looks hard for the same reason that Tool AI looks hard; but we can do value alignment, thus, we can do Tool AI as well'. Maybe we can't do either? We desperately want value alignment to work, but the universe doesn't owe us anything.)

I'm not making any sorts of claims like that.


Glad to see this articulated so well. The 'Overall' paragraph sums up thoughts that had been in the back of my mind for months. Plus, hey, it's Gwern. If you're reading this: you're an inspiration, man.

What is still unclear -- to me at least -- are the technical challenges that lie ahead of this "neural networks all the way down" approach. I get the impression we'll need quite a few breakthroughs before usable Agent AIs are a thing. Insights on the order of importance of, say, backpropagation and using GPUs.


Meta-reinforcement learning could prove to be such a breakthrough; see [1], [2]. Also, next-generation ASIC accelerators (Google's TPU, Nervana) can give a 10x increase in NN performance over a GPU manufactured on the same process, with another 10x possible with some form of binarized weights, e.g. BNN or XNOR-net. There are also interesting techniques for updating the model's parameters in a sparse manner.

So, there certainly is a lot of room left for performance improvements!

1. https://arxiv.org/abs/1611.05763 2. https://arxiv.org/abs/1611.02779
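For context on the binarized-weights point, a minimal numpy sketch in the spirit of XNOR-Net's weight binarization (shapes and values are illustrative): the real-valued weight matrix is approximated by its sign plus one scale per output unit, so the multiply-accumulate can in principle become XNOR plus popcount.

  # Sketch of an XNOR-Net-style binarized linear layer forward pass:
  # approximate W by sign(W) scaled by the mean absolute value per row.
  import numpy as np

  rng = np.random.default_rng(0)
  W = rng.normal(size=(4, 8))          # real-valued weights
  x = rng.normal(size=8)               # input activations

  alpha = np.abs(W).mean(axis=1)       # one scale per output unit
  W_bin = np.sign(W)                   # weights collapsed to {-1, +1}

  y_full = W @ x                       # full-precision result
  y_bin = alpha * (W_bin @ x)          # binarized approximation
  # On dedicated hardware the W_bin @ x product becomes XNOR + popcount,
  # which is where the projected speed and memory gains come from.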


> we'll need quite a few breakthroughs before usable Agent AIs are a thing. Insights on the order of importance of, say, backpropagation and using GPUs.

I think a 10x-100x increase in efficiency is going to come soon, based on more research into efficient hardware and efficient models. New algorithms, new hardware and especially a lot of money and talent invested in research are going to power the next step.


I think all AIs in my lifetime will simply be Chinese Rooms

https://en.wikipedia.org/wiki/Chinese_room


The question of whether or not AlphaGo is 'really' intelligent is irrelevant to whether it can beat me at Go. The question of whether or not the Pentagon's integrated AI system is really intelligent is similarly irrelevant to whether or not it might undertake a program of action its creators would object to if they understood what it meant.


The Chinese room argument is irrelevant here.


Is it? It seemed to me like the author assumes AI decision-making will be roughly equivalent to biological decision-making, just faster. I thought one of the Chinese room arguments is that biological decisions will always be "different" from AI ones.

Also, it seemed like the author assumes great technological advances in AIs, but not in biology. If we're gonna dream shit up, why not dream that brains in the future will be 10,000 times as dense and computers won't be able to keep up except as tools.


The point of the Chinese room argument is that while the room receives and emits Chinese just like any Chinese speaker, it isn't conscious. As the WP article makes clear, the assumption is that the Chinese room is just as competent at emitting Chinese:

"Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being."

> If we're gonna dream shit up why not dream that brains in the future will be 10,000 times as dense

Because... that is not a thing which is happening. And deep learning and AI progress are things that are happening. (Quite aside from the many issues with your proposal, like the fact that a brain 10kx as powerful due to 10kx density would probably break thermodynamic limits on computation and, of course, cook itself to death within seconds.)


What's not a thing that's actually happening? Drugs and other procedures that increase synaptic/neural density are indeed happening.

Like I mentioned, I don't know all the AI terminology, but isn't there an unresolved argument that AI architectures in the short run can't mimic biological decision-making, and so the decisions will always be different, leaving tasks for which tool AI will be better suited to helping the biological decision-making processes?


No, the Chinese room takes as an assumption that the input and output of the room are the same as those of someone who "actually understands" Chinese. In other words, it assumes that biological decisions will always be the same as the AI's decisions.


I think we will discover that our own consciousness works like the Chinese Room, and acknowledging and internalizing that will cause tremendous unrest among philosophers, computer scientists, neurologists, and other academic disciplines--potentially even including the law.


You are a Chinese room too.


Is this what they call anatta?


That would be my interpretation, yes. To be fair Searle would say that it proves the opposite, but his argument isn't much more than "isn't that crazy?!" (Ok that wasn't very fair.)


It depends who they are. (Ha ha.)

That's one perspective on anatta, but not a particularly useful one. You may find this interesting:

http://www.accesstoinsight.org/lib/authors/thanissaro/selves...

  If you've ever taken an introductory course on Buddhism, you've
  probably heard this question: "If there is no self, who does the
  kamma, who receives the results of kamma?" This understanding turns
  the teaching on not-self into a teaching on no self, and then takes
  no self as the framework and the teaching on kamma as something that
  doesn't fit in the framework. But in the way the Buddha taught these
  topics, the teaching on kamma is the framework and the teaching of
  not-self fits into that framework as a type of action. In other
  words, assuming that there really are skillful and unskillful
  actions, what kind of action is the perception of self? What kind of
  action is the perception of not-self?


Or perhaps philosophical zombies?

https://en.wikipedia.org/wiki/Philosophical_zombie



