
[–]CyberByte 4 points  (1 child)

I'm going to put some criticisms below, but first I just want to say that this was very interesting and well written. I found the survey of ways in which "agentiness" has already been incorporated into deep learning especially valuable. It's a little unclear to me, though, whether these are supposed to transform a Tool AI into an Agent AI, because it seems to me that most of the cited systems would still be considered Tools.

It does seem to me, however, that the article in some ways ignores the central premise of the tradeoff between the effectiveness of Agent AIs and the (ostensible) safety of Tool AIs. When the article says "Tool AIs are inferior on every dimension to Agent AIs", it ignores the dimension of safety. And when it goes on to argue that Agent AIs are indeed more effective, I can't help but think that this was already the premise of the tradeoff (although the analysis of why Agents outperform Tools is insightful).

An argument could perhaps be made that Agents are so much better that nobody would make the tradeoff for safety, or that those who do will be competed out of the market, but the article does not make this argument explicitly (at least, safety is not taken into account here). It says Agents would be systematically preferred over Tools, but I would think that safety has at least some value: surely a billion dollars is preferable to infinity dollars plus the end of humanity. Also, regarding competitiveness, it may be possible to discriminate against Agent AIs in the market to such a degree that Tool AIs actually become more profitable. I very much doubt it, but it seems to warrant some analysis.

I'm inclined to agree with the general conclusion that Tool/Oracle AIs are probably not a good long-term solution, but more for the reasons briefly stated in the Background section. In other words: not because the "effective" side of the tradeoff is more effective than the other side, but because the "safe" side is not actually that safe or might actually be infeasible.

[–]avturchin 1 point  (0 children)

I would also add that if our goals are limited, then we don't need unlimited AI, and limited Tool AIs may be just enough.

For example, if we have two main goals, solving aging and ensuring car safety, we could address both by creating two different Tool AIs: the first would just analyze individual combinations of geroprotectors, and the second would be self-driving car software.
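
To make the shape of that concrete, here is a minimal Python sketch. Every name in it is a hypothetical stand-in, not a real model or library; the point is only the interface: each tool is a pure input-to-output mapping with no goals, memory, or actions of its own.

```python
def score_geroprotector_combo(compounds: list[str]) -> float:
    """Tool AI #1: predict a benefit score for one candidate
    combination of geroprotectors. Stateless; it only answers
    when asked, and does nothing otherwise."""
    return 0.0  # placeholder for a trained predictive model


def plan_steering(camera_frame: bytes) -> float:
    """Tool AI #2: map one sensor frame to a steering angle in
    radians. The acting part (actually driving) stays in the car's
    control loop and with the humans who deploy it."""
    return 0.0  # placeholder for a trained perception/control model
```

Neither function can pursue the other's goal, or any goal at all: solving both problems means building and invoking both tools.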

Tool AIs could be extremely efficient and reliable, like a pocket calculator. And they are safe.

[–]avturchin 1 point  (0 children)

It looks like most contemporary ML systems are Tool AIs: they just transform information from inputs to outputs, have no agency, and are also highly specialized.
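
Roughly, the distinction looks like this in Python (all names here are hypothetical, not a real API):

```python
def tool_ai(model, x):
    # A Tool AI is a function call: input in, output out, then it stops.
    return model.predict(x)


def agent_ai(policy, env, goal_reached):
    # An Agent AI is a closed loop: it keeps observing and acting on
    # the world, choosing its own next step, until some goal is met.
    obs = env.reset()
    while not goal_reached(obs):
        action = policy(obs)    # the system decides what to do next
        obs = env.step(action)  # ...and does it
    return obs
```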

The right question, imho, is: is it possible to replace an Agent AI with a set of many Tool AIs?

If we had a complete set of Tool AIs for all our problems, we could solve all of them, and there would not be much of an AI safety problem left.
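
As a minimal sketch of what such a set could look like (the entries are toy stand-ins for real models):

```python
TOOLS = {
    "translate": lambda text: text,           # stand-in for a translation model
    "summarize": lambda text: text[:100],     # stand-in for a summarizer
    "is_spam":   lambda text: "buy" in text,  # stand-in for a classifier
}

def ask(tool_name, query):
    # No planning, no chaining, no goal selection: the user picks the
    # tool and interprets the answer; one question in, one answer out.
    return TOOLS[tool_name](query)

print(ask("is_spam", "buy cheap geroprotectors now"))  # True
```

The agency, deciding which tool to call and what to do with the result, stays with the user.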

Example: on my phone I have many apps which are in fact Tool AIs for different tasks. From an economic point of view, it may be more attractive to sell Tool AIs, both because they are safe and because the seller can make many more sales in the future. But if they sell one universal AI, there will be no more sales.

Another example is the choice between having a household robot or many specialized housekeeping systems: a food processor, a stove, a self-driving car, a pocket calculator.