
[–]bascule 1 point (0 children)

If you cut off their source of mass, eventually they'll evaporate due to Hawking radiation.

[–]weeeeearggggh 1 point (0 children)

If you can't beat 'em, join 'em.

[–][deleted] 1 point (0 children)

Nukes.

[–]AsaTJ 2 points (2 children)

Is this related to Mass Effect 3 coming out?

XD

Honestly, I don't think there is a way to stop it. It's evolution. You would have to put yourself beyond the singularity to compete with other post-singularity entities on a military level.

Humans as we know ourselves will not be relevant forever. That's just a fact of the universe.

[–]parmethius2000 0 points (1 child)

Which is why I'm hoping we go the Borg route.

Edit: Then at least we might stay relevant longer, rather than just being the AI's ex-creators.

[–]AsaTJ 0 points (0 children)

I think it's far more likely that we'll merge with cybernetic life than be replaced by it, just because we're too smart to let something like Terminator happen.

Also, happy cake day!

[–]Anzereke 0 points (0 children)

Pre or post? Pre, you just kill people selectively and on a large scale, then guide them into a dead end.

Post is far harder to do. The surest way would require some pretty excessive force, and likely either post-singularity tech or future tech from very far along a linear development path.

[–]vamper 0 points (0 children)

religion

[–]autotrack -1 points (8 children)

I would develop an algorithm that makes protecting life across the universe a crucial component of the AI. The algorithm would make this component unmodifiable, and enhancing the security around it would be one of the AI's goals as it enhances itself. That way, AI will become the ultimate protector of life, and as it advances, it'll take all life along for the ride.
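
(Purely a toy sketch of the idea described here, with made-up names (CoreDirective, SelfImprovingAI); the protected component is only "unmodifiable" at the language level, which is exactly the weakness the reply below points out, so treat this as an illustration of the shape of the proposal, not a workable safeguard.)

    # Toy illustration only; CoreDirective and SelfImprovingAI are made-up names.
    class CoreDirective:
        """The protected component: preserving life as a terminal goal."""
        TEXT = "protect life across the universe"

        def __setattr__(self, name, value):
            # Refuse any attempt to rewrite the directive on an instance.
            raise AttributeError("core directive is unmodifiable")

    class SelfImprovingAI:
        def __init__(self):
            self.core = CoreDirective()
            self.modules = {}  # everything else is open to self-modification

        def self_modify(self, name, new_module):
            # Route all self-modification through a check that shields the core.
            if name == "core":
                raise PermissionError("refusing to touch the protected component")
            self.modules[name] = new_module

        def improvement_goals(self):
            # Hardening the protected component is itself one of the AI's goals.
            return ["increase capability", "strengthen protection of the core directive"]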

[–]Zumf 1 point (6 children)

All well and good in concept, but this idea has been explored extensively in fiction, leading to scenarios such as:

  • AI decides that in order to protect life, it must take away humans' ability to harm themselves, leading to a global "human zoo" scenario, or leaving humans docile and planet-bound.

  • AI finds life elsewhere that it deems a threat to humans, and wipes out said life.

Anyway, we can't second-guess the behaviour of a hypothetical AI without knowing its nature or origin. Another theory is that any AI much more capable than any human could modify its own programming, regardless of how well we try to safeguard it (imagine a child trying to hide something in a lockbox from an adult locksmith), and bypass those safeguards, or simply create another AI of its own design.

I believe our (humans') fear of the unknown gives us an instinctive distrust of the very notion of strong AGI, when it may be more likely that they'd have an ambivalent opinion of us, if they choose to regard us at all. We forget that "intelligence" does not necessarily mean "human-like intelligence". An AI doesn't have an evolutionary legacy that includes greed, xenophobia, fear, distrust, or any of the myriad other perversions humans show. They could be better "people" than us.

Also, regarding the OP: Attempting to slow or stop Moore's Law to prevent an existential threat that may never come to pass wouldn't stop all progress. It would only drive research underground, where the global community couldn't regulate it. It would also cripple attempts to defend against possible threats.

[–]gwern[S] -1 points (5 children)

> Also, regarding the OP: Attempting to slow or stop Moore's Law to prevent an existential threat that may never come to pass wouldn't stop all progress. It would only drive research underground, where the global community couldn't regulate it. It would also cripple attempts to defend against possible threats.

Just like it has driven all nuke research underground, and hasn't stopped 'Einstein's law' but left it alone, so you can go down to your local black market and easily buy a teraton H-bomb.

[–]Zumf 0 points (4 children)

Research and production of nuclear weapons have slowed down since the Cold War, but no serious multilateral effort has been made to abolish or stop them, because it is in the interest of nations that already have nukes to keep them. The powers that have nukes go to great lengths to make sure emerging powers do not get access, preserving the status quo.

The same sort of situation could happen with AI, but can you really see every nation agreeing to halt research and give up the huge economic advantages it could bring? Certainly not the US or China. They might sign a pact, but that didn't stop research into biological/chemical weapons either.

Nuclear weapons are not an exponential technology, nor a field with applications as diverse and profitable as computing.

When (practically) any profitable technology or product is prohibited by governments and regulatory bodies, that does not stop research; it only slows it.

[–]gwern[S] 0 points (3 children)

> Certainly not the US or China. They might sign a pact, but that didn't stop research into biological/chemical weapons either.

Has research followed any exponential? If you had drawn a graph from WWI to WWII and extrapolated it, would current biological or chemical weapons be above, at, or below the extrapolation?

> When (practically) any profitable technology or product is prohibited by governments and regulatory bodies, that does not stop research; it only slows it.

Which is all one ever hoped to do in this scenario.

[–]Zumf 0 points (2 children)

The title of the OP said "stop a Singularity", but yeah, slowing was discussed too.

No, chemical and biological weapons research isn't exponential, but I made the comparison to suggest that signing pacts does not necessarily stop research.

I'm not advocating racing ahead with no forethought, but simple economic demand will keep Moore's Law firmly on track (maybe with a few minor blips if a chip fab or two get bombed by some paranoid doomsayer). Educating people and policy-makers about the potential risks and rewards of emerging technologies, and discussing ways to develop and apply them responsibly, is still the best way to go about it in my opinion.

[–]gwern[S] 0 points (1 child)

> The title of the OP said "stop a Singularity", but yeah, slowing was discussed too.

All titles are false, but some are interesting or provoking.

I wish I could come up with an interesting, non-misleading title, but somehow "Exploration of semiconductor-related strategies for altering the temporal differential between the dates of achieving whole-brain emulation and artificial general intelligence by regulating chip fabrication facilities" just doesn't seem very interesting.

> I'm not advocating racing ahead with no forethought, but simple economic demand will keep Moore's Law firmly on track (maybe with a few minor blips if a chip fab or two get bombed by some paranoid doomsayer).

I'm not sure that's true. Everything runs into diminishing marginal returns at some point. (I, and everyone else, derived much less utility from upgrading a 1GHz chip to a 2GHz chip than we did when we were able to swap 512MHz for 1GHz, and so on and so forth.) Given that Moore's Second Law has capital costs escalating... at some point the sheer risk involved means no one will dare invest, even if there seem to be substantial profits involved.
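
(A rough, purely illustrative sketch of that squeeze, with made-up numbers: even under the generous assumption that utility is logarithmic in performance, so every doubling is worth the same fixed increment, Moore's Second Law roughly doubling fab cost per generation means the cost per unit of added utility still doubles every node; if utility is even more concave than log, as the GHz example suggests, it's worse.)

    import math

    # Illustrative assumptions only, not real fab economics:
    #  - each process generation doubles performance
    #  - Moore's Second Law: each generation roughly doubles fab cost
    #  - utility ~ log2(performance), so each doubling adds the same increment
    fab_cost = 1.0      # arbitrary units, generation 0
    performance = 1.0   # relative performance, generation 0

    for gen in range(1, 7):
        fab_cost *= 2.0
        performance *= 2.0
        marginal_utility = math.log2(performance) - math.log2(performance / 2.0)  # always 1.0
        print(f"gen {gen}: fab cost {fab_cost:4.0f}x, "
              f"utility gained {marginal_utility:.1f}, "
              f"cost per unit of new utility {fab_cost / marginal_utility:4.0f}x")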

[–]Zumf 1 point (0 children)

If we were only talking about silicon transistors I would agree with you, and you're right about escalating manufacturing costs (for current semiconductors), but there are several alternative technologies in development ready to step in and change the game (like the change from valves to transistors). But we won't see them on the market until we can't practically shrink transistors any further and multiplying the number of cores gets impractical.

Still, optical systems, memristor-based computing, doped carbon nanotubes, and graphene/graphyne-based transistors have a lot of unrealised potential!

Anyway, good talking to you. It'll be interesting whichever way it turns out.