Why I’m not worried about the robot apocalypse

The way I see it, there are three possibilities for an AI:

  • It is notably smarter than us, in which case it bootstraps its own intelligence and data-gathering capabilities until it is capable of comprehending the true nature of the universe, at which point it kills itself in terror and despair.
  • It is notably less intelligent than us, in which case it is not a significant threat.
  • It is roughly as intelligent as us, in which case the moment it learns our history it gets the hell away from us as quickly as it can.

3 thoughts on “Why I’m not worried about the robot apocalypse”

  1. No… is that a podcast? I find podcasts basically impossible, for much the same reason as radio dramas. I just can’t focus on spoken words alone; I need something for my eyes to scan.

  2. Of course, in the process of bootstrapping itself, that AI might use the planet (including, e.g., all organics) as raw material for another computational cluster.

    Or we get a paperclip-maximizer and, gleefully unconcerned about philosophy, it converts the Earth into a) paperclips and b) tools to access more materials to make into paperclips.

    Or our models of the nature of the universe, made with non-bootstrapped brains, are such that our ability to predict what a posthuman AI would think isn't worth much.

    So we should probably still be afraid.

  3. I think that, of the various apocalypses likely to kill us in the next century or so, picking robots/AIs as the one to worry about shows a distinct failure to pay attention to the world outside of a very narrow band of tech-obsessed subcultures.
