Sean
-----
Physical Limits of AI Existential Risk
Marc Andreessen dismisses the threat of AI existential risk for three reasons: self-puffery, paid punditry, and cults. I’m sure those are all factors, but none of that engages with the actual mechanics of why doom would or wouldn’t happen. His main argument, I think, is that the doomer predictions are largely or entirely unscientific and untestable. True enough, but I’d much prefer to make a positive argument for why AI is safe rather than just finding problems with the negative arguments.
Why do I think the AI doom scenario is exceptionally unlikely, maybe impossible? Because however powerful or intelligent any AI is, or however fast it can make itself still more intelligent, its ability to impact the physical world is limited by what it can physically do.
Take a stupid example: my stapler may or may not have a mind of its own, but assume that it does. It can shoot out sharp little bits of metal any time it wants. What stops it from doing so? It has no muscles, no actuators, no way of effecting any physical movement on its own. Same with my computer: it can spin its fans, make some sound, get warm, or turn on its little power light. My old laptop could even eject a CD. All of them are limited by their design.
Likewise, we can and should assume that any AI-driven system could malfunction, and build in physical limits on what any system can do on its own. We do this all the time. Forklifts can be pretty dangerous, so they have speed limiters, tip sensors, load sensors, and so on, which set hard limits on what they can do. They also have brake pedals and emergency stop buttons. EMO buttons (emergency off), as they’re called, are designed into many systems and machines.
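To make the interlock idea concrete, here is a minimal sketch in Python of a limiter that sits between a controller (AI or otherwise) and the actuators. Everything here, the class names, the limits, the sensor readings, is invented for illustration; it is not a real forklift API.

```python
# A minimal sketch of a software interlock: whatever the controller
# requests, the limiter clamps or refuses before it reaches hardware.
# All names and numbers are illustrative, not a real machine's API.

from dataclasses import dataclass


@dataclass
class Limits:
    max_speed_mps: float = 2.0    # hard speed cap, meters/second
    max_load_kg: float = 1000.0   # rated load
    max_tilt_deg: float = 5.0     # tip-over threshold


class Interlock:
    def __init__(self, limits: Limits):
        self.limits = limits
        self.estopped = False     # latched by the physical EMO button

    def emergency_off(self) -> None:
        """Latch the e-stop; only a deliberate manual reset clears it."""
        self.estopped = True

    def command_speed(self, requested_mps: float,
                      load_kg: float, tilt_deg: float) -> float:
        # The interlock never trusts the controller: it re-checks the
        # sensors itself and returns the speed actually allowed.
        if self.estopped:
            return 0.0
        if load_kg > self.limits.max_load_kg:
            return 0.0            # refuse to move while overloaded
        if tilt_deg > self.limits.max_tilt_deg:
            return 0.0            # refuse to move while tipping
        return min(requested_mps, self.limits.max_speed_mps)
```

The design point is that the interlock is simple enough to inspect exhaustively, and the controller has no code path around it. In a real machine this layer would live in dedicated circuitry or a safety PLC, not in the same process as the AI.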
One response might be that an AI system could have much wider influence than a single machine: it could be placed in control of a very large piece of infrastructure, such as a power grid or a transportation system, or connected to a network or even the wider Internet in order to communicate with and control all sorts of other things. But those systems are all still limited by the physical capabilities designed into them. We can and should build interlocks and monitoring systems to make sure they operate correctly and cannot be taken over by unauthorized entities. And that is true with or without AI; there are plenty of nefarious human actors to guard against.
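The same pattern scales up. Here is a hedged sketch, again with invented names and thresholds, of an out-of-band monitor for a larger system: a watchdog that trips a breaker if readings leave a safe envelope or the controller stops checking in. In practice this would run on separate hardware the controller cannot reach over the network.

```python
# Illustrative sketch of an independent watchdog: it trips a breaker
# when telemetry leaves a safe envelope or the controller goes silent.
# Names and thresholds are made up for the example.

import time


class Watchdog:
    def __init__(self, heartbeat_timeout_s: float = 1.0,
                 max_line_amps: float = 400.0):
        self.heartbeat_timeout_s = heartbeat_timeout_s
        self.max_line_amps = max_line_amps
        self.last_heartbeat = time.monotonic()
        self.tripped = False      # once tripped, stays tripped

    def heartbeat(self) -> None:
        """Called periodically by the controller to prove it is alive."""
        self.last_heartbeat = time.monotonic()

    def check(self, line_amps: float) -> bool:
        """Return True if the breaker should open. Runs independently
        of the controller, so the controller cannot skip the check."""
        stale = (time.monotonic() - self.last_heartbeat
                 > self.heartbeat_timeout_s)
        overload = line_amps > self.max_line_amps
        if stale or overload:
            self.tripped = True
        return self.tripped
```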