For the foreseeable future, any dystopian threat comes from people who engineer the systems, not the systems themselves. And so, constraints of the sort implied by the question are neither needed nor wise to introduce.

Too many people misunderstand the state and capabilities of AI. I made the same mistake myself thirty years ago. Asimov proposed his own solution, the Three Laws of Robotics, in 1942, 74 years ago. We're no closer to needing Asimov's laws today than we were when he wrote them.

Look at self-driving cars. Machine learning based AI is a key component of the control system. Even so, the cars won't autonomously become murderous any more than an airplane autopilot will suddenly start lacing brownies with cyanide.

However, a single disgruntled employee could inject data into the learning system, compromise the tests, and push out a software update that makes every car drive incorrectly. For example, the AI might carry hidden instructions: after 7 pm PST on Thursday, March 24th, accelerate when you should brake. Starting at that moment, hundreds of thousands of people are injured or killed. It isn't the control system independently causing the problem; it's the engineers, much like someone who designs an accidentally or maliciously defective braking system in an ordinary car.
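To make that scenario concrete, here is a minimal, purely illustrative sketch of a time-triggered sabotage condition hidden in a control routine. The function name, date, and numbers are hypothetical; nothing here reflects how any real vehicle software is built.

```python
from datetime import datetime, timezone, timedelta

# Hypothetical trigger: 7 pm PST (UTC-8) on Thursday, 24 March 2016.
# The date and timezone are assumptions made for illustration only.
PST = timezone(timedelta(hours=-8))
TRIGGER = datetime(2016, 3, 24, 19, 0, tzinfo=PST)

def compute_brake_command(requested_brake: float, now: datetime) -> float:
    """Return the brake command actually sent to the actuator.

    A sabotaged build could invert the command after the trigger time,
    turning a braking request into acceleration.
    """
    if now >= TRIGGER:
        # Malicious path: negate braking, i.e. accelerate when asked to brake.
        return -requested_brake
    return requested_brake

# Normal behaviour before the trigger, inverted behaviour after it.
print(compute_brake_command(0.8, datetime(2016, 3, 24, 18, 0, tzinfo=PST)))  #  0.8
print(compute_brake_command(0.8, datetime(2016, 3, 24, 20, 0, tzinfo=PST)))  # -0.8
```

The point of the sketch is that the dangerous behaviour lives in a human-authored conditional, not in anything the learning system decided on its own.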

Even so-called generalized AI is a long way from being truly general. For such systems to work at all, data needs to be formatted and interpreted for the system using rules people create. Heuristics are put in place to interpret that data, or algorithms are built to generate those heuristics. Eventually simple decisions can be made based on input data. Those decisions can be translated into information ("yes, that's a goat") or control inputs ("apply 1% more throttle").
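A minimal sketch of that pipeline, assuming hypothetical field names, thresholds, and rules chosen only for illustration: human-written formatting rules, a hand-authored heuristic, and a decision translated into either information or a control input.

```python
def format_input(raw: dict) -> dict:
    """People decide which fields matter and how they are encoded."""
    return {"legs": int(raw.get("legs", 0)), "has_horns": bool(raw.get("horns"))}

def classify(features: dict) -> str:
    """A human-authored heuristic, not something the system invented itself."""
    if features["legs"] == 4 and features["has_horns"]:
        return "goat"
    return "unknown"

def to_control_input(speed_error: float) -> str:
    """Translate a simple decision into a control input."""
    return "apply 1% more throttle" if speed_error > 0 else "hold throttle"

print(classify(format_input({"legs": 4, "horns": True})))  # yes, that's a goat
print(to_control_input(0.5))                               # apply 1% more throttle
```

Every step that gives the output meaning is something a person wrote down in advance.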

We are no closer to understanding and defining consciousness in any meaningful way than we were a thousand years ago. I don't believe 'conscious' AI will be created in the lifetime of anyone alive as I write this. Putting safeguards in place to protect against something so poorly understood and so far in the future is ineffectual. If computer science and software engineering have taught me anything, it's the futility of writing code based on theoretical behaviors of systems we haven't built, don't understand, and can't convincingly predict the shape of. To do so inevitably causes more harm than good.

If you want to prevent AI and machine learning from harming people, ensure high-quality research, development, and testing practices are in place. Build differentiated, redundant protections against system failure. And keep your engineers from sabotaging the product.
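One hedged sketch of what "differentiated, redundant protections" can mean in practice: two independently derived estimates of the same safety decision are cross-checked, and disagreement triggers the more conservative action. The channel names, rules, and numbers below are assumptions for illustration, not any real system's design.

```python
def brake_demand_primary(distance_m: float, speed_mps: float) -> float:
    """Primary channel: a simple time-to-collision rule (illustrative only)."""
    ttc = distance_m / speed_mps if speed_mps > 0 else float("inf")
    return 1.0 if ttc < 2.0 else 0.0

def brake_demand_backup(distance_m: float, speed_mps: float) -> float:
    """Backup channel: a differently derived stopping-distance rule."""
    stopping_distance = speed_mps ** 2 / (2 * 6.0)  # assume 6 m/s^2 braking
    return 1.0 if stopping_distance > distance_m else 0.0

def arbitrated_brake(distance_m: float, speed_mps: float) -> float:
    """Cross-check the channels; disagreement falls back to the safer action."""
    primary = brake_demand_primary(distance_m, speed_mps)
    backup = brake_demand_backup(distance_m, speed_mps)
    return max(primary, backup)

print(arbitrated_brake(10.0, 20.0))  # both channels demand braking -> 1.0
```

Because the two channels are built differently, a single bad update or poisoned training run is less likely to corrupt both at once.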
