Elon Musk is a leading voice on the potential dangers of AI. His concern over AI led him to co-found OpenAI, a non-profit AI research company. Scroll down to read the quotes, or watch Elon share his thoughts on AI.
Feel free to leave a comment at the bottom with your favorite quote. Enjoy!
"Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable." (Aug, 2014 | Source)
"So the goal of OpenAI is really just to take the set of actions that are most likely to improve the positive futures. Like, if you can think of like the future as a set of probability streams that branch out and then converge; collapse down to a particular event and then branch out again and there's a certain set of probabilities associated with the future being positive and a different type flavours of that and at OpenAI we want to try to guide… do whatever we can to increase the probability of the good futures happening." (June, 2016 | Source)
"I think if AI power is widely distributed then, and there's not, say, one entity that has some super AI that is a million times smarter than anything else, if instead the AI power is broadly distributed and, to the degree that we can link AI power to each individual's will, like you'd have your AI agent, everyone would have their AI agent, and then if somebody did try and do something really terrible then the collective will of others could overcome that bad actor, which you can't do if there's one AI that's a million times better than everyone else." (June, 2016 | Source)
"I think AI is something that is risky at the civilization level, not merely at the individual risk level, and that's why it really demands a lot of safety research." (Jan, 2015 | Source)
"I think A.I. is probably the single biggest item in the near term that’s likely to affect humanity. So it’s very important that we have the advent of A.I. in a good way, that it’s something that if you could look into a crystal ball and see the future, you would like that outcome. Because it is something that could go wrong… So we really need to make sure it goes right." (Sep, 2016 | Source)
"I think having a high bandwidth interface to the brain [is extremely important], we are currently bandwidth limited. We have a digital tertiary self. In the form of our email, computers, phones, applications, we are effectively superhuman. But we are extremely bandwidth constrained in that interface between the cortex and that tertiary digital form of yourself, and helping solve that bandwidth constraint would be very important for the future as well." (Sep, 2016 | Source)
"The best of the available alternatives that I can come up with [regarding A.I.], and maybe somebody else can come up with a better approach or a better outcome, is that we achieve democratization of A.I. technology. Meaning that no one company or small set of individuals has control over advanced A.I. technology. That’s very dangerous, it could also get stolen by somebody bad. Like some evil dictator. A country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation if you have any incredibly powerful A.I. You just don’t know whose going to control that." (Sep, 2016 | Source)
"It’s not as though I think the risk is that A.I. will develop a will of its own right off the bat, the concern is more that someone may use it in a way that is bad. Or even if they weren’t going to use it in a way that's bad, that someone could take it from them and use it in a way that’s bad. That I think is quite a big danger. So I think we must have democratization of A.I. technology and make it widely available. And that’s obviously the reason that you [Sam Altman], me, and the rest of the team created OpenAI - was to help spread out A.I. technologies so it doesn’t get concentrated in the hands of a few." (Sep, 2016 | Source)
"If we can effectively merge with A.I. by improving the neural link between the cortex and your digital extension of yourself - which already exists, it just has a bandwidth issue - then effectively you become an A.I. human symbiote. And if that then is widespread, [where] anyone who wants it can have it, then we solve the control problem as well. We don’t have to worry about some evil dictator A.I., because we are the A.I. collectively. That seems to be the best outcome I can think of." (Sep, 2016 | Source)
"[OpenAI] seems to be going really well. We have a really talented team that are working hard. OpenAI is structured as a non-profit, but many non-profits do not have a sense of urgency… but OpenAI does. I think people really believe in the mission, I think it’s important. It’s about minimizing the risk of existential harm in the future…" (Sep, 2016 | Source)