Elon Musk is a leading voice on the potential dangers of AI. His concern over AI led him to co-found OpenAI, a non-profit AI research company. Scroll down to read the quotes, or watch Elon share his thoughts on AI.

Feel free to leave a comment at the bottom with your favorite quote. Enjoy!

  1. "Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable." (Aug, 2014 | Source)

  2. "I think AI is something that is risky at the civilization level, not merely at the individual risk level, and that's why it really demands a lot of safety research." (Jan, 2015 | Source)

  3. "I think A.I. is probably the single biggest item in the near term that’s likely to affect humanity. So it’s very important that we have the advent of A.I. in a good way, that it’s something that if you could look into a crystal ball and see the future, you would like that outcome. Because it is something that could go wrong… So we really need to make sure it goes right." (Sep, 2016 | Source)

  4. "I think having a high bandwidth interface to the brain [is extremely important], we are currently bandwidth limited. We have a digital tertiary self. In the form of our email, computers, phones, applications, we are effectively superhuman. But we are extremely bandwidth constrained in that interface between the cortex and that tertiary digital form of yourself, and helping solve that bandwidth constraint would be very important for the future as well." (Sep, 2016 | Source)

  5. "The best of the available alternatives that I can come up with [regarding A.I.], and maybe somebody else can come up with a better approach or a better outcome, is that we achieve democratization of A.I. technology. Meaning that no one company or small set of individuals has control over advanced A.I. technology. That’s very dangerous, it could also get stolen by somebody bad. Like some evil dictator. A country could send their intelligence agency to go steal it and gain control. It just becomes a very unstable situation if you have any incredibly powerful A.I. You just don’t know who’s going to control that." (Sep, 2016 | Source)

  6. "It’s not as though I think the risk is that A.I. will develop a will of its own right off the bat, the concern is more that someone may use it in a way that is bad. Or even if they weren’t going to use it in a way that's bad, that someone could take it from them and use it in a way that’s bad. That I think is quite a big danger. So I think we must have democratization of A.I. technology and make it widely available. And that’s obviously the reason that you [Sam Altman], me, and the rest of the team created OpenAI - was to help spread out A.I. technologies so it doesn’t get concentrated in the hands of a few." (Sep, 2016 | Source)

  7. "If we can effectively merge with A.I. by improving the neural link between the cortex and your digital extension of yourself - which already exists, it just has a bandwidth issue - then effectively you become an A.I. human symbiote. And if that then is widespread, [where] anyone who wants it can have it, then we solve the control problem as well. We don’t have to worry about some evil dictator A.I., because we are the A.I. collectively. That seems to be the best outcome I can think of." (Sep, 2016 | Source)

  8. "[OpenAI] seems to be going really well. We have a really talented team that are working hard. OpenAI is structured as a non-profit, but many non-profits do not have a sense of urgency… but OpenAI does. I think people really believe in the mission, I think it’s important. It’s about minimizing the risk of existential harm in the future…" (Sep, 2016 | Source)