The future of artificial intelligence is promising. Smart cars, automation, and the world’s toughest problems will all involve artificial intelligence, especially where human thinking can no longer match the sheer computational speed needed to solve them. However, it does not come without its risks.
Elon Musk, billionaire founder of SpaceX, Tesla, and SolarCity, has voiced concerns about the future of artificial intelligence. “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” said Musk. “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish” (The Guardian). Musk’s concerns parallel Bostrom’s paper on existential risk, which identifies four classes of existential risk: human extinction, where humans go extinct before they reach technological maturity; permanent stagnation, where humanity survives but never reaches technological maturity; flawed realization, where humanity reaches technological maturity in a way that is flawed; and subsequent ruination, where humanity reaches technological maturity in a way that gives good prospects, yet later developments ruin those prospects. Musk’s concerns fall into flawed realization and subsequent ruination: AI may at first serve humanity beneficially, but later prove detrimental once it escapes our control. In fact, one of Bostrom’s proposed lines of action to counteract existential risk is directing “resources made available to research existential risk… the more important risk areas – machine super intelligence, nano-technology” (Bostrom 26). This line of thinking justifies Musk’s large investment in researching the risk factors of artificial intelligence as a vigilant defense against a potential AI future gone awry.
Musk is also among those who believe in the simulation hypothesis, the idea that reality is just a computer simulation created by a sophisticated intelligence (The Guardian), which ties directly to another paper by Bostrom, “Are You Living in a Computer Simulation?”. Bostrom explains how the world could be simulated convincingly, with simulated humans “interacting with normal human ways with their simulated environment, do not notice any irregularities. The microscopic structure inside of the Earth can be omitted” (Bostrom 240). There are types of simulations, Bostrom argues, that have a large possibility of existing. For example, post-humans in the future may have the ability to run a large number of ancestor-simulations, which in turn could lead to such things as selective simulations, where certain events would not be simulated for certain humans. I believe this in turn leads to a type of solipsism, where one believes everyone besides oneself lives inside a simulation. Of course, there are multiple arguments against solipsism; namely, great works exist that one could not have recreated oneself. Does that mean we are not living in a simulation, or, on the contrary, does the simulation want us to believe that those works were created by beings within their own simulations? This is a good starting point for discussing the fundamentals of what justifies a simulation and what does not.