Ideas of artificial intelligence and humanity's relationship with such intelligence have always elicited fascination in popular culture. Robots with emotion, superintelligent AI, computers that talk like humans: such topics have been recurring themes in Hollywood blockbusters like I, Robot and The Matrix. For years the topic of true artificial intelligence has remained squarely within the fictional confines of pop culture, an interesting concept to mull over in the movie theater with no serious implications for today's society.
Recently, a number of highly influential figures including Bill Gates, Elon Musk and even Stephen Hawking have begun to warn of the potential dangers of artificial intelligence. Chief among their concerns are that AI-powered automation systems will soon replace most of the American work force, and that AI will eventually reach the “Singularity” — a level of superintelligence surpassing human intellect — which could lead to superintelligent computers taking up most industrial, academic and even creative pursuits. So serious is the threat of artificial intelligence domination in the near future that Musk has said he believes the development of AI poses a greater danger to humanity than nuclear weapons.
Yet only limited national legislation has been passed to regulate the development of artificial intelligence. Unlike the controls we have placed on controversial fields like embryonic stem cell research, no political entity has yet set broad guidelines for progress in artificial intelligence, perhaps because it is not immediately obvious how important AI will become in the next few decades.
The fear of wide-scale automation isn't set in the distant future — it's already here. According to one study by Oxford University, half of America’s jobs are vulnerable to being replaced in the next 20 years. Ten percent of our labor force is made up of jobs like food server, salesperson, office clerk and cashier: repetitive work that could quickly be replaced by cheap robots requiring no wages and minimal upkeep. Some have argued that growth in the tech sector makes up for the loss of jobs in other sectors, but the truth is that the loss far outstrips the growth. Andrew Ng, chief scientist and AI expert at Chinese tech giant Baidu, speculates that software will replace lower-skill jobs far faster and on a much larger scale than new technology jobs are generated, potentially causing major unemployment. Even tech jobs are vulnerable to the AI surge; as automation and AI become even smarter, computers can develop the ability to process their own data and write their own algorithms rather than needing humans to do so.
Some have imagined that in the wake of such AI domination, the economy will become increasingly centered around artistic creativity as robots take over the bulk of professional jobs. Indeed, a knee-jerk reaction to this entire discussion might be to point out aspects of human nature like creativity that one might assume robots could never emulate. Yet AI has even begun to encroach on these supposedly human-only qualities; there already exist algorithms that can compose great pieces of music and create abstract works of art indistinguishable from human works. Furthermore, even if artificial creativity doesn't take off as expected, it would be impossible to have an economy based solely on art. YouTuber CGP Grey explains in his mini-documentary “Humans Need Not Apply” how art gains value through popularity, and thus by definition must be scarce in order to be valuable. Even if humans were to turn to creative work, not everyone could depend on artistic jobs for employment.
The ultimate point of AI development comes with the Singularity, a predicted event when computer intelligence will surpass human intellect. Already, Google has managed to create computers with subconscious-level processing and even the ability to generate dreams, tweaking neural networks to work similarly to human brains. As the gap between neural and computer architecture shrinks, superintelligence becomes more feasible and an almost unlimited number of political, economic and social questions arise: will AI surpass the control of humans? Would it be our moral duty to let AI, the theoretically smarter entity, make key political and economic decisions for us? How would humans adjust to a post-work world? And what role would the government play in any of these dramatic social changes?
Studying AI and how it will affect society, economics, politics and the arts on the macro scale is essential if we are to adapt to an AI-dominated future. Currently, only state governments have passed anything resembling AI legislation, and even that legislation has focused solely on specific AI applications in the automotive industry. While we should not stifle innovation in the field, the government should establish a regulatory ethics committee staffed by AI experts to make clear the boundaries between humans and computers, set limits for future university AI research and pass legislation to supervise existing AI projects.
Hasan Khan is an Opinion columnist for The Cavalier Daily. He can be reached at h.khan@cavalierdaily.com.