
Sep. 25, 2017

Artificial Intelligence: Our salvation or curse?

By James Cooper

Artificial Intelligence (AI) has been in the news a lot lately, and not in a good way. Most of us do not understand the difference between AI, machine learning, and deep learning, let alone the moral challenges, privacy threats, and personal security menaces that lurk around the corner.

The concerns are many: Machines will replace us in our jobs, creating vast pools of unemployment; robots do not have the same moral compass as humans and can be immoral, even racist; complex intelligent systems can learn faster than we can control them, putting humanity at risk; and machines will generate profits, but it is unclear who should keep these economic windfalls.

Notwithstanding these myriad concerns, the race for superiority in AI is on. China, India and the United States are leading the world for now. That concerns Russian President Vladimir Putin who, on the first day of Russia’s new school year, told school children that “whoever becomes the leader in this sphere will become the ruler of the world,” warning that an AI monopoly should be avoided.

It is no surprise that AI is seen as the next battleground. Visionary entrepreneur and thought leader Elon Musk has predicted that the competition among countries for AI superiority will likely lead to World War III.

As AI becomes more prevalent in our lives, there are many ethical issues that we have not yet considered. If military drones can learn to select their own targets for bombing, who is responsible for war crimes and crimes against humanity? The person who created the algorithm? What if machines start to discriminate based on race, creed or sexual orientation?

This is not far-fetched. Earlier this month, a Stanford University study found that, based solely on a photograph, an algorithm could deduce the sexuality of people on a dating site with up to 91 percent accuracy for men and 83 percent accuracy for women. Even at that rate of success, false positives remain, creating an unacceptable level of uncertainty.

This is a frightening development, as the technology could be used by spouses who suspect their partners of being closeted, by cyber bullies outing vulnerable classmates, and by repressive governments targeting citizens who run afoul of their homophobic policies.

Political operatives, unscrupulous marketers, and fraud artists could use links between facial features and a range of other phenomena, like political views, psychological conditions or personality. Human rights require human accountability.

There needs to be a transnational understanding concerning ethics and the rollout of what is sure to be among the most important advances in technology. We need to have a sense of what we just don’t know yet. An international agreement providing a set of agreed-upon rules for countries to follow should be negotiated and implemented.

We cannot leave this up to the free market, nor to the machines that will eventually be able to outmaneuver us and our national bureaucracies, lest we leave our future to the machinations of, well, machines.

This dystopian future, like that seen in the “Terminator” films, should give us pause. National governments, concerned scientists, ethics experts, privacy advocates, civil rights attorneys, and constitutional scholars need to work together, across borders, to find workable solutions. The coming era of AI should provide a new meaning for and urgency toward the protection of human rights.

James Cooper is a professor of law and the director of international legal studies at California Western School of Law in San Diego. He has developed programs for governments and international agencies concerning new technologies and dispute resolution.
