Time and time again we have created artificial intelligence (AI) systems to help solve our problems, but what happens when those systems become the problem?
AI systems were built to help humans work faster, judge more fairly, manage more, and make fewer mistakes, yet the solution has increasingly become the issue. As these systems grow more capable and more prevalent, ethical and existential risks have emerged. Brian Christian argues that there is only so much AI can do before it becomes painfully clear that humans need humans. We need empathy and human connection when determining bail amounts. We need doctors who know our names in order to feel cared for, not just machines that have downloaded our health data. Not everything can be outsourced, yet so much already is, and the dilemma now is how to rein it in. What happens when our machines outsmart us, or an enemy outsmarts our systems? How do we realign?
Christian investigates these questions and more in his new book, The Alignment Problem: Machine Learning and Human Values. Join us for our conversation about what must change culturally and in the world of tech to ensure that humanity remains our north star.
Brian Christian, Visiting Scholar, University of California, Berkeley; Author, The Alignment Problem: Machine Learning and Human Values
Vice President of Media & Editorial, The Commonwealth Club—Moderator
The leading national forum open to all for the impartial discussion of public issues important to the membership, community and nation. The Commonwealth Club of California is the nation's oldest and largest public affairs forum. Each year, we bring nearly 500 events on topics ranging across politics, culture, society and the economy to more than 25,000 members and the public, both in-person and via an extensive online and on-air listenership and viewership.
San Francisco Headquarters
110 The Embarcadero
San Francisco, CA 94105