Amazon Alexa, AI and the Core Values Built Into the Machines Shaping Our Future
What the relentless move toward automation and artificial intelligence means for the future of corporate culture and leadership
In 1950, Alan Turing posed a question that put the field of computer science on a new trajectory: Can machines think? Fast-forward to today: machine learning, big data, and the internet of things have permeated our daily lives. More and more functions are being automated, and artificial intelligence—systems or devices that mimic human decision-making, visual perception, speech recognition, and more—is on the rise.
Amazon sold 3.3 million Echo devices with the Alexa voice assistant in a single day this month. Alexa is an early version of a thinking-machine virtual assistant for consumers. Google Assistant, an Alexa rival, is available to over 100 million smartphone users as of July 2017, according to Google CEO Sundar Pichai. IBM Watson is an industrial-strength AI that advises cancer doctors on diagnoses and treatments. Research firm Tractica projects that the $1.4 billion spent on AI in 2016 will grow to $60 billion annually by 2025. From scientists to Silicon Valley entrepreneurs, experts are asking what all of this will mean for the world of work and for society as a whole.
Elon Musk has described artificial intelligence (AI) as humanity’s “biggest existential threat.” Bill Gates has suggested taxing organizations that use robots instead of people to “slow down the speed” of automation. Darden Professor Edward Hess has predicted that the continued rise of AI and robotics “will call into question the American Dream…and the future of work.”
As people and machines work side by side more regularly, and in new ways, one question must be resolved: How will human values guide machine use and behavior? Boards and C-suite leaders must connect their company’s culture and values to the guiding principles and behavior of the machines they employ.
The connection between technology and your company culture
Simply put, culture is the “way things are done” in a company and is formed by the values and behaviors that individuals and teams within that company live each day. Culture exists whether leaders intentionally shape and manage it or not. If unmanaged, leaders leave opportunity for negative behaviors to emerge, putting the business itself at risk.
Consider Uber—the latest tech giant to make headline news for its poorly managed company culture and resulting unethical behavior. The company’s unwieldy 14 core values and “do whatever it takes” attitude permeated the software it created for the explicit purpose of deceiving regulatory authorities and thwarting competitors. In other words, the values that led Uber to the top of the market went unexamined, allowing their sinister side to come alive in the software and technology the company built.
This is not fundamentally different from the system Volkswagen designed to deceive U.S. emissions regulators in 2015. Like Uber, Volkswagen had its own declared cultural ethos. If managed, measured, and nurtured, this “north star” could have helped prevent wrongdoing; but it didn’t. Instead, there was a fundamental breakdown between what the company claimed to stand for and how its values were put into action. The employees who designed, built, and approved the system failed to uphold Volkswagen’s espoused company values, which, in turn, failed to show up in the system’s behavior. In both cases, culture subsumed ethics in the way employees programmed technology to behave.
Machines have values embedded in their code
As with human behavior, a company’s values will, or will not, get translated into the code and conduct of its technologies. Leaders are therefore responsible for monitoring the values and resulting behaviors not only of their people but also of the machines and artificial intelligence their company develops or employs. The risks of not doing so will only increase as more functions are automated and as technology learns to interpret and guide its own behavior. On the other hand, companies that are intentional in aligning their technology consumption and development with their values will be poised to drive social and economic good.
The painstaking process of “federating” culture into policies and practice is often subsumed by pressing business priorities like hitting quarterly earnings. But companies incapable of routinely having values-based conversations with the teams responsible for technology decisions leave themselves open to the culture risk exhibited by Uber, Volkswagen, and others. C-suite leaders must ensure that employees making technology decisions or writing code understand the company’s core values and dedicate adequate think time to bringing those values to life, ethically, through machine behavior. After all, the onus to uphold your values will be on your people, as machines will simply behave in accordance with their programmed and learned values.
The role that boards and C-suite leaders must play
#1. CEOs and their leadership teams must engage in ongoing dialogue to understand what cultural ethos they are creating among their people and machines
In our experience, high-performing organizations routinely ask the question, “How does this decision reflect our values?” As technology continues to advance and we move further into the smart machine age, this question and the conversations it enables will become more important and must become more frequent. That includes dialogue about whether decisions on what to automate, what not to automate, and how automation should work reflect your company’s values. For a gut check on “culture risk,” CEOs should ask every person at the next leadership team meeting to write down the company’s values and describe how they used them to make a decision (technology-related or otherwise) in the last month. If your leadership team can’t easily recall the core values, you may have a deeper awareness problem to resolve first.
CEOs should also task a member of the leadership team with creating and disseminating clear guidelines and explicit policy connections between company values and technology adoption. New training will likely be needed to ensure employees responsible for technology development understand the cultural ethos and values statements that should guide their vision, design, programming, and adoption of all new technologies.
#2. Boards must help mitigate culture risk by deeply understanding how values are (or are not) embedded in the technologies of the business
Today, every board should be asking intentional questions about the culture of the company it governs and seeking to understand how the company’s values are lived by the leadership team, networked teams, managers, technologists, and all other employees. Evidence has shown that disruption can breed neglect of, and disdain for, rules, laws, and quality control. As automation and artificial intelligence continue to disrupt business, companies—from the board level down—must increase their values-based dialogue.
In the short term, we see a new role for the board’s ethics committee: driving dedicated dialogue around how your company’s culture and espoused values are made real to all employees and then ingrained in machine behavior. In the longer term, organizations should consider standing up a technology and culture values subcommittee that can look further into the future and ask questions about the effects of automation and artificial intelligence on company strategy.
#3. Leaders at all levels must manage the fear and transition of their people as more and more functions are automated
Make no mistake: technology will continue to displace functions and roles to create business efficiencies. Change and disruption will continue to happen at greater rates. Fear and job insecurity will become a permanent reality in every business as more jobs are ceded to machines and artificial intelligence. Meanwhile, a new wave of collaboration between people and technology is coming. This collaboration will only succeed if trust exists among your people. If leaders don’t help manage the fear that automation creates, they risk preventing values-driven behavior and effective collaboration between machines and the people who remain.
Values alignment is a responsibility
In January 2017, AI experts from across academia, industry, nonprofits, and government gathered in Asilomar, California, to discuss the potential impacts—both long-term and immediate—of artificial intelligence. From those discussions emerged a set of 23 principles intended to guide discussion of, application of, and aspirations for artificial intelligence. The tenth principle, “Value Alignment,” states that “highly autonomous AI systems should be designed so that their goals and behaviors can be assured to align with human values throughout their operation.” To every CEO who makes or consumes technology: it is your job to make sure this alignment happens.