Machine meritocracy is here. In this article, the authors explore questions of inclusivity, fairness and governance. Can we survive as we are, or do we need a new Magna Carta?
We stand at a watershed moment for society’s vast, unknown digital future. A powerful technology, artificial intelligence (AI), has emerged from its own ashes, thanks largely to advances in neural networks modelled loosely on the human brain. AI can find patterns in massive unstructured data sets, improve performance as more data become available, identify objects quickly and accurately, and make ever better recommendations and decisions, while minimising interference from complicated, political humans. This raises major questions about the degree of human choice and inclusion in the decades to come. How will humans, across all levels of power and income, be engaged and represented? How will we govern this brave new world of machine meritocracy?
To answer these questions, we need to travel back 800 years, to January 1215, when King John of England, having just returned from France, faced angry barons who wished to end his unpopular vis et voluntas (“force and will”) rule over the realm. In an effort to appease them, the king and the Archbishop of Canterbury brought 25 rebellious barons together to negotiate a “Charter of Liberties” that would enshrine a body of rights to serve as a check on the king’s discretionary power. By June they had an agreement that provided greater transparency and representation in royal decision-making, limits on taxes and feudal payments, and even some rights for serfs. The famous “Magna Carta” was an imperfect document, teeming with special-interest provisions, but today we tend to regard the Carta as a watershed moment in humanity’s advancement toward an equitable relationship between power and those subject to it. It eventually set the stage for the Renaissance, the Enlightenment and democracy.
It is that balance between the ever-increasing power of the new potentate – the intelligent machine – and the power of human beings that is at stake. In a world in which machines create ever more value and produce more of our everyday products, with diminishing human control over designs and decisions, existing work and life patterns are changing forever. Our creation is running circles around us, faster than we can count the laps.
This goes well beyond jobs and economics: in every area of life, machines are starting to make decisions for us without our conscious involvement. Machines recognise our past patterns and those of allegedly similar people across the world. We receive news that shapes our opinions, outlooks and actions based on inclinations we expressed in past actions, or the actions of others in our bubbles. While driving our cars, we share our behavioural patterns with automakers and insurance companies so we can take advantage of navigation and increasingly autonomous vehicle technology, which in return provides us new conveniences and safer transportation. We enjoy richer, customised entertainment and video games, the makers of which know our socioeconomic profiles, our movement patterns and our cognitive and visual preferences, and use them to determine our price sensitivity.
As we continue to opt into more and more conveniences, we choose to trust a machine to “get us right”. The machine will get to know us in, perhaps, more honest ways than we know ourselves – at least from a strictly rational perspective. But the machine will not readily account for cognitive disconnects between that which we purport to be and that which we actually are. Reliant on real data from our real actions, the machine constrains us to what we have been, rather than what we wish we were or what we hope to become.
Will the machine eliminate that personal choice? Will it do away with life’s serendipity? Will it plan and plot our lives so we meet only people like us, depriving us of the encounters and friction that force us to evolve into different, perhaps better human beings? There is tremendous potential here: some personal decisions should be driven by more objective analysis, for instance weighing the carbon footprint of different modes of transportation and integrating it with our schedules and socio-emotional needs, getting honest pointers on our true talents when making partner choices, or designing more effective teaching plans for diverse student bodies. But AI might also polarise societies by pushing us further into bubbles of like-minded people, reinforcing our beliefs and values without the random opportunity to check them, defend them and be forced to rethink them. AI might be used for “digital social engineering”, creating parallel micro-societies. Imagine digital gerrymandering, with political operatives using AI to lure voters of certain profiles into certain districts years ahead of elections, or Airbnb micro-communities renting only to and from certain socio-political, economic or psychometric profiles. Consider companies being able to hire in a much more surgically targeted fashion, at once increasing their success rates and compromising their strategic optionality with a narrower, less multifaceted employee pool.
A machine judges us on our expressed values – especially those implicit in our commercial transactions – yet overlooks other deeply held values that we have suppressed or that are dormant at any given point in our lives. An AI might not account for newly formed beliefs or changes in what we value outside the readily codifiable realm. As a result, it might, for example, make decisions about our safety that compromise the wellbeing of others based on historical data in ways we might find objectionable in the moment. We are complex beings who regularly make value trade-offs within the context of the situation at hand, and sometimes those situations have little or no codified precedent for an AI to process. Will the machine respect our rights to free will and self-reinvention?
Similarly, a machine might discriminate against people of lesser health or standing in society because its algorithms are based on pattern recognition and broad statistical averages. Uber has already faced an outcry over racial discrimination when its algorithms relied on zip codes to identify the neighbourhoods where riders were most likely to originate. Will the AI favour the survival of the fittest, the most liked or the most productive? Will it make those decisions transparently? What will our recourse be?
Moreover, a programmer’s personal history, predispositions and unseen biases – or the motivations and incentives of their employer – might unwittingly influence the design of algorithms and the sourcing of data sets. Can we assume an AI will work with objectivity all the time? Will companies develop AIs that favour their customers, partners, executives or shareholders? Will, for instance, a healthcare AI jointly developed by technology firms, hospital corporations and insurance companies act in the patient’s best interest, or will it prioritise a certain financial return?
We can’t put the genie back in the bottle, nor should we try – the benefits will be transformative, leading us to new frontiers in human growth and development. We stand at the threshold of an evolutionary explosion unlike anything in the last millennium. Explosions and revolutions are messy, murky, and fraught with ethical pitfalls.
Magna Carta, meaning “The Great Charter”, was originally issued by King John of England (r.1199-1216) as a practical solution to the political crisis he faced in 1215. Magna Carta established for the first time the principle that everybody, including the king, was subject to the law.
Therefore, we propose a Magna Carta for the Global AI Economy – an inclusive, collectively developed, multi-stakeholder charter of rights that will guide our ongoing development of artificial intelligence and lay the groundwork for the future of human-machine co-existence and continued, more inclusive human growth. Whether in an economic, social or political context, we as a society must start to identify rights, responsibilities and accountability guidelines for inclusiveness and fairness at the intersections of AI with our human lives, as well as transparency on code, data-sourcing integrity and biases. Without it, we will not establish enough trust in AI to capitalise on the amazing opportunities it could afford us.
This article is adapted from the forthcoming book “Solomon’s Code: Power and Ethics in the AI Revolution” (working title) copyright © 2017 Olaf Groth & Mark Nitzberg.
About the Author
Dr. Olaf Groth, Ph.D. is CEO of Cambrian.ai, a network of advisers on the global innovation economy for executives and investors. He serves as Professor of Strategy, Innovation & Economics at Hult International Business School, Visiting Scholar at UC Berkeley’s Roundtable on the International Economy, and a member of the Global Expert Network at the World Economic Forum.
Dr. Mark Nitzberg, Ph.D. is Executive Director of the Center for Human-Compatible AI at the University of California at Berkeley. He also serves as Principal & Chief Scientist at Cambrian.ai, as well as advisor to a number of startups, leveraging his combined experience as a globally networked computer scientist and serial social entrepreneur.
Dr. Mark Esposito, Ph.D., is a socio-economic strategist and bestselling author, researching MegaTrends, Business Model Innovations and Competitiveness. He works at the interface between Business, Technology and Government and co-founded Nexus FrontierTech, an Artificial Intelligence Studio. He holds appointments as Professor of Business and Economics at Hult International Business School and Grenoble Ecole de Management, and has been a faculty member at Harvard University since 2011. Mark is an affiliated faculty member of the Microeconomics of Competitiveness (MoC) network at Harvard Business School’s Institute for Strategy and Competitiveness and is currently co-leader of the network’s Institutes Council.