Three years ago, thousands of people from more than 100 countries sent selfies to an organisation called Beauty.ai. These were entries for what was billed as ‘the first international beauty contest to be judged by Artificial Intelligence’.
When the results were announced, it was evident that the algorithms were dismissive of people with dark skin. Though a majority of contestants were white, many people of colour had submitted photos, including large numbers from India and Africa. Of the 44 winners, however, nearly all were white, a few were Asian, and only one had dark skin.
That’s just one example of how algorithms can extend bias because of flawed data. The increasing number of such cases doesn’t seem to have put people off. Last year, a prominent Japanese banker and a former Google Japan employee joined hands to nominate an AI-powered robot to run for mayor in Tokyo’s Tama City with the pledge of “fair and balanced opportunities for everyone”. (It stood third.)
A few months later came the news that the Chinese foreign affairs ministry was planning to use AI in diplomacy by means of an algorithm that drew on a large amount of data, from cocktail-party gossip to satellite images. It was stressed, though, that AI would provide a range of options and human beings would take the decisions.
The AI genie is well and truly out of the bottle. Algorithms are used in policing, medicine, law, education, health, childcare, transport, e-commerce, the media, and almost every other area we interact with. The rise and implications of this are what Robert Elliott Smith considers in his recent book,
Rage Inside the Machine.

An insider’s view
Smith has what you could call an insider’s view. He’s spent over three decades as an academic and an engineer, developing algorithms in areas from biological science to military planning to economics. He’s studied their output in tasks as varied as understanding immune systems, learning aircraft manoeuvres and analysing the influence of emotions on financial markets.
Technology, he asserts, doesn’t exist in a vacuum: it is created in a specific cultural and philosophical context. Thus, questionable historical ideas and prejudices form the foundations of the algorithms we rely upon. This is “a value system that promotes division instead of collaboration, tribalism instead of individuality and intolerance instead of diversity”.
Smith goes into the background and origin of algorithms as we know them, with forays into statistical and probability theory. He dwells on the key roles of Llull, Pascal, Leibniz, Babbage, Turing and others in advancing computing theory and practice. All this is leavened by personal anecdotes, be it the time he went on a blind date, hanging out at his friend’s alternative record store, or discussions and disagreements with his associates.
The development of algorithms, he writes, ground to a near halt during the seventies, a period known as the ‘AI winter’. This was when the problem of encoding expert knowledge into computer-usable representations came to be seen as expensive, time-consuming and commercially unviable.
With the explosive growth of the World Wide Web, all that changed. Now, huge numbers of people offered up a sea of information online for free, sometimes without realising it. Big Data spurred a new wave of technology companies armed with massive amounts of personal information that could be plugged into algorithms. Statistical analysis took the place of understanding the complexity of an individual human’s thinking.
Today’s algorithms use statistical techniques from probability theory and Bayesian inference, which tend to produce bell curves. These curves, as has been pointed out, cluster data around a supposedly ideal average and ignore outliers. They are representations, not reality. The black swans they overlook may hold exactly the information we need to advance our knowledge of the world.
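A toy sketch (not from the book, using made-up numbers) of that point: fit a bell curve to a sample that contains one genuine ‘black swan’ value, and the fitted model rates that value as wildly improbable, effectively treating it as noise rather than as information.

```python
import math
import statistics

# Hypothetical sample: most values cluster near 10, one "black swan" at 50.
data = [9.2, 10.1, 9.8, 10.4, 9.9, 10.0, 9.7, 10.3, 50.0]

mu = statistics.mean(data)     # the outlier drags the mean upward
sigma = statistics.stdev(data)

def gaussian_pdf(x, mu, sigma):
    """Density of the fitted bell curve at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# A typical value sits in the fat part of the curve; the black swan is
# assigned a density an order of magnitude smaller -- the "representation"
# quietly writes the exceptional case out of the picture.
print(gaussian_pdf(10.0, mu, sigma))  # typical value: relatively high density
print(gaussian_pdf(50.0, mu, sigma))  # black swan: near-zero density
```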
Smith does take pains to point out the helpful aspects of algorithms. Drawing on his own and others’ experience, he illustrates their usefulness in estimating aircraft-wing failure rates, risk factors for cancer, the effects of new technologies on the climate, and more.
With human-centred interactions in economics, sociology, health and the media, however, algorithms simplify the complexities of the real world. They generalise by means of quantitative features and categories. Combined with flawed data, this means that algorithms have a view of us that, as Smith says, is as two-dimensional as an airplane wing.
It’s not that models of reality aren’t useful; it’s just that one should be aware of what they represent. After all, “it is one thing to create a mathematical model of the world and another to reconcile it with our lived experience, the daily struggle of life, the constant trial and error as we experiment with new ideas and ways of doing things, and the ambiguity of the facts themselves”.
At the heart of the issue is that the data chosen to drive algorithms can itself contain preconceived ideas and beliefs that reinforce a dominant social ideology. Algorithms have no understanding of issues such as gender equality or systemic racism; they simply identify patterns arising from historical correlations. The result of the Beauty.ai contest is just one example.
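A minimal, entirely hypothetical sketch of that mechanism: a model that does nothing more than learn frequencies from a historically skewed record of ‘winners’ will reproduce the skew when scoring new candidates, with no notion that the pattern it found is unfair.

```python
from collections import Counter

# Hypothetical training data: 44 past "winners" drawn from a biased history,
# echoing the skew in the contest described above.
past_winners = ["group_a"] * 40 + ["group_b"] * 4

# The "algorithm": score each group by its historical win frequency.
counts = Counter(past_winners)
total = sum(counts.values())
scores = {group: n / total for group, n in counts.items()}

# A new, equally qualified group_b candidate is scored far below a group_a
# candidate -- the model has merely memorised the historical correlation.
print(scores["group_a"])  # roughly 0.91
print(scores["group_b"])  # roughly 0.09
```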
Algorithms can sow division
Smith is dismissive of views that compare the human brain to a machine, and of the possibility that algorithms can therefore “think”. The behaviour of real, biological neurons is very different, he writes, being influenced not only by synaptic activations but also the body’s hormonal system, immune system, peripheral nervous system and the so-called ‘second brain’ in the gut.
Worryingly, intolerant views and simplified communication such as tweets and Facebook posts fit like jigsaw puzzle pieces into an algorithm’s simplified models. The ensuing segregation and feedback loops can divide people in the same way that real-world discrimination and prejudice do. Algorithms can then make decisions about how to stimulate people into generating and sharing more content, creating a never-ending cycle. Here, regardless of whether algorithms present ‘true’ or ‘fake’ news, “they will still be working towards their primary directive: the maximisation of value”.
When it comes to democracies and elections, there is already ample evidence of how algorithms can sow division. In this way, “algorithms that divide complex human capabilities into simplified features are not only a catalyst for dehumanisation, they are the mechanism that causes it”.
Clearly, hard thinking is required about the role of AI. Not just efficiency and profit, but the welfare of human beings should be prioritised. “Flawed, historical philosophies underpin the entire algorithmic endeavour, and we need to re-examine them urgently.”
The point, Smith rightly says, is to preserve a diversity of opinions, cultures and perspectives, leading to a stable, resilient society where our differences enrich us, instead of descending into a hell-scape of combative echo chambers.
Some steps are already being taken. To ensure fairness, some scientists are developing what they call Seldonian algorithms (named after the central character of Asimov’s Foundation series). Then there are groups such as the Algorithmic Justice League, a collective that highlights bias and demands accountability, and AlgorithmWatch, which evaluates algorithmic decision-making processes of social relevance.
More needs to be done. Stringent ways to root out sexist, racist and socio-economic bias in data; more advocacy groups and citizen involvement; changes in laws and algorithmic design principles; and profit-making motives made transparent. Otherwise, we’re going to be controlled by an “unholy convergence of scientism, computation and commercialism” that will make the fictional dystopias we read about resemble pleasant daydreams.
Sanjay Sipahimalani is a Mumbai-based writer and reviewer.