
Book review - “Guardrails: Guiding Human Decisions in the Age of AI”

Soma Fuxreiter


On an average day, we face thousands of decisions. Some are low-stakes, such as which toothpaste to pick off a shop shelf; others are high-stakes, such as casting a vote in an election or a judge deciding the course of another person's life. These decisions shape people, society, the economy, and ultimately the fate of the planet. Yet our choices are influenced by a host of external factors, and thanks to advances in technology those influences grow stronger every day. Would we live in a better world if our decisions were made by an impartial artificial intelligence, free of these external influences? In their latest book, "Guardrails: Guiding Human Decisions in the Age of AI," Urs Gasser and Viktor Mayer-Schönberger discuss the difficulties society faces in the AI information age and the areas where AI can genuinely help us in our decision-making.

Artificial intelligence is present in our lives; we use it in our work, and it surrounds us for most of the day, so ignoring it when establishing the rules of society would be irresponsible. However, believing that artificial intelligence can solve every problem in the complex fabric of our society is perhaps an even more irresponsible viewpoint.

Guardrails are, to paraphrase sociologist Anthony Giddens, “social practices”—structural mechanisms that reconfigure and reshape society. The authors' important claim is that these are not "walls" but guiding boundaries that seek to keep our choices within ethical and social considerations. However, the final decision will always be human, and the authors particularly believe in the importance of human agency.

"Guardrails" confronts head-on the ethical conundrums inherent in technological governance, prompting readers to critically evaluate the unintended consequences of over-reliance on technical solutions at the expense of human autonomy and agency.

The book's first half presents three current governance challenges in the decision space that have been exacerbated by technological change. It shows how traditional guardrails have been applied to these challenges and why they fall short. The second half then develops technical solutions: it explains these new technological guardrails and examines which qualities (such as efficiency, focus, or durability) the turn to technology has emphasized in society. One emblematic problem surfaced in 1993 in an online game called LambdaMOO. In this famous case, a user virtually assaulted the characters of other players, thereby committing a crime in cyberspace. The incident launched a discourse on the regulation of cyberspace, but it also raised the question: how can a cyberspace that crosses state borders be regulated at all? Before anything else, however, the administrators banished the offender from the game, an early demonstration that code is law.

One of the biggest factors influencing our decision-making is the incomprehensible amount of information that reaches us, or more precisely, the wrong information within it. MIT researchers have shown that false news is about 70 percent more likely to be shared on Twitter than accurate news. This disinformation can be intentional, like a Cold War double agent spreading falsehoods, or it can simply be the product of human ignorance, as when a US president mused about injecting disinfectant at the start of the Covid-19 pandemic. Given the capabilities of today's technology, the solution might seem obvious: AI-controlled filtering of false information. The authors dispute this position. Technology looks like an easy fix, but it can make us less adaptable and resilient. Artificial intelligence may appear impressive, yet it works well only if we are content to keep things as they have always been. To make better decisions, we need more options, not fewer. Since it is impossible to predict all the changes the future may bring, we need guardrails that can adapt when things change unexpectedly. To address the challenges digital networks pose, we should focus on the principles that guide our guardrails, such as individual empowerment and learning.

Self-restraint is essential to avoid overreaching with guardrails. Technical tools can be useful, but they shouldn't replace social constructs like law. Ultimately, guardrails can guide us, but they can't make decisions for us. It’s the human condition to decide as individuals yet be anchored in society.

In summary, "Guardrails—Guiding Human Decisions in The Age Of AI" is a very useful and instructive work that illuminates the complexities of technological governance and human action with insight and learning. Gasser and Mayer-Schönberger encourage readers to reflect on AI's profound implications for society and urge a considered balance between technological innovation and human values.


