
Setting limits for artificial intelligence

07/03/2024

If AI systems make decisions solely on the basis of data and algorithms, the outcomes can be ethically problematic

At the end of November, 18 countries put forth the first detailed international agreement on protective measures against the misuse of artificial intelligence. The signatories of this 20-page document call upon companies that design and use AI to develop and deploy these technologies in a way that protects their customers and the general public from misuse. But aside from laws and international conventions, does artificial intelligence also need to conform to human values and moral standards? “We should set the highest possible standards for AI systems to make sure that ethical principles are taken into account in the development of these technologies,” says Sarah Spiekermann-Hoff, professor at the WU Institute for Information Systems and Society. Major AI developers are already exploring ways of putting this approach into practice. For example, the AI company Anthropic has launched an alternative to ChatGPT, named Claude, which uses Constitutional AI techniques to align its chat conversations more closely with human-rights-based principles. In addition, some companies have already put in place specific procedures to ensure that human values and legal principles are taken into account in the development of AI systems. According to Spiekermann-Hoff, comparable procedures are already in place in many other sectors, ranging from food to car production.

AI misuse

Even if developers take great care when designing AI systems, there is still no guarantee that the technology will not be misused. When AI development tools are distributed and made publicly available, malicious actors can harness the power of this software to deploy dangerous AI models on social networks, for example. “These things are difficult to prevent in an internet culture that cherishes anonymity and the non-authentication of participants as a value in itself,” Spiekermann-Hoff points out. “Today, anyone can use social networks anonymously or under a pseudonym without undergoing any substantial authentication. This means that with the appropriate technical skills, anyone can spread malicious AI and manipulative AI-generated content such as deepfake images without getting caught.” According to the researcher, a mandatory real-name policy would be a first step toward bringing more accountability to these platforms. This is a highly controversial issue, however, because it would make the use of pro-democracy networks very dangerous in many non-democratically governed countries.

More trust in social media than in people

Studies have shown that people today trust social media more than they trust other people. As Spiekermann-Hoff explains, “This has to do with our traditional reverence for the power of machines. People love their innovations and believe that they only ever bring good to the world, that they are neutral, robust, and reliable. At the same time, we have a deeply rooted mistrust of humans, their emotional control, their ability to think, their logic, and their rationality.” Our blind reliance on computers is exemplified, for instance, by the fact that AI programs have played a central role in discrimination against minorities and people with non-mainstream views for around a decade now. For example, applications from job seekers are filtered out if they are too creative. “This includes those who deviate from the mainstream and no longer use the keywords that the AI searches for as criteria for good ratings,” explains Spiekermann-Hoff. The clients who commission these AI systems are often unaware of this at first, and once they find out, shame and habit make it difficult for them to change course and switch to a different approach.

Faulty facial recognition

Authoritarian regimes increasingly employ automated systems, such as facial recognition technology, to surveil and suppress political activists. However, these technologies also harbor risks for the governments that use them, because AI is highly susceptible to errors and can create enormous collateral damage, for example when completely innocent people are misclassified or even prosecuted. As Spiekermann-Hoff points out, “Authoritarian regimes that still believe in the power of AI should ask themselves whether they are ready to accept the massive collateral damage to their societies.” According to Spiekermann-Hoff, as a first step, humanity needs to learn how to build value-based AI in order to prevent such damage. The alternative would be to end up in a brutal techno-fascist scenario in which no one takes responsibility anymore and humans are completely at the mercy of AI systems.

Portrait of Sarah Spiekermann-Hoff

About Sarah Spiekermann-Hoff

Sarah Spiekermann-Hoff has served as head of the WU Institute for Information Systems and Society since 2009. She is a renowned researcher, author, speaker, and digital ethics consultant. In 2021, Spiekermann-Hoff was one of the 14 authors who published the “Manifesto in Defense of Democracy and the Rule of Law in the Age of Artificial Intelligence.” The document calls for a comprehensive catalog of measures to protect against “governmental and non-governmental abuses of power.”
