The harm that artificial intelligence can cause to people remains ineffectively addressed in legislation governing the new technology.
Fourteen-year-old Sewell Setzer committed suicide in February after falling in love with an AI-created character on the platform Character.AI, according to a lawsuit filed by the teenager’s family against the company. The late Paul Mohney never saw combat, nor did Olena Zelenska, the wife of the Ukrainian president, buy a Bugatti Tourbillon. But false information, created with the help of artificial intelligence (AI), has been spread with the intention of making easy money from advertising on obituary pages or generating Russian propaganda.
A school cleaner in Edinburgh, a single parent of two children, lost her benefits, like many other women in her circumstances, due to a bias in the system’s artificial intelligence. A customer of a payment platform was warned by an algorithm about a transaction that never happened, a lawsuit questions the safety of a vehicle over an alleged programming error, and thousands of users see their data used without consent.
At the end of the artificial intelligence chain there are people, but responsibility for the damage it can cause them is not yet fully defined. “Here we find an alarming legislative vacuum,” warns Cecilia Danesi, co-director of the master’s degree in Ethical Governance of AI (UPSA) and author of Consumer Rights at the Crossroads of Artificial Intelligence (Dykinson, 2024).
Making money off the deaths of strangers. It’s easy and cheap with AI, even if it comes at the cost of spreading falsehoods that deepen the grief of the deceased’s relatives. It’s done through obituary sites where AI crafts information about the dead using real or fabricated details, such as Mohney’s military history, to attract traffic and, with it, ad revenue.
“There’s a whole new strategy that relies on getting information about someone who has died, seeing that there’s a small spike in traffic, even if it’s in a particular area, and quickly publishing articles about the person to get these trickles of traffic,” search engine expert Chris Silver Smith tells Fast Company.
False information and pornographic deepfakes. The AI Incidents page collects dozens of alerts each month about incidents generated by artificial intelligence or cases of abuse, and has already logged more than 800 complaints.
Among its latest records are false information about the attempted assassination of Donald Trump and about the Democratic candidate for the presidency of the United States, Kamala Harris, as well as realistic fake pornographic images (deepfakes) of British politicians.
Fear of the effects of these creations, and of their going viral during democratic processes, is growing: 31% of Europeans believe that AI has already influenced their voting decisions, according to a survey for the European Tech Insights Report 2024, developed by the Center for the Governance of Change (CGC) at IE University.
“Citizens are increasingly concerned about the role of AI in the conduct of elections. And while there is still no clear evidence that it has caused substantial alterations in election results, the emergence of AI has heightened concerns about disinformation and deepfake technology around the world,” said Carlos Luca de Tena, CEO of the CGC.
“When a fake video or image is created with generative AI, it is clear that the AI is a medium, a tool, so responsibility falls on the creator. The main problem is that in most cases the creator is impossible to identify. The case of pornfakes [fake images of pornographic content], for example, directly deepens the gender gap, since platforms encourage users to make them with images of women.
As the systems are fed more photos of this kind, they become more accurate at rendering our bodies, and the result is greater marginalization and stigmatization of women. Therefore, in the era of misinformation and cancel culture, education is extremely important, and so is our habit, as users, of double-checking every piece of content we see and, above all, verifying it before interacting with it,” explains Danesi.
The researcher, a member of UNESCO’s Women for Ethical AI platform and co-author of the report on algorithmic audits presented at the G20 in Brazil, adds, regarding the effects of disinformation: “An algorithm can play a double role: one in the creation of fake news through generative AI, and another when search engine or social media algorithms make false content go viral. In this second case, it is clear that we cannot demand that platforms verify every piece of content that is published. It is materially impossible.”
Automatic discrimination. Among the complaints about AI malfunctions is one about a bias in a Scottish benefits system that harms single-parent families, 90% of which are headed by women. “While the AI Regulation has various provisions to avoid bias (particularly in the requirements that high-risk systems must meet), by not regulating issues of civil liability it does nothing to help the victim receive compensation. The same is true of the Digital Services Act, which imposes certain transparency obligations on digital platforms,” explains Danesi.
Defective product. The AI Incidents page includes an open court case over a possible defect in the programming of a vehicle that could affect its safety. In this regard, the researcher explains: “The reform of the Directive on Defective Products also falls short. The problem lies in the types of damages that can be claimed under the law, since it does not include, for example, moral damages. Attacks on privacy and discrimination are excluded from the Directive’s protection.”
According to Danesi, these cases demonstrate that civil liability is one of the areas of law most urgently in need of attention with the arrival of AI. “Because consumers are highly exposed to the damage that can be caused. If we do not have clear rules on how to proceed in the face of such damage, people are left unprotected. Moreover, clear rules on civil liability provide legal certainty, encourage innovation and, when harm does occur, encourage the parties to reach a settlement.
This is because, since companies know in advance the rules of the game on liability, they can decide where and in what to invest with greater certainty about the scenario in which they will operate,” she argues.
There are initiatives at the European level that attempt to address the issue, the researcher notes: the Artificial Intelligence Regulation, the Digital Services Act (which establishes measures affecting the algorithms of digital platforms, social networks, search engines and the like), the proposed Directive on liability in relation to AI, and the imminent reform of the Directive on defective products.
“This Directive had become obsolete. There was even debate about whether it applied to AI systems at all, since its definition of a product was based on something physical rather than digital. The reform broadens the concept of a product to include digitally manufactured files and computer programs. The focus of the regulation is on protecting the individual, which rightly makes it irrelevant whether the damage is caused by a physical or a digital product,” she explains.