Issue link: https://maltatoday.uberflip.com/i/1525930
Sludge factories: GenAI and the mass production of misinformation
Alexiei Dingli
maltatoday | Wednesday, 28 August 2024

GENERATIVE artificial intelligence (GenAI) is a type of AI that can create new content based on the data it has been trained on. Imagine you have a super-smart program that can write essays, create realistic photos, compose music, or even generate videos just by learning from existing examples. For instance, tools like ChatGPT can write stories or answer questions, and DALL-E can create images from text descriptions. These advancements have opened up incredible possibilities, from healthcare to entertainment.

However, alongside the benefits, there is a growing concern about GenAI's darker side, often called "digital sludge." This term describes the harmful and deceptive content these AI systems generate, polluting our digital spaces.

One of the most troubling aspects of digital sludge is the manipulation of human likeness. GenAI can create highly realistic images, audio, and videos that mimic real people. This ability has been misused to impersonate individuals, including public figures and private citizens, leading to various harmful outcomes. For example, AI-generated audio clips that sound like a well-known politician could be used to spread false information or create fake endorsements. This, in turn, misleads the public and undermines trust in legitimate communications.

The creation of non-consensual intimate imagery (NCII) is another serious issue. GenAI can generate explicit content featuring individuals without consent, often by altering existing photos or creating entirely new ones. This violation of privacy can cause severe emotional distress and damage reputations. Celebrities and ordinary people alike have found themselves victims of such malicious activities, highlighting the urgent need for better protective measures.

Beyond personal harm, digital sludge also includes large-scale misinformation and disinformation campaigns. AI-generated content can take the form of fake news articles, images, and videos that appear authentic. This questionable information can spread rapidly across social media, influencing public opinion and political outcomes. For instance, false pictures of events that never happened can be circulated during election periods to manipulate voters' perceptions and choices. This not only undermines the democratic process but also sows discord and confusion among the public.

Scams and fraud facilitated by GenAI are increasingly common as well. AI can generate convincing messages, emails, and even voices that deceive individuals into handing over money or sensitive information. Imagine receiving a phone call that sounds exactly like your boss, instructing you to transfer funds urgently. This level of sophistication makes it harder to identify scams, leading to significant financial losses.

The accessibility of GenAI tools means that these issues are not confined to highly skilled hackers or state-sponsored actors. Individuals with minimal technical expertise can misuse these tools to create realistic and harmful content. This widespread availability lowers the barriers to malicious activities, making it easier for anyone to contribute to the digital sludge.

One of the subtler yet pervasive forms of digital sludge is the mass production of low-quality, AI-generated content. This includes spam-like articles, fake product reviews, and automated social media posts designed to manipulate public opinion or boost certain products.
The sheer volume of such content can overwhelm users, making it challenging to discern credible information from fake or low-quality content. Over time, this erodes trust in online platforms and diminishes the overall quality of online information.

Political campaigns and advocacy groups have also started leveraging GenAI to create and distribute tailored messages without proper disclosure. For instance, AI-generated images and videos that portray candidates in a favourable light or ascribe false statements to opponents can mislead voters. These often undisclosed practices blur the lines between genuine political communication and manipulation, posing a threat to democratic integrity.

Addressing the issue of digital sludge requires a multifaceted approach. Developers need to implement stricter safeguards to prevent the generation of harmful content and ensure better detection of malicious activities. However, technical measures alone are not sufficient.

Public awareness and education are crucial. Users need to be informed about GenAI's capabilities and risks. This includes understanding how to identify AI-generated content and recognising the potential for deception. Media literacy programmes can equip individuals with the skills to evaluate digital content critically.

Alexiei Dingli is Professor of Artificial Intelligence