MediaToday Newspapers Latest Editions

BUSINESS TODAY 22 February 2024

Issue link: https://maltatoday.uberflip.com/i/1516278

Alexiei Dingli

Prof Alexiei Dingli is a Professor of AI at the University of Malta and has been conducting research and working in the field of AI for more than two decades, assisting different companies to implement AI solutions. He forms part of the Malta.AI task force, set up by the Maltese government, aimed at making Malta one of the top AI countries in the world.

NEWS

Is ChatGPT a Labourite or a Nationalist?

At first glance, questioning the political leanings of ChatGPT might seem as absurd as asking about the voting intentions of one's toaster. Indeed, it is a silly notion to attribute political preference to a household appliance. However, the question gains a surprising degree of relevance and complexity when we delve deeper into the implications of large language models like ChatGPT.

First and foremost, it is paramount to remember that ChatGPT is just a tool without personal beliefs, aspirations, or political affiliations. It does not harbour sympathies towards political candidates or parties, nor does it aspire to become a delegate for any political movement. In essence, ChatGPT is similar to a highly sophisticated calculator: you input a question, and it generates a response based on its programming and training.

Yet here lies a crucial difference: unlike a traditional calculator, which will unwaveringly output '2' in response to '1 + 1', ChatGPT's responses can differ. This is because large language models are non-deterministic. Their outputs are unpredictable, so their validity cannot be guaranteed in every instance.

This raises an intriguing question: can such a model exhibit political viewpoints? While these models certainly do not hold personal views as we do, their replies typically exhibit biases. Remember that ChatGPT, like all AI models, is shaped by the data it was trained on. If its training data skews towards left-leaning texts, the model will exhibit a leftist bias, and vice versa.
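The calculator comparison can be made concrete. A minimal sketch of why language-model outputs vary, using an invented toy probability distribution (real models produce these probabilities from a softmax layer over their vocabulary): when the "temperature" setting is zero the model always picks the likeliest token, like a calculator; at higher temperatures it samples, so repeated runs can differ.

```python
import random

# Toy next-token distribution for the prompt "1 + 1 =".
# These probabilities are invented purely for illustration; in a real
# LLM they come from the model's final softmax layer.
NEXT_TOKEN_PROBS = {"2": 0.90, "3": 0.06, "two": 0.04}

def sample_token(probs, temperature=1.0, rng=random):
    """Pick one token. temperature=0 is greedy (deterministic);
    higher temperatures flatten the distribution and add randomness."""
    if temperature == 0:
        return max(probs, key=probs.get)  # always the most likely token
    # Re-weight each probability by the exponent 1/temperature, then
    # sample proportionally to the re-weighted mass.
    weights = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = rng.random() * total
    cumulative = 0.0
    for token, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return token
    return token  # fallback for floating-point edge cases

# Greedy decoding behaves like the calculator: the same answer every time.
print(sample_token(NEXT_TOKEN_PROBS, temperature=0))   # always "2"

# Sampled decoding usually says "2" but can occasionally say otherwise.
print(sample_token(NEXT_TOKEN_PROBS, temperature=1.0))
```

This is only a sketch of the sampling step; it says nothing about where the probabilities come from, which is exactly where training-data bias enters.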
So much so that a study by the Massachusetts Institute of Technology (MIT) suggests that ChatGPT tends to lean more towards Labourite ideologies, indicating a left-leaning bias in its training data. So, when asking ChatGPT for information, users should be mindful that the model will provide replies with some degree of bias.

But bias isn't unique to this model; it afflicts all large language models. This is inevitable, since they learn from vast amounts of data that inherently reflect societal biases. Because of this, AI biases can become truly problematic, as they can influence real-world decisions and exacerbate societal inequalities.

Let me give you an example. Picture yourself applying for your dream job, equipped with the right qualifications and enthusiasm. However, an unseen barrier stands in your way: an AI recruitment system. A comprehensive Reuters report has shed light on disturbing instances where such systems, driven by biased historical hiring data, have unfairly discriminated against candidates based on gender, age, or ethnicity. These digital gatekeepers, supposedly neutral, instead enforce outdated prejudices. They make critical decisions about who gets a foot in the door, often overlooking genuinely qualified individuals simply because they don't align with the system's skewed idea of an 'ideal candidate'. This is not just a faceless statistic; it's a reality for many. It could be you, a family member, or a close friend unjustly sidelined in their professional journey, not by a human but by an algorithm.

While it may seem absurd to attribute political leanings to an AI like ChatGPT, the underlying biases in these technologies have real and significant implications. As we continue integrating AI into various aspects of our lives, addressing and mitigating these biases becomes increasingly crucial.
Failing to do so risks entrenching existing societal inequalities while undermining the principles of fairness and equality we strive to uphold in a democratic society. We must actively seek to identify these biases within AI systems and strive relentlessly to mitigate them. It is only through such conscientious effort that we can edge closer towards creating a world that is more equitable and just.
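The recruitment example above can be sketched in a few lines. This is a deliberately simplified illustration with entirely synthetic "historical" records (not data from any real system): the two groups are equally qualified, but group B was hired less often in the past, and a naive decision rule learned from those outcomes reproduces the disparity exactly.

```python
# Synthetic historical hiring records: (group, qualified, hired).
# Both groups are equally qualified, but group B was hired far less often.
historical = (
    [("A", 1, 1)] * 80 + [("A", 1, 0)] * 20 +   # group A: 80% of qualified hired
    [("B", 1, 1)] * 40 + [("B", 1, 0)] * 60     # group B: 40% of qualified hired
)

def hire_rate(records, group):
    """'Training': estimate P(hired | group) from historical outcomes."""
    outcomes = [hired for g, _, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

model = {g: hire_rate(historical, g) for g in ("A", "B")}

# 'Deployment': a naive screen that thresholds the learned score simply
# replays the historical prejudice against group B.
threshold = 0.5
decisions = {g: ("screen in" if score >= threshold else "screen out")
             for g, score in model.items()}

print(model)      # {'A': 0.8, 'B': 0.4}
print(decisions)  # {'A': 'screen in', 'B': 'screen out'}
```

Real recruitment systems are far more elaborate, but the mechanism is the same: a model that faithfully learns from biased outcomes faithfully repeats them, which is why auditing the training data matters as much as auditing the model.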
