BUSINESS TODAY 16 June 2022

Issue link: https://maltatoday.uberflip.com/i/1471025

OPINION 16.6.2022

Did an Artificial Intelligence system just become human?

The first book of the Bible relates how God blew into the nostrils of the first man, giving him life. Ever since, humans have dreamt about bringing an inanimate object to life, so much so that numerous fictional stories centre on this theme: from the well-known children's fairytale Pinocchio, in which a wooden doll receives the gift of life through magic, to Mary Shelley's famous Frankenstein, in which a scientist brings a corpse to life using electricity.

The prospect of becoming the creator has always fascinated humans. In the field of Artificial Intelligence (AI), this idea surfaces time and time again, simply because AI scientists try to mimic human functions with machines. If one looks at science-fiction films like "2001: A Space Odyssey", "Ex Machina", "Ghost in the Shell", "Her" and "The Matrix", to name a few, it is evident that this theme lies at the heart of most storylines.

The notion of reaching human potential, and maybe even surpassing it, has been a significant target since the inception of AI. Right after the Second World War, Alan Turing, the father of AI, devised the Turing test, which states that if we cannot tell whether we are conversing with a machine or a human, then that machine has reached human-level intelligence. He designed such a test because he could not find a coherent definition of intelligence. We readily call an animal intelligent if it can understand commands and perform simple actions. We marvel at the massive structures built by termite colonies and label them an engineering feat. But if someone asks us to define intelligence, we are lost for words. Because of this, Alan Turing chose a definition by association, linking machine accomplishments to human achievements. However, the general belief today is that the test is too simple and that we should be looking at something more demanding. Even using it as a baseline, to this day no AI system has ever managed to pass the Turing test!

But things might be changing. In 2021, Google announced that it had created a new AI system called LaMDA. This program is specifically suited to processing natural languages like English, Italian or French. Let's not forget that languages are very complex: they can be literal or figurative, plain, informal or formal, and written in different styles. A conversation, too, can range over various topics; it assumes a shared context and can be somewhat erratic. So on the one hand, language is one of humanity's most powerful tools, yet its peculiarities make it a tough nut to crack for a computer.

LaMDA recently made the news because a Google engineer working with it declared that the system had become conscious and begun reasoning like a human being. A quick look at the conversations with LaMDA reveals that it seems aware of its own existence, exhibits emotions (happiness and sadness) and can discuss deep subjects (like religion, justice and compassion). But does this make it human? The answer is simply no.

First of all, exhibiting these characteristics does not mean they are real, in the same way that flying does not make an aeroplane a bird, nor does travelling underwater make a submarine a fish. They might indeed share similar features, but they are very different things.
Second, the engineer who made these claims also admitted that the transcript of the conversation he published is not the raw text. He edited it to make it readable, which shows that the algorithm does have significant limitations. Furthermore, hundreds of other software engineers have used the system, and no one else has made a similar claim.

Third, our feelings go way beyond a simple calculation. When we feel sad, we experience stomach cramps and uneasiness; happiness, on the other hand, brings a rush of dopamine. These are sensations we feel but cannot express clearly. We have no reason to think LaMDA had similar feelings during its conversations.

Fourth, we humans are social animals and tend to see patterns everywhere. As such, we must be careful when speaking with artificial beings, because we might fall into the trap of believing they are human.

Of course, the million-dollar question everyone is asking is: how did Google pull off this feat? It started a few years back, when large corporations began creating huge language models. These models are AI systems trained on extensive collections of text downloaded from the internet. One of the most famous is GPT-2, a model created by OpenAI, a company co-founded by Elon Musk. When the company revealed it in 2019, it declared that the model might be too dangerous to release, since it had reached almost human abilities, and that it could be maliciously misused if it fell into the wrong hands. To give an idea of the scale of such systems, GPT-2 learnt from around 8 million web pages. Of course, this concern now seems passé: GPT-2 is a relatively small model by today's standards, and anyone can download it!

Considering that the model is three years old, today's models seem immense. They learn from datasets of approximately 1 billion web pages, so it is unsurprising that we are seeing massive improvements over former systems. Just imagine that later this year we expect the release of GPT-4, which is expected to learn from almost 500 billion web pages. The growth of these language models is so fast that we can anticipate some giant leaps in AI in the coming years.

LaMDA is one of these models. The difference from the GPT models is that it is specific to human dialogue, having been trained on some 1.6 trillion words of human conversation. Thus it is not surprising that LaMDA is extremely good at dialogue. However, we cannot claim that it is conscious or that it feels emotions. Creating an AI on a par with a human will take much more. Still, we live in an exciting period, where conversing with computers using natural language will become a reality pretty soon!

Alexiei Dingli

Prof Alexiei Dingli is a Professor of AI at the University of Malta and has been conducting research and working in the field of AI for more than two decades, assisting different companies to implement AI solutions. He forms part of the Malta.AI task force, set up by the Maltese government, aimed at making Malta one of the top AI countries in the world.
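For readers who want to see for themselves that GPT-2 really is freely downloadable, the short Python sketch below shows one common way to do it. It assumes the open-source Hugging Face transformers library and PyTorch are installed; "gpt2" is the public checkpoint of the original model on the Hugging Face hub, and the prompt is purely illustrative. This is not LaMDA or any Google system, just the small 2019 model mentioned in the article.

```python
# A minimal sketch, assuming the Hugging Face "transformers" library and
# PyTorch are installed (pip install transformers torch). The "gpt2"
# checkpoint is the original 124M-parameter model released by OpenAI;
# the prompt below is purely illustrative.
from transformers import pipeline

# Downloads the model weights on first run, then loads them locally.
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence systems that hold conversations with people"
result = generator(prompt, max_new_tokens=40, do_sample=True)

# Prints the prompt followed by the model's generated continuation.
print(result[0]["generated_text"])
```

On first use this fetches a few hundred megabytes of weights and then prints a plausible but often rambling continuation, which illustrates both how accessible these models have become and how far they remain from human-level conversation.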
