'Pause giant AI experiments'

Elon Musk and Apple co-founder Steve Wozniak amongst hundreds of tech and science leaders calling for six-month pause on 'training of AI systems more powerful than GPT-4'

OVER 1,000 signatories – including hundreds of tech, science, and academic leaders – have signed an open letter calling on all AI labs around the world "to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4".

It adds: "This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

Those who've signed the letter, instigated by the Future of Life Institute, include Elon Musk (CEO of SpaceX, Twitter, and Tesla), in addition to the co-founder of Apple, Steve Wozniak.

Other notable signatories include Emad Mostaque, the CEO of Stability AI (home of popular text-to-image generator Stable Diffusion), plus Evan Sharp, the co-founder of Pinterest, and Chris Larsen, the co-founder of Ripple.

Elsewhere, the letter's backers include three team members at Alphabet/Google's experimental AI hub, DeepMind: Victoria Krakovna (Research Scientist and co-founder of the Future of Life Institute), Zachary Kenton (Senior Research Scientist), and Ramana Kumar (Research Scientist).

Musk's involvement is particularly significant, considering he was a co-founder (and helped fund the creation) of OpenAI Incorporated, which launched GPT-4 on March 14.

San Francisco-headquartered OpenAI, which accepted a $10 billion investment package from Microsoft in January this year, began life as a non-profit entity, but transitioned into being a for-profit company in 2019. (Musk resigned from its board in 2018.)

Today, the firm calls GPT-4 (widely referred to as ChatGPT-4) "the latest milestone in OpenAI's effort in scaling up deep learning".

It boasts that "while [GPT-4 is] less capable than humans in many real-world scenarios, [it] exhibits human-level performance on various professional and academic benchmarks".

In their open letter, titled simply "Pause Giant AI Experiments", Musk and the other signatories write: "AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs."

They continue: "Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

Elsewhere in the letter, the signatories move into a discussion around generative AI's potential impact on specific areas of business, politics, the economy and beyond.

They write: "AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

"In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems.
"These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause."

AI and music

The suggestion that the world needs the products of AI to carry "watermarking systems to help distinguish real from synthetic" will resonate with the music business.

Earlier this month, over 30 groups representing and/or associated with the music business launched the Human Artistry Campaign, which aims to ensure that AI will not replace or "erode" human culture and artistry.

Its signatories included the Recording Industry Association of America (RIAA), the Recording Academy, SAG-AFTRA and SoundExchange.