"legal authority to regulate algorithms," added Engler. "This isn't passing a new law, just seeing how current laws affect AI governance... that's a necessary step we haven't seen."

Still, AI issues are being discussed in the new EU-U.S. Trade and Technology Council. "The European Commission is quite strongly engaged and involved in these discussions and it has top-level political backing," noted Juha Heikkilä, adviser for AI, DG CNECT, European Commission. "The work is led by two executive vice presidents at the European Commission. So […] that shows you the kind of level of importance that it has on our side."

Heikkilä also noted that the US began developing a Bill of AI Rights (the Bill of Rights for an Automated Society) in the autumn of 2021. "And so there are steps there, which from our perspective, perhaps go in the same direction as we've been moving things. So, I think that the political will is something that is available now more than it was perhaps previously."

Although the federal government in the U.S. isn't following the same regulatory approach as the EU, "at the level of individual states, there is regulation enacted and some of that actually goes very much in the same direction as the [EU] AI Act has outlined," Heikkilä added. "There is more appetite now to regulate AI in a reasonable way. Not just in Europe, but also elsewhere."

Other speakers agreed that there is a growing trans-Atlantic consensus around the governance of AI. Raja Chatila, professor emeritus, ISIR, Sorbonne University, pointed to signs that the EU's risk-based approach is gaining traction in the U.S. "Five days ago, NIST (the National Institute of Standards and Technology) in the United States issued an AI risk management framework," he said. There is also progress on global standards and certification, Chatila added, describing them as a "means for soft governance".

The devil is in the detail

One of the most fundamental challenges for proponents of international harmonisation is reaching agreement on what constitutes AI and what is simply conventional software. Chatila suggested defining AI is difficult "because it includes many techniques stemming from applied mathematics and computer science and also robotics. So it's very wide and all these are evolving." However, he noted that since 2012, "when we started to speak about deep learning etc.," the foundational methods for AI haven't changed much. "We are speaking about the same systems as 10 years ago." He added that the EU AI Act has mostly adopted the OECD definition of AI, which is also the basis for the GPAI.

A key driver behind international cooperation is the need for interoperability policies covering trustworthy AI principles, AI systems classification, the AI system lifecycle, AI incidents and AI assurance mechanisms, explained Karine Perset, head of unit at the Artificial Intelligence Policy Observatory, OECD. Governments need to focus on achieving "this interoperability, using the same terms to mean the same thing, even if we set the red lines - what's okay and what's not okay - at different levels." The OECD is advocating a common approach to classifying specific AI applications and distinguishing them from foundational AI models, which need to be assessed and regulated differently.
Perset's team and partner institutions are also developing an AI incidents tracker to "inform this risk assessment work by tracking AI risks that have materialised into incidents and monitoring this to help inform policy and regulatory choices."

One of the OECD's goals is to help policymakers create a framework through which they can identify the key actors in each dimension of the AI system lifecycle. "This is really important for accountability and risk management measures," Perset explained. "And we see some convergence in ISO (the global standards body) and NIST, for example, on leveraging the AI system lifecycle for risk management as the common framework and then conducting risk assessments on each phase of the AI system lifecycle." But she acknowledged that reaching an international consensus on how to classify risk may take some time and political impetus.

Defining risk can be risky

Controversially, the draft EU AI Act has only two categories of risk (in addition to a banned category). For Evert Stamhuis, professor, Erasmus School of Law, Erasmus University Rotterdam, this approach is too crude. Noting the growing use of AI in the healthcare domain, Stamhuis said the "huge diversity in terms of risk" in this arena cannot be reflected in just two risk categories. "You cannot achieve any certainty if you have those simple categories."

More broadly, the dynamic nature of AI makes developing a durable definition difficult. "A definition for a longer period of more than five years is going to be totally unviable," Stamhuis contended.

"One of the difficulties with European legislation is that the process of getting it enacted is so complicated," he added. As a result, there is "huge resistance in quickly modifying it, which usually brings the European institutions to allocating these kinds of flexibility to bylaws and the side mechanisms, but this is so fundamental" that it requires an open political debate, he said, cautioning against the use of a side mechanism in this case.

More fundamentally, Stamhuis called into question the need for dedicated regulation for AI at all. He also harbours doubts about the EU AI Act's reliance on certification of AI systems. "What are we actually certifying? Is it the models, is it the systems?" he asked. Once an AI system or model has been certified, "what happens if the process changes or the model improves itself," Stamhuis added. "If the system is fluid, what is the value of a CE certification given at a certain moment in time?"