Issue link: https://maltatoday.uberflip.com/i/1544145
9 maltatoday | WEDNESDAY • 1 APRIL 2026 | OPINION

Criminalising the malicious use of deep fakes

Mark Said, Veteran lawyer

LAST January, the government announced that it is preparing legislative amendments to crack down on the malicious use of deepfakes, analysing existing laws and drafting proposals to address the use of AI technology for harassment, blackmail or bullying. This is a long-awaited step in the right direction, and much can be done in this respect. But first, one must keep in mind the recent background that gave rise to this consideration.

Videos, images and audio created using AI to realistically simulate or fabricate content are booming on the internet. They are becoming increasingly accessible: what previously required powerful tools can now be done with free mobile apps and limited digital skills.

Deepfakes pose greater risks for children than for adults, as children's cognitive abilities are still developing and they have more difficulty identifying deepfakes. Children are also more susceptible to harmful online practices, including grooming, cyberbullying and child sexual abuse material. This highlights the need for legal action and cooperation, including developing the tools and methods needed to tackle these threats at the required scale and pace.

Since 2024, the EU Artificial Intelligence Act (AI Act), enforceable in Malta, has outlawed the worst cases of AI-based identity manipulation and mandated transparency for AI-generated content. This came at a critical time, as statistics showed that half of all businesses had experienced fraud involving AI-altered audio and video.

Still, that act does not stand alone in the fight against AI identity fraud, as newer anti-deepfake laws are being passed all over the world. Apart from updated, robust legislation, one must also consider what should go hand in hand with legislation to fight AI identity fraud effectively.
One of the most talked-about laws against AI deepfakes comes from Denmark, which has amended its copyright law to ensure that every person has the right to their own body, facial features and voice. In effect, Denmark is treating a person's unique likeness as intellectual property, a first-of-its-kind approach, at least in Europe.

Under this amendment, any realistic AI-generated imitation of a person (face, voice or body) shared without consent violates the law. Danish citizens now have a clear legal right to demand the takedown of such content, and platforms that fail to remove it face severe fines. The law does, however, make exceptions for parody and satire, which remain permitted.

In 2025, the US enacted the 'Take It Down' law, the first US federal law directly restricting harmful deepfakes. It focuses on non-consensual intimate imagery and impersonations such as deepfake pornography, sexual images, or any AI-generated media falsely depicting a real person in a harmful way. The legislation also makes it a crime to knowingly share nude or sexual images of someone without consent, including AI-generated fakes. Penalties include monetary fines and custodial sentences of up to three years; the maximum applies in aggravated circumstances such as prior offences or distribution with intent to harass.

Moreover, the law does not only punish the initial perpetrators; it also imposes obligations on platforms to act when such content is flagged. Under the new law, if someone finds an explicit deepfake of themselves, online platforms are required by federal law to remove it within 48 hours of a report. By May 2026, any platform that hosts user content and could contain intimate images must have a clear notice-and-takedown system in place.

France has amended its penal code to criminalise non-consensual sexual deepfakes.
It punishes making public, by any means, sexual content generated by algorithms reproducing a person's image or voice without consent. Possible penalties include up to two years' imprisonment and a €60,000 fine, with higher thresholds in some specific contexts.

Several countries, among them the UK, South Korea, Australia, Italy, the UAE, China and Japan, have introduced specific legislation or amended existing laws to criminalise the malicious use of AI deepfakes, often focusing on areas such as non-consensual sexual imagery, fraud and election manipulation.

Criminalising the malicious use of AI deepfakes is one important aspect that our legislators will surely not ignore. AI is driving a new era of fraud, one that doesn't just fool machines but people. Yet, try as we might, our legislation has not always kept pace. Laws that outlaw the malicious use of deepfakes may look good on paper, but without the tools to detect and prove synthetic content, they are toothless. How would speed limits help if there were no radar guns or police to enforce them? The same logic applies here.

The most effective response will require more than legislation. It demands a universal approach that pairs stronger verification technologies, to catch fake identities at critical checkpoints, with public education to help people recognise the red flags. Because when deepfakes look real, sound real and pass basic checks, the only thing standing between a person and fraud is their ability to recognise the red flags and act accordingly.

Technology can detect what humans can't, but humans still need to be equipped to detect what technology misses. One without the other simply won't hold.

