Issue link: https://maltatoday.uberflip.com/i/1542372
11 maltatoday | SUNDAY • 4 JANUARY 2026 LOOKING FORWARD 2026

Navigating the AI labyrinth

According to Dingli, the next big thing in 2026 is not smarter chatbots like ChatGPT or Gemini, but software that can run parts of a job end to end, and document systems will keep getting smarter. But many new products will not look like traditional apps at all. "Instead of clicking through menus, people will describe what they want done, and the system will organise the work for them," he said.

This means screens will show what the AI did, what data it used, and what still needs human approval. "In simple terms, software will start to feel less like tools you operate and more like systems you direct," he said.

The dark side

But AI's rapid advance is also opening the door to darker, more criminal uses, often outpacing the laws and safeguards meant to contain it.

In 2025, a woman was charged with using an AI-generated video of Robert Abela to scam people into fake crypto investments. One victim told police they were scammed out of at least €52,000.

Cyber Crime Unit Superintendent Anna Maria Xuereb had explained in an interview with MaltaToday that cybercriminals were generating AI child abuse material and selling it online.

Dingli said he thinks scams in 2026 will become more personal, more convincing and harder to spot.

"AI removes the effort and skill scammers used to need. Instead of poorly written emails sent to thousands of people, scams will be tailored to the individual. Messages will reference your job, your colleagues, recent events, and even your writing style. Voice scams will also grow rapidly, where a short audio clip from social media is enough to imitate a boss, a family member, or a company director asking for something 'urgent'," he said.

The danger, Dingli insisted, is not just that these scams exist, but that they arrive through trusted channels like email, messaging apps, and even internal work tools, making them feel routine rather than suspicious.

Another major shift is speed and scale. He explained that AI allows scammers to test hundreds of versions of a message, see which one works, and improve it almost in real time. This means scams will adapt faster than people's awareness. We will also see more "quiet scams", where the goal is not to steal money immediately but to slowly gather information, build trust, and then strike later with something much bigger and more damaging.

"The most important defence in 2026 will not be technical alone, but behavioural. People will need to get used to pausing, verifying, and being comfortable saying 'I will check this first', even when a message sounds urgent or comes from a familiar voice. Scams will feel more human than ever, which is why our response must be more deliberate, not faster," he said.

Still, Dingli's outlook is a positive one. Warning against the AI bogeyman, he said the question is not whether the technology is too clever, but whether we can use it wisely.

"Another risk is quiet deskilling, where people rely on AI so much that they stop understanding the work themselves. The answer is not fear or bans, but clear rules, good training, and a simple principle: AI can help decide and act, but humans must always stay responsible for the final call," he said.

The AI-fication of Malta

For Malta, the biggest realistic opportunity in AI is at a national and local level, not just inside individual companies.

"This means AI that helps run transport, public services, education, and government administration in a joined-up way. Because Malta is small, it can move faster and test ideas that are difficult in bigger countries," Dingli said.

He said the real win would be AI systems which help public servants make better decisions, reduce paperwork, and respond faster to citizens, while keeping humans clearly in charge.

AI is here to stay, and failing to keep up or adapt will be detrimental. As the technology becomes more powerful and more deeply integrated into everyday systems, the challenge is not stopping its progress but managing it responsibly. Used wisely, AI can boost productivity, improve public services and enhance quality of life. Used carelessly, it can enable new forms of crime and render people irrelevant. Used recklessly, it can create dangerous new weapons of war, but that is a conversation for another time.

