OPINION

The best way to prevent AI cheating in schools

Alexiei Dingli

Prof Alexiei Dingli is a Professor of AI at the University of Malta and has been conducting research and working in the field of AI for more than two decades, assisting different companies to implement AI solutions. He forms part of the Malta.AI task-force, set up by the Maltese government, aimed at making Malta one of the top AI countries in the world.

Artificial intelligence (AI) has ushered in a new era of possibilities and challenges across various sectors, including education. While AI offers innovative tools for learning and research, it also presents a problem for academic integrity. The traditional methods of preventing cheating are becoming obsolete, and the AI detectors designed to catch these new forms of academic dishonesty are far from foolproof. This calls for a comprehensive reevaluation of how we approach academic integrity in the age of AI.

AI detectors, such as Turnitin's new software, have been marketed as the next frontier to combat academic dishonesty. However, these tools are far from infallible. They operate on predictive algorithms that can yield false positives, flagging innocent students and casting doubt on their integrity. This is not just a theoretical concern; there have been instances where students were wrongly accused based on these algorithms. Such incidents tarnish the academic records of innocent students and erode the trust between educators and students. So much so that in its blog containing tips for educators, OpenAI (the company behind ChatGPT) has officially admitted that AI writing detectors don't work.

Moreover, the rapid evolution of AI technology is outpacing the capabilities of these detection tools. As AI-generated content becomes more sophisticated, the effectiveness of detection tools diminishes. The technology is essentially in an arms race, where advancements on one side necessitate countermeasures on the other, leading to a never-ending cycle of escalation without a clear resolution.

The use of AI detection tools also raises ethical questions. For instance, these tools can disproportionately affect students with English as a second language, flagging their work as suspicious even when no cheating has occurred. This adds an extra layer of complexity and unfairness to the already fraught issue of academic integrity. Given these limitations and ethical concerns, a compelling argument exists for rethinking our approach.

The best way forward is to assume that students are using AI, so we must challenge students in new ways.

• Educators could assign essays to be written at home, perhaps encouraging AI tools like ChatGPT. The students could then be asked to improve, critique, or defend their work in a controlled classroom environment without the aid of AI. This approach not only levels the playing field but also enhances critical thinking skills, as students need to understand the content they have generated with the help of AI.

• They could give students a project encouraging them to use AI for data analysis or content generation. The next step would be an in-class presentation where students must explain their methodology, AI's role, and how they verified or modified the AI's output. This would not only test their understanding but also their ability to collaborate with AI responsibly.
• Students could be tasked with using an AI tool to generate arguments for a debate topic and then create counter-arguments themselves. In a classroom setting, they would then have to defend their human-generated arguments against the AI-generated ones, demonstrating a deep understanding of the topic.

• After using AI to draft essays at home, students could participate in a real-time, in-class peer review session. They would exchange papers and critique each other's work, focusing on how well the AI-generated content was integrated and whether it was critically analyzed and improved upon by the human author.

• Much like math students are required to show their work to get full credit, students in the humanities could be asked to provide "track changes" documentation or a reflective essay detailing how they modified or improved upon AI-generated content. This would offer insights into their thought process and ensure they engaged critically with the material.

These are not new concepts; we've seen similar shifts in other disciplines. The introduction of calculators in classrooms led to a change in how mathematics is taught and assessed. Simple arithmetic took a backseat to more complex problem-solving and conceptual understanding, which calculators could not simply do for students. Likewise, integrating AI into the academic landscape should lead to a reevaluation of the skills and knowledge we value and assess in students.

The limitations of AI detectors and the potential for innovative assessment strategies underscore the need for comprehensive policies informed by the capabilities and limitations of AI. Educational institutions should develop guidelines that clearly outline the acceptable use of AI in academic work and the procedures for verifying the integrity of such work. These policies should be developed in consultation with educational technology experts, ethicists, and legal advisors to ensure they are both effective and equitable.

As we stand at the intersection of AI and education, it's clear that our traditional approaches to academic integrity are due for an overhaul. Rather than relying on imperfect AI detectors, we should embrace the technology's potential to enrich our educational systems while developing robust methods and policies to ensure academic integrity.

This balanced approach will uphold the values of academia and better prepare our students for a future where AI is an integral part of life. By doing so, we can navigate the complexities of this new frontier with the nuance and sophistication it demands.