By Christien Ettema, 5 May 2025

Image created with the free version of Dall-E in ChatGPT, based on my off-the-cuff prompt: ‘Please create an image of the frenemy concept applied to AI, where AI can be a student's best friend but also an enemy to their motivation and learning’

Since the launch of ChatGPT in November 2022, educators around the world have been struggling to formulate adequate guidelines for the use of Generative AI (GenAI) tools by students. The challenge is daunting: how can we safeguard assessment when students can use GenAI as a shortcut to do their homework and write their thesis reports? How can we ensure that students continue to engage with content in a meaningful way and develop critical thinking and problem-solving skills? How can we keep up with all the new AI tools and deal with the growing integration of AI into common software?

One thing is clear: the brief period when teachers could simply forbid students from using AI tools is long gone. According to a recent UK survey, student use of AI has exploded, with nine out of ten UK undergraduates now using AI for their assignments. From my experience at Utrecht University, where I teach academic writing to undergraduate and PhD students, I can see that the situation in the Netherlands is no different; like other teachers, I’m struggling to keep up.

Thus, I was keen to attend the UniSIG Zoom meeting on 7 March to hear Peter Levrai from the University of Turku, Finland, share his ideas on how university teachers can encourage students to engage with AI in a positive way. Peter’s talk was based on a blog post he and his colleague Averil Bolster recently published on the topic, drawing on their own experiences in the classroom. I’m summarizing the main points of his talk here. Note: Peter’s talk focused on the use of generative AI tools such as ChatGPT, which I will refer to here simply as ‘AI’.

To frame the issue at hand, Peter used the term ‘best frenemy’ to capture both the opportunities and threats that AI poses to student learning. AI has the potential to make life much easier for students, but it can also undermine their motivation and opportunities for learning and lead them down the rabbit hole of disinformation. Drawing on Bloom’s taxonomy of educational objectives, Peter argued that AI can speed up lower-level tasks such as listing, summarizing and applying, but cannot and should not replace higher-level tasks such as creating, evaluating and analysing. The arrival of AI tools means that students will now have to operate (or learn how to operate) at these higher levels of thinking more consistently.

Next, Peter pointed out the problem with guidelines that define ‘how much’ AI can be used for different assignments. These guidelines date back to the very recent time when students had to actively seek out AI, but with the explosion of tools and intrusion of AI into mobile phones and common software, it has become almost impossible to avoid AI; it is simply everywhere. Therefore, rather than focusing on ‘how much’, it makes more sense to focus on identifying ‘why’ and ‘how’ students are using AI and, based on these insights, to develop strategies that encourage AI use that has the best outcome for student development.

The AI Quality of Engagement matrix (AIQEM) developed by Levrai and Bolster (2024)

Combining the qualitative dimensions of ‘motivation’ and ‘criticality’ into a matrix provides a framework for assessing the quality of student engagement with AI tools. As the diagram shows, there is really only one desirable outcome: that students use AI to develop and test their ideas (positive motivation) and carefully evaluate the quality of the AI output (higher criticality). The diagonal opposite is obviously the worst case (using AI as a shortcut and taking the output at face value), but the other two options are only slightly less worrying: even if the motivation to use AI is positive, using AI output without further critical analysis undermines development and learning; and using AI as a shortcut while adapting the output just enough to pass it off as one’s own is equally questionable.
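Since the figure may not come across in every format, here is a rough text sketch of the four quadrants, my own rendering of the matrix described above:

                        Lower criticality                     Higher criticality
Positive motivation     AI output used uncritically,          AI used to develop and test ideas,
                        despite good intentions               with the output carefully evaluated
Negative motivation     shortcut, with the output taken       shortcut, with the output adapted just
                        at face value (the worst case)        enough to pass it off as one’s own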

Based on this analysis, Peter argued that the main concern for teachers should be to help students better understand how they can use AI to develop their knowledge and ideas in ways that enhance rather than undermine their learning. Good strategies include using AI to brainstorm ideas and develop background knowledge on a topic, and asking the chatbot to give feedback, act as a tutor, or be a debate partner. The key is to develop good prompting skills. For example, Peter suggested trying different verbs (comparing the output when asking the chatbot to explain a topic versus to debate it) and using persona prompts, where the chatbot is given a highly specific role (see Valchanov, 2024). I’m also thinking of the various prompting frameworks already out there, such as the RISEN framework (Role, Instructions, Steps, End goal, Narrowing) that is now embedded in ChatGPT.
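As an illustration (my own example, not one from Peter’s talk), a persona prompt built along the RISEN lines might look something like this: ‘Act as an experienced academic writing tutor (Role). Give me feedback on the draft introduction below (Instructions). First summarize my main argument in one sentence, then point out where the reasoning is unclear, and ask me questions rather than rewriting the text for me (Steps). The aim is to help me sharpen my own argument, not to have text produced for me (End goal). Comment only on structure and argumentation, not on grammar or style (Narrowing).’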

A second conclusion drawn from the matrix is that students need to understand the limitations of GenAI output and, now more than ever, must develop critical evaluation skills. Finally, Peter added that the matrix can also inspire us as teachers to reflect on our own use of AI, to explore how AI can support our professional development, and to increase our understanding of the challenges our students are facing.

In closing, Peter emphasized that, with the arrival of AI tools, our own thoughts, imagination and creativity are more important than ever. We should also not give up on learning and teaching lower-order thinking skills, because without them we cannot operate successfully at the higher levels, nor can we interact with AI output in a meaningful way. Last but not least, we need to make students more aware of issues such as data ownership and security, and encourage them to make sure that AI works for them, not that they work for AI.

After Peter’s thought-provoking presentation, the following lively discussion ensued:

Joy: When you encourage students to use AI, do you ask for transparency?

Peter: Absolutely. For assignments, students have to submit a statement disclosing their use of AI; they are also advised to keep a history of their interaction with AI so that they have proof of their own input in case AI detection software flags their work.

Michelle: How useful would it be to discuss ethics in relation to AI use among students?

Peter: Hugely important. We also need to talk about how the harvesting of training data violates copyright and privacy, the trauma experienced by the people who have to clean up the data that feeds the models, and the tragedy of the commons whereby we feel we have to use AI or be left behind; and we need to make students aware that everything they put into these tools is owned by the companies behind them. See Stahl and Eke’s recent article in the International Journal of Information Management (Stahl and Eke, 2024).

Wendy: Do you see a difference in AI use, in terms of practice and appropriateness, among students at different levels, e.g. PhD vs. MA or undergraduate? It seems to me that the higher the level, the less helpful AI becomes, or the more careful one must be, even with positive motivation and high criticality.

Peter: My focus is mostly on undergraduates, but what I see is that AI use is not a matter of academic level; it is partly faculty-related (arts vs. sciences) and it also varies a lot between individuals, with some students using AI for everything and others staying away from it for a variety of reasons. But generally speaking, the higher the academic level, the more work has to go into getting something useful out of GenAI.

Jackie: How can teachers ‘check’ what students have done with AI?

Peter: The submission statements and self-policing are important here, but we cannot check how honest they are. At some point, schools will have to accept that these tools are being used. My main concern is that students will get stuck in superficial analysis, the dumbing-down effect of just reading AI summaries of articles and not fully engaging with original texts, which will impair reading and writing skills. The utopia is that AI will do all the dirty work so that we will have time to write poetry. But no, we’ll just watch more Netflix. Forming our own thoughts, opinions, creativity, that’s where humans come in.

Tom: I’m a corporate trainer, not a teacher. The participants in my report-writing workshops are often consultants. They are using AI within a company-protected AI environment for different stages of the writing process, from ideation to crossing the t’s. What is your take on the speed of development of AI? How long will report-writing workshops be necessary…?

Peter: I’m optimistic, as long as we can adapt. The fundamentals won’t change. To draw a parallel with using AI for coding: you still need to understand how the code works to be able to fix problems and debug code. Similarly, you need to understand how text works, how writing works, how people read, to be able to interact with AI output and produce a good end result.

Charles: If a PhD student uses AI to help with a research paper, can this impact the ‘publishability’ of the manuscript? For example, are there concerns about potential loss of advantage if a research design in a highly competitive field is plagiarized before the original manuscript is published?

Peter: I would be very cautious about putting anything into an AI that is confidential information. Ownership can be lost as soon as you put it into an AI. Depending on the terms and conditions of the tool, your work may be lost – even if tools claim they will not use your data for training, even if you have a paid version. A solution would be to have a safe and secure ‘in-house’ AI system, but at this point that is a significant investment.

References

Levrai, P. and Bolster, A. (2024). Supporting ethical and developmental AI use with the AI Quality of Engagement Matrix. Theory into Practice Blog.

Lewis, B. (2019). Using Bloom’s Taxonomy for Effective Learning. ThoughtCo.

Stahl, B.C. and Eke, D. (2024). The ethics of ChatGPT – Exploring the ethical issues of an emerging technology. International Journal of Information Management, 74, 102700. https://doi.org/10.1016/j.ijinfomgt.2023.102700

Valchanov, I. (2024). Best ChatGPT Prompts: Persona Examples. Team GPT.

     Blog post by: Christien Ettema
     Website: www.shadesofgreen.nl
     LinkedIn: christienettema