
The AI genie is out of the bottle – now what?

Generative AI is here to stay, so let’s build AI literacy, incorporate AI into assessment and craft solid policies for its use


7 Feb 2025
A magic lamp with a speech bubble. Image credit: iStock/Moor Studio.

Created in partnership with Nazarbayev University


Since the public release of ChatGPT in November 2022, terms like “generative AI” and “large language models” have become commonplace among educators, students and even policymakers. While artificial intelligence as a concept has existed for over 70 years, it’s only now that its impact is becoming visible to the general public.

Many of us have grappled with crafting policies to define appropriate use of AI tools in the classroom, worried about the inappropriate use of AI in assessment tasks and struggled to keep up with the pace of new AI technology and applications. 

As Wharton professor Ethan Mollick has noted, “The AI genie is out of the bottle”, and there is no going back to the way things were pre-ChatGPT. We need to recognise that generative AI is here to stay in higher education. The question is no longer whether AI will influence education but rather how higher education institutions will adapt to this new reality.

Building AI literacy competence

We believe that effective incorporation of AI in higher education starts with building AI literacy competence among both students and educators. This includes, but is not limited to, understanding, applying and critically evaluating AI in various settings. Whether you teach computer programming or first-year history, you should have a basic understanding of how generative AI works.

One of the critical skills to acquire is prompt engineering. It might seem relatively easy to ask ChatGPT to produce a lesson plan, design a sequence of maths problems or generate a multiple-choice quiz. However, the devil is in the details: prompts must be refined so that the virtual magician pulls out exactly what you hoped for – a white rabbit with long ears, a black dove with grey wings or anything else your “abracadabra” asked for. If the trick doesn’t quite work, you refine the spell until the right result emerges. The magic lies in crafting the right spell.
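The refinement described above can be made concrete in a few lines of code. This is only an illustrative sketch – `build_prompt` is a hypothetical helper of our own, not part of any AI vendor’s tools – showing one common refinement pattern: replacing a vague request with an explicit role, constraints and output format before sending it to a generative AI tool.

```python
# Illustrative sketch of prompt refinement. build_prompt is a hypothetical
# helper that assembles a structured prompt from explicit components; the
# assembled text would then be sent to whichever AI tool you use.

def build_prompt(role, task, constraints=None, output_format=None):
    """Assemble a structured prompt from explicit components."""
    parts = [f"You are {role}.", f"Task: {task}"]
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    if output_format:
        parts.append(f"Respond as: {output_format}")
    return "\n".join(parts)

# A vague "abracadabra" that leaves the tool guessing:
vague = "Make a quiz about photosynthesis."

# A refined spell that specifies exactly what should come out of the hat:
refined = build_prompt(
    role="a secondary-school biology teacher",
    task="write a quiz about photosynthesis",
    constraints=[
        "exactly 5 multiple-choice questions",
        "four options each, with one correct answer",
        "suitable for 14-year-olds",
    ],
    output_format="a numbered list with the answer key at the end",
)

print(refined)
```

The point is not the helper itself but the habit it encodes: each time the output misses the mark, add or tighten a component (audience, length, format) rather than rewording the whole request from scratch.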

Incorporating AI into assessment tasks

Understanding AI is step one, but making it work for you in the classroom is where its potential truly unfolds. Educators have had to adjust assessments so that students can really practise their critical thinking skills without breaching the principles of ethical use and academic integrity. Here are four examples of how AI can easily be incorporated into assessments in a way that mitigates the risk of students using AI tools inappropriately, while helping to develop their critical thinking skills:

AI versus manual brainstorming: Students could be asked to: (i) manually brainstorm ideas for a particular project or problem, for example, generating ideas for a new entrepreneurial venture; (ii) then use AI to generate ideas on the same topic; and (iii) evaluate the similarities and differences between the two brainstorming activities and their implications. By critically comparing the results, students would not only engage with AI, but also refine their critical thinking and evaluative skills. This exercise could also help to highlight the strengths and limitations of using AI to generate diverse and original ideas.

AI-supported literature analysis: In this task, students could be first asked to use AI to summarise and comment on a scholarly article or piece of literature, such as Shakespeare’s Henry V. They would then critically evaluate the AI-generated response, focusing on factors such as accuracy, depth and interpretative quality. Once again, this would enable students to engage with AI tools, while at the same time helping them to develop a nuanced understanding of both the literature and the AI tools.

AI as an expert adviser: Students could prompt AI to provide expert advice on a specific topic and then critically assess the quality of the advice – for example, asking AI to provide advice on a specific engineering problem. This activity could help students to identify errors, biases or oversights in AI-generated content and reinforce the importance of human judgement, expertise and ultimate responsibility.

AI-generated presentations: In this exercise, students could be asked to use AI to create a presentation on a chosen topic. They would then present their topic, including an evaluation of the quality of the presentation generated by the AI, taking into account factors such as presentation design, accuracy, relevance and consistency. In addition to engaging with AI and helping to develop critical thinking skills, this task could also help to improve communication and presentation skills.

Developing ‘next generation’ AI policies

The last – and perhaps most critical – consideration is developing forward-thinking policies. Understandably, given that ChatGPT was only introduced to the world in 2022, existing AI policies at many universities tend to focus on restrictive guidelines, detailing whether, how and in what contexts students and educators can use AI. However, such policies often fail to capitalise on AI’s capabilities to enhance learning.

To address this, higher education institutions need to develop “next generation” (“Gen 2”) AI policies that also emphasise how AI should be used. These policies should aim to:

  • Foster critical engagement: Encourage students to critically evaluate AI outputs rather than passively accept them.
  • Promote ethical usage: Define responsible use of AI, addressing issues such as bias, transparency and academic integrity.
  • Encourage creativity: Use AI tools to inspire and enhance creative problem-solving.

Generative AI is here to stay and higher education institutions need to find ways of incorporating it effectively. By developing foundational AI literacy competence, reimagining assessments and adapting institutional policies, universities can turn potential challenges into opportunities. 

Aida Nuranova is a professional learning and development expert at the Center for Innovation in Learning and Teaching, and Timothy Wawn is an instructor at the Graduate School of Business, both at Nazarbayev University.

If you would like advice and insight from academics and university staff delivered direct to your inbox each week, sign up for the Campus newsletter.
