I got generative AI to attempt an undergraduate law exam. It struggled with complex questions

  • Written by Armin Alimardani, Lecturer, School of Law, University of Wollongong

It’s been nearly two years since generative artificial intelligence was made widely available to the public. Some models showed great promise by passing academic and professional exams.

For instance, GPT-4 scored higher than 90% of the United States bar exam test takers. These successes led to concerns AI systems might also breeze through university-level assessments. However, my recent study paints a different picture, showing it isn’t quite the academic powerhouse some might think it is.

My study

To explore generative AI’s academic abilities, I looked at how it performed on an undergraduate criminal law final exam at the University of Wollongong – one of the core subjects students must pass as part of their degree. A total of 225 students sat the exam.

The exam ran for three hours and had two sections. The first asked students to evaluate a case study about criminal offences – and the likelihood of a successful prosecution. The second included a short essay and a set of short-answer questions.

The test questions evaluated a mix of skills, including legal knowledge, critical thinking and the ability to construct persuasive arguments.

Students were not allowed to use AI for their responses and did the assessment in a supervised environment.

I used different AI models to create ten distinct answers to the exam questions.

Five papers were generated by simply pasting the exam questions into the AI tool without any additional prompting. For the other five, I gave detailed prompts and relevant legal content to see if that would improve the outcome.

I handwrote the AI-generated answers in official exam booklets, using fake student names and numbers. These were mixed in with actual student exam answers and given anonymously to five tutors for grading.

Importantly, when marking, the tutors did not know AI had generated ten of the exam answers.

We handwrote the AI answers so markers would think they were done by students. Kate Aedon/Shutterstock

How did the AI papers perform?

When the tutors were interviewed after marking, none of them suspected any answers were AI-generated.

This shows the potential for AI to mimic student responses, and the difficulty educators face in spotting such papers.

But on the whole, the AI papers were not impressive.

While the AI did well in the essay-style question, it struggled with complex questions that required in-depth legal analysis.

This suggests that even though AI can mimic human writing style, it lacks the nuanced understanding needed for complex legal reasoning.

The students’ exam average was 66%.

The AI papers that had no prompting, on average, only beat 4.3% of students. Two barely passed (the pass mark is 50%) and three failed.

The papers generated with detailed prompts did better, beating 39.9% of students on average. Three of them were still unimpressive, scoring 50%, 51.7% and 60%, but two did quite well: one scored 73.3% and the other 78%.

Generative AI has gained a reputation for passing difficult exams. Tada Images/Shutterstock

What does this mean?

These findings have important implications for both education and professional standards.

Despite the hype, generative AI isn’t close to replacing humans in intellectually demanding tasks such as this law exam.

My study suggests AI should be viewed more as a tool which, when used properly, can enhance human capabilities.

So schools and universities should concentrate on developing students’ skills to collaborate with AI and analyse its outputs critically, rather than relying on the tools’ ability to simply spit out answers.

Further, to make collaboration between AI and students possible, we may have to rethink some of the traditional notions we have about education and assessment.

For example, we might consider that when a student prompts, verifies and edits AI-generated work, this is an original contribution and should still be viewed as a valuable part of learning.


Read more https://theconversation.com/i-got-generative-ai-to-attempt-an-undergraduate-law-exam-it-struggled-with-complex-questions-240021
