
Should we be using AI as a rubric generator?

There is no doubt that AI has the power to alleviate a lot of the administrative pain associated with teaching. Our access to AI tools has grown exponentially over the last year, and more and more we are seeing AI applied to various use cases within the teaching workflow. I personally have tested AI across the bulk of my day-to-day tasks as an English and Humanities teacher, including using AI rubric generators to help me build rubrics for my students. However, the more I come to understand the power of AI, the more I find myself asking an important question: where is the best place to inject my IP?

AI rubric generators can be great tools to help build content.

You can dump knowledge into the system in an unstructured way and prompt it to structure your thoughts, ideas and expectations into a rubric format. You may prompt the AI agent to follow a particular taxonomy, or even engage the AI to help determine which taxonomy might suit your assessment. In many ways, it can act as a Socratic counterpart–a virtual brainstorming partner–challenging you to add or remove content to better articulate the core aims of the task.

Sitting down to write a rubric is a daunting task, and, at the end of the day, using AI to assist reduces cognitive load. You can input a half-baked idea that captures the essence of what you are looking to explain, and the AI can stitch it together cohesively into a framework that can then be tinkered with.

My concern around the use of ChatGPT and other forms of AI in rubric generation is when we ask these systems to write rubrics for us in their entirety.

When we do this, we are not taking advantage of our experience and knowledge as teachers. We know so much. We have sat through PDs, taught the same lessons many times, read an absurd number of essays–we know more about the specific task than ChatGPT could ever know; it has no idea about the context within which we are teaching.

Let’s remember the purpose of a rubric, at its core: to allow students to understand exactly what they are expected to accomplish. If we use AI and rely on it fully, then there will always be a gap between what it produces and what we truly expect from students. Failing to communicate our expectations to students prevents them from accessing our expertise as teachers. It comes down to who sets the guideposts–you, or the AI.

It is completely possible that, over time, AI will reach a point where it can–with one prompt–generate a detailed rubric that perfectly reflects the specifics and intricacies of the task. However, based on my work both in teaching and with AI, I really do think we are a long way away from this. Perhaps we will sooner be able to create a rubric with only a few criteria for a common task–such as an LSAT–but when dealing with detailed, granular tasks where there is much more room for nuance, the teacher needs to be as involved as possible.

The reason it is not worth cutting corners at this end of the teaching loop is that a well-structured rubric can be incredibly powerful. It can act as a written skills checklist for students; determine a student’s score; ensure students receive feedback within their zones of proximal development; allow for analysis of student, class and cohort performance over time; and enable students to self-assess.

Write the rubric once and write it properly, and then re-use it across the lifetime of that assessment.

This is the place to inject your knowledge as a teacher: at the start. Lean on Mark My Words to write the feedback and annotate the essays, with the understanding that it is being informed by a rubric that effectively communicates your expectations.

I recently met with two senior leaders at a major school in Victoria. The school has historically used the kind of rubrics we have all known and used for years: four skills–usually ideas, expression, structure, and analysis–with four levels within each. Both leaders acknowledged the limitations of these rubrics and expressed an interest in changing.

But, as you know, change is time consuming. Very time consuming. And so, as leaders, we are faced with resistance: we know that there is a better way, yet we do not act on it.

The reason, in my view, is that up until this point there was no compelling case for change. What was the point? What was the real advantage? It takes time to develop a new rubric, and then even more time to implement it–to bring your staff on board and explain the changes to your students. Until now, there has never been an immediate reward for the upfront investment in redesigning rubrics.

With the emergence of AI, however, there is now a really good reason to put in the time. Once you have written your comprehensive rubric, you can save time and effort at every step of the marking process from there on out.

AI presents us with an opportunity to make rubrics mean more. If you really ask yourself what benefit these traditional rubrics provide, the likely answer is that they are easy to make and easy to add up.

Of course, these rubrics are handed down by organisational bodies at the top, which is fine, but the decision about where to place a student against them can be incredibly subjective, and the quality and depth of feedback they offer as a standalone resource is limited.

It is my view that rubrics of this nature–inherently simple, generic, and easily replicated by AI–undermine the very point of a rubric: to provide students and teachers with a transparent, objective framework from which performance can be assessed and students can be guided.

I have been working on Mark My Words, a platform that employs AI to ease the marking load for teachers, for over three years. When I first started building it, I actually implemented a model that could help teachers build a rubric–you can see this in action below. I was obsessed with this as a time-saving solution. However, the more we tested the product and the more familiar I became with the inner workings of AI, the more I understood the sanctity of my teaching expertise. Inputting one prompt into a system could never produce an output that is exactly what I wanted. If my experience was to be injected anywhere, this was it.

For this reason, I removed this function from the software and now encourage users to invest the time in building their ideal assessment rubric, then use the platform to save time down the track.

Sure, it adds friction, but I want Mark My Words to be teacher-led. If you don’t want to start from scratch, then interact with another AI tool and tinker with its output until it may as well be yours, or try one of the pre-written rubrics we have available on Mark My Words. When it comes to AI in teaching, it is key to remember that you’re the one with the knowledge–the AI should be guided by you, not by itself.