By Dennis Sale

In the previous column, I outlined the key implications of AI for both student learning and the practices of teaching, suggesting that the benefits outweigh the concerns if AI is implemented thoughtfully, from an evidence-based approach. In this column, I apply the same analysis to the thorny issue of assessment.

The quality of assessment practices is probably the most significant aspect of a curriculum in terms of its perceived value. In the most basic terms, if assessment practices are lacking in quality, what value can be placed on the qualifications they accredit? As Ramsden (1992) pointed out:

“Assessment defines the curriculum…It sends messages about the standard and amount of work required, and what aspects of the syllabus are most important.”

Similarly, Boud (1988) observed:

“There have been several notable studies over the years which have demonstrated that assessment methods and requirements probably have a greater influence on how and what students learn than any other single factor.

“This influence may well be of greater significance than the impact of teaching or learning materials.”

How technology is impacting assessment practices

At present, technology-based assessments provide immediate and precise descriptive feedback on student performance, enabling both diagnostic capability and the personalisation/differentiation of instruction.

For example, Learning Analytics (LA) tools can collect, analyse, and present students’ performance data in highly visual ways, both enabling rapid and focused feedback and guiding instructional interventions.

The specific teaching and learning benefits include:

  • Identifying learners’ understanding and performance levels in designated learning areas and tasks

  • Diagnosing learners’ knowledge gaps and misconceptions

  • Customising and personalising instruction to meet individual learner needs in specific conceptual/skill areas

  • Providing an ongoing evidence base for future instructional planning.

As AI tools increasingly automate much of the assessment process, they free up instructional time for the teaching faculty. Also, even for more complex technical and cognitive skills, computer-based simulations can provide data that facilitates real-world, performance-based assessment. This is especially important as approaches to assessment incorporate more performance-based/authentic tasks, which focus on assessing complex performance in real work (or simulated) contexts.

The concern

The big concern is that for many ‘take home’ assessments set by teachers, students are now able to produce excellent work in minutes rather than days, without much cognitive effort on their part. This totally violates a core principle of good assessment – authenticity.

Of course, this is not a new concern: any ‘take-away’ assessment can be completed by others (e.g. parents, friends, other family members) rather than by the student; hence, efforts to contain plagiarism did not begin with AI. However, AI takes this concern into another league of assessment problems.

Of course, there is much educational merit in students doing assignments outside of classroom time and working collaboratively with other students, as this reflects the real world of work.

In solving complex problems, thoughtful people access various information sources, analyse data, evaluate options, and then derive better solutions. They do not rely solely on what is already memorised in their long-term memory systems; it’s an iterative process between prior learning and new learning.

In the present and emerging AI context, the allocation of marks and grading in summative assessments (those that define access to future educational pathways and employment opportunities) to work done outside of school control will become increasingly problematic.

Are there evidence-based practical solutions?

There is, of course, no single uncontested solution, and value judgements are involved. Bear this in mind as context, as I offer what I see as viable approaches to addressing the problem, both practically and with a defensible validity base.

Firstly, assessment is not just about making summative decisions for the purposes of grading and selection; it is also about assessment for learning and assessment as learning – typically referred to as formative assessment. This involves greater collaboration and transparency between teachers and students, as well as among students.

Peer instruction (e.g. Mazur, 1996) and team-based learning (e.g. Sibley & Ostafichuk, 2014) are becoming popular, as they offer significant evidence-based learning benefits – as summarised by Knight (1995):

“The key to the use of assessment as an engine for learning is to allow the formative function to be pre-eminent. This is achieved by ensuring that each assignment contains plenty of opportunities for learners to receive detailed, positive, and timely feedback, with lots of advice on how to improve.”

As I see it, there is less concern with students using the full range of EdTech and AI tools as a significant part of the learning process – that is, in collecting, curating, and summarising information. However, assessments made in this context should focus on formative rather than summative assessment, for the reasons stated above.

Of course, we want our students to be self-directed learners who have integrity and resist the temptation to cheat, but we must recognise that there is both a reality and an ideal – and we need to navigate this terrain in the interests of all stakeholders.

Secondly, I see merit in using an open-book exam format: students sit in an exam setting – classroom, studio, lab and so on – with the traditional controls, but with full access to a range of designated support resources.

The open-book format both ensures authenticity and enables assessment tasks to reflect real-world activities, so that students produce performance evidence that is valid and sufficient for the areas being assessed.

For example, if students are asked to prepare a voyage plan for a ship sailing from Southampton to Cape Town, they would need to demonstrate the full range of key content understandings, critical thinking skills, and technical competencies involved in the task. They would not need to memorise the weather conditions, tide times, or port regulations, as they can obtain these from the resources provided.

In summary, this is the arena in which educational policymakers and practitioners must negotiate and evolve practices that are both consistent with quality assessment decisions (i.e. valid, reliable, fair, and sufficient) and adaptive to emerging AI realities. Challenging, yes. Achievable – of course.

  • Dennis Sale worked in the Singapore education system for 25 years as Advisor, Researcher, and Examiner. He coached over 15,000 teaching professionals and provided 100+ consultancies in the Asian region. Dennis is author of the books Creative Teachers: Self-directed Learners (Springer 2020) and Creative Teaching: An Evidence-Based Approach (Springer, 2015). To contact Dennis, visit dennissale.com.