In this week’s reading, Nicol and Redecker & Johannessen discuss many interesting innovations, cases and frameworks for assessment. I found one common theme tying all of these frameworks together: a movement towards greater learner self-regulation.
Early in his analysis, Nicol identifies that he is examining assessment frameworks and feedback principles that would foster greater learner self-regulation. His analysis focuses on setting goals, enabling self-assessment and reflection, delivering high-quality feedback, encouraging discourse and dialogue, tying feedback to motivation, closing the gap, and allowing feedback to shape teaching. However, his perspective bears the influence of communication theories, and despite his discussion of using MCQs to engage learners in higher-order thinking, I found that the assessment still seemed mechanical because the human element was secondary to the technology. Perhaps the reliance on big data, electronic voting systems and even more mature tools such as multiple-choice item development assignments was too quantitative, and a qualitative dimension of assessment was being neglected. Even though Nicol states that the need for new assessment tools and techniques was not driven by technology, I still found a technological determinism in the tools and techniques that were selected and discussed (i.e. only those that could be improved by technology were chosen).
Redecker & Johannessen also stress the need for greater student self-regulation in order for formative and summative assessment models to be successful. They discuss assessment packages currently being developed for Learning Management Systems that integrate automated real-time data gathering, analysis, feedback and self-assessment into the student interaction model. Moreover, their discussion of Learning Analytics and Intelligent Tutoring Systems once again proposes sophisticated models of self-regulation (with elements of self- and peer assessment).
I was not entirely convinced that these self-regulated and (variously) automated models could replace the human element in the assessment relationship. Although neither set of authors argues that the teacher should be replaced by technological solutions, they do lean towards the efficacy of quantitative, big-data and algorithm-driven interactions between teachers and learners, and among learners and their peers. Can a sophisticated heuristic algorithm that gathers big data, performs billions of calculations per second and interacts with each student using highly evolved artificial intelligence replace the slow and fallible human element?
Eric Mazur: Why You Can Pass Tests and Still Fail in the Real World (9:32)
Nicol, D. (2007). E-assessment by design: using multiple-choice tests to good effect. Journal of Further and Higher Education, 31(1), 53–64.
Redecker, C. and Johannessen, Ø. (2013). Changing Assessment — Towards a New Assessment Paradigm Using ICT. European Journal of Education, 48(1), 79–96. doi: 10.1111/ejed.12018