Evening all,
I arrived home last week to the Great Brown Southern Mudpie hideously jetlagged and recovering from a nasty cold/flu/alien bacterial possession of some sort, which knocked me around all week while I also got back to my beautiful mountain of work.
But having regained my typical levels of vigour, I now feel compelled to address something which lurched out of the internet at me after my last post. A friend of the blog dropped me a line to let me know that my views on low stakes quizzes had not been well received by one of the presenters at the conference, and had genuinely upset the speaker. Now, as it turns out, I wasn't at the talk in question, and I wasn't commenting on their talk or its content in any way. That's generally how this blog works: I talk about things that I am aware of. Nonetheless, that didn't stop a white knight bravely dashing in to defend the presenter.
In short, our white knight got his facts wrong, misrepresented me, and made himself "look like a twat" as a result. Pretty smart, hey? I also note that he hasn't apologised for his error. Next time you go off half-cocked, Mr White Knight, I hope you act with more grace. 🚩
Meanwhile, another interlocutor launched in with a far more reasonable-sounding, but still unreasonable and baffling, statement that "Sweeping statements about an approach's authenticity, applicability, and inclusivity are actually anything but inclusive, as they serve to exclude voices." Aside from this having little connection to what I actually wrote, it strikes me that if you have multiple academics leaping to your defence, you are not at all excluded. You are very much in the room. Apparently, I'm not. It seems to me that academics pointedly excluding professional staff voices with relevant expertise they don't wish to hear is a bad look, and the opposite of inclusion.
However, in the interests of self-reflection, let's take a look at the relevant part of what I wrote, shall we? You can be the judge about whether I crossed a line.
Phill and Tom did a simple demo of how an online quiz could be answered with browser-based tools such as Copilot in about 7 nanoseconds, but the day before I watched a talk encouraging folks to embrace low stakes quizzes. WTF? These have been totally borked forever, but even if you started with Covid these are some of the most cheated assessments going. In the year of our Lord Twenty Twenty Four (I'm an atheist, roll with me here), why would anyone be heading toward that form of assessment with a straight face? Say it ain't so!
I stated a fact: such assessments can be cheated easily and quickly, as was demonstrated by Phill Dawson and Tom Corbin. I also stated a fact, in my fashion, that students do exactly that! Now, I hear and understand the arguments for low-stakes assessments as learning ("the testing effect"), but what effect is there if students simply cheat the quizzes? At what rate of cheating would these quizzes become a problem in both learning and assessment terms? This occurred at massive scale during Covid, but was clearly occurring before then. What is being learned if large percentages of students (research results vary) cheat on these quizzes? Moreover, in a degree grading system such as the UK's (H1, 2:1, etc.), how can those grades have any real meaning when a significant proportion of your assessment is compromised? It's performative, and arguably fraudulent, despite the best of intentions.
I'm not saying don't use quizzes, but disregarding the likelihood that learning isn't occurring (because of cheating) is as bad as someone like me assuming everyone cheats, isn't it? By all means use these assessments if it suits your broader institutional assessment strategy. But adopting low stakes quizzes at the subject/module level right now is meant to achieve what, exactly? I would argue it won't achieve what is hoped by its introduction, unless it is part of a broader strategy in which the trade-offs are well considered. Of course, I'm open to being contradicted. I can be as wrong as anyone else, and I sensibly change my mind based on new information.
I'm going to end by repeating something I wrote in the last post: "[In] my mind happily sitting in your garden and pretending everything around you isn't on fire isn't a professional or scholarly thing to do. It's daft and childish". And that includes scolding people who actually want to help you fight your fires. You may not like the way I say something, but reflexively and unthinkingly declaring undesirable information exclusionary is ridiculous, and unbecoming of "scholars." It's nothing more than intellectual cherry-picking: ignoring information that doesn't fit your theories, preferences, and preconceptions, eternally immune to challenge.
Until next time,
KM
P.S. For the record, I emailed the speaker who was upset by my post, with the following explanation and apology:
"Hi [Name],
We don't know each other but I thought I'd better write. Phill Dawson alerted me that the word had got round that my blog post yesterday was referring to your talk at AHE. I just wanted to let you know that it wasn't referring to your talk at all. I actually didn't see the talk (it's a massive buffet of choice as you know!). I certainly wouldn't comment on something I didn't see. So please don't take anything I said as any kind of reflection on the quality of your work.
But more importantly I wanted to sincerely apologise to you for the mix up. I'm terribly sorry that my stupid blog made you feel put down, or denigrated, or...
Phill told me that you are an early career researcher. I have a few rules in life, and don't punch down is one of them. If I were at the talk, and was aware you were ECR, I would have taken another tack simply for that reason.
There are a whole bunch of good things about various types of assessment, I have the unfortunate pleasure of seeing an endless stream of bad stuff, and it colours my views a touch.
If we cross paths again, I hope I can shout you a coffee or a tea or a pint."
Sadly, I didn't receive a response, and that's entirely their prerogative.
Well, first up, the lack of response is poor form.
Carrying on a little from our Twitter interactions on this, I think it comes back to the set of constraints that have to be applied. On the one hand, we might be able to classify assessment methods and assessment regimes in terms of their pedagogical efficacy in isolation, i.e. on the assumption that students behave as expected and complete them in good faith. That's where arguments about cog sci vs educational theory might play out. Obviously, I don't hide the fact that I prefer empirically-based arguments to fetishising triangles some guy drew in the 50s or extrapolating from largely discredited theories of learning (from a pre-neuroscientific age) to theories of pedagogy…