Learning community means even more in the age of AI

The other day an FT article caught my eye about the “Gen Z trend” of something called “admin nights” – a sentence that, I can assure you, makes me feel about a hundred years old. Apparently you can find out more about it on something called TikTok.

Anyway, the concept is simple and it’s genius: all those life admin tasks you’ve been putting off? Get your mates round and devote some time to getting them sorted. The FT journalist decided to give it a try, recruited some colleagues as lab rats, and observed two things. The first was that doing cognitively laborious tasks is just easier when you’re in a group – there’s a mixture of accountability and community that really helps with individual motivation.

The second was that as some people found they were working on similar issues, like budgeting, or managing their pensions, they started to break into smaller groups to discuss how best to do that. Together, they consulted web pages and LLMs for advice, before discussing the pros and cons of what the devices spat out – some of which was on point, and some of which was biased (which it probably takes an FT journalist to catch).

This scenario, it seems to me, offers a prototype of what the best version of an AI-enabled education might look like. But it’s also the version that seems most at risk of eroding under the current system logic.

At a time when institutions are under financial pressure, AI offers a high-volume, low-touchpoint model of education that is, nevertheless, deeply tailored to the individual. It can generate learning content that aligns with your personal interests, respond immediately to your specific questions, and create assessments that are calibrated to your current competence. In one sense it offers a level of personalisation far beyond what any human educator could provide, even in a very small and resource-intensive learning environment.

For some, not having to navigate the messy human interactions of learning – with all their potential for misunderstanding, miscommunication, and bias, the expectation to negotiate a common agenda, and the need to show vulnerability – might be a real strength of AI-enabled education. But that logic leads us to a very chilly place indeed.

The feedback conundrum

Our working premise for our Secret Life of Students event, Learning to be human in the age of AI, is that AI shifts the balance of educational value away from cognitive “compute” (pulling together disparate sources, spotting patterns, structuring information in a way that is accessible to others) to the things that are only realised as having value in a pedagogical context when humans do them. That doesn’t mean that humans never need to know how to do any of the “compute” tasks in learning or in life, or that we can henceforth cognitively offload all such tasks to machines. But it could mean that in thinking about what learning “means” we collectively, over time, place a lower overall value on “compute” than we did before, and a higher value on the human bits.

Thinking about it this way helps frame an answer to something that’s been playing on my mind for a while: students’ suspicion of the idea of academics using AI to help them assess and give feedback. From a purely pedagogical perspective this makes no sense. If you want feedback that is deeply engaged with the detail of your output, turned around quickly, and applied fairly and consistently, AI is actually in many ways a better bet than an academic. So what’s the problem?

One answer is that if the work is graded and that grade contributes to your final mark, you don’t have enough transparency about the process to be confident that the AI knows what it’s doing or is actually applying the criteria fairly and consistently. But I also hypothesise that it really matters to students that academics “see” their work, and the effort it took, and by extension, them. It’s not just a moment of human connection per se, it’s confirmation that they have done enough, that they belong here. Outsourcing that moment of validation and personal attention to a machine feels… icky.

Great job

So academics just shouldn’t use AI to do feedback, right? Or if they do, they should take care, as students do, to make it look like they haven’t? Well, take a pause before letting that be the final word on the topic. The way feedback is currently organised in a mass HE system tries to make human academics behave as much like machines as possible, in the sense that the job of feedback is the consistent application of a marking rubric, delivered within a defined timeframe. Just letting the machine do it seems a logical extension of that premise. And to the extent that feedback practice is an artefact of a higher education model in which students learn, then produce an assessment, then receive a grade and some feedback, the advisability of using AI to help produce that feedback is the wrong question.

We’ve grown used to thinking about connection and community as forming an advantageous environment in which individuals can pursue knowledge work. We know that when students feel lonely and isolated and lack a sense of belonging they are much more likely to give up on that work, or to struggle to engage with it – just as we’re more likely to tackle that tricky life admin task if we have a friend doing theirs alongside us.

At the same time we celebrate the “added value” that comes from learning in a group setting: opportunities to collaborate with others; communication skills; a sense of the responsibilities that come with being a participant in a community – and sometimes agonise about how to meaningfully represent the value of these things within the constraints of assessments and standards frameworks. But this positions human interaction as a positive side-effect of learning, not so much its core.

Making space for the human when machines can do some of the classic cognitive compute work means you have to take learning community seriously as a pedagogic tool. Machines may be able to simulate empathy and connection, and can look like they are taking a critical perspective, informed by ethical positions. They can even appear, via digital twinning, as if they have a situated identity of some kind, and negotiate from that perspective. But they offer all of that in a very “safe” way that is fundamentally just not real – and not necessarily in a way that causes students to confront difficult or contrary propositions, or to find common purpose with others who have different experiences and motivations, or to feel a sense of positive connection when they experience it.

It continues to matter that humans do these things because doing things together, cooperatively, is an essential part of what it means to be human – on a practical as well as a romantic level, in workplaces, and public debate, and for the survival of the species. Doing them to a high level, consciously and intentionally, is very much a critical “output” of a higher education experience – it’s just that, maybe because we’re human, we tend to take them for granted.

It’s also in these human capabilities that, arguably, human feedback really matters for growth and development. And the ability to give and receive feedback human-to-human, without the intermediation of a machine, with all the anxiety and vulnerability that implies, becomes a fundamental part of a higher learning experience.

In the future, hearing from an academic that you did a great job negotiating that moment of in-class disagreement, or that you really stepped up with offering an alternative perspective to that offered by the campus LLM – or from a peer that they really enjoyed collaborating with you to solve that problem, or an employer that you really worked at building a relationship with that difficult client – could offer a psychologically important balance to the AI readout of whether you hit the marking criteria for the cognitive compute bit of your degree.

This isn’t a call to roll back from pursuing the AI-led personalisation approach – calling back to the opening example of the admin night, we’re entering a magical world where it will hopefully be possible to have both AI enablement and a thriving learning community, seamlessly integrated. My worry is that we’re staring down the barrel of a future where the cost of human, in-person interaction means it is viewed as an elite luxury rather than a vital element of a rounded HE experience in a mass system. Undervaluing learning community – in the sense of failing to find ways to recognise human-based learning outcomes and only pursuing the path of doubling down on a more intensively personalised experience – takes us to a pretty dark place.
