As a child, I loved fingerpainting and eagerly awaited the weekly, colorful in-class activity. It wasn’t so much the art that compelled me; I loved the distinctive smell and visceral feel of the fingerpaint. The entire process felt like an exploration, and through it, I discovered my creativity.
It was messy, chaotic, and crucial, I think, for my development. A newer take on fingerpainting separates the child’s fingers from the paint: you splash the squishy colors onto a canvas, then seal the goop under plastic. The child pushes the colors around without ever actually touching them.
It’s clean, antiseptic, terrible, and a metaphor for what I think AI might be doing to learning.
My concerns were sparked anew by a recent and well-researched story in USA Today explaining “How AI is affecting the way kids learn to read and write.”
It’s full of details and anecdotes about how teachers are turning to AI in the classroom to help students, for instance, ideate. One teacher complained that the kids’ essay ideas were growing “stale,” so she’s having them use AI to help them come up with better ones.
Antiseptic AI learning
Forget brainstorming in the classroom, kicking around ideas big and small that might spark others. AI offers a valuable shortcut. It also cuts out the messiness of bad ideas. AI’s job is not to come up with answers randomly. The large language models (LLMs) behind ChatGPT, for instance, have billions of parameters and have been trained on vast amounts of text, giving them a broad grasp of a wide range of topics.
I often describe this as AI knowing “what comes next” better than we do. That works in reading, writing, coding, and art. It’s not always a clean process, though.
Early AIs (ones from 12 months ago) with somewhat limited training didn’t always understand that humans have five fingers on each hand, so we got six fingers and sometimes extra phantom limbs. Interestingly, we seem quite comfortable with AI learning through its own messy mistakes.
Literacy, the story notes, is dropping among grade school children largely because they’re doing less long-form reading; they mostly read on small screens when they’re not ingesting endless video scrolls. The pandemic also set almost all learning back by a few years.
Educators struggle with this, and AI has arrived as a handy tool for navigating many of these issues.
Students are also engaging in more back-and-forth with AI for research. While Boomers and Gen X might have used encyclopedias, Millennials and Gen Z have largely grown up using the web as a core research tool. They learned how to search on Google and, through trial and error, find the details they needed.
AI, though, is a conversation in which the response is presented as fact, and the student assumes it is. There is no expectation of error, even though mistakes can easily hide inside AI hallucinations.
Again, the engagement with a teacher and even other students is lost. Ideas no longer float in the ether. Questions are not shared among a group.
Let’s make mistakes
Good teachers used to say, “There’s no such thing as a dumb question.” Asking “dumb” questions was how we learned. Students using AI are shielded from that moment. They just type in the prompt and the AI responds.
We learn through trial and error, and studies have shown that young minds, in particular, need to learn from the messiness of mistakes.
In a 2016 study, Learning from Errors, researchers wrote, “Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning.”
In a world where students are paired with their own AI chatbots and self-navigate without any experimentation or flat-out mistakes, the conversation about why the work was wrong will never happen.
Exploration is lost on both sides: the student never learns the right way or how an error might lead to other reasoning dead ends, and the teacher never learns the best way to engage and teach that student.
Outside the classroom, students teach themselves how to use ChatGPT to produce essays and get the best results and grades. At least educators are hip to these efforts. In the USA Today story, one educator who caught on began running all the essays through AI checkers. Those are, of course, not foolproof.
The sad thing is that I’m not sure we can convince students and their parents that this lack of messiness, error-making, and feedback loops will harm the students. They will not learn as much, and I’m pretty sure their intellectual curiosity and creativity will be stunted.
How do we learn fresh things when our teacher is an AI, one that’s been trained on everything that came before and still isn’t that good at telling us what comes next?
Look, I am not anti-AI, but AI in the hands of children and young students is like the sealed fingerpainting kit: antiseptic, wrong, and the opposite of the beautiful mess that is learning.