Robot teachers in the classroom?
Things to think about before plugging students into the Matrix
Most schools still hire human beings to teach students. But artificial intelligence is rapidly improving, and the quest for an ideal “personalized learning” experience is ongoing.
Self-paced digital curriculum is already being used in some charter schools and microschools. It’s only a matter of time before robot teachers enter the educational marketplace.
Thanks to artificial intelligence, we can imagine a robot that creates unique lessons and assessments, immediately corrects student misunderstandings, and offers customized next steps for the learning process.
This robot would never get burnt out and quit. It wouldn’t need a retirement plan or health insurance. It would know more information about more topics than any human possibly could.
Sounds great, but what are the tradeoffs?
To think through the practical and philosophical implications of robot teachers, I want to draw takeaways from three interesting articles, each one looking at the situation from a different angle.
How to Get Kids to Read for Fun
This first piece is about reading instruction. It doesn’t mention digital technology at all, but I think it demonstrates one of the main problems with the techno-efficiency approach to education.
Natalie Wexler, an education writer and researcher, raises two concerns. First, kids aren’t spending much time reading for fun. Second, reading comprehension levels are low, according to NAEP data.
Wexler thinks these deficiencies are partly caused by what she calls a “well-intentioned but misguided approach to teaching literature.”
Too often, teachers will have students read short excerpts of larger texts, and then ask boring analytical questions like, “What is the main idea of this passage?” Or they might take a phrase from the text and ask, “How does this phrase affect the meaning of the passage as a whole?”
Why do teachers drill students with boring questions about snippets of larger texts? Because that’s how state standardized tests are formatted. That’s the curriculum they are given.
Schools have been giving students isolated bits of text rather than letting them sink their teeth into engaging novels, and they’ve prioritized teaching analytical reading skills over allowing kids to immerse themselves in a good story.
Wexler discusses a study in England where researchers had teachers read challenging novels aloud at a faster pace than usual. The study found that students in this sample group made huge strides in reading comprehension. Wexler believes reading gains go hand-in-hand with enjoyment:
... students’ emotional engagement in the material helped them retain the vocabulary they needed to understand the passages on the reading test. And that vocabulary, along with the realization that reading can be fun, could well lead them to engage in more reading for pleasure.
I guess there’s no reason why learning from a robot couldn’t be fun. A robot could read the novel instead of a human teacher. But emotional engagement is better practiced in human community.
In the real world, learning from a robot teacher would probably mean sitting in front of a computer screen, reading short excerpts and answering boring questions programmed to improve analytical skills.
Not all computer programs are mind-numbing, and some students will thrive with digital curriculum.
But we need to pay attention to the fact that students are human.
At the Other End of Your Educational Technology There’s a Student’s Brain
Freddie DeBoer has an unconventional take on education. He argues that it doesn’t work — at least not to deliver the results most people expect. As a Marxist, Freddie would be open to a school system that could spark social transformation. As an evaluator of educational research, he doesn’t see evidence that schools have the power to do anything of the sort.
Freddie doesn’t see a problem in education that new technology can solve. He thinks educational technology has been a “boondoggle” in the past, and he thinks new technologies will be similarly powerless to fix the “large-scale structural social problems” that people falsely believe schools can fix.
He is unimpressed with artificial intelligence in general, and he hates the media hype about its transformational potential:
So here’s the question for everyone saying that AI is going to revolutionize teaching: if we estimate that something like 90% of the variance in educational outcomes is student-side - that is, reflects some stable quality of individual students rather than their schools, teachers, or curriculum - how much difference can AI really make? If 10% is our estimate of what’s controlled, we can’t expect the delta of AI over actual teachers to be anything but single digits, and unless you think human teachers are adding literally zero value, we would expect it to be in the low single digits. Does that match the soaring horizons of the New Yorker piece? But it gets worse when you consider all the things that teachers do that aren’t matters of quantifiable education metrics. Teachers help students with socialization, make them feel welcome and safe, establish discipline, interact with parents, report potential neglect or abuse to social services, and do all manner of other essential tasks, for low money and a lot of disrespect in our media. Duolingo can’t do any of that. And if we were ever to actually pay attention to the evidence and reorient our education system - away from achieving learning gains that have never been achieved in the history of schooling and towards nurturing individual interests and strengths - then the value of kind and patient human teachers grows even more and the value of (thus far purely theoretical) AI teachers falls further.
The “soft skills” described here could just as easily be handled by a human worker (rather than a trained teacher) supervising students who are learning on a computerized curriculum. That’s how a lot of microschools operate today.
But even if a robot could teach in a more efficient way that also manages to keep students motivated, I think there is still something missing. In my view, academic outcomes are greater than what can be measured on a test. The best teachers I had in school made impressions on my mind, not just by what they taught but by how they taught it.
Education is not merely the uploading of information or the acquisition of skills. It is an immersion into human knowledge and human society.
Ideally, we would recruit enough good teachers to reach all students. Realistically, we’re going to see a whole lot of experiments with machines.
Render Unto the Machine
In this last piece, tech philosopher L.M. Sacasas contemplates whether artificial intelligence will replace human labor.
He recalls a story from the Bible:
In the gospels, there is a brief but memorable scene best known for its political ramifications. The story begins with religious leaders seeking to entrap Jesus with a question that would force him either to implicitly deny that he was the expected Messiah or to open himself up to the charge of treason against the empire: “Is it lawful to pay taxes to Caesar, or not?”
Jesus, conscious of their motives, asks for a coin. When they bring him the coin, Jesus asks, “Whose likeness and inscription is this?” They said, “Caesar’s.” Then he said to them, “Render therefore to Caesar the things that are Caesar’s, and to God the things that are God’s.”
In this way, the snare is avoided and the demands of Caesar are utterly subverted. What is Caesar’s? A piece of metal with his image. Give it to him. I imagine Jesus flicking the coin back at them. But what is God’s? Everything. Everything that matters. Perhaps more specifically, the life of the whole person. Just as the coin bore the image of Caesar, so in the Jewish tradition the human being bears the image and likeness of God.
Sacasas looks at society and sees a world shaped by the “principles of efficiency and speed and optimization and profitability.” Many people work jobs that are “formulaic, mechanistic, and predictable.”
Build a techno-social system which demands that humans act like machines and, lo and behold, it turns out that machines can eventually be made to displace humans with relative ease.
What can we do about artificial intelligence replacing human labor?
Render unto the machine what belongs to the machine, and reclaim for ourselves what is truly human.
The economic implications of artificial intelligence are impossible to predict, and Sacasas doesn’t try to predict them. Instead, he invites us to ponder a resetting of priorities.
Do teaching and learning belong to the realm of machines or the realm of humanity?
That’s for us to decide.