AI In Schools: Revolution Or Risk For Black Students?

1 in 4 educators plans to increase the use of AI in the classroom, but experts warn it could worsen long-standing race and equity issues. (Credit: Pexels/Polina Tankilevitch)

by Joseph Williams

Hailed as cutting-edge technology that will revolutionize teaching, next-generation artificial intelligence is coming to a classroom near you — if it hasn’t already arrived. Advocates say computer programs and algorithms can help educators with everything from lesson plans to absenteeism, and can even spot students who are using AI to cheat. 

But the approaching AI wave in middle- and high-school education could swamp Black students.

Experts say the increasing adoption of AI tools threatens to worsen long-standing race and equity issues — including racial bias in lessons and grading, the over-disciplining of Black boys, the continued under-resourcing of majority-minority schools and the “digital divide” between Black students and their white peers. 

Yet as increasingly advanced machine-learning technology becomes more widespread, analysts say, school administrators and elected officials are behind the curve in crafting policies and standards for AI in the classroom that would ensure equity and protect Black students.   

“We already know about the bias issues with AI,” says Victor Lee, an associate professor at Stanford University Graduate School of Education. The risk of technologically induced bias, Lee says, “is quite high for schools to jump in too quickly.”

But surveys have indicated 1 in 4 educators plans to increase the use of AI in their classrooms — a data point, he says, that indicates the technology is “rapidly advancing on a scale we are not prepared for.” 

To be clear, schools and teachers have managed to adopt or work with advancing technology for generations — from pocket calculators in the 1970s to take-home laptops in recent decades. Even artificial intelligence has established itself in classrooms, including smartphone apps that can scan and calculate a student’s math equation and Google smartboards that recognize shapes and colors. 

Generative AI, however, is considered a game-changer for teachers and students. 

Education experts, including Matthew Lynch, point to multiple ways artificial intelligence is already relieving teachers of essential but burdensome tasks: grading papers, crafting lesson plans, and taking attendance. In other classrooms, AI programs analyze homework and tests to help teachers identify struggling students — and then put together a tailored plan to tutor them. And some school administrators are using facial recognition technology to identify and discipline students. 

However, research has shown that racial bias is often baked into artificial intelligence programs, lessons, and tutoring systems because they reflect the biases and blind spots of their designers — and the tech industry is predominantly white. Although Black Americans account for approximately 13% of all workers, they make up only 7.4% of digital workers, according to a 2023 report by McKinsey & Co. 

“Oftentimes tech companies didn’t really seem to understand the experience of Black and Brown students in the classroom,” Nidhi Hebbar, who co-founded the EdTech Equity Project, told The Hechinger Report in 2023. The EdTech Equity Project helps schools choose equitable ed tech products and hold companies accountable for tools that aren’t inclusive.

When tech companies build products for schools, she said, they join forces with affluent, mostly white schools or rely on their own educational experience. Their designers usually don’t consider, or struggle to imagine, the needs and experiences of under-resourced schools and Black students. 

That can mean AI classroom modules that shortchange Black history or fail to account for cultural differences in tutoring lessons and tests, experts say. Over-reliance on software programs in schools, they say, could misidentify struggling students, fail to process cultural references in an essay, or flag problems that don’t exist.

Lee says using AI for test proctoring, for example, relies in part on facial recognition software that’s often unreliable, particularly when examining Black and Brown faces. Assigning AI-based homework and lessons, he says, assumes that all students have equal access to computers and Wi-Fi at home to complete the work. And cash-strapped districts may be tempted to use AI as a quick fix for teacher shortages. 

Widely adopting AI technology “is already embedding a lot of assumptions” about the software, its efficacy, and for whom it is created, Lee says. “Bias exists in any creation,” he notes — like notebooks or scissors that are tough for left-handed people to use. 

Before schools and teachers go all-in on AI technology, he says, tough questions must be asked: “Are we thinking critically about this? Is the student prepared for the world?” 

“We want to be thoughtful about how to be inclusive,” he says.