Artificial intelligence (AI) remains a source of both concern and promise across industries in 2019. While the potential efficiency gains are huge, issues of algorithmic bias, digital redlining, and other unintended consequences have already come to light. As AI, in the guise of personalized learning, makes its way into more and more American classrooms, stakeholders need to move forward with caution. The Oxford Handbook of Ethics of AI, due out later this year, might be a good place to start. Elana Zeide, the PULSE Fellow in Artificial Intelligence, Law & Policy at UCLA, has written a chapter focusing specifically on AI in personalized learning and education. The chapter, “Robot Teaching, Pedagogy, and Policy,” was posted to SSRN earlier this month.
It’s difficult to discuss personalized learning and AI in education with much certainty, and experts typically resort to speculative language. Take the headline “China’s AI Teachers Could Revolutionize Education Worldwide”: they haven’t done it yet, but they might.
Many Have Raised Operational Concerns About Personalized Learning and AI. Zeide Provides the Ethical Dimension.
Zeide generally steers away from such speculation, focusing instead on how personalized learning and AI fit into the classroom and where they present ethical issues.
Many either scorn or worry about the idea that AI could replace a teacher. In Zeide’s view, however, that possibility is already real. As she writes, “Teachers provide differentiated instruction by: (1) observing student performance; (2) assessing progress; and (3) informing and evaluating their real-time pedagogical decisions about the response most likely to promote student success.” AI has already performed each of these tasks with varying degrees of efficacy.
Zeide’s primary concern in the chapter is not how effectively AI can teach students but how the use of these tools runs up against best practices at existing institutions.
With this in mind, there’s one big differentiator between human and AI teachers: you can ask a human why they made the decisions they did. With robots, not so much.
“Legibility and contestability of algorithmic determinations are important for students, parents, teachers, and administrators,” Zeide writes. “Students’ and parents’ rights to access and challenge personally identifiable student information in school records has been a core component of student privacy policy for forty-five years. Teachers and administrators need a sufficient understanding of a software’s inner working to exercise their professional judgment about its use.”
Democratic Institutions and Outsourcing
Moving beyond this, personalized learning and AI in the classroom can conflict with how schools operate. Zeide describes schools as highly diverse and democratic institutions. Sure, federal and state policy makes broad prescriptions, but the vast majority of the nitty-gritty is handled by school leaders, administrators, and teachers on the ground.
“The highly public and participatory nature of these pedagogical and policy choices stands in stark contrast to the black boxes and invisible infrastructures of personalized learning systems,” Zeide writes.
Critics might say that these democratic institutions have long operated in partnership with private companies.
Zeide argues, however, that private outsourcing takes on new dimensions when it involves personalized learning.
“Schools outsourcing functions to private vendors is nothing new. However, until recently, much of this outsourcing related to ancillary services that supported schools’ core education functions. Most third-party software performed organizational and institutional processes, not academic ones. With automated instructional tools, however, educators can delegate the entire instructional process, including the pedagogical and policy decisions that shape school curricula, metrics, and standards. Communities should be free to do so if desired. However, this displacement of authority and accountability should happen with the same consideration and scrutiny applied to textbook and standardized test selection.”
The fact remains that no high-profile, smoking-gun example of algorithmic bias or misuse of AI has been identified in education.
But, Zeide concludes, “Schools and education policymakers should not wait for public outcry but take a more proactive approach.”
Read the full chapter here.
Source: ElearningInside