The Promises and Perils of Generative AI in Education: TFA's Evolving Perspective
While recent breakthroughs in artificial intelligence (AI) show immense promise for education, they also carry significant risks. Teach For America’s broad and diverse network of teachers, school leaders, systems leaders, and policymakers is uniquely positioned to help guide the responsible and equitable adoption of AI in education.
Just as this field is evolving at a rapid pace, so too is our thinking about AI and education. The following are our thoughts as of August 2023.
Opportunities of AI in Education
While the field of generative AI is broad, we have homed in on a few opportunities for pre-K to 12 education. This potential will only be realized with intentional design, investment, and leadership.
1. Educate all students about AI: AI will significantly shape how we live, work, and relate to one another. Therefore, it’s critical for young people to develop an understanding of how these technologies work in order to use them responsibly in their learning and careers. Since it is still early in this disruption, marginalized youth could be positioned out in front in understanding and leveraging AI as a means to bring greater equity and economic mobility, rather than racing to catch up to their more affluent peers. Digital citizenship, AI literacy, and the integration of responsible AI practices into pre-existing curricula will become increasingly important.
2. Leverage AI to support educator development, efficacy, and efficiency: With the right pedagogical approach, generative AI tools can be effective teaching aids and save educators hours on administrative tasks. Educators can use AI to create class outlines, rubrics, and exit slips, generate ideas for classroom activities, and even update curricula based on the latest breakthroughs in their subject areas. They can also leverage AI to differentiate for students’ varying interests, reading levels, and needs. Generative AI can likewise provide teachers with real-time, actionable feedback on their teaching practice; for example, it can produce post-lesson reports with metrics on student speaking time and suggest topics or questions to spark deeper engagement. (A brief illustrative sketch of one such workflow appears after this list.)
3. Empower youth and educators to be creators of AI and shape AI development: According to the Consortium of School Networking’s 2023 State of EdTech Leadership report, the vast majority of those leading in edtech are white, male, and between the ages of 40 and 59. Given the rich diversity and generational shifts in our schools, we must create opportunities for diverse youth, educators, and school leaders to shape how technology companies build with AI, as well as empower those most proximate to educational inequity to actively build their own solutions.
4. Harness AI to help more fundamentally reinvent school: Previous technology waves (e.g., the advent of personal computers and the internet) were largely assimilated within century-old, conventional schooling methods, such as siloed subject areas and whole-class, age-based instruction. Breakthroughs in AI pose a unique opportunity to transform what, how, and with whom young people learn, unlocking the potential for greater student agency, creativity, and higher order thinking. Young people have a new opportunity to leverage AI tools to drive their own learning, both on assigned topics and their own curiosities and passions. Some teachers believe that AI can shift students away from teacher-constructed prompts to more in-class time for inquiry, community building, and teacher coaching. Leveraging AI to meaningfully advance learner agency, real-world relevance, and customization requires a deeper redesign of the structures of our current system, as well as thoughtful approaches to innovative design, experimentation, and applied research.
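To make the educator-facing use case in item 2 more concrete, below is a minimal sketch of how a teacher-support tool might ask a generative model to draft an exit slip and a simple rubric. This is purely an illustration, not a recommended product or workflow: it assumes the OpenAI Python SDK (v1+), an `OPENAI_API_KEY` set in the environment, and a placeholder model name, and any output would still require the kind of human review for accuracy and bias we describe in the risks section below.

```python
# Minimal sketch: drafting an exit slip and rubric with a generative model.
# Assumptions: OpenAI Python SDK v1+, OPENAI_API_KEY set in the environment,
# and the model name below; the educator reviews every draft before use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_exit_slip(topic: str, grade_level: str, reading_level: str) -> str:
    """Ask the model for a short exit slip plus a simple rubric,
    differentiated to a stated reading level."""
    prompt = (
        f"Write a 3-question exit slip for a {grade_level} lesson on {topic}. "
        f"Target a {reading_level} reading level, then add a simple 3-point "
        f"rubric a teacher could use to score the responses."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; use whatever is available
        messages=[
            {"role": "system", "content": "You are a helpful instructional coach."},
            {"role": "user", "content": prompt},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # The draft is a starting point, not a finished artifact: the teacher
    # checks it for accuracy, bias, and cultural fit before students see it.
    print(draft_exit_slip("photosynthesis", "7th-grade science", "6th-grade"))
```

The point of the sketch is the division of labor: the model drafts, and the educator reviews and adapts, keeping professional judgment at the center.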
Risks, Harms, and Obstacles with AI in Education
We are also cautious about generative AI tools and how and when they should be used in pre-K-12 education. When used inappropriately, AI can harm students and educators. We have identified a few issues we believe must be taken seriously:
Models generate biased and harmful content: ChatGPT and other generative AI tools exhibit many harmful biases, reflecting racism, sexism, ableism, and other systems of oppression. Last year, Steven T. Piantadosi, a professor at the University of California, Berkeley, uncovered several such instances, including responses in which ChatGPT said that “only White or Asian males make good scientists” and “that a child’s life shouldn’t be saved if they were an African American male.” Because of these biases, AI models do not authentically represent or understand a diverse range of students and are often devoid of context. If educators use AI to automate tasks like grading or giving students feedback on their work, it will be critical for them to review the outputs for racial, cultural, or other types of bias. Stanford Professor of Education Bryan Brown speaks to the importance of “students receiving cues of cultural belonging,” and current generative AI tools do not center the intersection of language, culture, and cognition needed to accomplish that.
Spread of inaccurate information: In some cases, AI models generate completely fabricated information, a phenomenon researchers describe as “hallucination.” They can produce coherent, elegant text, and that same text can peddle conspiracy theories and cite fake scientific studies. In teaching and learning contexts, ChatGPT has produced content that models high-quality teaching techniques (like positive reinforcement) yet still arrives at the wrong mathematical answer. Moreover, because ChatGPT is positioned as an alternative to search engines like Google, offering a single synthesized response rather than a list of sources for students to evaluate, students can become particularly vulnerable to disinformation and weakened information-literacy skills. Its conversational tone and sense of authority can lead students to over-rely on its accuracy. On a positive note, some teachers are using the false information generated by AI models as an opportunity to create media-literacy activities for their students.
Evaluating authentic student work: Discerning between original student work and AI-generated content can feel like an overwhelming responsibility for educators. According to the Washington Post, approximately 2.1 million teachers in the U.S. are using Turnitin’s new AI-detection tool to catch students cheating or plagiarizing with ChatGPT. Unfortunately, the tool isn’t very accurate or reliable, and it has falsely accused students of wrongdoing. Banning and overly monitoring the use of ChatGPT not only risks penalizing students who are not using the tool; experts believe it also hurts students’ development of critical literacies around emerging technologies. Students will need to understand how to use these tools, their strengths and weaknesses, how to steward them ethically, and how they can be misused. ChatGPT and other generative AI tools should be treated much like calculators: permissible for some assignments, but not when the objective is to assess building-block skills rather than the application of those skills. If students are not supervised in person, it should be assumed that there is a good chance they are using AI. Looking ahead, assessment strategies will need to be revised to account for AI while upholding academic integrity.
Data privacy and rights: New AI models rely on ingesting massive amounts of information. For certain use cases, they require sensitive information from students and educators, and how that information is used, or potentially sold, is concerning. At a minimum, students’ data privacy should be protected in accordance with COPPA, FERPA, and other data protection laws.
Looking Ahead
This piece represents our initial thinking and informs our own emergent strategy around AI. We know that others in our sector are also engaged in this exploration, and as we continue our own learning, our insights and recommendations will evolve.
We’re in the early stages of this wave of AI development. We’re 15 years into the smartphone revolution, and our society is still coming to terms with the changes it has brought. Many researchers argue that the changes brought about by AI will be even more profound.
That said, while technological change can feel inevitable, how we use technology and how it shapes our day-to-day is anything but predetermined. All of us—educators, students, and parents—have a role to play in shaping how AI is designed and adopted by our teachers, students, and educational systems.
The perspectives articulated in this piece were shaped by many leaders and thinkers across Teach For America in consultation with trusted external researchers and partners. The piece was written by Ariam Mogos, Yusuf Ahmad, and Michelle Culver.
Research & Insights About AI in Education
We are especially grateful to the following researchers and authors for insights from these sources:
- CoSN (2023). 2023 State of EdTech Leadership Survey. Retrieved August 7, 2023, from https://www.cosn.org/edtech-topics/state-of-edtech-leadership/
- Rose, J. (2023). AI will not transform K-12 education without changes to the grammar of school. The 74. https://www.the74million.org/article/ai-will-not-transform-k-12-education-without-changes-to-the-grammar-of-school/
- Staff, N. (2023, March 31). Study: 30% of college students have used ChatGPT for essays. GovTech. Retrieved April 10, 2023, from https://www.govtech.com/education/higher-ed/study-30-of-college-students-have-used-chatgpt-for-essays
- Roose, K. (2023, January 12). Don't ban ChatGPT in schools. Teach with it. The New York Times. Retrieved April 10, 2023, from https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html
- Chen, C. (2023, March 9). AI will transform teaching and learning. Let's get it right. Stanford HAI. Retrieved April 10, 2023, from https://hai.stanford.edu/news/ai-will-transform-teaching-and-learning-lets-get-it-right
- Anderson, J. (2023, February 9). Harvard EdCast: Educating in a world of artificial intelligence. Harvard Graduate School of Education. Retrieved April 10, 2023, from https://www.gse.harvard.edu/news/23/02/harvard-edcast-educating-world-artificial-intelligence
- Ferlazzo, L. (2023, January 27). ChatGPT: Teachers weigh in on how to manage the new AI chatbot (opinion). Education Week. Retrieved April 10, 2023, from https://www.edweek.org/teaching-learning/opinion-chatgpt-teachers-weigh-in-on-how-to-manage-the-new-ai-chatbot/2023/01
- Fowler, G. A. (2023, April 3). Analysis | We tested a new ChatGPT detector for teachers. It flagged an innocent student. The Washington Post. Retrieved April 10, 2023, from https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/
- Hsu, T., & Thompson, S. A. (2023, February 8). Disinformation researchers raise alarms about A.I. Chatbots. The New York Times. Retrieved April 10, 2023, from https://www.nytimes.com/2023/02/08/technology/ai-chatbots-disinformation.html
- Mollenkamp, D. (2023, March 10). What the White House "AI Bill of Rights" means for education. EdSurge. Retrieved April 10, 2023, from https://www.edsurge.com/news/2022-10-14-what-the-white-house-ai-bill-of-rights-means-for-education
- Stanford faculty weigh in on ChatGPT's shake-up in education. Stanford Graduate School of Education. (2023, February 2). Retrieved April 10, 2023, from https://ed.stanford.edu/news/stanford-faculty-weigh-new-ai-chatbot-s-shake-learning-and-teaching
- GOV.UK. (2023, March 29). Generative artificial intelligence in education. Retrieved April 10, 2023, from https://www.gov.uk/government/publications/generative-artificial-intelligence-in-education?trk=public_post_comment-text
- Heaven, W. D. (2023, April 7). ChatGPT is going to change education, not destroy it. MIT Technology Review. Retrieved April 10, 2023, from https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/
- Alba, D. (2022, December 8). ChatGPT, OpenAI's chatbot, is spitting out biased, sexist results. Bloomberg.com. Retrieved April 11, 2023, from https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results#xj4y7vzkg
- Tran, T. H. (2022, December 6). OpenAI's impressive new chatbot isn't immune to racism. The Daily Beast. Retrieved April 11, 2023, from https://www.thedailybeast.com/openais-impressive-chatgpt-chatbot-is-not-immune-to-racism