
The Promises and Perils of Generative AI in Education: TFA's Evolving Perspective

New artificial intelligence tools like ChatGPT can have a positive impact on students and teachers, but they also carry risks that must be managed.

August 11, 2023

Teach For America


While recent breakthroughs in artificial intelligence (AI) show immense promise for education, they also carry significant risks. Teach For America’s broad and diverse network of teachers, school leaders, systems leaders, and policy makers is in a unique position to help guide the responsible and equitable adoption of AI in education.

Just as this field is evolving at a rapid pace, so too is our thinking about AI and education. The following are our thoughts as of August 2023.

Opportunities of AI in Education

While the field of generative AI is broad, we have homed in on a few opportunities for pre-K to 12 education. This potential will only be realized with intentional design, investment, and leadership.

1. Educate all students about AI: AI will significantly shape how we live, work, and relate to one another. Therefore, it’s critical for young people to develop an understanding of how these technologies work in order to use them responsibly in their learning and careers. Since it is still early in this disruption, marginalized youth could be positioned at the forefront of understanding and leveraging AI as a means to bring greater equity and economic mobility, rather than racing to catch up to their more affluent peers. Digital citizenship, AI literacy, and the integration of responsible AI practices into existing curricula will become increasingly important.

2. Leverage AI to support educator development, efficacy, and efficiency: If provided with the right pedagogical approach, generative AI tools can be an effective teaching aid and save educators hours on administrative tasks. AI can be leveraged to create class outlines, rubrics, and exit slips, generate ideas for classroom activities, and even update curricula based on the latest breakthroughs in their field. Educators can leverage AI to differentiate for students’ varying interests, reading levels, and needs. Generative AI can also provide teachers with real-time actionable feedback on their teaching practice. For example, AI can produce post-lesson reports with metrics around student speaking time and topics or questions to spark deeper engagement.

3. Empower youth and educators to be creators of AI and shape AI development: According to the Consortium of School Networking’s 2023 State of EdTech Leadership report, the vast majority of those leading in edtech are white, male, and between the ages of 40 and 59. Given the rich diversity and generational shifts in our schools, we must create opportunities for diverse youth, educators, and school leaders to shape how technology companies build with AI, as well as empower those most proximate to educational inequity to actively build their own solutions. 

4. Harness AI to help more fundamentally reinvent school: Previous technology waves (e.g., the advent of personal computers and the internet) were largely assimilated within century-old, conventional schooling methods, such as siloed subject areas and whole-class, age-based instruction. Breakthroughs in AI pose a unique opportunity to transform what, how, and with whom young people learn, unlocking the potential for greater student agency, creativity, and higher order thinking. Young people have a new opportunity to leverage AI tools to drive their own learning, both on assigned topics and their own curiosities and passions. Some teachers believe that AI can shift students away from teacher-constructed prompts to more in-class time for inquiry, community building, and teacher coaching. Leveraging AI to meaningfully advance learner agency, real-world relevance, and customization requires a deeper redesign of the structures of our current system, as well as thoughtful approaches to innovative design, experimentation, and applied research.

Risks, Harms, and Obstacles with AI in Education

We are also cautious about generative AI tools and about how and when they should be used in pre-K to 12 education. When used inappropriately, AI can harm students and educators. We have identified a few issues we believe must be taken seriously:

Models generate biased and harmful content: ChatGPT and other generative AI tools have many harmful biases, which reflect racism, sexism, ableism, and other systems of oppression. Last year, Steven T. Piantadosi, a professor at the University of California, Berkeley, uncovered a few of these instances, including responses where ChatGPT said that “only White or Asian males make good scientists” and “that a child’s life shouldn’t be saved if they were an African American male.” More broadly, AI models do not authentically represent or understand a diverse range of students and are often devoid of cultural context. If educators use AI to automate tasks like grading or providing students feedback on their work, it will be critical for them to quality-assure the outputs against racial, cultural, or other types of bias. Stanford Professor of Education Bryan Brown speaks to the importance of “students receiving cues of cultural belonging,” and current generative AI tools do not center the intersection of language, culture, and cognition in order to accomplish that.

Spread of inaccurate information: In some cases, AI models generate completely fabricated information, a phenomenon researchers describe as “hallucination.” They can produce coherent and elegant text, and that same text can peddle conspiracy theories and cite fake scientific studies. In teaching and learning contexts, ChatGPT has produced content that models high-quality teaching techniques (like positive reinforcement) yet arrives at the wrong mathematical answer. Moreover, because ChatGPT presents a single stream of synthesized information, unlike search engines such as Google, where students evaluate a list of sources, students can become particularly vulnerable to disinformation and weakened information literacy skills. Its conversational nature and sense of authority can lead students to over-rely on its accuracy. On a positive note, some teachers are using the false information generated by AI models as an opportunity to create media literacy activities for their students.

Evaluating authentic student work: Discerning between original student work and AI-generated content can feel like an overwhelming responsibility for educators. According to the Washington Post, approximately 2.1 million teachers in the U.S. are using Turnitin’s new AI detection tool to catch students cheating or plagiarizing with ChatGPT. Unfortunately, the tool isn’t very accurate or reliable, and it is falsely accusing students of wrongdoing. Banning and overly monitoring the use of ChatGPT not only risks hurting students who are not using the tool; experts believe it also hurts students’ development of critical literacies around emerging technologies. Students will need to understand how to use these tools, their strengths and weaknesses, how to steward them ethically, and how they can be misused. ChatGPT and other generative AI tools should be treated much like calculators: permissible for some assignments, but not when the objective is to test building-block skills rather than the application of those skills. If students are not supervised in person, there is a good chance they are using AI. Looking ahead, assessment strategies will need to be revised to account for AI while upholding academic integrity.

Data privacy and rights: New AI models rely on ingesting massive amounts of information. For certain use cases, they require sensitive information from students and educators, and how that information is used or potentially sold can be concerning. At a minimum, students’ data privacy should be respected, in accordance with COPPA, FERPA, and other data protection laws.

Looking Ahead

This piece represents our initial thinking and informs our own emergent strategy around AI. We know that others in our sector are also engaged in this exploration, and as we continue our own learning, our insights and recommendations will evolve.

We’re in the early stages of this wave of AI development. We’re 15 years into the smartphone revolution, and our society is still coming to terms with the changes it has brought. Many researchers argue that the changes brought about by AI will be even more profound. 

That said, while technological change can feel inevitable, how we use technology and how it shapes our day-to-day is anything but predetermined. All of us—educators, students, and parents—have a role to play in shaping how AI is designed and adopted by our teachers, students, and educational systems. 

The perspectives articulated in this piece were shaped by many leaders and thinkers across Teach For America in consultation with trusted external researchers and partners. The piece was written by Ariam Mogos, Yusuf Ahmad, and Michelle Culver.

