Gen ai in our ed tech imaginations
Here are four responses to gen ai hype for educators interested in building a just ed tech imagination.
This semester, I had the awesome opportunity to co-instruct a graduate-level studio course in the Design department about speculative co-designing with young people. Despite a few notable setbacks early in the semester brought on by attacks on university DEI efforts, the 20 enrolled students facilitated unique workshops for 5 groups of youth in the area. The final artifacts covered a range of themes, from sustainability to economic security, transportation to politics. But the design that resonated most with me was one group’s approach to redesigning educational experiences.
The youth in this particular group theorized the ways that advanced technologies could enhance experiential learning. Interestingly, they asserted that VR technologies and other digital devices were insufficient for the kinds of activities they wanted to participate in. (Look out for a longer piece about this later this year, after a graduate student from the course and I finish writing up the results of this workshop.) These young people’s perspectives contrast with industry imaginations about VR’s role in educational futures.
I say “industry imagination” rather than “popular imagination” because I have found it difficult to locate a consistent repertoire of educational technologies in popular storytelling. Digital technologies like GIS are consistently associated with disciplines like geography and planning, and robotics often lends itself to medical training, but besides books and laptops I’ve struggled to find a group of technologies that constitutes part of a cross-disciplinary educational imagination. That is, unless you count the widespread concern among educational professionals about student use of generative artificial intelligence chatbots like ChatGPT, Claude, and DeepSeek.
Perhaps you recall all the hubbub surrounding OpenAI’s release of ChatGPT two and a half years ago. ChatGPT and similar platforms have failed to live up to their creators’ hype, but they’re still far from irrelevant. I avoid asking students directly about their use of generative ai for university course work, but I incorporate the study of digital technologies in my course syllabi and take note of when and how students reference gen ai platforms. Back in December, for example, I was catching up on work in a campus study area when I overheard a nearby student proclaim to their study group, “Thank god for ChatGPT!” They were preparing for a written sit-down exam and had used the platform to verify the details of a historical fact missing from their notes. Less than an hour later at another table, a student finishing a different course assignment mumbled “I’ll just ask ChatGPT” to the person on the other end of their phone call.
Refusing gen ai in ed imaginations
Despite the frequency of comments such as these, and despite universities’ disheartening promotion of ai-powered study tools, I approach each semester without harboring assumptions about students’ intent to use ChatGPT. After countless unprompted conversations with educators, students, and parents around the country, both within and outside of higher education, I’ve distilled my primary responses about ai use in the classroom here. My hope is that these responses help educators assuage preoccupations about ai-facilitated cheating and refocus attention on goals for positive classroom experiences and commitments to creative curricular design.
1. “Chatbots can’t help with my course assignments.”
One of the main reasons I can stave off any creeping obsession with student chatbot use is that I am confident chatbots aren’t particularly useful for the course assignments I’ve designed. Open-ended essay prompts and reading summaries may be go-to assignments, but it’s easy for platforms like ChatGPT to generate responses to them. Even basic coding and mathematical formulas can be answered by ChatGPT (and although chatbots are notorious for their inability to assist with mathematical reasoning, they are improving in this area).
This provides educators the opportunity to exercise greater creativity in their curricular design. So far, gen ai isn’t very good at picking out key terms defined in theoretical writing. Students at various education levels need to practice this reading comprehension skill, and there are multiple ways that educators can facilitate practice in this area. I often provide reading guides for students to use with dense theoretical texts. Each question on the guide is accompanied by a page number, allowing students either to fill out the guide after having read the complete text or to turn directly to the pages where significant points are shared. The latter option engages students who want or need to reduce the time spent on the assignment while still strengthening their ability to identify important information. The former encourages dedicated students to practice the iterative techniques of deep reading. Providing questions in this format makes copying and pasting them into a chatbot more work than it’s worth, empowering students to complete the assignment without algorithmic assistance.
For many disciplines in the humanities and social sciences, essay writing is nearly unavoidable. Still, educators can develop rubrics that grade submissions on the student’s ability to analyze specific examples and connect content to the goals and themes of the course, rather than on summary alone.
2. “I track student improvement, not performance on a single assignment.”
My goal is to help students improve their capacities for critical thinking. One of my favorite ways of doing this is to scaffold assignments so that rounds of feedback and revision are built into them. Another thing I’ve done more of this year is incorporate peer review into these revision cycles. For example, in a seminar I taught this spring, students prepared short journal articles on topics of their choosing for a class special issue. Inspired by special issue publication formats, students submitted abstracts first, then article drafts, and finally revised articles. Their abstracts and drafts were reviewed by me and one or two other students in the course. This process made it easier to identify the areas in which students were individually and collectively struggling. It also brought students into the process of evaluation and provided greater opportunities for them to learn from each other.
3. “I’m less concerned about cheating than about the environmental impacts of gen ai.”
As a geography professor, I find it important to discuss the harmful human and environmental relations that chatbot use relies on. To me, these effects are much more damning than the nearly unprovable infraction of academic misconduct. Fortunately, there is already strong data on the environmental impacts of gen ai requests. Many of the students who enroll in my courses are at least somewhat aware of these harms, but spending a bit of extra time discussing energy use, resource extraction, and carbon emissions, at the beginning of the semester and at different points throughout it, generates great discussion in the classroom and challenges students to think about their contributions to larger planetary issues. By relating the issue to local land and energy use and to human rights violations in the global South, I guide students to consider their roles as local and global actors.
4. “I don’t use ai to keep my students from using ai.”
Hopefully by now we’re all aware of the inaccuracies of ai-detection software. But for me, the issue goes beyond the unreliability of ai detection to the hypocrisy of using tools similar to those you are instructing students not to use. The “do as I say, not as I do” attitude undergirding educators’ use of ai-detection software is disappointing and, frankly, harmful. It also raises a theoretical issue: instructors must confront the logics within which their classes are structured (from reading material to pedagogical praxis).
In both courses that I taught this year, I incorporated the study of surveillance technologies and facilitated discussions around works such as Foucault’s Discipline and Punish, Zuboff’s Surveillance Capitalism, and Browne’s Dark Matters: On the Surveillance of Blackness. These texts express the logics from which a diversity of digital technologies emerge. For example, Meta’s Llama3 is indebted to thousands of pirated books, and the logics the company relied on to justify this mass theft descend from the same Renaissance- and Enlightenment-era liberal developments that gave us racial capitalism. As an instructor, it is my job to guide students toward questioning and understanding such issues, which remain obscured throughout society, and to challenge them (and myself) to disrupt those logics whenever possible.
While student behaviors and university guidelines are largely out of our control, educators have the power to choose how we approach gen ai use in our classrooms and pedagogy. By modeling our own ed tech imaginations, we might just influence students to demand more from their educational experiences than a chatbot can give them.