
How have you encountered AI today? Perhaps you listened to music recommended by an AI algorithm, used a navigation app to check AI-predicted traffic conditions, watched videos auto-captioned by AI-powered voice recognition, or checked email without even noticing the AI-filtered spam messages.

AI is shaping our everyday lives, but as anthropology teaching faculty, we have found that most of our recent AI-related conversations have had a singular focus: how to deal with generative AI tools like ChatGPT in the classroom. The launch of ChatGPT in late 2022 sparked panic among instructors who realized it could answer homework questions, analyze data, and generate whole essays in seconds (although its facts and citations may not always be trustworthy). Many faculty feared that AI would usher in a new age of student cheating that was faster, cheaper, and more difficult to detect than ever before.

This essay describes our attempt to work through the panic and turn our concerns into a learning opportunity. We engaged in a collaborative project based in the School of Social Sciences at the University of California, Irvine (UCI) designed to explore how AI tools might support undergraduate learning in anthropology. Below, we present case studies from three anthropology courses using three different sets of AI tools. Christopher Lowman built a writing course for the era of ChatGPT, introducing anthropology majors to Large Language Models (LLMs) and their ability to suggest research topics and improve writing while teaching students to recognize AI’s limitations. Angela Jenks guided medical anthropology students through an analysis of direct-to-consumer artificial intelligence/machine learning (AI/ML) medical apps like Symptomate and DermAssist, teaching them to analyze this emerging technology while situating these apps in their historical, social, and economic contexts. Ian Straughn worked with students in an introductory archaeology course using Humata.ai to imagine and develop the research design for the archaeological investigation of UCI’s campus at some time in the future (perhaps an excavation to be conducted by non-human intelligence).

Together, these experiments contribute to ongoing conversations about the pedagogical value of AI tools, core competencies needed for AI literacy, and pedagogies that prepare students (and professors) to effectively, creatively, and ethically engage AI in the work of anthropology.  

Writing with ChatGPT (Christopher Lowman) 

In fall 2023, I taught an anthropological writing course playfully titled Writing a Time-Traveler’s Travel Guide. Students followed a travel guide format to write about how to dress, what to eat, where to sleep, and how to interact with others in a time and destination of their choice. I wanted to excite students about writing but also encourage them to think about everyday practices of people in the past. Inspired by Ian Mortimer’s Time Traveller’s Guide series on English history and David Mountain’s podcast The Backpacker’s Guide to Prehistory, I also drew on Nanjala Nyabola’s critiques of travel guides’ Othering and colonial outlooks, which shaped class discussions on ethics, identity, and tone in writing. Students read historical, ethnographic, and archaeological literature relevant to destinations such as Bronze Age Mesopotamia, Imperial Rome, 1890s Mexico, and 1920s Harlem.

College writing in 2023 means reckoning with students’ use of ChatGPT. My experience playing with ChatGPT and reading about its capabilities led me to approach it like a calculator: a powerful tool, but useful only if students understand the principles behind their inputs. I integrated ChatGPT into class through three types of assignments. First, I used analog or otherwise “AI-proof” activities. Second, I encouraged play and experimentation with AI coupled with readings about ChatGPT’s capabilities and limitations. Third, I asked students to record and reflect on their use of ChatGPT.

One analog activity involved handwriting every day in class. I provided all fifteen students with paper journals and pens to respond to discussion questions. These questions frequently required students to integrate examples from class or their own experiences. Those details would be difficult for a program like ChatGPT to replicate, and the exercise taught students about the value of reflexivity in academic writing and research. These responses also gave me a baseline for their composition skills unaided by computer programs. In addition to activities such as observation and thick description, I asked students to write about their previous experiences with ChatGPT. Most had heard of ChatGPT, but few had used it frequently. Over half said professors had forbidden its use, equating it with plagiarism. None had encountered a class where it was a major topic of discussion. Generally, students described being cautious but curious about ChatGPT.

I introduced students to ChatGPT by reading aloud the results of one of my prompts: “Write a travel guide for a time-traveler visiting Tenochtitlan, Mexico in 1518.” The result provided useful starting points for asking research questions, but the generated text contained few substantive details, no citations, and some obsequious language that Nyabola would criticize (“Welcome, esteemed time-traveler…”). I suggested students start playing with ChatGPT and provided a set of articles about how LLMs work, including how to prompt them to generate comparisons or lists and how to cite AI tools. Students found Write What Matters (2020), a modular open-access textbook edited by Joel Gladd, Liza Long, and Amy Minervini, to be especially useful. Still, some students had difficulty differentiating between ChatGPT’s outputs and reliable sources of information, which led to discussions about trustworthiness, primary sources, and peer review.

Ultimately, students wrote reflections on their use of ChatGPT, including both creative and practical aspects. Most used ChatGPT to workshop their writing by finding and correcting common grammatical mistakes or identifying wordiness or repetition. One student used ChatGPT to find patterns and thematic connections among copious notes, while another used it to overcome writer’s block. One student prompted ChatGPT to create an imagined historical character to inhabit his chosen destination (Gold Rush-era California). He fed ChatGPT historical information and then asked it for a description in the voice of this character, giving himself an imagined tour guide by drawing on the same LLM capabilities now being used to create role-playing characters and non-player characters in video games.

ChatGPT provided, according to one of its own generated responses, “a futuristic edge to students’ historical explorations.” Beyond the novelty of its use, however, students left the class with literacy in the use of LLMs for writing that they had not received anywhere else and had often been discouraged from exploring before. According to another ChatGPT-generated self-description, the use of LLMs is an important part of “academic discourse in the digital age.” When students are aware of AI’s capacities as well as its limitations, their writing and research both improve.

Diagnosing (with) AI (Angela Jenks)

In winter 2024, students in my upper-division Cultures of Biomedicine course took a close look at direct-to-consumer artificial intelligence/machine learning (DTC AI/ML) diagnostic apps. These tools promise to democratize health information, empower patients, and reduce financial burdens on hospitals by offering consumers immediate symptom assessments and possible treatment options. Together, students and I entered hypothetical symptoms into various programs and compared the results. Students wondered: Why do some apps ask for so much more information than others? How much could we rely on these responses when potential diagnoses ranged from minor conditions to life-threatening emergencies? What happens if you change just one piece of information, like age, location, or gender? (As it turns out, a lot!)

This discussion of AI medical apps came late in the term during a module focused on the “social lives of biotechnology.” Earlier in the course, we had already talked about biomedicine as a sociocultural practice. We explored core principles of bioethics, examined the politics of medical diagnosis, and considered multiple examples of algorithmic bias, particularly race correction in clinical algorithms. As we turned our attention to diagnostic apps, my primary goal was less to help students learn to use the technology effectively than to support their understanding of how the technology is situated in the social world.

To that end, the class activity had three main components. I began with a brief lecture giving some background on AI and machine learning, how AI differs from traditional computing, and the variety of ways it is being incorporated into biomedicine. For example, AI is increasingly used to triage and risk-stratify patients, analyze medical images (e.g., mammograms, endoscopies, skin images, pathology slides, and CT and MRI scans), develop personalized treatment plans, and power chatbot therapy and language interpretation. This background aimed to address a core competency of AI literacy: the ability to recognize AI, understand the different types that exist, and appreciate the role that humans play in programming and training AI systems.

Next, students examined the apps themselves. Working in small groups, they chose one of five symptom checkers—WebMD, Ada Health, MediFind, Symptomate, or DermAssist—and, in an activity influenced by Lupton and Jutel’s research, examined the consumer-facing side of the technology. Students described and analyzed the imagery (logos, artwork, photographs) associated with each app, along with the strategies used to demonstrate credibility to users or to make claims to medical authority or expertise. They noted the presence or absence of disclaimers or stated limitations and explored the “fine print” of each program to see whether they could determine how the technology works, where the information and research used in the app come from, and what privacy or security policies apply to users’ data.

In the final stage of the project, students situated medical AI tools in a broader social and economic context. Drawing on Joe Dumit’s “Writing the Implosion” activity, they reflected on what they knew about the history, production, impacts, and meanings of this technology. Our conversations focused especially on the power dynamics at work in the development and use of AI/ML tools and on long-standing racism in biomedical research and practice. (DermAssist, for example, is an AI-powered dermatology app designed to identify skin conditions in consumer photos; 90% of the images in its initial training set were of light-skinned people.) We also touched on core ethical questions related to reliability, privacy, and transparency, and we discussed the extensive energy demands and environmental impacts of AI technologies. Ultimately, the activity left us all with more questions than clear answers about appropriate and ethical uses of AI.

At the end of the term, I asked students to reflect on their learning throughout the course. Several students named the AI/ML case study as the most unexpected and memorable topic they had learned about. They described being “shocked” by the extent of AI in medicine and having developed a “deeper interest” in exploring it further. Most of these students are pre-med and plan to pursue clinical careers, and their experience demonstrates how social science coursework can play a key role in expanding public AI literacy.

Humata.ai and the Reality of Archaeology (Ian Straughn)

In winter 2024, I experimented with integrating Humata.ai into a lower-division archaeology course on pseudoscience, conspiracy theories, and the forms of popularizing the ancient past that relish rejecting “mainstream” archaeological explanations. Somehow it seemed fitting to fold generative AI into our explorations of knowledge production, misinformation, and the use of archaeological data in explanations about the human past. This particular tool allows users to upload their own documents—course readings, research articles, archives, etc.—so that they may be queried individually or as larger libraries using a chatbot interface. Humata markets itself as a tool that allows you to “chat your way through long documents” because “asking is faster than skimming.” With GPT-4 as the LLM under the hood, Humata applies its proprietary reading strategy in response to the prompts that the user provides. As another pitch for the tool declares: “Your team can’t read it all. But Humata can.” Within seconds it can summarize a whole body of literature, compare competing arguments, or even suggest relevant questions for the user to ask. You can see the potential appeal to students, researchers, and anyone eager to synthesize and query a large volume of textual material. But can we trust what it has to say?

Initially, I intended to curate a large database of documents (on the order of 100,000 pages) within Humata about the history and development of the UC Irvine campus as an archaeological site in the making, one that we could “excavate” with this generative AI tool. The plan was to include archival documents from the early days of the university’s planning and construction, environmental impact reports for campus development projects, cultural resource studies, ephemera, university policies, and other materials that could contextualize the emerging archaeological record of the institution. Building such a repository proved overly ambitious, and the database was ultimately reduced to roughly 10,000 pages of documents, a collection that was by no means comprehensive but was representative of the kinds of materials available. A future article will discuss the excavation project and how we used Humata to imagine an archaeological investigation of the campus at some future time. I also created a second database that included all the course readings (required and optional) as an additional set of materials to experiment with in Humata.

To familiarize students with Humata’s interface and workings, I asked them to upload a scholarly article, from any discipline, that they already knew well. Once the article appeared in the interface, they clicked the “ask” button, and Humata provided a summary of the text along with three potential questions the user might consider for probing more deeply. Because students were already familiar with the article, they could evaluate both the accuracy of the summary and the relevance of the questions on a 1 to 5 scale; average ratings were 4.2 and 4.1, respectively. They then asked three of their own questions about the article and commented on the quality of the responses. While the majority reported that they did not necessarily learn anything new, since they were already quite familiar with the content, there were no accounts of misinformation or misleading responses. Some noted that the tool was still helpful as a quick way to refresh their memory of the article’s content and arguments, especially since it would cite specific passages from the text to support its responses.

In another assignment, students were asked to spend an hour having a conversation with Humata about the documents in the course readings database, attempting to convince the AI that archaeology isn’t real. They met with little success, even when using some of the jailbreaking hacks that have gotten ChatGPT into trouble. While Humata was willing to concede that archaeology could be biased and its findings mistaken, misused, and abused, it steadfastly maintained that the discipline was real. Humata would note that “alternative archaeologies” existed but distinguished them as engagements with the archaeological record that lacked scientific rigor and empirical evidence. It did not necessarily invalidate such approaches, noting only that they operated outside the consensus of scholarly experts. While some students were disappointed that the tool refused to believe their lies or follow their commands, most found it reassuring that it deferred to the materials it judged scholarly and methodologically sound. Ultimately, when I surveyed students at the end of the course, 61% reported that they would like to see more courses experiment with AI tools, 29% were undecided, and just 10% were reluctant to have such tools integrated into classroom pedagogy.

Conclusion

Whether or not faculty feel ready to experiment with AI in the classroom, we and our students are already living and working with these technologies. Despite fears that generative AI might harm learning, the pedagogical experiments described here have encouraged us to be cautiously receptive to the ways it might enrich learning as well. As we have learned over the last year, AI tools have the potential to enhance our teaching of foundational anthropological skills like creative writing, scholarly reading, and social analysis. Christopher Lowman demonstrated how ChatGPT can be used as a writing aid and as an entry point for teaching students the skills of prompt engineering. Angela Jenks reinforced students’ awareness of how AI surrounds them every day and facilitated pre-med students’ critical examination of medical uses of AI. Ian Straughn’s demonstration of Humata.ai’s capacity to process massive amounts of textual data drew students’ attention not only to the possibilities of AI in research, but also to local and institutional history. Each of these case studies highlights the way AI is not a replacement for teaching and learning, but rather a tool that can be integrated into existing pedagogy.

At the same time, we recognize significant and real concerns about these tools, including their black-box nature, algorithms that likely reproduce existing inequities and biases, and the opacity of their data privacy practices. We argue that addressing these concerns involves learning what AI can do, rather than forbidding its use. To that end, in each of our experiments, students learned not only how to use AI but also how it works and what some of its limitations continue to be. AI technologies became subjects of critical inquiry, particularly as students explored the reliability and the methodological, social, and ethical implications of AI-generated content. This process led students to reflect on their role in knowledge production as they navigated the increasingly blurred line between their own intellectual contributions and the content such tools can generate with ease. We argue that part of the success of these experiments comes precisely from the productive anxiety students experienced about the human-technology relationship. For this reason, it was important for students to encounter these tools in a controlled classroom environment that felt safe. An anthropological approach to AI literacy is crucial moving forward, and these classroom experiments illustrate pedagogical possibilities for helping students develop a nuanced understanding of AI tools.

Acknowledgements: This project was funded by the Generative AI for Student Success program at the University of California, Irvine. Thank you to all the students enrolled in our courses for their engagement with these AI tools and to Janet DiVincenzo for her expert guidance and support.

Authors

Angela Jenks

Angela Jenks is Associate Professor of Teaching with a focus on medical anthropology.

Christopher Lowman

Christopher Lowman is Assistant Professor of Teaching with a focus on historical archaeology/museum studies.

Ian Straughn

Ian Straughn is Associate Professor of Teaching with a focus on archaeology/Middle East Studies.

Cite as

Jenks, Angela, Christopher Lowman, and Ian Straughn. 2024. “AI for Learning: Experiments from Three Anthropology Classrooms.” Anthropology News website, June 27, 2024.
