AI in Academia

Academic librarians are helping both students and instructors navigate the rapidly evolving field of artificial intelligence.

Artificial intelligence (AI) is big news, with articles daily about developments in AI businesses, legal concerns around the use of copyrighted materials by AI companies, and even worries about “whether it might take [a person’s] job or it might replace them as a human,” says Kevin Neary, CEO of Orcawise, a company that helps businesses responsibly integrate AI into work.

Librarians face multiple related issues, and academic librarians are challenged to navigate new tools that their faculty and administration may have heard negative or even frightening stories about. However, interviews with several librarians around the country, as well as with technologist Neary, reveal many reasons for optimism. The librarians LJ spoke with noted that there has recently been a sharp upswing in AI interest and knowledge among students and instructors; that there's more to their AI practice than ChatGPT; and that generative AI tools such as ChatGPT can be used in ways that maintain academic integrity.

Discussing the recent increase in AI awareness, Leo S. Lo, dean/professor at the University of New Mexico's (UNM) College of University Libraries and Learning Sciences, cites findings documented in his 2024 paper "Evaluating AI Literacy in Academic Libraries." In the March–April 2023 survey that underpins the paper, Lo reports, the average self-reported faculty AI literacy level was "quite modest." At that time, faculty "didn't know what to expect or what they need to know," and fewer than seven percent of them used the fee-based version of ChatGPT. Usage of that premium model correlates strongly with high AI literacy, Lo explains.

Less than a year later, a follow-up study of the same group in December 2023 showed a definite shift in attitude, and now "people see the transformative power of this technology," Lo says. Asked what he attributes the shift to, Lo explains that people are now more aware of AI and better able to work around related problems, with hallucinations (the tendency of generative AI to sometimes make things up) viewed as less of a showstopper. "It hallucinates," says Lo, "because that's its role, to generate. So, it doesn't mean that [it is] lying to you."



The importance of understanding what generative AI is and what it can do well is emphasized by Nicole Hennig, eLearning Developer at the University of Arizona Libraries. Hennig has worked with colleagues to create AI-related LibGuides and explains that in that work, “One of the first things that came to our attention [in Spring 2023] was that students were using ChatGPT to find lists of articles and sources, but of course they didn’t exist because it makes up citations.”

Hennig says that she and colleagues added a FAQ to the library’s LibAnswers that explains, “Don’t use ChatGPT as a search engine to look up articles, because it makes them up. The articles sound very plausible because it knows who writes on certain topics, but it’ll make up things most of the time because it doesn’t have a way to look them up. But you can use ChatGPT for some other things.” Alternative uses, Hennig suggests, include asking ChatGPT to narrow down your topic, come up with keywords, construct a search strategy, and recommend library databases for a topic. The library provides both a student guide and a guide for instructors. Both guides offer advice on prompting, academic integrity, and more.

Hennig also works to help library users go beyond ChatGPT to try other tools, such as Claude, Copilot, or Perplexity. In “Which AI tool for your task?” Hennig explains “grounding,” which simply means combining a language model with a source of facts in one tool. ChatGPT is not grounded in this way, so it relies only on its training data. Ungrounded models like ChatGPT are better thought of as “wordsmiths” for working with text, she explains.

For searching, Hennig recommends using models that are grounded with a source of facts such as web search results. Grounded tools (like Perplexity or Copilot) provide links to the sources, so it’s easier to verify results. These models search the web and summarize the results using the language model. There are other models, like Elicit, that are grounded with results from academic papers (from Semantic Scholar). The language model behind Elicit summarizes each abstract in one sentence, making it easier to decide which articles are useful for your work.
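The grounding idea Hennig describes can be sketched in a few lines: retrieve passages from a source of facts, then build a prompt that constrains the language model to answer only from those passages and cite them. The retrieval step, document set, and function names below are toy stand-ins invented for this illustration, not any specific product's API.

```python
import re

# Toy "source of facts" a grounded tool might draw on.
SOURCES = {
    "doc1": "Boolean operators (AND, OR, NOT) combine search terms in library databases.",
    "doc2": "Semantic Scholar indexes abstracts of academic papers across disciplines.",
    "doc3": "Citation styles such as APA and MLA specify how to credit sources.",
}

def retrieve(query, sources, k=1):
    """Rank documents by naive word overlap with the query (a real tool
    would use web search or a proper index here)."""
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        sources.items(),
        key=lambda item: len(q_words & set(re.findall(r"\w+", item[1].lower()))),
        reverse=True,
    )
    return scored[:k]

def grounded_prompt(query, sources):
    """Build a prompt instructing the model to answer only from the
    retrieved passages and cite their IDs -- the essence of grounding,
    which is what lets tools link claims back to verifiable sources."""
    hits = retrieve(query, sources)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
    return (
        "Answer using only the sources below, citing their IDs.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )

print(grounded_prompt("How do Boolean operators work in library databases?", SOURCES))
```

Because the model's answer is tethered to retrieved text with stable IDs, a user can click through and verify each claim, which is exactly why Hennig finds grounded tools easier to fact-check than an ungrounded "wordsmith."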

Other libraries are using non-generative AI tools to create or enhance services and experiences for users. Trevor Watkins, teaching and outreach librarian at George Mason University (GMU), works in the school's Learning, Research and Engagement department, which conducts special projects, many involving AI. At the moment, Watkins, who has a software engineering background, is working on "what I like to call a conversational agent but others would call a chatbot, but it's a little bit more sophisticated than that." It's not built using generative AI, partly, Watkins explains, because the available large language models are black boxes. "Those large language models are proprietary, so I have no knowledge of how they are accounting for bias," says Watkins. Instead, the agent is built using expert systems, "one of the older AI techniques," and offers information specific to orientation.
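The appeal of the expert-system approach Watkins describes is inspectability: every answer traces to a hand-authored rule rather than an opaque model. The sketch below is a deliberately tiny, invented illustration of that rule-based idea; the topics, rules, and wording are hypothetical, not GMU's actual system.

```python
import re

# Hand-authored rules: (trigger keywords, canned orientation answer).
# Because the rules are explicit, there is no hidden training data whose
# biases would need auditing -- the point Watkins raises about LLMs.
RULES = [
    ({"hours", "open"}, "The library is open 8am-10pm on weekdays."),
    ({"borrow", "checkout", "loan"}, "Use your student ID at the circulation desk to check out items."),
    ({"printing", "print"}, "Printers are on the first floor; log in with your campus account."),
]

FALLBACK = "I don't have an answer for that yet -- please ask a librarian."

def answer(question):
    """Match the question's words against each rule's keywords; the first
    matching rule wins, and unmatched questions get a safe fallback."""
    words = set(re.findall(r"\w+", question.lower()))
    for keywords, response in RULES:
        if words & keywords:
            return response
    return FALLBACK

print(answer("When are you open?"))
print(answer("Can I borrow a laptop?"))
```

A production system would add dialogue state and fuzzier matching, but the core trade-off holds: less flexibility than a generative model in exchange for answers whose provenance is fully known.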

Another AI-boosted outreach tool being created by Watkins's department is an augmented-reality, 3-D tour of the library using Blippar and software that Watkins developed. It grew out of a tour he saw at the University of Tennessee, as well as a 3-D tour of one of the Smithsonian museums in Washington, DC, and will be integrated into an online campus tour being created by another group at the school. "We want to enhance the experience of students and how they view the library," says Watkins, "especially now that more of the curriculum is online and we want to be able to reach our online learners." Students can take a guided or self-guided version of the tour remotely using VR glasses, or, if they're in the library, using a smartphone. An audio-only version of the tour is available for visually impaired students, and Deaf or hard of hearing students will see text describing the spots visited. So that the tour can't be used to plan a mass shooting, it omits the entry/exit points that are not already shown publicly on the library's website.



For most libraries, the starting point with AI will be faculty and students looking for guidance on ChatGPT. A good place to begin is with other institutions' LibGuides. One example is the "AI in the Classroom" LibGuide created by Sarah Paige, program manager, libraries and assistant professor/online learning librarian at Eastern Florida State College (EFSC). Paige explains that the EFSC libraries are not working heavily with AI yet, but faculty are starting to express interest, with a supervisor mentioning at the library's Welcome Back event in January that some faculty had asked about librarians addressing AI during information-literacy teaching sessions.

“I myself put a slide mentioning AI into my teaching PowerPoints for the two speech instructors with whom I work each semester; I asked them if they wanted me to mention it, and they said yes and told me what their syllabi said about the topic,” says Paige. “I actually got some interested glances from students when I discussed the topic and one or two mentioned ‘hallucinations’ as something they didn’t know about.”

The LibGuide Paige created is a great place to start when creating guidance for faculty, as it touches on topics that users at other institutions will also need covered, such as how to cite AI-generated material and how AI works with various search engines, and offers lengthy lists of AI-related articles that can quickly get readers up to speed on the world of AI and libraries. Faculty and administrators who are looking for syllabi that address AI, meanwhile, can consider the three-tier approach used at the University of Arizona. Hennig explains that the school has options for professors who 1) want no AI used in their classes at all; 2) want AI used only for certain assignments or purposes; or 3) want AI to be used freely.

When it comes to including AI use in information literacy sessions, librarians know that students are using ChatGPT anyway, so it's best to address that directly and equip them with the critical thinking skills to evaluate the tool and its output.

Sometimes using ChatGPT with a librarian's guidance can be enough to show students that it's not worth using AI instead of library resources. This is the experience described in C.W. Howell's 2023 WIRED article, "Don't Want Students to Rely on ChatGPT? Have Them Use It," and it matches what Watkins found when he worked with a first-year English class at GMU in fall 2023, where the professor was open to having generative AI demonstrated and wanted to learn more about it himself. The students likewise didn't know much about ChatGPT but were curious about it.

In the initial session, the students researched using library databases after learning research techniques such as how to use Boolean operators and come up with research questions. In the next session, Watkins introduced them to Google's Bard AI (now Gemini) and ChatGPT, having the students compare those tools' output to each other and to the library materials they had found.

“The students found it rather tedious, because they had to fact-check everything,” says Watkins, continuing, “About 90 percent of the time it got [citations] wrong. It was creating authors that didn’t exist, journal articles that didn’t exist. It opened their eyes.”



The above projects and activities are just a sample of what's going on with AI in the library world, showing that while librarians interested in AI are still exercising appropriate caution, they are moving ahead with what students and faculty need. This approach is encouraged by Orcawise's Neary, who notes that "these language models are all based around language, in text primarily, and the academic library world is as well, so [libraries] should be really early adopters and leading the way with it given the nature of their business."

How can libraries do this? All those interviewed for this article mentioned communities of practice. UNM’s Dean Lo, for example, says that the institution paid 10 employees to use ChatGPT 4 (the premium version) in a project of their choosing for work. The dean’s assistant took part in the group and now uses ChatGPT for calendaring and taking meeting minutes, someone from the school’s university press is using it for summarizing abstracts, and another person who works in instruction is using it to create lesson plans, to take just some examples.

"We got together every other week and share[d] lessons learned, challenges, and tips," said Lo, and at the end of the program, "everybody felt more confident, everybody felt like their AI literacy level went up and they learned about things to watch out for, like data privacy. So that will be the way forward, to coordinate communities of practice."

Lo is hoping that the community of practice will act as AI ambassadors at UNM; as he says, to "use them as our champions. A lot of the time, a peer group is very important, to be able to say that my colleagues have used it and gained from it. I highly recommend that." Hennig is trying a similar approach, and when LJ spoke to her she was planning an online "AI Tool Exploration Hour," during which colleagues would spend time "individually or collectively playing with and exploring one or more [AI] tools," with breakout groups and in-person meetings optional.

To sum up, the practitioners interviewed for this piece are all up front with their users about the drawbacks of AI and the ethical issues involved, but are working around those to create new practices and tools, or enhance existing ones, using AI.

Neary describes the way ahead: “There’s a gap between tech savvy, digital literacy–aware people and others, and the problem with AI is that that gap could widen…. Libraries have got a really big opportunity here to move [AI literacy] forward very quickly and they can do that by building communities around this and a responsible AI strategy for the library world.”

Henrietta Thornton, formerly LJ’s Reviews Editor, is Information Literacy Content and Strategy Manager at Infobase and a cofounder of free crime-fiction newsletter firstCLUE.
