Next Gen AI: Libraries Work with ChatGPT and Other Emerging AI Tools

2023 was a breakout year for generative artificial intelligence, and librarians are in a position to help patrons work with this technology.

For years, people have relied on artificial intelligence (AI)–powered tools ranging from grammar checkers and Netflix recommendations to voice assistants and credit card fraud detection systems, but those seemed like old hat when ChatGPT burst onto the scene on November 30, 2022, reportedly reaching one hundred million users within two months. OpenAI’s free-to-use, large language model “Chat Generative Pre-trained Transformer” chatbot was engaging users in detailed conversations, quickly answering questions about seemingly any topic, and even writing simple computer code on command. Similar tools soon followed, with Meta releasing its Large Language Model Meta AI (LLaMA) and Microsoft debuting the Bing Chat extension for its Edge browser in February 2023, and Google launching Bard AI in March. A growing number of people continued exploring text-to-image generating AI platforms such as Midjourney, OpenAI’s DALL-E, and Stability AI’s Stable Diffusion.

The technology will almost certainly transform the way people work, with the potential to automate repetitive, time-consuming tasks in many different fields. For libraries, specifically, metadata creation and cataloging seem likely to be impacted, but generative AI can also be used to answer reference questions—often imperfectly for now—and provide personalized recommendations for books, movies, and other content. In “AI in Higher Education: The Librarians’ Perspectives,” a survey of 125 librarians published last spring by Helper Systems, developers of the free kOS PDF indexing app, respondents were asked what excited them about AI. One wrote, “I’m a librarian, so not a damn thing. In fact, it’s one step closer to the end of this occupation as we know it.”

Content creators may be impacted as well. In November, the science and tech news site Futurism published an article questioning whether Sports Illustrated had run articles composed entirely by AI, accompanied by made-up bylines, fictitious bios, and AI-generated author headshots. When Futurism writer Maggie Harrison reached out to the magazine’s publisher, all of the articles in question were taken offline overnight. Although Sports Illustrated officials later claimed that the articles were created by AdVon Commerce, a third-party content and marketing company, in December the board of Sports Illustrated’s parent company Arena Group fired its COO Andrew Kraft, President Rob Barrett, and later its CEO Ross Levinsohn, following public backlash. Even more prominently, growing concern about the potential impact that generative AI could have on the jobs of screenwriters and actors was a major factor leading to last year’s monthslong strike by the Writers Guild of America and the Screen Actors Guild unions in Hollywood.

Broader concerns extend well beyond the future of work. “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” states an open letter signed by more than 1,000 scientists, academics, and tech company leaders and published by the nonprofit Future of Life Institute in March 2023. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.”

The letter called for an immediate pause of at least six months in the development and training of generative AI systems in order to better assess the risks these systems might pose. That didn’t happen.

 

These images were generated by Stability AI’s Stable Diffusion platform at openart.ai using simple prompts such as “future library” and “artificial intelligence in libraries.”

 

ETHICAL USE

In the library field, the conversation about AI currently runs the gamut from concern to enthusiasm. “People are both extremely interested in using it and are also kind of like, ‘How do we use this and not mess anything up?’ ” says Joy DuBose, associate professor, Extended Reality and Gaming Librarian for Mississippi State University Libraries, and current co-chair of the American Library Association’s Core Artificial Intelligence and Machine Learning in Libraries Interest Group. “I’m on several listservs, and half the time, people [start discussing] AI and it’s like, ‘We’re off!’ You have a discussion that lasts for three days, and you hear everything from fear to ‘Oh my gosh, let’s do this.’ People are excited and curious and cautious. It’s a giant Pandora’s box.”

Aside from their personal interest or concerns, librarians will need to be prepared to help patrons use AI tools, answer questions about them, and in many cases, help their institutions navigate emerging issues including academic honesty and broader ethical concerns, such as potential built-in biases or the current lack of transparency and sourcing.

“In terms of the academic honesty issue, I feel like librarians are viewing it as a tool, and they’re saying…there are good ways that this can be used,” DuBose says. However, “you are seeing [some] professors just be dead set against it…. It’s very difficult for librarians to walk that line when a whole bunch of people in your university are saying no when others are interested.”

R. David Lankes, Virginia and Charles Bowden Professor of Librarianship at the University of Texas at Austin, says that many school districts and universities that adopted knee-jerk bans on the use of generative AI have already started backing down. One development, he notes, is that major technology companies like Microsoft have already begun embedding AI tools in everyday software that students and faculty need to use. Last spring the company integrated the large language model-driven Microsoft 365 Copilot AI into its ubiquitous Office software suite. “Increasingly, it’s just there,” Lankes says.

“It’s in what you’re doing, so how can you forbid it? ‘You will not use AI.’ It’s like, ‘But I have to use Microsoft Word?’ ”

Separately, while many librarians note that these generative AI platforms can generate incorrect, unclearly sourced answers to many types of questions, the programs currently being used to “detect” AI cheating and papers written with AI have some of the same problems. “AI is not perfect, and the checkers that are designed to check for AI are not perfect,” DuBose says. Many teachers are skeptical. In a survey of more than 200 U.S. K–12 teachers conducted by study.com in early 2023, 26 percent claimed that they had already caught a student cheating using ChatGPT. And while 67 percent of respondents said that they did not believe the AI tool should be banned in schools, 43 percent said they believed the advent of generative AI tools such as ChatGPT would soon make their jobs more difficult, and 72 percent said that they had yet to receive any guidance from their institution regarding its use.

There could be a significant near-term need for people who can help students and instructors use AI in academically honest ways—whether that means something as simple as properly citing how it was used in a paper or project, or creating frameworks that specify which uses a teacher or professor will consider reasonable and which should be off limits.

Lankes notes that academic libraries have “staked out a space” where librarians assist faculty, students, and researchers with data science and digital humanities projects, and that academic, public, and K–12 libraries have an opportunity to do the same with AI, potentially assisting users with the ethical use of these emerging tools. “Libraries have become places to experiment—not just support what’s going on,” he says. “I think this is another opportunity for academic libraries and school libraries and public libraries to do the same. Let’s create a constructive, playful space for people to come in and try these things.”

In one example, he notes that text-to-image generative AI tools such as Midjourney, DALL-E, and Stable Diffusion could help people amplify their creative voices. “It’s really fun to go in and try some of these image generation tools to see how I can use my words to try and make images that I wish I could draw and paint,” Lankes says.

 

THE YEAR AHEAD

As another librarian respondent to the Helper Systems AI survey put it, “Once the genie is out of the bottle, you can’t put it back in, so you just have to find a way to grapple with the new reality.” Generative AI was definitively let out of the bottle in 2023, and while there were U.S. congressional hearings on AI oversight and calls for international technical standards issued by G7 leaders last spring, development of these tools by multibillion-dollar companies remains mostly unchecked. Laws and regulations are generally slow to catch up with new technologies.

Libraries are once again placed in the role of consumer advocate during a time of rapid technological change. The polarization fomented by social media should serve as a cautionary tale, Lankes notes. Social media can help people connect with friends and family, but stoking anger and divisiveness—often with misinformation—is frequently a more efficient way to get and keep people’s attention. The algorithms that many online platforms use to drive engagement are programmed simply to keep users scrolling, clicking, and viewing. Big technology companies don’t have the best recent track record of addressing the problems their own platforms cause.

“How can libraries—as responsible for the well-being of their communities, schools, universities, cities, towns, businesses—ensure that [AI] technology will be used to connect and support healthy communities and not continue creating more division?” Lankes asks. “The only way we’re going to do that is if we build strong partnerships with people who want to do that…. How can we ally our assets in such a way that we can bring pressure to the industry” and enact “appropriate legislation” to ensure that AI is developed and used in ways that are beneficial to people?

Many librarians are already deeply involved with generative AI, but these tools could soon become so pervasive that it will be important for most librarians to have at least a familiarity with the major platforms and the basics of how they work.

“Being aware of what is available is a big part,” DuBose notes. “So, if somebody comes up to [a librarian] and says, ‘I’m interested in doing research about how AI is changing art,’ they can point them in the right direction.” Ideally, just like any other type of resource, a librarian on staff will know enough about AI platforms to suggest one based on what the patron needs and help them get started with it. DuBose adds that development is happening very quickly. “Follow it as best as you can, but realize something’s going to fall through the cracks,” she says. “It’s impossible for one person to keep up with everything, and that’s why that networking is so important.”

For librarians looking to learn more about AI, Lankes suggests experimenting with ChatGPT or other generative AI tools using a topic they’re interested in. YouTube videos and online courses are available for librarians interested in going more in-depth to learn how the technology works, he adds.

“There are some large concepts that people should know,” Lankes says. “For example, understanding that the core of all these technologies is based on neural networks or sometimes Bayes’ theorem—the idea that, rather than being prescriptive programming with a bunch of rules, [the AI is] gathering a bunch of data and having the structure emerge—those concepts, while abstract, would be helpful” for librarians to be familiar with going forward.
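Lankes’s distinction can be made concrete with a toy example. The short Python sketch below (hypothetical numbers, not from the article) uses Bayes’ theorem to build a tiny spam filter: instead of a programmer writing a rule such as “messages containing ‘free’ are spam,” the strength of that rule emerges from counts in example data.

```python
# Toy illustration: a classification "rule" emerges from data via
# Bayes' theorem rather than being hand-written by a programmer.
# All counts below are made up for the example.

# Training data: 100 example messages, labeled spam or ham
total_spam, total_ham = 40, 60
free_in_spam, free_in_ham = 30, 6   # messages containing the word "free"

# Prior probabilities, estimated from the data
p_spam = total_spam / (total_spam + total_ham)   # P(spam) = 0.4
p_ham = 1 - p_spam                               # P(ham)  = 0.6

# Likelihoods, also estimated from the data
p_free_given_spam = free_in_spam / total_spam    # P("free" | spam) = 0.75
p_free_given_ham = free_in_ham / total_ham       # P("free" | ham)  = 0.10

# Bayes' theorem: P(spam | "free") = P("free" | spam) P(spam) / P("free")
p_free = p_free_given_spam * p_spam + p_free_given_ham * p_ham
p_spam_given_free = p_free_given_spam * p_spam / p_free

print(round(p_spam_given_free, 3))  # about 0.833: learned from counts, not rules
```

Feed the same code different training counts and the “rule” changes accordingly, which is the emergent, data-driven behavior Lankes describes.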

In an October 2023 leadership brief, the Urban Libraries Council wrote that “The ideal future is one where AI augments productivity of current organizational processes, workflow systems, and personnel. It can do this while creating new businesses and equitable pathways to wealth that are more broadly distributed. Libraries can be a key player in ushering in a preferred future with AI through the following approaches.” Those approaches include providing avenues for the practical application of AI; leveraging prompt engineering skills to help patrons become part of an AI-capable workforce; emphasizing information literacy to help patrons validate information from generative AI tools; creating an AI-focused digital inclusion network to ensure that these technologies do not further widen the digital divide; and advocating for the responsible use of AI. These will all be major challenges in 2024, but libraries have proven that they can help patrons adapt to major technological innovations before.

Matt Enis

menis@mediasourceinc.com

@MatthewEnis

Matt Enis (matthewenis.com) is Senior Editor, Technology for Library Journal.


