Artificial intelligence (AI) was a hot topic at this year’s American Library Association (ALA) LibLearnX conference in Baltimore, January 19–22, with multiple presentations, panels, and workshops covering the technology and its impact on libraries and the people they serve, touching on both AI’s potential and its current flaws.
“One of the reasons that you need to get involved with this technology now…is because this is actually the least risky time that it’s ever going to be,” said Juan Rubio, digital media learning program manager for the Seattle Public Library (SPL), during the “AI and Libraries: A Discussion on the Future” panel. AI technology is “getting more and more powerful, to the point that you can ask these tools questions and it will go out and it will write code and develop software to do those things that you want it to do. Once it can do that, there isn’t much that’s not on the table.”
Rubio was joined on the panel by Virginia Cononie, associate librarian and coordinator of research services for the University of South Carolina Upstate library; Nathan Flowers, systems librarian for Francis Marion University; and Dray McFarlane, cofounder of the consulting company Tasio. Rebecca Headrick, chief information technology officer for ALA, was moderator.
Regarding generative AI tools such as ChatGPT, which has ballooned in public awareness and usage over the past year, Flowers said, “I think you should treat it like a person. A super smart person who is not trustworthy. It can tell you about anything you could possibly want to know, and it will also very confidently lie to you about it.” As the panelists, including McFarlane, explained, generative AI tools are trained on massive stores of text and data and then, in response to user prompts, generate long strings of text by repeatedly predicting the most likely next word.
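To make the “next word” idea concrete, here is a toy sketch of that prediction loop, using an invented training sentence and simple word-pair counts. It is not how ChatGPT or similar tools are actually built (they rely on large neural networks trained over tokens), but it illustrates the mechanism the panelists described.

```python
# A toy next-word predictor, offered only to make the panelists' description
# concrete. Real generative AI models use large neural networks over tokens,
# not simple bigram counts; the training text here is invented for illustration.
from collections import Counter, defaultdict

training_text = (
    "libraries serve their communities and libraries serve their patrons "
    "and libraries support research"
)

# Count which word tends to follow each word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(prompt_word, length=6):
    """Repeatedly append the most likely next word, as the panel described."""
    output = [prompt_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(generate("libraries"))
# "libraries serve their communities and libraries serve"
```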
As Flowers noted, biases in the content that the AI was trained on will seep into its responses to prompts, and not infrequently, these AI tools will “hallucinate” or present inaccurate information and even completely fabricated citations. “It’s going to get better and better and less likely to lie to us…. I hope it does,” Flowers said. For now, though, librarians “need to be definitely focusing on the privacy concerns…and any biases that may be programmed into it,” to help students and patrons use these tools, because “it’s kind of a black box, what it’s being trained on.”
In terms of privacy concerns, Cononie, who had given an earlier, solo show floor presentation on AI, said that “recently, I realized that Google Bard was saving every single search that I put into Google. And I immediately had to change those settings, because if I’m going to be showing it to other people, every single thing I search on Google was coming up. Luckily, Google Bard does allow us to do that, but to get the full breadth of AI tools [and demonstrate their use to patrons], you are unable to shield those searches…. Just getting in the settings and understanding [what] you’re agreeing to…as you are using these free tools would be one of the first things that I would bring up.”
McFarlane, who noted that his company Tasio develops “AI products using the latest generation of artificial intelligence,” added during his introduction that “the way we’re seeing AI go, the technical people like me are becoming less and less relevant, and the people who understand how information works are becoming more and more important. I really like to be here because I want to learn from [librarians]. I want to hear the questions, have this conversation, take this back to my company, and encourage people to hire more people who have more expertise in library science.”
At the “Chatbot-based Learning Activities Mapped to ACRL’s ‘Searching as Strategic Exploration’ Frame Knowledge Practices” learning lab session led by Breck Turner and Jacob White—both informationists for the Welch Medical Library at Johns Hopkins University—the two discussed how to incorporate generative AI tools into library lesson plans and how to sharpen expert-level searching skills and techniques using these AI chatbots. Marcus Spann, also an informationist for the Welch Medical Library, managed audience questions during the session, which was also livestreamed.
Turner set the stage by explaining the six “frames” of the Association of College and Research Libraries’ (ACRL) Framework for Information Literacy for Higher Education: Authority Is Constructed and Contextual, Information Creation as a Process, Information Has Value, Research as Inquiry, Scholarship as Conversation, and Searching as Strategic Exploration. He also offered a simple definition of how to use AI chatbots ethically in their current form. “We think you should try to use it as part of a creative or discovery process, rather than relying on it for factual answers or passing it off as your own work,” Turner said.
As for why librarians should consider using generative AI chatbots in their information literacy programs, “a major reason is that you can use it to visualize searching techniques and concepts; that makes them less abstract,” Turner said. “If you want to illustrate how truncation or adjacency searching are useful, you can generate lists of example terms that could be searched by prompting a chatbot without having to pinpoint examples in database search results. Using generative AI can also make your lessons more flexible. In a lot of our examples, you can just swap out the topic being used and change it to something else that’s a better fit for your target audience.”
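As a rough sketch of the kind of visualization Turner describes, suppose a chatbot has been prompted for example terms on a topic (the list below is invented for illustration). A few lines of code can then show which of those terms a truncated search stem such as “sustain*” would actually match in a database that treats the asterisk as a wildcard.

```python
# A rough sketch of the visualization Turner describes: take a list of example
# terms (invented here, standing in for chatbot-generated suggestions) and show
# which ones a truncated database search stem such as "sustain*" would match.
import re

chatbot_suggested_terms = [
    "sustainability", "sustainable cities", "sustained growth",
    "urban planning", "resilient infrastructure", "sustenance",
]

def truncation_matches(stem, terms):
    """Treat a trailing * as 'match any ending', as most library databases do."""
    pattern = re.compile(re.escape(stem.rstrip("*")), re.IGNORECASE)
    return [term for term in terms if pattern.match(term)]

print(truncation_matches("sustain*", chatbot_suggested_terms))
# ['sustainability', 'sustainable cities', 'sustained growth']
```

Swapping in a different term list changes the example to fit another topic or audience, which is the flexibility Turner points to.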
White then detailed several activities adapted from ACRL’s textbook Teaching Information Literacy Threshold Concepts: Lesson Plans for Librarians and its “Searching as Strategic Exploration” lesson plans, which the capacity crowd of librarians in attendance could then use for workshops and information literacy courses. Examples included prompts that led ChatGPT to generate lists of organizations that produce literature about Sustainable Development Goal (SDG) 11, organizations that produce grey literature about SDG 11, and academic databases likely to hold literature about SDG 11. White also explained how generative AI chatbots could offer an easy way to show information literacy students how to identify synonyms for search terms, generate and modify Research Information Systems (RIS) citation files with a tool such as ChatGPT, identify ways to filter search results in library databases, work with controlled vocabulary terms and hierarchies, understand how truncation and adjacency searching can affect results, and more.
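For readers unfamiliar with the format White mentions, RIS is a simple tagged plain-text citation format. The record below is an invented example of the kind of file a chatbot might be asked to generate, along with a small script that modifies one field, loosely mirroring the generate-and-modify exercise.

```python
# A made-up RIS (Research Information Systems) record of the kind White
# described having a chatbot generate, plus a small edit to one field.
# The citation itself is invented for illustration.
sample_ris = """TY  - JOUR
AU  - Doe, Jane
TI  - Measuring Progress Toward Sustainable Cities
JO  - Journal of Urban Studies
PY  - 2022
ER  -
"""

def set_ris_field(record, tag, new_value):
    """Replace the value of a single RIS tag line (e.g., change the year)."""
    lines = []
    for line in record.splitlines():
        if line.startswith(f"{tag}  - "):
            line = f"{tag}  - {new_value}"
        lines.append(line)
    return "\n".join(lines) + "\n"

print(set_ris_field(sample_ris, "PY", "2023"))
```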
“We’re really framing [generative AI chatbots] from the strategic exploration element as kind of a good place to find starting points and good places to maybe think of [search] words that you may not have thought of,” White said. “Sometimes, the chatbot will make some really useful suggestions.”
Show attendees who were interested in taking an interactive dive into AI and how it might help libraries better serve their communities also had the opportunity to attend Saturday’s three-hour workshop “Unleashing AI’s Potential: A Design Sprint for Library Staff,” led by Rubio from SPL and Linda W. Braun, learning consultant for the LEO Group.
Braun and Rubio led small groups of attendees through a variety of exercises, including discussions of case studies and a game of Connections, where players examine 16 words and try to put them into four groups of four related words, tying the activities back to AI. With this game, several of the groups figured out that the words hive, honey, comb, and wax were things created by bees; the words angel, witch, clown, and pirate were potential Halloween costumes; the words spell, while, stretch, and period were all slang for intervals of time; and the words hair, wail, dear, and hoarse were homophones for different animals. “Imagine if you were to write an algorithm for a game like Connections,” Rubio said after that exercise. “How difficult would it be to create a program that would do that for you?” Braun noted that the iterative, collaborative nature of the exercise was “reflective of using AI, because you get to talk to someone else about what you’re trying to figure out. With AI, you talk to the tool about what you’re trying to figure out.”
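The checking half of Rubio’s question is straightforward to sketch: given the four answer groups from the workshop, a short program can tell a player whether a guess of four words forms a group. Inventing good puzzles, or solving them from the 16 words alone, is the much harder part he was pointing at; the sketch below covers only the easy half.

```python
# A sketch of the easy half of Rubio's question: checking a Connections guess
# against known answer groups (the groups below are the ones used in the workshop).
# Generating or solving puzzles from the 16 words alone is far harder.
ANSWER_GROUPS = {
    "made by bees": {"hive", "honey", "comb", "wax"},
    "halloween costumes": {"angel", "witch", "clown", "pirate"},
    "intervals of time": {"spell", "while", "stretch", "period"},
    "animal homophones": {"hair", "wail", "dear", "hoarse"},
}

def check_guess(guess):
    """Return the matching category, or how close the best overlap came."""
    guess = set(guess)
    best_category, best_overlap = max(
        ((category, len(guess & members)) for category, members in ANSWER_GROUPS.items()),
        key=lambda pair: pair[1],
    )
    if best_overlap == 4:
        return f"Correct: {best_category}"
    return f"Not a group (closest: {best_category}, {best_overlap} of 4)"

print(check_guess(["hive", "honey", "comb", "wax"]))     # Correct: made by bees
print(check_guess(["hive", "honey", "comb", "pirate"]))  # Not a group (closest: made by bees, 3 of 4)
```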
Following the exercises demonstrating several AI concepts, the design sprint portion of the workshop had all attendees quickly come up with eight potential uses of AI that could help their community—or libraries more generally—and then pitch them to their groups. The groups then decided on a single idea to pitch to the rest of the workshop’s attendees. Group “action plans” included AI-facilitated human resources features that could help answer questions for new employees, maintain a degree of institutional knowledge, maintain forms and policies, and more; programs to help librarians and library staff better understand AI and its pitfalls; ways rural and small libraries could use AI to inventory, organize, and maintain archives and special collections; and a privacy-focused mental health chatbot that could let teens and adults anonymously vent or discuss problems that do not require intervention.
Melissa Del Castillo, virtual learning and outreach librarian for Florida International University, and Hope Kelly, online learning librarian for Virginia Commonwealth University, led a half-hour show floor presentation titled “ChatGPT Is a Liar and other Lessons Learned from Information Literacy Instructors.”
As LJ previously reported, Del Castillo noted that results from querying ChatGPT and other generative AI tools can be biased because these tools are being trained, in large part, on content from the web, “and everyone knows the web is full of bias, racial profiling, misinformation, [and] malinformation.”
Kelly and Del Castillo also shared results of their recent survey of library professionals, in which 65 percent of respondents agreed or strongly agreed that instruction with ChatGPT is a good idea, but only 43 percent felt that studying with ChatGPT is a good idea. An even greater majority—72 percent—agreed or strongly agreed that they intend to use ChatGPT in instruction, while 67 percent intend to use it in other areas of work and 79 percent intend to use it in the future.
Other AI-focused sessions at this year’s show included Cononie’s 20-minute show floor presentation on “Enhancing Research Services: Leveraging the Power of Artificial Intelligence,” and a separate 20-minute show floor presentation on “Teaching Student Workers to Use ChatGPT for Creating Metadata?!” by Jenny Bodenhamer, digital services librarian for Oklahoma State University.