New Educational Resources, Licensing Terms, Hackathons, and More | AI Roundup

Libraries, vendors, and library organizations have been busy with several recent artificial intelligence (AI) initiatives—check out LJ's roundup of the latest news from the field.

Libraries, vendors, and library organizations have been busy with several recent artificial intelligence (AI) initiatives. Here are a few from the past six weeks. For ongoing coverage of AI in the library world and beyond, be sure to check LJ’s infoDOCKET.

Sage has launched The Art of ChatGPT Interactions, a two-hour online course on prompt building, ethical AI use, and applying the CLEAR framework (Concise, Logical, Explicit, Adaptive, and Reflective) when working with large language model (LLM) tools such as ChatGPT. Taught by Dr. Leo S. Lo, Dean of the College of University Libraries and Learning Sciences at the University of New Mexico, the new course is freely available through the Sage Campus platform.

Johns Hopkins University Press has published Teaching with AI: A Practical Guide to a New Era of Human Learning, by José Antonio Bowen and C. Edward Watson. The book aims to help educators put new AI tools and resources to use, suggesting interactive learning techniques, advanced assignment and assessment strategies, and practical considerations for integrating AI into teaching environments. In addition, “Bowen and Watson tackle crucial questions related to academic integrity, cheating, and other emerging issues,” according to the publisher.

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) released its 2024 AI Index, the seventh edition of its report covering technical advancements, public perceptions, and geopolitical dynamics surrounding the development of AI. Among this year’s top takeaways: AI has surpassed human performance on some benchmarks, including in image classification, visual reasoning, and English understanding, while still trailing on other tasks. Industry is outpacing academia, producing 51 notable machine learning models in 2023 compared with academia’s 15. And the training costs of state-of-the-art AI models have reached unprecedented levels, with OpenAI’s GPT-4 costing an estimated $78 million to train and Google’s Gemini Ultra $191 million.

The Scholarly Publishing and Academic Resources Coalition (SPARC) recently hosted a webcast on widespread publisher efforts to push libraries to accept AI restrictions in licensing contracts, including restrictions that some publishers are attempting to insert as mid-contract amendments. Presenters argued that there is a strong fair use case in the United States for academic uses of AI on licensed content, and that libraries should avoid agreeing to contract language that restricts the use of AI. Libraries may be able to find common ground during negotiations with vendors and publishers by addressing key concerns, such as preventing unauthorized access or the use of licensed content to train products that could compete with the vendor. Resources available to help libraries develop contract language that avoids AI restrictions include the University of California Office of Scholarly Communication’s adaptable licensing language, the SPARC Contracts Library, and the ESAC Registry. A recording of the full webcast is available exclusively to SPARC members and can be requested via email at operations@sparcopen.org.

The Association of Research Libraries (ARL) has issued “Research Libraries Guiding Principles for Artificial Intelligence” articulating “a foundational framework for the ethical and transparent use of AI and [reflecting] the values we hold in research libraries.” According to ARL, the organization will rely on the following principles in policy advocacy and engagement:

  • Libraries democratize access to artificial intelligence tools and technology to foster digital literacy among all people.
  • Libraries commit to understanding where distortions and biases are present in AI models and applications.
  • Libraries champion transparency and information integrity. Libraries believe “no human, no AI” (underscoring the importance of human involvement in critical decision-making junctures).
  • Libraries prioritize the security and privacy of users in the use of AI tools, technology, and training data.
  • Libraries assert that copyright law in the United States and Canada is flexible and robust enough to respond to many copyright issues that arise from the intersection of technology and artificial intelligence.
  • Libraries negotiate to preserve the scholarly use of digital information.

In addition, ARL and the Coalition for Networked Information (CNI) have published “2035 Scenarios: AI-Influenced Futures in the Research Environment.” The scenarios are designed to help leaders in research environments “proactively address the challenges and opportunities that AI presents.” The strategic focus and uncertainties in the scenarios were identified through focus groups, workshops, and one-on-one interviews with more than 300 ARL and CNI members during the winter of 2023 and spring of 2024.

Ithaka S+R posted preliminary results from a survey on AI chatbots in education, conducted at Bryant University from November 2023 to February 2024 with 77 faculty, 111 staff, and 224 student respondents. The survey indicates that students and faculty are increasingly using AI chatbots, while university staff are lagging behind in adoption: 21 percent of faculty and 28 percent of students described themselves as extremely or very familiar with AI chatbots, compared with only 8 percent of staff. Just 4 percent of staff respondents said they use AI chatbots daily, compared with 9 percent of faculty and 17 percent of students. Bryant University plans to publish a full survey report on its website at a later date. The university is participating in Ithaka S+R’s cohort project, Making AI Generative for Higher Education.

Carnegie Mellon University Libraries in April offered an AI Literacy Resource Hackathon. Led by Open Knowledge Librarian Emily Bongiovanni; Data Curation, Visualization, and GIS Specialist Emma Slayton; and Associate Dean for Academic Engagement Nicky Agate, the event hosted 36 attendees from 17 different institutions including Ohio State University, West Virginia University, Princeton University, and Binghamton University. According to a news post on the library’s website, “the goal of the event was to gather academic librarians, staff, and other interested parties to develop open educational materials on the emerging principles of AI literacy.”

Abstract and citation database Scopus this week announced four new features for Scopus AI, according to a blog post on the company’s website:

  • The concept map, a visual tool with topical, branching “nodes,” has been enhanced so that when a user clicks on a node, it explains how that node relates to the topic.
  • A new small language model reranker has been implemented to improve the precision of searches.
  • A reflection layer has been added to provide context for the AI’s responses. For responses with a high level of certainty, the reflection layer will use direct language and may add nuance, such as pointing out potential biases. Responses with medium-level certainty will note that there is limited information on the topic in Scopus. And when the AI cannot answer a question, it will state that there is not enough information and suggest alternative, related queries.
  • References surfaced while using Scopus AI can now be exported to Elsevier’s SciVal research platform.

Clarivate, which last year announced plans to bring generative AI capabilities to the Web of Science, recently confirmed that it plans to launch the AI-powered Web of Science Research Assistant in September 2024. “In addition to retrieving results and highlighting top cited papers, the research assistant can also reveal how and why researchers are citing a paper by leveraging enriched cited reference data,” according to a blog post describing Clarivate's plans for the AI assistant. “This additional context offers researchers a signpost so they can more adeptly explore the literature.”

Matt Enis

menis@mediasourceinc.com

@MatthewEnis

Matt Enis (matthewenis.com) is Senior Editor, Technology for Library Journal.
