Collective Wisdom in an Age of Algorithms | Peer to Peer Review

Barbara Fister

Changes to platforms we use regularly are always slightly traumatic, as we invariably discover when we roll out a new library website and the complaints begin, or we find out a database interface has changed radically the day we’re introducing it to students. Platform changes are even more distressing when they are sites to which we contribute content. By creating social circles and sharing information on websites, we often forget they belong to other people who have values and motivations different from ours. Those values are not likely to appear in their tagline or mission statement. (Google, for instance, is not really in the business of defeating evil.) Their motivations aren’t going to be easy to find in the click-through terms of service, either, and morphing privacy policies don’t clarify the situation.

The motivations that shape a library’s website are pretty consistent with the motivations of its users: to make it as easy as possible to find stuff (though users may sometimes think we are doing it wrong). The purpose of a social site like Twitter is not so singular. For some users, it’s a branding opportunity or a chance to connect with celebrities. For others (like me and a great many academics), it’s an indispensable site for crowdsourced news, commentary, and occasional hilarity. For Twitter’s management, it’s a business that needs to attract new members and make a profit. It’s not surprising that these different motivations come into conflict.

In the case of Twitter, a conflict is brewing between members who want to reserve the right to control their Twitter streams and the business side, which would prefer to curate content algorithmically to meet its economic goals. For many of us, substituting an algorithm for user-based curation, even partially, strikes at the whole point of using Twitter. We’re there to see what the people we choose to follow have to say and what they think is worth reading.
We’re not there to find out which tweets are widely popular. That would be like reading the New York Times with only the “most emailed” stories featured. People may not email friends stories about what’s happening in Ukraine in large numbers. That doesn’t mean we don’t want to know what’s going on there.

There’s a fundamental confusion here between what algorithms can do—measure, sift, and select by the numbers—and what humans can do. This came home to me when reading “Why Twitter Should Not Algorithmically Curate the Timeline” by the always-brilliant Zeynep Tufekci. (See what I just did? Human curation.) She makes the claim that human choices are precisely what make Twitter work. People decide whom to follow based on self-selected affiliation and choose what they find worth sharing. Interrupting those active choices by inserting content that seems important because it’s being shared by lots of people while editing out content that isn’t getting attention could make Twitter virtually useless. Tufekci explains how human curation can be sharper and faster: "An algorithm can perhaps surface guaranteed content, but it cannot surface unexpected, diverse, and sometimes weird content exactly because of how algorithms work: they know what they already know. Yet, there is a vast amount of judgment and knowledge that is in the heads of Twitter users that the algorithm will inevitably flatten as it works from the data it has: past user behavior and metrics."

The difference was illustrated starkly when Michael Brown was shot by a police officer in Ferguson, MO, and police responded to protests with a show of force. Twitter lit up. Facebook didn’t. The algorithm had decided it wasn’t relevant. Relevance is one of those words that sets off all of my alarms these days. Fear of irrelevance is used as a kind of shock collar for librarians. Jump aboard this trend or—zap! Okay, enough, where do I sign?
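The distinction at the heart of this argument can be made concrete with a toy sketch. The following is not any platform's actual ranking logic (those are trade secrets, which is part of the problem); the posts, field names, and cutoff are all invented for illustration. It simply contrasts a follow-based chronological timeline with one ranked purely by a past-engagement metric, and shows how the latter can bury low-engagement breaking news:

```python
# Toy illustration only -- invented data, not any platform's real algorithm.
posts = [
    {"text": "breaking: protests in Ferguson", "likes": 2,   "minutes_ago": 5},
    {"text": "ice bucket challenge video",     "likes": 900, "minutes_ago": 40},
    {"text": "cute cat photo",                 "likes": 450, "minutes_ago": 90},
]

# Human-curated view: everything the people you follow posted, newest first.
chronological = sorted(posts, key=lambda p: p["minutes_ago"])

# Algorithmic view: rank by past engagement and trim the long tail
# (the hypothetical "relevance" cutoff here keeps only the top two).
algorithmic = sorted(posts, key=lambda p: p["likes"], reverse=True)[:2]

print(chronological[0]["text"])           # the breaking story surfaces first
print([p["text"] for p in algorithmic])   # the breaking story is filtered out
```

Because the metric only knows past behavior, the newest and least-liked post is exactly the one the engagement ranking discards; that is Tufekci's point about algorithms knowing only what they already know.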
In this case, we need to ask “relevant for whom?” Do we really want trade-secret metrics to decide which tweets we may read, or would we prefer to control our reading choices ourselves?

In some ways this reminds me of the trade-off we made when adopting Big Deals. We save ourselves the trouble of making our own choices in exchange for a standardized package of much more stuff. Universities make the same trade-off when they sign on with Pearson to provide course materials rather than develop them locally. Likewise when we shop at Amazon or Wal-Mart: more stuff, lower prices. These choices seem inevitable when price and scale are what matters. Behind these seemingly unarguable facts—a low price is better than a high one; more content is better than less; a multinational company has the resources to deliver distance ed content more efficiently than a single institution—are hidden costs. Low prices come with low wages and high environmental costs. Big Deals sustain big profit margins while gobbling the budget for discretionary purchases. We outsource courses to a giant corporation because we’d rather pay adjuncts poverty wages to provide locally branded support for Pearson products than hire full-time faculty and technical staff (and lack access to the political powers-that-be to close tax loopholes and raise the funds for a robust public education system).

We don’t have to do things this way. Academic libraries collectively have lots of financial clout. We could reroute our Big Deal funds and pool our budgets to create open access scholarship platforms. Universities could contribute to a common effort to develop high-quality open education resources rather than write big checks to Pearson. For that matter, if everyone who valued the human curation of Twitter paid an annual subscription (which I would be happy to do), we could probably sustain a social platform that worked for us rather than treated us as a product to be manipulated at will.
So long as we assume that we’re little people without power in a world shaped by inevitable market forces, that we can’t look behind the curtain because what’s back there is a trade secret, and that what we actually want is not relevant because corporations say so, we’ll be helpless. We need to find ways to take collective action to share at scale on our terms. Supporting curation, sharing, and self-directed discovery—gee, that sounds like a job for librarians.


Rory Litwin

It seems funny to me that librarians seem to intuitively understand the problem with algorithmically filtering the twitter feed, but with few exceptions have no problem with the recommender systems put in place over the past decade in bibliographic databases. Those recommender systems function in the same way, and ultimately present the same problems.

Posted : Sep 12, 2014 03:59

Barbara

I guess I haven't really noticed those. I have found relevance ranking pretty hard to understand at times. One difference for me is that I can still (presumably?) seek info about a subject or a specific title or author without relying on recommendations. What I find especially troubling about the FB/Twitter "we'll decide what you see" approach is that I don't get to see what people I'm following post. I get what somebody else decided was important. I hope Twitter doesn't go there after all - and I'm not clear what FB is doing because I abandoned it years ago - but it seemed these companies were saying "we'll choose for you because [trade secret]" and in the case of Twitter that would wreck its primary value for me. Full disclosure: very little of what I read is found by searching databases or catalogs by subject. I either bump into something in the course of the day (on Twitter, in conversation, mentioned in passing) and pursue it, or I am looking for a text that came up in a reference. It would be intriguing to know if students are more inclined to follow references because they are used to seeing them in Wikipedia articles. Anecdotally, students mention that strategy more in recent years - rarely did they start with a printed reference book and track down cited works, even when I suggested it.

Posted : Sep 12, 2014 03:59

