Information Literacy and deciding what is “good”

This article in the Chronicle of Higher Education is the latest discussion of the woeful information literacy skills of contemporary undergraduates (a topic about which I have blogged already).  No fewer than six of my friends and colleagues sent me the link to this article, because it’s also an excellent presentation of the ERIAL library ethnography project.  My colleague Andrew Asher was one of the anthropologists participating in this large-scale study, which aimed to ground what we know about students and their interactions with libraries and library resources in their actual behavior, through open-ended interviews, participant observation, and other research instruments like photo diaries (all methods I engage in as a part of the Atkins Ethnography Project here at UNC Charlotte).

Particularly highlighted in the Chronicle’s coverage of the ERIAL project (and its upcoming publications via ALA) is the surprise that so-called “digital natives” could be so terribly unskilled at evaluating information.  I don’t think we need to be surprised by students capable of googling not being aware of how to pick which source to use, any more than we would be surprised by children kept away from books all of their lives being unable to figure out what to do with all of those large blocks-full-of-paper things in the library.  Digital literacy has never been the same as information literacy, and all of the digital toys in the world will not render our students (or anyone else’s students, for that matter) capable of distinguishing a reliable source from an unreliable source.

Persistent, consistent instruction in information literacy is what will give our students that skill set.  And it cannot begin at university: this skill should be taught and exercised throughout K-12 education (and beyond).  The testing culture of our current educational system values critical thinking far less than the retention and regurgitation of facts, and we are paying for that emphasis with the lack of preparation we see in our undergraduates.  The idea that an undergraduate degree is “to get a job,” rather than a basis for becoming a thinking and contributing (not just in economic terms) member of society, also gets in the way of educators advocating for critical thinking in the classroom.  Some students get frustrated by it (by being asked to think critically about class content, society, and life in general) because they are not used to being asked to do it, and professors are frustrated by the students’ frustration: why did they come to college if not to think?

I am collaborating on a project now that involves interviewing and observing high school seniors and college freshmen as they look for information, academic and otherwise.  My research partners and I are beginning to analyze the interview data now, and among the many striking things is the standard by which students judge information to be “reliable”:  repetition.  Several students say things like, “if I find it more than once on the web, I know that it’s reliable information.”  Why do they think this?  Where are they getting this standard of reliability?  Is it possible that they’re not being told any other standards?  Or are they simply assuming that the most popular Google link is popular for a fact-based reason?

I think about how students evaluate information when I see their interest in the library website providing reviews of books, articles, and other materials that they can access in our collections.  They want a service whereby they can see what previous users of the materials have said about them, so that the students can make an informed decision about the utility of the materials for their purposes.  If you look at Amazon-style reviews (see, for instance, the reviews of this Economics textbook), you see that the reviewers writing the “most useful” reviews are explicit about what they wanted out of the book and how the book met their needs (or didn’t), and allow the reader of the reviews to evaluate the extent to which the reviewer’s standards are the reader’s own.  Something is given stars based on whether or not it met a particular user’s needs, so context is necessary in a review for other users to be able to effectively evaluate the potential of an item.

What is “good,” therefore, is a subjective, shifting thing.  Students who are writing five-page essays might review books as “too long” for what they need to do, and articles as “just the thing.”  Graduate students working on dissertations might review books according to their theoretical perspectives.  Reviews on a library website might give students the ability to get in virtual form the kind of feedback that they already ask their peers for in person (or on Facebook, via text, or via email) about the materials they need for papers, exams, and other coursework.

Students already evaluate information in non-academic settings.  They read (and act on) reviews of movies, cars, live music shows, and restaurants.  They take into account who is doing the reviewing, and whether that reviewer’s perspective is relevant and informed (or not).  It is not that they are utterly incapable of critical thinking.  It is that they are not doing it in academic settings.  They have not been trained to do it.  Neither have they been told by our educational institutions (writ large) that critical thinking is terribly important.

Beefing up information literacy programs at the university level, and at K-12, would be an important first step towards remedying the problem.  But the problem has other deep structural reasons for its existence, and those problems require fixes that come from outside of the educational system.

4 thoughts on “Information Literacy and deciding what is ‘good’”

  1. Stephen Francoeur

    It’s really interesting to hear that students want reviews for the purposes you mention. While there are efforts to make review/comment systems scale globally in catalogs like BiblioCommons and SOPAC, and thereby make it more likely there will be a critical mass of user-generated reviews, I can’t think of a way for a library to draw on a similar system for databases and other “local” resources. It’s a fascinating design challenge to try to envision how a library could present a list of databases, for example, and offer student reviews next to each one; maybe students could see reviews from students on their own campus and then click to see reviews from students at other colleges and universities.

  2. Mickey Schafer

    Stephen, our LibGuides come with a “star” rating system attached, but no user reviews, making the ratings pretty meaningless. What does it mean to give PubMed a five-star review? I would appreciate a system that at least posed 4-5 questions, each with its own starred answer, dealing especially with user experience, and including the college rank of the rater (a freshman who finds a database easy to use is claiming something quite different from a grad student [or so I would guess]). Sort of a “rate my professor”-type system, but for library items.

  3. Lynn Silipigni Connaway

    Interesting post and references to our Digital Visitors and Residents project, Donna.

    In our research on virtual reference (Seeking Synchronicity: Evaluating Virtual Reference Services from User, Non-User, and Librarian Perspectives), we learned that some individuals do not want to learn how to find the information, but simply want to get the answer. This was not unusual. We also identified in the virtual reference transcripts that there is a “teachable moment,” both in face-to-face and virtual encounters, which should not be neglected.

    In the study Sense-Making the Information Confluence: The Whys and Hows of College and University User Satisficing of Information Needs, an undergraduate student was so happy that a librarian told him about JSTOR for his project. He said he now uses JSTOR for everything. It is great that he learned about a resource that he needed for a specific project. However, that particular resource may not be the best one for different projects. “If you give them a hammer, everything looks like a nail.”

    In our meta-analysis of 12 user behaviour studies completed in the US and UK in 2005-2010, we discovered that individuals are very confident in their research skills and do not believe they need assistance, nor will they ask for help. Their digital literacy skills are more advanced than their information literacy skills (the process of finding and analyzing information). It was also reported that doctoral students adopt the research practices and skills of their major professors, who admitted that they had little or no formal information literacy instruction.

    Teaching information literacy skills with no context associated with the exercise probably will not benefit most individuals. The context and situation of the information need, as well as the individual’s willingness to accept instruction and advice, are important factors in teaching and learning information literacy skills.
