How do we help our users identify trustworthy scholarly content?

In the New York Times today: “Scientific Articles Accepted (Personal Checks, Too),” by Gina Kolata.

But some researchers are now raising the alarm about what they see as the proliferation of online journals that will print seemingly anything for a fee. They warn that nonexperts doing online research will have trouble distinguishing credible research from junk….

The role of libraries?

It is clear to me that libraries have a responsibility and a role to play in helping our users distinguish the ‘legitimate’ scholarly peer-reviewed publications from the ‘junk’ ones that will print anything for a fee. It’s part of our core mission, and succeeding in meeting new challenges like this will help justify the continuing existence and funding of academic libraries.

It’s clear to me that our online interfaces — my own area of work — need to be involved in this role.

It’s less clear to me how to do so.

Tools vs Answers?

Libraries traditionally have sought to give our patrons the tools to evaluate sources, including for credibility and trustworthiness — but have generally stayed away from subjectively judging resource quality or trustworthiness ourselves.

In part because it is, in the end, subjective, and who are we to impose our subjective judgements on our users? At the extremes there may be broad agreement as to journals that are “predatory” or “fake”. But moving toward the middle, you get to the ‘fake’ marketing journals published by Elsevier, and then on to questions about what pervasive pharmaceutical industry funding, or even industry ghostwriting of articles in ‘respected’ journals, does to credibility and reliability. Laika’s MedLibLog raises some troubling questions about the ethics and credibility of even mainstream medical publishing.

And our increasingly impatient patrons don’t really want to be given the tools to judge the credibility of sources themselves; they really do want our interfaces to simply tell them: “Is this a good source?”

Nevertheless, I think we have the responsibility to attempt to educate our patrons as to the real-world complexities of credibility in scholarship, including the tools they need to evaluate sources themselves, as well as the context of ethical complexity raised, for instance, in Laika’s MedLibLog. (Sadly, this may be additionally challenging because of the conflict of interest in academic libraries questioning the ethical and credibility implications of practices common among researchers at our own institutions.)

And we probably also need to improve our online interfaces to provide more information to help our patrons judge credibility — including, even, the immediate black-and-white answer our impatient patrons demand: “Yes, this is generally considered a credibly peer-reviewed resource” or “No, it is not.”

Possible approaches

The New York Times article mentions Jeffrey Beall’s “List of Predatory Open-Access Publishers”.  It’s still not clear to me if a librarian-maintained ‘blacklist’ of ‘bad’ journals is the appropriate solution. But it’s not clear to me that it’s not either. I’m not really sure.

If a librarian-maintained blacklist is appropriate and useful, it probably needs to be maintained by a collective or consortium or collaboration, not just one guy. (Or maybe even crowd-sourced mass voting, somehow? That is actually a very intriguing idea to me: thousands of librarians and academics ‘voting’ to establish the reputable vs. not-reputable journals, and a platform to support that? There are potential pitfalls there too, of course, and a lot we could talk about. But very intriguing.)
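To make that intriguing idea a little more concrete, here is a minimal, purely hypothetical sketch (in Python) of the vote-aggregation core such a platform might need. Every name below is invented, and a real system would also need authentication, audit trails, and defenses against ballot-stuffing.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Vote:
    voter_id: str    # an authenticated librarian or academic
    issn: str        # the journal being judged
    reputable: bool  # up/down judgment

def tally(votes):
    """Aggregate judgments per journal; one vote per (voter, journal)."""
    counts = defaultdict(lambda: {"reputable": 0, "not_reputable": 0})
    seen = set()
    for v in votes:
        key = (v.voter_id, v.issn)
        if key in seen:  # ignore duplicate votes from the same voter
            continue
        seen.add(key)
        counts[v.issn]["reputable" if v.reputable else "not_reputable"] += 1
    return dict(counts)

votes = [Vote("lib-a", "0000-0001", False),
         Vote("lib-b", "0000-0001", False),
         Vote("lib-a", "0000-0001", False)]  # duplicate, ignored
print(tally(votes))  # {'0000-0001': {'reputable': 0, 'not_reputable': 2}}
```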

And it needs a better technical infrastructure, with machine-readable API endpoints so the information can be embedded in our interface. (I’m not even sure if the facebook URI I linked to above is the ‘canonical’ source for Beall’s list, but nothing I found was suitable for software integration).
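For illustration, here is the sort of integration I have in mind, as a hedged sketch only: the endpoint URL and JSON response shape below are entirely made up, since no such machine-readable service for Beall’s list actually exists today (which is exactly the problem).

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint; no such service exists for Beall's list today.
BLACKLIST_API = "https://example.org/predatory-journals/api/issn/"

def on_blacklist(issn):
    """Ask an imagined blacklist web service about one ISSN."""
    with urlopen(BLACKLIST_API + issn) as resp:
        data = json.load(resp)
    # Imagined response shape: {"issn": "...", "listed": true, ...}
    return data.get("listed", False)

# A discovery layer could then badge each search result, e.g.:
# if on_blacklist(record.issn): display_credibility_warning(record)
```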

Even aside from the issue of these ‘spam’ journals, our patrons, especially undergraduates, are already asking for better black-and-white “peer-reviewed” vs “not” indicators in our interfaces, to help them fulfill the requirements of their assignments. (Again, I don’t think we can give up on trying to teach evaluative skills to these patrons; but I don’t think we can avoid trying to give them what they want either).

Some interfaces use sources such as Ulrich’s to label journals peer-reviewed vs. not. (I believe Serials Solutions’ Summon, for instance, uses Ulrich’s data. Home-grown library interfaces could also potentially use Ulrich’s data via an API included in your Ulrich’s subscription.) I’m curious how many ‘bad’ journals on Beall’s list would still show up as “peer-reviewed” in Ulrich’s. It wouldn’t be that hard to check. I suspect many of them would; I think Ulrich’s probably just takes the publisher’s word on whether a journal is “peer-reviewed”.
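The check might look something like the following sketch. To be clear, the Ulrichsweb endpoint, authentication header, and “refereed” field name here are my assumptions, not the real API contract; you’d need to consult the documentation that comes with your Ulrich’s subscription.

```python
import json
from urllib.request import Request, urlopen

ULRICHS_API = "https://ulrichsweb.example.com/api/journals/"  # assumed URL
API_KEY = "your-subscription-key"

def ulrichs_says_refereed(issn):
    """Ask (a hypothetical version of) the Ulrich's API about one ISSN."""
    req = Request(ULRICHS_API + issn, headers={"X-Api-Key": API_KEY})
    with urlopen(req) as resp:
        record = json.load(resp)
    return record.get("refereed", False)  # assumed field name

# Placeholder ISSNs standing in for ones harvested from Beall's list:
beall_issns = ["0000-0001", "0000-0002"]
flagged = [i for i in beall_issns if ulrichs_says_refereed(i)]
print(f"{len(flagged)} of {len(beall_issns)} listed journals are "
      "'refereed' according to Ulrich's")
```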

Libraries must engage to remain relevant — and to remain funded

One could suggest we should be demanding that Ulrich’s exercise some kind of independent judgement as to journal credibility or the veracity of a ‘peer-review’ claim — but simply outsourcing this decision to a vendor doesn’t make its subjectivity any less problematic (it probably in fact makes it even more so, when these vendors may be involved in publishing themselves and have conflicts of interest). And just as importantly, this kind of outsourcing doesn’t help libraries continue to demonstrate their important role and their worthiness of being funded.

I am not sure exactly what the right solution is. But I am sure that libraries need to be involved in figuring it out, and in trying things out along the way. Because our users need it, because it’s part of our mission, and because only by rising to new challenges like this, challenges that are core to our mission, will academic libraries continue to be seen by our host institutions as having value commensurate with cost, and continue to be funded. So while I’m not sure about Beall’s approach, he gets a lot of credit for attempting to engage with the issue. Most libraries and librarians are not engaging, here or with so many other newly emerging challenges in our theoretical areas of core competence — and that does not bode well for the continued success of libraries.

7 thoughts on “How do we help our users identify trustworthy scholarly content?”

  1. Crowdsourcing reputation recommendations indeed seems preferable to relying on just one guy’s list — and in this particular case, I agree with Karen Coyle that Beall’s list is problematic. Nonetheless, I would agree that librarians need to exercise their professional judgment, and should show their work when doing so. Everything may be subjective, but that doesn’t excuse skipping the legwork of looking at the evidence.

    But what I think would really make (academic) libraries continue to be relevant would be for librarians to seriously consider getting into scholarly publishing. This would be a different kind of publishing — rather fewer journals, I suspect, and more emphasis on managing individual articles — and it wouldn’t be cheap. But then again, I think the current model is completely unsustainable.

  2. Thanks for the link to Karen’s article, Galen; interesting stuff. Karen raises some good points that make me conclude that indeed Beall’s current approach is not the right one, at least in its specifics.

    I tend to agree that getting into scholarly publishing would make a LOT of sense for libraries… but also that it’s not going to happen. Among other reasons, because some universities already HAVE publishing arms publishing journals… usually operating under the same problematic, unsustainable ‘legacy’ business models, and sometimes attempting to generate a surplus to fund other university programs. But it’s unlikely to make sense to a university to fund its own (expensive) internal competition to an existing university program or subsidiary.

  3. Just to make the picture even murkier, let me add the versioning issue: a “good” article may exist in preprint/postprint form in an institutional repository or on an author’s website, as well as in the journal’s webspace.

  4. Dorothea, those ‘offprint’ copies are usually identified as to which journal they (or some version, anyway) were published in, right? So I’m not sure it really makes the problem of judging credibility based on publication venue any murkier. (It does make _access_ issues more troublesome!)

  5. … that’d be nice, but it doesn’t always happen. (Metadata. It’s hard. Who knew?) Perhaps closer to your concern is the mixing of pre/postprints of peer-reviewed material with gray lit, datasets, student research, ETDs, posters, conference presentations, records, etc. So the canonical “student who uses Google” may be completely confused when s/he arrives in an IR and doesn’t have a whole lot of cues to what the content is and whether (within the parameters of their work) to trust it.

  6. I’m not sure that the existence of university presses is a blocker per se. Not all presses publish journals, for one thing, and even for those that do, with the possible exception of the likes of Oxford and Cambridge, they don’t publish journals in all fields. That leaves room for the local library to get a toehold.

    That said, I suspect that it would be library consortia that would band together to get into publishing, which may help override local pushback.

    Of course, I did beg the question of what libraries-as-publishers would look like. My guess is that they’d look like arxiv.org with better curation and cat-herding of reviewers, where the reputation markers are associated with the article, not the collection.
