In the New York Times today: Scientific Articles Accepted (Personal Checks, Too), by Gina Kolata.
But some researchers are now raising the alarm about what they see as the proliferation of online journals that will print seemingly anything for a fee. They warn that nonexperts doing online research will have trouble distinguishing credible research from junk….
The role of libraries?
It is clear to me that libraries have a responsibility and a role to play in helping our users distinguish the ‘legitimate’ scholarly peer-reviewed publications from the ‘junk’ ones that will print anything for a fee. It’s part of our core mission, and succeeding in meeting new challenges like this will help justify the continuing existence and funding of academic libraries.
It’s clear to me that our online interfaces — my own area of work — need to be involved in this role.
It’s less clear to me how to do so.
Tools vs Answers?
Libraries traditionally have sought to give our patrons the tools to evaluate sources, including on credibility and trustworthiness — but have generally stayed away from subjectively judging resource quality or trustworthiness ourselves.
In part because such judgments are, in the end, subjective, and who are we to impose our subjective judgements on our users? At the extremes there may be broad agreement as to which journals are “predatory” or “fake”. But moving toward the middle, you get to the ‘fake’ marketing journals published by Elsevier, and then on to questions about what pervasive pharmaceutical industry funding, or even industry ghostwriting of articles in ‘respected’ journals, does to credibility and reliability. Laika’s MedLibLog raises some troubling questions about the ethics and credibility of even mainstream medical publishing.
And our increasingly impatient patrons don’t really want to be given the tools to judge the credibility of sources themselves; they really do want our interfaces to simply tell them: “Is this a good source?”
Nevertheless, I think we have the responsibility to attempt to educate our patrons as to the real-world complexities of credibility in scholarship, including the tools they need to evaluate sources themselves, as well as the context of ethical complexity raised, for instance, in Laika’s MedLibLog. (Sadly, this may be additionally challenging because of the conflict of interest in academic libraries questioning the ethical and credibility implications of practices common to researchers at our own institutions.)
And we probably also need to improve our online interfaces to provide more information to help our patrons judge credibility — including, even, the immediate black-and-white answer our impatient patrons demand, some immediate “Yes, this is generally considered a credible peer-reviewed resource” or “No, it is not.”
The New York Times article mentions Jeffrey Beall’s “List of Predatory Open-Access Publishers”. It’s still not clear to me if a librarian-maintained ‘blacklist’ of ‘bad’ journals is the appropriate solution. But it’s not clear to me that it’s not either. I’m not really sure.
If a librarian-maintained blacklist is appropriate and useful, it probably needs to be maintained by a collective or consortium or collaboration, not just one guy. (Or maybe even crowd-sourced social interaction mass voting, somehow? That is actually a very intriguing idea to me, thousands of librarians and academics ‘voting’ to establish the reputable vs not reputable journals? A platform to support that? Some potential pitfalls there too of course, there’s a lot we could talk about there. But very intriguing.)
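To make the crowd-sourced idea concrete, here is a toy sketch of how such a platform might aggregate librarian votes into a reputability label. Everything here is invented for illustration: the vote data, the field names, and the minimum-vote threshold are all assumptions, not a description of any existing system.

```python
from collections import Counter

# Invented sample votes: (journal title, vote) pairs from many librarians.
votes = [
    ("Journal of Real Science", "reputable"),
    ("Journal of Real Science", "reputable"),
    ("Journal of Real Science", "not-reputable"),
    ("Pay-to-Play Letters", "not-reputable"),
    ("Pay-to-Play Letters", "not-reputable"),
]

def label(journal, votes, min_votes=2):
    """Majority label for a journal, or 'insufficient-votes' below the
    threshold (threshold value is an arbitrary illustration)."""
    tally = Counter(v for j, v in votes if j == journal)
    if sum(tally.values()) < min_votes:
        return "insufficient-votes"
    return tally.most_common(1)[0][0]

print(label("Journal of Real Science", votes))  # reputable
print(label("Pay-to-Play Letters", votes))      # not-reputable
print(label("Obscure Quarterly", votes))        # insufficient-votes
```

A real platform would of course need identity, anti-gaming measures, and some weighting of expertise — the pitfalls alluded to above — but the basic aggregation is simple.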
And it needs a better technical infrastructure, with machine-readable API endpoints so the information can be embedded in our interface. (I’m not even sure if the facebook URI I linked to above is the ‘canonical’ source for Beall’s list, but nothing I found was suitable for software integration).
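For a sense of what “suitable for software integration” might mean: a minimal sketch, assuming a hypothetical JSON feed of flagged publishers that a discovery interface could query at display time. No such endpoint for Beall’s list exists today; the payload shape, field names, and publisher names below are all invented.

```python
import json

# Hypothetical JSON payload such a service might return.
SAMPLE_RESPONSE = json.dumps({
    "publishers": [
        {"name": "Example Predatory Press", "status": "predatory"},
        {"name": "Another Dubious House", "status": "under-review"},
    ]
})

def flagged_publishers(payload: str) -> dict:
    """Parse the (hypothetical) blacklist feed into a lookup table."""
    data = json.loads(payload)
    return {p["name"].lower(): p["status"] for p in data["publishers"]}

def check_publisher(name: str, blacklist: dict) -> str:
    """Return a status string a discovery interface could display
    next to a search result."""
    return blacklist.get(name.lower(), "not-listed")

blacklist = flagged_publishers(SAMPLE_RESPONSE)
print(check_publisher("Example Predatory Press", blacklist))  # predatory
print(check_publisher("PLOS ONE", blacklist))                 # not-listed
```

In practice the lookup would key on something stabler than a name string — an ISSN, say — but even this much would let library interfaces surface the information automatically.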
Even aside from the issue of these ‘spam’ journals, our patrons, especially undergraduates, are already asking for better black-and-white “peer-reviewed” vs “not” indicators in our interfaces, to help them fulfill the requirements of their assignments. (Again, I don’t think we can give up on trying to teach evaluative skills to these patrons; but I don’t think we can avoid trying to give them what they want either).
Some interfaces use sources such as Ulrich’s to label journals peer-reviewed vs not. (I believe Serials Solutions’ Summon, for instance, uses Ulrich’s data. Self-made library interfaces could also possibly use Ulrich’s data via an API included in an Ulrich’s subscription.) I’m curious how many ‘bad’ journals on Beall’s list would still show up as “peer-reviewed” in Ulrich’s. It wouldn’t be that hard to check. I suspect many of them; I think Ulrich’s probably just takes the publisher’s word for whether a journal is “peer-reviewed”.
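The cross-check itself would be a small script along these lines. Both data sources are stubbed with invented records here, since Beall’s list is a web page and Ulrich’s API requires a subscription; the ISSNs and the `refereed` field name are assumptions, not Ulrich’s actual schema.

```python
# ISSNs of blacklisted journals (invented for illustration).
bealls_list = {"1234-5678", "2345-6789", "3456-7890"}

# What per-ISSN lookups against Ulrich's might return.
ulrichs_records = {
    "1234-5678": {"refereed": True},
    "2345-6789": {"refereed": True},
    "3456-7890": {"refereed": False},
}

def overlap(blacklist, ulrichs):
    """ISSNs that are blacklisted yet still flagged as refereed
    (peer-reviewed) in the Ulrich's data."""
    return sorted(
        issn for issn in blacklist
        if ulrichs.get(issn, {}).get("refereed", False)
    )

suspect = overlap(bealls_list, ulrichs_records)
print(f"{len(suspect)} of {len(bealls_list)} blacklisted journals "
      f"marked refereed: {suspect}")
```

If the real numbers came out anything like this toy data, it would confirm the suspicion that the “peer-reviewed” flag is self-reported.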
Libraries must engage to remain relevant — and to remain funded
One could suggest we should be demanding that Ulrich’s exercise some kind of independent judgement as to journal credibility or the veracity of a ‘peer-review’ claim — but simply outsourcing this decision to a vendor doesn’t make its subjectivity any less problematic (it probably in fact makes it even more so, when these vendors may be involved in publishing themselves and have conflicts of interest). And just as importantly, this kind of outsourcing doesn’t help libraries continue to demonstrate their important role and worthiness of being funded.
I am not sure exactly what the right solution is. But I am sure that libraries need to be involved in figuring it out, and in trying things to figure it out. Because our users need it, because it’s part of our mission, and because only by rising to new challenges like this, challenges core to our mission, will academic libraries continue to be seen by our hosting institutions as having value commensurate with cost, and continue to be funded. So while I’m not sure about Beall’s approach, he gets a lot of credit for attempting to engage with the issue. Most libraries and librarians are not engaging, here as with so many other newly emerging challenges in our theoretical areas of core competence, and that does not bode well for the continued success of libraries.