JStor seems to be a big offender in sending bad DOIs to Google Scholar, which then sends them on to me: DOIs that have a JStor prefix but don't resolve. Google Scholar is full of them.
When I find them, I’ve been clicking the button CrossRef provides to report them to the publisher. It’s not clear if JStor actually is interested in fixing this or not. They usually email me back with the full citation. So I’ve started putting in the comments field “I am a systems librarian, just alerting you to the issue, don’t need the citation.”
If they told me, "We're aware of the problem and are working to fix it, please stop reporting," I would.
But in the meantime, when I run into them in my testing, click the 'report to publisher' button I will! And it is not hard to run into them in testing; they're everywhere.
Since Google Scholar usually only gives me author, title, and DOI, when the DOI doesn't actually resolve to metadata at CrossRef, it's a pretty useless citation.
5 thoughts on “JStor and bad DOIs”
JSTOR cannot be the only offender, right? Is there a way to see on a mass scale if other providers are supplying bad DOIs? What is the anatomy of a bad DOI? That is, what makes it bad? How is it possible to supply a bad DOI? I don’t have much experience with them, so I am curious to know these things. Thanks.
I’ve just been doing lots of random searches on Google Scholar, and then clicking on the link to my link resolver.
Google Scholar often only sends author, article title, and DOI. If the DOI is "good", my link resolver is able to look up other metadata from the DOI/CrossRef system; if it is not, my link resolver can't. My link resolver, if it has no other full text links, provides a link through DOI/CrossRef to the URL the publisher has associated with the DOI.
If the DOI is 'bad', I get an error from DOI/CrossRef. Here's an example of a "bad" DOI: 10.2307/2769388.
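For anyone who wants to check a DOI themselves, here's a minimal sketch of my own (not part of any link resolver product) that asks the doi.org resolver whether a DOI is registered, using only Python's standard library. doi.org is the current home of the dx.doi.org proxy, and it answers 404 for unregistered DOIs:

```python
import urllib.error
import urllib.request


def doi_url(doi):
    """Build the resolver URL for a DOI (doi.org is the home of the dx.doi.org proxy)."""
    return "https://doi.org/" + doi


def doi_resolves(doi, timeout=10):
    """Return True if doi.org resolves the DOI, False if it answers 404 (unregistered)."""
    request = urllib.request.Request(doi_url(doi), method="HEAD")
    try:
        urllib.request.urlopen(request, timeout=timeout)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # doi.org returns 404 for DOIs nobody registered
            return False
        raise  # other errors (publisher site down, etc.) are not the DOI's fault
```

So `doi_resolves("10.2307/2769388")` should come back False for the unregistered JStor DOI discussed below, while a registered DOI resolves. (Note the resolver redirects registered DOIs to the publisher's site, so a flaky publisher server can still raise an error even when the DOI itself is fine.)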
My test searches tend to focus on non-STEM topics, for variety, even though I know Google Scholar doesn't do too well with these. JStor is frequently in my search results; G.Sch DOES seem to have indexed most of JStor. If I click on an OpenURL for a JStor article, the chances are pretty good it will end up being a bad DOI. Of all the bad DOIs I've found through this round of testing, only ONE has belonged to anyone but JStor.
What makes the DOI bad is that it isn't actually registered with dx.doi.org; a DOI has to be registered before it will resolve. In one of their emails to me, JStor said that Google Scholar had scraped/indexed a field that _looked_ like a DOI, but wasn't really intended to be one. I couldn't tell you why JStor has data that looks _just like_ a DOI (like 10.2307/2769388) but isn't actually registered. Seems like a bad practice to me, but perhaps one built into JStor's infrastructure. I don't know if JStor gives the G.Sch indexer or anyone else a way to tell these 'fake' DOIs from real ones. If not, that would be problematic.
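That's exactly why a scraper can be fooled: the fake and the real DOIs are syntactically identical. As an illustration, here's a pattern along the lines of the one CrossRef has suggested for matching most modern DOIs. Both 10.2307/2769388 and a properly registered DOI match it; only an actual lookup at doi.org can tell them apart:

```python
import re

# Syntactic DOI check: a "10." prefix, a 4-9 digit registrant code, a slash,
# then a suffix of allowed characters. This is roughly the shape CrossRef
# recommends matching on. It says nothing about registration: JStor's
# unregistered 10.2307/2769388 passes just as easily as a real DOI.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:A-Za-z0-9]+$")


def looks_like_doi(text):
    """Return True if the string is shaped like a DOI (registered or not)."""
    return bool(DOI_PATTERN.match(text))
```

Which is the point: an indexer that harvests anything DOI-shaped from a publisher's pages has no way, short of querying the registry, to know the string was never registered.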
But surely there are other sources of bad DOIs for G.Sch. too. In the particular testing I've been doing lately, though, I've mostly only been running into JStor; they are definitely a major offender.
JASIST puts DOIs on their articles in early view before they're registered with CrossRef, or at least that's the sense I've made of it. These are "bad" DOIs because they don't resolve yet, though they typically resolve a week later. Pretty annoying sometimes.
The DOI 10.2307/2769388 is not registered at all (e.g. http://dx.doi.org/10.2307/2769388 does not work). JSTOR appears to have published it and have forgotten to register it.
Arghhhhh. This problem occurs more often than I'd like, but there is no way to know how often. I'm going after JSTOR. I'll also go after JASIST (which is Wiley, I think).
It annoys me very much, since basically none of the JSTOR DOIs in my database resolve anymore. I mean, that is the whole point of a DOI: that it never changes. AAAAhrgh