In a thread on the Code4Lib listserv, I offered Umlaut as an example of providing “just in time” links to open access content, integrated into our existing services like the catalog.
Eric Hellman liked Umlaut’s example, and asked whether there has been any opposition, and what can be done to make this approach more widespread. My response, expanded very slightly from what I posted to the list.
On 6/15/2011 9:31 AM, Eric Hellman wrote:
Clearly, Jonathan has gone through the process of getting his library to think through the integration, and it seems to work.
Has there been any opposition?
Not opposition exactly, but it doesn’t work perfectly, and people are unhappy when it doesn’t. It can sometimes find the wrong match on a ‘foreign’ site like Amazon, or of course fail to find a right one.
And the definition of right/wrong isn’t always entirely clear either: on a bib record for a video of an opera performance, is it right or wrong to supply a link to the print version of the opera? What if the software isn’t smart enough to _tell_ you it’s an alternate format (it’s not), and the link just appears in a single flat list of links?
There are also issues with avoiding duplicate URLs when things are in bib records AND in the SFX knowledge base AND maybe also found by Umlaut’s own lookups. (We have _some_ HathiTrust URLs in our bib records that came that way from OCLC. Who knew?)
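To make the duplicate-URL problem concrete, here’s a minimal sketch of deduplicating fulltext links gathered from several sources. This is Python, not Umlaut’s actual Ruby code, and the normalization rules are illustrative assumptions; real-world dedup often needs per-provider logic on top of this.

```python
from urllib.parse import urlparse

def normalize(url):
    """Normalize a URL for duplicate detection: drop the scheme,
    lowercase the host, strip any trailing slash. Illustrative only;
    the same HathiTrust volume can be reachable at several URL forms
    that simple normalization won't catch."""
    parts = urlparse(url)
    return parts.netloc.lower() + parts.path.rstrip("/")

def dedupe(links):
    """Keep the first link seen for each normalized URL.
    `links` is a list of (source, url) pairs, e.g. from the bib record,
    the SFX knowledge base, and Umlaut's own third-party lookups."""
    seen = set()
    out = []
    for source, url in links:
        key = normalize(url)
        if key not in seen:
            seen.add(key)
            out.append((source, url))
    return out

links = [
    ("bib record", "http://catalog.hathitrust.org/Record/001"),
    ("SFX kb",     "https://catalog.hathitrust.org/Record/001/"),
    ("umlaut",     "https://archive.org/details/example"),
]
print(dedupe(links))  # the SFX kb link collapses into the bib record link
```

Note the order of `links` encodes a priority: whichever source you trust most should come first, since the first occurrence wins.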
These things get really complicated, quickly. I keep finding time to do more tweaking, but it will never be perfect, so people have to get used to a lack of perfection. Still, when I ask: okay, this HathiTrust/Amazon/Google linking feature is never going to be perfect, would you rather keep it with imperfections we may not be able to fix, or eliminate it? Nobody says eliminate.
There is no opposition to the basic idea, only dissatisfaction with its necessarily imperfect implementation.
What are the reasons that this sort of integration is not more widespread? Are they technical or institutional? What can be done by producers of open access content to make this work better and easier? Are “unified” approaches being touted by vendors delivering something really different?
I think they are mostly technical. This stuff is hard, because of the (lack of) quality of our own metadata, the lack of quality of third-party metadata, the lack of sufficient APIs and services, and the lack of local technical infrastructure to support tying everything together.
So on the one hand, I’m trying to find time for an overhaul of Umlaut to make it easier for people to install and maintain, and I’m hoping I can get some more adoption at that point: to at least provide some open source “local technical infrastructure”. Umlaut is intentionally designed to be as easy as possible to integrate with your existing catalog or other service points, as well as to provide ‘just in time’ services from third-party external lookups. That’s its mission, this kind of just-in-time service. (“As easy as possible”, or at least as easy as I can make it, which sometimes still isn’t easy enough, especially if you don’t have local technical resources.)
Better metadata, better metadata, better metadata — and an API to look up against it
But still, it’s metadata, metadata, metadata. So what can producers of open access content do to make this work better and easier?
1) Have good metadata for your content, especially including as many identifiers as possible: ISBN, OCLC number, LCCN. Even if you aren’t an OCLC member and don’t have an “OCLC record”, if you can figure out which OCLC record represents the thing you’ve got, list it in the metadata. Even if the ISBN/OCLC number/LCCN doesn’t represent the exact same thing, list it, ideally identified somehow as ‘an alternate manifestation’. Identifiers are crucial for recognizing “the same thing” across different databases/corpora/hosts. ISBN/OCLC number/LCCN are what we have in our legacy databases to match against, but sure, experiment with newfangled identifiers too.
Also have author, title, publisher, and publication year metadata. If you can supply author metadata as an NAF/VIAF controlled form or identifier, even better. Metadata is expensive, but metadata is valuable: the better it is, the better Umlaut’s approach can work.
Share the metadata publicly, in case someone wants to do something creative with it to support discovery.
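As a sketch of why identifiers matter for matching: a record that carries ISBN/OCLC number/LCCN can be matched against another corpus by identifier overlap, with fuzzy author/title/year comparison needed only as a fallback. A minimal Python illustration; the record shapes and field names here are invented for the example, not any real schema:

```python
def identifiers(record):
    """Collect a record's identifiers into comparable (type, value) pairs.
    ISBNs are compared with hyphens stripped so that the same ISBN
    written two ways still matches."""
    ids = set()
    for isbn in record.get("isbn", []):
        ids.add(("isbn", isbn.replace("-", "")))
    for oclc in record.get("oclcnum", []):
        ids.add(("oclc", str(oclc)))
    for lccn in record.get("lccn", []):
        ids.add(("lccn", lccn))
    return ids

def same_thing(a, b):
    """Treat two records as describing the same thing if they share any
    identifier. Real matching also has to handle 'alternate manifestation'
    identifiers, which should be flagged as such rather than treated as
    exact matches."""
    return bool(identifiers(a) & identifiers(b))

local = {"isbn": ["0-19-953556-0"], "title": "Example"}
remote = {"isbn": ["0199535560"], "oclcnum": [12345]}
print(same_thing(local, remote))  # True: identifier overlap after normalization
```

This is why a record with no identifiers at all is so much more expensive to match: every comparison falls through to fuzzy text matching, with all the wrong-match and missed-match problems described above.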
2) Provide an API that allows lookup of your open access content, searching against the good metadata from #1, including identifier searches. The thing is, having each of dozens, hundreds, or thousands of open access content providers maintain such an API is a burdensome expense for each of them, and it’s also unrealistic for client software to talk to dozens/hundreds/thousands of APIs.
So this stuff needs to be aggregated into fewer major service points. It could be an aggregator of just metadata that links to content hosted on individual hosts, or it could be an aggregator of content itself. Either way, it needs a good API based on good metadata. “Google” doesn’t work as such an aggregator: the APIs it has are too limited, both functionally and by ToS, and the results don’t carry sufficient metadata. Maybe the Internet Archive does, although IA’s APIs and metadata are sometimes a bit sketchy. (If you do put content in IA, make sure it somehow shows up in the “Open Library” section and its APIs; the Open Library APIs are sufficient for Umlaut’s use, but the general Internet Archive APIs are not.) Or maybe a new aggregator or aggregators will have to be collectively created.
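For a concrete flavor of the kind of identifier lookup an aggregator needs to offer, here’s a sketch against the Open Library Books API, which accepts bibkeys like `ISBN:` and `OCLC:` and returns JSON keyed by bibkey. The URL shape follows Open Library’s documented Books API, but verify against their current docs before relying on it; the response here is canned and abbreviated so the example works offline:

```python
import json
from urllib.parse import urlencode

def books_api_url(isbns=(), oclcnums=()):
    """Build an Open Library Books API query for one or more identifiers.
    Bibkey prefixes (ISBN:, OCLC:) follow Open Library's documented API."""
    bibkeys = [f"ISBN:{i}" for i in isbns] + [f"OCLC:{o}" for o in oclcnums]
    query = urlencode({"bibkeys": ",".join(bibkeys),
                       "format": "json", "jscmd": "data"})
    return "https://openlibrary.org/api/books?" + query

def fulltext_links(response_text):
    """Pull any item URLs out of a Books API JSON response. The response
    is a dict keyed by bibkey; each value may carry a 'url' field."""
    data = json.loads(response_text)
    return [entry["url"] for entry in data.values() if "url" in entry]

# Canned, abbreviated response for offline illustration only:
canned = '{"ISBN:0451526538": {"url": "https://openlibrary.org/books/OL1017798M"}}'
print(books_api_url(isbns=["0451526538"]))
print(fulltext_links(canned))
```

The point isn’t this particular API; it’s that an identifier in, links out round trip like this is exactly what a just-in-time service like Umlaut needs from any aggregator.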
It wouldn’t hurt to try to get your stuff into commercial “link resolver” knowledge bases too; that is a sort of aggregation, and if it worked, it might make your content more accessible to the existing library infrastructures that subscribe to such services. But I don’t see that as a good long-term solution. Most commercial link resolver knowledge bases I’ve seen just aren’t capable of dealing very well with this type of resource (providing the right API over the right metadata); these are products that were invented for a different use case, getting users to journal articles (rather than monographs) hosted on paywalled sites (rather than open access).