So, any catalogers reading this: I would appreciate some ideas or background information if you’ve got ’em.
So recently, someone (the PCC, the Program for Cooperative Cataloging, I guess?) came up with this “Provider-Neutral” policy for “e-monograph” records, which apparently is now being implemented and picking up steam.
Previously, if I understand right, if there was an e-book published on several different platforms, the cooperative cataloging corpus (meaning basically OCLC, and perhaps also LC) would have a separate bib record for each one. Although they were largely identical, they had different URLs, among other things. (Not sure how this applies to the increasingly popular case of an ebook that is downloadable rather than on the web, but that’s not what this post is about.)
Now, instead, all the e-versions will share a bib record.
On its face, this made a lot of sense to me when I heard about it. It’s more efficient: why create duplicate records that are pretty much the same? Sure, consolidate them. Why spend time describing the unique aspects of a particular provider’s representation that nobody really cares about anyway? “Provider-neutral,” why not.
But when combined with our standard (truly insane) actual real world practices, this seems to result in some big problems.
So we buy a new ebook package, and we get a bunch of records for the ebooks. Sometimes we get them for free from the content vendor. Sometimes we get them from OCLC (it’s not entirely clear to me how we bulk download the right OCLC records for a several-thousand-book package, but we definitely don’t pick ’em out manually one by one). Sometimes we might get ’em from yet another party, I’m not sure. Prior to “provider-neutral,” we’d get the record(s) for our licensed provider(s), which would individually have the URLs (MARC 856) to access the content from those providers in ’em. We’d load ’em in our catalog, which would display the 856 URLs; users would see them and click on them. Great.
Now, due to the gradual adoption of ‘provider-neutral’, when we get those same records (from any of those sources), if I understand things correctly, an increasingly large portion of them (eventually to be all of them when ‘provider-neutral’ is fully adopted) have 856 URLs for every known provider of the e-book in each record. Half a dozen, more, who knows.
If we just load these all in our catalog, then for our patrons it’s like a game of scratch lotto to actually get to the content. Click on a URL, maybe you’ll hit a paywall and a solicitation to pay them $40 for access, or maybe you’ll actually pick the one(s) we’ve actually paid for as a library, who knows!
This obviously is not acceptable. But there’s also no clear way for us to filter the MARC bibs down to only the 856 URLs we actually license. How do you even know which URL goes with which platform? They aren’t even identified using any kind of controlled vocabulary. Sometimes the platform seems to be stuffed in a $3 subfield, which is odd, since MARC defines $3 as “Materials specified,” like “first four chapters only” or “table of contents only” or what have you. “SpringerLink” is not a materials specified. And on top of that, the platform name is just thrown into the $3 in narrative form, alongside English sentences that may or may not actually describe the “materials specified”; as far as I can tell, catalogers use whatever language they feel like to identify the provider and/or platform. Different catalogers could use different words to identify the same provider and/or platform (and sometimes what is referenced seems to be a provider, other times a platform, two subtly different things). This is not suitable data for machine processing to determine whether the platform/provider specified is one we license.
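To make the machine-processing problem concrete, here’s a sketch of the kind of fragile substring matching you’d be reduced to. The 856 field data, the platform names, and the `keep_licensed` helper are all invented for illustration; a real workflow would be pulling these fields out of MARC records with some parser.

```python
# Hypothetical sketch: filtering a provider-neutral record's 856 fields
# down to the platforms a library actually licenses. Field data and
# platform names are made up for illustration.

# Each 856 represented as a dict of subfields (in reality these would
# come from a MARC parsing library).
fields_856 = [
    {"u": "http://dx.doi.org/10.0000/example", "3": "SpringerLink"},
    {"u": "http://ebooks.example.com/123", "3": "Available via EBL"},
    {"u": "http://www.example.net/book/456", "3": "Full text from ebrary"},
]

# Platforms we license -- but since $3 is free text, we have to guess at
# every spelling and variant a cataloger might have used.
LICENSED = {"springerlink", "springer link", "ebrary"}

def keep_licensed(fields, licensed):
    """Keep only 856 fields whose $3 note mentions a licensed platform.

    Fragile by design: $3 is narrative free text, not a controlled
    vocabulary, so this can never be more than substring matching.
    """
    kept = []
    for f in fields:
        note = f.get("3", "").lower()
        if any(name in note for name in licensed):
            kept.append(f)
    return kept

for f in keep_licensed(fields_856, LICENSED):
    print(f["u"])
```

Note what happens to the hypothetical EBL link: it gets silently dropped unless “EBL” happens to be in our synonym list under exactly the spelling the cataloger used. Every library ends up maintaining its own ad hoc list of name variants, which is exactly the kind of work a controlled vocabulary is supposed to make unnecessary.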
So, um, what the heck are we supposed to do? What were those behind the “provider-neutral e-monograph policy” expecting that we’d do? What are other libraries doing? Having to give users a lotto scratch card of providers and hoping they “get lucky” is a big step backwards in our user experience.
Can anyone shed any light on this? Am I misunderstanding what’s actually going on? Is what’s actually going on different than the “provider neutral” drafters expected to happen? Is anyone in the cataloging world alarmed about this?
There’s an FAQ in the Provider-Neutral document that provides some hand-waving about how libraries “might” handle these records in the future. Well, the future is now; what is actually being done? The answer talks about what libraries “using WorldCat Local” might do; surely, I hope, cooperative cataloging decisions haven’t been made to try to lock people into WorldCat Local. We don’t use it. It says “it is very likely that libraries, vendors, and OCLC will work together to provide the URLs, OCLC numbers, and vendor specific information on MARC records using the provider-neutral OCLC record as the base record.” Has that happened? Even if it has, it sounds like it means “you can pay a lot of money to a vendor for a new service you never used to have to pay for at all, to wind up with basically what you used to have without paying for it.” That’s extra money we don’t have.
Also, on MARC changes
Reading the policy document, it suggests there are some corresponding changes in MARC values, but I’m confused about what they are. It says:
Therefore, we have written a Discussion Paper to MARBI to add two new values in the fixed field byte for “Form of item” across all formats, for online access and for direct access. Currently byte 008/23 “s” is used for records in the “Books” format and in most of the other formats as well; byte 008/29 “s” is used for records in the “Maps” and “Visual materials” formats. If our recommendations in the form of a proposal to MARBI are successful, then code “s” for electronic will be replaced by the two new values, and code “s” will be made obsolete.
Did this happen? If so, what are the two new values, and are they documented anywhere?
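For anyone else trying to follow the byte-position talk above: “008/23” and “008/29” are just fixed character offsets into the 008 field. A minimal sketch, using an invented sample 008 string and only the positions the quoted policy text itself gives (23 for Books, 29 for Maps and Visual materials):

```python
# Sketch: reading "Form of item" out of a MARC 008 fixed field.
# Per the policy quote above, Books-format records carry it at
# position 23; Maps and Visual materials at position 29.

FORM_OF_ITEM_POS = {"books": 23, "maps": 29, "visual": 29}

def form_of_item(field_008, fmt):
    """Return the Form of item code for a given format's 008 string."""
    pos = FORM_OF_ITEM_POS[fmt]
    return field_008[pos] if len(field_008) > pos else " "

# An invented 40-character Books-format 008 with "s" (electronic,
# under the pre-change coding) at position 23:
sample = "100101s2010    xxu     s     000 0 eng d"
print(form_of_item(sample, "books"))  # prints "s"
```

If the proposal went through, code here would also have to know the record’s cataloging vintage, since an older “s” and whatever replaced it would mean the same thing, which is part of why obsoleting fixed-field codes is such a headache downstream.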
I can’t help but worry that this is one more nail in the coffin of our cooperative cataloging enterprise. It was a noble endeavor, but it’s just so, so broken. Almost all libraries pay for expensive vendor processing services on top of our theoretically cooperatively cataloged records (additional processing whose results cannot be shared cooperatively), and we still end up with (very expensive) data that is not actually sufficient to power the systems that serve our users. And things seem to be getting worse, not better. For a while people have been saying about cooperative cataloging (and cataloging in general), “if we don’t fix this soon, it’ll be too late.” I worry we may already be past “too late,” whether we noticed or not. This stuff is a mess; it’s not efficient; it doesn’t serve our users; it is not going well.