Eric Lease Morgan asks on code4lib:
I heard someplace recently that APIs are the newest form of vendor
lock-in. What’s your take?
My reply (expanded a bit from my listserv post):
When they are custom vendor-specific APIs and not standards-based APIs, they can definitely function that way. Still, I’m not sure whether even a vendor-specific API is more or less lock-in than NOT having an API. On the one hand, you will start to have software written against the vendor-specific API, which won’t work without changes if you switch vendors. But on the other hand, with SFX and Umlaut, for instance, Umlaut does so much more than SFX, and the SFX adapter piece is such a small part of it, that in that case, for us at least, having SFX with an API and Umlaut on top of it definitely makes it _easier_ for us to switch link resolvers without disrupting the services built on top.
Standards: which we don’t do well at
But really, what you want is standards-based APIs, not vendor-specific APIs. That would give you the best of all worlds. There are a couple of challenges that keep us from getting there, though. One is that the library community, historically, is, well, pretty AWFUL at writing standards. We come up with standards that don’t actually accomplish what they were intended to accomplish, are too complicated for anyone to implement right (on either the producer or consumer side), and leave so much wiggle room that someone can claim to support the standard but not in a way that any other software will ever understand. (NCIP, anyone?)
So there are a couple of ways to try to get better at this. One is definitely looking outside the library world for standards to use. But unlike some code4libbers, I don’t think (from my experience) that’s always possible or easy. We have problems that, while not entirely foreign to the larger world, aren’t as high a priority for most of the non-library world, meaning it doesn’t yet have robust standards solutions for them. However, especially when standards are extensible (as XML-based ones often are), you can sometimes start with a general standard and extend it for the library space.
Standards based on, not preceding, practice
Secondly, instead of creating standards before anyone has actually tried solving the problem the standard is meant to solve (as we often seem to do), the BEST standards are created by generalizing/abstracting from existing best practices. A buncha people try it first, you see what works and what doesn’t, you see what the actual use cases and needs are, you take the best out of what’s been done, and you standardize it. But doing it this way means you need to go through a period of vendor- or product-specific APIs before you can get to the standard. The library world is still immature in developing good software infrastructure; we’re going to need to go through some more pain for a while, no way around it.
But another problem in all of this is that vendors may not have the interest OR the in-house expertise to actually provide standards-based APIs. The APIs we often get now from vendors, frankly, are kind of kludgey, and do not fill me with confidence that the vendor actually has the proper staff or resources allocated to create good standards-based APIs — which, definitely, takes more time than creating a kludgey vendor-specific one-off. Or maybe the vendor actually is uninterested in this because they want lock-in. Or maybe it’s just the case that the quality of your APIs doesn’t affect your sales at all, so it doesn’t make (short-term, at least) business sense to do it well. (Heck, the _presence_ of an API has only just begun to affect sales, and libraries aren’t yet good enough at judging API quality, so even a crappy API is probably ‘good enough’ for sales.)
Open source, community work
One way out of this is definitely open source. We’ll work out the best practices and standards ourselves, and then start insisting that vendors follow them. The DLF-DI API is perhaps one example of an attempt at this, created by generalizing from the experience of library developers. But the library developer community is also small, and generally fairly inexperienced. Creating APIs is done best by experienced developers who understand what’s going to make the API usable or not.
But, anyway, one step at a time. I firmly believe that even vendor-specific kludgey APIs are better than no APIs at all — we learn how to do better by trying.
It’s also worth pointing out, as some subsequent commenters on that thread did, that the application consuming an API bears some responsibility here. As much as possible, you need to abstract out the API connector code, so you can easily switch the app between multiple APIs, so long as they all offer more or less the same data and capabilities (which certainly isn’t guaranteed, admittedly). This too takes more time, but is do-able. Among the software I work on, Umlaut manages to do it pretty well; Xerxes does not. This is partly because the more focused and limited function of a link resolver, compared to a federated search engine, made it easier to do with Umlaut. And I guess half of the SFX API is more or less standards-based: OpenURL.
As a result, even though both SFX and Metalib have vendor-specific APIs, our use of the SFX API, in my opinion, lessens our vendor lock-in, while our use of the Metalib API increases it.
In this case, this was mostly due to factors outside our control. But it also can definitely depend on how well you’ve architected your client code, to abstract out the API connectors. Sometimes I feel like this is heresy in code4lib with its “just get it done” ethos, but good, well-architected code matters.
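To make the connector-abstraction idea concrete, here’s a minimal sketch of it in Python. All the names are hypothetical (the real Umlaut code is Ruby on Rails and structured quite differently); what it shows is the shape of the pattern: application code depends only on an abstract interface, while each vendor-specific API lives in its own small adapter — so swapping link resolvers means swapping one adapter, not rewriting the app. The SFX-style adapter also illustrates the “half standards-based” point, since the citation goes to the vendor as an OpenURL 0.1 query string.

```python
from abc import ABC, abstractmethod
from urllib.parse import urlencode


class LinkResolver(ABC):
    """What the application needs from ANY link resolver, vendor-agnostic."""

    @abstractmethod
    def fulltext_targets(self, citation: dict) -> list:
        """Return full-text URLs for a citation (genre, issn, volume, ...)."""


class SfxStyleResolver(LinkResolver):
    """Adapter for a hypothetical SFX-like API. The standards-based half:
    the citation is serialized as an OpenURL 0.1 query string."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def openurl(self, citation: dict) -> str:
        # OpenURL 0.1 keys like genre/issn/volume/spage go straight on the URL.
        return self.base_url + "?" + urlencode(citation)

    def fulltext_targets(self, citation: dict) -> list:
        # A real adapter would fetch self.openurl(citation) and parse the
        # vendor's XML response; stubbed out here.
        return []


class StubResolver(LinkResolver):
    """A second backend (e.g. for tests). Swapping it in requires no change
    to application code -- which is the whole point."""

    def fulltext_targets(self, citation: dict) -> list:
        return ["http://example.org/fulltext"]


def render_links(resolver: LinkResolver, citation: dict) -> list:
    """Application code: depends only on the abstract interface."""
    return resolver.fulltext_targets(citation)


citation = {"genre": "article", "issn": "1234-5678", "volume": "10", "spage": "1"}
sfx = SfxStyleResolver("http://resolver.example.edu/sfx")
print(sfx.openurl(citation))
print(render_links(StubResolver(), citation))
```

The trade-off is exactly the one described above: the abstraction layer costs extra time up front, and it only works when the backends really do expose comparable data and capabilities.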