Ex Libris URM — post-ILS ILS?

I am at the ELUNA (Ex Libris users’ group) conference, and just saw a presentation on Ex Libris strategy from Oren Beit-Arie, chief strategy officer of Ex Libris, and Catherine [someone], URM project manager.

I was quite impressed. The URM strategy is basically Ex Libris’ vision for a new kind of ILS (post-ILS ILS?). [I talked about this before, after last year’s ELUNA.] I hope they make the slides and such from this presentation public, because I can’t quite cover what they did. They showed basically a structural diagram of the software (fairly consistent with what I wrote on future directions of library systems), and some mock-ups of staff user interfaces for workflow.

But my basic summary is: Yes! This is indeed a vision for an ILS that makes sense. They get it. I imagine most code4libbers, if they saw this presentation, would agree that, wow, yeah, this is the way an ILS should actually work: supporting staff workflow in an integrated way that actually makes sense; modular and componentized; full of APIs and opportunities for customer-written plugins; talking to various third-party software, from library vendors of various classes to ERP systems, and so on.

The vision was good. What’s the execution going to be?  So enough praise, let’s move on to questions, concerns, and some interesting implications.

Timelines?

The timelines given were pretty ambitious. Did they really mean to say that those mock-ups of staff interfaces Catherine showed are planned to be reality by the end of 2010? The software is ambitious enough to begin with, but on top of that the mock-ups shown were heavily dependent on being able to talk to external software via web services, and getting other people’s software, which may or may not have those services available, to do what’s needed in that time too… I’m a bit skeptical. [And given the staffing required to pull this off, and based on pricing for other products… I would have to predict that end pricing is going to be in the mid six figures.]

On top of that, when Oren supplied his timeline there seemed to be a bit of sleight of hand going on that confused me. He said that the next version of SFX was expected at the end of 2009, and with that lit up a bunch of boxes on his structural diagram that he said this release of SFX would fulfill. If SFX is really going to fill those boxes in the integrated and modular architectural vision he outlined (with rationalized workflow and data management no longer based on the silo borders that exist for historical reasons, borders SFX definitely exhibits; nor is SFX known for a staff interface that supports rational workflow)…

…then either SFX is going to become quite different software than it is now (by the end of 2009?), or the execution is going to be significantly less than the vision offered.

Network-level metadata control?

Part of the vision involved a network-level metadata store, with individual libraries linking holdings to truly shared metadata records. (Catherine at one point said “MARC… er… descriptive records.” That was an actual slip of the tongue, not one made intentionally for effect; I suspect she intended to avoid saying “MARC” at all, to avoid bringing up that whole issue… hm.) The example used was that 4,200 libraries use essentially the same record for Freakonomics, they all manage it separately, and when they enhance it their enhancements are seldom shared, which makes no sense.

We all know this vision makes sense. We also all know that this is Ex Libris challenging OCLC’s bailiwick. And, I will say with some sadness, the fact that this vision sounds so enticing and is so different from what we’ve got is kind of an indictment of OCLC’s stewardship of what they do. This is how OCLC should be working, we all know. Now Ex Libris has a vision to make it work.

How is this going to interfere with OCLC? Where are they going to get all these records from? In the flowchart of where these records would come from, libraries were identified. Can libraries legally share these records with an Ex Libris central metadata store? Will OCLC let them (not if they can help it!)? The screenshots of staff workflow imply that when a new acquisition comes in (or really, even at the suggestion/selection level), a match will immediately be made to a metadata record in the central system—this implies the central system will have metadata records for virtually everything (i.e., like Worldcat). Can they pull this off?

If they DO pull it off, it’ll be a great system—and it will divide library metadata into several worlds: some libraries using a central metadata system provided by a for-profit vendor, others using the OCLC cooperative, others using… what? It’ll be sad to me to fragment the shared cataloging corpus like this, dividing it along library ‘class’ lines, and surrendering what had been owned by the collective library community through a non-profit cooperative to a for-profit vendor.

On the other hand, lest we forget, the shared metadata world really is already divided on “class” lines–many libraries cannot afford to live in the OCLC world (although I suspect those same libraries will not be able to afford to live in the Ex Libris shared metadata world either). And given that OCLC claims to be the non-profit cooperative representing the collective interests of the library sector, it is all the more distressing that it is not succeeding in supplying the kind of shared metadata environment that a for-profit vendor is now envisioning to challenge them with.

True Modularity?

Oren suggested that their vision was openness, integration, and modularity. The implication, to me, is that I should be able to mix and match components of this envisioned URM architecture with non-Ex Libris components (proprietary or open source).

Is this really going to happen? As I in fact said last year after the introduction of the URM strategy, this strategy is technically ambitious even without that kind of true mix-and-match modularity. To add that in, in a realistic way, makes it even more ambitious. And to what extent does it really serve Ex Libris’ business interests? (I’m not suggesting they are misleading us about the goal, but when your plan is too ambitious to meet the timelines you need to stay in business… what’s going to be the first thing to drop?)

For instance, to go back to our metadata discussion: if Worldcat (or LibLime, or anyone else) provided a central metadata repository with the kind of functionality envisioned there, and further provided a full set of web services (both read and write) for that functionality… could I use Worldcat in this URM vision instead of the Ex Libris centralized metadata repository? (By 2010?) Really?

For another example, Primo is in some ways written quite satisfactorily to be interoperable with third-party products. But what if I want to buy just the “eShelf” function of Primo (because it’s pretty good), and use someone else’s discovery layer with it? What if I want to buy Primo without the eShelf and use some other product for personal collection/saved record/eShelf functionality? Right now I can’t. How truly “modular mix-and-match” something is depends on where you draw the lines between modules, doesn’t it?

[If Ex Libris really were interested in prioritizing this kind of mix-and-match modularity, I’d encourage them to explore using the Evergreen OpenSRF architecture in an Evergreen-compatible way. But, yes, to do this in a real way would take additional development resources in an already ambitious plan.]

[And in Primo’s defense, if I wanted to use Primo with a third-party “eShelf”, or use the Primo eShelf with a third party discovery layer, Primo’s configuration and customizability would _probably_ support this.  The former might even make financial sense with no discount—buying Primo just for the eShelf obviously would not.  As more and more complex features are added, however, will they be consistently modularized to even allow this technically? It’s definitely not a trivial technological project.]

May you live in interesting times

If nothing else, it’s disturbingly novel to be presented with a vendor who seems to really get it. They are talking the talk (and mocking up the interfaces) that match where library software really does need to go.

Will they pull it off?  If they do, will it really be open in the modular mix-and-match way we all know we need to avoid vendor lock-in and continue to innovate?  Will we be able to afford it? We will see.

I would think the strength of this vision would light a fire under their competitors (including OCLC, and maybe the open source supporters too), spurring more rational vision-making from the rest of the library industry, and making it more clear (if it wasn’t already) that simply keeping on going the way you have been going is not an option. I hope so. (On the other hand, Ex Libris is clearly targeting a library customer field that can afford it, in several definitions of ‘afford’. Do other vendors think they can keep on keeping on by targeting different markets?)


Free Covers? From Google?

Tim Spalding writes about Google Book Search API, cover availability, and terms of service:

In NGC4Lib:

Basically, I’ve been told [I can’t help but wonder: Told by whom? -JR] that I was wrong to promote the use of GBS for covers in an OPAC, that the covers were licensed from a cover supplier (ten guesses!) and should not have allowed this, and that the new GBS API terms of service put the kibosh on the use I promoted.

And on his blog:

The back story is an interesting one. Soon after I wrote and spoke about the covers opportunity, a major cover supplier contacted me. They were miffed at me, and at Google. Apparently a large percentage of the Google covers were, in fact, licensed to Google by them. They never intended this to be a “back door” to their covers, undermining their core business. It was up to Google to protect their content appropriately, something they did not do. For starters, the GBS API appears to have gone live without any Terms of Service beyond the site-wide ones. The new Terms of Service is, I gather, the fruit of this situation.

Now, I am not surprised. As soon as I heard the Google staff person on the Talis interview implying that Google had no problem with use of cover images in a library catalog application, I knew that something would come through the pipeline to put the kibosh on that. Not least because I too had had backchannel communications with a certain Large Library Vendor, about Amazon, where they revealed (accidentally, I think) that they had had words with Amazon about Amazon’s lack of terms of service for their cover images. Even then, I wondered exactly what the legal situation was, in the opinion of this Large Vendor, of Amazon, or of any other interested parties.

More questions than answers

But here’s the thing. When I read GBS’s new Terms of Service looking for something to prevent library catalog cover image use… I don’t see anything. And if there WERE going to be something, what the heck would it even look like?

Amazon tried to put the kibosh on this by having their terms of service say “You can only use our whole API if the primary purpose of your website is to sell books from Amazon.” That makes not just cover usage, but really any use of the Amazon API by libraries, pretty much a violation. If Google goes _that_ route, it’d be disastrous for us.

But I doubt they would go that route, even just to limit cover image usage, because, first of all, they intended from the start for this service to be used by libraries, and had development partners from the start that included libraries and library vendors. Secondly, what would the equivalent be for Google? You can only use this service if your primary business is sending users to Google full text? Ha! That’s nobody’s primary business!

What terms could possibly restrict us anyway, without restricting what Google was trying to do in the first place?

And besides, the whole point of the GBS API having cover images was to let people put a cover image next to a link to GBS. The utility of this is obvious.
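
To make that concrete, here is a rough sketch (Python) of the cover use the API seems designed for: ask the Dynamic Links endpoint about an ISBN, and get back a thumbnail to display next to a link into GBS. The endpoint, parameters, and field names are as I recall the public docs, so treat the details as assumptions to verify:

```python
# Minimal sketch: look up a GBS cover thumbnail (and link) for an ISBN via
# the "Dynamic Links" endpoint. Endpoint and field names are as I recall
# them from the public docs -- verify before relying on this.
import json
import urllib.request

def gbs_cover(isbn):
    """Return (thumbnail_url, info_url) for an ISBN, or (None, None)."""
    url = ("https://books.google.com/books"
           "?bibkeys=ISBN:%s&jscmd=viewapi&callback=cb" % isbn)
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The response is JSONP: cb({...}); -- strip the padding for plain JSON.
    payload = body[body.index("(") + 1 : body.rindex(")")]
    info = json.loads(payload).get("ISBN:%s" % isbn, {})
    return info.get("thumbnail_url"), info.get("info_url")

print(gbs_cover("006073132X"))  # Freakonomics, for example
```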

But isn’t that exactly what we library catalog users are doing, no more and no less? So what could their terms say?

“You are allowed to use these covers next to a link to GBS, but ONLY if you are not a library or someone else who is the target market for Large Library Market Vendors. You can only use it if a Large Library Market Vendor is NOT trying to sell you a cover image service.”

Or, “you can only use it if you’re not a library.”

Can they really get away with that? Just in terms of PR, especially since, unlike Amazon, they get most of their content from library partners?

I know the Major Library Vendors want to keep us on the hook for paying them Big Money for this service. And they’re the same ones selling these images to Google. But it’s unclear to me what terms could possibly prevent us from using the covers, while allowing the purposes that Google licensed them for in the first place.

And what’s the law, anyway?

Then we get to the actual legal issues. To begin with, I’m not sure a “terms of service” that you do NOT in fact even need to “click through” to use the service—that you never had to have READ—is enforceable at all. But they could fix that by requiring an API key for the GBS API, and making you click through to get the key.

But the larger issue is that the legal situation around cover image usage is entirely unclear to begin with.

I remain very curious what the Large Library Vendor’s agreements with the publishers (who generally own the intellectual property in cover images) are, and what makes them think they have an exclusive right to provide libraries with this content. It also remains an unanswered question exactly what “fair use” rights we have with cover images. Of course, that’s all moot if you have a license agreement with your source of cover images; that trumps fair use (thus the “terms of service”; but again, I don’t see anything in the terms of service to prevent cover image use by libraries).

Tagging and motivation in library catalogs?

Eh, this comment was long enough I might as well post it here too, revised and expanded a bit. (I’ve been flagging on the blogging lately.) Karen Schneider thinks about “tagging in a workflow context”:

Tagging in library catalogs hasn’t worked yet for a number of reasons…

Karen goes on to discuss much of the ‘when’ of tagging, but I still think the ‘why’ of tagging is more relevant. Why would a user spend their valuable time adding tags to books in your library catalog?

I think the vast majority of successful tagging happens when users tag to aid their OWN workflow, generally to keep track of things. You tag on delicious to keep track of your bookmarks. You tag on LibraryThing to organize your collections. The most successful tagging isn’t done to help _other_ people find things, but to keep track of things yourself–at least not at first, not the tagging that builds the successful tag ecology. In most cases of a successful tagging community where people do tag to help others find things, I’d suggest it’s because it somehow benefits them personally to help people find things. Such as, maybe, tagging your blog posts on wordpress.com because you want others to find your blog posts–still a personal benefit.

A successful tag ecology is generally built on tagging actions that serve very personal interests, interests that do not need the successful tagging ecology on top of them–interests served even if you are the only one tagging. The successful tagging ecology that builds out of it–and that goes on to provide collective benefit that was not the original intent of the taggers–is an epiphenomenon.

Amazon might be a notable exception to this hypothesis, perhaps because it was already such a universally used service before tagging (unlike our library catalogs). I would be interested to understand what motivates users to tag in Amazon. Anyone know of anyone who’s looked into this? It’s also possible that if Amazon’s tags are less useful, it is in fact because of this lack of personal benefit from tagging.

So what personal benefit can a user get from tagging in a library catalog? If we provided better “saved records” features, perhaps: keep track of books you’ve checked out, books you might want to check out, etc. But I’m not sure our users actually USE our catalogs enough to find this useful, no matter how good a “saved records” feature we provide. In an academic setting, items from the catalog no longer necessarily make up a majority of a user’s research space.

To me, that suggests: can we capture tags from somewhere else? My users export items to RefWorks. Does RefWorks allow tagging yet? If it did, is there a way to export (really, re-import) these tags BACK to the catalog when a user tags something? But even if so, it would be better if RefWorks somehow magically aggregated tags from _different_ catalogs for the same work. That relies on identifier issues we haven’t solved yet, though. If our catalogs provided persistent URLs (which they usually don’t, which is a tragedy), users COULD tag in delicious if they wanted to. Is there a way to scan delicious for any tags involving your catalog’s URLs, and import those back in?
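
To sketch that last idea: delicious’s v2 JSON feeds keyed a URL’s public bookmarks by the MD5 hash of the URL, so in principle you could harvest tags per record URL. The feed endpoint and the “t” (tags) field here are from memory, so treat them as assumptions:

```python
# Rough sketch of "scan delicious for tags on your catalog's URLs".
# Endpoint and field names are assumptions to verify against the feed docs.
import hashlib
import json
import urllib.request

def delicious_tags(record_url):
    """Collect all public tags applied to one catalog record URL."""
    url_hash = hashlib.md5(record_url.encode("utf-8")).hexdigest()
    feed = "http://feeds.delicious.com/v2/json/url/%s" % url_hash
    with urllib.request.urlopen(feed) as resp:
        bookmarks = json.load(resp)
    tags = set()
    for bookmark in bookmarks:
        tags.update(bookmark.get("t", []))  # "t" = tag list, per the v2 feeds
    return tags

# Hypothetical persistent record URL -- the tragedy is most catalogs lack one.
print(delicious_tags("http://catalog.example.edu/record/12345"))
```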

In addition to organizing one’s research and books/items of interest, are there other reasons it would serve a patron’s interest to tag, other things they could get out of it? A professor might tag books of interest for their students, perhaps (not that most professors are looking for more technological ways to spend time helping students, but some are). And librarians themselves might tag things with non-controlled-vocabulary topic terms they know would be of use to a particular class or program or department. Can anyone think of any other reasons tagging could be of benefit to a user? (Not whether a successful tagging ecology would be of collective benefit, but what benefits an individual user can get from assigning tags in a library catalog.)

Worldcat covers a much larger share of my academic users’ research universe than my own catalog does. And Worldcat has solved the “aggregating different copies of this work from different libraries” problem to some extent. Which is why it would make so much sense for Worldcat to offer a tagging service–one that could be easily incorporated into your own local catalog for both assigning and displaying tags (if not for searching), à la LibraryThing. It is astounding to me that OCLC hasn’t provided this yet. It seems to be very low-hanging fruit (a tagging interface on worldcat.org with a good API is not rocket science) that is worth a try.

Can licensing make an API useless?

As I discussed in a previous essay, it’s the collaborative power of the internet that makes the open source phenomenon possible. The ability to collaborate cross-institution and develop a ‘community support’ model is what can make this round of library-developed software much more successful than the ‘home grown’ library software of the 1980s.

So how does this apply to APIs? Well, library customers are finally demanding APIs, and some vendors are starting to deliver. But the point of an API is that a third party will write client code against it. If that client code is only used by one institution then, as I discussed, it’s an inherently riskier endeavor than client code that’s part of a cross-institutional collaborative project. For all but the smallest projects involving API client code, I think it is unlikely to be a wisely managed risk to write in-house software that will only be used or seen by your local institution.

The problem comes when a vendor’s licenses, contracts, or other legal agreements require you to do exactly that, by preventing you from sharing the code you write against the API with other customers.

On the Code4Lib listserv, Yitzchak Schaffer writes:

here’s the lowdown from SerSol:

“The terms of the NDA do not allow for client signatories to share of any information related to the proprietary nature of our API’s with other clients. However, if you would like to share them with us we can make them available to other API clients upon request. I think down the road we may be able to come up with creative ways to do this – perhaps an API user’s group, but for now we cannot allow sharing of this kind of information outside of your institution.”

To me, this seriously limits the value of their APIs. So limiting, that I am tempted to call them useless for all but the simplest tasks. For any significant development effort, it’s probably unwise for an institution to proceed under these terms. And that’s assuming an individual institution even has the capacity to do so–the power of internet collaboration is that it increases our collective capacity by making that capacity collective. Both that increased capacity and the managed risk of a shared support scenario require active collaboration between different institutions. Even SerSol’s offer to perhaps make a finished product available to other clients “upon request” is not sufficient; active and ongoing collaboration between partners is required.

If I’m involved in any software evaluation process where we evaluate SerSol’s products, I am going to be sure to voice this opinion and its justification. If any existing SerSol API customers are equally disturbed by this, I’d encourage you to voice that concern to SerSol. Perhaps they will see the error of their ways if customers (and especially potential, not-yet-signed customers) complain.

Ross Singer notes that this is especially ironic when SerSol claims their APIs are “standards based” (http://www.serialssolutions.com/ss_360_link_features.html). What proprietary information could they possibly be trying to protect (from their own customers!)?

Open source, support status, and risk management

Deciding whether to go with a particular open source product is an exercise in risk management. To be sure, let’s be clear: deciding whether to go with a particular proprietary product is also an exercise in risk management. (And really, most organizational management decisions probably are, but what do I know; I’ve never been a manager and don’t have an MBA.)

Evaluating the risk level of an open source product is kind of new terrain for some in the library world. It is comforting to remember that there are some aspects of evaluation that really aren’t much different for open source software than for any other software — for instance, looking at whether the product has the features you need, and how well it works.

There are other aspects that need to be approached differently for open source. In this essay, I’m going to look at just one of them, one that is cause for particular concern among some people: open source support models—how you get support for an open source product, and what you are risking in terms of support with an open source product. All open source products/projects are not equal here. In trying to explain to others how to approach risk management related to support options for a particular open source product, I’ve found it useful to talk about three situations or statuses an open source project may have with regard to support.

Think you can use the Amazon API for library service book covers?

Update 19 May 2008: See also Alternatives To Amazon API, including prudent planning for if Amazon changes its mind.

Update 17 Dec 2008: This old blog post is getting a LOT of traffic, so I thought it important to update it with my current thoughts, which have kind of changed.

Lots of library applications out there are using Amazon cover images, despite the ambiguity (to be generous; or you can say prohibition if you like) in the Amazon ToS.  Amazon is unlikely to care (it doesn’t hurt their business model at all). The publishers who generally own copyright on covers are unlikely to care (in fact, they generally encourage it).

So who does care? Why does Amazon’s ToS say you can’t do it? Probably the existing vendors of bulk cover images to libraries. And, from what I know, my guess is that one of them had a sufficient relationship with Amazon to get them to change their terms as below. (After all, while Amazon’s business model isn’t hurt by you using cover images for your catalog, they also probably don’t care too much about whether you can or not.)

Is Amazon ever going to notice and tell you to stop? I doubt it. If that hypothetical existing vendor notices, do they even have standing to tell you to stop? Could they get Amazon to tell you to stop? Who knows.  I figure I’ll cross that bridge when we come to it.

Lots of library apps are using Amazon cover images, and nobody has formally told them to stop yet. Same for other Amazon Web Services other than covers (the ToS doesn’t just apply to covers).

But if you are looking for a source of cover images without terms-of-service restrictions on using them in your catalog, a couple of good ones have come into existence lately. Take a look at CoverThing (with its own restrictive ToS, but not quite the same restrictions) and OpenLibrary (with very few restrictions). The Google Books API also lets you find cover images, but you’re on your own figuring out which uses of them are allowed by its confusing ToS.
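
For example, OpenLibrary serves covers by ISBN with a simple URL pattern. A minimal sketch; the default=false parameter (which makes the service return a 404 instead of a blank image when it has no cover) is as I recall the docs:

```python
# Minimal sketch: fetch a cover from Open Library's covers service by ISBN.
import urllib.error
import urllib.request

def openlibrary_cover(isbn, size="M"):
    """Fetch cover JPEG bytes for an ISBN from Open Library, or None."""
    # Sizes are S/M/L; default=false turns the blank-image fallback into
    # a 404 when no cover exists (parameter as I recall the docs).
    url = ("http://covers.openlibrary.org/b/isbn/%s-%s.jpg?default=false"
           % (isbn, size))
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.read()
    except urllib.error.HTTPError:
        return None  # no cover available for this ISBN

cover = openlibrary_cover("006073132X")  # Freakonomics, for example
```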

And now, to the historically accurate post originally from March 19 2008….

Think again.

http://listserv.nd.edu/cgi-bin/wa?A2=ind0803&L=ngc4lib&T=0&O=D&X=77132057060E3A8667&P=6033

Jesse Haro of the Phoenix Public Library writes:

Following the release of the Customer Service Agreement from Amazon this past December, we requested clarification from Amazon regarding the use of AWS for library catalogs and received the following response:

“Thank you for contacting Amazon Web Services. Unfortunately your application does not comply with section 5.1.3 of the AWS Customer Agreement. We do not allow Amazon Associates Web Service to be used for library catalogs. Driving traffic back to Amazon must be the primary purpose for all applications using Amazon Associates Web Service.”

There are actually a bunch of reasons library software might be interested in AWS, but the hot topic is cover images. If libraries could get cover images for free from AWS, why pay for the expensive (and more technically cumbersome!) Bowker Syndetics service to do the same? One wonders what went on behind the scenes to make Amazon change their license terms in 2007 to produce the above. I am very curious where Amazon gets their cover images and under what licensing terms, if any. I am curious where Bowker Syndetics gets their cover images and on what licensing terms–whether Bowker has an exclusive license or contract with publishers to sell cover images to libraries (or to anyone other than libraries? I’m curious what contracts Bowker has with whom). All of this I will probably never know unless I go work for one of these companies.
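
For reference, this is roughly the kind of ECS request library apps were making for covers, which is exactly the use the response above rules out. The endpoint and parameters follow the pre-request-signing REST docs of the era, and AWSACCESSKEY is a placeholder:

```python
# Sketch of an ECS 4.0 ItemLookup for cover images, circa 2008 (later
# versions required signed requests). Details are from memory -- verify.
import urllib.request
import xml.etree.ElementTree as ET

def amazon_cover_url(isbn, access_key="AWSACCESSKEY"):  # placeholder key
    """Return the MediumImage URL from an ECS ItemLookup, or None."""
    url = ("http://ecs.amazonaws.com/onca/xml?Service=AWSECommerceService"
           "&AWSAccessKeyId=%s&Operation=ItemLookup&ItemId=%s"
           "&IdType=ISBN&SearchIndex=Books&ResponseGroup=Images"
           % (access_key, isbn))
    tree = ET.parse(urllib.request.urlopen(url))
    # Match on local names to sidestep the versioned XML namespace.
    for element in tree.iter():
        if element.tag.endswith("MediumImage"):
            for child in element:
                if child.tag.endswith("URL"):
                    return child.text
    return None
```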

I am also curious about the copyright status of cover images and cover image thumbnails in general. Who owns copyright on covers? The publisher, I guess? Is using a thumbnail of a cover image in a library catalog (or online store) possibly fair use that would not need copyright holder permission? What do copyright holders think about this? This we may all learn more about soon. There is buzz afoot about other cover image services various entities are trying to create with an open access model, without any license agreements with publishers whatsoever.

LCCN permalink

LCCN permalink. A pretty simple thing, but one that makes so much sense. That LC is providing it encourages one to hope that similarly sensible, but not quite as simple, things may be in the pipeline too.

Also interesting to note that, as far as I’m aware, this means LC has beaten OCLC to having a persistent URI for an (approximated) manifestation that resolves to a publicly accessible, structured, machine-actionable bib record (in a choice of formats). That LC did it first surely has at least as much to do with business models as with tech. It’s actually gratifying and surprising that LC has done it. Stuart Weibel has surely taken note.
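
The permalink pattern itself is trivial to construct, which is part of its charm. A small sketch, following LC’s published LCCN normalization rules (simplified here) and the announced lccn.loc.gov pattern:

```python
def lccn_permalink(raw_lccn):
    """Build an LCCN Permalink, per LC's normalization rules (simplified)."""
    lccn = raw_lccn.strip().replace(" ", "")
    if "-" in lccn:
        prefix, serial = lccn.split("-", 1)
        lccn = prefix + serial.zfill(6)  # e.g. "85-2" -> "85000002"
    return "http://lccn.loc.gov/" + lccn

print(lccn_permalink("85-2"))        # http://lccn.loc.gov/85000002
print(lccn_permalink("2003556443"))  # http://lccn.loc.gov/2003556443
```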

ISSN metadata access

Did you guys know that issn.org sells Z39.50 access to the ISSN registry/portal? I didn’t.

What might you want to use this for? Well, if the “linking ISSN” is deployed successfully, and the information is successfully included in what’s available from the ISSN portal, then this is a machine-actionable source of correspondences between ISSNs that really represent the same title in different formats. I trust that many of my readers can think of all sorts of uses for being able to embed that information in their various discovery applications.
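
If you did license access, the client side is well-trodden ground. A hypothetical sketch using PyZ3950 (a real Python Z39.50 client of this era); the host, port, and database name are placeholders that would come from your issn.org agreement:

```python
# Hypothetical: querying the ISSN register over Z39.50 with PyZ3950.
# Host, port, and database name below are placeholders, not real values.
from PyZ3950 import zoom

conn = zoom.Connection("z3950.issn.example.org", 210)  # placeholder target
conn.databaseName = "ISSN"                # placeholder database name
conn.preferredRecordSyntax = "USMARC"

query = zoom.Query("CCL", "issn=0028-0836")  # Nature, for example
results = conn.search(query)
for i in range(min(5, len(results))):
    print(results[i])   # MARC records, hopefully carrying linking-ISSN data
conn.close()
```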

OCLC xISSN can also potentially provide some of this data in machine-actionable form. (I haven’t explored it yet myself.) I assume that xISSN correspondences are currently generated algorithmically/heuristically from whatever information is available in a cataloging record, as opposed to the “linking ISSN” based metadata, which presumably will be manually controlled? But then an interesting question is the cost comparison of these two services, licensed for the uses we’d want to put them to. It would be nice to have two competing metadata web services available for a change, instead of usually having NONE that do what we need.
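
Since I haven’t explored xISSN myself, take this sketch with a grain of salt: the URL pattern follows OCLC’s other xID services, and the getForms method and response shape are my best guess from their docs:

```python
# Sketch against OCLC's xISSN service; method name and response shape are
# assumptions to verify against OCLC's xID documentation.
import json
import urllib.request

def related_issns(issn):
    """ISSNs xISSN groups with this one (same title, different formats)."""
    url = ("http://xissn.worldcat.org/webservices/xid/issn/%s"
           "?method=getForms&format=json" % issn)
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    issns = set()
    # Assumed response shape: {"group": [{"list": [{"issn": ...}, ...]}]}
    for group in data.get("group", []):
        for entry in group.get("list", []):
            issns.add(entry.get("issn"))
    return issns

print(related_issns("0028-0836"))  # Nature print + electronic, for example
```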

OCLC: A Cooperative

What the heck, I’m on a controversial roll here; might as well keep it up. OCLC is fond of reminding us that they are not a “vendor”, because they are a cooperative. Now, I’d say that, well, OCLC is a vendor: they are an entity in the business of selling goods and services to libraries, and they therefore have many things in common with other entities in that business (and they sell products and services that compete with those of other vendors).

At the same time, it is a very important and unique thing that OCLC is a cooperative. What that means is that OCLC is in fact owned by its customers—or at least by a significant subset of its customers: its members, including many of our employers. OCLC was created by libraries to serve the collective interests of the library community (originally a regional community, but now a national or potentially even international one). Unlike a standard commercial entity, whose fundamental reason for existing is profit for its owners (whether shareholders, founders, private equity firms, whatever), a cooperative like OCLC has as its primary foundational mission the interests of its member-customer-owners; profits are just a means of serving that basic mission. This is indeed an important thing, and I think the library community is well served to have an organization with the power and reach of OCLC that is owned by the collective library community.

But this is of course only true in reality to the extent that OCLC’s members actually have the power to exercise their governance as owners of OCLC. Which is why I’m pretty disturbed by proposals in the recent OCLC governance study to reduce the number of OCLC board members elected by members from 6 to 4 on a 15-member board. Is this in the interests of OCLC’s owners? I confess I haven’t had time to read the entire study and its rationale for this recommendation, so I’m open to hearing an argument to the contrary–but my gut reaction is: no! Would the owners (shareholders) of any ordinary company tolerate such a dilution of governance power?

I expect many of my readers will agree with me, but also share my lack of confidence that OCLC’s members–that is, our employers; that is, the administrators who make such organization-level decisions for our employers–will make sure their governance power is not diluted. Which brings us to another issue. Many of us think that OCLC sometimes (often?) does not act as if the interests of us, its members, are the primary purpose, one that organizational profits cannot take priority over. I would submit that, if that’s true, much of the blame has to be laid at the feet of OCLC’s owners, the libraries. If our administrators agreed with us about what OCLC behavior was in the community’s interest, believed it was important, and actively pushed OCLC to do what was in our interest, it would happen. We do own and control OCLC, right? So if this is not happening, blame has to be laid at the feet of libraries, not OCLC. This library-owned vendor starkly illustrates the point I was trying to make in my last post: that the change we need demands that libraries themselves take ownership of the strategic direction of libraries as a community.

Of course, this is only true so long as OCLC’s members really do govern OCLC. If and when that is no longer the case, then OCLC really will be just another vendor. And that would be a loss for libraries. I hope instead that libraries can step up and own the strategic directions they direct the vendor they own to follow.

Brain drain?

So, in the past year or so I’ve noticed a pretty astounding number of innovative, capable library technologists move from libraries to vendors. [Just some examples: Roy Tennant to OCLC. Nicole Engard to LibLime. Ross Singer to Talis. Andrew Pace to OCLC. Casey Durfee to LibraryThing. Steve Toub to Bibliocommons. There are probably more I’m not thinking of.]

Now, first let’s get this out of the way: There’s nothing inherently wrong with vendors or working for them. We do indeed need our vendors to have people who are smart and understand technology and have an idea of library futures working for them. Most of these folks have gone to work on interesting and useful projects. I understand (I think) some of the allure here, and suspect that at some point in a hopefully long library career I’ll end up working for a vendor for at least a little while.

Nonetheless, would it be safe to categorize this as a ‘brain drain’? A veritable ‘giant sucking sound’ (blast from presidential election seasons past)?

What does this mean about the library sector? What does this mean for the library sector?

Some people might assume that money is the motivating factor here. While vendors in general can probably pay higher salaries than libraries in general, I’m not sure this is mostly about money. Rather, I think people who realize the huge and potentially exciting changes that are possible and necessary in the library environment want to work in change-oriented organizations with clear strategic directions, in environments that value innovation, value these people’s work, and let them work on interesting and important projects with other smart, capable, and future-oriented colleagues.[1]

I think most readers will sadly recognize that it is exceedingly difficult to find that kind of environment working for a library.

I’ve been saying for a while that in order to achieve the change we all realize libraries need, libraries can’t just rely on vendors delivering an ‘out of the box’ solution–no matter how much we’re willing to pay. Relinquishing all responsibility for innovation to vendors is what got us to where we are today, and it’s not a pretty place. Libraries need to participate in figuring out where we are going, and in defining the strategic directions to get us there. To be sure, libraries still need vendors, of all kinds. But libraries need to step up and be partners in innovation with their vendors.

I think that without this, the prospects of change are dim. Vendors can’t do it alone, no matter how many smart people they hire. And, if libraries can’t hold on to smart future-oriented people who understand the role technology can play in creating an exciting future for us–the prospects of libraries accepting the mantle of innovation also seem dim.

When was the last time you heard about a brilliant library technology worker moving from a vendor to a library?

[1]: While already thinking about this topic, I happened to read an article in the New York Times about Yahoo/Microsoft recruiting woes:

“Engineers here want to work on tomorrow’s technology, not yesterday’s,” said Bill Demas… “If it’s perceived that Yahoo or anyone else is not focused on the future, it’s going to be very difficult to recruit top people.”