“Digital Libraries” and DLF

I attended the DLF Fall Forum a couple of weeks ago, and found it to be a very rewarding experience, even more so than I expected. It is always nice to be with so many people who are more or less on the same page when it comes to where library services are headed and the work we need to do (and are doing) to get there. I’ll provide a roundup of some interesting presentations and things learned below, but first a word on the concept of the ‘digital library’.

It is becoming increasingly clear to me that all (or anyway most) of our libraries are “digital libraries”. Every area of our library now involves digital resources and digital services, and this will only increase. From licensed ebooks to scanned books to licensed electronic scholarly content; from email reference to the OPAC to document delivery to federated search and licensed databases. There’s digital content and services everywhere. Our libraries are already digital libraries; the future is here (and probably always was, paradoxically). Our task is making them better, improving and adding digital services, improving access to and adding digital content. There is no reason to reserve the phrase ‘digital library’ for any particular programs, services, collections, or technology platforms, and there is no reason to segregate attention to our digital future or our digital present into particular organizational units (and the two had better be related if we actually plan on moving from the present to that future). Digital content, access, services, and strategy need to be coordinated across the library. The “single business”.

I am certainly not alone in thinking this way, but it’s not clear to me if it’s yet the majority perspective among either DLF-community type people or the library community in general.

So, that rant aside, how about a DLF Fall Forum 2007 roundup? As with any good conference, the best experiences are the conversations in the halls, and I enjoyed talking to colleagues I had met before and colleagues I hadn’t, including Terry Reese, Corey Harper, Jenn Riley, Bess Sadler, Emily Lynema, Tito Sierra, and Ian Mulvany (from Nature/Connotea), among others.

A common topic at DLF was Service Oriented Architecture (SOA), both the focus of several presentations and a side topic at others. SOA is basically a software engineering approach to providing the infrastructure to accomplish what I tried to describe in a blog post in much less technical terms. While I’m used to discussing these things with people in an attempt to get them a step or two forward to where I am in that blog post, I was pleased (and jealous) to see that some libraries are several steps ahead of me on this one. We definitely have a developing understanding/technology gap among libraries. (And that’s without even getting to the OCLC member/non-member gap; I’m talking about a gap among relatively well-funded large OCLC-member libraries!)
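To make the SOA idea concrete: the point is that library functions live behind narrow, independent service interfaces, and front-ends compose them without caring what system sits underneath. Here is a minimal, hypothetical sketch; every class, field, and identifier in it is illustrative, not taken from any real ILS or vendor API.

```python
# Hypothetical SOA sketch: each service exposes one narrow capability,
# and a discovery front-end composes services it does not own.
from abc import ABC, abstractmethod


class CatalogService(ABC):
    """One service: bibliographic search, nothing else."""

    @abstractmethod
    def search(self, query: str) -> list[dict]: ...


class AvailabilityService(ABC):
    """A separate service: item availability, nothing else."""

    @abstractmethod
    def is_available(self, record_id: str) -> bool: ...


class ToyCatalog(CatalogService):
    """Stand-in implementation; a real one would wrap an ILS or index."""

    def __init__(self, records: list[dict]):
        self._records = records

    def search(self, query: str) -> list[dict]:
        return [r for r in self._records if query.lower() in r["title"].lower()]


class ToyAvailability(AvailabilityService):
    def __init__(self, on_shelf: set[str]):
        self._on_shelf = on_shelf

    def is_available(self, record_id: str) -> bool:
        return record_id in self._on_shelf


def discovery_results(catalog: CatalogService,
                      avail: AvailabilityService,
                      query: str) -> list[dict]:
    """The front-end only knows the interfaces, not the implementations."""
    return [dict(r, available=avail.is_available(r["id"]))
            for r in catalog.search(query)]


records = [{"id": "b1", "title": "Digital Libraries"},
           {"id": "b2", "title": "Metadata Basics"}]
results = discovery_results(ToyCatalog(records), ToyAvailability({"b1"}), "digital")
```

The design choice worth noticing is that either implementation can be swapped out (for a different ILS, a different index) without touching the front-end, which is the loose coupling SOA is after.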

I saw more of Blacklight at DLF. I am more and more impressed with it as a solid, flexible platform for building a Solr-based library discovery/access tool. In addition to seeming like an all-around solid and generalizable tool, I like that they have a specific ‘musical’ interface (search by instrument!), and that they are integrating digital Institutional Repository content into their search–two things we’ll definitely want where I work. We need not only to stop re-inventing the wheel, but to start using the same shared wheels: we need a shared open source codebase so we can more easily share our improvements and bugfixes with each other. I think Blacklight could be such a project–if UVa would officially make it open source, which hopefully they will soon.
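For a sense of what sits behind a Blacklight-style interface, here is a sketch of the kind of request a discovery front-end sends to Solr: a full-text search plus a facet request on an instrument field. The endpoint URL and the field names (`instrument`, `format`) are assumptions for illustration, not Blacklight’s actual schema.

```python
# Build a hypothetical Solr "select" query with faceting, of the sort a
# discovery layer like Blacklight issues. Endpoint and field names are
# illustrative assumptions, not a real installation's configuration.
from urllib.parse import urlencode

SOLR_SELECT = "http://localhost:8983/solr/select"  # assumed endpoint

params = {
    "q": "sonata",                # the user's search terms
    "fq": "format:score",         # filter query: musical scores only
    "facet": "true",
    "facet.field": "instrument",  # hypothetical facet field ("search by instrument")
    "rows": 10,
    "wt": "json",                 # ask Solr for a JSON response
}
query_url = SOLR_SELECT + "?" + urlencode(params)
```

The facet counts that come back are what the interface renders as clickable limits in the sidebar; the ‘search by instrument’ feature is essentially a facet on a field the indexer populated.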

The discussion with the DLF ILS Discovery API taskforce was encouraging. It’s a pretty tricky course they are on, both technically and business/politically. But they have the right idea about where they want to head, and seem to be doing a good job of getting there.

I talked about the Bowker ISTC presentation before. A while ago I got used to the idea that there aren’t going to be ‘canonical identifiers’ for our entities of interest; our systems are going to have to deal with multiple identifiers. That doesn’t scare me anymore. But I realized while thinking about ISTC that we’re also going to be in a world with multiple, possibly conflicting decisions about the relationships between entities and the sets they form. The FRBR model groups manifestations into an expression set and (the resulting) expressions into a work set. We’re not going to have a canonical decision as to which manifestations are in an expression set either. I’m still wrapping my head around how our systems can possibly do something useful in that environment, but like it or not, I think that’s the environment they will be in (and of course already are in, to the extent that these relationship/set decisions are made–but our systems aren’t dealing with it very well!).
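One way to picture what “no canonical grouping decision” means for our systems: keep each authority’s FRBR-style grouping side by side and make every question relative to an authority. The sketch below uses made-up identifiers and authority names, and is just a data-structure illustration of the problem, not a proposed standard.

```python
# Hypothetical data: two authorities assign the same manifestations to
# expression sets, and they disagree about one of them. All identifiers
# and authority names here are invented for illustration.

# authority -> (manifestation id -> expression-set id)
groupings = {
    "authorityA": {"isbn:111": "expr:hamlet-en-1", "isbn:222": "expr:hamlet-en-1"},
    "authorityB": {"isbn:111": "expr:hamlet-en-1", "isbn:222": "expr:hamlet-en-2"},
}


def expression_set(authority: str, manifestation: str) -> str:
    """Which expression set does this authority put the manifestation in?"""
    return groupings[authority][manifestation]


def same_expression(authority: str, m1: str, m2: str) -> bool:
    """Do two manifestations share an expression -- per that authority?"""
    return expression_set(authority, m1) == expression_set(authority, m2)
```

The uncomfortable consequence is visible right in the function signatures: there is no `same_expression(m1, m2)` without naming whose decision you are trusting, which is exactly the environment the post describes.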

I found the presentation on Adapting Technology: Digital Imaging in Kabul, Afghanistan fascinating and inspiring, even though it doesn’t intersect much with my own professional work. They’ve set up folks in Afghanistan to scan national library heritage materials to the same standards that would be used in the US, and the process continues to produce digital materials at a nice clip. (I can’t find slides online, but you can read the description in the program.)

Another seriously amazing project reported on is the crazy amount of geographic information viewable through a web browser at PhillyHistory.org. They did it by hiring a local medium-sized software firm, and they’ve done some amazing stuff. It makes me think–here at my library, we have all sorts of licensed and free GIS data. For patrons to use it, they’ve got to either come to the library or install ArcView or another GIS program on their own computer (we do have a limited site license for ArcView). Why can’t we make ALL those layers available through a Google Maps-like interface instead? The licensed stuff we can put behind authentication. Aren’t there open source tools to provide geographical display in the browser à la Google Maps? Somebody just has to get a grant to take one of those tools and implement it with library geographical data sets. It would be huge.
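There are indeed open source browser-mapping tools (OpenLayers, for one, existed at the time of writing), and the Google Maps-style display they provide rests on a simple, well-known idea: pre-cut map imagery into 256×256 tiles in the Web Mercator projection and address each tile by (x, y, zoom). The standard tile-addressing math is sketched below; it is generic to this tiling scheme, not specific to any one product or dataset.

```python
# Standard "slippy map" tile math: map a WGS84 lat/lon to the (x, y)
# indices of the Web Mercator tile containing it at a given zoom level.
import math


def latlon_to_tile(lat: float, lon: float, zoom: int) -> tuple[int, int]:
    """Return (x, y) tile indices for a coordinate at a zoom level."""
    n = 2 ** zoom  # the world is an n-by-n grid of tiles at this zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

A tile server (or a pile of pre-rendered files) keyed by zoom/x/y is all the browser-side library needs to pan and zoom, which is why putting library GIS layers behind such an interface, with the licensed layers behind authentication, is technically quite feasible.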

And that’s my roundup.
