The first issue of the Code4Lib Journal is out. I have nothing more to say about it than what I said in my editorial introduction, except to re-iterate that this project ended up taking quite a bit more time than I naively thought it would!
Update Dec 28. It occurs to me that the ‘qualification’ for an article to get into a standard scholarly journal might be “Is this reporting significant research?” In contrast, I hope the ‘qualification’ for Code4Lib Journal is “Is this article going to be helpful to others trying to improve library services?” You can have an article about really good research, and the article might accurately report that research—but it might not be very good at explaining to someone else what they can actually _do_ with it (to repeat it, or to act upon what it found). This could be because of the way it’s written, or because of what’s left out. Personally (and I only speak for myself), that article would need more work before going in c4lj. On the other hand, there can be an article that isn’t about original research _at all_, but is incredibly helpful to others innovating in their libraries, and that would be a shoo-in for c4lj, but probably wouldn’t qualify for a journal with a more traditionally scholarly mission.
Even though I never really listen to podcasts, I still participated in a Talis podcast that ended up being a sort of free-ranging discussion on the state of the library software market.
I attended the DLF Fall Forum a couple weeks ago, and found it to be a very rewarding experience, even more so than I expected. It is always nice to be with so many people who are more or less on the same page when it comes to where library services are headed and the work we need to do (and are doing) to get there. I’ll provide a roundup of some interesting presentations and things learned below, but first a word on the concept of the ‘digital library’.
It is becoming increasingly clear to me that all (or anyway most) of our libraries are “digital libraries”. Every area of our library now involves digital resources and digital services–and this will only increase. From licensed ebooks to scanned books to licensed electronic scholarly content; from email reference to the OPAC to document delivery to federated search and licensed databases. There’s digital content and services everywhere. Our libraries are already digital libraries–the future is here (and probably always was, paradoxically). Our task is making them better: improving and adding digital services, improving access to and adding digital content. There is no reason to reserve the phrase ‘digital library’ for any particular programs, services, collections, or technology platforms–and there is no reason to segregate attention to our digital future or our digital present into particular organizational units (and the two had better be related if we actually plan on moving from the present to that future). Digital content, access, services, and strategy need to be coordinated across the library. The “single business”.
I am certainly not alone in thinking this way, but it’s not clear to me if it’s yet the majority perspective among either DLF-community type people or the library community in general.
So, that rant aside, how about a DLF fall forum 2007 roundup? Continue reading ““Digital Libraries” and DLF”
Here are some notes on near/medium future directions of the library systems environment/architecture, and a sketch of requirements on where we want to go. These notes may or may not end up as part of an internal white paper here, as we analyze where we want to be headed (something good to do anyway, but we got a kick in the pants when the vendor ended development on our current ILS).
The challenge here for me was to produce something that took the set of assumptions that are obvious to many of us library tech geeks, and made them both clear and convincing to those they aren’t already obvious to. While at the same time being useful even to those of us in the ‘in’ group, by making things explicit on paper, helping us be clear to and among ourselves about what we mean, and confirming we actually agree and have thought it through. While at the SAME time being concise. I’m not sure I succeeded, especially on that last one, but here you go.
Continue reading “Notes on future directions of Library Systems”
Very interesting article in today’s NYT Business section (Annoyingly, WordPress.com doesn’t let me put a COinS in my blog post! Argh! Sorry. June 3, 2007. New York Times. “Google Keeps Tweaking Its Search Engine” by Saul Hansell) about Google’s relevancy ranking algorithms.
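For the record, a COinS is just an empty HTML span whose `title` attribute carries an OpenURL ContextObject in key-encoded-value form. A rough sketch of building the one this post couldn’t embed (the citation values come from the post above; the particular KEV keys chosen here are my best guess at how to encode a newspaper article, not anything from the original):

```python
# Sketch of the COinS span the post couldn't embed on WordPress.com.
# A COinS is an empty <span class="Z3988"> whose title attribute holds an
# OpenURL ContextObject (NISO Z39.88-2004) in key-encoded-value form.
from html import escape
from urllib.parse import urlencode

fields = {
    "ctx_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.atitle": "Google Keeps Tweaking Its Search Engine",
    "rft.jtitle": "New York Times",
    "rft.aulast": "Hansell",
    "rft.aufirst": "Saul",
    "rft.date": "2007-06-03",
}
# Ampersands in the KEV string must be entity-escaped inside the attribute.
coins = '<span class="Z3988" title="%s"></span>' % escape(urlencode(fields))
print(coins)
```

A COinS processor (like the OpenURL Referrer browser extension) would then turn that invisible span into a link through a user’s own link resolver.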
This article has a sub-text (well, not too sub) about how insanely awesome Google is, how much further ahead than anyone else they are. No doubt getting press like that is part of the reason Google gave the reporter access to this department, which is usually cloaked in trade secrecy.
Still, that’s definitely part of the story. It’s important to remember/realize that Google’s relevancy ranking algorithms are very sophisticated and complex, and getting constantly more so, in order to give us the simplicity of the good results we see. Our simplistic conception of ‘PageRank’ is just one increasingly small part of the whole set of algorithms. So, no, we can’t “just copy what Google does” (not least, but not only, because we are dealing with a different data domain than Google).
The solution to what we need isn’t just waiting out there in the open for us to copy. The solution(s) are waiting for us to discover and invent. On the other hand, of course we want to pay attention to what we can learn from Google and what Google does (in broad principles and–where we can figure them out–specific details) in figuring it out.
Some choice quotes: Continue reading “Google’s algorithms”
Erik Hatcher’s essay on their experiences prototyping blacklight at UVa ought to be required reading for anyone interested in the future of library digital services.
To my mind, the most important point he makes is this:
Let me reiterate that what I see needed is a process, not a product. With Solr in the picture, we can all rest a bit easier knowing that a top-notch open source search engine is readily available… a commodity. The investment for the University of Virginia, then, is not in search engine technology per se, but rather in embracing the needs of the users at a fine-grained level.
This is a point I see many library decision makers not fully grasping. It’s not about buying a product (whether open source _or_ proprietary), it’s about somehow getting multiple parts of the libraries on board in a coordinated effort to focus our work where it matters. The tech may make this possible for the first time–and some tech may be better than other tech–but tech can’t solve things for you. Just plunking your money down for the ‘right’ product from a vendor (Yes, even if that vendor is OCLC!) can not be an end point.
But organizational strategy is a lot harder than just buying an expensive product, unfortunately.
Is there anyone whose profession is programming who isn’t destroying their arms? Is it just me, or is it ridiculous that standard input devices continue to be medically dangerous to us? The way the kids today use the net, in 20 years we’re going to be a nation of cripples.
My current ergonomic shopping list
Split keyboard. Belkin Ergoboard. [Internet reviews suggest this is better than the newer Pro model. I dunno, I guess all I can do is try it….]
PS/2 to USB adaptor for above
(Blue – MOUSE) $12.99
Adesso/Cirque Easy Cat USB Touchpad [I got the Evoluent ‘vertical mouse’ recommended by dchud. It helps, but has also just moved the pain to a different part of my arms. I plan to keep using it, thus the wrist-rest above, but also think this touchpad, which I could hold in my lap, sounds great. One frustrating thing about this stuff is that it seems you kind of have to get a device, try it, and if it’s not working, get another one. Can get expensive.] [One possible source:]
I want an email client that offers a ‘reply to list’ function in addition to ‘reply to author’ and ‘reply to all’.
Even more importantly, I want it to put my message compose window in bright red or something whenever I’m replying to a list by default ‘reply’, which it should be able to detect.
Oh, while we’re at it, let me pick my own default reply on a per-list basis.
Is that too much to ask? From how often I see people making this mistake (sometimes embarrassingly), I’d guess this is one of the biggest email usability issues (right after spam prevention).
Email listservs don’t always use the List-* headers that would make it easy for a client to do this, but I can think of some heuristics that could successfully identify list traffic much of the time. Like when the “To:” header matches the “Sender:” header, or when the “To:” header matches the “Reply-To:” header–most of the time that’ll be a mailing list, and most mailing list messages can probably be caught by rules like this.
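The heuristics above can be sketched in a few lines. This is a minimal illustration, not any real client’s implementation; the function names are made up, and the List-Id/List-Post check (from RFC 2919/2369) is the reliable path when those headers exist, with the To/Sender and To/Reply-To comparisons from the post as the fallback:

```python
# Sketch of a reply-to-list heuristic over parsed message headers.
# Hypothetical helper names; not from any particular mail client.
from email.message import Message

def looks_like_list_mail(msg: Message) -> bool:
    """Guess whether a message arrived via a mailing list."""
    # RFC 2369/2919 headers are the reliable signal when present.
    if msg.get("List-Id") or msg.get("List-Post"):
        return True
    # Fallback heuristics from the post: on many lists the To: address
    # matches the Sender: or Reply-To: header.
    to = (msg.get("To") or "").strip().lower()
    sender = (msg.get("Sender") or "").strip().lower()
    reply_to = (msg.get("Reply-To") or "").strip().lower()
    return bool(to) and (to == sender or to == reply_to)

def reply_to_list_address(msg: Message) -> str:
    """Pick a 'reply to list' target when one can be found."""
    list_post = msg.get("List-Post", "")
    if list_post.startswith("<mailto:"):
        return list_post[len("<mailto:"):].rstrip(">")
    # Otherwise fall back to the address the heuristic matched on.
    return msg.get("Reply-To") or msg.get("To", "")
```

A client could run this at compose time: if `looks_like_list_mail` is true and the user hit plain ‘reply’, that’s when the bright-red compose window should kick in.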
Hadn’t seen this blogged yet anywhere I read, so I’ll cite it, even though I know some people are annoyed by just pointing to existing resources in your blog.
Has some good practical advice for attempting to integrate institutional repository and ‘e-print/pre-print’ content into your link resolver.
Shigeki Sugita et al., “Linking Service to Open Access Repositories.” D-Lib Magazine, Volume 13 Number 3/4, ISSN 1082-9873.
I have been thinking lately of a library subject guide system. A really great utopian library subject guide system. I imagined a system where librarians would list databases and other resources (chosen from Metalib and/or some other central repository of our stuff, when possible; URLs entered manually when not), and also add other narrative text as desired. And organize the whole thing coherently somehow, without knowing any HTML.
And that gets us roughly to the kind of subject guides most of us have today (though perhaps easier to create), but then there are all sorts of cool new features (I really hate saying ‘2.0’) we can imagine: Continue reading “Library Subject Guides (does this have something to do with Sakai?)”