And other impenetrable acronyms.
I share the general optimism about the recent announcement of the DC/RDA joint project.
It’s confusing to talk and think about these sorts of ideas, because talking about metadata at this level requires talking very abstractly. We try to mean very precise things, but we don’t always have precise words to describe them, or to be understood by people who may not mean the same things by the same words.
I’ve been confused by the DCAM for a while myself. As I keep circling around trying to understand what’s going on, at this particular stage in my circling I’ve found this paper, Towards an Interoperability Framework for Metadata Standards, by Nilsson et al., to be very helpful, and I think I’m getting closer to understanding what DCAM is. Looking back at the comments I made on Pete Johnston’s blog post, linked above, around five months ago, I wouldn’t ask those same questions now (although I can’t exactly answer them in clear language either; it’s so tricky to talk about this stuff!)
I do start to wonder, though: is DCAM trying to solve the _exact_ same problem RDF is? Is there any reason to have both? What does DCAM have that the “RDF suite” does not? Nilsson et al. do say that “The RDF suite of specifications, however, follow a more similar pattern to the framework presented here.”
Erik Hatcher’s essay on the experience of prototyping Blacklight at UVa ought to be required reading for anyone interested in the future of library digital services.
To my mind, the most important point he makes is this:
Let me reiterate that what I see needed is a process, not a product. With Solr in the picture, we can all rest a bit easier knowing that a top-notch open source search engine is readily available… a commodity. The investment for the University of Virginia, then, is not in search engine technology per se, but rather in embracing the needs of the users at a fine-grained level.
This is a point I see many library decision makers not fully grasping. It’s not about buying a product (whether open source _or_ proprietary); it’s about somehow getting multiple parts of the library on board in a coordinated effort to focus our work where it matters. The tech may make this possible for the first time, and some tech may be better than other tech, but tech can’t solve things for you. Just plunking your money down for the ‘right’ product from a vendor (yes, even if that vendor is OCLC!) cannot be an end point.
But organizational strategy is a lot harder than just buying an expensive product, unfortunately.
As I try to set up my OPAC to display non-roman chars (what the librarians call ‘vernacular’, which seems odd to me; I’ll stick with ‘non-roman’), I have run into a really weird thing with the way browsers display text that mixes Roman and Hebrew characters. This doesn’t seem to affect any other non-roman chars we have; I can only guess it’s somehow related to the right-to-left-ness of Hebrew, but it’s weird.
Check out my simple reproduced test case, and see if you can tell me what the heck is going on. Any advice appreciated. (Even once I figure out what’s going on, and even if I can identify a fix, there’s no telling if I can get my OPAC to apply that fix. But anyway.)
See my simple reproduced demonstration:
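For reference, here’s a sketch of the kind of workaround I have in mind, assuming (and this is only a guess, not a confirmed diagnosis) that the weirdness comes from the Unicode bidi algorithm reordering ‘neutral’ characters like spaces and punctuation that sit next to a Hebrew run. The helper name is my own invention:

```ruby
# An invisible, strongly left-to-right character. Placing it after a
# run of Hebrew letters makes any following neutral characters
# (punctuation, slashes, spaces) resolve to LTR order again.
LRM = "\u200E" # LEFT-TO-RIGHT MARK

# Append an LRM after every run of Hebrew letters in the string.
def pin_ltr(text)
  text.gsub(/\p{Hebrew}+/) { |run| run + LRM }
end
```

If this turns out to be the actual problem, I’d then have to apply the equivalent transformation in my OPAC’s display layer, which would be a separate battle.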
Specifically, the problem is with the code that generates the coverage strings missing from the SFX API. In some cases where SFX includes services from ‘related’ SFX objects, my current code will generate empty or incorrect coverage strings that don’t match what the SFX menu itself provides.
I am trying to come up with a very hacky workaround for this. If you want the new version, contact me in a couple of days (I should REALLY put this stuff in a publicly available SVN). But a VERY hacky workaround is what it will have to be, and it will probably still have some edge cases it cannot handle correctly. It’s confusing to explain what the deal is, so I’ll just give you my Ex Libris enhancement request:
Continue reading “Issues with my SFX in HIP code”
Is there anyone whose profession is programming who isn’t destroying their arms? Is it just me, or is it ridiculous that standard input devices continue to be medically dangerous to us? The way the kids today use the net, in 20 years we’re going to be a nation of cripples.
My current ergonomic shopping list
Split keyboard. Belkin Ergoboard. [Internet reviews suggest this is better than the newer Pro model. I dunno, I guess all I can do is try it…]
PS/2 to USB adaptor for the above (Blue – MOUSE), $12.99
Adesso/Cirque Easy Cat USB Touchpad [I got the Evoluent ‘vertical mouse’ recommended by dchud. It helps, but it has also just moved the pain to a different part of my arms. I plan to keep using it, thus the wrist-rest above, but I also think this touchpad, which I could hold in my lap, sounds great. One frustrating thing about this stuff is that you kind of have to get a device, try it, and if it’s not working, get another one. Can get expensive.] [One possible source:]
So, how do I set up a Rails app so it doesn’t live at the web server root, and can share the web server with other Rails apps?
I feel like answering this question should be a simple ‘find it in the docs’ or even on an FAQ, but instead it’s turned into a Google research project of conflicting advice.
Can anyone just point me to what I need to know?
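For the record, here is the least-contradictory recipe I’ve pieced together so far, hedged heavily: this is a sketch for a Rails 1.2-era app proxied behind Apache, and `/myapp`, the port, and the mongrel details are placeholder assumptions from my Googling, not something I’ve verified against official docs:

```ruby
# config/environment.rb — tell Rails it lives under a sub-URI, so that
# url_for/link_to and friends generate links with the /myapp prefix.
# (In later Rails versions this reportedly moved to
# ActionController::Base.relative_url_root.)
ActionController::AbstractRequest.relative_url_root = "/myapp"

# Apache side (mod_proxy), assuming the app runs under mongrel
# started with something like `mongrel_rails start -p 8000 --prefix /myapp`:
#
#   ProxyPass        /myapp http://localhost:8000/myapp
#   ProxyPassReverse /myapp http://localhost:8000/myapp
```

If anyone knows this to be wrong, that’s exactly the pointer I’m asking for.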
Another followup post to RDA-L. I think there’s been a lot of miscommunication and misunderstanding in what (insufficient) dialog we’re having on the future of cataloging, at least on those lists. I think we’re all really after the same thing, although people have different visions of how to accomplish it. Interestingly, both Roy Tennant and Hal Cain emailed me off list saying that the idea in this post was a welcome one that needed to be said. Maybe it’s possible to begin speaking the same language on cataloging after all.
One sentence in Hal Cain’s post jumps out at me. I think it’s important to make clear that the issue (from my point of view, anyway) is NOT about “inhibit[ing our practices] by constraints asserted to be demanded in the interests of efficient machine processing.”
I believe this is neither desirable nor necessary; I don’t think the necessities of machine processing should require us to give up any bibliographic control. (Whether there are _other_ reasons to desire or require us to give up certain bibliographic control is another debate, but I don’t believe ‘constraints in the interests of efficient machine processing’ are, should be, _or need to be_ valid reasons; the machines exist to serve our needs, not vice versa.)
Rather, the issue is that our data as currently formatted is _not taking maximum advantage of what is possible in the digital world_. It is not about constraints of machine processing; quite the opposite, it is about new opportunities and possibilities of machine processing that ought to make our systems of bibliographic control far more powerful and flexible than they once were. [Which is good, because they NEED to do so much more than they used to, to keep up with the ever-expanding universe of recorded knowledge and information.]

But our current systems put barriers in the way. As other non-library metadata systems (more nimble, without the baggage of 100 years of metadata, and able to start by controlling only a portion of what constitutes our present ‘bibliographic universe’) take advantage of these new possibilities, our systems of control, once truly state of the art and ahead of their time, seem increasingly archaic, have increasing difficulty interoperating with these other (ever multiplying) new systems, and have increasing difficulty doing what users are coming to expect from their experience with other systems. (The statements at the first LC session on users and uses of the catalog do a good job of explaining some of this context.)
But if it instead appears that we’re being _constrained_ by the machine world (compared to our pre-machine world), that would indicate a case of not using the machines properly (as indeed we all know we currently are not). The idea that machine processing demands constraints on control is not a point of view I subscribe to, and I say with some confidence that it’s not what other advocates of ‘cataloging modernization’ (for lack of a better term, and at the risk of suggesting there are factions when really I think we are all after the same goals) such as Hillmann, Coyle, etc., advocate either. (Being constrained by other factors of our current environment, such as economics, is a different matter; those constraints can be all too real.)
Two meanings of identifier = Two Functions of ‘Identifiers’ => Fulfilled with only one mechanism in traditional Anglo-American cataloging.
And the harm this causes.
This is an essay I recently posted to the RDA list and the FRBR list, in response to an essay posted by Martha Yee. I think my essay can stand on its own to some extent. I think it starts to get at some of the failure to communicate that’s been occurring in certain cataloging discussions…
Karen beat me to it, but I’d been drafting this response to Martha Yee for a couple days. Sorry it’s so long, but I’ve learned not to assume that readers share the same assumptions as me, so explicit clarity is required.
Reading some of Martha Yee’s articles in library school was invaluable to developing what understanding of bibliographic control I have. In particular to this discussion, I think Martha makes a very valuable contribution by frequently reminding us that our ‘primary access points’ are best thought of as ‘work identifiers’ and ‘author identifiers’.
I think Martha’s recent comments on FRAD are a good start to an important dialog, but I entirely disagree with some of her conclusions.
I think the best way to get at this disagreement is to talk about what I see as two functions of our traditional primary access points (I think Martha rightly refers to these ‘primary access points’ as ‘identifiers’, as I’ve discussed in a different way in a post on the RDA-L list, 14 Feb 2007, http://www.collectionscanada.ca/jsc/docs/rdal0702.txt).
Continue reading “Two meanings of “Identifier””
A way I’ve been verbalizing something we all think about lately:
Our ILSs were originally intended to support our work flow. The old-fashioned-sounding phrase we still use, “automation”, points at this. We were providing ‘automation’ to make individual jobs easier. Whether the ILSs we have do this well or not is another question, but some still assume that the evaluation of an ILS starts and ends here.
But in fact, in the current environment there is another factor to evaluate: not just work flow support, but the ILS serving as a data store for the ‘business’ of the library. The work the ILS supports produces all sorts of data, the most obvious being bib records and holdings information, but it doesn’t stop there. The ILS needs to provide this data not just to support currently identified work flows and user tasks, but to be a data store in and of itself for other software, existing and yet to be invented, supporting tasks existing and yet to be discovered. The ILS needs to function as a data store for the ‘single business’ of our library. Not necessarily the only one (although there’s something to be said for a single data store for our ‘single business’), but the data that IS in there needs to be accessible.
Thanks to Lorcan Dempsey for pointing out the concept of a ‘single business systems environment’ in the National Library of Australia’s IT Architecture report, which got me articulating things in this way. I think it’s on the way to being a useful way to articulate things that are often understood by ‘us’ but not articulated well to ‘them’ (administrators, vendors, our less technically minded colleagues).
So I feel like I just won a wrestling match with the Horizon Information Portal (our OPAC), and wanted to gloat and share what I’ve done. The rules of the match were ‘XSLT’, because HIP has a point of intervention where you can change display using XSLT (not always very easily). My stuff isn’t entirely deployed yet, but I’ll show you on our test server.
Continue reading “I fought the XSLT and I won: Browse index highlighting in HIP”