ergonomic shopping list

Is there anyone whose profession is programming who isn’t destroying their arms? Is it just me, or is it ridiculous that standard input devices continue to be medically dangerous to us? The way the kids today use the net, in 20 years we’re going to be a nation of cripples.

My current ergonomic shopping list

Split keyboard. Belkin Ergoboard.  [Internet reviews suggest this is better than the newer Pro model. I dunno, I guess all I can do is try it….]
http://catalog.belkin.com/IWCatProductPage.process?Product_Id=98781

$39.99

PS/2 to USB adaptor for above
http://catalog.belkin.com/IWCatProductPage.process?Product_Id=156482
$19.99

Mouse wrist-rest
http://www.imakproducts.com/products/ergobeads.htm#ergobeadsfabricwristsupport
(Blue – MOUSE) $12.99

Adesso/Cirque Easy Cat USB Touchpad [I got the Evoluent ‘vertical mouse’ recommended by dchud. It helps, but has also just moved the pain to a different part of my arms. I plan to keep using it, thus the wrist-rest above, but also think this touchpad, which I could hold in my lap, sounds great. One frustrating thing about this stuff is that you kind of have to get a device, try it, and if it’s not working, get another one. Can get expensive.] [One possible source:]
http://www.buy.com/prod/easycat-2btn-touchpad-usb-black-cirque-glidepoint-technology/q/loc/101/10392081.html
$49.99


TOTAL: $122.96


rails?

So, how do I set up a Rails app so it doesn’t live at web server root, and can share the web server with other rails apps?

I feel like answering this question should be a simple matter of ‘find it in the docs’, or even an FAQ, but instead it’s turned into a Google research project yielding conflicting advice.

Can anyone just point me to what I need to know?
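
For the record, the closest thing to an answer I’ve pieced together so far looks roughly like the following. This is only a sketch of what the conflicting advice seems to converge on, not something I’ve gotten working yet, and “/myapp” is a made-up mount point.

    # config/environment.rb -- sketch only, "/myapp" is made up.
    # Tell Rails to generate its URLs under /myapp instead of at the server root.
    ActionController::AbstractRequest.relative_url_root = "/myapp"
    # (Some of the advice says newer Rails wants this set via
    #  config.action_controller.relative_url_root inside the
    #  Rails::Initializer.run block instead -- part of the confusion.)

And that only covers URL generation inside Rails; the web server still has to be told to route /myapp to this particular app’s mongrel or fastcgi processes (a proxy or rewrite rule, depending on your server), which is where most of the conflicting advice seems to live.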

Cataloging still must change, but not like that

Another followup post to RDA-L. I think there’s been a lot of miscommunication and misunderstanding in what (insufficient) dialog we’re having on the future of cataloging, at least on those lists. I think we’re all really after the same thing, although people have different visions of how to accomplish it. Interestingly, both Roy Tennant and Hal Cain emailed me off list saying that the idea in this post was a welcome one that needed to be said. Maybe it’s possible to begin speaking the same language on cataloging after all.

***

One sentence in Hal Cain’s post jumps out at me. I think it’s important to make clear that the issue (from my point of view, anyway) is NOT about “inhibit[ing our practices] by constraints asserted to be demanded in the interests of efficient machine processing.”

I believe this is neither desirable NOR necessary–I don’t think the necessities of machine processing should require us to give up any bibliographic control. (Whether there are _other_ reasons to desire or require us to give up certain bibliographic control is another debate, but I don’t believe ‘constraints in the interests of efficient machine processing’ are or should be _or need to be_ valid reasons—the machines exist to serve our needs not vice versa).

Rather, the issue is that our data as currently formatted is _not taking maximum advantage of what is possible in the digital world_. It is not about constraints of machine processing; quite the opposite, it is about new opportunities and possibilities of machine processing that ought to be able to make our systems of bibliographic control so much more powerful and flexible than they once were. [Which is good, because they NEED to do so much more than they used to, to keep up with the ever expanding universe of recorded knowledge and information.] But our current systems put barriers in the way.

As other non-library metadata systems (more nimble, without the baggage of 100 years of metadata, and able to start by controlling only a portion of what constitutes our present ‘bibliographic universe’) take advantage of these new possibilities, our systems of control, once truly state of the art and ahead of their time, seem increasingly archaic, have increasing difficulty interoperating with these other (ever multiplying) new systems, and have increasing difficulty doing what users are coming to expect from their experience with other systems. (The statements at the first LC session on users and uses of the catalog do a good job of explaining some of this context.)

But if it instead appears we’re being _constrained_ by the machine world (compared to our pre-machine world) – that would indicate a case of not using the machines properly (as indeed we all know we are not, currently). The idea that machine processing demands constraints on control is not a point of view I subscribe to, and I say with some confidence that it’s not what other advocates of ‘cataloging modernisation’ (for lack of a better term, and at the risk of suggesting there are factions when really I think we are all after the same goals) such as Hillman, Coyle, etc., advocate either. (As opposed to being constrained by other factors of our current environment, such as economics – those constraints can be all too real.)

Two meanings of “Identifier”

Two meanings of ‘identifier’ = two functions of ‘identifiers’ => fulfilled with only one mechanism in traditional Anglo-American cataloging.

And the harm this causes.

This is an essay I recently posted to the RDA list and the FRBR list, in response to an essay posted by Martha Yee. I think my essay can stand on its own to some extent. I think it starts to get at some of the failure to communicate that’s been occurring in certain cataloging discussions….

Karen beat me to it, but I’d been drafting this response to Martha Yee for a couple days. Sorry it’s so long, but I’ve learned not to assume that readers share the same assumptions as me, so explicit clarity is required.

Reading some of Martha Yee’s articles in library school was invaluable to developing what understanding of bibliographic control I have. I think Martha makes a particularly valuable contribution to this discussion by frequently reminding us that our ‘primary access points’ are best thought of as ‘work identifiers’ and ‘author identifiers’.

I think Martha’s recent comments on FRAD are a good start to important dialog, but I entirely disagree with some of her conclusions.

I think the best way to get at this disagreement is to talk about what I see as two functions of our traditional primary access points (I think Martha rightly refers to these ‘primary access points’ as ‘identifiers’, as I’ve discussed in a different way in a post on the RDA-L list, 14 Feb 2007, http://www.collectionscanada.ca/jsc/docs/rdal0702.txt).

Work flow support vs. data store

A way I’ve been verbalizing something we all think about lately:

Our ILSs were originally intended to support our work flow. The old-fashioned-sounding phrase we still use–“automation”–points at this. We were providing ‘automation’ to make individual jobs easier. Whether the ILSs we have do this well or not is another question, but some still assume that the evaluation of an ILS starts and ends here.

But in fact, in the current environment there is another factor to evaluate. Not just work flow support, but the ILS serving as a data store for the ‘business’ of the library. The work that the ILS is supporting produces all sorts of data—the most obvious being bib records and holdings information, but it doesn’t stop there. The ILS needs to provide this data, not just to support currently identified work flow and user tasks, but to be a data store in and of itself for other software, existing and yet to be invented, supporting tasks existing and yet to be discovered. The ILS needs to function as a data store for the ‘single business’ of our library. Not necessarily the only one (although there’s something to be said for a single data store for our ‘single business’), but the data that IS in there needs to be accessible.

Thanks to Lorcan Dempsey for pointing out the concept of ‘single business systems environment’ in the National Library of Australia’s IT Architecture report, which got me articulating things this way. I think it’s on its way to being a useful way to articulate things that are often understood by ‘us’ but not articulated well to ‘them’ (administrators, vendors, our less technically minded colleagues).

I fought the XSLT and I won: Browse index highlighting in HIP

So I feel like I just won a wrestling match with the Horizon Information Portal (our OPAC), and wanted to gloat and share what I’ve done. The rules of the match were ‘XSLT’, because HIP has a point of intervention where you can change display using XSLT (not always very easily). My stuff isn’t entirely deployed yet, but I’ll show you on our test server.


Good class modelling in Ruby?

So I know there are some Ruby-ites who have read this blog in the past, maybe you could give me some advice? Sorry this entry is so long, man I write too much.

I’m having trouble switching my brain from understanding good class hierarchy design in a statically typed language (like Java), to Ruby’s world of ‘duck typing’, which I’m still not entirely comfortable with.
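
To make concrete the kind of thing I mean, here’s a toy sketch (made-up classes, not from any real code of mine) of the instinct I’m fighting versus the duck-typed version:

    # In Java I'd reach for an interface or abstract superclass before
    # letting these two classes mingle. In Ruby they share no ancestor.
    class Book
      def display_title
        "A book title"
      end
    end

    class SerialIssue
      def display_title
        "A serial issue title"
      end
    end

    # Duck typing: this method doesn't care about the class hierarchy at all,
    # only that whatever it's handed responds to #display_title.
    def print_label(item)
      puts item.display_title
    end

    print_label(Book.new)
    print_label(SerialIssue.new)

Part of my discomfort is exactly that nothing in the code tells you what print_label expects; that’s what I’m trying to get my head around.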


Alternatives to RFPs?

Someone on horizon-l wrote: (Since it’s a closed list, I leave out names of other people, but then I’m also avoiding giving credit and ‘google juice’ where it’s due. Any suggestions as to appropriate etiquette? I’m too lazy to contact everyone I’m quoting in a blog entry each time, so don’t suggest that.)

But I hope we don’t have to do an actual RFP. I hear that those are quite expensive for the vendor. See the comments from Carl Grant, President & COO, VTLS, Inc., on the Hectic Pace post “No Roaming”, at http://blogs.ala.org/pace.php?title=no_roaming&more=1&c=1&tb=1&pb=1

It would be very interesting if some vendors and libraries got together to devise a standard alternative to the RFP that would meet everyone’s needs. Cheaper/easier, while still giving libraries the information they need. Does anyone know if there’s been any work in that direction?

Are/were RFPs (or rather the P’s in response to RFPs) originally intended as legally binding documents and a basis to sue, as one of the quotes below [not included here] suggests? I agree that seems unrealistic and unnecessary. Isn’t that the job of the actual contract you sign, anyway? (Whether the contracts we have do that job, or even what it would look like to do that job well, is another topic.)

More on open source

Another post I made to horizon-l. I am trying to be a rational (rather than propaganda-spewing) pro-FLOSS voice. Feel free to let me know if you think I’ve misstepped in any direction.

Another member posted:

and it may be that your library administration
would never consider open sourcing your ILS....

To which I replied:

It is important to remember that Evergreen and Koha both have vendors who will provide commercial support. Realizing that an ILS with an open source license can also come with a support contract, I can’t think of any good reason for administrators to rule out open source ILSs as a class. That’s the real meaning of how open source and commercial can be “both/and”: Evergreen and Koha are both commercial software AND licensed under an open source license.

Is there anything I’m missing about a rational reason to dismiss open source as a class, to specifically reject software that’s licensed under an open source license, preferring software that is not?

Now, just because there may not be any reason to reject open source as a class, that doesn’t mean that any particular existing open source licensed ILS is right for you. You evaluate it the same way you’ve always evaluated software; some particularly pertinent criteria in this case include:

1) Total cost of ownership, naturally. Your support contract for the open source licensed ILS being evaluated, plus hardware, plus, if the software doesn’t yet have all the features you need/want, the cost to pay a vendor (or hire staff) to add them. Compared to the total cost of ownership of your other (proprietary) options.

2) As with any product, strength and staying power of the vendor. While both Koha and Evergreen have vendors providing commercial support, you want to ask yourself the question, how financially and otherwise organizationally healthy are they? How likely are they to stay around for the long term? As you would with any vendor. (And the market has of course seen that there’s no guarantee of continued viability of vendors of proprietary software either).

There will be negatives and positives with any particular solution you evaluate, which have to be balanced against one another and against the other options. For open source or proprietary. Open source by its very nature comes with some more _potential_ positives—if the currently existing vendor _does_ go under or decide to stop supporting the product: a) You can still keep using the software as long as you want without permission from anyone (that’s the nature of open source); b) Another vendor (or vendors) can spring into existence to support and/or further develop the same product, without any permission from anyone, with full access to the source code (if not necessarily the institutional expertise). Another unique possibility, if the open source ILS market continues to develop, is that there could come to be multiple vendors supporting the same product at once, competing on quality and price of support.

Many of these unique benefits to open source are just _potential_ in the ILS market at the moment; there are no guarantees—if you don’t want to consider these potential benefits yet, you don’t have to: you can evaluate open source ILS options head-to-head with proprietary options, without giving any weight to the potential unique benefits of open source. I think it’s quite possible Koha and/or Evergreen would still come out looking good in such a comparison.

But when you undertake the difficult calculus of evaluating what software is best for you among a collection of non-perfect options, I don’t see any reason to exclude open source licensed options from consideration altogether.

Jonathan

ruby, and mix-ins vs. mult inheritance

So, I’m afraid I’m joining the ruby cult. Or at least learning ruby.

I like a lot of things about it (so nice to work with a really object oriented language, where the community writes really object oriented code), and don’t like some things about it (I’m still suspicious of ‘duck typing’, and the community doesn’t seem to write very good documentation).

But my question is: how are “mix-ins” any different from multiple inheritance? What am I missing? Mix-ins as a feature seem to be exactly no more and no less than multiple inheritance. Is it just the convention for how to use them (just a few methods focused on a particular function) that makes them “mix-ins”? Like, mix-ins are multiple inheritance used in a certain way? Is there something I’m missing?
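
Here’s the toy comparison I have in my head (made-up module and classes, purely to illustrate what I mean, not from any real code):

    # A module has no superclass of its own and can't be instantiated;
    # "include" mixes its methods into a class's ancestor chain.
    module Describable
      def describe
        "#{label} (#{self.class})"
      end
    end

    class Book
      include Describable
      def label
        "Some book"
      end
    end

    class Patron
      include Describable
      def label
        "Some patron"
      end
    end

    puts Book.new.describe   # => "Some book (Book)"
    puts Patron.new.describe # => "Some patron (Patron)"

Mechanically that looks a lot like inheriting from a second parent, except that a module isn’t a class you can instantiate, you still only get one actual superclass, and Ruby just linearizes included modules into the single ancestor chain instead of dealing with a diamond. Maybe that’s the whole difference, which is sort of my question.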