Thank you again, Edward Snowden

According to this Reuters article, the NSA intentionally weakened encryption in popular encryption software from the company RSA.

They did this because they wanted to make sure they could continue eavesdropping on us all, but in the process they made us more vulnerable to eavesdropping from other attackers too. Once you put in a backdoor, anyone else who figures it out can use it too; it wasn’t some kind of NSA-only backdoor. I bet, for instance, China’s hackers and mathematicians are as clever as ours.

“We could have been more sceptical of NSA’s intentions,” RSA Chief Technologist Sam Curry told Reuters. “We trusted them because they are charged with security for the U.S. government and U.S. critical infrastructure.”

I’m not sure if I believe him — the $10 million NSA paid RSA for inserting the mathematical backdoors probably did a lot to assuage their skepticism too. What did they think NSA was paying for?

On the other hand, sure, the NSA is charged with improving our security, and does have expertise in that.  It was fairly reasonable to think that’s what they were doing. Suggesting they were intentionally putting some backdoors in instead would have probably got you called paranoid… pre-Snowden.  Not anymore.

It is thanks only to Edward Snowden that nobody will be making that mistake again for a long time. Edward Snowden, thank you for your service.


Academic freedom in Israel and Palestine

While I mostly try to keep this blog focused on professional concerns, I do think academic freedom is a professional concern for librarians, and I’m going to again use this platform to write about an issue of concern to me.

On December 17th, 2013, the American Studies Association membership endorsed a Resolution on Boycott of Israeli Academic Institutions. This resolution endorses and joins in a campaign organized by Palestinian civil society organizations for boycott of Israel for human rights violations against Palestinians — and specifically, for an academic boycott called for by Palestinian academics.

In late December and early January, many American university presidents released letters opposing and criticizing the ASA boycott resolution, usually on the grounds that the ASA action threatened the academic freedom of Israeli academics.

Here at Johns Hopkins, the President and Provost issued such a letter on December 23rd. I am quite curious about what organizing took place that resulted in letters from so many university presidents within a few weeks. Beyond letters of disapproval from presidents, there has also been organizing to prevent scholars, departments, and institutions from affiliating with the ASA, or to retaliate against scholars who do so (such efforts are, ironically, quite a threat to academic freedom themselves).

The ASA resolution (and the Palestinian academic boycott campaign in general) does not call for a prohibition on cooperation with individual Israeli academics, only on formal collaborations with Israeli academic institutions — and in the case of the ASA, only formal partnerships by the ASA itself; they are not trying to require any particular actions by members as a condition of membership in the ASA. You can read more about the parameters of the ASA resolution, and the motivation that led to it, in the ASA’s FAQ on the subject, a concise and well-written document I definitely recommend reading.

So I don’t actually think the ASA resolution will have a significant effect on academic freedom for scholars at Israeli institutions. It’s mostly a symbolic action, although the fierce organizing against it shows how threatening the symbolic action is to the Israeli government and those who would like to protect it from criticism.

But, okay: if the academic boycott of Israel continues to gain strength, then some academics at Israeli institutions will, at the very least, be inconvenienced in their academic affairs. I can understand why some people find academic boycott an inappropriate tactic — even though I disagree with them.

But here’s the thing. The academic freedom of Palestinian scholars and students has been regularly, persistently, and severely infringed for quite some time.  In fact, acting in solidarity with Palestinian colleagues facing restrictions on freedom of movement and expression and inquiry was the motivation of the ASA’s resolution in the first place, as they write in their FAQ and the language of the resolution itself.

You can read more about restrictions on Palestinian academic freedom, and the complicity of Israeli academic institutions in these restrictions, in a report from Palestinian civil society here; this campaign web page from Birzeit University and other Palestinian universities; this report from the Israeli Alternative Information Center; this 2006 essay by Judith Butler; or this 2011 essay by Riham Barghouti, one of the founding members of the Palestinian Campaign for the Academic and Cultural Boycott of Israel.

What are we to make of the fact that so many university presidents spoke up in alarm at an early sign of what they viewed as possible impingements on the academic freedom of scholars at Israeli institutions, but none have spoken up to defend significantly beleaguered Palestinian academic freedom?

Here at Hopkins, Students for Justice in Palestine believes that we all have a responsibility to speak up in solidarity with our Palestinian colleagues, students and scholars, whose freedoms of inquiry and expression are severely curtailed; and that administrators’ silence on the issue does not in fact represent our community. Hopkins SJP thinks the community should speak out in concern and support for Palestinian academic freedom, and they’ve written a letter Hopkins affiliates can sign on to.

I’ve signed the letter. I’d urge any readers who are also affiliated with Hopkins to read it, and consider signing it as well. Here it is.


“users hate change”

reddit comment with no particularly significant context:

Would be really interesting to put a number on “users hate change”.

Based on my own experience at a company where we actually researched this stuff, the number I would forward is 30%. Given an existing user base, on average 30% will hate any given change to their user experience, independent of whether that experience is actually worse or better.

“Some random person on reddit” isn’t scientific evidence or anything, but it definitely seems pretty plausible to me that some very significant portion of any user base will generally dislike any change at all — I think I’ve been one of those users for software I don’t develop; I’m thinking of recent changes to Google Maps, many changes to Facebook, etc.

I’m not quite sure what to do with that, though, or how it should guide us. Because if our users really do want stability over (in the best of cases) improvement, we should give it to them, right? But if it’s, say, 1/3rd of our users who want this, and not necessarily the other 2/3rds, what should that mean? And might we hear more from that 1/3rd than from the other 2/3rds, and so overestimate their numbers even further?

But, still, say, 1/3rd — that’s a lot. What’s the right balance between stability and improvement? Does it depend on the nature of the improvement, or on how badly some other portion of your user base desires change or improvement?

Or, perhaps, work on grouping changes into more occasional releases instead of constant releases, to at least minimize the occurrences of disruption?  How do you square that with software improvement through iteration, so you can see how one change worked before making another?

Eventually users will get used to change, or even love the change and realize it helped them succeed at whatever they do with the software (and then the change-resistant won’t want the new normal changed either!) — does it matter how long this period of adjustment is? Might it be drastically different for different user bases or contexts?

Does it matter how much turnover you should expect or get in your user base? If you’re selling software, you probably want to keep all the users you’ve got and keep getting more, but the faster you’re growing, the quicker the old users (the only ones to whom a change is actually a change) get diluted by newcomers. If you’re developing software for an ‘enterprise’ (such as most kinds of libraries), then the turnover of your user base is a function of the organization, not of your market or marketing. Either way, if you have less turnover, does that mean you can even less afford to irritate the change-resistant portion of the user base, or is it irrelevant?

In commercial software development, the answer (for better or worse) is often “whatever choice makes us more money”, and the software development industry has increasingly sophisticated tools for measuring the effect of proposed changes on revenue. If the main goals of your software development effort are something other than revenue, then perhaps it’s important to be clear about exactly what those goals are, to have any hope of answering these questions.


blacklight_advanced_search 5.0.0 released for blacklight 5.x

blacklight_advanced_search 5.0.0 has been released.

https://github.com/projectblacklight/blacklight_advanced_search

If you were previously using the gem directly from its github repo on the ‘blacklight5’ branch, I recommend you switch to using the released gem instead, with a line in your Gemfile like this:

gem 'blacklight_advanced_search', "~> 5.0"

Note that the URL format of advanced search form facet limits has changed from previous versions; if you had a previous version deployed, there is a way to configure redirects for the old style, in order to keep previously bookmarked URLs working. See the README.


“Code as Research Object”

Mozilla Science Lab, GitHub and Figshare team up to fix the citation of code in academia

Academia has a problem. Research is becoming increasingly computational and data-driven, but the traditional paper and scientific journal has barely changed to accommodate this growing form of analysis. The current referencing structure makes it difficult for anyone to reproduce the results in a paper, either to check findings or build upon their results. In addition, scientists that generate code for middle-author contributions struggle to get the credit they deserve.

The Mozilla Science Lab, GitHub and Figshare – a repository where academics can upload, share and cite their research materials – are starting to tackle the problem. The trio have developed a system so researchers can easily sync their GitHub releases with a Figshare account. It creates a Digital Object Identifier (DOI) automatically, which can then be referenced and checked by other people.

http://thenextweb.com/dd/2014/03/17/mozilla-science-lab-github-figshare-team-fix-citation-code-academia

[HackerNews thread]


traject, blacklight, and crazy facet tricks

So in our Blacklight-based catalog, we have a facet/limit for “Location”, which is based on the collection/location codes from holdings, and is meant to limit to just items held at a particular sub-library of our Hopkins-wide system.

We’ve gotten a new requirement, which is that when you’ve limited to any of these location limits (for instance, only items in the “Milton S. Eisenhower Library”), the result set should also include all ‘Online’ items. No matter what Location limits you’ve applied, the result set should always include all Online items too. (Mine is not to reason why…). 

One thing that’s trickier than you might think is spec’ing exactly what counts as an ‘online’ item and how you identify it from the MARC records — but our Catalog already has an ‘Online’ limit, and we’ll just re-use that existing classification. How it classifies MARC records as ‘Online’ or not is a different discussion.

I can think of a couple approaches for making the feature work this way.

Option 1. Change how Blacklight app makes Solr requests

Ordinarily in a Blacklight app, if you choose a limit from a facet — say “Milton S. Eisenhower Library” from the “Location” facet — it will add an `fq` param to the Solr query: say, `&fq=location_facet:"Milton S. Eisenhower Library"` (except URL-encoded in the actual Solr URL, of course).

This is done by the add_facet_fq_to_solr method, and that method is called for creating the Solr URL because it’s listed in the solr_search_params_logic array.

So Blacklight actually gives us an easy way to customize this. We could remove add_facet_fq_to_solr from the solr_search_params_logic array in our local CatalogController, replacing it with our own custom local_add_facet_fq_to_solr method.

Our custom local method would, for facet limits from the Location facet only, do something special to add a different fq to the Solr query, one that looks more like `&fq=location_facet:"Milton S. Eisenhower Library" OR format_facet:Online`. For other facet limits, our custom local method would just call the original add_facet_fq_to_solr.
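
A rough sketch of what that could look like (hedging: this assumes the Blacklight 4.x/5.x-era `solr_search_params_logic` API, and the plain field:value fq format shown above — the exact fq string Blacklight generates varies by version, so treat this as an illustration, not a drop-in implementation):

class CatalogController < ApplicationController
  include Blacklight::Catalog

  # Swap our local step in for the stock one, in the chain of methods
  # Blacklight uses to build Solr request params.
  self.solr_search_params_logic.delete(:add_facet_fq_to_solr)
  self.solr_search_params_logic += [:local_add_facet_fq_to_solr]

  protected

  def local_add_facet_fq_to_solr(solr_parameters, user_parameters)
    # Let the stock logic build the fq's as usual...
    add_facet_fq_to_solr(solr_parameters, user_parameters)

    # ...then rewrite any Location limit so it also admits Online items.
    solr_parameters[:fq] = Array(solr_parameters[:fq]).map do |fq|
      fq.start_with?("location_facet:") ? "(#{fq}) OR format_facet:Online" : fq
    end
  end
end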

This wouldn’t change our Solr index at all, and would still make it possible to implement some other (possibly hidden back-end) feature that really limited to the original location without throwing “Online” in too, in case eventually people realize they need that after all.

I am not sure if it would affect performance of applying those limits; I think it probably would not — that expanded ‘fq’ with the ‘OR’ in it can be cached in the Solr filter cache the same as anything else.

I worry it might be a fragile solution though, one that could break in future versions of Blacklight (say, if Blacklight refactors or renames its request builder methods, so our code is no longer successfully replacing the original logic in the `add_facet_fq_to_solr` method) — and then be confusing for future developers who aren’t me to figure out why it’s broken and how to fix it. It’s potentially a bit too clever a solution.

Option 2. Change how location facet is indexed

The other option is changing how the location_facet Solr field is indexed, so every bib that is marked “Online” is also assigned to every location facet value.

Then, without any other changes at all to app code, limiting to a particular location facet value will always include every ‘Online’ record too, because all those records are simply included in every location facet value in the index.

We do our indexing with traject, and it’s fairly straightforward to implement something like this in traject.

In our indexing file, after the rule for possibly assigning ‘Online’ to the `format_facet`, we’d create a rule that looked something like this:

each_record do |record, context|
  # If an earlier indexing rule classified this record as 'Online'
  # (the same classification behind the format_facet 'Online' limit),
  # add it to every location facet value.
  if (context.output_hash["format_facet"] || []).include? "Online"
    context.output_hash["location_facet"] ||= []
    context.output_hash["location_facet"].concat all_the_locations
  end
end

Pretty easy-peasy, eh? I think I would have had a lot more trouble doing this concisely and maintainably in SolrMarc, but maybe that’s just because I’m more comfortable in ruby and with traject (having written traject with Bill Dueber). But I think it actually might be because traject is awesome.

The only other trick is where I get that `all_the_locations` from. My existing code uses not one but TWO different translation maps to go from MARC data to Location facet values. The only place ‘all possible locations’ exists in code is in the values in these two hashes. If I just hard code it into a variable, it’ll be fragile and easily get out of sync with those. I guess I’d have to write ruby code to look at both those location maps, get all the values, and stick em in a variable, at boot-time.

No problem, just in the traject configuration file anywhere before the indexing rule we define above:

# Gather every possible location facet value from both translation maps,
# so the list can't drift out of sync with the maps themselves.
all_the_locations = []
all_the_locations.concat Traject::TranslationMap.new("jh_locations").to_hash.values
all_the_locations.concat Traject::TranslationMap.new("jh_collections").to_hash.values
all_the_locations.uniq!

The benefit of traject being just ruby is that you can just write ruby, and I’ve tried to make the traject classes and APIs flexible so you can do what you need with them. (I hadn’t considered this use case specifically when I wrote the TranslationMap API, but I gave it a to_hash figuring all sorts of things could be done with that, as ruby’s Hash has a flexible API.)

Anyhow. The benefits of this approach are that no fancy, potentially fragile “create a custom Solr query” code is needed, and the Solr `fq`s for facet queries remain ordinary “field:value” queries, with Solr performance characteristics we are well familiar with.

Disadvantages might be that we’re adding something to our index size with all these additional postings (probably not too much though, Solr is pretty efficient with this stuff), and possibly changing the performance characteristics of our facet queries by changing the number and distribution of postings in location_facet.

Another disadvantage is that we’ve made it impossible to query the “real” location facet, without the inclusion of “Online” — but that does meet the specs we’ve currently been given.

So which approach to take?

I’m actually not entirely sure. I lean toward option 2, despite its downsides, because my intuition still says it’s less fragile and easier for future developers to understand (a huge priority for me these days), but I’m not entirely sure I’m right about that.

Any opinions?


Agility vs ‘agile’

Yes, more of this please. From Dave Thomas, one of the originators of the ‘agile manifesto’, who I have a newfound respect for after reading this essay.

Agile Is Dead (Long Live Agility)

However, since the Snowbird meeting, I haven’t participated in any Agile events, I haven’t affiliated with the Agile Alliance, and I haven’t done any “agile” consultancy. I didn’t attend the 10th anniversary celebrations.

Why? Because I didn’t think that any of these things were in the spirit of the manifesto we produced…

Let’s look again at the four values:

Individuals and Interactions over Processes and Tools
Working Software over Comprehensive Documentation
Customer Collaboration over Contract Negotiation, and
Responding to Change over Following a Plan

The phrases on the left represent an ideal—given the choice between left and right, those who develop software with agility will favor the left.

Now look at the consultants and vendors who say they’ll get you started with “Agile.” Ask yourself where they are positioned on the left-right axis. My guess is that you’ll find them process and tool heavy, with many suggested work products (consultant-speak for documents to keep managers happy) and considerably more planning than the contents of a whiteboard and some sticky notes…

Back to the Basics

Here is how to do something in an agile fashion:

What to do:

  • Find out where you are
  • Take a small step towards your goal
  • Adjust your understanding based on what you learned
  • Repeat

How to do it:

When faced with two or more alternatives that deliver roughly the same value, take the path that makes future change easier.

And that’s it. Those four lines and one practice encompass everything there is to know about effective software development. Of course, this involves a fair amount of thinking, and the basic loop is nested fractally inside itself many times as you focus on everything from variable naming to long-term delivery, but anyone who comes up with something bigger or more complex is just trying to sell you something.

http://pragdave.me/blog/2014/03/04/time-to-kill-agile/

I think being tricked by people trying to sell them something isn’t actually the only, or even the main, reason people get distracted from actual agility by lots of ‘agile’ rigamarole which is anything but.

I think there are intrinsic distracting motivations and interests in many organizations too: The need for people in certain positions to feel in control; the need for blame to be assigned when something goes wrong; just plain laziness and desire for shortcuts and magic bullets; prioritizing all of these things (whether you realize it or not) over actual product quality.

Producing good software is hard, for both technical and social/organizational reasons. But my ~18 years of software engineering (and life!) experience lead me to believe that there are no ‘tool’ shortcuts or magic bullets; you do it just the way Thomas says you do it — always in small iterative steps, always re-evaluating next steps, and always in continual contact with ‘stakeholders’ (who need to put time and psychic energy in too). Anything else is distraction at best, but more likely something even worse: misdirection.

And there’s a whole lot of distraction and misdirection labelled ‘agile’.


Another gem packaging of chosen.js for rails asset pipeline

chosen-rails already existed as a gem to package chosen.js assets for the Rails asset pipeline.

But I was having trouble getting it to work right; I’m not sure why, but it appeared to be related to the compass dependency.

The compass dependency is actually in the original chosen.js source too — chosen.js is originally written in SASS. And chosen-rails is trying to use the original chosen.js source.

I made a fork which instead uses the post-compiled pure JS and CSS from the chosen.js release, rather than its source. (Well, it has to customize the CSS a bit, to change referenced url()s to Rails asset pipeline asset-url() calls.)

I’ve called it chosen_assets. (rubygems; github).  Seems to be working well for me.
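
If you want to try it, it should presumably be the usual asset-gem pattern, with a Gemfile line like this (check the gem’s README for the exact sprockets require directives):

gem 'chosen_assets'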


vendor optical disc format promoted as ‘archival’?

Anyone in the digital archivist community want to weigh in on this, or provide citations to reviews or evaluations?

I’m not sure exactly who the market actually is for these “Archival Discs.” If it were actually those professionally concerned with long-term reliable storage, I would think the press release would include some information on what leads them to believe the media will be especially reliable long-term, compared to other optical media. Which it doesn’t seem to.

Which makes me wonder how much of the ‘archival’ is purely marketing. I guess the main novelty here is just the larger capacity?

Press Release: “Archival Disc” standard formulated for professional-use next-generation optical discs

Tokyo, Japan – March 10, 2014 – Sony Corporation (“Sony”) and Panasonic Corporation (“Panasonic”) today announced that they have formulated “Archival Disc”, a new standard for professional-use, next-generation optical discs, with the objective of expanding the market for long-term digital data storage*.

Optical discs have excellent properties to protect themselves against the environment, such as dust-resistance and water-resistance, and can also withstand changes in temperature and humidity when stored. They also allow inter-generational compatibility between different formats, ensuring that data can continue to be read even as formats evolve. This makes them robust media for long-term storage of content. Recognizing that optical discs will need to accommodate much larger volumes of storage going forward, particularly given the anticipated future growth in the archive market, Sony and Panasonic have been engaged in the joint development of a standard for professional-use next-generation optical discs.


A Proquest platform API

We subscribe to a number of databases via Proquest.

I wanted an API for having my software execute fielded searches against a Proquest database — specifically Dissertations and Theses in my current use case — and get back structured machine-interpretable results.

I had vaguely remembered hearing about such an API, but was having trouble finding any info about it.

It turns out that such an API does exist — although you’ll have trouble finding any documentation about it, or even any evidence on the web that it exists, and you’ll have trouble getting information about it from Proquest support too. Hooray.

You may occasionally see it called the “XML Gateway” in some Proquest documentation materials (although Proquest support doesn’t necessarily know this term). And it was probably intended for and used by federated search products — which makes me realize: oh yeah, if I have any database that’s used by a federated search product, then it’s probably got some kind of API.

And it’s an SRU endpoint.

(Proquest may also support Z39.50, but at least some Proquest docs suggest they recommend you transition to the “XML Gateway” instead of Z39.50, and I personally find it easier to work with than Z39.50.)

Here’s an example query:

http://fedsearch.proquest.com/search/sru/pqdtft?operation=searchRetrieve&version=1.2&maximumRecords=30&startRecord=1&query=title%3D%22global%20warming%22%20AND%20author%3DCastet

For me, coming from an IP address recognized as ‘on campus’ for our general Proquest access, no additional authentication is required to use this API. I’m not sure if we at some point prior had them activate the “XML Gateway” for us, likely for a federated search product, or if it’s just this way for everyone.

The path component after “/sru/”, “pqdtft”, is the database code for Proquest Dissertations and Theses. I’m not sure where you find a list of these database codes in general; if you’ve made a successful API request to the endpoint, there will be a <diagnosticMessage> element near the end of the response listing all the database codes you have access to (but without corresponding full English names, so you kind of have to guess).

The value of the ‘query’ parameter is a valid CQL query, as usual for SRU. It can be a bit tricky figuring out how to express what you want in CQL, but the CQL standard docs are decent, if you spend a bit of time with them.

Unfortunately, there seems to be no SRU “explain” response available from Proquest to tell you what fields and operators are available. But guessing often works: “title”, “author”, and “date” are all there. I’m not sure exactly how ‘date’ works and need to experiment more, although things like `date > 1990 AND date <= 2010` appear initially to work.

The CQL query param above un-escaped is:

title="global warming" AND author=Castet

Responses seem to be in MARCXML, and that seems to be the only option.
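
As a rough sketch of consuming this from ruby (hedging: this assumes the nokogiri and ruby-marc gems; the endpoint, parameters, and MARCXML response structure are just as observed above, and error handling is omitted):

require 'net/http'
require 'uri'
require 'stringio'
require 'nokogiri'
require 'marc'

# Fetch an SRU response from the "XML Gateway" and print record titles.
uri = URI("http://fedsearch.proquest.com/search/sru/pqdtft")
uri.query = URI.encode_www_form(
  "operation"      => "searchRetrieve",
  "version"        => "1.2",
  "maximumRecords" => "30",
  "startRecord"    => "1",
  "query"          => 'title="global warming" AND author=Castet'
)

doc = Nokogiri::XML(Net::HTTP.get(uri))
doc.xpath("//marc:record", "marc" => "http://www.loc.gov/MARC21/slim").each do |node|
  record = MARC::XMLReader.new(StringIO.new(node.to_xml)).first
  puts record['245']   # MARC title statement
end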

It looks like you can tell whether full text is available (on the Proquest platform) for a given item based on whether there’s an 856 field with second indicator set to “0” — that will be a URL to full text. I think. It looks like.
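
In ruby-marc terms, that check might look something like this (hedged sketch; `record` is a MARC::Record parsed as in the snippet above):

# Collect candidate full-text URLs: 856 fields with second indicator "0",
# taking the URL from subfield u.
full_text_urls = record.fields('856')
                       .select { |field| field.indicator2 == "0" }
                       .map    { |field| field['u'] }
                       .compact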

Did I mention if there are docs for any of this, I don’t have them?

So, there you go, a Proquest search API!

I also posted this to the code4lib listserv, and got some more useful details and hints from Andrew Anderson.

Oh, and if you want to link to a document you found this way, one way that seems to work is to take the Proquest document ID from the MARC 001 field in the response, and construct a URL like `http://search.proquest.com/pqdtft/docview/$DOCID$`. It seems to work, linking to full text if it’s available, otherwise to a citation page. Note the `pqdtft` code in the URL, again meaning ‘Proquest Dissertations and Theses’ — the same db I was searching to find the doc ID.
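
In code, that last trick could be a tiny helper like this (hypothetical helper name; `record` again a ruby-marc MARC::Record):

# Build a Proquest docview link from the document ID in the MARC 001 field.
def proquest_docview_url(record, db_code = "pqdtft")
  control_field = record['001']
  return nil unless control_field
  "http://search.proquest.com/#{db_code}/docview/#{control_field.value}"
end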
