Large collections in JS in the browser

Developers at the New York Times have released some open source software for displaying and managing large digital content collections, and doing so client-side, in the browser, with JavaScript.

Developed for journalism, this has some obvious potential relevance to the business of libraries too, right?  Large collections (increasingly digital), that’s what we’re all about, ain’t it?

PourOver and Tamper

Today we’re open-sourcing two internal projects from The Times:

  • PourOver.js, a library for fast filtering, sorting, updating and viewing large (100k+ item) categorical datasets in the browser, and
  • Tamper, a companion protocol for compressing categorical data on the server and decompressing in your browser. We’ve achieved a 3–5x compression advantage over gzipped JSON in several real-world applications.

Collections are important to developers, especially news developers. We are handed hundreds of user submitted snapshots, thousands of archive items, or millions of medical records. Filtering, faceting, paging, and sorting through these sets are the shortest paths to interactivity, direct routes to experiences which would have been time-consuming, dull or impossible with paper, shelves, indices, and appendices….

…The genesis of PourOver is found in the 2012 London Olympics. Editors wanted a fast, online way to manage the half a million photos we would be collecting from staff photographers, freelancers, and wire services. Editing just hundreds of photos can be difficult with the mostly-unimproved, offline solutions standard in most newsrooms. Editing hundreds of thousands of photos in real-time is almost impossible.

Yep, those sorts of tasks sound like things libraries are involved in, or would like to be involved in, right?

The actual JS does some neat things with figuring out how to incrementally and just-in-time send deltas of data, etc., and includes some good UI tools. Look at the linked page for more.

I am increasingly interested in what ‘digital journalism’ is up to these days. Journalism is an enterprise with some similarities to libraries, in that it is an information-focused business having to deal with a lot of internet-era ‘disruption’. Journalistic enterprises are generally for-profit (unlike most of the libraries we work in), but still have a certain public service ethos. And some of the technical problems they deal with overlap heavily with our areas of focus.

It may be that the grass is always greener, but I think the journalism industry is rising to the challenges somewhat better than ours is, or at any rate is putting more resources into technical innovation. When was the last time something that probably took as many developer-hours as this stuff, and is of potential interest outside the specific industry, came out of libraries?


“You build it, you run it”

I have seen several different approaches to division of labor in developing, deploying, and maintaining web apps.

The one that seems to work best to me is when the team responsible for developing an app is also the team responsible for deploying it and keeping it up, as well as for maintaining it. The same team — and ideally the same individual people (at least at first; job roles and employment change over time, of course).

If the people responsible for writing the app in the first place are also responsible for deploying it with good uptime stats, then they have an incentive to create software that can be easily deployed and stays up reliably. If it isn’t like that at first, then the people who receive the pain are the same people best placed to improve the software so it deploys better, because they are most familiar with its structure and how it might be altered.

Software is always a living organism; it’s never simply “done”. It’s going to need modifications in response to what you learn from how its users use it, as well as to changing contexts and environments. Software is always under development; the first time it becomes public is just one marker in its development lifecycle, not a clear boundary between “development” and “deployment”.

Compare this to other divisions of labor, where maybe one team does “R&D” on a nice prototype, then hands their code over to another team to turn it into a production service, or to figure out how to get it deployed, keep it deployed reliably, and respond to trouble tickets.  Sometimes these teams may be in entirely different parts of the organization.  If the software doesn’t deploy as easily or reliably as the ‘operations’ people would like, do they need to convince the ‘development’ people that this is legit and something should be done? And when it needs additional enhancements or functional changes, maybe it’s the crack team of R&Ders who do it, even though they’re on to newer and shinier things; or maybe it’s the operations people who are expected to do it, even though they’re not familiar with the code, since they didn’t write it; or maybe there’s nobody to do it at all, because the organization is operating on the mistaken assumption that developing software is like constructing a building: when it’s done, it’s done.[1]

I just don’t find that this division of labor works well for creating robust, reliable software that can evolve to meet changing requirements.

Recently I ran into a quote from an interview with Werner Vogels, Chief Technology Officer at Amazon, expressing the benefits of “You build it, you run it”:

There is another lesson here: Giving developers operational responsibilities has greatly enhanced the quality of the services, both from a customer and a technology point of view. The traditional model is that you take your software to the wall that separates development and operations, and throw it over and then forget about it. Not at Amazon. You build it, you run it. This brings developers into contact with the day-to-day operation of their software. It also brings them into day-to-day contact with the customer. This customer feedback loop is essential for improving the quality of the service.

I was originally directed to that quote by this blog post on the need for shared dev and ops responsibility, which I recommend too.

In this world of silos, development threw releases at the ops or release team to run in production.

The ops team makes sure everything works, everything’s monitored, everything’s continuing to run smoothly.

When something breaks at night, the ops engineer can hope that enough documentation is in place for them to figure out the dials and knobs in the application to isolate and fix the problem. If it isn’t, tough luck.

Putting developers in charge of not just building an app, but also running it in production, benefits everyone in the company, and it benefits the developer too.

It fosters thinking about the environment your code runs in and how you can make sure that when something breaks, the right dials and knobs, metrics and logs, are in place so that you yourself can investigate an issue late at night.

As Werner Vogels put it on how Amazon works: “You build it, you run it.”

The responsibility of maintaining your own code in production should encourage any developer to make sure that it breaks as little as possible, and that when it breaks you know what to do and where to look.

That’s a good thing.

None of this means you can’t have people who focus on ops and other people who focus on dev; but I think it means they should be situated organizationally close to each other, on the same teams, and that the dev people have to share some ops responsibilities, so they feel some pain from products that are hard to deploy, hard to keep running reliably, or hard to maintain or change.

[1] Note some people think even constructing a building shouldn’t be “when it’s done it’s done”, but that buildings too should be constructed in such a way that allows continual modification by those who inhabit them, in response to changing needs or understandings of needs.


Thank you again, Edward Snowden

According to this Reuters article, the NSA intentionally weakened encryption in popular encryption software from the company RSA.

They did this because they wanted to make sure they could continue eavesdropping on us all, but in the process they made us more vulnerable to eavesdropping from other attackers too. Once you put in a backdoor, anyone else who figures it out can use it too; it wasn’t some kind of NSA-only backdoor.  I bet, for instance, China’s hackers and mathematicians are as clever as ours.

“We could have been more sceptical of NSA’s intentions,” RSA Chief Technologist Sam Curry told Reuters. “We trusted them because they are charged with security for the U.S. government and U.S. critical infrastructure.”

I’m not sure if I believe him — the $10 million NSA paid RSA for inserting the mathematical backdoors probably did a lot to assuage their skepticism too. What did they think NSA was paying for?

On the other hand, sure, the NSA is charged with improving our security, and does have expertise in that.  It was fairly reasonable to think that’s what they were doing. Suggesting they were intentionally putting some backdoors in instead would have probably got you called paranoid… pre-Snowden.  Not anymore.

It is thanks only to Edward Snowden that nobody will be making that mistake again for a long time. Edward Snowden, thank you for your service.


Academic freedom in Israel and Palestine

While I mostly try to keep this blog focused on professional concerns, I do think academic freedom is a professional concern for librarians, and I’m going to again use this platform to write about an issue of concern to me.

On December 17th, 2013, the American Studies Association membership endorsed a Resolution on Boycott of Israeli Academic Institutions. This resolution endorses and joins in a campaign organized by Palestinian civil society organizations for boycott of Israel for human rights violations against Palestinians — and specifically, for an academic boycott called for by Palestinian academics.

In late December and early January, very many American university presidents released letters opposing and criticizing the ASA boycott resolution, usually on the grounds that the ASA action threatened the academic freedom of Israeli academics.

Here at Johns Hopkins, the President and Provost issued such a letter on December 23rd. I am quite curious what organizing took place that resulted in letters from so many university presidents within a few weeks. Beyond letters of disapproval from presidents, there has also been organizing to prevent scholars, departments, and institutions from affiliating with the ASA, or to retaliate against scholars who do so (such efforts are, ironically, quite a threat to academic freedom themselves).

The ASA resolution (and the Palestinian academic boycott campaign in general) does not call for a prohibition on cooperation with Israeli academics, only an end to formal collaborations with Israeli academic institutions — and in the case of the ASA, only formal partnerships by the ASA itself; it does not require any particular actions by members as a condition of membership in the ASA.  You can read more about the parameters of the ASA resolution, and the motivation that led to it, in the ASA’s FAQ on the subject, a concise and well-written document I definitely recommend reading.

So I don’t actually think the ASA resolution will have a significant effect on academic freedom for scholars at Israeli institutions.  It’s mostly a symbolic action, although the fierce organizing against it shows how threatening the symbolic action is to the Israeli government and those who would like to protect it from criticism.

But, okay, especially if academic boycott of Israel continues to gain strength, then some academics at Israeli institutions will, at the very least, be inconvenienced in their academic affairs.  I can understand why some people find academic boycott an inappropriate tactic — even though I disagree with them.

But here’s the thing. The academic freedom of Palestinian scholars and students has been regularly, persistently, and severely infringed for quite some time.  In fact, acting in solidarity with Palestinian colleagues facing restrictions on freedom of movement and expression and inquiry was the motivation of the ASA’s resolution in the first place, as they write in their FAQ and the language of the resolution itself.

You can read more about restrictions on Palestinian academic freedom, and the complicity of Israeli academic institutions in those restrictions, in a report from Palestinian civil society here; in this campaign web page from Birzeit University and other Palestinian universities; in this report from the Israeli Alternative Information Center; in this 2006 essay by Judith Butler; or in this 2011 essay by Riham Barghouti, one of the founding members of the Palestinian Campaign for the Academic and Cultural Boycott of Israel.

What are we to make of the fact that so many university presidents spoke up in alarm at an early sign of possible (in their view) impingements on the academic freedom of scholars at Israeli institutions, but none have spoken up to defend significantly beleaguered Palestinian academic freedom?

Here at Hopkins, Students for Justice in Palestine believes that we do all have a responsibility to speak up in solidarity with our Palestinian colleagues, students and scholars whose freedoms of inquiry and expression are severely curtailed; and that administrators’ silence on the issue does not in fact represent our community.  Hopkins SJP thinks the community should speak out in concern and support for Palestinian academic freedom, and they’ve written a letter Hopkins affiliates can sign on to.

I’ve signed the letter. I’d urge any readers who are also affiliated with Hopkins to read it, and consider signing it as well. Here it is.


“users hate change”

reddit comment with no particularly significant context:

Would be really interesting to put a number on “users hate change”.

Based on my own experience at a company where we actually researched this stuff, the number I would forward is 30%. Given an existing user base, on average 30% will hate any given change to their user experience, independent of whether that experience is actually worse or better.

“Some random person on reddit” isn’t scientific evidence or anything, but it definitely seems pretty plausible to me that some very significant portion of any user base will generally dislike any change at all. I think I’ve been one of those users myself, for software I don’t develop — I’m thinking of recent changes to Google Maps, many changes to Facebook, etc.

I’m not quite sure what to do with that though, or how it should guide us.  Because, if our users really do want stability over (in the best of cases) improvement, we should give it to them, right? But if it’s say 1/3rd of our users who want this, and not necessarily the other 2/3rds, what should that mean?  And might we hear more from that 1/3rd than the other 2/3rds and over-estimate them yet further?

But, still, say, 1/3rd, that’s a lot. What’s the right balance between stability and improvement? Does it depend on the nature of the improvement, or how badly some other portion of your userbase are desiring change or improvement?

Or, perhaps, work on grouping changes into more occasional releases instead of constant releases, to at least minimize the occurrences of disruption?  How do you square that with software improvement through iteration, so you can see how one change worked before making another?

Eventually users will get used to change, or even love the change and realize it helped them succeed at whatever they do with the software (and then the change-resistant won’t want the new normal changed either!) — does it matter how long this period of adjustment is? Might it be drastically different for different user bases or contexts?

Does it matter how much turnover you expect or get in your user base?  If you’re selling software, you probably want to keep all the users you’ve got and keep getting more, but the faster you’re growing, the quicker the old users (the only ones to whom a change is actually a change) get diluted by newcomers.   If you’re developing software for an ‘enterprise’ (such as most kinds of libraries), then the turnover of your userbase is a function of the organization, not of your market or marketing.  Either way, if you have less turnover, does that mean you can even less afford to irritate the change-resistant portion of the userbase, or is it irrelevant?

In commercial software development, the answer (for better or worse) is often “whatever choice makes us more money”, and the software development industry has increasingly sophisticated tools for measuring the effect of proposed changes on revenue. If the main goal(s) of your software development effort is something other than revenue, then perhaps it’s important to be clear about exactly what those goals are,  to have any hope of answering these questions.


blacklight_advanced_search 5.0.0 released for blacklight 5.x

blacklight_advanced_search 5.0.0 has been released.

https://github.com/projectblacklight/blacklight_advanced_search

If you were previously using the gem directly from its GitHub repo on the ‘blacklight5’ branch, I recommend you switch to using the released gem instead, with a line in your Gemfile like this:

gem 'blacklight_advanced_search', "~> 5.0"

Note that the URL format of advanced search form facet limits has changed from previous versions; if you had a previous version deployed, there is a way to configure redirects for the old style, in order to keep previously bookmarked URLs working. See the README.


“Code as Research Object”

Mozilla Science Lab, GitHub and Figshare team up to fix the citation of code in academia

Academia has a problem. Research is becoming increasingly computational and data-driven, but the traditional paper and scientific journal has barely changed to accommodate this growing form of analysis. The current referencing structure makes it difficult for anyone to reproduce the results in a paper, either to check findings or build upon their results. In addition, scientists that generate code for middle-author contributions struggle to get the credit they deserve.

The Mozilla Science Lab, GitHub and Figshare – a repository where academics can upload, share and cite their research materials – are starting to tackle the problem. The trio have developed a system so researchers can easily sync their GitHub releases with a Figshare account. It creates a Digital Object Identifier (DOI) automatically, which can then be referenced and checked by other people.

http://thenextweb.com/dd/2014/03/17/mozilla-science-lab-github-figshare-team-fix-citation-code-academia

[HackerNews thread]


traject, blacklight, and crazy facet tricks

So in our Blacklight-based catalog, we have a facet/limit for “Location”, based on the collection/location codes from holdings, meant to limit to just items held at a particular sub-library of our Hopkins-wide system.

We’ve gotten a new requirement, which is that when you’ve limited to any of these location limits (for instance, only items in the “Milton S. Eisenhower Library”), the result set should also include all ‘Online’ items. No matter what Location limits you’ve applied, the result set should always include all Online items too. (Mine is not to reason why…). 

One thing that’s trickier than you might think is spec’ing exactly what counts as an ‘online’ item and how you identify it from the MARC records — but our Catalog already has an ‘Online’ limit, and we’ll just re-use that existing classification. How it classifies MARC records as ‘Online’ or not is a different discussion.

I can think of a couple approaches for making the feature work this way.

Option 1. Change how Blacklight app makes Solr requests

Ordinarily in a Blacklight app, if you choose a limit from a facet — say “Milton S. Eisenhower Library” from the “Location” facet — it will add on an `fq` param to the Solr query: say, “&fq=location_facet:Milton S. Eisenhower Library” (except URL-encoded in the actual Solr URL of course).

This is done by the add_facet_fq_to_solr method, and that method is called for creating the Solr URL because it’s listed in the solr_search_params_logic array.

So Blacklight actually gives us an easy way to customize this. We could remove add_facet_fq_to_solr from the solr_search_params_logic array in our local CatalogController, replacing it with our own custom local_add_facet_fq_to_solr method.

Our custom local method would, for facet limits from the Location facet only, do something special to add a different fq to the Solr query, one that looks more like: `&fq=location_facet:"Milton S. Eisenhower Library" OR format_facet:Online`.  For other facet limits, our custom local method would just call the original add_facet_fq_to_solr.
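
Here’s a rough sketch of what that custom method might look like. Caveat: this is untested, and the exact method signatures and user-params structure vary across Blacklight versions, so treat it as an illustration of the approach rather than drop-in code (real code would also need to escape any Solr special characters in the facet value):

class CatalogController < ApplicationController
  include Blacklight::Catalog

  # Swap the stock facet fq builder for our local variant
  self.solr_search_params_logic.delete(:add_facet_fq_to_solr)
  self.solr_search_params_logic << :local_add_facet_fq_to_solr

  protected

  def local_add_facet_fq_to_solr(solr_parameters, user_params)
    # Pull Location limits out so the stock logic doesn't see them
    user_params = user_params.dup
    user_params[:f] = (user_params[:f] || {}).dup
    location_limits = Array(user_params[:f].delete("location_facet"))

    # Every other facet limit gets the ordinary treatment
    add_facet_fq_to_solr(solr_parameters, user_params)

    # Location limits get "OR Online" tacked on
    location_limits.each do |value|
      solr_parameters[:fq] ||= []
      solr_parameters[:fq] << %Q{location_facet:"#{value}" OR format_facet:"Online"}
    end
  end
end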

This wouldn’t change our Solr index at all, and would still make it possible to implement some other (possibly hidden back-end) feature that really limited to the original location without throwing “Online” in too, in case eventually people realize they need that after all.

I am not sure whether it would affect performance of applying those limits; I think probably not — that expanded ‘fq’ with the ‘OR’ in it can be cached in the Solr filterCache same as anything else.

I worry it might be a fragile solution though, one that could break in future versions of Blacklight (say, if Blacklight refactors or renames its request-builder methods, so our code is no longer successfully replacing the original logic in the `add_facet_fq_to_solr` method) — and then be confusing for future developers who aren’t me to figure out why it’s broken and how to fix it. It’s potentially a bit too clever a solution.

Option 2. Change how location facet is indexed

The other option is changing how the location_facet Solr field is indexed, so every bib that is marked “Online” is also assigned to every location facet value.

Then, without any other changes at all to app code, limiting to a particular location facet value will always include every ‘Online’ record too, because all those records are simply included in every location facet value in the index.

We do our indexing with traject, and it’s fairly straightforward to implement something like this in traject.

In our indexing file, after the rule for possibly assigning ‘Online’ to the `format_facet`, we’d create a rule that looked something like this:

each_record do |record, context|
  # Runs after the rule that may have assigned 'Online' to format_facet
  if (context.output_hash["format_facet"] || []).include? "Online"
    # Give 'Online' records every location value, so any location
    # limit still matches them
    context.output_hash["location_facet"] ||= []
    context.output_hash["location_facet"].concat all_the_locations
  end
end

Pretty easy-peasy, eh? I think I would have had a lot more trouble doing this concisely and maintainably in SolrMarc, but maybe that’s just because I’m more comfortable in ruby and with traject (having written traject with Bill Dueber). But I think it actually might be because traject is awesome.

The only other trick is where I get that `all_the_locations` from. My existing code uses not one but TWO different translation maps to go from MARC data to Location facet values. The only place ‘all possible locations’ exists in code is in the values of these two hashes. If I just hard-code the list into a variable, it’ll be fragile and easily get out of sync with those. So instead I’ll write ruby code that looks at both location maps, gets all the values, and sticks ’em in a variable at boot time.

No problem, just in the traject configuration file anywhere before the indexing rule we define above:

# Collect every possible location value from both translation maps
all_the_locations = []
all_the_locations.concat Traject::TranslationMap.new("jh_locations").to_hash.values
all_the_locations.concat Traject::TranslationMap.new("jh_collections").to_hash.values
all_the_locations.uniq!
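
(In case it helps to picture it: a translation map here is just a lookup table from raw MARC location/collection codes to facet labels; traject accepts them as yaml, java properties, or an .rb file that evals to a Hash. A hypothetical excerpt — these codes and most labels are invented for illustration:)

# translation_maps/jh_locations.rb -- evals to a Hash of code => label
{
  "eis" => "Milton S. Eisenhower Library",
  "mus" => "Friedheim Music Library",
}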

The benefit of traject being just ruby is that you can just write ruby, and I’ve tried to make the traject classes and APIs flexible so you can do what you need with them (I hadn’t considered this use case specifically when I wrote the TranslationMap API, but I gave it a to_hash figuring all sorts of things could be done with that, as Ruby’s Hash has a flexible API).

Anyhow. The benefit of this approach is that no fancy, potentially fragile “create a custom Solr query” code is needed, and the Solr `fq`s for facet queries are still ordinary “field:value” queries, with Solr performance characteristics we are well familiar with.

Disadvantages might be that we’re adding something to our index size with all these additional postings (probably not too much, though; Solr is pretty efficient with this stuff), and possibly changing the performance characteristics of our facet queries by changing the number and distribution of postings in location_facet.

Another disadvantage is that we’ve made it impossible to query the “real” location facet, without the inclusion of “Online” — though that still meets the specs we’ve currently been given.

So which approach to take?

I’m actually not entirely sure. I lean toward option 2, despite its downsides, because my intuition still says it’s less fragile and easier for future developers to understand (a huge priority for me these days), but I’m not entirely sure I’m right about that.

Any opinions?


Agility vs ‘agile’

Yes, more of this please. From Dave Thomas, one of the originators of the ‘agile manifesto’, for whom I have newfound respect after reading this essay.

Agile Is Dead (Long Live Agility)

However, since the Snowbird meeting, I haven’t participated in any Agile events, I haven’t affiliated with the Agile Alliance, and I haven’t done any “agile” consultancy. I didn’t attend the 10th anniversary celebrations.

Why? Because I didn’t think that any of these things were in the spirit of the manifesto we produced…

Let’s look again at the four values:

Individuals and Interactions over Processes and Tools
Working Software over Comprehensive Documentation
Customer Collaboration over Contract Negotiation, and
Responding to Change over Following a Plan

The phrases on the left represent an ideal—given the choice between left and right, those who develop software with agility will favor the left.

Now look at the consultants and vendors who say they’ll get you started with “Agile.” Ask yourself where they are positioned on the left-right axis. My guess is that you’ll find them process and tool heavy, with many suggested work products (consultant-speak for documents to keep managers happy) and considerably more planning than the contents of a whiteboard and some sticky notes…

Back to the Basics

Here is how to do something in an agile fashion:

What to do:

  • Find out where you are
  • Take a small step towards your goal
  • Adjust your understanding based on what you learned
  • Repeat

How to do it:

When faced with two or more alternatives that deliver roughly the same value, take the path that makes future change easier.

And that’s it. Those four lines and one practice encompass everything there is to know about effective software development. Of course, this involves a fair amount of thinking, and the basic loop is nested fractally inside itself many times as you focus on everything from variable naming to long-term delivery, but anyone who comes up with something bigger or more complex is just trying to sell you something.

http://pragdave.me/blog/2014/03/04/time-to-kill-agile/

I think being tricked by people trying to sell them something isn’t actually the only, or even the main, reason people get distracted from actual agility by lots of ‘agile’ rigamarole which is anything but.

I think there are intrinsic distracting motivations and interests in many organizations too: The need for people in certain positions to feel in control; the need for blame to be assigned when something goes wrong; just plain laziness and desire for shortcuts and magic bullets; prioritizing all of these things (whether you realize it or not) over actual product quality.

Producing good software is hard, for both technical and social/organizational reasons. But my ~18 years of software engineering (and life!) experience leads me to believe that there are no ‘tool’ shortcuts or magic bullets; you do it just the way Thomas says you do it: always in small iterative steps, always re-evaluating next steps, and always in continual contact with ‘stakeholders’ (who need to put time and psychic energy in too).  Anything else is distraction at best, but more likely something worse: misdirection.

And there’s a whole lot of distraction and misdirection labelled ‘agile’.


Another gem packaging of chosen.js for rails asset pipeline

chosen-rails already existed as a gem to package chosen.js assets for the Rails asset pipeline.

But I was having trouble getting it to work right. I’m not sure why, but it appeared to be related to the compass dependency.

The compass dependency is actually in the original chosen.js source too — chosen.js is originally written in SASS. And chosen-rails is trying to use the original chosen.js source.

I made a fork which instead uses the post-compiled pure JS and CSS from the chosen.js release, rather than its source. (Well, it has to customize the CSS a bit, to change referenced url()s to Rails asset pipeline asset-url() calls.)

I’ve called it chosen_assets. (rubygems; github).  Seems to be working well for me.
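
If you want to try it, the Gemfile line would presumably follow the usual pattern (a sketch — check the repo README for the actual install instructions):

gem 'chosen_assets'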
