Solr Indexing in Kithe

So you may recall the kithe toolkit we are building in concert with our new digital collections app, which I introduced here.

I have completed some Solr Indexing support in kithe. It’s just about indexing, getting your data into Solr. It doesn’t assume Blacklight, but should work fine with Blacklight; there isn’t currently any support in kithe for what you do to provide UX for your Solr index.  You can look at the kithe guide documentation for the indexing features for a walk-through.

The kithe indexing support is based on ActiveRecord callbacks, in particular the after_commit callback. While callbacks get a bad rap, I think they are appropriate here, and note that both the popular sunspot gem (Solr/Rails integration, currently looking for new maintainers) and the popular searchkick gem (ElasticSearch/Rails integration) base their indexing synchronization on AR callbacks too. (There are various ways in kithe’s feature to turn off automatic callbacks temporarily or permanently in your code, as there are in those other two gems.) I spent some time looking at APIs, features, and implementation of the indexing-related functionality in sunspot and searchkick, as well as other “prior art”, before/while developing kithe’s support.
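
To make that concrete, here’s a minimal sketch of the general pattern (this is not kithe’s actual API; SolrSync is a hypothetical helper standing in for whatever does the mapping and the HTTP):

```ruby
# A minimal sketch of the general after_commit pattern -- NOT kithe's actual
# API; SolrSync is a hypothetical stand-in for whatever maps the record and
# talks HTTP to Solr.
class Work < ApplicationRecord
  # after_commit fires only once the DB transaction has really committed,
  # so we never index data that could still be rolled back
  after_commit :update_solr_index, on: [:create, :update]
  after_commit :remove_from_solr_index, on: :destroy

  private

  def update_solr_index
    SolrSync.add(self)
  end

  def remove_from_solr_index
    SolrSync.delete(id)
  end
end
```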

The kithe indexing support is also based on traject for defining your mappings.
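
For flavor, a plain-traject mapping where the “records” are ActiveRecord models rather than MARC might look something like this (kithe’s own indexer class adds some sugar on top of this; field names here are made up):

```ruby
# Illustrative plain-traject (3.x) mapping over ActiveRecord models; the
# field names are made up.
require 'traject'

indexer = Traject::Indexer.new do
  settings do
    provide "solr.url", "http://localhost:8983/solr/my_core"
  end

  to_field "id" do |record, accumulator|
    accumulator << record.id.to_s
  end

  to_field "title_text" do |record, accumulator|
    accumulator << record.title
  end
end
# You'd then hand records (and a writer) to the indexer to send them to Solr.
```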

I am very happy with how it turned out, I think the implementation and public API both ended up pretty decent. (I am often reminded of the quote of uncertain attribution “I didn’t have time to write a short letter, so I wrote a long one instead” — it can take a lot of work to make nice concise code).

The kithe indexing support is independent of any other kithe features and doesn’t depend on them. I think it might be worth looking at for anyone writing an app whose persistence is based on ActiveRecord. (If you’re using something ActiveModel-like but not ActiveRecord, it probably doesn’t have after_commit callbacks, but if it has after_save callbacks, we could make the kithe feature optionally use those instead; sunspot and searchkick can both do that).

Again, here’s the kithe documentation giving a tour of the indexing features. 

Note on traject

The part of the architecture I’m least happy with is traject, actually.

Traject was written for a different use case — command-line executed high-volume bulk/batch indexing from file serializations. And it was built for that basic domain and context at the time, with a YAGNI attitude toward anything beyond it.

So why try to use it for a different case of event-based few-or-one object sync’ing, integrated into an app?  Well, hopefully it was not just because I already had traject and was the maintainer (‘when all you have is a hammer’), although that’s a risk. Partially because traject’s mapping DSL/API has proven to work well for many existing users. And it did at least lead me to a nice architecture where the indexing code is separate and fairly decoupled from the ActiveRecord model.

And the Traject SolrJsonWriter already had nice batching functionality (and thread-safety, although I didn’t end up using that in the current kithe architecture), which made it convenient to implement batching features in a de-coupled way (just send to a writer that’s batching; the other code doesn’t need to know about it, except for maybe flushing at the end).
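
Simplified, standalone use of the writer looks roughly like this (check the traject README for the full set of settings):

```ruby
# Simplified illustration of Traject::SolrJsonWriter's batching on its own;
# see the traject README for current settings and details.
require 'traject'
require 'traject/solr_json_writer'

writer = Traject::SolrJsonWriter.new(
  "solr.url"               => "http://localhost:8983/solr/my_core",
  "solr_writer.batch_size" => 100  # buffer documents, send fewer HTTP update requests
)

# Code producing index documents just hands them to the writer and doesn't
# need to know batching is happening...
#   writer.put(context)   # contexts wrap the output documents
# ...until the end, when any remaining buffered documents get flushed:
#   writer.close
```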

And, well, maybe I just wanted to try it. And I think it worked out pretty well, although there are some oddities in there due to traject’s current basic architectural decisions. (Like, instantiating a Traject “Indexer” can be slow, so we use a global singleton in the kithe architecture, which is weird.)  I have some ideas for possible refactors of traject (some backwards compatible, some not) that would make it seem more polished for this kind of use case, but in the meantime, it really does work out fine.

Note on times to index, legacy sufia app vs our kithe-based app

Our collection, currently in a sufia app, is relatively small. We have about 7,000 Works (some of which are “child works”), 23,000 “FileSets” (which in kithe we call “Assets”), and 50 Collections.

In our existing Sufia-based app, it takes about 6 hours to reindex to Solr on an empty index.

  • Except actually, on an empty index it might take two re-index operations, because of the way sufia indexing is reliant on getting things out of the index to figure out the proper way to index a thing at hand. (We spent a lot of work trying to reorganize the indexing to not require an index to index, but I’m not sure if we succeeded, and may ironically have made performance issues with fedora worse with the new patterns?) So maybe 12 hours.
  • Except that 6 hours is just a guess from memory. I tried to do a bulk reindex-everything in our sufia app to reconfirm it — but we can’t actually currently do a bulk reindex at all, because it triggers an HTTP timeout from Fedora taking too long to respond to some API request.
    • If we upgraded to ActiveFedora 12, we could increase the timeout that ActiveFedora is willing to wait for a fedora response for. If we upgraded to ActiveFedora 12.1, it would include this PR, which I believe is intended to eliminate those super long fedora responses. I don’t think it would significantly change our end-to-end indexing time; the bulk of it is not in those initial very long fedora API calls. But I could be wrong. And I’m not sure how realistic it is to upgrade our sufia app to AF 12 anyway.
    • To be fair, if we already had an existing index, but needed to reindex our actual works/collections/filesets because of a Solr config change, we had another routine which could do so in only ~25 minutes.

In our new app, we can run our complete reindexing routine in currently… 30 seconds. (That’s about 300 records/second throughput — only indexing Works and Collections. In past versions as I was building out the indexing I was getting up to 1000 records/second, but I haven’t taken time to investigate what changed, because 30s is still just fine).

In our sufia app we are backing up our on-disk Solr indexes, because we didn’t want to risk the downtime it would take to rebuild (possibly including fighting with the code to get it to reindex).  In addition to just being more bytes to sling, this leads to ongoing developer time on such things as “did we back up the solr data files in a consistent state? Sync’d with our postgres backup?”, and “turns out we just noticed that an error in the backup routine meant the backup actually wasn’t happening.” (As anyone who deals with backups of any sort knows, these things can be A Thing).

In the new system, we can just… not do that.  We know we can easily and quickly regenerate the Solr index whenever, from the data in postgres. (And if we upgrade to a new Solr version that requires an index rebuild, no need to figure out how to do so without downtime in a complicated way).

Why is the new system so much faster? I’ve identified three likely factors, but haven’t actually tried to do much profiling to determine which of these (if any?) are the predominant ones, so I couldn’t say.

  1. Getting things out of fedora (at least under sufia’s usage patterns) is slow. Getting things out of postgres is fast.
  2. We are now only indexing what we need to support search.
    • The only things that show up in our search results are Works and Collections, so that’s all we’re indexing. (Sufia indexes FileSets too, as well as some ancillary objects such as one or two kinds of permission objects, and possibly a variety of other things I’m not familiar with. Sufia is trying to put pretty much everything that’s in fedora in Solr. For Reasons, mainly that it’s hard to query your things in Fedora with Fedora).
    • And we are only indexing the fields we actually need in Solr for those objects. Sufia tries to index a more or less round-trippable representation to Solr, with every property in its own stored solr field, etc. We aren’t doing that anymore. We could put all text in one “text” field, if we didn’t want to boost some higher than others. So we only index to as many fields as need different boosts, plus fields for facets, etc. Only what we need to support the Solr functionality we want.
      • If you want to render your results from only Solr stored fields (as sufia/hyrax do, and blacklight kind of wants you to) you’d also need those stored fields, sufficiently independently addressable to render what you want (or perhaps just in one big serialized JSON?). We are hoping to not use solr stored fields for rendering at all, but even if we end up with Solr stored fields for rendering, it will be just enough that we need for rendering. (For instance, some people using Blacklight are using solr stored fields for the “index”/search results/hits page, but not for the individual record ‘show’ page).
  3. The indexing routines in the new app send updates to Solr in an efficient way, both batching record updates into fewer Solr HTTP update requests, and not sending synchronous Solr “hard commits” at all. (The bulk reindex, like the after_commit indexing, currently sends a softCommit per update request, although this could be configured differently. A rough sketch follows this list.)
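
For the record, our bulk reindex ends up looking roughly like this (an approximation; see the kithe indexing guide for the actual API and method names):

```ruby
# Approximate sketch of our bulk reindex -- see the kithe indexing guide for
# the actual API; method names here may differ slightly.
Kithe::Indexable.index_with(batching: true) do
  # Adds get buffered into fewer Solr HTTP update requests, and no hard
  # commits are sent; Solr's autoCommit/autoSoftCommit config (or a trailing
  # softCommit) makes the updates visible soon enough.
  Work.find_each(&:update_index)
  Collection.find_each(&:update_index)
end
```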

So

Check out the kithe guide on indexing support! Maybe you want to use kithe, maybe you’re writing an ActiveRecord-based app and want to consider kithe’s solr indexing support in isolation, or maybe you just want to look at it for API and implementation ideas in your own thing(s).

very rough benchmarking of Solr update batching performance characteristics

In figuring out how I want to integrate a synchronized Solr index into my Rails application, I am doing some very rough profiling/benchmarking of batching Solr adds vs not, just to get a general sense of it.

(This is all _very rough estimates_ and may depend a lot on your environment and Solr setup, including how many records you have in Solr, if Solr is being simultaneously used for queries, etc).

One thing some Solr (or ElasticSearch) integration packages sometimes end up concentrating on is batching multiple index-change-needed events into fewer Solr update requests.

Based on my observations, I think it’s not actually the separate HTTP requests that are expensive. (although I’m benchmarking with a solr on localhost).

But the commits are — if you are doing them. In my benchmarks reindexing a whole bunch of things, if I’m not doing any commits, whether I batch into fewer HTTP update requests to Solr or not has no appreciable effect on speed.

But sending a softCommit per record/update makes it around 2.5x slower.

Sending a (hard) commit per record makes it around 4x slower.
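
For concreteness, the rough shape of the benchmark was something like the following (this isn’t the exact script I used; localhost Solr, made-up field names):

```ruby
# Rough benchmark shape: compare per-record vs batched updates, with and
# without commit/softCommit per request. Core name and fields are made up.
require 'benchmark'
require 'net/http'
require 'json'

# POST an array of docs as one Solr JSON update request, optionally asking
# for a commit or softCommit on that request.
def solr_add(http, docs, commit: nil)
  path = "/solr/my_core/update"
  path += "?#{commit}=true" if commit   # "commit" or "softCommit"
  request = Net::HTTP::Post.new(path, "Content-Type" => "application/json")
  request.body = JSON.generate(docs)
  http.request(request)
end

docs = 1000.times.map { |i| { "id" => "doc-#{i}", "title_text" => "title #{i}" } }

# one keep-alive connection to a localhost Solr, reused for all requests
Net::HTTP.start("localhost", 8983) do |http|
  Benchmark.bm(30) do |x|
    x.report("one per request, no commit")   { docs.each { |d| solr_add(http, [d]) } }
    x.report("one per request, softCommit")  { docs.each { |d| solr_add(http, [d], commit: "softCommit") } }
    x.report("one per request, hard commit") { docs.each { |d| solr_add(http, [d], commit: "commit") } }
    x.report("batches of 100, no commit")    { docs.each_slice(100) { |batch| solr_add(http, batch) } }
  end
end
```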

Even without explicit commit directives, if you have your solr set up to autocommit (soft or hard), it may of course occasionally pause to do some commits, so your measured time may depend on whether you hit one of those.

So if you don’t care about realtime/near-realtime, you may not have to care about batching. I had already gotten the sense from Solr’s documentation that Solr will really like it better if the client never sends commits, but just lets Solr’s autoCommit/autoSoftCommit/commitWithin configuration make sure updates become visible within a certain maximum amount of time. The reason to have the client send commits is generally because you need to guarantee that the updates will be visible to queries as soon as your code doing the update is finished.

The reason so many end up caring about batching updates might not be because individual http requests to solr are a problem, but because too many _commits_ are. So if for some reason it was more convenient, only sending a commit per X records might be just as good as actually batching http requests — if you have to send commits from the client at all.

What “search engine” to use in a digital collections Rails app?

Traditional samvera apps have Blacklight, and its Solr index, very tightly integrated into many parts of persistence and discovery functionality, including management interfaces.

In rewriting our digital collections app, we have the opportunity to make other choices. Which of course is both a blessing and a curse; who wants choices?

One thing I know I don’t want is as tight and coupled an integration to Solr as a typical sufia or hyrax app.

We should be able to at least find persisted model items by id (or iterate through all of them), make some automated changes (say correcting a typo), and persist them to storage — without a Solr index existing at all. To the extent a Solr (or other text-search-engine) index exists at all, discrepancies between what’s in the index and what’s in our “real” store should not cause any usual mutation-and-persistence APIs to fail (either with an error or with a wrong side-effect outcome).

Really, I want a back-end interface that can do most if not all things a staff user needs to do in managing the repo, without any Solr index existing at all.  Just plain postgres ‘LIKE’ search may sometimes be enough; when it’s not, pg’s full-text indexing features likely are. These sorts of features are not as powerful as a ‘text search engine’ product like lucene or Solr — they aren’t going to do the complicated stemming that Solr does, or probably features like “phrase boosting”. They can give you filters/limits, but not facets in the Solr sense (telling you what terms are present in a given already-restricted search, with term counts).
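
That kind of no-Solr back-end search can be as simple as something like this (a rough sketch; model and column names are hypothetical):

```ruby
# Rough sketch of "good enough" back-end search without any Solr; model and
# column names are hypothetical.
class Work < ApplicationRecord
  # simple case-insensitive substring match (in real code you'd also escape
  # LIKE wildcards in the query string)
  scope :admin_title_search, ->(q) { where("title ILIKE ?", "%#{q}%") }

  # or postgres full-text search, still with no search engine involved
  scope :admin_fulltext_search, ->(q) {
    where(
      "to_tsvector('english', coalesce(title, '') || ' ' || coalesce(description, '')) @@ plainto_tsquery('english', ?)",
      q
    )
  }
end
```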

So we almost certainly still want Solr or a similar search engine providing user-facing front-end discovery, for this powerful search experience. We just want it sitting loosely on top of our app, not tightly integrated into every persistence and retrieval operation like it ends up in a typical sufia or hyrax app.

And part of this, for me, is that I only want to have to index in Solr (or similar) what is necessary for discovery/retrieval features, for search. This is how Solr works best. I don’t want to have to store a complete representation of the model instance in Solr, with every field addressable from a Solr result. So, that means, even in the implementation of the front-end UX search experience, I want display of the results to be from my ordinary ActiveRecord model objects (even on the ‘index’ search results page, and certainly on the ‘item detail’ page).  This is in fact how sunspot works — after solr returns a hit list, take the db pks (and model names) from the solr response, and then just fetch the results again from the database, in nice efficient SQL, using pre-loading (via SQL joins) etc. This is how one attempt at an elasticsearch-rails integration works too.
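
The core of that pattern is small — something like this, where solr_response is an already-parsed Solr JSON response and the field and association names are hypothetical:

```ruby
# Sketch of the sunspot-style pattern: Solr gives us just ids (and relevance
# order); we then load the real ActiveRecord models, with eager-loading, and
# render from those. Field/association names are hypothetical.
ids = solr_response["response"]["docs"].map { |doc| doc["model_pk_ssi"] }

works_by_id = Work.where(id: ids)
                  .includes(:leaf_representative)  # whatever the views need, to avoid n+1 queries
                  .index_by(&:id)

# preserve Solr's relevance ordering, dropping any ids no longer in the db
@results = ids.map { |id| works_by_id[id] }.compact
```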

Yeah, it’s doing an “extra” fetch from the db, when it theoretically could have gotten everything it needed to display from Solr.  But properly optimized fetches from the db to display one page of search results are pretty quick, certainly faster than what was going on with ActiveFedora in our sufia app anyway, and the developer pain (and subsequent bugs) that can come from trying to duplicate everything in Solr just aren’t worth trying to optimize away the db fetch. There’s a reason popular Solr or ElasticSearch integrations with Rails do the “extra” fetch.

OK, I know what I want (and what I don’t), but what am I going to do? There’s still some choices, curses!

1. Blacklight, intervened in to return actual ActiveRecord models to views?

Blacklight was originally written for “library catalog” use cases where you might not be indexing from a local rdbms at all, you might be indexing from a third party data API, and you might not have representations in a local rdbms, solr is all you’ve got. So it was written to find everything it needs to display the results found in the Solr response.

But can we intervene in Blacklight to instead take the Solr responses, use them to get model names and pks to then fetch from a local rdbms instead? Like sunspot does?

This was my initial plan, and at first I thought I could do it easily. In fact, years ago, when writing a Blacklight catalog app, I had to do something in some ways analogous. We wanted our library catalog to show live checked in/out status for things returned by Solr. But we had no good way to get this into our Solr index in realtime. So, okay, we can’t provide a filter/facet by this value without it in the index, but can we still display it with realtime accuracy?

We could! We wanted to hook into the solr search results process in Blacklight, take the unique IDs from the solr response, and use them to make API calls to our ILS to figure out item status. (checked out, etc). And we wanted to do this in one bulk query (with all the IDs that are on the page of results), not one request per search hit, which would have been a performance problem. (We won’t talk about how our ILS didn’t actually have an API; let’s just say we gave it one).

So I had already done that, and thought the hook points and functions were pretty similar (just look up ‘extra info’ differently, this time the ‘extra info’ is an actual ActiveRecord model associated with each hit, instead of item status info). So I figured I could do it again!

The Blacklight method I had overridden to do that (back in maybe Blacklight 2.x days) was the search_results method called by the Catalog#index action among other places. Overriding this got every place Blacklight got ‘results’, so we could make sure to hook in to get ‘extra stuff’ on every results fetch. It returned the @response itself, so we could hook into it to enhance the SolrDocuments returned to have extra stuff! Or hey, it’s a controller method, we can even have it set other iVars if we want. A nice clean intervention.

But alas! While I had used this in many-years-ago Blacklight, and it made it all the way to Blacklight 6… this method no longer exists in Blacklight 7, and there’s no great obvious place to override to do similar. It looks like it actually went away in a BL commit a couple years ago, but that commit didn’t make it into a BL release until 7.0. The BL 7 def index action method… doesn’t really have any clear point to intervene to do similar.

Maybe I could do something over in the ‘search_service.search_result’ method, I guess in a custom search_service. It’s not a controller method, so it couldn’t provide its own iVar, but it could still modify the @response to enhance the SolrDocuments in it.  There are some more layers of architecture to deal with (and possibly break with future BL releases), and I haven’t yet really figured out what the search_service is and where it comes from! But it could be done.

Or I could try to get the search_results cover method back into BL. (Not sure how amenable BL would be to such a PR.)

Or I could… override even more? The whole catalog index method? Don’t even use the blacklight Catalog controller at all but write my own? Both of those, my intuition based on experience with BL says, there be dragons.

2. Use Solr, but not Blacklight

So as I contemplated the danger of overriding big pieces of BL, I thought, ok, wait, why am I using BL at all, actually?

A couple senior developers at a couple institutions I talked to (I won’t reveal their names in case I’m accidentally misrepresenting them, and to not bring down the heat of bucking the consensus on them) said they were considering just writing ruby code to interact with Solr. (We’re talking about search/discovery here; indexing — getting the data into Solr in the first place — is another topic.) They said, gee, what we need in our UI for Solr results just isn’t that complicated, we think maybe it’s not actually that hard to just write the code to do it, maybe easier than fighting with BL, which in the words of one developer has a tendency to “infect” your application, making everything more complicated when you try doing things the “Blacklight way”.

And it’s true we spend a lot of time overriding or configuring Blacklight to turn off features we didn’t think worked right, or we just don’t want in our UX. (Sufia/hyrax have traditionally tried to override Blacklight to make the ‘saved searches’ feature go away for instance). And there’s more features we just don’t use.

Could we just write our own code for issuing queries to Solr, and displaying facets and hits from the results? Maybe. Off-hand, I can think of a couple things we get from BL that are non-trivial.

  1. The “back to search” link on the item detail page. Supplying a state-ful UX on top of a state-less HTTP web app (while leaving the URLs clean) is a pain.  The Blacklight implementation has gone through various iterations and probably still has some weirdness, but my stakeholders have told me this feature is important.
  2. The date range facet with little mini-histogram, provided by blacklight_range_limit. This feature is also implemented kind of crazily (I should know, I wrote the first version, although I’m not currently really the maintainer) — if all you want is a date range limit where you enter a start and end year, without the little bar-graph-ish histogram, that’s actually easy, and I think some people are using blacklight_range_limit when that’s all they want, and could be doing it a lot simpler. But the histogram, with the nice calculated (for the particular result set!) human-friendly range endpoints (it’ll be 10s or 5s or whatever, at the right order of magnitude for your current faceted results!), is kind of a pain, and it just works with blacklight_range_limit (although I don’t actually know if blacklight_range_limit works for BL7, it may not).

There are probably a few more things I’m not thinking of that I’d run into.

On the plus side, I wouldn’t have to fight with Blacklight to turn off the things I don’t want, or to get it to have the retrieval behavior I want, retrieving hits from my actual rdbms for display.

Hmm.

(While I keep looking at sunspot for ideas — it is/was somewhat popular, so must have at least gotten some developer APIs right for some use cases involving rdbms data searched in Solr — it’s got some awfully complicated implementation, assumes certain “basic search” use cases as the golden path, definitely has some things I’d have to fight with, and has a “Looking for maintainers” line in its README, so I’m not really considering actually using it).

3. Should we use ElasticSearch?

Hypothetically, I think Blacklight was abstracted at one point to support ElasticSearch. I’m not totally sure how that went, if anyone is using BL with ES in production or whatever.

But if I wanted to use ElasticSearch, I think I probably wouldn’t try to use it with BL, but as an extension of option “2” above. If I’m going to be writing it myself anyway, might we want to use ElasticSearch instead?

ElasticSearch, like Solr, is an HTTP-api search engine app built on top of lucene. In some ways, I think Solr is a victim of being first. It’s got a lot more… legacy.  And different kinds of deployments it supports. (SolrCloud or not cloud? Managed schema or not? What?) Solr can sometimes seem to me like it gives you a billion ways to do whatever you want to do, but you’ve got to figure out which one to do (and whatever you choose may break some other feature). Whereas ElasticSearch just seems to be more straightforward. Or maybe it’s just that it seems to have better, clearer documentation. It just seems less overwhelming, and I theoretically am familiar with Solr from years of use (but I always learned just enough to get by).

For whatever reasons, ElasticSearch seems to have possibly overtaken Solr in popularity, seems to be easier to pay someone else for a cloud-hosted PaaS instance at an affordable price, and seems to just generally be considered a bit easier to get started with.

I’ve talked to some other people in samvera space who are hypothetically considering ElasticSearch too, if they could do whatever they wanted (although I know of nobody actually moving forward with plans).

ElasticSearch at least historically didn’t have all the features and flexibility of Solr, but it’s caught up a lot. Might it have everything we actually need for this digital collections app?

I’m not sure. Historically ES had problems with facets, and while it seems to have caught up a lot… they don’t work quite like Solr’s, and looking around to see if they do everything I need, it seems like there are a couple different ES features that approach Solr’s “facets”, and I’m not totally sure either does what I actually need (ordinary Solr facets: exact term counts, sorted by most-represented-term-first, within a result set).
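
To be concrete about what I mean by “ordinary Solr facets”, the request side is just the classic facet params (field names hypothetical):

```ruby
# A classic Solr field-facet request: exact term counts within the current
# result set, most-represented term first. Field names are made up.
solr_params = {
  "q"              => "annual report",
  "fq"             => "genre_facet:Pamphlets",  # an already-applied limit
  "facet"          => "true",
  "facet.field"    => ["subject_facet", "genre_facet"],
  "facet.sort"     => "count",    # most-represented term first
  "facet.mincount" => 1,
  "facet.limit"    => 20
}
```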

It might! But really ES’s unfamiliarity is the biggest barrier. I’d have to figure out how to do things with slightly different featureset, and sometimes might find myself up against a brick wall, and am not sure I’d know that for sure until I’m in the middle of it. I have a pretty good sense of what Solr can do at this point, I know what I’m getting into.

(ES also maybe exposes different semantics around lucene ‘commits’? If you need synchronous “realtime” commits immediately visible on the next query, I think maybe you can get that from ES, but I’m not 100% confident, it’s definitely not ES’s “happy path”. Historically samvera apps have believed they needed this; I’m not sure I do if I successfully have search engine functionality resting more lightly on the app. But I’m not sure I don’t).

So what will I do?

I’m actually not sure, I’m a bit stumped.

I think going to ElasticSearch is probably too much for me right now, there’s too many balls in the air in rewriting this app to add in search engine software I’m not familiar with that may not have quite the same featureset.

But between using BL and doing it myself… I’m not sure, both offer some risks and possible rewards.

The fact that I can’t use the override-point in BL I was planning to, because it’s gone from BL 7, annoys me and pushes me a bit more to consider a DIY approach. But I’m not sure if I’m going to regret that later. I might start out trying it out and seeing where it gets me… or I might just figure out how to hack in the rdbms-retrieval pattern I want into BL, even if it’s not pretty. I know I want to write my display logic in terms of my ActiveRecord models, and with full access to ActiveRecord eager-loading to load any associated records I need (ala sunspot), instead of trying to jam it all into a Solr record in a denormalized fashion. Being able to get out of that by escaping from sufia/hyrax was one of the main attractions of doing so!

Our progress on new digital collections app, and introducing kithe

In September, I wrote a post on a “Proposed Rails-based digital collections developer’s toolkit”

What has happened since then?

Yes, we decided to go ahead with a rewrite of our digital collections app, with the new app not based on Hyrax or Valkyrie, but on a persistence layer based on ActiveRecord (making use of postgres-specific features where appropriate), and exposing ActiveRecord models to the app as a whole.

No, we are not going forward with trying to make that entire “toolkit”, with all the components mentioned there.

But Yes, unlike Alberta, we are taking some functionality and putting it in a gem that can be shared between institutions and applications. That gem is kithe. It includes some sharable modeling/persistence code, like Valkyrie (but with a very different approach than Valkyrie), but also includes some additional fundamental components too.

Scaling back the ambition—and abstraction—a bit

The total architecture outlined in my original post was starting to feel overwhelming to me. After all, we also need to actually produce and launch an app for ourselves, on a “reasonable” timeline, with fairly high chance of success.  I left my conversation with U Alberta (which was quite useful, thank you to the Alberta team!), concerned about potential over-reach and over-abstraction. Abstraction always has a cost and building shared components is harder and more time-consuming than building a custom app.

But, then, also informed by my discussion with Alberta, I realized we basically just had to build a Rails app, and this is something I knew how to do, and we could, as we progressed, jettison anything that didn’t seem actually beneficial for that goal or didn’t seem feasible at the moment. And, also after discussion with a supportive local team, my anxiety about the project went down quite a bit — we can do this.

Even when writing the original proposal, I knew that some elements might be traps. Building a generalized ACL permissions system in an rdbms-based web app… many have tried, many have fallen. :)  Generalized controllers are hard, because they are a piece very tightly tied to your particular app’s UI flows, which will vary.

So we’ve scaled back from trying to provide a toolkit which can also be “scaffolding” for a complete starter app.  The goals of the original thought-experiment proposal — a toolkit which provides  pieces developers put together when building their own app — are better approached, for now, by scaling back and providing fewer shared tools, which we can make really solid.

After all, building shared code is always harder than building code for your app. You have more use cases to figure out and meet, and crucially, shared code is harder to change because it’s (potentially) got cross-institutional dependents, which you have to not break. For the code I am putting into kithe, I’m trying to make it solidly constructed and well-polished. In purely local code,  I’m more willing to do something experimental and hacky — it’s easy enough (comparatively!) to change local app code later.  As with all software, get something out there that works, iterating, using what you learn. (It’s just that this is a lot harder to do with shared dependencies without pain!)

So, on October 1st, we decided to embark on this project. We’re willing to show you our fairly informal sketch of a work plan, if you’d like to look.

Introducing kithe

But we’re not just building a local app, we are also trying to create some shareable components. While the costs and risks of shared code and abstractions are real,  I ultimately decided that “just Rails” would not get us to the most maintainable code after all. (And of course nothing is really just Rails, you are always writing code and using non-Rails dependencies; it’s a matter of degree, how much your app seems like a “typical” Rails app to developers).

It’s just too hard to model the data we ourselves already needed (including nested/compound/repeated models) in “just” ActiveRecord, especially in a way that lets you work with it sanely as “just” ActiveRecord, and is still performant. (So we use attr_json, which I also developed, for a NoSQL-y approach without giving up rdbms or ActiveRecord benefits, including real foreign-key-based associations.) And in another example, ActiveStorage was not flexible/powerful enough for our file-handling needs (which are of course at the core of our domain!), and I wasn’t enthused about CarrierWave either — it makes sense to me to make some solid high-quality components/abstractions for some of our fundamental business/domain concerns, while being aware of the risks/costs.
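
To give a flavor of what that looks like with attr_json (names here are made up; see the attr_json README for the real details):

```ruby
# Roughly what attr_json gives us: repeatable and compound values serialized
# to a postgres jsonb column, while the model still acts like ActiveRecord.
class Creator
  include AttrJson::Model
  attr_json :name, :string
  attr_json :role, :string
end

class Work < ApplicationRecord
  include AttrJson::Record
  attr_json :additional_titles, :string, array: true  # repeatable primitive
  attr_json :creators, Creator.to_type, array: true   # repeatable compound value
end

# work.creators << Creator.new(name: "Marie Curie", role: "author")
```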

So I’ve put into kithe the components I thought seemed appropriate on several considerations:

  • Most valuable to our local development effort
  • Handling the “trickiest” problems, most useful to share
  • Handling common problems, most likely to be shareable; and it’s hard to build a suite of things that work together without some modelling/persistence assumptions, so we had to start there.
  • I had enough understanding of the use-cases (local and community) that I thought I could, if I took a reasonable amount of extra time, produce something well-polished, with a good developer experience, and a relatively stable API.

Kithe already includes the following, in maybe-not-1.0-production-ready form, but used in our own in-progress app and released (well-tested and well-documented):

  • A modeling and persistence layer tightly coupled to ActiveRecord, with some postgres-specific features, and recommending use of attr_json, for convenient “NoSQL”-like modelling of your unique business data (in common with existing samvera and valkyrie solutions, you don’t need to build out a normalized rdbms schema for your data). With models that are samvera/PCDM-ish (also like other community solutions).
    • Including pretty slick handling of “representatives”, dealing with the performance issues in figuring out the representative to display, with constant query time (using some pg-specific SQL to look up and set the “leaf” representative on save).
    • Including UUIDs as actual DB pk/fks, but also a friendlier_id feature for shorter public URL identifiers, with logic to automatically create such if you wish.
  • A nice helper for building Rails forms with repeatable complex embedded values. Compare to the relevant parts of hydra-editor, but (I think) lighter and more flexible.
  • A flexible file-handling architecture based on shrine — meaning transparent cloud-storage support out of the box.
    • Along with a new derivatives architecture, which seems to me to have the right level of abstraction and affordances to provide a “polished” experience.
    • All file-handling support based on assuming expensive things happen in the background, and “direct upload” from browser pre-form-submit (possibly to cloud storage)

It will eventually include some solr/blacklight support, including a traject-based indexing setup, and I would like to develop an intervention in blacklight so after solr results are returned, it immediately fetches the “hit” records from ActiveRecord (with specified eager-loading), so you can write your view code in terms of your actual AR models, and not need to duplicate data to solr and logic for dealing with it. This latter is taken from the design of sunspot.

But before we get there, we’re going to spend a little bit of time on purely local features, including export/import routines (to get our data into the new app, with some solid testing/auditing to be confident we got it all), and some locally bespoke workflow support (I think workflow is something that works best just writing the Rails).

We do have an application deployed as demo/staging, with a basic more-than-just-MVP-but-not-done-yet back-end management interface (note: it does not use Solr/Blacklight at all which I consider a feature), but not yet any non-logged-in end-user search front-end. If you’d like a guest login to see it, just ask.

Technical Evaluation So Far

We’ve decided to tie our code to Rails and ActiveRecord. Unlike Valkyrie, which provides a data-mapper/repository pattern abstraction, kithe expects the dependent code to use ActiveRecord APIs (along with some standard models and modelling enhancements kithe gives you).

This means, unlike Valkyrie, our solution is not “persistence-layer agnostic”. Our app, and any potential kithe apps, are tied to Rails/ActiveRecord, and can’t use fedora or other persistence mechanisms. We didn’t have much need/interest in that, we’re happy tying our application logic and storage to ActiveRecord/postgres, and perhaps later focusing on regularly exporting our data to be stored for preservation purposes in another format, perhaps in OCFL.

It’s worth noting that the data-mapper/repository pattern itself, along the lines valkyrie uses, is favored by some people for reasons other than persistence-swapability. In the Rails and ruby web community at large, there is a contingent that thinks the data-mapper/repository pattern is better than what Rails gives you, and gives you better architecture for maintainable code. Many in this contingent are big on hanami, and the dry-rb suite.  (I have never been fully persuaded by this contingent).

And to be sure, in building out our approach over the last 4 months, I sometimes ran right into the architectural issues with Rails “model-based” architecture and some of what it encourages like dreaded callbacks.  But often these were hypothetical problems, “What if someone wanted to do X,” rather than something I actually needed/wanted to do now. Take a breath, return to agility and “build our app”.

And a Rails/ActiveRecord-focused approach has huge advantages too. ActiveRecord associations and eager-loading support are powerful features that, when exposed to the app as an API, give you very mature, time-tested tools to build your app flexibly and performantly (at least for the architectures our community are used to, where avoiding n+1 queries still sometimes seems like an unsolved problem!).  You have a whole Rails ecosystem to rely on, which kithe-dependent apps can just use, making whatever choices they want (use reform or not?) as with most any Rails app, without having to work out as many novel approaches or APIs. (To be sure, kithe still provides some constraints and choices and novelty — it’s a question of degree).

Trying to build up an alternative based on data-mapper/repository, whether in hanami or valkyrie, I think you have a lot of work to do to be competitive with Rails’ mature solutions, sometimes reproducing features already in ActiveRecord or its ecosystem. And it’s not just work that’s “time implementing”, it’s work figuring out the right APIs and patterns. Hanami, for instance, is probably still not as mature as Rails, or as easy to use for a newcomer.

By not having to spend time re-inventing things that Rails already has solutions for, I could spend time on our actual (digital collections) domain-specific components that I wasn’t happy with existing solutions for. Like spending time on creating shareable file handling and derivatives solutions that seem to me to be well-polished, and able to be used for flexible use-cases without feeling like you’re fighting the system or being surprised by it. Components that hopefully can be re-used by other apps too.

I think schneem’s thoughts on “polish” are crucial reading when thinking about the true costs of shared abstractions in our community.  There is a cost to additional abstractions: in initial implementation, ongoing maintenance, developer on-boarding, and just figuring out the right architectures and APIs to provide that polish. Sometimes these costs are worthwhile in delivered benefits, of course.

I’d consider our kithe-based approach to be somewhere in between U Alberta’s approach and valkyrie, on the dimension of how closely we stick to ‘standard’ Rails.

Unlike Hyrax, we are building our own app, not trying to use a shared app or “solution bundle” like Hyrax. I would suggest we share that aspect with both the U Alberta approach as well as the several institutions building valkyrie-not-hyrax apps. But if you’ve had good experiences with the over-time maintenance costs of Hyrax, and have a use case/context where Hyrax has worked well for you, then that’s great, and there’s never anything wrong with doing what has worked for you.

Overall, 4 months in, while some things have taken longer to implement than I expected, and some unexpected design challenges have been encountered — I’m still happy with the approach we are taking.

If you are considering a based-on-valkyrie-no-hyrax approach, I think you might be in a good position to consider a kithe approach too.

How do we evaluate success?

Locally,

We want to have a replacement app launched in about a year.

I think we’re basically on target; although we might not hit it on the nose, I feel confident at this point that we’re going to succeed with a solid app, in around that timeline. (Knock on wood.)

When we were considering alternate approaches before committing to this one, we of course tried to compare how long this would take to various other approaches. This is very hard to predict, because you are trying to compare multiple hypotheticals, but we had to make some ballpark guesses (others may have other estimates).

Is this more or less time than it would have taken to migrate our sufia app to current hyrax? I think it’s probably taking more time to do it this new way, but I think migrating our sufia app to current hyrax (with all its custom functionality for current features) would not have been easy or quick — and we weren’t sure current hyrax was a place we wanted to end up.

Is it going to take more or less time than it would have taken to write an app on valkyrie, including any work we might contribute to valkyrie for features we needed? It’s always hard to guess these things, but I’d guess in the same ballpark, although I’m optimistic the “kithe” approach can lead to developer time-savings in the long-run.

(Of course, we hope if someone else wants to follow our path, they can re-use what’s now worked out in kithe to go quicker).

We want it to be an app whose long-term maintenance and continued development costs are good

In our sufia-based app, we found it could be difficult and time-consuming to add some of the features we needed. We also spent a lot of time trying to performance-tune to acceptable levels (and we weren’t alone), or figure out and work towards a manageable and cost-efficient cloud deployment architecture.

I am absolutely confident that our “kithe” approach will give us something with a lower TCO (“total cost of ownership”) than we had with sufia.

Will it be a lower TCO than if we were on the present hyrax (ignoring how to get there), with our custom features we needed? I think so, and I don’t think current hyrax is different enough from the sufia we’re used to to change that — but again this is necessarily a guess, and others may disagree. In the end, technical staff just has to make their best predictions based on experience (individual and community).  Hyrax probably will continue to improve under @no-reply’s steady leadership, but I think we have to make our decisions on what’s there now, and that potential rosy future also requires continued contribution by the community (like us) if it is to come to fruition, which is real time to be included in TCO too.   I’m still feeling good about the “write our own app” approach vs “solution bundle”.

Will we get a lower TCO than if we had a non-hyrax valkyrie-based app? Even harder to say. Valkyrie has more abstractions and layers that have real ongoing maintenance costs (that someone has to do), but there’s an argument that those layers will lower your TCO over the long-term. I’m not totally persuaded by that argument myself, and when in doubt am inclined to choose the less-new-abstraction path, but it’s hard to predict the future.

One thing worth noting is the main thing that forced our hand in doing something with our existing sufia-based app is that it was stuck on an old version of Rails that will soon be out-of-support, and we thought it would have been time-consuming to update, one way or another.  (When Rails 6.0 is released, probably in the next few months, Rails maintenance policy says nothing before 5.2 will be supported.) Encouragingly, both kithe and its attr_json dependency (also by me) are testing green on Rails 6.0 beta releases — and, I was gratified to see, didn’t take any code changes to do so, they just passed.  (Valkyrie 1.x requires Rails 5.1, but a soon-to-be-released 2.0 is planned to work fine up to Rails 6; latest hyrax requires Rails 5.1 as well, but the hyrax team would like to add 5.2 and 6 soon).

We want easier on-boarding of new devs for succession planning

All developers will leave eventually (which is one reason I think if you are doing any local development, a one-developer team is a bad idea — you are guaranteeing that at some point 100% of your dev team will leave at once).

We want it to be easier to on-board new developers. We share U Alberta’s goal that what we could call a “typical Rails developer” should be able to come on and maintain and enhance the app.

Are we there? Well, while our local app is relatively simple rails code (albeit using kithe APIs), the implementation of kithe and attr_json, which a dev may have to delve into, can get a bit funky, and didn’t turn out quite as simple as I would have liked.

But when I get a bit nervous about this, I reassure myself remembering that:

  • a) Our existing sufia-based app is definitely high-barrier for new devs (an experience not unique to us); I think we can definitely beat that.
    • Also worth pointing out that when we last posted a position, we got no qualified applicants with samvera, or even Rails, experience. We did make a great hire though, someone who knew back-end web dev and knew how to learn new tools; it’s that kind of person that we ideally need our codebase to be accessible to, and the sufia-based one was not.
  • b) Recruiting and on-boarding new devs is always a challenge for any small dev shop, especially if your salaries are not seen as competitive.  It’s just part of the risk and challenge you accept when doing local development as a small shop on any platform. (Whether that is the right choice is out of scope for this post!)

I think our code is going to end up more accessible to actually-existing newly onboarded devs than a customized hyrax-based solution would be. More than Valkyrie? I do think so myself, I think we have fewer layers of “specialty” stuff than valkyrie, but it’s certainly hard to be sure, and everyone must judge for themselves.

I do think any competent Rails consultancy (without previous LAM/samvera expertise) could be hired to deal with our kithe-based app no problem; I can’t really say if that would be true of a Valkyrie-based app (it might be); I do not personally have confidence it would be true of a hyrax-based app at this point, but others may have other opinions (or experience?).

Evaluating success with the community?

Ideally, we’d of course love it if some other institutions eventually developed with the kithe toolkit, with the potential for sharing future maintenance of it.

Even if that doesn’t happen, I don’t think we’re in a terrible place. It’s worth noting that there has been some non-LAM-community Rails dev interest in attr_json, and occasional PRs; I wouldn’t say it’s in a confidently sustainable place if I left, but I also think it’s code someone else could step into and figure out. It’s just not that many lines of code, it’s well-tested and well-documented, and I’ve tried to be careful with its design — but take a look and decide for yourself! I cannot emphasize enough my belief that if you are doing local development at all (and I think any samvera-based app has always been such), you should have local technical experts doing evaluation before committing to a platform — hyrax, valkyrie, kithe, entirely homegrown, whatever.

Even if no-one else develops with kithe itself, we’d consider it a success if some of the ideas from kithe influence the larger samvera and digital collections/repository communities. You are welcome to copy-paste-modify code that looks useful (It’s MIT licensed, have at it!). And even just take API ideas or architectural concepts from our efforts, if they seem useful.

We do take seriously participating in and giving back to the larger community, and think trying a different approach, so we and others can see how it goes, is part of that. Along with taking the extra time to do it in public and write things up, like this. And we also want to maintain our mutually-beneficial ties to samvera and LAM technologist communities; even if we are using different architectures, we still have lots of use-cases and opportunities for sharing both knowledge and code in common.

Take a look?

If you are considering development of a non-Hyrax valkyrie-based app, and have the development team to support that — I believe you have the development team to support a kithe-based approach too.

I would be quite happy if anyone took a look, and happy to hear feedback and have conversations, regardless of whether you end up using the actual kithe code or not. Kithe is not 1.0, but there’s definitely enough there to check it out and get a sense of what developing with it might be like, and whether it seems technically sound to you. And I’ve taken some time to write some good “guide” overview docs, both for potential “onboarding” of future devs here, and to share with you all.

We have a staging server for our in-development app based on kithe; if you’d like a guest login so you can check it out, just ask and I can share one with you.

Our local app should also probably be pretty easy for you to get installed (with dependencies) from a git checkout, and just run it and see how it goes. See: https://github.com/sciencehistory/scihist_digicoll/

Hope to hear from you!

Browser dominance, standards setting, and WHATWG vs W3C

Reda Lemeden writes a warning note about what Chrome’s dominance means for the “Web as an open platform”, in “We Need Chrome No More.”

Lemeden doesn’t mention WHATWG, but in retrospect, I think the practical shifting of web-standards-setting from an at least possibly neutral standards body representing multiple interests (W3C) to a body wholly controlled by browser-vendors (WHATWG)… may have been good for speed of “innovation” for a time, but was in the long-term not good for the “Web as an open platform” in Lemeden’s phrase.  Lemeden writes:

Making matters worse, the blame often lands on other vendors for “holding back the Web”. The Web is Google’s turf as it stands now; you either do as they do, or you are called out for being a laggard.

Indeed, I think it’s the structural politics of WHATWG that make that hard to counter. WHATWG was almost founded on the principles of “not being a laggard” and “doing what we do”. When there were several browser-vendors with roughly equal market power they could counter-balance each other, but when there’s an elephant in the room…

That is, the W3C folks that were accused of “holding back the web” while trying to keep standards-setting from going to the “faster” WHATWG… were perhaps correct all along.

People can disagree, but 10-15 years on, I think we’re overdue a larger discussion and retrospective evaluation of the consequences of the WHATWG “coup”. I haven’t seen much discussion of this yet.

On code-craft, and writing code for other programmers to use

The New Yorker this week has a profile of Google programmer pair Jeff Dean and Sanjay Ghemawat — if the annoying phrase “super star programmer” applies to anyone it’s probably these guys, who among other things conceived and wrote the original Google MapReduce implementation — that includes some comments I find unusually insightful about some aspects of the craft of writing code. I was going to say “for a popular press piece”, but really even programmers talking to each other don’t talk about this sort of thing much. I recommend the article, but was especially struck by this passage:

At M.I.T., [Sanjay’s] graduate adviser was Barbara Liskov, an influential computer scientist who studied, among other things, the management of complex code bases. In her view, the best code is like a good piece of writing. It needs a carefully realized structure; every word should do work. Programming this way requires empathy with readers. It also means seeing code not just as a means to an end but as an artifact in itself. “The thing I think he is best at is designing systems,” Craig Silverstein said. “If you’re just looking at a file of code Sanjay wrote, it’s beautiful in the way that a well-proportioned sculpture is beautiful.”

…“Some people,” Silverstein said, “their code’s too loose. One screen of code has very little information on it. You’re always scrolling back and forth to figure out what’s going on.” Others write code that’s too dense: “You look at it, you’re, like, ‘Ugh. I’m not looking forward to reading this.’ Sanjay has somehow split the middle. You look at his code and you’re, like, ‘O.K., I can figure this out,’ and, still, you get a lot on a single page.” Silverstein continued, “Whenever I want to add new functionality to Sanjay’s code, it seems like the hooks are already there. I feel like Salieri. I understand the greatness. I don’t understand how it’s done.”

I aspire to write code like this, it’s a large part of what motivates me and challenges me.

I think it’s something that (at least for most of us, I don’t know about Dean and Ghemawat), can only be approached and achieved with practice — meaning both time and intention. But I think many of the environments that most working programmers work in are not conducive to this practice, and in some cases are actively hostile to it.  I’m not sure what to think or do about that.

It is most important when designing code for re-use, when designing libraries to be used in many contexts and by many people.  If you are only writing code for a particular business, “seeing code not just as a means to an end but as an artifact in itself” may not be what’s called for.  It really is a means to an end of the business purposes. Spending too much time on “the artifact itself”, I think, has a lot of overlap with what is often derisively called “bike-shedding”.  But when creating an artifact that is intended to be used by lots of other programmers in lots of other contexts to build things to meet their business purposes — say, a Rails… or a samvera — “empathy with readers” (very well-said), and the closely related goal of creating an artifact where “it seems like the hooks are already there”, are pretty much indispensable to creating something successful at increasing the efficiency and success of the developers using the code.

It’s also not easy even if it is your intention, but without the intention, it’s highly unlikely to happen by accident. In my experience TDD can (in some contexts) actually be helpful to accomplishing it — but only if you have the intention, if you start from developer use-cases, and if you do the “refactor” step of “red-green-refactor”.  Just “getting the tests to pass” isn’t gonna do it. (And from the profile, I suspect Dean and Ghemawat may not write tests at all — TDD is neither necessary nor sufficient).  That empathy part is probably necessary — understanding what other programmers are going to want to do with your code, how they are going to come to it, and putting yourself in their place, so you can write code that anticipates their needs.

I’m not sure what to do with any of this, but I was struck by the well-written description of what motivates me in one aspect of my programming work.

“Against software development”

Michael Arntzenius writes:

Beautiful code gets rewritten; ugly code survives.

Just so, generic code is replaced by its concrete instances, which are faster and (at first) easier to comprehend.

Just so, extensible code gets extended and shimmed and customized until under its own sheer weight it collapses, then replaced by a monolith that Just Works.

Just so, simple code grows, feature by creeping feature, layer by backward-compatible layer, until it is complicated.

So perishes the good, the beautiful, and the true.

In this world of local-optimum-seeking markets, aesthetics alone keep us from the hell of the Programmer-Archaeologist.

Code is limited primarily by our ability to manage complexity. Thus,

Software grows until it exceeds our capacity to understand it.

HackerNews discussion. 

Ruby Magic helps sponsor Rubyland News

I have been running the Rubyland.news aggregator for two years now, as just a hobby spare-time thing. Because I wanted a ruby blog and news aggregator, wasn’t happy with what was out there then, and thought it would be good for the community to have it.

I am not planning or trying to make money from it, but it does have some modest monthly infrastructure fees that I like getting covered. So I’m happy to report that Ruby Magic has agreed to sponsor Rubyland.news for a modest $20/month for six months.

Ruby Magic is an email list you can sign up for to get occasional emails about ruby. They also have an RSS feed, so I’ve been able to include them on Rubyland.news for some time.  I find their articles to often be useful introductions or refreshers for particular topics about ruby language fundamentals. (It tends not to be about Rails; I know some people appreciate some non-Rails-focused sources of ruby info.)  Personally, I’ve been using ruby for years, and the way I got as comfortable with it as I am is by always asking “wait, how does that work then?” about things I run into, always being curious about what’s going on and what the alternatives are and what tools are available, starting with the ruby language itself and its stdlib.

These days, blogging, on a platform with an RSS feed too, seems to have become a somewhat rarer thing, so I’m also grateful that Ruby Magic articles are available through an RSS feed, so I can include them in rubyland.news. And of course for the modest sponsorship of Rubyland.news, helping to pay infrastructure costs to keep the lights on.  As always, I value full transparency in any sponsorship of rubyland.news; I don’t intend it to affect any editorial policies (I was including the Ruby Magic feed already); but I will continue to be fully transparent about any sponsorship arrangements and values, so you can judge for yourself (a modest $20/month from Ruby Magic; no commitment beyond a listing on the About page, and this particular post you are reading now, which is effectively a sponsored post).

I also just realized I am two years into Rubyland.news. I don’t keep usage analytics (was too lazy to set it up, and not entirely clear how to do that in a case where people might be consuming it as an RSS feed itself), although it’s got 156 followers on its twitter feed (all aggregated content is also syndicated to twitter, which I thought was a neat feature).  I’m honestly not sure how useful it is to anyone other than me, or what changes people might want; feedback is welcome!

Some notes on what’s going on in ActiveStorage

I work in library-archives-museum digital collections and preservation. This is of course a domain that is very file-centric (or “bytestream”-centric, as some might say): keeping track of originals and their metadata (including digests/checksums), and making lots of derivative files (or “variants” and/or “previews”, as ActiveStorage calls them) of images, audio, video, or anything else.

So, building apps in this domain in Rails, I need to do a lot of things with files/bytestreams, ideally without having to re-invent wheels of basic bytestream management in rails, or write lots of boilerplate code. So I’m really interested in file attachment libraries for Rails. How they work, how to use them performantly and reliably without race conditions, how to use them flexibly to be able to write simple code to meet our business and user requirements.  I recently did a bit of a “deep dive” into some aspects of shrine;  now, I turn my attention to ActiveStorage.

The ActiveStorage guide (or in edge from master) is a great and necessary place to start (and you should read it before this; I love the Rails Guides), but there were some questions I had that it didn’t answer. Here are some notes on just some things of interest to me related to the internals of ActiveStorage.

ActiveStorage is a-changing

One thing to note is that ActiveStorage has some pretty substantial changes in between the latest 5.2.1 release and master. Sadly there’s no way I could find to use the github compare UI (which I love) limited just to the activestorage path in the rails repo.

If you check out Rails source, you can do: git diff v5.2.0...master activestorage. Not sure how intelligible you can make that output. You can also look at merged PR’s to Rails mentioning “activestorage” to try and see what’s been going on, some PR’s are more significant than others.

I’m mostly looking at 5.2.1, since that’s the one I’d be using were I to use it (until Rails 6 comes out, I forget if we know when we might expect that?), although when I realize that things have changed, I make note of it.

The DB Schema

ActiveStorage requires no changes to the table/model of a thing that should have attached files. Instead, the attached files are implemented as ActiveRecord has_many (or the rarer has_one, in the case of has_one_attached) associations to other table(s), using ordinary relational modeling designs.  Most of the fancy modelling/persistence/access features and APIs (esp in 5.2.1) seem to be just sugar on top of ordinary AR associations (very useful sugar, don’t get me wrong).

ActiveStorage adds two tables/models.

The first we’ll look at is ActiveStorage::Blob, which actually represents a single uploaded file/bytestream/blob. Don’t be confused by “blob”, the bytestream itself is not in the db, rather there’s enough info to find it in whatever actual storage service you’ve configured. (local disk, S3, etc. Incidentally, the storage service configuration is app-wide, there’s no obvious way to use two different storage services in your app, for different categories of file).

The table backing ActiveStorage::Blob has a number of columns for holding information about the bytestream.

  • id (ordinary Rails default pk type)
  • key: basically functions as a UID to uniquely identify the bytestream, and find it in the storage. Storages may translate this to actual paths or storage-specific keys differently, the Disk storage files in directories by key prefix, whereas the S3 service just uses the key without any prefixes.
    • The key is generated with standard Rails “secure token” functionality–pretty much just a good random 24 char token. 
    • There doesn’t appear to be any way to customize the path on storage to be more semantic, it’s just the random filing based on the random UID-ish key.
  • filename: the original filename of the file on the way in
  • content_type: an analyzed MIME/IANA content type
  • byte_size: what it says on the tin
  • metadata: a JSON-serialized hash of arbitrary additional metadata extracted on ingest by ActiveStorage. Default AS migrations just put this in a text column and use db-agnostic Rails functions to serialize/deserialize the JSON, they don’t try to use a json or jsonb column type.
  • created_at: the usual. There is no updated_at column, perhaps because these are normally expected to be immutable (which means not expected to add metadata after point of creation either?).
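
For reference, the standard ActiveStorage migration (as of 5.2) boils down to roughly the following; I’m reproducing it from memory, so treat it as approximate rather than authoritative (note it also includes a checksum column not discussed above):

  create_table :active_storage_blobs do |t|
    t.string   :key,          null: false
    t.string   :filename,     null: false
    t.string   :content_type
    t.text     :metadata
    t.bigint   :byte_size,    null: false
    t.string   :checksum,     null: false
    t.datetime :created_at,   null: false

    # key is the random token used to locate the bytestream in your storage service
    t.index [ :key ], unique: true
  end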

OK, so that table has got pretty much everything needed. So what’s the ActiveStorage::Attachment model?  Pretty much just a standard join table.  Using a standard Rails polymorphic association so it can associate an ActiveStorage::Blob with any arbitrary model of any class.  The purpose for this “extra” join table is presumably simply to allow you to associate one ActiveStorage::Blob with multiple domain objects. I guess there are some use cases for that, although it makes the schema somewhat more complicated, and the ActiveStorage inline comments warn you that “you’ll need to do your own garbage collecting” if you do that (A Blob won’t be deleted (in db or in storage) when you delete its referencing model(s), so you’ve got to, with your own code, make sure Blobs don’t hang around not referenced by any models unless in cases you want them to).

These extra tables do mean there are two associations to cross to get from a record to its attached file(s).  So if you are, say, displaying a list of N records with their thumbnails, you do have an n+1 problem (or a 2n+1 problem if you will :) ). The Active Storage guide doesn’t mention this — it probably should — but some of the inline AS comment docs do, and AS creates scopes for you to help do eager loading.

Indeed the dynamically generated with_attached_avatar (or whatever your attachment is called) scope is nothing but a standard ActiveRecord includes reaching across the join to the blob (for has_many_attached or has_one_attached).
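
In other words, as far as I can tell the generated scope is roughly equivalent to something like this (paraphrasing the Rails source from memory, so consider it approximate):

  # for has_one_attached :avatar
  scope :with_attached_avatar, -> { includes(avatar_attachment: :blob) }

  # for has_many_attached :avatars
  scope :with_attached_avatars, -> { includes(avatars_attachments: :blob) }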

And indeed if I try it out in my console, the inclusion scope results in three db queries, in the usual way you expect ActiveRecord eager loading to work.

irb(main):019:0> FileSet.with_attached_avatar.all
  FileSet Load (0.5ms)  SELECT  "file_sets".* FROM "file_sets" LIMIT $1  [["LIMIT", 11]]
  ActiveStorage::Attachment Load (0.8ms)  SELECT "active_storage_attachments".* FROM "active_storage_attachments" WHERE "active_storage_attachments"."record_type" = $1 AND "active_storage_attachments"."name" = $2 AND "active_storage_attachments"."record_id" IN ($3, $4)  [["record_type", "FileSet"], ["name", "avatar"], ["record_id", 19], ["record_id", 20]]
  ActiveStorage::Blob Load (0.5ms)  SELECT "active_storage_blobs".* FROM "active_storage_blobs" WHERE "active_storage_blobs"."id" IN ($1, $2)  [["id", 7], ["id", 8]]
=> #<ActiveRecord::Relation [#<FileSet id: 19, title: nil, asset_data: nil, created_at: "2018-09-27 18:27:06", updated_at: "2018-09-27 18:27:06", asset_derivatives_data: nil, standard_data: nil>, #<FileSet id: 20, title: nil, asset_data: nil, created_at: "2018-09-27 18:29:00", updated_at: "2018-09-27 18:29:08", asset_derivatives_data: nil, standard_data: nil>]>

When is file created in storage, when are associated models created?

ActiveStorage expects your ordinary use case will be attaching files uploaded through a form, user.avatar.attach(params[:avatar]), where params[:avatar] is an ActionDispatch::Http::UploadedFile. You can also attach a file directly, in which case you are required to supply the filename (and optionally a content-type):  user.avatar.attach(io: File.open("whatever"), filename: "whatever.png").  Or you can also pass an existing ActiveStorage::Blob to ‘attach’.
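
To summarize the three entry points as a quick sketch (nothing here beyond what’s described above; file and attribute names are just for illustration):

  # 1. from a form upload (an ActionDispatch::Http::UploadedFile)
  user.avatar.attach(params[:avatar])

  # 2. from an arbitrary IO, supplying filename (and optionally content_type) yourself
  user.avatar.attach(io: File.open("whatever"), filename: "whatever.png", content_type: "image/png")

  # 3. from an existing ActiveStorage::Blob
  user.avatar.attach(some_existing_blob)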

In all of these cases, ActiveStorage normalizes them to the same code path fairly quickly.

In Rails 5.2.1, if you call attach on an already persisted record, immediately (before any save), an ActiveStorage::Blob row and ActiveStorage::Attachment row have been persisted to the db, and the file has been written to your configured storage location.  There’s no need to call save on your original record, the update took place immediately. Your record will report that it has the attachment (and of course ActiveStorage’s schema means no changes had to be saved to the row for your record itself — your record does not think it has outstanding changes via changed?, since it does not).

If you call attach on a new (not yet persisted) record, the ActiveStorage::Blob row is _still_ created, and the bytestream is still persisted to your storage service. But an ActiveStorage::Attachment (join object) has not yet been created.  It will be when you save the record.

But if you just abandon the record without saving it… you have an ActiveStorage::Blob nothing is pointing to, along with the persisted bytestream in your storage service. I guess you’d have to periodically look for these and clean them up….

But master branch in Rails tries to improve this situation with a fairly sophisticated implementation of storing deltas prior to save. I’m not entirely sure if that applies to the “already persisted record” case too. In general, I don’t have a good grasp of how AS expects your record lifecycles to affect persistence of Blobs — like if the record you were attaching it to failed validation, is the Blob expected to be there anyway? Or how are you expected to have validation on the uploaded file itself (like only certain content types allowed, say). I believe the PR in Rails master is trying to improve all of that, I don’t have a thorough grasp of how successful it is at making things “just work” how you might expect, without leaving “orphaned” db rows or storage service files.

Metadata

Content-type

ActiveStorage stores the IANA Media Type (aka “MIME type” or “content type”) in the dedicated content_type column in ActiveStorage::Blob. It uses the marcel gem (from the basecamp team) to determine content type.  Marcel looks like it uses file-style magic bytes, but also uses the user-agent-supplied filename suffix or content-type when it decides it’s necessary — trusting the user-agent supplied content-type if all else fails.  It does not look like there is any way to customize this process;  likely most people wouldn’t need that, but I may be one of the few that maybe does. Compare to shrine’s ultra-flexible content-type-determination configuration.
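
If you want to see what marcel itself is doing, the call ActiveStorage makes looks roughly like this (from my reading of the source, so consider it approximate; the filename and types here are made up):

  require "marcel"

  io = File.open("report.pdf")           # hypothetical file
  Marcel::MimeType.for(
    io,                                  # the bytes themselves (magic-byte sniffing)
    name: "report.pdf",                  # user-agent-supplied filename, used as a hint
    declared_type: "application/pdf"     # user-agent-supplied content type, last-resort fallback
  )
  # => "application/pdf"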

For reasons I’m not certain of, ActiveStorage uses marcel to identify content-type twice.

When (in Rails 5.2.1) you call some_model.attach, it calls ActiveStorage::Blob#create_after_upload!, which calls ActiveStorage::Blob#build_after_upload, which calls ActiveStorage::Blob.upload, which sets the content_type attribute to the result of the extract_content_type method, which calls marcel.

Additionally, ActiveStorage::Attachment (the join table) has an after_create_commit hook which calls :identify_blob, which calls blob.identify, defined in ActiveStorage::Blob::Identifiable mixin, which also ends up using marcel — only if it already hasn’t been identified (recorded by an identified key in the json serialized metadata column).   This second one only passes the first 4k of the file to marcel (perhaps because it may need to download it from remote storage), while the first one above seems to pass in the entire IO stream.

Normally this second marcel identify won’t be called at all, because the Blob model is already recorded as identified? as a result of the first one. In either case, the operation takes place in the foreground inline (not a bg job), although one of them is in an after-commit hook with a second save. (Ah wait, I bet the second one is related to the direct upload feature which I haven’t dived into. Some inline comment docs would still be nice!)

In Rails master, we get an identify: false argument to attach, which you can use to skip content-type identification (it might just use the user-agent-supplied content-type, if any, in that case?)
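
As I understand it, that will look something like the following; hedged, since this is on master rather than in 5.2.1 and I haven’t tested it:

  user.avatar.attach(
    io: File.open("whatever"),
    filename: "whatever.png",
    content_type: "image/png",
    identify: false    # skip marcel; just trust the declared content_type
  )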

Arbitrary Metadata

In addition to some file metadata that lives in dedicated database columns in the blob table, like content_type, recall that there is a metadata column with a serialized JSON hash, that can hold arbitrary metadata. If you upload an image, you’ll ordinarily find height and width values in there, for instance.  Which you can find eg with model.avatar.metadata["width"] or model.avatar.metadata[:width] (indifferent access; no shortcuts like model.avatar.width though, so far as I know).

Where does this come from? It turns out ActiveStorage actually has a nice, abstract, content-type-specific, system for analyzer plugins.  It’s got a built-in one for images, which extracts height and width with MiniMagick, and one for videos, which uses ffprobe command line, part of ffmpeg.

So while this blog post suggests monkey-patching Analyzer::ImageAnalyzer to add in GPS metadata extracted from EXIF, in fact it oughta be possible in 5.2.1+ to use the analyzer plugin to add, remove, or replace analyzers to do your customization, no ugly forwards-compat-dangerous monkey-patching required.  So there are intentional API hooks here for customizing metadata extraction, pretty much however you like.
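
So rather than monkey-patching, I believe you can write your own analyzer and register it, something like the following sketch. The class name, the extract_gps helper, and the metadata key are hypothetical; the accept?/metadata interface and the analyzers config array are what I’m going off of from reading the 5.2 source:

  class ExifGpsAnalyzer < ActiveStorage::Analyzer
    def self.accept?(blob)
      blob.image?
    end

    def metadata
      # download_blob_to_tempfile comes from the base Analyzer class
      download_blob_to_tempfile do |file|
        # run your EXIF extraction of choice against file.path here;
        # whatever hash you return gets merged into the blob's json metadata column
        { gps: extract_gps(file.path) }.compact
      end
    end

    private

    def extract_gps(path)
      # hypothetical placeholder; e.g. use the exifr gem here
      nil
    end
  end

  # config/application.rb (or an initializer):
  Rails.application.config.active_storage.analyzers.prepend ExifGpsAnalyzer

(Prepending means this analyzer would win for images and replace the stock width/height extraction; to keep that too, you could instead subclass ActiveStorage::Analyzer::ImageAnalyzer and merge your additions into the hash returned by super.)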

Unlike content-type-identification which is done inline on attach, metadata analysis is done by ActiveStorage in a background ActiveJob. ActiveStorage::Attachment (the join object, not the blob), has an after_create_commit hook (reminding us that ActiveStorage never expects you to re-use a Blob db model with an altered bytestream/file), which calls blob.analyze_later (unless it’s already been analyzed).   analyze_later simply launches a perform_later ActiveStorage::AnalyzeJob with the (in this case) ActiveStorage::Blob as an argument.  Which just calls analyze on the blob.

So, at least in theory, this can accommodate fairly slow extraction, because it’s in the background. That does mean you could have an attachment which has not yet been analyzed; you can check to see if analysis has happened yet with analyzed? — which in the end is just an analyzed: true key in the arbitrary json metadata hash. (Good reminder that ActiveRecord::Store exists, a convenience for making cover methods for keys in a serialized json hash).

This design does assume only one bg job per model that could touch the serialized json metadata column exists at a time — if there were two operating concurrently (even with different keys), there’d be a race condition where one of the sets of changes might get lost as both processes race to 1) load from db, 2) merge in changes to hash, 3) save serialization of the merged hash to db.  So actually, as long as “identified: true” is recorded in content-type-extraction, the identification step probably couldn’t be a bg job either, without taking care of the race condition, which is tricky.

I suppose if you changed your analyzer(s) and needed to re-analyze everything, you could do something like ActiveStorage::Blob.find_each(&:analyze!). analyze! is implemented in terms of update!, so should persist its changes to db with no separate need to call save.

Variants

ActiveStorage calls “variants” what I would call “derivatives” or shrine (currently) calls “versions” — basically thumbnails, resizes, and other transformations of the original attachment.

ActiveStorage has a very clever way of handling these that doesn’t require any additional tracking in the db.  Arbitrary variants are created “on demand”, and a unique storage location is derived based on the transformation asked for.

If you call avatar.variant(resize: "100x100"), what’s returned is an ActiveStorage::Variant.  No new file has yet been created if this is the first time you asked for that. The transformation will be done when you call the processed method. (ActiveStorage recommends or expects for most use cases that this will be done in a controller action meant to deliver that specific variant, so basically on-demand).   processed will first see if the variant file has already been created, by checking processed?. Which just checks if a file already exists in the storage with some key specific to the variant. The key specific to the variant is  “variants/#{blob.key}/#{Digest::SHA256.hexdigest(variation.key)}“. Gives it some prefixes/directory nesting, but ultimately makes a SHA256 digest of variation.key.  You can see the code in ActiveStorage::Variation, and follow it through ActiveStorage.verifier, which is just an instance of ActiveSupport::MessageVerifier — in the end we’re basically just taking a signed (and maybe encrypted) digest of the serialization of the transformation arguments passed in in the first place, `{ resize: "100x100" }`.

That is, basically through a couple of cryptographic digests (and some crypto security too), we’re just taking the transformation arguments and turning them into a unique-to-those-arguments key (file path).
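
Or roughly in code, paraphrasing the 5.2 implementation rather than copying it exactly (some_model is just a stand-in for a record with has_one_attached :avatar):

  require "digest"

  blob = some_model.avatar.blob            # an ActiveStorage::Blob
  transformations = { resize: "100x100" }

  # Variation.encode runs the transformation hash through ActiveStorage.verifier
  # (an ActiveSupport::MessageVerifier), producing a signed serialized string
  variation_key = ActiveStorage::Variation.encode(transformations)

  # the variant's storage key is derived purely from the blob key plus a digest of that
  variant_key = "variants/#{blob.key}/#{Digest::SHA256.hexdigest(variation_key)}"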

This has been refactored a bit in master vs 5.2.1 — and in master the hash that specifies the transformations, to be turned into a key, becomes anything supported by image_processing with either MiniMagick or vips processors instead of 5.2.1’s bespoke Minimagick-only wrapper. (And I do love me some vips, can be so much more performant for very large files).  But I think the basic semantics are fundamentally the same.

This is nice because we don’t need another database table/model to keep track of variants (don’t forget we already have two!) — we don’t in fact need to keep track of variants at all. When one is asked for, ActiveStorage can just check to see if it already exists in storage at the only key/path it necessarily would be at.

On the other hand, there’s no way to enumerate what variants we’ve already created, but maybe that’s not really something people generally need.

But also, as far as I can find there is no API to delete variants. What if we just created 100×100 thumbs for every product photo in our app, but we just realized that’s way too small (what is this, 2002?) and we really need something that’s 630×630. We can change our code and it will blithely create all those new 630×630 ones on demand. But what about all the 100x100s already created? They are there in our storage service (say S3).  Whatever ways there might be to find the old variants and delete them are going to be hacky, not to mention painful (it’s making a SHA256 digest to create filename, which is intentionally irreversible. If you want to know what transformation a given variant in storage represents, the only way is to try a guess and see if it matches, there’s no way to reverse it from just the key/path in storage).

This seems to me like a common use case that’s going to come up? I wonder if I’m missing something. It almost makes me think you are intended to keep variants in a storage configured as a cache which deletes old files periodically (the variants system will just create them on demand if asked for again of course) — except the variants are stored in the same storage service as your originals, and you certainly don’t want to purge non-recently-used originals!  I’m not quite sure what people are doing with purging no-longer-used variants in the real world, or why it hasn’t come up if it hasn’t.

And something that maybe plenty of people don’t need, but I do — ability to create variants of files that aren’t images: PDFs, any sort of video or audio file, really any kind of file at all. There is a separate transformation system called previewing that can be used to create transformations of video and PDF out of the box — specifically to create thumbnails/poster images.  There is a plugin architecture, so I can maybe provide “previews” for new formats (like MS Word), or maybe I want to improve/customize the poster-image selection algorithm.

What I need aren’t actually “previews”, and I might need several of them. Maybe I have a video that was uploaded as an AVI, and I need to have variants as both mp4 and webm, and maybe choose to transcode to a different codec or even adjust lossy compression levels. Maybe I can still use the ‘preview’ function nonetheless? Why is “preview” a different API than “variant” anyway? While it has a different name, maybe it actually does pretty much the same thing, but with previewer plugins? I don’t totally grasp what’s going on with previews, and am running out of steam.

I really gotta get down into the weeds with files in my app(s). In an ideal world, I would want to be able to express variants as blocks of whatever code I wanted calling out to whatever libraries I wanted, as long as the block returned an IO-like object, not just hashes of transformation-specifications. I guess one needs something that can be transformed into a unique key/path though. I guess one could imagine an implementation that had blocks registered with unique keys (say, “webm”), and generated key/paths based on those unique keys.  I don’t think this is possible in ActiveStorage at the moment.

Will I use ActiveStorage? Shrine?

I suspect the intended developer-user of ActiveStorage is someone in a domain/business/app for which images and attachments are kind of ancillary. Sure, we need some user avatars, maybe even some product images, or shared screenshots in our basecamp-like app. But we don’t care too much about the details, as long as it mostly works.  Janko of Shrine told me some users thought it was already an imposition to have to add a migration to add a data column to any model they wanted to attach to, when ActiveStorage has a generic migration for a couple generic tables and you’re done (nevermind that this means extra joins on every query whose results you’ll have to deal with attachments on!) — this sort of backs up that idea of the nature of the large ActiveStorage target market.

On the other hand, I’m working in a domain where file management is the business/domain. I really want to have lots of control over all of it.

I’m not sure ActiveStorage gives it to me. Could I customize the key/paths to be a little bit more human readable and reverse-engineerable, say having the key begin with the id of the database model? (Which is useful for digital preservation and recovery purposes.) Maybe? With some monkey-patches? Probably not?

Will ActiveStorage do what I need as far as no-boundaries flexibility to variant creation of video/audio/arbitrary file types?  Possibly with a custom “previewer” plugin (even though a downsampled webm of an original .avi is really not a “preview”), if I’m willing to make all transformations expressible as a hash of specifications?  Without monkey-patching ActiveStorage? Not sure?

What if I have some really slow metadata generation, that I really don’t want to do inline/foreground?  I guess I could not use the built-in metadata extraction, but just make my own json column on some model somewhere (that has_one_attachment), and do it myself. Maybe I could do that for variants too, with additional app-specific models for variants (that each have a has_one_attached with the variant I created).  I’d have to be careful to avoid adding too many more tables/joins for common use cases.

If I only had, say, paperclip and carrierwave, I might choose ActiveStorage anyway, cause they aren’t so flexible either. But, hey, shrine! So flexible! It still doesn’t do everything I need, and the way it currently handles variants/derivatives/versions isn’t suitable for me (not set up to support on-demand generation without race conditions, which I realize ironically ActiveStorage is) — but I think I’d rather build it on top of shrine, which is intended to let you build things on top of it, than ActiveStorage, where I’d likely have to monkey-patch and risk forwards-incompatibility.

On the other hand, if ActiveStorage is “good enough” for many people… is there a risk that shrine won’t end up with enough user/maintainer community to stay sustainable? Sure, there’s some risk. And relatively small risk of ActiveStorage going away.  One colleague suggested to me that “history shows” once something is baked into Rails, it leads to a “slow death of most competitors”, and eventually more features in the baked-into Rails version. Maybe, but…. as it happens, I kind of need to architect a file attachment solution for my app(s) now.

As with all dependency and architectural choices, you pays yer money and you takes yer chances. It’s programming. At best, we hope we can keep things clearly delineated enough architecturally, that if we ever had to change file attachment support solutions, it won’t be too hard to change.  I’m probably going with shrine for now.

One thing that I found useful looking at ActiveStorage is some, apparently, “good enough” baselines for certain performance/architectural issues. For instance, I was trying to figure out a way to keep my likely bespoke derivatives/variants solution from requiring any additional tables/joins/preloads (as shrine out of the box now requires zero extra) — but if ActiveStorage requires two joins/preloads to avoid n+1, I guess it’s probably okay if I add one. Likewise, I wasn’t sure if it was okay to have a web architecture where every attachment image view is going to result in a redirect… but if that’s ActiveStorage’s solution, it’s probably good enough.

Notes on deep diving with byebug

When using byebug to investigate some code, as I did here, and regularly do to figure out a complex codebase (including but not limited to parts of Rails), a couple Rails-related tips.

If there are ActiveJobs involved, ‘config.active_job.queue_adapter = :inline’ is a good idea to make them easier to ‘byebug’.

If there are after_commit hooks involved (as there were here), turning off Rails transactional tests (aka “transactional fixtures” before Rails 5) is a good idea. Theoretically Rails treats after_commit more consistently now even with transactional tests, but I found debugging this one I was not seeing the real stuff until I turned off transactional tests.  In Rspec, you do this with ‘config.use_transactional_fixtures = false’  in the rails_helper.rb rspec config file.
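
For reference, those two settings look like this (a sketch; put them wherever your test or development configuration lives):

  # config/environments/test.rb (or development.rb): run jobs inline so you can step into them
  config.active_job.queue_adapter = :inline

  # spec/rails_helper.rb: let after_commit hooks really fire by turning off transactional tests
  RSpec.configure do |config|
    config.use_transactional_fixtures = false
  end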

What “Just standard Rails” means to the University of Alberta libraries

I recently had a chance to speak with the development team at the University of Alberta about their development of their jupiter digital repository app (live, github).

UAlberta had a sufia 6 app in production that was a pretty stock “institutional repository holding PDFs” app. Around Fall 2015, they started trying to “catch up” to sufia 7 with PCDM etc. — to get features they believed would make it easier to incorporate more ‘digital collections’ content, and to just avoid stale non-maintained dependencies.

In Summer 2017, after having spent almost two years trying to get on sufia 7, with mounting frustrations and still seeming far from the finish line — and after having hired a few non-library-archives-museum-experienced but experienced Rails developers — the University of Alberta libraries development team decided on a radical new approach. They decided it wasn’t clear what Sufia was giving them to justify the difficulty they were having with it. They started over, trying to keep things as close to “ordinary Rails” as possible.

At that time, Fedora still was an institutional requirement.  So they didn’t toss out all of the samvera stack. They decided that they’d chop off the trunk as close to the bottom as they could while still getting tools for working with fedora, and to them that meant a hydra-works dependency, but few other hyrax dependencies.  They basically started the app over.

Within about 6 months of that effort (early spring 2018), with approximately two full-time developers, they were live with their app (jupiter repo), and have been happy with it so far. But they also still haven’t gotten to the originally planned content beyond the IR-type PDFs: the scanned monographs, newspapers, etc. And they have had some developer turnover. (Hey, they’re currently hiring y’all).

The jupiter app implementation

My understanding of how their app works is based on just an hour conversation, plus a few hours spent looking at their source code and internal docs — I may get some things wrong!

Jupiter seems to be a pretty simple app, a fairly basic idea of an “institutional repository”.  Most of the items are single PDFs, without children/members.  The software does support some items being composed of a list of files — but no “child works”.  The metadata seems relatively simple; some repeatable strings, but no nested/compound objects (ie, an attribute whose values are multi-property objects themselves). While there is some self-deposit, there is no complicated workflow, basically just an edit form.

The permissions model is relatively simple. Matt Barnett, a lead developer for much of this process (who was there for our conversation, but left the team soon after that) told me that originally some internal stakeholders had been pushing for a more complex permissions model. But knowing what a tar-pit trying to implement ACLs could be, Matt pushed back, and they ultimately implemented a simple model: There are owners who can edit the things they own, and admins who can edit everything, and that’s about it.  By virtue of their campus SSO system, they got “shared accounts” for free, so people could log into a shared account when they needed to share edit privs.

They had been using hydra-derivatives for their simple derivative needs (maybe just a single thumbnail for each PDF?), but when ActiveStorage, part of Rails, was released, they began switching the code to that (may or may not be merged into master/deployed in repo yet as this gets published).

Fedora is still there, modeled with hydra-works.  The indexing to solr is whatever was built into hydra-works. They just wrote their own straightforward forms with simple_form.  They also do a lot of CSV-based ingest, which they just wrote code for, like even sufia users would I think.

They use UUID primary keys.

Their app does index to solr — using the general ActiveFedora indexing methods, I think, solrizer and all.  You can see that their indexer is pretty stock, it mostly just calls “super”.

All of their objects exist as ActiveRecord “draft” objects while they are being edited, through more or less ordinary Rails mechanisms. When they have multi-valued fields, they use postgres json arrays, rather than an actual normalized schema (which would suggest a different table). I’m not sure what they need to do to get this to work with forms and controller updates. These active record objects seem to use something custom for collection memberships, rather than actual active record associations. So in these regards it’s not quite totally ordinary ActiveRecord modelling.

The objects have a life in activerecord, but are mirrored to fedora at certain life cycle points — I believe this is also what gets them into solr (using samvera/active-fedora solr indexing code).  The public-facing front-end is based entirely on data from solr — but not using Blacklight, simply writing Rails code to issue queries and handle responses to Solr (with Rsolr I think).

A brief overview of their architecture, by Matt before he left, focusing especially on the persistence stuff (since that’s the least “rails”-y and most custom stuff), can be found in their public repo, here.   Warning, it is pretty negative about samvera/sufia/active_fedora, gird yourself. You can see there they have done a couple custom local things to make the ActiveFedora objects and classes to some extent mimic ActiveRecord, to make using them in Rails easier, trying to encapsulate the fedora-specific stuff inside API boundaries. While at a high level this is what ActiveFedora’s goal is — their implementation is simpler, smaller, less powerful and custom-fit to what they need. We can just say they’re happier with how their local implementation turned out. They also explicitly wrote it to support potential future elimination of fedora altogether.

Matt said if he had to do it over, he might have pushed harder on stripping fedora out too, and just having everything in postgres. And that is something the team still plans to look at seriously for the future.

So what does “just a rails app” mean?  And how do you deal with increased complexity of your requirements?

The most useful thing for me in the conversation was that Matt pushed back on my outline of a potential plan, saying I was still undertaking too much abstraction.

The U Alberta team suggested that I should keep it even simpler, with less DRY abstraction (and thus fewer tools that could be shared between institutions/apps), and more just building your app for what you need right now.  You can see some of this push-back, specifically in the context of what U Alberta needs, in another document he wrote before he left Alberta, in the jupiter repo, on notes for future expansion. It is really worth reading, to see an argument for even more extreme simplicity, from a developer experienced with Rails but not “infected” with “how libraries do things”.  But beware, it’s not shy about our community shibboleths.

We developers (and we library folks) are really drawn to abstraction, generalization, and shared tools that meet as many needs as possible.  It can sometimes lead us astray. It is actually very common advice in software engineering to stick to what you actually need today, for the app you are developing (you know your business/user needs and which are the highest priorities to business value, right?).  “Do the simplest thing that could possibly work”, “You aren’t gonna need it.” It keeps us honest.

However, I also think it’s possible to code yourself into a corner this way, where your app was fine for exactly what you needed then, but when you need one more thing… you can find you need to re-write large parts of it to accommodate.  In some ways this is my experience with the current samvera stack, where early fundamental architectural decisions pen us in when we have new use cases. That kind of problem stays smaller when you avoid harder-to-change shared code, but I don’t think it goes away entirely. Trying to plan for the future always entails some “YAGNI” risk, but the more domain knowledge and experience you have… maybe you can do better at seeing where you are going and planning for it?

Just some of the specific component plans Matt was skeptical of…

attr_json vs. Just Plain ActiveRecord schemas

The jupiter app already has an activerecord implementation which isn’t strictly “ordinary” activerecord, in the sense that they serialize multi-valued/repeatable fields to json arrays, rather than needing a separate table for each attribute as an actual normalized schema would require. Plus there’s the logic around collection and “community” membership, which I don’t entirely understand but think might not be ordinary AR associations.

So this already gets you away from the strict “ordinary Rails” path — I’m not sure how the JSON array fields are handled by form submission, or querying if you wanted to do querying (it’s possible all their querying is solr-based, which is familiar to samvera-land, and also unusual for “ordinary rails”).

At my institution, we already have the need for complex repeatable data–a simple example would be repeatable “inscription” notations, each of which has the text of the inscription and the location in the book.  So not just an array of strings, but perhaps an array of hashes.  Weiwei Shi (Digital Initiatives Applications Librarian) suggested in a follow-up message, “We can use the JSON data type to support a more complex data structure as needed” — that is, if I understand it, they are contemplating an actual postgres representation somewhat similar to what I am contemplating with attr_json, if they end up needing complex json. Matt’s second document tries to draw a line between how they are doing things in “more-or-less completely standard Rails models” and the way I was proposing to do things — I’m not sure I actually see such a great distinction; the representations in postgres seem pretty similar to me, and neither is a standard ActiveRecord pattern.

They do have each attribute in a separate column, whereas I was contemplating putting them all in a single json column. Their approach does have advantages for avoiding update race conditions (or needing optimistic locking to avoid them).  I perhaps should consider that, as an extra feature to attr_json. Although either way you get columns you can’t necessarily use ordinary ActiveRecord querying or form-based update with.

Everyone seems to doubt that attr_json is really going to work, ha. The skepticism towards newly invented non-trivial dependencies is justified, but I can point out attr_json is a lot fewer lines of code than ActiveFedora, or even Valkyrie —  I think it is a risk, but it’s scoped carefully and implemented robustly, and I can’t figure out any other way I’m confident would be simpler to actually meet our modeling needs — once you start doing this complex json stuff, I think you’ll find that it doesn’t behave like “ordinary rails” — for forms/updates, validations, etc. — and rather than hack it out on a case by case basis, it makes a lot of sense to me to solve the problem with something like attr_json, encapsulating the “not really like ordinary ActiveRecord” stuff as much as possible.

The other option of course would be an actual normalized schema, with one table per attribute. For our “inscriptions” that table might have two columns (text and location), for a simple repeatable alternate title it might only have one. It’s going to be a mess to try to prevent n+1 queries and keep db access performant.  I am encouraged that I’m not on an insane track by the fact that even the U Alberta team is using JSON serializations in postgres, not actually ordinary normalized data — I think as your data gets more complex (not just arrays of primitives, but needing serialization as arrays of hashes), you’re really going to want something like attr_json.  But maybe I’m wrong.
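
For concreteness, here is roughly what the repeatable “inscription” attribute might look like with attr_json. The class names are hypothetical, and I’m going from my memory of the attr_json README (it serializes into a json/jsonb container column, json_attributes by default), so treat the details as approximate:

  class Inscription
    include AttrJson::Model

    attr_json :text, :string
    attr_json :location, :string
  end

  class Work < ApplicationRecord
    include AttrJson::Record

    # stored as an array of hashes inside the (default) json_attributes jsonb column
    attr_json :inscriptions, Inscription.to_type, array: true
  end

  work = Work.new(inscriptions: [{ text: "To Mary, with love", location: "inside front cover" }])
  work.inscriptions.first.location  # => "inside front cover"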

And for better or worse, I have trouble giving up entirely on the idea of some shared tools to make things easier for others in the community too — because it’s fun and rewarding, and why should we all constantly re-invent the wheel? But it’s good to be reminded of the dangers that lie in that direction.

Associations

I’m not sure if Matt mentioned this specifically, but I realize I have added a lot of non-“basic ActiveRecord” complexity to the data modelling in my plan in order to support the PCDM-ish association modeling, where a work has “members” and the members can be either works themselves (which can have multiple members) or single file objects, and they all need to be kept in order.

U Alberta’s app doesn’t have that. A work can have a list of files, the end.

At my institution I actually spent a while trying to convince stakeholders that we didn’t need that either, but it was the one thing I could make no headway on — eventually they convinced me we did, to accomplish our actual business goals.

If you need this, I can’t figure out any way to get there in an “ActiveRecord-first”-ish direction, except either single-table-inheritance or polymorphic associations.  Both of which are some of the oddest and least reliable corners of ActiveRecord. Of the two, I think STI is probably least weird and most likely to handle the standard use cases while minimizing the number of db queries. (Valkyrie’s approach is somewhat similar to single-table inheritance in how it uses the DB, but without actually using that AR feature).
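
To make that concrete, an STI approach might look very roughly like this; entirely hypothetical class and column names, just to illustrate the shape:

  # one 'members' table with a 'type' column (STI), plus parent_id and position columns
  class Member < ApplicationRecord
    belongs_to :parent, class_name: "Work", optional: true
  end

  class Work < Member
    has_many :members, -> { order(:position) },
             class_name: "Member", foreign_key: :parent_id,
             inverse_of: :parent
  end

  class FileMember < Member
    # represents a single file/bytestream, e.g. via has_one_attached or shrine
  end

  # a work's ordered members can then be works or files, interleaved:
  # work.members => [#<FileMember ...>, #<Work ...>, #<FileMember ...>]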

Shrine

Matt thought that shrine might do more than ActiveStorage now, but history shows things built into Rails will probably expand and get better. (Yes, but it’s unclear to me how to make audio or video “variants” or derivatives with ActiveStorage, which my place of work predicts it will need very shortly. If we are really ruthless about only what we need right now, are we going to have to just rewrite it as soon as we need another thing? There are no easy answers; “YAGNI” is simpler when it’s all about software you are writing yourself and not dependencies… but there are grey areas too).

But I’m not certain about this, after trying to help shrine developers enhance the versions/derivatives functionality to better support some flexibility we need as to storage locations and point-in-time of creation. The answer may just be trying to write an app which adds on locally to do exactly what it needs (whether in terms of shrine  or ActiveStorage), without trying to make a shareable general purpose tool?

Blacklight

Matt was very suspicious of using Blacklight at all, he found that it was quite easy for them to just write the UI they needed based on Solr responses. And their app certainly is as good as many sufia/hyrax apps (it even has an actual search-within-the-collection feature on collection pages, which I think our sufia 7 app didn’t, although I think latest hyrax does).

But remember my inability to entirely give up on the idea of a shareable toolkit? I really would like something that serves as “scaffolding” that gives you an app out of the box with basic file ingest, metadata edit, and search out of the box. And Blacklight is a way to do this. And I think my plan to segregate Blacklight from the rest of the app (it should be a dependency you can switch out) by immediately fetching records from postgres corresponding to solr search results — may be able to keep Blacklight from “infecting” the rest of the app with Blacklight-isms, as Matt was worried it could.

How simple is simple?

It was useful to have Matt call my bluff to some extent: What I have been hypothetically proposing isn’t really “just plain rails”.  But it’s a lot closer than current samvera, or even valkyrie.

I think some of the valkyrites think valkyrie’s departures from “ordinary Rails” are a positive, that they can use different patterns to do better than Rails…  which is not an idea unique to them…  but I think it is a bit hubristic, to think you can invent something better (and easier to onboard new developers with?) than Rails. (I also wonder if the valkyrites, if freed from the need to support fedora too, would simply use hanami?)

The same charge of hubris can be brought against my initial sketch of plans though — it was useful to be challenged from the “left” of “you’re still not simple enough” by Matt. I am so used to thinking about my in-formation plans as a/the simple alternative to, well, samvera or even valkyrie… it was a refreshing and needed corrective to be talking to Matt, who thought my plans were still too much abstraction, not as simple as possible, not sticking close enough to implementing only what was needed for my business needs. On the one hand, it makes me worried he’s right; on the other, it makes me more comfortable to be in a nice middle ground of moderation, with people advocating things on both sides of me, both heavier-weight and lighter-weight, sharing more code with the LAM digital collections community on one side, and sharing basically none on the other.

Really, “just plain rails” or “just plain [any code]” is to some extent a mirage, or an aspiration. We’re always writing code when we build a Rails app.  We’re always using some dependencies. While there can be a false economy in trying to share all your code in hopes of minimizing the amount of code that has to be written in aggregate (it often doesn’t work out that way because building good re-usable abstractions is hard) — there can also of course be a false economy in never using anyone else’s dependency, and “not invented here” syndrome.  And even if you’re writing it yourself — it’s the abstraction layers that can give you not-worth-it complexity, whether you keep them in the app or make them into a gem. But abstraction layers are also what allow us to do complex things that we can still comprehend as humans — when it works.

Software is still a craft. As Alberta needs to add additional features, with their aspirations to add a lot more digital-collections-like content — it’s going to take software craftsmanship to figure out how to keep it simple.  What I like about U Alberta’s approach is they realize this.  They realize they are an internal development shop, and need to let developers do what developers do — rather than have non-technical stakeholders making technical decisions for non-technical reasons.  (At one point someone said: After having been ‘burned’ before, they are very suspicious of using common/shared software, vs. just writing their app — which is part of their skepticism towards attr_json —  I think they’re not wrong).

One thing that letting an internal development shop excel entails is figuring out how to recruit and retain a solid development team with a limited budget, which is one reason Alberta is trying to be so ruthless about keeping it simple and “standard”.  One phrase I heard repeated was “industry-standard onboarding”, which I think also implies needing to be accessible to relatively “junior” new hires, which requires keeping your stack simple and standard. (That is, traditional-samvera or valkyrie-using institutions do not necessarily have any less challenge here, and may have more, as for instance Steven Anderson of BPL argued)

(But I wonder if on-boarding a new developer to an existing project that has a very small dev team is going to be challenging across the industry!  I am not convinced that “Where the Rails community has a diversity of opinions on an approach, we should prefer the approach espoused by the Rails core team” (from a Matt/Alberta manifesto) always and necessarily leads to the simplest code or the easiest to on-board new developers with. Sometimes you can build a monster in the pursuit of not doing something novel…. the irony, right? But it’s always worth considering the trade-offs).

I definitely might end up re-orienting.  For instance, Matt reminded me of something I knew but tried to forget even when writing out my notes for a possible plan: a generalized permissions/ACL system is a craggy shore that many ships have crashed upon. Should I just write, for my app, the permissions we need instead? After doing some more business analysis to figure out what they are?  Perhaps. More broadly, if we end up trying to implement this “toolkit” and I’m running into troubles and worrying our reach exceeded our grasp — retreating to just the app that’s good enough for what we need right now is always a valid escape hatch.

U Alberta’s story, where they’ve been working on this app with a very different approach for over a year, and so far are happy —  is another good data point reminding us that dissatisfaction with the samvera stack is not new, especially institutions that have developers with wider Rails experience have been suspicious of the value propositions of fedora and samvera for some time.  And that there are a variety of approaches being tried. We all need community to bounce our ideas off of and get feedback, especially those of us who operate in 2-4 person development shops need more than we may get internally. I’m so glad they were willing to spend some time talking to me.  And I highly encourage reading all of Matt/U Alberta’s somewhat iconoclastic analysis docs, as one way of considering other perspectives.  I’m not sure if I can find the time, but I’d kind of like to “onboard” myself into their codebase, and understand how it works better as one example.

 


Thanks to the whole U Alberta team, and especially Peter Binkley, Weiwei Shi, and Matt Barnett, for spending time explaining to me what they were up to. Thanks to Peter and Weiwei for reviewing this post for any terrible errors.  All remaining mistakes and wrong opinions are my own.