Command-line utility to visit github page of a named gem

I’m working in a Rails stack that involves a lot of convolutedly inter-related dependencies, which I’m not yet all that familiar with.

I often want to go visit the github page of one of the dependencies, to check out the README, issues/PRs, generally root around in the source, etc.  Sometimes I want to clone and look at the source locally, but often, in addition to that, I want to look at the github repo.

So I wrote a little utility that lets me do gem-visit name_of_gem and it'll open up the github page in a browser, using the macOS open utility to open it in your default browser.

The gem doesn't need to be installed; it uses the rubygems.org API (hooray!) to look up the gem by name, and looks at its registered "Homepage" and "Source Code" links. If either one is a github.com link, it'll prefer that. Otherwise, it'll just send you to the Homepage, or failing that, to the rubygems.org page.
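The core lookup logic is roughly this — a condensed sketch rather than the actual script (the real thing wants more error handling), though homepage_uri, source_code_uri, and project_uri are the API's real field names:

#!/usr/bin/env ruby
# Condensed sketch of gem-visit's lookup logic; error handling omitted.
require 'net/http'
require 'json'

gem_name = ARGV.first or abort("usage: gem-visit name_of_gem")
info = JSON.parse(Net::HTTP.get(URI("https://rubygems.org/api/v1/gems/#{gem_name}.json")))

# Prefer a github.com link from either registered URL; otherwise fall back
# to the homepage, or failing that, the gem's own rubygems.org page.
candidates = [info["source_code_uri"], info["homepage_uri"]].compact
url = candidates.find { |u| u.include?("github.com") } ||
      info["homepage_uri"] ||
      info["project_uri"]

system("open", url)  # macOS open launches the default browser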

It’s working out well for me.

I wrote it in ruby (naturally), but with no dependencies, so you can just drop it in your $PATH, and it just works. I put ~/bin on my $PATH, so I put it there.

I'll just give you a gist of it (yes, i'm not maintaining it at present), but I've been thinking about the best way to distribute this if I wanted to maintain it…. Distributing as a ruby gem doesn't actually seem great: with the way people use ruby version switchers, you'd have to make sure to get it installed for every version of ruby you might be using, to get the command line working in them all.

Distributing as a homebrew package seems to actually make a lot of sense for a very simple command-line utility like this. But homebrew doesn't really like niche/self-submitted stuff, and this doesn't really meet the requirements for what they'll tolerate in the core repo. But homebrew does support third-party 'taps', and I think a github repo itself can be such a 'tap', and hypothetically I could set things up so you could brew tap my_github_url, and install (and later upgrade or uninstall!) with brew… but I haven't been able to figure out how to do that from the docs/examples, just verified that it oughta be possible!  Anyone want to give me any hints or model examples? Of a homebrew formula that does nothing but install a simple one-file script, and lives in a third-party tap hosted on github?
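For what it's worth, my current (unverified) understanding is that a third-party tap is just a github repo named homebrew-<something> containing formula files, and a formula that installs a single script might look roughly like this — repo name, URL, and sha256 all placeholders:

# Formula/gem-visit.rb, in a hypothetical repo named homebrew-gem-visit
class GemVisit < Formula
  desc "Open the github page of a named gem in your browser"
  homepage "https://github.com/someuser/gem-visit"
  url "https://github.com/someuser/gem-visit/archive/v1.0.0.tar.gz"
  sha256 "0000000000000000000000000000000000000000000000000000000000000000"

  def install
    bin.install "gem-visit"  # copy the script into homebrew's bin
  end
end

Then, if I have it right, users could brew tap someuser/gem-visit and brew install gem-visit. Corrections welcome.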

Posted in General | Tagged | 3 Comments

Monitoring your Rails apps for a professional ops environment

It's 2017, and I suggest that operating a web app in a professional way necessarily includes 1) finding out about problems before you get an error report from your users (or better yet, before they even affect your users at all), and 2) having the right environment to be able to fix them as quickly as possible.

Finding out before an error report means monitoring of some kind, which is also useful for diagnosis to get things fixed as quickly as possible. In this post, I’ll be talking about monitoring.

If you aren't doing some kind of monitoring to find out about problems before you get a user report, I think you're running a web app like a hobby or side project, not like a professional operation — and if your web app is crucial to your business/organization/entity's operations, mission, or public perception, it means you don't look like you know what you're doing. Many library 'digital collections' websites get relatively little use, and a disproportionate amount of use comes from non-affiliated users who may not be motivated to figure out how to report problems — they just move on. If you don't know about problems until they get reported by a human, a problem could have existed for literally months before you find out about it. (Yes, I have seen this happen.)

What do we need from monitoring?

What are some things we might want to monitor?

  • error logs. You want to know about every fatal exception (resulting in a 500 error) in your Rails app. This means someone was trying to do or view something, and got an error instead of what they wanted.
    • even things that are caught without 500ing, but represent something not right that your app managed to recover from — you might want to know about those too. These often correspond to error or warn (rather than fatal) logging levels.
    • In today’s high-JS apps, you might want to get these from JS too.
  • Outages. If the app is totally down, it’s not going to be logging errors, but it’s even worse. The app could be down because of a software problem on the server, or because the server or network is totally borked, or whatever, you want to know about it no matter what.
  • pending danger or degraded performance.
    • Disk almost out of capacity.
    • RAM excessively swapping, or almost out of swap. (or above quota on heroku)
    • SSL certificates about to expire
    • What's your median response time? 90th or 95th percentile?  Have they suddenly gotten a lot worse than typical? This can be measured on the server (time the server took to respond to an HTTP request) or in the browser — and actual browser measurements can include real page load and time-to-DOM-ready.
  • More?

Some good features of a monitoring environment:

  • It works, there are minimal ‘false negatives’ where it misses an event you’d want to know about. The monitoring service doesn’t crash and stop working entirely, or stop sending alerts without you knowing about it. It’s really monitoring what you think it’s monitoring.
  • Avoid ‘false positives’ and information overload. If the monitoring service is notifying or logging too much stuff, and most of it doesn’t really need your attention after all — you soon stop paying attention to it altogether. It’s just human nature, the boy who cried wolf. A monitoring/alerting service that staff ignores doesn’t actually help us run professional operations.
  • Sends you alert notifications (email, SMS, whatever) when things look screwy — configurable, and at least well-tuned enough to give you what you need and not more.
  • Some kind of "dashboard" that can give you overall project health, or an at-a-glance view of current status, including things like "are there lots of errors being reported".
  • Maybe uptime or other statistics you can use to report to your stakeholders.
  • Low “total cost of ownership”, you don’t want to  have to spend hours banging your head against the wall to configure simple things or to get it working to your needs. The monitoring service that is easiest to use will get used the most, and again, a monitoring service nobody sets up isn’t doing you any good.
  • public status page of some kind, that provides your users (internal or external) a place to look for "is it just me, or is this service having problems" — you know, like the ones you probably rely on for the high-quality commercial services you use. This could be automated, or manual, or a combination.

Self-hosted open source, or commercial cloud?

While one could imagine things like a self-hosted proprietary commercial solution, I find that in reality people are usually choosing between ‘free’ open source self-hosted packages, and for-pay cloud-hosted services.

In my experience, I have come to unequivocally prefer cloud-hosted solutions.

One problem with open-source self-hosted, is that it’s very easy to misconfigure them so they aren’t actually working.  (Yes, this has happened to me). You are also responsible for keeping them working — a monitoring service that is down does not help you. Do you need a monitoring service for your monitoring service now?  What if a network or data center event takes down your monitoring service at the same time it takes down the service it was monitoring? Again, now it’s useless and you don’t find out. (Yes, this has happened to me too).

There exist a bunch of high-quality commercial cloud-hosted offerings these days. Their prices are reasonable (if not cheap; but compare to your staff time in getting this right); their uptime is outstanding (let them handle monitoring the monitoring service they provide to you, and avoid infinite monitoring recursion); many of them are customized to do just the right thing for Rails; and their UIs are often great — they just tend to be more usable software than the self-hosted open source solutions I've seen.

Personally, I think if you’re serious about operating your web apps and services professionally, commercial cloud-hosted solutions are an expense that makes sense.

Some cloud-hosted commercial services I have used

I’m not using any of these currently at my current gig. And I have more experience with some than others. I’m definitely still learning about this stuff myself, and developing my own preferred stack and combinations. But these are all services I have at least some experience with, and a good opinion of.

There's no one service (I know of) that does everything I'd want in the way I'd want it, so it probably does require using (and paying for) multiple services. But price is definitely something I consider.

Bugsnag

  • Captures your exceptions and errors, that’s about it. I think Rails was their first target and it’s especially well tuned for Rails, although you can use it for other platforms too.
  • Super easy to include in your Rails project — just add a gem, pretty much. (Minimal sketch after this list.)
  • Gives you stack traces and other (configurable) contextual information (like the logged-in user)
  • But additional customization is possible in reasonable ways.
  • Including manually sending ‘errors’
  • Importantly, groups repeated occurrences of the same error together as one line (you can expand), to avoid info overload and crying wolf.
  • Has some pretty graphs.
  • Lets you prioritize, filter, and 'snooze' errors to pop up only if they happen again after the 'snooze' time, and other features that again let you avoid info overload and actually respond to what needs responding to.
  • Email/SMS alerts in various ways including integrations with services like PagerDuty
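A minimal sketch of what that integration looks like — the api key is a placeholder, but Bugsnag.configure and Bugsnag.notify are the gem's real entry points:

# Gemfile
gem "bugsnag"

# config/initializers/bugsnag.rb
Bugsnag.configure do |config|
  config.api_key = "YOUR_PROJECT_API_KEY"  # placeholder
end

# Manually sending an 'error' from anywhere in app code:
Bugsnag.notify(RuntimeError.new("something suspicious happened"))

Uncaught Rails exceptions are reported automatically once the gem is installed; the manual call is for the caught-and-recovered cases mentioned above.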

Honeybadger

Very similar to bugsnag, it does the same sorts of things in the same sorts of ways. From some quick experimentation, I like some of its UX better, and some worse. But it does have more favorable pricing for many sorts of organizations than bugsnag — and offers free accounts "for non-commercial open-source projects"; not sure how many library apps would qualify.

Also includes optional integration with the Rails uncaught exception error page, to solicit user comments on the error that will be attached to the error report, which might be kind of neat.

Honeybadger also has some limited HTTP uptime monitoring functionality, ability to ‘assign’ error log lines to certain staff, and integration with various issue-tracking software to do the same.

All in all, if you can only afford one monitoring provider, I think honeybadger’s suite of services and price make it a likely contender.

(disclosure added in retrospect 28 Mar 2017. Honeybadger sponsors my rubyland project at a modest $20/month. Writing this review is not included in any agreement I have with them. )

New Relic

New Relic really focuses on performance monitoring, and is quite popular in Rails land, with easy integration into Rails apps via a gem.  New Relic also has javascript instrumentation, so it can measure actual perceived-by-user browser load times, as affected by network speed, JS and CSS efficiency, etc. New Relic has a sophisticated alerting setup that allows you to get alerted when performance is below the thresholds you've set as acceptable, or below typical long-term trends (I think I remember this last one can be done, although I can't find it now).

The New Relic monitoring tool also includes some error and uptime/availability monitoring; for whatever reasons, many people using New Relic for performance monitoring seem to use another product for these features, and I haven't spent enough time with New Relic to know why. (These choices were already made for me at my last gig, and we didn't generally use New Relic for error or uptime monitoring.)

Statuscake

Statuscake isn’t so much about error log monitoring or performance, as it is about uptime/availability.

But Statuscake also includes a linux daemon that can be installed to monitor internal server state — RAM, CPU, and disk utilization — not just HTTP server responsiveness. It gives you pretty graphs, and alerting if metrics look dangerous.

This is especially useful on a non-heroku/PaaS deployment, where you are fully responsible for your machines.

Statuscake can also monitor looming SSL cert expiry, SMTP and DNS server health, and other things in the realm of infrastructure-below-the-app environment.

Statuscake also optionally provides a public 'status' page for your users — I think this is a crucial and often neglected piece that really makes your organization seem professional and meets user needs (whether internal staff users or external). But I haven't actually explored this feature myself.

At my previous gig, we used Statuscake happily, although I didn't personally have need to interact with it much.

Librato

My previous gig at the Friends of the Web consultancy used Librato on some projects happily, so I'm listing it here — but I honestly don't have much personal experience with it, and don't entirely understand what it does. It really focuses on graphing metrics over time, though — I think.  I think its graphs sometimes helped us notice when we were under a malware bot attack of various sorts, or otherwise getting unusual traffic (in volume or nature) that should be taken account of. It can use its own 'agents', or accept data from other open source agents.

Heroku metrics

Haven't actually used this too much either, but if you are deploying on heroku with paid heroku dynos, you already have some metric collection built in, for basic things like memory, CPU, server-level errors, deployment of new versions, server-side response time (not client-side like New Relic), and request timeouts.

You’ve got em, but you’ve got to actually look at them — and probably set up some notification/alerting on thresholds and unusual events — to get much value from this! So just a reminder that it is there, and one possibly budget-conscious option if you are already on heroku.

Phusion Union Station

If you already use Phusion Passenger for your Rails app server, then Union Station is Phusion's non-free monitoring/analytics solution that integrates with Passenger. I don't believe you have to use the enterprise (paid) edition of Passenger to use Union Station, but Union Station is not free.

I haven’t been in a situation using Passenger and prioritizing monitoring for a while, and don’t have any experience with this product. But I mention it because I’ve always had a good impression of Phusion’s software quality and UX, and if you do use Passenger, it looks like it has the potential to be a reasonably-priced (although priced based on number of requests, which is never my favorite pricing scheme) all-in-one solution monitoring Rails errors, uptime, server status, and performance (don’t know if it offers Javascript instrumentation for true browser performance).

Conclusion

It would be nice if there were a product that could do all of what you need for a reasonable price, so you'd only need one. Most actual products seem to start by focusing on one aspect of monitoring/notification — and sometimes try to expand to be 'everything'.

Some products (New Relic, Union Station) seem to be trying to provide an "all your monitoring needs" solution, but my impression of the general 'startup sector' is that most organizations still put together multiple services to get the complete monitoring/notification they need.

I'm not sure why. I think some of this is just that few people want to spend time learning/evaluating/configuring a new solution; if they have something that works, or have been told by a trusted friend that something works, they stick with it.  Also, perhaps the 'all-in-one' solutions don't provide UX and functionality as good, in the areas they weren't originally focused on, as the tools that focused on those areas from the start. And if you are a commercial entity making money (or aiming to), even an 'expensive' suite of monitoring services is a small percentage of your overall revenue/profit, and worth it to protect that revenue.

Personally, if I were getting started from virtually no monitoring, in a very budget-conscious non-commercial environment, I'd start with Honeybadger (including making sure to use its HTTP uptime monitoring), and then consider adding one or both of New Relic or Statuscake next.

Anyone, especially from the library/cultural/non-commercial sector, want to share your experiences with monitoring? Do you do it at all? Have you used any of these services? Have you tried self-hosted open source monitoring solutions? And found them great? Or disastrous? Or somewhere in between?  Any thoughts on “Total cost of ownership” of self-hosted solutions, do you agree or disagree with me that they tend to be a losing proposition? Do you think you’re providing a professionally managed web app environment for your users now?

One way or another, let’s get professional on this stuff!

Posted in General | Tagged | 1 Comment

“This week in rails” is a great idea

At some point in the past year or two (maybe even earlier? But I know it's not many years old) the Rails project started releasing roughly weekly 'news' posts that mostly cover interesting or significant changes or bugfixes made to master (meaning they aren't in a release yet, but will be in the next; I think the copy could make this more clear, actually!).

This week in Rails

(Note, aggregated in rubyland too!).

I assume this was a reaction to a recognized problem with non-core developers and developer-users keeping up with "what the heck is going on with Rails" — watching the github repo(s) wasn't a good solution: too overwhelming, info overload. And I think it worked to improve this situation! It's a great idea!  It makes it a lot easier to keep up with Rails, and a lot easier for a non-committing or rarely-contributing developer to maintain understanding of the Rails codebase.  Heck, even for newbie developers, it can serve as pointers to areas of the Rails codebase they might want to look into and develop some familiarity with, or at least know exist!

I think it’s been quite successful at helping in those areas. Good job and thanks to Rails team.

I wonder if this is something the Hydra project might consider?  It definitely has some issues for developers in those areas.

The trick is getting the developer resources to do it, of course. I am not sure who writes the "This Week in Rails" posts — it might be a different core team dev every week? Or what methodology they use to compile it — whether they're writing on areas of the codebase they might not be familiar with either, just looking at the commit log and trying to summarize it for a wider audience, or what.  I think you'd have to have at least some familiarity with the Rails codebase to write these well; if you don't understand what's going on yourself, you're going to have trouble writing a useful summary. It would be interesting to do an interview with someone on the Rails core team about how this works, how they do it, how well they think it's working, etc.


Posted in General | Tagged | 2 Comments

searching through all gem dependencies

Sometimes in your Rails project, you can't figure out where a certain method or config comes from. (This is especially an issue for me as I learn the sufia stack, which involves a whole bunch of interrelated gem dependencies.)  Interactive debugging techniques like source_location are invaluable and it pays to learn how to use them (quick example below), but sometimes you just want/need to grep through ALL the dependencies.
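For anyone who hasn't seen the source_location trick, here it is from a rails console (the class and method names are hypothetical):

# Where is this method actually defined? Returns [file_path, line_number].
SomeModel.instance_method(:some_method).source_location
# => ["/path/to/gems/some_gem-1.0/lib/some_gem/concern.rb", 42]

# Same thing, starting from an instance:
some_record.method(:some_method).source_location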

As several stackoverflow answers can tell you, you can do that like this, from your Rails (or other bundler-using) project directory:

grep "something" -r $(bundle show --paths)
# Or if you want to include the project itself at '.' too:
grep "something" -r . $(bundle show --paths)

You can make a bash function to do this for you; I'm calling it gemgrep for now. Put this in your bash configuration file (probably ~/.bash_profile on macOS):

gemgrep () { grep "$@" -r . $(bundle show --paths); }

Now I can just gemgrep something from a project directory, and search through the project AND all gem dependencies. It might be slow.

I also highly recommend setting your default grep to color in interactive terminals, with this in your bash configuration:

export GREP_OPTIONS='--color=auto'
Posted in General | Tagged | Leave a comment

idea i won’t be doing anytime soon of the week: replace EZProxy with nginx

I think it would be possible, maybe even trivial, to replace EZProxy with nginx, writing code in lua using OpenResty.  I don't know enough nginx (or lua!) to be sure, but some brief googling suggests to me the tools are there, and it wouldn't be too hard. (It would be too hard for me in C, I don't know C at all. I don't know lua either, but it's a "higher level" language without some of the C pitfalls.) nginx is supposed to be really good at proxying, right? That was its original main use case — although as a 'reverse proxy', and I don't know of any reason it wouldn't work well as a forward proxy too. Could it handle the EZProxy-style load of lots and lots of sessions to lots and lots of sites?  Maybe. nginx is a workhorse.

You could probably even make it API-compatible with EZProxy on both the client and config file ends.

The main motivation of this is not to get something for ‘free’, EZProxy’s price is relatively reasonable. But EZProxy is very frustrating in some ways and lacks the configurability, dynamic extension points, precision of logging and rate limiting, etc., that many often want. And EZProxy development seems basically… well, in indefinite maintenance mode.  So the point would be to get a living application again that evolves to meet our actual present sophisticated needs.

I def won't be working on this any time soon, but it sure would be a neat project. My present employer is more of a museum/archive/cultural institution than an academic or public library with those use cases, so we don't have much EZProxy need.

(As an aside, it is an enduring frustration and disappointment to me that the library community doesn't seem much interested in putting developer 'innovation' resources towards, well, serving the research, teaching, and learning needs of scholars and students, which I thought was actually the primary mission of these institutions. The vast majority of institutional support for innovative development is just in the archival/institutional repository domain. Which is also important, but generally not to the day-to-day work of (eg) academic library patrons…. For some reason, actually improving services to patrons isn't a priority for most administrators/institutions, and I'm not really sure why.)

Posted in General | 1 Comment

Return to libraryland

I’m excited to announce this week is my first week working for the Othmer Library division at the Chemical Heritage Foundation. CHF’s name isn’t necessarily optimal at explaining what the organization does: It’s actually an independent history of science institute (not in fact focusing exclusively on chemistry), with a museum, significant archival collection, and ongoing research activities. As they say on their home page, “The Chemical Heritage Foundation is dedicated to explaining a simple truth: science has a past and our future depends on it.” That’s a nice way to put it.

I'll be working, at least initially, mostly on the Hydra/Sufia stack. CHF has been a significant contributor to this open source stack already (with existing developer Anna Headley, who's still here at CHF, fortunately for us all!), and I am happy to be at a place that prioritizes open source contributions.  CHF has some really interesting collections (medieval alchemical manuscripts? Oral histories from scientists? That and more), which aren't available on the web yet — but we're working on it.

CHF is located in Philadelphia, but I’ll still be in Baltimore, mostly working remotely, with occasional but regular visits. (Conveniently, Philadelphia is only about 100 miles from Baltimore).

And I’m very happy to be back in the library community. It’s a little bit less confusing now if I tell people I’m “a librarian”. Just a little.  I definitely missed being in the library world, and the camaraderie and collaboration of the library open source tech community in my year+ I was mostly absent from it — it really is something special.

I have nothing but good things to say about Friends of the Web, where I spent the past 15 months or so. I'll miss working with my colleagues there and many aspects of the work environment. They're really a top-notch design and Rails/React/iOS dev firm; if you need something done in either web design or app development (or both!) that you don't have in-house resources to do, I don't hesitate to recommend them.

Posted in General | 3 Comments

rubyland infrastructure, and a modest sponsorship from honeybadger

Rubyland.news is my hobby project ruby RSS/atom feed aggregator.

Previously it was run on entirely free heroku resources — free dyno, free postgres (limited to 10K rows, which dashes my dreams of a searchable archive, oh well). The only thing I had to pay for was the domain. Rubyland doesn’t take many resources because it is mostly relatively ‘static’ and cacheable content, so could get by fine on one dyno. (I’m caching whole pages with Rails “fragment” caching and an in-process memory-based store, not quite how Rails fragment caching was intended to be used, but works out pretty well for this simple use case, with no additional resources required).
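That caching setup is roughly the following — the cache key logic here is hypothetical, but the :memory_store config and the cache view helper are standard Rails:

# config/environments/production.rb — in-process memory store, no memcached/redis needed
config.cache_store = :memory_store, { size: 32.megabytes }

# In a view (ERB), cache the whole page body under a key that changes
# whenever the newest entry does (Entry and the key scheme are hypothetical):
#
#   <% cache("front-page-#{Entry.maximum(:updated_at).to_i}") do %>
#     ... whole page ...
#   <% end %>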

But the heroku free dyno doesn’t allow SSL on a custom hostname.  It’s actually pretty amazing what one can accomplish with ‘free tier’ resources from various cloud providers these days.  (I also use a free tier mailgun account for an MX server to receive @rubyland.news emails, and SMTP server for sending admin notifications from the app. And free DNS from cloudflare).  Yeah, for the limited resources rubyland needs, a very cheap DigitalOcean droplet would also work — but just as I’m not willing to spend much money on this hobby project, I’m also not willing to spend any more ‘sysadmin’ type time than I need — I like programming and UX design and enjoy doing it in my spare ‘hobby’ time, but sysadmin’ing is more like a necessary evil to me. Heroku works so well and does so much for you.

With a very kind sponsorship gift of $20/month for 6 months from Honeybadger, I used the money to upgrade to a heroku hobby-dev dyno, which does allow SSL on custom hostnames. So now rubyland.news is available at https, via letsencrypt.org, with cert acquisition and renewal fully automated by the letsencrypt-rails-heroku gem, which makes it incredibly painless, just set a few heroku config variables and you’re pretty much done.

I still haven't redirected all http to https, and am not sure what to do about https on rubyland. For one, if I don't continue to get sponsorship donations, I might not continue the heroku paid dyno, and then wouldn't have custom-domain SSL available. Also, even with SSL, since the rubyland.news feed often includes embedded <img> tags with their original src, you still get browser mixed-content warnings (which browsers may be moving to turn into a security error page?).  So I'm not sure about the ultimate disposition of SSL on rubyland.news, but for now it's available on both http and https — so at least I can do secure admin or other logins if I want (haven't implemented that yet, but an admin interface for approving feed suggestions is on my agenda).

Honeybadger

I hadn't looked at Honeybadger before myself.  I have used bugsnag on client projects before, and been quite happy with it. Honeybadger looks like basically a bugsnag competitor — its main feature set is about capturing errors from your Rails (or other, including non-ruby) apps, and presenting them well for your response, with grouping, notifications, status disposition, etc.

I've set up honeybadger integration on rubyland.news, to check it out. (Note: "Honeybadger is free for non-commercial open-source projects", which is pretty awesome, thanks honeybadger!) Honeybadger's feature set and user/developer experience are looking really good.  It's got much more favorable pricing than bugsnag for many projects — pricing is just per-app, not per-event-logged or per-seat.  It's got a pretty similar feature set to bugsnag; in some areas I like how honeybadger does things a lot better than bugsnag, in others I'm not sure.

(I've been thinking for a while about wanting to forward all Rails.logger error-level log lines to my error monitoring service, even though they aren't fatal exceptions/500s. I think this would be quite do-able with honeybadger — an untested sketch below — and I might try to rig it up at some point. I like the idea of being able to put error-level logging in my code rather than monitoring-service-specific logic, and have it just work with whatever monitoring service is configured.)
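Something like the following might do it — an untested sketch: HoneybadgerForwardingLogger is my hypothetical name, Honeybadger.notify with a plain string is the gem's documented manual-reporting call, and a real version would want to guard against recursion in case honeybadger itself ever logs at error level:

# Untested sketch: a Rails logger that also notifies honeybadger on error-level lines.
class HoneybadgerForwardingLogger < ActiveSupport::Logger
  def add(severity, message = nil, progname = nil, &block)
    message ||= block ? block.call : progname  # resolve the message once
    if severity && severity >= ::Logger::ERROR && message
      Honeybadger.notify("Logged error: #{message}")
    end
    super(severity, message, progname)
  end
end

# Then in config/application.rb:
#   config.logger = HoneybadgerForwardingLogger.new(Rails.root.join("log", "#{Rails.env}.log"))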

So I’d encourage folks to check out honeybadger — yeah, my attention was caught by their (modest, but welcome and appreciated! $20/month) sponsorship, but I’m not being paid to write this specifically, all they asked for in return for sponsorship was a mention on the rubyland.news about page.

Honeybadger also includes some limited uptime monitoring. The other important piece of monitoring, in my opinion, is request- or page-load-time monitoring, with reports and notifications on median and 90th/95th percentiles. I'm not sure if honeybadger includes that in any way. (For non-heroku deploys, disk space, RAM, and CPU usage monitoring is also key. RAM and CPU monitoring can still be useful with heroku, but less vital in my experience.)

Is there even a service that will work well for Rails apps that combines error, uptime, and request time monitoring, with a great developer experience, at a reasonable price? It’s a bit surprising to me that there are so many services that do just one or two of these, and few that combine all of them in one package.  Anyone had any good experiences?

For my library-sector readers, I think this is one area where most library web infrastructure is not yet operating at professional standards. In this decade, a professional website means you have monitoring and notification to tell you about errors and outages without needing to wait for users to report em, so you can get em fixed as soon as possible. Few library services are being operated that way, and it's time to get up to speed.  While you can run your own monitoring and notification services on your own hardware, in my experience few open source packages are up to the quality of current commercial cloud offerings — and when you run your own monitoring/notification, you run the risk of losing notice of problems because of misconfiguration of some kind (it's happened to me!), or a local infrastructure event that takes out both your app and your monitoring/notification (that too!).  A cloud commercial offering makes a lot of sense. While there are many "reasonably" priced options these days, they are admittedly still not 'cheap' for a library budget (or lack thereof) — but it's a price worth paying; it's what it means to run web sites, apps, and services professionally.

Posted in General | Tagged | Leave a comment

bento_search 1.7.0 released

bento_search is a gem that makes embedding external searches in Rails a breeze, focusing on search targets and use cases involving 'scholarly' or bibliographic citation results.

Bento_search isn’t dead, it just didn’t need much updating. But thanks to some work for a client using it, I had the opportunity to do some updates.

Bento_search 1.7.0 includes testing under Rails 5 (earlier versions probably would have worked fine in Rails 5 already), some additional configuration options, a lot more fleshing out of the EDS adapter, and a new ConcurrentSearcher demonstrating proper use of the new Rails 5 concurrency API. (The older BentoSearch::MultiSearcher is now deprecated.)

See the CHANGES file for full list.

As with all releases of bento_search to date, it should be strictly backwards compatible and an easy upgrade. (Although if you are using Rails earlier than 4.2, I’m not completely confident, as we aren’t currently doing automated testing of those).

Posted in General | Leave a comment

CDs are not archival storage

Not even non-writeable factory-written ones.

When Discs Die: CDs were sold to consumers as these virtually indestructible platters, but the truth, as exemplified by the disc rot phenomenon, is more complicated.

Posted in General | Leave a comment

ruby VCR, easy trick for easy re-record

I do a lot of work with external HTTP APIs, and I love the vcr gem for use in writing tests/specs involving them. It records the interaction, so most of the time the tests are running based on a recorded interaction, not actually going out to the remote HTTP server.

This makes the tests run faster; it makes more sense on a CI server like Travis; it lets tests run automatically without having to hard-code credentials in for authenticated services (make sure to use VCR's filter_sensitive_data feature — figuring out a convenient way to do that with real-world use cases is a different discussion); and it even lets people run the tests, without having credentials themselves at all, to make minor PRs and such.

But in actual local dev, I sometimes want to be sure I'm running my tests against live data, often as the exact HTTP requests change while I edit my code. Sometimes I need to do this over and over again in a cycle. Previously, I was doing things like manually deleting the relevant VCR cassette files, to ensure I was running with live data, or to avoid VCR "hey, this is a new request buddy" errors.

Why did I never think of using the tools VCR already gives us to make it a lot easier on myself?
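The trick is just VCR's record modes plus an environment variable — roughly this, in whatever file holds your VCR config:

# Sketch of the relevant VCR config: let an ENV var override the record mode.
VCR.configure do |config|
  config.cassette_library_dir = "spec/vcr_cassettes"
  config.hook_into :webmock
  # The default :once records a cassette only if it doesn't exist yet;
  # VCR=all forces fresh recordings for whatever specs you run.
  config.default_cassette_options = {
    record: ENV['VCR'] ? ENV['VCR'].to_sym : :once
  }
end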

Normally everything works as always, but I just gotta VCR=all ./bin/rspec to make that run use freshly recorded cassettes. Or VCR=all ./bin/rspec some_specific_spec.rb to re-record only that spec, or only the specs I'm working on, etc.

Geez, I should have figured that out years ago. So I’m sharing with you.

Just don’t ask me if it makes more sense to put VCR configuration in spec_helper.rb or rails_helper.rb. I still haven’t figured out what that split is supposed to be about honestly. I mean, I do sometimes VCR specs of service objects that do not have Rails dependencies…. but I usually just drop it (and all my other config) in rails_helper.rb and ignore the fact that rspec these days is trying to force us to make a choice I don’t really understand the implications or utility of and don’t want to think about.

Posted in General | Tagged | Leave a comment