On the graphic design of rubyland.news

I like to pay attention to design, and enjoy good design in the world, graphic and otherwise. A well-designed printed page, web page, or physical tool is a joy to interact with.

I’m not a trained designer in any way, but in my web development career I’ve often effectively been the UI/UX/graphic designer of apps I work on, and I try to do the best I can (our users deserve good design). I work on developing my skills by paying attention to graphic design in the world, by reading up (I’d recommend Donald Norman’s The Design of Everyday Things, Robert Bringhurst’s The Elements of Typographic Style, and one free online resource, Butterick’s Practical Typography), and by trying to improve my practice, and I think my graphic design skills are always improving. (I also learned a lot looking at and working with the productions of the skilled designers at Friends of the Web, where I worked last year.)

Implementing rubyland.news turned out to be a great opportunity to practice some graphic and web design. Rubyland.news has very few graphical or interactive elements; it’s a simple thing that does just a few things. The relative simplicity of what’s on the page, combined with it being a hobby side project — with no deadlines, no existing branding styles, and no stakeholders saying things like “how about you make that font just a little bit bigger” — made it a really good design exercise for me, where I could really focus on trying to make each element and the pages as a whole as good as I could in both aesthetics and utility, and develop my personal design vocabulary a bit.

I’m proud of the outcome. While I don’t consider it perfect (I’m not totally happy with the typography of the mixed-case headers in Fira Sans), I think it’s pretty good typography and graphic design, probably my best design work. It’s nothing fancy, but I think it’s pleasing to look at and effective. As with much good design, the simplicity of the end result probably belies the amount of work I put in to make it seem straightforward and unsophisticated. :)

My favorite element is the page-specific navigation (and sometimes info) “side bar”.

[Screenshot: the page-specific navigation “side bar”]

At first I tried to put these links in the site header, but there wasn’t quite enough room for them, and I didn’t want to make the header two lines — on desktop or wide tablet displays, I think vertical space is a precious resource not to be squandered. And I realized that maybe it was better anyway for the header to hold only unchanging site-wide links, and to put page-specific links elsewhere.

Perhaps encouraged by the somewhat hand-written look (especially of all-caps text) in Fira Sans, the free font I was trying out, I got the idea of trying to include these as a sort of ‘margin note’.

[Screenshot: the page-specific links rendered as a ‘margin note’ to the left of the content]

The CSS got a bit tricky with screen-size responsiveness (flexbox is a wonderful thing). On wide screens, the main content is centered in the screen, as you can see above, with the links to the left: the ‘like a margin note’ idea.

On somewhat narrower screens, where there’s not enough room to have margins on both sides big enough for the links, the main content column is no longer centered.

[Screenshot: narrower-screen layout, with the main content column no longer centered]

And on very narrow screens, such as most phones, where there’s not even room for that, the page-specific nav links switch to being above the content. On narrow screens, which are typically phones much taller than they are wide, it’s horizontal space that becomes precious, with more vertical space to spare.

[Screenshot: phone-width layout, with the page-specific nav links above the content]

Note that on really narrow screens, which is probably most phones held in vertical orientation, the margins on the main content disappear completely: you get the actual content, with its white background, running edge to edge. This seems an obvious thing to do on phone-sized screens: why waste any horizontal real estate on different-colored margins, or provide a visual distraction with even a few pixels of different-colored margin or border jammed up against the edge? I’m surprised it seems to be a relatively rare thing to do in the wild.

[Screenshot: really narrow screens, with the content’s white background running edge to edge]

Nothing too fancy, but I quite like how it turned out. I don’t remember exactly what CSS tricks I used to make it so. And I still haven’t really figured out how to write clear, maintainable CSS; I’m less proud of the actual messy CSS source code than I am of the result. :)
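If you wanted to reproduce the general idea, here’s a minimal sketch of one way to do it with flexbox and media queries. This is not the actual rubyland.news CSS; the class names and breakpoint widths are invented for illustration.

/* A rough sketch only, not the actual rubyland.news stylesheet.
   Class names and breakpoint widths are made up for illustration. */

/* Wide screens: the nav sits to the left of the main column, like a margin note. */
.page {
  display: flex;
  justify-content: center;   /* roughly centers the content area */
}
.page-nav {
  width: 10em;
  margin-right: 1em;
  text-align: right;
}
.main-content {
  max-width: 40em;
  background: white;
}

/* Somewhat narrower: not enough room for margins on both sides,
   so stop centering and let the columns hug the left edge. */
@media (max-width: 60em) {
  .page { justify-content: flex-start; }
}

/* Very narrow (phones): the nav stacks above the content, and the content's
   white background runs edge to edge with no colored margin. */
@media (max-width: 40em) {
  .page { flex-direction: column; }
  .page-nav {
    width: auto;
    margin-right: 0;
    text-align: left;
  }
  .main-content { max-width: none; }
}

The nice part is that flexbox handles the sizing and stacking; the media queries only need to flip a few properties at each breakpoint.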


One way to remove local merged tracking branches

My git workflow involves creating a lot of git feature branches, as remote tracking branches on origin. They eventually get merged and deleted (via github PR), but I still have dozens of them lying around.

Via googling, reading StackOverflow answers, and mushing together some stuff I don’t totally understand, here’s one way to deal with it: create an alias, git-prune-tracking, in your ~/.bash_profile:

alias git-prune-tracking='git branch --merged | grep -v "*" | grep -v "master" | xargs git branch -d; git remote prune origin'

And periodically run git-prune-tracking from a git project dir.

I must admit I do not completely understand what this is doing, and there might be a better way? But it seems to work. Anyone have a better approach, ideally one you fully understand? I’m kinda surprised this isn’t built into the git client somehow.
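For my own notes, here is the same thing spelled out as a bash function, with my best reading of what each step does in the comments. Treat the explanations as a best guess worth checking against the git docs, not an authoritative account.

git-prune-tracking() {
  # `git branch --merged` lists local branches already merged into the current HEAD.
  # The first grep drops the current branch (the line git marks with "*"),
  # the second grep makes sure "master" is never deleted,
  # and `xargs git branch -d` safe-deletes whatever is left ("-d" refuses to
  # delete a branch that isn't fully merged, as an extra safety net).
  git branch --merged | grep -v "*" | grep -v "master" | xargs git branch -d

  # Then delete the local remote-tracking refs (origin/whatever) for branches
  # that no longer exist on origin, e.g. ones deleted when a PR was merged.
  git remote prune origin
}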


Use capistrano to run a remote rake task, with maintenance mode

So the app I am now working on is still in its early stages, not even live to the public yet, but we’ve got an internal server. We periodically have to change a bunch of data in our (non-rdbms) “production” store. (First devops unhappiness: I think there should be no scheduled downtime for planned data transformations. We’re working on it, but for now it happens.)

We use capistrano to deploy. Previously/currently, the process for one of these scheduled-downtime maintenance windows looked like:

  • on your workstation, do a cap production maintenance:enable to start some downtime
  • ssh into the production machine, cd to the cap-installed app, and run the rake task with bundle exec. Which could take an hour+.
  • Remember to come back when it’s done and run `cap production maintenance:disable`.

A couple more devops unhappiness points here: 1) In my opinion you should ideally never be ssh’ing to production, at least in a non-emergency situation. 2) You have to remember to come back and turn off maintenance mode — and if I start the task at 5pm to avoid disrupting internal stakeholders, I gotta come back after business hours to do that! I also think that anything you have to do ‘outside business hours’ that’s not an emergency is a sign of a not-yet-mature ops environment.

So I decided to try to fix this. Since the existing maintenance mode stuff was already done through capistrano, and I wanted to do this without a manual ssh to the production machine, capistrano seemed a reasonable tool. I found a plugin to execute rake via capistrano, but it didn’t do quite what I wanted, and its implementation was so simple that I saw no reason not to copy-and-paste it and make it do just what I wanted.

I’m not gonna maintain this for the public at this point (make a gem/plugin out of it? nope), but I’ll give it to you in a gist if you want to use it. One of the tricky parts was figuring out how to get “streamed” output from cap, since my rake tasks use ruby-progressbar — it’s got decent non-TTY output already, and I wanted to see it live in my workstation console. I managed to do that! Although I never figured out how to get a cap recipe to require files from another location (I have no idea why I couldn’t make it work), so the custom class is inlined in, which is ugly.

I also ordinarily want maintenance mode to be turned off even if the task fails, but I still want a non-zero exit code in those cases (anticipating further automation — really what I need is to be able to execute this all via cron/at too, so we can schedule downtime for the middle of the night without having to be up then).

Anyway, here’s the gist of the cap recipe. The file goes in ./lib/capistrano/tasks in a local app, and then you’ve got these recipes. Any tips on how to organize my cap recipes better are quite welcome.
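To give a flavor of the approach without reproducing the whole gist, here’s a much-simplified sketch of the general shape. The task name and the TASK env var are hypothetical, it assumes the maintenance:enable / maintenance:disable tasks already exist (as they did in my setup), and it leaves out the streamed-output handling the real recipe does.

# lib/capistrano/tasks/maintenance_rake.rake (simplified sketch, not the real recipe)
namespace :app do
  desc "Run a rake task on the primary app server, wrapped in maintenance mode"
  task :maintenance_rake do
    rake_task = ENV["TASK"]
    raise "usage: cap production app:maintenance_rake TASK=some:task" unless rake_task

    invoke "maintenance:enable"
    begin
      on primary(:app) do
        within current_path do
          with rails_env: fetch(:rails_env) do
            execute :bundle, :exec, :rake, rake_task
          end
        end
      end
    ensure
      # always turn maintenance mode back off, even if the rake task blew up
      invoke "maintenance:disable"
    end
  end
end

The begin/ensure is what gets maintenance mode turned back off on failure, while still letting the exception propagate so cap exits non-zero.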


Hash#map ?

I have frequently griped that Hash doesn’t have a useful map/collect function, something allowing me to transform the hash keys or values (usually values) into another, transformed hash. I even go looking for it in ActiveSupport::CoreExtensions sometimes; surely they’ve added something, everyone must want to do this… nope.

Thanks to an example in BigBinary’s blog post about the new ruby 2.4 Enumerable#uniq… I realized, duh, it’s already there!

olympics = {1896 => 'Athens', 1900 => 'Paris', 1904 => 'Chicago', 1906 => 'Athens', 1908 => 'Rome'}
olympics.collect { |k, v| [k, v.upcase] }.to_h
# => {1896=>"ATHENS", 1900=>"PARIS", 1904=>"CHICAGO", 1906=>"ATHENS", 1908=>"ROME"}

Just use ordinary Enumerable#collect with two block args, which get the key and value. Return a two-element array from the block to get an array of pairs, which can easily be turned back into a hash with #to_h.

It’s a bit messy, but not really too bad. (I somehow learned to prefer collect over its synonym map, but I think maybe I’m in the minority? collect still seems more descriptive to me of what it’s doing. But this is one place where I wouldn’t have held it against Matz if he had decided to give the method only one name, so we were all using the same one!)

(Did you know Array#to_h turns an array of two-element arrays into a hash? I’m not sure I did! I knew about Hash(), but I don’t think I knew about Array#to_h… it looks like it was added in ruby 2.1.0. The equivalent before that would have been more like Hash[ olympics.collect {|k, v| [k, v.upcase] } ], which I think is too messy to want to use.)

I’ve been writing ruby for 10 years, and periodically thinking “damn, I wish there was something like Hash#collect” — and didn’t realize that Array#to_h, added in 2.1, makes this pattern a lot more readable. I’ll def be using it next time I have that thought. Thanks BigBinary for using something similar in your Enumerable#uniq example and making me realize, oh, yeah.

 


“Polish”; And, What makes well-designed software?

Go check out Schneem’s post on “polish”. (No, not the country).

Polish is what distinguishes good software from great software. When you use an app or code that clearly cares about the edge cases and how all the pieces work together, it feels right. Unfortunately, this is the part of the software that most often gets overlooked, in favor of more features or more time on another project…

…When we say something is “polished” it means that it is free from sharp edges, even the small ones. I view polished software to be ones that are mostly free from frustration. They do what you expect them to and are consistent.…

…In many ways I want my software to be boring. I want it to harbor few surprises. I want to feel like I understand and connect with it at a deep level and that I’m not constantly being caught off guard by frustrating, time stealing, papercuts.

I have definitely experienced the difference between working on or with a project that has this kind of ‘polish’, where a deep-level connection with the code lets me be crazy effective with it, and working on or with projects that don’t have this. And on projects that started out with it, but lost it! (An irony is that it takes a lot of time, effort, skill, and experience to design an architecture that seems like the only way it would make sense to do it: obvious, and, as schneems says, “boring”!)

I was going to say “We all have experienced the difference…”, but I don’t know if that’s true. Have you?

What do you think one can do to work towards a project with this kind of “polish”, and keep it there?  I’m not entirely sure, although I have some ideas, and so does schneems. Tolerating edge-case bugs is a contraindication — and even though I don’t really believe in ‘broken windows theory’ when it comes to neighborhoods, I think it does have an application here. Once the maintainers start tolerating edge case bugs and sloppiness, it sends a message to other contributors, a message of lack of care and pride. You don’t put in the time to make a change right unless the project seems to expect, deserve, and accommodate it.

If you don’t even have well-defined enough behavior/architecture to have any idea what behavior is desirable or undesirable (what’s a bug?), you’ve clearly gone down a wrong path, incompatible with this kind of ‘polish’, and I’m not sure it can be recovered from. A Fred Brooks “Mythical Man Month” quote I think is crucial to this idea of ‘polish’: “Conceptual integrity is central to product quality.” (He goes on to say that having an “architect” is the best way to get conceptual integrity. I’m not certain; I’d like to believe this isn’t true, because so many formal ‘architect’ roles are so ineffective, but I think my experience may indeed be that a single architect, or a tight team of architects, formal or informal, does correlate…)

There’s another Fred Brooks quote that I can’t find now, and I really wish I could, because I’ve wanted to return to it and meditate on it for a while. It’s about how the power of a system is measured by what you can do with it divided by the number of distinct architectural concepts: a powerful system is one that can do a lot with few architectural concepts. (If anyone can find the original quote, I’ll buy you a beer or a case of it.)

I also know you can’t do this until you understand the ‘business domain’ you are working in — programmers as interchangeable cross-industry widgets is a lie. (‘Business domain’ doesn’t necessarily mean ‘business’ in the sense of making money; it means understanding the use cases and needs of your developer users, as they try to meet the use cases and needs of their end-users, which you need to understand too.)

While I firmly believe in the general caution against throwing out a system and starting over, a lot of that caution is about losing the domain knowledge encoded in the system (really, go read Joel’s post). But if the system was originally architected by people (perhaps past you!) who, in retrospect, didn’t have very good domain knowledge (or the domain has changed drastically?), and you now have a team (and an “architect”?) that does, and your existing software is consensually recognized as having the opposite of the quality of ‘polish’, and is becoming increasingly expensive to work with (“technical debt”) with no clear way out — that sounds like a time to seriously consider it. (Although you will have to be willing to accept that it’ll take a while to get to feature parity, if those were even the right features.) (Interestingly, Fred Brooks was, I think, the originator of the ‘build one to throw away’ idea that Joel is arguing against. I think both have their place, and the place of domain knowledge is a crucial concept in both.)

All of these are vague, hand-wavy ideas rather than easy-to-follow directions. I don’t have any easy-to-follow directions, and I don’t know if any exist.

But I know that the first step is being able to recognize “polish”: a well-designed, parsimoniously architected system that feels right to work with and lets you effortlessly accomplish things with it. Which means having experience with such systems. If you’ve only worked with ball-of-twine, difficult-to-work-with systems, you don’t even know what you’re missing, or what is possible, or what it looks like. You’ve got to find a way to get exposure to good design to become a good designer, and this is something we don’t know how to do as well with computer architecture as with classic design (design school consists of exposure to design, right?).

And the next step is desiring and committing to building such a system.

Which also can mean pushing back on or educating managers and decision-makers.  The technical challenge is already high, but the social/organizational challenge can be even higher.

Because it is harder to build such a system than not to, designing and implementing good software takes care, skill, and experience. Not every project deserves or can have the same level of ‘polish’. But if you’re building a project meant to meet complex needs, to be used by a distributed (geographically and/or organizationally) community, and to hold up for years, this is what it takes. (Whether that’s a polished end-user UX, or a polished developer-user UX (which means API), or both, depends on the nature of this distributed community.)


Command-line utility to visit github page of a named gem

I’m working in a Rails stack that involves a lot of convolutedly inter-related dependencies, which I’m not yet all that familiar with.

I often want to go visit the github page of one of the dependencies, to check out the README, issues/PRs, generally root around in the source, etc.  Sometimes I want to clone and look at the source locally, but often in addition to that I’m wanting to look at the github repo.

So I wrote a little utility to let me do gem-visit name_of_gem and it’ll open up the github page in a browser, using the MacOS open utility to open your default browser window.

The gem doesn’t need to be installed; it uses the rubygems.org API (hooray!) to look up the gem by name, and looks at its registered “Homepage” and “Source Code” links. If either one is a github.com link, it’ll prefer that. Otherwise, it’ll just send you to the Homepage, or failing that, to the rubygems.org page.

It’s working out well for me.

I wrote it in ruby (naturally), but with no dependencies, so you can just drop it in your $PATH, and it just works. I put ~/bin on my $PATH, so I put it there.
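If you just want the general idea without the gist, a minimal sketch of the approach might look something like the following. This is not the actual script, just an illustration using the rubygems.org API’s source_code_uri and homepage_uri fields and the MacOS open command.

#!/usr/bin/env ruby
# Minimal sketch of the gem-visit idea, stdlib only. Not the actual script.
require "net/http"
require "json"
require "uri"

gem_name = ARGV[0]
abort("usage: gem-visit name_of_gem") unless gem_name

# Look the gem up via the rubygems.org API.
response = Net::HTTP.get_response(URI("https://rubygems.org/api/v1/gems/#{gem_name}.json"))
abort("Can't find gem '#{gem_name}' on rubygems.org") unless response.is_a?(Net::HTTPSuccess)

info = JSON.parse(response.body)
candidates = [info["source_code_uri"], info["homepage_uri"]].compact.reject(&:empty?)

# Prefer a github.com link, then any homepage, then the rubygems.org page itself.
url = candidates.find { |u| u.include?("github.com") } ||
      candidates.first ||
      "https://rubygems.org/gems/#{gem_name}"

# MacOS `open` launches the URL in your default browser.
system("open", url)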

I’ll give you a gist of it (yes, I’m not maintaining it at present), but I’ve been thinking about the best way to distribute this if I did want to maintain it…. Distributing it as a ruby gem doesn’t actually seem great: with the way people use ruby version switchers, you’d have to make sure to get it installed for every version of ruby you might be using to get the command line to work in them all.

Distributing as a homebrew package seems to actually make a lot of sense for a very simple command-line utility like this. But homebrew doesn’t really like niche/self-submitted stuff, and this doesn’t really meet the requirements for what they’ll tolerate. But homebrew theoretically supports third-party ‘taps’, and I think a github repo itself can be such a ‘tap’, and hypothetically I could set things up so you could brew tap my_github_url, and install (and later upgrade or uninstall!) with brew… but I haven’t been able to figure out how to do that from the docs/examples, just verified that it oughta be possible! Anyone want to give me any hints or model examples? Of a homebrew formula that does nothing but install a simple one-file script, and lives in a third-party tap hosted on github?


Monitoring your Rails apps for a professional ops environment

It’s 2017, and I suggest that operating a web app in a professional way necessarily includes 1) finding out about problems before you get an error report from your users (or at best, before they even affect your users at all), and 2) having the right environment to be able to fix them as quickly as possible.

Finding out before an error report means monitoring of some kind, which is also useful for diagnosis to get things fixed as quickly as possible. In this post, I’ll be talking about monitoring.

If you aren’t doing some kind of monitoring to find out about problems before you get a user report, I think you’re running a web app like a hobby or side project, not like a professional operation. And if your web app is crucial to your business/organization/entity’s operations, mission, or public perception, it means you don’t look like you know what you’re doing. Many library ‘digital collections’ websites get relatively little use, and a disproportionate amount of what use they get comes from non-affiliated users who may not be motivated to figure out how to report problems; they just move on. If you don’t know about problems until they get reported by a human, a problem could exist for literally months before you find out about it. (Yes, I have seen this happen.)

What do we need from monitoring?

What are some things we might want to monitor?

  • error logs. You want to know about every fatal exception (resulting in a 500 error) in your Rails app. This means someone was trying to do or view something, and got an error instead of what they wanted.
    • even things that are caught without 500ing, but represent something not right that your app managed to recover from, you might want to know about. These often correspond to error or warn (rather than fatal) logging levels.
    • In today’s high-JS apps, you might want to get these from JS too.
  • Outages. If the app is totally down, it’s not going to be logging errors, but it’s even worse. The app could be down because of a software problem on the server, or because the server or network is totally borked, or whatever, you want to know about it no matter what.
  • pending danger or degraded performance.
    • Disk almost out of capacity.
    • RAM excessively swapping, or almost out of swap. (or above quota on heroku)
    • SSL certificates about to expire
    • What’s your median response time? 90th or 95th percentile?  Have they suddenly gotten a lot worse than typical? This can be measured on the server (time server took to respond to HTTP request), or on the browser, and actual browser measurements can include actual browser load and time-to-onready.
  • More?

Some good features of a monitoring environment:

  • It works: there are minimal ‘false negatives’ where it misses an event you’d want to know about. The monitoring service doesn’t crash and stop working entirely, or stop sending alerts without you knowing about it. It’s really monitoring what you think it’s monitoring.
  • Avoid ‘false positives’ and information overload. If the monitoring service is notifying or logging too much stuff, and most of it doesn’t really need your attention after all — you soon stop paying attention to it altogether. It’s just human nature, the boy who cried wolf. A monitoring/alerting service that staff ignores doesn’t actually help us run professional operations.
  • Sends you alert notifications (email, SMS, whatever) when things look screwy, configurable and at least well-tuned enough to give you what you need and not more.
  • Some kind of “dashboard” that can give you overall project health, or an at a glance view of the current status, including things like “are there lots of errors being reported”.
  • Maybe uptime or other statistics you can use to report to your stakeholders.
  • Low “total cost of ownership”: you don’t want to have to spend hours banging your head against the wall to configure simple things or to get it working for your needs. The monitoring service that is easiest to use will get used the most; and again, a monitoring service nobody sets up isn’t doing you any good.
  • public status page of some kind, that provides your users (internal or external) a place to look for “is it just me, or is this service having problems” — you know, like you probably use regularly for the high quality commercial services you use. This could be automated, or manual, or a combination.

Self-hosted open source, or commercial cloud?

While one could imagine things like a self-hosted proprietary commercial solution, I find that in reality people are usually choosing between ‘free’ open source self-hosted packages, and for-pay cloud-hosted services.

In my experience, I have come to unequivocally prefer cloud-hosted solutions.

One problem with open-source self-hosted is that it’s very easy to misconfigure them so they aren’t actually working. (Yes, this has happened to me.) You are also responsible for keeping them working — a monitoring service that is down does not help you. Do you need a monitoring service for your monitoring service now? What if a network or data center event takes down your monitoring service at the same time it takes down the service it was monitoring? Again, now it’s useless and you don’t find out. (Yes, this has happened to me too.)

There exist a bunch of high-quality commercial cloud-hosted offerings these days. Their prices are reasonable (if not cheap; but compare to your staff time in getting this right); their uptime is outstanding (let them handle their own monitoring of the monitoring service they are providing to you, and avoid infinite monitoring recursion); many of them are customized to do just the right thing for Rails; and their UIs are often great: they just tend to be more usable software than the self-hosted open source solutions I’ve seen.

Personally, I think if you’re serious about operating your web apps and services professionally, commercial cloud-hosted solutions are an expense that makes sense.

Some cloud-hosted commercial services I have used

I’m not using any of these currently at my current gig. And I have more experience with some than others. I’m definitely still learning about this stuff myself, and developing my own preferred stack and combinations. But these are all services I have at least some experience with, and a good opinion of.

There’s no one service (that I know of) that does everything I’d want in the way I’d want it, so it probably does require using (and paying for) multiple services. But price is definitely something I consider.

Bugsnag

  • Captures your exceptions and errors, that’s about it. I think Rails was their first target and it’s especially well tuned for Rails, although you can use it for other platforms too.
  • Super easy to include in your Rails project, just add a gem, pretty much (a rough sketch follows this list).
  • Gives you stack traces and other (configurable) contextual information (logged in user)
  • But additional customization is possible in reasonable ways.
  • Including manually sending ‘errors’
  • Importantly, groups repeated occurrences of the same error together as one line (which you can expand), to avoid info overload and crying wolf.
  • Has some pretty graphs.
  • Lets you prioritize, filter, and ‘snooze’ errors so they pop up again only if they recur after the ‘snooze’ time, and other features that, again, let you avoid info overload and actually respond to what needs responding to.
  • Email/SMS alerts in various ways including integrations with services like PagerDuty
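As a rough illustration of how little setup the basic Rails integration needs — a sketch from memory, so check Bugsnag’s current docs rather than treating this as authoritative; something_risky and SomeExpectedError below are just placeholders:

# Gemfile
gem "bugsnag"

# config/initializers/bugsnag.rb
Bugsnag.configure do |config|
  config.api_key = ENV["BUGSNAG_API_KEY"]
end

# After that, uncaught exceptions in the Rails app get reported automatically.
# You can also report a handled error manually:
begin
  something_risky
rescue SomeExpectedError => e
  Bugsnag.notify(e)
end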

Honeybadger

Very similar to Bugsnag: it does the same sorts of things in the same sorts of ways. From some quick experimentation, I like some of its UX better, and some worse. But it does have more favorable pricing for many sorts of organizations than Bugsnag — and offers free accounts “for non-commercial open-source projects”; I’m not sure how many library apps would qualify.

Also includes optional integration with the Rails uncaught exception error page, to solicit user comments on the error that will be attached to the error report, which might be kind of neat.

Honeybadger also has some limited HTTP uptime monitoring functionality, ability to ‘assign’ error log lines to certain staff, and integration with various issue-tracking software to do the same.

All in all, if you can only afford one monitoring provider, I think honeybadger’s suite of services and price make it a likely contender.

(Disclosure added in retrospect, 28 Mar 2017: Honeybadger sponsors my rubyland project at a modest $20/month. Writing this review is not included in any agreement I have with them.)

New Relic

New Relic really focuses on performance monitoring, and is quite popular in Rails land, with easy integration into Rails apps via a gem. New Relic also has javascript instrumentation, so it can measure actual perceived-by-user browser load times, as affected by network speed, JS and CSS efficiency, etc. New Relic has a sophisticated alerting setup that allows you to get alerted when performance is below the thresholds you’ve set as acceptable, or below typical long-term trends (I think I remember this last one can be done, although I can’t find it now).

The New Relic monitoring tool also includes some error and uptime/availability monitoring; for whatever reasons, many people using New Relic for performance monitoring seem to use another product for these features, and I haven’t spent enough time with New Relic to know why. (These choices were already made for me at my last gig, and we didn’t generally use New Relic for error or uptime monitoring.)

Statuscake

Statuscake isn’t so much about error log monitoring or performance, as it is about uptime/availability.

But Statuscake also includes a linux daemon that can be installed to monitor internal server state, not just HTTP server responsiveness. Including RAM, CPU, and disk utilization.  It gives you pretty graphs, and alerting if metrics look dangerous.

This is especially useful on a non-heroku/PaaS deployment, where you are fully responsible for your machines.

Statuscake can also monitor looming SSL cert expiry, SMTP and DNS server health, and other things in the realm of infrastructure-below-the-app environment.

Statuscake also optionally provides a public ‘status’ page for your users — I think this is a crucial and often neglected piece that really makes your organization seem professional and meets user needs (whether internal staff users or external). But I haven’t actually explored this feature myself.

At my previous gig, we used Statuscake happily, although I didn’t personally have need to interact with it much.

Librato

My previous gig at the Friends of the Web consultancy used Librato happily on some projects, so I’m listing it here — but I honestly don’t have much personal experience with it, and don’t entirely understand what it does. It really focuses on graphing metrics over time, though, I think. I think its graphs sometimes helped us notice when we were under a malware bot attack of various sorts, or otherwise were getting unusual traffic (in volume or nature) that should be taken account of. It can use its own ‘agents’, or accept data from other open source agents.

Heroku metrics

I haven’t actually used this too much either, but if you are deploying on heroku with paid heroku dynos, you already have some metric collection built in, for basic things like memory, CPU, server-level errors, deployment of new versions, server-side response time (not client-side like New Relic), and request timeouts.

You’ve got em, but you’ve got to actually look at them — and probably set up some notification/alerting on thresholds and unusual events — to get much value from this! So just a reminder that it is there, and one possibly budget-conscious option if you are already on heroku.

Phusion Union Station

If you already use Phusion Passenger for your Rails app server, then Union Station is Phusion’s non-free monitoring/analytics solution that integrates with Passenger. I don’t believe you have to use the enterprise (paid) edition of Passenger to use Union Station, but Union Station is not free.

I haven’t been in a situation using Passenger and prioritizing monitoring for a while, and don’t have any experience with this product. But I mention it because I’ve always had a good impression of Phusion’s software quality and UX, and if you do use Passenger, it looks like it has the potential to be a reasonably-priced (although priced based on number of requests, which is never my favorite pricing scheme) all-in-one solution for monitoring Rails errors, uptime, server status, and performance (I don’t know if it offers Javascript instrumentation for true browser performance).

Conclusion

It would be nice if there were a product that could do all of what you need, for a reasonable price, so you just need one. Most actual products seem to start focusing on one aspect of monitoring/notification — and sometimes try to expand to be ‘everything’.

Some products (New Relic, Union Station) seem to be trying to provide an “all your monitoring needs” solution, but my impression of the general ‘startup sector’ is that most organizations still put together multiple services to get the complete monitoring/notification they need.

I’m not sure why. I think some of it is just that few people want to spend time learning/evaluating/configuring a new solution, and if they have something that works, or have been told by a trusted friend that something works, they stick with it. Also, perhaps the ‘all in one’ solutions don’t provide as good UX and functionality in the areas they weren’t originally focusing on as the tools that did start out in those areas. And if you are a commercial entity making money (or aiming to do so), even an ‘expensive’ suite of monitoring services is a small percentage of your overall revenue/profit, and worth it to protect that revenue.

Personally, if I were starting from virtually no monitoring, in a very budget-conscious non-commercial environment, I’d start with Honeybadger (including making sure to use its HTTP uptime monitoring), and then consider adding one or both of New Relic or Statuscake next.

Anyone, especially from the library/cultural/non-commercial sector, want to share your experiences with monitoring? Do you do it at all? Have you used any of these services? Have you tried self-hosted open source monitoring solutions? And found them great? Or disastrous? Or somewhere in between?  Any thoughts on “Total cost of ownership” of self-hosted solutions, do you agree or disagree with me that they tend to be a losing proposition? Do you think you’re providing a professionally managed web app environment for your users now?

One way or another, let’s get professional on this stuff!


“This week in rails” is a great idea

At some point in the past year or two (maybe even earlier? but I know it’s not many years old) the Rails project started releasing roughly weekly ‘news’ posts that mostly cover interesting or significant changes or bugfixes made to master (meaning they aren’t in a release yet, but will be in the next one; I think the copy could make this more clear, actually!).

This week in Rails

(Note, aggregated in rubyland too!).

I assume this was a reaction to a recognized problem with non-core developers and developer-users keeping up with “what the heck is going on with Rails” — watching the github repo(s) wasn’t a good solution: too overwhelming, info overload. And I think it worked to improve this situation! It’s a great idea! It makes it a lot easier to keep up with Rails, and a lot easier for a non- or rarely-contributing developer to maintain an understanding of the Rails codebase. Heck, even for newbie developers, it can serve as pointers to areas of the Rails codebase they might want to look into and develop some familiarity with, or at least know exist!

I think it’s been quite successful at helping in those areas. Good job and thanks to Rails team.

I wonder if this is something the Hydra project might consider?  It definitely has some issues for developers in those areas.

The trick is getting the developer resources to do it, of course. I am not sure who writes the “This Week in Rails” posts — it might be a different core team dev every week? Nor what methodology they use to compile it: whether they’re writing about areas of the codebase they might not be familiar with either, just looking at the commit log and trying to summarize it for a wider audience, or what. I think you’d have to have at least some familiarity with the Rails codebase to write these well; if you don’t understand what’s going on yourself, you’re going to have trouble writing a useful summary. It would be interesting to do an interview with someone on the Rails core team about how this works, how they do it, how well they think it’s working, etc.

 


searching through all gem dependencies

Sometimes in your Rails project, you can’t figure out where a certain method or config comes from. (This is especially an issue for me as I learn the sufia stack, which involves a whole bunch of interrelated gem dependencies.) Interactive debugging techniques like source_location are invaluable, and it pays to learn how to use them, but sometimes you just want/need to grep through ALL the dependencies.
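(As a quick illustration of the source_location technique from a rails console — the object, class, and method names below are placeholders for whatever you’re chasing down:)

# Ask Ruby where a puzzling method is actually defined.
some_object.method(:some_method).source_location
# => e.g. a ["path/to/some_gem/lib/some_gem/thing.rb", line_number] pair

# The same works via a class, for instance methods:
SomeClass.instance_method(:some_method).source_location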

As several stackoverflow answers can tell you, you can do that like this, from your Rails (or other bundler-using) project directory:

grep "something" -r $(bundle show --paths)
# Or if you want to include the project itself at '.' too:
grep "something" -r . $(bundle show --paths)

You can make a bash function to do this for you; I’m calling it gemgrep for now. Put this in your bash configuration file (probably ~/.bash_profile on OSX):

gemgrep () { grep "$@" -r . $(bundle show --paths); }

Now I can just gemgrep something from a project directory, and search through the project AND all gem dependencies. It might be slow.

I also highly recommend setting your default grep to color in interactive terminals, with this in your bash configuration:

export GREP_OPTIONS='--color=auto'

idea I won’t be doing anytime soon of the week: replace EZProxy with nginx

I think it would be possible, maybe even trivial, to replace EZProxy with nginx, writing the code in lua using OpenResty. I don’t know enough nginx (or lua!) to be sure, but some brief googling suggests the tools are there, and it wouldn’t be too hard. (It would be too hard for me in C; I don’t know C at all. I don’t know lua either, but it’s a “higher level” language without some of the C pitfalls.) nginx is supposed to be really good at proxying, right? That was its original main use case, although as a ‘reverse proxy’; I don’t know if there’s any reason it wouldn’t work well as a forward proxy too, handling the EZProxy-style load of lots and lots of sessions to lots and lots of sites? Maybe. nginx is a workhorse.

You could probably even make it API-compatible with EZProxy on both the client and config file ends.

The main motivation for this is not to get something for ‘free’; EZProxy’s price is relatively reasonable. But EZProxy is very frustrating in some ways, and lacks the configurability, dynamic extension points, precision of logging and rate limiting, etc., that many often want. And EZProxy development seems basically… well, in indefinite maintenance mode. So the point would be to get a living application again, one that evolves to meet our actual present sophisticated needs.

I def won’t be working on this any time soon, but it sure would be a neat project. My present employer is more of a museum/archive/cultural institution than one with the use cases of an academic or public library, so we don’t have much EZProxy need.

(As an aside, it is an enduring frustration and disappointment to me that the library community doesn’t seem much interested in putting developer ‘innovation’ resources towards, well, serving the research, teaching, and learning needs of scholars and students, which I thought was actually the primary mission of these institutions. The vast majority of institutional support for innovative development is in the archival/institutional repository domain. Which is also important, but generally not to the day-to-day work of (e.g.) academic library patrons…. For some reason, actually improving services to patrons isn’t a priority for most administrators/institutions, and I’m not really sure why.)
