Update 2014: See ruby 2.1 String#scrub, or my scrub_rb gem for a pure-ruby ‘polyfill’ in other ruby versions.
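For those on 2.1+, a quick sketch of what `String#scrub` gives you (the stray `\xE2` byte below is just my example of an invalid UTF-8 sequence):

```ruby
# A string tagged as UTF-8 but containing a byte sequence illegal in UTF-8:
str = "abc\xE2def".force_encoding("UTF-8")

str.valid_encoding?  # => false

str.scrub            # => "abc\uFFFDdef" -- the unicode replacement char
str.scrub("?")       # => "abc?def"
str.scrub("")        # => "abcdef" -- just strip the bad bytes
```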
So it turns out you can have ruby strip illegal bytes for any arbitrary encoding (like UTF-8), or replace them with “?” or the unicode replacement char “�”.
You’ve got to use the second argument to String#encode, “source encoding”, and pass “binary” there.
```ruby
# replace any bad bytes in `str` with the unicode replacement char
str = str.encode("UTF-8", "binary", :invalid => :replace, :undef => :replace)

# or, without assuming our string is UTF-8, replace bad bytes
# in the string regardless of its encoding:
str = str.encode(str.encoding, "binary", :invalid => :replace, :undef => :replace)

# or of course the in-place mutating version
str.encode!(str.encoding, "binary", :invalid => :replace, :undef => :replace)
```
Which actually doesn’t make a lot of sense — “binary”, also called “ASCII-8BIT”, is essentially the “null encoding”, it means “no encoding at all, just bytes”. So that call would seem to say “transcode from ‘raw bytes’ to UTF8” — which of course doesn’t mean anything, there is no such transformation defined.
But apparently what it means to ruby is “don’t trans-code, but do be willing to respect the `:invalid => :replace` and `:undef => :replace` options.”
If you just do `str.encode(str.encoding, :invalid => :replace, :undef => :replace)`, it’s always a no-op: ruby stdlib says “It’s already IN that encoding, I don’t need to do anything, done!”, and doesn’t touch your invalid bytes to replace them.
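To see the difference concretely, here’s a minimal sketch (the stray `\xE2` byte is just my example of a bad byte):

```ruby
str = "abc\xE2def".force_encoding("UTF-8")
str.valid_encoding?   # => false

# The "binary" source encoding is what actually makes the replacement happen:
fixed = str.encode("UTF-8", "binary", :invalid => :replace, :undef => :replace)

fixed                 # => "abc\uFFFDdef"
fixed.valid_encoding? # => true
```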
This isn’t, as far as I know, documented anywhere. It’s not, in my opinion, very obvious at all. But, there it is. I found this out in a blog post that I’ve unfortunately lost so I can’t give credit where it’s due — I have no idea how they discovered it, they just dropped it in passing in their blog as if it was something anyone might know.
The long history of this realization
So, I need to do this. I have input which is theoretically in UTF8. But it sometimes has bad bytes in it — bytes that are illegal for UTF8.
Which means as soon as you try to do much of anything with it, you’ll get an `Encoding::InvalidByteSequenceError` (or, for some operations like regexp matching, an `ArgumentError`). You can rescue this exception — or check `#valid_encoding?` as soon as you read the input to discover it in advance — but then what? I guess you could just refuse to do anything else with that input, and say “Skipped that guy, it was illegal.”
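A sketch of that detect-and-refuse approach (again with `\xE2` standing in for any bad byte):

```ruby
input = "abc\xE2def".force_encoding("UTF-8")

# Check up front...
warn "illegal bytes in input" unless input.valid_encoding?

# ...or rescue when an operation that validates the string blows up
# (a regexp match raises ArgumentError on invalid bytes; transcoding
# raises Encoding::InvalidByteSequenceError):
begin
  input =~ /def/
rescue ArgumentError, Encoding::InvalidByteSequenceError
  # "Skipped that guy, it was illegal."
end
```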
But often, what I want to do instead is recover and continue, replacing the bad bytes with question marks to let the user know there was a bad byte which could not be interpreted. (Or sometimes with the empty string, to just ignore it.) This doesn’t seem like a weird thing to me to do. Plenty of other software does it, after all: open up a UTF8 doc with bad bytes in it in vi and see what happens. I’d think that would make it fairly obvious this is an ordinary thing to do.
But for some reason, I had a lot of trouble convincing anyone else in rubydom that this is something you’d ever want to do. Except my fellow library programmers, almost all of whom were like “Oh yeah, I need to do that all the time too.” Apparently our domain is such that we need to do this often, but most ruby devs don’t, I dunno.
I tried blogging the question, and posting my blog to reddit as a question. People either didn’t understand what I was asking, or tried to convince me I didn’t really want to do that after all, or else didn’t have any solution. (Perhaps my attempt at an engaging title back-fired and made people defensive, sorry). I tried asking on stackoverflow, same thing.
Encouraged by drbrain to do so, I filed a bug with ruby reporting that String in the stdlib was missing API to easily remove bad bytes. The response was again mostly that they didn’t understand the use case and it didn’t seem necessary — but even on the ruby tracker, nobody realized it was already in the stdlib! They instead argued that there was no need for it in stdlib, ha.
But I still needed to do it. Not just for strings in UTF8, but sometimes in a library function that will work on a string of any arbitrary encoding — replace or remove the bad bytes in it. Not necessarily just for UTF8.
And it wasn’t completely obvious how to do this, although it ended up not being too hard or complicated.
So I went and wrote my own gem to do it. drbrain kindly showed me a way to make my gem more reliable and efficient, even though he presumably still didn’t understand why I’d ever want to do this.
Turns out it was built into stdlib all along, but I never knew it until recently, about 10 months after I first started asking about it.
I guess I’ll release a new version of my gem that simply wraps `String#encode` with a “binary” source encoding argument.
Meanwhile, most of rubydom still won’t understand why anyone would ever want to do this. (Thanks fellow code4libbers for keeping me sane). If you are still unconvinced of why this is a perfectly ordinary thing to do, I’ve learned I’m incapable of explaining it or convincing you, so I won’t try anymore.
I hope it’s not an accident in the ruby stdlib, and won’t go away in the future. If it does, I guess I can go back to my gem with its fairly simple implementation. If it is intentional, it seems like it would be nice if it were actually documented. But in the meantime, maybe this blog post will be findable by google, and save someone else that needs this function all the tzuris I went through to get to it!