Principle of avoiding “false promises” in interfaces

Lately I keep thinking about an idea I've come to think of as a “false promise” in a user interface. I'm not sure whether other people already recognize this concept under some other label; let me know if you know of one.

But the idea is that your software shouldn't suggest, by its inputs, that it can do something it really can't do at all. This becomes especially tricky with our library data and systems, which in fact can't do a lot of things. Some examples will help.

SFX ‘citation linker’ input screen

SFX by default has a screen that lets you input an article citation, and then SFX will try to find links or other information for it. (I don't want to put a link to mine here because I don't want to attract the robots.)

Now, to begin with, this is both an annoying process for the user, and an error-prone process for SFX. But I want to draw your attention to two particular fields on that screen: “Author” and “Article Title”.

The default input screen asks you to enter an “Author” and an “Article Title”. However, in (I'd estimate) 95–99% of cases, SFX can't actually do anything with the author or title you've entered. They don't help SFX find a match; they don't affect SFX's functionality at all.

So our interface implies that the user ought to enter an author and title, a painful and annoying process for the user. The “false promise” here, in my opinion, is that doing so will accomplish anything at all. Granted, in a tiny minority of cases it will, which is why SFX puts the fields there. But that means we're making a “false promise” in the vast majority of cases. We're “leading the user on.”
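To make the mismatch concrete, here's a sketch of what's going on under the hood. The parameter names (`rft.issn`, `rft.au`, `rft.atitle`, etc.) are real OpenURL 1.0 journal-format keys, but the resolver URL is made up, and the assumption that the resolver matches only on ISSN/volume/page is exactly the post's claim, not documented SFX behavior:

```python
from urllib.parse import urlencode

# A hypothetical OpenURL built from the citation-linker form.
citation = {
    "rft.issn": "0028-0836",
    "rft.volume": "171",
    "rft.spage": "737",
    "rft.au": "Watson, J. D.",          # collected from the user...
    "rft.atitle": "Molecular Structure of Nucleic Acids",
}

# ...but if (as argued above) the resolver effectively matches on
# ISSN + volume + start page alone, author and article title never
# change the outcome, no matter how carefully the user typed them.
match_keys = {"rft.issn", "rft.volume", "rft.spage"}
used = {k: v for k, v in citation.items() if k in match_keys}

openurl = "https://resolver.example.edu/sfx?" + urlencode(used)
print(openurl)
```

The form fields and the matching logic simply don't line up: two of the five inputs are dead weight.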

MARC relator codes

This might be a better example. So MARC fields for listing controlled authors or other contributors (100 and 700) theoretically allow the data to say particularly what relationship the contributor has to the work at hand. (Author? Editor? Illustrator? Performer on a musical composition? Composer? Wrote a preface?).

Most OPAC interfaces don't do much with this. But if you start thinking about what you might do, an initial naive approach would be to let the user limit a search by these relator codes. Don't just give me any record that has Noam Chomsky in any 100 or 700 field (that's what our traditional interfaces do, but for prolific people it can return too much); I really only want books where Noam Chomsky wrote a preface.

So, okay, maybe you go ahead and provide this limit in your search interface. The problem is that the vast majority of our data doesn't have these relator codes. If you just search for Noam Chomsky with the relator code for ‘wrote a preface’, you're going to miss most of the books Noam Chomsky really did write a preface for.

You might miss it because Noam Chomsky is in a 700 field with no relator code. Or you might miss it because we don’t often record people who wrote prefaces at all.

In either case, though, I think the interface was making a ‘false promise’: it suggested you could limit a search by the contributor's role, but our data doesn't really support that at all. The results are going to be misleading if the user assumes the interface can really do what it suggests it can.
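The failure mode above can be sketched in a few lines. The records and titles here are toy data, and the relator code values are illustrative (I'm using “aui”, the MARC relator code for “author of introduction, etc.”, as a stand-in for “wrote a preface”):

```python
# Toy records: (name, relator_code) pairs standing in for 100/700 fields.
# Book B may well be a preface Chomsky wrote, but no code was recorded,
# which is the root of the problem described above.
records = [
    {"title": "Book A", "contributors": [("Chomsky, Noam", "aui")]},
    {"title": "Book B", "contributors": [("Chomsky, Noam", None)]},
    {"title": "Book C", "contributors": [("Chomsky, Noam", "aut")]},
]

def limit_by_relator(records, name, code):
    """Naive limit: keep records where `name` appears with exactly `code`."""
    return [r["title"] for r in records
            if (name, code) in r["contributors"]]

# The limited search silently drops Book B along with Book C.
print(limit_by_relator(records, "Chomsky, Noam", "aui"))  # ['Book A']
```

The query is doing exactly what it promised; it's the data that can't hold up its end.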

So?

So what do you think? Can you think of other examples of ‘false promises’ that our interfaces make?

Identifying the ‘false promises’ is easier than fixing them. Usually they are there because of limitations in our software or data that are not easy or cheap to resolve. If you really got rid of all the false promises, you'd have to get rid of much of your functionality! Or pepper it with disclaimers and limitations that most users won't read anyway, and that just make us look kind of incompetent if they do. (“WARNING: You can TRY to search on relator code, but your results will only include a tiny percentage of things that really matched your search.”)


5 Responses to Principle of avoiding “false promises” in interfaces

  1. Liz says:

    While you go into more detail and may have a slightly different frame around the issue, this brought to mind the fourteen heuristics used in OCLC heuristic evaluations. See #14: “Don’t lie to the user.”

    http://www.oclc.org/usability/heuristic/set.htm

  2. jrochkind says:

    An interesting list. It’s probably also related to the general notion of “Affordances” that OCLC includes on their list too: “Does the user understand what the text/graphic will do before they activate it?”

    But we sort of have another addition — does the user THINK they understand what it does, but they’re completely wrong. heh.

  3. Lukas Koster says:

    A very obvious example of “false promises” is the whole metasearch concept. In essence a great idea, but in reality lots of false promises. For instance: in MetaLib you always have the 7 default search indexes you can use for all databases in your system: “anywhere”, “title”, “author”, “subject”, “year”, “ISBN”, “ISSN”. But most of the databases in your system don't support ALL of these indexes, or handle these indexes very differently from others, even the “anywhere” search index! Not to mention searching for subjects in databases in different languages simultaneously, or all the different ways author names are stored and indexed.
    With metasearch you never know if you find everything you searched for.

  4. Martha M. Yee says:

    And have you ever seen a “relevancy” ranking function that actually put documents relevant to you and your current search first? See Thomas Mann’s latest paper for a good example of a Google “relevancy” ranking failure (at http://www.guild2910.org/)…

  5. Well, honestly, most ‘relevancy’ rankings do a better job of putting useful documents first than a typical OPAC keyword search that ranks documents by accession date! “Relevancy” might not be quite the right word for what these algorithms do, which is to try to put the documents that most closely match the entered search terms first. Maybe using the word ‘relevancy’ in a public interface is itself a form of ‘false promise’. But I think they often do a good enough job that most users will prefer them to the alternative, which is practically a random ordering.
