ebooks and privacy, the netflixes of ebooks?

Libraries have long considered reading habits, as revealed by circulation or usage records, to be private and confidential information. We have believed that freedom of inquiry requires confidentiality and privacy.

In the digital age, however, most people don’t actually seem too concerned with their privacy, and it’s commonplace for our actions to be tracked by the software we use and the companies behind it. This extends to digital reading habits too.

What role should libraries have in educating users, or in protecting their privacy when they use library-subscribed ebook services? Regardless of libraries’ role — and some of these services clearly have the potential to eclipse libraries entirely, replacing them with commercial flat-fee digital ‘lending services’ — how concerned should we be about the social effects of pervasive tracking of online reading habits?

From the New York Times, an article that takes it as a given that the titles you read are tracked, but talks about new technology to track exactly which passages you read, in what order, at what times, too:

As New Services Track Habits, the E-Books Are Reading You

SAN FRANCISCO — Before the Internet, books were written — and published — blindly, hopefully. Sometimes they sold, usually they did not, but no one had a clue what readers did when they opened them up. Did they skip or skim? Slow down or speed up when the end was in sight? Linger over the sex scenes?

A wave of start-ups is using technology to answer these questions — and help writers give readers more of what they want. The companies get reading data from subscribers who, for a flat monthly fee, buy access to an array of titles, which they can read on a variety of devices. The idea is to do for books what Netflix did for movies and Spotify for music.

[...]

Last week, Smashwords made a deal to put 225,000 books on Scribd, a digital library here that unveiled a reading subscription service in October. Many of Smashwords’ books are already on Oyster, a New York-based subscription start-up that also began in the fall.

The move to exploit reading data is one aspect of how consumer analytics is making its way into every corner of the culture. Amazon and Barnes & Noble already collect vast amounts of information from their e-readers but keep it proprietary. Now the start-ups — which also include Entitle, a North Carolina-based company — are hoping to profit by telling all.

“We’re going to be pretty open about sharing this data so people can use it to publish better books,” said Trip Adler, Scribd’s chief executive.

[...]

“Would we provide this data to an author? Absolutely,” said Chantal Restivo-Alessi, chief digital officer for HarperCollins Publishers. “But it is up to him how to write the book. The creative process is a mysterious process.”

The services say they will make the data anonymous so readers will not be identified. The privacy policies however are broad. “You are consenting to the collection, transfer, manipulation, storage, disclosure and other uses of your information,” Oyster tells new customers.

Before writers will broadly be able to use any data, the services must become viable by making deals with publishers to supply the books. Publishers, however, are suspicious of yet another disruption to their business. HarperCollins has signed up with Oyster and Scribd, but Penguin Random House and Simon & Schuster have thus far stayed away.

While the headline of the article is about new methods of tracking, it’s actually about several new businesses aiming to be “netflix for books” — flat-rate services that let you read all the ebooks in their collection. You know, like a library, but not free. These companies are having difficulties similar to libraries’ in working out deals with publishers, but if they succeed, what will it mean for libraries?


You never want to call html_safe in a Rails template

When looking at old Rails code, especially code that’s existed since Rails 2 days and been upgraded along with Rails, I sometimes see `html_safe` called directly in a view template:

WRONG!
<%= magicfy("foo").html_safe %>

It shows up in the old code because when someone (possibly me) upgraded to the version of Rails that first started tagging strings `html_safe`, there was some helper method producing HTML code that was winding up escaped and visible in the rendered output, and simply dropping in an `html_safe` seemed like a quick fix.

This is pretty much always wrong. In most cases, it’s going to be a potential security problem. In some cases it won’t be, but it can turn into one easily enough that it’s probably always a design flaw or ‘smell’. You’re fighting with Rails’ intended method of protecting you from HTML injection attacks — or just malformed HTML.

To illustrate, let’s imagine a helper method `magicfy` that simply wraps its argument in a span.magic:

def magicfy(value)
   content_tag(:span, value, :class => "magic")
end

Now let’s imagine we pass in an argument with some non-html-safe stuff in it.

<%= magicfy("1 > 2") %>

Because we’re using `content_tag`, the `>` will get properly escaped to a `&gt;`, and the string returned will be marked html_safe already; content_tag takes care of it for us. We’ll get back the equivalent of:

'<span class="magic">1 &gt; 2</span>'.html_safe

Same if the argument was a string that originally came from user-input in an attempt to do some HTML injection…

user_input = "<script> ..."
magicfy(user_input)
#=>  "<span class=\"magic\">&lt;script&gt; ...</span>".html_safe

So adding on an `.html_safe` in the template is a redundant no-op: the results already are html-safe — both in the sense that they are marked `#html_safe`, and in the sense that they truly are html-safe.

Now let’s look at another implementation of magicfy, completely wrong:

# WRONG!!!
def magicfy(value)
   "<span class='magic'>#{value}</span>"
end

Now under normal use, say we call `magicfy("foo")`, it returns the string we want, `<span class="magic">foo</span>`, but it’s not marked html_safe. So if you display this in a Rails template, all the `<` and `>` will get escaped (“&lt;span…”), and you’ll see the literal HTML code in the rendered page, not what you want.

So maybe some poor coder says: oh, I’ll just throw in an `.html_safe` in the template.

WRONG! WRONG!
<%= magicfy("foo").html_safe %>

That appears to work; the problem is that the string isn’t necessarily actually html-safe, because the helper doesn’t properly escape its input:

magicfy("1 > 2")

magicfy("<script>…")

You have made your code unsafe just by tagging on an `#html_safe` — depending on the arguments, you might deliver invalid HTML (with unescaped literal < or >), or, if there’s user input involved somewhere, you might even be vulnerable to HTML injection.

The helper method itself has to be responsible for ensuring html-safety by escaping the appropriate things, and the helper method itself should mark the string `.html_safe` only once it’s done that — the same place that’s responsible for escaping has to be the place that marks html_safe; otherwise you’re just blindly marking html_safe without actually knowing it.

Using Rails’ `content_tag` helpers is a great way to have html-safety just work appropriately. But you could do it yourself too (and sometimes have to), for instance:

# Safe, but it'd be easier to use content_tag
def magicfy(value)
   "<span class=\"magic\">#{ html_escape(value) }</span>".html_safe
end

If you do end up having to ensure html-safety and proper escaping yourself (always in the helper, never in the template), another really useful tool is the little-known safe_join helper.
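
For instance, here’s a sketch of a helper built on `safe_join` (the `magic_list` helper and its details are hypothetical, just for illustration):

def magic_list(values)
   # each content_tag escapes its value and returns an html-safe <li>
   items = values.collect { |v| content_tag(:li, v) }
   # safe_join joins the fragments (escaping any that aren't already
   # marked html-safe) and returns one properly html-safe string
   content_tag(:ul, safe_join(items), :class => "magic-list")
end

No manual escaping, and no `html_safe` call anywhere: every piece is escaped and marked safe by the code that constructed it.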

If used properly, the Rails html-safety methods are pretty darn good at avoiding any possibility of HTML injection vulnerabilities, or of invalid HTML due to improperly unescaped characters. But the `html_safe` method being called in template view code is, 99% of the time, a sign that you’re not doing things right and are opening yourself up to problems. Code should never call html_safe on a string unless that code constructed the string and actually ensured its html-safety!


rubyforge gone forever?

At one point, in pre-github days, rubyforge.org offered free hosting to open source ruby projects, with source control, listservs, documentation hosting, etc. It was a popular place to host open source ruby projects.

Since github took off, rubyforge has been much less popular.   rubyforge wasn’t receiving any development, or much support, or much communication from the developers/owners.  Most of the project pages still on rubyforge seemed to be abandoned.

Today I noticed rubyforge.org seemed to be down entirely. According to this article I found on the internet, it may have gone down November 6th, after a security exploit. However, according to the internet archive cache, it looks like it was up on November 21, but responding with an error message on December 11. Today, Dec 16, there’s simply nothing responding at rubyforge.org at all.

Umlaut still has/had its project listserv hosted on rubyforge. As recently as December 3, a message came across the list. I sent one this morning to test it, and it hasn’t come through yet, but the listserv was sometimes slow even when it was up, so I’m not sure.

I had been meaning to move the Umlaut listserv somewhere else for a while, but procrastinated in part because I wasn’t sure where.  But I finally took the plunge and created an Umlaut Google Group.  For various reasons I had been hoping to find an alternative to Google Groups, but I was unable to find anything that was free and that I felt like I could trust to stick around, and offer the features I wanted, including google-searchable threaded archives,  a decent UI, and decent spam protection.

It’s not clear if rubyforge.org is coming back or not. I think it’s probably for the best that it gets shut down — it was hosting many abandoned project pages that one would find on google and perhaps be confused into thinking were current, when in fact the project had moved to github. Although a more intentional transition, with announcements, would have been welcome. I wish I’d had the chance to look at the subscriber list for the Umlaut listserv, so I could have invited everyone to the new Google Group. And I wish the previous Umlaut listserv archives weren’t now pretty much lost from googleability with no notice. (It’s possible, although hardly certain, that I could retrieve them from archive.org somehow, but I lack the motivation to find out. And Google Groups offers no way to import archives anyway, bah.) But I can imagine that the developers/maintainers of rubyforge were getting pretty burnt out halfway maintaining something that was no longer much used or appreciated, and are glad to have an excuse to just pull the plug.


simple rails helper for bootstrap3 nav-pill

Bootstrap3 has a nice little nav-pill device. 

(Bootstrap2 does too and it may work the same, but I did this with bootstrap3, and am putting bootstrap3 in the text so people googling will find it. Googling bootstrap solutions gets harder now that there are versions that work differently and you want a solution for your version.)

In Rails, you’re going to want to mark the right link as ‘active’; bootstrap3 needs you to set a class on the <li>, not the <a>, for this. It also requires that even the active link be an actual <a> tag — not a non-clickable <span>, as I’ve often used before. Both of these mean that Rails’ little-known-but-useful `#link_to_unless` isn’t going to be helpful. Fortunately, link_to_unless is implemented in terms of the even-less-known-but-even-more-flexible `#current_page?`.

Here’s a very simple helper method you could put in your `application_helper` to generate the <li><a>…</a></li> inside a bootstrap3 <ul class="nav nav-pills">, appropriately marking a link active if its link params match the current page.

  # helper to make bootstrap3 nav-pill <li>'s with links in them, that have
  # proper 'active' class if active.
  # http://getbootstrap.com/components/#nav-pills
  #
  # the current pill will have the 'active' class on the <li>
  #
  # html_options param will apply to the <li>, not the <a>.
  def bootstrap_pill_link_to(label, link_params, html_options = {})
    if current_page?(link_params)
      # build a fresh class string rather than appending to (and so
      # mutating) whatever string the caller passed in
      html_options[:class] = [html_options[:class], "active"].compact.join(" ")
    end

    content_tag(:li, html_options) do
      link_to(label, link_params)
    end
  end

You call it the same way you call Rails’ link_to, e.g. `bootstrap_pill_link_to(some_label, some_url_or_params)`.
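
For instance, in a view (the `books_path`/`articles_path` route helpers here are hypothetical):

<ul class="nav nav-pills">
  <%= bootstrap_pill_link_to "Books", books_path %>
  <%= bootstrap_pill_link_to "Articles", articles_path %>
</ul>

Whichever link’s params match the current page gets rendered as <li class="active">…</li>.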

It could be made even fancier to do argument arity checking to support a &block on the link_to instead of a string label, etc. But I didn’t need that, this does the job.

It does leave the active link as a fully clickable nav link. I’m used to making the active link a non-clickable span instead, but bootstrap3’s nav-pill would make that tricky, which made me realize it’s probably fine, or even better UX, as a clickable link anyway. (Among other things, it means you can right-click it to bookmark it or whatever, even though it’s active. I dunno.)


bootstrap3 collapse without id’s

The bootstrap3 collapse javascript plugin is actually quite flexible, but its very terse documentation and examples might make it a bit hard to figure out how to use it.

The docs example shows a data-toggle attribute on the trigger for the collapsing element, the thing that the user will click on to expand/contract. Then, in data-target on the same element, you supply a selector (typically an id) which identifies the content body that will be shown/hidden.
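
Something like this, roughly following the docs (the `demo` id is just for illustration):

<a data-toggle="collapse" data-target="#demo">show/hide</a>
<div id="demo" class="collapse">content body here</div>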

If you are dynamically generating content one way or another, you might not have convenient id’s on the content bodies.

So, we might start looking at the various options the collapse plugin takes, and at using it “via Javascript; enable manually…”

There were a couple things there that were confusing to me:

  1. When applying manually with `$some_element.collapse();`, you apply it to the content body, not the trigger. (Even though in the default, purely data-attribute-driven usage, you apply those data attributes to the trigger.)
  2. Then you can make your own trigger: simply something which, on click, finds the relevant body and triggers the collapse toggle action, maybe like `$(this).next(".content-child").collapse("toggle");` (see the sketch after this list).
  3. The collapse plugin will add the class `collapse` to your content body, which is styled by bootstrap CSS to be hidden. To avoid any JS ‘flicker’, you probably want to manually add the `collapse` class to your content body element yourself, so it always starts out hidden. Then, on page load, when you want to apply the collapse plugin to your body, apply it like `some_element.collapse({toggle: false});`, to avoid toggling its state to visible when you apply the plugin.
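
Putting those pieces together, a minimal sketch: `.content-child` is from the snippet above, and `.content-trigger` is a hypothetical class for the clickable element.

// on page load, attach the plugin to each content body; {toggle: false}
// keeps the act of attaching from toggling the body visible
$(".content-child").collapse({toggle: false});

// a hand-made trigger: on click, find the adjacent content body
// and toggle it. no ids required.
$(".content-trigger").on("click", function() {
  $(this).next(".content-child").collapse("toggle");
});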

This stack overflow item, with accompanying jsfiddle, pretty much provides this solution and a demonstration. But its explanation is also fairly laconic; it took me a bit of head scratching and looking at source code to figure out what was actually going on.

no-js?

Personally, I still try to follow the old-school doctrine of unobtrusive javascript, where if a user-agent has javascript turned off, everything will still display perfectly usably.

So adding the .collapse class directly to my content body produces an undesirable effect — if javascript is turned off, the body will still be hidden by the CSS, but there will be no way to show it by clicking the collapse trigger.

This problem applies to the by-the-example use of bootstrap3 collapse too, since it also has the .collapse class applied directly to the content body.

The solution is pretty simple if you use the “no-js” class technique originally demonstrated by Modernizr: put a `no-js` class on your <html> element, have javascript remove it on page load, and then override the hiding for the no-js case:

.no-js .collapse {
   display: block;
}

Google scholar adds citation management

Thanks to Clarke, who pointed out in a comment on a recent post here that Google Scholar now has a “saved citations” citation management feature.

I haven’t done any experimenting with it; anyone have a review? What do you think: is this going to end up drawing a significant portion of our patrons’ use away from other citation management alternatives (including some we pay for)?

Google Scholar Library. 

Today we’re launching Scholar Library, your personal collection of articles in Scholar. You can save articles right from the search page, organize them by topic, and use the power of Scholar’s full-text search & ranking to quickly find just the one you want – at any time and from anywhere. You decide what goes into your library and we’ll provide all the goodies that come with Scholar search results – up to date article links, citing articles, related articles, formatted citations, links to your university’s subscriptions, and more. And if you have a public Scholar profile, it’s easy to quickly set up your library with the articles you want – with a single click, you can import all the articles in your profile as well as all the articles they cite.


personal archiving

Without institutional support, without worrying about legality, she just… archived for posterity.

In a storage unit somewhere in Philadelphia, 140,000 VHS tapes sit packed into four shipping containers. Most are hand-labeled with a date between 1977 and 2012, and if you pop one into a VCR you might see scenes from the Iranian Hostage Crisis, the Reagan Administration, or Hurricane Katrina.

It’s 35 years of history through the lens of TV news, captured on a dwindling format.

It’s also the life work of Marion Stokes, who built an archive of network, local, and cable news, in her home, one tape at a time, recording every major (and trivial) news event until the day she died in 2012 at the age of 83 of lung disease.

…There weren’t any provisions for the tape collection in Stokes’s will, but anyone who knew her knew she wanted them to be used as an archive. She had been born at the beginning of the Great Depression, and like many people of her generation, saved a lot of things. Scattered throughout the family’s various properties, she had stored a half-century of newspapers and 192 Macintosh computers. But the tapes were special. “I think my mother considered this her legacy,” Metelits says.

The Incredible Story of Marion Stokes, Who Single-Handedly Taped 35 Years of TV News
From 1977 to 2012, she recorded 140,000 VHS tapes’ worth of history. Now the Internet Archive has a plan to make them public and searchable. Sarah Kessler, fastcompany.com.

Another article I was alerted to on HackerNews. I’ve noticed that the audience on HackerNews is very interested in library-and-archive type issues (whether involving actual libraries and archives or not, but frequently so), as well as generally quite supportive of actual libraries, archives, librarians, and archivists. I worry some of the support is more nostalgic than anything else, or maybe aspirational is a better way to think of it: supportive of what libraries and librarians could be doing. (Not to take away from the awesome stuff the Internet Archive is doing.)


one to anonymously leave on your boss’s desk without being spotted leaving it

I had an office. Now I don’t.

I’m not looking for your pity; I want your own righteous indignation. Because you, too, deserve an office. We deserve better. We all deserve offices. But it gets worse: We’ve been told that our small squat in the vast openness of our open-office layouts, with all its crosstalk and lack of privacy, is actually good for us. It boosts productivity. It leads to a happy utopia of shared ideas and mutual goals.

These are the words of imperceptive employers and misguided researchers. The open-office movement is like some gigantic experiment in willful delusion. It’s like something dreamed up in Congress. Maybe we can spend less on space, the logic seems to go, and convince employees that it’s helping them. And for a while, the business press (including, let’s be honest, some of the writing in this very publication) took it seriously…

…No. This is a trap. This is saying, “Open-office layouts are great, and if you don’t like them, you must have some problem.” Oh, I have a problem: It’s with open-office layouts. And I have a solution, too: Every workspace should contain nothing but offices.…

…Peace and quiet and privacy and decency and respect for all. We people who spend more waking hours at work than we do at home, we people who worked hard to be where we are, we deserve a few square feet and a door. Call me old fashioned, call me Andy Rooney if you must, but Andy Rooney had an office.

Offices for All! Why Open-Office Layouts Are Bad for Employees, Bosses, and Productivity

In part one of our two-part series, Fast Company senior editor Jason Feifer makes a case for giving all workers a little alone time, behind an office door.

fastcompany.com

…“There’s some evidence that removing physical barriers and bringing people closer to one another does promote casual interactions,” explains a Harvard Business Review piece that nicely summarized the research on this subject. “But there’s a roughly equal amount of evidence that because open spaces reduce privacy, they don’t foster informal exchanges and may actually inhibit them. Some studies show that employees in open-plan spaces, knowing that they may be overheard or interrupted, have shorter and more-superficial discussions than they otherwise would.”…


Extending your Solr indexing via a gem with traject

One of the goals of the traject MARC->Solr indexing tool was to support easy re-use of mapping logic and code between projects and institutions.

I’ll talk here about how I just used some code shared by Bill Dueber at University of Michigan, to add ‘physical carrier’ information to my indexing rules for a ‘format’ facet.

The Background

Getting form/format/carrier/genre information out of MARC is very tricky.  I’m talking about categories like “Newspapers”, “Pamphlets”, “Dissertations”, “CDs”, “DVDs”, “Print”, “Online”, “Video games,” and similar and related categories.

It’s tricky in part because the way we humans think of these things is not very clear or consistent. Even one person’s internal categories aren’t necessarily very consistent if you actually try to tease them out, let alone consistent between people and communities, and over time: needs and categories have changed over the historical sweep of MARC standardization and cataloging record creation, making it even harder with our large collections of cataloging created over decades!

So anyhow. It’s a tricky problem, and it’s not totally MARC’s fault. But different organizations and software come up with different algorithms and heuristics. I took the one that we’ve been using where I work, and made it a built-in distributed option in traject, just to have something there, but it’s hardly the ultimate solution or anything.

The set of algorithms we’ve been using doesn’t cover physical ‘carrier’ types like CD, DVD, Laserdisc, VHS, LP, or what have you. Getting those out of MARC can be pretty tricky — in some cases, you’ve got to scan free-entry text fields for things like “sound disc. 12 in. 33 1/3 rpm” to know that means a standard vinyl long-playing record (LP). There’s not necessarily one right way to do this; it takes some experimentation, development, and iteration to come up with the right set of rules.

Bill Dueber has put the University of Michigan’s logic for form/format/carrier categorization up as a traject-compatible ruby gem. It’s also not the ultimate word or anything — it has its own idiosyncrasies, and in some cases contains logic based on local U of Michigan cataloging practices or local U of Michigan call numbers.

But the umich gem does have logic in it for detecting the particular physical carrier categories that we were most interested in: audio CD, video DVD, LP, VHS.

Adding Umich’s logic to my indexing

So I figured, why not try using Bill’s gem in my traject project, to add those classifications on to what I’ve already got.

Turns out it is quite simple and concise to do so.

First I added `gem "traject_umich_format"` to my local `Gemfile`, since I’m using bundler to manage my gem dependencies in my traject project, as is common in ruby projects, and is optional but recommended for traject.
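
That is, just one line in the Gemfile, followed by a `bundle install`:

# Gemfile
gem "traject_umich_format"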

Then I just add these lines to my traject indexing configuration, to ask the traject_umich_format gem to classify a record; take only the categories I’m interested in; and add them on top of the existing values already added by my own code to the Solr ‘format’ field.

# add in DVD/CD etc carrier types courtesy of umich gem
# https://github.com/billdueber/traject_umich_format
require 'traject/umich_format'
umich_format_map = Traject::TranslationMap.new('umich/format')
to_field "format" do |record, accumulator|
  types = Traject::UMichFormat.new(record).types
  # only keep the ones we want
  # (previously tried more that didn't work with our catalog)
  types = types & %w{RC RL VB VD VH VL}
  # translate to human with translation map
  accumulator.concat types.collect {|t| umich_format_map[t]}
end

That was really it. Bill’s gem (rather cleverly) separates the classification itself from the human labels used in the actual facets, so we use both parts of the process — classify, take just the classification codes we’re interested in from the output of classifying, run them through the map to turn them into human-presentable labels, and add them into the existing ‘format’ field.

Neat, eh? On the downside, this is not necessarily as ‘high performance’ as it could be, because of the way Bill’s code is written: I end up asking it to calculate all the classifications, then throw away the ones I’m not interested in. But I’m not too worried about it; I think it’ll be fine.

It spares me from having to re-invent the wheel of how the heck you figure out if something is an LP from MARC; and possibly even more importantly, by sharing code with Bill, when either of us finds bugs, or edge cases where our heuristics can be improved, we can easily share them with each other — in fact, more or less automatically share them with each other, by making improvements to the shared ruby gem.

Consider traject?

I have to admit, since we announced traject about a month ago, I am aware of nobody other than me and Bill trying it out.

I had hoped to get some other beta testers before I called it a 1.0.0 release, but what can you do. It will be tagged 1.0.0 soon, but regardless of when it gets that tag, traject is mature, robust, ready for business, and being used in production by both me and Bill. Consider taking it for a test drive? If you have any frustrations with your current Solr indexing solution related to keeping your logic well-organized, tested, re-usable and shareable between projects, and supporting rapid development and quick iteration — you may find some things you like in traject.

Of course, formats are still complicated

We don’t have this code in production here; we’re just demo’ing it out. “Formats” remain complicated because our own mental models of them are so varied and inconsistent — it’s not entirely clear what the optimal UI for this stuff is, and we don’t necessarily have the time (or want to prioritize the time) to figure it out. We’re deciding whether the basic implementation, based on our current UI supplemented by Bill’s code, is already ‘good enough’ to add value for our patrons.


library vendor wars

We libraries, as customers, would prefer to be able to ‘de-couple’ content and presentation. We want to be able to decide what content and services to purchase based on our users’ needs and our budgets; and, separately, to decide what software to use for UX and presentation — whether proprietary or open source — based on the features and quality of the software, and our budgets.

To make matters more complicated, we want to take our content and services — purchased from a variety of different vendors — and present them to our users as if they were one single ‘app’, one single environment, as if the library were one single business. This makes matters more complicated, but it also makes the ‘de-coupling’ of the UX layer from underlying content and services even more important. Because if the content and services we purchase from various vendors are tied only to those vendors’ own custom interfaces and platforms, there’s no way to present them to users as a unified, integrated whole. (How would you feel about Amazon or Netflix if they made you use one website for Science Fiction, and a completely different website that looked and behaved completely differently for History?)

Of course, our vendors have different interests. A vendor of content and services could decide that the more places their content and services can be used, the more valuable those content and services are — so they’d want to allow their content and services, once purchased, to be used in as wide a variety of proprietary and open source UXs as possible. Or a vendor could decide that that approach dilutes their brand, and that they should instead use their content and services as ‘lock-in’, to try to ‘vertically integrate’ and get existing customers to buy even more products from them. You want these journals to be available in your ‘discovery’? Then you’d better buy our discovery platform, because that’s the only place these journals are available, and besides we’ll cut you a ‘big deal’ discount when you buy our discovery product too.

I am honestly not really sure which approach is better for the vendors. But I know which approach is better for the libraries. Library and vendor interests may not be aligned here, at least in the short- and medium-terms. In the long range view, certainly our vendors need us to survive as customers, and we need some vendors to exist to sell us things we can’t feasibly provide in-house or through consortium alone.

The attempt by various vendors to ‘lock in’ customers will make it impossible for us to present services in the integrated UX that is necessary for us to remain credible and valuable to our users. We’ll have vendor-purchased content and services available only in a number of separate vendor ‘silos’ or ‘walled gardens’. It’s not actually a question of purchase costs; it’s an issue of pure technical feasibility. We’ll either start limiting our purchases to one vertically integrated vendor (which every vendor would be happy with, as long as we pick them), or we’ll continue to deliver content and services as a patchwork of pieces fitting poorly together, confusing our users and further degrading the perception of the library as a competent organization.

Here’s an email sent out today from Ex Libris. I don’t know of any reason I would not be allowed to share it publicly; I hope there’s no reason I’m not aware of.

Dear Primo Central Customers,

This is to inform you that Thomson Reuters has decided to withdraw its Web of Science content from Primo Central starting January 1, 2014. We understand this decision encompasses all the major library discovery solutions.

Thomson Reuters informed us that they are not planning a broad market communication of any sort; rather, they will communicate through their representatives on an individual customer basis. The message below is adapted from the information that Thomson Reuters is sharing with individual customers:

“Thomson Reuters has decided to focus on enabling customers and end users to use the Web of Science research discovery environment as the primary interface for authoritative search and evaluation of citation connected research. For this reason Thomson Reuters will no longer make Web of Science content available for indexing within EBSCO, Summon, or Primo Central. Thomson Reuters will, however, continue to support Web of Science accessibility via integrated federated search tools that are available in Primo or other systems.”

The impact of this decision on your end users will be limited because the vast majority of the Web of Science records are available in Primo Central via Elsevier Scopus and other resources of similar quality. The Scopus collection is now fully indexed in the Primo Central Index and is searchable by mutual customers of Scopus and Primo Central.

If you have any comments or additional questions, please feel free to contact [omitted]

Kind Regards,

Primo Central Team