Karen Schneider remarks on some misconceptions about the negative aspects of open source, which prompts me to respond with something I’ve been meaning to write for a while: what happens when some of those negative aspects come true through our own lack of care. Karen says:
Second, this article doesn’t really get its head around the concept that in open source development, development rarely stays “local” (even if it starts there).
Sadly, in libraryland, open source development too often DOES stay local. The library approach to ‘open source’ has often been: fork early, fork often. That is, take the code, and when you need it to do something slightly different, just hack it to do what you need, without regard for maintaining a common codebase.
This has happened with lots of library-used open source software. One prime example is the “Fac-Back-OPAC” and its numerous forked progeny. (I was wrong)
Why does this happen? Because, in the short term, it’s easier and quicker to get something up and running this way that meets your perceived local needs. Maintaining a common codebase capable of accommodating diverse local needs takes both more initial time and, significantly, some software development expertise. Library developers are frequently short on time, pressured by their bosses to meet local needs as quickly as possible–and are also frequently self-taught and not very experienced in complex, coordinated software projects.
In the long term, this is a very inefficient use of programming resources, which leads to the negatives outlined in the report Karen responds to. Everybody’s got their own copy of the code, enhancements and bug fixes cannot be easily shared, and we are not collaborating.
Not all open source will be successful open source
Every major open source project I know of has a development process, a project development timeline, and well-orchestrated development.
Right, and few library open source projects I know of have this. (Koha and Evergreen being exceptions, to be sure.) Without figuring out how to achieve this, these library projects are not going to become successful (let alone ‘major’) open source projects. A library considering open source development would do well to realize this, evaluate a project accordingly, and figure out how to allocate its institutional resources toward changing this state of affairs.
Certainly, this is not an inherent part of open source; it is instead the opposite of successful open source practice, which does exist. But changing this state of affairs in libraryland is not trivial. The first step is acknowledging the problem—rather than either pretending the problem does not exist or incorrectly believing it is inevitable with open source software.
Next steps would, in my opinion, involve major libraries realizing that they need actual software development expertise in-house if they are going to participate in sustainable, non-vendor-supported open source development.
(Again, Koha and Evergreen are exceptions in having vendor support, which indeed protects them from many of the problems noted in that report.) There is no such thing as a free lunch with open source, and if libraries think there is, they are making a mistake.
(One significant organizational culture problem too many libraries suffer from is the opposite of ‘not invented here’ syndrome. You could call it “I’d rather have someone else to blame” syndrome. It’s certainly not unique to libraryland, as exhibited by corporations paying exorbitant amounts to ‘consultants’ who really have no more expertise than their local staff. But managers get what they paid for–someone else to blame when it doesn’t work.)
I do love open source
But that also doesn’t mean that open source is inherently problematic. It depends on the product. It depends on whether paid support is available, and whether you want to pay for it. It depends on the strength of the development community–and if you want to build a strong development community, that takes resources. These things don’t come without effort, cleverness, experience, and allocation of resources.
And again, lest anyone misunderstand, Evergreen and Koha are the very best of breed of libraryland open source. It would be a mistake to assume the flaws of less mature products are their flaws–but it would also be a mistake to assume that their success will automatically accrue to other open source projects.
And Karen makes one excellent point about the staff time that goes into localizing, customizing, and maintaining proprietary software as well. I know at my institution, we spend a whole lot of staff time on (usually unsupported) localizations and customizations of proprietary software:
First, I’ve watched large teams of library developers struggle to “adapt” proprietary software, or really, to develop around its inadequacies and hidden source code. For systems of any significant size and political complexity, “turnkey” is a fantasy. What would you rather “adapt”: code that is free to view, share, and download — and discuss and debug on public lists and chatrooms — or some vendor’s super-secret code you can’t entirely view and are often bound by contract not to discuss in the open?
I agree with Karen, but we need to do it right, and too many library managers and library programmers don’t yet realize what it takes to do it right.