It’s kind of amazing to me that after over fifteen years in this business, I’m looking at a situation that pretty much hasn’t changed for small libraries looking for an automation system.  There wasn’t much available for them at a reasonable cost back then, and there is even less today.  Go ahead. Show me where I’m wrong.

Seriously, a small library, a small public library, one that is supported, sometimes begrudgingly, by (too often non-existent) local public funds, does not have a lot to spend on annual fees for a library automation system.  They have even less to spend on a tech person who could install and maintain one of the FLOSS options.  And even if they do, there’s still the recurring cost of a web server to put it online.  O.K., maybe they could tie into the local government’s web server, assuming the local government entity has one (probably a “yes” in the U.S., but not in other parts of the world).

I actually did a complete retrospective conversion at a library years ago.  It was a special library that had the funds to support a quality system.  I was shocked and horrified at the “system” (basically a giant word processing document) that was being used to track titles.  There was no mechanism to track costs.  I have since come to appreciate the efforts of librarians in small libraries doing the best they can with what they have, and with the skill sets they have, to manage their collections.  Hello, World: librarians are amazing, and you should throw money at them, because they do amazing things with your resources.

So, in the history of library systems, the first stop is Marshall Breeding’s excellent record of what was, what is, and who ate up whom.  Although this doesn’t cover everything being used out there, it does give an interesting picture of the developing landscape.  I was surprised and amazed that the Georgia Public Library Service, back in 2004, chose to build its own automation system (Evergreen) for libraries in the state rather than use one of the existing systems.  There was, of course, Koha, out of New Zealand.  But that one got mired in nasty disputes over code and trademarks.  So far, so good with Evergreen (keeping fingers crossed).

Next stop is Wikipedia’s list of Next-Generation library systems, especially the Comparison chart, though you might first want to scan the article’s brief explanation of what Next-Generation means.  The list is notable because it includes both large systems and systems smaller libraries can use.  But note that these are all web-based systems.  Some of them, however, can also function on a stand-alone basis, on the desktop.  This is important, because the most basic need of libraries is library management software.  Getting it on the web is secondary for librarians (although not for their patrons, of course).

So let’s take a stab at what some of the possibilities are for small (mostly) underfunded libraries today.  There are two perspectives to consider here:  libraries with staff who are systems-capable, and libraries with limited or no staff capable of managing the backend of a system.

In the first category (libraries with systems-capable staff), we have, first, systems that have a stand-alone option, and second, systems that are add-ons to content management systems a library may be using, or have access to, for their web site (so far, I’m only seeing Drupal modules – are there any others out there?).

In the second category, it’s pretty bleak without (1) hiring someone, or a service, to set up a system, or (2) a funding stream to pay annual fees.  In the first case, I refer back to one of the four above that can be installed on a stand-alone basis.  After installation, training would probably be required as well, despite the documentation.  In the second case, LibraryWorld seems to have a loyal following around the world.  I haven’t had an opportunity to look at it recently, so I don’t really have anything to say about it (yet).  Feel free to add comments below about your experience with it.

But LibraryWorld is a closed system, and if you are looking for something open source, there are

  • PMB (PhpMyBibli), from PMB Services
  • OPALS  (they say they are open source, but I don’t see a download link – the acid test of open source)

There are, of course, larger open source systems, which may work for a consortium of libraries: one of the Koha versions and Evergreen come to mind.  Both have companies that will install and customize the system.

Finally, there is LibraryThing, which is oddly ignored by everyone except those who want their collections online for their patrons and have no other way to do that.  Granted, it is limited in terms of collection management:  checkouts? reports?  But it can work, because it is, actually, a next-generation cataloging system.  It’s online, it’s searchable, it’s fairly easy to add resources, and the cataloging options are wide open (if that’s how you want to characterize keywords).  And even though the number of resources that can be entered free of charge is limited (with the option for more space requiring a small fee), most small library collections are pretty small.  Best of all, it’s accessible by different devices.  All we need are apps that extend the basic functionality offered with a LibraryThing account.

So here I am, looking for viable library management software options for small libraries outside of the U.S., and this is what I’ve come up with.  Give me some feedback, library world.  Which of the options above are worth taking a longer look at?


What if you could give a book to everyone on earth? Get an ebook and read it on any device, in any format, forever? Give an ebook to your library, for them to share? Own DRM-free ebooks, legally? Read free ebooks, and know their creators had been fairly paid?  –From About, unglue.it

Copyright is a round hole.  Paper publications are nice, round pegs.  Electronic items are square pegs.  Hard copies can be passed around, shared from person to person across time and space.  A copyright holder’s distribution rights are curtailed by the physical transfer of the copyrighted item (by purchase or gift) to another.

Electronic items can be similarly shared. Maybe.  Because they are square pegs, a new way to control distribution was needed, so a square hole called “licensing” was carved into the copyright landscape.  This pretty much upsets the shaky balance between the public right to knowledge and a creator’s right to profit from the work.

Enter the crowdfunding concept, which takes advantage of the ubiquity of the Interwebs to more easily raise money for relatively small-scale projects.  Kickstarter is a fairly well known example of a crowdfunding conduit.  And now comes Eric Hellman, using the crowdfunding idea to harmonize the ideals of copyright and licensing, to make that square peg fit in the round hole.

Welcome to unglue.it.  I love it.  Where else can you find the possibility of getting your favorite book released into the electronic domain?  I’m hoping when this catches on, I’ll see In the Night Kitchen moved into an active campaign by the time my new grandson is ready to read!


The Code4Lib Journal exists to foster community and share information among those interested in the intersection of libraries, technology, and the future.  –Mission Statement, Code4Lib Journal

A colleague on the Code4Lib Journal’s editorial committee has posted a defense of the Journal’s position on peer review, or more specifically, double-blind refereeing.  I was tempted, several months ago, to address the topic in the opening editorial for Issue 16, but was too preoccupied with food. :)  I don’t always see things the way Jonathan does, although I’ve learned over the years that he gets it right a lot more times than I do.  In this case, we both agree that the Journal is, in fact, peer reviewed but not double-blind refereed, and we both agree this is a good thing.

Jonathan has, from my perspective, a rather entertaining analogy likening the process to open source software development.  He also makes the argument that the Journal’s purpose is to provide value to our audience, and, just to make sure the elephant in the room is not ignored, stresses in very blunt terms that the Journal is not here to validate librarians’ or library technologists’ work for purposes of tenure or advancement.  I’m not going to disagree, but I am going to address the elephant differently.

I understand how “peer review” has been a good thing.  It has provided outside confirmation of the relevancy and quality of scholars’ work.  This was not the reason we started the Code4Lib Journal, however.  We were looking for a way to encourage and facilitate the (relatively) rapid dissemination of helpful information to our audience (which we did not see as being limited to people self-identified as members of the Code4Lib community).

Because of this goal, we do things a little differently at the Code4Lib Journal.  We don’t require entire drafts be submitted, but we do have a rather tight timeline to publication, so it is obviously a plus when we get a draft up front, or when the author is already working on a draft and can get it to us quickly if the proposal is accepted.  All of us on the editorial committee are library technologists of some kind, although not all the same kind.  We all consider each submission, sometimes engaging in lively and lengthy discussions about a proposal.  In that regard, I would argue that the Journal is not just peer reviewed, it is uber-peer reviewed, because it is not one or two “peers” who are making the call on whether to accept a proposed article, it is typically 7-12 peers who are making the call.

Again, because of the goal to encourage and facilitate rapid dissemination of information, we are committed to working with authors to get their articles ready for publication.  This typically takes the form of (electronically) marking up a draft with specific comments and suggestions that we, as peers, think would make the article more useful to our audience.  The process sometimes goes through several iterations.  It doesn’t always work.  We have had some fall off the table, so to speak.  But the consistently high quality issues we have published are a testament to the validity of the process. Double-blind refereeing here would be an inherently inferior process for achieving our mission and goal.

But back to the elephant in the room.  Why, today, given the current state of disruptive technology in the publishing industry, are we even talking about refereed journals?  I’m waiting for the other shoe to drop regarding that process.  Why are libraries still hyper-focused on double-blind refereeing and peer review as validating mechanisms in tenure?  Isn’t it time to rethink that?  After all, libraries have already registered their disgust at being held hostage by journal publishers:  universities require that their scholars publish, and their libraries have to pay whatever the publisher demands for a license to see that scholarship.

Double-blind refereeing is time-consuming.  So is what we do at the Code4Lib Journal.  But I would posit that our way is more effective at identifying relevant information and ensuring its quality.  How much ROI is there, really, in sticking with the old vetting process for validating tenure candidates?  May I suggest letting that other shoe drop and cutting to the core of these questions:

  • Why is tenure needed in your institution?
  • How effective is the current tenuring system in supporting your institution’s mission and goals?
  • What value does scholarly publishing bring to your institution?
  • What are you willing to do to ensure continued scholarly publication?

The Code4Lib Journal was started five years ago with a message from Jonathan Rochkind to the Code4Lib mailing list asking, basically, “Who’s in?”  Change can happen.  It just takes someone willing to voice the call.  If publishing is important to tenure, send out a call to your colleagues to start a Code4Lib-type journal.  If it looks too scary, ask.  I’m willing to help.  I’m sure there are others out there as well.


With a little help from the DrupalEasy folks in Orlando, the Miami and Fort Lauderdale groups are finally putting on a Drupal Camp! (Thanks, Mike Anello, for giving it the final push!)

Nova Southeastern University in Davie is hosting the event on Saturday, October 22.  Admission is only $10, so if you are anywhere in South Florida, come!  There are corporate sponsors, but consider chipping in $40 to be an individual sponsor.  Details are at the Drupalcamp website.

For the Drupal-wary:  there is a Beginner’s track, using Drupal 7, the easiest Drupal ever!

For experienced Drupalers, there will be plenty to chew on, such as drush awesomeness!

If you are somewhere in between, trust me, you won’t be bored. :-)


What started out as a 3-4 month hiatus to do an intranet site redesign is now winding down after 6 months.  After the first couple of months, when it became apparent no progress was going to be made, I regrouped and put together a different, motivated but novice team.  It’s been a little over three months since that team started on the project, and the results are impressive.  Although we could go live with it, I decided to do some “beta testing” on our unsuspecting end-users.  That has been enlightening: there may be some revisions in store before we finally put this baby to bed.

The phrase usually associated with Drupal is “steep learning curve.”  I was the only one in my organization who even knew what Drupal was, although some seemed to have a vague concept of “content management system.”  But I recommended we go with Drupal over other options because of (1) its potential, (2) the growing and active group of libraries using Drupal, and (3) the fact that it was the CMS I was most familiar with.  Looking back, I’d have done some things differently (isn’t that always the case?), but I would still choose Drupal.  We haven’t fully taken advantage of all Drupal’s potential, but that’s only because I decided to hold off development of more advanced features until after we completed the initial project.

I was fortunate to have 2 others who were eager to learn and undaunted by Drupal’s complexity.  In a little over two months, with 1 1/2 days of one-on-one training and many, many hours of phone conferences with them, they understand Drupal better than I did after two years of playing with it.  This is a good thing, since they will likely be the ones left with the task of maintaining the site over the long run.  But we needed more than the three of us to migrate the content from the previous site, so I recruited 4 others, 3 of whom were apprehensive about approaching technology at this level.  One had a Technical Services background and provided us with the taxonomy structure we needed.  Two added content directly into special content types I set up, and one tracked down the copyright-free pictures we needed.  It was an interesting exercise in project management: dividing the tasks by the skill level required, configuring Drupal to be easier for technophobes to use, and approaching prospects individually to ask for help on a limited scale.

About halfway through the project, as I struggled with trying to get the site to display the same in IE7 and Firefox, I shifted gears and decided to do the layout completely in CSS.  Actually, I was shamed into it after a query to the Drupal library group.  I finished those changes just about the same time everything else fell into place.  And it works just fine in both browsers, thank you!  But we had been designing with the assumption that most end users would be using a set screen size and resolution.  This week we discovered those assumptions were way off.  The good news is that we have found a lot more real estate to work with.  The bad news is that while things aren’t broken, the site doesn’t look quite the way we envisioned.
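For anyone wrestling with the same fixed-screen-size assumption, here is a minimal sketch of the kind of CSS change involved: a fluid layout instead of a pixel-perfect one.  The selector and widths below are hypothetical, not lifted from our actual theme, and media queries were not an option for us, since IE7 ignores them.

    /* Before: a fixed-width layout tuned to one assumed resolution.
       It wastes real estate on large monitors and breaks on small ones. */
    #page-wrapper {
      width: 960px;
    }

    /* After: a fluid layout that adapts to whatever screen shows up.
       All three properties work in IE7 and Firefox alike. */
    #page-wrapper {
      width: 90%;          /* scale with the browser window */
      max-width: 1200px;   /* keep text lines from growing unreadably long */
      margin: 0 auto;      /* stay centered at any width */
    }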

There may be some more tweaking involved, but there are now two others who have enough experience to do the tweaking.  Life is good.  Now to get back to the digitization project.

