Code4Lib Midwest Lightning Talk, July 2016

C4LJournal

Many thanks to the folks at the University of Chicago for hosting C4L Midwest last week. After hearing some of the presentations and discussions on data plans and availability, I put together a short lightning talk about the data we have at the Code4Lib Journal, or at least what can be cobbled together (literally). Surprise statistic (for me): the percentage published and the percentage rejected, over the history of the journal, are equal.

As Eric Lease Morgan pointed out, most of the Journal's data that we can't share is confidential. But one of the problems with gathering even the shareable data is that it's messy. As I mentioned in the lightning talk, the two main reasons for this are (1) it's just not a priority for a volunteer committee, and (2) very few people even ask about the data. Even gathering statistics like "rejected" is difficult, because not all proposals and rejections are tied to a specific issue, and some proposals that are accepted are later rejected for publication. But globally, we can get some generalized statistics.

Ongoing project from this: try to gather editor numbers over time (even trickier since it involves identifying when people came and went, and there’s nothing available, other than emails, to nail that down).

Social Media ROI for libraries

We have all been indoctrinated in the importance of incorporating social media into our libraries' outreach/marketing strategies, to the point that one almost has to apologize if one's library isn't on social media. I am wondering, however: where is the evidence? How do we know, first, that social media has any effect on our institutional bottom line, other than from social media supporters connecting dots (e.g., surveys indicating social media use, and references to other supporters) and saying, "of course it does!"? Where's the data?

I am still exploring, so help me out if you actually have the evidence documented somewhere (e.g., an article or data set?). But mind you, we are talking about libraries and library-like institutions here, not commercial operations for which the conversion metrics of Google Analytics (or others) work nicely. Because hopefully we already know that libraries don't fit that commercial model so nicely. Also, considering the sources used to justify the swooning over social media, the percentage of a national or global community's use of social media does not necessarily translate to a library's base (i.e., the community that pays for and uses it).

Focusing just on the U.S., because I work in a region within the U.S., I did find an interesting data set about what libraries are doing with social media and how they are handling it: http://scholarscompass.vcu.edu/libraries_data/1/ (with a shoutout to the authors/librarians involved, who evidently believe in open data and data sharing!). It's an appropriately complex set of data, but a quick scan through the survey, especially the write-in responses, indicates social media integration is pretty hodgepodge, as if validating a feeling of distrust (as in, "what is this really going to do for us?").

Doing a cursory literature search (limited to the last few years, because this is such a quickly changing landscape), I came across one helpful article: Marketing Finding Aids on Social Media: What Worked and What Didn't Work (http://americanarchivist.org/doi/10.17723/0360-9081.78.2.488), in which a research team selected ten social media sites to promote content, using email lists as well, and tracked and evaluated the click-throughs using Google Analytics. Although the study presumes that social media marketing is needed (and I don't dispute that position), the authors actually have the data to show (1) its effectiveness and (2) which platforms give the best results. Why can't we have more like this?
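For anyone who wants to replicate that kind of measurement, the usual mechanism is tagging each shared link with UTM parameters, so Google Analytics can attribute every click-through to the channel it came from. A minimal sketch (the URL and campaign names below are invented, not from the article):

```python
from urllib.parse import urlencode

def utm_link(base_url, source, medium="social", campaign="finding-aids"):
    """Build a campaign-tagged URL that Google Analytics can attribute
    to a specific channel (e.g., facebook, twitter, email list)."""
    params = {
        "utm_source": source,      # which site or list the click came from
        "utm_medium": medium,      # broad channel type
        "utm_campaign": campaign,  # which promotion effort this belongs to
    }
    return base_url + "?" + urlencode(params)

# One tagged link per channel; Analytics then reports click-throughs by source.
link = utm_link("http://example.edu/finding-aid", "facebook")
print(link)
```

Sharing a differently tagged link on each of the ten sites is what makes the "what worked and what didn't" comparison possible.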

Why can't we have fewer "jump on the bandwagon" programs and courses and classes, and more instruction on assessment of need and measurement of impact? How about classes that teach what data can be gathered, how to gather it, and how to use it? Because maybe we should distrust social media's usefulness. What is it going to do for us? Is there really a social media ROI for library-type institutions?

Automation and Small Libraries, and CornerThing

The situation hasn't really changed in the world of library automation since last year's post. Libraries find what works for them, given their economic and human resources. What is different is a new tool, developed with some virtual interns. I call it CornerThing, because I'm not very creative with names. 🙂

I’ve got these small libraries (American Corners), where, for some of them, their “automation” consists of massive spreadsheets.  And LibraryThing.  Checkouts are still done by hand on cards.  They compile reports by hand, going through the cards each month, to send to me, or one of my colleagues.  It seemed like there must be an app to use LibraryThing to do more than just display a collection.  I searched and checked as only a Reference Librarian would. 🙂  Nothing was out there.  So how hard could it be to make an app that could capture checkout statistics (the part I was interested in)?

I originally wanted an iPad app, but rather than spend precious little free time on it myself, I decided to get a couple of interns who were willing to learn some new skills while creating a simple app. It was an interesting experience. I didn't get an iPad app, because no one applied for that project. Several applied for the Android app project I added almost as an afterthought (why not? More options!). So I got an Android app, now in beta, which the roughly one-third of those small libraries that already have Android tablets can use.

CornerThing:  it syncs with a LibraryThing collection, downloading the metadata to the device, into a lightweight searchable database.  Subsequent “syncs” only add changes.  It’s possible to add an item in the app, but the syncing is not two-way.  Then there’s a searchable database for borrowers, entered on the fly, or by uploading a spreadsheet file (via computer connection).  Items from the collection can be checked out to borrowers, with a due date, and checked back in.  When an item is checked out, the data is captured on the item record and preserved.  Once the item is checked back in, the connection between the borrower and item is erased, but the numerical data on checkouts is retained on the item record, so reports can be generated by selected metadata (e.g., author, title, keyword).
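The heart of that design is the way it separates the live borrower-item link from the running statistics. A minimal sketch of that model (in Python with SQLite purely for illustration; the table and field names are my own invention, not CornerThing's actual schema):

```python
import sqlite3
from datetime import date, timedelta

# Hypothetical schema: loans are a separate table that is erased at
# check-in, while a running checkout count lives on the item record
# and survives for reporting.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, title TEXT, author TEXT,
                    checkout_count INTEGER DEFAULT 0);
CREATE TABLE borrowers (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE loans (item_id INTEGER UNIQUE, borrower_id INTEGER, due TEXT);
""")

def check_out(item_id, borrower_id, days=21):
    # Link borrower to item with a due date...
    due = (date.today() + timedelta(days=days)).isoformat()
    conn.execute("INSERT INTO loans VALUES (?, ?, ?)", (item_id, borrower_id, due))
    # ...and capture the statistic on the item record at checkout time.
    conn.execute("UPDATE items SET checkout_count = checkout_count + 1 WHERE id = ?",
                 (item_id,))

def check_in(item_id):
    # Erase the borrower-item connection; the count on the item remains.
    conn.execute("DELETE FROM loans WHERE item_id = ?", (item_id,))

conn.execute("INSERT INTO items (id, title, author) VALUES (1, 'Walden', 'Thoreau')")
conn.execute("INSERT INTO borrowers VALUES (1, 'Pat')")
check_out(1, 1)
check_in(1)
count = conn.execute("SELECT checkout_count FROM items WHERE id = 1").fetchone()[0]
print(count)  # → 1, even though no loan record survives
```

Because no borrower is attached to an item once it's returned, reports can be aggregated by author, title, or keyword without retaining any patron history.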

CornerThing: a simple circulation app for small libraries (like American Corners) to take advantage of their LibraryThing collections.  I’m pretty sure it would work for other small libraries with limited resources. 🙂  It’s also open source. If you’re interested, send me a message.

Automation and small libraries – first look

It’s kind of amazing to me that after over fifteen years in this business, I’m looking at a situation that pretty much hasn’t changed for small libraries looking for an automation system.  There wasn’t much available for them at a reasonable cost back then, and there is even less today.  Go ahead. Show me where I’m wrong.

Seriously, a small library, a small public library, one that is supported, sometimes begrudgingly, by (too often non-existent) local public funds, does not have a lot to spend on annual fees for a library automation system.  They have even less to spend on a tech person who could install and maintain one of the FLOSS options.  And even if they do, there’s still the recurring cost of a web server to put it online.  O.K., maybe they could tie into the local government’s web server, assuming the local government entity has one (probably a “yes” in the U.S., but not in other parts of the world).

I actually did a complete retrospective conversion at a library years ago.  It was a special library that had the funds to support a quality system.  I was shocked and horrified at the “system” (basically a giant word processing document) that was being used to track titles.  There was no mechanism to track costs.  I have since come to appreciate the efforts of librarians in small libraries doing the best they can with what they have, and with the skill sets they have, to manage their collections.  Hello, World: librarians are amazing, and you should throw money at them, because they do amazing things with your resources.

So, in the history of library systems, the first stop is Marshall Breeding's excellent record of what was, what is, and who ate up whom. Although this doesn't cover everything being used out there, it does give an interesting picture of the developing landscape. I was surprised and amazed that the Georgia Public Library Service, back in 2004, chose to build its own automation system (Evergreen) for libraries in the state rather than use one of the existing systems. There was, of course, Koha, out of New Zealand. But that one got mired in nasty disputes over code and trademarks. So far, so good, with Evergreen (keeping fingers crossed).

Next stop is Wikipedia’s list of Next-Generation library systems, especially the Comparison chart, but you might want to scan through the brief explanation of what Next-Generation means here.  The list is notable, because it includes both large systems and systems smaller libraries can use.  But note that these are all web-based systems.  Some of them, however, can function on a stand-alone basis, on the desktop.  This is important, because the most basic need of libraries is library management software.  Getting it on the web is secondary to librarians (although not to their patrons, of course).

So let’s take a stab at what some of the possibilities are for small (mostly) underfunded libraries today.  There are two perspectives to consider here:  libraries with staff that are systems-capable, and libraries with limited or no staff capable of managing the backend of a system.

In the first category (libraries with systems-capable staff), we have, first, systems that have a stand-alone option, and second, systems that are add-ons to content management systems a library may be using, or have access to, for their web site (so far, I'm only seeing Drupal modules; are there any others out there?).

In the second category, it’s pretty bleak without (1) hiring someone, or a service, to set up a system, or (2) a funding stream to pay annual fees.  In the first case I refer back to one of the four above that can be installed on a stand-alone basis.  After installation, training would probably be required as well, despite the documentation.   In the second case, LibraryWorld seems to have a loyal following around the world.  I haven’t had an opportunity to look at it recently, so I don’t really have anything to say about it (yet).  Feel free to add comments below about your experience with it.

But LibraryWorld is a closed system, and if you are looking for something open source, there are

  • PMB Services (uses phpMyBibli)
  • OPALS  (they say they are open source, but I don’t see a download link – the acid test of open source)

There are, of course, larger open source systems, which may work for a consortium of libraries: one of the Koha versions, and Evergreen come to mind.  Both have companies that will install and customize the system.

Finally, there is LibraryThing, which is oddly ignored by everyone except those who want their collections online for their patrons and have no other way to do that. Granted, it is limited in terms of collection management: checkouts? reports? But it can work, because it is, actually, a next-generation cataloging system. It's online, it's searchable, it's fairly easy to add resources, and the cataloging options are wide open (if that's how you want to characterize keywords). And even though the number of resources that can be entered free of charge is limited (with the option for more space requiring a small fee), most small library collections are pretty small. Best of all, it's accessible from different devices. All we need is apps that extend the basic functionality offered with a LibraryThing account.

So here I am, looking for viable library management software options for small libraries outside of the U.S., and this is what I've come up with. Give me some feedback, library world. Which of the options above are worth taking a longer look at?

 

Copyright and disruptive technology

What if you could give a book to everyone on earth? Get an ebook and read it on any device, in any format, forever? Give an ebook to your library, for them to share? Own DRM-free ebooks, legally? Read free ebooks, and know their creators had been fairly paid?  –From About, unglue.it

Copyright is a round hole.  Paper publications are nice, round pegs.  Electronic items are square pegs.  Hard copies can be passed around, shared from person to person across time and space.  A copyright holder’s distribution rights are curtailed by the physical transfer of the copyrighted item (by purchase or gift) to another.

Electronic items can be similarly shared. Maybe.  Because they are square pegs, a new way to control distribution was needed, so a square hole called “licensing” was carved into the copyright landscape.  This pretty much upsets the shaky balance between the public right to knowledge and a creator’s right to profit from the work.

Enter the crowdfunding concept, which takes advantage of the ubiquity of the Interwebs and the ability to use that to more easily raise money for relatively small-scale projects. Kickstarter is a fairly well-known example of a crowdfunding conduit. And now comes Eric Hellman, using the crowdfunding idea to harmonize the ideals of copyright and licensing, to make that square peg fit in the round hole.

Welcome to unglue.it.  I love it.  Where else can you find the possibility of getting your favorite book released into the electronic domain?  I’m hoping when this catches on, I’ll see In the Night Kitchen moved into an active campaign by the time my new grandson is ready to read!

Peer Review and Relevancy

The Code4Lib Journal exists to foster community and share information among those interested in the intersection of libraries, technology, and the future.  —Mission Statement, Code4Lib Journal

A colleague on the Code4Lib Journal's editorial committee has posted a defense of the Journal's position on peer review, or more specifically, double-blind refereeing. I was tempted, several months ago, to address the topic in the opening editorial for Issue 16, but was too preoccupied with food. 🙂 I don't always see things the way Jonathan does, although I've learned over the years he gets it right a lot more times than I do. In this case, we both agree that the Journal is, in fact, peer reviewed, but not double-blind refereed, and we both agree this is a good thing.

Jonathan has, from my perspective, a rather entertaining analogy of the process as open source software development.  He also makes the argument that the Journal’s purpose is to provide value to our audience, and, just to make sure the elephant in the room is not ignored, stresses in very blunt terms that the Journal is not here to validate librarians’ or library technologists’ work for purposes of tenure or advancement.  I’m not going to disagree, but I am going to address the elephant differently.

I understand how “peer review” has been a good thing.  It has provided outside confirmation of the relevancy and quality of scholars’ work.  This was not the reason we started the Code4Lib Journal, however.  We were looking for a way to encourage and facilitate the (relatively) rapid dissemination of helpful information to our audience (which we did not see as being limited to people self-identified as members of  the Code4Lib community).

Because of this goal, we do things a little differently at the Code4Lib Journal. We don't require that entire drafts be submitted, but we do have a rather tight timeline to publication, so it is obviously a plus when we get a draft up front, or when the author is already working on a draft and can get it to us quickly if the proposal is accepted. All of us on the editorial committee are library technologists of some kind, although not all the same kind. We all consider each submission, sometimes engaging in lively and lengthy discussions about a proposal. In that regard, I would argue that the Journal is not just peer reviewed, it is uber-peer reviewed, because it is not one or two "peers" who are making the call on whether to accept a proposed article; it is typically 7-12 peers who are making the call.

Again, because of the goal to encourage and facilitate rapid dissemination of information, we are committed to working with authors to get their articles ready for publication.  This typically takes the form of (electronically) marking up a draft with specific comments and suggestions that we, as peers, think would make the article more useful to our audience.  The process sometimes goes through several iterations.  It doesn’t always work.  We have had some fall off the table, so to speak.  But the consistently high quality issues we have published are a testament to the validity of the process. Double-blind refereeing here would be an inherently inferior process for achieving our mission and goal.

But back to the elephant in the room. Why, today, given the current state of disruptive technology in the publishing industry, are we even talking about refereed journals? I'm waiting for the other shoe to drop regarding that process. Why are libraries still hyper-focused on double-blind refereeing and peer review as validating mechanisms in tenure? Isn't it time to rethink that? After all, libraries have already registered their disgust at being held hostage by journal publishers: universities require that their scholars publish, and their libraries have to pay whatever the publisher demands for a license to see that scholarship.

Double-blind refereeing is time-consuming. So is what we do at the Code4Lib Journal. But I would posit that our way is more effective in identifying relevant information and ensuring its quality. How much ROI is there, really, in sticking with the old vetting process for validating tenure candidates? May I suggest letting that other shoe drop and cutting to the core of these questions:

  • Why is tenure needed in your institution?
  • How effective is the current tenuring system in supporting your institution’s mission and goals?
  • What value does scholarly publishing bring to your institution?
  • What are you willing to do to ensure continued scholarly publication?

The Code4Lib Journal was started five years ago with a message from Jonathan Rochkind to the Code4Lib mailing list asking, basically, "who's in?" Change can happen. It just takes someone willing to voice the call. If publishing is important to tenure, send out a call to your colleagues to start a Code4Lib-type journal. If it looks too scary, ask. I'm willing to help. I'm sure there are others out there as well.

 

Drupal Camp!

With a little help from the DrupalEasy folks in Orlando, the Miami and Fort Lauderdale groups are finally putting on a Drupal Camp! (Thanks Mike Anello, for giving it the final push!)

Nova Southeastern University in Davie is hosting the event on Saturday, October 22. Admission is only $10 — if you are anywhere in South Florida, come! There are corporate sponsors, but consider chipping in $40 to be an individual sponsor. Details are at the Drupalcamp website.

For the Drupal wary:  there is a Beginner’s track, using Drupal 7, the easiest Drupal ever!

For experienced Drupalers, there will be plenty to chew on, such as drush awesomeness!

If you are somewhere in between, trust me, you won’t be bored. 🙂

Drupal and other distractions

What started out as a three-to-four-month hiatus to do an intranet site redesign is now winding down after six months. After the first couple of months, when it became apparent no progress was going to be made, I regrouped and put together a different, motivated but novice team. It's been a little over three months since that team started on the project, and the results are impressive. Although we could go live with it, I decided to do some "beta testing" on our unsuspecting end-users. That has been enlightening: there may be some revisions in store before we finally put this baby to bed.

The phrase usually associated with Drupal is "steep learning curve." I was the only one in my organization who even knew what Drupal is, although some seemed to have a vague concept of "content management system." But I recommended we go with Drupal over other options because of (1) its potential, (2) the growing and active group of libraries using Drupal, and (3) the fact that it's the CMS I was most familiar with. Looking back, I'd have done some things differently (isn't that always the case?), but I would still choose Drupal. We haven't fully taken advantage of all Drupal's potential, but that's only because I decided to hold off development of more advanced features until after we completed the initial project.

I was fortunate to have two others who were eager to learn and undaunted by Drupal's complexity. In a little over two months, with a day and a half of one-on-one training and many, many hours of phone conferences with them, they understand Drupal better than I did after two years of playing with it. This is a good thing, since they will likely be the ones left with the task of maintaining the site over the long run. But we needed more than the three of us to migrate the content from the previous site, so I recruited four others, three of whom were apprehensive about approaching technology at this level. One had a Technical Services background and provided us with the taxonomy structure we needed. Two added content directly into special content types I set up, and one tracked down copyright-free pictures we needed. It was an interesting exercise in project management: finding the team members we needed by dividing the tasks by the skill level required, configuring Drupal to be easier to use for technophobes, and approaching prospects individually to ask for help on a limited scale.

About halfway through the project, as I struggled with trying to get the site to display the same in IE7 and Firefox, I shifted gears and decided to do the layout completely in CSS. Actually, I was shamed into it after a query to the Drupal library group. I finished those changes just about the same time everything else fell into place. And it works just fine in both browsers, thank you! But we had been designing with the assumption that most end users would be using a set screen size and resolution. This week we discovered those assumptions were way off. The good news is that we have found a lot more real estate to work with. The bad news is that while things aren't broken, the site doesn't look quite the way we envisioned.

There may be some more tweaking involved, but there are now two others who have enough experience to do the tweaking.  Life is good.  Now to get back to the digitization project.

Google, tech support, and your parents

Google has entered the tech support arena (http://www.teachparentstech.org/watch). The short help videos are slick, and they're appealing, at least to the target audience: a younger generation that is very tech savvy, with parents or grandparents who are not. One of my sons came across them and asked if I thought they would be helpful for his grandparents, who are in their 80s. I went to investigate.

The Tech Support care package is a set of quick videos intended to make using Google products easier.  It makes sense.  You have a product.  You do a market analysis.  Where can you expand? In technology, an obvious place to expand your market is the older adult population, which is the fastest growing segment of the population.  But there are problems with that market segment (see my Connecting the Disconnected series of posts, as well as the Computers, Older Adults, and Libraries page).  So Google, in a style very reminiscent of Apple, has created some help videos for basic computer tasks as well as for using Google products.  They are short (good idea), to the point (good idea), and friendly (good idea).  Some are good, some are fails.

The first issue is: how basic is “Basic”?  On the assumption that this is intended for someone who at least uses email (after all, the front page of the site is an invitation to email these helpful videos to the one you think needs them), how much existing knowledge does that presume?  Looking at the set of 6 basic videos, the following knowledge and skills are expected:

  • How to use the mouse, including the right and left buttons (or the right and left sides of the mouse).
  • How to click and drag.
  • What the various special function keys are (such as the Control key or Command key) and where they are.
  • How to browse a computer's file structure.
  • What a computer file is, and what the different types of computer files are (such as jpg, pdf, docx).
  • How to use email, including attaching files.

How reasonable are these expectations? I fall back on the standard evasive answer: that depends. 🙂 I have developed a lot of computer training and taught a lot of people how to use computers. They have ranged in age from their thirties to their nineties. They have had varying levels of computer skills across all ages (although, in general, the older they are, the fewer computer skills they have). For those who had no experience with computers, my goal was to teach them how to use the internet and how to use email. Once they reached that level, I could teach them more advanced things like bookmarking web sites, basic computer skills and file structure, and sharing photos. Some of these videos presume more skills than I did even for the next step beyond the new-user level. For example, file structure and email attachments were elements of our more advanced user classes.

As an aside, Gmail was one of the email services we tested on the new user groups.  It did not work out well, because (1) Google kept changing the service and interface, and (2) it was too confusing for a typical older user to figure out.  I tried to contact Google about creating a user interface that would work for older adults.  Obviously, I didn’t get their attention.

So the videos really aren’t all that basic, except to technologists who find the featured tasks unbelievably mundane.  But how useful are they to their intended audience (the older adult who already has some computer skills)?  Again, it depends:

  • How old is the recipient of the "Care" package?  The older the person is (generally, 55+), the more they need explicit instructions, using discrete steps. The visuals are nice, but sometimes they move too fast and skip over steps.  Also, the language is often not explicit enough for an older adult.
  • How experienced is that person with computers?  This question is actually tied to the next one.  Older adults do not tend to keep up with changes in technology as much as their children/grandchildren.  But generally, the more experience they have, the less difficulty they have learning new, related skills.
  • What operating system, and what version of the operating system, is that person using?  Because older adults tend not to update their skills (learn why here), they are usually using an old computer and operating system (it was not uncommon to have students in my classes who were using Windows 98).  The changes from Windows 98/2000/XP to Windows 7 or OS X Snow Leopard are intimidating to an older adult (again, generally, 55+).  These videos assume the recipient will be comfortable using one of those operating systems.  That is a big assumption.

Bottom line: if the intended recipient of these cute care packages is under 55 and has some experience using a recent operating system, the videos will likely be both handy and useful. If the recipient is over 55 and/or is not using a recent operating system, only a few of the videos would be useful: How to Create a Strong Password, How to Know if an Email is Real, and most of the Search Information videos. Also note, there are a lot more Mac-centric videos than Windows-centric ones.

Would it work for my parents, in their 80s, who have been using computers since the first Apples came out and currently have Snow Leopard? Actually, no. They would have difficulty following most of them, and for the rest, they wouldn't see the point.

I’d be happy to take that off your hands

In the not too distant past, I was manning the reference desk, listening to a man say he had to come to the library to use the computers because his laptop was so badly infested with viruses that he had to throw it away.

“You threw it away?” I asked, incredulously.

“Yeah, it’s worthless now.  I can’t use it.  I’m just going to throw it away.”

Realizing he hadn't actually thrown it away yet, but was willing to, I glibly asked if he'd throw it my way. He looked at me incredulously at the same time I realized there were probably some ethics concerns involved. So I said, "Or, I could show you how to make it usable again so there will never be another virus on it."

He was still incredulous.  I assured him it can be done.  He wanted to know what he could do for me.  I told him “Never tell anyone about this,” forming a mental image of what would happen if he went out and told all his friends, or worse, wrote to the director about what I’d done for him.

He came back a couple days later, but didn’t have the laptop with him.  I hooked him up with a copy of Keir Thomas’ Beginning Ubuntu Linux, and a newer version of the CD included in the book.  He was still somewhat incredulous.  He left the book, but promised to come back the next day with the laptop.  Unfortunately, I didn’t see him again after that. I’m still wondering whether the original story was true, or if my comments prompted him to find someone to clean up the laptop for him.

I’ve since left that job.  Sometimes I miss the interesting world of public libraries.