Archive for the ‘Libraries’ Category.

Code4Lib2010 Notes from Day 2, afternoon

A better advanced search

http://searchworks.stanford.edu

How to filter multiple similar titles by the same author, handle multiple author instances (artist as author, as subject, as added author), or combine multiple facet values

At the start: no drop-down boxes, only labeled text boxes, based on the above. "Keyword (& Item Description)" is third on the list; "Subject Terms" instead of just "Subject"

Dismax and Solr local params; local param syntax: _query_:"{!dismax qf=...}"
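
For illustration, a minimal sketch of combining per-field dismax sub-queries with Solr's _query_ nested-query and local-params syntax, roughly the technique described above; the field names, endpoint URL, and parameters are assumptions, not Stanford's actual configuration.

```python
# Sketch: build an advanced-search query from per-field dismax sub-queries.
# SOLR_URL and the qf field names are illustrative assumptions.
from urllib.parse import urlencode
from urllib.request import urlopen
import json

SOLR_URL = "http://localhost:8983/solr/select"  # assumed Solr endpoint

def advanced_search(title_terms, author_terms):
    # Each box on the advanced-search form becomes its own dismax
    # sub-query; the sub-queries are then ANDed together.
    clauses = []
    if title_terms:
        clauses.append('_query_:"{!dismax qf=title_t}%s"' % title_terms)
    if author_terms:
        clauses.append('_query_:"{!dismax qf=author_t}%s"' % author_terms)
    params = {"q": " AND ".join(clauses), "wt": "json"}
    with urlopen(SOLR_URL + "?" + urlencode(params)) as resp:
        return json.load(resp)

# Example: title words in one box, author words in another.
# results = advanced_search("color of water", "mcbride")
```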

jQuery functions added to multi-facet search boxes; also added faceting to results (actionable facets)

The search breadcrumbs got really complex.

Drupal 7: A more powerful platform for building library applications

Has a new Information Architecture, writes things into “contexts” (attempt to make it easier for end user)

Users can cancel their own account

New admin theme, toolbars & shortcuts (taken from admin menu toolbar module); dashboard (add what you want)

Uses overlays (rather than changing page)

Module selection screen changed to landscape table view.

Permission screen: allows admin role (same as 1st user)

Install options are default, or minimal profile.

Minimum software requirements: PHP 5.2, MySQL 5.0 (or PostgreSQL 8.3)

File System changes: separate public and private paths

Has native imaging handling out of the box

Email security notifications are set up automatically at install; the PHP filter module is now global.

cron.php requires key in url to run

Field UI (included in core) draws from the CCK module in Drupal 6. Types: boolean, decimal, file, image, list, text, taxonomy, etc.; can apply to almost anything

Update manager: upload and install a module/theme from drupal

Page elements are assignable; templating system changed (more consistent?)

The base theme is based on Zen: Stark (naked)

Theming of content is now granular (content can be pulled from container for theming)

Javascript uses jQuery 1.3, jQuery Forms 2.2 & jQuery UI 1.7; ajax framework from cTools

Really back-end stuff: the 5.0 database abstraction layer can utilize PHP Data Objects (PDO); dynamic select queries; stream wrappers, so URIs can be referenced; the field API is not node-specific, and any element can be fieldable

Enhancing Discoverability With Virtual Shelf Browse

Displays book covers, with mouseovers; scrolls right and left

Not everything has a cover image; uses "faux covers" à la Google Books

Goal: browse arbitrary number of titles around a known item in call number order, including online & all locations

Daily output from the ILS to delimited text, then database ingest with Python into a call number index in MySQL; the call number is stored in alternate formats within the table records.
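
A minimal sketch of that nightly ingest in Python; the file layout, the normalization rule, and the use of sqlite3 (standing in for the MySQL index) are all assumptions.

```python
# Sketch: read a delimited ILS dump and load a call-number index,
# keeping a normalized (sortable) form alongside the original.
import csv
import sqlite3

def normalize(call_number):
    # Toy normalization: uppercase and collapse whitespace so string
    # sort order approximates shelf order. Real LC/SuDoc normalization
    # is considerably more involved.
    return " ".join(call_number.upper().split())

conn = sqlite3.connect("callnumbers.db")
conn.execute("""CREATE TABLE IF NOT EXISTS call_index
                (bib_id TEXT, call_number TEXT, sort_key TEXT)""")

with open("ils_dump.txt", newline="") as dump:
    reader = csv.reader(dump, delimiter="|")   # assumed pipe-delimited
    for bib_id, call_number, *rest in reader:
        conn.execute("INSERT INTO call_index VALUES (?, ?, ?)",
                     (bib_id, call_number, normalize(call_number)))
conn.commit()
```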

Front end challenges: DOM=SLOW; multiple plugins = headache; remote servers = latency issues; too much ajax = browser issues; IE not a friend (doh!)

How to Implement A Virtual Bookshelf With Solr

ILS is Sirsi.

Problems with dirty data and no standard call number scheme (including SuDocs and theses/dissertation numbering schemes)

Code4Lib2010 Notes from Day 2, morning

Iterative Development:  Done Simply

Agile, Scrum (Agile case study)

Problem: too much to do (geez, even with a staff like that?); priorities and requirements change frequently; emergencies happen

The IT black box: those on the outside don't know what's going on inside

Agile is a response to the waterfall method of software development; it values working software, customer collaboration, responding to change, and interaction

Scrum: agile methodology

Roles:  product owner, scrum master (keeps things going), team

Artifacts: product backlog, sprint backlog

(visual of scrum sprint.)

Daily scrum (about 15 minutes at the start of the day): what's going on, bring everyone up to date

Sprint retrospective:  what went well, what to change for next time

At NCSU: used to tackle big problems in small pieces.  Using iterative development loosely based on scrum.  Just in time planning & documentation;  cross functional teams with an IT point person + developers; joint “owners”

NCSU toolbox:

Product & sprint backlogs: JIRA

Requirements: Confluence + JIRA

Planning: Google Docs + JIRA

Daily scrum

Sprint demo at product team meetings

6-week iteration: 1 week planning, 4 weeks development (realigning as necessary), 1 week testing/release

Use 1 week to plan across multiple projects

1. High-level overview of upcoming projects (3-6 months)

Prioritize projects for next iteration

Add to google docs

2. Meet w/product owners for each prioritized project

3. At the end of the week, re-prioritize based on estimates and time availability; back into Google Docs (a spreadsheet) with time estimates by individual.

4. Get it done: daily scrum, weekly review (how does progress look for the cycle? requires work logging), Subversion-to-JIRA integration

Issue burndown chart (should be going down as goal nears)

Test throughout the cycle; demo at regular meetings; close tickets when the work is done; put comments in JIRA to document issues.

Challenges: multiple small projects within a cycle, which is not traditional for agile practice

Lack of documented requirements (what are user stories, and when do you need them? librarian teams work slowly); prioritization is difficult for library staff (they work at the release level); testing (how and when, for small projects); no QA experts; simultaneously handling support and development (a juggling act)

Outcomes: positive movement across multiple projects. Individual development efforts are timeboxed; increased user satisfaction; increased flexibility to adapt (users hate seeing no movement). "Positive energy" feedback, and librarians seeing movement

Resources: Agile Project Management with Scrum (books)

Vampires vs. werewolves: ending the war between developers and sysadmins

Each side's goal is satisfying the villagers with pitchforks

Innovation is about risk, & you don’t take risks with people you don’t trust.

Reach out and build trust:

1. Test: do you KNOW that your code works? Does it work on the system? Will it still work after a system upgrade? (Uses Nagios to monitor uptime, what's working, and what's not; plug the system upgrade into Nagios monitoring to see what to expect. See the check-plugin sketch after this list.)

2. Documentation: for the sysadmins, so they know what your process/code is supposed to do, who's responsible, contact info, etc.

3. Puppet (system configuration tool); book: Pulling Strings with Puppet (Puppet uses Ruby)
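
For illustration, a minimal sketch of the kind of check a sysadmin could run from Nagios to verify that an application still works (item 1 above); the URL, the expected response text, and the use of Python are assumptions, but the exit-code convention (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) is standard for Nagios plugins.

```python
#!/usr/bin/env python3
# Sketch of a Nagios-style check plugin: fetch a page and report
# status via exit code and a one-line message.
import sys
from urllib.request import urlopen

URL = "http://localhost/app/healthcheck"   # assumed endpoint
EXPECTED = b"OK"                            # assumed response body

try:
    body = urlopen(URL, timeout=10).read()
except Exception as exc:
    print("CRITICAL - %s unreachable: %s" % (URL, exc))
    sys.exit(2)

if EXPECTED in body:
    print("OK - application responding as expected")
    sys.exit(0)

print("WARNING - application responded but without expected content")
sys.exit(1)
```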

I am not your mother, write your test code

Ad hoc testing: where’s the safety net?

Automate testing:  regression testing (local code, local mods), make sure your code doesn’t break someone else’s, refactoring, reduce design complexity, specify expected outcomes

Hudson: continuous integration tool

Selenium: a Firefox plugin; it drives the web site automatically, makes assertions, asserts that text appears at an XPath (but XPath is brittle)

(Working in Ruby:) RSpec, Cucumber, and the Rails testing framework; RSpec and Cucumber primarily. RSpec tests the whole stack. Cucumber has "features" containing "scenarios" (test instances)

Types of tests:

Unit tests: testing at the function or method level. An integration test tests the interface between functions. At a high level, tests exercise the whole stack

What to test: assert calls (assertEquals(...)); test branching, i.e., the differing paths of logic; test during bug fixes (first write a test to pin down the bug); test for exceptions and error handling (try-catch)
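
For illustration, a minimal sketch of those kinds of tests using Python's built-in unittest framework; the fine_for_overdue_days function is a made-up example, not something from the talk.

```python
# Sketch: an equality assertion, a test per branch, and an
# exception/error-handling test.
import unittest

def fine_for_overdue_days(days):
    """Charge $0.25/day, capped at $10; negative days are an error."""
    if days < 0:
        raise ValueError("days cannot be negative")
    if days == 0:
        return 0.0
    return min(days * 0.25, 10.0)

class FineTests(unittest.TestCase):
    def test_typical_fine(self):
        self.assertEqual(fine_for_overdue_days(4), 1.0)

    def test_branch_no_fine(self):
        self.assertEqual(fine_for_overdue_days(0), 0.0)

    def test_branch_cap(self):
        self.assertEqual(fine_for_overdue_days(1000), 10.0)

    def test_error_handling(self):
        with self.assertRaises(ValueError):
            fine_for_overdue_days(-1)

if __name__ == "__main__":
    unittest.main()
```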

Testing legacy code: start with bug fixing; at a high level, test the section in question (while trying to understand what's going on).

Run tests: isolate what you're testing, and watch for dependencies on other processes (e.g., network, database, Solr, external services)
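
A minimal sketch of one way to isolate such a dependency in Python, using unittest.mock to stand in for a network call; the lookup_title function, the Solr URL, and the response shape are made-up examples, not anything from the talk.

```python
# Sketch: test code that normally calls a search service, without
# touching the network, by patching urlopen where it is looked up.
import io
import json
import unittest
from unittest import mock
from urllib.request import urlopen

SOLR_URL = "http://localhost:8983/solr/select"  # assumed endpoint

def lookup_title(bib_id):
    # In production this hits the real search service over the network.
    resp = urlopen("%s?q=id:%s&wt=json" % (SOLR_URL, bib_id))
    data = json.load(resp)
    return data["response"]["docs"][0]["title"]

class LookupTitleTests(unittest.TestCase):
    @mock.patch(__name__ + ".urlopen")   # patch in this module's namespace
    def test_returns_title_without_touching_network(self, fake_urlopen):
        canned = {"response": {"docs": [{"title": "Moby-Dick"}]}}
        fake_urlopen.return_value = io.BytesIO(json.dumps(canned).encode())
        self.assertEqual(lookup_title("b123"), "Moby-Dick")

if __name__ == "__main__":
    unittest.main()
```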

All modern languages have unit testing frameworks

Result: fewer bugs, better design, freedom to refactor

Media, Blacklight & Viewers like You (WGBH Interactive)

Using Fedora, lighttpd, Solr, MySQL, Blacklight (media screen), jQuery, and PBCore (which mirrors DC in a lot of ways). Question to libraries: http://tinyurl.com/c4l-pbcore – how can PBS make PBCore more useful to libraries?

Metadata APIs: RDF/XML, OAI-PMH, unAPI, embed/URIplay (a common way of embedding stuff across sites)
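
As an illustration of one of those APIs, a minimal OAI-PMH harvesting sketch in Python; the base URL is a placeholder, not WGBH's actual endpoint, and oai_dc is the metadata prefix every OAI-PMH repository is required to support.

```python
# Sketch: issue an OAI-PMH ListRecords request and parse the response.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

BASE_URL = "http://example.org/oai"   # placeholder OAI-PMH provider

def list_records(metadata_prefix="oai_dc"):
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    with urlopen(BASE_URL + "?" + urlencode(params)) as resp:
        return ET.parse(resp).getroot()

# Example: print each record's OAI identifier.
# root = list_records()
# for ident in root.iter("{http://www.openarchives.org/OAI/2.0/}identifier"):
#     print(ident.text)
```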

Fedora has problems with large datastreams.

Moved from QuickTime RTSP streaming to H.264 pseudo-streaming using lighttpd

Apache sits in front of Fedora and streams data directly out of the data store. Fedora manages the files but doesn't have to deliver them

Rights:  ODRL (metadata records are public, limited backend security policies).

Media workflows: have to integrate with whole bunch of asset management systems; no standard identifier control, so there’s extensive manual processing

Becoming Truly Innovative: Migrating from Millennium to Koha

New York University Health Sciences Libraries was on III's Millennium. Outside the library, IT departments in the medical center are merging. HIPAA issues are involved; there was a circuitous route around the firewall to medpac (the OPAC).

When network security (IT) consolidated, policies made things break.  When intermediate server was disconnected, the ILS went down.

NetSecurity required all vendors to connect through a Juniper VPN (etc.)

Enter concept of open source: tested desktop level first with basic data on circulation, holds, acquisitions, serials, bibliographic & authority records

Circulation data: take it from Millennium, normalize it, put it in Excel, compare it to other lists, enter it into Koha

Bib/authority: Millennium tags to MARC fields are not 1:1, not comprehensive, multiple subfields. The migration tool did not work. Used Xrecords: pulled from III (supply a list of records in one line), put into XML, through XSLT to MARCXML, and into Koha

XSLT needs to be customized for each institution; III codes didn't map directly to Koha codes (used conditionals and patterns)
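
A minimal sketch of that transformation step in Python, applying an institution-specific XSLT stylesheet to exported XML to produce MARCXML for loading into Koha; it assumes the third-party lxml library, and the file names are placeholders.

```python
# Sketch: run an exported XML file through a local XSLT stylesheet.
from lxml import etree

# The stylesheet encodes the local III-code-to-Koha-code mapping
# (conditionals and patterns), so each institution edits its own copy.
transform = etree.XSLT(etree.parse("iii_to_marcxml.xsl"))

source = etree.parse("iii_export.xml")      # records pulled from III
marcxml = transform(source)

with open("koha_import.xml", "wb") as out:
    out.write(etree.tostring(marcxml, xml_declaration=True,
                             encoding="UTF-8", pretty_print=True))
```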

Holds had to be handled manually

Migration ran from August 6 (cataloging freeze) to August 30 (migration), with the system live August 31 (the contract with III ended August 31).

Bibliographic records = 80k, authority records = 5k, patron records = 11k; acquisitions = end of fiscal year, so start over; serials were a problem

Outcome: good, OK; everything got in and done, and everything is working fine. The code is available for others, but you have to write your own XSLT

Caveats: MFHD (no support yet), diacritics, cross-linked items, multiple barcodes per call number, keeping track of old record numbers; pull patron records on the day of migration, not before.

Code4Lib2010 Notes from Day 1, afternoon

Taking Control of Metadata with eXtensible Catalog

Open source, user-centered, “not big”

Customizable, faceted, FRBRized search interface

Automated harvesting and processing of large batches of metadata

Connectivity tools between XC and the ILS & NCIP; allows you to get MARCXML out of the ILS

The XC Metadata Services Toolkit takes DC from a repository and MARCXML and sends it to Drupal (toolkit)

XC = extensible catalog

Nice drupal interface (sample).  Modify PHP code to change display

Create custom web apps to browse catalog content (fill out web form), or preset search limits & customized facets

XC metadata services toolkit: cleans, aggregates, makes OAI compatible.  Tracks “predecessor-successor” records (changes)

The XC OAI Toolkit transforms MARC to MARCXML; drivers are available for Voyager, Aleph, NTT, III, and III/Oracle (note from a Twitter response: waiting to see if there is interest in Sirsi)

www.eXtensibleCatalog.org – still needs funding/community support

Matching Dirty Data, yet another wheel

Matching bib data with no common identifiers

Goal: ingest metadata and PDFs for ETDs received in DSpace

MARC data in UMI: filename & abstract; ILS marc data: OCLC #,  author, title, date type, dept., subject

Create python dictionaries, do exact & fuzzy matches.  Find the intersection of the keys and filter/refine from there

Reduce the search space (get rid of common words (not just stop words))

Jaro-Winkler algorithm: two characters match if they are within a reasonable distance of one another; works best for short strings
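
A minimal sketch of the matching approach in Python: build dictionaries keyed on a reduced title, take the key intersection for exact matches, and score the rest with Jaro-Winkler. The record layouts and the reduction rule are illustrative assumptions; the Jaro-Winkler code follows the standard definition of the algorithm.

```python
# Sketch: dictionary key intersection for exact matches, Jaro-Winkler
# scoring for fuzzy matches on the leftovers.
COMMON = {"the", "a", "an", "of", "and", "in", "on", "study", "analysis"}

def reduce_title(title):
    # Shrink the search space: lowercase and drop common words
    # (not just classic stop words).
    return " ".join(w for w in title.lower().split() if w not in COMMON)

def jaro(s1, s2):
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    window = max(0, max(len(s1), len(s2)) // 2 - 1)
    m1, m2 = [False] * len(s1), [False] * len(s2)
    matches = 0
    for i, c in enumerate(s1):                 # count matching characters
        for j in range(max(0, i - window), min(i + window + 1, len(s2))):
            if not m2[j] and s2[j] == c:
                m1[i], m2[j] = True, True
                matches += 1
                break
    if matches == 0:
        return 0.0
    transpositions, k = 0, 0
    for i in range(len(s1)):                   # count transpositions
        if not m1[i]:
            continue
        while not m2[k]:
            k += 1
        if s1[i] != s2[k]:
            transpositions += 1
        k += 1
    t = transpositions / 2
    return (matches / len(s1) + matches / len(s2)
            + (matches - t) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    # Boost the Jaro score for a shared prefix of up to 4 characters.
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

# Toy data standing in for the UMI and ILS record sets.
umi = {reduce_title(t): t for t in ["A Study of Coastal Erosion"]}
ils = {reduce_title(t): t for t in ["Study of coastal erosion, A"]}

exact = umi.keys() & ils.keys()                # exact key matches
for u in umi:
    for i in ils:
        if u not in exact and jaro_winkler(u, i) > 0.9:
            print("fuzzy match:", umi[u], "<->", ils[i])
```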

String comparison tutorial; secondstring (a Java text analysis library, on SourceForge); marcximil

http://snurl.com/uggtn

HIVE:  Helping Interdisciplinary Vocabulary Engineering

“Researchy” (as in not too well developed)

Problem: terms for self-publishing instances. Solution: combine terms from existing vocabularies (LCSH, MeSH, the NBII Thesaurus) and compare to the labeling

SKOS: somewhere between ontologies and cataloging

An abstract is run through HIVE, which outputs a cloud of extracted concepts, color-coded by vocabulary/ontology source.

Based on Google Web Toolkit; currently available on Google Code

Coming soon: HIVE API, SPARQL, SRU

http://datadryad.org

Metadata Editing, a truly extensible solution: Trident

DukeCore:  Duke U’s DC wireframe

MAP (metadata application profile): works by instructing an editor how to build a UI for editing; creates a schema-neutral representation of the metadata (a metadata form). The editor only needs to understand the metadata form and communicate with the repository via the API

The editor/repository manager app is built on a web services API, so it doesn't need to know what's behind it. Uses Python, Django, Yahoo grids, and jQuery

Uses a RESTful API

Starts in the metadata schema, DukeCore, and transforms to MDR; validations are applied to create a packet that is returned to the user interface. On submission, it goes to MDF and then back to DukeCore

Metadata forms are made up of field groups and elements (looks a lot like InfoPath form elements)

You can also have your vocabulary lists automatically updated, & built on the fly

Repository architecture: the Repo API allows the editor not to have to worry about the implementation. The next level down is the Fedora business logic & I/O layer, then the Fedora repository.

Solr is updated in real time

Uses JMS messaging

Lightning Talks:

Forward: forward.library.wisconsin.edu

Uses Blacklight (shout-out to the Blacklight devs). Shows librarians who are recommended for a targeted search string.

Problems: no standardization in cataloging; differing licensing

Stable (can shoot it with guns & it still runs)

Ruby tool to edit MODS metadata

(using existing mods metadata)

“Opinionated XML”:  looking for feedback: http://yourmediashelf.org/blog

DAR: Digital Archaeological Record

Trying to allow archeologists to submit data sets with any encoding they want

Includes a google map on the search screen. Advances to filtering page for data results.  Tries to allow others to map their ontology to other, standardized ontologies

Hydra: Blacklight + ActiveFedora + Rails

Being used by Stanford to process dissertations & theses

Hydrangea for open repositories to be released this year, using MODS & DC

Why CouchDB

JSON documents, uses GET, PUT, DELETE, & POST

Stateless auto replication possible.

Includes CouchApps, which live inside CouchDB (you have to go get them and store them in CouchDB). Sofa, for example, outputs Atom feeds.
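
A minimal sketch of talking to CouchDB's REST interface with those HTTP verbs from Python; it assumes a CouchDB instance at localhost:5984 with no authentication, and the database and document names are illustrative.

```python
# Sketch: PUT to create a database and a JSON document, GET to read it back.
import json
from urllib.request import Request, urlopen

COUCH = "http://localhost:5984"   # assumed local CouchDB instance

def send(method, path, doc=None):
    body = json.dumps(doc).encode() if doc is not None else None
    req = Request(COUCH + path, data=body, method=method,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

# send("PUT", "/notes")                          # create the database
# send("PUT", "/notes/talk-couchdb",             # create a JSON document
#      {"title": "Why CouchDB", "tags": ["code4lib", "2010"]})
# print(send("GET", "/notes/talk-couchdb"))      # read it back
```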

Code4Lib2010 Notes from Day 1, morning

#1 keynoter: Cathy Marshall:

A web page weighs 80 micrograms?

15 years ago (a New Yorker article): well, anyone can have a web page, although most of them are commercial enterprises. All you need is to digitize your stuff. Born-digital pictures, even in TIFF, were low resolution.

Today 4.3 billion personal photos on Flickr alone.

Cathy Marshall:  guilty of “feral ethnography” (studying what others do)

Benign neglect: de facto archiving/collection policy? Is Facebook millennials' scrapbook? Should it be? Whose job is it to "remember"?

Digital stuff accumulates at a faster rate than physical space (out of sight & easier to “neglect”?)  CS view:  how is this a problem?  Storage is getting cheaper, why not just keep everything?

Why keep everything? It's difficult to predict an item's future worth (true for physical items as well); deleting is hard, thankless work; filtering and searching make things easy to locate. (It's easier to keep than to cull.)

Losing stuff becomes a means of culling collections (realizing afterwards that losing the stuff wasn’t such a big deal after all).

Attitude that “I don’t really need this stuff, but it’s nice to be able to keep it around.”

It's easier to lose things than to maintain them. The more things are used, the more likely they are to be preserved.

No single preservation technology will win the battle for saving your stuff, but people put/save stuff all over the place (Flickr, YouTube, blogs, Facebook...)

People lose stuff for non-technology reasons (they don't understand EULAs, lose passwords, die, etc.). For scholars, the key vulnerability is changing organizations (more so than technology failures).

Digital originals vs. reference copies:  highest fidelity (e.g., photos) is the one closest to the source (local computer).  Remote service, e.g., flickr, has the metadata, though, which becomes important for locating, filtering (save everything).  Where are the tools for gathering the metadata and finding the copies with the metadata?

Searching for tomorrow: Re-encountering techniques?  But some encounters are with stuff that was supposed to disappear.

Bottom-up efforts are taking place (personal & small orgs); new institutions are showing up to digitize collections. New opportunistic uses of massed data.

Power of benign neglect vs. power of forgetting: some things you want to make sure are gone.  How sure are you that you really want to forget (data)?

Cloud computing talks:

Cloud4lib.

Coolest glue in the galaxy?  Is it even possible to have a centralized repository of development activities, especially among disparate libraries?

What is the base level, policy, that needs to exist in order to make it all work? (what exactly is the glue)  What is sticky enough to create critical mass?  Base services:  repository, metadata

Breakout session: brainstorming to figure out oversight & governance. I see "possible" but not "probable" here.

Linked Library Data Cloud:

From Tim Berners-Lee:  bottom line:  make stuff discoverable

Concept of triples in RDF:  Subject, Predicate, Object.

Linking by assertion, using a central index (e.g., id.loc.gov), which is linked data. But how to make bib data RDF? LCCN. Resources (linked, verified data as URIs).

If you have an LCCN in your MARC record, you already have what you need for Linked Data. If you know the LCCN, you can grab all the linked stuff and make it part of your data.
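
A minimal sketch of the subject-predicate-object idea in Python with rdflib: assert that a local bib record is about a subject identified by an id.loc.gov URI. The record URI and the particular heading URI are placeholders, not real identifiers.

```python
# Sketch: build a tiny RDF graph of triples linking a local record
# to a Library of Congress subject URI.
from rdflib import Graph, URIRef, Literal, Namespace

DC = Namespace("http://purl.org/dc/terms/")

record = URIRef("http://catalog.example.org/bib/12345")                 # placeholder
heading = URIRef("http://id.loc.gov/authorities/subjects/sh00000000")   # placeholder

g = Graph()
g.add((record, DC.title, Literal("An introduction to linked data")))
g.add((record, DC.subject, heading))   # the assertion that links out to LC

print(g.serialize(format="turtle"))
```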

OCLC's Virtual International Authority File (VIAF): another source for linked data.

No standard data model, but more linkages to resources are outside the library domain: how do we get them? And what about sustainability and preservation? If a source goes away, what do you do?

Do it Yourself Cloud Computing (with R and Apache)

How to use data and data analysis in libraries.

1.  What is The Cloud?  Replacement for the desktop: globally accessible

2. What is R? A free and open-source alternative to SAS, SPSS, and Stata; software that supports data analysis and visualization. It has a console! Eventually a GUI port? Cons: learning curve, problems with very large datasets. Pros: de facto standard, huge user community, extensible.

3. What is Rapache? R + Apache: an Apache module developed at Vanderbilt. It puts an instance of R in each Apache process, similar to PHP. You can embed R scripts in web pages.

4.  Relevance to libraries – keep the slide! keep the slide!

Public Datasets in the Cloud

IaaS (Infrastructure as a Service)

PaaS (Platform as a Service)

SaaS (Software as a Service)

In this case, raw data that can be used elsewhere, not just what can be downloaded from a web site

Demo: public datasets in the cloud (in this case, from Amazon EC2): get data from/onto the remote site, retrieve via SSH

Using Google Fusion Tables: you can comment, slice, and dice. You can embed the data from Google Fusion Tables on a web site.

7+ Ways to improve library UIs with OCLC web services

“7 really easy ways to improve your web services”

Crosslisting print and electronic materials: use the WorldCat Search API to check for and add a link to the main record

Link to nearby libraries that have a book that is out (a mashup: query by OCLC number and zip code)

Providing journal TOCs: use xISSN to see if a feed of recent article TOCs is available, and embed a link to open a dialog with the items within the UI

Peer review indicators: use data from xISSN to add peer-review info to (appropriate) screens

Providing info about the author: use WorldCat Identities and the Wikipedia API to insert author info into a dialog box within the UI

Providing links to free full text: use xOCLCNum to check free full-text scanning projects like the Open Content Alliance and HathiTrust and link to full text where available (see the sketch at the end of this section).

Add similar items? (without the current one also listed)

Creating a mobile catalog: put all your holdings in WorldCat and build a mobile site using the WorldCat Search API
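
As a sketch of the full-text idea above (see the item on free full text), here is one way to ask HathiTrust about an OCLC number from Python. The URL pattern and response fields follow my reading of the HathiTrust Bibliographic API and should be verified against current documentation before use.

```python
# Sketch: look up an OCLC number in HathiTrust and collect item links,
# so the OPAC can decide whether to show a "read online" link.
import json
from urllib.request import urlopen

def hathitrust_links(oclc_number):
    # Assumed URL pattern for the HathiTrust Bibliographic API.
    url = ("https://catalog.hathitrust.org/api/volumes/brief/oclc/"
           "%s.json" % oclc_number)
    with urlopen(url) as resp:
        data = json.load(resp)
    # "items" and "itemURL" are assumed response fields.
    return [item.get("itemURL") for item in data.get("items", [])]

# Example (hypothetical OCLC number):
# for link in hathitrust_links("424023"):
#     print(link)
```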

The Digital Age, Books, and Libraries

There's a lot of flag waving (especially by alarmed librarians) about the imminent demise of the book and of libraries. Actually, that's not quite it: the librarians are trying to fend off those who are buying into the idea that printed books, and libraries as we used to know them, are pointless vestiges of a prior era. The debate has been picked up by the New York Times, where it has been getting a lot of press (sorry) lately.

The biggest issue, which is only obliquely hinted at in the arguments floating around, especially those in the Times opinion piece, is accessibility. I have a book. No one anywhere can prevent me from sharing that book with you. No one anywhere can prevent you from sharing that book with someone else (once it’s in your possession). Granted, this is a single item, with geospatial limitations which can be transcended by electronic networks. But “electronic” has its own, more restrictive limitations. Does one have access to the electronic network? Does one have the equipment to access the electronic network? Is there a power source to enable access to the equipment (or network)? Does one have permission to access the electronic device/network/item?

Librarianship has always been about finding and gaining access to books/information. The interesting twist today is the gaining access part, which involves navigating rights and permissions, as well as delivery options: both print (is there a printer?) and electronic (does the recipient even have the means to access an electronic version?).

The discussion in the NYTimes column (and others) focuses on universities and private schools, essentially ignoring that part of the population that is (a) less educated, (b) less affluent, (c) less technologically savvy, and (d) any combination of the above. My guesstimate, from experience and prior research, is that those categories make up a significant minority of the US population (maybe up to 40%), and likely always will.

So to the issue of accessibility, add disenfranchisement.  Where will the have-nots get what the haves are being taught to take for granted?  Those “pointless” vestiges of a prior era really aren’t so pointless after all.

Computer Classes for Libraries and others

I promised this long ago, so it’s way past time to get these posted.

Feel free to modify and reuse these. They are provided under a Creative Commons Attribution, Non-commercial 3.0 license. If you require other terms, leave a comment with your contact information, and I will get back to you. Please note that the PowerPoint files are rather large (>4MB).

portablesoftware – This is a PowerPoint program covering Portable Software: what it is, how to install it, where to get it, and how to use it. There are two handouts that go with the program, Install Portable Software and Start Portable Apps Handout, both Open Document Text (.odt) documents.

eBooks and Audiobooks – This is a PowerPoint program I created for the Palm Beach County Library System, so there are still some vestiges of that library within the show. The handouts for this were specific to that library, so I have not included them. Contact me via the comment form below if you want them.

Beginning Internet – This is a PowerPoint program on Internet basics for beginners.

I will post more as I get them cleaned up.

The perfect absurdity of it all

Patron:  Where is computer number 37?

Librarian:  Between numbers 19 and 21, of course!

But of course!  It made perfect sense to us because we weren’t thinking of the numbers as being sequential.  They were simply labels.  But the hapless patron had looked around and seen computers numbered in what seemed to be a sequential order, quickly scanned for numbers in the 30’s, and found 34, 35, 36, and no 37.  There was a reason computer number 37 was put where it is, which made sense at the time, and its location has just been accepted matter of factly by everyone working at the library.

But the quick exchange caught me unexpectedly and I laughed at the perfect absurdity of it.  People come into a library expecting things to be nicely ordered so they will be easy to find.  But in this case our nicely ordered system made no sense at all.  We won’t change it, of course.  Changing the computer numbers on the computers themselves and within the local network will never make it even to the bottom of the short list of things to fix here.  Besides, it looks perfectly normal to us – 37 has been between 19 and 21 for so long it’s practically ingrained in our vision.

So I wonder what else we do that achieves that perfect absurdity, where the obvious eludes us or would take too much time or effort to change.

Libraries and Web 2.0

The Public Library of Charlotte and Mecklenburg County started a learning program for its employees a few years ago called 23 Things. It was intended to help people learn about new Web technologies that have changed the way we interact with the Web. It was evidently successful, and other libraries followed suit, using the formula and exercises set out by the Charlotte and Mecklenburg County Library.

Their site says that (as of May 2006) there are over 200 libraries using their Web 2.0 (23 Things) learning tool. It's a pretty neat set of exercises. But I have some suggestions for anyone in a library thinking of implementing this program.

  1. Don't have all your employees sign up for Gmail to be able to use Blogger (which is now owned by Google). They can get a Blogger account with any active e-mail address. When Google sees 300+ or 100+ or even 50+ e-mail accounts being generated, and accessed, from the same IP address, and Blogger accounts instantly created with those accounts, they're going to think one thing: Spammers! Actually they aren't doing the thinking; they set up bots to capture just those types of events, so they don't have to think about it. Once you're flagged as a spammer, forget trying to get the e-mail account unblocked.
  2. Unblock the content you want staff to be able to access. This would be a good time to take a closer look at just what your filter is blocking and whitelist those innocuous sites you want the staff to be able to play with.
  3. A lot has happened since 23 Things first appeared. If you are going to encourage learning technology, don’t limit the discussion and exercises to old technology. Do a little research, hang out at technology conferences, follow technology feeds, talk to a tech-savvy person, and find out what is current and what is coming down the pike. Then change or add to the discussion to make the exercise current and relevant.  Seriously.  Others have already made changes.  You can, too.
  4. If you haven’t already, read The Cluetrain Manifesto, available online or in print.  This really is an absolute must for administrators, whether you’re doing the 23 Things or not.

And, of course, you deserve great commendations for taking this step into Web 2.0.  Welcome!

Code4Lib Journal, Issue 2 now available!

Seriously, lots of good stuff:

Code4Lib: More than a journal

Free and Open Source Options for Creating Database-Driven Subject Guides

Using Google Calendar to Manage Library Website Hours

Geocoding LCSH in the Biodiversity Heritage Library

Toward element-level interoperability in bibliographic metadata

Help! A simple method for getting back-up help to the reference desk

Googlizing a Digital Library

Participatory Design of Websites with Web Design Workshops

Quick Lookup Laptops in the Library: Leveraging Linux with a SLAX LiveCD

The ICAP (Interactive Course Assignment Pages) Publishing System

Respect My Authority

Conference Report: Code4LibCon 2008

Whether you are in a public library, academic library, or special library, this issue has something for you. It is hard to pick a favorite among them, but I really like “Quick lookup laptops in the Library,” because it’s about using Linux to leverage old machines in the library.

I gotta say, it’s great being a part of the editorial team, bringing this to the world.

Code4Lib conference

I am heading off to another conference, this time to learn instead of teach. Code4Lib 2008 is in Portland, Oregon, next week.

If anyone is interested in stacking the deck for next year, I’m not above a shameless plug for a vote for South Florida for next year’s conference. If you have a login account at code4lib.org, go here to vote (note, some firewalls block the port in this url – leave a comment here if you are having problems). If you don’t have a login account at the code4lib site, you can get one here.