Posts tagged ‘code4lib’

Code4Lib2010 Notes from Day 3

Keynote #2:  Paul Jones

catfish, cthulhu, code, clouds, and Levenshtein (what is the Levenshtein distance between Cathy Marshall and Paul Jones?)
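
The keynote’s joke invites a definition: Levenshtein distance is the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. A minimal Python sketch:

```python
def levenshtein(a, b):
    """Edit distance: insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))  # distance from a[:0] to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))             # 3
print(levenshtein("Cathy Marshall", "Paul Jones"))  # the keynote's question
```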

Brains create code using certain assumptions about the recipient, which may or may not be accurate

Brains map based on how they are used.

(images showing difference in brain usage between users who are Internet Naive, and Internet Savvy: for Internet Naive, “reading text” image closely matches “Internet searching” image, but is very different for the Internet Savvy image)

Robin Dunbar: anthropologist who studies apes & monkeys.  Grooming, Gossip, and the Evolution of Language (book).  Neocortex ratio (ability to retain social relationship memory).  “Grooming” in groups maintains relationship memory.  In humans, the relationship can be maintained long distance via communication (“gossip”).  Dunbar circles of intimacy: the lower the number, the higher the “intimacy” of the relationship.

Who do you trust and how do you trust them?

Attributed source matters (S.S. Sundar & C. Nass, “Source effects in users’ perception of online news”).  Preference for source: 1. others, 2. computer, 3. self, 4. news editor; quality preference: 1. others, 2. computer, 3. news editors, 4. self (others = random preset; computer = according to a computer-generated profile; self = according to a psychological profile)

Significance:  small talk leads to big talk, which leads to trust.

Small talk “helps” big talk when there is: 1. likeness (homophily), 2. grooming and gossip, 3. peer-to-peer exchange in an informal setting, 4. narrative over instruction

Perceived problems of American nerds: 1. ADD, 2. Asperger’s (inability to pick up visual emotional cues), 3. hyperliterality & the jargon of their tasks and games, 4. friendlessness, 5. idiosyncratic humor (but this gets engineered away?)

but they: 1. multitask, 2. use text-based interactions (visual cues become emoticons), 3. mainstream jargon into slang, 4. redefine friendship, 5. use the power of Internet memes and shared mindspace

The gossipy part of social interactions around information is what makes it accessible, memorable and actionable.  But it’s the part we strip out.

Lightning Talks

Batch OCR with open source tools

Tesseract (no layout analysis) & OCRopus (includes layout analysis): both on Google Code

HocrConverter.py (Python script that builds a PDF file from an image)
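
A minimal batch-OCR sketch in Python, assuming the tesseract binary is on your PATH (the directory name is hypothetical):

```python
import glob
import subprocess

# OCR every TIFF in a directory; tesseract writes <base>.txt for each image.
for image in sorted(glob.glob("scans/*.tif")):
    base = image.rsplit(".", 1)[0]
    subprocess.check_call(["tesseract", image, base])
    # Newer Tesseract releases can reportedly emit hOCR instead
    # (for HocrConverter.py): ["tesseract", image, base, "hocr"]
```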

Code available at xplus3.net

VuFind at Western Michigan University

Uses the MARC 005 field to determine new books (now -5 days, now -14 days, etc.)
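
MARC 005 holds the date/time of the record’s latest transaction as yyyymmddhhmmss.f, so “new” is simple date math. A hedged Python sketch (not VuFind’s actual code):

```python
from datetime import datetime, timedelta

def is_new(field_005, days=14):
    """True if the 005 timestamp falls within the last `days` days."""
    stamp = datetime.strptime(field_005[:14], "%Y%m%d%H%M%S")
    return stamp >= datetime.now() - timedelta(days=days)

print(is_new("20100212093000.0", days=14))
```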

Please clean my data

Cleaning harvested metadata & cleaning OCR text

Transformation and translation steps added to the harvesting process (all metadata records are in XML); regexes as part of step templates; Velocity template variables store each regex (this gives direct access to the XML elements; the Java dom4j API then lets us do effectively whatever we wish)
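
That pipeline is Java (Velocity + dom4j), but the regex-per-step idea is easy to sketch; an analogous Python version with made-up cleanup rules:

```python
import re

# Each step is (pattern, replacement), applied to a metadata field in order.
STEPS = [
    (re.compile(r"\s+"), " "),                      # collapse runs of whitespace
    (re.compile(r"\s*[;,.]+\s*$"), ""),             # strip trailing punctuation
    (re.compile(r"(?i)^n\.?\s*d\.?$"), "undated"),  # normalize "n.d." dates
]

def clean(value):
    for pattern, replacement in STEPS:
        value = pattern.sub(replacement, value.strip())
    return value

print(clean("  New York ;  "))  # -> "New York"
```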

Using newspapers:  example of “fix this text” (within browser).  Cool

Who the heck uses Fedora disseminators anyway?

Fedora content models are just content streams.

(They put their stuff in Drupal.)

The disseminator lives in Fedora; it’s extended with PHP to display – and edit – in Drupal

Library a la carte Update

Ajax tool to create library guides, open source style

Built on building blocks (reuse, copy, share)

Every guide type has a portal page (e.g., subject guide, tutorial (new), quiz)

Local install or hosted; tech stack: Ruby, gems, Rails, database, web server

Open source evangelism:  open source instances in the cloud!

Digital Video Made Easier

Small shop needed to set up video services quickly.  Solution: use online video services for ingest, data store, metadata, file conversion, distribution, video player.

Put 3 demos in place (will be posted on the Twitter stream): blip.tv API, YouTube API

Disadvantages: data in the cloud, terms of service, API lag, varying support (YouTube supportive, blip.tv not so much)

GroupFinder

Tool to help students find physical space more easily.

Launched Oct. 2009; approx. 65 posts per week.

PHP + MySQL + jQuery

Creative Commons licensed

EAD, APIs & Cooliris

Limited by CONTENTdm (sympathy from the audience), but tricked out to integrate Cooliris.

(Pretty slick)

Talks

You Either Surf or You Fight: Integrating Library Services With Google Wave

Why? Go where your users are

Wave apps: gadgets & robots (real time interaction)

Google has libraries in Java and Python, deployed using Google App Engine (Google’s cloud computing platform; free for up to 1.3 million requests per day)

Create an app at appengine.google.com; get the App Engine SDK, which includes the App Engine Launcher (creates a skeleton application & deploys it into App Engine)

Set up the app.yaml (give the app a name and a version number [really important]). api_version is the API version of App Engine. The handler URL is /_wave/.*
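
A sketch of what that app.yaml might look like (the app name and script are placeholders):

```yaml
application: librarybot   # the app name registered at appengine.google.com
version: 1                # bump on every deploy -- really important
runtime: python
api_version: 1            # the App Engine API version, not yours

handlers:
- url: /_wave/.*
  script: librarybot.py   # the robot module
```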

Get the wave robot api (not really stable yet, but shouldn’t be a problem) & drop it into the app directory

Wave concepts: a wavelet is the current conversation taking place within Google Wave; a blip is each message within the wavelet (hierarchical); each (wavelet & blip) has a unique identifier, which can be addressed programmatically.
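
A robot sketch from memory of the v1 waveapi “getting started” sample – the module and method names shifted between SDK releases, so treat every name here as an assumption to check against your copy of the API:

```python
# Sketch based on the v1 waveapi sample; names are from memory and may
# not match your SDK release exactly.
from waveapi import events
from waveapi import robot

def OnParticipantsChanged(properties, context):
    """Greet the wave whenever participants are added."""
    wavelet = context.GetRootWavelet()
    wavelet.CreateBlip().GetDocument().SetText('A library robot has joined.')

if __name__ == '__main__':
    bot = robot.Robot('librarybot', version='1')  # hypothetical robot name
    bot.RegisterHandler(events.WAVELET_PARTICIPANTS_CHANGED,
                        OnParticipantsChanged)
    bot.Run()
```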

Code avail on github.com/MrDys/

External libs are OK – just drop them in the project directory.  Using BeautifulSoup to scrape the OPAC

Google Wave will do HTML content, sort of.  Use OpBuilder instead (but it is not well documented).  It doesn’t like CSS

For debugging, use logging library (one of the better parts)

Code4Lib2010 Notes from Day 2, afternoon

A better advanced search

http://searchworks.stanford.edu

How to filter multiple similar titles by the same author, or multiple author instances (artist as author, as subject, as added author), or combine multiple facet values

At the start: no drop-down boxes, only titled text boxes, based on the above.  Keyword (& Item Description) third on the list; “Subject Terms” instead of just “Subject”

Dismax & Solr local params.  Local param syntax: _query_:"{!dismax qf=…}…"
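
In practice, nested dismax clauses let one advanced-search form combine separately weighted field sets in a single q parameter; a hedged example (field names are illustrative):

```
q=_query_:"{!dismax qf='title_t^10 subject_t'}darwin" AND _query_:"{!dismax qf=author_t}darwin"
```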

jQuery functions added to multi-facet search boxes; also added faceting to results (actionable facets)

The search breadcrumbs got really complex.

Drupal 7: A more powerful platform for building library applications

Has a new information architecture; writes things into “contexts” (an attempt to make it easier for the end user)

Users can cancel their own account

New admin theme, toolbars & shortcuts (taken from admin menu toolbar module); dashboard (add what you want)

Uses overlays (rather than changing page)

Module selection screen changed to landscape table view.

Permission screen: allows admin role (same as 1st user)

Install options are default, or minimal profile.

Minimum software requirements: PHP 5.2, MySQL 5.0 (or PostgreSQL 8.3)

File System changes: separate public and private paths

Has native image handling out of the box

Email security notifications are set automatically at install; the PHP filter module is now global.

cron.php requires a key in the URL to run

Field UI (included in core) draws from the CCK module in Drupal 6.  Field types: boolean, decimal, file, image, list, text, taxonomy, etc.; fields can apply to almost anything

Update manager: upload and install a module/theme from within Drupal

Page elements are assignable; templating system changed (more consistent?)

The base theme is based on Zen: Stark (naked)

Theming of content is now granular (content can be pulled from container for theming)

JavaScript uses jQuery 1.3, jQuery Forms 2.2 & jQuery UI 1.7; AJAX framework from CTools

Really backend stuff: the database abstraction layer can utilize PHP Data Objects; dynamic select queries; stream wrappers (URIs can be referenced); the field API is not node-specific & any element can be fieldable

Enhancing Discoverability With Virtual Shelf Browse

Displays book covers, with mouseovers; scrolls right and left

Not everything has a cover image; uses “faux covers” à la Google Books

Goal: browse an arbitrary number of titles around a known item in call number order, including online items & all locations

Daily output from the ILS to delimited text, then DB ingest with Python, into a call number index in MySQL; call numbers are stored in alternate formats within the table records.
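
A hedged sketch of the neighbor lookup; the schema and normalized sort keys are invented, and sqlite3 stands in for the MySQL index:

```python
import sqlite3

def shelf_neighbors(conn, sort_key, n=10):
    """Return up to n titles on each side of a normalized call number key."""
    before = conn.execute(
        "SELECT bib_id, call_number FROM shelf "
        "WHERE sort_key < ? ORDER BY sort_key DESC LIMIT ?",
        (sort_key, n)).fetchall()
    after = conn.execute(
        "SELECT bib_id, call_number FROM shelf "
        "WHERE sort_key >= ? ORDER BY sort_key ASC LIMIT ?",
        (sort_key, n)).fetchall()
    return list(reversed(before)) + after

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shelf (bib_id TEXT, call_number TEXT, sort_key TEXT)")
conn.execute("INSERT INTO shelf VALUES ('b1', 'QA76.9 .D3', 'QA 0076.9 D300')")
print(shelf_neighbors(conn, "QA 0076.9 D300"))
```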

Front-end challenges: DOM = slow; multiple plugins = headache; remote servers = latency issues; too much AJAX = browser issues; IE is not a friend (d’oh!)

How to Implement A Virtual Bookshelf With Solr

ILS is Sirsi.

Problems with dirty data & no standard call number scheme (incl. SuDocs & theses/dissertations numbering schemes)

Code4Lib2010 Notes from Day 2, morning

Iterative Development:  Done Simply

Agile, Scrum (Agile case study)

Problem: too much to do (geez, even with a staff like that?); priorities & requirements change frequently, emergencies happen

IT black box: those on the outside don’t know what’s going on inside

Agile: a response to the waterfall method of software development; values working software, customer collaboration, change, interaction

Scrum: agile methodology

Roles:  product owner, scrum master (keeps things going), team

Artifacts: product backlog, sprint backlog

(visual of scrum sprint.)

Daily scrum (c. 15 min at the start of the day): what’s going on, bring everyone up to date

Sprint retrospective: what went well, what to change for next time

At NCSU: used to tackle big problems in small pieces.  Using iterative development loosely based on Scrum.  Just-in-time planning & documentation; cross-functional teams with an IT point person + developers; joint “owners”

NCSU toolbox:

Product & Sprint backlog: JIRA

Requirements: Confluence + JIRA

Planning: Google Docs + JIRA

Daily scrum

Sprint demo at product team meetings

6-week iteration: 1 week planning, 4 weeks development (realign as necessary), 1 week testing/release

Use 1 week to plan across multiple projects

1. High-level overview of upcoming projects (3–6 months)

Prioritize projects for the next iteration

Add to Google Docs

2. Meet w/ product owners for each prioritized project

3. At the end of the week, re-prioritize based on estimates & time availability; back into Google Docs (spreadsheet) w/ time estimates by individual.

4. Get it done: daily scrum, weekly review (how does progress look for the cycle? requires work logging), Subversion→JIRA integration

Issue burndown chart (should trend down as the goal nears)

Test throughout the cycle; demo at regular meetings, close tickets when the work is complete, put comments in JIRA to document issues.

Challenges: multiple small projects within a cycle – not traditional for agile practice

Lack of documented requirements (what are user stories, and when do you need them? librarian teams work slowly); prioritization is difficult for library staff (they work at the release level); testing (how & when for small projects); no QA experts; simultaneously handling support & development (a juggling act)

Outcomes: positive movement across multiple projects.  Individual development efforts are timeboxed; increased user satisfaction; increased flexibility to adapt (users hate seeing no movement).  “Positive energy” feedback and librarians seeing movement

Resources: Agile Project Management with Scrum (book)

Vampires vs. werewolves: ending the war between developers and sysadmins

Each side’s goal is satisfying the villagers w/ pitchforks

Innovation is about risk, & you don’t take risks with people you don’t trust.

Reach out and build trust:

1. Test: do you KNOW that your code works? Does it work on the system? Will it after a system upgrade? (Uses Nagios to monitor uptime – what’s working & what’s not; plug a system upgrade into Nagios monitoring to see what to expect.)

2. Documentation: for the sysadmins, so they can know what your process/code is supposed to do, who’s responsible, contact info, etc.

3. Puppet (system configuration tool; uses Ruby).  Book: Pulling Strings with Puppet.

I am not your mother, write your test code

Ad hoc testing: where’s the safety net?

Automated testing: regression testing (local code, local mods), making sure your code doesn’t break someone else’s, refactoring, reducing design complexity, specifying expected outcomes

Hudson: continuous integration tool

Selenium: a Firefox plugin – drives automated searches of a web site, makes assertions, asserts the presence of text at an XPath (but XPath is brittle)

(working in Ruby): RSpec, Cucumber, the Rails testing framework – RSpec & Cucumber primarily.  RSpec tests the whole stack.  Cucumber has “features” with “scenarios” (test instances)

Types of tests:

Unit tests: testing at the function or method level.  An integration test would test the interface between functions.  At a high level, test the stack

What to test: assert calls (assertEquals(…)); test branching – the differing paths of logic; test during bug fixes (first write a test that pins down the bug); test for exceptions and error handling (try–catch)
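
A toy Python version of those points – assertions, branch coverage, and an exception test (the loan_period function is hypothetical):

```python
import unittest

def loan_period(patron_type):
    """Branching logic under test: loan period in days by patron type."""
    if patron_type == "faculty":
        return 120
    if patron_type == "student":
        return 28
    raise ValueError("unknown patron type: %r" % patron_type)

class LoanPeriodTest(unittest.TestCase):
    def test_faculty_branch(self):
        self.assertEqual(loan_period("faculty"), 120)

    def test_student_branch(self):
        self.assertEqual(loan_period("student"), 28)

    def test_unknown_type_raises(self):
        self.assertRaises(ValueError, loan_period, "robot")

if __name__ == "__main__":
    unittest.main()
```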

Testing legacy code: start with bug fixing: at a high level, test the section (while trying to understand what’s going on).

Running tests: isolate what you’re testing, & watch for dependencies on other processes (e.g., network, DB, Solr, external services)

All modern languages have unit testing frameworks

Result: fewer bugs, better design, freedom to refactor

Media, Blacklight & Viewers like You (WGBH Interactive)

Using Fedora, lighttpd, Solr, MySQL, Blacklight (media screens), jQuery, PBCore (mirrors DC in a lot of ways).  Question to libraries: http://tinyurl.com/c4l-pbcore – how can PBS make PBCore more useful to libraries?

Metadata APIs: RDF/XML, OAI-PMH, unAPI, embed, uriplay (a common way of embedding stuff across sites)

Fedora has problems with large datastreams.

Moved from QuickTime RTSP streaming to H.264 pseudo-streaming using lighttpd

Apache sits in front of Fedora and streams data directly out of the data store.  Fedora manages the files but doesn’t have to deliver them

Rights: ODRL (metadata records are public, limited backend security policies).

Media workflows: have to integrate with a whole bunch of asset management systems; no standard identifier control, so there’s extensive manual processing

Becoming Truly Innovative: Migrating from Millennium to Koha

New York University Health Sciences Libraries, was on III’s Millennium.  Outside the library, IT departments in the medical center are merging.  HIPAA issues involved; circuitous route around the firewall to medpac (the OPAC).

When network security (IT) consolidated, policies made things break.  When the intermediate server was disconnected, the ILS went down.

NETSecurity required all vendors to connect through a Juniper VPN (etc.)

Enter the concept of open source: tested at the desktop level first with basic data on circulation, holds, acquisitions, serials, bibliographic & authority records

Circulation data: take from Millennium, normalize, put in Excel, compare to other lists, enter into Koha

Bib/authority: Millennium tags to MARC fields are not 1-to-1, not comprehensive, multiple subfields.  The migration tool did not work.  Used XRecords: pulled from III (supply a list of records in one line), put into XML, through XSLT to MARCXML, into Koha

The XSLT needs to be customized for each institution; III codes didn’t map directly to Koha codes (used conditionals, patterns)

Holds had to be handled manually

Migration took place from Aug 6 (cataloging freeze) to Aug 30; live Aug 31. (The contract w/ III ended on Aug 31.)

Bibliographic records = 80k, authority records = 5k, patron records = 11k; acquisitions = end of fiscal year – start over; serials were a problem

Outcome: good, OK – everything got in & done & everything is working fine; the code is available for others, but you have to write your own XSLT

Caveats: MFHD – no support yet; diacritics; cross-linked items; multiple barcodes per call number; keeping track of old record numbers; pull patron records on the day of migration, not before.

Code4Lib2010 Notes from Day 1, afternoon

Taking Control of Metadata with eXtensible Catalog

Open source, user-centered, “not big”

Customizable, faceted, FRBRized search interface

Automated harvesting and processing of large batches of metadata

Connectivity tools between XC, the ILS & NCIP; allows you to get MARCXML out of the ILS

XC Metadata Services Toolkit – takes DC from a repository, plus MARCXML, and sends it to Drupal (toolkit)

XC = eXtensible Catalog

Nice Drupal interface (sample).  Modify the PHP code to change the display

Create custom web apps to browse catalog content (fill out a web form), or preset search limits & customized facets

XC Metadata Services Toolkit: cleans, aggregates, makes records OAI-compatible.  Tracks “predecessor–successor” records (changes)

The XC OAI Toolkit transforms MARC to MARCXML; drivers are available for Voyager, Aleph, NTT, III, III/Oracle (note from a Twitter response: waiting to see if there is interest in SIRSI)

www.eXtensibleCatalog.org – still needs funding/community support

Matching Dirty Data, yet another wheel

Matching bib data with no common identifiers

Goal: ingest metadata and PDFs for ETDs received in DSpace

MARC data in UMI: filename & abstract; ILS MARC data: OCLC #, author, title, date type, dept., subject

Create Python dictionaries, do exact & fuzzy matches.  Find the intersection of the keys and filter/refine from there

Reduce the search space (get rid of common words, not just stop words)

Jaro–Winkler algorithm: two characters match if they are within a reasonable distance of one another; works best for short strings
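
A stdlib-only sketch of the exact-then-fuzzy flow; difflib stands in for Jaro–Winkler (which lives in third-party libraries), and the keys and values are invented:

```python
import difflib

# Normalized "author title" keys -> payloads (all invented data).
umi = {"smith my dissertatoin title": "file001.pdf"}   # note the OCR-ish typo
ils = {"smith my dissertation title": "ocm123",
       "jones another thesis": "ocm456"}

# Pass 1: exact matches are just the intersection of the key sets.
exact = set(umi) & set(ils)

# Pass 2: fuzzy-match the leftovers against the ILS keys above a cutoff.
fuzzy = {}
for key in set(umi) - exact:
    candidates = difflib.get_close_matches(key, list(ils), n=1, cutoff=0.9)
    if candidates:
        fuzzy[key] = candidates[0]

print(exact, fuzzy)
```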

String comparison tutorial; SecondString (Java text-analysis library, on SourceForge); MARCXimiL

http://snurl.com/uggtn

HIVE:  Helping Interdisciplinary Vocabulary Engineering

“Researchy” (as in not too well developed)

Problem: terms for self-publishing instances.  Solution: combine terms from existing vocabularies (LCSH, MeSH, NBII Thesaurus) & compare to the labeling

SKOS: somewhere between ontologies and cataloging

An abstract is run through HIVE, which outputs an extracted-concept cloud, color-coded to represent the catalog/ontology source.

Based on the Google Web Toolkit; currently available on Google Code

Coming soon: HIVE API, SPARQL, SRU

http://datadryad.org

Metadata Editing, a truly extensible solution: Trident

DukeCore:  Duke U’s DC wireframe

MAP (metadata application profile): works by instructing an editor how to build a UI for editing; creates a schema-neutral representation of the metadata (a metadata form).  The editor only needs to understand the metadata form and communicate with the repository via the API

Editor/repository manager app: built on a web services API, so it doesn’t need to know what’s behind it.  Uses Python, Django, Yahoo grids, and jQuery

Uses a RESTful API

Starts in the metadata schema, DukeCore, transforms to MDR; validations are applied to create a packet that is returned to the user interface.  On submission, it goes to MDF and then back to DukeCore

Metadata forms are made up of field groups & elements (looks a lot like InfoPath form elements)

You can also have your vocabulary lists automatically updated & built on the fly

Repository architecture: the repo API means the editor doesn’t have to worry about the implementation.  The next level down is Fedora business logic & an I/O layer, then the Fedora repository.

Solr is updated in real time

Uses JMS messaging

Lightning Talks:

Forward: forward.library.wisconsin.edu

Uses Blacklight (shoutout to the Blacklight devs).  Shows librarians who are recommended for a targeted search string.

Problems: no standardization in cataloging; differing licensing

Stable (can shoot it with guns & it still runs)

Ruby tool to edit MODS metadata

(using existing MODS metadata)

“Opinionated XML”:  looking for feedback: http://yourmediashelf.org/blog

DAR: Digital Archaeological Record

Trying to allow archaeologists to submit data sets with any encoding they want

Includes a Google map on the search screen; advances to a filtering page for data results.  Tries to allow others to map their ontology to other, standardized ontologies

Hydra: Blacklight + ActiveFedora + Rails

Being used by Stanford to process dissertations & theses

Hydrangea, for open repositories, to be released this year, using MODS & DC

Why CouchDB

JSON documents, uses GET, PUT, DELETE, & POST
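
That HTTP interface is simple enough to poke with the standard library alone; a sketch against a local CouchDB (the database and document names are invented):

```python
import http.client
import json

conn = http.client.HTTPConnection("localhost", 5984)

# PUT creates a database...
conn.request("PUT", "/notes")
print(conn.getresponse().read())

# ...and PUT to /db/docid stores a JSON document.
doc = {"talk": "Why CouchDB", "conf": "code4lib 2010"}
conn.request("PUT", "/notes/why-couchdb", body=json.dumps(doc),
             headers={"Content-Type": "application/json"})
print(conn.getresponse().read())

# GET it back; the stored doc comes back with CouchDB's _id and _rev added.
conn.request("GET", "/notes/why-couchdb")
print(json.loads(conn.getresponse().read()))
```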

Stateless; auto-replication possible.

Includes CouchApps, which live inside CouchDB (you have to go get them & store them in CouchDB); e.g., Sofa outputs Atom feeds.

Code4Lib2010 Notes from Day 1, morning

#1 keynoter: Cathy Marshall:

A web page weighs 80 micrograms?

15 years ago (New Yorker article): well, anyone can have a web page, although most of them are commercial enterprises.  All you need is to digitize your stuff.  Born-digital items (pictures), even in TIFF, were low resolution.

Today 4.3 billion personal photos on Flickr alone.

Cathy Marshall:  guilty of “feral ethnography” (studying what others do)

Benign neglect: de facto archiving/collection policy?  Is Facebook millennials’ scrapbook?  Should it be?  Whose job is it to “remember”?

Digital stuff accumulates at a faster rate than physical stuff (out of sight & easier to “neglect”?).  The CS view: how is this a problem?  Storage is getting cheaper; why not just keep everything?

Why keep everything?  It’s difficult to predict an item’s future worth (true for physical items as well); deleting is hard, thankless work; filtering and searching allow locating things easily. (It’s easier to keep than to cull.)

Losing stuff becomes a means of culling collections (realizing afterwards that losing the stuff wasn’t such a big deal after all).

Attitude that “I don’t really need this stuff, but it’s nice to be able to keep it around.”

It’s easier to lose things than to maintain them.  The more things are used, the more likely they are to be preserved.

No single preservation technology will win the battle for saving your stuff, but people put/save stuff all over the place (Flickr, YouTube, blogs, Facebook…)

People lose stuff for non-technology reasons (they don’t understand EULAs, lose passwords, die, etc.).  For scholars, the key vulnerability is changing organizations (more so than technology failures).

Digital originals vs. reference copies: the highest-fidelity copy (e.g., of photos) is the one closest to the source (the local computer).  The remote service, e.g., Flickr, has the metadata, though, which becomes important for locating and filtering (save everything).  Where are the tools for gathering the metadata and finding the copies with the metadata?

Searching for tomorrow: Re-encountering techniques?  But some encounters are with stuff that was supposed to disappear.

Bottom-up efforts are taking place (personal & small orgs); new institutions are showing up to digitize collections.  New opportunistic uses of massed data.

The power of benign neglect vs. the power of forgetting: some things you want to make sure are gone.  How sure are you that you really want to forget (data)?

Cloud computing talks:

Cloud4lib.

Coolest glue in the galaxy?  Is it even possible to have a centralized repository of development activities, especially among disparate libraries?

What is the base level, policy, that needs to exist in order to make it all work? (What exactly is the glue?)  What is sticky enough to create critical mass?  Base services: repository, metadata

Breakout session: brainstorming to figure out oversight & governance.  I see “possible” but not “probable” here.

Linked Library Data Cloud:

From Tim Berners-Lee:  bottom line:  make stuff discoverable

Concept of triples in RDF:  Subject, Predicate, Object.

Linking by assertion, using a central index (e.g., id.loc.gov), which is linked data.  But how to make bib data RDF?  LCCN.  Resources (linked, verified data as URIs).

If you have an LCCN in your MARC record, you already have what you need for Linked Data.  If you know the LCCN, you can grab all the linked stuff and make it part of your data.
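
A sketch with rdflib; the URI pattern and identifier below are placeholders, so check id.loc.gov for the live forms:

```python
from rdflib import Graph

# Placeholder authority URI; id.loc.gov serves RDF/XML for its identifiers.
uri = "http://id.loc.gov/authorities/subjects/sh85076502.rdf"

g = Graph()
g.parse(uri, format="xml")

# Dump whatever assertions came back.
for subject, predicate, obj in g:
    print(subject, predicate, obj)
```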

OCLC’s Virtual International Authority File (VIAF): another source for linked data.

No standard data model, but more and more linkages are to resources outside the library domain: how to get them?  And what about sustainability and preservation?  If a source goes away, what do you do?

Do it Yourself Cloud Computing (with R and Apache)

How to use data and data analysis in libraries.

1.  What is the Cloud?  A replacement for the desktop: globally accessible

2.  What is R?  A free and open source alternative to SAS, SPSS, Stata.  Software that supports data analysis and visualization.  It has a console!  Eventually a GUI port?  Cons: learning curve, problems with very large datasets.  Pros: de facto standard, huge user community, extensible.

3.  What is Rapache?  R + Apache: an Apache module developed at Vanderbilt.  It puts an instance of R in each Apache process, similarly to PHP.  You can embed R script in web pages.

4.  Relevance to libraries – keep the slide! keep the slide!

Public Datasets in the Cloud

IaaS (Infrastructure as a Service)

PaaS (Platform as a Service)

SaaS (Software as a Service)

In this case, raw data that can be used elsewhere, not just what can be downloaded from a web site

Demo: public datasets in the cloud (in this case, from Amazon EC2): get data from/onto the remote site, retrieve via SSH

Using Google Fusion Tables: you can comment, slice, and dice.  You can embed the data from Google Fusion Tables onto a web site.

7+ Ways to improve library UIs with OCLC web services

“7 really easy ways to improve your web services”

Cross-listing print and electronic materials: use the WorldCat Search API to check for/add a link to the main record

Link to nearby libraries that have a book that is out (mashup query by OCLC number and ZIP code)
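
From memory of the WorldCat Search API’s library-locations request (verify the endpoint and parameters against OCLC’s documentation before relying on it; the OCLC number and key are placeholders):

```python
import urllib.request

OCLC_NUM = "42768063"   # placeholder OCLC number
ZIP_CODE = "27601"      # placeholder ZIP code
WSKEY = "YOUR_WSKEY"    # your WorldCat Search API key

url = ("http://www.worldcat.org/webservices/catalog/content/libraries/"
       "%s?location=%s&wskey=%s" % (OCLC_NUM, ZIP_CODE, WSKEY))
print(urllib.request.urlopen(url).read())
```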

Providing journal TOCs: use xISSN to see if a feed of recent article TOCs is available; embed a link in the UI that opens a dialog with items from the catalog.

Peer-review indicators: use data from xISSN to add peer-review info to (appropriate) screens

Providing info about the author: use WorldCat Identities and the Wikipedia API to insert author info into a dialog box within the UI

Providing links to free full text: use xOCLCNum to check for free full-text scanning projects like the Open Content Alliance and HathiTrust, and link to the full text where available.

Add similar items? (without the current one also listed)

Creating a mobile catalog: put all our holdings in WorldCat and build a mobile site using the WorldCat Search API

Code4Lib Journal, Issue 2 now available!

Seriously, lots of good stuff:

Code4Lib: More than a journal

Free and Open Source Options for Creating Database-Driven Subject Guides

Using Google Calendar to Manage Library Website Hours

Geocoding LCSH in the Biodiversity Heritage Library

Toward element-level interoperability in bibliographic metadata

Help! A simple method for getting back-up help to the reference desk

Googlizing a Digital Library

Participatory Design of Websites with Web Design Workshops

Quick Lookup Laptops in the Library: Leveraging Linux with a SLAX LiveCD

The ICAP (Interactive Course Assignment Pages) Publishing System

Respect My Authority

Conference Report: Code4LibCon 2008

Whether you are in a public library, academic library, or special library, this issue has something for you. It is hard to pick a favorite among them, but I really like “Quick lookup laptops in the Library,” because it’s about using Linux to leverage old machines in the library.

I gotta say, it’s great being a part of the editorial team, bringing this to the world.

Code4Lib conference

I am heading off to another conference, this time to learn instead of teach. Code4Lib 2008 is in Portland, Oregon, next week.

If anyone is interested in stacking the deck for next year, I’m not above a shameless plug for a vote for South Florida for next year’s conference. If you have a login account at code4lib.org, go here to vote (note, some firewalls block the port in this url – leave a comment here if you are having problems). If you don’t have a login account at the code4lib site, you can get one here.

Code4Lib Journal now online!

The first issue of Code4Lib Journal is now available, thanks to the efforts of Jonathan Rochkind, who spearheaded getting a group of volunteers to put it all together and get it up on the web. The journal’s mission is “to foster community and share information among those interested in the intersection of libraries, technology, and the future.” Jonathan’s Editorial Introduction explains it all.

It is worth taking a look at, and keeping an eye on, whether you are involved in libraries or not. (As an aside, I happen to be one of the editors.)