Google, tech support, and your parents

Google has entered the tech support arena: http://www.teachparentstech.org/watch.  The short help videos are slick and appealing, at least to the target audience: a younger generation that is very tech savvy, with parents or grandparents who are not.  One of my sons came across them and asked if I thought they would be helpful for his grandparents, who are in their 80s.  I went to investigate.

The Tech Support care package is a set of quick videos intended to make using Google products easier.  It makes sense.  You have a product.  You do a market analysis.  Where can you expand? In technology, an obvious place to expand is the older adult market, the fastest-growing segment of the population.  But there are problems with that market segment (see my Connecting the Disconnected series of posts, as well as the Computers, Older Adults, and Libraries page).  So Google, in a style very reminiscent of Apple, has created some help videos for basic computer tasks as well as for using Google products.  They are short (good idea), to the point (good idea), and friendly (good idea).  Some are good; some are fails.

The first issue is: how basic is “Basic”?  On the assumption that this is intended for someone who at least uses email (after all, the front page of the site is an invitation to email these helpful videos to the one you think needs them), how much existing knowledge does that presume?  Looking at the set of 6 basic videos, the following knowledge and skills are expected:

  • How to use the mouse, including the right and left buttons (or right and left side of the mouse).
  • How to click and drag.
  • What the various special function keys are (such as the Control key or Command key) and where they are.
  • How to browse a computer’s file structure.
  • What a computer file is, and what the different types of computer files are (such as jpg, pdf, docx).
  • How to use email, including attaching files.

How reasonable are these expectations?  I fall back on the standard evasive answer:  That depends. 🙂  I have developed a lot of computer training and taught a lot of people how to use computers.  They have ranged in age from their thirties to their nineties, with varying levels of computer skills at every age (although, in general, the older they are, the fewer computer skills they have).  For those who had no experience with computers, my goal was to teach them how to use the internet and how to use email.  Once they reached that level, I could teach them more advanced things like bookmarking web sites, basic computer skills and file structure, and sharing photos.  Some of these videos presume more skills than I did even for the next step beyond the new-user level.  For example, files and file structure, and email attachments, were covered in our more advanced user classes.

As an aside, Gmail was one of the email services we tested on the new user groups.  It did not work out well, because (1) Google kept changing the service and interface, and (2) it was too confusing for a typical older user to figure out.  I tried to contact Google about creating a user interface that would work for older adults.  Obviously, I didn’t get their attention.

So the videos really aren’t all that basic, except to technologists who find the featured tasks unbelievably mundane.  But how useful are they to their intended audience (the older adult who already has some computer skills)?  Again, it depends:

  • How old is the recipient of the “Care” package?  The older the person is (generally, 55+), the more they need explicit instructions, using discrete steps. The visuals are nice, but sometimes they move too fast and skip over steps.  Also, the language is often not explicit enough for an older adult.
  • How experienced is that person with computers?  This question is actually tied to the next one.  Older adults do not tend to keep up with changes in technology as much as their children/grandchildren.  But generally, the more experience they have, the less difficulty they have learning new, related skills.
  • What operating system, and what version of the operating system, is that person using?  Because older adults tend not to update their skills (learn why here), they are usually using an old computer and operating system (it was not uncommon to have students in my classes who were using Windows 98).  The changes from Windows 98/2000/XP to Windows 7 or OS X Snow Leopard are intimidating to an older adult (again, generally, 55+).  These videos assume the recipient will be comfortable using one of those operating systems.  That is a big assumption.

Bottom line: If the intended recipient of these cute care packages is under 55 and has some experience using a recent operating system, the videos will likely be useful. If the recipient is over 55 and/or is not using a recent operating system, a few of the videos would still be useful:  How to Create a Strong Password, How to Know if an Email is Real, and most of the Search Information videos.  Also note that there are a lot more Mac-centric videos than Windows ones.

Would it work for my parents, in their 80s, who have been using computers since the first Apples came out and currently have Snow Leopard?  Actually, no.  They would have difficulty following most of them, and for the rest, they wouldn’t see the point.

I’d be happy to take that off your hands

In the not too distant past, I was manning the reference desk, listening to a man say he had to come to the library to use the computers because his laptop was so badly infested with viruses that he had to throw it away.

“You threw it away?” I asked, incredulously.

“Yeah, it’s worthless now.  I can’t use it.  I’m just going to throw it away.”

Realizing he hadn’t actually thrown it away yet, but was willing to, I glibly asked if he’d throw it my way.  He looked at me incredulously at the same time I realized there were probably some ethical issues involved.  So I said, “Or, I could show you how to make it usable again so there will never be another virus on it.”

He was still incredulous.  I assured him it can be done.  He wanted to know what he could do for me.  I told him “Never tell anyone about this,” forming a mental image of what would happen if he went out and told all his friends, or worse, wrote to the director about what I’d done for him.

He came back a couple days later, but didn’t have the laptop with him.  I hooked him up with a copy of Keir Thomas’ Beginning Ubuntu Linux, and a newer version of the CD included in the book.  He was still somewhat incredulous.  He left the book, but promised to come back the next day with the laptop.  Unfortunately, I didn’t see him again after that. I’m still wondering whether the original story was true, or if my comments prompted him to find someone to clean up the laptop for him.

I’ve since left that job.  Sometimes I miss the interesting world of public libraries.

Administrators vs. Technology

Somehow this post got lost in the drafts folder.  But since it’s an enduring topic, it’s still current. 🙂

A friend has some advice for library administrators:  The Top Ten Things Library Administrators Should Know About Technology.  It’s not a new subject, but it’s a topic that is being discussed openly more and more. 🙂  One gets the impression administrators are actually beginning to realize computer technology is not only not going to stand still, it is moving on at a dizzying pace that demands attention.

Now Roy Tennant is one of those icons in the library technology world who is worth listening to.  But technology geeks sometimes write in a language which makes the eyes of library administrators glaze over (been there, done that, got the T-shirt).  So I offer here a translation service for the first four items in Roy’s excellent post.

1. Technology isn’t as hard as you think it is.

The tools available for getting websites up and running are much easier to use than they were a few years ago, and they get better every day.  Some things are still complicated (like writing software), but basic services don’t require that knowledge.

2. Technology gets easier all the time.

Installing specialized software used to be hard.  Today there are pre-packaged installers for complex software that make installation a snap.

3. Technology gets cheaper all the time.

Even if you pay a third party to store your web site and make it available on the Internet, the cost of what you can get today is much less than it was even a few years ago, and it keeps getting cheaper.

4. Maximize the effectiveness of your most costly technology investment — your people.

Hardware is cheap (all of it).  The expensive part of technology is knowledgeable staff.  Don’t make it harder for your expensive staff when the tools are so cheap by comparison.

The rest don’t need translating. 🙂

These really are points that need to be made again and again until administrators start feeling more comfortable with the technology side of library services.  The problem is, are any administrators listening?  Really listening?  Roy has a larger library audience than I have 🙂  Maybe there will be a few who will read and take heart, especially since LISnews posted it as news.

Creating a Comparison Matrix

Charles Bailey has published a very helpful bibliography (Digital Curation and Preservation Bibliography, v.1), from which the resources below were gleaned.  In addition, I have been adding resources to Mendeley, a research management tool: Digital Curation, Digital Library Best Practices & Guidelines, Digital Library Systems, and Metadata.

I have added a few more open source items, and a lot of proprietary systems I discovered thanks to Mr. Bailey’s rich resource.  I am constructing a matrix of features for comparison (a rough sketch of the matrix structure follows the list), borrowing from the reports above and my initial chart, based mainly on the features that are most important for our needs:

  • Product
  • URL
  • Owned by/Maintained by
  • License type
  • Runs on (OS)
  • Database
  • Server Software
  • Interoperability with Digital Repository Systems
  • Works with (what other software)
  • Programming Lang
  • Additional hardware or software required
  • Hosting available
  • OAI-PMH?
  • Rights management
  • Manage Restricted Materials
  • User submission
  • Set processing priorities
  • Manage processing status
  • Localization options
  • Formats supported
  • Image file import (TIFF, JPEG, etc.)
  • A/V file import
  • Text file import (TEI, PDF, etc.)
  • Image file management w/ associated metadata
  • A/V file management w/ associated metadata
  • Text file management w/ associated metadata
  • Batch edit
  • DC type
  • METS
  • MODS
  • MARC
  • Imports (MARC, EAD, Tab Delimited/CSV)
  • Batch Import (MARC, EAD, CSV)
  • Exports (MARC, EAD, MADS, MODS, METS, Dublin Core, EAC, Tab Delimited)
  • Batch Exports (MARC, EAD, MADS, MODS, METS, Dublin Core, EAC, Tab Delimited)
  • Easy Data Entry
  • Spell Check
  • PREMIS?
  • Other Schemas
  • Create description record from existing record and automatically populate fields
  • Item-level Description
  • Link accession and description records
  • Link accession record to multiple description records
  • Link description record to multiple accession records
  • Hierarchical levels (fonds, collection, sous-fonds, series, sub-series, files, items), with links between parts of the hierarchy
  • Ability to reorganize hierarchies
  • Flexibility of Data Model
  • Templating/default fields
  • Controlled vocabularies
  • Authority Records
  • Link authority record to unlimited description records
  • Link description record to unlimited authority records
  • Compliance to Archival Standards
  • Data validation
  • Backup/Restore utility
  • Integrated Web Publication
  • Public search interface
  • Advanced search (by field)
  • Faceted Search
  • Browse levels
  • Search results clearly indicate hierarchical relationships of records
  • Records linked to other parts of hierarchy
  • User Access and Data Security Function
  • Control who can delete records
  • User permissions management
  • Control when record becomes publicly accessible
  • Feeds
  • Install Notes
  • Forum/List URL
  • Bug tracker URL
  • Feature Req URL
  • Trial/demo/sandbox
  • Training available
  • Technical support provided by developers
  • User Manuals (user, admin)
  • Context-specific help
  • Page turning
  • Developer customization available
  • User customization permitted
  • What reports
  • Customize reports
  • Repository statistics
  • Plugins
  • UTF
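
To keep the comparison manageable, I am keeping the matrix in spreadsheet form.  As a rough sketch of what that looks like under the hood, the short script below writes a few illustrative rows to CSV; the products, values, and the handful of columns chosen here are placeholders, not findings:

    # Minimal sketch: write a few rows of the comparison matrix to CSV.
    # Columns are drawn from the list above; the sample values are invented.
    import csv

    columns = ["Product", "URL", "License type", "Runs on (OS)",
               "OAI-PMH?", "Hosting available"]

    rows = [
        {"Product": "Example System A", "URL": "http://example.org/a",
         "License type": "Open source (GPL)", "Runs on (OS)": "Linux",
         "OAI-PMH?": "Yes", "Hosting available": "No"},
        {"Product": "Example System B", "URL": "http://example.org/b",
         "License type": "Proprietary", "Runs on (OS)": "Windows/Linux",
         "OAI-PMH?": "Unknown", "Hosting available": "Yes"},
    ]

    with open("comparison_matrix.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=columns)
        writer.writeheader()
        writer.writerows(rows)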

Comparing Digital Library Systems

I am currently evaluating options for implementing a digital library.  It’s an ongoing process. :o)  Since there are probably more proprietary systems out there, I’m hoping people will leave comments letting me know about them (same thing for open source).  I’ll post the charted results when I’m done (hopefully in the near future).

There are several digital asset management systems for digital libraries. On the proprietary side (closed source) there are (this is not an exhaustive list):

  • ContentDM (OCLC): software that handles the storage, management and delivery of library digital collections to the Web
  • DigiTool (ExLibris)
  • Archivalware (PTFS): a web-based, full-text search and retrieval content management system.
  • SKCA (CuadraStar):  Star Knowledge Center for Archives
  • Eloquent: A suite of applications: Librarian (ILS), Archives (physical archives management), Records (records management), and Museum, which can be purchased individually or combined for a complete content management system (Museum+Librarian+Archives).
  • Mint: a “cultural asset management system” mix of their individual products M2A (archives), M2L (libraries), and M3 (museums).  Based in Canada (Link updated).
  • PastPerfect: primarily for museums, includes library integration.
  • Proficio: collections management system from Re:discovery.
  • Gallery Systems: a suite of software products for management and web publishing
  • Questor Argus: Collection management and portal software (Link updated).
  • Mimsy XG: collection management and web publishing software (Link updated).
  • IDEA: content management and web publishing software, with modules for libraries, archives, and museums
  • EMu: Museum and archive management software from KEsoft (includes web publishing)
  • Digital Commons: A repository system developed by Berkeley Electronic Press.  They set up and maintain a hosted site.
  • SimpleDL: options for hosted library or licensed software on a local server.  Unfortunately, there is not much information on who, what, or how within the site.
  • AdLib: Library, archival, and museum software systems from Adlib Information Systems.  There is a free “lite” version of the Library and Museum software (requires registration).

On the open source side, there are (also not an exhaustive list):

  • CollectiveAccess: a highly configurable cataloguing tool and web-based application for museums, archives and digital collections. There is a demo to try it out. (Link updated).
  • Greenstone: a suite of software for building and distributing digital library collections.  Greenstone is produced by the New Zealand Digital Library Project
  • Omeka: a free, flexible, and open source web-publishing platform for the display of library, museum, archives, and scholarly collections and exhibitions.  There is a sandbox to try it out.
  • DSpace: software to host and manage subject based repositories, dataset repositories or media based repositories
  • ResourceSpace: a web-based, open source digital asset management system which has been designed to give your content creators easy and fast access to print and web ready assets.
  • CDS Invenio:  a suite of applications which provides the framework and tools for building and managing an autonomous digital library server. (Link updated).
  • Islandora: A project combining Fedora and Drupal (web content management system).  It has a VirtualBox demo download available. (Link updated).
  • Razuna: an open source digital asset management system, with hosting options and consulting services to set up and deploy the system.
  • Digital Collection Builder (DCB):  from Canadiana.org, a software distribution built from the Qubit Toolkit for Libraries & Museums. (Updated URL goes to canadiana.org tools)
  • ICA-AtoM Project: (“International Council on Archives – Access to Memory”): a software distribution built from the Qubit Toolkit, for Archives.  An online demo is available, as well as a downloadable version (update: see this site for currently supported version).
  • CollectionSpace: a collections management system and collection information system platform, primarily for museums. Current version is 0.6
  • NotreDAM: Open source system developed in Italy by Sardegna Richerche.  A demo (updated URL; software on GitHub) is available, as well as documentation (update: see GitHub project page).  It is not a trivial install, requiring two instances of Ubuntu 9.10, but there is a VirtualBox (update: see GitHub location) instance for evaluation purposes. (Link updated).

There is also repository software, like Fedora, which can be used with a discovery interface such as Blacklight, or Islandora.

The main difference between the proprietary systems and the open source systems listed above is economics.  While the argument in the past has been that open source systems are not as developed and require more in-house expertise to implement, that is no longer the case.  For one thing, even proprietary systems require varying levels of in-house expertise in order to realize the full functionality of their features (see, e.g., Creating an Institutional Repository for State Government Digital Publications).  For another, as the number of libraries implementing digital libraries with resource discovery has increased, development of digital asset management systems has matured beyond the alpha, and sometimes even beta, stage.  Open source systems which did not reach critical mass have quietly died or been absorbed into better supported products.  In the proprietary field, systems typically are developed within a parent organization that includes other software, such as an integrated library system, whose profits support R&D for the DAM.

So, while economics should broadly encompass all aspects of  implementation, including time and asset costs, in this case the economics is primarily the money involved, since the difference in the other factors has pretty much been leveled.  With any system, you will be involved in user forums, in bug fix requests, in creating (or updating) documentation, in training, in local tweaking, with or without outside help.  Proprietary systems are currently asking between $10,000 and $20,000 per year for a (relatively) small archive, from what I have seen and heard.

Another issue which may come up is “Cloud Computing.”  Proprietary vendors (and even some open source systems) offer the option of hosting your digital library repository (where all the digital objects live) on their servers.  The issue with remote hosting, of course, is control.  Who has ultimate control and responsibility for the items in the repository?  If the archive is intended to be open and public, the issue is more one of accountability and curation:  how securely is the data being backed up, and what is/will be done to ensure long term viable access?

If the archive is intended to be for local use only (for example, on an intranet), the issues change dramatically regarding remote hosting by an outside vendor.  It is no longer just a matter of secure backups, but the security of the system itself.  Who can access the repository?  How secure is the repository from outside crackers?  With even Google admitting to a breach of their network security, how much security can be expected from a vendor?

In some cases, we may want both public and private (local) access to archive materials.  While originally my thinking was to simply control access using the metadata for each object, others more experienced than I am recommend creating separate repositories for public and private archives, which adds another layer of complexity.

UPDATE:  Added Digital Collection Builder (DCB) and ICA-AtoM (2010/5/5)

UPDATE: Added CollectionSpace, Eloquent, Mint, PastPerfect, Proficio, Gallery Systems, Questor Argus, Mimsy XG, IDEA, EMu, and Digital Commons (2010/5/21)

UPDATE: Added SimpleDL, AdLib, and NotreDAM (2010/6/10)

UPDATE:  Fixed broken links:  Mint, DSpace, Islandora demo, removed reference to online demo for Digital Collection Builder (2011/4/11)

UPDATE: Multiple broken links updated (2016/08/07)

Code4Lib2010 Notes from Day 3

Keynote #2:  Paul Jones

catfish, cthulhu, code, clouds and levenshtein cloud (what is the levenshtein distance between Cathy Marshall and Paul Jones?)

Brains: create the code using certain assumptions about the recipient, which may or may not be accurate

Brains map based on how they are used.

(images showing difference in brain usage between users who are Internet Naive, and Internet Savvy: for Internet Naive, “reading text” image closely matches “Internet searching” image, but is very different for the Internet Savvy image)

Robin Dunbar: anthropologist who studies apes & monkeys.  Grooming, Gossip, and the Evolution of Language (book).  Neocortex ratio (ability to retain social relationship memory).  “Grooming” in groups maintains relationship memory.  In humans, the relationship can be maintained long distance via communication (“gossip”).  Dunbar circles of intimacy: the lower the number, the higher the “intimacy” of the relationship.

Who do you trust and how do you trust them?

Attributed source matters (S.S. Sundar, & C. Nass: “Source effects in users’ perception of online news”).  Preference for source: 1.others, 2.computer, 3.self, 4.news editor; quality preference: 1.others, 2.computer, 3.news editors, 4.self (others=random preset; computer=according to computer generated profile; self=according to psychological profile)

Significance:  small talk leads to big talk, which leads to trust.

Small talk “helps” big talk when there is: 1. likeness (homophily) 2. grooming and gossip 3. peer to peer in an informal setting 4. narrative over instruction

Perceived problems of American nerds: 1. ADD; 2. Asperger’s (inability to pick up visual emotional cues); 3. hyperliterality, and the jargon of tasks and games; 4. friendless; 5. idiocentric humor (but this gets engineered away?)

But they: 1. multitask; 2. use text-based interactions (visual cues become emoticons); 3. mainstream jargon into slang; 4. redefine friendship; 5. use the power of Internet memes and shared mindspace

The gossipy part of social interactions around information is what makes it accessible, memorable and actionable.  But it’s the part we strip out.

Lightning Talks

Batch OCR with open source tools

Tesseract (no layout analysis) & Ocropus (includes layout analysis): both in google code

HocrConverter.py (python script builds a PDF file from an image)

xplus3.net for code
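
For anyone wanting to try this at home, a minimal batch loop along these lines might look like the following sketch; it assumes the tesseract command-line tool is installed and on the PATH, and the directory names are made up (this is not the presenter’s script):

    # Minimal sketch: batch OCR a directory of images with the tesseract CLI.
    # Assumes `tesseract` is on the PATH; paths here are hypothetical.
    import subprocess
    from pathlib import Path

    image_dir = Path("scans")        # input TIFF/PNG images
    out_dir = Path("ocr_output")
    out_dir.mkdir(exist_ok=True)

    for image in sorted(image_dir.glob("*.tif")):
        out_base = out_dir / image.stem          # tesseract appends .txt itself
        subprocess.run(["tesseract", str(image), str(out_base)], check=True)
        print("OCRed", image.name)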

VuFind at Western Mich.U

Uses the MARC 005 field to determine new books (now -5 days, now -14 days, etc.)
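
A rough illustration of the idea (my reconstruction, not the VuFind code): the 005 control field holds a date-of-latest-transaction timestamp such as 20100215093000.0, so a record can be flagged as new when that date falls within the last N days:

    # Minimal sketch: flag a record as "new" from its MARC 005 timestamp.
    # The sample value is made up; real code would read it from the record.
    from datetime import datetime, timedelta

    def is_new(field_005, days=14):
        # 005 looks like YYYYMMDDHHMMSS.F (e.g., "20100215093000.0")
        stamp = datetime.strptime(field_005[:14], "%Y%m%d%H%M%S")
        return stamp >= datetime.now() - timedelta(days=days)

    print(is_new("20100215093000.0", days=14))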

Please clean my data

Cleaning harvested metadata & cleaning ocr text

Transformation and translation steps added to the harvesting process (all metadata records are in xml); regex as part of step templates; velocity template variables store each regex (gives direct access to the xml elements, then use the java dom4j api to do effectively whatever we wish)
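
The pipeline described above is Java (dom4j) with Velocity templates, but the idea translates directly; a hypothetical Python analogue that applies regex cleanup steps to every element of a harvested XML record might look like this (the cleanup rules shown are illustrative only):

    # Minimal sketch: apply regex cleanup steps to each element's text in an
    # OAI-harvested XML record. The cleanup rules are illustrative only.
    import re
    import xml.etree.ElementTree as ET

    record = ET.fromstring(
        "<record><title>  The  Daily   Bugle </title>"
        "<date>c1923.</date></record>")

    collapse_ws = re.compile(r"\s+")         # collapse runs of whitespace
    strip_punct = re.compile(r"[.,;:]+$")    # trim trailing punctuation

    for elem in record.iter():
        if elem.text:
            text = collapse_ws.sub(" ", elem.text).strip()
            elem.text = strip_punct.sub("", text)

    print(ET.tostring(record, encoding="unicode"))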

Using newspapers:  example of “fix this text” (within browser).  Cool

Who the heck uses Fedora disseminators anyway?

Fedora content models are just content streams.

(They put their stuff in Drupal.)

Disseminator lives in Fedora, extended with PHP to display in Drupal, & edit

Library a la carte Update

Ajax tool to create library guides, open source style

Built on building blocks (reuse, copy, share)

Every guide type has a portal page (e.g., subject guide, tutorial (new), quiz)

Local install or hosted: tech stack: ruby, gems, rails, database, web server

Open source evangelism:  open source instances in the cloud!

Digital Video Made Easier

Small shop needed to set up video services quickly.  Solution: use online video services for ingest, data store, metadata, file conversion, distribution, video player.

Put 3 demos in place (will be posted on twitter stream): blip.tv api, youtube api

Disadvantages: data in the cloud, Terms of service, api lag, varying support (youtube supportive, blip not so much)

GroupFinder

Tool to help students find physical space more easily.

Launched oct. 2009, approx 65 posts per week.

php + MySql + jQuery

creativecommons licensed

EAD, Apis & Cooliris

Limited by contentdm (sympathy from audience), but tricked out to integrate cooliris.

(Pretty slick)

Talks

You Either Surf or You Fight: Integrating Library Services With Google Wave

Why?  go where your users are

Wave apps: gadgets & robots (real time interaction)

Real time interaction

Google has libraries in Java and Python, deployed using Google App Engine (Google’s cloud computing platform; free for up to 1.3 million requests per day)

Create an app at appengine.google.com; Get app engine SDK, which includes the app engine launcher (creates skeleton application & deploys into app engine)

Set up the app.yaml (give the app a name and a version number [really important]).  api_version is the API version of App Engine.  The handler URL is /_wave/.*
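
Based on that description, a minimal app.yaml for a Python robot might look roughly like this (the application name and script filename are placeholders, not from the talk):

    application: my-wave-robot    # placeholder app name
    version: 1
    runtime: python
    api_version: 1                # App Engine API version
    handlers:
    - url: /_wave/.*              # route Wave robot callbacks to the robot script
      script: robot.py            # placeholder filename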

Get the wave robot api (not really stable yet, but shouldn’t be a problem) & drop it into the app directory

Wave concepts:  wavelet is the current conversation taking place within google wave; blip is each message part of the wavelet (hierarchical); each (wavelet & blip) have unique identifiers, which can be programmatically addressed.

Code avail on github.com/MrDys/

External libs are ok – just drop them in the project directory.  Using BeautifulSoup to scrape the OPAC

Google wave will do html content, sort of.  Use OpBuilder instead (but it is not well documented).  Doesn’t like CSS

For debugging, use logging library (one of the better parts)

Code4Lib2010 Notes from Day 2, afternoon

A better advanced search

http://searchworks.stanford.edu

How to filter multiple similar titles by the same author, or multiple author instances (artist as author, as subject, as added author), or combine multiple facet values

At start: no drop down boxes, only titled text boxes, based on above.  Keyword (& Item Description) 3rd on the list; “Subject Terms” instead of just “Subject”

Dismax, & Solr local params:  local param syntax: _query_:{dismax qf …..}
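
For reference, here is my reconstruction of that local-params syntax, with hypothetical field names (the slide elided the details), assembled into a q parameter:

    # Minimal sketch: assemble a Solr q parameter from two dismax sub-queries
    # using the _query_/local-params syntax. Field names are made up.
    title_part = '_query_:"{!dismax qf=title_t pf=title_t}two cities"'
    author_part = '_query_:"{!dismax qf=author_t}dickens"'

    params = {
        "q": title_part + " AND " + author_part,   # runs under the default lucene parser
        "rows": 10,
    }
    print(params["q"])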

jQuery functions added to multi-facet search boxes; also added faceting to results (actionable facets)

The search breadcrumbs got really complex.

Drupal 7: A more powerful platform for building library applications

Has a new Information Architecture, writes things into “contexts” (attempt to make it easier for end user)

Users can cancel their own account

New admin theme, toolbars & shortcuts (taken from admin menu toolbar module); dashboard (add what you want)

Uses overlays (rather than changing page)

Module selection screen changed to landscape table view.

Permission screen: allows admin role (same as 1st user)

Install options are default, or minimal profile.

Minimum software requirements: PHP 5.2, MySQL 5.0 (or PostgreSQL 8.3)

File System changes: separate public and private paths

Has native imaging handling out of the box

email security notifications set automatically with install; php filter module now global.

cron.php requires key in url to run

Field UI (included in core) draws from the CCK module in Drupal 6.  Types: boolean, decimal, file, image, list, text, taxonomy, etc.; can apply to almost anything

Update manager: upload and install a module/theme from drupal

Page elements are assignable; templating system changed (more consistent?)

The base theme is based on Zen: Stark (naked)

Theming of content is now granular (content can be pulled from container for theming)

Javascript uses jQuery 1.3, jQuery Forms 2.2 & jQuery UI 1.7; ajax framework from cTools

Really backend stuff: 5.0 database abstraction layer can utilize PHP Data Objects; dynamic select queries; stream wrapper: URI’s can be referenced; field API not node specific & any element can be fieldable

Enhancing Discoverability With Virtual Shelf Browse

Displays book covers, with mouseovers; scrolls right and left

Not everything has a cover image; uses “faux covers” ala google books

Goal: browse arbitrary number of titles around a known item in call number order, including online & all locations

Daily output from ILS to delimited text, then db ingest with python, to call number index in mysql; call number is in alternate formats w/in table records.
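
A stripped-down version of that ingest step might look like the sketch below; SQLite stands in for MySQL purely for illustration, and the export file name and column order are assumptions:

    # Minimal sketch: load a delimited ILS export (bib id, call number, title)
    # into a call-number-indexed table. File name and column order are assumed.
    import csv
    import sqlite3

    conn = sqlite3.connect("shelf_browse.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS callnumbers
                    (bib_id TEXT, call_number TEXT, title TEXT)""")

    with open("ils_export.txt", newline="") as f:
        reader = csv.reader(f, delimiter="|")
        rows = [(r[0], r[1], r[2]) for r in reader]

    conn.executemany("INSERT INTO callnumbers VALUES (?, ?, ?)", rows)
    conn.execute("CREATE INDEX IF NOT EXISTS idx_call ON callnumbers (call_number)")
    conn.commit()
    conn.close()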

Front end challenges: DOM=SLOW; multiple plugins = headache; remote servers = latency issues; too much ajax = browser issues; IE not a friend (doh!)

How to Implement A Virtual Bookshelf With Solr

ILS is Sirsi.

Problems with dirty data, & no standard call no (incl. sudocs & theses/dissertation numbering schemes)

Code4Lib2010 Notes from Day 2, morning

Iterative Development:  Done Simply

Agile, Scrum (Agile case study)

Problem:  too much to do (geez, even with a staff like that?); priorities, requirements change frequently, emergencies happen

IT blackbox:  those on the outside don’t know what’s going on inside

Agile: response to waterfall method of software development:  values working software, customer collaboration, change, interaction

Scrum: agile methodology

Roles:  product owner, scrum master (keeps things going), team

Artifacts: product backlog, sprint backlog

(visual of scrum sprint.)

Daily scrum (c. 15 min. @ start of day) what’s going on, bring up to date

Sprint retrospective:  what went well, what to change for next time

At NCSU: used to tackle big problems in small pieces.  Using iterative development loosely based on scrum.  Just in time planning & documentation;  cross functional teams with an IT point person + developers; joint “owners”

NCSU toolbox:

Product & Sprint backlog: JIRA

Requirement confluence + JIRA

Planning: google docs + JIRA

Daily scrum

Sprint demo at product team meetings

6-week iteration: 1 week planning, 4 weeks development (realign as necessary), 1 week testing/release

Use 1 week to plan across multiple projects

1. high level overview of upcoming projects (3-6 mo)

Prioritize projects for next iteration

Add to google docs

2. Meet w/product owners for each prioritized project

3. At end of the week, re-prioritize based on estimates & time availability; back to googledocs (spreadsheet) w/ time estimates by individual.

4. Get it done: daily scrum, weekly review (how does progress look for cycle, requires work logging), subversion>JIRA integration

Issue burndown chart (should be going down as goal nears)

Test throughout cycle; demo at regular meetings, close tickets when closed, put comments in JIRA to document issues.

Challenges: multiple small projects within a cycle: not traditional for agile practice

Lack of documented requirements (what are user stories and when do you need them; librarian teams work slowly); prioritization is difficult for library staff (work at release level), testing (how & when for small projects), no QA experts, simultaneously handle support & develop (juggling act)

Outcomes: positive movement across multiple projects.  Individual development efforts timeboxed, increased user satisfaction, increased flexibility to adapt (users hate seeing no movement).  “Positive energy” feedback and librarians seeing movement

Resources: Agile Project Management with Scrum (books)

Vampires vs. werewolves: ending the war between developers and sysadmins

Each side’s goal is satisfying the villagers with pitchforks

Innovation is about risk, & you don’t take risks with people you don’t trust.

Reach out and build trust:

1. Test:  do you KNOW that your code works? does it work on the system; will it after system upgrade? (Uses Nagios to monitor uptime, what’s working, & what’s not; plug in sys upgrade & monitor from Nagios to see what to expect.)

2. Documentation: for the sysadmins, so they can know what your process/code is supposed to do, & who’s responsible, contact info, etc.

3. Puppet (sys config tool) book: Pulling Strings with Puppet. (uses ruby)

I am not your mother, write your test code

Ad hoc testing: where’s the safety net?

Automate testing:  regression testing (local code, local mods), make sure your code doesn’t break someone else’s, refactoring, reduce design complexity, specify expected outcomes

Hudson: continuous integration tool

Selenium: firefox plugin – does automatic searching of web site, makes assertions, asserts text in an xpath. (but xpath is brittle)

(working in ruby): rSpec, cucumber, rails testing framework; rSpec & cucumber primarily.  rSpec tests whole stack.  Cucumber has “features” with “scenarios” (tests instances)

Types of tests:

Unit tests: testing at the function or method level.  An integration test would test the interface between functions.  At a high level, tests the stack

What to test: assert call (assertEquals(…)); test branching – differing paths of logic; test during bug fixes (first write test to figure out the bug); test for exceptions, error-handling (try-catch);
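
To make that list concrete, here is a generic example using Python’s unittest (the talk’s examples were Ruby/rSpec; the function under test is invented for illustration):

    # Minimal sketch: unit tests covering an assertion, a branch, and an exception.
    # normalize_issn() is an invented function standing in for "your code".
    import unittest

    def normalize_issn(raw):
        digits = raw.replace("-", "").strip().upper()
        if len(digits) != 8:
            raise ValueError("ISSN must have 8 characters")
        return digits[:4] + "-" + digits[4:]

    class NormalizeIssnTest(unittest.TestCase):
        def test_plain_digits(self):                      # assertEquals-style check
            self.assertEqual(normalize_issn("15372537"), "1537-2537")

        def test_already_hyphenated(self):                # second branch of the logic
            self.assertEqual(normalize_issn("1537-2537"), "1537-2537")

        def test_bad_length_raises(self):                 # error-handling path
            with self.assertRaises(ValueError):
                normalize_issn("123")

    if __name__ == "__main__":
        unittest.main()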

Testing legacy code: start with bug fixing: high level, test the section (trying to understand what’s going on).

Run tests: isolate what you’re testing, & for dependencies on other processes (e.g., network, db, solr, external svcs)

All modern languages have unit testing frameworks

Result: fewer bugs, better design, freedom to refactor

Media, Blacklight & Viewers like You (WGBH Interactive)

Using Fedora, lighttpd, Solr, MySQL, Blacklight (media screen), jQuery, PBCore (mirrors DC in a lot of ways).  Question to libraries: http://tinyurl.com/c4l-pbcore – how can PBS make PBCore more useful to libraries?

Metadata apis: rdf/xml, oai pmh, unapi, embed uriplay (common way of embedding stuff across sites)

Fedora has problems with large data strings.

Left QuickTime RTSP streaming -> h.264 pseudo-streaming using lighttpd

Apache is in front of fedora and streams data directly out of the data store.  Fedora manages the files, but doesn’t have to deliver them

Rights:  ODRL (metadata records are public, limited backend security policies).

Media workflows: have to integrate with whole bunch of asset management systems; no standard identifier control, so there’s extensive manual processing

Becoming Truly Innovative: Migrating from Millennium to Koha

New York University Health Sciences Libraries was on III’s Millennium.  Outside the library, IT departments in the medical center were merging.  HIPAA issues were involved; there was a circuitous route around the firewall to medpac (the OPAC).

When network security (IT) consolidated, policies made things break.  When intermediate server was disconnected, the ILS went down.

NETSecurity required all vendors to connect through Juniper VPN(etc)

Enter concept of open source: tested desktop level first with basic data on circulation, holds, acquisitions, serials, bibliographic & authority records

Circulation stuff: take from Millennium, normalize, put in Excel, compare to other lists, enter into Koha

Bib/authority: millennium tags to marc fields not 1-1, not comprehensive, multiple subfields.  Migration tool did not work.  Used Xrecords: pulled from III (supply list of records in one line), put in XML, to XSLT, to marcXML, to Koha

XSLT needs to be customized for each institution; III codes didn’t map directly to koha codes (used conditionals, patterns)
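
Once the institution-specific XSLT exists, the transform step itself is only a few lines; here is a hedged sketch using lxml, with placeholder file names (not the migration team’s actual code):

    # Minimal sketch: apply an institution-specific XSLT to exported XML records
    # to produce MARCXML for Koha. File names are placeholders.
    from lxml import etree

    xslt_doc = etree.parse("iii_to_marcxml.xsl")     # your customized stylesheet
    transform = etree.XSLT(xslt_doc)

    source = etree.parse("iii_export.xml")           # records pulled from the ILS
    marcxml = transform(source)

    with open("records_marcxml.xml", "wb") as out:
        out.write(etree.tostring(marcxml, xml_declaration=True, encoding="UTF-8"))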

Holds had to be handled manually

Migration took place from Aug 6 (freeze cataloging) to Aug 30 migration, aug 31 live. (contract w/III ended on aug 31)

Bibliographic records = 80k, authority records = 5k, patron records = 11k; acquisitions = end of fiscal year – start over; serials was a problem

Outcome: good, o.k. – everything got in & done & everything is working fine; code is available for others, but you have to write your own XSLT

Caveats: mfhd  – no support yet, diacritics, cross linked items, multiple barcodes/call number, keeping track of old record numbers; pull patron records on day of migration, not before.

Code4Lib2010 Notes from Day 1, afternoon

Taking Control of Metadata with eXtensible Catalog

Open source, user-centered, “not big”

Customizable, faceted, FRBRized search interface

Automated harvesting and processing of large batches of metadata

Connectivity Tools: between XC and ILS & NCIP; allows you to get MARC XML out of the ILS

XC Metadata Services Toolkit – takes DC from repository and marcxml and sends it to drupal (toolkit)

XC = extensible catalog

Nice drupal interface (sample).  Modify PHP code to change display

Create custom web apps to browse catalog content (fill out web form), or preset search limits & customized facets

XC metadata services toolkit: cleans, aggregates, makes OAI compatible.  Tracks “predecessor-successor” records (changes)

The XC OAI toolkit transforms MARC to MARCXML; drivers available for Voyager, Aleph, NTT, III, III/Oracle (note from twitter response: waiting to see if there is interest in SIRSI)

www.eXtensibleCatalog.org –still needs funding/community support

Matching Dirty Data, yet another wheel

Matching bib data with no common identifiers

Goal: ingest metadata and PDFs for ETDs received in DSpace

MARC data in UMI: filename & abstract; ILS marc data: OCLC #,  author, title, date type, dept., subject

Create python dictionaries, do exact & fuzzy matches.  Find the intersection of the keys and filter/refine from there

Reduce the search space (get rid of common words (not just stop words))

Jaro-Winkler algorithm: 2 characters match if they are a reasonable distance from one another, but best for short strings
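
A boiled-down sketch of the approach (my reconstruction, not the presenter’s code): exact matching via dictionary key intersection, then a fuzzy pass over the leftovers, with difflib standing in for the Jaro-Winkler comparison mentioned above; the sample data is invented:

    # Minimal sketch: exact match on normalized titles via dict key intersection,
    # then a fuzzy pass over the leftovers. Sample data is invented.
    from difflib import SequenceMatcher

    def norm(title):
        return "".join(ch for ch in title.lower() if ch.isalnum() or ch == " ").strip()

    umi = {norm("Studies in dirty data!"): "file_0001.pdf",
           norm("A survey of metadata"): "file_0002.pdf"}
    ils = {norm("Studies in Dirty Data"): "ocm12345",
           norm("Survey of metadata, A"): "ocm67890"}

    exact = {t: (umi[t], ils[t]) for t in umi.keys() & ils.keys()}

    fuzzy = []
    for t1 in umi.keys() - exact.keys():
        for t2 in ils.keys() - exact.keys():
            score = SequenceMatcher(None, t1, t2).ratio()
            if score > 0.85:
                fuzzy.append((umi[t1], ils[t2], round(score, 2)))

    print(exact)
    print(fuzzy)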

String comparison tutorial, secondstring (java text analysis library – on sourceforge), marcximil

http://snurl.com/uggtn

HIVE:  Helping Interdisciplinary Vocabulary Engineering

“Researchy” (as in not too well developed)

Problem: terms for self publishing instances.  Solution: combine terms from existing vocabularies (LCSH, MeSH, NBII Thesaurus) & compare to labeling

skos: somewhere between ontologies and cataloging

Abstract is run through HIVE, outputs extracted concepts cloud, color coded to represent catalog/ontology source.

Based on google web toolkit; currently available in googlecode

Coming soon: HIVE api, sparql, sru

http://datadryad.org

Metadata Editing, a truly extensible solution: Trident

DukeCore:  Duke U’s DC wireframe

MAP: metadata application profile: works by instructing an editor how to build a UI for editing: creates a schema neutral representation of metadata (metadata form).  Editor only needs to understand metadata form and communicate with the repository via the API

Editor/repository manager app:  built on a web services API so it doesn’t need to know what’s behind it.  uses python, django, yahoo grids, and jquery

Uses restful API

Starts in metadata schema, duke core, transforms to mdr, validations are applied to create a packet that is returned to the user interface.  On submission, it goes to mdf and then back to duke core

Metadata forms made up of field groups & elements (looks a lot like infopath form elements)

You can also have your vocabulary lists automatically updated, & built on the fly

Repository architecture:  Repo API allows editor not to have to worry about implementation.  Next level is fedora business logic & i/o layer, then fedora repository.

Solr updated in realtime

Uses jms messaging

Lightning Talks:

Forward: forward.library.wisconsin.edu

Uses blacklight (shoutout to blacklight devs).  shows librarians that are recommended for a targeted search string.

Problems: no standardization in cataloging; differing licensing

Stable (can shoot it with guns & it still runs)

Ruby tool to edit MODS metadata

(using existing mods metadata)

“Opinionated XML”:  looking for feedback: http://yourmediashelf.org/blog

DAR: Digital Archaeological Record

Trying to allow archeologists to submit data sets with any encoding they want

Includes a google map on the search screen. Advances to filtering page for data results.  Tries to allow others to map their ontology to other, standardized ontologies

Hydra:  blacklight + Active Fedora + Rails

Being used by Stanford to process dissertations & theses

Hydrangea for open repositories to be released this year, using mods & DC

Why CouchDB

JSON documents, uses GET, PUT, DELETE, & POST

Stateless auto replication possible.

Includes CouchApps, which live inside CouchDB (you have to go get them and store them in CouchDB); Sofa, for example, outputs Atom feeds.
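
As a quick illustration of that HTTP-verb interface (assuming a default local CouchDB on port 5984; the database and document names are made up):

    # Minimal sketch: create a database, PUT a JSON document, and GET it back
    # over CouchDB's plain HTTP API. Assumes CouchDB on localhost:5984.
    import json
    import urllib.request

    BASE = "http://localhost:5984"

    def call(method, path, body=None):
        data = json.dumps(body).encode() if body is not None else None
        req = urllib.request.Request(BASE + path, data=data, method=method,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    print(call("PUT", "/notes"))                                   # create database
    print(call("PUT", "/notes/talk1", {"title": "Why CouchDB"}))   # create document
    print(call("GET", "/notes/talk1"))                             # read it back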

Code4Lib2010 Notes from Day 1, morning

#1 keynoter: Cathy Marshall:

A web page weighs 80 micrograms?

15 years ago (New Yorker article): well, anyone can have a web page, although most of them are commercial enterprises.  All you need is to digitize your stuff.  Born-digital items (pictures), even in TIFF, were low resolution.

Today 4.3 billion personal photos on Flickr alone.

Cathy Marshall:  guilty of “feral ethnography” (studying what others do)

Benign neglect:  de facto archiving/collection policy?  Is Facebook millennials’ scrapbook?  Should it be?  Whose job is it to “remember”?

Digital stuff accumulates at a faster rate than physical space (out of sight & easier to “neglect”?)  CS view:  how is this a problem?  Storage is getting cheaper, why not just keep everything?

Why keep everything? It is difficult to predict an item’s future worth (true for physical items as well); deleting is hard, thankless work; and filtering and searching allow locating things easily. (Easier to keep than to cull.)

Losing stuff becomes a means of culling collections (realizing afterwards that losing the stuff wasn’t such a big deal after all).

Attitude that “I don’t really need this stuff, but it’s nice to be able to keep it around.”

It’s easier to lose things than to maintain them.  The more things are used, the more it’s likely to be preserved.

No single preservation technology will win the battle for saving your stuff, but people put/save stuff all over the place (flickr, youtube, blogs, facebook..)

People lose stuff for non technology reasons (don’t understand EULA’s, lost passwords, die, etc.)  For scholars, key vulnerability is changing organizations (more so than technology failures).

Digital originals vs. reference copies:  highest fidelity (e.g., photos) is the one closest to the source (local computer).  Remote service, e.g., flickr, has the metadata, though, which becomes important for locating, filtering (save everything).  Where are the tools for gathering the metadata and finding the copies with the metadata?

Searching for tomorrow: Re-encountering techniques?  But some encounters are with stuff that was supposed to disappear.

Bottom-up efforts taking place (personal & small orgs); new institutions showing up to digitize collections.  New opportunistic uses of massed data.

Power of benign neglect vs. power of forgetting: some things you want to make sure are gone.  How sure are you that you really want to forget (data)?

Cloud computing talks:

Cloud4lib.

Coolest glue in the galaxy?  Is it even possible to have a centralized repository of development activities, especially among disparate libraries?

What is the base level, policy, that needs to exist in order to make it all work? (what exactly is the glue)  What is sticky enough to create critical mass?  Base services:  repository, metadata

Breakout session: brainstorming to figure out oversight & governance.  I see “possible” but not “probable” here.

Linked Library Data Cloud:

From Tim Berners-Lee:  bottom line:  make stuff discoverable

Concept of triples in RDF:  Subject, Predicate, Object.

Linking by assertion, using central index (e.g. id.loc.gov), which is linked data.  But how to make bib data RDF:  LCCN.  Resources (linked, verified data as URI’s).

If you have an LCCN in your MARC record, you already have what you need for Linked Data.  If you know the LCCN, you can grab all the linked stuff and make it part of your data.
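
As a sketch of what “grabbing the linked stuff” can look like in practice: fetch the RDF for an identifier from id.loc.gov over plain HTTP.  The URI pattern, the content negotiation, and the sample identifier below are my assumptions, so check id.loc.gov for the forms it currently supports:

    # Minimal sketch: fetch linked data for an identifier from id.loc.gov.
    # The URI pattern, Accept header handling, and sample id are assumptions.
    import urllib.request

    def fetch_loc_rdf(authority_id):
        url = "http://id.loc.gov/authorities/subjects/" + authority_id
        req = urllib.request.Request(url, headers={"Accept": "application/rdf+xml"})
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode("utf-8")

    print(fetch_loc_rdf("sh85076841")[:300])   # hypothetical LCSH identifier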

OCLC’s Virtual International Authority File (VIAF):  another source for linked data.

No standard data model, but more linkages to resources are outside the library domain:  how to get them? And what about sustainability and preservation?  If a linked resource goes away, what do you do?

Do it Yourself Cloud Computing (with R and Apache)

How to use data and data analysis in libraries.

1.  What is The Cloud?  Replacement for the desktop: globally accessible

2. What is R? A free and open source alternative to SAS, SPSS, and Stata: software that supports data analysis and visualization.  It has a console!  Eventually a GUI port? Cons:  learning curve, problems with very large datasets.  Pros:  de facto standard, huge user community, extensible.

3.  What is Rapache?  R + Apache? Apache module developed at Vanderbilt.  Puts instance of R in each apache process, similarly to PHP.  You can embed R script in web pages.

4.  Relevance to libraries – keep the slide! keep the slide!

Public Datasets in the Cloud

IaaS (Infrastructure as a Service)

PaaS (Platform as a Service)

SaaS (Software as a Service)

In this case, raw data, that can be used elsewhere, not what can be downloaded from a web site

Demo:  Public datasets in the cloud (in this case, from Amazon ec2): get data from/onto remote site, retrieve via ssh

Using Google fusion tables:  you can comment, slice, dice.  You can embed the data from google fusion onto a web site.

7+ Ways to improve library UIs with OCLC web services

“7 really easy ways to improve your web services”

Crosslisting print and electronic materials: use the WorldCat Search API to check/add a link to the main record

Link to libraries nearby that have a book that is out (mashup query by oclc number and zip code)

Providing journal TOCs: use xISSN to see if a feed of recent article TOCs is available, and embed a link in the UI that opens a dialog with items from the catalog.

Peer review indicators:  use data from xISSN to add peer review info to (appropriate) screens

Providing info about the author: use Identities and Wikipedia API to insert author info into a dialog box within UI

Providing links to free full text:  use xOCLCNum to check for free full text scanning projects like Open Content Alliance and HathiTrust and link to full text where available.

Add similar items? (without the current one also listed)

Creating an m-catalog: put all our holdings in WorldCat and build a mobile site using the WorldCat Search API
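
Most of the ideas above follow the same pattern: call an OCLC web service with an identifier from the current record, pull one field out of the response, and decorate the UI with it.  A heavily hedged sketch of that pattern follows; the service URL and response field are placeholders, not OCLC’s real endpoint or schema, so substitute the documented xISSN/xOCLCNum details:

    # Minimal sketch of the pattern: query an identifier service, read the reply,
    # decide whether to decorate the catalog UI. SERVICE_URL and the "peerReview"
    # field are placeholders; use the real OCLC xISSN endpoint and schema.
    import json
    import urllib.request

    SERVICE_URL = "http://example.org/xid/issn/{issn}?format=json"   # placeholder

    def peer_review_status(issn):
        with urllib.request.urlopen(SERVICE_URL.format(issn=issn)) as resp:
            data = json.loads(resp.read())
        return data.get("peerReview", "unknown")      # placeholder field name

    if __name__ == "__main__":
        print(peer_review_status("1537-2537"))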