Archive for the ‘Technology’ Category.

Code4Lib2010 Notes from Day 1, afternoon

Taking Control of Metadata with eXtensible Catalog

Open source, user-centered, “not big”

Customizable, faceted, FRBRized search interface

Automated harvesting and processing of large batches of metadata

Connectivity tools: between XC and the ILS & NCIP; allow you to get MARCXML out of the ILS

XC Metadata Services Toolkit – takes DC from a repository and MARCXML and sends it to Drupal (toolkit)

XC = eXtensible Catalog

Nice Drupal interface (sample).  Modify PHP code to change display

Create custom web apps to browse catalog content (fill out web form), or preset search limits & customized facets

XC metadata services toolkit: cleans, aggregates, makes OAI compatible.  Tracks “predecessor-successor” records (changes)

XC OAI toolkit transforms MARC to MARCXML; drivers available for Voyager, Aleph, NTT, III, III/Oracle (note from Twitter response: waiting to see if there is interest in SIRSI) – still needs funding/community support

Matching Dirty Data, yet another wheel

Matching bib data with no common identifiers

Goal: ingest metadata and PDFs for ETDs received in DSpace

MARC data in UMI: filename & abstract; ILS marc data: OCLC #,  author, title, date type, dept., subject

Create Python dictionaries, do exact & fuzzy matches.  Find the intersection of the keys and filter/refine from there
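As a rough sketch of that approach (record fields and titles below are invented for illustration): build one dictionary per source keyed on a normalized string, then take the intersection of the key sets for exact matches, leaving the rest for fuzzy matching.

```python
# Hypothetical sketch: match two record batches (e.g., UMI MARC vs. ILS MARC)
# by keying dictionaries on a normalized title and intersecting the keys.

def normalize(title):
    """Lowercase, strip punctuation, collapse whitespace."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in title.lower())
    return " ".join(cleaned.split())

umi = {normalize(r["title"]): r for r in [
    {"title": "A Study of Frogs!", "file": "frogs.pdf"},
]}
ils = {normalize(r["title"]): r for r in [
    {"title": "A study of frogs", "oclc": "123"},
    {"title": "Unrelated thesis", "oclc": "456"},
]}

# Exact matches are the intersection of the two key sets;
# leftovers go on to fuzzy matching.
exact = umi.keys() & ils.keys()
print(sorted(exact))  # ['a study of frogs']
```

Anything not matched exactly drops into the fuzzy-matching pass described next.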

Reduce the search space (get rid of common words (not just stop words))

Jaro-Winkler algorithm: two characters match if they are within a reasonable distance of one another; works best for short strings
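A compact implementation of Jaro-Winkler (my own sketch, not code from the talk) makes the "reasonable distance" idea concrete: characters count as matching only within a window around the same position, and a shared prefix boosts the score, which is why it suits short strings like names.

```python
# Illustrative Jaro-Winkler similarity (0..1). The match window is
# half the longer string's length, minus one.

def jaro(s1, s2):
    if s1 == s2:
        return 1.0
    window = max(len(s1), len(s2)) // 2 - 1
    used = [False] * len(s2)
    m1 = []
    for i, c in enumerate(s1):                    # collect matches in window
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not used[j] and s2[j] == c:
                used[j] = True
                m1.append(c)
                break
    if not m1:
        return 0.0
    m2 = [c for j, c in enumerate(s2) if used[j]]  # same matches, s2 order
    t = sum(a != b for a, b in zip(m1, m2)) / 2    # transpositions
    m = len(m1)
    return (m / len(s1) + m / len(s2) + (m - t) / m) / 3

def jaro_winkler(s1, s2, p=0.1):
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):                       # common prefix, max 4
        if a != b or prefix == 4:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

print(round(jaro_winkler("MARTHA", "MARHTA"), 3))  # 0.961
```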

String comparison tutorial; SecondString (Java text analysis library, on SourceForge); marcXimiL

HIVE:  Helping Interdisciplinary Vocabulary Engineering

“Researchy” (as in not too well developed)

Problem: terms for self-publishing instances.  Solution: combine terms from existing vocabularies (LCSH, MeSH, NBII Thesaurus) & compare to labeling

SKOS: somewhere between ontologies and cataloging

Abstract is run through HIVE, outputs extracted concepts cloud, color coded to represent catalog/ontology source.
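A toy stand-in for that step (the vocabulary samples below are invented, not real LCSH/MeSH data): scan an abstract against controlled vocabularies and tag each extracted concept with its source vocabulary, which is what the color coding in the concept cloud represents.

```python
# Tiny illustration: extract vocabulary terms from an abstract and
# record which vocabulary each one came from.

VOCABS = {
    "LCSH": {"climate change", "water quality"},
    "MeSH": {"neoplasms", "water quality"},
}

def extract_concepts(abstract):
    text = abstract.lower()
    found = []
    for vocab, terms in VOCABS.items():
        for term in terms:
            if term in text:
                found.append((term, vocab))
    return sorted(found)

cloud = extract_concepts("Effects of climate change on water quality.")
print(cloud)
```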

Based on Google Web Toolkit; currently available on Google Code

Coming soon: HIVE API, SPARQL, SRU

Metadata Editing, a truly extensible solution: Trident

DukeCore:  Duke U’s DC wireframe

MAP (metadata application profile): works by instructing an editor how to build a UI for editing; creates a schema-neutral representation of metadata (the metadata form).  The editor only needs to understand the metadata form and communicate with the repository via the API

Editor/repository manager app: built on a web services API so it doesn’t need to know what’s behind it.  Uses Python, Django, Yahoo (YUI) Grids, and jQuery

Uses a RESTful API

Starts in the metadata schema (DukeCore), transforms to the MDR; validations are applied to create a packet that is returned to the user interface.  On submission, it goes to the MDF and then back to DukeCore

Metadata forms are made up of field groups & elements (looks a lot like InfoPath form elements)

You can also have your vocabulary lists automatically updated, & built on the fly

Repository architecture: the Repo API allows the editor not to worry about implementation.  Next level is the Fedora business logic & I/O layer, then the Fedora repository.

Solr is updated in real time

Uses JMS messaging

Lightning Talks:


Uses Blacklight (shoutout to the Blacklight devs).  Shows librarians that are recommended for a targeted search string.

Problems: no standardization in cataloging; differing licensing

Stable (can shoot it with guns & it still runs)

Ruby tool to edit MODS metadata

(using existing mods metadata)

“Opinionated XML”: looking for feedback

DAR: Digital Archaeological Record

Trying to allow archaeologists to submit data sets with any encoding they want

Includes a google map on the search screen. Advances to filtering page for data results.  Tries to allow others to map their ontology to other, standardized ontologies

Hydra:  blacklight + Active Fedora + Rails

Being used by Stanford to process dissertations & theses

Hydrangea, for open repositories, to be released this year, using MODS & DC

Why CouchDB

JSON documents, uses GET, PUT, DELETE, & POST

Stateless auto replication possible.

Includes CouchApps, which live inside CouchDB (you have to go get them and store them in CouchDB).  Sofa, for example, outputs Atom feeds.
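The document-plus-HTTP-verbs model above can be sketched like this (server URL, database name, and document are all hypothetical; nothing is actually sent):

```python
# CouchDB's model: every document is JSON, addressed by URL, and
# manipulated with plain HTTP verbs.
import json

BASE = "http://localhost:5984/notes"      # assumed local CouchDB database

doc_id = "code4lib-2010"
doc = {"type": "conference", "year": 2010, "tags": ["metadata", "couchdb"]}

# PUT    {BASE}/{doc_id}          -> create or update the document
# GET    {BASE}/{doc_id}          -> fetch it back (includes _id and _rev)
# DELETE {BASE}/{doc_id}?rev=...  -> delete a specific revision
put_url = f"{BASE}/{doc_id}"
body = json.dumps(doc)

print(put_url)
print(body)
```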

Code4Lib2010 Notes from Day 1, morning

#1 keynoter: Cathy Marshall:

A web page weighs 80 micrograms?

15 years ago (a New Yorker article): well, anyone can have a web page, although most of them are commercial enterprises.  All you need is to digitize your stuff.  Born-digital pictures, even in TIFF, were low resolution.

Today 4.3 billion personal photos on Flickr alone.

Cathy Marshall:  guilty of “feral ethnography” (studying what others do)

Benign neglect: de facto archiving/collection policy?  Is Facebook millennials’ scrapbook?  Should it be?  Whose job is it to “remember”?

Digital stuff accumulates at a faster rate than physical space (out of sight & easier to “neglect”?)  CS view:  how is this a problem?  Storage is getting cheaper, why not just keep everything?

Why keep everything?  It’s difficult to predict an item’s future worth (true for physical items as well); deleting is hard, thankless work; and filtering and searching allow locating things easily.  (It’s easier to keep than to cull.)

Losing stuff becomes a means of culling collections (realizing afterwards that losing the stuff wasn’t such a big deal after all).

Attitude that “I don’t really need this stuff, but it’s nice to be able to keep it around.”

It’s easier to lose things than to maintain them.  The more things are used, the more likely they are to be preserved.

No single preservation technology will win the battle for saving your stuff, but people put/save stuff all over the place (Flickr, YouTube, blogs, Facebook…)

People lose stuff for non-technology reasons (they don’t understand EULAs, lose passwords, die, etc.).  For scholars, the key vulnerability is changing organizations (more so than technology failures).

Digital originals vs. reference copies:  highest fidelity (e.g., photos) is the one closest to the source (local computer).  Remote service, e.g., flickr, has the metadata, though, which becomes important for locating, filtering (save everything).  Where are the tools for gathering the metadata and finding the copies with the metadata?

Searching for tomorrow: Re-encountering techniques?  But some encounters are with stuff that was supposed to disappear.

Bottom-up efforts taking place (personal & small orgs); new institutions showing up to digitize collections.  New opportunistic uses of massed data.

Power of benign neglect vs. power of forgetting: some things you want to make sure are gone.  How sure are you that you really want to forget (data)?

Cloud computing talks:


Coolest glue in the galaxy?  Is it even possible to have a centralized repository of development activities, especially among disparate libraries?

What is the base level, policy, that needs to exist in order to make it all work? (what exactly is the glue)  What is sticky enough to create critical mass?  Base services:  repository, metadata

Breakout session: brainstorming to figure out oversight & governance.  I see “possible” but not “probable” here.

Linked Library Data Cloud:

From Tim Berners-Lee:  bottom line:  make stuff discoverable

Concept of triples in RDF:  Subject, Predicate, Object.
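A minimal illustration of the triple model (identifiers below are made-up examples, not real records): each fact is a (subject, predicate, object) tuple, and a query is just a pattern match over the set of triples.

```python
# RDF in miniature: facts as (subject, predicate, object) tuples.
triples = [
    ("ex:book1", "dc:title", "Moby Dick"),
    ("ex:book1", "dc:creator", "Melville, Herman"),
]

def match(s=None, p=None, o=None):
    """Return every triple matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(match(p="dc:creator"))  # all creator assertions
```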

Linking by assertion, using a central index, which is linked data.  But how to make bib data RDF?  The LCCN.  Resources (linked, verified data) as URIs.

If you have an LCCN in your MARC record, you already have what you need for Linked Data.  If you know the LCCN, you can grab all the linked stuff and make it part of your data.
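For example, the LCCN in a MARC 010 field can be turned into a dereferenceable URI via LC's LCCN permalink service at lccn.loc.gov; the normalization below (just dropping blanks) is a simplification of the full LCCN normalization rules.

```python
# Sketch: MARC 010 value -> LCCN permalink URI.
# Real LCCN normalization has more rules; stripping spaces covers the
# common "n  79021164" style of value.

def lccn_to_uri(raw_lccn):
    normalized = raw_lccn.replace(" ", "")
    return f"https://lccn.loc.gov/{normalized}"

print(lccn_to_uri("n  79021164"))  # https://lccn.loc.gov/n79021164
```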

OCLC’s Virtual International Authority File (VIAF): another source for linked data.

No standard data model, but more linkages to resources outside the library domain: how to get them?  And what about sustainability and preservation?  If a resource goes away, what do you do?

Do it Yourself Cloud Computing (with R and Apache)

How to use data and data analysis in libraries.

1.  What is The Cloud?  Replacement for the desktop: globally accessible

2. What is R?  A free and open source alternative to SAS, SPSS, and Stata.  Software that supports data analysis and visualization.  It has a console!  Eventually a GUI port?  Cons: learning curve, problems with very large datasets.  Pros: de facto standard, huge user community, extensible.

3.  What is Rapache?  R + Apache.  An Apache module developed at Vanderbilt.  Puts an instance of R in each Apache process, similarly to PHP.  You can embed R scripts in web pages.

4.  Relevance to libraries – keep the slide! keep the slide!

Public Datasets in the Cloud

IaaS (Infrastructure as a Service)

PaaS (Platform as a Service)

SaaS (Software as a Service)

In this case, raw data, that can be used elsewhere, not what can be downloaded from a web site

Demo:  Public datasets in the cloud (in this case, from Amazon EC2): get data from/onto a remote site, retrieve via SSH

Using Google Fusion Tables: you can comment, slice, and dice.  You can embed the data from Google Fusion onto a web site.

7+ Ways to improve library UIs with OCLC web services

“7 really easy ways to improve your web services”

Crosslisting print and electronic materials: use the WorldCat Search API to check/add a link to the main record

Link to libraries nearby that have a book that is out (mashup query by oclc number and zip code)

Providing journal TOCs: use xISSN to see if a feed of recent article TOC is available, embed a link to open a dialog with items from the cat to the UI.

Peer review indicators:  use data from xISSN to add peer-review info to (appropriate) screens

Providing info about the author: use Identities and Wikipedia API to insert author info into a dialog box within UI

Providing links to free full text:  use xOCLCNum to check for free full-text scanning projects like the Open Content Alliance and HathiTrust, and link to full text where available.
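For the HathiTrust side of that check, the HathiTrust Bib API answers `GET /api/volumes/brief/oclc/{num}.json` with an `items` list; a non-empty list means there is a scan to link to. The sketch below makes no network request; `has_full_text` runs against a canned reply of the same shape, and the exact response fields beyond `items` are an assumption.

```python
# Hedged sketch: build the HathiTrust Bib API URL for an OCLC number
# and decide from a response body whether a scanned volume exists.
import json

def hathi_url(oclc_num):
    return f"https://catalog.hathitrust.org/api/volumes/brief/oclc/{oclc_num}.json"

def has_full_text(response_text):
    data = json.loads(response_text)
    return len(data.get("items", [])) > 0

canned = '{"records": {}, "items": [{"htid": "mdp.123"}]}'  # invented sample
print(hathi_url(424023))
print(has_full_text(canned))  # True
```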

Add similar items? (without the current one also listed)

Creating a mobile catalog: put all our holdings in WorldCat and build a mobile site using the WorldCat Search API

The Digital Age, Books, and Libraries

There’s a lot of flag waving (especially by alarmed librarians) about the imminent demise of the book and libraries.  Actually, that’s not true.  The librarians are trying to fend off those who are buying into the idea that printed books, and libraries as we used to know them, are pointless vestiges of a prior era.  The debate has been picked up by the New York Times, which is getting a lot of press (sorry) lately.

The biggest issue, which is only obliquely hinted at in the arguments floating around, especially those in the Times opinion piece, is accessibility. I have a book. No one anywhere can prevent me from sharing that book with you. No one anywhere can prevent you from sharing that book with someone else (once it’s in your possession). Granted, this is a single item, with geospatial limitations which can be transcended by electronic networks. But “electronic” has its own, more restrictive limitations. Does one have access to the electronic network? Does one have the equipment to access the electronic network? Is there a power source to enable access to the equipment (or network)? Does one have permission to access the electronic device/network/item?

Librarianship has always been about finding and gaining access to books/information. The interesting twist today is the gaining access part, which involves navigating rights and permissions, as well as delivery options: both print (is there a printer?) and electronic (does the recipient even have the means to access an electronic version?).

The discussion in the NYTimes column (and others) focuses on universities and private schools, essentially ignoring that part of the population that is (a) less educated, (b) less affluent, (c) less technologically savvy, and (d) any combination of the above. My guesstimate, from experience and prior research, is that those categories make up a significant minority of the US population (maybe up to 40%), and likely always will.

So to the issue of accessibility, add disenfranchisement.  Where will the have-nots get what the haves are being taught to take for granted?  Those “pointless” vestiges of a prior era really aren’t so pointless after all.

Software Freedom Day again!

September 19 this year!  Unfortunately, the date, selected by an international committee, also falls on Rosh Hashanah.  I think groups in some areas have changed the date for their events 🙂

In another unfortunate turn of events (or not, depending on your point of view), I had to pass my role as organizer for our local event to others in the Palm Beach Linux User Group.  The good news is they’re doing a better job of it!  The local Software Freedom Day event in Palm Beach County this year will be at Florida Atlantic University in Boca Raton.  If you’re reading this and you’re from the south Florida area, contact Bill Hall (pbclug at comcast + dot + net) or me (leave a comment below).  They’re looking for presenters, people who want to help set up, share, or just spread the word!

See the page at Software Freedom Day for more details!

See you there!


CPUs and Monitors for the Frankenfest

It was Kevin’s idea.

At a PBLUG meeting held at Nova Southeastern’s North County campus in Palm Beach Gardens, he suggested it. Frankenfest?  What’s a Frankenfest?  Kevin explained it’s when people bring whatever computers or computer parts they have laying around to an event where you (those attending the event) build whatever you can from what you have.

Kevin had a motive.  He had a garage full of computers and parts, and his wife was not happy about it, but he couldn’t bring himself to just pitch them.

Of course we were intrigued by the idea.  Especially me.  I kept the idea alive by continually bringing it up to the group.  They, of course, took the bait, being Linux geeks.  We ended up with a plan, of sorts. We needed to do something with all the computers we were sure to build.  Laura came up with a group that would like to give away the computer systems to needy families for Christmas.  We needed a place to do it.  My library had fortuitously cancelled all programming for December, under the impression they would be closing, so a very large meeting room was available to us almost any Saturday that month (actually, almost any day in December would have been available, but Saturdays worked best for everyone).  We needed a Linux distro to install.  I suggested Kubuntu because I like KDE and Ubuntu seemed mainstream enough to be easy for the ultimate recipients to find books or help.

Cables, anyone?

So we did it.  The Frankenfest was today.  We spent two hours sorting and testing what we had. We spent the next four hours trying to load Kubuntu on the best machines we had, since I had created a Kubuntu cheat sheet to give to the ultimate recipients.  We started with 7 candidates, from Pentium IIIs to a 2.16 GHz box.  We ended up with three successful installs, two with Linux Mint on them and one with Kubuntu.  Travelin’ Rob had brought the Linux Mint because it runs on anything, and he likes it.  He also promised to do a Linux Mint cheat sheet to give to the foundation to include with these systems they will be distributing.

We almost had one more Linux Mint box, but the install ultimately failed, probably because we tried to put a 160GB hard drive on an older machine that couldn’t recognize bigger hard drives.  One of the better machines we had didn’t like our RAM upgrade attempt, and didn’t seem to know how to operate without the three centuries worth of dust we removed.  As much as we’d like to think of ourselves as computer geeks, we’re really just linux geeks, and have lapses in hardware sense from time to time.  I had spent most of the last week getting screenshots of Kubuntu on my virtual machine, thinking that it would work exactly the same on a real machine.

Speakers, keyboards, and mice

But ultimately, I guess it was a “success.”  Three families will be getting awesome computer systems. Kevin cleared out his garage. We got everything cleared out by the time the library closed.  I finally learned everyone’s names.

Someone (Travelin’ Rob?) suggested we do it again.  I said, “Yeah, once a year wouldn’t be too painful.”  Someone else suggested we let one of the local stations know, because they would cover it and advertise it if they just knew in advance.  We actually had people walking in and asking if we were taking computer donations.  I looked around at the 20+ computers in various stages of usability, and said “No, thanks.”  I can imagine what a little advance advertising would do.  Of course, since Kevin’s now got his garage cleaned out, it might be interesting to see what we’d get from people dropping by to drop off their computers.

Yeah, I guess I’m hopeless.

Drupal, Google Calendars, and cool people

A friend was looking for a way to communicate with employees without having to send e-mails, since not everyone checks their e-mail regularly, or even thinks to check their e-mail these days.  All of the employees, however, work at a computer for at least part of the day.  Several months ago I had found a way to have the current day’s events listed on each computer’s desktop by using Windows Active Desktop, which will display a web page.  Unfortunately, IT people intervened after a couple months and disabled the Active Desktop feature on most of the computers. That left using live web sites, accessed with a browser, as the only option.

The first issue was to set up something that the friend, who has moderate computer skills, could handle.  We also needed a site that could restrict access to the information being posted.  A bonus would be finding a way to easily display the current day’s scheduled events on the site as well.  An even bigger bonus would be a “solution” that integrated room scheduling with displaying the schedules on the site, especially if that solution would prevent overbooking.  And, of course, the kicker is that it all has to be free.

My friend thought the limitations of using a browser and Internet to access posts and information were acceptable.  We could place shortcuts on the desktop, or make the site the browser homepage, and let the staff know about it.  The staff were grateful to have something after the current events schedule disappeared from their desktops.

Except for the site itself, everything did turn out to be free.  But since I happen to have a hosted account with an obscene amount of space and bandwidth that will never get used, it seemed like a good place to experiment for the benefit of my friend.  Since I already have several sites running Drupal, that was my CMS of choice.  It is free, and has a large, active community supporting it.

So I set up a new site, required a login to view the content, gave my friend just enough access to publish stories, and logged into the site from all the location’s computers, instructing Internet Explorer to remember the username and password.  So far so good.  Pretty simple and straightforward.

Then Internet Explorer stopped remembering the username and password (there was probably some kind of staff intervention involved, but I decided to see if I could find a fix that would outsmart them).  A quick search of the modules section of Drupal turned up Persistent Login.  This works great until they start clearing the cookies.

The next request from my friend was for a rich-text editor, to be able to use different fonts and colors in the posts. That was solved with the TinyMCE WYSIWYG module. Then I turned my attention to finding a way to get a daily events listing posted dynamically.

Enter Google Calendar, which has XML feeds.  After trying out several ways to get the feeds onto the site using the FeedAPI module, the Views module, and the CCK module, I began searching through the discussion groups on Drupal.  I came across a discussion that referred to a new module being developed to do just what I was looking for: GCal Events.  Jeff Simpson, the hero here, without any previous experience creating modules for Drupal, put it together, tweaked it and fixed bugs based on our feedback, and has now put it in the projects section of Drupal.

Since the site for my friend was already up and running, I set up a test site that mirrored the other site’s setup.  With the development snapshot of the GCal Events module installed, which has some tweaks and bug fixes applied after the official release was put up, everything ran great.  So I enabled the module on my friend’s site.  Scheduled events for the day are pulled from a Google calendar and displayed in the right column.
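The "pull today's events from a feed" step can be sketched like this (a simplified stand-in for what the GCal Events module does in PHP; the inline sample feed is invented, and the real Google Calendar XML uses different namespaces and element names):

```python
# Simplified stand-in: read an Atom-style feed and keep the entries
# whose start date matches a given day.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """
<feed>
  <entry><title>Story time</title><start>2010-03-01</start></entry>
  <entry><title>Book club</title><start>2010-03-02</start></entry>
</feed>
"""

def events_on(feed_xml, date_str):
    root = ET.fromstring(feed_xml)
    return [e.findtext("title") for e in root.findall("entry")
            if e.findtext("start") == date_str]

print(events_on(SAMPLE_FEED, "2010-03-01"))  # ['Story time']
```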

The last issue was to set up the Google calendar account to work as a room scheduling “solution.”  There are 3 rooms at this location that are reserved for various uses.  Several people in different departments were using 3 different calendar books to block out reserved times.  On a few occasions, events have been overbooked.  The books can also be hard to locate if someone has taken them for a while.  Google calendars seemed like an easy, free, and obvious answer:

  1. More than one calendar can be created within an account
  2. Calendars can be shared with other google accounts
  3. Event times in a calendar cannot overlap (which prevents overbooking)
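The no-overlap rule in point 3 is what prevents double-booking: two reservations collide exactly when each starts before the other ends. A minimal sketch of that check (all names and times here are my own illustration):

```python
# Two bookings overlap iff each starts before the other ends.
from datetime import datetime

def overlaps(start_a, end_a, start_b, end_b):
    return start_a < end_b and start_b < end_a

def t(s):
    """Parse an HH:MM time for the examples below."""
    return datetime.strptime(s, "%H:%M")

print(overlaps(t("10:00"), t("11:00"), t("10:30"), t("12:00")))  # True
print(overlaps(t("10:00"), t("11:00"), t("11:00"), t("12:00")))  # False
```

Back-to-back bookings (one ends exactly when the next starts) pass the check, which matches how a room calendar should behave.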

On the main Google calendar account, I set up calendars for each of the rooms that can be booked.  I then shared the calendars with others who would be booking the rooms, allowing them to make changes (so they can add events).  Since the calendars represent the rooms being booked, it is not necessary to fill in the location field, making a “quick add” possible through the popup that appears when clicking on a time space within the calendar (day or week view).

On the site using the calendar feeds, I set up a separate GCal Events feed for each of the calendars, so events are displayed by room.  The only glitch, which was fairly easy to fix, was a piece of PHP code that refreshed the cache once a week instead of every day (thanks to jdwfly’s post in the discussion).

I love open source software.  And I love the people that are part of it.  Thanks, Jeff!

Digital natives, digital immigrants, and digital refugees

I have been hearing the terms digital native and digital immigrant for quite a while.  Digital native, of course, refers to those who have grown up with digital technology (generally those born after computers and cell phones became mainstream), and digital immigrants would be those who had to learn the technology as adults. But there are a lot of people that don’t fit nicely into those categories: there are also the “bridges” (somewhere between digital native and digital immigrant) and the refugees (those who have fled the onslaught).  I teach the digital refugees, of course.

In an effort to get a better picture of these distinctions, I started questioning my kids about the ways they use technology and why.  I don’t think my kids are particularly typical (after all, they are mine), but their responses were interesting, nonetheless, since they affirm, for the most part, what others (mostly digital immigrants) are saying about digital natives.  My kids range in age from 17 to 28.  I questioned the 17 year old first.  His answers were pretty much the same as his 22 year old sibling who is still in college.  His 26 year old sibling, out in the work force, had only slightly different answers.  All of them (even the oldest ones) grew up with computers both at home and at school, although for the older ones, computer technology was not as widespread and integrated as it is today.

They all have cell phones.  They all use the phones to send text messages.  The youngest says he uses text much more than voice (verified by the phone bill).  For the 22 year old it’s about a 50-50 split, and for the 26 year old, it’s mostly voice.

Why do they text instead of use voice?

  1. It’s more private, or, to put it in the words of the youngest, “texting is less obnoxious.”  He used an example of someone in a public place like a grocery store talking loudly on a cell phone so everyone can hear all the gory details that they would rather not. Texting doesn’t disturb anyone.
  2. In many cases it’s quicker and easier than dialing a number and waiting for the other person to answer just to say something like “I’m on my way, I’ll be a few minutes late” or “are you going to Fred’s this evening?”
  3. You can send the message to multiple recipients rather than making multiple phone calls.
  4. Sometimes it’s the only way you can communicate.  The youngest used the example of being in class, where phones are not allowed, and texting surreptitiously.  The 22 year old used an example of being at a loud party where you wouldn’t be able to carry on a phone conversation.

In most cases the texting is short, quick messages.  The 22 year old will switch to a phone call if the messages are getting long, since it’s easier to talk.

They all have MySpace and Facebook accounts.  Which one they use depends on which friends they want to communicate with. The 26 year old is in the process of resurrecting his Facebook and MySpace accounts, because that’s where all his friends are.  They all prefer Facebook:

  1. MySpace has too many ads that are in your face. As one of them put it, “Where would you rather talk to your friends: in the Mall, or in Radio Shack?”
  2. Facebook is more streamlined
  3. Facebook is more user friendly
  4. Facebook gives you a targeted list (“Here’s a list of others from your school who are on Facebook”) making it easier to find your real-life friends.
  5. Facebook has more games and applications.
  6. You have more freedom to change around your MySpace page, since it’s HTML based (but this can be a bad thing when you go to a page to leave a comment and there’s a big Flash application that slows down your computer and an annoying song you can’t turn off because the Flash app is in the way).

What do they think of MySpace and Facebook?  Generally, it’s a time waster.  They get on one of them when they have nothing else to do, or they have time to waste.  Both MySpace and Facebook are used to communicate with their friends, when the communication does not need an instant response.  But all of them know people who are “addicted” to MySpace or Facebook, spending every waking second trying to find out what everyone else is doing, or checking to see if there are any new comments.

What about e-mail?  For all of them e-mail is snail mail.  They use it for:

  1. formal communication
  2. sending attachments (it’s easier than IM, with fewer problems)
  3. staying in touch with distant friends or friends in foreign countries (where it’s too expensive to text or phone).

What is the real snail mail for?  Packages.

What about blogs? There was a disinterested “no” from all of them.  They don’t have one, don’t want one, and don’t read them.  When I pressed the 26 year old, he thought about it and admitted he does visit a couple technology news sites that are actually blogs.

E-readers have gotten such hype I couldn’t resist the opportunity to find out what they thought about them. They were puzzled:  “Why wouldn’t you just get the book?”  When I pointed out you could put hundreds of books on them, they were still puzzled:  “Isn’t that what a library is for?”  They conceded they might read an electronic version of a book, but couldn’t fathom having a specialized device to read it:  “Why would you get something that can only do one thing?”

Finally, I asked what they would do if there were no cell phones or computers.  The 17 year old wasn’t fazed: “Find something else to do, like read a book or ride over to my friend’s house.”  The 22 year old was a bit more concerned:  “You mean, like a day or two, or forever?”  (clearly not liking the “forever” option).  The 26 year old didn’t like the forever option either since he works in the technology field.

I think they all have a very different concept of technology than my generation does, even those of us who have embraced computer technology since its inception.  It really is an everyday occurrence for them, no more special than a toothbrush. And I guess that is what makes them “natives.”  It is hard for them to understand not being intimately connected to technology.  My 17 year old found it too painful to watch me figuring out how to navigate around a new cell phone last year that had a totally different interface from the last one (and a few more features).   He finally took it from me and set it up in a matter of seconds, complete with a picture of him as the background.   On the other hand, his brother, only two years older, reacted to the new release of World of Warcraft much the same as a digital immigrant:  he wasn’t so sure he wanted to take the time to relearn how to play the game with all its new features and content.  He wanted to stick with what he was comfortable with.  In the end, for them, it is just another tool.

Software Freedom Day 2008!

Software Freedom Day in Palm Beach County

The Palm Beach County Linux User Group is proud to announce its third Software Freedom Day/Installfest as part of Software Freedom Day 2008, the biggest international celebration and outreach event for Software Freedom, with hundreds of teams from all around the world participating. The yearly event is a celebration of Software Freedom and why it is important. This year the Palm Beach County Linux User Group will be hosting the event at the West Boynton Branch Library, 9451 Jog Road, Boynton Beach, Florida, from 10:00 AM to 12:00 PM on Saturday, September 20, 2008.  Google map available here.


West Boynton Beach Library


As part of the Software Freedom Day celebration this year, the Palm Beach County Linux User Group will be offering assistance with installing free portable software on USB flash drives, giving away CDs with free and open source software for Windows and Macintosh computers, and demonstrating how to use the free software.

We invite you to come by for giveaways, demonstrations, and to learn about Linux, a free and open source operating system available for any type of computer.

Google APIs and Mac

I have an old iMac that I’ve been using as a server. Because I like Linux, and because it was easier to configure LAMP (Linux, Apache, MySQL, PHP) than the similar components in OS X, I installed Kubuntu 6.06 on it (I’ve always liked the KDE desktop better than the Gnome desktop, which is the default for Ubuntu). Everything was fine until I decided I wanted to try out a Google API.

The Google APIs require PHP 5.1.4 or higher (actually it was needed for the Zend Framework, which is required for the Google API). But Ubuntu 6.06 (and Kubuntu 6.06) didn’t have upgrades to PHP 5.1.4. After a lot of trials and failures, I decided to fall back on Apple’s OS X and install MAMP (Mac, Apache, MySQL, PHP). This particular machine could only take OS 10.3.* on it, which limited the MAMP I could use. But it included PHP 5.1.6, so I was happy. For a while.

I got everything up and running again, and even figured out how to get local network access working. Then I got back to the Google API. The first step, with MAMP, however, was to secure it, since the default install is with user “root” and password “root.” So far, that wasn’t a problem since MAMP on this computer was only accessible on the local network, firewalled from the Internet. But using a Google API requires access to and from the web.

The MAMP application has a FAQ page, accessible from the start page, that looks really helpful, but isn’t. You can get there by clicking the FAQ button on the start page:

MAMP start page

The part about which versions of the included programs are installed is helpful, of course, but I had already checked that before downloading MAMP. It’s the section just below, under “How can I change the password for the MySQL database?”, that is unhelpful.

First of all, mysqladmin is not at the listed location (/Applications/MAMP/bin/mysql4/bin/mysqladmin); it’s in /Applications/MAMP/Library/bin. The PHP config file’s location is closer to what’s listed: /Applications/MAMP/bin/phpMyAdmin/

Second, trying to run the suggested command in tcsh got me nowhere. It turns out the default shell was changed to bash in OS 10.3, but upgrades (which this was) keep tcsh as the default. Fortunately, bash is still available; the default just has to be changed in the Terminal preferences.

So, just to make sure bash is really there, go to the /bin directory in the terminal (using the Finder will just show the documentation):

bash in the Finder

Type “cd /bin” to change to the /bin directory, then “ls” to list everything in it (bash appears in the listing in the screenshot):
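If you’d rather not poke around directory listings, a couple of quick commands will confirm the same thing (these assume a standard /bin layout):

```shell
# Confirm bash exists and is executable
ls -l /bin/bash

# Show which shell the current Terminal session is running
echo "$SHELL"

# Ask bash itself for its version
/bin/bash --version
```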

While the terminal is open, go to the Terminal preferences:


Notice the path listed is for tcsh:

tcsh set

Change it to /bin/bash:

bash path

Close the Preferences window, quit Terminal, and relaunch it. bash, instead of tcsh, will now appear at the top of the Terminal window.

Now running the command from the FAQ page (with the corrected path) will change the MySQL password. But before you press Enter to run it, highlight the new password and copy it using the Edit menu at the top of the screen; you’ll paste it into the config file in a moment.

/Applications/MAMP/Library/bin/mysqladmin -u root -p password NEWPASSWORD

(where NEWPASSWORD is the password it is to be changed to). The PHP config file will also need to be edited. Open the file (in MAMP’s phpMyAdmin folder) in a code editor like BBEdit or Emacs; I had Emacs on this machine, which worked nicely. Don’t try to do it in TextEdit. That will not work nicely at all. Find the lines

$cfg['Servers'][$i]['user']           =   'root';          //MySQL user
$cfg['Servers'][$i]['password']       =   'root';         //MySQL password

Replace ‘root’ in the password line with the one you copied. Save the file and close it.
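If you’d rather not do the edit by hand, a one-line in-place substitution works too. This is just a sketch: the file path assumes a default MAMP phpMyAdmin install, and NEWPASSWORD stands in for the password you set above.

```shell
# Hypothetical example path; adjust for your MAMP install
CONFIG=/Applications/MAMP/bin/phpMyAdmin/config.inc.php
NEWPASSWORD='s3cret'

# Replace 'root' with the new password, but only on the ['password'] line,
# leaving the ['user'] line alone
perl -pi -e "s/'root'/'$NEWPASSWORD'/ if /\['password'\]/" "$CONFIG"
```

perl’s -pi -e edits the file in place and behaves the same on OS X and Linux, which sed’s -i flag does not.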

Now, according to the MAMP FAQ page, it’s finished. Not.

It turns out there are also a couple of scripts to change in MAMP, documented over on network0. There’s also a handy section there on securing MAMP itself by password-protecting the folder with .htaccess, using an online .htaccess password tool. So now that I’ve got it locked down, it’s time to figure out how to open it up for GData and that Google API. 🙂
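For the curious, here is roughly what that protection looks like done locally instead of with an online generator. The directory, username, and password are all made-up examples; on a real MAMP install you’d point AuthUserFile at wherever you keep the password file.

```shell
# Example directory standing in for /Applications/MAMP/htdocs
HTDIR=./htdocs
mkdir -p "$HTDIR"

# Create a password file; openssl's -apr1 produces Apache's MD5 scheme
printf 'admin:%s\n' "$(openssl passwd -apr1 'changeme')" > "$HTDIR/.htpasswd"

# Require a login for everything in the directory
cat > "$HTDIR/.htaccess" <<'EOF'
AuthType Basic
AuthName "Restricted"
AuthUserFile /Applications/MAMP/htdocs/.htpasswd
Require valid-user
EOF
```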

Libraries and Web 2.0

The Public Library of Charlotte and Mecklenburg County started a learning program for its employees a few years ago called 23 Things. It was intended to help people learn about new Web technologies that have changed the way we interact with the Web. It was evidently successful, and other libraries followed suit, using the formula and exercises set out by the Charlotte and Mecklenburg County Library.

Their site says that (as of May 2006) over 200 libraries are using their Web 2.0 (23 Things) learning tool. It’s a pretty neat set of exercises. But I have some suggestions for anyone in a library thinking of implementing the program.

  1. Don’t have all your employees sign up for Gmail just to use Blogger (which is now owned by Google); they can get a Blogger account with any active e-mail address. When Google sees 50, 100, or 300+ e-mail accounts being created and accessed from the same IP address, with Blogger accounts instantly created on top of them, it’s going to think one thing: spammers! Actually, no person is doing the thinking; Google has bots set up to catch exactly this kind of pattern. And once you’re flagged as a spammer, forget trying to get the e-mail accounts unblocked.
  2. Unblock the content you want staff to be able to access. This is a good time to take a closer look at just what your filter is blocking and whitelist the innocuous sites you want staff to be able to play with.
  3. A lot has happened since 23 Things first appeared. If you are going to encourage learning about technology, don’t limit the discussion and exercises to old technology. Do a little research, hang out at technology conferences, follow technology feeds, talk to a tech-savvy person, and find out what is current and what is coming down the pike. Then change or add to the discussion to make the exercises current and relevant.  Seriously.  Others have already made changes.  You can, too.
  4. If you haven’t already, read The Cluetrain Manifesto, available online or in print.  This really is an absolute must for administrators, whether you’re doing the 23 Things or not.

And, of course, you deserve great commendations for taking this step into Web 2.0.  Welcome!