A new kind of mousing around

Many years ago I initiated what became the Mousing Around tutorial, a standalone, browser-based tool for teaching beginners to use the computer mouse. It was picked up worldwide and modified and translated for local use, which was kind of cool. Yesterday, an older gentleman who wanted to apply for a job at Whole Foods was sent over to learn to use a computer. He had never used a computer in his life, so I started him on the Mousing Around tutorial. We stopped at the mousercise part because (1) his hand and wrist were already aching, and (2) I wanted to give him an update on the computers he was likely to encounter at a job that was not likely to be at a desk with a mouse to control the cursor.

It started me thinking as I explained touchpads and touchscreens, and how a finger now does what the mouse typically does: how does the current Mousing Around tutorial work on touchscreen devices? Answer: not very well. Would an updated, or even a totally new, tutorial be useful as a tool to help people learn to use touchscreens? Maybe. I do a lot of coaching people on how to use their tablets and smartphones, and one of the trickier parts is navigating the device with a (not shaky) finger.

One of the nice things about the Mousing Around tutorial is that the process of introducing the mouse and how it interacts with the computer also gives a very basic but functional introduction to the computer itself. So as I explained about touchscreens, it was easy for the gentleman to understand that, for example, tapping on an icon (or double-tapping) with his finger would open it the same way clicking on it with the mouse would. Back when we created the Mousing Around tutorial, the icon examples we used were pretty standard fare on most computers. That has changed. A lot.

And so I wonder: is it time for a redo? Probably. Should I do it? <shrug> I’m sure I could create something really useful. The question is, of course, whether the cost-benefit ratio would be worth it.

Back to Technology and the Aging population

I’m a boomer with an aging parent. He’s currently 93 and doing just fine, thank you. I also coach/train people on using technology. Many of these people are also compromised by the effects of aging. The technology world has not been particularly mindful of this population group. But there have been some changes. I am cautiously hopeful.

The biggest conundrum I see right now is cell phones. I will use my dad as the starting point here, because, although he is 93, he fairly represents a large portion of all aging adults, including some within my own generation (shoutout to my brothers and sisters who are equally stymied by the changes foisted upon them by tech companies).

My dad has been using computers since the 1980s. Well, using them at home, anyway. He has some fun stories about using a mainframe at Tulane to run a research project in the 1950s. He had a mobile phone in his car in the late 1960s. So it’s fair to say he’s never been afraid of learning new things and making technology work for him.

As he got older, he was not as quick to adopt changes to the technology he was using, which is a typical response. It gets harder to relearn things, and if the older methods work fine, why change? So when an OS change would have required changing the way he saved and organized files, for example, he opted to forgo updating his computer until he absolutely had to. This, of course, made it even harder, because he had to try to catch up on several major changes at once rather than incrementally. And he wasn’t getting any younger, so it wasn’t getting any easier. When he finally did upgrade, he tried to impose his old ways on the new system, which made for some…interesting results that my son and I are still trying to unravel to this day.

My dad was using Apple, but I saw the same things happening with Windows users at the time. They would come to me, in 2007, with computers running Windows 3.1 or [shudder] Windows Me and want to make the computer keep working so they wouldn’t have to upgrade (which for most of them meant a financial outlay they weren’t willing, or able, to make). Like my dad, they were comfortable with the system they had learned and did not see a reason to change. Also like my dad, when they were finally forced to adopt newer OSes, they tried to impose their existing knowledge and practices on the new system, with interesting results.

Meanwhile, phones were evolving as well, and took a huge leap with the first iPhone. I never really learned how to use all the features on the phones I had. I tried using the internet, but the effort-to-reward ratio was just too steep. I did text, but never really embraced the T9 thing. The phones were, ultimately, just fancy mobile phones. But they were mobile phones we boomers learned to use. My dad, too. Sort of.

Then smartphones took off, and we could have mini computers in our hands that could also make calls. For younger generations this must have seemed a natural progression. For the aging population, which was finding learning new things (and relearning existing things) increasingly difficult, the gradual disappearance of the flip phone must have seemed both a blessing and a curse. Smartphones seemed to hold the promise of being easier, but they didn’t work like any phone they were used to.

Which brings us more or less up to date. Cell phones in the U.S. are practically ubiquitous (see the Pew Research Center report). For those over 65, smartphone versus non-smartphone ownership is almost an even split (46%/40%), while for the youngest generation, smartphone ownership is at 94%. Of course, the youngest generation has barely known anything else. And this is an important point: the experience and expectations of younger generations are vastly different from those of my and my parents’ generations.

Meanwhile, back to my dad. He never did learn to use all the bells and whistles on the flip phone, of course. It was all we could do to teach him to make a call ten years ago. Because he liked the idea of having a mobile phone again, he was willing to make the effort to learn how to use it: “tap in the telephone number, then tap on the button with the green phone receiver.” He’s colorblind. But he did memorize which side the “call” button was on, and to just close the phone to hang up. Yet he rarely used it, and even less as the years stretched on. Every time we had to get him a new phone it was a harder challenge for him to learn how to use the new one, until he finally decided he would only use it for emergencies, which meant he never used it and he forgot entirely how to even turn it on.

Things came to a head when he needed to use the phone in an airport and couldn’t figure out how, even though I had “taught” him at least three times in the past year. One sister got him a new flip phone and taught him how to use it. When that failed with a second incident at an airport, another sister decided he needed a Jitterbug phone, but called me first.

I started to consider what people who come to me to learn to use their phones want or ask about. None of them have flip phones. All of them are confused by the myriad options available to them. Most just want to be able to use the phone. They get the concept of “Contacts” (although they don’t completely understand how to use it, because, of course, too many options). Some are interested in texting, but more as a curiosity: they just want to know what it is, how it works, and why people use it. Many are interested in the camera, and ways to share the pictures they take. Some use email on their phone. A few want to use an app that is specific to their interests. And some have been using a smartphone for years but are still stymied and frustrated by it, usually when they are forced to update and the update forces changes that require they relearn how to use it.

I went to Best Buy to check out the Jitterbug, and it was an instant NOPE! It boasts simplicity, but it offers too many options for someone like my dad, who only needed an easy-to-use phone. But the nice store assistant, Natasha, suggested Samsung’s Easy Mode, and showed me how it works. I figured we had a winner. Get rid of all the extraneous shortcuts, add contact shortcuts to the home page, and virtually all we’d have to do is teach him how to unlock the phone.

My dad had other ideas. He liked the simplification, but he wanted a bigger phone than the J3 I was steering him to, and then started talking about learning to use the camera. He’s only ever had Macs. Yep, we ended up with a large iPhone, to make it easier to access photos from his computer. I removed all the apps that could be removed, then moved everything else to a folder away from the home page. I disabled all notifications, and Siri. I also disabled cellular data for everything, and blocked all calls from people not in his contacts. I added the phone, contacts, and camera buttons at the bottom, and created a widget with favorite contacts that appears even on the lock screen. The last step was to get a case with a cover to prevent ghost calls. I then spent the next three days going over how to make a call, answer a call, reject a call, and hang up. We also went over, several times, how to use the camera, and what to do when he couldn’t figure out what was happening. I crossed my fingers and headed home.

It’s hard for someone his age to learn new technology. Even with the modifications I made, everything is so interconnected that it is still easy for him to get lost and not know what is happening. He has been playing with his new phone, which means those of us on the favorites list have been getting calls where nothing happens. I hear he has also been attempting selfies. I count it a success: he feels comfortable enough to play with it and try things, which is how we learn. But it took a lot of time, a lot of patience, and a lot of handholding. Because the technology world thinks only in terms of the younger generation of users when designing interfaces (but shoutout to Samsung for recognizing the issues and making the effort to make their product usable for older generations), this is what it takes to get a usable phone for people like my dad.

Code4Lib Midwest Lightning Talk, July 2016


Many thanks to the folks at the University of Chicago for hosting C4L Midwest last week. After hearing some of the presentations and discussions on data plans and availability, I put together a short lightning talk about the data we have at the Code4Lib Journal, or at least what can be cobbled together (literally). Surprise statistic (for me): the percentage published and the percentage rejected, over the history of the journal, are equal.

As Eric Lease Morgan pointed out, most of the data we can’t share from the Journal is confidential. But one of the problems with gathering even shareable data is that it’s messy. As I mentioned in the lightning talk, the two main reasons for this are (1) it’s just not a priority for a volunteer committee, and (2) very few people even ask about the data. Even gathering a statistic like “rejected” is difficult, because not all proposals and rejections are tied to a specific issue, and some proposals that are accepted are later rejected for publication. But overall, we can get some generalized statistics.
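The kind of tally involved can be sketched in a few lines of Python. This is a minimal illustration only: the outcome labels and the sample records below are invented, not the Journal’s actual data.

```python
# Sketch: tallying proposal outcomes as percentages of the whole.
# The records and outcome labels here are invented for illustration.
from collections import Counter

def outcome_percentages(proposals):
    """Return each outcome as a percentage of all proposals."""
    counts = Counter(p["outcome"] for p in proposals)
    total = sum(counts.values())
    return {outcome: round(100 * n / total, 1) for outcome, n in counts.items()}

proposals = [
    {"title": "A", "outcome": "published"},
    {"title": "B", "outcome": "rejected"},
    {"title": "C", "outcome": "published"},
    {"title": "D", "outcome": "withdrawn"},
    {"title": "E", "outcome": "rejected"},
]
print(outcome_percentages(proposals))
# → {'published': 40.0, 'rejected': 40.0, 'withdrawn': 20.0}
```

The hard part, of course, is not the arithmetic but deciding which messy real-world records map to which outcome label in the first place.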

Ongoing project from this: try to gather editor numbers over time (even trickier since it involves identifying when people came and went, and there’s nothing available, other than emails, to nail that down).

Social Media ROI for libraries

We have all been indoctrinated in the importance of incorporating social media into our libraries’ outreach/marketing strategies, to the point that one almost has to apologize if their library isn’t on social media. I am wondering, however: where is the evidence? How do we know that social media has any effect on our institutional bottom line, other than from social media supporters connecting dots (e.g., surveys indicating social media use, and references to other supporters) and saying, “of course it does!”? Where’s the data?

I am still exploring, so help me out if you actually have the evidence documented somewhere (e.g., an article or data set). But mind you, we are talking about libraries and library-like institutions here, not commercial operations, for which the conversion metrics of Google Analytics (or others) work nicely. Hopefully we already know that libraries don’t fit that commercial model so nicely. Also, considering the sources used to justify the swooning over social media, the percentage of a national or global community that uses social media does not necessarily translate to a library’s base (i.e., the community that pays for and uses it).

Focusing just on the U.S., because I work in a region within the U.S., I did find an interesting data set about what libraries are doing with social media and how they are handling it: http://scholarscompass.vcu.edu/libraries_data/1/ (with a shoutout to the authors/librarians involved, who evidently believe in open data and data sharing!). It’s an appropriately complex set of data, but a quick scan through the survey, especially the write-in responses, indicates social media integration is pretty hodgepodge, as if validating a feeling of distrust (as in, “what is this really going to do for us?”).

Doing a cursory literature search (limited to the last few years, because this is such a quickly changing landscape), I came across one helpful article: Marketing Finding Aids on Social Media: What Worked and What Didn’t Work (http://americanarchivist.org/doi/10.17723/0360-9081.78.2.488), in which a research team selected ten social media sites to promote content, using email lists as well, and tracked and evaluated the click-throughs using Google Analytics. Although it presumes that social media marketing is needed (and I don’t dispute that position), they actually have the data to prove (1) its effectiveness, and (2) which channels give the best results. Why can’t we have more like this?
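For a library wanting to replicate that kind of measurement, the usual mechanism is to tag each shared link with UTM parameters so Google Analytics can attribute the resulting click-throughs to a channel. A hedged sketch in Python follows; the base URL, channel names, and campaign name are my own inventions, not from the article.

```python
# Sketch: generating UTM-tagged links so Google Analytics can attribute
# click-throughs by channel. All names below are invented for illustration.
from urllib.parse import urlencode

def utm_link(base_url, source, medium, campaign):
    params = urlencode({
        "utm_source": source,     # which site sent the traffic
        "utm_medium": medium,     # e.g. "social" or "email"
        "utm_campaign": campaign, # groups the links into one effort
    })
    return f"{base_url}?{params}"

for channel in ["facebook", "twitter", "tumblr"]:
    print(utm_link("https://example.org/finding-aid", channel, "social", "finding-aids-2016"))
```

Post the tagged link (rather than the bare URL) on each channel, and the per-channel click-through counts show up in the analytics reports without any guesswork.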

Why can’t we have fewer “jump on the bandwagon” programs and courses and classes, and more instruction on assessment of need and measurement of impact? How about classes that teach what data can be gathered, how to gather it, and how to use it? Because maybe we should distrust social media’s usefulness. What is it going to do for us? Is there really a social media ROI for library-type institutions?

Automation and Small Libraries, and CornerThing

The situation hasn’t really changed in the world of library automation since last year’s post. Libraries find what works for them, given their economic and human resources. What is different is a new tool, developed with some virtual interns. I call it CornerThing, because I’m not very creative with names. 🙂

I’ve got these small libraries (American Corners) where, for some of them, “automation” consists of massive spreadsheets. And LibraryThing. Checkouts are still done by hand on cards. They compile reports by hand, going through the cards each month, to send to me or one of my colleagues. It seemed like there must be an app to use LibraryThing for more than just displaying a collection. I searched and checked as only a Reference Librarian would. 🙂 Nothing was out there. So how hard could it be to make an app that could capture checkout statistics (the part I was interested in)?

I originally wanted an iPad app, but rather than spend precious little free time on it myself, I decided to get a couple of interns who were willing to learn some new skills while creating a simple app. It was an interesting experience. I didn’t get an iPad app, because no one applied for that project. Several applied for the Android app project I added almost as an afterthought (why not? More options!). So I got an Android app, now in beta, which about a third of those small libraries, the ones that already have Android tablets, can use.

CornerThing:  it syncs with a LibraryThing collection, downloading the metadata to the device, into a lightweight searchable database.  Subsequent “syncs” only add changes.  It’s possible to add an item in the app, but the syncing is not two-way.  Then there’s a searchable database for borrowers, entered on the fly, or by uploading a spreadsheet file (via computer connection).  Items from the collection can be checked out to borrowers, with a due date, and checked back in.  When an item is checked out, the data is captured on the item record and preserved.  Once the item is checked back in, the connection between the borrower and item is erased, but the numerical data on checkouts is retained on the item record, so reports can be generated by selected metadata (e.g., author, title, keyword).
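The check-out/check-in logic described above, where the borrower link is erased on check-in but the numerical data is retained, can be sketched roughly like this. This is a minimal illustration in Python, not CornerThing’s actual (Android) code; the class and field names are my own guesses.

```python
# Sketch of the circulation logic described above: on check-in, the
# borrower link is erased but the checkout count stays on the item record.
# Names are illustrative, not from the actual CornerThing app.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    title: str
    author: str
    checked_out_to: Optional[str] = None  # current borrower, if any
    due_date: Optional[str] = None
    checkout_count: int = 0               # retained for reports

def check_out(item, borrower, due_date):
    item.checked_out_to = borrower
    item.due_date = due_date

def check_in(item):
    item.checkout_count += 1    # keep the statistic...
    item.checked_out_to = None  # ...but drop the borrower link
    item.due_date = None

book = Item("Example Title", "Example Author")
check_out(book, "patron-42", "2016-08-01")
check_in(book)
print(book.checkout_count, book.checked_out_to)  # → 1 None
```

Because the count lives on the item record rather than in a transaction log, reports by author, title, or keyword reduce to filtering items on their metadata and summing `checkout_count`, with no borrower data involved.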

CornerThing: a simple circulation app for small libraries (like American Corners) to take advantage of their LibraryThing collections.  I’m pretty sure it would work for other small libraries with limited resources. 🙂  It’s also open source. If you’re interested, send me a message.

Automation and small libraries – first look

It’s kind of amazing to me that after over fifteen years in this business, I’m looking at a situation that pretty much hasn’t changed for small libraries looking for an automation system.  There wasn’t much available for them at a reasonable cost back then, and there is even less today.  Go ahead. Show me where I’m wrong.

Seriously, a small library, a small public library, one that is supported, sometimes begrudgingly, by (too often non-existent) local public funds, does not have a lot to spend on annual fees for a library automation system.  They have even less to spend on a tech person who could install and maintain one of the FLOSS options.  And even if they do, there’s still the recurring cost of a web server to put it online.  O.K., maybe they could tie into the local government’s web server, assuming the local government entity has one (probably a “yes” in the U.S., but not in other parts of the world).

I actually did a complete retrospective conversion at a library years ago.  It was a special library that had the funds to support a quality system.  I was shocked and horrified at the “system” (basically a giant word processing document) that was being used to track titles.  There was no mechanism to track costs.  I have since come to appreciate the efforts of librarians in small libraries doing the best they can with what they have, and with the skill sets they have, to manage their collections.  Hello, World: librarians are amazing, and you should throw money at them, because they do amazing things with your resources.

So, in the history of library systems, the first stop is Marshall Breeding’s excellent record of what was, what is, and who ate up whom. Although this doesn’t cover everything being used out there, it does give an interesting picture of the developing landscape. I was surprised and amazed that the Georgia Public Library Service, back in 2004, chose to build its own automation system (Evergreen) for libraries in the state rather than use one of the existing systems. There was, of course, Koha, out of New Zealand. But that one got mired in nasty disputes over code and trademarks. So far, so good, with Evergreen (keeping fingers crossed).

Next stop is Wikipedia’s list of Next-Generation library systems, especially the Comparison chart, but you might want to scan through the brief explanation of what Next-Generation means here.  The list is notable, because it includes both large systems and systems smaller libraries can use.  But note that these are all web-based systems.  Some of them, however, can function on a stand-alone basis, on the desktop.  This is important, because the most basic need of libraries is library management software.  Getting it on the web is secondary to librarians (although not to their patrons, of course).

So let’s take a stab at what some of the possibilities are for small (mostly) underfunded libraries today.  There are two perspectives to consider here:  libraries with staff that are systems-capable, and libraries with limited or no staff capable of managing the backend of a system.

In the first category (libraries with systems-capable staff), we have, first, systems that have a stand-alone option,

and second, systems that are add-ons to content management systems a library may be using, or have access to, for their web site (so far, I’m only seeing Drupal modules, are there any others out there?).

In the second category, it’s pretty bleak without (1) hiring someone, or a service, to set up a system, or (2) a funding stream to pay annual fees.  In the first case I refer back to one of the four above that can be installed on a stand-alone basis.  After installation, training would probably be required as well, despite the documentation.   In the second case, LibraryWorld seems to have a loyal following around the world.  I haven’t had an opportunity to look at it recently, so I don’t really have anything to say about it (yet).  Feel free to add comments below about your experience with it.

But LibraryWorld is a closed system, and if you are looking for something open source, there are

  • PMB Services (uses phpMyBibli)
  • OPALS  (they say they are open source, but I don’t see a download link – the acid test of open source)

There are, of course, larger open source systems, which may work for a consortium of libraries: one of the Koha versions, and Evergreen come to mind.  Both have companies that will install and customize the system.

Finally, there is LibraryThing, which is oddly ignored by everyone except those who want their collections online for their patrons and have no other way to do that. Granted, it is limited in terms of collection management: checkouts? reports? But it can work, because it is, actually, a next-generation cataloging system. It’s online, it’s searchable, it’s fairly easy to add resources, and the cataloging options are wide open (if that’s how you want to characterize keywords). And even though the number of resources that can be entered free of charge is limited (with the option for more space requiring a small fee), most small library collections are pretty small. Best of all, it’s accessible from different devices. All we need is apps that extend the basic functionality offered with a LibraryThing account.

So here I am, looking for viable library management software options for small libraries outside of the U.S., and this is what I’ve come up with. Give me some feedback, library world. Which of the options above are worth taking a longer look at?


Copyright and disruptive technology

What if you could give a book to everyone on earth? Get an ebook and read it on any device, in any format, forever? Give an ebook to your library, for them to share? Own DRM-free ebooks, legally? Read free ebooks, and know their creators had been fairly paid?  –From About, unglue.it

Copyright is a round hole.  Paper publications are nice, round pegs.  Electronic items are square pegs.  Hard copies can be passed around, shared from person to person across time and space.  A copyright holder’s distribution rights are curtailed by the physical transfer of the copyrighted item (by purchase or gift) to another.

Electronic items can be similarly shared. Maybe.  Because they are square pegs, a new way to control distribution was needed, so a square hole called “licensing” was carved into the copyright landscape.  This pretty much upsets the shaky balance between the public right to knowledge and a creator’s right to profit from the work.

Enter the crowdfunding concept, which takes advantage of the ubiquity of the Interwebs to more easily raise money for relatively small-scale projects. Kickstarter is a fairly well known example of a crowdfunding conduit. And now comes Eric Hellman, using the crowdfunding idea to harmonize the ideals of copyright and licensing, to make that square peg fit in the round hole.

Welcome to unglue.it.  I love it.  Where else can you find the possibility of getting your favorite book released into the electronic domain?  I’m hoping when this catches on, I’ll see In the Night Kitchen moved into an active campaign by the time my new grandson is ready to read!

Peer Review and Relevancy

The Code4Lib Journal exists to foster community and share information among those interested in the intersection of libraries, technology, and the future.  —Mission Statement, Code4Lib Journal

A colleague on the Code4Lib Journal’s editorial committee has posted a defense of the Journal’s position on peer review, or more specifically, double-blind refereeing. I was tempted, several months ago, to address the topic in the opening editorial for Issue 16, but was too preoccupied with food. 🙂 I don’t always see things the way Jonathan does, although I’ve learned over the years he gets it right a lot more times than I do. In this case, we both agree that the Journal is, in fact, peer reviewed but not double-blind refereed, and we both agree this is a good thing.

Jonathan has, from my perspective, a rather entertaining analogy of the process as open source software development.  He also makes the argument that the Journal’s purpose is to provide value to our audience, and, just to make sure the elephant in the room is not ignored, stresses in very blunt terms that the Journal is not here to validate librarians’ or library technologists’ work for purposes of tenure or advancement.  I’m not going to disagree, but I am going to address the elephant differently.

I understand how “peer review” has been a good thing.  It has provided outside confirmation of the relevancy and quality of scholars’ work.  This was not the reason we started the Code4Lib Journal, however.  We were looking for a way to encourage and facilitate the (relatively) rapid dissemination of helpful information to our audience (which we did not see as being limited to people self-identified as members of  the Code4Lib community).

Because of this goal, we do things a little differently at the Code4Lib Journal. We don’t require that entire drafts be submitted, but we do have a rather tight timeline to publication, so it is obviously a plus when we get a draft up front, or when the author is already working on a draft and can get it to us quickly if the proposal is accepted. All of us on the editorial committee are library technologists of some kind, although not all the same kind. We all consider each submission, sometimes engaging in lively and lengthy discussions about a proposal. In that regard, I would argue that the Journal is not just peer reviewed, it is uber-peer reviewed, because it is not one or two “peers” making the call on whether to accept a proposed article; it is typically 7-12 peers making the call.

Again, because of the goal to encourage and facilitate rapid dissemination of information, we are committed to working with authors to get their articles ready for publication.  This typically takes the form of (electronically) marking up a draft with specific comments and suggestions that we, as peers, think would make the article more useful to our audience.  The process sometimes goes through several iterations.  It doesn’t always work.  We have had some fall off the table, so to speak.  But the consistently high quality issues we have published are a testament to the validity of the process. Double-blind refereeing here would be an inherently inferior process for achieving our mission and goal.

But back to the elephant in the room. Why, today, given the current state of disruptive technology in the publishing industry, are we even talking about refereed journals? I’m waiting for the other shoe to drop regarding that process. Why are libraries still hyper-focused on double-blind refereeing and peer review as validating mechanisms in tenure? Isn’t it time to rethink that? After all, libraries have already registered their disgust at being held hostage by journal publishers: universities require that their scholars publish, and their libraries have to pay whatever the publisher demands for a license to see that scholarship.

Double-blind refereeing is time consuming. So is what we do at the Code4Lib Journal. But I would posit that our way is more effective in identifying relevant information and ensuring its quality. How much ROI is there, really, in sticking with the old vetting process for validating tenure candidates? May I suggest letting that other shoe drop and cutting to the core of the following:

  • Why is tenure needed in your institution?
  • How effective is the current tenuring system in supporting your institution’s mission and goals?
  • What value does scholarly publishing bring to your institution?
  • What are you willing to do to ensure continued scholarly publication?

The Code4Lib Journal was started five years ago with a message from Jonathan Rochkind to the Code4Lib mailing list asking, basically, “who’s in?” Change can be done. It just takes someone willing to voice the call. If publishing is important to tenure, send out a call to your colleagues to start a Code4Lib-type journal. If it looks too scary, ask. I’m willing to help. I’m sure there are others out there as well.


Drupal Camp!

With a little help from the DrupalEasy folks in Orlando, the Miami and Fort Lauderdale groups are finally putting on a Drupal Camp! (Thanks Mike Anello, for giving it the final push!)

Nova Southeastern University in Davie is hosting the event on Saturday, October 22. Admission is only $10 — if you are anywhere in South Florida, come! There are corporate sponsors, but consider chipping in $40 to be an individual sponsor. Details are at the Drupalcamp website.

For the Drupal wary:  there is a Beginner’s track, using Drupal 7, the easiest Drupal ever!

For experienced Drupalers, there will be plenty to chew on, such as drush awesomeness!

If you are somewhere in between, trust me, you won’t be bored. 🙂

Drupal and other distractions

What started out as a 3-4 month hiatus to do an intranet site redesign is now winding down after six months. After the first couple of months, when it became apparent no progress was going to be made, I regrouped and put together a different, motivated but novice team. It’s been a little over three months since that team started on the project, and the results are impressive. Although we could go live with it, I decided to do some “beta testing” on our unsuspecting end users. That has been enlightening: there may be some revisions in store before we finally get this baby to bed.

The phrase usually associated with Drupal is “steep learning curve.” I was the only one in my organization who even knew what Drupal is, although some seemed to have a vague concept of “content management system.” But I recommended we go with Drupal over other options because of (1) its potential, (2) the growing and active group of libraries using Drupal, and (3) the fact that it was the CMS I was most familiar with. Looking back, I’d have done some things differently (isn’t that always the case?), but I would still choose Drupal. We haven’t fully taken advantage of all Drupal’s potential, but that’s only because I decided to hold off on developing more advanced features until after we completed the initial project.

I was fortunate to have two others who were eager to learn and undaunted by Drupal’s complexity. In a little over two months, with a day and a half of one-on-one training and many, many hours of phone conferences, they understood Drupal better than I did after two years of playing with it. This is a good thing, since they will likely be the ones left with the task of maintaining the site over the long run. But we needed more than the three of us to migrate the content from the previous site, so I recruited four others, three of whom were apprehensive about approaching technology at this level. One had a Technical Services background and provided us with the taxonomy structure we needed. Two added content directly into special content types I set up, and one tracked down the copyright-free pictures we needed. It was an interesting exercise in project management: finding the team members we needed by dividing the tasks by the skill level required, configuring Drupal to be easier to use for technophobes, and approaching prospects individually to ask for help on a limited scale.

About halfway through the project, as I struggled with getting the site to display the same in IE7 and Firefox, I shifted gears and decided to do the layout completely in CSS. Actually, I was shamed into it after a query to the Drupal library group. I finished those changes just about the same time everything else fell into place. And it works just fine in both browsers, thank you! But we had been designing with the assumption that most end users would be using a set screen size and resolution. This week we discovered those assumptions were way off. The good news is that we have found a lot more real estate to work with. The bad news is that while things aren’t broken, the site doesn’t look quite the way we envisioned.

There may be some more tweaking involved, but there are now two others who have enough experience to do the tweaking.  Life is good.  Now to get back to the digitization project.