Friday, March 09, 2007

Weekly readings - 09 March 2007

Digital Repository Audit Method Based on Risk Assessment. Website. March 1, 2007.

The Digital Curation Centre and DigitalPreservationEurope have released the Digital Repository Audit Method Based on Risk Assessment (DRAMBORA) toolkit. It gives repository administrators a way to audit and assess the capabilities, weaknesses, and strengths of their repository. The model is designed to respond to new developments and follows the DCC audit process for various types of archives. This 221-page document is very comprehensive and expects that the organization, processes, and documents are already in place. Registration is required to download the document and worksheets.

Review and analysis of the CLIR e-Journal Archiving survey. Maggie Jones. JISC. 7 March 2007.

A review of e-journal archiving is now available. The site links to the report “E-Journal Archiving: Review and Analysis of the CLIR Report E-Journal Archiving Metes and Bounds: A Survey of the Landscape”, by Maggie Jones, as well as to an executive summary. The trend toward e-journals is increasing, and LOCKSS and Portico have provided momentum in this area. The report lists basic principles for the effort, aimed mostly at archiving services and national bodies. Of special interest are:

  • There must be an explicit commitment to digitally archive scholarly peer-reviewed journals.
  • Form a network to exchange information with others on what you are doing.
  • Participate in at least one long-term initiative.
  • Act collectively to address long-term accessibility.
  • Participate in a registry of archived scholarly publications.
  • Have a preservation mandate.
  • Initiate either formal or informal certification and articulate practices and procedures.
  • Create publicly accessible policies and procedural documents.
  • Clearly state access conditions.
  • View the preservation of electronic journals as a necessary investment.
  • Support a range of options and solutions, and collaborate.

Manage risks appropriately: making the publisher responsible for archiving is a high-risk strategy, and a single definitive approach to e-journal archiving is unlikely ever to emerge. The report provides an in-depth look at LOCKSS, CLOCKSS, Portico, PubMed Central, and others.

Electronic Resources Management and Long Term Preservation: (Is the library a growing organism?) Tommaso Giordano. E-LIS. 02 March 2007.

Digital preservation is a complex issue that involves many areas of expertise. This paper looks at how academic libraries approach preserving e-journals and at their organizational practices. License terms may control access to the current year, back issues, and an archival copy. Perpetual access and archiving rights are different things; getting an archival copy is only the first step, and implementing long-term access requires much more. Digital preservation may not be possible for every library: it is a costly operation that demands considerable, long-term commitments beyond current budgets, making it a high-level strategic issue. There is a shift from the traditional ownership model to one based on renting resources with no guarantees for the future, which raises questions of sustainability.

The End of Online Storage: Coming Soon. Brian Bergstein. MCPmag. March 5, 2007.

A new study estimates that the amount of digital information the world is generating has increased dramatically. The study tries to account for photos, videos, e-mails, web pages, instant messages, phone calls, and other digital content. It estimates that 40 exabytes of original data were created last year, rising to 161 exabytes if you count the times it is duplicated. The authors wonder whether enough is being done to save digital data for posterity: "Someone has to make a decision about what to store and what not. How do we preserve our heritage? Who's responsible for keeping all of this stuff around so our kids can look at it, so historians can look at it? It's not clear."

Google helps terabyte data swaps. Darren Waters. BBC News. 7 March 2007.

Google is developing a program to physically ship large data sets, some as big as 120 terabytes, that are impractical to move over the internet. Google collects the data sets and sends them to scientists who want them. The effort began after researchers working with The Archimedes Palimpsest had problems transferring the enormous data sets from place to place. “Google keeps a copy and the data is always in an open format, or in the public domain or perhaps covered by a creative commons license.” They hope the data can some day be opened to the public.

Pre-Pixel Preservation: Concept Device to Archive and Preserve the Past. Naveen Shimla. Gizmo Watch. February 27, 2007.

An interesting look at “Pre-Pixel Preservation”, meaning scanning and printing photographs and videos, or rather the “pre-digital files”.
