This blog contains information related to digital preservation, long term access, digital archiving, digital curation, institutional repositories, and digital or electronic records management. These are my notes on what I have read or been working on. Please note: this does not reflect the views of my employer or anyone else.

Saturday, December 31, 2016
Managing the Preservation and Accessibility of Public Records from the Past into the Digital Future
Managing the preservation and accessibility of public records from the past into the digital future. Dean Koh. Open Gov. 30 November 2016.
A post about the Public Record Office, the State Archives of Victoria. It holds many paper records but now also receives a large volume of born digital records from government, so it is a hybrid paper and digital archive. For accessibility purposes, paper records are digitised to provide online access. The Public Record Office also sets records management standards for government agencies across Victoria. "In the digital environment, there is not a lot of difference between records and information so that means we set standards in the area of information management as well." Access to records is a major focus, including equity of access in a digitally focused age.
"There’s a lot to access that isn’t necessarily ‘just digitise something’, there’s a lot of work to be done in addition to just digitising them. There’s capturing metadata about the digital images because again, if I just take photographs of a whole lot of things and send you the files, that’s not very accessible, you have to open each one and look at it in order to find the one that you want. So we have to capture metadata about each of the images in order to make them accessible so a lot of thinking and work goes into that."
Another issue around records, particularly born digital records, is the different formats used to create records in government. There are a "whole bunch of different technologies" used to create born digital records and the archives is trying to manage the formats and the records so that they "continue to remain accessible into the far future. So 50 years, a 100 years, 200 years, they still need to be accessible because those records are of enduring value to people of Victoria. So that’s a format issue and a format obsolescence issue."
Friday, December 30, 2016
How Not to Build a Digital Archive: Lessons from the Dark Side of the Force
How Not to Build a Digital Archive: Lessons from the Dark Side of the Force. David Portman. Preservica. December 21, 2016.
This post is an interesting and humorous look at Star Wars archiving: "Fans of the latest Star Wars saga Rogue One will notice that Digital Archiving forms a prominent part in the new film. This is good news for all of us in the industry, as we can use it as an example of how we are working every day to ensure the durability and security of our content. Perhaps more importantly it makes our jobs sound much more glamorous – when asked 'so what do you do' we can start with 'remember the bit in Rogue One….'"
The Empire’s choice of archiving technology is not perfect and there are flaws in their Digital Preservation policy in many areas, such as security, metadata, redundancy, access controls, off site storage, and format policy. Their approaches are "hardly the stuff of a trusted digital repository!"
Thursday, December 29, 2016
Robots.txt Files and Archiving .gov and .mil Websites
Robots.txt Files and Archiving .gov and .mil Websites. Alexis Rossi. Internet Archive Blogs. December 17, 2016.
The Internet Archive collects webpages "from over 6,000 government domains, over 200,000 hosts, and feeds from around 10,000 official federal social media accounts". Do they ignore robots.txt files? Historically, sometimes yes and sometimes no, but the robots.txt file is less useful than it once was and is becoming less so over time, particularly for web archiving efforts. Many sites no longer actively maintain their robots.txt files, or increasingly block crawlers with other technological measures. In their view, robots.txt is a convention from "a different era" that no longer serves web archiving well. The best way for webmasters to exclude their sites is to contact archive.org and specify the exclusion parameters.
"Our end-of-term crawls of .gov and .mil websites in 2008, 2012, and 2016 have ignored exclusion directives in robots.txt in order to get more complete snapshots. Other crawls done by the Internet Archive and other entities have had different policies." The archived sites are available in the beta wayback. They have had little feedback at all on their efforts. "Overall, we hope to capture government and military websites well, and hope to keep this valuable information available to users in the future."
Thursday, December 22, 2016
Securing Trustworthy Digital Repositories
Securing Trustworthy Digital Repositories. Devan Ray Donaldson, Raquel Hill, Heidi Dowding, Christian Keitel. Paper, iPres 2016. (Proceedings p. 95-101 / PDF p. 48-51).
Security is necessary for a digital repository to be trustworthy. This study looks at digital repository staff members’ perceptions of security for Trusted Digital Repositories (TDR) and explores:
- Scholarship on security in digital preservation and computer science literature
- Methodology of the sample, and data collection, analysis techniques
- Report findings; discussion of implications of the study and recommendations
Security in the paper refers to “the practice of defending information from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction”. Three security principles mentioned are confidentiality, integrity, and availability. Recent standards for TDRs show the best practices of the digital preservation community, including security as part of attaining formal “trustworthy” status for digital repositories. However, security can be hard to measure. Part of security is the threat modeling process, where "assets are identified; threats against the assets are enumerated; the likelihood and damage of threats are quantified; and mechanisms for mitigating threats are proposed". Understanding threats should be based on "historical data, not just expert judgment" to avoid unreliable data. The study discusses the Security Perception Survey, which "represents a security metric focused on the perceptions of those responsible for managing and securing computing infrastructures".
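Not from the paper, but a toy sketch of the quantification step in that threat modelling process: enumerate threats against assets, estimate likelihood and impact, and rank them. The assets and numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str
    description: str
    likelihood: float  # estimated probability per year, ideally from historical data
    impact: int        # damage on a 1-5 scale

    @property
    def exposure(self) -> float:
        # Simple likelihood x impact score used to rank mitigation work.
        return self.likelihood * self.impact

threats = [
    Threat("archival storage", "tape silo failure", likelihood=0.05, impact=5),
    Threat("ingest service", "credential compromise", likelihood=0.10, impact=4),
    Threat("access copies", "accidental deletion", likelihood=0.20, impact=2),
]

for t in sorted(threats, key=lambda t: t.exposure, reverse=True):
    print(f"{t.asset:18} {t.description:25} exposure={t.exposure:.2f}")
```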
Two standards, DIN 31644 and ISO 16363, draw upon DRAMBORA, an earlier standard, which consisted of six steps for digital repository staff members:
- identify their objectives.
- identify central activities necessary to achieve their objectives and assets.
- align and document risks to their activities and assets.
- assess, avoid, and treat risks by each risk’s probability, impact, owner, and remedy.
- determine what threats are most likely to occur and identify improvements required.
- complete a risk register of all identified risks and the results of their analysis.
Wednesday, December 21, 2016
We Are Surrounded by Metadata--But It’s Still Not Enough
We Are Surrounded by Metadata--But It’s Still Not Enough. Teresa Soleau. In Metadata Specialists Share Their Challenges, Defeats, and Triumphs. Marissa Clifford. The Iris. October 17, 2016.
Many of their digital collections end up in their Rosetta digital preservation repository. Descriptive and structural information about the resources comes from many sources, including the physical materials themselves as they are being reformatted. "Metadata abounds. Even file names are metadata, full of clues about the content of the files: for reformatted material they may contain the inventory or accession number and the physical location, like box and folder; while for born-digital material, the original file names and the names of folders and subfolders may be the only information we have at the file level."
A major challenge is that the collection descriptions must be at the aggregate level because of the volume of materials, "while the digital files must exist at the item level, or even more granularly if we have multiple files representing a single item, such as the front and back of a photograph". The question is how to provide useful access to all the digital material with so little metadata. This can be overwhelming and inefficient if the context and content are difficult to recognize and understand. And "anything that makes the material easier to use now will contribute to the long-term preservation of the digital files as well; after all, what’s the point of preserving something if you’ve lost the information about what the thing is?"
Technical information about the files themselves serves as a fingerprint that helps verify a file hasn’t changed over time, and it also helps track what has happened to the files after they enter the archive. Software preservation, such as with the Software Preservation Network, is now being recognized as an important effort, and digital preservationists are working out who should be responsible for preserving which software. There are many preservation challenges yet to be solved in the years ahead.
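A minimal sketch (mine, not the Getty's workflow) of that fixity idea: record a checksum for each file at ingest, then recompute and compare later to verify nothing has changed.

```python
import hashlib
import json
from pathlib import Path

def file_checksum(path: Path, algo: str = "sha256") -> str:
    """Compute a checksum by streaming the file in chunks."""
    h = hashlib.new(algo)
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record_fixity(files, manifest: str = "fixity.json") -> None:
    """Store the checksum of each file at ingest time."""
    data = {str(p): file_checksum(Path(p)) for p in files}
    Path(manifest).write_text(json.dumps(data, indent=2))

def audit_fixity(manifest: str = "fixity.json") -> None:
    """Recompute checksums and report any file that no longer matches."""
    recorded = json.loads(Path(manifest).read_text())
    for path, expected in recorded.items():
        actual = file_checksum(Path(path))
        print(("OK" if actual == expected else "CHANGED") + f": {path}")
```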
Tuesday, December 20, 2016
File Extensions and Digital Preservation
File Extensions and Digital Preservation. Laura Schroffel. In Metadata Specialists Share Their Challenges, Defeats, and Triumphs. Marissa Clifford. The Iris. October 17, 2016.
The post looks at metadata challenges with digital preservation. Most of the born-digital material they work with exists on outdated or quickly obsolescing media, such as floppy disks, compact discs, hard drives, and flash drives that are transferred into their Rosetta digital preservation repository, and accessible through Primo.
"File extensions are a key piece of metadata in born-digital materials that can either elucidate or complicate the digital preservation process". The extensions describe format type, provide clues to file content, and indicate a file that may need preservation work. The extension is an external label that is human readable, often referred to as external signatures. "This is in contrast to internal signatures, a byte sequence modelled by patterns in a byte stream, the values of the bytes themselves, and any positioning relative to a file."
Their born-digital files are processed on a Forensic Recovery of Evidence Device (FRED), which can acquire data from many types of media, such as Blu-ray, CD-ROM, DVD-ROM, Compact Flash, Micro Drives, Smart Media, Memory Stick, Memory Stick Pro, xD Cards, Secure Digital Media and Multimedia Cards. The workstation also runs Forensic Toolkit (FTK) software, which can process a file and indicate the file format type and often the software version. There are challenges, since file extensions are not standardized or unique: naming conflicts arise between types of software, and older Macintosh systems did not require file extensions. Also, because FRED and FTK originated in law enforcement, challenges arise when using them to work with cultural heritage objects.
Monday, December 19, 2016
Metadata Specialists Share Their Challenges, Defeats, and Triumphs
Metadata Specialists Share Their Challenges, Defeats, and Triumphs. Marissa Clifford. The Iris. October 17, 2016.
"Metadata is a common thread that unites people with resources across the web—and colleagues across the cultural heritage field. When metadata is expertly matched to digital objects, it becomes almost invisible. But of course, metadata is created by people, with great care, time commitment, and sometimes pull-your-hair-out challenge." At the Getty there are a number of people who work with metadata "to ensure access and sustainability in the (digital) world of cultural heritage—structuring, maintaining, correcting, and authoring it for many types of online resources." Some share their challenges, including:
"Metadata is a common thread that unites people with resources across the web—and colleagues across the cultural heritage field. When metadata is expertly matched to digital objects, it becomes almost invisible. But of course, metadata is created by people, with great care, time commitment, and sometimes pull-your-hair-out challenge." At the Getty there are a number of people who work with metadata "to ensure access and sustainability in the (digital) world of cultural heritage—structuring, maintaining, correcting, and authoring it for many types of online resources." Some share their challenges, including:
- Laura Schroffel on digital preservation
- Melissa Gill on GLAMs’ role in supporting researchers
- Kelly Davis on the importance of data clean up
- Ruth Cuadra on embracing Linked Open Data
- Kelsey Garrison on embracing technological change
- Teresa Soleau on providing access to digital material with metadata
- Matthew Lincoln on making meaning from uncertainty
- Jonathan Ward on bridging gaps between library staff and IT
- The metadata process had to be re-thought when they started publishing digitally, because the metadata machinery was built specifically for print books. That machinery proved mostly useless for their online publications, so they started from scratch to find the best ways of sharing book metadata to increase discoverability.
- "Despite all of the standards available, metadata remains MESSY. It is subject to changing standards, best practices, and implementations as well as local rules and requirements, catalogers’ judgement, and human error."
- Another challenge with access is creating relevancy in the digital image repository
- Changes are needed in skills and job roles to make metadata repositories truly useful.
- "One of the potential benefits of linked open data is that gradually, institutional databases will be able speak to each other. But the learning curve is quite large, especially when it comes to integrating these new concepts with traditional LIS concepts in the work environment."
Thursday, December 15, 2016
DPN and uploading to DuraCloud Spaces
DPN and uploading to DuraCloud Spaces. Chris Erickson. December 15, 2016.
For the past while we have been uploading preservation content into DuraCloud as the portal to DPN. DuraCloud can upload files by drag-and-drop, but a better way is with the DuraCloud Sync Tool (the wiki had helpful information on setting this up). The sync tool can copy files from any number of local folders to a DuraCloud Space, and can add, update, and delete files. I preferred running the GUI version in one browser window with the DuraCloud account open in another.
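For reference, a rough sketch of what an upload to a Space might look like without the Sync Tool. The host, Space name, and REST path below are assumptions modelled on DuraCloud's DuraStore-style interface, not a verified API call; in practice the Sync Tool handles all of this, and the exact API should be taken from the DuraCloud documentation.

```python
from pathlib import Path
import requests

# Assumed endpoint shape for a DuraCloud DuraStore-style REST interface.
# The host and Space name are hypothetical placeholders.
HOST = "https://example.duracloud.org"
SPACE = "dpn-staging"

def upload_file(path: Path, auth) -> None:
    """PUT one local file into the Space as a content item named after the file."""
    with path.open("rb") as f:
        resp = requests.put(f"{HOST}/durastore/{SPACE}/{path.name}", data=f, auth=auth)
    resp.raise_for_status()

def sync_folder(folder: str, auth) -> None:
    """Walk a local folder and upload every file it contains."""
    for p in sorted(Path(folder).rglob("*")):
        if p.is_file():
            upload_file(p, auth)

# sync_folder("to_dpn/historic_images", auth=("username", "password"))
```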
We have been reviewing all of our long term collections and assigning Preservation Priorities, Preservation Levels, and also the number of Preservation Copies. From all this we decided on three collections to add to DPN, and created a Space (which goes into an Amazon bucket) for each. The Space will then be processed into DPN:
- Our institutional repository, including ETDs (which are now born digital) and research information; exported from our ScholarsArchive repository.
- Historic images that have been scanned; the original content is either fragile or not available. Exported from Rosetta Digital Archive.
- University audio files; the original content was converted from media that is at risk. Some from hard drives, others exported from Rosetta Digital Archive.
We also had a very informative meeting with DPN and the two other universities in Utah that are DPN members, where Mary and Dave told us that the price per TB was now half the original cost. Also, that unused space could be carried over to the next year. This will be helpful in planning additional content to add. Instead of replicating our entire archive in DPN, we currently have a hierarchical approach, based on the number and location of copies, along with the priorities and preservation levels.
Related posts:
- Our Preservation Levels
- Digital Preservation Priorities: What to preserve?
- How many copies are needed for preservation?
- Digital Preservation Network - 2016
Wednesday, December 14, 2016
PDF/A as a preferred, sustainable format for spreadsheets?
PDF/A as a preferred, sustainable format for spreadsheets? Johan van der Knijff. johan's Blog. 9 Dec 2016.
National Archives of the Netherlands published a report on preferred file formats, with an overview of their ‘preferred’ and ‘acceptable’ formats for 9 categories. The blog post concerns the ‘spreadsheet’ category for which it lists the following ‘preferred’ and ‘acceptable’ formats:
- Preferred: ODS, CSV, PDF/A
- Acceptable: XLS, XLSX
PDF/A – PDF/A is a widely used open standard and a NEN/ISO standard (ISO 19005). PDF/A-1 and PDF/A-2 are part of the ‘act or explain’ list. Note: some (interactive) functionality will not be available after conversion to PDF/A; if this functionality is deemed essential, that is a reason for not choosing PDF/A. The post identifies several problems with the choice of PDF/A and its justification (a small illustration of the precision issue follows this list):
- Displayed precision not equal to stored precision
- Loss of precision after exporting to PDF/A
- Also loss of precision after exporting to CSV
- Use of cell formatting to display more precise data is possible but less than ideal
- Interactive content
- Reading PDF/A spreadsheets: This may be difficult without knowing the intended users, the target software, the context, or how the user intends to use the spreadsheet.
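A small illustration (mine, not from the post) of the displayed-versus-stored precision problem: the spreadsheet stores the full value, the cell displays a rounded one, and an export that captures only what is displayed, as a PDF/A rendering or a naively formatted CSV would, loses precision for good.

```python
stored = 2.7182818284590452      # value actually held in the cell
displayed = f"{stored:.2f}"      # cell formatted to show two decimals: '2.72'

# A PDF/A page (or a CSV written from the displayed text) preserves only '2.72'.
exported = float(displayed)

print("stored:        ", stored)
print("exported:      ", exported)
print("lost precision:", abs(stored - exported))   # unrecoverable after export
```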
There may be situations where PDF/A is a good, or maybe the best, choice, but choosing a preferred format should "take into account the purpose for which a spreadsheet was created, its content, its intended use and the intended (future) user(s)."
Monday, December 12, 2016
Harvesting Government History, One Web Page at a Time
Harvesting Government History, One Web Page at a Time. Jim Dwyer. New York Times. December 1, 2016.
With the arrival of any new president, large amounts of information on government websites are at risk of vanishing within days. Digital federal records, reports and research are very fragile. "No law protects much of it, no automated machine records it for history, and the National Archives and Records Administration announced in 2008 that it would not take on the job." Referring to government websites: “Large portions of dot-gov have no mandate to be taken care of. Nobody is really responsible for doing this.” The End of Term Presidential Harvest 2016 project is a volunteer, collaborative effort by a small group of university, government and nonprofit libraries to find and preserve valuable pages that are now on federal websites. The project began before the 2008 elections. Harvested content from previous End of Term Presidential Harvests is available at http://eotarchive.cdlib.org/.
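As a side note (not part of the project description), the Internet Archive's public Wayback availability API makes it easy to check whether a given government page already has a capture; the URL below is just an example.

```python
import requests

def closest_capture(url: str, timestamp: str = "20160901"):
    """Return the URL of the Wayback capture closest to timestamp, or None."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

# print(closest_capture("www.epa.gov/climatechange"))  # illustrative URL
```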
The project has two phases of harvesting:
- Comprehensive Crawl: The Internet Archive crawled the .gov domain in September 2016 and will crawl it again after the inauguration in 2017.
- Prioritized Crawl: The project team will create a list of related URLs and social media feeds.
Saturday, December 10, 2016
Error detection of JPEG files with JHOVE and Bad Peggy – so who’s the real Sherlock Holmes here?
Error detection of JPEG files with JHOVE and Bad Peggy – so who’s the real Sherlock Holmes here? Yvonne Tunnat. Yvonne Tunnat's Blog. 29 Nov 2016.
Post that describes an examination of the findings of two validation tools, JHOVE (version 1.14.6) and Bad Peggy (version 2.0), which scan image files for damage using the Java Image IO library. The goal of the test is to compare the findings from these validation tools and know what to expect for digital curation work. There were 3,070 images in the test, which included images from Google's publicly available Imagetestsuite. Of these, 1,007 files had problems.
The JHOVE JPEG module can determine 13 different error conditions; Bad Peggy can distinguish at least 30 errors. The results of each are in tables in the post. The problem images could not be opened and displayed or had missing parts, mixed up parts and colour problems. The conclusion is that the tool Bad Peggy was able to detect all of the visually corrupt images. The JHOVE JPEG module missed 7 corrupt images out of 18.
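Neither JHOVE nor Bad Peggy, but a minimal Pillow-based sketch of the same idea: attempt to parse and fully decode each JPEG and flag the ones that fail. Its error coverage will not match either tool.

```python
from pathlib import Path
from PIL import Image

def find_broken_jpegs(folder: str):
    """Return (file name, error) pairs for JPEGs that fail to parse or decode."""
    broken = []
    for path in sorted(Path(folder).glob("*.jpg")):
        try:
            with Image.open(path) as img:
                img.verify()   # structural check of the file
            with Image.open(path) as img:
                img.load()     # force a full decode of the image data
        except Exception as exc:
            broken.append((path.name, str(exc)))
    return broken

# for name, error in find_broken_jpegs("imagetestsuite"):
#     print(name, "->", error)
```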
Thursday, December 08, 2016
OAIS: a cage or a guide?
OAIS: a cage or a guide? Barbara Sierman. Digital Preservation Seeds. December 3, 2016.
Post about the OAIS standard, asking whether it is a restriction or a guide. It covers the OAIS functional model, the data model and metrics, and related standards such as the audit and certification standard. "OAIS is out there for 20 years and we cannot imagine where digital preservation would be, without this standard." It is helpful for discussing preservation by naming the related functions and metadata groups, but it lacks a link to implementation and application in daily activities. OAIS is a lot of common sense put into a standard. The audit and certification standard, ISO 16363, is meant to explain how compliance can be achieved, a more practical approach.
Many organisations are using this standard to answer the question "Am I doing it right?" People working with digital preservation want to know the approach that others are using and the issues that they have solved. The preservation community needs to "evaluate regularly whether the standards they are using are still relevant in the changing environment", and a continuous debate is required to do this. In addition, we need evidence that practical implementations that follow OAIS are the best way to do digital preservation. Proof of what worked and what did not work is needed in order to adapt standards, and the DPC OAIS community wiki has been set up to gather thoughts related to the practical implementation of OAIS and to provide practical information about the preservation standards.
Monday, December 05, 2016
Digital Preservation Network - 2016
Digital Preservation Network - 2016. Chris Erickson. December 5, 2016.
An overview of the reason for DPN. Academic institutions require that their scholarly histories, heritage and research remain part of the academic record. This record needs to continue beyond the life spans of individuals, technological systems, and organizations. The loss of academic collections that are part of these institutions could be catastrophic. These collections, which include oral history collections, born digital artworks, historic journals, theses, dissertations, media and fragile digitizations of ancient documents and antiquities are irreplaceable resources.
DPN is structured to preserve the stored content by using diverse geographic, technical, and institutional environments. The preservation process consists of the following steps (a toy sketch of the audit-and-repair idea follows the list):
- Content is deposited into the system through an Ingest Node; Ingest Nodes are preservation repositories in their own right;
- Content is replicated to at least two other Replicating Nodes and stored in different types of repository infrastructures;
- Content is checked by bit auditing and repair services to prevent change or loss;
- Changed or corrupted content is restored by DPN;
- As Nodes enter and leave DPN, preserved content is redistributed to maintain the continuity of preservation services into the far-future.
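A toy sketch (mine, not DPN's actual implementation) of the bit-audit-and-repair step in the list above: compare each replica against the checksum recorded at ingest and restore corrupted copies from an intact one. The node layout and paths are invented.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def audit_and_repair(copies, recorded: str) -> None:
    """Check each replica against the recorded checksum; repair bad copies from a good one."""
    good = [p for p in copies if sha256(p) == recorded]
    bad = [p for p in copies if p not in good]
    if not good:
        raise RuntimeError("no intact copy left -- cannot repair")
    for victim in bad:
        shutil.copy2(good[0], victim)   # restore from an intact replica
        print(f"repaired {victim} from {good[0]}")

# Invented layout: the same package replicated at three nodes.
# audit_and_repair(
#     [Path("/node_a/bag.tar"), Path("/node_b/bag.tar"), Path("/node_c/bag.tar")],
#     recorded="<sha256 recorded at ingest>",
# )
```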
Thursday, December 01, 2016
Implementing Automatic Digital Preservation for a Mass Digitization Workflow
Implementing Automatic Digital Preservation for a Mass Digitization Workflow. Henrike Berthold, Andreas Romeyke, Jörg Sachse. Short paper, iPres 2016. (Proceedings p. 54-56 / PDF p. 28-29).
This short paper describes their preservation workflow for digitized documents and the in-house mass digitization workflow, based on the Kitodo software, and the three major challenges encountered:
- validating and checking the target file format and the constraints to it,
- handling updates of content already submitted to the preservation system,
- checking the integrity of all archived data in an affordable way
To ensure robustness, only single page, uncompressed TIFF files are accepted. They use the open-source tool checkit-tiff to check files against a specified configuration. To deal with AIP updates, files can be submitted multiple times: the first time is an ingest, all transfers after that are updates. Rosetta ingest functions can add, delete, or replace a file. Rosetta can also manage multiple versions of an AIP, so older versions of digital objects remain accessible for users.
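Not checkit-tiff itself, but a minimal Pillow-based sketch of the two constraints named above: a submitted TIFF must be single-page and uncompressed (TIFF Compression tag 259 equal to 1).

```python
from PIL import Image

def acceptable_tiff(path: str) -> bool:
    """True if the file is a single-page, uncompressed TIFF, as the workflow requires."""
    with Image.open(path) as img:
        if img.format != "TIFF":
            return False
        single_page = getattr(img, "n_frames", 1) == 1
        uncompressed = img.tag_v2.get(259, 1) == 1   # tag 259 = Compression, 1 = none
        return single_page and uncompressed

# print(acceptable_tiff("scan_0001.tif"))
```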
They manage three copies of the data, which total 120 TB. An integrity check of all digital documents, including all three copies, is not feasible because of the time required to read all of the data from tape storage and check it. So, to get reliable results without checking all data in the archive, they use two different methods:
- Sample method: a 1% sample of the archival copies has its integrity checked yearly.
- Fixed bit pattern: a specified fixed bit pattern is checked quarterly.
Their current challenges are in developing new media types (digital video, audio, photographs and pdf documents), unified pre-ingest processing, and automation of processes (e.g. to perform tests of new software versions).