
Thursday, August 13, 2015

Permanent Digital Data Storage: Tape, solid-state, and discs

Permanent Digital Data Storage: An Overview. Barry M. Lunt, Matthew R. Linford, and Robert C. Davis. Brigham Young University. ISOM Conference. Received PDF August 2015. [From author's version; not yet available online.]
     Research shows that digital storage, whether optical, solid-state, or tape, can be permanent and could last over 100,000 years if permanent materials are used. The failure mechanisms are well documented; knowing which materials to use to eliminate them is the key to permanent digital storage.

Computer data storage has always been ephemeral because of the emphasis on density and speed. There has been little interest in developing a permanent way to store digital data. The authors, an engineer, chemist, and physicist, believe "that the optimal storage media does not need to be refreshed nor stored in special conditions, and that a store-and-forget approach (like printing books and storing them on shelves) is best because it is the simplest."

Permanent Storage Options and Approaches
  • Optical discs. The dominant failure mechanism is dye fading, which can be eliminated by using permanent materials.
    • The media they developed (M-Discs) make permanent physical and optical marks on a standard DVD or Blu-ray format disc.
    • Optical discs are a viable option for archival storage of large amounts of data.
    • This permanent format is essentially guaranteed for many decades or centuries to come.
  • Hard disk drives are not permanent. The failure mechanisms, which are fairly well known, are predominantly mechanical. A materials approach cannot solve these problems.
  • Solid-state storage. A materials approach has produced storage elements capable of lasting as long as integrated circuits; the failure rate of such circuits is measured in FIT (Failures In Time, one failure per billion device-hours), which works out to roughly 114,155 years (see the arithmetic sketch below). This is a permanent form of preservation. Their research has solved the dominant failure mechanism of early permanent programmable solid-state storage.
  • Permanent optical tape. Their materials research shows that if correct materials are used, computer tape can also be permanent, and the permanent tape would "match the density of LTO-5, allowing about 2 TB per cartridge. The price of the media should be equivalent to that of magnetic tape."
Optical discs, solid-state storage, and computer tape can all be made to store data permanently and last hundreds or thousands of years.
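
As a rough check on the "about 114,155 years" figure cited for the FIT failure-rate metric above, here is a minimal arithmetic sketch. The definition of one FIT as one failure per billion device-hours is standard; the short Python script itself is only an illustration, not from the paper.

    # Rough arithmetic behind the "about 114,155 years" figure quoted above.
    # FIT (Failures In Time) counts failures per 10^9 device-hours of operation.
    HOURS_PER_YEAR = 24 * 365          # 8,760 hours in a (non-leap) year

    fit_interval_hours = 1e9           # at 1 FIT: one expected failure per 10^9 device-hours
    years = fit_interval_hours / HOURS_PER_YEAR

    print(f"1 FIT ~= one failure every {years:,.0f} years")  # prints ~114,155 years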


Wednesday, August 12, 2015

Keeping Data For A Long Time

Keeping Data For A Long Time. Tom Coughlin. Forbes. June 29, 2014.
     Keeping information for a long time has always been a challenge. Thermodynamics doesn’t favor information lasting a long time, so making that happen requires effort and energy. Deciding how to create a long-term archive involves choosing the right storage system with the right technology under the proper environmental conditions. This can be combined with migration and replication practices to improve the odds of keeping content useful and accessible for an extended period of time. A conference looked at digital storage for long-term archiving and preservation. Some of the technologies discussed:
  • It appears conventional flash memory may not have good media archive life and should only be used for storing transitory data.
  • Hard disk drives used in active archives have problems because they wear out, and even if the power is turned off the data on a hard disk drive will eventually decay due to thermal erasure.
  • Digital magnetic tape stored under low temperature and humidity is a good candidate for long-term data retention.
  • Optical storage has also been used for long-term data retention and should last at least several decades. Facebook has a 1 PB prototype that should reduce storage costs by 50% and energy consumption by 80% compared with its hard disk storage.
  • Sony said their properly made archival-grade optical discs should have a shelf life of 50 years.
  • Hitachi Data Systems showed that the cost of keeping 5 PB of content for 75 years is less than that of frequent tape and HDD replacement.
A lot of digital data has persistent value, so long-term retention of that data is very important. It is estimated that storage for archiving and retention is currently a $3B market, growing to over $7B by 2017. "Magnetic tape and optical disks provide low cost long-term inactive storage with additional latency for data access vs. HDDs due to the time to mount the media in a drive. Thus depending upon the access requirements for an archive it may be most effective to combine two or even three technologies to get the right balance of performance and storage costs."


Thursday, July 23, 2015

First Large Scale, In Field SSD Reliability Study Done At Facebook

First Large Scale, In Field SSD Reliability Study Done At Facebook. Adam Armstrong. Storage Review. June 22, 2015.
    Carnegie Mellon University has released a study titled “A Large-Scale Study of Flash Memory Failures in the Field.” The study was conducted using Facebook’s datacenters over the course of four years and millions of operational hours. It looks at how errors manifest and aims to help others develop novel flash reliability solutions.
Conclusions drawn from the study include:
  • SSDs go through several distinct failure periods – early detection, early failure, usable life, and wearout – during their lifecycle, corresponding to the amount of data written to flash chips.
  • The effect of read disturbance errors is not a predominant source of errors in the SSDs examined.
  • Sparse data layout across an SSD’s physical address space (e.g., non-contiguously allocated data) leads to high SSD failure rates; dense data layout (e.g., contiguous data) can also negatively impact reliability under certain conditions, likely due to adversarial access patterns.
  • Higher temperatures lead to increased failure rates, but do so most noticeably for SSDs that do not employ throttling techniques.
  • The amount of data reported to be written by the system software can overstate the amount of data actually written to flash chips, due to system-level buffering and wear reduction techniques.
The study doesn’t state that one type of drive is better than another.


Tuesday, July 14, 2015

Seagate Senior Researcher: Heat Can Kill Data on Stored SSDs

Seagate Senior Researcher: Heat Can Kill Data on Stored SSDs.  Jason Mick. Daily Tech. May 13, 2015.
   A research paper by Alvin Cox, a senior researcher at Seagate, warns that those storing solid-state drives should be careful to avoid hot storage locations. Average "shelf life" in a climate-controlled environment is about 2 years, but it drops to 6 months if the temperature hits 95°F / 35°C. Typically, enterprise-grade SSDs can retain data for around 2 years without being powered on if the drive is stored at 25°C / 77°F; for every 5°C / 9°F increase, the storage time halves (a quick sketch of this rule follows below). This also applies to storage of computers and devices equipped with solid-state drives. If only a few sectors are bad, it may be possible to repair the drive, but if too many sectors are badly corrupted, the only option may be to format the device and start over.
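
A minimal sketch of that halving rule: the 2-year baseline at 25°C and the halving per 5°C come from the article, while the function name and example temperatures below are only illustrative, not from the source.

    # Retention rule of thumb from the article: about 2 years powered off at 25 C,
    # halving for every 5 C increase in storage temperature.
    def retention_years(temp_c: float, baseline_years: float = 2.0,
                        baseline_temp_c: float = 25.0) -> float:
        """Estimated powered-off data retention (years) at a given storage temperature."""
        return baseline_years * 0.5 ** ((temp_c - baseline_temp_c) / 5.0)

    for t in (25, 30, 35, 40):
        print(f"{t} C: ~{retention_years(t):.2f} years")
    # 25 C -> 2.00, 30 C -> 1.00, 35 C -> 0.50 (six months, matching the article)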

A Large-Scale Study of Flash Memory Failures in the Field

A Large-Scale Study of Flash Memory Failures in the Field. Justin Meza, et al. ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems. June 15-19, 2015.
     Servers use flash memory based solid state drives (SSDs) as a high-performance alternative to hard disk drives to store persistent data. "Unfortunately, recent increases in flash density have also brought about decreases in chip-level reliability." This can lead to data loss.

This is the first large-scale study of actual flash-based SSD reliability and it analyzes data from flash-based solid state drives at Facebook data centers for about four years and millions of operational hours in order to understand the failure properties and trends. The major observations:
  1. SSD failure rates do not increase monotonically with flash chip wear, but go through several distinct periods corresponding to how failures emerge and are subsequently detected, 
  2. the effects of read disturbance errors are not prevalent in the field, 
  3. sparse logical data layout across an SSD's physical address space can greatly affect SSD failure rate, 
  4. higher temperatures lead to higher failure rates, but techniques that throttle SSD operation appear to greatly reduce the negative reliability impact of higher temperatures, and 
  5. data written by the operating system to flash-based SSDs does not always accurately indicate the amount of wear induced on flash cells.
The findings will hopefully lead to other analyses and flash reliability solutions.