The one with worries about the short life expectancy of solid-state disks

I recently upgraded myself to a notebook with a 128 GB SSD. Naturally, I want to keep my new egg fine and happy, and I immediately started to worry about the SSD’s lifetime.

An SSD consists of flash memory cells that survive only a finite number of erase/write cycles (100,000 for SLC, 5,000 for MLC [1]). Since flash memory cells cannot be erased individually, they are organized into larger erase blocks. The common block size for SSDs appears to be 128 KiB.

In practice, this means there is a maximum amount of data that can be written to an SSD. For example, a 128 GB MLC drive is expected to have a write capacity of 80 TB [2], a mere 640 writes for each block on the drive.
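(A quick sanity check, assuming binary units: 80 TiB ÷ 128 GiB = 640, so with perfectly even wear leveling each block could be rewritten about 640 times before the rated write capacity is used up.)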

Drive manufacturers use two tactics to cope with the problem: politics and technology.

Politics means they compare apples and oranges (SSD lifetime versus HDD mean time between failures).

Technology means they implement wear-leveling strategies to spread wear across all blocks. Basically, new data is always written to a different block than the one it logically replaces. Even old, static data needs to be moved around occasionally so that its original cells can take part in the leveling. Similar to HDDs, there may also be reserved spare blocks on the drive.

So each and every write to the SSD results in at least one flash block being rewritten (write amplification).

The question is: how do the file system’s write operations affect the disk? File system block sizes vary between 512 bytes and 16 KiB, much smaller than the SSD block size.
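(As a worst-case illustration with a naive controller: updating a single 512-byte file system block would force the SSD to erase and rewrite a whole 128 KiB flash block, and 128 KiB ÷ 512 B = 256, an amplification factor of 256.)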

However, the file system cannot simply assume that the disk is an SSD, so it cannot blindly combine multiple small writes into bigger chunks. On an HDD, that would cause immense fragmentation and limit performance significantly.

And the SSD can’t help either, because it can’t cache writes indefinitely if it wants to avoid file corruption from a power failure.

The operating system and the applications are way too high up in the stack to care about the problem. They are more concerned with consistency than with drive wear. At least newer systems are able to suppress any attempts at defragmenting files (SSD read access time is constant, so defragmentation gains nothing) and can report to the disk which blocks are unused (TRIM), so that those blocks do not need to be preserved by the wear-leveling algorithm.
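On Windows 7 you can check whether TRIM is active from an elevated command prompt with fsutil; a result of 0 means delete notifications (TRIM) are enabled:

    fsutil behavior query DisableDeleteNotify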

But here I am, with my battery-powered notebook, more RAM than I ever need, and worries about SSD wear and how many useless files are being written to disk. I don’t worry about power failure. Can I do better?

The handy solution is a trip down memory lane: RAM disks. For a short time in computer history, there was plenty of RAM and few applications that used it all, so we could afford disks in RAM to speed up games and applications.

For an HDD-based system, they provide no benefit. The operating system’s file cache is way smarter than a fixed amount of RAM reserved for a fixed set of files.

But for an SSD-based system, where the “TEMP” directories, browser caches and occasional downloads are constantly filled with small and rather useless files, a RAM disk can significantly reduce writes to the SSD.

I’m using Dataram RAMDisk 3.5.130, a freeware tool that works well under Windows 7 64-bit, including Sleep and Hibernate.

I created a 1 GB RAM disk and formatted it with NTFS. I mounted it into a directory on my system drive, so it doesn’t need its own drive letter. There I created directories for Google Chrome’s cache and IE’s Temporary Internet Files, as well as for Downloads and my TEMP directory.
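Mounting the volume into a folder can be done in Disk Management or from an elevated command prompt; here is a rough sketch. C:\RAMDisk is just my example mount point, and the volume GUID is a placeholder that you can look up by running mountvol without arguments:

    rem List all volume GUIDs to find the RAM disk volume
    mountvol
    rem Mount the RAM disk into an empty NTFS folder instead of a drive letter
    mkdir C:\RAMDisk
    mountvol C:\RAMDisk \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\
    rem Create the target directories on the RAM disk
    mkdir C:\RAMDisk\ChromeCache C:\RAMDisk\TempInternetFiles C:\RAMDisk\Downloads C:\RAMDisk\Temp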

Then I saved the binary image and compressed it. I configured RAMDisk to load this image into the RAM disk upon system start.

For Google Chrome, I needed to set up a junction for the Cache folder. I also configured Chrome to use the Downloads folder on the RAM disk.
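The junction can be created with mklink. This is only a sketch: the path assumes the default Chrome profile on Windows 7, and C:\RAMDisk\ChromeCache is my own directory name. Chrome has to be closed, and the existing Cache folder removed, before the junction can take its place:

    rem Close Chrome first, then replace the cache folder with a junction onto the RAM disk
    rmdir /S /Q "%LOCALAPPDATA%\Google\Chrome\User Data\Default\Cache"
    mklink /J "%LOCALAPPDATA%\Google\Chrome\User Data\Default\Cache" "C:\RAMDisk\ChromeCache"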

For Internet Explorer, I needed to change the Cache value under HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders.
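For reference, the same change can be scripted with reg add. The target directory is again my own choice; the value is of type REG_EXPAND_SZ, and you have to log off and on again before Explorer and IE pick it up:

    reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" ^
        /v Cache /t REG_EXPAND_SZ /d "C:\RAMDisk\TempInternetFiles" /f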

For the TEMP directory, I changed the TEMP and TMP environment variables.
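Roughly, for the per-user variables (setx only affects newly started processes, so logging off or rebooting is the safe way to apply it; C:\RAMDisk\Temp is again my own directory):

    rem Point the per-user temp variables at the RAM disk
    setx TEMP C:\RAMDisk\Temp
    setx TMP C:\RAMDisk\Temp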

I set up a Windows 7 library for Downloads and added the Downloads folder on the RAM disk as the save location.

So far, this works like a charm. (As long as I remember to copy important downloads before shutting down.)

I’m going to occasionally use Process Monitor to find further opportunities to limit unnecessary file writes. There is a lot of logging going on that may be worth looking into.

Finally, I’m making a rather bold move.

By far the most writes come from Visual Studio. I save a lot of source files, the IDE saves a lot of temporary files for IntelliSense and refactoring, and then there are builds and test files. I don’t think Toshiba has software development as one of its user scenarios.

I copied some often-referenced library projects to the RAM disk, cleared the archive flags, and cleaned and saved the image. I’m planning to set up scheduled tasks that sync my source files between the RAM disk and the SSD. For now, I just hope I don’t forget to copy my files back before shutting the system down.
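Here is a sketch of what such a sync could look like, assuming the sources live in C:\RAMDisk\Source and get copied back to C:\SourceBackup on the SSD (both paths are my own). Robocopy’s /M switch copies only files whose archive flag is set and clears it afterwards, which fits the archive-flag trick above:

    rem Copy only files changed since the last sync (archive flag set), then clear the flag
    robocopy C:\RAMDisk\Source C:\SourceBackup /S /M

    rem Run the sync every 15 minutes as a scheduled task
    schtasks /Create /SC MINUTE /MO 15 /TN "SyncRamDiskSource" ^
        /TR "robocopy C:\RAMDisk\Source C:\SourceBackup /S /M"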

I might also think about a local source control system. And I finally realized: this is exactly the kind of setup where distributed source control will shine.
