HowTo: Setup and Benchmark Encrypted Partitions in Ubuntu

In a previous article, I talked about using shred to securely delete files. Now we'll delve into using encrypted volumes in Linux to secure our data in the first place, so that we don't need programs like shred at all. Along the way, we'll benchmark the raw performance of an encrypted volume against an unencrypted one to see just what kind of real-world compromises we're making.

To start out, we need free space on a drive that isn't partitioned, or enough patience to resize an existing partition. Just about everything here needs root privileges, since we're working with filesystems. It's easiest to start a root terminal with sudo su, then enter your password.
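If you're not sure which disks you have or how they're partitioned, fdisk will list everything it sees from that root terminal, and df shows what's currently mounted and how full it is (your device names will almost certainly differ from mine):
fdisk -l
df -h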

First, we install the tools to get the encrypted partition going:
apt-get install cryptsetup hashalot gparted

Next, we use gparted to create a 20GB partition at the end of the disk. It's a dead simple drag n' drop application similar in function to Partition Magic or other GUI partition editors… hopefully you don't need instructions. Make sure to record the name of the new partition! Everything here that says /dev/sda2 will change based on your hardware and partitioning scheme.

After that completes (which can take some time if any resizing or moving of an existing partition happens), we format the new partition as a LUKS volume and set a passphrase. Fair warning: this destroys anything already on the partition, and cryptsetup will make you confirm by typing YES in capitals before it proceeds.
cryptsetup --verbose --verify-passphrase luksFormat /dev/sda2

This next command verifies the passphrase, then creates a device called /dev/mapper/sda2 that gives us access to the encrypted volume:
cryptsetup luksOpen /dev/sda2 sda2
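If you're the paranoid type (and if you're encrypting disks, you probably are), you can verify that everything worked: luksDump prints the LUKS header details, and the new device should show up under /dev/mapper:
cryptsetup luksDump /dev/sda2
ls -l /dev/mapper/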

By now we’re knee deep in waist-high water. I’m not quite sure what that means… I just made it up. Say it out loud… rolls off the tongue. Sorry… where was I? Ah right. I’ll try to explain where we’re at right now, for my benefit as well as yours.

At this moment, we have a partition called /dev/sda2. That raw partition now has an encrypted container inside, located at /dev/mapper/sda2. The last step is to format the encrypted volume so we can actually put some files on there. The command below creates an ext3 filesystem (-j enables the journal, -m 1 reserves 1% of the space for root, and the -O flags turn on hashed directory indexes and a few other niceties). This can also be done in gparted if you want to split things up into multiple partitions; use the drive dropdown box to find the mapper.
/sbin/mkfs.ext3 -j -m 1 -O dir_index,filetype,sparse_super /dev/mapper/sda2

Next, we’ll make a directory to mount the encrypted volume and then actually mount it:
mkdir /mnt/test
mount /dev/mapper/sda2 /mnt/test

Now we can copy files into /mnt/test and every file located there will be encrypted. Sweet!
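A quick sanity check never hurts. df will confirm the volume is mounted and show how much usable space is left after filesystem overhead:
df -h /mnt/test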
To unmount the volume, use the following commands:
umount /mnt/test
cryptsetup luksClose sda2
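If you'd rather have Ubuntu prompt for the passphrase and mount the volume automatically at boot, you can wire it up with one line in /etc/crypttab and one in /etc/fstab. A minimal sketch, assuming the same device and mount point we've been using:

# /etc/crypttab: open /dev/sda2 as /dev/mapper/sda2, prompting for the passphrase
sda2 /dev/sda2 none luks

# /etc/fstab: mount the opened container
/dev/mapper/sda2 /mnt/test ext3 defaults 0 2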

I bet you’re asking the question we all are… How fast is it? Good question. The answer is a pain in the ass to be honest. This almost ended up being two separate articles because the benchmarking was not going very well… but here we go… how to benchmark hard drives in Linux with FOUR different tools:

Method:
The first plan I had was to perform two separate clean installs on an entire disk, run several benchmarks, and quote some articles on how hard it would be to crack into the encrypted disk. Those results followed the expected trend of a significant (approximately 10%) degradation in read, write, and seek times on the encrypted volume. However, those results could be tainted, because they were run in a graphical environment with lots going on in the background, so I decided to throw them out. More accurate results, tied directly to actual disk performance, can be achieved by installing a text-mode-only system and using a separate partition at the end of the disk. By using failsafe text mode, we limit the number of extra services/daemons/etc. running.

Testbed:
Processor: Intel E6400 Core2Duo 3.2 GHz
Hard Drive: Western Digital 150GB RaptorX 10,000 RPM
RAM: 4GB, no swap partition used.

Benchmarks Used:
Bonnie++ 1.03
Options: bonnie++ -s 14176 -d /mnt/test
Bonnie++ is a benchmark suite aimed at performing a number of simple tests of hard drive and file system performance. The -s 14176 option sets the test file size to 14176MB, four times the amount of memory available on our testbed, which is the recommended setting to make sure the OS isn't caching everything in RAM and skewing the results. -d /mnt/test tells the program to use /mnt/test as the location for its temporary file. Bonnie++ is a nice benchmark, but it's got a problem: the results are nearly indecipherable to someone unfamiliar with the output, and this page helped me read them.
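One gotcha: bonnie++ flat-out refuses to run as root unless you explicitly pass a user with -u, so from the root terminal we've been using, the invocation actually looks like this:
bonnie++ -s 14176 -d /mnt/test -u root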

[Bonnie++ results: bonnie.png]

PostMark
Options: set size 10000 10000000 (10KB - 10MB pseudo-randomly sized files)
set number 2000 (2000 generated files)
set transactions 2500 (2500 read/write/etc actions made on those files)
run
quit

PostMark is a benchmark originally designed to approximate the small-file workload of a mail server. We can use it to gain some additional insight into how a server would fare while churning through 10KB to 10MB sized files, though it's a benchmark that would need to be tuned to a specific application before you could draw any direct conclusions from the results.
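Typing those commands in by hand gets old after the first run. PostMark will also read them from a file named on the command line, which makes repeat runs scriptable. A sketch, assuming you save the commands (plus a set location line pointing at our encrypted mount) as pm.cfg:

set size 10000 10000000
set number 2000
set transactions 2500
set location /mnt/test
run
quit

Then launch it with:
postmark pm.cfg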

[PostMark results: pm1.png, pm2.png]

IOzone 3.279
Options used:
sudo iozone -a -R -g 10g -f /mnt/test/iozone
IOzone is useful for measuring filesystem performance. The benchmark tests file I/O for the following operations: read, write, re-read, re-write, read backwards, read strided, fread, fwrite, random read/write, and the pread/pwrite variants. The options translate to: auto mode, Excel/CSV formatted results, a maximum test file size of 10GB, created on /mnt/test as a file called iozone.
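If you'd rather have the spreadsheet written straight to disk instead of scraping it from the console, IOzone's -b flag dumps the results to a binary Excel-compatible file (the output path here is just an example):
sudo iozone -a -R -g 10g -b /root/iozone-results.xls -f /mnt/test/iozone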

IOzone outputs a TON of data, and the developers have lots of pretty graphs on their website, but there's more than enough analysis in this article already. Here are the two Excel-formatted result files for your pleasure. No graphs are included in the output by default, which is a shame, since their graphs look great. iozone.zip

Easy-Bake Tar-Gzip-Gunzip-Untar Oven Benchmark Test
For the last test, we'll use a home-grown benchmark: tar up 3.5GB of highly compressed HD videos, gzip the archive, then gunzip and untar the compressed tar.gz file back into the current directory, forcibly overwriting the existing files. This is closer to a real-world scenario than the benchmarks above, with reads, writes, re-writes and plenty of seeking, and by timing each operation we get a quick, dirty and simple comparison. Since we're working with the same data four times, do the math: at least 3.5GB traversing the subsystems of the testbed 8 times (an input and an output cycle for each action), for around 28GB of data flying about. The unencrypted run completed 42 seconds faster, a 6% lead over the encrypted filesystem.

time tar -cf archive.tar *.MTS && time gzip archive.tar && time gunzip archive.tar.gz -f && time tar xvf archive.tar --overwrite
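If you'd rather collect the four timings in one file than scroll back through the console, a tiny wrapper script does the trick. A minimal sketch, assuming the .MTS videos are in the directory you run it from:

#!/bin/bash
# bench.sh: time each stage of the archive cycle, appending results to bench.log
# (time writes to stderr, so each stage's 2>> captures it)
{ time tar -cf archive.tar *.MTS ; } 2>> bench.log
{ time gzip archive.tar ; } 2>> bench.log
{ time gunzip -f archive.tar.gz ; } 2>> bench.log
{ time tar xf archive.tar --overwrite ; } 2>> bench.log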

Not shabby at all for something I pulled out of my rear Easy Bake Oven, eh?

Here are graphs of the results:
[Easy-Bake benchmark results: snapshot2.jpg, ebo-resize.png]

Conclusion:
When attempting to benchmark a filesystem, there are a ton of choices out there. We know there should be a performance hit when running an encrypted filesystem, and we can stare at all the graphs we want, but in the end it boils down to this: expect a 5-10% degradation in speed when running an encrypted drive.

That's it for now. I've got an interview lined up with Sean Moss-Pultz, CEO of OpenMoko, later this week… Did I mention now is a good time to subscribe to my RSS feed? Let's get that ol' counter on the side a notch over 1k, shall we?