Last Week on My Mac: Compression models

Over the fifty years since I started using computers, one of the greatest changes has been who does the waiting. In my early days, it was me, in some cases waiting overnight for batches to be processed on a mainframe computer. By the mid 1990s I had to wait a couple of days for some of my optimisation runs to complete on the Mac in my office. Since then, as Macs have got progressively faster, they have come to do the waiting for me. Whether I’m editing an image or writing an article, my Mac keeps pace with almost everything I do.

One notable if infrequent exception is when there’s a macOS update. On a newer Apple silicon model, installation itself is far quicker than when Apple changed to using the SSV in Big Sur, but there’s always that wait for the downloaded update to be “prepared” in the background. Much of that is thought to be decompression, which minimises the size of downloads by putting the burden on the Mac to expand and organise what needs to be installed.

Data compression is widely used in modern operating systems. Take a look in Activity Monitor’s Memory view, and at the bottom right you’ll see how much compressed memory is in use, here currently over 1 GB. Compression also saves space in macOS itself, where many system files are stored in compressed formats, and it’s an integral part of many media formats, whether still or moving images, or audio. If they weren’t compressed, we’d only be able to enjoy a small fraction of the media we now consume so voraciously.

Compression is thus, understandably, one of the best-studied tasks in computing. Over decades of intensive research and practical experience, we have learned that the most powerful methods of compressing data can also take the most processing time. The effectiveness of compression is generally expressed in terms of the compression ratio, the ratio of the original file size to that of the compressed file; thus all effective compression methods should result in a compression ratio of more than 1. You’ll also come across the percentage space saving, the reduction in size compared with the original. If a compression method shrinks a 10 GB file to 5 GB, then its compression ratio is 10/5 = 2, and its space saving is (1 – 5/10) x 100 = 50%.
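
To make those two measures concrete, here’s a minimal sketch in Swift; the function names are my own, for illustration only.

```swift
// Compression ratio: original size divided by compressed size.
func compressionRatio(original: Double, compressed: Double) -> Double {
    original / compressed
}

// Percentage space saving: the reduction in size relative to the original.
func spaceSaving(original: Double, compressed: Double) -> Double {
    (1.0 - compressed / original) * 100.0
}

// The worked example above: a 10 GB file shrunk to 5 GB.
print(compressionRatio(original: 10, compressed: 5))  // 2.0
print(spaceSaving(original: 10, compressed: 5))       // 50.0
```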

One of the most efficient compression techniques, for example, is that used in nanozip 0.09a, which can achieve a compression ratio of 5.3, or a space saving of 81%, on English text taken from Wikipedia. This illustrates the trade-offs made in such efficient techniques: to achieve that, it takes almost 128 seconds to compress 100 MB, a rate of 0.783 MB/s. That makes it of little everyday use, and unsuitable for macOS updates, where it also couldn’t achieve the same high compression ratio because the data are binary rather than English text.

Optimising compression isn’t as simple as it might appear. There’s an inverse relationship between the time spent compressing data and the amount of compressed data to be written, as there is with many similar tasks. The more sophisticated the compression and the higher the compression ratio, the less data there is to be written out, although there’s still the same amount of data to be read. Using the example of a compression ratio of 2, 10 GB of input data has to be read and only 5 GB written out, so unless write speed is less than half read speed, reading the input data is likely to be more rate-limiting.
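
As a rough sketch of that arithmetic, the following estimates read and write times from assumed disk speeds; all the figures here are illustrative assumptions, not measurements.

```swift
// Assumed figures: 10 GB of input, a compression ratio of 2,
// and a disk that reads at 5 GB/s but writes at only 3 GB/s.
let inputGB = 10.0
let ratio = 2.0
let readSpeed = 5.0    // GB/s, assumed
let writeSpeed = 3.0   // GB/s, assumed

let readTime = inputGB / readSpeed              // 2.0 seconds
let writeTime = (inputGB / ratio) / writeSpeed  // about 1.7 seconds

// Writing only becomes the bottleneck when write speed falls below
// read speed divided by the compression ratio.
print(readTime >= writeTime ? "reading is rate-limiting" : "writing is rate-limiting")
```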

Compression is also representative of many other types of everyday tasks, as it consists of three sub-tasks:

read data from disk
process it (encode, decode, transcode)
write the transformed data to disk

Unless the files involved are small, this is usually performed on data streamed from storage and back to storage, rather than on a single chunk of data held in memory.
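
In code, that streaming pattern might look something like this minimal sketch, which reads fixed-size chunks, passes each through a placeholder transform standing in for the real encoder or transcoder, and writes the results back out; the paths, chunk size and transform are all assumptions for illustration.

```swift
import Foundation

// Placeholder for the real processing step (encode, decode, transcode).
func transform(_ chunk: Data) -> Data {
    return chunk    // identity transform, for illustration only
}

// Hypothetical paths; substitute real input and output files.
let inputPath = "/tmp/input.dat"
let outputPath = "/tmp/output.dat"
FileManager.default.createFile(atPath: outputPath, contents: nil)

let input = FileHandle(forReadingAtPath: inputPath)!
let output = FileHandle(forWritingAtPath: outputPath)!
let chunkSize = 1 << 20    // read 1 MiB at a time, an arbitrary choice

while true {
    let chunk = input.readData(ofLength: chunkSize)
    if chunk.isEmpty { break }          // end of the input stream
    output.write(transform(chunk))      // process, then write out
}
input.closeFile()
output.closeFile()
```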

To understand what limits the performance of any of those everyday tasks, we need to understand how each of its sub-tasks is limited, and how those limitations interact. In some cases, such as the preparation of macOS updates, the time taken appears to be determined by the processing required in the second sub-task. In the past, Apple has run all of that in a single thread, to minimise the impact on concurrent use of other apps in the foreground. In other tasks, such as encrypting files, the second sub-task should be the quickest of the three, making it most likely that disk access is limiting.

Although we can’t apply conclusions drawn from studying any specific compression method directly to other tasks, we can use compression as a model to develop tools that we can then apply more widely. Over the last week, I’ve published two articles here discussing how to do that.

In the first I asked how we can investigate whether more threads will run a task faster. We know that they do for my compression task, but that may not be true for the tasks you’re most interested in accelerating. In rare cases, an app may provide a control to set how many threads it uses, and it’s easy to use that to investigate. It’s more likely that your app has no such control, but one way around that is to run it in a virtual machine, where you can choose how many virtual cores host that VM.
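
Where it’s your own code rather than someone else’s app, a quick experiment like the following sketch can show whether adding threads speeds up a CPU-bound workload; the work here is an arbitrary stand-in, not real compression.

```swift
import Foundation

// An arbitrary CPU-bound stand-in for the real task.
func busyWork(_ iterations: Int) {
    var total = 0.0
    for i in 1...iterations { total += sin(Double(i)) }
    _ = total
}

// Run the same total workload split across different numbers of threads.
for threads in [1, 2, 4, 8] {
    let start = Date()
    DispatchQueue.concurrentPerform(iterations: threads) { _ in
        busyWork(20_000_000 / threads)
    }
    let elapsed = Date().timeIntervalSince(start)
    print("\(threads) thread(s): \(elapsed) seconds")
}
```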

Timing the performance of your app with test files of known size should give a performance rate, and that can help identify which of those sub-tasks is probably limiting overall performance. If your app transcodes video, for example, and you time how long it takes to complete all three sub-tasks on a 10 GB video clip, you can work out its transcoding rate in GB/s. If it takes 20 seconds to read that test file, transcode it, and write out the converted video, then its overall transcoding rate is 0.5 GB/s.
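
Turning such a timing into a rate is simple arithmetic, as in this sketch; the test file path is hypothetical, and runTranscode() merely stands in for whatever performs all three sub-tasks.

```swift
import Foundation

// Stand-in for the real work: read the file, transcode it, write the result.
func runTranscode(_ path: String) {
    // the app or code under test goes here
}

// Hypothetical test file; in practice this would be your 10 GB clip.
let testPath = "/tmp/testclip.mov"
let attrs = try! FileManager.default.attributesOfItem(atPath: testPath)
let bytes = (attrs[.size] as? NSNumber)?.doubleValue ?? 0

let start = Date()
runTranscode(testPath)
let elapsed = Date().timeIntervalSince(start)

// Overall rate in GB/s, using decimal gigabytes as in the text.
let rate = bytes / 1_000_000_000.0 / elapsed
print(String(format: "overall transcoding rate: %.2f GB/s", rate))
```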

The next article considers how you might work out whether running that task on high-speed storage would improve that overall performance. Although it looks unlikely that a transcoding rate of 0.5 GB/s would be altered by running it on a Thunderbolt 3 disk with its 2.3 GB/s write speed, it’s worth comparing performance on the faster internal SSD to check that, and to decide whether a faster USB4 SSD might be better.

By the time you’ve worked through those tests, you should have better insight into which of those three sub-tasks is limiting performance and how you might improve it, and you’ll be able to compare current performance, in terms of the overall transcoding rate, against any attempts at improvement. As I wrote last week, the only way to do that is with objective measurements such as that transcoding rate.

Next week I’ll continue this series by considering how you can control the type of CPU core threads run on, and some related issues about sparse bundles and the performance of Time Machine backup storage.
