By Nicholas Gee

ARCSERVE UDP – DEDUPLICATION PERFORMANCE – CHOOSE YOUR DATASTORE BLOCK SIZE WISELY!

Updated: Jul 19, 2023

There is little doubt in my mind that Arcserve UDP offers the best Deduplication performance in the industry. There is plenty of marketing chatter and there are customer references detailing the fantastic reduction in backup footprint after migrating from one of the other image-based backup vendors.

The outstanding De-duplication performance is possible thanks to UDP’s utilisation of a Windows-based Recovery Point Server (RPS). Arcserve supply purpose-built UDP Appliances (the Arcserve UDP 9000 Series) but you can also build and install your own “homebrew” Appliance. It is also common to deploy Virtual Machine RPS servers, either on-premises or in the cloud.



What to consider before creating a datastore

Not all RPS Servers are of the same specification. You can hardly compare an Arcserve 9072DR Appliance (dual Xeon Silver CPUs, 20 cores, 192GB RAM, 2x 1TB SSD) with a Virtual Machine with 2 vCPUs & 16GB RAM. Therefore, it is essential to configure the Deduplicated Datastore appropriately for the specification of the RPS Server.

When creating a De-dupe datastore, you have decisions to make that you must get right the first time! Some of the settings can’t be changed later! (The sketch after the list below summarises which ones.)


  • Number of Simultaneous Backup Jobs. The default for an Arcserve Appliance is 20; the default for a self-installed RPS is 4. Any value over 16 will generate a warning about performance – you have been warned! I generally find 8-12 to be a good value for a “well-sized” RPS. Luckily, this is a value that can be changed at any time by modifying the datastore.

  • Datastore Hash is on Solid State Disk (SSD) – or not. This innocent-looking checkbox has a MASSIVE impact on Datastore performance. Storing your Hash on non-SSD means the entire Hash table will be held in RAM. Store the Hash on SSD and only 1/20th of that RAM will be consumed. Don’t try to cheat the system by pretending that SSD is used… you will only crash and burn. Luckily, this checkbox can be modified at any stage. It is also possible to re-locate the Hash as required.

  • Encryption – Do you want to Encrypt the Datastore? I will make this easy for you! YES! Always enable encryption. You need to specify an Encryption password – keep this password safe, as it cannot be changed, recovered or reset.

  • Compression – Standard or Maximum Compression. Maximum compression will reduce datastore size but will require more CPU – resulting in slower backups & restores.

  • Deduplication Block Size – The most critical factor for RPS Performance. Choose between 4KB, 8KB, 16KB, 32KB & 64KB Block Sizes. Arcserve Appliances use a 4KB Block Size, though the default for self-installed RPS servers is 16KB. 32KB and 64KB are ideal for lower-spec RPS Servers, or for situations where disk utilisation doesn’t really matter.
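Since several of these choices are one-shot, here is a minimal summary sketch in Python (the setting names are my own shorthand, not Arcserve UDP’s API) of the decisions above and which of them, per this post, can be changed after the datastore is created:

```python
# Datastore creation decisions and whether this post says they can be
# changed later. Illustrative shorthand only, not Arcserve UDP's API.
DATASTORE_SETTINGS = {
    # name: (default, changeable after creation?)
    "simultaneous_backup_jobs": ("20 (Appliance) / 4 (self-installed)", True),
    "hash_on_ssd": ("unchecked", True),       # the hash can also be re-located later
    "encryption_password": ("none", False),   # cannot be changed, recovered or reset
    "compression": ("Standard", None),        # the post doesn't say; assume fixed to be safe
    "block_size": ("16KB (self-installed) / 4KB (Appliance)", False),
}

LABELS = {True: "changeable later", False: "fixed at creation", None: "unstated"}
for name, (default, mutable) in DATASTORE_SETTINGS.items():
    print(f"{name:26} default: {default:38} {LABELS[mutable]}")
```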

Comparing Datastore Block Size Performance

Two primary performance values are directly determined by the datastore configuration: 1) the size of the datastore – i.e. how much disk space your backups and their retention will consume (this also influences how much data is transmitted over the network during replication); and 2) how long it takes to restore data from a deduplicated datastore.

This second performance characteristic – restore speed – is often overlooked when deploying a datastore. Yes, you can use a 4KB datastore on a crappy old server and see some fantastic storage optimisation. But how long will it take to recover critical data when the poop hits the fan? Put simply, a UDP Recovery Point Server has to work a lot harder to restore from a 4KB Block Size datastore than from a larger Block Size, which can mean restores take far longer than is acceptable.


The Experiment

I decided to perform an experiment to compare different datastore configurations with identical data. For this experiment, I am using an Arcserve 9240DR Appliance. It is a beast of a machine, designed for companies with 240TB of data to back up. It is the perfect machine for testing multiple datastores.

I started by replicating multiple backup jobs from a remote UDP Console – approx. 1.5TB of backups. I then replicated the data to 10 different Datastores! Each Block Size had 2 datastores – one with Standard Compression and the other with Maximum Compression. For example, I had two datastores with a 4KB Block Size, one with Standard Compression and one with Maximum Compression. I had similar datastore pairs for 8KB, 16KB, 32KB and 64KB Block Sizes. I prefixed the names of the datastores alphabetically so they would be listed in order. The top 5 datastores all use Standard Compression; the lower 5 all use Maximum Compression.



[Image: Datastore De-duplication, Compression and Overall Data Reduction Performance]

Check out the results in the image above. The table columns are:

  • Stored Data – The total amount of data that could be recovered from the datastore.

  • Deduplication – what reduction in disk space utilisation has been achieved due to Deduplication.

  • Compression – what reduction in disk space utilisation has been achieved due to Compression.

  • Overall Data Reduction – The combined effect of Deduplication & Compression.

  • Space Occupied – how much disk space the datastore is actually consuming.

Each datastore contains 1.45TB of Stored Data. I had carefully populated each datastore with identical backup data.

No shock here – 4KB Block Size datastores have the best deduplication performance, with 63% deduplication. Combining this with the compression performance of 21% (Standard Compression) and 30% (Maximum Compression) gives an Overall Data Reduction of 70% and 74% respectively.
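For clarity on where those combined figures come from: the two reductions don’t simply add. Compression acts on whatever Deduplication leaves behind, so the surviving fractions multiply. A quick sketch of the arithmetic (my own helper function, nothing Arcserve-specific):

```python
def overall_reduction(dedupe: float, compression: float) -> float:
    """Combine two sequential reductions (each in [0, 1]) into one."""
    return 1 - (1 - dedupe) * (1 - compression)

# 4KB Block Size figures from the table above:
print(f"{overall_reduction(0.63, 0.21):.1%}")  # 70.8% – the ~70% shown (inputs are rounded)
print(f"{overall_reduction(0.63, 0.30):.1%}")  # 74.1% – the ~74% shown
```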

Ok, so we knew that already, didn’t we? If we compare large chunks of data at a 4KB block level, we are going to find more matching blocks than if we look for matching 64KB blocks.

However, what is very interesting to see is how the Compression performance changes for each Block Size. In fact, using a larger Block Size seems to offer better Compression performance! That extra 10% reduction thanks to Compression does somewhat offset the lower deduplication performance.




Which block size should you use?

I am often asked, “won’t I lose deduplication if I use a larger block size?”

Comparing a 4KB Block Size to a 64KB Block Size gives a drop in Dedupe from 63% to 45%; however, Compression improves from 21% to 26% (Standard Compression) and 30% to 40% (Maximum Compression). The combination of these factors means that Overall Data Reduction is 70%/74% for a 4KB Block Size and 59%/67% for a 64KB Block Size.

So why would you ever use a Block Size of more than 4KB? The answer is Resources – what resources does your RPS server have:


A 4KB Block Size Datastore used to back up 100TB of data would require a 370GB Hash Table! That would require either 370GB of available RAM or 370GB of SSD with 20GB of available RAM. Clearly, a lot of resources are needed – and yes, we are talking about a Physical Server with plenty of power. The estimated datastore size would be 26TB.



On the other hand, that 100TB of data backed up to a 64KB Block Size Datastore would result in an estimated datastore size of 33TB and a hash table of just 37.5GB. Clearly, a much lower specification server, such as a Virtual or Cloud machine, becomes an option.
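If you want a ballpark for other data volumes, the two worked examples above can be scaled. Here is a minimal sketch that assumes the hash table grows linearly with the amount of protected data – my extrapolation from the figures quoted above, not an official Arcserve sizing formula:

```python
# GB of hash table per TB of protected data, derived from the examples above:
# 100TB -> 370GB hash at 4KB, and 100TB -> 37.5GB hash at 64KB.
HASH_GB_PER_TB = {"4KB": 3.70, "64KB": 0.375}

def estimate_resources(protected_tb: float, block_size: str, hash_on_ssd: bool = True):
    """Rough hash-table size and RAM need; linear scaling is my assumption."""
    hash_gb = protected_tb * HASH_GB_PER_TB[block_size]
    # Per the SSD checkbox discussion earlier: hash on SSD needs ~1/20th of the RAM.
    ram_gb = hash_gb / 20 if hash_on_ssd else hash_gb
    return hash_gb, ram_gb

for block_size in ("4KB", "64KB"):
    hash_gb, ram_gb = estimate_resources(100, block_size)
    print(f"{block_size}: ~{hash_gb:.0f}GB hash on SSD, ~{ram_gb:.0f}GB RAM")
# 4KB:  ~370GB hash on SSD, ~18GB RAM (the post quotes 20GB of available RAM)
# 64KB: ~38GB hash on SSD, ~2GB RAM
```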


[Image: Estimate Memory]

So, yes, you will lose De-dupe performance and consume more storage. However, the trade-off is a lower demand for server resources.


So what about Restore Performance?

Keep in mind that I am using a purpose-built Arcserve 9240DR Appliance that has power & performance to burn. However, I thought it would still be interesting to compare Restoration speed from each of the 10 datastore types.

I decided to test a restore of one particular server; the recovery point size happened to be 59.84GB – pretty much 60GB. To ensure that all restores were identical and not limited by the restore destination, I restored the data to the Appliance’s own SSD drive.

To be honest, I was expecting the restore performance graph to be the inverse of the Overall Data Reduction graph… I was slightly surprised by the result.


[Image: Data Reduction Graph – Block Size and Maximum Compression]

No surprise that the Datastore with a 4KB Block Size and Maximum Compression was the slowest to restore the 60GB, taking 14 minutes and 26 seconds. Notice, though, that Standard Compression was 4 minutes quicker to restore.

Again, no real surprise that the 64KB Block Size datastores were the quickest to restore. However, it was strange to see that the very fastest restore speed came from the 64KB Block Size combined with Maximum Compression!

I was so surprised by this fact that I repeated both restores – with similar results!

Datastore “e” with Standard Compression restored the 60GB in 7 minutes 26 seconds, but Datastore “j” with Maximum Compression restored it in just 6 minutes 36 seconds.
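Converted into throughput, those times make the gap easier to grasp. This is simple arithmetic on the figures above (treating 1GB as 1024MB):

```python
SIZE_MB = 59.84 * 1024  # the ~60GB recovery point, in MB

def restore_mb_per_s(minutes: int, seconds: int) -> float:
    """Average restore throughput for the test recovery point."""
    return SIZE_MB / (minutes * 60 + seconds)

print(f"4KB, Maximum Compression:   {restore_mb_per_s(14, 26):.0f} MB/s")  # ~71 MB/s
print(f"64KB, Standard Compression: {restore_mb_per_s(7, 26):.0f} MB/s")   # ~137 MB/s
print(f"64KB, Maximum Compression:  {restore_mb_per_s(6, 36):.0f} MB/s")   # ~155 MB/s
```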

Another curious discovery was that the 16KB and 32KB datastore restore times were almost identical when Standard Compression was used. However, Maximum Compression resulted in a slightly slower restore time for 16KB and a slightly faster restore time for 32KB Block Sizes.


The 16KB Default

When you create a new Datastore in UDP, you will notice that the default block size is 16KB. This is a happy medium between Deduplication performance and Restore (& Tape Backup) speed. Block Size can only be set when a datastore is created – so choose wisely.

When creating a datastore, you are given a graphical indication about how the datastore performance is balanced.


[Image: CPU/RAM/SSD]

Is your priority to maximise your available storage to give you the longest on-disk retention, and do you have plenty of CPU/RAM/SSD? Then 4KB or 8KB might be the right choice.

Or is Restore Speed or Backup to Tape a priority? Do you have plenty of available disk space or is your RPS Server a bit lightweight or a VM? Then perhaps 64KB or 32KB Block Size would be the right choice.

If, like the majority of installs that I am involved in, you want to strike a balance between Deduplication and Restore performance, then 16KB is always an acceptable choice.


When would you want to use a 4KB Block Size?

There is only one circumstance in which I would recommend a 4KB Block Size: when using an Arcserve-branded Appliance, such as the 9240DR that I used in my testing. Why? I hear you virtually asking. Value for money! When you invest in an Arcserve 9000 Series Appliance, you are not only getting top-quality hardware and software, you are also purchasing an “unlimited” capacity license for Arcserve UDP. “Unlimited” in quotes because you are in fact limited to what you can fit onto the Appliance – and yes, you can squeeze more onto it when you use a 4KB Block Size. You are basically getting an extra 20% of value out of the Appliance.

If it were my money, I would prefer to squeeze out that extra 20% rather than have to invest in a 2nd Appliance or buy expansion units.


I’ve already deployed a 4KB Block Size, and my backup to Tape is Too Slow!

I’m very sorry that you didn’t get a chance to read this blog before you deployed your datastore! However, there are workarounds for your issue.

You can, of course, migrate your datastore using the Replication or Jumpstart process. However, if your priority is to back up the latest recovery points to tape, then you could deploy a 2nd Datastore using a 64KB Block Size and set its retention to a minimum. Configure replication between the two datastores, then use the 2nd datastore for tape backup.


Conclusion & Summary

When I decided to put this blog together, I was not certain what the outcome would be. However, I did want to produce some evidence to back up my frequent instruction to clients to standardise on 16KB Block Sizes or larger. I have frequently encountered end-user issues caused entirely by an RPS server that simply cannot cope with the demands of a 4KB datastore.

So, in summary, I would suggest that 16KB be the SMALLEST Block Size for your RPS Datastores, even if you have a “high spec” server. If disk space is not your main concern, or if your RPS is a Virtual Machine, then use 64KB. You won’t regret it.

If, on the other hand, you do have an Arcserve 9000 Series Appliance, then using a 4KB datastore is a perfectly acceptable way to maximise your investment. If backup to Tape or long-term retention is required, I would suggest adding an external storage array to host a replica datastore (yes, you can have a secondary datastore for replication purposes) and using a larger Block Size for the external storage.
