Measuring and Improving Write Throughput Performance on GCP Using SIOS DataKeeper for Windows

Background

This post documents my findings on write performance in GCP when replicating a disk with SIOS DataKeeper. But first, some background information. A customer expressed concern that DataKeeper was adding a tremendous amount of overhead to their write performance when testing a synchronous mirror between GCP zones in the same region. Their original test was run with the bitmap file on the C drive, which was a persistent SSD. In that configuration they were only pushing about 70 MBps. They tried relocating the bitmap to an extreme persistent disk, but performance did not improve.

Moving the Bitmap to a Local SSD

I suggested that they move the bitmap to a local SSD, but they were hesitant. They believed the extreme persistent disk they were using for the bitmap had latency and throughput as good as or better than a local SSD, so they doubted it would make a difference. In addition, adding a local SSD is not a trivial task, since it can only be attached when the VM is originally provisioned.
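
For reference, attaching a local SSD at provisioning time looks something like the gcloud command below. This is a minimal sketch: the instance name, zone, and image family are placeholders, the machine type shown is the one I ultimately ended up using (more on instance selection below), and the 500 GB persistent SSD matches the data disk used in the tests that follow. The backslash line continuations assume a bash or Cloud Shell prompt.

gcloud compute instances create dk-local-ssd-test \
    --zone=us-central1-a \
    --machine-type=n2-standard-8 \
    --image-family=windows-2019 \
    --image-project=windows-cloud \
    --create-disk=size=500GB,type=pd-ssd \
    --local-ssd=interface=NVME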

Selecting the Instance Type

As I set out to complete my task, the first thing I discovered was that not every instance type supports a local SSD. The E2-Standard-8, for instance, does not. For my first test I settled on a C2-Standard-8 instance type, which is considered “compute optimized”. I attached a 500 GB persistent SSD, started running some write performance tests, and quickly discovered that I could only get the disk to write at about 140 MBps rather than its maximum of 240 MBps. The customer confirmed that they saw the same thing. It was perplexing, but we decided to move on and try a different instance type.

The second instance type we selected was an N2-Standard-8. With this instance type we were able to push the disk to its maximum throughput of 240 MBps when not replicating it. I moved the bitmap to the local SSD I had provisioned and repeated the same tests on a synchronous mirror (DataKeeper v8.8.2), with the results shown below.
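
Before the bitmap could live on the local SSD, the new device had to be brought online and formatted like any other blank disk. The diskpart sketch below shows one way to do that; the disk number and the F: drive letter are assumptions for this example, so check the "list disk" output on your own VM first, and then relocate the bitmap following the SIOS DataKeeper documentation.

diskpart
DISKPART> list disk
DISKPART> select disk 2
DISKPART> online disk noerr
DISKPART> attributes disk clear readonly
DISKPART> convert gpt
DISKPART> create partition primary
DISKPART> format fs=ntfs quick label=Bitmap
DISKPART> assign letter=F
DISKPART> exit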

The Results

Diskspd test parameters

diskspd.exe -c96G -d10 -r -w100 -t8 -o3 -b64K -Sh -L D:\data.dat
diskspd.exe -c96G -d10 -r -w100 -t8 -o3 -b8K -Sh -L D:\data.dat
diskspd.exe -c96G -d10 -r -w100 -t8 -o3 -b4K -Sh -L D:\data.dat
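
For anyone not familiar with diskspd, here is what those parameters do: -c96G creates a 96 GB test file, -d10 runs each test for 10 seconds, -r issues random I/O, -w100 makes the workload 100% writes, -t8 uses eight worker threads, -o3 keeps three outstanding I/Os per thread, -b sets the write size (64K, 8K, or 4K), -Sh disables software caching and hardware write caching, and -L captures latency statistics.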

[Graph: AvgLat (ms) vs. Write Size]

[Graph: MB/s vs. Write Size]

The Data

Write Size      MBps      MBps Overhead
64k-Mirror      240.01    0.00%
64k-NoMirror    240.02
8k-Mirror       58.87     39.18%
8k-NoMirror     96.8
4k-Mirror       29.34     21.84%
4k-NoMirror     37.54

Write Size      AvgLat (ms)   AvgLat Overhead
64k-Mirror      6.247         -0.02%
64k-NoMirror    6.248
8k-Mirror       3.183         39.21%
8k-NoMirror     1.935
4k-Mirror       3.194         21.88%
4k-NoMirror     2.495
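
For clarity, the overhead columns appear to be derived directly from the mirrored and non-mirrored numbers: throughput overhead = (NoMirror - Mirror) / NoMirror, and latency overhead = (Mirror - NoMirror) / Mirror. For example, at the 8k write size that works out to (96.8 - 58.87) / 96.8 ≈ 39.18% for throughput and (3.183 - 1.935) / 3.183 ≈ 39.21% for latency.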

Conclusions

The 64k and 4k write sizes incur overhead that could be considered “acceptable” for synchronous replication. The 8k write size incurs a more significant amount of overhead, although the mirrored average latency of 3.183 ms is still quite low.

-Dave Bermingham, Director, Customer Success

