Benchmarked
OnApp vs AWS vs Azure
Performance isn’t the only priority when you’re choosing a cloud provider – but it’s usually pretty high on the list.
Most people assume that the big-name clouds will out-perform everything else, but is that really true?
Paul Morris, our Professional Services Manager, compared an OnApp cloud in production at one of our service provider customers to similarly priced instances from Amazon and Azure. This is what he discovered.
Setting up the test

Paul Morris, OnApp Professional Services Manager
I’ve wanted to do this for a while, and one of our customers, Cyber Host Pro, was good enough to give me access to their newly designed cloud infrastructure. They’re running OnApp with our integrated software-defined storage solution.
I plan to compare a virtual machine in their OnApp cloud against an instance in AWS and Azure with the same hardware specifications, at a comparable pricing tier.
Let’s see if an OnApp cloud provider can deliver a better service for the same cost.
The OnApp platform
The test OnApp cloud is one of Cyber Host Pro’s production environments, with many workloads running.
It’s an all-SSD integrated storage setup, with Dell R740 hypervisors and a 10Gbit backend storage network. The datastore is configured with 2 copies and 2 stripes, and doesn’t have any NVMe cache configured.
The VM is placed optimally to provide a local read path, and all vdisks are in sync, so we are ready to perform the tests. I’ve provisioned a VM with 4GB RAM and 2 CPU cores on KVM. The VM is running CentOS 7 x64 with SELinux disabled, and updated to the latest release.
The AWS instance
I’ve chosen a ‘c5.large’ instance inside AWS, with standard SSD storage, as this was the instance that would cost around the same price as the OnApp-hosted solution above. The instance has CentOS 7 x64 installed with all updates completed, and SELinux disabled. It too has 4GB RAM and 2 CPU cores available.
Preparation & performance testing
I will run the following on each virtual machine, then reboot to completely disable SELinux:
yum -y update
sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
yum -y install fio git gcc screen
[reboot]
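Before rebooting, it’s worth confirming that the sed edit actually landed. As a quick sanity check (my addition here, not part of the original procedure), you can exercise the same substitution against a scratch copy of the config rather than the live file:

```shell
# Dry-run of the SELinux substitution against a scratch file, so a typo in
# the sed expression is caught before the real /etc/selinux/config is touched.
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > /tmp/selinux-test.conf
sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/g' /tmp/selinux-test.conf
grep '^SELINUX=' /tmp/selinux-test.conf   # should print: SELINUX=disabled
```

After the reboot, `getenforce` should report `Disabled` on both machines before any benchmarks are run.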
I want to run a series of tests that best mimic a production environment, so I will use UnixBench to produce an overall score for how the environment performs on the underlying infrastructure. It’s a good choice because it exercises both the hardware and the default template configuration provided as standard, and its simple scoring system makes the comparison easy to review later.
I will also run random reads and writes at various block sizes using fio to test the storage layer. This is a better test than dd: in a production environment, applications access data randomly across the disks in parallel, and block sizes vary massively depending on the workloads running inside the virtual environment. I’ll take the total IOPS across all nine tests for a simple comparison at the end.
How the test was run
Run Unix Bench:
cd; mkdir unixbench; cd unixbench
git clone https://github.com/kdlucas/byte-unixbench
cd byte-unixbench/UnixBench
screen -S benchmark ./Run
Run fio stats:
# Random read/write performance test
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=0 --name=test --filename=random.fio --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 | grep IOPS; rm -f random.fio
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=0 --name=test --filename=random.fio --bs=256k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 | grep IOPS; rm -f random.fio
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=0 --name=test --filename=random.fio --bs=4m --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75 | grep IOPS; rm -f random.fio
# Random read performance test
fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4k --direct=1 --size=4G --numjobs=2 --runtime=240 --filename=random.fio --group_reporting | grep IOPS; rm -f random.fio
fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=256k --direct=1 --size=4G --numjobs=2 --runtime=240 --filename=random.fio --group_reporting | grep IOPS; rm -f random.fio
fio --name=randread --ioengine=libaio --iodepth=16 --rw=randread --bs=4m --direct=1 --size=4G --numjobs=2 --runtime=240 --filename=random.fio --group_reporting | grep IOPS; rm -f random.fio
# Random write performance test
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4k --direct=1 --size=4G --numjobs=2 --runtime=240 --filename=random.fio --group_reporting | grep IOPS; rm -f random.fio
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=256k --direct=1 --size=4G --numjobs=2 --runtime=240 --filename=random.fio --group_reporting | grep IOPS; rm -f random.fio
fio --name=randwrite --ioengine=libaio --iodepth=1 --rw=randwrite --bs=4m --direct=1 --size=4G --numjobs=2 --runtime=240 --filename=random.fio --group_reporting | grep IOPS; rm -f random.fio
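Since the final comparison is a simple sum of IOPS across all nine runs, the grep’d output can be totalled automatically. A small helper along these lines would do it (my own sketch, not part of the original procedure — it assumes fio 3.x output, where the figure appears as `IOPS=<n>` with an optional `k` suffix):

```shell
# Sum every "IOPS=<n>" figure in a file of captured fio output lines.
# Handles fio's abbreviated "k" suffix (e.g. IOPS=12.3k -> 12300).
sum_iops() {
  grep -oE 'IOPS=[0-9.]+k?' "$1" \
    | sed 's/IOPS=//' \
    | awk '{ n = $1
             if (n ~ /k$/) { sub(/k$/, "", n); n *= 1000 }
             total += n }
           END { printf "%.0f\n", total }'
}
```

Redirect each test’s `grep IOPS` line into one file per provider, then run e.g. `sum_iops aws.txt` to get the headline total for each.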
The results
I wanted to show what kind of results you can get from an OnApp cloud running our integrated storage system when it’s configured correctly… and while I was hopeful (!), I didn’t expect such a huge difference in the results. Just look at this:
I wonder how much more you’d need to spend to get similar results with the competition?
If you’re interested, I have kept a copy of the raw output from all providers: you can find it here.
Optimize your cloud performance
The OnApp cloud platform was designed from the ground up to offer a fast, flexible, all-in-one solution for service provider clouds. If you’re new to OnApp, why not chat with our cloud specialists or take a demo? Just fill out the form and we’ll be in touch.