Amazon EFS Performance Tutorial

Publish date: 2024-05-06

Version 1.0.1


© 2017 Amazon Web Services, Inc. and its affiliates. All rights reserved. This work may not be reproduced or redistributed, in whole or in part, without prior written permission from Amazon Web Services, Inc. Commercial copying, lending, or selling is prohibited.

Errors or corrections? Email us at [email protected].

Tutorial Overview

This tutorial is designed to help you better understand the performance characteristics of Amazon Elastic File System (Amazon EFS) and how parallelism, I/O size, and Amazon EC2 instance types have a profound effect on file system performance.

This tutorial is divided into four sections, followed by a cleanup section.

Section 1 demonstrates that not all Amazon EC2 instance types are created equal: different instance types provide different levels of network performance when accessing an EFS file system.
Section 2 demonstrates how different I/O sizes (block sizes) and sync frequencies (how often data is persisted to disk) have a profound impact on EFS performance compared to EBS.
Section 3 demonstrates how increasing the number of threads accessing EFS significantly improves performance compared to EBS.
Section 4 compares how different file transfer tools affect performance when accessing an EFS file system.

Prerequisites

The AWS CloudFormation template below will create the compute environment you need to run the tutorial. You must have an existing Amazon EFS file system in the region where you launch the CloudFormation stack and it must have mount targets in the VPC where you launch your EC2 instances. You will need to provide the EFS file system id as a parameter value when you launch the CloudFormation stack.
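
If you want to confirm that the file system has mount targets in the target VPC before launching the stack, one option is the AWS CLI. The following is a minimal sketch, assuming the AWS CLI is configured for the same region and using fs-12345678 as a placeholder for your file system ID; each mount target should report a LifeCycleState of available.

aws efs describe-mount-targets --file-system-id fs-12345678 \
  --query "MountTargets[].[MountTargetId,SubnetId,LifeCycleState]" --output table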

The Environment

The AWS CloudFormation template will launch three EC2 instances, each in their own Auto Scaling group. Please use the recommended default instance types for each Auto Scaling group. The file system, whose ID you entered as a CloudFormation parameter, will be automatically mounted to each EC2 instance. These instances will also have a 20 GB gp2 EBS data volume mounted, and 5 GB of test data will be generated on that volume (a quick verification sketch follows the tool list below). The following open-source applications will also be installed on each instance.

nload - a console application that monitors network traffic and bandwidth usage in real time
smallfile - https://github.com/bengland2/smallfile - used to generate test data; Developer: Ben England
GNU Parallel - https://www.gnu.org/software/parallel/ - used to parallelize single-threaded commands; O. Tange (2011): GNU Parallel - The Command-Line Power Tool, ;login: The USENIX Magazine, February 2011:42-47
Mutil mcp - https://github.com/pkolano/mutil - a multi-threaded drop-in replacement for cp; Author: Paul Kolano (NASA)
fpart - https://github.com/martymac/fpart - sorts file trees and packs them into partitions; Author: Ganaël Laplanche
fpsync - wraps fpart and rsync together as a multi-threaded transfer utility; included in the tools/ directory of fpart
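
Once the instances from the CloudFormation stack below are running, a quick way to verify this environment on any of them is a sketch like the following; it assumes the /efs and /ebs mount points used throughout this tutorial and that the tools above are on the default PATH.

df -h /efs /ebs
du -sh /ebs/tutorial/data-1m/
which nload parallel mcp fpart fpsync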

NOTICE!! Amazon Web Services does NOT endorse specific third-party applications. These software packages are used for demonstration purposes only. Follow all express or implied license agreements associated with these third-party software products.

WARNING!! This tutorial environment will exceed your free-usage tier. You will incur charges as a result of launching this CloudFormation stack and executing the scripts included in this tutorial. This tutorial takes approximately 1 hour to complete, at a cost of approximately $0.83. Delete all files on the EFS file system that were created during this tutorial and delete the CloudFormation stack so you don’t continue to incur additional compute and storage charges.

Launch the AWS CloudFormation Stack

Click the cloudformation-launch-stack link below to create the AWS CloudFormation stack in your account and desired AWS region. This region must contain an existing Amazon EFS file system, which you will use with this tutorial.

After launching the AWS CloudFormation stack above, you should see three Amazon EC2 instances running in your VPC. Each instance's Name tag will change from “EFS Performance Tutorial - Launching…” to “EFS Performance Tutorial - Ready”. Wait for the Name tag of each instance to read “EFS Performance Tutorial - Ready” before continuing.
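
If you prefer the command line to the console, a sketch like the following (assuming the AWS CLI is configured for the same region) lists the tutorial instances that have reached the Ready state.

aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=EFS Performance Tutorial - Ready" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[InstanceId,InstanceType,PublicIpAddress]" \
  --output table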

Section 1 - Compare the network performance of different EC2 instance types accessing EFS

This section demonstrates that not all Amazon EC2 instance types are created equal: different instance types provide different levels of network performance when accessing EFS.

1.1 SSH into all three Amazon EC2 instances
1.2 Use dd to write 20 GB of data to EFS from each instance

Run this command against all three instances to create a 20 GB file on EFS while monitoring network traffic and throughput in real time.

time dd if=/dev/zero of=/efs/tutorial/dd/20G-dd-$(date +%Y%m%d%H%M%S.%3N).img bs=1M count=20480 conv=fsync & nload -u M 
1.3 Close all SSH sessions
Results

Different EC2 instance types have different network performance characteristics, so each can drive different levels of throughput to EFS. While the t2.micro instance initially appears to have better network performance than the m4.large instance, its high network throughput is short-lived as a result of the burst characteristics of t2 instances.

Step | EC2 Instance Type | Data Size | Duration | Burst Throughput | Baseline Throughput | Average Throughput
1.2 | t2.micro | 20 GB | 720 seconds | 120 MB/s | 7 MB/s | 30 MB/s
1.2 | m4.large | 20 GB | 384 seconds | - | - | 56 MB/s
1.2 | c4.2xlarge | 20 GB | 143 seconds | - | - | 150 MB/s
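
The Average Throughput column is simply the data size divided by the duration. As a quick check of the t2.micro run, for example:

# 20 GB (20,480 MB) written in 720 seconds is roughly 28 MB/s on average
echo "scale=1; 20480 / 720" | bc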

Section 2 - Demonstrate how different I/O sizes and sync frequencies affect throughput to EFS

This section compares how different I/O sizes (block sizes) and sync frequencies (how often data is persisted to disk) affect performance on EBS and EFS.

2.1 SSH into the c4.2xlarge EC2 instance
2.2 Write to EBS using 1 MB block size and sync once after each file

Run this command against the c4.2xlarge instance. It uses dd to create a 2 GB file on EBS with a 1 MB block size, issuing a single sync at the end to ensure everything is written to disk.

time dd if=/dev/zero of=/ebs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N).img bs=1M count=2048 status=progress conv=fsync 

Record run time.

2.3 Write to EFS using 1 MB block size and sync once after each file

Run this command against the c4.2xlarge instance. It uses dd to create a 2 GB file on EFS with a 1 MB block size, issuing a single sync at the end to ensure everything is written to disk.

time dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N).img bs=1M count=2048 status=progress conv=fsync 

Record run time.

2.4 Write to EBS using 16 MB block size and sync once after each file

Run this command against the c4.2xlarge instance. It uses dd to create a 2 GB file on EBS with a 16 MB block size, issuing a single sync at the end to ensure everything is written to disk.

time dd if=/dev/zero of=/ebs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N).img bs=16M count=128 status=progress conv=fsync 

Record run time.

2.5 Write to EFS using 16 MB block size and sync once after each file

Run this command against the c4.2xlarge instance. It uses dd to create a 2 GB file on EFS with a 16 MB block size, issuing a single sync at the end to ensure everything is written to disk.

time dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N).img bs=16M count=128 status=progress conv=fsync 

Record run time.

2.6 Write to EBS using 1 MB block size and sync after each block

Run this command against the c4.2xlarge instance. It uses dd to create a 2 GB file on EBS with a 1 MB block size, issuing a sync after each block to ensure each block is written to disk.

time dd if=/dev/zero of=/ebs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N).img bs=1M count=2048 status=progress oflag=sync 

Record run time.

2.7 Write to EFS using 1 MB block size and sync after each block

Run this command against the c4.2xlarge instance. It uses dd to create a 2 GB file on EFS with a 1 MB block size, issuing a sync after each block to ensure each block is written to disk.

time dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N).img bs=1M count=2048 status=progress oflag=sync 

Record run time.

2.8 Write to EBS using 16 MB block size and sync after each block

Run this command against the c4.2xlarge instance. It uses dd to create a 2 GB file on EBS with a 16 MB block size, issuing a sync after each block to ensure each block is written to disk.

time dd if=/dev/zero of=/ebs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N).img bs=16M count=128 status=progress oflag=sync 

Record run time.

2.9 Write to EFS using 16 MB block size and sync after each block

Run this command against the c4.2xlarge instance. It uses dd to create a 2 GB file on EFS with a 16 MB block size, issuing a sync after each block to ensure each block is written to disk.

time dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N).img bs=16M count=128 status=progress oflag=sync 

Record run time.

Results

Sync frequency has a dramatic impact on throughput to EFS. When data is synced once per file, EFS outperforms EBS for both the 1 MB and 16 MB block sizes. When data is synced after every block, throughput to EFS drops significantly, and larger block sizes help amortize the per-operation overhead: the 16 MB per-block run achieves roughly four times the throughput of the 1 MB per-block run. Throughput to EBS, by contrast, is largely unaffected by block size and sync frequency in these tests.

Step | EC2 Instance Type | Operation | Data Size | Block Size | Sync | Storage | Duration | Throughput
2.2 | c4.2xlarge | Create | 2 GB | 1 MB | After each file | EBS | 17.0 seconds | 126 MB/s
2.3 | c4.2xlarge | Create | 2 GB | 1 MB | After each file | EFS | 10.7 seconds | 201 MB/s
2.4 | c4.2xlarge | Create | 2 GB | 16 MB | After each file | EBS | 17.0 seconds | 126 MB/s
2.5 | c4.2xlarge | Create | 2 GB | 16 MB | After each file | EFS | 10.6 seconds | 202 MB/s
2.6 | c4.2xlarge | Create | 2 GB | 1 MB | After each block | EBS | 17.3 seconds | 124 MB/s
2.7 | c4.2xlarge | Create | 2 GB | 1 MB | After each block | EFS | 85.5 seconds | 25 MB/s
2.8 | c4.2xlarge | Create | 2 GB | 16 MB | After each block | EBS | 16.3 seconds | 132 MB/s
2.9 | c4.2xlarge | Create | 2 GB | 16 MB | After each block | EFS | 23.0 seconds | 93 MB/s
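
If you want to repeat steps 2.2 through 2.9 in a single pass, a minimal sketch along the following lines loops over both storage targets, both block sizes, and both sync modes. It assumes the same /ebs and /efs tutorial paths used above and is illustrative rather than part of the tutorial's own scripts.

for target in ebs efs; do
  for bs in 1M 16M; do
    # keep the total write at 2 GB: 2048 x 1 MB blocks or 128 x 16 MB blocks
    [ "$bs" = "1M" ] && count=2048 || count=128
    for syncflag in conv=fsync oflag=sync; do
      echo "== storage=$target block=$bs sync=$syncflag =="
      time dd if=/dev/zero of=/${target}/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N).img bs=$bs count=$count status=progress $syncflag
    done
  done
done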

Section 3 - Demonstrate how multi-threaded access improves throughput and IOPS

This section demonstrates how increasing the number of threads accessing EFS significantly improves performance compared to EBS.

3.1 SSH into the c4.2xlarge EC2 instance
3.2 Write to EBS using 4 threads and sync after each block

Run this command against the c4.2xlarge instance. It uses dd to write a total of 2 GB to EBS across 4 threads with a 1 MB block size, issuing a sync after each block to ensure everything is written to disk.

time seq 0 3 | parallel --will-cite -j 4 'dd if=/dev/zero of=/ebs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N)-{}.img bs=1M count=512 oflag=sync' 

Record run time.

3.3 Write to EFS using 4 threads and sync after each block

Run this command against the c4.2xlarge instance. It uses dd to write a total of 2 GB to EFS across 4 threads with a 1 MB block size, issuing a sync after each block to ensure everything is written to disk.

time seq 0 3 | parallel --will-cite -j 4 'dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N)-{}.img bs=1M count=512 oflag=sync' 

Record run time.

3.4 Write to EBS using 16 threads and sync after each block

Run this command against the c4.2xlarge instance. It uses dd to write a total of 2 GB to EBS across 16 threads with a 1 MB block size, issuing a sync after each block to ensure everything is written to disk.

time seq 0 15 | parallel --will-cite -j 16 'dd if=/dev/zero of=/ebs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N)-{}.img bs=1M count=128 oflag=sync' 

Record run time.

3.5 Write to EFS using 16 threads and sync after each block

Run this command against the c4.2xlarge instance. It uses dd to write a total of 2 GB to EFS across 16 threads with a 1 MB block size, issuing a sync after each block to ensure everything is written to disk.

time seq 0 15 | parallel --will-cite -j 16 'dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-$(date +%Y%m%d%H%M%S.%3N)-{}.img bs=1M count=128 oflag=sync' 

Record run time.

Results

The distributed data storage design of EFS means that multi-threaded applications can drive substantial levels of aggregate throughput and IOPS. If you parallelize your writes to EFS across more threads, you can increase the overall throughput and IOPS. In these tests, throughput to EBS stayed roughly flat as the thread count increased, while throughput to EFS grew from 99 MB/s with 4 threads to 271 MB/s with 16 threads.

Step | Operation | Data Size | Block Size | Threads | Sync | Storage | Duration | Throughput
3.2 | Create | 2 GB | 1 MB | 4 | After each block | EBS | 16.6 seconds | 131 MB/s
3.3 | Create | 2 GB | 1 MB | 4 | After each block | EFS | 21.7 seconds | 99 MB/s
3.4 | Create | 2 GB | 1 MB | 16 | After each block | EBS | 16.4 seconds | 131 MB/s
3.5 | Create | 2 GB | 1 MB | 16 | After each block | EFS | 7.9 seconds | 271 MB/s
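
To see where throughput stops scaling on your own instance, a sketch like the following sweeps several thread counts against EFS while keeping the total write at roughly 2 GB. It reuses the same parallel and dd invocation as steps 3.2 through 3.5 and is illustrative only.

for threads in 1 4 8 16 32; do
  count=$(( 2048 / threads ))   # ~2 GB in total, split across the threads
  echo "== $threads threads, $count x 1 MB blocks per thread =="
  time seq 0 $(( threads - 1 )) | parallel --will-cite -j $threads "dd if=/dev/zero of=/efs/tutorial/dd/2G-dd-\$(date +%Y%m%d%H%M%S.%3N)-{}.img bs=1M count=$count oflag=sync"
done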

Section 4 - Compare different file transfer tools

This section compares how different file transfer tools affect performance when accessing an EFS file system.

4.1 SSH into the c4.2xlarge EC2 instance
4.2 Review the data to transfer

Run these commands against the c4.2xlarge instance to view the total size and count of the files to be transferred.

du -csh /ebs/tutorial/data-1m/
find /ebs/tutorial/data-1m/. -type f | wc -l
4.3 Set the $instanceid variable

Run this command against the c4.2xlarge instance to set the $instanceid variable, which will be used in the subsequent steps.

instanceid=$(curl -s http://169.254.169.254/latest/meta-data/instance-id) 
4.4 Transfer files from EBS to EFS using rsync

Run these commands against the c4.2xlarge instance to drop caches and then transfer 5,000 files (~1 MB each) totaling 5 GB from EBS to EFS using rsync.

sudo su
sync && echo 3 > /proc/sys/vm/drop_caches
exit
time rsync -r /ebs/tutorial/data-1m/ /efs/tutorial/rsync/${instanceid} & nload -u M
4.5 Transfer files from EBS to EFS using cp

Run these commands against the c4.2xlarge instance to drop caches and then transfer 5,000 files (~1 MB each) totaling 5 GB from EBS to EFS using cp.

sudo su
sync && echo 3 > /proc/sys/vm/drop_caches
exit
time cp -r /ebs/tutorial/data-1m/* /efs/tutorial/cp/${instanceid} & nload -u M
4.6 Set the $threads variable

Run this command against the c4.2xlarge instance to set the $threads variable to 4 threads per vCPU. This variable will be used in the subsequent steps.

threads=$(($(nproc --all) * 4)) 
4.7 Transfer files from EBS to EFS using fpsync

Run these commands against the c4.2xlarge instance to drop caches and then transfer 5,000 files (~1 MB each) totaling 5 GB from EBS to EFS using fpsync.

sudo su
sync && echo 3 > /proc/sys/vm/drop_caches
exit
time /usr/local/bin/fpsync -n ${threads} -v /ebs/tutorial/data-1m/ /efs/tutorial/fpsync/${instanceid} & nload -u M
4.8 Transfer files from EBS to EFS using mcp

Run these commands against the c4.2xlarge instance to drop caches and then transfer 5,000 files (~1 MB each) totaling 5 GB from EBS to EFS using mcp.

sudo su
sync && echo 3 > /proc/sys/vm/drop_caches
exit
time mcp -r --threads=${threads} /ebs/tutorial/data-1m/* /efs/tutorial/mcp/${instanceid} & nload -u M
4.9 Transfer files from EBS to EFS using cp + GNU Parallel

Run these commands against the c4.2xlarge instance to drop caches and then transfer 5,000 files (~1 MB each) totaling 5 GB from EBS to EFS using cp + GNU Parallel.

sudo su
sync && echo 3 > /proc/sys/vm/drop_caches
exit
time find /ebs/tutorial/data-1m/. -type f | parallel --will-cite -j ${threads} cp {} /efs/tutorial/parallelcp & nload -u M
4.10 Transfer files from EBS to EFS using fpart + cpio + GNU Parallel

Run these commands against the c4.2xlarge instance to drop caches and then transfer 5,000 files (~1 MB each) totaling 5 GB from EBS to EFS using fpart + cpio + GNU Parallel.

sudo su
sync && echo 3 > /proc/sys/vm/drop_caches
exit
time /usr/local/bin/fpart -Z -n 1 -o /home/ec2-user/fpart-files-to-transfer /ebs/tutorial/data-1m
head /home/ec2-user/fpart-files-to-transfer.0
time parallel --will-cite -j ${threads} --pipepart --round-robin --block 1M -a /home/ec2-user/fpart-files-to-transfer.0 'sudo cpio -pdm {} /efs/tutorial/parallelcpio/${instanceid}/' & nload -u M
Results

Not all file transfer utilities are created equal. Amazon EFS file systems are distributed across an unconstrained number of storage servers, and this distributed data storage design means that multi-threaded tools like fpsync, mcp, and cp with GNU Parallel can drive substantially higher throughput and IOPS to EFS than single-threaded tools like rsync and cp.

Step | File Transfer Tool | File Count | File Size | Total Size | Threads | Duration | Throughput
4.4 | rsync | 5,000 | 1 MB | 5 GB | 1 | 435 seconds | 11.7 MB/s
4.5 | cp | 5,000 | 1 MB | 5 GB | 1 | 329 seconds | 15.6 MB/s
4.7 | fpsync | 5,000 | 1 MB | 5 GB | 32 | 210 seconds | 24.4 MB/s
4.8 | mcp | 5,000 | 1 MB | 5 GB | 32 | 87 seconds | 58.9 MB/s
4.9 | cp + GNU Parallel | 5,000 | 1 MB | 5 GB | 32 | 73 seconds | 70.1 MB/s
4.10 | fpart + cpio + GNU Parallel | 5,000 | 1 MB | 5 GB | 32 | 55 seconds | 93 MB/s
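
After running the transfers, you can sanity-check that each tool copied the full data set by comparing file counts on the destination side. The paths below assume the same destination directories used in steps 4.4 through 4.10; each should report 5000 files.

find /efs/tutorial/rsync/${instanceid} -type f | wc -l
find /efs/tutorial/cp/${instanceid} -type f | wc -l
find /efs/tutorial/fpsync/${instanceid} -type f | wc -l
find /efs/tutorial/mcp/${instanceid} -type f | wc -l
find /efs/tutorial/parallelcp -type f | wc -l
find /efs/tutorial/parallelcpio/${instanceid} -type f | wc -l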

Section 5 - Cleanup

Delete all files on the EFS file system that were created during this tutorial and delete the CloudFormation stack so you don’t continue to incur additional charges for these resources.

5.1 Delete all files on the EFS file system created during the tutorial

Run this command on the c4.2xlarge instance to delete the /efs/tutorial data.

sudo rm /efs/tutorial/ -r 
5.2 Delete the AWS CloudFormation stack you launched during the tutorial

Conclusion

The distributed data storage design of Amazon EFS enables high levels of availability, durability, and scalability. This distributed architecture results in a small latency overhead for each file operation. Due to this per-operation latency, overall throughput generally increases as the average I/O size increases, because the overhead is amortized over a larger amount of data. Amazon EFS supports highly parallelized workloads (for example, using concurrent operations from multiple threads and multiple Amazon EC2 instances), which enables high levels of aggregate throughput and operations per second.
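
To make the amortization effect concrete, the following back-of-the-envelope sketch uses purely hypothetical numbers (a 3 ms per-operation latency and a 200 MB/s wire rate, neither of which is a published EFS figure) to show why effective throughput rises with I/O size.

# effective throughput ~= io_size / (per_op_latency + io_size / wire_rate); hypothetical numbers only
for size_mb in 1 16; do
  awk -v s=$size_mb 'BEGIN { printf "%2d MB I/O size -> ~%.0f MB/s effective\n", s, s / (0.003 + s / 200) }'
done

With these assumed numbers, a 1 MB operation lands around 125 MB/s while a 16 MB operation lands around 193 MB/s, mirroring the pattern observed in Section 2.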

For feedback, suggestions, or corrections, please email me at [email protected].
