If you would like to persist data from your ECS containers, e.g. when hosting databases like MySQL or MongoDB with Docker, you need to mount the database's data directory in the container to a volume that won't disappear when your container, or worse yet, the EC2 instance that hosts your containers, is restarted or scaled up or down for any reason.
Don't know how to create your own AWS ECS Cluster? Go here!
- Sadly, the EC2 provisioning process doesn't allow you to configure EFS during the initial config. After you create your cluster, follow the guide below.
If you would like to encrypt your file system at-rest, then you must have a KMS key.
If not, you may skip this step, but it is strongly recommended that you encrypt your data - no matter how unimportant you think your data is at the moment.
- Head over to IAM -> Encryption Keys
- Create key
- Provide Alias and a description
- Tag with 'Environment': 'production'
- Carefully select 'Key Administrators'
- Uncheck 'Allow key administrators to delete this key.' to prevent accidental deletions
- Key Usage Permissions
- Select the 'Task Role' that was created when configuring your AWS ECS Cluster. If you don't have one, see the Create Task Role section in the guide linked above. You'll need to update existing task definitions, and update your service with the new task definition, for the changes to take effect.
- Finish
- Launch EFS
- Create file system
- Select the VPC that your ECS cluster resides in
- Select the AZs that your container instances reside in
- Next
- Add a name
- Enable encryption (You WANT this -- see above)
- Create File System
- Back on the EFS main page, expand the EFS definition, if not already expanded
- Copy the DNS name
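If you'd rather not copy the DNS name from the console, it can also be derived from the file system ID and region, since EFS DNS names follow a fixed pattern. A minimal sketch (the file system ID and region below are placeholders):

```shell
#!/bin/bash
# EFS DNS names follow the pattern:
#   <file-system-id>.efs.<region>.amazonaws.com
# FS_ID and REGION are placeholder values; substitute your own.
FS_ID="fs-12345678"
REGION="us-east-1"
EFS_URI="${FS_ID}.efs.${REGION}.amazonaws.com"
echo "$EFS_URI"
```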
- ECS -> Cluster
- Switch to ECS Instances tab
- Actions -> View Cluster Resources
- Click on the 'Launch configuration' that is linked
- Select the correct Launch configuration on the table and hit 'Copy launch configuration'
- Switch to 'Configure Details' tab
- Expand Advanced Details
- Paste the following script into the User data field:
#!/bin/bash
# Install nfs-utils
cloud-init-per once yum_update yum update -y
cloud-init-per once install_nfs_utils yum install -y nfs-utils
# Create /efs folder
cloud-init-per once mkdir_efs mkdir /efs
EFS_URI=
# Mount /efs (double quotes, not single, so that $EFS_URI is expanded)
cloud-init-per once mount_efs echo -e "$EFS_URI:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" >> /etc/fstab
mount -a
# Set any ECS agent configuration options
echo "ECS_CLUSTER=default" >> /etc/ecs/ecs.config
- Define EFS_URI using the DNS name copied from the previous part
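You can sanity-check the fstab-generating line locally against a scratch file before baking it into user data; with double quotes $EFS_URI is expanded, whereas single quotes would write the literal string `$EFS_URI` into fstab. A quick sketch (the DNS name is a placeholder):

```shell
#!/bin/bash
# Placeholder DNS name; use your real EFS DNS name in user data.
EFS_URI="fs-12345678.efs.us-east-1.amazonaws.com"
# Scratch file standing in for /etc/fstab.
FSTAB="$(mktemp)"
echo -e "$EFS_URI:/ /efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" >> "$FSTAB"
# The entry should start with the expanded DNS name, not the literal $EFS_URI.
grep "^$EFS_URI" "$FSTAB"
```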
- If you are not using the default cluster, be sure to replace the ECS_CLUSTER=default line
- Skip to review
- Create launch configuration
- Proceed without a key pair
- Note down the name of your new configuration
- ECS -> Cluster
- Switch to ECS Instances tab
- Actions -> View Cluster Resources
- Click on the 'Auto Scaling Group' that is linked
- Select the correct Launch configuration on the table and hit Actions -> Edit
- Update the Launch Configuration to the new one you just created
- Save
- ECS -> Cluster
- Switch to ECS Instances tab
- Scale ECS instances to 0. Note: this will bring down your applications
- After all instances have been brought down, scale back up to 2 (or more)
- ECS -> Task definitions
- Create new revision
- If you have not already added it, make sure the Role here matches the one you granted usage permissions on the KMS key
- Add volume
- Name: 'efs', Source Path: '/efs/your-dir' (under the /efs mount point created by the user data script)
- Add
- Click on container name, under Storage and Logs
- Select mount point 'efs'
- Provide the internal container path, e.g. for MongoDB the default is '/data/db'
- Update
- Create
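In the task definition JSON, the steps above correspond to a `volumes` entry plus a `mountPoints` entry on the container. A minimal fragment, assuming a container named 'mongo' and a host path of '/efs/your-dir' under the mount point created by the user data script:

```json
{
  "volumes": [
    { "name": "efs", "host": { "sourcePath": "/efs/your-dir" } }
  ],
  "containerDefinitions": [
    {
      "name": "mongo",
      "mountPoints": [
        { "sourceVolume": "efs", "containerPath": "/data/db" }
      ]
    }
  ]
}
```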
- ECS -> Clusters
- Click on Service name
- Update
- Type in the new task definition name
- Update service
Your service should re-provision the existing containers and voila, you're done!
Test what you have done.
Go ahead and save some data.
Then scale your instances down to 0, scale back up again, and check whether the data is still accessible.
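One quick way to verify the mount on each container instance is to check /proc/mounts for an nfs4 entry at /efs. A small sketch (the helper name and sample data are my own); it accepts an alternate mounts file so it can be exercised without a live mount:

```shell
#!/bin/bash
# Check whether MOUNTPOINT appears as an nfs4 mount in MOUNTS_FILE
# (defaults to /proc/mounts). Prints "mounted" or "not mounted".
is_nfs4_mount() {
  mountpoint="$1"
  mounts_file="${2:-/proc/mounts}"
  # /proc/mounts fields: device mountpoint fstype options freq passno
  if awk -v mp="$mountpoint" '$2 == mp && $3 == "nfs4" { found = 1 } END { exit !found }' "$mounts_file"; then
    echo "mounted"
  else
    echo "not mounted"
  fi
}

# Exercise it against a sample mounts file instead of a live system.
sample="$(mktemp)"
printf '%s\n' "fs-12345678.efs.us-east-1.amazonaws.com:/ /efs nfs4 rw,hard 0 0" > "$sample"
is_nfs4_mount /efs "$sample"    # prints "mounted"
is_nfs4_mount /data "$sample"   # prints "not mounted"
```

On a container instance itself you'd simply run the helper with no second argument after `mount -a` has run.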