You can push files from an EC2 instance to an S3 bucket in a few simple steps, either by using the AWS CLI or SDKs, or by mounting the bucket on the instance as an NFS volume through an Amazon S3 File Gateway.
Prerequisites:
1. You must deploy the Amazon EC2 instance where you will mount Amazon S3 as an NFS volume. Note the security group ID of the instance as it will be required for permitting access to the NFS file share [1].
2. Additionally, the operating system of your local computer determines how you connect to your Linux instance.
For example, if your local computer runs Linux or macOS, you can connect with an SSH client or EC2 Instance Connect; if it runs Windows, you can connect through OpenSSH, PuTTY, or similar tools [7].
3. You need to create the S3 bucket that you will mount as an NFS volume in the same account and Region as the instance. The bucket and objects should not be public, and you should enable server-side encryption (a CLI sketch follows this list).
These points are the prerequisites to satisfy before mounting the S3 bucket on your EC2 instance and then pushing files through it.
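If you prefer to script the bucket prerequisite rather than use the console, here is a minimal AWS CLI sketch; the bucket name is a placeholder and us-east-1 is assumed (for any other Region, create-bucket also needs a --create-bucket-configuration LocationConstraint):
```bash
# Placeholder bucket name and Region -- replace with your own values
BUCKET=my-example-bucket
REGION=us-east-1

# Create the bucket in the same account and Region as the EC2 instance
aws s3api create-bucket --bucket "$BUCKET" --region "$REGION"

# Block all public access to the bucket and its objects
aws s3api put-public-access-block \
  --bucket "$BUCKET" \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Enable default server-side encryption with Amazon S3 managed keys (SSE-S3)
aws s3api put-bucket-encryption \
  --bucket "$BUCKET" \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```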
Now follow these steps to push files from an EC2 instance to an S3 bucket:
1. Create the gateway [4]:
Open the AWS Storage Gateway console and choose the AWS Region where you want to create your gateway.
2. Create the VPC endpoints for AWS Storage Gateway [4]:
Create the VPC endpoints that allow private access to the AWS Storage Gateway service. To do this, sign in to the Amazon VPC console.
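If you want to script this step instead of using the VPC console, a minimal sketch with the AWS CLI; the VPC, subnet, and security group IDs are placeholders, and us-east-1 is assumed:
```bash
# Create an interface VPC endpoint for the AWS Storage Gateway service
# (placeholder VPC, subnet, and security group IDs)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.storagegateway \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0
```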
3. Generate the Amazon S3 File Gateway activation key used to activate the S3 File Gateway [2][4]:
Connect to the EC2 instance that is the NFS client, then send an HTTP request in the following format, where 203.0.113.100 represents your gateway's IP address (quote the URL so the shell does not interpret the & characters):
curl "http://203.0.113.100/?gatewayType=FILE_S3&activationRegion=us-east-1&vpcEndpoint=vpce-12345678e91c24a1fe9-62qntt8k.storagegateway.us-east-1.vpce.amazonaws.com&no_redirect"
You will get an activation key in return.
4. Deploy the S3 file gateway and create the NFS file share.
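Steps 3 and 4 can also be scripted once you have the activation key. The following is a hedged sketch with the AWS CLI; the key, names, ARNs, and CIDR range are placeholders, and the role passed to create-nfs-file-share must allow the gateway to read and write the bucket:
```bash
# Activate the gateway as an S3 File Gateway (placeholder activation key and name)
aws storagegateway activate-gateway \
  --activation-key ABCDE-12345-FGHIJ-67890-KLMNO \
  --gateway-name my-file-gateway \
  --gateway-timezone "GMT" \
  --gateway-region us-east-1 \
  --gateway-type FILE_S3

# Create the NFS file share backed by the S3 bucket
# (the gateway ARN is returned by activate-gateway; before creating a share the
# gateway also needs a cache disk configured, e.g. with list-local-disks and
# add-cache; client-list limits which NFS clients may mount the share)
aws storagegateway create-nfs-file-share \
  --client-token my-idempotency-token \
  --gateway-arn arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-12345678 \
  --role arn:aws:iam::111122223333:role/StorageGatewayBucketAccessRole \
  --location-arn arn:aws:s3:::my-example-bucket \
  --client-list 10.0.0.0/16
```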
5. Mount the NFS file share:
Substitute your gateway IP address and your S3 bucket name into the following commands:
i. For Linux clients, run the following command on the NFS client instance.
sudo mount -t nfs -o nolock,hard [Your gateway IP address]:/[S3 bucket name] [mount path on your client]
ii. For Windows clients, run the following command. (For a more natural Windows experience, you can also share and mount the file share using SMB instead of NFS.)
mount -o nolock -o mtype=hard [Your gateway IP address]:/[S3 bucket name] [Drive letter on your Windows client]
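Once the share is mounted, pushing a file to S3 is simply a copy into the mount path. A small usage example on a Linux client, with a placeholder gateway IP address, bucket name, mount path, and file name:
```bash
# Mount the file share (placeholder gateway IP, bucket name, and mount path)
sudo mkdir -p /mnt/s3share
sudo mount -t nfs -o nolock,hard 10.0.1.25:/my-example-bucket /mnt/s3share

# Copy a local file into the share; the gateway uploads it to the bucket as an object
cp /home/ec2-user/report.csv /mnt/s3share/
ls /mnt/s3share/
```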
6. Alternatively, you can push files without mounting the file share: create an IAM role with S3 write access (or admin access), attach it to the EC2 instance, install the AWS CLI on the instance, and use the `aws s3 cp` command to copy files from the EC2 instance to the S3 bucket [2][4]. A sketch of creating and attaching the role follows the example below.
```bash
# Install the AWS CLI (shown for Debian/Ubuntu-based instances; it comes preinstalled on Amazon Linux 2)
sudo apt-get update
sudo apt-get install awscli
# Configure the AWS CLI: the attached IAM role already supplies credentials,
# so you only need to set the default Region and output format
aws configure
# Copy a file from the EC2 instance to the S3 bucket
aws s3 cp /path/to/local/file s3://bucket-name/path/to/s3/file
```
Note that you need to replace `/path/to/local/file` with the path to the file on the EC2 instance and `s3://bucket-name/path/to/s3/file` with the S3 bucket name and the path to the file in the bucket [4].
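The IAM role mentioned in this step can also be created and attached from the command line (run from somewhere that already has IAM permissions, such as an administrator's workstation). A minimal sketch; the role, profile, and instance IDs are placeholders, and the AmazonS3FullAccess managed policy is used here only as an illustration of S3 write access:
```bash
# Create a role that EC2 instances are allowed to assume (placeholder names)
aws iam create-role \
  --role-name ec2-s3-write-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Grant S3 access (a narrower bucket-scoped policy is preferable in practice)
aws iam attach-role-policy \
  --role-name ec2-s3-write-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# Wrap the role in an instance profile and attach it to the instance
aws iam create-instance-profile --instance-profile-name ec2-s3-write-profile
aws iam add-role-to-instance-profile \
  --instance-profile-name ec2-s3-write-profile \
  --role-name ec2-s3-write-role
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=ec2-s3-write-profile
```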
Alternatively, you can use the AWS SDKs for programming languages like Python, Java, and Node.js to upload files to an S3 bucket from an EC2 instance [1].