# SAP HANA on AWS Workshop

**Wyndham Grand Manama, Bahrain, August 1st, 2019**

Welcome to the SAP HANA on AWS workshop! Below you'll find our lab guide with step-by-step instructions on how to deploy SAP HANA on AWS and then carry out some key operational tasks. Your instructors will take you through this guide and will also be on hand for any questions as you work through it.

This page can be found at: **http://bit.ly/sapaws-bahrain**

Go here first to log into your temporary AWS account: https://dashboard.eventengine.run

Once you're logged into your AWS account, resume the steps below in the guide.

## Introduction

This lab is intended for SAP Basis admins and/or infrastructure teams to learn how to deploy and operate SAP HANA workloads on AWS. You will:

- deploy SAP HANA on AWS using our AWS SAP HANA Quick Start
- schedule backups for the HANA database locally

**Objectives**

After completing this lab, you will be able to:

- Deploy SAP HANA on AWS using our AWS SAP HANA Quick Start
- Set up the RDP, Bastion and HANA servers in an automated way
- Access the HANA server in AWS using HANA Studio and at OS level
- Configure the AWS CLI
- Resize the HANA database server
- Configure SSM and HANA snapshot backups
- Configure file backups for HANA and copy them to S3
- Enable S3 replication and copy HANA database backups to a DR region
- Set up AWS SSM and CloudWatch Events to run HANA backups remotely on a defined schedule
- Use EFS as an optional backup target, including the AWS Backup service
- Restore the SAP HANA database in a DR location

## Prerequisites

### Browser

You need access to a computer with Wi-Fi running Microsoft Windows, Mac OS X, or Linux (Ubuntu, SuSE, or Red Hat) and one of the following internet browsers:

|Browser |Versions |
|--- |--- |
| Google Chrome | Latest three versions |
| Mozilla Firefox | Latest three versions |
| Microsoft Edge | Latest three versions |
| Apple Safari for macOS | Latest two versions |

### Network

Your laptop and network will need to allow access over HTTP(S) and RDP at a minimum, and preferably also SSH.

### EC2 Keypair

When you first log into your new AWS account, you will need to create an EC2 key pair. This is needed to authenticate when connecting to an EC2 instance via SSH.

1. Create the key pair in the AWS console. Navigate to Services --> EC2 and select **Key Pairs** under Network & Security
   - Click **Create Key Pair** and enter a name
   - Download the pem file from the AWS console
   - We will need it for generating the RDP server password and for logging on to the HANA system at OS level
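If you prefer the command line, the key pair can also be created with the AWS CLI once it is installed and configured (you will set up the AWS CLI later on the RDP host). A minimal sketch, where the key name `sap-workshop-key` is just an example:

```bash
# Create a key pair and save the private key material locally (example name: sap-workshop-key)
aws ec2 create-key-pair \
    --key-name sap-workshop-key \
    --query 'KeyMaterial' \
    --output text > sap-workshop-key.pem

# Restrict the file permissions so SSH clients will accept the key
chmod 400 sap-workshop-key.pem
```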
## Deploy the SAP HANA QuickStart

AWS Quick Start provides a consistent, repeatable and AWS/SAP certified way to deploy an SAP HANA instance in about 35 minutes.

1. Open the [SAP HANA Quick Start](https://aws.amazon.com/quickstart/architecture/sap-hana/) in a browser window while logged into your AWS account
2. Select **View deployment guide**
3. Select **Single-AZ**, then **Launch for a new VPC** if you want to create a new VPC or **Launch for an existing VPC** if you want to leverage an existing VPC. In our case, we will deploy into a new VPC.
4. An AWS CloudFormation screen will open in a new window. Click **Next** at the bottom of the CloudFormation screen.
5. Maintain the following fields:
   - Choose the Availability Zone for subnet creation
   - Confirm that the operating system selected is **SuSeLinux12SP3**
   - Select the instance type **r5.2xlarge**
   - Choose the key pair created during the preparation steps from the drop-down
   - Enter **Aws12345** as your SAP HANA password
   - Leave the text field for the S3 bucket where your SAP HANA installation media is located blank. You will instead manually install HANA Express.
   - Set installation of SAP HANA Software to **No** (because we will manually install HANA Express)
   - Set RDP Instance installation to **No** (because we will manually create an RDP instance using a pre-provided AMI)

   > IMPORTANT: there are many parameters you can optionally set or change, but the above _have to be set_ or the CloudFormation deployment will fail.
6. Click **Next** at the bottom once the above-mentioned fields are maintained
7. Click **Next** on the Options page (no changes are needed)
8. On the Review page, confirm the acknowledgement by selecting the check boxes and click **Create**

   > IMPORTANT: if you don't select the check boxes, CloudFormation will start but the deployment will fail.
9. Refresh the CloudFormation console to watch as the SAP HANA Quick Start begins building your environment.
10. When the Quick Start has deployed the BaseNetwork stack, you can optionally continue with installing the RDP host without waiting for the Quick Start to finish (note that if the Quick Start fails after this point you may need to delete any additional work you do).
11. The HANA Quick Start deployment is finished when all three CloudFormation stacks show "CREATE_COMPLETE".

## Install RDP host

1. Go to the EC2 console
2. Create a new EC2 instance by using a community AMI that we have prepared beforehand: `ami-02fd0bf5048e84911`
3. Deploy it in the public subnet of the newly created VPC

   > You should have 2 VPCs in your account in us-east-1: the default VPC which is automatically created for that region and the VPC created by the SAP HANA Quick Start
4. Connect to the RDP host using Administrator and password 'Zecq$GfL54-'
5. On your RDP host, follow the instructions to install the AWS CLI on Windows: https://docs.aws.amazon.com/cli/latest/userguide/install-windows.html
6. Be sure to configure the AWS CLI with a default region of `eu-west-1` (Dublin) as that is the region we'll use today.

## Download & Install HANA Express

> NOTE: all the steps below should be done in the RDP instance

1. Log into the RDP instance
2. Go to the [download page for SAP HANA Express](https://www.sap.com/cmp/td/sap-hana-express-edition.html) while in the RDP instance
3. Register on the website
4. Download the Windows download manager (**Windows DM**)
5. Open HXEDownloadManager_win.exe
6. Select platform: **Linux/x86-64**
7. Select image: **Binary Installer**
8. Select the first two download options ("Getting Started with SAP HANA" and "Server only installer")
9. While waiting for the download to finish, configure WinSCP (pre-installed) to connect to your HANA database server.
10. Once the download is finished, upload the hxe.tgz file to the HANA database server with WinSCP (a command-line alternative is sketched after this list)
11. Once complete, log into the HANA database server via SSH.
12. As root, copy hxe.tgz to /hana/hxe.tgz
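If an OpenSSH client is available on the RDP host, the upload and logon in steps 10-11 can also be done from a command prompt instead of WinSCP. This is only a sketch; the key file name and the HANA host's private IP are placeholders, and SUSE-based instances use the `ec2-user` login:

```bash
# Copy the downloaded archive to the HANA database server
scp -i sap-workshop-key.pem hxe.tgz ec2-user@<hana-private-ip>:/tmp/hxe.tgz

# Log on for the next steps (then switch to root with: sudo su -)
ssh -i sap-workshop-key.pem ec2-user@<hana-private-ip>
```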
### SAP HANA Express Edition installation

Execute the following commands on the command line as root:

```bash
sudo su -
cd /hana
tar -xvzf /hana/hxe.tgz
chmod -R 777 /hana/HANA_EXPRESS_20
/hana/setup_hxe.sh
```

Change the following options, set the master password, and leave all other prompts at their defaults:

```
Enter SAP HANA system ID [HXE]: HDB
Enter instance number [90]: 00
Enter HDB master password:
```

### Post installation steps

Execute the following commands on the command line as root:

```bash
cd /backup/log
rmdir /backup/log/HDB
ln -s DB_HDB HDB
cd /backup/data
rmdir /backup/data/HDB
ln -s DB_HDB HDB
chown -R hdbadm:sapsys /backup/
```

### Connect HANA Studio to newly installed HANA DB

1. On your RDP instance, open SAP HANA Studio and connect to SAP HANA
2. When opening SAP HANA Studio for the first time, you will get a prompt to select a directory as a workspace. Accept the default location
3. If prompted to create a password hint for the master password, select "No"
4. Click "Open Administration Console"
5. Under the "Systems" tab, select the down-arrow icon and "Add system..."
6. Use the IP address of the SAP HANA host from above and instance number 00
7. Enter the user ID (default is SYSTEM) and the HANA password you specified when you installed SAP HANA Express, then click "Finish"
8. You should now be able to navigate to the Catalog, Content, etc. in the left pane of the HANA Studio

### Update SAP HANA Parameters

Log on to the SAP HANA instance with SAP HANA Studio and set the following SAP HANA parameters in the **persistence** section of **global.ini**:

```
basepath_catalogbackup = /backup/log
basepath_databackup = /backup/data
basepath_logbackup = /backup/log
```
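If you prefer to work on the HANA host itself, these parameters can also be set with `hdbsql` instead of HANA Studio. A minimal sketch, assuming instance 00 and the SYSTEM user with the master password you chose; on a multitenant installation you may additionally need to specify the database with `-d`:

```bash
# Run as hdbadm; each statement writes the value to the [persistence] section of global.ini
hdbsql -i 00 -u SYSTEM -p <master-password> \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','basepath_databackup') = '/backup/data' WITH RECONFIGURE"
hdbsql -i 00 -u SYSTEM -p <master-password> \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','basepath_logbackup') = '/backup/log' WITH RECONFIGURE"
hdbsql -i 00 -u SYSTEM -p <master-password> \
  "ALTER SYSTEM ALTER CONFIGURATION ('global.ini','SYSTEM') SET ('persistence','basepath_catalogbackup') = '/backup/log' WITH RECONFIGURE"
```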
## Resize SAP HANA DB

What if our database grew in size or performance requirements? Let's see how we can quickly scale up to meet those demands!

### Resize via the AWS Console

- Log into the EC2 console.
- Select "SAP HANA Master"
- Go to Actions -> Instance State -> Stop.

  > This will stop the EC2 instance, but only after the OS has been allowed to shut down cleanly. Since we're running an empty database with no write activity, this is fine. In a production scenario, you could use a solution like the AWS Instance Scheduler to automate a more complex start/stop mechanism that chains system calls together for shutting down first the application, then the DB, then the OS (or even chain shutdown/startup of entire inter-dependent landscapes).
- Once the instance is stopped, go to Actions -> Instance Settings -> Change Instance Type. This changes the instance type to whatever you choose. Let's double the size of the instance to either r4.4xlarge or r5.4xlarge, depending on which instance family generation you're already using.
- Verify in the EC2 console that the instance type is now the updated instance type you selected.
- Go to Actions -> Instance State -> Start.
- Once the instance is "green" and running, go back to the RDP instance and refresh the connection in the HANA Studio. Here you should see double the RAM now being available. You may need to wait for the HANA database server to become fully available.
- Once you're satisfied that you've seen the effect of this, let's be frugal and change the instance type back to the smaller instance type for the remainder of the lab by repeating the same steps above but with a smaller instance such as r4.2xlarge or r5.2xlarge.

> A note on downtime: the change of instance type itself happens instantaneously. The only delay in resizing to scale up is thus in the stopping and starting of the instance itself (plus related OS-level services including the DB itself). This is much quicker than stopping and starting a physical server, but still means there will be downtime. For production environments, this could be done during a maintenance window (which can be defined in AWS Systems Manager). Note that you can scale the application tier horizontally by adding more application servers _without any downtime_.

### Resize via the command line

- Go to your RDP instance and open a command prompt or PowerShell console.
- The AWS CLI will already be installed. This is a way to interact with AWS services and their APIs via the command line. It is written in Python so it can be installed and run almost anywhere.
- Execute the following command to stop the EC2 instance. You can find the instance ID for this EC2 instance in the EC2 console.

  `aws ec2 stop-instances --instance-ids <instance-id>`
- Execute the following command to change the instance type. Edit accordingly for your desired instance type:

  `aws ec2 modify-instance-attribute --instance-id <instance-id> --instance-type "{\"Value\": \"r5.4xlarge\"}"`
- Execute the following command to start the EC2 instance:

  `aws ec2 start-instances --instance-ids <instance-id>`
- You can verify the impact of this by looking in the EC2 console. Once the instance is "green" and running there, go back to the RDP instance and refresh the connection in the HANA Studio. Here you should see double the RAM now being available. You may need to wait for the HANA database server to become fully available.
- Once you're satisfied that you've seen the effect of this, let's be frugal and change the instance type back to the smaller instance type for the remainder of the lab by repeating the same steps above but with a smaller instance such as r4.2xlarge or r5.2xlarge.

### Extra credit: automate CLI start/stop

For an extra challenge, use the CLI commands above in combination with Systems Manager to execute remote start/stop commands. As mentioned, you can automate in a more sophisticated fashion by using the AWS Instance Scheduler solution (or by building your own entirely custom solution). However, you can also start much more simply by chaining CLI commands together in a shell script that you then execute remotely via AWS Systems Manager (a sketch follows below). You could then iterate on this further by automating a scale-up/scale-down event that safely shuts down the landscape first and cleanly starts it up again afterwards, for a fully automated maintenance activity driven through Systems Manager.
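As a starting point for this extra credit, the three CLI calls can be chained into one script that you could run locally or trigger via Systems Manager Run Command. This is a sketch only, with `<instance-id>` and the target type as placeholders; a production version would first stop SAP and the database cleanly before stopping the OS:

```bash
#!/bin/bash
# Sketch: resize an EC2 instance by chaining AWS CLI calls (placeholders must be replaced)
INSTANCE_ID="<instance-id>"
TARGET_TYPE="r5.4xlarge"   # example target instance type

# Stop the instance and wait until it is fully stopped
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"

# Change the instance type while the instance is stopped
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
    --instance-type "{\"Value\": \"$TARGET_TYPE\"}"

# Start the instance again and wait until it is running
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
```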
## Backups

Once you've migrated your SAP HANA system to AWS, or built a new greenfield system on AWS, one of the first operational tasks you'll be presented with is of course taking a backup. There are 3 main backup mechanisms available to you:

1. Storage-level snapshots
2. File-based backups
3. Streaming backups (Backint)

1 and 2 can be done natively on AWS, and all 3 can be done on AWS using a variety of 3rd-party products.

### Snapshot Backups

In this series of exercises we'll take storage-level snapshots of our HANA system.

#### Storage snapshots via AWS Console

- Go to the EC2 console
- Select the SAP HANA Master (the HANA database server)
- Ensure that it is the only EC2 instance selected
- Go to Actions -> Image -> Create Image
- Specify an image name

  > This can be anything you want, but use something that helps you identify this. In a real productive landscape, you'd have a naming convention for this as you might have many images and snapshots.
- Specify an image description
- Select "No reboot"

  > As in previous exercises, we're not shutting the DB server down. Even though this only takes a snapshot at storage level (not at DB level), we're not too concerned with inconsistencies as the DB is empty and not in active use. See the extra credit if you want to build a DB-consistent snapshot mechanism.
- Leave the EBS volumes as-is - we'll take snapshots of all attached volumes.

  > Note the LVM volumes made up of 3x gp2 volumes for the data files and 2x gp2 volumes for the log files.
- Click Create Image.
- Your AMI (Amazon Machine Image) is now being created. For more information about how this works, ask the instructors or consult the AWS documentation on AMI creation.
- Click on the link in the pop-up dialog box to be taken to the list of your current AMIs filtered by the newly created AMI, or just click on the AMIs menu bar link on the left-hand side of the browser.
- Here you can watch the AMI being created on Amazon S3 storage.

  > S3 is a region-wide service, so the data is stored redundantly across multiple Availability Zones. This both makes it extremely durable storage and also very useful storage, as backups can be restored to any Availability Zone in the region.

#### Restore SAP HANA from snapshot

- Test a restore from this backup! First, shut down the HANA Master EC2 instance through the EC2 console.
- Next, create a new EC2 instance using the AMI you just created in the backup section above. To do this, go to the AMI section of the EC2 console. Select your AMI and select Actions -> Launch.
- When creating, select a similar instance type. Then select "Next: Configure Instance Details" - be careful to specify the same VPC and subnet as the original HANA database server, so you can connect to it without having to reconfigure connection routes. Click "Next: Add Storage"
- Leave the storage volumes as-is (you can change them here if you wish but there's no need)
- Next, add any tags required.
- Finally, select the same security group as the original HANA database server, then click "Review and Launch".
- To check on progress, view the newly created instance in the EC2 console.
- Once the instance is restored from the image, log in and bring up the HANA database server if it's not already running.
- Make sure that the HANA system is started by verifying that all the processes listed in the output of the command `sapcontrol -nr <Instance Number> -function GetProcessList` are green.
- Check the IP address of the new instance and add a new connection in your HANA Studio to this instance.
- Verify that you can connect to the new instance in the HANA Studio.

#### Copy the AMI to another region

- Go to the AMI menu in the EC2 console
- Select your AMI
- Go to Actions -> Copy AMI
- Select your chosen destination region (e.g. Ireland)
- Update the name and description and select Copy AMI
- When the AMI has finished copying, attempt to restore it to an EC2 instance in that region by following the same steps as in the restore section above.

  > Note that for this to be usable you'd need to deploy the HANA Quick Start in that region and disable the HANA install. Then you could restore the AMI into the private subnet created by that HANA Quick Start. The alternative would be to manually re-create the landscape around the HANA DB server. This is a great example of how powerful CloudFormation is, as it enables an easy automated restore mechanism.
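The same copy can be scripted with the AWS CLI, which is handy if you want to schedule regular cross-region copies of your AMIs. A minimal sketch with placeholder values:

```bash
# Copy an AMI from the source region to a DR region (all values are placeholders)
aws ec2 copy-image \
    --source-region <source-region> \
    --source-image-id <ami-id> \
    --name "sap-hana-master-dr-copy" \
    --region <destination-region>
```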
### File-based backups

Try to set up and execute a file-based backup directly on the SAP HANA host itself. Once done, configure it to run remotely via Systems Manager and schedule the backups to happen on a regular basis. Finally, configure cross-region replication on your S3 bucket to copy your backups to another region for extra protection and flexibility!

> Note that some of the steps below will need extra customisation to work with HANA Express edition!

#### Take file-based backup of SAP HANA

1. From the RDP host or bastion, SSH to your SAP HANA host. You will need to determine its private IP address in the EC2 console and use your downloaded private key.
2. Use the following commands to install the AWS CLI on your SAP HANA host as the root user:

   ```
   sudo su -
   curl https://s3.amazonaws.com/aws-cli/awscli-bundle.zip -o /usr/awscli-bundle.zip
   unzip /usr/awscli-bundle.zip -d /usr/awscmdline/
   /usr/awscmdline/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
   ```
3. Create an S3 bucket for backups - replace "XX" below with your initials.

   > NOTE: execute this as the root user

   ```
   sudo su -
   aws s3 mb s3://awsbootcamp-hana-bkp-XX
   ```

   > This fails! Why? Because our Quick Starts follow the principle of least privilege! The HANA EC2 IAM role created by the Quick Start does not allow S3 bucket creation by default. You can either create the bucket manually via the AWS console or first update the IAM role of the HANA EC2 instance.
4. Select the **IAM role** of the HANA EC2 instance. You can find the link in the instance's Description tab
   - Click on **Attach policies**
   - Search for S3 and select **AmazonS3FullAccess**
   - Click on **Attach policy** to save the operation
5. Give sudo access to hdbadm (this allows the HANA administrator user hdbadm to run AWS CLI commands) by adding the HANA admin to the sudoers file. Execute this as the root user:

   `visudo`

   Add the line below towards the end of the file:

   `hdbadm ALL=(ALL) NOPASSWD: ALL`

   If you are unfamiliar with the vi and visudo commands, the following will walk you through this step by step:
   - Scroll down to the end of the file
   - Hit the "o" key (lowercase O on your keyboard), which opens a new line. You should see the word **INSERT** at the bottom of the screen
   - Add the text above as indicated and hit the return key
   - Hit the ESC key. **INSERT** should no longer appear at the bottom of the screen
   - Type "ZZ" without the quotes. This saves the file and closes visudo
6. Change to the HANA admin user: `su - hdbadm`
7. Create a backup key for the SYSTEM user with the password you provided earlier in the CloudFormation template (Aws12345 unless you chose differently):

   ```
   hdbuserstore SET BACKUP "localhost:30015" SYSTEM Aws12345
   hdbuserstore list
   ```

   > 30015 is the port for HANA instance number 00. Adjust the port number according to your HANA instance number: 3NN15
8. Download the HANA backup script from the S3 bucket and set up the appropriate permissions as the root user (sudo su -):

   ```
   sudo su -
   aws s3 cp s3://awssapbootcamp/hana_backup.sh /usr/sap/HDB/HDB00/hana_backup.sh
   chown hdbadm:sapsys /usr/sap/HDB/HDB00/hana_backup.sh
   chmod 755 /usr/sap/HDB/HDB00/hana_backup.sh
   ```
9. Edit the script to point to the S3 bucket that was created in step 3, with the correct SID and S3 bucket info. For those wishing to use the vi editor, this process would entail:
   - vi /usr/sap/HDB/HDB00/hana_backup.sh
   - Using the arrow keys, scroll over to the first "a" in "awsbootcamp-hana-bkp-02"
   - Type "C" (without the quotes, this is a capital letter C). The word will disappear, and **INSERT** will appear at the bottom of the screen
   - Type the name of your S3 bucket, then hit the ESC key. The word **INSERT** will disappear from the bottom of the screen
   - Type "ZZ" without the quotation marks to save the file and exit vi

   > **HANA Express Edition only:**
   > replace "source $DIR_INSTANCE/hsbenv.sh" with "source /hana/shared/HDB/HDB00/hdbenv.sh"
10. Take a HANA backup by running the script below as the hdbadm user:

    `/usr/sap/HDB/HDB00/hana_backup.sh`
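The `hana_backup.sh` used in this lab is downloaded ready-made from the workshop bucket, but conceptually it does two things: trigger a file-based HANA backup via `hdbsql` (using the BACKUP key created with `hdbuserstore` in step 7) and copy the resulting files to your S3 bucket. The following is a simplified sketch of that idea, not the actual workshop script; the bucket name and paths are placeholders:

```bash
#!/bin/bash
# Simplified sketch of a HANA file backup followed by an S3 upload (not the workshop script)
source /hana/shared/HDB/HDB00/hdbenv.sh      # HANA environment (path as per the HANA Express note above)
TIMESTAMP=$(date +%F_%H%M)
S3_BUCKET="s3://awsbootcamp-hana-bkp-XX"     # replace XX with your initials

# Trigger a full data backup using the BACKUP key from hdbuserstore
hdbsql -U BACKUP "BACKUP DATA USING FILE ('backup_${TIMESTAMP}')"

# Copy the backup files and backup logs to S3
aws s3 sync /backup/data/HDB/ "${S3_BUCKET}/bkps/HDB/data/"
aws s3 sync /backup/log/HDB/  "${S3_BUCKET}/bkps/HDB/log/"
```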
#### Verify backup files in Amazon S3

1. On the AWS Management Console, open the **Services** menu.
2. Select S3
3. Select the bucket that was maintained in the backup script and verify that backup files exist in the bucket

#### Setup S3 cross-region replication

1. On the AWS Management Console, open the **Services** menu.
2. Select S3
3. Create a bucket in the region that you want to replicate the backups to
4. Navigate to that bucket's Properties section and enable versioning
5. Select the source S3 bucket into which the backups were copied
6. Navigate to its Properties section and enable versioning
7. To enable cross-region replication, select the source S3 bucket, navigate to the Management section, select the Replication icon and Add rule. Select the options below during rule creation:
   - Entire bucket
   - Enabled
   - Destination bucket name where contents need to be replicated to
   - From the Select IAM role drop-down, select Create new role
   - Review the selection and save

#### Run backups via Systems Manager

The IAM role that the Quick Start creates and assigns to the HANA systems does not have access to SSM, so first update the HANA instance role with SSM access.

1. Navigate to the IAM service in the AWS console
2. Navigate to the Roles section on the left
3. Select the role for the HANA nodes
4. Click on Attach Policy and select "AmazonSSMFullAccess"
5. The SSM agent needs to be installed on the HANA system in order to use it. Install the SSM agent on the HANA instance as the root user using the commands below:

   ```
   sudo su -
   zypper in amazon-ssm-agent
   systemctl enable amazon-ssm-agent
   systemctl start amazon-ssm-agent
   ```
6. Make sure that the SSM agent is running by running the command below before moving to the next step:

   `systemctl status amazon-ssm-agent`
7. Run the backup using the SSM command line.
8. Maintain the correct HANA instance ID in the command below; you can get the instance ID by querying the EC2 metadata:

   `curl http://169.254.169.254/latest/meta-data/instance-id`
9. Run the SSM command as the root user:

   ```
   aws ssm send-command --instance-ids <INSTANCE-ID> --document-name AWS-RunShellScript --parameters commands="sudo -u hdbadm TIMESTAMP=$(date +\%F\_%H\%M) SAPSYSTEMNAME=HDB DIR_INSTANCE=/hana/shared/$SAPSYSTEMNAME/HDB00 -i /usr/sap/HDB/HDB00/hana_backup.sh" --region <AWS-REGION>
   ```

#### Run backups remotely via Systems Manager

Use the AWS Systems Manager service to run a command that backs up the SAP HANA database remotely.

1. On the AWS Management Console, select **Systems Manager** on the Services menu
2. Select **Run Command** on the navigation bar
3. Create the Run Command using the "AWS-RunShellScript" command document
4. Paste the following command as the command parameters:

   ```
   sudo -u hdbadm TIMESTAMP=$(date +\%F\_%H\%M) SAPSYSTEMNAME=HDB DIR_INSTANCE=/hana/shared/$SAPSYSTEMNAME/HDB00 -i /usr/sap/HDB/HDB00/hana_backup.sh
   ```
5. Select the HANA instance from the list of instances or by using tags
6. Select "Write to S3" so that the output is saved in S3 for verification if needed
7. Enter the S3 bucket and prefix, for example bucket awsbootcamp-hana-bkp-XX with prefix /bkps/HDB/data/
8. Click **Run** and then view the result.
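If you prefer the CLI over the console, the status and output of a Run Command invocation can also be checked with the AWS CLI; the command ID is shown in the Run Command console and in the output of `send-command`:

```bash
# Inspect the status and output of a previously issued Run Command invocation
aws ssm list-command-invocations \
    --command-id <command-id> \
    --details \
    --region <aws-region>
```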
#### Schedule Systems Manager backups

CloudWatch Events rules with built-in SSM targets provide an efficient way to perform application-specific tasks, like database backups, remotely on the EC2 instance at a desired frequency.

1. On the AWS Management Console, open the **Services** menu.
2. Navigate to CloudWatch in the AWS console
3. Navigate to Events in the left menu
4. Select Create Rule
5. Select Schedule
6. Select Fixed rate
7. Maintain 15 Minutes
8. Under Target, select SSM Run Command
9. For Document, select **AWS-RunShellScript (Linux)**
10. For Target Key, maintain **InstanceIds** as the key
11. For Target Value(s), type in the instance ID of the EC2 instance where HANA is running
12. Under Configure parameters, select **Constant**
13. Select command and key in the same command that we used in step 4 of "Run backups remotely via Systems Manager"
14. Select Create a new role for this specific resource
15. Click on **Configure Details**
16. Maintain **Name** and **Description** and select the **Enabled** check box
17. Click on **Create Rule**

### AWS Backup service

The AWS Backup service can be used as a central backup management tool. In combination with Amazon EFS and AWS Backup, the local /backup EBS volumes can be replaced and backup files can be staged automatically. [Amazon Elastic File System (Amazon EFS)](https://aws.amazon.com/efs/?nc1=h_ls) provides a simple, scalable, elastic file system for Linux-based workloads. [AWS Backup](https://aws.amazon.com/backup/?nc1=h_ls) is a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services in the cloud as well as on premises using the AWS Storage Gateway.

The architecture uses the following tiering of services:

- EBS volumes for /hana/data, /hana/log, /hana/shared, /usr/sap and OS files
- An EFS volume for /hana/backup. Database log and full backups are written directly to the EFS volume
- AWS Backup as central management for backups. AWS Backup creates backups of the EFS files and automatically stores the data in S3.

#### Create EFS share

1. Create a security group and write down the security group ID.

   ```
   aws ec2 create-security-group \
   --region [your-aws-region] \
   --group-name efs-walkthrough1-mt-sg \
   --description "Amazon EFS walkthrough 1, SG for mount target" \
   --vpc-id [your-vpc-id]
   ```

   > You can find the VPC ID using the following command: $ aws ec2 describe-vpcs
2. Authorize inbound access to the security group

   ```
   aws ec2 authorize-security-group-ingress \
   --group-id [ID of the security group you have created in Step 1] \
   --protocol tcp \
   --port 2049 \
   --source-group [ID of the security group of EC2 instance] \
   --region [your-aws-region]
   ```
3. Create the EFS file system

   ```
   aws efs create-file-system \
   --creation-token FileSystemForHanaBackup \
   --tags Key=Name,Value=EFS-HANA-Backup \
   --region eu-west-1
   ```
4. Create the mount target

   ```
   aws efs create-mount-target \
   --file-system-id [file-system-id] \
   --subnet-id [subnet-id-where-HANA-is-running] \
   --security-group [ID-of-the security-group-created-for-mount-target] \
   --region [aws-region]
   ```
5. Mount the EFS file system

   ```
   mkdir -m 775 /hana-efs-backup
   sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport [your-efs-name]:/ /hana-efs-backup
   chown hdbadm:sapsys /hana-efs-backup/
   ```

#### Update SAP HANA parameters

Update the following SAP HANA parameters to write backups to the newly created EFS file system. Log on to the SAP HANA instance with SAP HANA Studio and set the following SAP HANA parameters in the **persistence** section of **global.ini**:

```
basepath_catalogbackup = /hana-efs-backup/log
basepath_databackup = /hana-efs-backup/data
basepath_logbackup = /hana-efs-backup/log
```

Trigger a database backup to the new EFS location.
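The backup can be triggered from HANA Studio (Backup and Recovery) or directly with `hdbsql`. A minimal sketch as the hdbadm user, assuming instance 00 and the SYSTEM user; with the parameters above, the files are written under `/hana-efs-backup`:

```bash
# Trigger a full data backup; files land in the EFS-backed basepath configured above
hdbsql -i 00 -u SYSTEM -p <master-password> \
  "BACKUP DATA USING FILE ('efs_backup_$(date +%F_%H%M)')"
```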
#### Use AWS Backup to copy DB backups from EFS to S3

Let's use the AWS Backup service to copy the database backups from EFS to S3 for long-term retention.

1. Create a backup vault
   - Sign in to the AWS Management Console, and open the AWS Backup console at [https://console.aws.amazon.com/backup](https://console.aws.amazon.com/backup).
   - On the AWS Backup console, in the navigation pane, choose Backup vaults.
   - Choose Create backup vault and enter the name **HANA-Backup**
2. Create a backup plan
3. Assign resources
   - Add the EFS share under **Resource assignments**
4. Run a backup
   - Click on **Create on-demand backup** under protected resources
   - Select the EFS share as the resource and choose **Create backup now**

All backup files are now transferred to S3 for long-term retention.

### Extra credit: take application-consistent storage snapshot of SAP HANA DB

1. If you were to use snapshots as a backup mechanism in a real customer environment, you'd want to ensure consistency. This can be done by putting HANA into "snapshot mode" and then taking the snapshot/image.
2. There is a sample script that automates this here: https://github.com/aknoeller/hana-snapshot-aws
3. Your challenge is to take this script and adapt it to your environment, triggering it via Systems Manager on a scheduled basis for regular application-consistent snapshot backups.

## Additional Resources

- AWS console: https://console.aws.amazon.com/
- SAP on AWS docs: http://aws.amazon.com/sap/docs/
- SAP HANA on AWS solution & sample pricing: https://aws.amazon.com/sap/solutions/saphana/
- TCO Calculator: https://aws.amazon.com/tco-calculator/
- Simple Monthly Calculator: https://calculator.s3.amazonaws.com/index.html

## External useful links (not official AWS resources)

- Scripts to install various SAP on AWS tools: https://github.com/frumania/aws-sap-scripts
- Various other AWS workshops: https://github.com/angelarw/aws-hands-on-workshops/blob/master/README.md
- SAP HANA crash-consistent snapshots: https://github.com/aknoeller/hana-snapshot-aws

## End of Day

Congratulations on finishing this hands-on workshop! We hope it was useful, but remember, this is just the start: keep on building and learning. If you didn't get time to try the extra credit modules, feel free to try these in your own account. If you do want to build on this, reach out to your local team to see if you can get AWS credits for the infrastructure spend to enable further experimentation.

To evaluate the day and provide feedback, please go to:

© 2019 Amazon Web Services, Inc. or its Affiliates. All rights reserved.