# Hands-on Lab - Lab #1: Welcome to AWS + Amazon S3!

Welcome to the AWS Console! Today we explore one of the most practical services in AWS: S3, or Simple Storage Service. We are going to start simple and get to know the core features of this service. Let's start by navigating to the S3 service.

![](https://i.imgur.com/dbfkVbn.jpg)

Once you have logged into the AWS Console successfully, navigate to S3 by either:

- Finding S3 in the 'Services' menu (under Storage), or
![](https://i.imgur.com/kCk0ybc.jpg)
- Typing 'S3' in the Search Bar and clicking 'S3' **[Recommended]**
![](https://i.imgur.com/slAtrkN.jpg)

> The search bar is the fastest way to navigate to (and between) AWS services in the AWS Console, but if you are new to AWS and curious about all the services AWS offers, the Services menu is a good way to find out what is available.

## 1. Create a bucket in S3

1. From the Amazon S3 main page ([S3](https://console.aws.amazon.com/s3)), press **Create bucket** to create a bucket.

![](https://i.imgur.com/wlKjuZj.png)

2. Enter a unique bucket name in the **Bucket name** field. For this lab, type `datasciencehub-lab1-user_name`, substituting `user_name` with your name. *All bucket names in Amazon S3 must be globally unique and cannot be duplicated.* In the **Region** drop-down box, specify the region in which to create the bucket. In this lab, select the region closest to you. The images will show the **Asia Pacific (Sydney) ap-southeast-2** region. Under **Object Ownership**, change the setting to **ACLs enabled**. Leave the default values for Block Public Access, and select **Create bucket** in the lower right corner.

![](https://i.imgur.com/wwUpWrU.jpg)

Bucket names must comply with these rules:

* Can contain lowercase letters, numbers, dots (.), and hyphens (-).
* Must start with a letter or number.
* Must be between 3 and 63 characters in length.
* Cannot be formatted like an IP address (e.g., 192.168.5.4).
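The rules above can be sketched as a quick local check before you try a name in the console. This is a minimal validator covering only the rules listed here, not the full set of AWS restrictions, and the function name is our own:

```python
import re

def looks_like_valid_bucket_name(name: str) -> bool:
    """Rough check of the S3 bucket naming rules listed above."""
    # 3-63 characters drawn from lowercase letters, digits, dots, and
    # hyphens, starting and ending with a letter or digit.
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name):
        return False
    # Must not be formatted like an IP address (e.g. 192.168.5.4).
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", name):
        return False
    return True

print(looks_like_valid_bucket_name("datasciencehub-lab1-jane"))  # True
print(looks_like_valid_bucket_name("Datasciencehub"))            # False: uppercase
print(looks_like_valid_bucket_name("192.168.5.4"))               # False: IP format
```

A name that passes this check can still be rejected by S3 if another account already owns it, since bucket names are globally unique.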
> There may be additional restrictions depending on the region in which the bucket is created. The name of a bucket cannot be changed once it is created, and it is included in the URL used to address objects stored within the bucket. Please make sure the bucket you create is named appropriately.

3. A bucket has been created on Amazon S3.

![](https://i.imgur.com/PLIZcnC.jpg)

> There are no costs incurred for creating a bucket. You pay for storing objects in your S3 buckets. The rate you are charged depends on the region you are using, your objects' size, how long you stored the objects during the month, and the storage class. There are also per-request fees. [Click for more information](https://aws.amazon.com/s3/pricing/)

---

## 2. Adding objects to buckets

> Once the bucket has been created successfully, you are ready to add objects. Objects can be any kind of file, including text files, image files, and video files. When you add a file to Amazon S3, you can include information about the permissions and access settings for that file in the metadata.

### Adding objects for static web hosting

This lab hosts a static website through S3. It demonstrates hosting a simple HTML page with a Data Science Hub logo. We start by preparing one image file and one HTML file.

1. Download the image file [dsh.png](http://datasciencehub-public-images.s3-website-ap-southeast-2.amazonaws.com/cloudfoundationspecialist/lab1/dsh.png) and save it as `dsh.png`.

2. Create a new text file on your computer named `index.html`. **Copy** the following source text into the file using Notepad (or similar). **Save** the file but keep it open; we will need to make some final changes in an upcoming step.
```
<html>
<head>
  <meta charset="utf-8">
  <title>Data Science Hub - CFS - Lab #1</title>
  <style>
    body { background-color: #283238; }
    h1 {
      color: #FCBD14;
      font-family: Arial, Helvetica, sans-serif;
      font-weight: lighter;
      font-size: 40pt;
    }
    h2 {
      color: white;
      font-family: Arial, Helvetica, sans-serif;
      font-size: 25pt;
      font-weight: lighter;
    }
  </style>
</head>
<body>
  <center>
    <br />
    <h1> Welcome to Lab #1: </h1>
    <h2> My Static Website on S3! </h2>
    <img src="{{Replace with your S3 URL Address}}" style="height: 200px;" />
  </center>
</body>
</html>
```

3. Upload the `dsh.png` file to S3. Click the **S3 bucket** that you just created.

![](https://i.imgur.com/PLIZcnC.jpg)

4. Click the **Upload** button, then click the **Add files** button. Select the pre-downloaded `dsh.png` file through File Explorer. Alternatively, drag and drop the file onto the screen.

![](https://i.imgur.com/sicrtPA.jpg)
![](https://i.imgur.com/LlW6UbY.jpg)

5. Check the file information for `dsh.png`, then click the **Upload** button at the bottom.

![](https://i.imgur.com/lh4zWtF.jpg)

When the file upload is complete, click the **Close** button (top right) to return to the bucket.

6. Check the URL information to fill in the image URL in the `index.html` file. Select the uploaded `dsh.png` file and copy the **Object URL** from the details on the right.

![](https://i.imgur.com/e8a2bMw.jpg)

7. Paste the **Object URL** into the image URL part of `index.html`.

![](https://i.imgur.com/pNT0x2q.jpg)

8. Upload the `index.html` file to S3 following the same instructions as for the image.

![](https://i.imgur.com/gxLH0fN.jpg)

When the file upload is complete, click the **Close** button (top right) to return to the bucket.

9. If you check the objects in your S3 bucket, you should see 2 files.

![](https://i.imgur.com/cd3qHPH.jpg)

---

## 3. Working with objects in the S3 Console

1. In the Amazon S3 Console, **click the object** you want to inspect.
You can see detailed information about the object, as shown below.

![](https://i.imgur.com/5v0REmQ.jpg)

> By default, all objects in an S3 bucket are owner-only (private). To access an object through a URL of the form `https://{Bucket}.s3.{region}.amazonaws.com/{Object}`, you must grant **Read** permission so that external users can read it. Alternatively, you can create a signature-based signed URL that contains credentials for that object, allowing unauthenticated users to access it temporarily.

2. Return to the previous page and select the **Permissions** tab in the bucket. To modify **Block public access (bucket settings)**, click the **Edit** button on the right.

![](https://i.imgur.com/tF4G9n3.jpg)

3. **Uncheck** the 'Block all public access' checkbox and press the **Save changes** button.

![](https://i.imgur.com/2UokId3.jpg)

4. Enter `confirm` in the Edit Block public access pop-up window and press the **Confirm** button.

![](https://i.imgur.com/4y9H3jk.png)

5. Click the **Objects** tab, select the uploaded **files**, click the **Actions** drop-down button, and press the **Make public** button to set them to public.

![](https://i.imgur.com/cZG18zG.jpg)

6. When the confirmation window pops up, press the **Make public** button again to confirm.

![](https://i.imgur.com/Lpw99WP.jpg)

7. Return to the bucket page, select **index.html**, and click the **Object URL** link in the details.

![](https://i.imgur.com/cmx0qSJ.jpg)

8. When you access the HTML object's URL, the following screen is displayed.

![](https://i.imgur.com/55kX1Cy.jpg)

---

## 4. Enable Static Website Hosting

### Static Website Settings

> A static website refers to a website that contains static content (HTML, images, video) or client-side scripts (JavaScript) on a web page. In contrast, dynamic websites require server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Server-side scripting is not supported on Amazon S3.
> If you want to host a dynamic website, you can use other services on AWS, such as EC2.

1. In the S3 console, select the bucket you just created, and click the **Properties** tab. Scroll down and click the **Edit** button on **Static website hosting**.

![](https://i.imgur.com/ZLOBgvW.jpg)
![](https://i.imgur.com/Nacw9hR.png)

2. Enable static website hosting, select the hosting type, and enter `index.html` as the Index document value, then click the **Save changes** button.

![](https://i.imgur.com/aivi4ZQ.jpg)

3. Click the **Bucket website endpoint** created in the **Static website hosting** entry to access the static website.

![](https://i.imgur.com/K2gCTwp.jpg)

4. You are now hosting a static website using Amazon S3.

![](https://i.imgur.com/3djFmjP.jpg)

---

## 5. Move objects

### Move Objects

1. Create a temporary bucket for moving objects between buckets (bucket name: `datasciencehub-lab1-myname-target`). Substitute **myname** with your name. Remember the naming rules for buckets. Untick the **Block all public access** checkbox for quick configuration.

![](https://i.imgur.com/mKKuUBJ.jpg)

2. Check the notification window below and select **Create bucket**.

![](https://i.imgur.com/L3N5qA4.png)

3. In the Amazon S3 Console, select the bucket that contains the object (the first bucket you created) and click the checkbox for the object you want to move. Select the **Actions** menu at the top to see the various operations you can perform on that object. Select **Move** from the listed features.

![](https://i.imgur.com/v9mNv4I.jpg)

4. Select **Bucket** as the destination type, then click the **Browse S3** button to find the new bucket you just created.

![](https://i.imgur.com/O3Bj7n4.jpg)

5. Click the bucket name in the pop-up window, then select the destination bucket. Click the **Choose destination** button.

![](https://i.imgur.com/Ug7rGUA.jpg)
![](https://i.imgur.com/fEEMZDg.jpg)

6.
With the destination selected, click **Move** on the previous screen to complete the move.

![](https://i.imgur.com/PmF5A1K.jpg)

7. Check that the object has moved to the target bucket.

![](https://i.imgur.com/FFW1GSI.jpg)

> Even when you move an object, its **existing permissions remain intact**!

---

## 6. Enabling bucket versioning

### Enable versioning

1. In the Amazon S3 Console, select the first S3 bucket we created. Select the **Properties** tab. Click the **Edit** button in **Bucket Versioning**.

![](https://i.imgur.com/hpHoI7N.jpg)

2. Click the **Enable** radio button under **Bucket Versioning**, then click **Save changes**.

![](https://i.imgur.com/qcpQ7GO.png)

3. In this lab, the `index.html` file will be modified and re-uploaded with the same name. Make some changes to the `index.html` file, then upload the modified file to the same S3 bucket.

4. When the changed file has finished uploading, click the object in the S3 Console. You can view **current version** information by clicking the **Versions** tab on the page that contains the object details.

![](https://i.imgur.com/TL6YJ6y.jpg)

---

## 7. Setting up a Lifecycle Policy

You can use lifecycle policies to define actions you want Amazon S3 to take during an object's lifetime, e.g. transitioning objects to another storage class, archiving objects, or deleting objects after a specified period.

A versioning-enabled bucket can have many versions of the same object: one current version and zero or more noncurrent (previous) versions. Using a lifecycle policy, you can define actions specific to current and noncurrent object versions.

We are going to set up a lifecycle policy that will move noncurrent (previous) versions of your objects to the S3 Infrequent Access (IA) tier after 30 days and then delete them 30 days later.

1. In your bucket's overview page, select the **Management** tab.

2. Under "Lifecycle rules" select the **Create lifecycle rule** button. This opens the "Create lifecycle rule" page.
![](https://i.imgur.com/B6tPbK3.jpg)

3. Give your rule the name `[your initials] - S3 Lifecycle policy`, select **Apply to all objects in the bucket** as the scope, and put a **check** in the box acknowledging the warning. We could set up more fine-grained rules based on a prefix or on object tags, but for this lab we will apply the rule to the entire bucket.

4. Under "Lifecycle rule actions" put a check in the boxes next to **Move noncurrent versions of objects between storage classes** and **Permanently delete noncurrent versions of objects**. Selecting an action for a "noncurrent" version means the action will take place on the older object version once it is replaced by a newer object version.

5. Under "Transition noncurrent versions of objects between storage classes" select **Standard-IA** for "Choose storage class transitions". Enter 30 for "Days after objects become noncurrent". This part of the rule will move objects from S3 Standard to S3 Standard-IA 30 days after they become previous versions. A rule like this can save costs in S3 when files are frequently accessed within the first 30 days but only occasionally accessed afterwards.

6. Under "Permanently delete noncurrent versions of objects" enter 60. This will delete an object 60 days after it becomes a previous version (30 days after it is moved to Standard-IA).

7. At the bottom you will see a timeline summary of the rule you just set up. Select **Create rule** when you have finished reviewing the summary.

![](https://i.imgur.com/DuGQeku.png)

8. You now have a lifecycle policy that will move previous versions of your objects to Standard-IA after 30 days and then delete them 30 days later.

![](https://i.imgur.com/IWwsGGi.jpg)

You are now ready to move on to the final step: CLEANUP: DELETING THE OBJECTS AND THE S3 BUCKET

---

## 8. Cleanup: Deleting the objects and the S3 bucket

> You can delete unnecessary objects and buckets to avoid unnecessary costs.

1.
In the Amazon S3 Console, select the **bucket** that you want to delete, then click **Delete**. A dialog box appears for deletion.

![](https://i.imgur.com/bIPLwLR.jpg)

2. There is a warning that the bucket cannot be deleted because it is not empty. Select the **empty bucket configuration** link to empty the bucket.

![](https://i.imgur.com/KTC24SU.jpg)

3. **Empty bucket** performs a one-time deletion of all objects in the bucket. Confirm by typing `permanently delete` in the box, then click **Empty**.

![](https://i.imgur.com/Apjjyre.jpg)

4. Now the bucket is empty. Perform step 1 again: **enter the bucket name** and press the **Delete bucket** button.

![](https://i.imgur.com/xS8nmcy.jpg)

5. Repeat these steps with the remaining bucket: first **Empty** the bucket, then **Delete** it.

**Congratulations!! You have completed the workshop!**

**Thank you for your efforts.**
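As a quick recap, the two URL formats used in this lab — the object URL from section 3 and the static website endpoint from section 4 — can be sketched as simple string builders. This is a sketch for illustration (the function names are our own, and a few older regions use a dot rather than a dash before the region in the website endpoint):

```python
def object_url(bucket: str, region: str, key: str) -> str:
    # Virtual-hosted-style object URL (section 3): the object must be
    # public, or accessed via a signed URL, to be readable this way.
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

def website_endpoint(bucket: str, region: str) -> str:
    # Static website endpoint (section 4): served over plain HTTP.
    return f"http://{bucket}.s3-website-{region}.amazonaws.com"

print(object_url("datasciencehub-lab1-jane", "ap-southeast-2", "dsh.png"))
# https://datasciencehub-lab1-jane.s3.ap-southeast-2.amazonaws.com/dsh.png
print(website_endpoint("datasciencehub-lab1-jane", "ap-southeast-2"))
# http://datasciencehub-lab1-jane.s3-website-ap-southeast-2.amazonaws.com
```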