# System overview

This work was done by Mike (Kae-Jer) Cho and Eric (Yen-Hao) Chen as part of a project advised by Andrea Cuadra.

![](https://i.imgur.com/AA54hAC.jpg)

Our system contains three components: interaction with users, data persistence and exchange, and data processing for eRFA report generation. For interaction with users, we develop an Alexa Skill and deploy it to Alexa devices to ask the health questionnaire questions and receive answers from users. For data persistence and exchange, we utilize several AWS products: Amazon DynamoDB as the database that stores user responses, and AWS Lambda with Amazon API Gateway to implement and deploy the data polling API. For the last component, we use Python with the ReportLab toolkit to process user responses and generate eRFA reports in PDF format.

# AWS

## ASK

The Alexa Skills Kit (ASK) is a software development framework that enables developers to create content, called skills. Skills are applications within Amazon Alexa. With an interactive voice interface, Alexa gives users a hands-free way to interact with a skill: users can use their voice to perform everyday tasks like checking the news, listening to music, or playing a game, and to control cloud-connected devices, for example asking Alexa to turn on the lights or change the thermostat. Skills are available on Alexa-enabled devices, such as Amazon Echo and Amazon Fire TV, and on Alexa-enabled devices built by other manufacturers.

We continue the previous Alexa Skill, Health Questionnaire, and develop new features for the data pipeline. Since the skill already has the basic functions to ask questions and store the user's responses within an interaction session, we leverage it to persist those responses to Amazon DynamoDB. We designed a JSON-format dictionary to hold the user's responses: at the beginning of the interaction session, we create an answer dictionary to store the user's answers.
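As an illustrative sketch only (the actual field names and question keys used by the skill may differ; the "userId" key matches the table's primary key described later), the answer dictionary could look like:

```json
{
  "userId": "amzn1.ask.account.EXAMPLE",
  "answers": {
    "adl_walking": "limited a little",
    "adl_bathing": "not limited"
  }
}
```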
We keep updating this dictionary as the user answers each question, saving it to the database at the same time. With this design, we do not need to wait until the user finishes all of the questions before saving their answers to the database.

## AWS DynamoDB

Amazon DynamoDB is a key-value and document database that delivers single-digit-millisecond performance at any scale. It is a fully managed, multi-region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.

## AWS Lambda

AWS Lambda is a serverless compute service that lets developers run code without provisioning or managing servers, creating workload-aware cluster scaling logic, maintaining event integrations, or managing runtimes. With Lambda, developers can run code for virtually any type of application or backend service, all with zero administration.

## AWS API Gateway

The second step was implementing a REST API that lets us poll data from the database. A REST API (also known as a RESTful API) is an application programming interface that conforms to the constraints of the REST architectural style and allows interaction with RESTful web services. We kept the functionality of our API simple, implementing only the data polling feature without other, more complicated operations on the database. For a large system, it can make sense to build an API on an always-on server for stability, with powerful hardware to guarantee computation speed. However, maintaining a virtual server incurs extra cost, and setting up the environment for the API takes much more effort. Instead, we chose a solution built on AWS Lambda and Amazon API Gateway.
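As a minimal sketch of what such a data polling Lambda could look like (the table name `eRFAUserResponses` is an assumption for illustration; the `amazonId` request field and the `userId` primary key match the Postman test and table setup described later):

```python
import json


def parse_request(event):
    """Pull the amazonId out of an API Gateway proxy event body."""
    body = json.loads(event.get("body") or "{}")
    return body.get("amazonId")


def lambda_handler(event, context):
    amazon_id = parse_request(event)
    if not amazon_id:
        return {"statusCode": 400, "body": json.dumps({"error": "amazonId is required"})}

    # boto3 is imported lazily here so the request parsing above can be
    # exercised without AWS credentials.
    import boto3

    table = boto3.resource("dynamodb").Table("eRFAUserResponses")  # table name is assumed
    item = table.get_item(Key={"userId": amazon_id}).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"error": "no responses for this user"})}
    return {"statusCode": 200, "body": json.dumps(item, default=str)}
```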
As we mentioned before, Lambda provides a serverless environment, meaning that we can focus on developing the features of our service without taking care of the runtime environment. Amazon API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. APIs act as the "front door" for applications to access data, business logic, or functionality from backend services. API Gateway supports containerized and serverless workloads, as well as web applications. We utilize this combination to create our data polling API with stability and sustainability.

# Step by step Tutorial

### ASK Set up detail

It is advised that each developer sets up their own developer console so that they can freely test their changes.

- To create your own dev console, make sure that you set up your own Amazon account.
- Create a new skill in the developer console using the custom template: https://developer.amazon.com/alexa/console/ask
- Import the JSON Editor file (located in the "Build" tab): eRFA_Care_Questionnaire > JsonEditor.json
- Click Build Model.
- Replace the index.js file with the file at (located in the "Code" tab): eRFA_Care_Questionnaire > lambda > index.js
- Within the lambda folder, create a folder called "languages" and, inside it, a new file called "en-US.json"; then copy into it the content of the en-US.json in this repo: eRFA_Care_Questionnaire > lambda > languages > en-US.json
- Click Save and then Deploy.
- Click the Test tab, and enable testing by selecting Development in the dropdown menu at the top.
- Test your code by typing "open my care questionnaire" in the Alexa Simulator.

### AWS permission Set up detail

#### Set up permissions

To enable your Alexa-hosted skill to use resources in your personal AWS account, create an AWS Identity and Access Management (IAM) role that allows your Alexa-hosted skill to access the resource.
For details about creating AWS IAM roles, see the AWS Identity and Access Management User Guide.

#### To get your Alexa skill ARN

1. Open the Alexa developer console and log in.
1. In the console, open your Alexa-hosted skill.
1. In the code editor, click the icon for Link your personal AWS resources.
1. Copy the ARN.

![](https://i.imgur.com/54CLsD6.jpg)

#### To create an AWS IAM role

1. Open the AWS management console and log in.
1. In the console, open the Identity and Access Management (IAM) dashboard.
1. In the IAM dashboard, click Roles.
1. On the Roles page, click Create role.
1. On the Create role page, under Select type of trusted entity, select AWS service.
1. In the Choose a use case section, under Common use cases, select one of the common use cases. -Or- Under Or select a service to view its use cases, select one of the services from the list, and then, under Select your use case, select a use case. For example, select DynamoDB from the list of services, and then select Amazon DynamoDB Accelerator (DAX) - DynamoDB access from the list of use cases.
1. Click Next: Permissions.
1. Choose one or more policies to attach to your new role, and then click Next: Tags. For example, select the AmazonDynamoDBFullAccess policy for full access to your DynamoDB table.
1. Click Next: Review, and enter the name and description of your role.
1. Click Create role.

#### To add your Alexa skill ARN to the role

1. In the IAM dashboard, click Roles.
1. On the Roles page, click the name of the role you just created, and then click the Trust relationships tab.
1. Click Edit trust relationship.
1. Add an entry for the AWS Lambda Execution Role ARN from your Alexa-hosted skill to the Statement property and include the sts:AssumeRole action, as shown in the following example. Don't overwrite other existing entries in the Statement property.
```json=
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "<Replace with AWS Lambda Execution Role ARN from Alexa-hosted skill>" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

5. Click Update Trust Policy.

https://youtu.be/yZDzxB4IT90
{%youtube yZDzxB4IT90 %}

### Use personal AWS resources with Node.js

In the code for your Alexa-hosted skill, assume the role by using the AWS Security Token Service (STS) API. For example, the following code requests temporary credentials for a role with DynamoDB access and scans a DynamoDB table.

```javascript=
const AWS = require("aws-sdk");

const ShowUserMessageHandler = {
  // ... your canHandle function for the intent ...
  async handle(handlerInput) {
    // 1. Assume the AWS resource role using the STS AssumeRole action
    const STS = new AWS.STS({ apiVersion: '2011-06-15' });
    let credentials;
    try {
      credentials = await STS.assumeRole({
        RoleArn: '<Your AWS resource role ARN>',
        RoleSessionName: 'ExampleSkillRoleSession' // any session name works
      }).promise();
    } catch (err) {
      console.log('AssumeRole FAILED: ', err);
      throw new Error('Error while assuming role');
    }

    // 2. Create a new DynamoDB client with the assumed role's credentials
    //    and scan the DynamoDB table
    const dynamoDB = new AWS.DynamoDB({
      apiVersion: '2012-08-10',
      accessKeyId: credentials.Credentials.AccessKeyId,
      secretAccessKey: credentials.Credentials.SecretAccessKey,
      sessionToken: credentials.Credentials.SessionToken
    });
    let tableData;
    try {
      tableData = await dynamoDB.scan({ TableName: 'TestTable' }).promise();
    } catch (err) {
      console.log('Scan FAILED', err);
      throw new Error('Error while scanning table');
    }

    // ... use tableData as required ...
  }
};
```

*https://developer.amazon.com/en-US/docs/alexa/hosted-skills/alexa-hosted-skills-personal-aws.html*

### AWS DynamoDB Set up detail

1. Create table
![](https://i.imgur.com/PrNPyie.jpg)
2.
Use "userId" as the primary key
![](https://i.imgur.com/YqsuQEd.jpg)
and finish creating the table.

*https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStartedDynamoDB.html*
*https://developer.amazon.com/en-US/docs/alexa/hosted-skills/alexa-hosted-skills-session-persistence.html*

### AWS Lambda / API Gateway Set up detail

1. Create a function with Python 3.6
![](https://i.imgur.com/y7YcukH.jpg)
2. Upload the data pipeline script zip file
![](https://i.imgur.com/dab9dDz.jpg)
![](https://i.imgur.com/G0axXhi.jpg)
3. Add a trigger with API Gateway
![](https://i.imgur.com/jF7gJUo.jpg)
![](https://i.imgur.com/DPDzmyW.jpg)
4. Get the API endpoint
![](https://i.imgur.com/cWJijly.jpg)
5. Test it with Postman; make sure to choose "POST", set Body to "raw", and use an amazonId that exists in the database
![](https://i.imgur.com/GvXY9w7.jpg)

### AWS S3 Set up detail

*https://docs.aws.amazon.com/lambda/latest/dg/services-apigateway.html*

# Python Script

The last step of the data pipeline is to process the user data and generate the eRFA report. We use Python with ReportLab, an open-source document creation engine, to generate the file in PDF format. We follow the format of the eRFA report provided by Dr. Shahrokni, which gives us the standard for report creation and data processing.

For the data processing, we get the data from the database and calculate the score for each question; different answers to a question receive different scores. For example, for the Activities of Daily Living (ADL) questions, we give 2 points to the answer "not limited," 1 point to "limited a little," and 0 points to "limited a lot." After scoring all of the answers, we can create an "Impairment Summary" table based on these scores, as shown in Table 1. This summary table presents the assessment result of the health questionnaire. In the report, we also provide the user's detailed answer to each question.
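A minimal sketch of the ADL scoring rule described above (the exact answer strings and any summary thresholds are assumptions; the authoritative logic is in lambda_function.py):

```python
# Points per ADL answer, matching the rule described above.
ADL_POINTS = {"not limited": 2, "limited a little": 1, "limited a lot": 0}


def score_adl(answers):
    """Sum the ADL points over a user's answers; unrecognized answers score 0."""
    return sum(ADL_POINTS.get(str(a).strip().lower(), 0) for a in answers)
```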
After generating the report, the script also sends it with the Gmail API and saves it to S3, and then returns the report file URL as the response of the API. Please check the script named "lambda_function.py" for the code.
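For reference, saving the PDF to S3 and returning its URL could be sketched as below; the bucket name, object key layout, and URL form are illustrative assumptions, not what lambda_function.py necessarily does:

```python
import datetime


def report_key(amazon_id):
    """Build a per-user, timestamped object key for the PDF report."""
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%d-%H%M%S")
    return f"reports/{amazon_id}/{stamp}.pdf"


def report_url(bucket, key, region="us-east-1"):
    """Virtual-hosted-style S3 URL for the uploaded report."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"


def upload_report(pdf_bytes, amazon_id, bucket="erfa-reports"):  # bucket name is assumed
    # boto3 is imported lazily so the pure helpers above run without AWS.
    import boto3

    key = report_key(amazon_id)
    boto3.client("s3").put_object(
        Bucket=bucket, Key=key, Body=pdf_bytes, ContentType="application/pdf"
    )
    return report_url(bucket, key)
```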