Previously, I stored my JSON-like data in the Firebase Realtime Database. While I was still developing the feature, everything was fine. However, once I deployed it and people started using it, I quickly hit the limit of Firebase's free tier.
At the same time, I found that the free tier of AWS DynamoDB is far more generous than that of the Firebase Realtime Database, so I decided to move my data from Firebase to DynamoDB.
You can see the difference: 25 GB of storage versus 1 GB. That is why I wanted to move the data.
I thought my old article Build an AI Line Chatbot using AWS Bedrock already covered how to work with DynamoDB. However, I was wrong. Inside an AWS environment, you only need to do two things:
That is, in fact, the answer you will find in most articles on the web, but once you step outside AWS, things get a little more complicated.
I want to use DynamoDB outside AWS, in this case from Google Colab. boto3 is the Python SDK for connecting to AWS services, and of course it is not pre-installed in Colab.
!pip install boto3
import boto3
Before continuing with the Python code, remember to create some tables in DynamoDB first. For this tutorial, I created a table called 123. At this point, you can find plenty of articles telling you to just paste the code below and that it will work like magic.
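The snippet those articles show usually looks something like the sketch below. I am assuming here that the table 123 has a string partition key named id and lives in Oregon ('us-west-2'); adjust those to your own schema and region.

import boto3

# Connect to the table created above (assumed schema: string partition key 'id')
dynamodb = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamodb.Table('123')

# Write one item, then read it back
table.put_item(Item={'id': 'demo-001', 'payload': 'hello dynamoDB'})
response = table.get_item(Key={'id': 'demo-001'})
print(response.get('Item'))

If you run this in a fresh Colab runtime, the first put_item call simply fails, because boto3 cannot locate any credentials.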
But that is not true, because outside AWS boto3 has no idea who you are. We need to set up access keys first, and there are two ways to do it.
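The first way is to pass the keys directly when creating the resource. A minimal sketch, where the placeholder strings stand in for your own IAM access key pair:

import boto3

# Placeholders: replace with your own IAM access key pair and region
AWS_ACCESS_KEY_ID = 'your-access-key-id'
AWS_SECRET_ACCESS_KEY = 'your-secret-access-key'
AWS_REGION = 'us-west-2'

dynamodb = boto3.resource(
    'dynamodb',
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
    region_name=AWS_REGION,
)
table = dynamodb.Table('123')
print(table.table_status)  # prints ACTIVE if the connection and table are fine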
AWS_REGION is the region where your DynamoDB tables live; for example, Oregon is 'us-west-2'.
However, I do not recommend this method if you plan to share your code or push it to GitHub, because your keys end up hard-coded in the file.
The second way is to save your access keys as environment variables, and that is it. When boto3 builds the connection, it picks the keys up from the environment automatically, and the DynamoDB code runs as expected.
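In Colab, you can set the environment variables with os.environ before creating the boto3 resource; the values below are placeholders for your own keys.

import os

# Placeholders: replace with your own keys; set these before creating the boto3 resource
os.environ['AWS_ACCESS_KEY_ID'] = 'your-access-key-id'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'your-secret-access-key'
os.environ['AWS_DEFAULT_REGION'] = 'us-west-2'

import boto3

# No keys in the call: boto3 reads them from the environment variables
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('123')
print(table.table_status)

The nice part of this approach is that the credentials never appear in the notebook cells that actually talk to DynamoDB, so those cells are safe to share.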
reference: https://analyticshut.com/configure-credentials-with-boto3/