# London Cab ELT Phase 1 to 5 - Documentation

## 1. Abstract

ELT stands for Extract, Load, and Transform. It is a data integration process used to move data from one system to another. ELT is a newer approach to data integration than ETL (Extract, Transform, Load), the traditional approach. In ELT, the data is first extracted from the source system and loaded into a staging area. The data is then transformed in the staging area, but it is not loaded into the target system until it is ready. This allows for more flexibility and scalability than ETL, as the data can be transformed in real time or in batch mode.

ELT is often used in conjunction with cloud-based data warehouses and data lakes, because cloud-based systems are more scalable and flexible than traditional on-premises systems. The same data warehouse (e.g. BigQuery) can be used for both staging and warehousing in an ELT system. In fact, this is common practice, as it can save time and resources. When the data is extracted from the source system, it is loaded into the data warehouse in raw format, meaning it is not yet transformed and may not be in the correct format for the target system. The data is then transformed inside the data warehouse, after which it is ready to be loaded into the target system (see the sketch at the end of this section).

Using the same data warehouse for both staging and warehousing can be beneficial for several reasons. First, it saves time and resources, as the data does not need to be moved between two different systems. Second, it simplifies the data integration process, as there is only one system to manage. Third, it can improve data quality, as the data is only transformed once.

However, there are also some potential drawbacks. First, it can increase the load on the data warehouse, as the data is processed twice. Second, it can make it more difficult to troubleshoot problems, as the data is stored in a single location. Third, it can make it more difficult to scale the data integration process, as the data warehouse may not be able to handle the increased load.

By definition, ETL consists of three stages: Extract, Transform, and Load; ELT reorders the last two. The abstract design of an ELT system is as follows:

<img src="https://www.sqlshack.com/wp-content/uploads/2020/04/etl-and-elt-1-gray.png" alt="image info" />

Some terminology to consider before delving into the documentation:

* Source System: The system that contains the data that needs to be integrated.
* Extract: The component responsible for extracting the data from the source system.
* Staging Area: A temporary storage area where the extracted data is stored before it is transformed.
* Transform: The component responsible for transforming the data from the staging area into a format compatible with the target system.
* Data Warehouse: A repository for storing data that is used for analysis and reporting.
* Data Lake: A repository for storing large amounts of data in its raw format.
* ELT Platform: A software platform used to manage the ELT process.
* Target System: The system that the transformed data is loaded into.
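To make the "transform inside the warehouse" idea concrete, the sketch below (not part of the project code; the dataset and table names are hypothetical) loads a raw extract into BigQuery untouched and then transforms it with SQL executed in the warehouse itself:

```python
# Minimal ELT sketch: load raw data into BigQuery, then transform it in place.
# Dataset and table names are hypothetical placeholders, not project values.
import pandas as pd
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Extract + Load: write the raw extract into a staging table, untransformed.
raw_df = pd.DataFrame({"pickup_ts": ["2023-07-27 08:15:00"], "fare": ["12.50"]})
client.load_table_from_dataframe(raw_df, "staging_dataset.raw_trips").result()

# Transform: run SQL inside the warehouse to produce the curated table.
client.query(
    """
    CREATE OR REPLACE TABLE curated_dataset.trips AS
    SELECT CAST(pickup_ts AS TIMESTAMP) AS pickup_ts,
           CAST(fare AS NUMERIC) AS fare
    FROM staging_dataset.raw_trips
    """
).result()
```

Both the staging and curated tables live in the same BigQuery project here, which is exactly the "one warehouse for staging and warehousing" pattern described above.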
## 2. London Cab's Analytics GCP ELT Pipeline Infrastructure Diagram

<img src="https://i.imgur.com/99R1ZCB.png" alt="image info" />

Used Service Definitions:

* Google Cloud Scheduler: With Cloud Scheduler we set up scheduled units of work to be executed at defined times or regular intervals. These work units are commonly known as cron jobs.
* Google Cloud Functions: A serverless computing service offered by Google Cloud Platform (GCP) that allows users to run code in response to events with minimal configuration and maintenance. These functions can be used to run code, create and manage web applications, and process data.
* Google Compute Engine: A service that creates and manages virtual machines (VMs) on Google's infrastructure. You can create VMs of various sizes, from small to large, with Debian, Windows, or other standard images.
* Google BigQuery: BigQuery is Google's fully managed, serverless data warehouse that enables scalable analysis over petabytes of data. It is a Platform as a Service that supports querying using a dialect of SQL.
* Google Cloud Monitoring: Cloud Monitoring is a comprehensive monitoring service that provides visibility into the performance and health of your GCP resources. You can use Cloud Monitoring to monitor your ELT jobs by creating metrics, alerts, and dashboards.
* Google Cloud Logging: Cloud Logging is a service that collects and stores logs from your GCP resources. You can use Cloud Logging to troubleshoot ELT problems by analyzing the logs for errors or performance issues.

## 3. GCP Pipeline Provisioning

### 3.1. Service Account Creation:

1. Go to the Google Cloud Platform Console: https://console.cloud.google.com/.
2. Click the Menu button (three horizontal lines) in the top left corner of the page.
3. Select IAM & Admin > Service accounts.
4. Click the Create Service Account button.
5. In the Service account name field, enter a name for your service account.
6. In the Service account ID field, leave the default value.
7. In the Select a role field, select the role that you want to assign to the service account.
8. Click the Create button.

Once the service account is created, you will be prompted to download a JSON key file. This file contains the credentials you will need to authenticate to GCP as the service account. To download it, click the Download JSON key file button and save the file to a secure location.

The JSON key file is sensitive, so store it in a secure location and do not share it with anyone else. Once you have downloaded the JSON key file, you can use it to authenticate to GCP as the service account: specify the path to the file when configuring your application to access GCP.

### 3.2. Creating a Role for the Service Account:

1. Go to the Google Cloud Platform Console: https://console.cloud.google.com/.
2. Click the Menu button (three horizontal lines) in the top left corner of the page.
3. Select IAM & Admin > Roles.
4. Click the Create Role button.
5. In the Role name field, enter a name for your role.
6. In the Role description field, enter a description for your role.
7. In the Select a role template field, select the role template that you want to use as a starting point for your role.
8. In the Permissions section, select the permissions that you want to grant to the role.
9. In the Members section, add the members who you want to grant the role to.
10. Click the Create button.

Once the role is created, we assign it to the service accounts we created for the pipeline.
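With the service account, its JSON key, and the role in place, it is worth confirming that the downloaded key can actually be used from code before wiring it into the pipeline. Below is a minimal sketch that only checks the key file parses and a client can be built from it; the `cred.json` name follows the repository layout in section 4.3, and everything else is an assumption rather than project code:

```python
# Smoke test: load the downloaded service-account key and build a BigQuery
# client from it. The cred.json name mirrors the repository layout in section
# 4.3; adjust the path to wherever the key was saved. No API call is made here,
# so this only verifies the key file is readable and well-formed; real access
# additionally depends on the roles attached in section 3.3.
from google.cloud import bigquery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file("cred.json")
client = bigquery.Client(credentials=credentials, project=credentials.project_id)

print(credentials.service_account_email, credentials.project_id)
```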
### 3.3. Roles to Attach to the Service Account:

1. Predefined roles:
    1. roles/compute.admin: Allows everything that can be done with Compute Engine, including creating and managing virtual machines, disks, and networks.
    2. roles/bigquery.dataEditor: Allows reading, writing, and editing data in BigQuery datasets and tables.
2. A new custom role defined with the following:
    1. Cloud Scheduler: The service account needs roles/cloudscheduler.serviceAccount to create and manage Cloud Scheduler jobs.
    2. Cloud Functions: The service account needs roles/cloudfunctions.serviceAccount to create and manage Cloud Functions.
    3. Cloud Storage: The service account needs roles/storage.legacyBucketWriter to write to Cloud Storage buckets.
    4. Logging: The service account needs roles/logging.viewer to view logs.
    5. Monitoring: The service account needs roles/monitoring.viewer to view monitoring data.
    6. Cloud Pub/Sub: The service account needs roles/pubsub.publisher to publish messages to Cloud Pub/Sub topics.

Both the predefined roles and the custom role are attached to the service account.

### 3.4. Creating a Windows Server VM Instance in Compute Engine:

1. In the Google Cloud console, go to the Create an instance page.
2. In the Boot disk section, click Change to begin configuring your boot disk.
3. On the Public images tab, choose Windows Server from the Operating system list.
4. Choose Windows Server 2019 Datacenter from the Version list.
5. Click Select.
6. In the Firewall section, select Allow HTTP traffic.
7. To create the VM, click Create.

### 3.5. Connect to the VM Instance

1. In the Google Cloud console, go to the VM instances page.
2. Under the Name column, click the name of your VM instance.
3. Under the Remote access section, click Set Windows password.
4. Specify a username, then click Set to generate a new password for this Windows Server VM. Save the username and password so you can log into the VM.
5. Install the Chrome Remote Desktop service on your VM.
6. Connect to your VM instance using your choice of graphical or command line tools.

After accessing the VM, we clone the repo https://github.com/Hawary13/LondonCabWarehousingETL.git using:

>> git clone https://github.com/Hawary13/LondonCabWarehousingETL.git

## 4. The ELT Business and Logic

### 4.1. Abstraction:

The ELT will be hosted on a Windows Server virtual machine (VM) on Google Compute Engine. The ELT is written in Python and manages its dependencies using a requirements file and a virtual environment.

### 4.2. The ELT Business:

1. Cloud Scheduler on GCP will be configured with the desired frequency to invoke an HTTP endpoint.
2. The HTTP endpoint will process the GET request and run some defined logic that starts up a pre-defined GCP Compute Engine VM instance (see the sketch after this list).
3. The Compute Engine instance will be pre-configured with the following:
    1. Python 3.11 installed.
    2. A virtual environment created with a requirements file to manage the ELT Python dependencies.
    3. An ODBC driver installed on the Windows Server VM to manage the SQL Server connection.
    4. A batch file that executes the ELT when the VM starts up.
4. The ELT will shut down the VM instance once it finishes the EL process.
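The document does not include the endpoint's source. Sketched below is what an HTTP-triggered Cloud Function implementing step 2 could look like, assuming the `google-cloud-compute` client library and the Functions Framework; the project, zone, and instance names are placeholders, not values from this project:

```python
# Sketch of the HTTP endpoint from step 2: an HTTP-triggered Cloud Function
# that starts the pre-defined Compute Engine VM. Project, zone, and instance
# names are placeholders.
import functions_framework
from google.cloud import compute_v1

PROJECT_ID = "my-gcp-project"      # placeholder
ZONE = "europe-west2-a"            # placeholder
INSTANCE_NAME = "elt-windows-vm"   # placeholder


@functions_framework.http
def start_elt_vm(request):
    """Handle the scheduler's GET request by starting the ELT VM."""
    client = compute_v1.InstancesClient()
    operation = client.start(project=PROJECT_ID, zone=ZONE, instance=INSTANCE_NAME)
    operation.result()  # wait for the start operation to complete
    return "ELT VM start requested", 200
```

Cloud Scheduler then only needs a cron expression and the function's trigger URL; the batch file described in section 4.4 takes over once the VM boots.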
### 4.3. The Project's Git Repo Skeleton:

* .gitignore
* README.md
* SQLS2BQ.ipynb: A development and experimentation playground.
* SQLS2BQ.py: The main ELT Python script.
* SQLS2BQ.bat: A batch file to execute the ELT.
* cred.json: The service account credentials file.
* env_variables.env: The file holding the connection strings.
* requirements.txt: A file used by pip (a package-management system).

### 4.4. The ETL (the ELT's Cornerstone), Zooming In:

#### 4.4.1. The ELT Python Script

```python
# Import the necessary libraries.
import os
import warnings
from pathlib import Path

import pandas as pd
import numpy as np
import pyodbc
import tqdm
import sqlalchemy as sa
from sqlalchemy import create_engine
from sqlalchemy.engine import URL
from google.cloud import bigquery
from google.oauth2 import service_account
from google.api_core.exceptions import AlreadyExists, Conflict
from dotenv.main import load_dotenv
```

```python
# Load the environment variables from the .env file.
load_dotenv(r".\env_variables.env")

# Ignore warnings.
warnings.filterwarnings("ignore")
```

```python
# Define the credentials for accessing BigQuery.
credentials_json = os.environ["GCP_SERVICE_ACCOUNT_CREDENTIALS"]

# Build the BigQuery client from the service account key file
# (used by load_DF2BQ below).
credentials = service_account.Credentials.from_service_account_file(credentials_json)
client = bigquery.Client(credentials=credentials, project=credentials.project_id)

# Define the database driver, server, database name, user ID, and password.
database_driver = os.environ["DATABASE_DRIVER"]
server = os.environ["SERVER"]
database_name = os.environ["DATABASE_NAME"]
user_id = os.environ["USER_ID"]
user_password = os.environ["USER_PASSWORD"]
dataset_id = os.environ["DATASET_ID"]
```

```python
# Define a function to extract a table from SQL Server.
def extract_SQLtable(cnxn, table_name):
    """Extract a table from SQL Server.

    Args:
        cnxn (obj): Connection object.
        table_name (str): Name of the table to extract.

    Returns:
        (pd.DataFrame): Pandas DataFrame.
    """
    # Create a query to select all rows from the table.
    query = f"SELECT * FROM {table_name};"
    # Use Pandas to read the results of the query as a DataFrame.
    df = pd.read_sql(query, cnxn)
    return df


# Define a function to load a Pandas DataFrame to BigQuery.
def load_DF2BQ(df, dataset_id, table_name):
    """Load a Pandas DataFrame to BigQuery.

    Args:
        df (pd.DataFrame): Pandas DataFrame.
        dataset_id (str): Dataset ID in BigQuery.
        table_name (str): Table name in BigQuery.
    """
    # Create the table ID for the BigQuery table.
    table_id = f"{dataset_id}.{table_name}"
    # Create a load job config with WRITE_TRUNCATE disposition.
    job_config = bigquery.LoadJobConfig()
    job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
    # Load the DataFrame to BigQuery.
    job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
    # Wait for the load job to finish.
    job.result()
```

```python
# Define a function to connect to SQL Server and extract tables.
def main():
    """Connect to SQL Server, extract the tables, and load them to BigQuery."""
    # Create a connection string to SQL Server.
    connection_string = (
        f"Driver={database_driver};"
        f"Server={server};"
        f"Database={database_name};"
        f"uid={user_id};pwd={user_password};"
    )
    # Connect to SQL Server.
    cnxn = pyodbc.connect(connection_string)
    # Create a cursor object.
    cursor = cnxn.cursor()
    # Use the cursor to execute a query to get the list of tables.
    cursor.execute(f"USE {database_name};")
    table_names = [_.table_name for _ in cursor.tables()]
    # Iterate through the list of tables, extract each table, and load it to BigQuery.
    for table_name in tqdm.tqdm(table_names):
        try:
            df = extract_SQLtable(cnxn, table_name)
            load_DF2BQ(df, dataset_id, table_name)
        except Exception as exc:
            # Report the failed table and keep going with the rest.
            print(f"Failed to load {table_name}: {exc}")


if __name__ == "__main__":
    main()
```
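Step 4 of section 4.2 says the ELT shuts the VM down once the EL pass finishes, but that part is not shown in the script above. A hedged sketch of one way to do it with the `google-cloud-compute` client, callable at the end of `main()`; the project, zone, and instance names are placeholders:

```python
# Sketch of the self-shutdown step from section 4.2 (step 4). Not part of the
# repository code; project, zone, and instance names are placeholders.
from google.cloud import compute_v1


def stop_elt_vm(project_id: str, zone: str, instance_name: str) -> None:
    """Stop the Compute Engine VM that hosts the ELT."""
    instances_client = compute_v1.InstancesClient()
    operation = instances_client.stop(project=project_id, zone=zone, instance=instance_name)
    operation.result()  # block until the stop operation completes


# Example call, e.g. as the last line of main():
# stop_elt_vm("my-gcp-project", "europe-west2-a", "elt-windows-vm")
```

On a Windows VM the same effect can also be achieved with a local `shutdown /s` command at the end of the batch file; stopping the instance through the Compute Engine API simply keeps the logic in Python.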
#### 4.4.2. Managing the Environment Variables

Environment variables are managed using an .env file. When moving from development to production, we will need to update the values in the file to reflect the production environment's parameters. The .env file has the following structure:

```
# env_variables.env
DATABASE_DRIVER=ODBC Driver 17 for SQL Server
GCP_SERVICE_ACCOUNT_CREDENTIALS=dataengineeringhub-ad7ef6785356.json
SERVER=95.211.190.153
DATABASE_NAME=LondonCab_Dispatching_230727
USER_ID=ETLAdmin
USER_PASSWORD=ETLAdmin@123
DATASET_ID=lc-gcp-bigquery-project.google_analytics_london_cab
```

#### 4.4.3. Python Requirements:

Requirements are managed using a virtual environment and a requirements file. The environment needs to be initialized once when the VM is provisioned. The .bat file, which is responsible for executing the ELT, will create the environment and install the libraries if they are not already installed. If the first run of the ELT was successful, the .bat file will check that all dependencies are in place before each subsequent execution, so the environment does not need to be re-instantiated.

If the .bat file fails to prepare the requirements, you can run the command below to prepare them manually and ensure the ELT can run successfully. The command should be run from a CLI in the project base directory, since the requirements file path, the SQLS2BQ.py script, and the virtual environment are all resolved relative to it; if you run it from a different directory, the ELT will not be able to locate the requirements file and will fail.

>> pip.exe install -r app_files/requirements.txt

#### 4.4.4. Execution .bat File

```bat
@Echo Off
Set "VIRTUAL_ENV=venv"

Rem Create the virtual environment and install the dependencies on the first run.
If Not Exist "%VIRTUAL_ENV%\Scripts\activate.bat" (
    python.exe -m venv %VIRTUAL_ENV%
    "%VIRTUAL_ENV%\Scripts\python.exe" -m pip install -r app_files/requirements.txt
)

If Not Exist "%VIRTUAL_ENV%\Scripts\activate.bat" Exit /B 1

Call "%VIRTUAL_ENV%\Scripts\activate.bat"
python.exe .\SQLS2BQ.py

Pause
Exit /B 0
```
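After a run, whether triggered by the scheduler chain or by executing the .bat file manually, a quick way to confirm the load succeeded is to list the tables that landed in the target dataset. A minimal sketch, reusing the dataset ID from env_variables.env and the cred.json key from the repository layout:

```python
# Post-run check (a sketch, not part of the repository): list the tables that
# landed in the target BigQuery dataset after the ELT completes. The dataset ID
# comes from env_variables.env and cred.json from the repository layout.
from google.cloud import bigquery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file("cred.json")
client = bigquery.Client(credentials=credentials, project=credentials.project_id)

dataset_id = "lc-gcp-bigquery-project.google_analytics_london_cab"
for table in client.list_tables(dataset_id):
    print(table.table_id)
```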