# Tutorial: Indexing Block Timestamp

The [tutorials block timestamp](https://github.com/subquery/tutorials-block-timestamp) project is an official SubQuery example that indexes the timestamp of each finalized block. By processing the `timestamp.set` extrinsic and extracting its first argument, we can retrieve the timestamp. This tutorial starts from scratch: it shows you how to write this SubQuery project, then run it using `subquery-node`, and finally query the block timestamps through the GraphQL interface.

Starting codebase repository: [https://github.com/subquery/subql-starter/tree/v1.0.0](https://github.com/subquery/subql-starter/tree/v1.0.0)

Completed codebase repository: [https://github.com/fachebot/tutorials-block-timestamp](https://github.com/fachebot/tutorials-block-timestamp)

## 1. Prerequisites

* Ubuntu Focal 20.04 (LTS)
* Docker Engine
* Docker Compose
* nodejs & yarn
* @subql/cli

### Install Docker Engine

Please see the official installation tutorial: [https://docs.docker.com/engine/install/ubuntu/](https://docs.docker.com/engine/install/ubuntu/)

### Install Docker Compose

```bash
sudo curl -SL https://github.com/docker/compose/releases/download/v2.5.0/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
```

### Install nodejs and yarn

```bash
sudo apt-get install npm -y
sudo npm install -g n yarn
sudo n 16
```

### Install @subql/cli

`@subql/cli` is a client tool for SubQuery. It is used to initialise a scaffolded SubQuery project, build SubQuery project code, generate GraphQL schemas for the query node, and so on. Run the following command from the console:

```bash
sudo npm install -g @subql/cli
```

## 2. Create Project

Inside the directory in which you want to create a SubQuery project, simply run the following command to get started.

```bash
subql init tutorials-block-timestamp
```

You'll be asked certain questions as the SubQuery project is initialised:

* Name: A name for your SubQuery project
* Network: The blockchain network that this SubQuery project will be developed to index. Use the arrow keys on your keyboard to select from the options; for this guide we will use "Polkadot"
* Template: Select a SubQuery project template that will provide a starting point to begin development; we suggest selecting the "Starter project"
* Git repository (Optional): Provide a Git URL to a repo that this SubQuery project will be hosted in (when hosted in SubQuery Explorer)
* RPC endpoint (Required): Provide an HTTPS URL to a running RPC endpoint that will be used by default for this project. You can quickly access public endpoints for different Polkadot networks, create your own private dedicated node using OnFinality, or just use the default Polkadot endpoint. This RPC node must be an archive node (have the full chain state). For this guide we will use the default value "https://polkadot.api.onfinality.io"
* Authors (Required): Enter the owner of this SubQuery project here (e.g. your name!)
* Description (Optional): You can provide a short paragraph about your project that describes what data it contains and what users can do with it
* Version (Required): Enter a custom version number or use the default (1.0.0)
* License (Required): Provide the software license for this project or accept the default (Apache-2.0)

```bash
ubuntu@ubuntu:~# subql init tutorials-block-timestamp
? Select a network family Substrate
? Select a network Polkadot
? Select a template project subql-starter     Starter project for subquery
RPC endpoint: [wss://polkadot.api.onfinality.io/public-ws]:
Git repository [https://github.com/subquery/subql-starter]:
Fetching network genesis hash... done
Author [Ian He & Jay Ji]:
Description [This project can be use as a starting po...]:
Version [1.0.0]:
License [MIT]:
Preparing project... done
tutorials-block-timestamp is ready
```

After the initialisation process is complete, you should see a folder with your project name created inside the directory.
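As a quick sanity check, you can list the new directory. The exact contents depend on the template version, but you should at least see the manifest, schema, mapping sources, and Docker configuration that the rest of this tutorial edits:

```bash
ls tutorials-block-timestamp
# Expect something like the following (exact files vary by template version):
# docker-compose.yml  package.json  project.yaml  schema.graphql  src  ...
```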
Last, under the project directory, run the following command to install the new project's dependencies.

```bash
cd tutorials-block-timestamp
yarn
```

## 3. Make Changes to the Project

In the starter package that you just initialised, we have provided a standard configuration for your new project. You will mainly be working on the following files:

1. The GraphQL Schema in `schema.graphql`
2. The Project Manifest in `project.yaml`
3. The Mapping functions in the `src/mappings/` directory

### 3.1 Updating the GraphQL Schema File

The `schema.graphql` file defines the various GraphQL schemas. Due to the way that the GraphQL query language works, the schema file essentially dictates the shape of your data from SubQuery. It's a great place to start because it allows you to define your end goal up front.

We're going to update the `schema.graphql` file to read as follows.

```graphql
type BlockTs @entity {
  # Block hash
  id: ID!
  blockHeight: BigInt!
  timestamp: Date!
}
```

> Important: When you make any changes to the schema file, please ensure that you regenerate your types directory.

Do this now:

```bash
yarn codegen
```

You'll find the generated models in the `/src/types/models` directory. For more information about the `schema.graphql` file, check out our documentation under [Build/GraphQL Schema](https://university.subquery.network/build/graphql.html).

### 3.2 Updating the Project Manifest File

The Project Manifest (`project.yaml`) file can be seen as the entry point of your project, and it defines most of the details on how SubQuery will index and transform the chain data. We won't make many changes to the manifest file, as it has already been set up correctly, but we do need to change our handlers. Remember we are planning to index the timestamp of every finalized block; as a result, we need to update the `dataSources` section to read the following.

```yaml
dataSources:
  - kind: substrate/Runtime
    startBlock: 1
    mapping:
      file: ./dist/index.js
      handlers:
        - handler: handleTimestampSet
          kind: substrate/CallHandler
          filter:
            module: timestamp
            method: set
```

The `dataSources` section defines the data that will be filtered and extracted, and the location of the mapping function handler that applies the data transformation.

Call handlers (`substrate/CallHandler`: Substrate/Polkadot only) are used when you want to capture information on certain Substrate extrinsics. You should use [Mapping Filters](https://university.subquery.network/build/manifest.html#mapping-filters) in your manifest to filter calls, which reduces the time it takes to index data and improves mapping performance. Call handlers support `module` and `method` filters. This means we'll run the `handleTimestampSet` mapping function each and every time there is a `timestamp.set` call.

For more information about the Project Manifest (`project.yaml`) file, check out our documentation under [Build/Manifest](https://university.subquery.network/build/manifest.html).
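To make the filter concrete: conceptually, the indexer inspects every extrinsic in a block and only invokes our handler when the call's module and method match the filter above. The sketch below is illustrative only, not the actual indexer code; `matchesTimestampSetFilter` is a hypothetical helper, but the `section`/`method` fields it reads are the standard polkadot.js call metadata.

```ts
import { SubstrateExtrinsic } from "@subql/types";

// Illustrative sketch: roughly what the module/method filter in project.yaml checks.
// matchesTimestampSetFilter is a hypothetical helper, not part of @subql/node.
function matchesTimestampSetFilter(extrinsic: SubstrateExtrinsic): boolean {
  const { section, method } = extrinsic.extrinsic.method;
  return section === "timestamp" && method === "set";
}
```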
### 3.3 Add a Mapping Function

Mapping functions define how chain data is transformed into the optimised GraphQL entities that we previously defined in the `schema.graphql` file.

Navigate to the default mapping function in the `src/mappings` directory. You can delete all of the existing functions. The `handleTimestampSet` function receives call data whenever a call matches the filters that we specified previously in our `project.yaml`. We are going to add it to process all `timestamp.set` calls and save them to the GraphQL entity that we created earlier. Add the `handleTimestampSet` function and update it to the following (note the additional imports):

```ts
import { BlockTs } from '../types/models/BlockTs';
import { SubstrateExtrinsic } from "@subql/types";
import { Compact } from '@polkadot/types';
import { Moment } from '@polkadot/types/interfaces';

export async function handleTimestampSet(extrinsic: SubstrateExtrinsic): Promise<void> {
    // Use the block hash as the entity ID
    const record = new BlockTs(extrinsic.block.block.header.hash.toString());
    record.blockHeight = extrinsic.block.block.header.number.toBigInt();
    // The first argument of timestamp.set is the moment in milliseconds since the Unix epoch
    const moment = extrinsic.extrinsic.args[0] as Compact<Moment>;
    record.timestamp = new Date(moment.toNumber());
    await record.save();
}
```

What this is doing is receiving a `SubstrateExtrinsic`, which includes the block data on its payload. We extract this data and then instantiate a new `BlockTs` entity that we defined earlier in the `schema.graphql` file. We add additional information and then use the `.save()` function to save the new entity (SubQuery will automatically save this to the database).

For more information about mapping functions, check out our documentation under [Build/Mappings](https://university.subquery.network/build/mapping.html).

## 4. Build the Project

In order to run your new SubQuery project we first need to build our work. Run the build command from the project's root directory.

```bash
yarn build
```

> Important: Whenever you make changes to your mapping functions, you'll need to rebuild your project.

## 5. Running and Querying your Project

### 5.1 Run your Project with Docker

Whenever you create a new SubQuery project, you should always run it locally on your computer to test it first. The easiest way to do this is by using Docker.

All configuration that controls how a SubQuery node is run is defined in the `docker-compose.yml` file. For a new project that has just been initialised you won't need to change anything here, but you can read more about the file and the settings in our [Run a Project section](https://university.subquery.network/run_publish/run.html).

Under the project directory, run the following command:

```bash
sudo docker-compose up
```

It may take some time to download the required packages ([@subql/node](https://www.npmjs.com/package/@subql/node), [@subql/query](https://www.npmjs.com/package/@subql/query), and Postgres) for the first time, but soon you'll see a running SubQuery node. Be patient here.
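While the stack is starting, it can be helpful to confirm that all services are up and to watch their output. The commands below are standard Docker Compose commands and assume nothing beyond the `docker-compose.yml` shipped with the starter project:

```bash
# List the services defined in docker-compose.yml and their current state
sudo docker-compose ps

# Follow the logs of all services (Ctrl+C to stop following)
sudo docker-compose logs -f
```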
### 5.2 Query your Project

Open your browser and head to [http://localhost:3000](http://localhost:3000/). You should see a GraphQL playground in the explorer, with the schemas that are ready to query. On the top right of the playground, you'll find a Docs button that will open a documentation drawer. This documentation is automatically generated and helps you find what entities and methods you can query.

For a new SubQuery starter project, you can try the following query to get a taste of how it works, or [learn more about the GraphQL Query language](https://university.subquery.network/run_publish/graphql.html).

```graphql
{
  query {
    blockTs(first: 10, orderBy: BLOCK_HEIGHT_DESC) {
      nodes {
        id
        blockHeight
        timestamp
      }
    }
  }
}
```

![](https://i.imgur.com/OfbRWym.png)
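If everything is indexing correctly, the playground should return a result shaped roughly like the following. The values here are placeholders, and the exact serialization (for example, whether `blockHeight` comes back as a string) can vary with the query-service version; your block hashes, heights, and timestamps will reflect whatever has been indexed so far:

```json
{
  "data": {
    "query": {
      "blockTs": {
        "nodes": [
          {
            "id": "0x…block hash…",
            "blockHeight": "1000",
            "timestamp": "2020-05-26T12:00:00.000Z"
          }
        ]
      }
    }
  }
}
```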