
The LOC-Doc is Here!


Dear LOC users, partners and clients:

Our new user documentation site for LOC is now officially live. From now on, the latest features and references will be updated there.

https://documentation.loc.fst.network/

In the meantime, we will continue to use HackMD notebooks for workshops and other educational activities. We thank you again for the fantastic adventure you've had with us, and look forward to seeing you along the new journey too.

FST Network, Sep 2022


LOC Studio Guidebook

LOC Studio V1.0.0
Release Date: 8 May 2022 (latest version)

About LOC Studio

LOC Studio is one of the tools of FST's Virtual Assistive Data Officer (VADO) that helps users build and execute complex business/data processes.

*LOC Studio will automatically log out the current user after 5 mins of inactivity.

Benefits

  • Effective Workflow: Allows IT and business units to have clearer and smoother communication cycles.
  • Function-as-a-Service (FaaS): Allows users to focus solely on business logic, writing simple code to create application functionality without worrying much about the complex infrastructure.
  • Event Sourcing: Ensures that the state changes of an application/service/system are captured as events.
  • Business Activity Monitoring (BAM): Provides real-time information about the status and results of the processes and transactions of your interest.

Concepts

  • LOC Studio: LOC Studio is a tool for users to access LOC core functions and to deploy/update resources with a graphical user interface.
  • Business Process: A business process is a collection of business tasks and activities (provided by users) which together generate a result that meets a business target.
  • Data Process: In the commercial world, data processing refers to the handling of the data required to run business processes in enterprises. In LOC, a data process converts raw data into a business-usable form by running multiple generic logics and one aggregator logic to produce an output. When LOC is used to perform a pre-defined operation on data, a data process is generated.
  • Generic Logic: A generic logic is the basic component to form a data process. It can be a descriptive record of an end user’s static data, dynamic trade data, or behavioural data; a piece of information or an event in the existing system; or a data change in a database.
  • Aggregator Logic: In a data process, there is only one aggregator logic which is to aggregate all the generic logics of the same data process.
  • DID: A Digital Identity (DID) is essentially a unique string (e.g. a UUID/CUID/GUID, a hash, or a random string) that holds labels (to be accurate, as the source and target DID of an event). A DID can be generated or derived by logic or set manually.
  • Label Name: A Label Name is an elementary metadata unit in LOC, named by any arbitrary string. Each label name has a unique DID, which can be set up by logic or manually.
  • Potential Labelling: A schema used to set up an event.
  • Event: An emitted event is stored in the event store inside of LOC.
  • Agent: An agent is the connector when a logic proactively acquires data.
  • API Route: An API route is an API used to set up a trigger that will invoke a data process.
  • HTTP: Hypertext Transfer Protocol (HTTP) is an application-layer protocol for transmitting hypermedia documents, such as HTML. https://developer.mozilla.org/en-US/docs/Web/HTTP
  • BAM: Business Activity Monitoring (BAM) describes the processes and technologies that enhance situation awareness and enable analysis of critical business performance indicators based on real-time data.
  • Event Sourcing: Event Sourcing defines an approach to handling operations on data that is driven by a sequence of events, each of which is recorded in an append-only store. https://docs.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
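To make the Event Sourcing concept above concrete, here is a minimal sketch in plain JavaScript of an append-only event store whose state is derived by replaying events. The account events are invented for illustration; this is not the LOC event store API.

```javascript
// Minimal Event Sourcing sketch: every state change is recorded as an
// event in an append-only store, and state is derived by replaying events.
const eventStore = [];

function emit(event) {
  // Events are only ever appended, never updated or deleted.
  eventStore.push(event);
}

// Derive an account balance purely from its event history.
function replayBalance(accountId) {
  return eventStore
    .filter((e) => e.accountId === accountId)
    .reduce((balance, e) => {
      if (e.type === "deposited") return balance + e.amount;
      if (e.type === "withdrawn") return balance - e.amount;
      return balance;
    }, 0);
}

emit({ accountId: "A-1", type: "deposited", amount: 100 });
emit({ accountId: "A-1", type: "withdrawn", amount: 30 });
emit({ accountId: "A-2", type: "deposited", amount: 50 });

console.log(replayBalance("A-1")); // 70
```

Because the store is append-only, the full history of every state change is preserved and can be re-analysed later, which is what enables BAM-style monitoring.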

User Management

After clicking Users from the left navigation bar, you can select View all users to display all users.


In this section, you can not only view all users but also add, edit and delete them.

Please DO NOT delete any accounts that have already been active, because their data, such as configured logics or data processes, would be eliminated, affecting the operation of the existing accounts. Instead, please DEACTIVATE them (disable users).


Audience: LOC Studio Admin
Note: Only the Admins have the right to view, create, edit, and delete users.

Create New Users

  • Step 1: Log into LOC Studio User Management with your Admin account.
  • Step 2: On the left menu, please select Users.
  • Step 3: On the top right, please click Add Users.
  • Step 4: Please fill in the user information as detailed as possible. (Username and Email are mandatory.)


There are a few more steps to be completed.

  • Step 5: In the Credentials tab, please ensure that Temporary is switched on to set a temporary password for the user, and inform the user to change the password upon login.


Don’t forget this temporary password and remember to give it to the user.

Users are required to reset their password when they log in to their account for the first time.


  • Step 6: In the Role Mappings tab, you can check the role authorisation of the selected user.


Remove Users

  • Step 1: Log into LOC Studio User Management with your Admin account.
  • Step 2: On the left menu, please select Users.
  • Step 3: Under Actions, please select the user you would like to remove and click Delete.


Forgot Passwords

  • Step 1: Log into LOC Studio User Management with your Admin account.
  • Step 2: On the left menu, please select Users.
  • Step 3: Under Actions, please select the user whose password you would like to reset and click Edit.
  • Step 4: Switch to the Credentials tab, under which you will find the Reset Password section.
  • Step 5: After deleting the old password, please ensure that Temporary is switched on to set a temporary password for the user, and inform the user to change the password upon login.
  • Step 6: That user MUST reset the password after login.


Data Discovery Introduction

Event

This repository is where users can see the relationships between events, or switch to a graph for visualisation, helping to fulfil event sourcing.


By clicking +, you can see more details of this particular event.


Also, switching to the graph and setting up a filter of your interest allows you to see the relationships between events more clearly, for event sourcing or BAM.

(Without Filter)


(With Filter)


Label Name

Every event has exactly one label name and two DIDs (source and target). However, one label name can be re-used in different events. In this repository, you are able to filter on a specific label name to see all the DIDs related to it.


DID

This repository is where users can see the relationship between the DID and the label name. You can choose a specific DID and see all the label names related to it.


Application of Data Discovery

Whenever an event is emitted, 1 label name and 2 DIDs (source and target) are created as you have set up. Additionally, you might have created multiple events with the same label name, or multiple DIDs related to the same label name. In these cases, each repository serves its own purpose.

For example, in the graph shown below, there are 4 events with 2 different label names, 4 different source DIDs, and 3 different target DIDs. We can go to Event and search all the events in this execution to get this relationship graph. In addition, you may be interested in how many DIDs have been created with a specific label name, say labelName_1. After searching for labelName_1 in Label Name, you will get 3 DIDs (2 source and 1 target). On the other hand, you can go to DID to check how many label names have been put onto a specific DID, say Target_1. In this case, you will see that 2 label names have been put onto Target_1, since 2 events share the same target DID (Target_1).

In conclusion, Data Discovery helps users quickly inspect the relationships between events and discover useful information or insights about a certain event, label name, or DID.
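The three repositories can be thought of as different queries over the same event list. Here is a plain-JavaScript sketch with hypothetical events (not tied to any particular execution):

```javascript
// Hypothetical emitted events; label names and DIDs are invented.
const events = [
  { labelName: "labelName_1", sourceDID: "Source_1", targetDID: "Target_1" },
  { labelName: "labelName_1", sourceDID: "Source_2", targetDID: "Target_1" },
  { labelName: "labelName_2", sourceDID: "Source_3", targetDID: "Target_1" },
  { labelName: "labelName_2", sourceDID: "Source_4", targetDID: "Target_2" },
];

// Label Name view: all distinct DIDs related to one label name.
function didsForLabel(label) {
  const related = events.filter((e) => e.labelName === label);
  return {
    sources: [...new Set(related.map((e) => e.sourceDID))],
    targets: [...new Set(related.map((e) => e.targetDID))],
  };
}

// DID view: all distinct label names attached to one target DID.
function labelsForTarget(did) {
  return [
    ...new Set(events.filter((e) => e.targetDID === did).map((e) => e.labelName)),
  ];
}

console.log(didsForLabel("labelName_1")); // 2 sources, 1 target -> 3 DIDs
console.log(labelsForTarget("Target_1")); // ["labelName_1", "labelName_2"]
```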


Data Process Explorer Introduction

After logging into LOC Studio, please click the menu of Data Process on the left. Afterwards, please select DP Interactive Map.

In this section, we will be explaining the structure of DP Interactive Map and how to start deploying a data process via LOC Studio.

Unit

A default Unit exists in LOC Studio from the beginning. You can regard a Unit as one business unit such as marketing, sales, operations, etc. Normally, there is only one (business) Unit for a user to access.

If you want to know more about the information of a certain Unit, please right click Unit and select Unit Info.

Create a Unit

For the time being, users cannot create a Unit themselves. If you need to create one, please contact your Admin or FST Network.

Edit a Unit

For the time being, users cannot edit a Unit themselves. If you need to edit one, please contact your Admin or FST Network.

Remove a Unit

For the time being, users cannot remove a Unit themselves. If you need to remove one, please contact your Admin or FST Network.

Project

If you want to know more about the information of a certain Project, please right click that Project and select Project Info.

Create a Project

If you would like to create a Project, please right click Unit and select New Project. After filling in the Project Name and clicking Create, the project will be created.

In each Unit, you can create several projects.

Please note that

  • Project Name is required.
  • Description is optional.
  • In each Unit, Project Name cannot be the same.

Edit a Project

If you would like to edit a specific Project, please right click that Project and select Project Info. In the pop-up window, please click Edit to start your revision. Once you have completed the revision, please click Update to keep it; if you would like to discard the revision, please click Cancel to exit.

Remove a Project

If you would like to remove a specific Project, please right click that Project and select Delete Project.

Please note that once you Delete Project, there is NO WAY to bring it back. Please think twice before you act.

Scenario

If you want to know more about the information of a certain Scenario, please right click that Scenario and select Scenario Info.

Create a Scenario

If you would like to create a Scenario, please right click Project and select New Scenario. After filling in the Scenario Name and clicking Create, the scenario will be created.

In each Project, you can create several scenarios.

Please note that

  • Scenario Name is required.
  • Description is optional.
  • In each Project, Scenario Name cannot be the same.

Edit a Scenario

If you would like to edit a specific Scenario, please right click that Scenario and select Scenario Info. In the pop-up window, please click Edit to start your revision. Once you have completed the revision, please click Update to keep it; if you would like to discard the revision, please click Cancel to exit.

Remove a Scenario

If you would like to remove a specific Scenario, please right click that Scenario and select Delete Scenario.

Please note that once you Delete Scenario, there is NO WAY to bring it back. Please think twice before you act.

Data Process

If you want to know more about the information of a certain Data Process, please right click that Data Process and select Data Process Info.

Create a Data Process

If you would like to create a Data Process, please right click Scenario and select New Data Process. After filling in the Data Process Name and clicking Create, the data process will be created.

In each Scenario, you can create several data processes.

Please note that

  • Data Process Name is required.
  • Description is optional.
  • Execution Timeout is required and defaults to 180 seconds. If the data process is very complicated or handles a large amount of data, its execution might fail because of this default timeout.
  • In each Scenario, Data Process Name cannot be the same.

Edit a Data Process

If you would like to edit a specific Data Process, please right click that Data Process and select Data Process Info. In the pop-up window, please click Edit to start your revision. Once you have completed the revision, please click Update to keep it; if you would like to discard the revision, please click Cancel to exit.

Remove a Data Process

If you would like to remove a specific Data Process, please right click that Data Process and select Delete Data Process.

Please note that once you Delete Data Process, there is NO WAY to bring it back. Please think twice before you act.

Logic

A (generic) logic is the minimum component of a data process; on LOC Studio, a data process must be composed of AT LEAST 1 generic logic and 1 aggregator logic.

Create a (Generic) Logic

If you would like to create a (Generic) Logic, please click +Generic Logic, choose where it suits you, and start coding.

In the Logic Body tab, you can either write code yourself or use +New Potential Labelling; the latter is only for creating an event.

If you choose to write your own codes, here are some agent templates for your reference.

Event Logic
await ctx.agents.eventStore.emit([
    {
      sourceDID: "apar-js/examples/eventEmitExample",
      targetDID: "saffron-eventstore",
      labelName: "TEST_LABEL_FROM_EXAMPLE",
      meta: "This is a test label",
      type: "default", // or "error",
    },
  ])
Logic for HTTP Agents
let resp = await ctx.agents.http!.post(
    "https://6051775217ef.mock.pstmn.io/test123",
    {},
    Http.ContentType.Json,
    new Uint8Array([123, 34, 97, 34, 58, 51, 51, 125])
  );
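As an aside, the Uint8Array in the template above is just a UTF-8 encoded JSON body; decoding it in plain JavaScript shows what would actually be posted:

```javascript
// The byte array from the HTTP agent template above is a UTF-8 encoded
// JSON body; TextDecoder recovers the original string.
const bytes = new Uint8Array([123, 34, 97, 34, 58, 51, 51, 125]);
const body = new TextDecoder().decode(bytes);

console.log(body);             // {"a":33}
console.log(JSON.parse(body)); // { a: 33 }
```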

Logic for DB Agents (Microsoft SQL)
const connectionInfo = {
    server: process.env.MSSQL_HOST,
    port: process.env.MSSQL_PORT,
    username: process.env.MSSQL_USERNAME,
    password: process.env.MSSQL_PASSWORD,
    database: process.env.MSSQL_DATABASE,
    trustCert: process.env.MSSQL_TRUSTCERT,
  };

  let db = null;
  try {
    db = await ctx.agents.database?.connect({
      databaseDriver: Database.Driver.Mssql,
      connectionString: `server=${connectionInfo.server},${connectionInfo.port};uid=${connectionInfo.username};pwd=${connectionInfo.password};database=${connectionInfo.database};TrustServerCertificate=${connectionInfo.trustCert};`,
    });
    // ... run your queries with `db` here ...
  } catch (error) {
    ctx.agents.logging.error(error.message);
  }
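If it helps to see how the template's connectionString is assembled from those environment variables, here is a plain-JavaScript sketch (the host, credential, and database values below are placeholders, not real settings):

```javascript
// Assemble an MSSQL connection string from environment-style variables,
// following the shape used in the DB agent template above.
function buildMssqlConnectionString(env) {
  return (
    `server=${env.MSSQL_HOST},${env.MSSQL_PORT};` +
    `uid=${env.MSSQL_USERNAME};pwd=${env.MSSQL_PASSWORD};` +
    `database=${env.MSSQL_DATABASE};` +
    `TrustServerCertificate=${env.MSSQL_TRUSTCERT};`
  );
}

const demo = buildMssqlConnectionString({
  MSSQL_HOST: "db.example.com", // placeholder values
  MSSQL_PORT: "1433",
  MSSQL_USERNAME: "loc",
  MSSQL_PASSWORD: "secret",
  MSSQL_DATABASE: "orders",
  MSSQL_TRUSTCERT: "true",
});
console.log(demo);
```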

To use +New Potential Labelling, first right click on where you would like to put it in the Logic Body, then simply fill in your Label Name, Meta, Source DID, and Target DID in the pop-up window. Please note that only Meta is optional; the rest are required. Additionally, the potential labelling is used to emit an event.

After completing the form, this information will be automatically transformed into code and shown in the Logic Body, where you can revise it further.

Please note that Label Name, Meta, Source DID, and Target DID will be treated as string when converting from the potential labelling to the logic body.
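A quick plain-JavaScript illustration of that conversion: even a numeric Meta value such as an age comes out as a string in the generated logic body (the form values below are invented):

```javascript
// Potential Labelling fields are treated as strings when converted into
// the logic body; e.g. a numeric Meta of 42 becomes the string "42".
const form = {
  labelName: "greeting",
  meta: 42, // entered as a number in the form
  sourceDID: "LOC_Studio",
  targetDID: "Arthur",
};

const event = {
  labelName: String(form.labelName),
  meta: String(form.meta),
  sourceDID: String(form.sourceDID),
  targetDID: String(form.targetDID),
};

console.log(typeof event.meta, event.meta); // string 42
```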

Create an Aggregator Logic

If you would like to create an Aggregator Logic, please click Add Aggregator Logic to start coding.

Here are the templates of the aggregator logic for your reference.

  • if OK
export async function run(ctx: AggregatorContext) {
  const result = (await ctx.agents.sessionStorage.get("result")) as object;
  ctx.agents.result.finalize({
    returnCode: "000", // or other success codes
    returnMsg: "Success", // or other success messages
  });
}
  • if ERROR
export async function handleError(ctx: AggregatorContext, error: RailwayError) {
    ctx.agents.logging.error(error.message);
}

Please note that

  • Logic Name is required.
  • In each Data Process, Logic Name cannot be the same.
  • Description and Comment are optional.
  • Description contains general words that apply to all versions of this logic even if this logic is revised.
  • Comment contains specific words to describe the version of this logic.
  • LOC Studio currently supports JavaScript only, but more programming languages will be added, such as TypeScript, etc.

Edit a Logic

If you would like to edit a specific Logic, please click the "⋮" icon of that Logic on the top right and select the edit icon as shown below to start your revision. Once you have completed the revision, please click Update to keep it; if you would like to discard the revision, please click Cancel to exit.

Remove a Logic

If you would like to delete a specific Logic, please right click that Logic and select the rubbish bin icon as shown below.

Please note that once you Delete Logic, there is NO WAY to bring it back. Please think twice before you act.

API Route Configuration Introduction

After logging into LOC Studio, you can find the menu of API Route on the left. Once you have deployed a data process, you can configure an API route to execute it.

  • Please note that only the deployed data processes can be configured here.
  • The order of the linked data processes is important and will affect the result.

Create an API Route

If you would like to create an API Route, please click + right next to API Route Explorer to create a new folder first by filling out the information.

Before creating the folder:

After creating the folder:

Please note that

  • Folder Name is required.
  • Description is optional.
  • In each folder, Folder Name cannot be the same.

By clicking Create API Route on the top right, you will be able to configure an API for a specific data process. We must stress that only deployed data processes can be added; undeployed data processes cannot be selected, and hence the API route configuration cannot be completed with them.

Please note that

  • API Route Name, HTTP Method, HTTP Path, and Linked Data Process are required.
  • Description is optional.
  • In HTTP Path, you only need to fill in the sub-path; the domain name is provided by the system, so there is no need to include it here. (e.g. a complete HTTP Path might be https://api.loc-cse.fst.network/api/v1/search; here, only /api/v1/search needs to be filled in.)
  • HTTP Path cannot be the same.
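To see how the sub-path combines with the domain, the standard URL constructor reproduces the example above:

```javascript
// Only the sub-path is entered in HTTP Path; the platform prepends its
// own domain. The URL constructor shows the resulting full endpoint.
const fullUrl = new URL("/api/v1/search", "https://api.loc-cse.fst.network").href;
console.log(fullUrl); // https://api.loc-cse.fst.network/api/v1/search
```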

After configuring an API, you will see this success page. By requesting this API on any API platform (such as Postman, Insomnia, etc.), it will return the result as stated in the aggregator logic of your data process.

(Requesting on Postman)

Edit an API Route

If you would like to edit a specific API Route, please click that API Route on the top right. In the pop-up window, please click Edit as shown below to start your revision. Once you have completed the revision, please click Update to keep it; if you would like to discard the revision, please click Cancel to exit.

Remove an API Route

If you would like to remove a specific API Route, please right click that API Route and select Delete API Route.

Please note that once you Delete API Route, there is NO WAY to bring it back. Please think twice before you act.

Quick Start

The quick start section will show you how to create your first data process in LOC Studio. This data process is extremely simple: it consumes JSON data as its data source and emits an event.

Create a User Account

Step 1: Create a New User

In order to create a data process, you'll need a user account.

Log in with the User Management account (provided by FST Network). Then, in the Users menu, click Add user to create a new account:

Step 2: Fill in User Information

Username and Email are required:

We strongly recommend filling in user information as detailed as possible.

Step 3: Set a Temporary Password

In the Credentials tab, make sure Temporary is switched on in order to set a temporary password for the user:

LOC Studio informs the user to change the password upon first login.

Step 4: Reset the Temporary Password

If Temporary is switched on, the user will be asked to change the temporary password after logging in for the first time:

LOC Studio will automatically log out the current user after 5 mins of inactivity.

Create a "Greeting" Data Process

Now, after logging in with your new user account, we are going to show you how to create the simple data process below:

  1. The user sends an HTTP request to the data process with a JSON payload, which contains a name and an age.
  2. The data process emits an event containing a greeting message with user data.

This data process has only one generic logic along with one aggregator logic and will emit one event. After creating and deploying it, you will be able to see it in action in LOC Studio.

Step 1: Create Project/Scenario/Data Process

In order to create a data process, you need to create a project and a scenario first.

  • create a project
    Under Default Unit, you can either right click that unit or click on the top right to create a new project, with the project name of "Quick Start Project".

  • create a scenario
    Under Quick Start Project, you can either right click that project or click on the top right to create a new scenario, with the scenario name of "Quick Start Scenario".

  • create a data process
    Under Quick Start Scenario, you can either right click that scenario or click on the top right to create a new data process, with the data process name of "Quick Start" and the default timeout of 180 seconds.

Detailed instructions can be found HERE.

Step 2: Create Logics

Having set up the process explorer in the previous step, you can now create logics for this data process:

We name the data process Quick Start for the following demonstration and leave the timeout to the default of 180 seconds.

Every data process must have one aggregator logic and at least one generic logic.

Just like what we have mentioned earlier, this data process has one generic logic and one aggregator logic. Each logic has two parts:

  • if OK function (main logic)
  • if Error function (error handling)

If something goes wrong in run(), LOC Studio will run handleError() instead. Here we will not do anything other than print out some logging messages; even so, it is always ideal to have error handling covered in LOC Studio.

Now let's start coding.

Generic Logic Code

In the editing window, click Logic Body. You will find the if OK and if Error tabs:

We will not use Potential Labelling here - it automatically detects events in your code if you use an event structure it recognises, but our customised event will not appear there.

These tabs correspond to the main logic and error handling functions we mentioned above. Copy and paste the following code:

Generic logic - [if OK] block

For now, the main supported language in LOC Studio is JavaScript. The if OK code has to be declared as an asynchronous run(ctx) function (ctx is a context object provided by LOC):

/**
*
*  The code in 'run' is executed when no error occurs in Generic Logic.
*
*/
async function run(ctx) {
  
  // a function that transforms byte array to string
  const UTF8ArrToStr = (aBytes) => {
    let utf8decoder = new TextDecoder();
    return utf8decoder.decode(new Uint8Array(aBytes));
  }

  // read and parse JSON data from the request body
  const payload = JSON.parse(UTF8ArrToStr(ctx.payload.http.body));

  // emit an event to event store
  ctx.agents.eventStore.emit([
    {
      sourceDID: "LOC_Studio",  // source DID
      targetDID: payload.name,  // target DID will be user's name
      labelName: `Hello, how are you, ${payload.name}?`,  // event label (greeting message)
      meta: `${payload.age}`,  // include user's age in the meta field
      type: "default",  // default group
    },
  ]);
}

For example, if the user sends the following JSON data

{
    "name": "Arthur",
    "age": 42
}

This logic would emit the following event:

sourceDID: "LOC_Studio"
targetDID: "Arthur"
labelName: "Hello, how are you, Arthur?"
meta: 42
type: "default"

You might also notice that we use ctx.agents.eventStore.emit to send an event (which will be stored in the event store with the user's name and age from the JSON payload). ctx.agents is built in, so you don't need to worry about importing it from libraries.
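You can try the byte-to-string step outside LOC in plain JavaScript; here the request body is simulated with TextEncoder, whereas in a real run ctx.payload.http.body already contains the bytes:

```javascript
// Standalone version of the UTF8ArrToStr helper from the generic logic.
const UTF8ArrToStr = (aBytes) => new TextDecoder().decode(new Uint8Array(aBytes));

// Simulate ctx.payload.http.body: the raw bytes of the request body.
const rawBody = Array.from(new TextEncoder().encode('{"name":"Arthur","age":42}'));
const payload = JSON.parse(UTF8ArrToStr(rawBody));

console.log(payload.name);                          // Arthur
console.log(`Hello, how are you, ${payload.name}?`); // the event's label name
```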

Generic logic - [if Error] block

The if Error is another function declared as handleError(ctx, error):

/**
*
*  The code in 'handleError' is executed when an error occurs in Generic Logic,
*  or the CURRENT running Logic gets an error.
*
*/
async function handleError(ctx, error) {
    ctx.agents.logging.error(error.message);  // log the error
}

Aggregator Logic Code

A data process must have one aggregator logic, which is pretty much the same as any generic logic, except that an aggregator logic cannot emit/retrieve events. The aggregator logic is used to return the final result (usually also JSON data) after the data process is executed.

Aggregator logic - [if OK] block

It is the same as the generic logic, with a main run(ctx) function:

/**
*
*  The code in 'run' is executed when no error occurs in Aggregator Logic.
*
*/
async function run(ctx) {
  ctx.agents.result.finalize({
    status: "ok",
    taskId: ctx.task.taskId,
  });
}

This time we use the agent ctx.agents.result.finalize to send back a JavaScript object (which will be converted into a JSON string). The result will look like this:

{
    "status": "ok",
    "taskId": [some task id]
}

We will see the actual response later. Please note that we are NOT sending a standard HTTP response - the fields of the JSON response can be customised however you like.
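The object-to-JSON conversion is ordinary serialisation; a quick sketch (the taskId value here is a placeholder, since the real one comes from ctx.task.taskId):

```javascript
// The object passed to result.finalize is serialised to a JSON string;
// JSON.stringify shows the shape the caller receives.
const result = {
  status: "ok",
  taskId: { executionId: "placeholder-exec-id", id: "placeholder-id" },
};

const json = JSON.stringify(result);
console.log(json);
console.log(JSON.parse(json).status); // ok
```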

Aggregator logic - [if Error] block

The error handling code for the aggregator logic is the same as before (log the error):

/**
*
*  The code in 'handleError' is executed when an error occurs in Aggregator Logic,
*  or the CURRENT running Logic gets an error.
*
*/
async function handleError(ctx, error) {
    ctx.agents.logging.error(error.message);  // log the error
}

Step 3: Deploy the Data Processes

We have finished creating the data process in Step 2. Now we can deploy it.

Right click on the data process and select Deploy Data Process. This will initiate the deployment process.

Step 4: Testing the Data Process

A data process is normally triggered via an API route. However, LOC Studio also allows you to invoke it directly for testing, which is called a single data process execution.

Later we will see how to deploy an API route on an API platform as well.

After the data process is deployed, right click it again and select Execute Data Process.

The screen will then ask you to upload a payload. Create a payload.json file on your computer with the following content:

payload.json

{
    "name": "Ian",
    "age": 30
}

Make sure the JSON data contains name and age fields so that the data process can respond properly. You can change Ian and 30 into another name or age you prefer.
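If you want to double-check a payload before uploading it, a small validator mirroring the two required fields might look like this (this helper is not part of LOC):

```javascript
// Check that a JSON payload has the "name" and "age" fields the
// Quick Start data process expects.
function validatePayload(json) {
  const p = JSON.parse(json);
  const errors = [];
  if (typeof p.name !== "string") errors.push("name must be a string");
  if (typeof p.age !== "number") errors.push("age must be a number");
  return { ok: errors.length === 0, errors, payload: p };
}

console.log(validatePayload('{"name":"Ian","age":30}').ok); // true
console.log(validatePayload('{"name":"Ian"}').errors);      // ["age must be a number"]
```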

Upload your payload file and click Execute:

If you see something like the result below, the data process is executed successfully:

When clicking the JSON button with an eye icon, you'll find the JSON response returned by the aggregator logic like this:

{
    "status": "ok",
    "taskId": {
        "executionId": "YnHhUUoj4VhHIGj3ga_7Iw",
        "id": "rI8U8XLPpN8psMoq1d_INA"
    }
}

Step 5: Invoke the Data Processes via an API Route

Create an API Route

What we did in Step 4 was just a quick test. To trigger the data process properly, we need an API route so that we (or any code) can send HTTP requests to it.

The difference between the single data process execution and API routes is that an API route can trigger multiple data processes.

Also please note that only the deployed data processes can be linked to API routes.

Firstly, go to the API Route tab, create an API route folder, and then click Create API Route on the upper right corner:

We set our API route as follows:

  • API Route Name: Quickstart
  • HTTP Method: POST
  • HTTP Path: /YH/Quickstart
  • Request Mode: Sync
  • Response Content Type: JSON
  • Linked Data Processes: Add the data process Quick Start

The actual API path would be something like https://api.loc.xxx/YH/Quickstart.

Afterwards, click Create.

Test the Data Process with an API Route

You can use an API client tool such as Postman or Insomnia to request the URL.

We will use the same payload in Step 4 (but we can simply copy and paste it in the body this time):

payload

{
    "name": "Ian",
    "age": 30
}

Select POST and paste the full API route (including the server URL). Afterwards, paste the JSON payload into the request body.
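The same request can also be built programmatically. The sketch below only constructs the request; the URL is a placeholder for your actual API route, and fetch (from Node 18+ or a browser) would send it:

```javascript
// Build the POST request for the API route (URL is a placeholder;
// substitute the full API route you created).
const url = "https://api.loc.example/YH/Quickstart"; // hypothetical domain
const init = {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "Ian", age: 30 }),
};

console.log(init.method, url);
// `await fetch(url, init)` would return the aggregator's JSON result.
```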

See: Basics of API Testing Using Postman

Click Send, and if all goes well, you can expect to see a result like this:

You can see the complete response from the data process - notice that the response in Step 4 is actually included under the data field.

Step 6: Inspect the Data Process in Data Discovery

There is one more thing you can do with the events - to inspect the data lineage, which is the graphical representation of events.

Click the Event tab under Data Discovery and find the event you just sent:

You can see the Label Name of the event indeed contains the name "Ian" from our JSON payload.

Now:

  1. Copy the Execution ID of your event.
  2. Click Add filter on the top left.
  3. Select Execution ID as the Field and paste your ID in Value.
  4. Click Save.
  5. Click Event again, then click the slider bar below Applied Filters.

Now let's switch the data discovery window to Graph mode, which shows the data lineage of the event:

You can use your mouse scroll wheel to zoom in/out and drag the DIDs around.

If you click on the event label and then click the small menu icon on the right, you can inspect its details. Here you can find the age data 30 in the meta field.

Also:

  • LOC_Studio is the source DID
  • Ian is the target DID
  • the arrow "Hello, how are you, Ian?" is this event's label name

If you send multiple requests to the API route with the same JSON payload, you will see multiple events appear between the same source and target in Data Discovery.


Congratulations! You have created your first LOC data process and invoked it via an API route. This covers many of the basics of LOC Studio.

Getting Started - McDonald's Example

Imagine walking into McDonald's to buy a meal: the staff keys in your order, then the kitchen prepares your meal and delivers it to the front desk once it is ready.

Today, we are going to use LOC Studio to record this transaction (and other customers' too). Most importantly, we would like to know whether every order is successfully prepared, through event sourcing on LOC Studio.

Business Process 1: staff to key in orders

1-A. (generic logic) to set order data
1-B. (generic logic) to get order data and emit an event to record orders into the event store
1-C. (aggregator logic) to return a success/error message to tell whether the execution of this logic is successful

Business Process 2: kitchen to prepare meal

2-A. (generic logic) to get events from the 1st data process and to allocate meals to the kitchen staff to produce
2-B. (generic logic) to deliver order and emit an event to record who is responsible for which delivery into the event store
2-C. (aggregator logic) to return a success/error message to tell whether the execution of this logic is successful

Below is an example of the business process analysis graph.

Step 1: Create Project/Scenario/Data Process

In order to create a data process, you need to create a project and a scenario first.

  • create a project
    Under Default Unit, either right-click that unit or click on the top right to create a new project named "Fast Food Project".

  • create a scenario
    Under Fast Food Project, either right-click that project or click on the top right to create a new scenario named "Fast Food Scenario".

  • create a data process
    Under Fast Food Scenario, either right-click that scenario or click on the top right to create a new data process named "Fast Food_1", with the default timeout of 180 seconds.

Detailed instructions can be found HERE.

Step 2: Create Logic(s)

With the data process set up in the explorer, we can now create the business/data logics.

To begin with the 1st business/data process, there are 2 generic logics and 1 aggregator logic to set up.

Generic Logic

  • getOrder: to get order data

(sample code - If OK)

const bigData=[
    {
        "Order": "200",
        "French Fries": "1",
        "Hamburger": "1",
        "Fried Chicken": "1",
        "Chicken Nugget": "4",
        "Coke": "0",
        "Diet Coke": "0",
        "Sprite": "1",
        "Lemonade": "0",
        "Black Tea": "0",
        "Coffee": "0",
        "Salad": "1",
        "Ice Cream": "1"
    },
    {
        "Order": "201",
        "French Fries": "1",
        "Hamburger": "1",
        "Fried Chicken": "1",
        "Chicken Nugget": "0",
        "Coke": "1",
        "Diet Coke": "0",
        "Sprite": "0",
        "Lemonade": "0",
        "Black Tea": "1",
        "Coffee": "0",
        "Salad": "1",
        "Ice Cream": "0"
    },
    {
        "Order": "202",
        "French Fries": "0",
        "Hamburger": "1",
        "Fried Chicken": "1",
        "Chicken Nugget": "6",
        "Coke": "0",
        "Diet Coke": "0",
        "Sprite": "0",
        "Lemonade": "1",
        "Black Tea": "1",
        "Coffee": "0",
        "Salad": "1",
        "Ice Cream": "1"
    },
    {
        "Order": "203",
        "French Fries": "1",
        "Hamburger": "0",
        "Fried Chicken": "2",
        "Chicken Nugget": "10",
        "Coke": "1",
        "Diet Coke": "1",
        "Sprite": "0",
        "Lemonade": "0",
        "Black Tea": "0",
        "Coffee": "0",
        "Salad": "0",
        "Ice Cream": "1"
    },
    {
        "Order": "204",
        "French Fries": "1",
        "Hamburger": "1",
        "Fried Chicken": "0",
        "Chicken Nugget": "0",
        "Coke": "1",
        "Diet Coke": "1",
        "Sprite": "0",
        "Lemonade": "1",
        "Black Tea": "0",
        "Coffee": "1",
        "Salad": "0",
        "Ice Cream": "0"
    },
    {
        "Order": "205",
        "French Fries": "2",
        "Hamburger": "0",
        "Fried Chicken": "4",
        "Chicken Nugget": "20",
        "Coke": "0",
        "Diet Coke": "10",
        "Sprite": "0",
        "Lemonade": "0",
        "Black Tea": "0",
        "Coffee": "1",
        "Salad": "0",
        "Ice Cream": "2"
    },
];

// Flatten each order into one event template per item; the "Order"
// field itself becomes the target DID instead of a template entry.
const newData = bigData.map((d) => {
    const template = [];
    for (const property in d) {
        if (property === 'Order') continue;
        template.push({
            sourceDID: property,     // item name
            targetDID: d.Order,      // order number
            labelName: 'setOrder',
            meta: d[property],       // quantity of the item
            type: 'default',
        });
    }
    return template;
});

const data_temp = newData.flat();

// Store the flattened templates in session storage for the next logic.
async function run(ctx) {
    await ctx.agents.sessionStorage.putJson("data", data_temp);
}

(sample code - If ERROR)

async function handleError(ctx, error) {
    ctx.agents.logging.error(error.message);
}
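To make the flattening in getOrder concrete, here is a self-contained sketch running the same transformation on a single, shortened (hypothetical) order:

```javascript
// A single order with two items (hypothetical, shortened from bigData).
const order = { "Order": "200", "French Fries": "1", "Coke": "0" };

// Flatten the order into one event template per item, as getOrder does;
// the "Order" field supplies the target DID instead of becoming a template.
function toTemplates(d) {
    const templates = [];
    for (const property in d) {
        if (property === "Order") continue;
        templates.push({
            sourceDID: property,   // item name
            targetDID: d.Order,    // order number
            labelName: "setOrder",
            meta: d[property],     // quantity, kept as a string
            type: "default",
        });
    }
    return templates;
}

console.log(toTemplates(order));
// Two templates: one for "French Fries", one for "Coke",
// both with targetDID "200".
```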

  • getOrder_Event: to emit an event to record orders into the event store
    (event structure)
    Source DID - Item:xxx
    Target DID - Order:xxx
    Label Name - setOrder
    Meta - quantity of each item

(sample code - If OK)

async function run(ctx) {
  const data = await ctx.agents.sessionStorage.get("data")

  // Emit one event per stored template. Note the resulting label name is
  // "setOrder:setOrder", which the 2nd data process will search for.
  for (const d of data) {
    await ctx.agents.eventStore.emit([
      {
        sourceDID: `Item: ${d.sourceDID}`,
        targetDID: `Order: ${d.targetDID}`,
        labelName: `setOrder:${d.labelName}`,
        meta: `${d.meta}`,
        type: 'default',
      },
    ]);
  }
}

(sample code - If ERROR)

async function handleError(ctx, error) {
    ctx.agents.logging.error(error.message);
}
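The DID prefixing and label composition in getOrder_Event can be isolated into a small helper for illustration (a sketch, not part of the LOC API):

```javascript
// Turn a stored template into the event payload emitted above. Note that
// labelName becomes "setOrder:setOrder" - the exact string the 2nd data
// process will later search for in the event store.
function toEvent(d) {
    return {
        sourceDID: `Item: ${d.sourceDID}`,
        targetDID: `Order: ${d.targetDID}`,
        labelName: `setOrder:${d.labelName}`,
        meta: `${d.meta}`,
        type: "default",
    };
}
```

Because these DID and label strings are matched verbatim later, any extra whitespace in them would make the events unsearchable.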

Aggregator Logic

  • orderData: to return taskID when the data process is successfully triggered and an error message when it fails

(sample code - If OK)

async function run(ctx) {
    ctx.agents.result.finalize({
        status: "ok",
        taskId: ctx.task.taskId,
    });
}

(sample code - If ERROR)

async function handleError(ctx, error) {
    ctx.agents.logging.error(error.message);
}

Moving to the 2nd business/data process, there will be 2 generic logics and 1 aggregator logic to configure as well.

Generic Logic

  • prepareOrder: to get events from the 1st data process and allocate meals to the kitchen staff to produce

(sample code - If OK)

async function run(ctx) {
  const order200To205 = [];
  const foods = {};

  // Search the event store for each order's "setOrder:setOrder" events.
  for (let i = 200; i < 206; i++) {
    const searchReq = {
      queries: [
        {
          Match: {
            field: 'target_digital_identity',
            value: `Order: ${i}`,
          },
        },
        {
          Match: {
            field: 'label_name',
            value: 'setOrder:setOrder',
          },
        },
      ],
      excludes: [],
      filters: [],
      from: 0,
      size: 1000,
      sorts: [],
    };

    const search = await ctx.agents.eventStore.search(searchReq);
    order200To205.push(search?.events);
  }

  // Sum the quantity (meta) of each item across all orders; the unary
  // plus coerces the string metas into numbers.
  order200To205.forEach(order => {
    order.forEach(detail => {
      if (!foods.hasOwnProperty(detail.sourceDigitalIdentity)) {
        foods[detail.sourceDigitalIdentity] = detail.meta;
      } else {
        let amount = +foods[detail.sourceDigitalIdentity];
        amount += +detail.meta;
        foods[detail.sourceDigitalIdentity] = amount;
      }
    });
  });

  await ctx.agents.sessionStorage.putJson('foods', {
    foods,
  });
}

(sample code - If ERROR)

async function handleError(ctx, error) {
    ctx.agents.logging.error(error.message);
}
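The summing step in prepareOrder can be sketched in isolation; the events below are hypothetical stand-ins for what `eventStore.search` returns:

```javascript
// Hypothetical search results: each event carries the item name in
// sourceDigitalIdentity and the quantity (a string) in meta.
const events = [
    { sourceDigitalIdentity: "Item: Coke", meta: "1" },
    { sourceDigitalIdentity: "Item: Coke", meta: "1" },
    { sourceDigitalIdentity: "Item: Salad", meta: "1" },
];

// Sum quantities per item; the unary plus coerces string metas to numbers,
// mirroring what prepareOrder does with the real search results.
function tallyFoods(events) {
    const foods = {};
    for (const e of events) {
        foods[e.sourceDigitalIdentity] =
            +(foods[e.sourceDigitalIdentity] ?? 0) + +e.meta;
    }
    return foods;
}

console.log(tallyFoods(events)); // { 'Item: Coke': 2, 'Item: Salad': 1 }
```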

  • prepareOrder_Event: to emit an event to record who is responsible for which delivery into the event store
    (event structure)
    Source DID - Staff
    Target DID - Item:xxx
    Label Name - deliverOrder
    Meta - quantity of each item

(sample code - If OK)

async function run(ctx) {
  const foods = await ctx.agents.sessionStorage.get('foods');
  const staff = ['John', 'Ann', 'Emily'];

  for(let food in foods.foods) {
    if (food === 'Item: French Fries' || food === 'Item:  Fried Chicken' || food === 'Item: Chicken Nugget') {
      await ctx.agents.eventStore.emit([
        {
          labelName: 'deliverOrder',
          sourceDID: staff[0],
          targetDID: food,
          meta: `${foods.foods[food]}`,
          type: 'default',
        },
      ]);
    } else if (food === 'Item: Hamburger' || food === 'Item: Ice Cream') {
      await ctx.agents.eventStore.emit([
        {
          labelName: 'deliverOrder',
          sourceDID: staff[1],
          targetDID: food,
          meta: `${foods.foods[food]}`,
          type: 'default',
        },
      ]);
    } else if (food === 'Item: Coke' ||food === 'Item: Diet Coke' || food === 'Item: Sprite' || food === 'Item: Lemonade' || food === 'Item: Black Tea' || food === 'Item: Coffee') {
      await ctx.agents.eventStore.emit([
        {
          labelName: 'deliverOrder',
          sourceDID: staff[2],
          targetDID: food,
          meta: `${foods.foods[food]}`,
          type: 'default',
        },
      ]);
    }
  }
}

(sample code - If ERROR)

async function handleError(ctx, error) {
    ctx.agents.logging.error(error.message);
}
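The if/else branches in prepareOrder_Event encode a fixed item-to-staff assignment; the same idea can be sketched as a lookup table (an alternative structure for illustration, not the code above). Each key must match the emitted `Item: ...` DID exactly, whitespace included:

```javascript
// Item-to-staff assignment, mirroring the branches above (assumed mapping).
const ASSIGNMENTS = {
    "Item: French Fries": "John",
    "Item: Fried Chicken": "John",
    "Item: Chicken Nugget": "John",
    "Item: Hamburger": "Ann",
    "Item: Ice Cream": "Ann",
    "Item: Coke": "Emily",
    "Item: Diet Coke": "Emily",
    "Item: Sprite": "Emily",
    "Item: Lemonade": "Emily",
    "Item: Black Tea": "Emily",
    "Item: Coffee": "Emily",
};

// Returns the responsible staff member, or undefined for unknown items -
// an unmatched key means no deliverOrder event would be emitted for it.
function staffFor(food) {
    return ASSIGNMENTS[food];
}
```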

Aggregator Logic

  • preparationData: to return taskID when the data process is successfully triggered and an error message when it fails

(sample code - If OK)

async function run(ctx) {
    ctx.agents.result.finalize({
        status: "ok",
        taskId: ctx.task.taskId,
    });
}

(sample code - If ERROR)

async function handleError(ctx, error) {
    ctx.agents.logging.error(error.message);
}

Step 3: Deploy Data Processes

In Step 2, we created 2 data processes, each with 2 generic logics and 1 aggregator logic. Once they are completed, deploy both data processes so that they can be executed later.

You can deploy a data process by right-clicking on it.

Please note that

  • Deploying a data process is equivalent to storing it in the backend of LOC Studio.
  • When a data process is deployed, it will be marked with a solid green circle in the menu.

Step 4: Perform Single Data Process Execution

In LOC Studio, there are 2 ways to execute a data process: either through single data process execution or via API routes.

In this step, we are using the single data process execution. After a data process is deployed, you can right click again on that deployed data process and select Execute Data Process.

Since, in our case, we do not need to import any data source, we can skip this and click Execution to continue.

When the execution completes, a pop-up will show the execution result with some details. For instance, you can check or download your execution result (If OK / If ERROR) produced by the code snippets in your aggregator logic.

Moreover, if any events were created in the data process, you can switch to Emitted Events to see more details. It is worth mentioning that clicking Repository leads you to a new Data Discovery window where the event information and data lineage can be found. For a detailed introduction to Data Discovery, please refer to Step 6.

Step 5: Execute Data Processes with API Route

Besides the single data process execution stated in Step 4, here we are going to introduce another way of executing data processes via API routes.

First, create an API route folder as usual. Once the folder is created, you can create as many API routes as you need. In this case, you can create either 1 or 2 API routes; with 2 API routes, each would be linked to 1 data process. Here we demonstrate configuring just 1 API route linked to both data processes we created previously.

Remember to link this API route with the aforementioned data processes (McD_1 and McD_2). Additionally, in HTTP Path, fill in the sub-path only (that is, you can skip the domain name).

After clicking Create, this API route is created successfully as shown below.

Next, use an API client; here we use Postman for demonstration. Sending a request to the URL (domain name + sub-path) returns exactly the same result as we tested in Step 4 - a success message.

Please note that if there is an error message, the possible reasons include:

  • the request to the URL took too long, so you need to re-send the request in Postman.
  • the data process execution time exceeded the timeout you set earlier, so please change that setting. (Reminder in Data Process)
  • the code snippets in your (generic or aggregator) logics might be erroneous, so please revise them and re-deploy.

For your reference, here are some of the common error messages you might encounter.

Step 6: Inspect in Data Discovery

After requesting the URL in Postman successfully, you will also get an execution ID, which can be entered in Data Discovery to search for events.

Please note that only events emitted in the logics can be searched here.

After putting the execution ID into the filter, you should get a graph like this.

It showcases the relationships of the entire use case, from which food is in each order to who prepares which food.

This is exactly our desired result. From this graph, not only are the relationships clearly specified, but we can also see whether any food has not been prepared, such as fried chicken, lemonade, etc. This is what makes it useful for event sourcing.

On top of this use case, you might also want to try out subsequent events and business/data processes, or refine them. For example, you could take prices into account and extend the design for billing.