LOC-Doc is Here!

Dear LOC users, partners and clients:

Our new user documentation site for LOC is now officially live. From now on, the latest features and references will be updated there.
https://documentation.loc.fst.network/
In the meantime, we will continue to utilise HackMD notebooks for workshops and other educational activities. We thank you again for the fantastic adventure you've had with us, and look forward to seeing you on the new journey too.
– FST Network, Sep 2022
LOC Studio V1.0.0
Release Date: 8 May 2022 (latest version)
LOC Studio is one of FST's VADO (Virtual Assistive Data Officer) tools that helps users build and execute complex business/data processes.
*LOC Studio will automatically log out the current user after 5 minutes of inactivity.
After clicking Users on the left navigation bar, you can select View all users to display all users.
In this section, you can not only view all users but also add, edit and delete them.
Please DO NOT delete any accounts that have already been active; otherwise their data, such as configured logics or data processes, will be eliminated and the operation of the existing accounts will be affected. Instead, please DEACTIVATE them (disable users).
Audience: LOC Studio Admin
Note: Only the Admins have the right to view, create, edit, and delete users.
To create a user:

1. Log in with your Admin account.
2. Click Users.
3. Click Add Users.
4. Fill in the user information. (Username and Email are mandatory.)

There are a few more steps to be completed.
In the Credentials tab, please ensure the Temporary button is switched on to set a temporary password for the user, and inform the user to change the password upon login. Don't forget this temporary password; remember to give it to the user.
Users are required to reset their password when they log in to their account for the first time.
In the Role Mappings tab, you can check the role authorisation of the selected user.

To delete a user:

1. Log in with your Admin account.
2. Click Users.
3. Under Actions, select the user you would like to remove and click Delete.

To reset a user's password:

1. Log in with your Admin account.
2. Click Users.
3. Under Actions, select the user whose password you would like to reset and click Edit.
4. Open the Credentials tab, under which you will find the Reset Password section.
5. Ensure the Temporary button is switched on to set a temporary password for the user, and inform the user to change the password upon login.

This repository is where users can see the relationship between events, or switch to the graph view for visualisation, helping to fulfil event sourcing.
By clicking +, you can see more details of this particular event. Also, switching to the graph view and setting up a filter of your interest allows you to see the relationship between events more clearly, for event sourcing or BAM (Business Activity Monitoring).
(Without Filter)
(With Filter)
Every event has exactly one label name and two DIDs (source and target). However, one label name can be re-used in different events. In this repository, you are able to filter on a specific label name to list all the DIDs related to it.
This repository is where users can see the relationship between a DID and its label names. You can choose a specific DID and see all the label names related to it.
Whenever an event is emitted, 1 label name and 2 DIDs (source and target) will be created as you have set up. Additionally, you might have created multiple events with the same label name, or multiple DIDs related to the same label name. In these cases, each repository has its own job to do.
For example, in the graph shown below, there are 4 events with 2 different label names, 4 different source DIDs, and 3 different target DIDs. We can go to Event and search all the events in this execution to get this relationship graph. In addition, you may be interested in how many DIDs have been created with a specific label name, say labelName_1. After searching for labelName_1 in Label Name, you will get 3 DIDs (2 source and 1 target). On the other hand, you can go to DID to check how many label names have been put onto a specific DID, say Target_1. In this case, you will see that 2 label names have been put onto Target_1, in the sense that 2 events share the same target DID (Target_1).
In conclusion, the Data Discovery feature helps our users quickly inspect the relationships between events and discover useful information or insights about a certain event, label name, or DID.
After logging into LOC Studio, please click the Data Process menu on the left. Afterwards, please select DP Interactive Map.
In this section, we will be explaining the structure of DP Interactive Map
and how to start deploying a data process via LOC Studio.
A default Unit is provided in LOC Studio at the beginning. You can regard a Unit as one business unit, such as marketing, sales, or operations. Normally, there is only one (business) Unit for a user to access.
If you want to know more about a certain Unit, please right-click the Unit and select Unit Info.
For the time being, users cannot create, edit, or remove a Unit. If you need to do so, please contact your Admin or FST Network.
If you want to know more about a certain Project, please right-click that Project and select Project Info.
If you would like to create a Project, please right-click the Unit and select New Project. After filling in the Project Name and clicking Create, the project will be created.
In each Unit, you can create several projects.
Please note that

- Project Name is required.
- Description is optional.
- Project Names cannot be the same.

If you would like to edit a specific Project, please right-click that Project and select Project Info. In the pop-up window, please click Edit to start your revision. Once you complete the revision, please click Update to keep it; if you would like to discard the revision, please click Cancel to exit.
If you would like to remove a specific Project, please right-click that Project and select Delete Project.
Please note that once you Delete Project, there is NO WAY to undo it. Please think twice before acting.
If you want to know more about a certain Scenario, please right-click that Scenario and select Scenario Info.

If you would like to create a Scenario, please right-click the Project and select New Scenario. After filling in the Scenario Name and clicking Create, the scenario will be created.
In each Project, you can create several scenarios.
Please note that

- Scenario Name is required.
- Description is optional.
- Scenario Names cannot be the same.

If you would like to edit a specific Scenario, please right-click that Scenario and select Scenario Info. In the pop-up window, please click Edit to start your revision. Once you complete the revision, please click Update to keep it; if you would like to discard the revision, please click Cancel to exit.
If you would like to remove a specific Scenario, please right-click that Scenario and select Delete Scenario.

Please note that once you Delete Scenario, there is NO WAY to undo it. Please think twice before acting.
If you want to know more about a certain Data Process, please right-click that Data Process and select Data Process Info.

If you would like to create a Data Process, please right-click the Scenario and select New Data Process. After filling in the Data Process Name and clicking Create, the data process will be created.
In each Scenario, you can create several data processes.
Please note that

- Data Process Name is required.
- Description is optional.
- Execution Timeout is required and defaults to 180 seconds. If the data process is too complicated or handles a large amount of data, its execution might fail because of the default execution timeout.
- Data Process Names cannot be the same.

If you would like to edit a specific Data Process, please right-click that Data Process and select Data Process Info. In the pop-up window, please click Edit to start your revision. Once you complete the revision, please click Update to keep it; if you would like to discard the revision, please click Cancel to exit.
If you would like to remove a specific Data Process, please right-click that Data Process and select Delete Data Process.

Please note that once you Delete Data Process, there is NO WAY to undo it. Please think twice before acting.
A (generic) logic is the minimum component of a data process. On LOC Studio, every data process must be composed of AT LEAST 1 (generic) logic and 1 aggregator logic.
If you would like to create a (Generic) Logic, please click +Generic Logic, choose where it fits, and start coding.
In the Logic Body tab, you can either write code yourself or use +New Potential Labelling; the latter is only for creating an event.

If you choose to write your own code, here are some agent templates for your reference.
As for using +New Potential Labelling, you first need to right-click where you would like to put it in the Logic Body, and then simply put your Label Name, Meta, Source DID, and Target DID into the pop-up window. Please note that only Meta is optional; the rest are required. Additionally, the potential labelling is used to emit an event.

After completing the form, this information will be automatically transformed into code and shown in the Logic Body, where you can revise it further.
Please note that Label Name, Meta, Source DID, and Target DID will be treated as strings when converted from the potential labelling to the logic body.
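The exact code that LOC Studio generates from Potential Labelling is not shown in this copy; as a hedged illustration of the note above, an emitted event would carry all four fields as strings. The event field names and the emit signature here are assumptions, and a mock ctx stands in for LOC's runtime so the sketch can run standalone.

```javascript
// Illustrative sketch only - the code LOC Studio generates from Potential
// Labelling may differ; the event field names here are assumptions.
// A mock ctx lets the sketch run standalone outside LOC.
const emitted = [];
const ctx = {
  agents: {
    eventStore: {
      emit: async (events) => emitted.push(...events),
    },
  },
};

async function run(ctx) {
  // All four Potential Labelling fields are converted to strings.
  await ctx.agents.eventStore.emit([
    {
      labelName: "order_placed", // Label Name
      sourceDID: "customer-001", // Source DID
      targetDID: "order-001",    // Target DID
      meta: "",                  // Meta (optional)
    },
  ]);
}

run(ctx);
console.log(typeof emitted[0].labelName); // "string"
```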
If you would like to create an Aggregator Logic, please click Add Aggregator Logic to start coding.

Here are the templates of the aggregator logic for your reference.
Please note that

- Logic Name is required.
- Logic Names cannot be the same.
- Description and Comment are optional.
- Description contains general words that apply to all versions of this logic, even after the logic is revised.
- Comment contains specific words describing a particular version of this logic.

If you would like to edit a specific Logic, please click the "⋮" icon of that Logic on the top right and select the edit icon as shown below to start your revision. Once you complete the revision, please click Update to keep it; if you would like to discard the revision, please click Cancel to exit.
If you would like to delete a specific Logic, please right-click that Logic and select the rubbish bin icon as shown below.
Please note that once you Delete Logic, there is NO WAY to undo it. Please think twice before acting.
After logging into LOC Studio, you can find the API Route menu on the left. Once you have finished deploying a data process, you can configure an API route to execute it.
If you would like to create an API Route, please click + right next to API Route Explorer to create a new folder first, filling out the required information.
Before creating the folder:
After creating the folder:
Please note that

- Folder Name is required.
- Description is optional.
- Folder Names cannot be the same.

By clicking Create API Route on the top right, you will be able to configure an API for a specific data process. We need to stress that only deployed data processes can be added; undeployed data processes cannot be selected, and hence the API route configuration cannot be completed.
Please note that

- API Route Name, HTTP Method, HTTP Path, and Linked Data Process are required.
- Description is optional.
- For HTTP Path, you only need to fill in the sub-path; the domain name is set by default in the system, so there is no need to put the domain name here. (E.g., the complete HTTP Path might be https://api.loc-cse.fst.network/api/v1/search; here, only /api/v1/search needs to be filled in.)
- HTTP Paths cannot be the same.

After configuring an API, you will see a success page. By requesting this API on any API platform (such as Postman, Insomnia, etc.), it will return the result stated in the aggregator logic of your data process.
(Requesting on Postman)
If you would like to edit a specific API Route, please click that API Route on the top right. In the pop-up window, please click Edit as shown below to start your revision. Once you complete the revision, please click Update to keep it; if you would like to discard the revision, please click Cancel to exit.
If you would like to remove a specific API Route, please right-click that API Route and select Delete API Route.

Please note that once you Delete API Route, there is NO WAY to undo it. Please think twice before acting.
This quick start section will show you how to create your first data process in LOC Studio. The data process is extremely simple: it consumes JSON data as its data source and emits an event.
In order to create a data process, you'll need a user account.
Log in with the User Management account (provided by FST Network); in the Users menu, click Add user to create a new account:
Username and Email are required:
We strongly recommend filling in user information in as much detail as possible.
In the Credentials tab, make sure Temporary is switched on in order to set a temporary password for the user:
LOC Studio informs the user to change the password upon first login.
If Temporary is switched on, the user will be asked to change the temporary password after logging in for the first time:
LOC Studio will automatically log out the current user after 5 minutes of inactivity.
Now, after logging in with your new user account, we are going to show you how to create this simple data process, as below:

This data process has only one generic logic with one aggregator logic and will emit one event. After creating and deploying it, you will be able to see it in action in LOC Studio.
In order to create a data process, you need to create a project and a scenario first.
Detailed instructions can be found HERE.
Once you have set up the data process explorer in the previous step, you can now create logics for this data process:

We name the data process Quick Start for the following demonstration and leave the timeout at the default of 180 seconds. Every data process must have one aggregator logic and at least one generic logic.
Just as mentioned earlier, this data process has one generic logic and one aggregator logic. Each logic has two parts:

- if OK function (main logic)
- if Error function (error handling)

If something goes wrong in run(), LOC Studio will run handleError() instead. We will not do anything other than print out some logging messages; after all, it is always ideal to have the error-handling aspect covered in LOC Studio.
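Conceptually, the relationship between run() and handleError() can be sketched as follows. This is NOT LOC's actual runtime, just a standalone illustration of the flow; real LOC logics are asynchronous, but the sketch is simplified to synchronous functions so it can run anywhere.

```javascript
// Conceptual sketch of the dispatch between "if OK" and "if Error"
// (not LOC's actual runtime; simplified to synchronous functions).
function executeLogic(logic, ctx) {
  try {
    logic.run(ctx); // "if OK" main logic
  } catch (error) {
    logic.handleError(ctx, error); // "if Error" error handling
  }
}

// A failing logic whose error handler just logs, as described in the text:
const log = [];
const demoLogic = {
  run: () => { throw new Error("something went wrong"); },
  handleError: (ctx, error) => { log.push(error.message); },
};

executeLogic(demoLogic, {});
console.log(log[0]); // "something went wrong"
```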
Now let's start coding.
In the editing window, click Logic Body. You will find the if OK and if Error tabs:
We will not use Potential Labelling here. It automatically detects events in your code if you use the event structure that it recognises; our customised event will not appear there, though.
These tabs correspond to the main logic and error-handling functions mentioned above. Copy and paste the following code:
Generic logic - [if OK] block
For now, the main supported language in LOC Studio is JavaScript. The if OK code has to be declared as an asynchronous run(ctx) function (ctx is a context object provided by LOC):
For example, if the user sends the following JSON data:
This logic would send the following event:
You might also notice that we use ctx.agents.eventStore.emit to send an event (which will be stored in the event store with the user's name and age from the JSON payload). ctx.agents is built in, so you don't need to worry about importing it from libraries.
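The original code block did not survive in this copy; the following is a hedged reconstruction based on the description above, NOT the official sample. How the JSON payload is read from ctx, the label/DID values, and the exact emit signature are all assumptions, and a mock ctx stands in for LOC's runtime so the sketch can run standalone.

```javascript
// Hedged reconstruction - not the official quick-start sample. A mock ctx
// stands in for LOC's runtime; the payload accessor and emit signature are
// assumptions.
const emitted = [];
const ctx = {
  payload: { name: "Ian", age: 30 }, // assumed accessor for the JSON payload
  agents: {
    eventStore: {
      emit: async (events) => emitted.push(...events),
    },
  },
};

async function run(ctx) {
  const { name, age } = ctx.payload;
  // Store an event carrying the user's name (in the label) and age (in
  // meta), as the surrounding text describes.
  await ctx.agents.eventStore.emit([
    {
      labelName: `Hello, ${name}!`, // hypothetical label containing the name
      sourceDID: "quickstart",      // hypothetical source DID
      targetDID: name,              // hypothetical target DID
      meta: String(age),            // age kept as a string in meta
    },
  ]);
}

run(ctx);
console.log(emitted[0].meta); // "30"
```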
Generic logic - [if Error] block
The if Error handler is another function, declared as handleError(ctx, error):
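The original block is missing in this copy; per the text, it only prints a logging message. A minimal sketch (plain console.error is used here, which is an assumption about how logging is done):

```javascript
// Minimal sketch: the "if Error" function only logs the error, as described.
async function handleError(ctx, error) {
  console.error(`An error occurred: ${error.message}`);
}

// Standalone demonstration with a dummy error:
handleError({}, new Error("demo"));
```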
A data process must have one aggregator logic, which is pretty much the same as any generic logic, except that an aggregator logic cannot emit/retrieve events. The aggregator logic is used to return the final result (usually also JSON data) after the data process is executed.
Aggregator logic - [if OK] block
It is the same as the generic logic, with a main run(ctx) function:
This time we use the agent ctx.agents.result.finalize to send back a JavaScript object (which will be converted into a JSON string). The result will be like this:
We will see the actual response later. Please note that we are NOT sending a standard HTTP response; the fields of the JSON response can be customised however you like.
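The aggregator sample referenced above was not preserved here; a hedged reconstruction follows, NOT the official sample. The result field names are assumptions, and finalize is mocked so the sketch can run standalone outside LOC.

```javascript
// Hedged reconstruction of the aggregator's run(ctx) - not the official
// sample. Result field names are assumptions; finalize is mocked so the
// sketch runs standalone.
let finalResult = null;
const ctx = {
  agents: {
    result: {
      finalize: (value) => { finalResult = value; }, // mock of the real agent
    },
  },
};

async function run(ctx) {
  // Send back a plain JavaScript object; LOC converts it into a JSON string.
  ctx.agents.result.finalize({
    status: "ok",                    // hypothetical field
    message: "Quick Start executed", // hypothetical field
  });
}

run(ctx);
console.log(JSON.stringify(finalResult)); // {"status":"ok","message":"Quick Start executed"}
```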
Aggregator logic - [if Error] block
The error handling code for the aggregator logic is the same as before (log the error):
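As the text notes, it mirrors the generic logic's handler; a minimal sketch (the logging call is an assumption):

```javascript
// Same pattern as the generic logic's handler: just log the error.
async function handleError(ctx, error) {
  console.error(`Aggregator error: ${error.message}`);
}

// Standalone demonstration with a dummy error:
handleError({}, new Error("demo"));
```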
We have finished creating the data process in Step 2. Now we can deploy it.
Right-click the data process and select Deploy Data Process. This will initiate the deployment process.
A data process is normally triggered via an API route. However, LOC Studio also allows you to invoke it directly for testing. This is called a single data process execution.
Later we will see how to deploy an API route on an API platform as well.
After the data process is deployed, right-click it again and select Execute Data Process.
The screen will then ask you to upload a payload. Create a payload.json file on your computer with the following content:
payload.json
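The payload content itself did not survive in this copy; based on the fields and values described (name "Ian", age 30), it would be:

```json
{
  "name": "Ian",
  "age": 30
}
```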
Make sure the JSON data contains the name and age fields so that the data process can respond properly. You can change Ian and 30 to another name or age you prefer.
Upload your payload file and click Execute:
If you see something like the result below, the data process is executed successfully:
When you click the JSON button with the eye icon, you'll find the JSON response returned by the aggregator logic, like this:
What we did in Step 4 was just a quick test. To trigger the data process properly, we need an API route so that we (or any code) can send HTTP requests to it.
The difference between the single data process execution and API routes is that an API route can trigger multiple data processes.
Also please note that only the deployed data processes can be linked to API routes.
Firstly, go to the API Route tab, create an API route folder, and then click Create API Route in the upper right corner:
We set our API route as follows:

- API Route Name: Quickstart
- HTTP Method: POST
- HTTP Path: /YH/Quickstart
- Sync
- JSON
- Linked Data Process: Quick Start

The actual API path would be something like https://api.loc.xxx/YH/Quickstart.
Afterwards, click Create.
You can use an API client tool such as Postman or Insomnia to request the URL.
We will use the same payload as in Step 4 (but this time we can simply copy and paste it into the request body):
Select POST and paste the full API route (including the server URL). Afterwards, paste the JSON payload into the body.
Click Send, and if all goes well, you can expect to see a result like this:
You can see the complete response from the data process; notice that the response in Step 4 is actually included under the data field.
There is one more thing you can do with the events: inspect the data lineage, the graphical representation of events.
Click the Event tab under Data Discovery and find the event you just sent:
You can see that the Label Name of the event indeed contains the name "Ian" from our JSON payload.
Now:

1. Copy the Execution ID of your event.
2. Click Add filter on the top left.
3. Set Field to Execution ID and paste your ID in Value.
4. Click Save.
5. Click Event again, then click the slider bar below Applied Filters.

Now let's switch the data discovery window to Graph mode, and it shows the data lineage of the event:
You can use your mouse scroll wheel to zoom in/out and drag the DIDs around.
If you click on the event label and then click the small menu icon on the right, you can inspect its details. Here you can find that the age value 30 is in the meta field.
Also, if you send multiple requests to the API route with the same JSON payload, you will see multiple events appear between the same source and target in Data Discovery.
Congratulations! You have created your first LOC data process and invoked it via an API route. This already covers many of the basics of LOC Studio.
Imagine you walk into McDonald's to buy a meal: the staff key in your order, then the kitchen starts to prepare your meal and delivers it to the front desk once it is completed.
Today, we are going to use LOC Studio to record this transaction (and even other customers'). Most importantly, we would like to know if every order is successfully prepared through event sourcing on LOC Studio.
1-A. (generic logic) to set order data
1-B. (generic logic) to get order data and emit an event to record orders into the event store
1-C. (aggregator logic) to return a success/error message to tell whether the execution of this logic is successful
2-A. (generic logic) to get events from the 1st data process and to allocate meals to the kitchen staff to produce
2-B. (generic logic) to deliver order and emit an event to record who is responsible for which delivery into the event store
2-C. (aggregator logic) to return a success/error message to tell whether the execution of this logic is successful
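The sample code blocks below did not survive in this copy. As one hedged illustration of logic 1-B, an event per ordered item might be emitted like this; all names, the order data, and the emit signature are assumptions, and a mock ctx makes the sketch runnable outside LOC.

```javascript
// Hedged illustration of logic 1-B only - not official sample code. All
// names are assumptions; a mock ctx stands in for LOC's runtime.
const emitted = [];
const ctx = {
  // Order data assumed to be prepared by logic 1-A:
  order: { id: "order-001", customer: "Alice", items: ["fried chicken", "lemonade"] },
  agents: {
    eventStore: {
      emit: async (events) => emitted.push(...events),
    },
  },
};

async function run(ctx) {
  const { id, customer, items } = ctx.order;
  // Emit one event per ordered item so every order line is recorded in the
  // event store, enabling event sourcing later.
  await ctx.agents.eventStore.emit(
    items.map((item) => ({
      labelName: "order_placed", // label shared by all order events
      sourceDID: customer,       // who placed the order
      targetDID: item,           // what was ordered
      meta: id,                  // which order the event belongs to
    }))
  );
}

run(ctx);
console.log(emitted.length); // 2
```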
Please find an example of the business process analysis graph below.
In order to create a data process, you need to create a project and a scenario first.
Detailed instructions can be found HERE.
Once the data process explorer from the previous step has been set up, we can now create the business/data logics.
To begin with the 1st business/data process, there will be 2 generic logics and 1 aggregator logic to set up.
(sample code - If OK)
(sample code - If ERROR)
(sample code - If OK)
(sample code - If ERROR)
(sample code - If OK)
(sample code - If ERROR)
Moving to the 2nd business/data process, there will be 2 generic logics and 1 aggregator logic to configure as well.
(sample code - If OK)
(sample code - If ERROR)
(sample code - If OK)
(sample code - If ERROR)
(sample code - If OK)
(sample code - If ERROR)
In Step 2, we completed creating 2 data processes, each with 2 generic logics and 1 aggregator logic. Once they are complete, deploy these 2 data processes respectively for the execution later.
By right-clicking a specific data process, you can deploy it.
Please note that in LOC Studio there are 2 ways to execute a data process: either through the single data process execution or via API routes.
In this step, we are using the single data process execution. After a data process is deployed, you can right-click that deployed data process again and select Execute Data Process.
Because, in our case, we do not need to import any data source, we can skip this and click Execution to continue.
When the execution completes, you will see this pop-up execution result, where you can check out some details. For instance, you can check or download your execution result (If OK / If ERROR) according to the code snippets you wrote in the Aggregator Logic.
Moreover, if any events were created in the data process, you can switch to Emitted Events to see more details. It is worth mentioning that when you click Repository, you will be led to a new Data Discovery window where the event information and data lineage can be found. For a detailed explanation and introduction of Data Discovery, please refer to Step 6.
Besides the single data process execution stated in Step 4, here we are going to introduce another way of executing data processes via API routes.
In the beginning, you have to create an API Route folder as usual. Once the folder is built, you can create several API routes of your interest. In this case, you can choose to create 1 API route or 2; if you create 2 API routes, each will be linked with 1 data process. Here we are going to demonstrate how to configure merely 1 API route linked with the 2 data processes we created previously.
Please remember that you need to link this API route with our aforementioned data processes (McD_1 and McD_2). Additionally, in HTTP Path, you are required to fill in the sub-path only (that is, you can skip the domain name).
After clicking Create, the API route is created successfully, as shown below.
The next thing you need to do is use an API platform; here we use Postman for demonstration. By entering the request URL (domain name + sub-path), it will return a result exactly the same as what we tested in Step 4: the response will be a success message.
Please note that if there is an error message, there could be several reasons. Here are some of the common error messages you might encounter, for your reference.
After requesting the URL in Postman successfully, you will also get an execution ID, which can be put into Data Discovery to search for events.
Please note that events can be searched here only when they are emitted in the logics.
After putting the execution ID into the filter, you might get this graph.
It showcases the relationships across the entire use case, from what food is in each order to who prepares which food.
This is exactly our desired result. From this graph, not only is the relationship clearly specified, but we can also see whether any food has not yet been prepared, such as fried chicken, lemonade, etc. It is useful for event sourcing.
On top of this use case, you might also want to try out subsequent events and business/data processes, or refine them. For example, you can take prices into account and plan further for billing.