# Introduction
A pipeline in Ds365ai is essentially a series of workflow steps executed in a specific order based on dependencies. A dependency is formed when the output of one step acts as the input of another step within the pipeline. This structure ensures that the steps run in a coordinated manner, with each step consuming the outputs of the steps it depends on.
To be noted:
- A `Pipeline` execution is called a `workbench`
- A `Workflowstep` execution is called a `job`
Thanks to the dependency concept, a pipeline can have two or more jobs running in parallel when they do not depend on each other.
A pipeline must have at least 1 job specified in the config.
# Workbench execution diagram in a nutshell

The diagram above shows how a Workbench and its Stages (Jobs) are executed and how dependencies shape the execution tree (a matching `stages` sketch follows this list):
1. `Job A1` and `Job B1` run first, in parallel, because they do not depend on each other.
2. `Job A2` depends on the output of `Job A1`, hence it runs after `Job A1` completes.
3. `Job C1` depends on the outputs of `Job A2` and `Job B1`, hence it runs after both `Job A2` and `Job B1` complete.
4. Both `Job D1` and `Job D2` depend on the output of `Job C1`, hence they run after `Job C1` completes.
5. The output of the workbench gathers one or more outputs from `Job A2`, `Job B1`, `Job D1` and `Job D2`.
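For reference, here is a minimal `stages` sketch that would reproduce the execution tree above. The workflowstep IDs, input variable names (`data`, `left`, `right`) and output field names (`result`) are placeholders for illustration only; in a real pipeline they must match the referenced workflowsteps' specs.
```json
// illustrative only: IDs, input names and output fields are placeholders
{
  "stages": [
    { "id": "job_a1", "name": "Job A1", "workflowstepId": "<id-of-workflowstep-A1>" },
    { "id": "job_b1", "name": "Job B1", "workflowstepId": "<id-of-workflowstep-B1>" },
    {
      "id": "job_a2",
      "name": "Job A2",
      "workflowstepId": "<id-of-workflowstep-A2>",
      "input": {
        "data": { "fromStage": "job_a1", "outputField": "result" }
      }
    },
    {
      "id": "job_c1",
      "name": "Job C1",
      "workflowstepId": "<id-of-workflowstep-C1>",
      "input": {
        "left": { "fromStage": "job_a2", "outputField": "result" },
        "right": { "fromStage": "job_b1", "outputField": "result" }
      }
    },
    {
      "id": "job_d1",
      "name": "Job D1",
      "workflowstepId": "<id-of-workflowstep-D1>",
      "input": {
        "data": { "fromStage": "job_c1", "outputField": "result" }
      }
    },
    {
      "id": "job_d2",
      "name": "Job D2",
      "workflowstepId": "<id-of-workflowstep-D2>",
      "input": {
        "data": { "fromStage": "job_c1", "outputField": "result" }
      }
    }
  ]
}
```
Because `job_a1` and `job_b1` declare no mapping from another stage, they start in parallel; every other stage waits for the stages it references via `fromStage`.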
# Pipeline Directory Structure
To compile a Pipeline, initiate a Resource Request through the Ds365ai Web Application. This process fetches the code from the git source path (target folder) you specify, validates the config JSON and saves it to the Tenant's S3 under the `system/` path. Users can then see this item under `Resources / Pipelines` in the platform's user interface.
Ds365ai expects each pipeline to reside in a directory with a specific structure, which looks like this:
```
// from root directory of git repo
dockerimages/
workflowsteps/
pipelines/
└── MyPipeline/
    ├── pipeline.json
    └── README.md
```
# Pipeline Configuration
## Example Config
```json
{
  "name": "my_pipeline",
  "title": "My Pipeline",
  "description": "Lorem ipsum",
  "tags": ["demo", "good data"],
  "stages": [
    {
      "id": "prepare_data_set",
      "name": "Prepare Data Set",
      "workflowstepId": "2208b7fc4ffe4daea938f02618921384"
    },
    {
      "id": "train_x",
      "name": "Train X",
      "workflowstepId": "313133f390824b30ad620ef440b58664",
      "input": {
        "training_data": {
          "fromStage": "prepare_data_set",
          "outputField": "training_ready_data_set"
        }
      }
    },
    {
      "id": "evaluation_train_model_source",
      "name": "Evaluate Train Model Source",
      "workflowstepId": "fe5a7854c42040bfa127ad3b1435e50e",
      "input": {
        "model_s3_uri": {
          "fromStage": "train_x",
          "outputField": "model_s3_source"
        }
      }
    }
  ],
  "output": [
    {
      "name": "trained_model_path",
      "source": {
        "fromStage": "train_x",
        "outputField": "model_s3_source"
      }
    },
    {
      "name": "evaluation_result",
      "source": {
        "fromStage": "evaluation_train_model_source",
        "outputField": "result"
      }
    }
  ],
  "properties": {
    "License": "MIT",
    "Repo": "https://git@git.example.com/abc",
    "DeepLearningFramework": "pytorch==2.0",
    "CustomKey": "CustomValue"
  }
}
```
## `.name`
(str, required)
> NOTE: Matches `^[a-zA-Z][a-zA-Z0-9_]{0,99}$`
The pipeline name. Like the Workflowstep name, it must not include spaces or other special characters.
## `.title`
(str, required)
Unlike `.name`, the title is displayed as the default workbench name when you click the RUN button in the UI.
## `.description`
(str, required)
A string value describing the pipeline in more detail.
## `.tags`
(array[str], optional) defaults to `[]`
Meta tags for the pipeline.
## `.input`
(arr[dict], optional)
An array of JSON objects, each with the same definition as a workflowstep's `.inputSpec[i]`. The objects in the array are pipeline-level inputs.
If present, the pipeline becomes a `Locked Pipeline`, where every pipeline-level input must be properly mapped down into the pipeline's workflowsteps. A `Locked Pipeline` is a technique to hide the complexity of the pipeline from UI users (e.g. citizen data scientists and other UI users): it hides the input forms of all workflowsteps, so the UI user only sees a single input form for the pipeline itself, and those input parameters are delivered to the workflowsteps according to the mapping configured in `.stages`.
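For instance, a minimal sketch of a single pipeline-level input and the stage mapping that consumes it might look like the fragment below. The workflowstep ID is a placeholder, and the mapping assumes the target workflowstep declares an input named `data_set` in its `.inputSpec`:
```json
// illustrative fragment of pipeline.json; names and IDs are placeholders
{
  "input": [
    {
      "name": "raw_data_set",
      "label": "Raw Data Set",
      "class": "s3any",
      "required": true,
      "help": "Data set fed into the first stage"
    }
  ],
  "stages": [
    {
      "id": "prepare_data_set",
      "workflowstepId": "<workflowstep-id>",
      "input": {
        "data_set": { "fromPipelineInput": "raw_data_set" }
      }
    }
  ]
}
```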
## `.stages`
(arr[obj], required)
A stage is an execution unit of the pipeline: an object holding the information that targets a workflowstep.
`.stages[]` describes how the jobs are executed along with their input dependency mapping.
Each stage has the following format:
- `.id` (str, required) Stage id. This field is only used within the configuration in order to support for mapping input parameters between stages.
Matching `[a-zA-Z0-9_]+`
- `.name` (str, optional) The name of the Job as displayed on the Monitor Page or the Workflow Step List Page in the UI. By default, it is the value of the field `.title` in the workflowstep configuration.
- `.workflowstepId` (str, required) The workflowstep ID for the stage
- `.input` (dict, optional) Input mapping specification used when running this stage's workflowstep.
Each key in the mapping refers to an input variable name of the workflowstep, which you can find at `.inputSpec[i].name`. The value for each key takes one of two forms (see the combined sketch after this list):
**SCENARIO:** Map the input from the `Locked Pipeline` to the stage
- `.fromPipelineInput`: (str, required) From `.input[i].name` in this configuration
**SCENARIO:** Map an output field of `stage A` to `stage B`'s input
- `.fromStage`: (str, required) From `.stages[i].id` in this configuration
- `.outputField`: (str, required) The name of the output field defined in the workflowstep configuration of `stage A`
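Putting the two scenarios together, a stage's `.input` block can combine both forms. The names below are taken from the examples on this page and are illustrative only:
```json
// illustrative fragment of a stage definition
"input": {
  "training_data": {
    "fromStage": "prepare_data_set",
    "outputField": "training_ready_data_set"
  },
  "parallel_handling": {
    "fromPipelineInput": "is_parallel"
  }
}
```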
## `.output`
(arr[dict], optional)
The output mapping for the pipeline.
In DS365AI, `Workflowstep` and `Pipeline` are considered same-level entities, both called `Executable`. An Executable is a runnable object that receives input(s) and returns output(s) as a single unit function/algorithm.
> To be noted, the input and output of a Workflowstep / Pipeline can be an empty list (`[]`) if you don't have any.
Because a pipeline is only an instruction for running a series of workflowsteps in a particular order, the only way for the pipeline to have its own output is to look at its stages. In other words, the output of the pipeline comes from the outputs of its stages.
Each item in the output list of the Pipeline has the structure below (a concrete entry follows the list):
- `.name`: (str, required) The name of the output parameter. As with variable names in code, it cannot contain spaces or special characters and should follow the conventional variable naming pattern.
Matches `[a-zA-Z0-9_]+`
- `.source`: (dict, required) Source of the Pipeline output. This is where the mapping of the output value is defined
- `fromStage`: (str, required) From `.stages[i].id` in this configuration
- `outputField`: (str, required) The name of the output field defined in the workflowstep configuration of the source stage
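For example, the entry below (taken from the example config above) exposes the trained model path produced by the `train_x` stage as the pipeline output `trained_model_path`:
```json
{
  "name": "trained_model_path",
  "source": {
    "fromStage": "train_x",
    "outputField": "model_s3_source"
  }
}
```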
## `.properties`
(dict, optional)
This field is usually used for extra meta information about the Pipeline.
e.g.
```json
// pipeline.json
{
  ...,
  "properties": {
    "whatsNews": "abc...",
    "license": "MIT",
    "modinVersion": "0.23.1",
    "rayVersion": "0.24.1"
  }
}
```
## Example of a `Locked Pipeline`
When displayed on the UI, a `Locked Pipeline` hides the input forms of all its workflowsteps. It only displays the input form of the Pipeline level itself.
> Note: In this case the configuration is strictly validated: all stages must have all required inputs satisfied before the Pipeline can be created in DS365AI. Otherwise, a detailed error message is displayed before the Resource Request can be submitted.
```json
{
  "name": "my_pipeline",
  "title": "My Pipeline",
  "description": "Lorem ipsum",
  "input": [
    {
      "name": "preprocessing_data_set",
      "label": "Preprocessing Data Set",
      "class": "s3any",
      "required": true,
      "help": "Lorem ipsum"
    },
    {
      "name": "is_parallel",
      "class": "boolean",
      "required": false,
      "default": false,
      "help": "Whether data_set is loaded and processed in parallel or not"
    }
  ],
  "stages": [
    {
      "id": "prepare_data_set",
      "name": "Prepare Data Set",
      "workflowstepId": "2208b7fc4ffe4daea938f02618921384",
      "input": {
        "data_set": {
          "fromPipelineInput": "preprocessing_data_set"
        },
        "parallel_handling": {
          "fromPipelineInput": "is_parallel"
        }
      }
    },
    {
      "id": "train_x",
      "name": "Train X",
      "workflowstepId": "313133f390824b30ad620ef440b58664",
      "input": {
        "training_data": {
          "fromStage": "prepare_data_set",
          "outputField": "training_ready_data_set"
        }
      }
    },
    {
      "id": "evaluation_train_model_source",
      "name": "Evaluate Train Model Source",
      "workflowstepId": "fe5a7854c42040bfa127ad3b1435e50e",
      "input": {
        "model_s3_uri": {
          "fromStage": "train_x",
          "outputField": "model_s3_source"
        }
      }
    }
  ],
  "output": [
    {
      "name": "trained_model_path",
      "source": {
        "fromStage": "train_x",
        "outputField": "model_s3_source"
      }
    },
    {
      "name": "evaluation_result",
      "source": {
        "fromStage": "evaluation_train_model_source",
        "outputField": "result"
      }
    }
  ]
}
```