# ML 2021/09/09
## Goal
1. Code: Implement `TaskRunner`, a tool that divides, schedules, and runs tasks on specific resource groups (e.g., GPUs).
2. Experiments: Try a smaller T and a simpler dataset on CelebA.
## Code: TaskRunner
- Main Class: [task_runner.py](https://github.com/FrankCCCCC/ntk-generative/blob/sychou/sychou/label-space/task_runner.py)
- Usage Example: [run.py](https://github.com/FrankCCCCC/ntk-generative/blob/sychou/sychou/label-space/run.py)
A simple example:
```python=1
from task_runner import TaskRunner

def test_run(epoch: int, decay: str, gpu: int, dataset_size: int):
    import os
    # Bind this task to the GPU assigned by TaskRunner before importing JAX.
    os.environ["CUDA_VISIBLE_DEVICES"] = f'{gpu}'
    import jax.numpy as np
    print(f"Epoch: {epoch}, Decay: {decay}, Dataset Size: {dataset_size}, GPU: {gpu}")

if __name__ == '__main__':
    config = {
        'section-1': {  # Sections are executed sequentially.
            'group-1': {  # Groups under the same section are executed concurrently.
                'Call': test_run,  # 'Call' can be either a Python function or a command string.
                'Param': {  # TaskRunner enumerates every combination of the parameters and executes each one once.
                    'decay': ['exp', 'anneal', 'static'],
                    'epoch': [100, 1000, 10000],
                    'dataset_size': [1000, 2000, 3000]
                },
                'Async': {  # Tasks in the same group are scheduled onto these resources by TaskRunner at runtime.
                    'machine': [0, 1],
                    'gpu': [0, 1]
                }
            },
            'group-2': {  # 'group-2' is another resource group that handles different tasks from 'group-1' during 'section-1'.
                'Call': 'ls',
                'Param': {
                    '': ['-l', '-a', '-la']
                },
                'Async': {
                    '': []
                }
            }
        },
        'section-2': {
            'group-1': {
                'Call': 'ls',
                'Param': {
                    '': ['-a']
                }
            }
        }
    }

    tr = TaskRunner(config=config)
    tr.run()
```
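For intuition, the `Param` expansion described in the comments behaves like a Cartesian product over the parameter lists. The helper below is only an illustrative sketch of that behavior; `expand_params` is not part of `TaskRunner`'s actual API.
```python
from itertools import product

def expand_params(param: dict) -> list:
    """Illustrative sketch: enumerate every combination of the parameter lists,
    mirroring how TaskRunner turns a 'Param' block into individual tasks."""
    keys = list(param.keys())
    return [dict(zip(keys, values)) for values in product(*(param[k] for k in keys))]

# 3 decays x 3 epochs x 3 dataset sizes -> 27 tasks for 'group-1' above.
tasks = expand_params({
    'decay': ['exp', 'anneal', 'static'],
    'epoch': [100, 1000, 10000],
    'dataset_size': [1000, 2000, 3000]
})
print(len(tasks))  # 27
```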
## Experiments: Results
We try to enhance the creativity of the generator by tuning `train_t_rate`.

Common settings:
- Dataset: Original CelebA
- Noise Size: 10
- Epoch: 10000
- T: 65536.0 * `train_t_rate`
- Perturbation Method: None
- Diag Reg: 1e-5
- `target_distribution`: single
- Loss Type:
$$
\min_{M} \operatorname{CrossEntropyLoss}\left(1,\ \left(I - e^{\eta t K_{M+N,\,M+N}}\right) y_{M+N}\right)
$$
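As a rough illustration only (not the project's training code), the loss above can be written in JAX as follows. The kernel matrix `K` over the M+N points, the labels `y`, the learning rate `eta`, and the training time `t` (here T = 65536.0 * `train_t_rate`) are assumed inputs, the minimization over the M generated samples is omitted, and the exponent's sign follows the formula as written above.
```python
import jax.numpy as jnp
from jax.scipy.linalg import expm

def label_loss(K, y, eta, t, eps=1e-7):
    """Sketch of the loss above: cross entropy between an all-ones target and
    (I - exp(eta * t * K)) @ y, with K the (M+N)x(M+N) kernel and y the labels."""
    n = K.shape[0]
    preds = (jnp.eye(n) - expm(eta * t * K)) @ y
    preds = jnp.clip(preds, eps, 1.0 - eps)   # keep log() finite
    return -jnp.mean(jnp.log(preds))          # CE(1, p) = -log(p) for target 1
```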
## Exp1. Batch Size 256, Dataset Size 256
- Batch Size: 256
- Dataset Size: 256
- `train_t_rate`: 0.0001, 0.001, 0.01, 0.1, 0.5, 1, 3, 6
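This sweep can be driven by the `TaskRunner` above. The sketch below is only a hedged example: `run_experiment` and its parameter names are hypothetical stand-ins for the actual experiment script, and the accepted `Async` keys are assumed from the earlier example.
```python
from task_runner import TaskRunner

def run_experiment(train_t_rate: float, batch_size: int, dataset_size: int, gpu: int):
    # Hypothetical entry point for one training run; the real script may differ.
    print(f"train_t_rate={train_t_rate}, batch={batch_size}, data={dataset_size}, gpu={gpu}")

if __name__ == '__main__':
    config = {
        'exp1': {
            'group-1': {
                'Call': run_experiment,
                'Param': {
                    'train_t_rate': [0.0001, 0.001, 0.01, 0.1, 0.5, 1, 3, 6],
                    'batch_size': [256],
                    'dataset_size': [256]
                },
                'Async': {'gpu': [0, 1]}
            }
        }
    }
    TaskRunner(config=config).run()
```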
### Train_t_rate = 6



### Train_t_rate = 3



### Train_t_rate = 1



### Train_t_rate = 0.5



### Train_t_rate = 0.1



### Train_t_rate = 0.01



### Train_t_rate = 0.001



### Train_t_rate = 0.0001



## Exp2. Batch Size 512, Dataset Size 512
- Batch Size: 512
- Dataset Size: 512
- `train_t_rate`: 0.0001, 0.001, 0.01, 0.1, 1, 4, 8, 16
### Train_t_rate = 16



### Train_t_rate = 8



### Train_t_rate = 4



### Train_t_rate = 1



### Train_t_rate = 0.1



### Train_t_rate = 0.01



### Train_t_rate = 0.001



### Train_t_rate = 0.0001


