# Unity
Unity has strict version dependencies, and getting a working setup takes a lot of time to learn. We therefore document our process here, hoping it will be helpful for whoever does this next.
Using Unity involves two parts: 1) developing the environment in the desktop application, and 2) running the environment to train an agent.
## Desktop App
To develop an environment, we need to set up Unity on a local PC. To choose which version to install, strictly follow the official version table: https://github.com/Unity-Technologies/ml-agents#releases--documentation. However, **note that there may be additional version dependencies beyond what that table documents**; if you run into problems building your env, we recommend the versions listed below.
The steps are:
1. download / install Unity Hub (https://unity3d.com/get-unity/download)
2. install Unity through Unity Hub (version: 2019.4.30f1)
3. create your own project
4. install the ML-Agents-related packages
- download ml-agents release 18 (https://github.com/Unity-Technologies/ml-agents/archive/release_18.zip)
- decompress the zip file
- in your project, select Window in the top bar -> select Package Manager
- select the + button, then choose the option to add a package from disk
- select com.unity.ml-agents/package.json from the decompressed folder
- select com.unity.ml-agents.extensions/package.json from the decompressed folder
- search for the Input System package and install it
- update its version to 1.1.0-preview.3
- search for the Newtonsoft Json package and install it
- update its version to 2.0.0
5. make your own env and export it to run on a server: https://github.com/Unity-Technologies/ml-agents/blob/main/docs/Learning-Environment-Executable.md
- Development
- this link can be helpful when designing your env: https://github.com/Unity-Technologies/ml-agents/blob/main/docs/Learning-Environment-Create-New.md
- if the Input System is not working (caused by a conflict between Unity's old and new input libraries) -> https://stackoverflow.com/questions/65027247/unity-conflict-between-new-inputsystem-and-old-eventsystem
- after developing your env, select File in the top bar -> select Build Settings
- select the platform and build
- **If Linux is not visible, you need to reinstall Unity with the Linux Build Support option.**
- when building your env, do not check any of the option boxes!
- after building, copy the files below to use the build as an environment:
- <YOUR_ENV_NAME>_Data
- <YOUR_ENV_NAME>.x86_64
- LinuxPlayer_s.debug
- UnityPlayer_s.debug
- UnityPlayer.so
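The copy step above can be sanity-checked with a small script run from the build output directory. This is a sketch only: the name `MyEnv` is a placeholder for `<YOUR_ENV_NAME>`.

```python
from pathlib import Path

# "MyEnv" is a placeholder for <YOUR_ENV_NAME>; run this in the build output dir.
ENV = "MyEnv"
required = [f"{ENV}_Data", f"{ENV}.x86_64",
            "LinuxPlayer_s.debug", "UnityPlayer_s.debug", "UnityPlayer.so"]

# Report which of the artifacts needed on the server are missing, if any.
missing = [f for f in required if not Path(f).exists()]
print("all artifacts present" if not missing else f"missing: {missing}")
```

If nothing is reported missing, copy exactly those five entries to the server.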
### Helpful links
https://github.com/Unity-Technologies/ml-agents
https://github.com/Unity-Technologies/ml-agents/blob/main/docs/Learning-Environment-Examples.md#gridworld
https://dev.to/tsuz/run-unity-ml-agents-examples-on-mac-52p6
### Official examples in jupyter notebook
https://github.com/Unity-Technologies/ml-agents/tree/main/colab
### Version dependency
- Unity version (not hub version!): 2019.4.30f1
- ml-agents: 2.1.0-exp.1
- ml-agents extensions: 0.5.0-preview
- input system: 1.1.0-preview.3
- Newtonsoft Json: 2.0.0
## Using the environment
After building the Unity environment, we need to use it to train our model. A common interface is the OpenAI Gym API, so below we share the Unity env wrapped with the Gym wrapper.
### Install libraries
run
```
wget https://github.com/Unity-Technologies/ml-agents/archive/release_18.zip
unzip release_18.zip
cd ml-agents-release_18
pip install -e ./ml-agents-envs
pip install -e ./ml-agents
pip install -e ./gym-unity
```
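To sanity-check the install, the sketch below compares installed pip package versions against pins. The pinned value `0.27.0` is our assumption for the Python packages shipped with release 18 (verify with `pip show mlagents`), and the exact distribution names may differ from these guesses.

```python
from importlib import metadata  # Python 3.8+

def check_versions(pins):
    """Return {package: (installed_version_or_None, pinned, matches)}."""
    report = {}
    for pkg, want in pins.items():
        try:
            got = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            got = None  # package not installed (or name spelled differently)
        report[pkg] = (got, want, got == want)
    return report

# "0.27.0" is an assumed pin for release 18; check `pip show mlagents`.
pins = {"mlagents": "0.27.0", "mlagents-envs": "0.27.0", "gym-unity": "0.27.0"}
for pkg, (got, want, ok) in check_versions(pins).items():
    print(f"{pkg}: installed={got} expected={want} {'OK' if ok else 'MISMATCH'}")
```

A `MISMATCH` with `installed=None` usually means the editable install above did not run, or the distribution name differs.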
### Run sample code to check that your env is working
run
```
from pyvirtualdisplay import Display
from mlagents_envs.environment import UnityEnvironment
from gym_unity.envs import UnityToGymWrapper

# Unity needs a display even on a headless server, so start a virtual one.
display = Display(backend='xvnc', size=(64, 64), visible=0, rfbport=0)
display.start()

env_path = "<Your_Env_Path>"  # path to your built <YOUR_ENV_NAME>.x86_64
unity_env = UnityEnvironment(env_path)
env = UnityToGymWrapper(unity_env, uint8_visual=True, allow_multiple_obs=True)

obs = env.reset()  # obs[0]: state, obs[1]: visual observation
done = False
step = 0
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
    step += 1
    print(step, reward, done)

env.close()
display.stop()
```
If your env works with the code above, you can use it in your experiments like any other Gym environment.
### Helpful links
The Gym wrapper has some **limitations**; please check them at the link below.
https://github.com/Unity-Technologies/ml-agents/blob/main/gym-unity/README.md
### Notes
- If you want to run multiple actors, you need to give each environment a different port number, e.g.
```
unity_env = UnityEnvironment(os.path.join('unity_envs', env_name, env_name + 'Env-Linux'),
                             base_port=portpicker.pick_unused_port(), side_channels=[])
```
portpicker is a library that can be installed through pip.
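If you prefer not to add a dependency, a stdlib-only stand-in for `portpicker.pick_unused_port` is easy to sketch. The `UnityEnvironment` usage in the comment is illustrative only and requires a built environment.

```python
import socket

def pick_unused_port():
    """Ask the OS for a free TCP port by binding to port 0
    (stdlib stand-in for portpicker.pick_unused_port)."""
    with socket.socket() as s:
        s.bind(("", 0))
        return s.getsockname()[1]

# Illustrative usage: give each of N parallel actors its own base_port.
# from mlagents_envs.environment import UnityEnvironment
# envs = [UnityEnvironment("<Your_Env_Path>", base_port=pick_unused_port())
#         for _ in range(4)]
```

Note that a port returned this way is released before Unity binds it, so there is a small race window; portpicker handles this more carefully.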