Stanford Icy Physics Group
# Lai Research Group Computing Infrastructure

---

[View it on HackMD](https://hackmd.io/dd8wi827SpCLAe8p2Ype6w)

This note serves the following purposes:

- As onboarding training for anyone using a command-line interface for their work, with most examples on our group workstation and a few on Sherlock. It provides best practices for writing code and for using a command-line interface to run it.
- As the reference note for fundamental technical issues and additional tips for enhancing your personal workflow.
- As the reference note for future administrators to understand the system setup.

This note is the main page and contains links to additional references. The main page aims to onboard newcomers so they can grasp the essential concepts and start using the workstation ASAP. The other references teach you how to achieve specific things step by step.

:::info
Before you start, check that you have already received an account and a password for the workstation. If you are outside Stanford, make sure that you are using Stanford's [VPN](https://uit.stanford.edu/service/vpn).
<!---Princeton, make sure that you are using Princeton's [Global Protect](https://workcontinuity.princeton.edu/remoteaccess)--->

$\small\text{^Blue boxes like this one include essential information. Make sure you read and understand.}$
:::

:::success
Feel free to ask questions. Your feedback can help us improve this tutorial.

$\small\text{^Green boxes like this one provide useful tips and ideas on each topic.}$
:::

## A. The Basics

We start by introducing the basic concepts and workflow for using the command line. After reading through A-1 to A-3, you should be able to start working with a command-line interface, such as on the workstation or Stanford's cluster. We hope the tutorial is concise, so everyone can quickly get to enjoying their research work while following best practice.
### A-1 Embrace the Command Line

Using text to control a computer is still the most robust and effective method. We want to ensure everyone can use the computers with text-based commands in the command line.

#### a-1.1 Know Your Home

Since different computers use different languages for their native command-line interface (e.g. Windows, Mac, Linux), we recommend users start by logging into the workstation, as detailed in section B. After login, you shall see:

![](https://i.imgur.com/3BcqlE3.png)

You are now ready to use basic Bash commands!

:::info
After the welcome message, there are two important elements on the screen.
* **User Name**: The `mc4536` indicates my user name. For you, it will be your account name.
* **Location**: The text between the colon `:` and the number sign `#`, in this case the tilde `~`, indicates **where you are** on the computer.

$\small\text{^On a different machine, the colon or the number sign can become something else}$
:::

All files and directories in Linux form a [Tree](https://en.wikipedia.org/wiki/Tree_(data_structure)). The slash `/` separates the names of directories and files. The root of the directory system has no name, so the root path is simply a single slash: `/`. There are many directories under root; for now we focus only on a directory called **home**. The path to that directory is thus `/home`. Under home, there are multiple directories, one for each user. We'll see that shortly.

:::info
There is no apparent difference between a file and a directory. For example, given a path, `/home/mc4536/Conda`, one cannot tell if it is a file or a directory. The fundamental difference between the two is that a directory can contain files but not vice versa.
:::

![](https://i.imgur.com/aNrTKnV.png)\
[Photo credit: TecMint](https://www.tecmint.com/linux-directory-structure-and-important-files-paths-explained/)

When you are on the workstation, at any time, you are inside one of the directories, which is called the **current working directory**.
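The tree structure is easy to explore directly from the command line; a quick sketch (the exact directory names you see may differ slightly on our machine):

```shell
# List the directories directly under the root of the tree
ls /          # e.g. bin  etc  home  opt  usr  var ...

# The shell expands the tilde to your home directory
echo ~        # prints /home/<your user name>
```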
By default, after login, you are in the directory tilde `~`. Tilde `~` is a shortened form of the directory `/home/<your user name>`; this is different for every user. For example, for me, `~` actually means `/home/mc4536`.

`pwd` displays where you are. It displays the full path instead of the abbreviated tilde `~`:\
![](https://i.imgur.com/AmFe1Ou.png)

We call the tilde `~` directory, or `/home/<your user name>`, simply **home**, the **home directory**, or the **user's home**. Everyone has their own home directory. We seldom need to use `/home` itself, so it needs no other name.

:::info
99% of the time when you are on the workstation, **you should work under your home.** You should put all your files under your home (the system prevents you from creating files in other places, anyway).
:::

#### a-1.2 Execute Command

Type `passwd` on the screen, then press *enter*. This is the first command you type. It's a command to change your password. Follow the instructions and set one you like.

Now, we will give it a second shot. This time, only type `passw` and execute. You'll get the following. You can see that the system is smart: when you type a non-existent command, it tries to find suggestions. When you forget a command, you may find the suggestions helpful:\
![](https://i.imgur.com/o9TROVr.png)

Now we do it a third time. This time, type `pas`, then **press Tab twice**; you shall see:

![](https://i.imgur.com/gmCarwM.png)

The **Tab** key is a killer feature of the command line. If you type something with a correct prefix, it tries to complete the rest. If multiple results match, pressing Tab twice displays them all. It saves typing time and the burden of remembering things.

We do it a fourth time. This time we type `info passwd`. You shall see a detailed description of the `passwd` command. Most Linux commands have thorough documentation. Press `q` to exit the description.

Now we do it one last time. This time we type `passwd --help`.
You shall see a summary of the usage and various options of the `passwd` command.

:::info
The general form of executing a Linux command is:

![](https://i.imgur.com/RlMl1aR.png)\
\
Starting with a command, use spaces to separate the options, arguments for options, and arguments. `<Some Argument>` is the syntax we use for arguments in this document. `<Some Argument>` usually contains **no space**, since spaces separate it from other arguments.\
\
For example:
* `passwd`
    * `passwd` is the command. We do not provide any option/argument.
* `passwd --help`
    * `passwd` is the command. `--help` is an option for it. We provide no argument for either the option or the command.
* `info passwd`
    * `info` is the command, and `passwd` is the argument for it.
    * To be clear, the actual running command is `info`. Although the output teaches you how to use `passwd`, it does not trigger the command `passwd`.
:::

:::success
Rule of thumb: **No need to remember commands.** You'll remember them automatically after doing them hundreds of times. Before your body is hardwired to a command, you can easily find suggestions from the system, use Tab, read the documentation from `info <command>` and `<command> --help`, or simply google.
:::

#### a-1.3 Basic Command

We introduce the eight essential commands:

:::info
**Switch your working directory:**
**`cd <target directory>`**

You will see that the location after your name changes.\
You can also confirm that by using `pwd`
\
\
Tips:
- `cd ..` moves "one layer up", i.e., into the parent directory.
- `cd ../..` moves "two layers up", and so on.
\
\
`..` is a convenient word for specifying a path, like the tilde symbol `~` earlier. The meaning of `..` is roughly *'the parent'*. It's not for `cd` only; you can use it with all the commands listed below. For example, `ls ..` lists the content of the parent directory.
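A quick sketch of these path shortcuts in action (the `demo` directory is just an illustrative name):

```shell
cd ~               # jump to your home, e.g. /home/mc4536
mkdir -p demo/sub  # create a small directory tree to play with
cd demo/sub
pwd                # e.g. /home/mc4536/demo/sub
cd ../..           # two layers up: back in your home
pwd                # e.g. /home/mc4536
```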
:::

:::info
**List files/directories under a directory:**\
**`ls <target directory>`**
<br></br>
File names starting with a period `.` are considered hidden. `ls` will not display them by default. One can display hidden files by adding `-a` before `<target directory>`\
![](https://i.imgur.com/ySVoiev.png)

There are quite a few hidden files under users' homes.
:::

:::info
**Copy File(s) and Directory(s)**\
`cp <file1> <file2> <file3> ... <destination>`
<br></br>
Copy all contents under directory(s):\
`cp -r <directory or file 1> <directory or file 2> ... <destination>`
:::

:::info
**Move/Rename File(s) and Directory(s)**\
`mv <source1> <source2> <source3> ... <destination>`
<br></br>
Renaming is actually a special case of moving.\
`mv <old directory or file name> <new directory or file name>`
:::

:::info
**Create New Directory**\
`mkdir <new directory>`\
or\
`mkdir -p ./a/series/of/directory/`
:::

:::info
**Edit a Text File**\
`vim <target for edit>`

If the target does not exist, it creates a new file.
\
\
Using `vim` is worth another book. For now, we only show how to start editing and how to quit the program.
\
\
**Start Editing**

Press `i` and check that the text `-- INSERT --` appears at the bottom left. If so, you are in *insert mode* and ready to add/remove text. You can type/delete text as you normally would. It's a basic mode without the convenient features you are used to, for example, "undo" and "copy-paste".
\
\
**Save and Quit**

Press `esc` when you finish. It will leave insert mode. To save and quit, type a colon `:`, so that there is a `:` at the bottom left. Then type `wq` and press *enter*. It saves the content and sends you back to the command line.
:::

:::info
**Upload/Download File from Workstation**
<br></br>
**You have to execute the `scp` command on your local PC, NOT the workstation.**
<br></br>
Upload from the local PC to the workstation:\
`scp -r D:\what\I\want\to\upload <my-account>@sdss-yaolai.stanford.edu:/place/to/upload/`
<br></br>
Download from the workstation to the local PC -- just reverse the arguments of the upload:\
`scp -r <my-account>@sdss-yaolai.stanford.edu:/target/to/download/ D:\where\I\want\to\store`
:::

:::info
**Remove Files/Directories**

Be careful, there is **no way to recover** removed files.
<br></br>
Remove File(s):\
`rm <file1> <file2> ...`
<br></br>
Remove Empty Directory(ies):\
`rm -d <empty directory1> <empty directory2> ...`\
It removes empty directories only. If a directory is not empty, the action is blocked.
<br></br>
Remove File(s)/Directory(ies) and Their Content Recursively:\
`rm -r <directory or file1> <directory or file2> ...`\
Be careful: this removes everything under a directory.
:::

By combining the above commands, you can perform all the essential operations on the workstation. Though we recommend using the command line, you may occasionally require a graphical user interface. We also provide that for the workstation. Please check "Remote Access to the workstation" in References.

### A-2 Access the Code of Our Group

#### a-2-1 Access the Shared Code from GitHub

We use the [Icy Physics Group Github](https://github.com/YaoGroup) as the platform for collaboratively working on projects. We'll teach you a simple way to use it for sharing your work with others.

:::info
Register an account for GitHub. Contact Yao to access the Icy Physics Group Github codes.
:::

GitHub has prohibited simple passwords since Aug 2021, so we need to set up SSH authentication. It seems cumbersome but saves you from typing a password every time.
Make sure you follow the instructions:

:::info
**Setup the Authentication of GitHub**
- Follow the [<u>"Generating a new SSH key"</u>](https://docs.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent#generating-a-new-ssh-key) section to create and add keys.
- Follow the [<u>"Adding your SSH key to the ssh-agent"</u>](https://docs.github.com/en/github/authenticating-to-github/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent#adding-your-ssh-key-to-the-ssh-agent) section.
- Copy the content of **~/.ssh/id_ed25519.pub** and add it as a key to your GitHub account.
    - Execute `cat ~/.ssh/id_ed25519.pub` to display the content of the file in the terminal.
    - Click **New SSH Key** at [<u>SSH and GPG keys</u>](https://github.com/settings/keys), and copy the content into the **Key** field.
    - Type anything you like into the **Title** field, for example, "Yao Group Workstation".
:::

Now we are ready to use GitHub. The tool we use for code management is **Git**. GitHub tightly integrates with Git. We'll use the [Toy Example](https://github.com/YaoGroup/ToyExample) code as an example. Switch your current working directory to a place you like, and start by executing:

`git clone git@github.com:YaoGroup/ToyExample.git`

Git will download the code from GitHub into a (new) directory *ToyExample* under the current working directory.

#### a-2-2 Upload Your Changes

`cd` into the *ToyExample* directory. Try using `vim` or another method to modify README.md by adding a line below the message section. For example, Ray adds his message `Ray: Welcome! Hope the tutorials are clear and helpful!`

![](https://i.imgur.com/FYqQZ4v.png)

Save the changes, and check them with `git status`. It will indicate that README.md has changed. Now we need to use Git to create a formal savepoint of the change.

:::info
The fundamental element of a version control system is the savepoint.
A savepoint is a snapshot of what the files look like at the time of saving. In Git, we call a savepoint a **commit**. Also, when we use commit as a verb, it means storing changes into a commit.
:::

The first time you commit on the workstation, you have to tell Git who you are. Register your email and name with the following:
- `git config --global user.email <your Github email>`
    - **should be exactly the same as the email of your Github account**
- `git config --global user.name <your Name>`, use a name we all recognize

We are ready to commit your change:
- Select the change to commit by `git add README.md`
- `git commit`, which will pop up vim to create a summary of your change. Here we use `Create first change for <your name>`. Save the summary as you would save a file.

![](https://i.imgur.com/XdfgQw4.png)

- Upload your change by `git push`

Now, by refreshing [the project page](https://github.com/YaoGroup/ToyExample), you shall see your new message on the front page.

#### a-2-3 Download Changes from Others

You may not be the only one working on the project. When someone has uploaded their changes, you can download them by executing `git pull` under the project directory.

:::success
- `git status` checks the file changes
- `git add <changed file1> <changed file2> ...` selects the file(s) to commit
- `git commit` commits the changes. Be sure to type a concise and meaningful summary of your changes.
- `git push` uploads the committed changes
\
\
You may want to check out the References for a more detailed explanation.
:::

### A-3 Edit and Run The Code

We now demonstrate how to modify and run the code for the Ice Sheet/Shelf projects. We use [Shelf1D](https://github.com/YaoGroup/IceShelf1D) as an example. Clone the project to your workstation home:

`git clone git@github.com:YaoGroup/IceShelf1D.git`

#### a-3-1 The Conda Environment

We use [Conda](https://docs.conda.io/en/latest/) to manage the environment for Python.
You can learn more about the Conda environment in the References.

:::info
A Conda environment is an isolated set of files/programs for executing Python scripts. By isolated, we mean that **the environment works independently from any other Python interpreter**, including the system default ones. So users can install Python packages specifically for an environment without affecting any other Python interpreter. This is essential for controlling the packages used to run Python scripts. Without it, we'd encounter many "Why does the script work on my machine, but not yours?" problems.
:::

All files/programs of a Conda environment are under a single directory. For example, the environment we built for running TensorFlow 2.x code is under `/opt/anaconda3/envs/tf24`.

#### a-3-2 Use Standard Environment to Run a Script

Running a script using a specific environment is simple. For IceShelf1D, we use the tf24 environment to run the file `script/1st_order_forward.py` of the project:
- (First time only) set up Conda for use by `conda init`
- Activate the tf24 environment by `conda activate tf24`
    - After which you shall see a `(tf24)` text before your name in the terminal
- Run the script: `python3 script/1st_order_forward.py`

That's all. You can deactivate an environment by `conda deactivate`

:::success
We provide the following standard environments:
<br></br>
> **tf24** (`conda activate tf24`)
>
> > **We use this for most ice sheet/shelf projects.**\
> > The standard environment for TensorFlow 2.X codes.

> **tf115** (`conda activate tf115`)
>
> > The standard environment for TensorFlow 1.X codes.
> **tf114** (`conda activate tf114`)
>
> > Only for running [Raissi's PINN project](https://github.com/maziarraissi/PINNs).
> > It uses tf.contrib and is thus too old for TensorFlow 1.15.

Additional info: [Admin notes on HackMD](https://hackmd.io/7PUOhT4GREysoZkAwp-3Ag?view)
:::

#### a-3-3 Create Your Own Environment

Though non-admin users cannot create named environments under /opt/anaconda3, they can still create custom environments under their own directories. A step-by-step example:

1. Create and move into a target directory by `mkdir -p ~/Conda/my-env && cd ~/Conda/my-env`
2. Create/copy an environment file; you may find the example at the bottom of [this page](https://hackmd.io/7PUOhT4GREysoZkAwp-3Ag) useful.
3. Create the environment by `conda env create -f ./environment.yml -p .`
4. To activate the environment, we have to specify the path: `conda activate ~/Conda/my-env`

After activation, you can run a script by `python3 <your script>`, as we did in a-3-2.

^$\small\text{Though we recommend using an environment file, it's not the only way to create an environment. Check out}$ $\small\href{https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html}{the\ official\ document}$

^$\small\href{https://princeton.zoom.us/rec/share/xW-kiZmsCqcXNZkyDJudxZk59pkP3mhr-8X8jI1H3Z7Hd6_R38LLkBVXRXYrOaQD.w55FzYVTfGEWgwph}{Our\ training\ session\ recording\ might\ help}$

#### a-3-4 Edit Code and Run Jupyter Notebook Using Visual Studio Code

[Visual Studio Code](https://code.visualstudio.com/), or VSCode, is our recommendation for editing Python code. Its ability to work directly on a remote machine, for example, our workstation, is impressive.

:::info
We provide a [<u>step-by-step</u>](https://hackmd.io/meeqtJktRfmAD-8gZwsk-g?view) walkthrough.
:::

After you connect VSCode to the workstation, editing files inside VSCode happens on the workstation. No upload/download or synchronization is required.
This is handy for working with Jupyter Notebooks. We can develop the code remotely on the workstation and easily view the result in VSCode. We can also ask VSCode to use a specified Conda environment (as we did for scripts in a-3-2 & a-3-3) for running the notebook:

:::info
* Install the Microsoft [Python](https://marketplace.visualstudio.com/items?itemName=ms-python.python) & [Jupyter](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.jupyter) extensions
* Open a Jupyter Notebook file. Things should work.
* Select the desired Python interpreter at the top right of the notebook

![](https://i.imgur.com/6UeUM4W.png)
<br></br>
VSCode is smart. By selecting the correct interpreter path, it will load the corresponding Conda environment. For example, to use the **standard tf24 environment**, we could choose: `/opt/anaconda3/envs/tf24/bin/python`
:::

$\small\text{^Other editors provide similar functions. For example, PyCharm on Mac has similar features}$

## B. Workstation

### B-0 Login To the Group Workstation

As of January 2024, accessing the group workstation requires the [Stanford VPN](https://uit.stanford.edu/service/vpn), even on Stanford's campus. Download the VPN client and log in with your SUNetID and password. Keep your phone handy, as you may be asked to use [Duo Push](https://play.google.com/store/apps/details?id=com.duosecurity.duomobile&hl=en_US&gl=US&pli=1). A successful connection will look like this:

![vpn_connection](https://hackmd.io/_uploads/H1y888Qtp.png)

Open a terminal. On Windows, one can use [PowerShell](https://www.howtogeek.com/662611/9-ways-to-open-powershell-in-windows-10/). On Mac, one can open the [native terminal](https://www.howtogeek.com/682770/how-to-open-the-terminal-on-a-mac/).
Type the following into the terminal, then press *enter*:

`ssh <your_account_name>@sdss-yaolai.stanford.edu`
<!---`ssh <your_account_name>@yaolab.princeton.edu`--->

It will then ask for the password; type the one you got from Yao, and you are logged into the workstation. Your login may look like this:

![workstation_login](https://hackmd.io/_uploads/rydi3LmY6.png)

If you prefer a graphical user interface to a command-line interface, we recommend [Microsoft Remote Desktop](https://apps.microsoft.com/detail/9WZDNCRFJ3PS?hl=en-US&gl=US). If you are not asked to make a new one-time password, type `passwd` to reset your password!

### B-1 Secure Your Long-Running Jobs on the Workstation (tmux)

:::success
TL;DR: Execute `tmux new -s <name>` to create a persistent terminal. Run your job inside that terminal. The workstation keeps the jobs inside the terminal running regardless of the internet connection. Use `tmux attach -t <name>` to get back to the terminal.
:::

At this point, we know how to use ssh to log in and work on the workstation. But there is a caveat: if you launch a job, for example `python my-job.py`, within an ssh login and **the login is disconnected, the job terminates**. This makes long-running jobs infeasible, especially over an unstable internet connection.

The canonical solution is a [terminal multiplexer](https://en.wikipedia.org/wiki/Terminal_multiplexer). We won't explain the mechanism behind it. We only show how to use [tmux](https://en.wikipedia.org/wiki/Tmux) to avoid the disconnection problem.
We give a simple mental model for using tmux: **when you execute `tmux` in the terminal, it creates a persistent terminal, denoted as PT, which does not end even if the internet connection is lost.**

In this section, we use a simple Python script `job.py` to mimic a long-running job:

```
# content of job.py
import time
from datetime import datetime

while True:
    time.sleep(1)
    print("Current time", datetime.now())
```

Now execute `tmux new -s MyPT`, after which you are in a new terminal. Run `python job.py`; it displays the current time roughly every second.

![](https://hackmd.io/_uploads/SknOhyHNK.png)\
Now, close the cmd, PowerShell, MobaXterm, or whatever you use to log in. You can also try turning off your internet :->

Log back into the workstation and type `tmux ls`. It should display something like this:

![](https://hackmd.io/_uploads/BkB5jkSNt.png)\
Your terminal is still alive! Use `tmux attach -t MyPT`, and you shall go back to the previous terminal and see your `job.py` still running, as if nothing happened!

:::info
`tmux`\
Start a new PT with an auto-generated name. `tmux new -s <name>` does the same with a given name. One cannot create another PT inside a PT.
<br></br>
`tmux ls`\
Display the names of all PTs.
<br></br>
`tmux attach -t <name>`\
Get back to the PT with that name. This is the magic to use when you want to get back to the terminal you were previously working in after losing the connection.
<br></br>
End a PT with either of the following (you will lose the job inside the PT). One only needs to end PTs occasionally, usually to clean up zombie PTs on the workstation.

Inside a PT:\
`exit`

Outside a PT:\
`tmux kill-session -t <name>`
<br></br>
There are other helpful tips we do not cover. For example, one can split a PT into two or more panes, which allows multiple jobs to run inside a single PT. Check out this [youtube video](https://www.youtube.com/watch?v=Yl7NFenTgIo).
:::

## C. Stanford Cluster Sherlock and Oak Storage
[Stanford Research Computing](https://srcc.stanford.edu/)<!--(https://researchcomputing.princeton.edu/) --> offers bountiful computing resources. Working on the cluster is mostly the same as working on the workstation. First, use ssh to connect to the Stanford cluster, just like our workstation. Using the Sherlock login node as an example:

`ssh <sunetid>@login.sherlock.stanford.edu`
<!--`ssh <Your Princeton ID>@della-gpu.princeton.edu`-->

![sherlock_login_options](https://hackmd.io/_uploads/SyHOwLXKa.png)

You will also get information about current usage just by logging in:

![sherlock_login_usage](https://hackmd.io/_uploads/BkQ7d8XKa.png)

However, there are crucial differences between using the cluster and the workstation:

1. One can freely acquire computing resources (CPU & [GPU](https://www.sherlock.stanford.edu/docs/user-guide/gpu/)) on Sherlock and use a browser through [On Demand](https://www.sherlock.stanford.edu/docs/user-guide/ondemand/), but for scheduling many jobs it is faster to use [Slurm](https://www.sherlock.stanford.edu/docs/user-guide/running-jobs/). **One can submit jobs via the Slurm system. A batch of runs may be submitted through Slurm, and Slurm determines which jobs get executed first.** If computing with the [Doerr School](https://stanford-rc.github.io/docs-earth/docs/resources_overview), remember to use `-p serc`.
2. There are **no** existing Conda environments like the ones we provide on the workstation. You have to create one on your own before using Slurm. Sherlock suggests you build [virtual environments](https://www.sherlock.stanford.edu/docs/software/using/anaconda/) instead of using Anaconda, and we do too. To create a virtual environment with Python 3.9 on Sherlock:

    ```
    cd my/folder
    module load python/3.9
    python3.9 -m venv chooseNameOfVenvHere
    source chooseNameOfVenvHere/bin/activate
    pip3.9 install numpy  # etc.
    deactivate
    ```

    Then you can activate and deactivate this environment as you please.
    Note that you will need to load Python 3.9 whenever you want to use it.

3. Using Slurm is just one step more than `python myscript.py`. **Slurm requires a text file called a *Slurm script* for submitting jobs.** So the process becomes:
    - Write your `myscript.py`
    - Create the Conda environment on the cluster
    - Write the Slurm script `my_submit.slurm`
        - Inside `my_submit.slurm`, specify what you want to do -- `python myscript.py`
    - Submit `my_submit.slurm` to Slurm
4. While you can still run small scripts on the cluster without contacting Slurm, it's highly recommended to fully test your jobs on your PC (e.g. small routines) or the workstation (e.g. GPUs, machine learning, large forward numerical models). Use the cluster only when you are ready to submit a large amount of computing work.

#### c-1.1 Understand the Cluster and Write Slurm Scripts

We highly recommend going through the guides from Stanford Research Computing:

- https://www.sherlock.stanford.edu/docs/getting-started/
- https://www.sherlock.stanford.edu/docs/getting-started/connecting/
- https://www.sherlock.stanford.edu/docs/getting-started/submitting/

<!-- https://researchcomputing.princeton.edu/get-started/guide-princeton-clusters
- https://researchcomputing.princeton.edu/support/knowledge-base/slurm-->
<!--If you are impatient, you can directly jump to the Python page:
- https://researchcomputing.princeton.edu/support/knowledge-base/python-->

#### c-1.2 A Helper Tool for Batch-Submitting Jobs to the Cluster

Before reading this section, be sure you understand the basics of a Slurm script (c-1.1). To understand the use case of the helper tool, take our [Shelf 2D](https://github.com/YaoGroup/IceShelf2D) as an example. We have a script file `script_inverse.py` for the inversion of hardness. One can run the script via the terminal:

```
python script_inverse.py 0.001 -o ./output_dir
```

where the number `0.001` specifies the noise ratio.
To systematically run the script with different noise ratios from the terminal, one could do:

```
python script_inverse.py 0.001 -o ./noise_experiment &&
python script_inverse.py 0.002 -o ./noise_experiment &&
python script_inverse.py 0.005 -o ./noise_experiment &&
python script_inverse.py 0.01 -o ./noise_experiment &&
python script_inverse.py 0.02 -o ./noise_experiment &&
python script_inverse.py 0.05 -o ./noise_experiment &&
...
```

The above runs the script with different parameters, one after another (i.e. sequentially). To run the jobs in parallel on the cluster, we need to write a Slurm script using a [job array](https://www.sherlock.stanford.edu/docs/advanced-topics/job-management/),<!--[job array](https://researchcomputing.princeton.edu/support/knowledge-base/slurm#arrays),--> which looks like this (some details omitted):

```
#!/bin/bash
#SBATCH --job-name=test_sample
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=4G
#SBATCH --time=0:30:43
#SBATCH --array=0-5

if [[ $SLURM_ARRAY_TASK_ID -eq "0" ]]; then
    python script_inverse.py 0.001 -o ./noise_experiment/0001
elif [[ $SLURM_ARRAY_TASK_ID -eq "1" ]]; then
    python script_inverse.py 0.002 -o ./noise_experiment/0002
elif [[ $SLURM_ARRAY_TASK_ID -eq "2" ]]; then
    python script_inverse.py 0.005 -o ./noise_experiment/0005
elif [[ $SLURM_ARRAY_TASK_ID -eq "3" ]]; then
    python script_inverse.py 0.01 -o ./noise_experiment/001
elif [[ $SLURM_ARRAY_TASK_ID -eq "4" ]]; then
    python script_inverse.py 0.02 -o ./noise_experiment/002
elif [[ $SLURM_ARRAY_TASK_ID -eq "5" ]]; then
    python script_inverse.py 0.05 -o ./noise_experiment/005
fi
```

We created a simple tool for precisely the above use case: https://github.com/YaoGroup/slurm_tool

**As long as your script can vary the target variable by accepting argument(s) from the terminal**, our tool can transform the above terminal task into a Slurm script for the cluster. For detailed usage, please refer to the GitHub page.
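For reference, a command-line interface like `python script_inverse.py 0.001 -o ./output_dir` can be built with Python's standard `argparse` module. The sketch below is illustrative only; the argument names mirror the usage above, but the real `script_inverse.py` may implement this differently:

```python
import argparse

def parse_args(argv=None):
    # Hypothetical CLI mirroring `script_inverse.py <noise> -o <output_dir>`
    parser = argparse.ArgumentParser(description="Invert hardness from noisy data")
    parser.add_argument("noise", type=float, help="noise ratio, e.g. 0.001")
    parser.add_argument("-o", "--output", default="./output_dir",
                        help="directory to store the results")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args()
    # The actual inversion would run here, using args.noise and args.output
    print(f"noise={args.noise}, output={args.output}")
```

With an interface like this, each `$SLURM_ARRAY_TASK_ID` branch of the job array only needs to pass a different positional argument.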
#### c-1.3 Check Remaining Storage Space on Sherlock

Very simple, just type `sh_quota`:

![sh_quota](https://hackmd.io/_uploads/SJrZYU7Ka.png)

#### c-1.4 Places on Sherlock and Oak

There are several basic places (less metaphorically, folders) on Sherlock and Oak that are useful to know and common to all users. These are, with `nbcoffey` in place of your personal SUNetID:

1. `/oak/stanford/groups/cyaolai` - our group's Oak storage
2. `/home/users/nbcoffey` - your personal home
3. `/home/groups/cyaolai/` - our group home
4. `/scratch/users/nbcoffey` - your personal scratch
5. `/scratch/groups/cyaolai` - our group scratch

How should you use these places? Stanford has a suggested [outline with details](https://www.sherlock.stanford.edu/docs/storage/); for this document, these two screenshots from that website are sufficient:

![use_space_sherlock_oak](https://hackmd.io/_uploads/H1r3oL7Fa.jpg)
![stanford_cluster_recs](https://hackmd.io/_uploads/rkKN2I7YT.jpg)

## D. Data and Code Storage

:::info
To make sure we do not lose any important progress, it is our policy that:
* Code is uploaded to GitHub when it is shared and after project completion, for reproducibility.
* All data and documents are stored on cluster storage with a backup system.
:::

You already know how to upload code to GitHub. This section provides a step-by-step guide to mounting data storage backed by a backup system, provided by [Stanford Research Computing](https://srcc.stanford.edu/systems/data-storage). Our group has acquired a few TB of space.

:::info
A special note for Stanford users below.
:::

*We have space for computing on Sherlock, and space for storage on Oak. Read about [Storage on Sherlock](https://www.sherlock.stanford.edu/docs/storage/#quotas-and-limits).*

**All files on Scratch (`/scratch/users/<sunetid>`) and Group Scratch (`/scratch/groups/cyaolai`) within Sherlock that have not been modified within 90 days [are automatically purged](https://www.sherlock.stanford.edu/docs/storage/filesystems/#expiration-policy).**

In addition to saving your work on GitHub, we recommend keeping checkpoints and your main code in your personal home on Sherlock (`/home/users/<sunetid>`) during project development, and on Oak (`/oak/stanford/groups/cyaolai`) upon project completion.

<a id="section-mounting-storage"></a>

## E. Mounting Storage Systems to a Personal Device

There are two protocols for connecting your local machine directly to the storage space: **SMB** and **SSH**.

:::success
You must be **inside the Stanford VPN** for all of the following methods. Also, the **login account and password are your Stanford University NetID credentials (with some variation for SMB, see below)**, not your workstation account.
:::

Before diving into the mounting protocols, note that you can mount any file path in the [Sherlock system](https://www.sherlock.stanford.edu/docs/storage/data-transfer/#rsync) that is convenient for you, so feel free to mount your most commonly used file paths on your local device for quick access.

Olivia has a nice description of mounting for Oak [on the last slide of this slideshow](https://docs.google.com/presentation/d/1oUMxVPAakmxu_i0xqHwydK5ulREsG97ODmtWyN5R0cE/edit?usp=sharing), which is available to anyone within the Stanford domain (read: email address).

We show how to mount the space as a local drive via SSH and/or SMB (heads up: please give feedback on this section, as the current editor has a Windows machine):

> **For Linux**
>
> We can mount the folder via SSH:
>
> `mkdir ~/Oakdata`\
> `sshfs <SUNetID>@sherlock.stanford.edu:/oak/stanford/groups/cyaolai/ ~/Oakdata`
>
> Replace `~/Oakdata` with the directory you want to mount on. The SUNetID is your Stanford NetID, not necessarily the username of your workstation account.
> **For Mac**
>
> ----- Using SMB (Easier, Recommended) -----
>
> Mount `/oak/stanford/groups/cyaolai/` using SMB. Open a Finder window, then press ⌘K. Paste `smb://smb-cyaolai.oak.stanford.edu/groups/cyaolai` into the top bar.
> ![Screenshot 2024-02-03 at 4.00.36 PM](https://hackmd.io/_uploads/Syj3XLn5T.png)
> For your credentials, use your full Stanford email address and your Stanford password. Or, if you prefer, [do it on the command line](https://gist.github.com/natritmeyer/6621231): follow that tutorial, using `smb://cyaolai.oak.stanford.edu/groups` as the path, `STANFORD\<SUNetID>` as the username, and your Stanford account password.
>
> ----- Using SSH -----
>
> To mount the space via the SSH protocol, open a terminal and install the following with brew (macFUSE was formerly distributed as the `osxfuse` cask):
>
> `brew install --cask macfuse`\
> `brew install sshfs`
>
> Then following the same instructions as in the **Linux** section should work.

> **For Windows**
>
> Windows requires manually installing additional software to mount a directory via SSH; [check out sshfs-win](https://github.com/billziss-gh/sshfs-win).
>
> SMB, on the other hand, is much easier (it is native to Windows), so we recommend **SMB** on Windows. Open "This PC", click "Computer" at the top left, then click "Map Network Drive".
>
> Select whatever drive letter (Y:, Z:, ...) you like, and type the following into the Folder field:\
> `\\smb-cyaolai.oak.stanford.edu\groups`
> ![mount_windows_screenshot](https://hackmd.io/_uploads/rkgCdwhca.jpg)
>
> For credentials, enter the following:\
> account: `STANFORD\<SUNetID>`\
> password: your Stanford account password

After mounting, accessing the directory is equivalent to accessing `/oak/stanford/groups/cyaolai` on the Stanford Oak data storage system, so files copied or written into it automatically get the benefits of Oak, such as secure backup.

## F. Frequently Asked Questions

> Can I use Jupyter Notebook or Lab on Sherlock?
* Yes, see [here](https://www.sherlock.stanford.edu/docs/user-guide/ondemand/) and [here](https://stanford-rc.github.io/docs-earth/docs/jupyter-notebooks-hpc).

> Do I have to download Anaconda on Sherlock? What about modules?

* Read [here](https://www.sherlock.stanford.edu/docs/software/using/anaconda/) about why you should use virtual environments instead of downloading Anaconda. Additionally, read about modules on Sherlock [here](https://www.sherlock.stanford.edu/docs/software/modules/).

> Where can I learn more about storage on Sherlock, as well as best practices so that I do not lose my scientific results?

* Try this [link](https://www.sherlock.stanford.edu/docs/storage/)!

## G. Other References

### [Slides: Group Computing Resources Summary](https://docs.google.com/presentation/d/1oUMxVPAakmxu_i0xqHwydK5ulREsG97ODmtWyN5R0cE/edit#slide=id.p)

### [Crash Course on Version Control using Git](https://hackmd.io/EAsegki1QS-1ewBzMqsfrQ)

### [Set Up Visual Studio Code on the Workstation](https://hackmd.io/VtUnQzXDQYGTIUTtd3uXWA)

### [Sherlock, Globus, and AMD Tutorial](https://hackmd.io/1_wqypFXS2a3qM3djws8gw)

### [Remote Access to the Workstation, Login without Password](https://hackmd.io/KVnFw0YFQ7m2hHgQni5dyw?view)

## H. Archived

### [Installing Elmer on Linux Ubuntu System (under construction)](https://hackmd.io/nES9mtPVSdquQ7sRogH1KA)

### [Jupyter with Environment File on Cluster (Princeton)](https://hackmd.io/XC4WHWkeRm-YnDJ0eDZFbQ)

### [Ray: Remote Connections (Princeton)](https://hackmd.io/8hPo_g1rTAut2cPnjniz7Q)

### [Della MATLAB GUI (Princeton)](https://hackmd.io/TMx3ak1PTxqbIp__VLk2zA)

### [Scales-up PINN to Real Data (under construction)](https://hackmd.io/MjgazPAuSRqpdSXBnWb4Gw)

<a id="section-PFC-Sherlock"></a>

## I. Using PFC with Sherlock

This tutorial assumes basic familiarity with [PFC](https://www.itascacg.com/software/pfc) and [Sherlock](https://hackmd.io/1_wqypFXS2a3qM3djws8gw). If you run PFC on the workstation, please be mindful of our [best practices](https://hackmd.io/KVnFw0YFQ7m2hHgQni5dyw?view).

### Setting up PFC licensing on Sherlock

First, log in to Sherlock using `ssh yourNetID@login.sherlock.stanford.edu`. Then type the following (you will need to obtain the Itasca username and password from a group member):

```bash
mkdir -p ~/.config/Itasca
cat << EOF >> ~/.config/Itasca/wad700.conf
[weblicense]
email=<<< Your Itasca username >>>
password=<<< Your Itasca password >>>
EOF
```

Furthermore, you must ask a group member to share the container file called `flac3d7.sif` with you. Upload it either to your Sherlock home folder or to Oak for permanent storage. Do not place it in the Scratch folders, because it will be automatically deleted after 90 days.

For example, if you place it in a folder called `$HOME/pfc_container`, you can open the command-line version of 2D PFC on Sherlock with the command

```bash
singularity exec $HOME/pfc_container/flac3d7.sif /opt/itascasoftware/v700/pfc2d700_console.sh
```

or the 3D version with

```bash
singularity exec $HOME/pfc_container/flac3d7.sif /opt/itascasoftware/v700/pfc3d700_console.sh
```

Then you may program in PFC on Sherlock in real time without the GUI. However, it is best practice to design the simulation with a GUI and then submit the long jobs to Sherlock. See the next section.

### Designing and running a simulation

The first step is to design your simulation using the PFC GUI (either on your personal computer or on the group workstation). As an illustrative example, we will create a new model of granular distribution and motion in a periodic box, simulate forward for 10,000 time steps, then save the model. First, open a new project in PFC, create a file called `example.dat`, and paste in the following content.
```FISH
; necessary lines to create a new model
model new
model large-strain on

; set the model domain and boundary conditions
model domain extent 0 1 0 1 condition periodic

; initialize random seed and randomly distribute grains
model random 10001
ball distribute porosity 0.2 radius 0.02 0.03 gauss box 0 1 0 0.5 group 'my_grains'

; assign grain properties
contact cmat default type ball-ball model hertz property hz_shear 5.0e7 hz_poiss 0.4 dp_nratio 0.1
ball property 'fric' 0.1 range group 'my_grains'
ball attribute density 2500

; set the timestep to be automatic and simulate forward a bit
model mechanical timestep auto
model cycle 10000

; save the model
model save 'model_which_has_started_running'
```

Execute this code. When it finishes (in less than a minute), move the output `.sav` file to Sherlock. This can be accomplished, for example, with the following sequence on your personal computer (note that you will need to enter passwords and possibly complete two-factor authentication):

```bash
cd my/folder/with/simulation/setup
scp -r yourNetID@sdss-yaolai.stanford.edu:/home/yourNetID/PFCprojects/model_which_has_started_running.sav .
scp -r model_which_has_started_running.sav yourNetID@login.sherlock.stanford.edu:/scratch/users/yourNetID/folder/where/you/will/run/simulation/
```

The final command may be replaced by clicking and dragging `model_which_has_started_running.sav` into your [mounted Oak folder](#section-mounting-storage), then using the `mv` command within Sherlock to bring it to your preferred folder for running the simulation.

Now, to run the code on Sherlock, first log in with `ssh yourNetID@login.sherlock.stanford.edu`, then `cd` to the location where you wish to run the code (and where you placed the `.sav` file). We will make a script that continues simulating the model forward and periodically saves specific data. This is most easily accomplished with a Python script, although it is possible to do it purely with `.dat` files.
Create and open your script by typing `touch continue_simulation.py` then `vim continue_simulation.py`, and input the following contents.

```Python
import itasca

itasca.command("""
model restore 'model_which_has_started_running'
""")

# sometimes you must redo the imports after restoring a model
import itasca

# define a function for outputting data
def data_output(fileNum):
    with open('output_' + str(fileNum) + '.txt', 'w') as ff:
        for ballIter in itasca.ball.list():
            # access various data for each grain
            ID = ballIter.id()
            radius = ballIter.radius()
            cNum = len(ballIter.contacts())
            x = ballIter.pos_x()
            y = ballIter.pos_y()
            vx = ballIter.vel_x()
            vy = ballIter.vel_y()
            output = (str(ID) + '\t' + str(radius) + '\t' + str(cNum) + '\t' +
                      str(x) + '\t' + str(y) + '\t' + str(vx) + '\t' + str(vy))
            ff.write(output)
            ff.write('\n')
            ff.flush()

# this will simulate 20,000 time steps forward, producing 20 output files
for i in range(20):
    # cycle for 1000 time steps before outputting a new data file
    itasca.command("model cycle 1000")
    data_output(i)

# create a new .sav file which can be continued later
itasca.command("""
model save 'finished_simulation'
exit
""")
```

You may notice that we have not changed any aspect of the model; we are simply continuing the simulation. It is straightforward to change the model within the Python script, and extensive documentation can be found in the [PFC website's section on Python scripting](https://docs.itascacg.com/flac3d700/common/docproject/source/manual/scripting/python/doc/python_pfc.py.html). One example where you might want to do that is applying a time-varying property whose control requires some analysis of the simulation data.

There are Python functions that directly control the model. For example, to create a ball of radius 0.1 at position (0,0,0) you can use the command `itasca.ball.create(0.1, vec(0,0,0))`.
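The `output_<N>.txt` files written by `data_output` above are plain tab-separated text, so they can be read back for analysis on any machine without PFC. A minimal sketch using only the standard library (the column order is taken from `data_output`; the helper name `read_output` is ours):

```python
import csv

# Column order matches data_output above: id, radius, contact count, x, y, vx, vy
COLUMNS = ["id", "radius", "cNum", "x", "y", "vx", "vy"]

def read_output(path):
    """Read one tab-separated PFC output file into a list of per-grain dicts."""
    grains = []
    with open(path) as ff:
        for row in csv.reader(ff, delimiter="\t"):
            grain = dict(zip(COLUMNS, row))
            grain["id"] = int(grain["id"])
            grain["cNum"] = int(grain["cNum"])
            for key in ("radius", "x", "y", "vx", "vy"):
                grain[key] = float(grain[key])
            grains.append(grain)
    return grains
```

From there, e.g. the mean grain speed of one snapshot is `sum((g["vx"]**2 + g["vy"]**2) ** 0.5 for g in grains) / len(grains)`.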
However, you can also directly write commands in PFC's native FISH language within the Python script via `itasca.command("my FISH command")`. This second method can perform faster in some situations.

Next, you must create an `.sbatch` file to ask Sherlock to run this job. Type `touch run_my_job.sbatch` then `vim run_my_job.sbatch`, and enter the following contents.

```bash
#!/bin/bash
#SBATCH --job-name=my_name_for_my_job
#SBATCH --time=01:00:00
#SBATCH -p serc
#SBATCH -c 32
#SBATCH --mem=32GB
#SBATCH --mail-type=ALL
#SBATCH --mail-user=yourNetID@stanford.edu

singularity exec $OAK/yourNetID/pfc_container/flac3d7.sif \
    /opt/itascasoftware/v700/pfc2d700_console.sh call continue_simulation.py
```

This is a basic Slurm file that opens PFC using the `singularity` command, then runs your Python script within PFC. To run it, just type `sbatch run_my_job.sbatch`. If you have entered your email address correctly, you will receive a notification once the job starts. Beware that an error will not end the simulation, but will just leave PFC open until the Slurm-specified computation time (one hour in this example) elapses; after the job starts, please check the initial progress of the simulation in the Slurm output file to make sure it is really running.

Once your simulation is finished, you might consider moving the entire folder to Oak using `cp -r . $OAK/yourNetID/some/folder` to prevent auto-deletion, if you ran this code in a Scratch folder.

<a id="section-LIGGGHTS"></a>

## J. LIGGGHTS

### Installation

#### Sherlock

To install LIGGGHTS on Sherlock, first follow [these instructions](https://www.cfdem.com/media/DEM/docu/Section_start.html). Next, you must modify `MAKE/Makefile.user` as follows:

```
MPICXX=/share/software/user/open/openmpi/4.1.2/bin/mpicxx
MPI_LIB_USR=/share/software/user/open/openmpi/4.1.2/lib
CXX=c++
```

After that, if you wish to use Paraview for visualizations, you need to install VTK separately (we have not tested this yet).
#### MacOS/Unix

[LIGGGHTS](https://www.cfdem.com/media/DEM/docu/Manual.html) is a special version of [LAMMPS](https://www.lammps.org/#gsc.tab=0) improved for granular simulations and for coupling with continuum fluid-flow models. It is straightforward to install on [Ubuntu](https://ubuntu.com/server/docs/nvidia-drivers-installation); see the [official installation guide](https://www.cfdem.com/media/DEM/docu/Section_start.html#building-for-a-mac) or [this helpful website](https://www.engineerdo.com/wp-content/uploads/2020/06/EngineerDo_Installation_liggghts.pdf). If you are using [Fedora](https://fedoraproject.org), try following [these instructions](https://www.slideshare.net/slideshow/liggghts-installationguide/87204872#) (NOT YET TESTED--NEED MORE NOTES HERE ON INSTALLING WITH FEDORA). If you are using MacOS, there is no good tutorial online, but we have identified the following steps.

1. Make sure [Open MPI](https://www.open-mpi.org), Boost, and [FFTW](https://fftw.org) are installed. This can be accomplished with [Homebrew](https://brew.sh) via `brew install openmpi boost fftw`.
2. Navigate to the folder where you want to install LIGGGHTS and type `git clone git@github.com:CFDEMproject/LIGGGHTS-PUBLIC.git`.
3. Type `cd LIGGGHTS-PUBLIC/src; make auto`. You will likely see some error messages.
4. Modify the make settings: open the file `MAKE/Makefile.user` and search for the line `USE_VTK = "ON"`. If it is commented out, uncomment it, and change `"ON"` to `"OFF"`. Next, search for the line `#BOOST_INC_USR=-I/path/to/boost/include`; if it is uncommented, comment it out. Immediately below it, paste `BOOST_INC_USR=-I/opt/homebrew/opt/boost/include`.
5. Type `make auto` once more.

#### Virtual Machine Setup

Installing LIGGGHTS is simplest on an Ubuntu operating system (OS), so it can be preferable to install an Ubuntu virtual machine (VM) on your preferred OS and install LIGGGHTS there, instead of bothering with a direct installation.
Installation on Ubuntu can be done by following https://www.engineerdo.com/wp-content/uploads/2020/06/EngineerDo_Installation_liggghts.pdf. There are a few different ways to install an Ubuntu VM, depending on your current OS.

##### Windows

The easiest way to set up Ubuntu on Windows is to use the [Windows Subsystem for Linux (WSL)](https://ubuntu.com/desktop/wsl), which can be downloaded directly from the [Microsoft Store](https://apps.microsoft.com/detail/9pdxgncfsczv?rtc=1&hl=en-us&gl=US). The setup follows directly from the installer. One thing to note: your Ubuntu home directory will differ from the Windows `C:\Users\username`. To align the home directories, first locate the path of your Windows home directory -- for example, `C:\Users\username` is located at `/mnt/c/Users/username`. Then type `sudo usermod -d /newhome/username username` to change your home directory.

##### Fedora

There are a few ways to install an Ubuntu VM on Fedora (and other Red Hat distributions).

### Running a simple script

In your terminal, go to any folder where you would like to write your LIGGGHTS script. Create the script, for example by typing `touch mytestscript.flow`. Open the script and paste in the sample script from the bottom of [this webpage](https://www.cfdem.com/media/DEM/docu/Section_input_script.html#an-example-input-script). Save the script and type into your terminal `mpirun -np 4 lmp_auto -in mytestscript.flow`, where the number 4 indicates how many processors you would like to use (limited by whichever computer you are using).

### Post-processing

After running a script, LIGGGHTS should output a dump file -- for a simulation called `in.flow`, a `dump.flow` file should be generated in the same directory. This dump file contains information about the position and velocity of each particle, and the forces on it, at each time step.
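For a quick look at a dump file without Paraview or Ovito, you can also parse it yourself: LIGGGHTS dump files follow the LAMMPS text dump format, a sequence of `ITEM:` sections per timestep. A minimal parser sketch (the atom columns depend on your `dump` command; the helper name `read_dump` is ours):

```python
def read_dump(path):
    """Parse a LAMMPS/LIGGGHTS text dump file into a list of snapshots.

    Each snapshot is a dict holding the timestep and one dict per particle,
    keyed by the column names declared on the 'ITEM: ATOMS' line.
    """
    snapshots = []
    with open(path) as f:
        lines = f.read().splitlines()
    i = 0
    while i < len(lines):
        if lines[i].startswith("ITEM: TIMESTEP"):
            step = int(lines[i + 1])
            i += 2
        elif lines[i].startswith("ITEM: NUMBER OF ATOMS"):
            n_atoms = int(lines[i + 1])
            i += 2
        elif lines[i].startswith("ITEM: ATOMS"):
            columns = lines[i].split()[2:]  # e.g. ['id', 'x', 'y', 'z', ...]
            atoms = []
            for row in lines[i + 1 : i + 1 + n_atoms]:
                values = [float(v) for v in row.split()]
                atoms.append(dict(zip(columns, values)))
            snapshots.append({"timestep": step, "atoms": atoms})
            i += 1 + n_atoms
        else:
            i += 1  # skip ITEM: BOX BOUNDS and other sections
    return snapshots
```

With the snapshots loaded, you can, for example, plot particle positions per timestep with matplotlib or compute mean speeds over time.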
For projects that do not come with a VTK file (which visualizes the results), a few extra steps are required to convert the dump files into something Paraview can display. (You can also load the dump files directly into [Ovito](https://www.ovito.org/#download).) The conversion is done with a tool called [LIGGGHTS Post-Processing (LPP)](https://www.cfdem.com/post-processing-liggghtsr-simulations). To install LPP, use the following steps:

1. `git clone https://github.com/hoehnp/LPP/`
2. From the LPP directory created, run either `./install.sh`, or `./install.sh -p path/to/install/directory` if you would like to install to a different directory.

To run LPP, use the command `lpp /path/to/dump/file -o /path/you/want/to/VTKs`. This should generate groups of VTK files that can be imported directly into Paraview.

<a id="section-LAMMPS"></a>

## K. LAMMPS

[LAMMPS](https://www.lammps.org/#gsc.tab=0) (Large-scale Atomic/Molecular Massively Parallel Simulator) is open-source software for molecular dynamics, including granular-type force fields.

### Installation

The [documentation](https://docs.lammps.org/Intro.html) describes various ways to install and build LAMMPS. However, you may encounter serious issues if you are working on multiple platforms with slightly different installations. For example, a restart file generated by last year's version of LAMMPS will not run in this year's version. That means each Sherlock version is incompatible with the default version you might install on your own computer, so we have installed the latest version in the `$GROUP_HOME` folder. To maintain consistency across platforms, we recommend the following (note that this will also work to reinstall LAMMPS on Sherlock should it ever be deleted):

1. Change directory to wherever you would like to install LAMMPS, then type `git clone -b release https://github.com/lammps/lammps.git lammps`.
2. To build using CMake, type `cd lammps; mkdir build; cd build`, then `cmake ../cmake -D PKG_GRANULAR=ON` (assuming you want the granular package included), and finally `cmake --build .; make install`.

### Running LAMMPS

You should now have an executable called `lmp`. Type this into the terminal to open an interactive LAMMPS session. If you have a script called `myLAMMPSscript.flow`, run it with `lmp -in myLAMMPSscript.flow`. If you wish to run in parallel using 4 processes with Open MPI (pre-installed on Sherlock), load the module then type `mpiexec -np 4 lmp -in myLAMMPSscript.flow`. If you wish to run a parameter sweep that involves specifying a variable called `input_script_name` from the command line, type `lmp -in myLAMMPSscript.flow -var input_script_name test1.xyz`.
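Such a sweep can then be driven by a small shell loop; a minimal sketch (the `test*.xyz` file names are hypothetical, and we only `echo` each command here -- drop the `echo`, or wrap each run in a Slurm job, to actually launch LAMMPS):

```shell
#!/bin/bash
# Hypothetical sketch: build one lmp invocation per input file.
for f in test1.xyz test2.xyz test3.xyz; do
    CMD="lmp -in myLAMMPSscript.flow -var input_script_name $f"
    echo "$CMD"   # replace this echo with: $CMD
done
```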
