# Biomimetic Robotics Lab Onboarding

A quick guide to resources, check-offs, and todos for new members of the lab. Ask Se Hwan Jeon (sehwan@mit.edu) with any questions/clarifications.

## Sections

- [General](#general)
- [Administrative](#administrative)
- [Courses](#courses)
- [Research Resources](#research-resources)
- [Post-Grad. Paths](#post-grad-paths)

## General

Our lab, headed by Sangbae Kim, focuses on highly dynamic locomotion and manipulation with robotic systems. We're not actually all that biomimetic, but when we can draw connections to biology it's super interesting to see.

Our lab is generally divided into hardware/software groups, but people definitely do both and mix. The hardware team focuses on design, sensing, and actuation for our robots, while the software team focuses on controls and learning. Don't feel pressured to pick a lane; your interests will definitely have a place in the lab.

**Strengths of our lab**: incredible hardware, tightly knit with deep controls knowledge. Lack of hierarchy and lots of freedom to choose the project you want. The name-value of the lab is significant too. Labmates are also really nice and helpful. We have a (terrible) IM soccer team!

**Downsides to the lab**: There's little/no organization to our codebase, hardware parts, etc. We're trying to be a little more formal about it, but research is pretty individual and it can be difficult to collaborate in an organized way. The freedom to choose a project can also be stressful when you're not sure what you want to work on. Some people might want more guidance and mentoring, but Sangbae is pretty hands-off. We also move really fast as a lab, so a lot of overwhelming jargon gets thrown around. Just ask about anything you don't recognize, immediately.

## Administrative

Get access to each of these! Ask Se Hwan or Elijah to be added.
- [ ] BRL Slack channel (and #full-time channel)
- [ ] BRL Password (general password for most of our accounts)
- [ ] BRL Dropbox folder
- [ ] BRL shared Google calendar
- [ ] BRL "scripts" website (might take some time)
- [ ] Relish (free food after lab meetings)
- [ ] BRL website profile (send these to Se Hwan)
  - [ ] Photo
  - [ ] Email and/or personal website, if you want it displayed
- [ ] BRL Amazon account (ask Elijah for details, or ask someone else to purchase for you)
- [ ] BRL computer
- [ ] BRL mailing list
- [ ] BRL GitHub organization

## Courses

You're usually limited to two courses per semester if you're in the MechE graduate program. Below are recommendations for your first semester, based on classes lab members have taken. Generally, the prerequisite classes aren't necessary to do well. There are definitely more that I might have forgotten about, so if you have a course in mind, ask about it. At the end of the day, tailor your courses to your interests, and don't worry too much about them. You'll learn a lot more in lab than in class! If you can't make up your mind, taking any math course + 2.74 is a very safe choice.

### Fall

- **2.032: Dynamics** - one of the quals courses. Super old-school course, good for quals but not much else. I'd actually recommend Patrick Wensing's Analytical Dynamics instead, if you want to get into hardcore dynamics.
- **2.160: ID, Estimation, Learning** - applications + some theory on random variables, filters, function approximation, etc. Heard it's pretty useful and interesting.
- **2.740: Bioinspired Robotics** - Sangbae's class; recommend it your first semester. Basics of simulation, hybrid dynamics, contact, etc. Project-based!
- **18.100A/B: Real Analysis** - (personal take) I think a proof-based math course was super, super interesting, and really forced me to think in new ways.
- **18.0851: Computational Methods** - Don't remember the exact name, but it's the math class nearly all the incoming STEM students take.
Heard it's fairly easy, but not sure how much you'll get out of it.
- **2.151: Adv. Sys. Dynamics and Control** - Fundamentals of modern state-space control. Modeling and optimal control techniques for linear systems (LQR, LQG, etc.). Good if you're not comfortable with it.

### Spring

- **6.832: Underactuated Robotics** - I would not recommend this your first year, but it's a really good survey class on trajectory optimization, state-space control, stability, etc. (Steve: definitely _would_ recommend for your first year, if you have a controls slant. Covers a lot, but not in depth.)
- **16.32: Optimal Control and Estimation** - Detailed class on linear and nonlinear optimization/filters.
- **2.12: Intro. to Robotics** - Good if you've never taken a formal robotics course. Basics of forward/inverse kinematics, Jacobians, etc. Probably not needed if you take 2.74, but it's a quals course.
- **6.*: Machine Learning Things** - Tons of ML classes you can take in the EECS department, if you want a basic intro.
- **6.484: Computational Sensorimotor Learning** - Rundown on modern ML techniques and methods. Overview of supervised, unsupervised, imitation, reinforcement, etc. learning, and the algorithms behind them.

### Robotics Seminars

MIT also has a group of professors that organize weekly robotics seminars on Fridays. Different professors from around the world are invited to present their research. You can find more information (time, location, schedule of speakers) on the [website](https://robotics.mit.edu/robotics-seminar) or Google "MIT Robotics Seminars".

## Research Resources

(These are mostly for software; hardware tends to be different.)

### Background Knowledge

Getting up to speed on what we're working towards (as of 2022).

**General Overview:** Our lab is known for our hardware + actuation, and for combining it with "model-based" techniques to do cool things.
The Mini Cheetah backflipping and walking around attracted a lot of attention back in 2018-2019 (Sangbae's actually featured in the movie "Patriots Day" with Lana Condor, and he was on the Tonight Show). We now have the MIT Humanoid and the MIT Teleoperation Arm, and we want to focus on doing these dynamic, agile tasks at the limits of our actuation capabilities.

**What about Boston Dynamics?** We don't really have a connection to them, but that company was founded by Marc Raibert, who headed the Leg Lab at MIT in the '80s. We're more of a spiritual successor of that lab than Boston Dynamics is. We are collaborating with Marc Raibert's new company, the Boston Dynamics AI Institute, even though it has no actual ties to the original Boston Dynamics (Marc is apparently just as original as we are when it comes to naming). The parkour videos BD releases are definitely frustrating (Steve: I don't know why you say that), but many of their videos are in super controlled environments with a huge, *hydraulically* powered robot. We're hoping that we can start exploring the principles behind humanoid locomotion and translate them into walking, running, and rolling controllers for our *electric* humanoid over general terrain. (Steve: they are building an electric humanoid, I'm sure it'll be kickass.)

**Things to Know**

General topics that are good to know (at least at a Wikipedia level of understanding):

- LQR, iLQR, DDP
- Model-predictive control
- Whole-body control
- Trajectory optimization
- Value functions
- Supervised vs. unsupervised vs. reinforcement learning vs. imitation learning
- Function approximation

For a primer on MPC, go [here](https://www.youtube.com/watch?v=YwodGM2eoy4). All of Steve Brunton's videos are great. Go through this tutorial on [trajectory optimization](http://www.matthewpeterkelly.com/tutorials/trajectoryOptimization/) from Matthew Kelly. It's super clear and gives a great breakdown of a lot of the tools being used right now in robotics.
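To make a couple of the topics above concrete (LQR and value functions), here's a minimal sketch of finite-horizon LQR on a toy double integrator. Everything here (the system, the cost weights, the horizon) is illustrative and has nothing to do with our actual codebase; it just shows the backward Riccati recursion that also underlies iLQR and DDP:

```python
# Finite-horizon discrete-time LQR via the backward Riccati recursion.
# Toy double-integrator example; all names and values are illustrative.
import numpy as np

def lqr_gains(A, B, Q, R, N):
    """Backward pass: returns time-varying feedback gains K_0 .. K_{N-1}."""
    P = Q.copy()              # terminal cost-to-go (value function) matrix
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)   # propagate the value function backward
        gains.append(K)
    return gains[::-1]        # reorder so gains[k] is the gain at timestep k

# Double integrator: state = [position, velocity], input = force.
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[0.1]])

Ks = lqr_gains(A, B, Q, R, N=100)
x = np.array([[1.0], [0.0]])   # start 1 m from the origin
for K in Ks:
    x = A @ x - B @ (K @ x)    # u = -K x drives the state toward zero
```

The matrix `P` at each step *is* the quadratic value function of the remaining horizon, which is why "value functions" show up in the list above even for model-based control.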
The best papers to get caught up on our work are these:

- [Mini Cheetah Platform](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8793865): Design and capabilities of the Mini Cheetah.
- [Model-predictive control](https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8594448): Locomotion controller for the Mini Cheetah. We talk about MPC a LOT, so it's useful to understand how it's been implemented and how it works.
- [Whole-body Control](https://arxiv.org/pdf/1909.06586.pdf): The reactive layer that "fine-tunes" what comes out of MPC. Also important to know.

**Ask questions, general or specific, about how any of this works.** It's good to get a firm understanding of these topics.

In the last year or two, our lab has made a big push towards exploring machine learning techniques for legged locomotion, and possibly manipulation too. The paper that really kicked this off for us was [this one](https://www.science.org/doi/10.1126/scirobotics.abk2822). Check out [this video](https://www.youtube.com/watch?v=oPNkeoGMvAE) of their results.

This is already a lot, so learn it steadily - you'll quickly catch on to the ideas we discuss if you have a baseline understanding. Don't worry about knowing all the details of the papers; focus on the high-level concepts and controller architectures.
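Since MPC comes up constantly, here's the receding-horizon idea in its barest form: at every timestep, solve a finite-horizon problem from the current state, apply only the first input, then re-solve. This is an unconstrained toy on a double integrator (all names and numbers are made up); the real locomotion MPC adds friction-cone constraints, contact schedules, and a simplified centroidal model, but the loop structure is the same:

```python
# Receding-horizon ("MPC") loop on a toy double integrator.
# Illustrative only; not our lab's controller.
import numpy as np

def mpc_control(x0, A, B, Q, R, H):
    """Solve the unconstrained H-step problem in one shot; return first input."""
    n, m = B.shape
    # Condensed prediction: X = T x0 + S U, with states/inputs stacked over horizon
    T = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(H)])
    S = np.zeros((H * n, H * m))
    for k in range(H):
        for j in range(k + 1):
            S[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    Qbar = np.kron(np.eye(H), Q)   # stage cost on stacked states
    Rbar = np.kron(np.eye(H), R)   # stage cost on stacked inputs
    # Minimize (T x0 + S U)' Qbar (T x0 + S U) + U' Rbar U  ->  one linear solve
    U = np.linalg.solve(S.T @ Qbar @ S + Rbar, -S.T @ Qbar @ T @ x0)
    return U[:m]                   # receding horizon: apply only u_0

dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])   # double integrator
B = np.array([[0.0], [dt]])
Q = np.eye(2)
R = np.array([[0.1]])

x = np.array([[1.0], [0.0]])            # start 1 m from the goal
for _ in range(100):
    u = mpc_control(x, A, B, Q, R, H=20)
    x = A @ x + B @ u                   # plant steps forward; then re-plan
```

The key takeaway is the last loop: the optimization produces a whole trajectory of inputs, but only the first one ever touches the robot before everything is recomputed.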
### Specific Skills

This will depend heavily on whether you want to focus on design/hardware or controls/software, but here's a rough outline of useful skills:

**Software**

- C++ (for our main simulation environment with a direct connection to hardware, Robot-Software)
- Python (for our reinforcement learning environment, gpuGym, built on NVIDIA's IsaacGym)
- [LCM](https://lcm-proj.github.io) (how we do a lot of our communication onboard our robots)
- MATLAB (a lot of offline trajectory optimizations + model-based studies can be done effectively in MATLAB)

**Hardware** (I know very little about this; these are just the words I hear thrown around)

- SolidWorks
- Soldering
- PCB design
- Electronics + communications
- SPI/CAN

### Setup

Once you have access to our GitHub repo (and a computer), you can set up both of our major codebases (gpuGym + Robot-Software). It's most stable to use Ubuntu 20.04 LTS, and we can help you dual-boot your computer/laptop if you like using another OS as well. Once you're at this point, you can follow the READMEs in the respective repos, or ask someone to walk you through the installation process and general codebase structure. Our research code is super messy, so having someone go through what to change to actually influence behavior is really helpful. It just takes time to get familiar with.

A lot of us use [VSCode](https://code.visualstudio.com/download) (NOT Visual Studio) as our text editor and debugger. It's pretty powerful with a lot of extensions, so I recommend it, but of course, use whatever you're comfortable with.

#### Getting Started

If you plan to work on controls and software for the robot, I'd recommend trying two things with our codebases.

1. **Build a PD controller in Robot-Software**: From the root directory, go to `systems/quadruped/state_machine/FSM_State_JointPD.cpp` (FSM stands for finite state machine).
Try to parse what's going on (asking someone else in the lab to walk you through it can save a lot of time), and see if you can build a simple PD controller. This should get you familiar with the major blocks of code you need to interact with (state estimation, controllers, FSM states, the hierarchy). You'll need to:
   - Create a new FSM state and add it to the control list.
   - Design joint PD commands to send to the Mini Cheetah (they can be fixed, sinusoidal, etc.).
   - Compile, build, and run the code in Unity to test your controller.
2. **Train a standing/walking policy in gpuGym**: If you're more interested in machine learning approaches, we have a lot of base code that can be extended. From the root directory of gpuGym, go to `gpugym/envs/mini_cheetah.py` and `mini_cheetah_config.py`. I'm not sure if the base training currently works or not, but try to get familiar with the options available in the config file. The main crux of training will be in the rewards you define, which are at the bottom of `mini_cheetah.py` and `legged_robot.py`. There you can see which variables are available to you and how to return a reward to the agent. You'll need to:
   - Define custom rewards that are "good" for walking (keep a base height, track a velocity, etc.).
   - Run the training code with command-line arguments for your desired training (look at the `get_args()` function in `helpers.py`).
   - Modify the mini_cheetah config file to tune your defined rewards until you have a stable walking policy.

Neither of these is mandatory or even critical, but I think they would be good, simple projects to build confidence and get familiar with the structure of our software.

#### Running the Mini Cheetah

Right now, our most "demo-ready" robot is still the Mini Cheetah. It's capable of locomotion with various gaits, and several jumps. If you have friends visiting Boston and you want to show them around the lab, it's always nice to be able to show them a cool demo.
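Circling back to task 2 above for a moment: to make the reward-shaping idea concrete, here's a rough sketch of two typical reward terms (keep a base height, track a commanded velocity). The function names, target values, sigma, and weights are all made up for illustration, and this uses plain numpy; the real gpuGym rewards operate on batched torch tensors, one entry per simulated environment:

```python
# Illustrative reward terms in the spirit of a legged-locomotion RL setup.
# All names and constants here are hypothetical, not from gpuGym itself.
import numpy as np

TARGET_HEIGHT = 0.30      # desired base height [m] (illustrative value)
TRACKING_SIGMA = 0.25     # width of the velocity-tracking kernel (illustrative)

def reward_base_height(base_height):
    """Penalize squared deviation from the target base height."""
    return -np.square(base_height - TARGET_HEIGHT)

def reward_tracking_lin_vel(commanded_vel, actual_vel):
    """Exponential kernel: ~1 when tracking is perfect, decays toward 0."""
    error = np.sum(np.square(commanded_vel - actual_vel))
    return np.exp(-error / TRACKING_SIGMA)

# The total reward is typically a weighted sum of terms, with the weights
# living in the config file so they're easy to tune between training runs.
weights = {"base_height": 2.0, "tracking_lin_vel": 1.0}
total = (weights["base_height"] * reward_base_height(0.28)
         + weights["tracking_lin_vel"] * reward_tracking_lin_vel(
               np.array([0.5, 0.0]), np.array([0.45, 0.02])))
```

Most of policy tuning in practice is iterating on exactly these terms and weights, which is why the config file matters so much.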
Follow the steps outlined in the README of Robot-Software for running the Mini Cheetah. Ask around the lab for where Mini Cheetah 9 is (a.k.a. Mini Cheetah Vision), and for its controller. If you're interested in being able to run the robot and it's your first time, ask someone to walk you through using the controller and starting the code onboard the robot. (Steve: this is obsolete, 'cause... you will _have_ to learn to run the Mini Cheetah :).) If you're not on the [schedule](https://docs.google.com/spreadsheets/d/1YV_DOlJMnok42N1u4oka7rUdn2te_Wkonm8a9TScJOw/edit#gid=1737264812), talk to Steve about getting on it (but I'll probably come find you anyway).

## Post-Grad. Paths

You probably don't need to worry about this until Year 2 or 3, but keep it in mind.

If you're interested in academia, maintaining a strong GPA to apply for fellowships, grants, etc., and pursuing leadership opportunities will definitely make you competitive for professorship/post-doc positions. There are lots of resources for fellowships I won't put down here, but ask if you're curious. Publications will matter a lot, so finding your "angle" - what makes your research unique - is key.

If you're interested in industry, keep a lookout for internships you'd want to do and ask Sangbae early about doing them the following summer.

A couple of places our lab members have gone:

- Google
- Apple
- Spyce (robot restaurant startup in Boston! Apparently it's pretty good)
- Kitty Hawk
- Apptronik (Texas-based robotics)
- Boston Dynamics