# Existing pipelines and tools for behavioural analysis

Here we list existing software solutions for behavioural analysis that are in use or under development, both locally at UCL/SWC/GCNU and in the wider neuroscience community.

- [Existing pipelines and tools for behavioural analysis](#existing-pipelines-and-tools-for-behavioural-analysis)
  - [Data management \& databases](#data-management--databases)
  - [Data acquisition](#data-acquisition)
  - [Video (pre)processing](#video-preprocessing)
  - [Motion tracking / Pose estimation](#motion-tracking--pose-estimation)
  - [Behavioural analysis \& classification](#behavioural-analysis--classification)

## Data management & databases

* [datashuttle](https://github.com/neuroinformatics-unit/datashuttle): project data management tool being developed by NIU
* [DataJoint](https://datajoint.io/): database management system for neuroscience data, used by the Aeon project

## Data acquisition

* [Bonsai](https://bonsai-rx.org/) is a visual programming environment for data acquisition. It was developed by SWC members/alumni and is used in the institute (e.g. by the Aeon project).
  [Paper](https://www.frontiersin.org/articles/10.3389/fninf.2015.00007/full) | [GitHub](https://github.com/bonsai-rx/bonsai) | [Docs](https://bonsai-rx.org/docs/)

## Video (pre)processing

* **OpenCV**: Open Source Computer Vision Library, pre-built for Python as [opencv-python](https://pypi.org/project/opencv-python/) (`cv2`). Currently in use by Aeon.
  [Docs](https://docs.opencv.org/3.4.3/d6/d00/tutorial_py_root.html)
* [Common-Coordinate-Behaviour](https://github.com/BrancoLab/Common-Coordinate-Behaviour) is a tool written by [Philip Shamash](https://github.com/philshams) (formerly Branco Lab) to generate and apply a geometric transform that aligns videos recorded at different positions, angles, and zoom levels (useful when the arena or the camera moves). It also implements a fisheye correction for the camera lens (calibration with a checkerboard is needed). A minimal sketch of this kind of preprocessing follows this list.
  [GitHub](https://github.com/BrancoLab/Common-Coordinate-Behaviour)
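Below is a minimal sketch of this kind of OpenCV-based preprocessing: reading video frames and removing lens distortion with camera parameters from a prior checkerboard calibration (e.g. via `cv2.calibrateCamera`). It is not Common-Coordinate-Behaviour's actual code; the file name and calibration values are hypothetical placeholders.

```python
import cv2
import numpy as np

# Hypothetical intrinsics from a checkerboard calibration;
# replace with values estimated for your own camera.
K = np.array([[700.0, 0.0, 640.0],
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

cap = cv2.VideoCapture("session_video.avi")  # hypothetical file name
while True:
    ret, frame = cap.read()
    if not ret:
        break  # end of video
    # Remove lens distortion so all sessions share a common coordinate frame
    undistorted = cv2.undistort(frame, K, dist_coeffs)
    # ... further preprocessing (alignment, cropping, greyscale conversion)
cap.release()
```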
## Motion tracking / Pose estimation

* [DeepLabCut](http://www.mackenziemathislab.org/deeplabcut) is an efficient method for 2D and 3D markerless pose estimation based on transfer learning with deep neural networks. A sketch of loading its output follows this list.
  [Paper1](https://www.nature.com/articles/s41593-018-0209-y) | [Paper2](https://www.nature.com/articles/s41592-022-01443-0) | [GitHub](https://github.com/DeepLabCut/DeepLabCut)
* [SLEAP](https://sleap.ai/) is a deep learning-based approach for tracking animals in videos. Used by Jeff Erlich's group, and likely to be used by Aeon.
  [Paper](https://www.nature.com/articles/s41592-022-01426-1) | [GitHub](https://github.com/talmolab/sleap)
* [TRex](https://trex.run/) is a fast multi-animal tracking system with markerless identification and 2D estimation of posture and visual fields.
  [Paper](https://elifesciences.org/articles/64000) | [GitHub](https://github.com/mooch443/trex)
* [OpenPose](https://github.com/CMU-Perceptual-Computing-Lab/openpose) was the first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (135 keypoints in total) on single images.
  [GitHub](https://github.com/CMU-Perceptual-Computing-Lab/openpose)
* [DeepPoseKit](https://github.com/jgraving/deepposekit) is a deep learning toolkit for pose estimation and tracking.
  [Paper](https://elifesciences.org/articles/47994) | [GitHub](https://github.com/jgraving/deepposekit)
* [Facemap](https://github.com/MouseLand/facemap) is a MATLAB/Python GUI for unsupervised video analysis of rodent behavior (capable of processing multiple camera views)
* [Anipose](https://anipose.readthedocs.io/en/latest/) is an open-source toolkit for robust, markerless 3D tracking of animal behavior from multiple camera views. It leverages the machine learning toolbox DeepLabCut to track keypoints in 2D, then triangulates across camera views to estimate 3D pose.
  [Paper](https://www.sciencedirect.com/science/article/pii/S2211124721011797?via%3Dihub) | [GitHub](https://github.com/lambdaloop/anipose) | [Documentation](https://anipose.readthedocs.io/en/latest/)
* [FreiPose](https://lmb.informatik.uni-freiburg.de/projects/freipose/): a deep learning __C++__ framework for precise animal motion capture in 3D spaces.
  [Paper](https://www.biorxiv.org/content/10.1101/2020.02.27.967620v1) | [GitHub](https://github.com/lmb-freiburg/FreiPose)
* [DeepBehavior](https://github.com/aarac/DeepBehavior) is a deep learning __MATLAB__ toolbox for automated analysis of animal and human behavior imaging data.
  [GitHub](https://github.com/aarac/DeepBehavior) | [Paper](https://www.frontiersin.org/articles/10.3389/fnsys.2019.00020/full)
* [DeepFly3D](https://github.com/NeLy-EPFL/DeepFly3D) is a deep learning-based approach for 3D limb and appendage tracking in tethered, adult Drosophila.
  [Paper](https://elifesciences.org/articles/48571) | [GitHub](https://github.com/NeLy-EPFL/DeepFly3D)
* [OptiFlex](https://github.com/saptera/OptiFlex): multi-frame animal pose estimation combining deep learning with optical flow.
  [Paper](https://www.frontiersin.org/articles/10.3389/fncel.2021.621252/full) | [GitHub](https://github.com/saptera/OptiFlex)
* [idtracker.ai](https://idtrackerai.readthedocs.io/en/latest/) tracks groups of up to 100 unmarked animals from videos recorded in laboratory conditions.
  [GitLab](https://gitlab.com/polavieja_lab/idtrackerai) | [Paper](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007354)
* [DANNCE](https://github.com/spoonsso/dannce/): 3D pose estimation from multi-view video
* [FastTrack](https://github.com/FastTrackOrg/FastTrack): cross-platform desktop application for tracking multiple objects in video
* [DeepLabStream](https://github.com/SchwarzNeuroconLab/DeepLabStream): built on DLC; enables real-time tracking and closed-loop experiments
* For a broader database of neuroscience software/hardware tools, [Open Neuroscience](https://open-neuroscience.com/) is a good site.
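Below is a minimal sketch of reading DeepLabCut's prediction output, which is saved as an HDF5 file holding a pandas DataFrame with a (scorer, bodyparts, coords) column MultiIndex. The file name and body part name are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical output file produced by deeplabcut.analyze_videos()
df = pd.read_hdf("videoDLC_resnet50_myproject.h5")

# Drop the scorer level so columns are indexed by (bodypart, coord)
df.columns = df.columns.droplevel("scorer")

# x/y trajectory and per-frame confidence for one (hypothetical) body part
snout_x = df[("snout", "x")]
snout_y = df[("snout", "y")]
snout_likelihood = df[("snout", "likelihood")]

# Mask low-confidence frames before any downstream analysis
reliable = snout_likelihood > 0.9
print(f"{reliable.mean():.0%} of frames tracked with high confidence")
```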
## Behavioural analysis & classification

* [MoSeq](https://dattalab.github.io/moseq2-website/index.html) is an unsupervised machine learning method that parses mouse behavior into a set of re-usable sub-second motifs called syllables [Wiltschko et al., 2015](http://datta.hms.harvard.edu/wp-content/uploads/2018/01/pub_23.pdf). MoSeq discovers the set of syllables and the grammar expressed in any given experiment. It was originally developed for depth-camera (3D) videos, and has very recently been extended to work on motion-tracking outputs (e.g. from DeepLabCut); the latter approach is called *keypoint-MoSeq*. Well suited to many hours of open-field data with many keypoints.
  [GitHub](https://github.com/calebweinreb/keypointMoSeq) | [Paper in progress](https://www.overleaf.com/project/624b7a6d2c18c0053e72c21c)
* [DLC2action](https://github.com/AlexEMG/DLC2action) is an action segmentation package that makes running and tracking machine learning experiments easy. It can run supervised action segmentation on DeepLabCut outputs.
  [GitHub](https://github.com/AlexEMG/DLC2action) | [GUI GitHub](https://github.com/amathislab/dlc2action_annotation) | [Documentation](https://alexemg.github.io/DLC2action/html_docs/dlc2action.html)
* [DLC2Kinematics](https://github.com/AdaptiveMotorControlLab/DLC2Kinematics) is a module for kinematic analysis of DeepLabCut outputs. A generic sketch of this kind of computation appears at the end of this section.
  [GitHub](https://github.com/AdaptiveMotorControlLab/DLC2Kinematics)
* [B-SOID](https://github.com/YttriLab/B-SOID): Behavioral segmentation of open field in DeepLabCut, or B-SOID ("B-side"), is a pipeline that pairs unsupervised pattern recognition with supervised classification to achieve fast predictions of behaviors that are not predefined by users.
  [Paper](https://www.nature.com/articles/s41467-021-25420-x) | [GitHub](https://github.com/YttriLab/B-SOID)
* [SimBA](https://github.com/sgoldenlab/simba) (Simple Behavioral Analysis): a pipeline and GUI for developing supervised behavioral classifiers.
  [Paper](https://www.biorxiv.org/content/10.1101/2020.04.19.049452v2) | [GitHub](https://github.com/sgoldenlab/simba)
* [CEBRA](https://cebra.ai/) is a self-supervised method for learning non-linear embeddings that allows for label-informed time series analysis. It jointly uses behavioral and neural data in a hypothesis- or discovery-driven manner to produce consistent, high-performance latent spaces. One of the developers, [Jin Hwa Lee](https://jinhl9.github.io/), is currently a PhD student at SWC.
  [Paper](https://arxiv.org/abs/2204.00673) | [GitHub](https://github.com/AdaptiveMotorControlLab/CEBRA)
* [movement](https://github.com/neuroinformatics-unit/movement): a small __Python__ package started by Adam to analyze body movement from pose estimation output (e.g. from DeepLabCut). NIU is currently planning to develop it further.
* [opendirection](https://github.com/adamltyson/opendirection) aims to correlate spike times with spatial behaviour. Spike times are generated using [kilosort](https://github.com/cortex-lab/KiloSort), and animal body positions using [deeplabcut](https://github.com/AlexEMG/DeepLabCut). Developed by Adam.
* [Kino](https://github.com/BrancoLab/Kino) is a __Python__ package for 2D animal locomotion kinematics written by Federico Claudi from the Branco lab. Still under development, but it has some useful features.
  [GitHub](https://github.com/BrancoLab/Kino)
* [Behavior-opto-analysis](https://github.com/philshams/behavior-opto-analysis) is a __Python__ package for analyzing behavioral data from freely moving animals during optogenetics experiments.
  [GitHub](https://github.com/philshams/behavior-opto-analysis)
* [JAABA](https://jaaba.sourceforge.net/) (Janelia Automatic Animal Behavior Annotator, __MATLAB__) is a machine learning-based system that enables researchers to automatically compute interpretable, quantitative statistics describing video of behaving animals. Users encode their intuition about the structure of behavior by labeling the behavior of the animal (e.g. walking, grooming, or following) in a small set of video frames. JAABA uses machine learning techniques to convert these manual labels into behavior detectors that can then be used to automatically classify the behaviors of animals in large datasets with high throughput.
  [GitHub](https://github.com/kristinbranson/JAABA) | [Paper](https://www.nature.com/articles/nmeth.2281)
* [PyRat](https://github.com/pyratlib/pyrat) (__Python__ in Rodent Analysis and Tracking): a user-friendly Python library for analyzing DeepLabCut data.
  [GitHub](https://github.com/pyratlib/pyrat) | [Paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9125180/)
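Below is a generic sketch (not the actual API of DLC2Kinematics, Kino, or movement) of the core computation such kinematics packages perform: deriving smoothed speed from a tracked keypoint. The frame rate and the stand-in trajectory are hypothetical.

```python
import numpy as np

fps = 40  # hypothetical camera frame rate (frames per second)

# (n_frames, 2) array of x, y positions for one keypoint (e.g. body centre);
# a random walk stands in for real pose-estimation output here.
xy = np.cumsum(np.random.randn(1000, 2), axis=0)

# Frame-to-frame displacement, then speed in pixels per second
displacement = np.diff(xy, axis=0)           # shape (n_frames - 1, 2)
speed = np.linalg.norm(displacement, axis=1) * fps

# Moving-average smoothing to suppress jitter from the pose estimates
kernel = np.ones(5) / 5
speed_smooth = np.convolve(speed, kernel, mode="same")
print(f"mean speed: {speed_smooth.mean():.1f} px/s")
```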