DeepLabCut is a toolbox for markerless tracking of animal body parts in lab settings across a variety of tasks, such as trail tracking, reaching in mice, and various Drosophila behaviors during egg-laying (see Mathis et al. for details). There is, however, nothing that makes the toolbox applicable only to these tasks and/or species. Please check out www.mousemotorlab.org/deeplabcut for video demonstrations of automated tracking. The implementation of DeepLabCut given here was created by NeuroCAAS developers to benchmark the performance of our platform. To try out DeepLabCut, look at the template job provided. Data for both training and testing are provided from the DeepLabCut repo.
The NeuroCAAS implementation of DeepLabCut works with version 1.0 of DeepLabCut. The analysis here offers both training and testing modes. For training, we assume that frames are labeled elsewhere by the user. For testing, we assume the input is a video. IMPORTANT: we additionally assume that the user will only be analyzing videos with one shuffle and one training fraction at a time (see the myconfig.py file). Different parameters are required for the training and testing routines, as described below.
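The single-shuffle, single-training-fraction assumption corresponds to myconfig.py entries like the following. This is a hedged sketch: the variable names follow DeepLabCut 1.x conventions and should be checked against your own myconfig.py.

```python
# Sketch of the relevant myconfig.py entries for a NeuroCAAS run.
# NeuroCAAS assumes exactly one entry in each of these lists per job.
Shuffles = [1]             # a single shuffle index
TrainingFraction = [0.95]  # a single training/test split fraction
```

If either list contains more than one value, split the job into multiple runs, one per shuffle/fraction combination.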
Training:
-Input: (zip file) A zipped folder containing the training frames to be analyzed, as well as a .csv file containing the tracked positions. Make sure that the formatting of this csv file is consistent with what is stipulated in the myconfig file. The zipped archive should include the folder itself, not just its contents.
-Config: (yaml) A YAML file containing the name of the directory that was zipped, and the DeepLabCut "myconfig" file. See template config for details.
-Output: (folder) The full model fit to the data. This folder can subsequently be passed as an input to the testing mode to analyze videos directly.
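Packaging the training input can be sketched as below. The folder and file names here are hypothetical; what matters is that the archive entries are prefixed with the folder name, so that unzipping reproduces the folder itself, and that the same folder name appears in the config YAML.

```python
import zipfile
from pathlib import Path

# Hypothetical training folder; the labeled frames and the tracked-positions
# CSV (formatted per your myconfig file) live inside it.
folder = Path("reaching-data")
folder.mkdir(exist_ok=True)
(folder / "img001.png").touch()
(folder / "img002.png").touch()
(folder / "CollectedData.csv").touch()

# Zip the folder itself: archive paths keep the "reaching-data/" prefix.
with zipfile.ZipFile("reaching-data.zip", "w") as zf:
    for p in sorted(folder.rglob("*")):
        zf.write(p)
```

Inspecting the archive with `zipfile.ZipFile("reaching-data.zip").namelist()` should show every entry under the `reaching-data/` prefix.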
Testing:
-Input: (video file) A video file (make sure the format is consistent with what is specified in the myconfig_analysis file).
-Config: (yaml) A YAML file containing the path to the model folder that will be used to analyze videos, and the DeepLabCut myconfig_analysis file. See template config for details.
-Output: (h5) Traces providing a per-frame pose estimate for the combination of video and model that were provided as inputs.
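The testing config can be sketched as below. The key names here are illustrative assumptions, not the exact NeuroCAAS schema; consult the template config for the real field names.

```yaml
# Hypothetical testing config sketch (key names are illustrative only).
model_folder: results/reaching-model   # output folder from a prior training run
# ...followed by the contents of the DeepLabCut myconfig_analysis file,
# which must match the format of the video being analyzed.
```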
You must log in to use an analysis.