# Code, Data & Results for 2d pose estimation methods comparison

## 1. Environment

The python scripts in the Code subfolder were run using Python 3.6.8. It is advised to use a virtual environment, e.g. using `virtualenv` and `virtualenvwrapper`, or `anaconda`/`miniconda`, to avoid possible conflicts with the system's python version and packages. Required python packages are listed in the file `requirement.txt`.

## 2. Scripts description

### 2.1 Formatting scripts

1. [`raw_to_formatted.py`](raw_to_formatted.py): creates new files in a unified format, since all methods provide output data in different formats. Possible unified formats: COCO 17 ('coco') or OpenPose 18 ('openpose'). Also logs and outputs a text file with metrics (missing detections and keypoints, redundant and replaced detections, etc.) and creates empty files to fill gaps left by missed detections. A sketch of the reordering idea is given after this list.
2. [`rearangeGT_deeplabcut.py`](rearangeGT_deeplabcut.py): rearranges keypoints from DLC's labelling tool (14 keypoints) into the COCO 17 and OpenPose BODY 25 formats and exports them into .csv files.
3. [`rearangeGT_minirgbd_to_OP25.py`](rearangeGT_minirgbd_to_OP25.py): rearranges the ground-truth data from the MINI-RGBD dataset into OpenPose's BODY 25 format (used by smplify-x), and saves the data into a single .csv for each synthetic infant instead of one text file per image.
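For illustration, here is a minimal sketch of the kind of keypoint reordering this formatting step performs, assuming the raw detections can already be read into a name-to-coordinates mapping. The `reorder_to_coco17` helper and its input layout are hypothetical, not the actual implementation in `raw_to_formatted.py`:

```python
import numpy as np

# Standard COCO 17-keypoint order.
COCO17_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def reorder_to_coco17(named_keypoints):
    """Map a {name: (x, y, confidence)} dict from one method's raw output
    into a (17, 3) array in COCO 17 order, leaving missing keypoints as NaN."""
    out = np.full((len(COCO17_KEYPOINTS), 3), np.nan)
    for i, name in enumerate(COCO17_KEYPOINTS):
        if name in named_keypoints:
            out[i] = named_keypoints[name]
    return out
```

In practice each method needs its own name mapping (e.g. OpenPose's BODY 25 joint names differ from COCO's), which is what the formatting scripts encapsulate.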
### 2.2 Evaluation scripts

1. [`extract_mprofile.py`](extract_mprofile.py): When a memory profile file exists, extracts and saves the average and peak RAM usage and the runtime in a text file.
2. [`comparisons.py`](comparisons.py): Compares ground-truth data (2D, labeled by L.N.) to the estimated keypoints. The Euclidean distances and OKS values are computed for each frame of a given method's results folder for a given video, then the results are saved in a .csv file. Also saves the keypoints and their confidence values (either the global score when available, or the median of the individual keypoint confidences) for each frame in a .csv, for easier use in subsequent comparison/analysis scripts. The results are computed for three detection selection modes: 'default' (first detection), 'match' (optimal detection with regard to Euclidean distances), and 'score' (highest-confidence detection). A sketch of the OKS computation is given after this list.
3. [`comparisons_avgs.py`](comparisons_avgs.py): Calculates mean OKS and mean Euclidean distances for all methods for a given video from the .csv files created by [`comparisons.py`](comparisons.py) and saves a .csv file with the results.
4. [`draw_dists.py`](draw_dists.py): For each keypoint, draws circles on an image of a given video, centered around the ground-truth keypoints for that image. The colour of the circle represents the method, and its radius represents the 2D mean Euclidean distance for that keypoint across the whole video.
5. [`correlations.py`](correlations.py): Calculates correlations (Shapiro-Wilk normality test, and Spearman's rank-order correlation). Correlates: 1) the global score with OKS over all videos; 2) the individual keypoints' confidences with their Euclidean distances per video. As an extra, calculates the overall mean OKS and Euclidean distances and their standard deviations over all videos, to take advantage of the fact that the script already loops over all the data.
6. [`calc_ar_ap.py`](calc_ar_ap.py): Calculates Average Precision and Average Recall. Based on [cocoapi's cocoeval](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py).
7. [`draw_dists_avgs.py`](draw_dists_avgs.py): For each keypoint, draws circles on an image, centered around the ground-truth keypoints for that image. The colour of the circle represents the method, and its radius represents the mean Neck-MidHip (or MidShoulder-MidHip) error ratio for that keypoint across all videos, based on Euclidean distances normalised by the Neck-MidHip segment length.
8. [`reliability.py`](reliability.py): Calculates the Intraclass Correlation Coefficient (ICC) between two coders for manually-labeled keypoints, separately for x and y coordinates. Outputs three .csv files: one with the ICC for each keypoint and each coordinate, one with an overall ICC for the whole dataset, and one that gives the mean, std, median and minimum ICC for each ICC version ("all", "x" and "y").
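For reference, here is a minimal sketch of a per-frame OKS computation following the definition used in cocoapi's `cocoeval.py`. The `oks` function name, signature and input layout are illustrative assumptions, not necessarily the exact implementation in `comparisons.py`:

```python
import numpy as np

# Per-keypoint constants from COCO's keypoint evaluation (cocoeval.py), COCO 17 order.
COCO_SIGMAS = np.array([
    .26, .25, .25, .35, .35, .79, .79, .72, .72,
    .62, .62, 1.07, 1.07, .87, .87, .89, .89,
]) / 10.0

def oks(gt_xy, dt_xy, visibility, area):
    """Object Keypoint Similarity between ground-truth and detected keypoints.

    gt_xy, dt_xy: (17, 2) arrays of keypoint coordinates in pixels.
    visibility:   (17,) array, > 0 for labelled ground-truth keypoints.
    area:         scale of the person (e.g. bounding-box or segment area in px^2).
    """
    d2 = np.sum((gt_xy - dt_xy) ** 2, axis=1)       # squared Euclidean distances
    k2 = (2 * COCO_SIGMAS) ** 2                     # per-keypoint tolerance constants
    per_kp = np.exp(-d2 / (2 * area * k2 + np.spacing(1)))
    mask = visibility > 0
    return per_kp[mask].mean() if mask.any() else 0.0
```

The per-keypoint Euclidean distances correspond to `np.sqrt(d2)`, i.e. the raw errors before the scale normalisation and exponential weighting applied by OKS.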
### 2.3 Utils / Automation scripts

1. [`batch_process.sh`](batch_process.sh): Makes it easier to automate the use of the Formatting scripts (1. and 2.) and Evaluation scripts (1. to 4.) over all the data, methods and processing types. From the root folder containing the data, if the data has the correct structure, it will find all datasets (real infants and MINI-RGBD synthetic infants), all individual infants of each dataset, all weeks, all sessions (recordings), and then all the pose estimation method folders containing their results (a minimal traversal sketch is given after the folder structure in section 4). Comment and uncomment the script calls as needed to process what is needed.
2. [`batch_rename_video_processed.sh`](batch_rename_video_processed.sh): Makes it easier to automate batch renaming of some result files (e.g. jsons) from the pose estimation methods, to unify the results into something more usable. Read the warning in the file and use this script very carefully. Comment and uncomment the rename command lines as needed.
3. [`prepare_reliability_subdataset.py`](prepare_reliability_subdataset.py): From the originally labeled images, prepares the reliability subdataset, based on 20% and 30% of the full dataset.

## 3. Data & results datasets separation

There are two folders:

1. `data_results_seq`: concerns only the processed data and results for the secondary extra dataset presented in the Supplementary Materials, with 900 sequential images from a single real infant recording.
2. `data_results`: concerns all the processed data and results for our own dataset of real infant recordings mentioned in the main manuscript and in the Supplementary Materials, except what is in `data_results_seq`, plus the processed data and results for the synthetic infants from the MINI-RGBD dataset.

## 4. Data & Results folder structure

Overall folder structure for the data & results folders:

- Infant type (real, synthetic)
  - Infant ID (AA, TH) or dataset (MINI-RGBD)
    - week (8w, 11w) or placeholder folder for synthetic infants (to keep the structure level identical for automation)
      - video name (i.e. 1 folder per video processed)
        - gt: contains gt files in .csv format
        - processing type (images_proc, video_proc)
          - methods
            - method name (alphapose, detectron2, etc.)
              - keypoints_coco17: contains 1 json per frame containing the detections and keypoints data
              - keypoints_raw: contains data in the original format output by the method
              - stats.txt: contains the number of missing keypoints, etc. for that video
              - results (for that single video and processing type only)
                - detection selection type (first=first ranked, match=lowest Euclidean distance, score=highest score): contains .csvs with results (Euclidean distances, OKS, correlations)
- results (overall videos)
  - processing type (images_proc, video_proc): contains AP/AR results
    - detection selection type (first=first ranked, match=lowest Euclidean distance, score=highest score): contains .csvs with mean results (Euclidean distances, OKS, correlations)
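As an illustration of the automation this structure enables (and of the kind of walk `batch_process.sh` performs), here is a minimal Python sketch that enumerates every method folder. The root name and glob pattern are assumptions derived from the structure above, not part of the released scripts:

```python
from pathlib import Path

# Hypothetical root of the data & results folder described above.
ROOT = Path("data_results")

# infant type / infant ID or dataset / week / video / processing type / methods / method name
PATTERN = "*/*/*/*/*/methods/*"

for method_dir in sorted(ROOT.glob(PATTERN)):
    if not method_dir.is_dir():
        continue
    # Recover the levels of the hierarchy from the path components.
    infant_type, infant_id, week, video, proc_type = method_dir.parts[-7:-2]
    print(infant_type, infant_id, week, video, proc_type, method_dir.name)
```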