# 1. Install MediaPipe

Install MediaPipe with `pip install` in an Anaconda virtual environment, following the official instructions. Use Python 3.10 or above. The installation should also pull in any missing dependencies (in particular OpenCV and NumPy). We assume the environment is named `mediapipe`.

See resources:

[https://github.com/google-ai-edge/mediapipe]

[https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker/python]

[https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker]

# 2. Weights choice and how to change them

We used `Pose landmarker (Heavy)`, whose weights are included in the repository and used as the default in our main script. To change the model, update the default parameters of the `__init__()` function of the `MMPoseDetector` class in `main.py`, and download the new weights into the `./models/` folder. See [https://developers.google.com/mediapipe/solutions/vision/pose_landmarker] to check out and download other models.

# 3. Run MediaPipe

In the following text, replace `$data_path` with the path to the data (video or image folder) and `$result_path` with the path to the root of the results folder. Open a new terminal window from MediaPipe's root folder, then use the command:

`conda activate mediapipe`

# 3.1 Direct processing

Memory profiling can be done by installing the Python package `memory-profiler` in the same conda environment, then prefixing MediaPipe's regular processing command with:

`mprof run --include-children --multiprocess --output $result_path/mediapipe/mprofile.dat`

Keypoints can be written to a CSV file by adding the following parameter to the command:

`--csv_path $result_path/mediapipe/keypoints_MP33.csv`

A video with the visualised keypoints can be produced by adding the following parameter to the command:

`--output_path $result_path/mediapipe/results_vid.mp4`

Note that because we were processing sequential images, frame-by-frame processing also uses the `VIDEO` running mode that MediaPipe offers, to benefit from its tracking and improve performance. To change this, modify the `_create_landmarker()` function of the `MMPoseDetector` class.
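For reference, the sketch below illustrates what frame-by-frame detection in the `VIDEO` running mode looks like with MediaPipe's Tasks API. It is a minimal example and not the repository's `MMPoseDetector` implementation; the model filename, frame folder and frame rate are assumptions to adapt to your setup.

```python
# Minimal sketch (not the repository's MMPoseDetector): frame-by-frame
# detection with the VIDEO running mode. Model path, frame folder and
# frame rate below are assumptions.
import glob

import mediapipe as mp
from mediapipe.tasks import python as mp_tasks
from mediapipe.tasks.python import vision

MODEL_PATH = "./models/pose_landmarker_heavy.task"  # assumed filename
FPS = 30                                            # assumed frame rate

options = vision.PoseLandmarkerOptions(
    base_options=mp_tasks.BaseOptions(model_asset_path=MODEL_PATH),
    running_mode=vision.RunningMode.VIDEO,  # temporal tracking across frames
)

with vision.PoseLandmarker.create_from_options(options) as landmarker:
    for i, frame_path in enumerate(sorted(glob.glob("frames/*.png"))):
        image = mp.Image.create_from_file(frame_path)
        # VIDEO mode requires monotonically increasing timestamps (in ms).
        result = landmarker.detect_for_video(image, int(i * 1000 / FPS))
        if result.pose_landmarks:
            # 33 normalised landmarks per detected person.
            nose = result.pose_landmarks[0][0]
            print(frame_path, round(nose.x, 3), round(nose.y, 3))
```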
# 3.1.1 Processing frame-by-frame

Run the following command:

`python main.py --images_path $data_path`

# 3.1.2 Processing a video

Run the following command:

`python main.py --video_path $data_path`

# 3.2 Automated / Batch processing

Two bash scripts, `batch_processing_real.sh` and `batch_processing_syn.sh`, are used for batch processing. They can be modified or used as examples to adapt to your case, depending on your data folder structure and the expected results folder structure. A script can be called by opening a terminal in the folder where it is located and using the command:

`sh batch_processing_real.sh`

By default (see the sketch after this list):

1. The scripts assume that the data folder structure has a depth of 3 to reach the folder of images or the videos to process. As such, `$data_path` should be 3 levels above the data, and the first line of the scripts should be updated to this path (e.g., if `$data_path` is `xxx/data` with a subfolder tree of `./Infant_ID/Infant_Age/Session_ID/`, then the image folder or video should be in the `Session_ID` subfolder). The scripts will explore all these subfolders and process what they find.
2. The scripts automatically replace any folder with "data" in its name in `$data_path` by "results" and create an identical structure, e.g., `xxx/results/Infant_ID/Infant_Age/Session_ID/`, in which they create a `mediapipe` folder and save the output files, if there are any.
3. This also assumes that there is only one video per subfolder; currently only .mp4, .avi and .mov files are sought. You can modify line 28 to add more video formats to the find command if needed.
4. The scripts assume the processing of videos; to process images frame-by-frame, comment lines 28 and 29 and uncomment line 24.
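For illustration, here is a minimal Python sketch of the default batch logic described above (three-level data tree, "data" replaced by "results", one video per session folder, outputs in a `mediapipe` subfolder). The provided bash scripts remain the reference; the paths and the flags passed to `main.py` below are assumptions based on the commands in section 3.1.

```python
# Illustrative sketch only; the actual batch processing is done by
# batch_processing_real.sh / batch_processing_syn.sh. Paths and flags
# are assumptions matching the defaults described above.
import subprocess
from pathlib import Path

DATA_ROOT = Path("xxx/data")  # $data_path, three levels above the videos

# Walk Infant_ID/Infant_Age/Session_ID and process one video per session.
for session_dir in sorted(DATA_ROOT.glob("*/*/*")):
    if not session_dir.is_dir():
        continue
    videos = sorted(
        p for ext in ("*.mp4", "*.avi", "*.mov") for p in session_dir.glob(ext)
    )
    if not videos:
        continue
    # Mirror the data tree under a "results" root, with a mediapipe subfolder.
    result_dir = Path(str(session_dir).replace("data", "results", 1)) / "mediapipe"
    result_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "python", "main.py",
            "--video_path", str(videos[0]),
            "--csv_path", str(result_dir / "keypoints_MP33.csv"),
            "--output_path", str(result_dir / "results_vid.mp4"),
        ],
        check=True,
    )
```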