![Automated quantification of ventricular dimensions][1]

See our paper here: https://www.liebertpub.com/doi/10.1089/zeb.2019.1754

See a demonstration of our algorithm and framework on the test set data: https://youtu.be/i5bX_XbwXq0

The medaka (Oryzias latipes) and the zebrafish (Danio rerio) are used as model organisms for a variety of subjects in biomedical research. The work presented here studies the potential of automated ventricular dimension estimation through heart segmentation in medaka. For more on this, take a closer look at our paper and the supplementary materials.

## Getting Started

The following will get you kick-started with the framework on your local machine.

## Prerequisites and Installing

The things you need to install to run the software, and the versions used:

```
Python (version: 3.6.7)
import cv2 (version: 3.4.4.19)
import tensorflow (version: 1.12.0)
import keras (version: 2.2.4)
import numpy (version: 1.15.4)
import scipy (version: 1.2.0)
import progressbar (version: 3.39.2)
```

## Project Structure

```
|- code
   |- models (pretrained models)
      - unet_heart.hdf5 (segmentation model)
   |- segmentation (code for training the segmentation model)
      - unet_model.py (U-Net model with adapted loss, see the code base further down for the initial GitHub code)
      - main.py
      - data.py
   - main.py (entry point to run the code and set parameters)
   - data_loader.py (loading images and models)
   - data_writer.py (writing log files to csv and images)
   - inference.py (inferencing pretrained models)
   - segmentation_analysis.py (handling segmentation predictions and some manual filtering / smoothing)
   - timeseries_analysis.py (determine frames at which the heart is in its systolic / diastolic states)
   - ventricular_dimensions.py (determine a list of ventricular dimensions, depending on the camera perspective)
|- data
   |- test_images (200 annotated image sequences for testing)
      |- test_videos (folder where your input data has to be put for inference)
         - example_R0004.avi
      |- frames (color frames of the test set)
      |- label_masks (annotations of the test set)
   |- train_images (550 annotated image segmentation data)
      |- ventral_mask (heart ground-truth masks)
      |- ventral_samples (color images related to the masks)
      |- ventral_sample_gray (grayscale images related to the masks)
|- logs
   |- e.g. example_R0004 (automatically generated log folders)
      |- segments_with_ellipses (folder with raw input frames, as well as determined segments displayed as fitted ellipses)
      - N0052_ventricular_dimensions.csv (.csv file with the determined ventricular dimensions)
      |- gifs (visualized segmentation result and original data)
```

## Running the code and parameter setting

* Make sure to place your video data in /data/test_videos/yourVideo.avi
* Adjust the file path in main.py:

```
video_name = 'yourVideo'
```

* Run the code (see the sketch below)

Once the code has finished, you will find an automatically generated log directory containing the determined ventricular dimensions as well as visualizations of the segmentation results. This has been done as an example for two test videos. Also see the comments on the details of the code within the code itself (especially within main.py).
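For orientation only, the snippet below sketches what the inference step performed by main.py / inference.py boils down to: load the pretrained unet_heart.hdf5 model, read the video frame by frame, and predict a heart mask per frame. The grayscale conversion, the 256×256 input size, and the [0, 1] scaling are assumptions on our part; the actual preprocessing lives in data_loader.py and inference.py.

```python
# Minimal, illustrative sketch of the inference step (not the repository's
# actual code); preprocessing details below are assumptions.
import cv2
import numpy as np
from keras.models import load_model

video_name = "example_R0004"                       # your video in data/test_videos/
model = load_model("code/models/unet_heart.hdf5", compile=False)

cap = cv2.VideoCapture(f"data/test_videos/{video_name}.avi")
masks = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (256, 256))            # assumed network input size
    x = gray[np.newaxis, ..., np.newaxis] / 255.0  # add batch and channel axes, scale to [0, 1]
    masks.append(model.predict(x)[0, ..., 0])      # predicted heart probability map
cap.release()
print(f"Segmented {len(masks)} frames")
```

Loading the model with compile=False avoids having to register the adapted loss from segmentation/unet_model.py, which is only needed for training, not for prediction.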
## Code Base

* U-Net on GitHub: https://github.com/zhixuhao/unet

## Data Base

The raw data was provided by Dr. Jakob Gierten, affiliated with:

1. Department of Pediatric Cardiology, University Hospital Heidelberg, Im Neuenheimer Feld 430, 69120 Heidelberg, Germany
2. Centre for Organismal Studies, Heidelberg University, Im Neuenheimer Feld 230, 69120 Heidelberg, Germany

## Contributing

We hope this work sparks additional research in this direction, whether by contributing to this framework, deploying it, or reusing the annotated ground-truth data. In any case, feel free to reach out, and make sure to reference this work:

Schutera, M., Just, S., Gierten, J., Mikut, R., Reischl, M., & Pylatiuk, C. (2019). Machine Learning Methods for Automated Quantification of Ventricular Dimensions. Zebrafish.

Contact: mark.schutera@kit.edu and pylatiuk@kit.edu

[1]: https://mfr.de-1.osf.io/export?url=https://osf.io/dvxcn/?action=download&mode=render&direct&public_file=True&initialWidth=828&childId=mfrIframe&parentTitle=OSF%20%7C%20Wiki_image.JPG&parentUrl=https://osf.io/dvxcn/&format=2400x2400.jpeg