**The Image-to-Physical Liver Registration Sparse Data Challenge**

The challenge is based on the publication by Collins et al. (2017) in IEEE Transactions on Medical Imaging (vol. 36, no. 7, pp. 1502-1510) and is described in the SPIE Medical Imaging 2019 proceedings paper (please cite both of these if using the data within a paper). Briefly: there are surgical workflow advantages to aligning preoperative liver image volume data with its intraoperative physical counterpart using the sparse surface data visible at presentation during the procedure. We have developed a novel human-to-phantom framework that transposes real operating room (OR) data patterns, acquired clinically with an optically tracked stylus, onto a quantitative deforming phantom environment. This framework allows the development and testing of image-to-physical registration algorithms in the presence of deformation, with quantitative subsurface targets for assessing error, within the context of realistic OR data acquisition. The deformations we have imposed on the phantom mimic patterns we have seen in the OR: the presentation of the organ leaves its anterior surface visible to varying extents, and deformations are associated with surgical packing on the posterior side of the organ. For this challenge, these states can be assumed.

As part of this new challenge, we have developed a new phantom and more data patterns not previously used, along with many more subsurface targets for characterization than in previous work (n = 159 targets). To formally enter your result in the challenge and have it posted on the Final Result Dashboard as complete, results must be provided for all 112 data sets. You do not need to submit results for all data sets to interact with the challenge. The Dashboard will also track partial submissions, i.e.
you can provide subsets for analysis while you are developing, and these results will appear on the Dashboard. Only the latest results will be retained.

**Brief Data Description for Challenge**

If you register for the challenge, the data provided is relatively simple to understand and use. We provide a binary image volume mask of the phantom liver in its undeformed state, to use as you see fit. We also provide our 3D tetrahedral mesh, built from that mask in real physical dimensions; you are more than welcome to use it. All results will ultimately be reported and assessed on this grid, which aligns with the mask when the header is applied. Choosing to use our 3D grid in your algorithm likely obviates the use of the binary mask. The last file is a zip file of 112 sparse data sets representing real patterns of OR data transposed onto our phantom (7 patterns per data extent, 3 different data extents, and 4 deformations, providing 84 patterns of contact-based digitization data, plus an additional 28 novel patterns to investigate the benefit of non-contact digitization; a sparse data pattern example is provided).

The phantom is constructed from 80% Ecoflex 00-10 platinum-cure silicone mixed with 10% Silicone Thinner and 10% Silicone Slacker by volume (Smooth-On Inc., Pennsylvania), with stiffness similar to the liver. It incorporates 159 subsurface 1 mm diameter bead targets that serve as ground-truth target locations (liver volume targets). The phantom is homogeneous except for the targets (which have a negligible effect on deformations). Deformations are applied on the posterior surface, and surface data is acquired from the anterior surface.

*Binary Mask Note:* If you wish to build your own representation of the liver domain, the provided mask is a 512 x 512 x 631 binary image volume (voxel size 0.683594 x 0.683594 x 0.8 mm).
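Given the stated volume dimensions and voxel size, voxel indices can be mapped into physical millimeter coordinates. The sketch below illustrates that conversion under simplifying assumptions: it ignores any rotation or origin offset a real image header would carry, and the scaling logic is ours, not part of the challenge distribution.

```python
import numpy as np

# Stated mask geometry: 512 x 512 x 631 voxels,
# voxel size 0.683594 x 0.683594 x 0.8 mm.
VOXEL_SIZE_MM = np.array([0.683594, 0.683594, 0.8])
MASK_SHAPE = (512, 512, 631)

def voxel_to_physical(ijk):
    """Map voxel indices (i, j, k) to physical mm coordinates.

    Assumption: the volume origin sits at voxel (0, 0, 0) with no
    rotation; a real header may define a different transform.
    """
    return np.asarray(ijk, dtype=float) * VOXEL_SIZE_MM

# Example: the geometric center of the volume, in mm.
center = voxel_to_physical([255.5, 255.5, 315.0])
```

In practice you would read the scaling and origin from the mask's image header rather than hard-coding them, so that the grid aligns with the provided mesh as described above.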
*3D Tetrahedral Grid Note:* The results you provide will be the Dx, Dy, Dz transform of each point within the grid provided by our mesh. Once provided, your transformed point cloud will be used to compare the deformations in your result against our true target deformations, as provided by repeat CT imaging. In this experiment, n = 159 subsurface targets are distributed throughout the phantom.

*Zip File Data Note:* A data set pattern consists of 4 separate ASCII point clouds (left inferior ridge, right inferior ridge, falciform region, and general surface swabbing). Your task is to align the grid space associated with the provided 3D mesh to that of the sparse data pattern. Whether you use any of the geometric references provided is your choice; we provide them because they are salient features that physicians can determine easily and without moving the organ. In addition, several other investigators have begun to follow this same work.

**Use of this dataset must cite the following publications:**

[1] EL Brewer, LW Clements, JA Collins, DJ Doss, JS Heiselman, MI Miga, CD Pavas, and EH Wisdom III, "The Image-to-Physical Liver Registration Sparse Data Challenge," in Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling, vol. 10951, pp. 364-370, SPIE, 2019.

[2] JA Collins, JA Weis, JS Heiselman, LW Clements, AL Simpson, WR Jarnagin, and MI Miga, "Improving registration robustness for image-guided liver surgery in a novel human-to-phantom data framework," IEEE Transactions on Medical Imaging, vol. 36, no. 7, pp. 1502-1510, 2017.

[3] JS Heiselman, JA Collins, MJ Ringel, TP Kingham, WR Jarnagin, and MI Miga, "The Image-to-Physical Liver Registration Sparse Data Challenge: comparison of state-of-the-art using a common dataset," Journal of Medical Imaging, 2024 (in press).
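Tying the grid and zip-file notes together, the following is a minimal I/O sketch for a submission: load one sparse data pattern's four ASCII point clouds, and write the per-node Dx, Dy, Dz displacements for every mesh node. The file names, the whitespace-delimited x y z layout, and the identity "registration" are all assumptions for illustration; a real entry would substitute its own rigid or deformable alignment, and the challenge's actual file naming takes precedence.

```python
import numpy as np

def load_pattern(prefix):
    """Load one data set's four point clouds as (N, 3) arrays.

    Assumption: each cloud is a whitespace-delimited ASCII file of
    x y z rows; the "<prefix>_<part>.txt" naming is hypothetical.
    """
    parts = ["left_ridge", "right_ridge", "falciform", "surface"]
    return {p: np.loadtxt(f"{prefix}_{p}.txt") for p in parts}

def write_displacements(nodes, registered_nodes, out_path):
    """Save Dx, Dy, Dz for each mesh node, one row per node."""
    np.savetxt(out_path, registered_nodes - nodes, fmt="%.6f")

# Placeholder: a real method would align the mesh to the sparse data;
# here the displacement is zero (identity) just to show the I/O shape.
nodes = np.zeros((10, 3))        # stand-in for the provided mesh nodes
registered = nodes.copy()        # stand-in for registered node positions
write_displacements(nodes, registered, "result_dataset001.txt")
```

Keeping one output file per data set, with one displacement row per mesh node, makes it straightforward to assemble the full 112-data-set submission for the Dashboard.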