# **Open Source Computer Vision-based Layer-wise 3D Printing Analysis**

> *Aliaksei L. Petsiuk and Joshua M. Pearce*
> *Michigan Technological University*
> *March 2020*

---

# **Highlights**

- Developed a visual servoing platform using monocular multistage image segmentation
- The presented algorithm prevents critical failures during additive manufacturing
- The developed system allows tracking of printing errors on both the interior and exterior of the part

# **Abstract**

The paper describes an open source computer vision-based hardware structure and software algorithm that analyze the 3-D printing process layer by layer, track printing errors, and generate appropriate printer actions to improve reliability. This approach is built upon multiple-stage monocular image examination, which allows monitoring of both the external shape of the printed object and the internal structure of its layers. Starting with side-view height validation, the developed program analyzes the virtual top view for outer-shell contour correspondence using the multi-template matching and iterative closest point algorithms, and assesses inner-layer texture quality by clustering spatial-frequency filter responses with Gaussian mixture models and segmenting structural anomalies with agglomerative hierarchical clustering. This allows evaluation of both global and local parameters of the printing modes. The experimentally verified analysis time per layer is less than one minute, which can be considered quasi-real-time for large prints. The system can work as an intelligent printing suspension tool designed to save time and material. Moreover, the results show the algorithm provides a means to systematize in situ printing data as a first step toward a fully open source failure correction algorithm for additive manufacturing.

---

# **Dependencies**

| No. | Package | License | Description | Author(s) |
| :--- | :--- | :--- | :--- | :--- |
| 1 | numpy | OSI Approved (BSD) | Multidimensional data container | Travis E. Oliphant et al. |
| 2 | pandas | BSD | Data structures designer | Wes McKinney |
| 3 | matplotlib | Python Software Foundation (PSF) | Comprehensive visualization library | John D. Hunter, Michael Droettboom |
| 4 | scipy | BSD | Mathematics toolkit | Travis Oliphant, Pearu Peterson, Eric Jones |
| 5 | opencv | MIT | Graphics library | Intel Corporation, Willow Garage, Itseez |
| 6 | multi-template-matching | GPL-3.0 | Object recognition package | L.S.V. Thomas, J. Gehrig |
| 7 | scikit-image | BSD | Image processing kit | Stéfan van der Walt |
| 8 | scikit-learn | OSI Approved (BSD) | Machine learning module | David Cournapeau |
| 9 | meshcut | MIT | 3-D mesh analyzer | Julien Rebetez |
| 10 | numpy-stl | BSD | STL editing library | Rick van Hattem |
| 11 | pycode | GPL-3.0 | G-Code parser | Jerome Bergmann |

# **STL model used in the experiments**

The algorithm was tested during regular printing without failures of the 42×51×70 mm low-polygonal fox model (CC BY-NC-SA 3.0 license) with the following printing parameters: 1.75 mm PLA, 0.4 mm layer height, 0.4 mm line width, 30% grid infill, and 3.2 mm wall thickness. The entire model consists of 175 layers, but the tests were carried out for the first 96 layers, since part of the model was located outside of the visible area. The image dataset is available.

# **Data flow**

| Input data | Output data |
| :---: | :---: |
| Intrinsic camera parameters<br />Source image for a single layer<br />Extrinsic camera parameters | Vertical level error distribution<br />Global contour corrections<br />Local infill defect localization |

---

# **Repository contents**

### 1. Main control interface
### 2. Image processing cycle
### 3. Volumetric slider
### 4. STL and G-Code visualization
### 5. Proposed algorithm

### **1. Main Control Interface**

@[osf](fu5tm)

The software, developed in Python, parses the source G-Code, dividing it into layers and segmenting the extruder paths into categories such as skirt, infill, outer and inner walls, and support. The program is synchronized with the printer through a RAMPS 1.4 3-D printer control board, with the open source Marlin firmware serving as an intermediate driver.

### **2. Image Processing Cycle**

@[osf](g8av4)

The image processing pipeline for a single layer can be divided into three branches:

1. Side-view height validation
2. Global trajectory correction
3. Local texture analysis

Starting with the side-view height validation, the algorithm analyzes the virtual top view for global trajectory matching and local texture examination. This allows taking into account both global and local parameters of the printing process.

### **3. Volumetric Slider**

@[osf](q4a6v)

The volumetric slider application with a GUI allows the user to upload the image dataset of a printed part and conduct post-printing volumetric analysis.

### **4. STL and G-Code Visualization**

@[osf](qc4ry)

An additional script visualizes the camera position and the G-Code trajectories projected onto the camera view.

### **5. Proposed Algorithm**

@[osf](rnb3k)

The proposed algorithm for detecting printing failures assumes the presence of one camera located at an angle to the working surface of the 3-D printer. An angled camera allows observation of both the active printed layer and part of the printed model from the side. Thus, one source frame can be divided into a virtual top view and a pseudo side view. Criteria such as bed leveling, loss of dimensionality, and non-circularity depend on the specific printer model and are manually calibrated by the user at the time of the first run.
It is possible to create calibration tables to determine correction factors for the G-Code trajectories. However, at this stage, the above parameters are checked only for compliance or non-compliance with the specified values. In case of non-compliance in bed leveling, dimensionality, or circularity, printing is suspended. This method does not eliminate these errors during the printing process, but it saves time and material.

# **Conclusions**

The development of an adaptive algorithm is a comprehensive and complex problem, because it is challenging to (1) uniquely determine the type of error visually, (2) establish a direct causal relationship between the type of error and the printing parameter involved, and (3) declare in advance which parameter value (scaling coefficients, feed rate, temperature, traveling speed, etc.) should be used to correct the failure. The experiments above are based on the assumption that the mechanical parameters of the printer (stability of assembly, presence of grease in moving parts, belt tension, electrical voltage of the stepper motor drivers, etc.) are configured and calibrated optimally. The experimental results obtained for the nominal printing mode without deviations allow determining the accuracy and tolerance of the adaptive algorithm. Thus, at this stage of the research, the presented work is more an intelligent printing suspension tool designed to save time and material than a full failure correction algorithm for printing enhancement. However, this work will allow users to systematize knowledge about failure mechanisms and will serve as a starting point for deeper study in the future and for a full failure correction system for open source additive manufacturing.

---