Detecting 3D-Printer failures: a different, simple approach

Some background

Rees Pozzi
9 min read · Mar 14, 2022

3D-Printing has already revolutionised the way the world views manufacturing and will continue to do so. What started out in the 1980s as a basic way to print a 3D object layer by layer has evolved into a deeply complex and varied technology. In the modern era, 3D-Printing is capable of creating things such as prosthetic limbs, edible food, rocket and supercar components, biomaterials, and nanostructures. This just scratches the surface of what is possible with 3D-Printing, and as the technology continues to evolve, so will its potential applications.

Although 3D-Printing technology continues to evolve and excel within these complex sectors, it is still adored and widely used by more novice users at home and by smaller business owners. Considering the high risk and cost involved with failed 3D-Printed rocket engines, biological structures, or anything equally precise and safety-critical, it naturally follows that a proportionate amount of money and effort should be spent on researching and developing methods to minimise failure and analyse prints. Because of this, the average user has been, in part, forgotten about, and print error detection and object analysis continue to evolve mostly alongside only the most complex applications of 3D-Printing.

Photo by Kadir Celep on Unsplash

Why bother?

Whilst the average user may not be creating functioning vehicles with 3D-Printed parts, or using highly expensive print filaments, there is still excess cost and waste produced whenever a print fails without being noticed and stopped. The printer keeps extruding filament until the print is supposed to be finished or the user spots the failure and interrupts the printing process manually. This wastes electricity, print filament, and time, not to mention the frustration caused by a failed print.

The idea

I spent dozens of hours researching the notable related work within the 3D-Printing field, analysing the advantages and drawbacks of each approach in depth, and studying multiple areas of computer vision for existing techniques. To summarise vastly: as the required application becomes less generalised, the computational complexity of the image-processing algorithm, or the time taken to train the machine-learning portion of the system, generally increases in proportion. I also discovered that most computer vision techniques are concerned with locating a search object within a target image. This is closer to object detection than verification of correctness in most cases (The Spaghetti Detective, for example). If the search object can be located even with occlusions, a print is likely to be flagged as correct even if there are incorrect areas of print material surrounding it, or vice versa. Hence, there must exist a method which can generalise well yet be computationally inexpensive, without using ML & AI to try and spot every possible kind of error from a bank of training images.

I took a few steps back after getting caught up in all the newest tech and exciting AI approaches, and tried to consider the problem from first principles. I ignored the fact that this was 3D-printer related, and just considered how I might detect a failure if I were watching the print myself.

K.I.S.S — “Keep it simple, stupid”

I found that the external shape is the most important aspect of the correctness of a 3D-Print. Almost all errors that can occur during a print will result in some discrepancy in the overall shape of the print. When you split the analysis of a 3D object into fast shape checks in multiple 2D planes, you eliminate a lot of computation, yet in this specific application you retain a high level of knowledge about the correctness of the print. Existing approaches either try to detect these localised errors based on past similar errors (ML), or analyse each layer as it is printed, which adds a large amount of time to the print process.

I wanted to show that it’s still possible to achieve a desired outcome (error/no error), without the analysis tool becoming overly complex and verbose for the task which it aims to complete.

Methodology

It took me a while to think of an approach for this, scouring the internet for any mathematics or computer vision papers that dived into 2D shape analysis, which is what the problem had now essentially become. Applying the same K.I.S.S principle as above, I came up with a (very) simple approach for comparing the total percentage difference between two binary images (binary, to keep computation to a minimum). Initially I thought it was too good to be true, but I then found a few great methods that came close to validating that what I wanted would work, most notably this paper. The joy of this is that the user's setup eliminates most of the rotation variance that could be introduced; accounting for rotation would require a more complex approach. I eliminated translation and scale variance by mapping the object's centroid using contours and moments, as the centroid is constant relative to the object no matter its location in the image or its scale. The centroid also gives me a reference point for splitting both images into corresponding bins.
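To illustrate the centroid step, here is a minimal sketch of how it can be done with OpenCV contours and moments. This is my own reconstruction, not the project's actual code, and the function and variable names are assumptions.

```python
import cv2
import numpy as np

def centroid_of_shape(binary_img: np.ndarray) -> tuple[int, int]:
    """Find the centroid of the largest contour in a binary image.

    The centroid is computed from image moments, so it stays fixed
    relative to the shape regardless of where the shape sits in the
    frame or how large it appears (translation/scale invariance).
    """
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    cx = int(m["m10"] / m["m00"])
    cy = int(m["m01"] / m["m00"])
    return cx, cy
```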

General approach, where the black dot is the object's centroid in a given 2D plane.

So how does this work? After some image processing, the model has its centroid mapped and is split into 16 bins. The number of pixels per bin is counted and converted into a percentage of the shape, and the corresponding bins of the model and the captured image are compared to check for any discrepancies, as below.

Another perk of this approach is that the more the shape is altered by a printing error, the more the centroid shifts relative to the expected model, which increases the percentage difference between the two shapes further. Because the comparison happens at the pixel level, there is never going to be exactly 0% difference between the expected model and the captured image, so a pass threshold was introduced to the code.
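As a concrete illustration, here is a minimal sketch of the bin comparison. It assumes the 16 bins form a 4×4 grid anchored at the centroid and that the pass threshold is a total percentage difference; the real script's exact bin layout and threshold value may differ.

```python
import cv2
import numpy as np

PASS_THRESHOLD = 5.0  # illustrative value: total % difference allowed

def bin_percentages(binary_img: np.ndarray) -> np.ndarray:
    """Split the shape into 16 centroid-anchored bins and return the
    percentage of the shape's white pixels falling in each bin."""
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    m = cv2.moments(largest)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    # 4x4 grid: each half on either side of the centroid is split in two,
    # so the bin layout stays fixed relative to the shape itself.
    xs = [x, (x + cx) / 2, cx, (cx + x + w) / 2, x + w]
    ys = [y, (y + cy) / 2, cy, (cy + y + h) / 2, y + h]

    total = cv2.countNonZero(binary_img)
    percentages = []
    for i in range(4):
        for j in range(4):
            cell = binary_img[int(ys[i]):int(ys[i + 1]),
                              int(xs[j]):int(xs[j + 1])]
            percentages.append(100.0 * cv2.countNonZero(cell) / total)
    return np.array(percentages)

def shapes_match(expected: np.ndarray, captured: np.ndarray) -> bool:
    """Compare bin percentages; pass if the total difference stays under
    the threshold (an exact 0% match never happens in practice)."""
    diff = np.abs(bin_percentages(expected) - bin_percentages(captured)).sum()
    return diff <= PASS_THRESHOLD
```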

In practice, the script is run with an Expected Model and a Captured Image, at a given point during the print process, to check if an error has occurred. This has been abstracted to a high level for the scope of finding a new method, more can be read about this in the limitations section.

Converting the captured image into a binary image

There are two flows for converting the captured photo into a binary image containing the object, which you can explore further within the codebase.
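For context, a plausible version of such a flow using OpenCV might look like the sketch below. It captures the general idea only (greyscale, threshold, fill the largest contour) and is not necessarily either of the actual flows in the repository.

```python
import cv2
import numpy as np

def to_binary_shape(image_path: str) -> np.ndarray:
    """Convert a captured photo into a binary image of the printed object."""
    img = cv2.imread(image_path)
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(grey, (5, 5), 0)

    # Otsu's method picks a threshold automatically, separating the
    # object from the background.
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Keep only the largest contour and fill it, discarding background noise.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    largest = max(contours, key=cv2.contourArea)
    shape = np.zeros_like(binary)
    cv2.drawContours(shape, [largest], -1, 255, thickness=cv2.FILLED)
    return shape
```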

I also considered the chance of users having poor-quality cameras, or noise introduced by poor lighting, so I added pyramid layers, subsampling with a Gaussian blur at each layer, to help ensure the object is detected in the image. As you can see below, it really made a difference (a short sketch of the idea follows the screenshots).

Software with Gaussian Pyramid
Software before adding Gaussian Pyramid
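For illustration, here is a minimal sketch of the pyramid idea using OpenCV's pyrDown, which blurs with a Gaussian kernel and subsamples at each layer. The number of levels and where this sits in the pipeline are assumptions; the repository may do this differently.

```python
import cv2
import numpy as np

def pyramid_denoise(grey: np.ndarray, levels: int = 2) -> np.ndarray:
    """Build a Gaussian pyramid by repeatedly blurring and subsampling.

    Each pyrDown call applies a Gaussian blur and halves the resolution,
    suppressing sensor noise from cheap cameras or poor lighting before
    the object is thresholded and segmented.
    """
    reduced = grey.copy()
    for _ in range(levels):
        reduced = cv2.pyrDown(reduced)
    return reduced

# Example usage: denoise, then threshold the smaller, cleaner image.
# grey = cv2.cvtColor(cv2.imread("capture.jpg"), cv2.COLOR_BGR2GRAY)
# small = pyramid_denoise(grey)
# _, binary = cv2.threshold(small, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```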

Limitations

This project had a very limited scope: to come up with a new approach for detecting 3D-Printer errors with a low level of computational complexity. Naturally, there are a few limitations.

  • Backgrounds — In this technique, probably the most important stage is correctly identifying and segmenting the object that is being printed from all other information in the input image. For the expected object model, this is trivial as all edges are clearly defined and there is no background. In reality this perfect segmentation when capturing an image is not trivial, and may involve setting up a pre-registered background as I demonstrated at a simple level here. In general, this would require some attention to work in all environments.
  • Binary Comparison — This method uses bins of white pixels to compare the print against the expected model, so even a single-pixel difference contributes to the percentage difference. The problem is that it takes very little to cause at least one pixel of difference between the two objects, and this will happen in every detection test run with the tool. This highlights the importance of choosing a suitable threshold for deciding whether the shape resembles the expected shape closely enough to be considered successful.
  • Filling the biggest contour — In most cases this does not cause an issue, but it can in more specific edge cases. For example, consider printing a donut vertically: the object's hole is actually very useful when analysing the print, as it can contain erroneous material that this method would ignore. (Though it's worth noting that material in the middle of the donut means the edges wouldn't be built properly, so an error would still likely be caught.)
  • Rotation — Rotation variance is eliminated here by controlling the camera position relative to the print. Full rotation invariance would require an adaptation of the method.
Actual Print (right). Expected model (left)
No error detected

Extensions

  • Video — In practice, this would involve capturing frames from a live video feed during the print process. This would require a more active approach to background removal / object segmentation, as the lighting in the video or parts of the background may change over time (see the sketch after the architecture diagram below).
  • Analyse from multiple perspectives — This only considers a front-on view; a full solution should capture multiple perspectives and run a combined analysis of each to be as accurate as possible.
  • Create a full E2E system — The obvious one! I hope to incorporate this into a full system and see if it really works in practice. It would require a fair amount of other work: splitting the original expected model into slices at different stages of the print for the plugin to compare with, deciding when to run each comparison (another complex area), and capturing the 2D shape from different perspectives.
Potential system architecture with OctoPrint.
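To make the video extension a little more concrete, here is a rough sketch of what a frame-grabbing loop might look like. The interval, file name, and the crude whole-image pixel comparison are all placeholders standing in for the bin comparison described earlier, and nothing here reflects the OctoPrint plugin API.

```python
import time
import cv2
import numpy as np

CHECK_INTERVAL_S = 120           # hypothetical: seconds between comparisons
PASS_THRESHOLD = 5.0             # hypothetical: allowed % of differing pixels
EXPECTED_MODEL = "expected.png"  # hypothetical: pre-rendered 2D slice of the model

def binarise(grey: np.ndarray) -> np.ndarray:
    """Otsu threshold as a stand-in for the full segmentation pipeline."""
    _, binary = cv2.threshold(grey, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

def watch_print(camera_index: int = 0) -> None:
    """Periodically grab a frame and compare it against the expected shape."""
    expected_bin = binarise(cv2.imread(EXPECTED_MODEL, cv2.IMREAD_GRAYSCALE))
    cam = cv2.VideoCapture(camera_index)
    try:
        while cam.isOpened():
            ok, frame = cam.read()
            if not ok:
                break
            grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            captured_bin = binarise(cv2.resize(grey, expected_bin.shape[::-1]))
            # Crude stand-in for the bin comparison: overall % of differing pixels.
            diff = cv2.absdiff(expected_bin, captured_bin)
            pct_diff = 100.0 * cv2.countNonZero(diff) / diff.size
            if pct_diff > PASS_THRESHOLD:
                print("Possible print failure detected - would pause the print here")
                break
            time.sleep(CHECK_INTERVAL_S)
    finally:
        cam.release()
```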

Overview

Generally, I found that this approach worked surprisingly well at a basic level, with an average run time of 0.31 seconds per comparison, and you may only need 5–10 comparisons per print. Of course, I'm not forgetting that being part of a larger system, with background removal functionality and 3D-object splitting, will add more time, but as a general start this is MUCH faster than other approaches, such as analysing every single layer that is printed, or training an ML model to recognise every possible 3D-printer error that can occur. This is a fast, generalised approach that doesn't rely on any other print data, and minimises computational complexity whilst retaining the ability to detect a print deviation.

Testing Performance

You can run and try this tool for yourself, and explore the code, by cloning the 3D-Printer GitHub repository. This was a great project, and the research was really entertaining and tough to summarise! Hopefully I or someone else can pick up some of the extensions in the future and see if it would work in practice.
