Scientists and engineers are constantly developing new materials with unique properties for 3D printing, but figuring out how to print with these materials can be a complex and costly challenge.
Often, expert operators must use manual trial and error (potentially thousands of prints) to determine the ideal parameters for printing new materials consistently and efficiently. These parameters include print speed and the amount of material deposited by the printer.
MIT researchers have now used artificial intelligence to simplify the process. They developed a machine-learning system that uses computer vision to observe the manufacturing process and then correct errors in its handling of materials in real time.
They used simulations to teach a neural network how to adjust printing parameters to minimize error, then applied that controller to a real 3D printer. Their system printed objects more accurately than all the other 3D printing controllers they compared it with.
This work avoids the expensive process of printing thousands or millions of real objects to train neural networks. It could make it easier for engineers to incorporate new materials into their prints, which could help them develop objects with special electrical or chemical properties. It also helps technicians adjust the printing process on the fly if materials or environmental conditions unexpectedly change.
“This project is really the first demonstration of building a manufacturing system that uses machine learning to learn a complex control policy,” said senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT, who leads the Computational Design and Fabrication Group (CDFG) within the Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you have manufacturing machines that are more intelligent, they can adapt in real time to changing conditions in the workplace, to improve the yield or the accuracy of the system. You can squeeze more out of the machine.”
Co-lead authors are Mike Foshey, a mechanical engineer and project manager at CDFG, and Michal Piovarci, a postdoc at the Institute of Science and Technology Austria. MIT co-authors include Jie Xu, a graduate student in electrical engineering and computer science, and Timothy Erps, a former technical associate at CDFG. The research will be presented at the Association for Computing Machinery’s SIGGRAPH conference.
Determining the ideal parameters for a digital manufacturing process can be one of the most expensive parts of the process because of the amount of trial and error required. And once a technician finds a combination that works well, those parameters apply only to one specific situation. They have little data on how the material will behave in other environments, on different hardware, or whether a new batch will exhibit different properties.
Building a machine-learning system posed its own challenges. First, the researchers needed to measure what was happening on the printer in real time.
To do this, they developed a machine-vision system that trains two cameras on the nozzle of the 3D printer. The system shines light on the material as it is deposited and calculates the material’s thickness from how much light passes through.
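The article does not describe the exact optics, but an estimate of thickness from transmitted light can be sketched with the Beer-Lambert law. The function name, array representation, and attenuation coefficient below are illustrative assumptions, not the authors’ implementation:

```python
import numpy as np

def estimate_thickness(transmitted, incident, attenuation=1.0):
    """Invert the Beer-Lambert law, I = I0 * exp(-mu * t), to recover
    thickness t from the fraction of light passing through the material.
    `attenuation` (mu) would be calibrated per material; 1.0 is a placeholder."""
    ratio = np.clip(transmitted / incident, 1e-9, 1.0)
    return -np.log(ratio) / attenuation
```

In practice the per-pixel ratio of transmitted to incident intensity would come from the two camera images, giving a thickness map of the freshly deposited layer.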
“You can think of the visual system as a set of eyes that observe processes in real time,” Foshey said.
The controller then processes the images it receives from the vision system and, based on any errors it sees, adjusts the feed rate and direction of the printer.
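As a rough analogy only (the paper’s controller is a learned neural-network policy, not this), the correction loop can be pictured as a proportional controller: where the observed layer is too thick, slow the material feed; where it is too thin, speed it up. The names and gain here are assumptions for illustration:

```python
def corrected_feed(observed_thickness, target_thickness, feed_rate, gain=0.1):
    """Proportional correction: reduce the feed rate where the vision
    system saw over-deposition, increase it where it saw under-deposition."""
    error = observed_thickness - target_thickness
    return feed_rate * (1.0 - gain * error)
```

A learned policy plays the same role but can capture nonlinear effects, such as how material spreads after deposition, that a fixed gain cannot.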
But training a neural-network controller to understand this manufacturing process is data-intensive and would require millions of prints. So instead, the researchers built a simulator.
To train their controller, they used a process called reinforcement learning, in which the model learns by trial and error with a reward. The model’s task is to select printing parameters that create a specific object in a simulated environment. After being shown the expected output, the model is rewarded when the parameters it chooses minimize the error between its print and the expected result.

In this case, an “error” means the model either dispenses too much material, placing it in areas that should remain open, or too little, leaving open spots that should be filled. As the model performs more simulated prints, it updates its control policy to maximize the reward, becoming more and more accurate.
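The reward just described can be sketched as counting both kinds of mistakes on a binary deposition map. The array representation is an assumption for illustration, not the paper’s actual reward function:

```python
import numpy as np

def reward(printed, target):
    """Negative error between a simulated print and the target layout.
    Over-deposition: material where the target is empty.
    Under-deposition: gaps where the target should be filled."""
    printed = printed.astype(bool)
    target = target.astype(bool)
    over = np.count_nonzero(printed & ~target)
    under = np.count_nonzero(~printed & target)
    return -(over + under)
```

Maximizing this reward is equivalent to minimizing the total mismatch between the print and the target, which is what drives the policy toward accurate prints.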
However, the real world is messier than a simulation. In practice, conditions often change because of subtle variations or noise in the printing process. So the researchers built a numerical model that approximates the 3D printer’s noise and used it to add noise to the simulation, which made the results more realistic.
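The paper’s noise model is not reproduced here, but the idea of perturbing each simulated deposition so the policy never trains against a perfectly clean process can be sketched as follows (the noise form and magnitude are assumed):

```python
import numpy as np

def noisy_deposit(commanded, rng, noise_std=0.05):
    """Multiplicative Gaussian noise on the amount of material actually
    deposited, standing in for real-printer variability during training."""
    return commanded * (1.0 + rng.normal(0.0, noise_std))
```

This is essentially domain randomization: because the policy must succeed across many noise draws rather than one idealized simulator, it is more likely to transfer to real hardware.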
“What we found interesting is that by implementing this noise model, we were able to transfer a control policy trained purely in simulation to the hardware without any training on physical experiments,” Foshey said. “We didn’t have to do any fine-tuning on the actual equipment afterward.”
When they tested the controller, it printed objects more accurately than any other control method they evaluated. It was especially good at infill printing, that is, printing the interior of an object. Some other controllers deposited so much material that the printed object bulged upward, but the researchers’ controller adjusted the printing path to keep the object level.
Their control policy can even learn how the material spreads after it is deposited and adjust parameters accordingly.
“We were also able to design control policies that could control for different types of materials on the fly. So if you had a manufacturing process in the field and wanted to change the material, you wouldn’t have to revalidate the manufacturing process. You could just load the new material and the controller would automatically adjust,” Foshey said.
Now that they have demonstrated the effectiveness of this technique for 3D printing, the researchers hope to develop controllers for other manufacturing processes. They also want to see how the method can be modified for scenarios with multiple layers of material, or for printing several materials at once. Additionally, their approach assumes each material has a fixed viscosity (“syrupiness”), but a future iteration could use AI to identify and adjust for viscosity in real time.
Other co-authors on this work include Vahid Babaei, head of the AI-Aided Design and Manufacturing Group at the Max Planck Institute; Piotr Didyk, associate professor at the University of Lugano in Switzerland; Szymon Rusinkiewicz, the David M. Siegel ’83 Professor of Computer Science at Princeton University; and Bernd Bickel, professor at the Institute of Science and Technology Austria.
This work was supported, in part, by the FWF Lise Meitner program, a European Research Council starting grant, and the U.S. National Science Foundation.