Imagine for a second that you are a technician in a modern German factory. Everything around you is solid and shiny, and all the beautiful machines are buzzing with productivity. Complex production lines are arranged with clockwork precision, manufacturing products to unquestionable quality standards. And it doesn’t stop there: Industry 4.0 is everywhere, and decision-makers are driving the use of Big Data and Analytics to make those production lines ever more effective. It is like a scene from a Stanley Kubrick movie - it is utopia.
That is, until something breaks. Deep down in the belly of your complex production line, a power belt falls silent, a motor overheats or a workpiece gets jammed in the guts of a machine. Then you face the ugly reality of a historically grown manufacturing practice: on second glance, your beautiful toolchain is most likely a patchwork of devices from a variety of manufacturers. While some machinery is brand new and fully digitized, other components could be an heirloom of the company founder himself. And if you want to find out what is actually wrong, you face a Babylonian chaos of different error reporting styles, ranging from well-documented and interactive to virtually non-existent. Most likely you will end up spending a day in the company archives, sifting through tons of paper documentation, before folding your cards and calling an external technician. That is not only frustrating, but it wreaks havoc on productivity numbers and can cause a cascade of follow-up problems along the production line.
Together with our friends at Fraunhofer IPA and the Reutlingen Research Institute, we have been discussing this issue a lot. We were electrified by the question of how it was even possible that one of the most modern industrial sites in the world had not come up with a proper solution for interacting with machines that malfunction. We decided to take our shot at the problem and see how NLG could help cut through that Gordian knot of different documentation and interaction styles. The first guinea pigs for this prototype were soon found in the model factory at the University of Reutlingen: a UR5 and a UR10 manipulator from Universal Robots and their neighbors in the production line.
The vision was to build a database covering the error behaviour of all kinds of machines and to translate their different error codes into a unified ontology, allowing AX Semantics to understand semantically what the machines were trying to tell us. We would then train AX Semantics to write an error report in easily understandable language, containing all the information necessary to fix the problem. But we wouldn’t stop there. The next stage of our vision was to write different reports depending on who was asking for them.
A manager would get information about the duration of the repairs and the impact on overall productivity. The technician would get the repair information in full detail, while the operator of the machine would get assistance in deciding whether to turn the machine off, call an external technician or fix the problem themselves, and whether to shut down the rest of the production line until the problem was fixed. Once we had an understanding of what the machines should communicate, we also planned to get rid of static mailings and bolt a dialogue layer on top of our toolchain instead. In our bigger vision, stakeholders could simply walk up to a machine and ask it what was wrong.
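To make the idea of a unified ontology with role-specific reports a bit more concrete, here is a minimal sketch in Python. The error codes, ontology fields and wording are purely illustrative assumptions, not the actual AX Semantics data model:

```python
# Hypothetical mapping from (machine, vendor error code) to one shared
# ontology entry -- all codes and field names are invented for illustration.
ONTOLOGY = {
    ("UR5", "C153A3"): {
        "fault": "joint_overheat",
        "severity": "high",
        "repair_minutes": 45,
        "requires_external_technician": False,
    },
    ("PressLine", "ERR-07"): {
        "fault": "workpiece_jammed",
        "severity": "medium",
        "repair_minutes": 20,
        "requires_external_technician": False,
    },
}

def report(machine: str, code: str, role: str) -> str:
    """Render a role-specific plain-language report for one error code."""
    entry = ONTOLOGY[(machine, code)]
    fault = entry["fault"].replace("_", " ")
    if role == "manager":
        # Managers care about downtime and productivity impact.
        return (f"{machine}: {fault}; expected downtime about "
                f"{entry['repair_minutes']} minutes.")
    if role == "operator":
        # Operators need a decision aid: escalate or handle in-house?
        action = ("call an external technician"
                  if entry["requires_external_technician"]
                  else "the fault can be handled in-house")
        return f"{machine}: {fault} (severity {entry['severity']}); {action}."
    # Technicians get the full ontology entry.
    return f"{machine}: {fault}; details: {entry}"

print(report("UR5", "C153A3", "manager"))
```

In a real NLG pipeline the final sentence would of course be generated rather than templated, but the key design point survives even in this toy version: the vendor-specific code is resolved to one shared representation first, and only the rendering differs per audience.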
During the process, we found out a lot about production environments in Germany. For example, many machines were not reporting meaningful errors at all because they were simply left over from (waaay) before the digital age. Others would give you errors, but no meta-information on how to fix them. It was a classic data-scarcity situation - every computational linguist’s nightmare. There was a lot to advise and to do, but no machine data from which to formulate it. And in addition, it would have been a Herculean, if not impossible, task to include all of that information in our ontology of machine errors.
But if the machines weren’t going to talk to us, who would? Who would know what tools were necessary to repair a machine and how long it would take on average? The humans, of course. We figured that we could solve the problem of data scarcity in machine output if we found a way to structure and store the experience that human operators and technicians had gained in previous incidents with the machines. So we refactored our data model to accommodate information such as how difficult a repair was, what skills and tools were required, and how long and how expensive it had been to fix an error. To gather this information, we would send a form to the technician after the repair. To keep it well structured and machine-readable, we used as few free-text boxes as possible. To add a learning effect, we modified the backend so it would calculate averages for the duration and difficulty values and use those in its reports. More complex analytic or even predictive methods could follow here.
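The refactored data model can be sketched along these lines. All class and field names are assumptions for illustration; the point is that structured post-repair feedback accumulates into per-error averages the reports can draw on:

```python
# Sketch of structured technician feedback per error type -- field names
# (duration_minutes, difficulty, tools, cost_eur) are illustrative.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class RepairRecord:
    duration_minutes: int
    difficulty: int          # e.g. 1 (easy) .. 5 (hard), from a dropdown
    tools: list
    cost_eur: float

@dataclass
class ErrorProfile:
    """Aggregates technician feedback submitted for one error type."""
    records: list = field(default_factory=list)

    def add(self, record: RepairRecord) -> None:
        self.records.append(record)

    @property
    def avg_duration(self) -> float:
        # Running average used in the generated reports.
        return mean(r.duration_minutes for r in self.records)

    @property
    def avg_difficulty(self) -> float:
        return mean(r.difficulty for r in self.records)

# Two post-repair forms for the same error type:
profile = ErrorProfile()
profile.add(RepairRecord(30, 2, ["torque wrench"], 120.0))
profile.add(RepairRecord(50, 3, ["torque wrench", "multimeter"], 200.0))
print(profile.avg_duration)   # 40.0
```

Restricting the form to dropdowns and numeric fields is what makes this aggregation possible at all; free text would have put us right back in the data-scarcity corner.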
We are in first talks with pilot customers and will have a physical demonstrator of NLG in an industrial IoT environment ready in autumn 2018. If we don’t keep you posted, the machines will ;)