Can sonifying data help us make better decisions? (Data Sonification for Real-Time Process Monitoring, part 2)

When I started building my use case on designing sound for AI-led real-time process monitoring, I was still unaware of a peculiar characteristic of deep learning algorithms. Which is, they still make a lot of mistakes, and in doing so they make the life of a control room operator pretty difficult.

In a private conversation, a partner in a project on the sonification of monitoring data in a water management plant (I’ll share more soon!) told me that the anomaly detection algorithm currently triggers as many as one acoustic false alarm (a siren, or a loud bell) every day, with the obvious and understandable consequence that operators keep the alarm switched off. Better to risk a (still improbable) cyber attack than to drown in admin work reporting false alarms.
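Part of the problem is base rates: when true attacks are extremely rare, even a detector with an impressively low error rate produces alarms that are almost all false. Here is a minimal back-of-the-envelope sketch in Python, with purely illustrative numbers of my own, not figures from the plant:

```python
# Illustrative base-rate sketch: why a seemingly accurate detector
# can still fire a false alarm every day. All numbers are assumed.

checks_per_day = 1_000        # assumed: sensor windows scored per day
p_attack = 1e-5               # assumed: true attacks are extremely rare
false_positive_rate = 0.001   # assumed: detector flags 0.1% of normal windows

expected_false_alarms = checks_per_day * (1 - p_attack) * false_positive_rate
expected_true_alarms = checks_per_day * p_attack

print(f"Expected false alarms per day: {expected_false_alarms:.2f}")
print(f"Expected true alarms per day:  {expected_true_alarms:.5f}")
# Under these assumptions, virtually every alarm the operator hears is false.
```

With numbers like these, an operator hears roughly one false alarm a day and could wait months for a true one; it is hard to blame them for reaching for the off switch.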

The error rates of AI led me to rethink the role of data representation in this particular field.

If our data source is not reliable, our primary goal should not be to give the human recipient of our representation a final answer: alarm yes/alarm no. Mainly because we might put a lot of effort into communicating something false or misleading.

Our primary focus should be

to communicate what the data are saying in a way that empowers the human operator to use her personal experience, to autonomously decide whether the alert is credible or not, and to act accordingly

Basically, what we would like to design is a human-scaled “facilitator”: a system that represents and communicates information so that it merges seamlessly with the context of use, the goals, and the personal history of the user, leaving the key decision on when to act to the human, not to the machine. Not a pre-packaged report of standardised decisions taken by the algorithm, but a tool that leverages our sophisticated sensory and cognitive system to detect and prevent potentially dangerous situations.

After all, isn’t that what we constantly do when our senses alert us that something is wrong in the environment? A sound we are used to hearing suddenly stops: is the vacuum cleaner clogged? Thunder rumbles in the distance: a storm is coming. The engine of my car is higher pitched than usual: something’s wrong with the transmission. Now imagine being an operator with 25 years of experience at the same water plant, which also happens to be in your home town. How many subtle changes in the system’s behaviour would you be able to appreciate, if they were properly scaled down from a few thousand (or million) numerical readings every second to a humanly understandable artefact?
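To make “scaling down” concrete, here is a minimal sketch of one classic approach, parameter-mapping sonification: a high-rate sensor stream is downsampled and each value is mapped to a pitch, so that drift or anomalies are heard as a change in tone rather than announced as a yes/no alarm. Everything here (the fake sensor data, the pitch range, the file name) is an assumption of mine for illustration, not the design of any project mentioned in this series:

```python
import wave
import numpy as np

# Minimal parameter-mapping sonification sketch. All names and numbers
# below are illustrative assumptions, not values from a real plant.

SAMPLE_RATE = 44100           # audio sample rate (Hz)
TONES_PER_SECOND = 4          # data points rendered as tones per second
F_MIN, F_MAX = 220.0, 880.0   # pitch range: one sensor value -> one frequency

def sonify(values, wav_path="sensor_stream.wav"):
    """Map a 1-D stream of sensor values to a sequence of short tones."""
    values = np.asarray(values, dtype=float)
    lo, hi = values.min(), values.max()
    # Normalise readings to [0, 1], then map linearly onto the pitch range.
    norm = (values - lo) / (hi - lo) if hi > lo else np.zeros_like(values)
    freqs = F_MIN + norm * (F_MAX - F_MIN)

    samples_per_tone = SAMPLE_RATE // TONES_PER_SECOND
    t = np.arange(samples_per_tone) / SAMPLE_RATE
    audio = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

    with wave.open(wav_path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)  # 16-bit PCM
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes((audio * 0.5 * 32767).astype(np.int16).tobytes())

# Fake sensor stream: ~1000 readings/s for two minutes, steady flow
# with a slow drift starting at t = 60 s (the change we want to *hear*).
readings_per_second = 1000
raw = 5.0 + 0.05 * np.random.randn(120 * readings_per_second)
raw[60 * readings_per_second:] += np.linspace(0, 1.5, 60 * readings_per_second)

# Downsample: average each block of readings into one tone's worth of data.
block = readings_per_second // TONES_PER_SECOND
per_tone = raw[: len(raw) // block * block].reshape(-1, block).mean(axis=1)
sonify(per_tone)
```

Played back, the first minute is a steady, slightly jittery tone; the drift then registers as a slow, unmistakable rise in pitch: exactly the kind of change a trained ear notices long before it would commit to “alarm yes/alarm no”.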

Our trained mind has efficiently processed sensory inputs for millennia, through endless layers of embodied experience. The fact that AI can manage enormous quantities of input gives it an advantage in machine-focussed tasks, but when it comes to making decisions, our embodied knowledge of the environment around us is still key.

In the next post, I’ll start sharing some real stuff. The projects I am working on at the moment cover cyber-attack detection in water plants, cybersecurity and anomaly detection in internet networks, and detection of sleep anomalies in patients with nocturnal apnea.

Image source: Present & Correct