Tecknoworks Blog

What Can You Do with Artificial Intelligence?

Here at Tecknoworks we deal with a lot of interesting projects. And on top of the projects themselves, we also get to work with some cool pieces of equipment. One example is the Breathomix eNose device. If you’ve been wondering what you can do with artificial intelligence, here is just one example.

This device analyzes the mixture of molecules in exhaled breath in real time, based on advanced signal processing and an extensive online reference database, infused with AI.

The eNose is a technically and clinically validated integration between routine spirometry and electronic nose (eNose) technology, allowing instant diagnosis and targeted treatment decisions for patients with cancer and infectious or inflammatory disease.

So that’s one very impactful example of what AI can do: Medical diagnosis of various pulmonary diseases. But we thought we’d try a fun little AI experiment to see what else we could do with the eNose.

Repurposing the nose for an AI experiment

The eNose device has multiple sensors that can detect various types of chemicals (or volatile compounds). It also includes sensors for temperature, pressure, and humidity. We had a little time on our hands, so we thought: since most real-life objects interact with these sensors in their own distinct way, could we use the eNose to detect particular smells? To be more specific, here at Tecknoworks we are very focused on what we and others are eating and drinking at all times 🙂 So we set out to build a system that could “smell” and distinguish among three of our favorites: tea, ice cream, and coffee.

Validating assumptions before the AI experiment

Before we could start with this very important project, we needed to validate two key assumptions:

● Do coffee, tea, and ice cream emit volatile organic compounds (VOCs)?

● Can the eNose detect these compounds?

Volatile organic compounds

It turns out that, for the first question, the answer is yes. According to Wikipedia:

“Volatile organic compounds (VOCs) are organic chemicals that have a high vapor pressure at ordinary room temperature. Their high vapor pressure results from a low boiling point, which causes large numbers of molecules to evaporate or sublimate from the liquid or solid form of the compound and enter the surrounding air, a trait known as volatility.”

There are even a few papers (here, here, and here) documenting the VOCs emanated by coffee.

And we have a cool infographic to boot.

The same can be said about tea and ice cream (see here and here). Both have strong odors, especially ice cream. Being a mixture of many different substances, ice cream releases lots of VOCs into the air.

Can we detect the VOCs with the device?

The internal specs of the eNose confirm that it can capture some of the VOCs released. It only remains to be seen whether the signal is reliable enough to fingerprint them and build a sufficiently accurate classifier on the captured data.

Data gathering and modeling

As with all machine learning projects, collecting the (correct) data may be the hardest part. We set out to collect ours by making multiple measurements of the three food and drink items.

Methodology

We collected 100 measurements for each of the three items, trying our best to account for possible variations:

● We used 4 different kinds of tea

● We used 4 different kinds of coffee:

  ○ Espresso

  ○ Coffee with milk

  ○ Americano

  ○ Cappuccino

● We varied the temperature of both tea and coffee (ranging from hot to room temperature)

In addition to the three categories above, we added a fourth one – calibration. Below, I’ll explain why.

What a measurement looks like

In the figure above, you can see how the system reacts when someone breathes into it multiple times. The tail end of each spike is a period of decalibration: chemicals aren’t actually being detected, but the sensors, being saturated, still send detection signals until they level off. Think of this transition as what happens to a temperature sensor going from hot to cold; there is a period of “in-between-ness”.

Although you can visually see the signals leveling off in the image above, it’s quite hard to program a computer (other than by using machine learning) to know when there is nothing to predict (i.e., when the subject of your measurement, the coffee, isn’t present under the sensors anymore). Equally important is knowing the baseline measurements for a specific sensor, so we can teach the algorithm to distinguish between noise and signal.

For all of the above reasons, we collected data on the calibration category, a state which may happen:

● Between 10 seconds after the readout of a substance and up to 1 minute after that (the desaturation)

● After 3 minutes have passed since the last measurement (the idle state)

In hindsight, it may have been better to split these two into their own categories, but I guess we’ll leave that for a future iteration.
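The timing rule above can be sketched as a small labeling function. This is a hypothetical helper (not part of the original code) that maps the time since the last substance readout to the lumped calibration category:

```python
def calibration_label(seconds_since_readout):
    """Label a time window relative to the last substance readout.

    Timing rule from the experiment: 10s to 1min after a readout is the
    desaturation phase; past 3 minutes the device is idle. Both were
    lumped into a single 'calibration' category.
    """
    if 10 <= seconds_since_readout <= 60:
        return "calibration"  # desaturation: saturated sensors leveling off
    if seconds_since_readout >= 180:
        return "calibration"  # idle: baseline noise only
    return None  # active measurement or ambiguous transition
```

Splitting the two branches into separate return values would give the desaturation/idle distinction mentioned above.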

Dataset

As a reminder, we had four categories:

● Tea

● Ice cream

● Coffee

● Calibration

And for each one we collected 100 samples, for a total of 400 data points.

Exploratory data analysis

After loading the data into pandas and completing a few data cleansing steps, we graphed the outputs of all the sensors from all the samples using boxplots, as seen below:

For anyone able to interpret a boxplot, it is easy to see that some sensors have distinct distributions specific to a single category. The remaining question, which we need to solve with machine learning, is which pattern of sensor readings is associated with each category.
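The per-sensor, per-category distributions behind such a boxplot can be computed with pandas. This is a sketch on synthetic stand-in data (the real dataset had many sensor channels and four categories); here `sensor_1` is constructed to react to coffee while `sensor_2` is uninformative:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical stand-in for the real dataset: 2 sensors, 2 categories.
df = pd.DataFrame({
    "category": ["coffee"] * 50 + ["calibration"] * 50,
    "sensor_1": np.concatenate([rng.normal(5, 1, 50), rng.normal(0, 1, 50)]),
    "sensor_2": rng.normal(2, 1, 100),  # uninformative sensor
})

# Quartiles per (category, sensor): the numbers a boxplot draws.
quartiles = df.groupby("category").quantile([0.25, 0.5, 0.75])
print(quartiles)

# df.boxplot(column="sensor_1", by="category") would render the plot itself.
```

A sensor whose quartiles barely overlap between categories (like `sensor_1` here) is exactly the kind of “distinct shape” visible in the boxplots.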

Sensor deduplication

Another problem with our setup was that the eNose had multiple redundant sensors. Of course, we could have just read the specification of the device and kept only a single instance of each, but:

● This device was a prototype (assembled by hand, with some sensors possibly malfunctioning)

● Different placements of sensors make them more or less capable of detecting VOCs

So, we went with the data scientist’s way of deduplicating them. The main goal was to keep only the readings of the sensors that proved most promising in terms of information gain.

To that end, we computed the Pearson correlation coefficient between sensors and put everything into a dendrogram plot, which you can see below.
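A correlation-based dendrogram of this kind can be sketched with scipy’s hierarchical clustering. This is a minimal illustration on synthetic readings (sensor names `s1`–`s3` are made up): `s1` and `s2` are near-duplicates, so they should land in the same cluster and be candidates for deduplication:

```python
import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
base = rng.normal(size=200)
# Hypothetical sensor readings: s1 and s2 are near-duplicates, s3 independent.
sensors = pd.DataFrame({
    "s1": base,
    "s2": base + rng.normal(scale=0.01, size=200),
    "s3": rng.normal(size=200),
})

# Turn Pearson correlation into a distance (0 = identical, ~1 = uncorrelated)
dist = 1 - sensors.corr().abs()
condensed = squareform(dist.values, checks=False)  # condensed form for linkage
Z = linkage(condensed, method="average")

# Sensors falling in the same cluster are deduplication candidates;
# scipy.cluster.hierarchy.dendrogram(Z) would draw the plot itself.
clusters = fcluster(Z, t=0.5, criterion="distance")
cluster_of = dict(zip(sensors.columns, clusters))
print(cluster_of)
```

From each cluster of highly correlated sensors, only one representative needs to be kept.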


We also trained a simple logistic regression model to inspect the learned weight magnitudes (the results of which you can see below). The basic idea is that large weights (either positive or negative) are a proxy for important features that we want to keep.

Since this wasn’t enough on its own, we also computed a more robust feature importance by training a Random Forest.
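Both importance checks can be sketched together on toy data with scikit-learn. This is an illustration under the assumption of a simple two-feature dataset where feature 0 carries the class signal and feature 1 is noise; it is not the original notebook code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical stand-in data: feature 0 is informative, feature 1 is noise.
y = rng.integers(0, 2, 300)
X = np.column_stack([y + rng.normal(scale=0.3, size=300),
                     rng.normal(size=300)])

# Logistic regression: large absolute weights hint at informative features
# (features should be on comparable scales for this to be meaningful).
logreg = LogisticRegression().fit(X, y)
weight_magnitudes = np.abs(logreg.coef_[0])

# Random Forest: impurity-based importances, which sum to 1.
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = forest.feature_importances_

print(weight_magnitudes, importances)
```

On real sensor data both rankings should be read with care (correlated features split importance between them), which is why we combined them with the dendrogram in a semi-manual selection.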

All in all, through a semi-manual process that took all the above results into consideration, we selected only the readings of the sensors below:

Model tuning

After establishing that the problem might be solvable and which data we should use, the next step was to put everything to the test and train a machine learning algorithm to assess the system’s performance. We didn’t use anything too fancy, just a plain Random Forest whose hyperparameters we later tuned using grid search; the best configuration found was:

{
    'criterion': 'gini',
    'max_depth': 10,
    'max_features': 'log2',
    'min_samples_leaf': 1,
    'n_estimators': 1000,
}
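A grid search like the one that produced this configuration can be sketched with scikit-learn’s GridSearchCV. This is a minimal version on synthetic data with a reduced grid so it runs quickly; the parameter names match the configuration above, but the values searched here are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(4)
# Synthetic 3-class stand-in for the sensor dataset.
y = rng.integers(0, 3, 200)
X = np.column_stack([y + rng.normal(scale=0.5, size=200),
                     rng.normal(size=200)])

# Reduced grid for illustration; the real search also covered
# min_samples_leaf and larger n_estimators values.
grid = {
    "criterion": ["gini"],
    "max_depth": [5, 10],
    "max_features": ["log2"],
    "n_estimators": [50],
}
search = GridSearchCV(RandomForestClassifier(random_state=0), grid, cv=3)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

`best_params_` holds the winning configuration and `best_score_` its cross-validated accuracy.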

Results

The final accuracy we got was around 87%. You can see a full classification report below:

Deployment

The final thing we did was to deploy this into “production”:

● Serialize the notebook-trained model into a pickle format

● Encapsulate the model into an Azure function

● Write a quick web app that would do the measurements and use the deployed model to predict the outcome
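The serialization step can be sketched as a pickle round trip on a toy model (the real model and the Azure Function wiring are omitted; an Azure Function would load the pickled model once at startup and call predict() on each incoming measurement):

```python
import pickle

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
# Toy model standing in for the real notebook-trained classifier.
X, y = rng.normal(size=(100, 4)), rng.integers(0, 2, 100)
model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)

# Serialize to bytes (written to a .pkl file in practice), then restore.
blob = pickle.dumps(model)
restored = pickle.loads(blob)

# The restored model must reproduce the original predictions exactly.
assert (restored.predict(X) == model.predict(X)).all()
```

One caveat worth noting: pickled scikit-learn models should be loaded with the same library versions they were trained with.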

What Can You Do with Artificial Intelligence: Conclusions

Even though we were just having fun, we knew this was a bit of a hack, and there is plenty we could improve in the methodology, data modeling, and analysis should we iterate on it. Still, we were able to build an end-to-end system for detecting coffee, ice cream, and tea with around 87% accuracy. Not bad for a little office diversion!

This article originally appeared on www.clungu.com.
