Cameras are Bad… on their Own
All sensors have strengths and weaknesses, so let's take the example of the sensors in a self-driving car. With just a camera, we can capture rich visual detail, but a single camera can only face one direction and struggles in bad weather and low light. What if we used radar sensors? Now we can see far away, even in bad weather, but radar lacks precision.
Either sensor alone is useful but flawed; used together, they cover each other's weaknesses. Combining different sensors like this is called Sensor Fusion.
To rephrase: sensor fusion brings together data from two or more sources to build a picture of a system that is more accurate, consistent and dependable than any single source could provide.
The sources don't have to be different types of sensors, either. For example, combining data from two different cameras still counts as sensor fusion.
Sensor fusion is extremely important to any autonomous system: self-driving cars, radar tracking stations and the Internet of Things. So how does it work?
Let's first look at autonomous systems. These systems spend much of their time interacting with the world, and there are four main steps to doing so:
- Sense — collect data, A LOT of it.
- Perceive — make sense of that data.
- Plan — decide what to do with that understanding and find a path.
- Act — control the device and follow the plan.
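The four steps above can be sketched as a toy loop. Everything here is invented for illustration: the hard-coded sensor readings stand in for real hardware, and the braking threshold is an assumed value.

```python
def sense():
    # 1. Sense: collect raw data (hard-coded here in place of real sensors)
    return {"camera_dist_m": 8.0, "radar_dist_m": 7.6}

def perceive(raw):
    # 2. Perceive: turn raw readings into an understanding of the scene,
    # here by fusing the two distance estimates with a simple average
    return sum(raw.values()) / len(raw)

def plan(distance_m):
    # 3. Plan: decide what to do; the 10 m braking threshold is made up
    return "brake" if distance_m < 10.0 else "cruise"

def act(command):
    # 4. Act: send the command to the controls (just printed here)
    print(command)

act(plan(perceive(sense())))
```

Real systems run this loop continuously, many times per second, but the structure is the same.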
Sensor fusion helps a lot in steps 1 and 2: it increases the quality of the data, improves reliability, measures otherwise immeasurable states and increases coverage.
Quality of Data
Improving the quality of your data is a big part of sensor fusion. The more independent sources you collect data from, the better your combined estimate becomes, because random noise in one sensor tends to cancel out against the others.
For example, EEG data contains a lot of noise, so to reduce it you can use multiple electrodes and average their signals. The averaged signal has far less noise than any single electrode.
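A minimal sketch of that averaging idea, using a synthetic signal in place of real EEG data (the electrode count, noise level and signal shape are all made-up values for illustration):

```python
import math
import random

# Simulate a slow oscillating "brain signal" recorded by several
# electrodes, each with its own independent random noise.
random.seed(0)
n_samples = 500
true_signal = [math.sin(2 * math.pi * t / 100) for t in range(n_samples)]

def electrode(signal, noise_std=0.5):
    # Each electrode sees the true signal plus its own noise
    return [x + random.gauss(0, noise_std) for x in signal]

electrodes = [electrode(true_signal) for _ in range(8)]

# Fuse by averaging across all electrodes at every time step
fused = [sum(samples) / len(samples) for samples in zip(*electrodes)]

def rmse(estimate):
    # Root-mean-square error against the true signal
    return math.sqrt(sum((e - t) ** 2
                         for e, t in zip(estimate, true_signal)) / n_samples)

print(f"single electrode RMSE: {rmse(electrodes[0]):.3f}")
print(f"fused RMSE:            {rmse(fused):.3f}")
```

Averaging n independent electrodes shrinks the noise by roughly a factor of the square root of n, which is why the fused error comes out well below the single-electrode error.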
Increased Reliability
Having more sensors also increases reliability: with more data to compare against, outliers become much easier to recognize.
Let's go back to the EEG example. What if one of the electrodes broke? On its own, that would corrupt your data, but with other electrodes to compare against, you can spot the discrepancy and tell that something is wrong.
Measuring the Immeasurable States
This is what we talked about right at the beginning of the article: with multiple types of sensors, one sensor can make up for another's weaknesses. The chart above shows the different strengths and weaknesses of cameras, lidar and radar.
A fusion system can also test different algorithms to see which one works best, so you end up with the best data and results.
For example, take the central limit theorem: as you average more and more independent samples, the mean clusters ever more tightly (and more normally) around the true value.
A fusion system can compare the results of this kind of simple averaging against other approaches, like the ones listed below.
- Kalman filter
- Bayesian networks
- Convolutional neural network
After running these algorithms, you can compare how much each one reduces uncertainty and keep whichever gives the best data.
Uncertainty reduction: the result is more accurate, more complete, or more dependable than the inputs.
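To make the Kalman filter entry from the list above concrete, here is a minimal sketch of its measurement-update step, used to fuse two independent estimates of the same quantity (say, distance to an obstacle). The sensor means and variances are invented for illustration:

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity.

    This is the measurement-update step of a 1-D Kalman filter:
    the more certain estimate (smaller variance) gets more weight.
    """
    k = var_a / (var_a + var_b)           # Kalman gain
    mean = mean_a + k * (mean_b - mean_a)
    var = (1 - k) * var_a                 # fused variance shrinks
    return mean, var

camera = (10.2, 0.25)  # (mean distance in m, variance) - assumed values
radar = (9.6, 1.0)     # noisier, so it will be weighted less

mean, var = fuse(*camera, *radar)
print(f"fused estimate: {mean:.2f} m, variance {var:.2f}")
```

Notice that the fused variance (0.2) is smaller than either input variance: combining the two sensors produces an estimate more certain than the best sensor alone, which is exactly the uncertainty reduction described above.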
Autonomous systems are going to become a lot more available in the future, and understanding them will become a lot more important. Sensor fusion is already used in self-driving cars and other products that operate on their own. It also enables context awareness, which has huge potential for the Internet of Things (IoT), and it will become extremely important in the healthcare field as well.