Questions related to Sensor Fusion
I want to know what the difference is between sensor data fusion and sensor data stitching. For example, we have two homogeneous sensors in two autonomous driving systems (let's say camera sensors). I want to combine the camera sensor data for better accuracy and better precision, but I am confused about whether to use the term "stitching" or "fusion".
Which term is more suitable?
What are the key differences between these two terms in the autonomous driving systems domain?
I have several sensors which are interfaced via ROS and synchronized with ROS time (ROS1). The sensors and their nodes are fully functional. Each sensor does some processing, after it senses a detection in its environment, before eventually timestamping this data in ROS. Since the sensors are of different kinds and each has its own processing before timestamping its data in ROS, there is an expected delay between the detection and the timestamping, and a delay is also expected between the different sensors.
I am interested in the delay that takes place between the sensor detecting an event and eventually timestamping this data in ROS. The image shows the different processing for each sensor that takes place before timestamping.
When searching for this specific problem not much could be found, so any advice would be appreciated.
For GPS-INS fusion, how do I find the rotation matrix from the body to the navigation frame, and how do the Coriolis and gravity terms affect the navigation perturbation equations?
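For reference, with one common convention (roll $\phi$, pitch $\theta$, yaw $\psi$, NED navigation frame), the body-to-navigation rotation matrix and the velocity equation in which the Coriolis and gravity terms appear are usually written as:

\[
C_b^n =
\begin{bmatrix}
\cos\theta\cos\psi & \sin\phi\sin\theta\cos\psi - \cos\phi\sin\psi & \cos\phi\sin\theta\cos\psi + \sin\phi\sin\psi \\
\cos\theta\sin\psi & \sin\phi\sin\theta\sin\psi + \cos\phi\cos\psi & \cos\phi\sin\theta\sin\psi - \sin\phi\cos\psi \\
-\sin\theta & \sin\phi\cos\theta & \cos\phi\cos\theta
\end{bmatrix},
\qquad
\dot{v}^n = C_b^n f^b - \left(2\omega_{ie}^n + \omega_{en}^n\right) \times v^n + g^n
\]

where $f^b$ is the specific force from the accelerometers, $\omega_{ie}^n$ is the Earth rotation rate and $\omega_{en}^n$ the transport rate; perturbing this velocity equation is what brings the Coriolis and gravity terms into the navigation error (perturbation) equations.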
I have very limited knowledge of control systems. My research field is now quadcopters, and I want a comprehensive book that can fill all my knowledge gaps on quadcopter sensor fusion techniques, as well as the modeling, system identification and control methods of a quadcopter.
Are there any clear resources on inertial measurement unit (IMU) sensor fusion? How do I calculate rotation and position with an Arduino, in Euler angles and quaternions? I have read some papers and googled these terms, but I still have not found a straightforward explanation with Arduino examples.
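For illustration, a minimal quaternion-integration sketch (written in Python for readability; the same logic ports to Arduino C, and all names, the sample rate and the gyro rate are placeholders, not a specific library's API):

import numpy as np

def quat_multiply(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q, gyro, dt):
    # Quaternion kinematics: q_dot = 0.5 * q (x) (0, wx, wy, wz), gyro in rad/s (body frame)
    omega = np.concatenate(([0.0], gyro))
    q = q + 0.5 * quat_multiply(q, omega) * dt
    return q / np.linalg.norm(q)

def quat_to_euler(q):
    # Roll, pitch, yaw (rad) for the ZYX convention
    w, x, y, z = q
    roll  = np.arctan2(2*(w*x + y*z), 1 - 2*(x*x + y*y))
    pitch = np.arcsin(np.clip(2*(w*y - z*x), -1.0, 1.0))
    yaw   = np.arctan2(2*(w*z + x*y), 1 - 2*(y*y + z*z))
    return roll, pitch, yaw

# Example: integrate a constant 0.1 rad/s rotation about x for one second
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = integrate_gyro(q, np.array([0.1, 0.0, 0.0]), 0.01)
print(quat_to_euler(q))  # roll should come out close to 0.1 rad

Gyro-only integration drifts, so in practice the accelerometer (and magnetometer) are blended in with a complementary or Kalman filter; position additionally needs an external reference, since double-integrating consumer accelerometers drifts within seconds.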
In our studies we are using different sensors.
We would like to hear from researchers and industry experts about sensor data dashboard, storage platforms and analysis tools.
What are the technologies and tools used to store and visualize real-time sensor data? The answers can come from different application areas, from automotive and manufacturing to health domains. What are your recommendations for sensor data capture?
I'm facing the problem of model calibration for a large-scale manipulator with a half-spherical workspace of around 40 m in diameter.
The typical approach would be computer vision, which is difficult in this case because: i) the large scale means a huge distance to the cameras, ii) it is only possible outdoors, where sunlight might be a problem, and iii) calibration of 5-6 cameras takes some time, while the measurement should be finished in one day.
So my ideas are the following: i) use triangulation of radio-frequency signals (e.g. RF position sensors), ii) use extra IMUs, iii) develop a sensor fusion scheme, such as a Kalman filter.
I would like to ask for alternative approaches, the expected accuracy of RF position sensors without sensor fusion, problems with RF reflections (the manipulator is constructed from steel), or any other input.
thanks in advance!
Hello, I want to get the linear and angular velocity of a vehicle based on IMU and GPS data. All the examples I have seen on the internet so far do sensor fusion with a Kalman filter to get the position, not the velocity. We can also get the transformation matrices between two sets of points from the roll, pitch and yaw angles, but I wonder if I can get the linear velocity by taking the position at t0 as [0, 0, 0] and then dividing the distance between two points by the timestamp difference. Actually, the problem here is the linear velocity, because the angular velocity is already provided by the gyro. Any help will be appreciated.
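For illustration, the finite-difference idea sketched out (assuming the GPS fixes have already been converted to a local metric frame such as ENU; names are placeholders). Note that a Kalman filter whose state includes velocity, with GPS position as the measurement, will also estimate velocity even though velocity is never measured directly.

import numpy as np

def linear_velocity(p_prev, p_curr, t_prev, t_curr):
    """Mean linear velocity between two position fixes (positions in metres, times in seconds)."""
    dt = t_curr - t_prev
    if dt <= 0:
        raise ValueError("timestamps must be increasing")
    return (np.asarray(p_curr) - np.asarray(p_prev)) / dt

# Example: 1 m of northing covered in 0.5 s -> 2 m/s to the north
print(linear_velocity([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], 10.0, 10.5))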
There's a MEMS sensor which provides tri-axial angular velocity and linear acceleration. I wanted to measure the total kinetic energy, which has two parts, translational and rotational: K = 0.5*m*V^2 + 0.5*I*w^2. I have the omegas (w), so I can compute the rotational part, but I need to estimate the Eulerian-frame velocities to get the translational part of the kinetic energy. I think I need to convert the Lagrangian acceleration values to the Eulerian frame, and then the Eulerian velocity can be estimated with a filter (or an integral).
I'd appreciate it if anyone could help me with the route to estimating the (Eulerian) velocity so as to obtain the translational part of the kinetic energy.
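One possible route, as a sketch (it assumes a body-to-world rotation matrix R is available for every sample, e.g. from the sensor's own orientation fusion; the gravity convention and all names are assumptions, and plain integration drifts quickly without corrections):

import numpy as np

G = np.array([0.0, 0.0, 9.81])  # gravity in the world frame, z up (assumed convention)

def world_velocity(accel_body, rotations, dt, v0=np.zeros(3)):
    """Integrate body-frame accelerometer samples into a world-frame ('Eulerian') velocity."""
    v = v0.copy()
    velocities = []
    for a_b, R in zip(accel_body, rotations):
        a_world = R @ a_b - G        # rotate to the world frame, then remove gravity
        v = v + a_world * dt         # simple Euler integration (drifts without corrections)
        velocities.append(v.copy())
    return np.array(velocities)

# Kinetic energy per sample: 0.5*m*|v|^2 + 0.5*w^T I w  (I = inertia tensor, w = gyro output)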
What are the latest developments in measuring physical exertion, in terms of multimodal sensor fusion, algorithms, wearables and applications?
Some of the literature I’ve read describes the Kalman Filter as an excellent sensor fusion or data fusion algorithm. Because of this, I wonder (1) if it’s possible to implement a Kalman Filter with a single sensor’s data--every implementation I've read uses more than one sensor--and (2) if possible, is Kalman Filtering truly an optimal algorithm for this system?
Specifically, my sensor measures just position, p, at designated time intervals. I then use the distance traveled divided by the time interval to calculate velocity, v, at the next time step. (Velocity at first timestep is assumed to be zero.) The state, x, is defined in terms of p and v.
However, given that I merely did a manipulation of the first data set, I'm not convinced this is "data fusion" when all the data is derived from the same source. Nor am I sure the KF could do much more than subdue measurement noise in this case.
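For concreteness, this is the kind of setup meant above: a minimal constant-velocity Kalman filter driven only by position measurements (all tuning values below are placeholders), which does estimate velocity even though only position is measured.

import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for x = [p, v]
H = np.array([[1.0, 0.0]])              # only position is measured
Q = np.diag([1e-4, 1e-2])               # process noise (assumed)
R = np.array([[0.5**2]])                # measurement noise (assumed)

x = np.zeros((2, 1))
P = np.eye(2)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the position measurement z
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.0, 0.11, 0.18, 0.32, 0.41]:   # illustrative position readings
    x, P = kf_step(x, P, z)
print(x.ravel())   # filtered position and the inferred velocity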
Commercial, industrial, and military vehicles are required to safely navigate predetermined paths in degraded visual environments (i.e., fog, snow, heavy rain, dust, etc.). Various techniques based on sensor fusion (e.g., CCD cameras, RADAR, laser sensors, ultrasonic sensors, etc.), machine learning/AI, human-aided operations and others are applied.
I am fusing a certain sensor into an Unscented Kalman Filter (UKF).
This sensor reports a measurement together with an SNR metric for each measurement.
I am wondering if it is a good idea to keep changing R for this sensor based on the SNR values. However, this would mean very frequent updates of R; alternatively, I could adapt R at a lower rate, say 1 Hz or slower.
Or should R rather be kept constant, with the sensor's SNR metric used only to decide whether to keep or discard certain measurements?
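For illustration, a sketch of both options, per-measurement R adaptation and SNR gating (the SNR-to-variance mapping below is an assumption, not a standard formula, and would need to be calibrated against the sensor):

import numpy as np

def measurement_covariance(snr_db, sigma_ref=0.5, snr_ref_db=20.0):
    """Scale a reference variance so that lower SNR yields a larger R."""
    snr_lin = 10.0 ** (snr_db / 10.0)
    snr_ref = 10.0 ** (snr_ref_db / 10.0)
    return np.array([[sigma_ref**2 * snr_ref / max(snr_lin, 1e-9)]])

def accept_measurement(snr_db, snr_gate_db=5.0):
    """Alternative: keep R constant and simply drop measurements below an SNR threshold."""
    return snr_db >= snr_gate_db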
Thanks for the advice.
I want to interface a hyperspectral camera with a USB interface, and a CMOS camera (Sony Alpha 9), which comes with both HDMI and USB interfaces, with an FPGA. I know the implementation of USB 3.0 is difficult on an FPGA. Is there an alternative solution? My constraint is that I have to extract data simultaneously from both cameras. My project is related to sensor fusion, so timing is important. The whole setup is going to be mounted on a UAV.
I am currently working on smart greenhouse development, and I want to know which combination of sensors (temperature, humidity, camera, CO2 detector, pH, etc.) can support me in achieving optimal results.
Is it the same or different for all three? If it differs, then how do I calculate the energy dissipation for unicast, multicast, and broadcast at the same distance?
Scenario: Cluster based wireless sensor network.
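One common way to reason about this is the first-order radio model used in many WSN analyses; a sketch follows (the parameter values are typical textbook numbers, not measurements of any particular hardware, and only the free-space amplifier term is included; beyond a crossover distance a d^4 multipath term is often used instead):

E_ELEC = 50e-9      # J/bit, electronics energy per bit
EPS_FS = 10e-12     # J/bit/m^2, free-space amplifier energy

def e_tx(k_bits, d):
    return E_ELEC * k_bits + EPS_FS * k_bits * d**2

def e_rx(k_bits):
    return E_ELEC * k_bits

def unicast(k, d):                 # one transmission, one reception
    return e_tx(k, d) + e_rx(k)

def multicast(k, d, receivers):    # one transmission heard by the intended subset
    return e_tx(k, d) + receivers * e_rx(k)

def broadcast(k, d, neighbours):   # one transmission heard by every neighbour in range
    return e_tx(k, d) + neighbours * e_rx(k)

# At the same distance the transmit cost is identical; the totals differ only in how many
# nodes spend reception energy.
print(unicast(4000, 50), multicast(4000, 50, 5), broadcast(4000, 50, 10))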
In the case of a crowdsourcing app for monitoring ambient sound, for example: can we rely on one smartphone, or are many required?
I want to know what techniques are currently used in highway traffic monitoring, such as RFID, etc.
Please suggest available publications or any other sources.
I am trying to find the IoT term definition for my research. It seems this is a good opportunity to ask the question once more. In most cases I know of, the term IoT/IIoT can be replaced by SCADA (Supervisory Control and Data Acquisition) or ICT (Information and Communication Technology) and the text will still be perfectly OK.
Do you think the box (or even pack) of cigarettes could be the "thing"? It has a barcode, so it is the source of data. Is it the sensor? No, because in this case the barcode reader (industrial scanner) is the "sensor". Can we recognize the barcode reader as the "thing"? The answer is no, if the goal is to provide a GLOBAL cigarette tracking system. The same applies to drugs, for example. Is it an IoT/IIoT solution? My answer is YES, no doubt; it is vital for selected industries.
Is the "thing" smart - I don't think we can call the bar code something smart. The most interesting observation is that we can recognize this case as the IoT solution, but we have not mentioned Internet, wireless, etc. at all, but only that we have important mobile data and the solution is globally scoped.
Let's now replace the word GLOBAL with LOCAL (for example, a farm of cash desks in a shop) and the same application is no longer an IoT deployment, isn't it? This is true even if the cash desks are interconnected using the IP protocol!
My point is that a good term definition is important in order to work together on common rules, architecture, solutions, requirements, capabilities, limitations, etc. The keyword in the previous sentence is COMMON. The importance of the sensor and the data robustness requirement could apply to many applications, e.g. controlling an airplane engine during flight. The same engine could be monitored and tracked after landing in any airport, using local WiFi, by uploading archival data to a central advanced analytics system. Is it IIoT? During the flight it isn't, even though the solution is life critical. After landing it is IIoT, but the reliability of the data and the data transfer is not so important, isn't it?
My concern is that your definition provides a pretty good description of the Universe, but working on engineering standards is like carving in stone - it is a one-way ticket. To buy a one-way ticket you must be sure where you are going.
To be constructive my proposal for the definition is as follows:
Try it against the above example.
In the above proposal the open question is: what is "mobile data"? But I believe that the definition is much closer to the final expectation. To answer this question I propose the following approach: Data is Data, It Doesn't Matter Where It Comes From!
For the implementation of this concept we can use Object Oriented Internet paradigms covered by the:
The only missing thing is how to use these building blocks to make a consistent IoT puzzle (deployment domain). In this case a sponsor is needed to scope the outcome globally.
I believe that this way we will finally get a good starting point for further standardization.
Let me know how this scenario works for you.
I am working on a system that needs to detect road accidents to initiate its processing. I am trying in particular to monitor a human being instead of any vehicle in which he might be traveling. The system is designed in such a way that it can be helpful even to a pedestrian. I have already considered a complete reversal of acceleration as a prime indicator, and I need other factors that I can use to increase the system's accuracy and reduce false alarm conditions. Since the system needs to be vehicle independent, I can't use the sensors used in, for example, airbag systems.
I'm making a real-time motion analysis tool by putting IMUs on an Arduino Due. How can I set the sensors to 0 using quaternions (calibration)? Thanks.
There are 4 sensors using the serial port. Quaternions are transmitted in real time to MATLAB over the serial port.
In total there are 16 values for each sample.
I would like to take 5 seconds of data as a baseline, and then use the mean of those 5 s as the zero reference.
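A sketch of the zeroing step described above: average the ~5 s baseline window into one reference quaternion and report every later sample relative to it (Python used only for readability; it assumes unit quaternions in w, x, y, z order):

import numpy as np

def quat_conjugate(q):
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_multiply(q, r):
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def baseline_quaternion(samples):
    """Crude component-wise mean, fine for a static baseline where the samples barely differ."""
    q = np.mean(np.asarray(samples), axis=0)
    return q / np.linalg.norm(q)

def zeroed(q, q_baseline):
    """Orientation relative to the baseline: q_rel = q_baseline^-1 * q."""
    return quat_multiply(quat_conjugate(q_baseline), q)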
I am trying to perform track-to-track fusion in an asynchronous case. I have great results for mono-radar tracks (using IMM (CV, CT, Singer)). But when I start to fuse the tracks, I get something awful. I do not take cross-covariances into consideration at all. I know this is wrong, but those calculations need plots from the radars, and I have an asynchronous case (the periods are different and the radars aren't synchronized). How can I do this simply and efficiently? Does this cross-covariance effect really have THAT huge an impact on the results?
I am working on a multi-platform, multi-target tracking problem. I am using the interacting multiple model (IMM) approach [CV + CT + Singer]. My problem is asynchronous (different times of each plot for every platform). Mono-platform processing is fine; I am satisfied with the results. But when I try to fuse my measurements, I get poor quality.
I want to do something like this: extrapolate all local (mono-platform) tracks to one time and fuse them there. Calculation of the cross-covariances is quite involved and resource-consuming in this case, so I've chosen the method described in Zhen Ding, Lang Hong, "A distributed IMM fusion algorithm for multi-platform tracking" (1998). It suggests using a "global" filter and fusing the local tracks using both the local and the "global" filter results. (It performs the same procedure as in Bar-Shalom, Multitarget-Multisensor Tracking: Principles and Techniques, p. 464 ("Optimal fusion using central estimator"), but for an IMM filter, not a KF.)
My problem is: I want to extend this article's technique to the asynchronous case. I don't understand how to get predicted and filtered values for the local tracks (because I only have the predicted value with which to bring all local tracks to the same time). Any suggestions?
The second problem is less complicated. I just don't know what initial value to set in the "global" filter.
I know there are some interesting applications that can make use of the embedded accelerometers in smartphones.
Besides low measurement accuracy and high weight, I guess the important drawback of these applications is their low sampling frequency. I mean, fast vibrations are completely invisible to the applications that I have used so far. Therefore, I just wanted to ask whether anyone can recommend an application with a high sampling frequency, and in general, what is the highest sampling frequency we can expect from these devices?
Thank you in Advance,
Let's say I have a sensor, e.g. a digital encoder, which has a sensitivity s, i.e. the measured value falls in an interval of +/- s/2 around the true value.
With no other assumption, is it reasonable to model the noise as zero mean Gaussian?
If yes, what variance should it have?
Does a rule of thumb exist, e.g. 3 sigma = s/2 -> sigma = s/6?
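One related standard result: if the error is instead modelled as uniformly distributed over the quantization interval $[-s/2, s/2]$, its variance is

\[
\sigma^2 = \frac{s^2}{12}, \qquad \sigma = \frac{s}{\sqrt{12}} \approx 0.29\, s,
\]

and this $\sigma$ is often reused as the standard deviation of an equivalent zero-mean Gaussian model.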
SensorML is a markup language developed for open geographical information systems. There are Java libraries, XML Schema definitions for validation, and OWL support for semantic reasoning. When I did a quick browse through it, it seemed as if it is possible to
- define self-describing sensors
- define processing (from source to recipient)
- and much more
To me, this standard seems to be a real contender for interoperability issues in ubiquitous computing, ambient computing and the Internet of Things. So, have you applied it? What is your experience? What are the advantages and disadvantages? Concerning interoperability, are there any real competitors?
In this question I mean that, in my concrete case, the data arrives not in batches but as a sequence of measurements. My measurements are obtained in real time, so I have to do something with every point as soon as it is obtained. And it is hard for me to apply IMM/PDA in a correct way because:
1) It is a multi-platform case.
2) There is clutter. PDA can easily deal with it when given a batch of measurements over a time window T by calculating the association probabilities, but if I have received only one point I can't perform PDA well (for example, this point may be a clutter point).
What is the way to deal with this? Summarising: I understand IMM/PDA fairly well, but I can't apply it to multi-platform sequential data. How can I deal with it?
I am interested in literature where I could find information on state-of-the-art algorithms used for the fusion of multiple sensor data available specifically on the avionics bus of a (combat) aircraft.
The problem is I can't use the same procedure as in distributed systems. In a distributed system one can just extrapolate tracks (all tracks available at the moment) to the chosen time using a KF. I can't use a KF in the centralised case, because I don't have a velocity. What technique should I use instead? Another problem: how can I solve the following data association problem? For example, I have 3 radars and have received 2 plots (radar marks). How do I determine whether this object is being followed by only 2 radars (in which case I can begin a correlation procedure), or whether the third plot will be received soon and I should wait for it? Could you recommend any articles with a detailed explanation of centralised fusion?
I am currently working on an application which is aimed at measuring and storing the maximum rotation speed of a device attached to a rotating object.
Unfortunately I think I have finally come across a problem: my software recalculates gyro values (angular velocity) into rotational speed (in cycles per minute), but my Samsung Note II reaches only 167-168 rot/min.
Can somebody advise where I can find the maximum value of measurement that such a "budget" gyro is able to reach?
Do you know of any method to extend that value?
Can we determine the direction of a robot by fusing accelerometer and gyroscope outputs? I would like to know whether the robot is moving forward or in reverse, on a normal road or even a sloped road, without any input from the driving motors/wheels.
Which methods can I use for information fusion problems, and is the transferable belief model a method for information fusion? What else is there? Do you refer to sensor fusion? What kinds of problems are there to solve in sensor fusion applications?
I know there are many simulators for a variety of sensor fusion applications, but can you recommend a simulator for general sensor fusion (if one exists), or one dedicated to body sensors?
Thanks in advance.
I need the linear acceleration along the 3 axes of a smartphone.
I already used the device orientation to project and subtract gravity from the accelerometer input. With a correct filter it works well.
However, when applying a rotation (without any translation) the output is biased. I believe this results from the tangential (maybe centripetal) acceleration component of the rotation. Using the gyroscope one can calculate those components.
What do you think about that? How should the combination be done? I am a beginner with Kalman filtering (which sounds good for this), so any help, ideas or references are very welcome.
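For illustration, the rotation-induced terms can be written as a small sketch (the lever arm r from the rotation centre to the accelerometer is generally unknown on a phone and would itself have to be estimated):

import numpy as np

def rotation_induced_acceleration(omega, alpha, r):
    """omega: angular velocity (rad/s), alpha: angular acceleration (rad/s^2), r: lever arm (m)."""
    centripetal = np.cross(omega, np.cross(omega, r))
    tangential = np.cross(alpha, r)
    return centripetal + tangential

# alpha can be approximated by differentiating the gyroscope signal:
# alpha ~ (omega[k] - omega[k-1]) / dt   (noisy; low-pass filtering is usually needed)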
Some years ago I used some Rogowski coils, but I had a problem because the PTFE coating was burned after very few tests. Now the input power is 5 times greater, so the problem will be worse. What kind of coating could I use?
I've been trying to cluster time-series data received from a bunch of sensors. I have multiple trials and I was expecting my algorithm to cluster similar trials together. I'm planning to use Euclidean/Manhattan distance and k-means clustering.
Any ideas/advice/precautions for clustering numerical data received from analog/digital sensors? Or a method to gain higher accuracy?
Please give pointwise suggestions rather than a descriptive explanation.
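For context, a bare-bones sketch of the intended pipeline (it assumes every trial has been resampled to the same length so it can be treated as one fixed-length feature vector; scikit-learn is used for brevity):

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def cluster_trials(trials, n_clusters=3):
    """trials: array of shape (n_trials, n_samples) - one resampled time series per trial."""
    X = StandardScaler().fit_transform(np.asarray(trials))   # z-score each feature
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(X)
    return labels, km.inertia_

# For time series of different lengths or with time shifts, a DTW-based distance
# (e.g. tslearn's TimeSeriesKMeans) usually matches "similar trials" better than Euclidean.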
The MEMS accelerometer is mounted on a quadcopter. Because the quadcopter is very small and flies indoors, it is basically impossible to use GPS for sensor fusion. The position information is mainly used as the feedback signal for the autopilot mode.
I haven't tried a Kalman filter or other filtering techniques yet. And because of the thermo-mechanical error, I understand the accuracy can't be maintained for a long time. Currently my target is to maintain a relatively accurate position signal for one minute.
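As a rough feel for the difficulty, a back-of-the-envelope sketch (the residual bias value is an assumption): a constant accelerometer bias b double-integrates into 0.5*b*t^2 of position error, which is why accelerometer-only positioning is usually fused with another reference (optical flow, barometer, UWB, etc.).

bias = 0.02        # m/s^2, an assumed small residual bias after calibration
t = 60.0           # s, the one-minute target
print(0.5 * bias * t**2)   # about 36 m of drift from the bias alone after one minute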
If a robot is equipped with multiple sensors e.g. camera, audio, tactile, force/torque etc., how can its position estimation be improved by sensor fusion?
I would like to add a lidar sensor model to my vehicle dynamics algorithm. I can model the lidar time delay by using quantization techniques. To be more realistic, it is essential to add physics through the mathematical equations which govern the physical laws. Can someone recommend literature, or their own work, which starts from simple lidar equations and moves on to more complex examples?
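As a starting point, a very simple measurement-model sketch (the true range would come from ray casting against the environment model; the noise, dropout and maximum-range parameters are assumptions, and a more physical model would add the lidar range equation, beam divergence and target reflectivity):

import numpy as np

def lidar_measurement(true_range, rng, sigma_range=0.02, p_dropout=0.01, max_range=100.0):
    """Return a measured range: Gaussian range noise plus occasional dropouts."""
    if true_range > max_range or rng.random() < p_dropout:
        return np.nan                       # no return
    return true_range + rng.normal(0.0, sigma_range)

# A fixed latency can then be modelled by buffering measurements for n simulation steps
# before handing them to the vehicle-dynamics/fusion code.
rng = np.random.default_rng(0)
print(lidar_measurement(12.3, rng))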
This is a two part question as I think they go hand-in-hand.
Is there a way to determine the minimal set of sensors needed for any given exercise? For example, say one needs to detect specific objects, they start off assuming they might need a camera (more than one?), maybe a lidar, maybe a radar, etc. How does one quantify a minimal set of sensors needed to accomplish this task? Initially I'm thinking that it has something to do with the confidence one has in the detection, but I'm trying to quantify that.
Other question: Once a minimal set of sensors has been determined, how does one quantify the improvement gained in the detection by adding another sensor (could be same type, could be different)?
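One simple way to start quantifying that gain, as a sketch: if each sensor's detection probability is known and missed detections are assumed independent (often an optimistic assumption), the fused detection probability is 1 minus the product of the miss probabilities, and the improvement from an extra sensor is the change in that number.

def fused_detection_probability(per_sensor_pd):
    p_miss = 1.0
    for pd in per_sensor_pd:
        p_miss *= (1.0 - pd)
    return 1.0 - p_miss

print(fused_detection_probability([0.90]))          # one sensor: 0.90
print(fused_detection_probability([0.90, 0.80]))    # adding a second sensor: 0.98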
Multi-sensor fusion and complex event processing (CEP) both address the combination of information originating from different sources. To me it seems that CEP focuses a little more on dynamic events, while multi-sensor fusion is dedicated to a less abstract data level. However, does anybody know the exact differences between these two concepts?
We are targeting filtering of a sensor's output (Kalman filtering) using C#, so we decided to try it in MATLAB first (because matrix manipulation is simpler there).
Since we had no idea how to read live data (USB protocol) from MATLAB, we assumed static data. The problem, however, was that while the simulated data were filtered perfectly, the real data, after transferring the same implementation to C#, were not filtered as well as in the simulation. Any advice?
I'm trying to fuse the data from a digital compass with a 25 Hz acquisition rate with the data from a dead-reckoning system which calculates the heading angle of a vehicle. I got a little confused while modeling the system: how can I determine my output (measurement) matrix to get my fused result?
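For illustration, one possible model as a sketch (assumptions: state x = [heading, heading rate], and both the compass and the dead-reckoning output observe the heading directly; the noise values are placeholders):

import numpy as np

dt = 1.0 / 25.0
F = np.array([[1.0, dt],
              [0.0, 1.0]])          # constant-turn-rate process model
H = np.array([[1.0, 0.0],           # row 1: compass measures heading
              [1.0, 0.0]])          # row 2: dead reckoning also measures heading
R = np.diag([np.deg2rad(2.0)**2,    # compass noise (assumed)
             np.deg2rad(5.0)**2])   # dead-reckoning noise (assumed)

# With this H, a standard Kalman update fuses both headings into one estimate; if the two
# sources arrive at different times, run separate updates, each using the matching single row of H.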