Science topic
Data Fusion - Science topic
Explore the latest questions and answers in Data Fusion, and find Data Fusion experts.
Questions related to Data Fusion
I am investigating the relationship between hysteresis loop area (stress-strain response) and electrical resistance evolution in multifunctional composites with embedded copper inserts under fatigue. The goal is to understand how mechanical energy dissipation relates to electrical behavior over cyclic loading, particularly under different applied voltages.
🔹 What are the best methods for correlating mechanical and electrical data in fatigue experiments?
🔹 Have you used data fusion techniques for multi-physics problems like this?
🔹 Any recommendations on tools, algorithms, or key references for analyzing such coupled effects?
I’d love to hear insights from researchers working on electrical-mechanical interactions in materials science.
Any suggestions, references, or example code would be greatly appreciated!
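One simple starting point for correlating the two channels (a hedged sketch, not a method specific to your composites): compute the hysteresis loop area per cycle with the shoelace formula and correlate the per-cycle areas with the per-cycle resistance readings. All signal values below are hypothetical stand-ins for your measured stress, strain, and resistance series.

```python
import numpy as np

def loop_area(stress, strain):
    """Area enclosed by one stress-strain cycle (shoelace formula)."""
    x, y = np.asarray(strain), np.asarray(stress)
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))

# Hypothetical cycles: elliptical loops of growing amplitude stand in
# for real per-cycle stress/strain data.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
areas = np.array([loop_area(a * np.sin(t), a * np.cos(t))
                  for a in (1.0, 1.1, 1.2)])

# Hypothetical per-cycle resistance readings; Pearson correlation links
# mechanical dissipation to electrical evolution.
resistance = np.array([10.0, 10.4, 10.9])
r = np.corrcoef(areas, resistance)[0, 1]
```

From there, cross-correlation at various lags, or regression of resistance change on cumulative dissipated energy, would be natural next steps.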
📷 Exciting Opportunity: Join us as a Postdoctoral Fellow in Drone and Satellite Data Fusion for Sustainable and Smart Cities! 📷
We are seeking a passionate Postdoctoral Fellow to join the Interdisciplinary Research Center for Aviation & Space Exploration (IRC–ASE) at King Fahd University of Petroleum and Minerals (KFUPM). As part of our innovative team, you'll have the unique opportunity to work directly under the mentorship of Dr. Bilal, leading cutting-edge research on Drone System data collection, processing, and fusion with Satellite Data to drive innovations in Remote Sensing Applications for Sustainable and Smart Cities.
📷 Key Responsibilities:
• Develop and implement advanced Drone data collection techniques
• Process and fuse Drone and Satellite data for accurate Remote Sensing analysis
• Apply solutions to address challenges in Sustainable and Smart Cities development, including environmental monitoring, urban planning, and more
📷 What We Offer:
• Access to cutting-edge facilities and resources
• A dynamic, interdisciplinary research environment
• The opportunity to contribute to shaping the future of remote sensing and drone technology in the context of Sustainable and Smart Cities.
📷 Who We're Looking For:
• PhD holders with expertise in remote sensing, drones, satellite data, or related fields
• Experience with Drone data processing and analysis
• Strong problem-solving skills and a passion for interdisciplinary research focused on Sustainable and Smart urban solutions.
📷 Interested?
Please send your CV and a proposal related to Drone data applications in Remote Sensing for Sustainable and Smart Cities directly to Dr. Bilal at muhammad.bilal@kfupm.edu.sa.
Don't miss out on this unique opportunity to make a lasting impact on the future of Sustainable and Smart City development through cutting-edge Remote Sensing technology! 📷
#PostDoc #SustainableCities #SmartCities #DroneTechnology #DataFusion #ResearchOpportunity #IRC_ASE #AviationSpaceExploration #KFUPM
I am gathering data on this question for a case study and research paper.
In a multi-static radar system with multiple transmitters and receivers present, is it necessary for the data fusion process that the fused data come from the same transmitter?
2024 3rd International Conference on Automation, Electronic Science and Technology (AEST 2024) in Kunming, China on June 7-9, 2024.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
(1) Electronic Science and Technology
· Signal Processing
· Image Processing
· Semiconductor Technology
· Integrated Circuits
· Physical Electronics
· Electronic Circuit
......
(2) Automation
· Linear System Control
· Control Integrated Circuits and Applications
· Parallel Control and Management of Complex Systems
· Automatic Control System
· Automation and Monitoring System
......
All accepted full papers will be published in the conference proceedings and will be submitted to EI Compendex / Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 1, 2024
Registration Deadline: May 24, 2024
Final Paper Submission Date: May 31, 2024
Conference Dates: June 7-9, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code in the submission/registration system gets you priority review and feedback.

How to Reasonably Weight the Uncertainty of Laser Tracker and the Mean Square Error of Level to Obtain Accurate H(Z)-value?
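One common, hedged answer is inverse-variance weighting, under the assumption that the tracker's standard uncertainty and the level's root-mean-square error can both be treated as 1-sigma values of independent estimates. The numbers below are hypothetical:

```python
# Hypothetical H-values and 1-sigma uncertainties (metres).
h_tracker, sigma_tracker = 100.0123, 0.05e-3   # laser tracker
h_level,   sigma_level   = 100.0125, 0.03e-3   # level (RMS error)

# Inverse-variance weights: the more certain instrument counts more.
w_t = 1.0 / sigma_tracker ** 2
w_l = 1.0 / sigma_level ** 2

h_fused = (w_t * h_tracker + w_l * h_level) / (w_t + w_l)
sigma_fused = (w_t + w_l) ** -0.5              # combined standard uncertainty
```

The fused uncertainty is always smaller than either input's, and the estimate is pulled toward the more precise instrument. Whether the level's mean square error is really comparable to a standard uncertainty depends on how it was derived.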
Journal: Remote Sensing (MDPI)
Topic: Any topic
Article processing charge (APC): 2500 CHF (Swiss francs); I will pay it.
Scope:
- Multi-spectral and hyperspectral remote sensing
- Active and passive microwave remote sensing
- Lidar and laser scanning
- Geometric reconstruction
- Physical modeling and signatures
- Change detection
- Image processing and pattern recognition
- Data fusion and data assimilation
- Dedicated satellite missions
- Operational processing facilities
- Spaceborne, airborne and terrestrial platforms
- Remote sensing applications
Deadline: 30 December 2022
Process: Please contact me for more information.
I want to know the difference between sensor data fusion and sensor data stitching. For example, suppose we have two homogeneous sensors in two autonomous driving systems (say, camera sensors), and I want to combine the camera data for better accuracy and precision, but I am confused about whether to use the term "stitching" or "fusion".
Which term is more suitable?
What are the key differences between these two terms in the autonomous driving domain?
Are there any articles on identifying systems with noise on the input, e.g., using Extended Least Squares, Instrumental Variable algorithms, or other methods?
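For intuition on why input noise matters, here is a hedged, minimal sketch of the instrumental-variable idea on synthetic data: when the input is observed with noise (errors-in-variables), ordinary least squares is biased toward zero, while an instrument correlated with the true input but not with the measurement noise recovers the gain. All data here is simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x_true = rng.normal(size=n)                  # true (unobserved) input
x_obs  = x_true + 0.5 * rng.normal(size=n)   # input measured with noise
y      = 2.0 * x_true + 0.1 * rng.normal(size=n)   # true gain = 2
z      = x_true + 0.5 * rng.normal(size=n)   # instrument: an independent second reading

b_ols = (x_obs @ y) / (x_obs @ x_obs)        # biased toward zero
b_iv  = (z @ y) / (z @ x_obs)                # consistent IV estimate
```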
Say, for argument's sake, I have two or more images with different degrees of blurring.
Is there an algorithm that can (faithfully) reconstruct the underlying image from the blurred images?
Best regards and thanks
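If the blur kernels (PSFs) are known, a multi-frame Wiener-style combination is one classical answer: each observation contributes where its PSF preserves frequencies, and the regularizer suppresses the rest. A hedged sketch under strong assumptions (known Gaussian PSFs, circular blur, no noise, so the regularizer lam is tiny; with real noise lam must be larger, and unknown PSFs would require blind deconvolution):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))                     # stand-in "true" image

def gauss_kernel(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    k = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return np.fft.ifftshift(k / k.sum())       # move kernel centre to (0, 0)

# Two known blurs (sigma = 1 and sigma = 3); observations kept in Fourier space.
H = [np.fft.fft2(gauss_kernel(img.shape, s)) for s in (1.0, 3.0)]
Y = [h * np.fft.fft2(img) for h in H]

lam = 1e-9                                     # tiny: this toy data is noise-free
num = sum(np.conj(h) * y for h, y in zip(H, Y))
den = sum(np.abs(h) ** 2 for h in H) + lam
restored = np.real(np.fft.ifft2(num / den))    # least-squares combination
```

The more images with different blurs you have, the fewer frequencies are lost by all of them simultaneously, which is what makes multi-frame reconstruction better than deblurring any single image.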
Hello everyone,
I have 2 land cover maps of an area that were classified by randomForest in R from 2 different sources. I would like to apply the Dempster-Shafer theory to fuse these maps to produce the final land cover map. The basic probability assignment (BPA) function will be constructed by "calculating the probability of each pixel that belongs to each category and the probability of correct classification based on the Random Forest classification".
Can anyone suggest the way to do that in R?
Thanks and regards.
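The per-pixel arithmetic of Dempster's rule is small enough to sketch directly. Below is a hedged Python sketch (the question asks about R, but the same arithmetic ports line-for-line) for the common simplification where each map assigns mass to singleton classes plus the whole frame Theta (ignorance, e.g. one minus the classifier's accuracy); it assumes conflict is below 1:

```python
def combine(m1, m2, classes):
    """Dempster's rule for masses on singleton classes plus Theta."""
    theta1, theta2 = m1.get("Theta", 0.0), m2.get("Theta", 0.0)
    # Conflict: mass assigned to incompatible singletons by the two sources.
    conflict = sum(m1.get(a, 0.0) * m2.get(b, 0.0)
                   for a in classes for b in classes if a != b)
    k = 1.0 - conflict                      # normalisation constant
    fused = {c: (m1.get(c, 0.0) * m2.get(c, 0.0)
                 + m1.get(c, 0.0) * theta2
                 + theta1 * m2.get(c, 0.0)) / k
             for c in classes}
    fused["Theta"] = theta1 * theta2 / k
    return fused

# Hypothetical pixel: map 1 supports "forest" with 0.6, map 2 with 0.7;
# the remainder is ignorance derived from each random forest's accuracy.
m = combine({"forest": 0.6, "Theta": 0.4},
            {"forest": 0.7, "Theta": 0.3},
            classes=["forest", "water"])
```

Applied per pixel (e.g. over raster layers of class probabilities), the class with the highest fused mass gives the final map.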
I'm trying to find some articles or book chapters to learn about data fusion in structural health monitoring. Can anyone help me with that please?
My PhD topic is closely related: I am using lidar and SAR data fusion to map AGB within a forest reserve.
Hear from you soonest,
John Ogbodo
The IEEE GRSS data fusion challenge has a dataset with both HSI and LiDAR data. Any idea how to get them?
Is data fusion one stage of data integration? Is data fusion a reduction or a replacement technique?
Please do let me know, thanks
I am looking to use Data fusion at the decision level. I came across the Dempster-Shafer theory of evidence. Can anyone point me to a good tutorial where I can understand this theory and apply it for my work? Please note, I need to know everything about it from scratch. Please do let me know. Thanks
I mean, we can apply DDPG to an RL model with a continuous action space, and standard Q-learning to one with a discrete action space. For a specific instance: if there are three dimensions in the action space, two of them continuous and one discrete, how do we deal with it?
Hi, I am working on a project in which I am trying to develop Occupancy grid based on inverse sensor modelling. I am only using it for providing the occupancy of static objects.
I have some really basic questions, as I am struggling to form a clear picture of the concepts. I have read material online, attended a Coursera course on grid modelling, and studied some publications, but I am looking for a simple explanation that I could give to non-technical people.
- What is inverse sensor modelling?
- How do you assign probabilities to grid cells using Bayes' theorem? A mathematical explanation with calculations, or in simple words: how do you build a map through an inverse sensor model using a Bayesian technique? (I am using data from three sensors (radar, lidar, camera), so I am also doing sensor data fusion to obtain approximately valid detections.)
- How do you filter out dynamic detections? I am only interested in detections of static objects.
- If possible can someone also refer me some valuable and concrete research publications to get more details.
I will be really thankful for your time and explanations in advance.
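In simple words: the inverse sensor model answers "given this measurement, what does it say about each cell?", and Bayes' theorem in log-odds form turns repeated answers into simple addition. A hedged 1-D sketch (assumed occupancy probabilities 0.7/0.3; a real radar/lidar/camera setup would fold each sensor's scan into the same grid with its own model):

```python
import math

# Assumed inverse sensor model: a traversed cell is free with p = 0.7,
# the cell at the measured range is occupied with p = 0.7.
L_OCC, L_FREE = math.log(0.7 / 0.3), math.log(0.3 / 0.7)

def update(logodds, hit_cell):
    """Fold one 1-D beam into the grid by log-odds addition (Bayes rule)."""
    for i in range(len(logodds)):
        if i < hit_cell:
            logodds[i] += L_FREE          # beam passed through: free evidence
        elif i == hit_cell:
            logodds[i] += L_OCC           # beam ended here: occupied evidence
    return logodds                        # cells beyond the hit stay unknown

def prob(l):
    return 1.0 - 1.0 / (1.0 + math.exp(l))   # log-odds -> probability

grid = [0.0] * 10                         # log-odds 0 == prior p = 0.5
for _ in range(3):                        # three consistent scans accumulate
    update(grid, hit_cell=6)
```

Dynamic objects leave inconsistent evidence over time (a cell flips between free and occupied), so one common filter is to keep only cells whose occupancy stays high across many scans.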
I want to know about novel data fusion architectures or Models
Some of the literature I’ve read describes the Kalman Filter as an excellent sensor fusion or data fusion algorithm. Because of this, I wonder (1) if it’s possible to implement a Kalman Filter with a single sensor’s data--every implementation I've read uses more than one sensor--and (2) if possible, is Kalman Filtering truly an optimal algorithm for this system?
Specifically, my sensor measures just position, p, at designated time intervals. I then use the distance traveled divided by the time interval to calculate velocity, v, at the next time step. (Velocity at first timestep is assumed to be zero.) The state, x, is defined in terms of p and v.
However, given that I merely manipulated the one data set, I am not convinced this counts as "data fusion" when all data derive from the same source. Nor am I sure the KF could do much more than suppress measurement noise in this case.
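For reference, a single-sensor Kalman filter is entirely standard: the "fusion" is then between the motion model and the measurements, not between sensors, and velocity is inferred through the model rather than finite-differenced by hand. A hedged sketch with simulated position data:

```python
import numpy as np

dt, q, r = 1.0, 1e-3, 0.5 ** 2
F = np.array([[1.0, dt], [0.0, 1.0]])          # constant-velocity model
Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],
                  [dt ** 2 / 2, dt]])          # white-noise-acceleration Q
H = np.array([[1.0, 0.0]])                     # only position is measured

x, P = np.zeros(2), np.eye(2) * 10.0
rng = np.random.default_rng(0)
for k in range(200):
    x, P = F @ x, F @ P @ F.T + Q              # predict
    z = 1.0 * dt * (k + 1) + rng.normal(0.0, 0.5)   # truth: v = 1 m/s
    S = (H @ P @ H.T + r).item()               # innovation variance
    K = P @ H.T / S                            # gain, shape (2, 1)
    x = x + (K * (z - (H @ x).item())).ravel() # update
    P = P - K @ H @ P
```

Whether this is "optimal" depends on the model matching reality; with a single sensor the KF mainly smooths noise and supplies a velocity estimate, exactly as you suspect.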
Please introduce me any algorithm for fusing ASTER and MODIS data with minimum spectral data lost and enhanced spatial resolution. Glad to hear about your experience and helpful papers.
The double integration of acceleration gives the position of an object. However, how should the position of an object fitted with an accelerometer and a gyroscope be determined using the data from both sensors?
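The usual strapdown scheme: integrate the gyro rate once to get orientation, rotate the body-frame acceleration into the navigation frame with that orientation, then integrate twice for position. A hedged 2-D planar sketch with hypothetical constant readings (real IMUs also need gravity removal and bias handling, and drift grows quadratically unless aided by GPS or similar):

```python
import math

dt = 0.01
theta, vx, vy, px, py = 0.0, 0.0, 0.0, 0.0, 0.0
for _ in range(100):                      # 1 s of hypothetical data
    omega = 0.0                           # gyro: no rotation
    ax_b, ay_b = 1.0, 0.0                 # accel: 1 m/s^2 along body x

    theta += omega * dt                   # gyro -> heading
    # Rotate body-frame acceleration into the navigation frame.
    ax = ax_b * math.cos(theta) - ay_b * math.sin(theta)
    ay = ax_b * math.sin(theta) + ay_b * math.cos(theta)
    vx += ax * dt; vy += ay * dt          # first integration: velocity
    px += vx * dt; py += vy * dt          # second integration: position
```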
I am trying to integrate a GPS and IMU via a loosely coupled integration. My state is the position and velocity vector. I am using the following equations for the nav frame mechanization.
I am trying to use the kalman filter and for this I need:
the state transition matrix φ
the system noise covariance matrix Q.
I assume the measurement matrix H is the identity matrix because the outputs of the IMU and GPS are in the same frame.
The measurement noise matrix R.
How do I initialize the initial covariance matrix?
I am assuming that I have perfect sensors, although I would like to know how not having perfect sensors would change these matrices.
My questions are the following:
For the state transition matrix: is it derived from the IMU mechanization above? If so, how do you deal with the fact that the future latitude is a function of the future altitude?
Would the system noise covariance be 0, since I am assuming a perfect model?
Is the H matrix the identity?
How is the initial covariance matrix usually initialized?
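For the simplest per-axis position/velocity case (a hedged sketch, not the full nav-frame mechanization, and all numbers are assumed), the pieces look like this. Note on the Q question: with Q exactly zero the Kalman gain decays to zero and the filter eventually ignores the GPS, so even under a "perfect model" assumption a small Q is normally kept; P0 is usually set from the initial position/velocity uncertainty:

```python
import numpy as np

dt, q = 0.01, 1e-2                  # step and assumed accel noise density
phi = np.array([[1.0, dt],          # linearized transition for [pos, vel]
                [0.0, 1.0]])
Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2],   # white-noise-acceleration Q
                  [dt ** 2 / 2, dt]])
P0 = np.diag([5.0 ** 2, 0.5 ** 2])  # (5 m)^2, (0.5 m/s)^2 initial uncertainty
H = np.eye(2)                       # GPS observes position and velocity directly
R = np.diag([3.0 ** 2, 0.1 ** 2])   # assumed GPS noise
```

For the latitude/altitude coupling, the standard treatment is to linearize the mechanization about the current estimate, so the transition matrix uses the current (not future) altitude; the resulting error is absorbed by Q.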
Detailed Description :
I am working on a steering wheel angle sensor that measures the absolute angle of the steering wheel. Steering angle sensors use gears and several joints, which are purely mechanical, so despite an initial calibration, wear of the mechanical parts plus environmental and road conditions introduce errors into the sensor values over time (e.g., offset, phase change, flattening of the signal, delay).
In short, these measurement errors distract from our aim: if the velocity-vs-time curve of the original, calibrated (close to ideal) sensor shows a peak in amplitude, then because of error (hysteresis) in the measured signal I either do not get that peak or the curve flattens, which affects my final task.
I have a tolerance of, say, 1.20 degrees for hysteresis, so I want a detailed picture of my signal and to observe whether changes such as offset, delay, or lowering have occurred in it. This will tell me whether to reduce the number of sensors used for my task, change the sensor hardware to lessen the hysteresis, or take some other action to reduce it.
Here is what I have done so far, although I am not sure whether it is right or wrong. I am getting some values for hysteresis, but I have a few questions about the techniques. If someone could suggest how to improve these techniques, or a better approach, I would be grateful for the guidance.
I have an ideal sensor signal (under the ideal conditions we want) and values from one sensor over 6 different drives of the car. I will explain just one example: my first drive and its relation to the reference sensor data.
The reference signal and the sensor signal are each 1 x 1626100 doubles for one reading from the sensor; in all readings, the values of the ideal and measured signals are aligned in time.
In short, I want to find the hysteresis difference between the measured sensor signal and the reference signal.
I have applied few data estimation techniques to accomplish my goals.
1- I applied a Gaussian technique to estimate the error in my data, but I am not satisfied with it: I am not getting good or expected values, perhaps because of outliers or other errors.
I subtracted (reference - measured signal), calculated the mean and standard deviation of the difference signal after applying my limits, then drew a Gaussian curve with that mean and standard deviation. I drew two lines, one at mean + standard deviation and one at mean - standard deviation; the distance between them is called the hysteresis (loss).
Please have a look at the attached figure. I have attached a figure with a 3-modal Gaussian curve, but in some diagrams, as in picture 3, my data is shifted. Can anyone tell me why this is and how to remove it? It occurs in all diagrams, but in figure 3 it is clearest.
2- In this method I applied a regression-line technique (on the upper and lower values of the difference signal).
I took the difference of my signals (reference - measured signal, after applying my limits to the signal).
I fitted regression lines above and below the difference signal, i.e., on the upper and the lower values separately; the distance between the upper and lower regression lines is called the hysteresis (loss). Please see figures 3 and 4 for a clear view.
The problem with this technique is that I define the values for the upper and lower regression lines myself, after looking at the data, e.g., up = 0.08, low = -0.08.
3- I also applied the RMSE technique, but I have a few questions that confuse me.
As I am dealing with static data, I treated my reference signal as the actual values and my measured signal from the sensor as the measured values, and applied the RMSE formula:
square_error = (sig_diff_lim).^2;
mse = mean(square_error)
msedivided = mse/numel(drv(2))
rmse = sqrt(mse)
4- I also applied the correlation function, but I do not think it works well with my data. It did give me some good insight into the data, though.
Some Questions that also need clarification if possible:
1- What is the difference between RMSE and MSE? I know the basics, but I want to know in which applications RMSE or MSE is used, and which will work in my case for my data.
2- I have now described the Gaussian, regression, and RMSE techniques. One request: can someone explain which technique is best to use, given the issues I mentioned above with the Gaussian and regression techniques? A short explanation of the pros and cons of each would be enough for me.
I hope I have given a clear and detailed picture of what I want to do and what I expect, and I hope to get some positive and valuable responses.
Sorry for the long story; I am not an expert and am still learning, so any useful responses will improve my knowledge and help me accomplish my task/goal.
Thanks for cooperation in advance.
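On question 1, the short answer: MSE is in squared units of the signal, RMSE is its square root and therefore in the signal's own units (degrees here), so RMSE is the quantity directly comparable to a 1.20-degree tolerance. A hedged sketch with a simulated difference signal (your real `sig_diff_lim` would replace it), also showing the mean ± std band from the Gaussian technique:

```python
import numpy as np

# Simulated difference signal: 0.02-degree offset plus 0.05-degree noise.
rng = np.random.default_rng(0)
diff = 0.02 + 0.05 * rng.normal(size=10000)   # stands in for ref - measured

mse = np.mean(diff ** 2)       # squared degrees: not comparable to tolerance
rmse = np.sqrt(mse)            # degrees: compare this to the 1.20 tolerance
band = 2.0 * np.std(diff)      # (mean + std) - (mean - std), the Gaussian width
```

Note that RMSE mixes the constant offset and the random spread into one number, while the ± std band measures only the spread; if your "hysteresis" means the width of the loop, the band (or a percentile-based width, which is more robust to outliers) is closer to what you want.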
I am using StarFM for data fusion between Landsat 8 and MODIS. I resampled MODIS from 500 m to 30 m resolution using the MRT tool and cropped both the Landsat and MODIS images using a shapefile. When I overlay the two images, the resampled MODIS image covers a larger extent than the Landsat 8 image.
I need to make the two images the same size and the same resolution.
If yes, how do you treat the weights for the two different variables? Do you use one weight vector for position and another for velocity?
Is it possible to carry out Joint SP-RP model estimation in RPL Framework using Sequential estimation Technique? Any references for the same? To the best of my understanding, It is possible when we use data pooling technique for joint estimation. Need suggestions and clarifications in this regard.
I intend to calculate the Doppler centroid anomaly of Sentinel-1 SLC images to further use the Doppler centroid anomaly for the estimation of the line-of-sight motion. There are three sub-images in one SLC image with 27 Doppler coefficients of 9 groups. I have no idea how to calculate the Doppler centroid anomaly using the 27 Doppler coefficients. Thank you!
For the detection and segmentation, I know several techniques:
(1) Direct thresholding
(2) Finite Moving Average Filter
(3) Savitzky-Golay Filter
(4) Constant False Alarm Rate
(5) Wavelet Transform
For the deinterleaving task, I know
(1) Cumulative Difference Histogram (CDIFF)
(2) Sequence Difference Histogram (SDIFF)
(3) Sequence Searching
Are there any techniques better than those I have mentioned?
Many thanks!
Hello!
I am working on a multi-platform, multi-target tracking problem. I am using interacting multiple models (IMM) [CV + CT + Singer]. My problem is asynchronous (different plot times for each platform). Mono-platform processing is fine and I am satisfied with the results, but when I try to fuse my measurements, the quality is poor.
I want to do something like this: extrapolate all local (mono-platform) tracks to a common time and fuse them there. Calculating the cross-covariances is quite involved and resource-consuming in this case, so I chose the method described in Zhen Ding, Lang Hong, "A distributed IMM fusion algorithm for multi-platform tracking" (1998). It suggests using a "global" filter and fusing the local tracks using both the local and "global" filter results. (It performs the same procedure as Bar-Shalom, Multitarget-Multisensor Tracking: Principles and Techniques, p. 464 ("Optimal fusion using central estimator"), but for an IMM filter rather than a KF.)
My problem is: I want to extend this article's technique to the asynchronous case. I do not understand how to get predicted and filtered values for the local tracks (because I only have the predicted value for bringing all local tracks to the same time). Any suggestions?
The second problem is less complicated: I just do not know what initial value to set on the "global" filter.
The name in the quote is not a specific article title.
I am searching for such an article, but no answer till now!
Hi everyone!
My challenge is to segment the LV short axis without any kind of manual segmentation: to localize and segment the LV without training data sets or even an initial manual segmentation done by experts. In the literature I found the Hough transform as a possible, but not good, solution. I thought of applying a deformable model (snakes) for the segmentation after the Hough step. The main constraint is that I cannot use a method that requires a lot of prior knowledge. Any ideas to help me? Thanks...
I use an interacting-multiple-model KF for tracking planes. The main targets are passenger planes, but there are some cases with highly maneuvering planes. I tried CV, CA, and Singer (a ~ 1/30) models. This did not satisfy me at all. Maybe I used the wrong Gamma matrices and wrong initial probabilities, I don't know; I think I tried a lot of options.
Any recommendations for model combinations? Concrete articles would be great. Thank you!
Hello!
I am working on plane tracking. The simplest measurement selection (for subsequent Kalman filtering) is: http://take.ms/4k8Fh . However, I use an IMM filter. The question is very simple: how do I modify this expression to select measurements? I have 3 models (CV, CA, Singer) and probabilities (weights) for each model. I thought about computing the moments of the Gaussian mixture here (using the model probabilities) and then finding some criterion that guarantees, with a preassigned probability, that a point belongs to the track. But I am not sure this is the right method.
How should I proceed?
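The Gaussian-mixture idea you describe is a common one: moment-match the models' predicted measurements into a single Gaussian and gate with a Mahalanobis (chi-square) test. A hedged sketch with hypothetical per-model predictions and innovation covariances (9.21 is the ~99% chi-square threshold for a 2-D measurement):

```python
import numpy as np

mu = np.array([0.4, 0.35, 0.25])                  # IMM model probabilities
z_pred = np.array([[10.0, 5.0],                   # predicted measurement
                   [10.5, 5.2],                   # per model (hypothetical)
                   [11.0, 5.5]])
S = np.stack([np.eye(2) * s for s in (1.0, 1.5, 2.0)])   # innovation covs

z_bar = mu @ z_pred                               # mixture mean
S_bar = np.zeros((2, 2))                          # mixture covariance
for j in range(3):                                # (moment matching: the
    d = z_pred[j] - z_bar                         #  spread-of-means term
    S_bar += mu[j] * (S[j] + np.outer(d, d))      #  inflates the gate)

def in_gate(z, gate=9.21):                        # ~99% for 2-D measurements
    d = z - z_bar
    return float(d @ np.linalg.solve(S_bar, d)) <= gate
```

The spread-of-means term widens the gate exactly when the models disagree, which is the behavior you want during maneuvers.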
While coarse-resolution SM products like SMOS and SMAP describe temporal soil moisture dynamics well, they lack spatial detail, so we cannot use them to analyze local hydrological patterns. In contrast, the SM product from the Sentinel-1 (A/B) sensor can resolve dynamics at this spatial level, but its temporal resolution is less frequent and thus not suitable for capturing short-term variations. To overcome these spatial and temporal scale gaps, we can use a data fusion approach that fuses multi-sensor data. Is there a method that could help me do this?
I want to know if there is any free simulation tool, benchmark, or real data set available to evaluate algorithms in distributed data fusion / track-to-track fusion, e.g., covariance intersection, covariance union, Bar-Shalom-Campo, the Kalman filter, etc. In research on these methods, performance is usually evaluated on a simple vehicle-tracking example simulated in Matlab, but I am interested in a real data set, or at least an elaborate simulation tool that is easy and flexible to use, for evaluating these algorithms.
Thanks in advance.
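For anyone wanting a baseline before reaching for a dataset, covariance intersection itself fits in a few lines. A hedged sketch that fuses two estimates with unknown cross-correlation, choosing the weight omega by a coarse scan minimizing the fused trace (a common, simple criterion; the inputs are hypothetical):

```python
import numpy as np

def ci_fuse(x1, P1, x2, P2):
    """Covariance intersection: consistent fusion under unknown correlation."""
    best = None
    for w in np.linspace(0.01, 0.99, 99):         # coarse scan over omega
        Pinv = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(Pinv)
        if best is None or np.trace(P) < best[0]:
            x = P @ (w * np.linalg.inv(P1) @ x1
                     + (1 - w) * np.linalg.inv(P2) @ x2)
            best = (np.trace(P), x, P)
    return best[1], best[2]

# Hypothetical track estimates from two platforms.
x, P = ci_fuse(np.array([1.0, 0.0]), np.diag([1.0, 4.0]),
               np.array([1.2, 0.1]), np.diag([4.0, 1.0]))
```

Unlike the naive Kalman combination, CI never claims more confidence than either input justifies, which is why it is the usual safe default in track-to-track fusion.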
Has anybody done sensor data fusion before? I need some valuable insights and help.
Learning resources are plentiful, but I want to go from the basics of data fusion to mastery.
Recently I have been working on a link scheduling algorithm in a wireless sensor network; the topology example is shown in the figure. It is a convergecast communication pattern: nodes below the root forward their data up to the sink.
The queue size (cached data size) of the aggregators is somewhat large. For an aggregator (a non-leaf node in the tree topology), the data collected from its children is similar or redundant, and we may need to repackage data to transfer it along the way to the root (sink), so data compression or entropy reduction is possible.
Now I need a data compression method at the aggregator to reduce the transferred data size while ensuring acceptable communication quality. Searches for terms like 'data fusion' and 'data aggregation' in wireless sensor networks always lead to routing topics, which is not what I expect; maybe I am using the wrong terms.
Could anybody give me a hint or advice?
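The terms you may be looking for are "in-network (lossless/lossy) compression" or "temporal/spatial correlation coding". As a hedged sketch of the simplest lossless option at the aggregator: delta-encode the similar child readings and deflate the result; the sink inverts both steps exactly. The readings below are hypothetical 16-bit samples; real deployments often accept lossy schemes for bigger savings:

```python
import struct
import zlib

# Hypothetical similar readings collected from children (16-bit range).
readings = [2051, 2049, 2052, 2050, 2048, 2051, 2049, 2050] * 8

# Aggregator side: delta-encode (first value kept absolute), pack, deflate.
deltas = [readings[0]] + [b - a for a, b in zip(readings, readings[1:])]
packed = struct.pack(f"<{len(deltas)}h", *deltas)      # 16-bit signed
compressed = zlib.compress(packed, level=9)

# Sink side: inflate, unpack, undo the delta coding.
unpacked = list(struct.unpack(f"<{len(deltas)}h", zlib.decompress(compressed)))
restored = [unpacked[0]]
for d in unpacked[1:]:
    restored.append(restored[-1] + d)
```

Delta coding shrinks the symbol alphabet (correlated readings give small deltas), which is what lets the entropy coder do its work; that is the "entropy reduction" you mention.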
Hello!
In this question I mean that in my particular case, data arrives not in batches but as a sequence of measurements. My measurements are obtained in real time, so I have to act on every point as it arrives. It is hard for me to apply IMM/PDA correctly because:
1) It is a multi-platform case.
2) There is clutter. PDA can deal with it easily by taking a batch of measurements over a time T and calculating the association probabilities, but if I have received only one point I cannot perform PDA well (for example, this point could be a clutter point).
What is the way to deal with this? Summarising: I understand IMM/PDA reasonably well, but I cannot apply it to multi-platform sequential data. How can I deal with it?
I am looking for someone, or some work (paper), that explains sensor data fusion (using a Kalman filter or another algorithm) that fuses more than one orientation stream (e.g., arm orientation in Euler angles, or quaternions extracted from two or more sensors).
I will be very grateful by any help!
Hi guys,
I would like to know whether it is possible for the EKF update phase to make the estimate poorer than it was at the prediction phase. I noticed in my simulator that there are time instances where the update phase estimated the actual state vector worse than the prediction phase did.
I execute the update phase every two time steps, and the measurement covariance matrix is a bit higher than the noise in the prediction equation.
Could someone confirm that this can happen and that it is not an EKF implementation mistake?
I made the system as linear as possible in order to reduce the EKF's linearization error, but the same still happened.
Hi guys,
I am working on the localization problem for an underwater vehicle. The problem is still very simple, though, and it does not matter that it is underwater.
How can I use the EKF when I have multiple measurements but with different sampling? Not all the measurements are available at the same time to perform the update step.
Can I perform the update for the measurements that I have at each time step? For example, I have a pressure sensor and a speed sensor with frequency 1Hz. But I have also another sensor (USBL) system with 0.01 Hz.
What if I perform update step for pressure and speed sensor at each time step adjusting the measurement model to that case (like I have only these sensors) and once I receive the USBL measurement I augment the measurement vector including the USBL measurement?
Any help?
Thanks in advance
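Yes: the standard multi-rate treatment is exactly what you describe, i.e. apply, at each step, only the update whose measurement actually arrived, each with its own H and R; stacking USBL into an augmented measurement when it arrives is equivalent. A hedged sketch with a linear filter for brevity (your EKF replaces H with its Jacobian; states, rates, and noise values below are toy assumptions):

```python
import numpy as np

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

x = np.zeros(3)                        # toy state: [depth, speed, range]
P = np.eye(3) * 10.0
H_fast = np.array([[1., 0., 0.],
                   [0., 1., 0.]])      # pressure + speed, every step (1 Hz)
R_fast = np.diag([0.1, 0.05])
H_usbl = np.array([[0., 0., 1.]])      # USBL, only every 100th step
R_usbl = np.array([[2.0]])

for k in range(200):
    # (prediction with F, Q would go here; omitted for brevity)
    x, P = kf_update(x, P, np.array([5.0, 1.0]), H_fast, R_fast)
    if k % 100 == 0:                   # USBL fix available this step
        x, P = kf_update(x, P, np.array([50.0]), H_usbl, R_usbl)
```

Between USBL fixes the USBL-related covariance simply keeps growing through prediction, which is the correct behavior, not a workaround.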
The aim of data fusion (DF) is basically to increase the quality of data through the combination and integration of multiple data sources/sensors. My research assesses this impact of DF on data quality (DQ), so I would appreciate academic materials to back up your conclusions.
I have been trying, to no avail, to link DF methods to the DQ dimensions they impact most.
I need one or more multivariate datasets (to make a comparison study) where samples are characterised by different chemical analytical sources. It would be better if samples are associated to a qualitative response (class) too.
Can we use slope parameter to extract the DTM using LiDAR data in FUSION Software?
I am working on sensor data fusion, specifically using the Kalman filter algorithm to fuse data from two sensors, and I want to give more weight to one sensor than to the other. The reason is that there is a transition from a phase where the algorithm uses data from only one sensor to a phase where both sensors are used. At this transition I need to smooth the changeover by, for a short time, giving more weight to the first sensor and then normalizing to a 50/50 reliability split between the two sensors.
I have used SVM and object-based image analysis for this task; the study area is dense urban terrain with various land cover features, but the results are not up to the mark. I want to extract and reconstruct (in 3D) the urban land cover features (buildings, roads, bridges, cars, trees, etc.) with textures. I am waiting for expert comments.
As far as I know, a filter algorithm can be separated into two parts: prediction and update. Optimally estimating the state relies on balancing the weights of prediction and update; in other words, do we trust the predicted state or the observations more? This matters most for dynamic systems with unknown model or measurement errors. Could anybody advise on which kinds of filters (adaptive or robust) are more practical for systems with unknown model or measurement errors? Is the H2/H-infinity filter the only choice? Thank you very much!
I have some trouble in SAR and optic data registration and finding proper ground truth data.
Can ACCs work correctly and precisely by measuring and processing data? What kind of data does an ACC use, and how does it obtain it (with what kinds of sensors)? Does it use data fusion methods? Please link me to some sources so I can continue my research.
Thank You
*Moodi*
I am investigating orbit determination for planetary exploration, and would like to try different estimation methods such as batch-based estimation and the EKF. Up till now, I have found that the accuracy of the batch-based method is higher than that of the sequential filter. I would like to know why, because the principle of the sequential filter also indicates that all previous measurements contribute to the current estimate.
Since no extra information is added, why do the two methods' accuracies differ? Could anyone explain?
Instead of doing a simulation, I need a dataset with hundreds or thousands of records distributed over multiple data sources. I need it for data integration purposes where the same entity may have multiple records in different data sources.
When we have features of two modalities extracted with the same feature-extraction method, do we need to normalize the scores?
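Usually yes, as a hedged rule of thumb: even with the same extractor, the two modalities' score distributions can differ in location and spread, so z-score normalization before score-level fusion is a common safe default. A minimal sketch with hypothetical scores from two modalities:

```python
import statistics

def zscore(scores):
    """Shift to zero mean and unit (sample) standard deviation."""
    m, s = statistics.mean(scores), statistics.stdev(scores)
    return [(v - m) / s for v in scores]

face = [0.80, 0.85, 0.60, 0.90]        # hypothetical modality-1 scores
gait = [120.0, 150.0, 90.0, 160.0]     # hypothetical modality-2 scores

# After normalization the two score scales are comparable, so a simple
# mean rule (sum-rule fusion) is meaningful.
fused = [(a + b) / 2 for a, b in zip(zscore(face), zscore(gait))]
```

If the raw scores are already on the same bounded scale (e.g. both are calibrated probabilities), normalization may be unnecessary; checking the two empirical distributions first settles it.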
I guess time-series analysis can be also studied under the scope of statistical signal processing. Is this correct? Maybe someone could give me a hand in selecting introductory, intermediate and advanced text books for multivariate time series analysis. Thanks!!
There are several approaches to data fusion: the Kalman filter, Bayesian methods, and behavior-based fusion. For a small robot, which is best?