
Data Fusion - Science topic

Explore the latest questions and answers in Data Fusion, and find Data Fusion experts.
Questions related to Data Fusion
  • asked a question related to Data Fusion
Question
2 answers
I am investigating the relationship between hysteresis loop area (stress-strain response) and electrical resistance evolution in multifunctional composites with embedded copper inserts under fatigue. The goal is to understand how mechanical energy dissipation relates to electrical behavior over cyclic loading, particularly under different applied voltages.
🔹 What are the best methods for correlating mechanical and electrical data in fatigue experiments?
🔹 Have you used data fusion techniques for multi-physics problems like this?
🔹 Any recommendations on tools, algorithms, or key references for analyzing such coupled effects?
I’d love to hear insights from researchers working on electrical-mechanical interactions in materials science. Any suggestions, references, or example code would be greatly appreciated!
Relevant answer
Answer
The fatigue tests were conducted under tension-tension loading with R = 0.1, and electrical resistance was measured in real time during each test. We recorded force, displacement, and electrical resistance at predefined cycle intervals. The goal is to understand how mechanical energy dissipation (hysteresis loop area) correlates with electrical resistance evolution over the fatigue life, particularly under different applied voltages.
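For the correlation step, a minimal NumPy sketch is given below, assuming per-cycle arrays of stress/strain samples and per-cycle resistance readings; the function names are hypothetical, not from any specific toolkit:

```python
import numpy as np

def hysteresis_loop_area(strain, stress):
    """Area enclosed by one stress-strain cycle (shoelace formula).

    `strain` and `stress` are 1-D arrays sampled over one closed cycle;
    the absolute value is the mechanical energy dissipated per cycle.
    """
    x, y = np.asarray(strain, float), np.asarray(stress, float)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def area_resistance_correlation(areas, resistances):
    """Pearson correlation between per-cycle loop areas and resistance."""
    return float(np.corrcoef(areas, resistances)[0, 1])
```

One would evaluate `hysteresis_loop_area` at each predefined cycle interval and correlate the resulting series with the resistance recorded at the same cycles; rank correlation or lagged cross-correlation may suit nonlinear or delayed coupling better.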
  • asked a question related to Data Fusion
Question
1 answer
Exciting Opportunity: Join us as a Postdoctoral Fellow in Drone and Satellite Data Fusion for Sustainable and Smart Cities!
We are seeking a passionate Postdoctoral Fellow to join the Interdisciplinary Research Center for Aviation & Space Exploration (IRC–ASE) at King Fahd University of Petroleum and Minerals (KFUPM). As part of our innovative team, you'll have the unique opportunity to work directly under the mentorship of Dr. Bilal, leading cutting-edge research on Drone System data collection, processing, and fusion with Satellite Data to drive innovations in Remote Sensing Applications for Sustainable and Smart Cities.
Key Responsibilities:
• Develop and implement advanced Drone data collection techniques
• Process and fuse Drone and Satellite data for accurate Remote Sensing analysis
• Apply solutions to address challenges in Sustainable and Smart Cities development, including environmental monitoring, urban planning, and more
What We Offer:
• Access to cutting-edge facilities and resources
• A dynamic, interdisciplinary research environment
• The opportunity to contribute to shaping the future of remote sensing and drone technology in the context of Sustainable and Smart Cities.
Who We're Looking For:
• PhD holders with expertise in remote sensing, drones, satellite data, or related fields
• Experience with Drone data processing and analysis
• Strong problem-solving skills and a passion for interdisciplinary research focused on Sustainable and Smart urban solutions.
Interested?
Please send your CV and a proposal related to Drone data applications in Remote Sensing for Sustainable and Smart Cities directly to Dr. Bilal at muhammad.bilal@kfupm.edu.sa.
Don't miss out on this unique opportunity to make a lasting impact on the future of Sustainable and Smart City development through cutting-edge Remote Sensing technology!
#PostDoc #SustainableCities #SmartCities #DroneTechnology #DataFusion #ResearchOpportunity #IRC_ASE #AviationSpaceExploration #KFUPM
Relevant answer
Answer
I sent an email to Professor M. Bilal; to date I have not received a reply. The draft of the student program is of interest, and I would like to organize something similar at our faculty, ECPD in Belgrade, and to establish cooperation with King Fahd University.
Regards,
PhD Tomislav Đorđević, arch.
  • asked a question related to Data Fusion
Question
2 answers
I am gathering data on this question for a case study and research paper.
Relevant answer
Answer
Dear Anatol Badach,
Thank you for sharing these valuable resources on Digital Twins (DTs) in construction. I'll explore them to deepen my understanding and integrate their insights into my research on IoT and Big Data Fusion in Construction MIS. DTs hold great promise for revolutionizing construction practices, enhancing efficiency, safety, and sustainability.
Best regards,
Mithun Kokare.
  • asked a question related to Data Fusion
Question
3 answers
In a multi-static radar system with multiple transmitters and receivers, is it necessary for the data fusion process that data originating from the same transmitter be fused together?
Relevant answer
Answer
In the context of multi-static radar systems, data fusion is a critical process that involves integrating information from multiple sensors to enhance target detection, tracking, and environmental perception. The necessity of using data from the same transmitter for fusion in such systems is not explicitly required but can depend on the specific application and system design. The integration of data from different transmitters can be beneficial in certain scenarios, as highlighted by various research findings:
1. Cooperative Fusion for Enhanced Detection: A study on passive multistatic radar systems highlighted the use of cooperative fusion techniques, including both hard and soft fusion, to improve target detection. These techniques do not necessarily require data from the same transmitter but leverage spatial diversity across multiple receivers. The study proposed novel fusion techniques that showed significant detection performance improvements without the need for knowledge of the transmitted signal or channel information [1].
2. Benefit Analysis of Data Fusion: Another research conducted a benefit analysis of data fusion for target tracking in multiple radar systems (MRS). It suggests that data fusion, whether from the same or different transmitters, can enhance tracking performance depending on factors like signal-to-noise ratio, deployment, and resolution of each radar. This implies that the necessity of fusing data from the same transmitter is not a strict requirement but should be considered based on the potential performance enhancement [2].
3. Linear Fusion Framework: A linear fusion framework for target detection in passive multistatic radar systems was proposed to improve detection performance. This framework involves a weighted combination of local test statistics from spatially separated receivers, suggesting that integrating data from different transmitters can be advantageous [3].
4. Multifrequency GPR Data Fusion: In the context of ground-penetrating radar (GPR), a novel method was developed for the fusion of data from antennas operating at different frequency ranges. This approach, focusing on enhancing subsurface imaging, illustrates the benefit of combining data from different sources, analogous to fusing data from different transmitters in a radar context [4].
5. Multi-Target Tracking in Passive Systems: A study on multi-target tracking in passive multi-static radar systems using Doppler-only measurements discusses the advantages of fusing measurements from spatially distributed sensors. It highlights the fusion of data from multiple bistatic links, again indicating that the fusion process can benefit from integrating information from different transmitters [5].
6. Hybrid Radar Fusion: [6] introduces "hybrid radar fusion" within an integrated sensing and communication scenario. It involves a dual-functional radar and communications base station performing as a mono-static radar for sensing in the downlink, while also handling communication tasks. Communication users act as bi-static radar nodes in the uplink. The study focuses on fusing information from different resource bands to estimate angles-of-arrival for multiple targets, proposing efficient algorithms for this purpose and demonstrating their performance through simulations.
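As an illustration of the linear fusion idea in [3], a weighted combination of local test statistics can be sketched as follows; the SNR-proportional weighting used here is an assumption for illustration, not the paper's exact derivation:

```python
import numpy as np

def linear_fusion_statistic(local_stats, snrs):
    """Weighted combination of local test statistics from separate receivers.

    Weights are taken proportional to each receiver's SNR and normalised
    to sum to one (an illustrative choice; the optimal weights in the
    literature are derived from the local statistics' distributions).
    """
    w = np.asarray(snrs, float)
    w = w / w.sum()
    return float(np.dot(w, local_stats))

def detect(local_stats, snrs, threshold):
    """Declare a target when the fused statistic exceeds the threshold."""
    return linear_fusion_statistic(local_stats, snrs) > threshold
```

Note that the local statistics may come from bistatic pairs involving different transmitters; the fusion rule itself is agnostic to which transmitter produced each statistic.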
References
[1] Asif, Asma, and Sithamparanathan Kandeepan. "Cooperative fusion based passive multistatic radar detection." Sensors 21.9 (2021): 3209.
[2] J. Yan, H. Liu, W. Pu, B. Jiu, Z. Liu and Z. Bao, "Benefit Analysis of Data Fusion for Target Tracking in Multiple Radar System," in IEEE Sensors Journal, vol. 16, no. 16, pp. 6359-6366, Aug. 15, 2016, doi: 10.1109/JSEN.2016.2581824.
[3] Zhao, Hong-Yan, et al. "Linear fusion for target detection in passive multistatic radar." Signal Processing 130 (2017): 175-182.
[4] A. De Coster and S. Lambot, "Fusion of Multifrequency GPR Data Freed From Antenna Effects," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, no. 2, pp. 664-674, Feb. 2018, doi: 10.1109/JSTARS.2018.2790419.
[5] S. Subedi, Y. D. Zhang, M. G. Amin and B. Himed, "Group Sparsity Based Multi-Target Tracking in Passive Multi-Static Radar Systems Using Doppler-Only Measurements," in IEEE Transactions on Signal Processing, vol. 64, no. 14, pp. 3619-3634, July 15, 2016, doi: 10.1109/TSP.2016.2552498.
[6] A. Chowdary, A. Bazzi and M. Chafii, "On Hybrid Radar Fusion for Integrated Sensing and Communication," in IEEE Transactions on Wireless Communications, doi: 10.1109/TWC.2024.3357573.
  • asked a question related to Data Fusion
Question
1 answer
2024 3rd International Conference on Automation, Electronic Science and Technology (AEST 2024) in Kunming, China on June 7-9, 2024.
---Call For Papers---
The topics of interest for submission include, but are not limited to:
(1) Electronic Science and Technology
· Signal Processing
· Image Processing
· Semiconductor Technology
· Integrated Circuits
· Physical Electronics
· Electronic Circuit
......
(2) Automation
· Linear System Control
· Control Integrated Circuits and Applications
· Parallel Control and Management of Complex Systems
· Automatic Control System
· Automation and Monitoring System
......
All accepted full papers will be published in the conference proceedings and will be submitted to EI Compendex / Scopus for indexing.
Important Dates:
Full Paper Submission Date: April 1, 2024
Registration Deadline: May 24, 2024
Final Paper Submission Date: May 31, 2024
Conference Dates: June 7-9, 2024
For More Details please visit:
Invitation code: AISCONF
*Using the invitation code on the submission/registration system grants priority review and feedback.
Relevant answer
Answer
Useful thing
  • asked a question related to Data Fusion
Question
2 answers
How to Reasonably Weight the Uncertainty of Laser Tracker and the Mean Square Error of Level to Obtain Accurate H(Z)-value?
Relevant answer
Answer
Received, thank you for your answer.
  • asked a question related to Data Fusion
Question
6 answers
Journal: Remote Sensing (MDPI)
Topic: Any topic
Article processing charge (APC) of 2500 CHF (Swiss francs): I will pay it
Scope:
  • Multi-spectral and hyperspectral remote sensing
  • Active and passive microwave remote sensing
  • Lidar and laser scanning
  • Geometric reconstruction
  • Physical modeling and signatures
  • Change detection
  • Image processing and pattern recognition
  • Data fusion and data assimilation
  • Dedicated satellite missions
  • Operational processing facilities
  • Spaceborne, airborne and terrestrial platforms
  • Remote sensing applications
Deadline: 30 December 2022
Process: Please contact me for more information.
Relevant answer
Answer
Dear Dr. Dejan,
Thank you very much for the information and for proposing research collaboration. If you wish to work with me, please send your abstract first. I am a member of the review board of Land, MDPI. I will check the possibilities for a discount or free publication. Thanks.
  • asked a question related to Data Fusion
Question
1 answer
How can I calculate the accuracy of data fusion between Landsat-8 and Sentinel-2?
Relevant answer
Answer
The integration of data and knowledge from several sources is known as data fusion. A survey of the field summarizes its state and describes the most relevant studies: it first enumerates and explains different classification schemes for data fusion, then reviews the most common algorithms, presented in three categories: (i) data association, (ii) state estimation, and (iii) decision fusion.
Regards,
Shafagat
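To make "accuracy of data fusion" concrete for the Landsat-8/Sentinel-2 case, one common check compares the fused product against reference data band by band using RMSE and correlation; a minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def band_rmse(reference, fused):
    """Root-mean-square error between a reference band and a fused band."""
    r = np.asarray(reference, float).ravel()
    f = np.asarray(fused, float).ravel()
    return float(np.sqrt(np.mean((r - f) ** 2)))

def band_correlation(reference, fused):
    """Pearson correlation coefficient between the two bands."""
    r = np.asarray(reference, float).ravel()
    f = np.asarray(fused, float).ravel()
    return float(np.corrcoef(r, f)[0, 1])
```

In practice the reference might be a held-out scene acquired close in time; spectral-angle and structural-similarity metrics are common complements to these two.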
  • asked a question related to Data Fusion
Question
3 answers
I want to know the difference between sensor data fusion and sensor data stitching. For example, suppose we have two homogeneous sensors from two autonomous driving systems (say, camera sensors), and I want to combine their data for better accuracy and precision, but I am unsure whether to use the term "stitching" or "fusion".
Which term is more suitable?
What are the key differences between these two terms in the autonomous driving domain?
Relevant answer
Answer
Sensor data fusion is used to take advantage of multiple sensors, generally based on different modalities, to accomplish a goal better than any individual sensor could. For example, SLAM algorithms usually rely on sensor data fusion from camera, IMU, and sometimes other inputs (lidar, ultrasound, etc.) in order to better localize a vehicle (or UAV, etc.) and model the environment around it.
The term sensor stitching is used more for extending the spatial range of a single modality with multiple sensors. The best example is using multiple photos with overlapping areas to create a larger panoramic image.
I hope these two examples help you better understand the distinction between these related concepts.
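To make the "better accuracy" side of fusion concrete: for two homogeneous sensors observing the same quantity, inverse-variance weighting is the classical fusion rule. A minimal sketch, assuming the measurement variances are known:

```python
def fuse_measurements(z1, var1, z2, var2):
    """Inverse-variance fusion of two overlapping measurements.

    The fused estimate has variance 1/(1/var1 + 1/var2), which is
    always smaller than either input variance -- the accuracy gain
    the question asks about.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var
```

Stitching, by contrast, would place the two sensors' non-overlapping fields of view side by side rather than averaging their overlap.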
  • asked a question related to Data Fusion
Question
4 answers
Are there any articles on identifying systems with noise on the input, e.g. using "Extended Least Squares", "Instrumental Variable" algorithms, or others?
Relevant answer
Answer
Application example relying on numerically generated Gaussian white noise: while investigating adaptive control and energetic optimization of aerobic fermenters, I applied the recursive least squares algorithm with forgetting factor (RLS-FF) to estimate the parameters of the KLa correlation, used to predict the O2 gas-liquid mass transfer, while giving increased weight to the most recent data. Estimates were improved by imposing sinusoidal disturbances on air flow and agitation speed (the manipulated variables). The proposed adaptive control algorithm compared favourably with PID. Simulations assessed the effect of numerically generated white Gaussian noise (2-sigma truncated) and of first-order delay. This investigation was reported in my MSc thesis:
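For readers unfamiliar with RLS-FF, a generic sketch of the algorithm follows (not the fermenter application above; `lam` is the forgetting factor and `delta` sets the initial covariance, both illustrative values):

```python
import numpy as np

def rls_ff(phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor `lam`.

    phi : (N, p) regressor matrix, y : (N,) observations.
    Returns the final parameter estimate theta; the forgetting factor
    down-weights old data geometrically, so recent samples dominate.
    """
    n, p = phi.shape
    theta = np.zeros(p)
    P = delta * np.eye(p)          # initial parameter covariance
    for k in range(n):
        x = phi[k]
        denom = lam + x @ P @ x
        K = P @ x / denom          # gain vector
        theta = theta + K * (y[k] - x @ theta)
        P = (P - np.outer(K, x @ P)) / lam
    return theta
```

With lam = 1 this reduces to ordinary recursive least squares; values around 0.95-0.99 trade tracking speed against noise sensitivity.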
  • asked a question related to Data Fusion
Question
5 answers
Say, for argument's sake, I have two or more images with different degrees of blurring.
Is there an algorithm that can (faithfully) reconstruct the underlying image from the blurred images?
Best regards and thanks
Relevant answer
Answer
  • asked a question related to Data Fusion
Question
4 answers
Hello everyone,
I have 2 land cover maps of an area that were classified by randomForest in R from 2 different sources. I would like to apply the Dempster-Shafer theory to fuse these maps to produce the final land cover map. The basic probability assignment (BPA) function will be constructed by "calculating the probability of each pixel that belongs to each category and the probability of correct classification based on the Random Forest classification".
Can anyone suggest the way to do that in R?
Thanks and regards.
Relevant answer
Answer
Thank you, @Sarvat Gull
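Since the question asks for an R workflow, it is worth noting that the combination rule itself is only a few lines in any language. A language-agnostic sketch of Dempster's rule for two per-pixel BPAs, with hypothetical class labels:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two basic probability assignments.

    m1, m2 map frozensets of class labels to masses summing to 1.
    Masses assigned to intersecting focal elements reinforce each other;
    mass lost to empty intersections (conflict) is renormalised away.
    """
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

Per pixel, each Random Forest's class-vote proportions would supply the masses on singleton classes, with the remaining mass (e.g. one minus the map's estimated accuracy) placed on the full frame of discernment; the same loop translates directly to R.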
  • asked a question related to Data Fusion
Question
3 answers
I'm trying to find some articles or book chapters to learn about data fusion in structural health monitoring. Can anyone help me with that please?
Relevant answer
Answer
You can use the SPSS manual: go to "Transform" and, under Transform, click "Recode into Different Variables". The SPSS manual can be found online.
  • asked a question related to Data Fusion
Question
2 answers
My PhD topic is closely related: I am to use lidar and SAR data fusion to map AGB (above-ground biomass) within a forest reserve.
Hear from you soonest,
John Ogbodo
Relevant answer
Answer
Thank you so much, Erick. I will contact them.
  • asked a question related to Data Fusion
Question
3 answers
The IEEE GRSS data fusion challenge has a dataset with both HSI and LiDAR data. Any idea how to get them?
Relevant answer
Answer
Hi Debakanta,
You have to register on the IEEE GRSS contest website to get the dataset. Check this URL:
Good Luck
  • asked a question related to Data Fusion
Question
8 answers
Is data fusion one stage of data integration? Is data fusion a reduction or a replacement technique?
Please do let me know, thanks
Relevant answer
Answer
Dear Mahdis Dezfouli,
01. Data Fusion :
Data fusion is the process of getting data from multiple sources in order to build more sophisticated models and understand more about a project. It often means getting combined data on a single subject and combining it for central analysis.
or
Data fusion frequently involves “fusing” data at different abstraction levels and differing levels of uncertainty to support a more narrow set of application workloads.
or
Data fusion is the process of integrating multiple data sources to produce more consistent, accurate, and useful information than that provided by any individual data source.
Various types of data fusion work in different ways:
Low-, intermediate- and high-level data fusion are commonly distinguished, as are geospatial types of data fusion from other types. Another specific type of data fusion is called "sensor fusion", where data from diverse sensors are combined into one data-rich image or analysis.
Data fusion is broadly applied to technologies, for instance, in a research project, scientists might use data fusion to combine physical tracking data with environmental data, or in a customer dashboard, marketers might combine client identifier data with purchase history and other data collected at brick-and-mortar store locations to build a better profile.
Data fusion has a concrete definition from the Joint Directors of Laboratories (JDL) Data Fusion Group, whose information group model defines six levels:
  1. Source preprocessing
  2. Object assessment
  3. Situation assessment
  4. Impact assessment
  5. Process refinement
  6. User refinement
02. Data integration :
Data integration is a process in which heterogeneous data is retrieved and combined as an incorporated form and structure. Data integration allows different data types (such as data sets, documents and tables) to be merged by users, organizations and applications, for use as personal or business processes and/or functions.
or
Data integration is the combination of technical and business processes used to combine data from disparate sources into meaningful and valuable information. A complete data integration solution delivers trusted data from various sources.
or
Data integration involves combining data from several disparate sources, which are stored using various technologies, to provide a unified view of the data. Data integration becomes increasingly important when merging the systems of two companies, or when consolidating applications within one company to provide a unified view of the company's data assets. The latter initiative is often called a data warehouse.
or
Data integration in the purest sense is about carefully and methodically blending data from different sources, making it more useful and valuable than it was before. IBM provides a strong definition, stating “Data integration is the combination of technical and business processes used to combine data from disparate sources into meaningful and valuable information.”
An example of data integration in a smaller paradigm is spreadsheet integration in a Microsoft Word document.
Data integration is a term covering several distinct sub-areas such as:
  1. Data warehousing
  2. Data migration
  3. Enterprise application/information integration
  4. Master data management
I hope I have answered your question.
With Best Wishes,
Samir G. Pandya.
  • asked a question related to Data Fusion
Question
2 answers
I am looking to use Data fusion at the decision level. I came across the Dempster-Shafer theory of evidence. Can anyone point me to a good tutorial where I can understand this theory and apply it for my work? Please note, I need to know everything about it from scratch. Please do let me know. Thanks
Relevant answer
Answer
I suggest having a look at the book:
'Mathematics of Data Fusion' by Goodman, Mahler and Nguyen.
If I am not mistaken, the theory of random finite sets formalised by Mahler subsumes Dempster-Shafer theory, so it is worth looking into.
  • asked a question related to Data Fusion
Question
3 answers
I mean, we can apply DDPG to an RL model with a continuous action space, and ordinary Q-learning to one with a discrete action space. For a specific instance, suppose there are three dimensions in the action space, two of them continuous while one is discrete. How do we deal with this?
Relevant answer
Answer
I do not understand your question well. What is your observation?
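One common workaround for the mixed action space described in the question is a hybrid policy head: a softmax over the discrete dimension and a Gaussian over the continuous ones. A minimal sampling sketch (illustrative only, not a full DDPG or Q-learning implementation):

```python
import numpy as np

def sample_hybrid_action(logits, mu, sigma, rng):
    """Sample one action with one discrete and several continuous dimensions.

    logits : scores for the discrete choices (sampled via softmax),
    mu, sigma : mean/std arrays for the Gaussian continuous dimensions.
    Returns (discrete index, continuous vector).
    """
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    discrete = int(rng.choice(len(p), p=p))
    continuous = rng.normal(mu, sigma)
    return discrete, continuous
```

In a learned policy, `logits`, `mu` and `sigma` would be separate output heads of one network; parameterized-action methods such as P-DQN take this idea further by conditioning the continuous parameters on the discrete choice.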
  • asked a question related to Data Fusion
Question
4 answers
Hi, I am working on a project in which I am trying to develop an occupancy grid based on inverse sensor modelling. I am only using it to provide the occupancy of static objects.
I have some really basic questions, as I am not yet able to picture the concepts properly. I have read material online, attended the grid modelling course on Coursera, and studied some publications, but I am looking for a simple explanation I could give to non-technical people.
- What is inverse sensor modelling?
- How do I assign probabilities to grid cells using Bayes' theorem? A mathematical explanation and calculations, or in simple words: how do I build a map through an inverse sensor model using a Bayesian technique? (I am using data from 3 sensors (radar, lidar, camera), so I am also doing sensor data fusion to obtain approximately valid detections.)
- How do I filter out dynamic detections, as I am only interested in detections of static objects?
- If possible, can someone also refer me to some valuable and concrete research publications for more details?
I will be really thankful for your time and explanations in advance.
Relevant answer
Answer
You can also check this paper; it was part of my thesis:
Experiments with Simultaneous Environment Mapping and Multi-Target Tracking
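The Bayesian grid update asked about is usually done in log-odds form, where fusing a new scan reduces to adding the inverse sensor model's log-odds per cell. A minimal sketch (the 0.7/0.3 occupancy values mentioned in the comments are illustrative assumptions):

```python
import numpy as np

def logodds(p):
    """Log-odds of a probability."""
    return np.log(p / (1.0 - p))

def update_cell(l_prev, p_meas, l0=0.0):
    """One Bayesian occupancy update in log-odds form.

    l_new = l_prev + logodds(inverse sensor model) - prior log-odds.
    p_meas is the inverse sensor model's occupancy probability for this
    cell given the current scan, e.g. 0.7 at a radar/lidar hit, 0.3 along
    the free ray, 0.5 where nothing is observed.
    """
    return l_prev + logodds(p_meas) - l0

def probability(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(l))
```

Repeated hits drive a cell's log-odds up, repeated misses drive it down; dynamic objects alternate hits and misses, so one simple static-object filter is to keep only cells whose log-odds stays high over many scans.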
  • asked a question related to Data Fusion
Question
5 answers
I want to know about novel data fusion architectures or models.
Relevant answer
Answer
Dear Dr.,
I need to fuse heterogeneous data for different applications; the first is an environmental application using mainly continuous and discrete variables. The main task is the prediction of pollutant variables.
Thanks in advance,
M.A. Becerra
  • asked a question related to Data Fusion
Question
9 answers
Some of the literature I’ve read describes the Kalman Filter as an excellent sensor fusion or data fusion algorithm. Because of this, I wonder (1) if it’s possible to implement a Kalman Filter with a single sensor’s data--every implementation I've read uses more than one sensor--and (2) if possible, is Kalman Filtering truly an optimal algorithm for this system?
Specifically, my sensor measures just position, p, at designated time intervals. I then use the distance traveled divided by the time interval to calculate velocity, v, at the next time step. (Velocity at first timestep is assumed to be zero.) The state, x, is defined in terms of p and v.
However, given that I merely did manipulation with the first data set, I’m not convinced this is "data fusion" when all data is derived from the same source. Nor am I sure if the KF could do much more than subdue measurement noise in this case.
Relevant answer
Answer
The human sensory system (sight, hearing, smell, taste, touch) is a paradigm for data fusion. In the strict sense, fusion requires at least two sensors measuring essentially different signals.
Regarding the number of sensors a Kalman filter needs, you should first have the models (system and observation models), taking into account their controllability and observability. This reference tackles that point:
Controllability and Observability: Tools for Kalman Filter Design;
B. Southall, B. F. Buxton and J. A. Marchant,
London WC1E 6BT, UK
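On question (1): yes, a Kalman filter can run on a single position sensor. With a constant-velocity model and H = [1, 0], the filter infers velocity itself, so no hand-computed finite-difference velocity is needed. A sketch with hypothetical noise values:

```python
import numpy as np

def kf_position_only(zs, dt, q=0.1, r=1.0):
    """Constant-velocity Kalman filter driven by position-only measurements.

    State x = [p, v]; H = [1, 0] because only position is observed.
    q scales the white-noise-acceleration process noise, r is the
    position measurement variance (both illustrative tuning values).
    Returns the filtered state at every step.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    H = np.array([[1.0, 0.0]])
    R = np.array([[r]])
    x = np.zeros(2)
    P = np.eye(2) * 10.0
    out = []
    for z in zs:
        x = F @ x                         # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                     # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
        x = x + (K @ y).ravel()           # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)
```

Compared with subtracting successive positions, the filter's velocity estimate is smoothed by the model, so it does more than "subdue measurement noise" only insofar as the constant-velocity assumption holds; with a single sensor this is state estimation rather than data fusion proper, matching the doubt raised in the question.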
  • asked a question related to Data Fusion
Question
4 answers
Please suggest an algorithm for fusing ASTER and MODIS data with minimal spectral data loss and enhanced spatial resolution. I would be glad to hear about your experience and helpful papers.
Relevant answer
Answer
Take a look at the text and papers given in the links below; perhaps they might be of interest to you (it is an answer I gave to a previous question):
Attached are links to papers I published related to this topic (they are relatively old papers, on work I did 2 to 3 decades ago). They will show you an example of pushing the limits of merging image data sets of very different spatial resolution. Keep in mind that usually the spectral information is what the lower-resolution data set contributes, and spatial resolution (detail) is what the higher-resolution data set contributes. I also have a paper discussing how merged multi-resolution image data sets do better when they cover a similar spectral range, to minimize edge reversals.
1991 paper: Comparison of Three Different Methods to Merge Multiresolution and Multispectral Data: Landsat TM and SPOT Panchromatic
https://www.researchgate.net/publication/200458993_Comparison_of_Three_Different_Methods_to_Merge_Multiresolution_and_Multispectral_Data_Landsat_TM_and_SPOT_Panchromatic
1986 paper: Digital merging of Landsat TM and digitized NHAP data for 1:24,000-scale image mapping (National High Altitude Program)
https://www.researchgate.net/publication/240321338_Digital_merging_of_Landsat_TM_and_digitized_NHAP_data_for_1_24000-scale_image_mappingNational_High_Altitude_Program
Pat
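For a concrete starting point, one of the simplest merge schemes in this family is a Brovey-style ratio merge. A sketch, assuming the low-resolution multispectral bands have already been resampled to the high-resolution grid; it trades some spectral fidelity for spatial detail:

```python
import numpy as np

def brovey_merge(ms, pan, eps=1e-9):
    """Brovey-style ratio merge of multispectral and panchromatic data.

    ms  : (bands, H, W) multispectral image resampled to the pan grid,
    pan : (H, W) high-resolution band.
    Each band is scaled by pan / mean(ms), so the band-to-band ratios
    of the low-resolution data are preserved while spatial detail is
    injected from the high-resolution band.
    """
    intensity = ms.mean(axis=0) + eps   # eps avoids division by zero
    return ms * (pan / intensity)
```

For an ASTER/MODIS pair, component-substitution (e.g. PCA/IHS) or multiplicative methods like this one keep more spatial detail, while filter-based methods generally preserve spectra better; the 1991 paper above compares three such families.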
  • asked a question related to Data Fusion
Question
10 answers
The double integration of acceleration gives the position of an object. However, how should the position of an object fitted with an accelerometer and a gyroscope be determined using the data from the two sensors?
Relevant answer
Answer
Hello,
the problem you are referring to usually receives the name of "inertial navigation" or "dead reckoning". It is something I am currently working on, and is a problem whose solution is not direct.
The method I am using is:
1. Obtain a good "orientation estimation" or "attitude estimation". That is, find the best guess of the rotation transformation relating two reference frames: the one attached to your sensor, and another usually taken to be an inertial (or approximately inertial) reference frame, such as one whose z-axis is orthogonal to the Earth's surface. The accelerometers sense the gravitational acceleration, so they give you information about where "down" is. Gyroscopes help predict orientation. A common approach is to use a Kalman filter:
although others use gradient descent techniques and also obtain good results:
2. Transform the acceleration measurements (taken in a non inertial reference frame: the one attached to your sensor) to the inertial reference frame using the rotation transformation obtained with orientation estimation.
3. Subtract the acceleration due to gravity.
4. Integrate the resulting acceleration twice to obtain position.
This is what you usually obtain when you perform this process:
If you have seen those videos you will have begun to think that it is not possible. However, in certain scenarios you can use pseudo-measurements to improve your estimation:
I would conclude that, depending on your application, you should use pseudo-measurements or include more sensors to estimate the position. Otherwise, as far as I know, it is not possible to obtain a good position estimate.
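Steps 2-4 above can be sketched as follows, assuming step 1 has already produced body-to-inertial rotation matrices; the simple Euler integration and the gravity constant are illustrative:

```python
import numpy as np

G_UP = np.array([0.0, 0.0, 9.81])  # specific force sensed at rest (z-axis up)

def integrate_position(accels_body, rotations, dt):
    """Steps 2-4 of the answer: rotate, remove gravity, integrate twice.

    accels_body : (N, 3) accelerometer samples in the sensor frame,
    rotations   : (N, 3, 3) body-to-inertial rotation matrices from the
                  orientation estimator (step 1), dt : sample period.
    Returns (N, 3) positions. Drift grows quadratically with time, which
    is why the pseudo-measurements mentioned above are needed in practice.
    """
    v = np.zeros(3)
    p = np.zeros(3)
    out = []
    for a_b, R in zip(accels_body, rotations):
        a_i = R @ a_b - G_UP    # step 2+3: inertial frame, gravity removed
        v = v + a_i * dt        # step 4a: first integration -> velocity
        p = p + v * dt          # step 4b: second integration -> position
        out.append(p.copy())
    return np.array(out)
```

Any small bias left after the gravity subtraction integrates into a velocity ramp and a quadratic position error, which is the drift shown in the videos referenced above.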
  • asked a question related to Data Fusion
Question
3 answers
Hello, can we use a simulator for IoT data fusion?
Relevant answer
Answer
Ok thank you so much Mr Ali Farmani
  • asked a question related to Data Fusion
Question
2 answers
I am trying to integrate a GPS and IMU via a loosely coupled integration. My state is the position and velocity vector. I am using the following equations for the nav frame mechanization.
I am trying to use the Kalman filter, and for this I need:
the state transition matrix φ
the system noise covariance matrix Q.
I assume the measurement matrix H is the identity matrix because the outputs of the IMU and GPS are in the same frame
The measurement noise matrix R.
How do I initialize the initial covariance matrix?
I am assuming that I have perfect sensors. Although I would like to know how not having perfect sensors would change these matrices.
my questions are the following:
For the state transition matrix: is it derived from the IMU mechanization above? If so, how do you deal with the fact that the future latitude is a function of the future altitude?
Would the system noise covariance be 0, since I am assuming a perfect model?
Is the H matrix the identity?
How is the initial covariance matrix usually initialized?
 
Relevant answer
Answer
Hi Ariel,
As Graham mentioned, you will need to add process noise in order for the filter to converge. Also, assuming H is the identity can lead to bad results, as you then always assume that the GNSS and IMU are bias-free (which in real life is not the case). I would suggest adding the GNSS and IMU biases as state variables so the filter estimates them as well.
You can have a look at our work on the same topic. If you want, I can also send you a C++ implementation of it.
Regards.
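Regarding the φ, Q, and P0 the question asks about: a standard choice for a one-axis (position, velocity) state is the discrete white-noise-acceleration model, with P0 built from the initial position/velocity uncertainty. A sketch with hypothetical tuning values:

```python
import numpy as np

def cv_model_matrices(dt, accel_psd, pos_std0, vel_std0):
    """State transition phi, process noise Q, and initial covariance P0
    for a one-axis constant-velocity (position, velocity) state.

    accel_psd : power spectral density of the unmodelled acceleration.
    Setting it to zero ("perfect model") makes P collapse so the filter
    eventually ignores new measurements -- keep it positive.
    """
    phi = np.array([[1.0, dt],
                    [0.0, 1.0]])
    Q = accel_psd * np.array([[dt**3 / 3.0, dt**2 / 2.0],
                              [dt**2 / 2.0, dt]])
    P0 = np.diag([pos_std0**2, vel_std0**2])
    return phi, Q, P0
```

With imperfect sensors, R comes from the GNSS receiver's reported position accuracy, and P0 from how well the initial alignment is known; the full nav-frame φ couples the position, velocity, and attitude errors and is considerably larger than this one-axis sketch.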
  • asked a question related to Data Fusion
Question
4 answers
Detailed description:
I am working on a steering wheel angle sensor that measures the absolute angle of the steering wheel. Steering angle sensors use gears and several joints, which is entirely a hardware matter, so despite calibration at the start, with the passage of time, due to wear of the mechanical parts and also some environmental and road conditions, errors appear in the sensor values (e.g. offset, phase change, flattening of the signal, delay).
In short, because of these measurement errors our aim gets distorted. If I am viewing a velocity-vs-time curve, then with the original, calibrated sensor (one close to ideal conditions) my velocity shows a peak in amplitude, but due to error (hysteresis) in the measured signal I do not get that peak, or I get a flattening of the curve, which affects my final task.
I have a tolerance of, say, 1.20 degrees for hysteresis. I therefore want a detailed picture of my signal and want to observe whether changes such as offset, delay, or lowering have occurred in it. This will give me an idea of whether to reduce the number of sensors used for my task, make changes to the sensor hardware to reduce the hysteresis, or take other actions to reduce it.
What I have done up till now (I am not yet sure whether it is right or wrong): I am getting some values for hysteresis, but I have a few questions regarding the techniques used. If someone can suggest how to improve these techniques, or a better approach, that would be great guidance.
I have an ideal sensor signal (under the ideal conditions we want) and values from one sensor; I have data from 6 different drives of the car. I will explain just one example from my first drive and its relation to my reference sensor data.
The reference signal and the sensor signal are each 1 x 1626100 doubles for one reading from the sensor; in all readings, the ideal and measured signals share the same time base.
In short, I want to find the hysteresis difference between the reference signal and the measured signal.
I have applied a few estimation techniques to accomplish my goals.
1. I applied a Gaussian technique to estimate the error in my data, but I am not satisfied with it, as I am not getting good or expected values, possibly due to outliers or other errors.
I subtracted (reference - measured signal), calculated the mean and the standard deviation of the difference signal after applying my limits, and then drew a Gaussian curve with that mean and standard deviation. I drew two lines, one at mean + standard deviation and one at mean - standard deviation; the distance between the +ve and -ve lines is called the hysteresis (loss).
Please have a look at the attached figure. I attached a figure for a trimodal Gaussian curve, but in some diagrams, just like picture 3, my data is shifted. Can anyone tell me why this happens and how to remove it? It occurs in all diagrams, but in figure 3 it is clearest.
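A minimal Python sketch of this mean ± standard deviation approach (the signals here are synthetic stand-ins for the drive data, not the real measurements):

```python
import numpy as np

# Synthetic stand-ins: a sine wave as the reference, and the same wave with
# a constant offset plus noise as the "measured" signal.
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 1000)
reference = np.sin(t)
measured = np.sin(t) + 0.05 + rng.normal(0, 0.02, t.size)

diff = reference - measured            # difference signal
mu, sigma = diff.mean(), diff.std()
hysteresis = 2 * sigma                 # width of the (mu - sigma, mu + sigma) band
```

Note that a pure offset moves the mean mu of the difference signal without widening the band, so a shifted curve like the one in picture 3 usually indicates a constant offset rather than extra hysteresis.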
2. In this method I applied a regression-line technique (on the upper and lower values of the difference signal).
I took the difference of my signals (reference - measured signal, after applying my limits).
I applied the regression technique above and below the difference signal, i.e. on the upper values and on the lower values separately; the distance between the upper and lower regression lines is called the hysteresis (loss). Please have a look at figures 3 and 4 for a clear view.
The problem with this technique is that I define the values for the upper and lower regression lines myself after looking at the data, e.g. up = 0.08, low = -0.08.
3. I also applied the RMSE technique, but I have a few questions that confuse me.
As I am dealing with static data, I treated my reference signal as the actual values and the signal from the sensor as the measured values, and applied the RMSE formula to them:
    square_error = (sig_diff_lim).^2;
    mse = mean(square_error);
    mse_divided = mse / numel(drv(2));
    rmse = sqrt(mse);
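For reference, the same computation in Python (sig_diff_lim here is a toy stand-in for the limited difference signal):

```python
import numpy as np

# MSE is in squared signal units; RMSE (its square root) is back in the
# units of the signal itself, which is why it is easier to compare against
# a tolerance such as 1.20 degrees.
sig_diff_lim = np.array([0.1, -0.2, 0.05, 0.15, -0.1])  # toy stand-in

square_error = sig_diff_lim ** 2
mse = square_error.mean()
rmse = np.sqrt(mse)
```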
4. I also applied a correlation function, but I do not think it works well with my data. It did, however, give me some good insight into the data.
Some questions that also need clarification, if possible:
1. What is the difference between RMSE and MSE? I know the basics, but I want to know in which applications RMSE or MSE is used, and which will work for my data.
2. I have now explained the Gaussian and regression techniques and RMSE. Can someone explain which technique is best to use, given the issues with the Gaussian and regression techniques I mentioned above? An explanation of the pros and cons of these techniques would be enough for me.
I hope I have given you a clear and detailed view of what I want to do and what I am expecting, and I hope for some positive and valuable responses.
Sorry for the long story, but I am not an expert and am still learning, so useful responses will improve my knowledge and help me accomplish my task/goal.
Thanks in advance for your cooperation.
Relevant answer
What type of hysteresis sensor are you studying?
Magnetic?
  • asked a question related to Data Fusion
Question
3 answers
I am using StarFM for data fusion between Landsat 8 and MODIS. I resampled MODIS from 500 m to 30 m resolution using the MRT tool, and cropped both the Landsat and MODIS images with a shapefile. When I overlay the two images, the resampled MODIS image covers a larger extent than the Landsat 8 image.
I need to make the two images the same size and the same resolution.
  • asked a question related to Data Fusion
Question
2 answers
If yes, how do you treat the weights for the two different variables? Do you use one weight vector for position and another for velocity?
Relevant answer
Answer
Thanks Enrique. Your assumptions are true, but the weights are connected to and calculated from the measurement vector. Say we have the state vector you proposed above, with a measurement vector containing ranges and range rates, associated with the positions and speeds in the state vector. When you calculate the weights using a measure of the distance between the measured quantities and the predicted ones, you get numbers with very different orders of magnitude, since the measured quantities live in different domains, i.e. range and range rate. When you normalize these calculated weights, the smaller ones vanish and have no effect on the process. So, the question again, or what I found, is that I need to calculate the weights separately, normalize them, and then merge them, but I wanted someone to confirm whether they noticed the same thing.
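A rough sketch of this separate-normalization idea (the residual values, sigmas and the product merge rule are illustrative assumptions, not from the thread):

```python
import numpy as np

# Three hypotheses, each with a residual in two very different domains:
# range (metres, order 1e2-1e3) and range rate (m/s, order 1).
range_residuals = np.array([10.0, 500.0, 1000.0])   # metres
rate_residuals = np.array([0.1, 0.5, 2.0])          # m/s

def normalized_weights(residuals, sigma):
    # Gaussian-likelihood weights, normalized within their own domain so
    # neither domain's scale can swamp the other.
    w = np.exp(-0.5 * (residuals / sigma) ** 2)
    return w / w.sum()

w_range = normalized_weights(range_residuals, sigma=300.0)
w_rate = normalized_weights(rate_residuals, sigma=1.0)

# Merge: product of the per-domain weights, renormalized.
w = w_range * w_rate
w = w / w.sum()
```

Here the hypothesis with small residuals in both domains ends up with the largest merged weight, which would be lost if raw range residuals (hundreds of metres) were normalized together with range-rate residuals (fractions of m/s).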
  • asked a question related to Data Fusion
Question
2 answers
Is it possible to carry out joint SP-RP model estimation in the RPL framework using the sequential estimation technique? Any references for this? To the best of my understanding, it is possible when we use a data pooling technique for joint estimation. I need suggestions and clarifications in this regard.
Relevant answer
@Mohamed Thanks a lot for your suggestion.
  • asked a question related to Data Fusion
Question
4 answers
I intend to calculate the Doppler centroid anomaly of Sentinel-1 SLC images and use it to estimate line-of-sight motion. There are three sub-images in one SLC image, with 27 Doppler coefficients in 9 groups. I have no idea how to calculate the Doppler centroid anomaly from the 27 Doppler coefficients. Thank you!
Relevant answer
Answer
Thank you for everyone's help! I am now studying the papers.
  • asked a question related to Data Fusion
Question
3 answers
For the detection and segmentation, I know several techniques:
(1) Direct thresholding
(2) Finite Moving Average Filter
(3) Savitzky-Golay Filter
(4) Constant False Alarm Rate
(5) Wavelet Transform
For the deinterleaving task, I know
(1) Cumulative Difference Histogram (CDIFF)
(2) Sequence Difference Histogram (SDIFF)
(3) Sequence Searching
Are there any techniques better than those I have mentioned?
Many thanks!
Relevant answer
Answer
You are quite welcome, Sir.
  • asked a question related to Data Fusion
Question
3 answers
Hello!
I am working on a multi-platform, multi-target tracking problem. I am using interacting multiple models (IMM) [CV + CT + Singer]. My problem is asynchronous (different plot times for every platform). Mono-platform processing is fine; I am satisfied with the results. But when I try to fuse my measurements, the quality is unacceptable.
I want to do something like this: extrapolate all local (mono-platform) tracks to one time and fuse them there. Calculating the cross-covariances is quite involved and resource-consuming in this case, so I chose the method described in Zhen Ding, Lang Hong, "A distributed IMM fusion algorithm for multi-platform tracking" (1998). It suggests using a "global" filter and fusing the locals using both the local and "global" filter results. (It performs the same procedure as Bar-Shalom, Multitarget-Multisensor Tracking: Principles and Techniques, p. 464 ("Optimal fusion using central estimator"), but for an IMM filter, not for a KF.)
My problem is: I want to extend this article's technique to the asynchronous case. I don't understand how to get predicted and filtered values for the local tracks (because I only have the predicted value to bring all local tracks to the same time). Any suggestions?
The second problem is less complicated: I just don't know what initial value to set on the "global" filter.
Relevant answer
Answer
Thanks for the answer!
I didn't find multisensor fusion without cross-covariances in this article, though. I am not sure it is possible to combine techniques from this article with Zhen Ding, Lang Hong, "A distributed IMM fusion algorithm for multi-platform tracking" (1998) to deal with asynchronous IMM.
  • asked a question related to Data Fusion
Question
1 answer
The name in the quote is not a specific article title.
I am searching for such an article, but no answer till now!
  • asked a question related to Data Fusion
Question
20 answers
Hi everyone!
My challenge is to segment the LV short axis without any kind of manual segmentation: to find a way to localize and segment the LV without training data sets or even an initial manual segmentation done by experts. In the literature I found the Hough transform as a possible, but not good, solution. I thought of using a deformable model (snakes) for the segmentation after applying Hough. The main problem is that I can't use a method that requires a lot of prior knowledge. Any ideas to help me? Thanks...
Relevant answer
Answer
Well Aparna,
This is becoming too vague for me. Convince me with a Matlab program that works for the two (I admit, very different) examples uploaded by Lucas.
Harris, Cartesian fractals, I don't care. Just segment the images.
  • asked a question related to Data Fusion
Question
3 answers
I use an interacting multiple model KF for tracking planes. The main targets are passenger planes, but there are some cases with highly manoeuvring planes. I tried CV, CA and Singer (a ~ 1/30). This didn't satisfy me at all. Maybe I used the wrong Gamma matrices and wrong initial probabilities, I don't know; I think I tried a lot of options.
Any recommendations for model combinations? Concrete articles would be great. Thank you!
Relevant answer
Answer
Hello. Manoeuvring target tracking is a difficult area. There are really a lot of approaches. If you listen to Bar-Shalom et al., you will be using the interacting multiple model, which itself is just an approximation to a Bayesian filter for a hybrid dynamical system. Singer is a more conventional approach. I have also used RHV coordinates (in Blackman's book). The choice of coordinates is really important. The filter is only as good as the dynamical model it's based on. For ATC you should be able to get a pretty good dynamical model (centred on the axes of the aircraft), and if you have 3-D measurements, you should be able to set the problem up pretty well. You will need good measurement gating (which should occur naturally if you have body-centred coordinates).
Here are some references:
For the record, the EM (Viterbi+Kalman smoother) approach is very high performance (better than IMM) but does not explicitly allow for clutter (except through a PMHT-like approximation), but should be good for low clutter or in higher dimensional measurement space (read dim > 1).
The survey paper, which would have been the first comprehensive paper on this topic, was supposed to appear in the IEEE Transactions on Aerospace & Electronic Systems in 1998, but was suppressed by the associate editor at the time, whose student was working on ... manoeuvring target tracking! My paper (based on work done in 1996) was in review for 3 years and by the time I got the reviews back it was obsolete! I did complain to the E-I-C of IEEE-TAES but you can't go backwards in time. Now you know why I am not recommending certain work by a person whose name should be obvious to you!
Good luck!
  • asked a question related to Data Fusion
Question
4 answers
Hello!
I am working on plane tracking. The simplest measurement selection (for subsequent Kalman filtering) is: http://take.ms/4k8Fh . I use an IMM filter, though. The question is very simple: how should I modify this expression to select measurements? I have 3 models (CV, CA, Singer) and probabilities (weights) for each model. I thought about calculating properties of the Gaussian mixture here (using the model probabilities) and trying to find a criterion that assures, with some pre-assigned probability, that a point belongs to the track. But I am not sure that this is the right method.
How should I proceed?
Relevant answer
Answer
Please go through chapter 2 (especially page 96) of the textbook Multitarget-Multisensor Tracking: Principles and Techniques (1995) by Bar-Shalom and X. Rong Li.
  • asked a question related to Data Fusion
Question
6 answers
While coarse-resolution SM products like SMOS and SMAP can describe temporal soil moisture dynamics well, they lack spatial detail, and we can't use them to analyse local hydrological patterns. In contrast, the SM product from the Sentinel-1 (A/B) sensor can resolve dynamics at this spatial level, but its temporal resolution is less frequent and thus not suitable for capturing short-term variations. To overcome these spatial and temporal scale gaps, we can use a data fusion approach that fuses multi-sensor data. Is there any method which could help me do this?
Relevant answer
Answer
Finally, I found that Sentinel-1 can be combined with radiometer data like SMOS.
I think there is no way to fuse Sentinel-1 with optical data right now.
  • asked a question related to Data Fusion
Question
2 answers
I want to know if there is any free simulation tool/benchmark or real data set available to evaluate algorithms in distributed data fusion/track-to-track fusion, e.g. covariance intersection, covariance union, Bar-Shalom-Campo, Kalman filter, etc. In research on these methods, performance is usually evaluated on a simple vehicle tracking example simulated in Matlab. But I am interested in a real data set, or at least an elaborate simulation tool that is easy and flexible to use for evaluating the aforementioned algorithms.
Thanks in advance.
Relevant answer
Answer
You could take a look at our simulator here.
Good luck!
  • asked a question related to Data Fusion
Question
2 answers
Has anybody done sensor data fusion before? I need some valuable insights and help.
Learning resources are plentiful, but I want to go from the basics of data fusion to mastery.
Relevant answer
Answer
First see wavemenu in Matlab; it is a simple GUI (graphical user interface). With it you can understand how to fuse two signals using wavelets:
find the 1-D DWT of the two signals separately, then add their coefficients, and finally take the 1-D IDWT of the combined coefficients.
By this you can develop the dataset.
I hope it is helpful to you.
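Outside MATLAB, the same idea can be sketched with a minimal single-level Haar transform (here the coefficients are averaged rather than summed, a common variant that keeps the fused amplitude in range; a real workflow would use a wavelet library and more decomposition levels):

```python
import math

# Single-level Haar DWT/IDWT for an even-length signal.
def haar_dwt(x):
    s = math.sqrt(2)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    s = math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

def fuse(x, y):
    ax, dx = haar_dwt(x)
    ay, dy = haar_dwt(y)
    # combine the two signals' coefficients, then invert the transform
    a = [(p + q) / 2 for p, q in zip(ax, ay)]
    d = [(p + q) / 2 for p, q in zip(dx, dy)]
    return haar_idwt(a, d)

fused = fuse([1, 2, 3, 4], [3, 4, 5, 6])
print([round(v, 6) for v in fused])  # → [2.0, 3.0, 4.0, 5.0]
```

Because the Haar transform is linear, averaging the coefficients of two signals is equivalent to averaging the signals themselves; the coefficient-domain view only starts to matter when different fusion rules are applied per sub-band (e.g. keeping the larger detail coefficient).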
  • asked a question related to Data Fusion
Question
5 answers
Recently I'm working on a link scheduling algorithm in a wireless sensor network; the topology example is shown in the figure. It is a convergecast communication pattern: nodes besides the root upload their data toward the sink.
The queue size (or cached data size) of the aggregators is somewhat large. For an aggregator (a non-leaf node in the tree topology), the data collected from its children is similar or redundant, and we may need to repackage data for transfer along the way to the root (sink), so data compression or entropy reduction is possible.
Now I need a data compression method at the aggregator that reduces the transferred data size while ensuring acceptable communication quality. Terms like 'data fusion' and 'data aggregation' in wireless sensor networks always lead to topics about routing, which is not what I expected; maybe I am using improper terms.
Could anybody give me a hint or advice?
Relevant answer
Answer
Make the sensors send only abnormal readings (outside the reading range: max and min values). Each gateway saves the range of readings for each sensor and informs the root gateway if there is an effective change (for example, by applying the 50th percentile to the historical data).
  • asked a question related to Data Fusion
Question
11 answers
Hello!
In this question I mean that in my concrete case data arrives not in batches but as a sequence of measurements, obtained in real time. So I have to do something with every point as it is obtained. It is hard for me to apply IMM/PDA correctly because:
1) it is a multiplatform case;
2) there is clutter. PDA can easily deal with clutter given a batch of measurements over a time T by calculating the association probabilities, but if I have received a single point I can't perform PDA well (for example, this point could be a clutter point).
What is the way to deal with this? Summarising: I understand IMM/PDA fairly well, but I can't apply it to multiplatform sequential data. How can I deal with it?
Relevant answer
Answer
Do you mean that measurements may arrive out of sequence? So that newer data may arrive at your tracker before older data?
Rather than trying to wrangle with the maths (using prediction and 'retro-diction') could you just push all of your measurements through a buffer? Sort them as they pass through. Then apply the IMM/PDA algorithm on the sorted measurements at the other end? This would apply a lag/latency to your system (determined by the maximum expected transmission delay in your network), but if that is tolerable it might be a simple way of dealing with your problem.
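A minimal sketch of such a sorting buffer (the class name, delay value and tuple layout are made up for illustration):

```python
import heapq

# Hold measurements in a time-ordered buffer and release them to the
# tracker only once they are older than the maximum expected transmission
# delay, so the tracker always sees measurements in time order.
class SortingBuffer:
    def __init__(self, max_delay):
        self.max_delay = max_delay
        self.heap = []  # min-heap of (timestamp, measurement)

    def push(self, t, z):
        heapq.heappush(self.heap, (t, z))

    def release(self, now):
        """Pop all measurements older than now - max_delay, in time order."""
        out = []
        while self.heap and self.heap[0][0] <= now - self.max_delay:
            out.append(heapq.heappop(self.heap))
        return out

buf = SortingBuffer(max_delay=2.0)
buf.push(5.0, "late")    # arrives first, but was measured later
buf.push(4.0, "early")   # arrives out of order
print(buf.release(now=8.0))  # → [(4.0, 'early'), (5.0, 'late')]
```

The price is the fixed latency of `max_delay`; anything arriving even later than that would still need retrodiction or would have to be dropped.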
  • asked a question related to Data Fusion
Question
10 answers
I'm looking for someone or some work (a paper) explaining sensor data fusion (using a Kalman filter or another algorithm) that fuses more than one source of orientation data (like arm orientation in Euler angles, or quaternions extracted from two or more sensors).
I will be very grateful by any help!
Relevant answer
Answer
Dear Carlos,
here is a link to an IEEE paper from the 2015 American Control Conference.
The abstract is great; you can read it first:
Measurement censoring, or Tobit model censoring, is common in many engineering applications. It arises from limits in sensor dynamic range, and may be exacerbated by poor calibration of sensors. Censoring is often referred to as a clipped measurement or limit-of-detection discontinuity, and is represented as a piecewise-linear transform of the output variable. The slope of the piecewise-linear transform is zero in the censored region. This form of nonlinearity presents significant challenges when a nonlinear approximation to the Kalman filter is to be used as an estimator. The Tobit Kalman filter is a new method that is a computationally efficient, unbiased estimator for linear dynamical systems with censored output. In this paper, we use Monte Carlo methods to compare the performance of the Tobit Kalman Filter to the performance of the Extended Kalman Filter and the Unscented Kalman Filter. We show that the Tobit Kalman Filter reliably provides accurate estimates of the state and state error covariance with censored measurement data, while both the EKF and the UKF provide unreliable estimates in censored data conditions
Best regards
Ashraf Abosekeen
  • asked a question related to Data Fusion
Question
13 answers
Hi guys,
I would like to know whether it is possible for the EKF update phase to make the estimate poorer than it was at the prediction phase. I noticed in my simulator that there are time instances where the update phase estimates the actual state vector more poorly than the prediction phase did.
I execute the update phase every two time steps, and the measurement covariance matrix is a bit higher than the noise in the prediction equation.
Could someone confirm that this can happen and that it is not an EKF implementation mistake?
I made the system as linear as possible in order to reduce the linearization error of the EKF. Still, the same thing happened.
Relevant answer
Answer
Hi Peter,
After reading your question again, I can confirm that this behavior is normal, as you speak of the error relative to the ground truth and not of uncertainty errors. (I was confusing the two in the previous messages.)
As your state is a random variable depending on measurements, its mean can be more or less accurate depending on the observation.
Suppose the ground truth is x_gt = 2 and your estimate at time t is x_t = 2.1 with a variance of 4.
At this step your mean estimate is very good, with a poor variance. Suppose you get a measurement z = 3 with variance 1.5 (the measurement is correct, as the ground truth value is within its 3-sigma bound).
Then the fusion (update step) will move your current estimate to something higher than 2.1, making your distance to the ground truth bigger but reducing the uncertainty.
At this step you cannot say that the result is poorer than the previous one, because statistically it is much better.
So the behaviour you observed is not a bug; it is the normal behaviour of such a filtering process, since we speak of random variables. You have to consider both x and its covariance Px.
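The numbers in this example can be checked with the standard scalar Kalman update:

```python
# Scalar Kalman update with the numbers from the example:
# prior x = 2.1, variance P = 4; measurement z = 3, variance R = 1.5.
x_prior, P_prior = 2.1, 4.0
z, R = 3.0, 1.5

K = P_prior / (P_prior + R)           # gain ≈ 0.727: measurement dominates
x_post = x_prior + K * (z - x_prior)  # pulled toward the measurement
P_post = (1 - K) * P_prior            # uncertainty shrinks

print(round(x_post, 2), round(P_post, 2))  # → 2.75 1.09
```

With ground truth 2.0, the updated mean 2.75 is farther from the truth than the prior 2.1, yet the variance drops from 4 to about 1.09: exactly the behaviour described.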
Hoping my answer will help you.
  • asked a question related to Data Fusion
Question
11 answers
Hi guys,
I am working on the localization problem of an underwater vehicle. The problem is still very simple, and it does not matter that it is underwater.
How can I use the EKF when I have multiple measurements with different sampling rates? Not all measurements are available at the same time for the update step.
Can I perform the update with whatever measurements I have at each time step? For example, I have a pressure sensor and a speed sensor with a frequency of 1 Hz, but also another sensor (a USBL system) at 0.01 Hz.
What if I perform the update step for the pressure and speed sensors at each time step, adjusting the measurement model accordingly (as if I only had those sensors), and once I receive a USBL measurement, augment the measurement vector to include it?
Any help?
Thanks in advance
Relevant answer
Answer
If the measurements arriving asynchronously are independent, you can still use them to update the Kalman filter or EKF. Just predict to the required measurement time and filter using the appropriate H matrix for the measurement you have.
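A minimal sketch of this multi-rate scheme, with assumed 2-state dynamics and sensor models (illustrative only, not the actual AUV setup):

```python
import numpy as np

# State: [depth, vertical speed]. One predict per epoch, then an update with
# whichever sensors reported, using H and R assembled from only those sensors.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = np.eye(2) * 0.01

H_press = np.array([[1.0, 0.0]])   # pressure sensor observes depth (1 Hz)
H_speed = np.array([[0.0, 1.0]])   # speed sensor observes speed (1 Hz)

def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

x, P = np.zeros((2, 1)), np.eye(2) * 10.0
for k in range(1, 11):
    x, P = F @ x, F @ P @ F.T + Q                       # predict to time k
    # fast sensors (every epoch): stack their rows into one update
    z_fast = np.array([[float(k)], [1.0]])              # synthetic: depth=k, speed=1
    x, P = kf_update(x, P, z_fast,
                     np.vstack([H_press, H_speed]), np.diag([0.5, 0.1]))
    if k % 5 == 0:                                      # slow USBL-like depth fix
        x, P = kf_update(x, P, np.array([[float(k)]]), H_press, np.array([[2.0]]))
```

Stacking the available rows of H (or, equivalently, running one sequential update per sensor when their noises are independent) gives the same result as a single full-vector update, so nothing special is needed for the slow sensor beyond updating only when its measurement arrives.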
  • asked a question related to Data Fusion
Question
3 answers
The aim of data fusion (DF) is basically to increase the quality of data through the combination and integration of multiple data sources/sensors. My research assesses this impact of DF on DQ (data quality), so I would appreciate academic materials to back up your conclusions.
I have been trying to link DF methods to the DQ dimensions they impact most, to no avail.
Relevant answer
Answer
DF is improving DQ when (and only when) the different data input streams are to some degree correlated. If not, DF does not make any sense.
Sorry - I cannot give you academic materials because I have none on this topic. The above is from experience and some own work in this area.
  • asked a question related to Data Fusion
Question
2 answers
I need one or more multivariate datasets (to make a comparison study) where samples are characterised by different chemical analytical sources. It would be better if samples are associated to a qualitative response (class) too.
Relevant answer
Answer
Thanks Rasmus! Yes, I forgot to mention http://models.life.ku.dk/datasets, which is actually really useful if someone is looking for data (for multiblock analysis, multiway analysis or simply 2D)!
Another one for data fusion (QSAR) can be downloaded here: http://archive.ics.uci.edu/ml/datasets/QSAR+biodegradation
However, if someone knows other available sources, please share!
  • asked a question related to Data Fusion
Question
3 answers
Can we use the slope parameter to extract the DTM from LiDAR data in the FUSION software?
Relevant answer
Answer
What do you mean by DTM? Digital terrain map? If so, a good way to generate these maps is to use either parametric representations or occupancy grids, which can be 2.5D or even 3D, purely geometric and/or probabilistic. Interpolation methods to generate occupancy grids exist; some of them take lidar data uncertainties into account. Early in the 90's, Takeo Kanade, In So Kweon, Martial Hebert, then Raja Chatila, Fawzi Nashashibi, Michel Devy and Simon Lacroix, among others, published articles on that. Today's work in autonomous driving and autonomous mapping (e.g. at CMU) is interesting. By the way, what type of lidar are you using? Single-layer? 4-layer? Velodyne?
  • asked a question related to Data Fusion
Question
13 answers
I'm working with sensor data fusion, specifically using the Kalman filter algorithm to fuse data from two sensors, and I want to give more weight to one sensor than to the other. The reason is a transition from a phase where the algorithm uses data from just one sensor to a phase where both sensors are used: at this transition I need a smooth handover, giving more reliability to the first sensor for a short time and then normalizing to a 50%/50% reliability factor for each sensor.
Relevant answer
Answer
It is normal in a Kalman filter to define and provide the error characteristics of each sensor. If you have reason to believe that the sensor errors vary with time, or with some other, perhaps geometrical, property of the scenario, you should model that sensor behaviour. This is a perfectly standard thing to do. If your model of the sensor behaviour is accurate, the standard KF will provide optimal results.
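One way to realise that transition, sketched with a static two-sensor fusion and an inflated, ramped-down measurement variance for the second sensor (the window bounds and variance values are illustrative assumptions):

```python
# Two sensors measure the same quantity; in a KF, a sensor's weight is
# inversely proportional to its measurement variance R. Ramping R2 down
# over a short window moves sensor 2's weight smoothly from ~0 to 50 %.
R1 = 1.0                                   # trusted sensor's variance

def R2(k, k_start=10, k_end=20, R_inflated=1e3):
    if k < k_start:
        return R_inflated                  # effectively ignore sensor 2
    if k >= k_end:
        return R1                          # equal trust: 50 % / 50 %
    frac = (k - k_start) / (k_end - k_start)
    return R_inflated * (1 - frac) + R1 * frac   # linear ramp

def fused_weight_sensor1(r1, r2):
    # static fusion of two independent measurements: weight proportional to 1/R
    return (1 / r1) / (1 / r1 + 1 / r2)

print(round(fused_weight_sensor1(R1, R2(0)), 3))   # → 0.999
print(round(fused_weight_sensor1(R1, R2(25)), 3))  # → 0.5
```

Inflating R rather than scaling the gain directly keeps the filter consistent: the covariance P still reflects how much each sensor was actually trusted at each step.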
  • asked a question related to Data Fusion
Question
10 answers
I have used SVM and object-based image analysis for this task; the study area is dense urban with various landcover features, but the results are not up to the mark. I want to extract and reconstruct (in 3D) the urban landcover features (buildings, roads, bridges, cars, trees, etc.) with textures. I am waiting for expert comments.
Relevant answer
Answer
We used eCognition too. It is also possible to implement your own calculations besides GLCM.
  • asked a question related to Data Fusion
Question
9 answers
As far as I know, a filter algorithm can be separated into two parts: prediction and update. Optimally estimating the state relies on balancing the weight of prediction and update; in other words, do we trust the predicted state or the observations more? This matters most for dynamic systems with unknown model or measurement errors. Could anybody give some advice on what kinds of filters (adaptive or robust) are more practical for systems with unknown model or measurement errors? Is the H2/H-infinity filter the only choice? Thank you very much!!
Relevant answer
Answer
Hello. I think it depends on what you mean by unknown. Even the simplest estimators/filters make implicit assumptions about the errors in the system and measurement models, with most techniques requiring independent (white) noise. If you are after robust estimators, then the H-infinity class may be a good choice. I understand that this filter may not always be realisable depending on its tuning. Another class of estimator that does not make many assumptions about the data is set theoretic. In this case, you only need to bound the noise.
  • asked a question related to Data Fusion
Question
5 answers
I have some trouble with SAR and optical data registration and with finding proper ground truth data.
Relevant answer
Answer
It doesn't work either!
I read their paper and sent them emails, and the result is the same!
  • asked a question related to Data Fusion
Question
3 answers
ACCs can work correctly and precisely by measuring and processing data. What kind of data does an ACC use, and how does it get it (with what kind of sensors)? Does it use data fusion methods? Please link me to some sources to continue my research.
Thank You
*Moodi*
Relevant answer
Answer
Thank you.
How do these sensors work?
What are their efficacy and limits?
What about sensor fusion?
  • asked a question related to Data Fusion
Question
9 answers
I am investigating orbit determination for planetary exploration. I would like to try different kinds of estimation methods, such as batch-based estimation and the EKF. Up till now, I have found that the accuracy of the batch-based method is higher than that of the sequential filter. I would like to know why this happens, because the principle of the sequential filter also indicates that all previous measurements contribute to the current estimate.
Since no additional information is added, why do the accuracies of the two methods differ? Could anyone explain?
Relevant answer
Answer
@Laya Das Thank you so much for your explanation. I am also studying this issue and trying to find the reason why MHE is better. I'll update this question if I have something valuable.
  • asked a question related to Data Fusion
Question
14 answers
Instead of doing a simulation, I need a dataset with hundreds or thousands of records distributed over multiple data sources. I need it for data integration purposes where the same entity may have multiple records in different data sources.
Relevant answer
Answer
Hi Cliff, a problem with making such a simulated dataset is that not much is known about how to introduce "noise" in a realistic fashion. Most researchers agree that the size of a cluster of duplicates is Zipf-distributed, and some have proposed models for introducing typographical errors, but apart from that, it would be guessing what a realistic error model is: for example, abbreviations, multi-valued attributes, subjectiveness (e.g. the musical genre of a CD)... And of course, if there is no realistic error model, simulating a dataset tends to be biased to work well with the algorithm you want to test.
  • asked a question related to Data Fusion
Question
2 answers
When we have features of two modalities extracted with the same feature-extraction method, do we need to normalize the score?
Relevant answer
Answer
If the range of the feature differs between modalities and one is going to use some sort of comparative analysis, it is always safe to normalise the features.
Moreover, the feature set often makes more sense and is more easily interpretable in normalized form.
  • asked a question related to Data Fusion
Question
1 answer
I guess time-series analysis can also be studied under the scope of statistical signal processing. Is this correct? Maybe someone could give me a hand in selecting introductory, intermediate and advanced textbooks for multivariate time series analysis. Thanks!!
Relevant answer
Answer
Dear Marcelo,
please check the Wikipedia entry:
and some of the references at the end. Furthermore, I suggest to look at amazon.com for books on time-series analysis.
Good luck,
Reiner
  • asked a question related to Data Fusion
Question
1 answer
There are several approaches to data fusion: Kalman filter, Bayesian, behaviour-based. For a small robot, which is best?
Relevant answer
Answer
Hello, I think it is really difficult to answer this question, because navigation performance depends not only on the data fusion approach but also on the measurements you use. For example, I did research on autonomous navigation for spacecraft: the outputs from an IMU, a star sensor and navigation cameras were integrated by an EKF to determine the r and v of the spacecraft, and the results were very good. However, I was also involved in a demonstration of planetary landing navigation based on an IMU, radiometric measurements, Doppler radar and a navigation camera. In this case, the EKF did not work well; I am still trying to solve this problem. But I believe that the EKF is suitable for most applications.
Good luck! Thank you!
  • asked a question related to Data Fusion
Question
6 answers
The topic is data fusion and routing in wsn
Relevant answer
Answer
ns2 is the better option because it is open source and a wide range of tutorials is available. If you are a student, then ns2 is best for you.
  • asked a question related to Data Fusion
Question
4 answers
An approach to multi sensor data fusion.
Relevant answer
Answer
I would strongly recommend a look at the work on decentralized (sensor) data fusion by Simon Julier and Jeffrey K. Uhlmann; for starters, http://dsp-book.narod.ru/HMDF/2379ch12.pdf, or even better, the Handbook of Multisensor Data Fusion.