Question
Asked 17th Nov, 2017

How do we determine the position of an object with accelerometer and gyroscope sensor data?

The double integration of acceleration gives the position of an object. However, how should the position of an object fitted with an accelerometer and a gyroscope be determined using the data from both sensors?

Most recent answer

24th Nov, 2017
Tourangbam Harishore Singh
National Chiao Tung University
Thank you, Prof. Kartik, for the help.

Popular Answers (1)

21st Nov, 2017
Pablo Bernal-Polo
University of Murcia
Hello,
the problem you are referring to is usually called "inertial navigation" or "dead reckoning". It is something I am currently working on, and it is a problem with no straightforward solution.
The method I am using is:
1. Obtain a good "orientation estimation" or "attitude estimation". That is, find the best guess of the rotation transformation relating two reference frames: the one attached to your sensor, and a second one usually taken to be an inertial (or approximately inertial) reference frame, such as one whose z-axis is orthogonal to the Earth's surface. The accelerometer senses the acceleration due to gravity, so it tells you "which way is down"; the gyroscope helps predict the orientation. A common approach is to use a Kalman filter:
although others use gradient descent techniques and also obtain good results:
2. Transform the acceleration measurements (taken in a non-inertial reference frame: the one attached to your sensor) into the inertial reference frame using the rotation obtained from the orientation estimation.
3. Subtract the acceleration due to gravity.
4. Integrate the resulting acceleration twice to obtain the position.
This is what you usually obtain when you perform this process:
If you have seen those videos, you will have begun to think that it is not possible. However, in certain scenarios you can use pseudo-measurements to improve your estimate:
I would conclude that, depending on your application, you should either use pseudo-measurements or include more sensors to estimate the position. Otherwise, as far as I know, it is not possible to obtain a good position estimate.
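Steps 2-4 above can be sketched in a few lines. This is a minimal Python/NumPy illustration, not the answerer's own implementation: it assumes the body-to-inertial rotation matrices from step 1 are already available, uses simple Euler integration, and all names are illustrative.

```python
import numpy as np

G = np.array([0.0, 0.0, 9.81])  # gravity in the inertial frame (m/s^2), z-axis up

def dead_reckon(accels_body, rotations, dt):
    """Steps 2-4: rotate each body-frame specific-force reading into the
    inertial frame, subtract gravity, then integrate twice (Euler).

    accels_body: (N, 3) accelerometer readings in the sensor frame
    rotations:   (N, 3, 3) body-to-inertial rotation matrices from step 1
    dt:          sample period in seconds
    """
    v = np.zeros(3)  # velocity estimate
    p = np.zeros(3)  # position estimate
    positions = []
    for a_b, R in zip(accels_body, rotations):
        a_i = R @ a_b - G   # step 2: rotate to inertial frame; step 3: remove gravity
        v = v + a_i * dt    # step 4: first integration -> velocity
        p = p + v * dt      #         second integration -> position
        positions.append(p.copy())
    return np.array(positions)

# Stationary, level sensor: it measures the reaction to gravity (+9.81 on z),
# so after gravity subtraction the estimated position stays at the origin.
N = 100
acc = np.tile([0.0, 0.0, 9.81], (N, 1))
Rs = np.tile(np.eye(3), (N, 1, 1))
print(dead_reckon(acc, Rs, 0.01)[-1])  # [0. 0. 0.]
```

In practice any error in the step-1 orientation leaks a fraction of gravity into `a_i`, and the double integration turns it into a position error that grows quadratically with time, which is exactly why the pseudo-measurements mentioned above are needed.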
6 Recommendations

All Answers (10)

17th Nov, 2017
Tourangbam Harishore Singh
National Chiao Tung University
Thank you very much, Eyamba, for the answer.
17th Nov, 2017
Azzeddine Bakdi
University of Oslo
Hi,
Sensor data fusion combines data collected by different sensors. You can obtain a better position estimate, with lower uncertainty, than from any individual measurement alone.
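As a minimal illustration of why fusion lowers uncertainty, here is a sketch (names illustrative, not from the thread) of the inverse-variance combination of two independent measurements of the same quantity, the building block behind Kalman-style fusion:

```python
def fuse(z1, var1, z2, var2):
    """Optimally combine two independent measurements of the same quantity.

    The fused variance var1*var2/(var1+var2) is smaller than either input
    variance, which is the point of sensor fusion."""
    w1 = var2 / (var1 + var2)   # weight on measurement 1 (trust the less noisy one more)
    w2 = var1 / (var1 + var2)   # weight on measurement 2
    z = w1 * z1 + w2 * z2
    var = (var1 * var2) / (var1 + var2)
    return z, var

# Two noisy observations of a true position of 5.0 m
z, var = fuse(5.2, 0.4, 4.9, 0.2)
print(z, var)  # fused estimate 5.0; variance ~0.133, below both inputs
```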
1 Recommendation
19th Nov, 2017
Tourangbam Harishore Singh
National Chiao Tung University
Thank you for your suggestion.
19th Nov, 2017
Neil Petroff
Tarleton State University
Hi. Referring to the comments above: do you have a 3-axis accelerometer? If you truly only want position, then you don't need a gyroscope. However, if you want position and orientation, you can determine the orientation from the accelerometer data (if you have all three axes) and compare or update it with the gyroscope to compensate for drift.
1 Recommendation
20th Nov, 2017
Tourangbam Harishore Singh
National Chiao Tung University
Thank you for the help. Yes, I am using a 3-axis accelerometer, and I want to get the orientation from the data received.
22nd Nov, 2017
Tourangbam Harishore Singh
National Chiao Tung University
Thank you very much, Pablo, for clearing up some of my doubts and for the clear concepts and useful information.
24th Nov, 2017
Kartik B Ariyur
SAMMS LLC
There are many books on this subject of inertial navigation. The classic treatment is in Britting: https://www.amazon.com/Inertial-Navigation-Analysis-Technology-Applications/dp/1608070786
For a quick understanding of dead reckoning (inertial integration without any corrective input such as GPS), see the work of Demoz Gebre-Egziabher: https://www.researchgate.net/scientific-contributions/7217363_Demoz_Gebre-Egziabher
1 Recommendation
