Questions related to Motion Capture
Hi, I am looking to perform a regression analysis with the CATREG function in SPSS. Some of the guides I have found online use terminology I am unfamiliar with, so I was wondering if anyone had advice on some of the practical steps involved. All help is hugely appreciated!
Task description: Expert and Amateur drummers performed a series of drumming tasks at five different speeds (80, 160, 240, 320, 400 hits per minute). Motion Capture was used to record arm/hand/stick movements, and forearm muscular activity recorded with EMG. A questionnaire was used to collect information about participants’ practice history.
Aim: The aim is to investigate if performance characteristics (e.g., variability in timing) can be predicted by physiological variables (EMG, movement patterns) and practice history variables.
Analysis: I have 8 predictor variables, and 1 outcome. To my knowledge, CATREG has less emphasis on assumptions (normality, etc.) and is a supported alternative to dummy-coding categorical variables. I believe that makes it a viable choice for our aim.
However, I am still uncertain about some points. Would anyone be able to advise on the following?
1. Is CATREG only appropriate if your outcome/dependent variable is categorical? In my case the DV is continuous.
2. Regarding scaling/discretization, am I correct that numerical predictors do not require specific scaling, only nominal/ordinal ones? If so, are the following SPSS scaling options for each variable OK?
- For the variable 'Expertise' (2 levels): Scaling is Ordinal, Discretization is Grouping Normal 2?
- For the variable 'Tempo' (5 levels): Scaling is Nominal, Discretization is Grouping Uniform 5?
I am looking for a Markerless Motion Capture System that allows me to objectively measure the following parameters during the execution of a functional test battery for the lower extremities:
- Knee valgus/varus degrees
- Pelvic tilt/alignment degrees
- Trunk frontal/sagittal displacement degrees
- Ankle Eversion/Inversion Degrees or Internal/External tibia rotation degrees
- Shoulder Tilt/Alignment degrees
I searched online and came up with the following:
- Microsoft Kinect v2 is the only one validated in the literature, but it is difficult to use because of the complicated software.
- Kinetisense is unfortunately not yet validated in the literature, but it is the best choice in terms of available tests (everything you need to perform is possible).
- Human Trak by Vald Performance is not yet validated in the literature either, and in addition it offers only a restricted range of tests.
Do you have any recommendations?
I am looking forward to your expertise and answers!
Thanks in advance
1. We want to buy a new system. I am familiar with Vicon and Qualisys mocap systems. Codamotion and Motion Analysis systems are also under consideration. Does anyone have experience with Motion Analysis Inc. and Codamotion mocap systems? Which one is better for a general-purpose life-science motion analysis lab?
2. In Vicon systems we are mostly limited to the Vicon Plug-in Gait model. As far as I know, BodyBuilder is not an easy-to-use piece of software, but I do not know about the new software developed by Vicon, ProCalc. Has anyone worked with ProCalc? What about the skeleton builder of Motion Analysis?
3. We have limited space (a 9 by 6 metre room), and I also want to know whether 3 Coda trackers could do the job of 8 cameras from the other systems (because of budget limitations we can buy at most 8 cameras for passive systems, or 3 active trackers, each with 3 embedded cameras). We cannot place the trackers at 120-degree intervals around the centre of the capture volume, and I am not sure whether 3 trackers would be enough.
4. I also want to know which system is best for real-time data streaming to Matlab or LabVIEW. We want to use the system to give real-time kinematic feedback to the subject, for biofeedback testing and training, and for augmented- and virtual-reality-based rehabilitation purposes.
Sorry for asking 4 questions in one comment. And thank you for sharing your ideas
I am gap-filling motion capture trials using Vicon Nexus. Typically, this is a very straightforward process. However, one of our test subjects is very difficult to gap fill because the marker label quality seems poor. For example, the markers more frequently become unlabeled or swapped throughout the trial. I noticed that the number of trajectories for this participant is extremely high (~2,000). Does this mean there is reflective material being picked up and causing a 'labeling' confusion? Are there ways to reduce the number of trajectories to a much smaller number so that the number of trajectories is closer to the number of desired testing markers in the capture volume? I appropriately masked the cameras and calibrated them before collecting data.
I have an idea for testing MoCap across mocap labs. I am therefore looking for normal human range-of-motion sequences from different MoCap studios, ideally with the Vicon Blade 53-marker setup or the Nexus 39-marker setup.
I have direct access to a well-equipped lab in Poland. I also know that CMU has some RoM recordings, but most of the datasets contain just actions.
I am writing up a project in which I used motion capture as well as heart rate monitoring and questionnaires, but I will be analyzing the motion capture data separately to this project write up - should I still include it within my methodology even though it will not be relevant to the paper?
I conducted a big research project looking into a few different aspects of a virtual reality programme. I used 3D motion capture, heart rate monitoring and questionnaires to investigate, mental health benefits, physical health benefits and adherence. This project effectively looked into 2/3 different research questions at once. Would it be right to split these up into different papers under different titles or should I try to come up with a title that includes all aspects of this study and write it up in one paper?
I wanted to know whether the Vicon system, as a motion capture device, can be used for real-time data processing. Can we collect data online and then use it as input to activate other devices during an experiment?
Hello, I would like to measure postural control/weight shift during step initiation, especially by means of the CoP. For equipment, I am considering using a 3D motion capture system rather than a force plate.
As far as I have researched, many studies have analysed CoP using force plates and 3D motion capture, and there is one validation study of the vertical ground reaction forces obtained from a force plate versus a motion capture system. However, the motion performed in that study was squatting, which is not completely suitable for step initiation, since step initiation is more dynamic than squatting.
Could you please share your experiences of using a 3D motion capture system for assessing CoP? I would also be thankful for recommendations of studies! :)
I'm trying to synchronize an IMU sensor with a motion capture system. Is this possible? If so, how?
Thanks for answering.
Polhemus Liberty has great specs / spatio-temporal accuracy and no sensor occlusion, but compensates for static field distortions only, which excludes the concurrent use of magnetic or alternating current stimulation, or even EEG (cable movement).
Polhemus Liberty specs:
-Capture frequency: 240Hz
-Spatial resolution: 0.005mm at a distance of 60cm
Are there any motion capture alternatives with specs similar to the Polhemus that are compatible with concurrent TMS/tDCS/EEG?
Hi, I am looking for an inertial motion capture system to use in full-body ergonomic research. I have previously worked with Xsens from the Netherlands, but it is very costly, so I am looking for alternatives. I therefore hope you will contribute your knowledge and experience with other systems, both advantages and disadvantages. Best regards, Mathias Hedegaard
Please share your application experience, even if your use of MEMS was not included in an article, etc.
Is it mandatory to use wireless hardware for motion capture, or is it acceptable to use wired sensors?
Greetings to everybody.
It would be great if you gave an argument for your choice.
I'm using Blender 3D because it supports Python as its main language. Some of my students chose Unity 3D because it makes VR applications easy to develop.
Just a small poll.
I am using an OptiTrack motion capture system to calculate the maximum Lyapunov exponent from kinematic gait data. Several retro-reflective markers are placed on the trunk and feet of the subjects.
Hello, I'm new to the motion capture field. Currently I have some dance motion data, and I am having trouble deciding which method is best for retargeting it. Since I am not very familiar with deep learning methods, I want to ask the researchers here: is deep learning the best method to implement, or is there another way? Thank you.
I would like to know how researchers and clinicians calculate speed and stride length during running on a treadmill using an optical motion capture system. My main doubt is how to calculate this since the movement is cyclical.
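On a treadmill the body stays roughly in place, so a common convention is to take stride time as the interval between successive ipsilateral heel strikes (detected from a foot marker trajectory or force data) and to estimate stride length as belt speed multiplied by stride time. A minimal sketch of that arithmetic (the heel-strike times and belt speed are assumed inputs, not something any particular mocap system outputs directly):

```python
import numpy as np

def treadmill_stride_params(heel_strike_times, belt_speed):
    """Stride times and stride lengths from ipsilateral heel-strike events.

    heel_strike_times : sequence of event times in seconds
    belt_speed        : treadmill belt speed in m/s
    """
    t = np.asarray(heel_strike_times, dtype=float)
    stride_times = np.diff(t)                    # s, one value per stride
    stride_lengths = belt_speed * stride_times   # m, length = speed * time
    return stride_times, stride_lengths
```

Running speed is then simply the belt speed (plus any net drift of a pelvis marker, if the subject moves fore-aft on the belt).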
I am looking to sync data across an ultrasound machine (Terason USmart 3300), Vicon motion capture, force plate, and Delsys EMG data. I have gotten the Vicon, force plate, EMG to sync within a Vicon program, but am looking to add ultrasound to it. Terason has told me it is not possible unless I alter the unit (which they will do and charge me for). I am hoping that it is possible to do it without that expense (since it is not just my lab's unit).
I am looking to see if anyone has done this. I was thinking I could maybe get a digital-to-analog converter that would allow me to connect it to the trigger that already links the other devices (Vicon only allows me to add a "generic analog device" in the program). This Terason unit is extremely clinical and does not have any wires that would allow me to hook it up. I have seen some "Trigno wire" devices that seem to fix that problem, but I am not sure whether that would be enough of a solution.
I'm working with infants and want to track their lip movements. My concern is that they won't accept any marker on their lips. I've never used any motion capture system, but I plan to purchase one for my research. Any suggestion is very welcome.
Thanks for your assistance.
I am working on motion capture data analysis of human walking movement. My goal is to find the variation of the markers on different body parts relative to the movement of the main body.
For that I am considering the upper trunk and lower trunk. The upper trunk includes the shoulders, chest and upper abdomen. The lower trunk includes the waist, hips, lower back and lower abdomen.
I have markers placed at each body location. I want to create a marker that represents just the body movement, and not the surface variations and joint variations, so that it can be used as a reference surface for variation. For that purpose, I am trying to create a kinematic model.
How do I create my virtual point with respect to let's say 3 markers on the upper body? Which motion analysis software can give me this functionality to create some sort of kinematic model.
- let's say create a vector with respect to a plane made by 3 markers and then create a point from the vector with respect to the plane created by the markers.
The attached picture represents my problem for some rough visualization. Here 2 upper green markers are used to create a vector in red which is used to create a green virtual marker.
Instead of writing the code in Matlab for this, is there any open-source software I can use for this purpose (apart from OpenCV)? I found a program for this purpose: http://www.opensourcephysics.org/items/detail.cfm?ID=7365. But it does not keep track of the shape. Can anyone recommend any software for this?
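The virtual-point construction described above can also be done directly with a few lines of linear algebra, independent of any particular package: build an orthonormal coordinate system from the three markers, express the virtual point once in that local frame, and reconstruct it in every frame of the trial. A sketch in Python/NumPy (the frame convention here is my own choice, not from any specific software):

```python
import numpy as np

def local_frame(p1, p2, p3):
    """Orthonormal frame (origin, rotation matrix) from 3 non-collinear markers."""
    x = (p2 - p1) / np.linalg.norm(p2 - p1)        # x-axis along marker pair
    z = np.cross(x, p3 - p1)                       # z-axis normal to marker plane
    z /= np.linalg.norm(z)
    y = np.cross(z, x)                             # y-axis completes the frame
    return p1, np.column_stack([x, y, z])

def to_local(point, p1, p2, p3):
    """Express a global point in the marker-based local frame."""
    origin, R = local_frame(p1, p2, p3)
    return R.T @ (point - origin)

def to_global(local_point, p1, p2, p3):
    """Reconstruct a virtual marker from its fixed local coordinates."""
    origin, R = local_frame(p1, p2, p3)
    return origin + R @ local_point
```

Most mocap packages (e.g. Visual3D, Vicon BodyBuilder) offer equivalent "virtual marker" or "landmark" tools built on the same idea, so the choice of software mostly comes down to workflow preference.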
I have a series of ASCII files that were originally generated by Vicon software (motion capture software), and I was wondering if there is a way to convert them into C3D format?
What is the best wall color for a lab using an IR-camera MoCap system (Qualisys, Vicon, OptiTrack...)? We have always used white walls without any problems. I read that blue is the best color to use with IR systems/spectra. What is your opinion? Any color in particular? (Please add the Pantone color number.)
Jose Heredia-Jimenez Human Behavior & Motion Analysis Lab. HubemaLab. University of Granada in Ceuta.
Are there any papers showing that gait parameters (e.g. step time, step variability, step length) detected using accelerometers are comparable to the same gait parameters detected using more traditional methods such as motion capture, OptoGait, force plates, etc.?
I am to conduct an experiment in which I want to pass a gas through a column of pills, and detect when the pills start moving.
I want to detect the motion as soon as possible, and I believe I need 2 things:
1. A camera that is of reasonable image quality
2. Some software etc. that can detect tiny movements within the particles.
I have thought about using a normal Canon DSLR to record videos, or perhaps renting a high-quality camera, and then processing the videos using software.
The detection does not have to be real-time.
The pills are rather big, around the size of a decent vitamin pill or bigger.
Any suggestions of how to approach this would be very appreciated.
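Since the detection does not have to be real-time, plain frame differencing is often enough as a first pass: convert each frame to grayscale, subtract consecutive frames, and flag the first frame where the per-pixel change exceeds a threshold over enough pixels. A minimal NumPy sketch (the threshold values are placeholders to tune for your camera and lighting; OpenCV's `cv2.absdiff` and its background-subtraction classes do the same job on real video):

```python
import numpy as np

def first_motion_frame(frames, intensity_thresh=10, min_pixels=5):
    """Index of the first frame showing motion, or None.

    frames : array of shape (T, H, W), grayscale pixel intensities
    """
    for i in range(1, len(frames)):
        # Absolute per-pixel change from the previous frame
        diff = np.abs(frames[i].astype(int) - frames[i - 1].astype(int))
        if int((diff > intensity_thresh).sum()) >= min_pixels:
            return i
    return None
```

For pill-sized objects a consumer DSLR should resolve the motion easily; frame rate, not resolution, is more likely to limit how early the onset is caught.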
I want to use passive markers, and I find there are different types of passive markers available, such as spherical, circular, tapes, etc. Which one should I use?
Hello, I want to make a video database of normal and abnormal gait. I want to capture a person's gait from two sides simultaneously, i.e. front and side, so I need two cameras. Are there any motion capture cameras available that work in synchronization with each other? Lastly, what specifications should I set for the cameras to make accurate video, e.g. distance from the object, frame rate, etc.?
Hoping for a helpful response, thanks.
Simulation in the training of medical and paramedical personnel, and in general of all those who work in contact with patients, is acquiring an ever greater role in medical education, driven by the growing availability of increasingly realistic tools and environments supported by modern technology, and by the identification of effective methods for their use. There is no discipline or level of undergraduate or postgraduate medical education that cannot benefit from the most modern simulation tools, provided they are chosen on the basis of specific needs and not for the sake of the technology. Unfortunately, our country lacks a real culture of using such tools, so the introduction of these activities should be promoted from the earliest stages of medical training. The complexity of surgical practice, for example, can pose a hard challenge for medical-scientific education and training projects; interactive and immersive simulations in fact accelerate the learning process, innovating the training models hitherto regarded as traditional. Today more than ever we hear about virtual reality, which, applied to medicine, can enable the transfer of know-how between a physician and their trainee, offering a high and more immediate degree of interaction. Virtual reality represents the latest frontier of new media: it raises communication to an experiential level, turning the observer into the protagonist of the scene. This technology is making its way as a practical application and methodology in all those contexts where identification, realism and immersion of the user in the virtual environment add value, because it can provide stimuli very similar to the natural ones defined by reality.
Stimuli to which the trainee physician will tend to respond in an equally "natural" manner. All this makes virtual reality the most powerful tool available for everything concerning the simulation of reality and specialist experiential training, in which experience and practice must be integrated with theory. Modern wearable headsets now make it possible to create audiovisual products with an unprecedented level of realism, capable of immersing us to the point of deceiving our very senses. Integration with motion capture systems amplifies the sense of presence during learning, allowing natural and intuitive interaction with the virtual environment. It is precisely for these reasons that virtual reality counts medical training and simulation among its fields of application. It also represents a tool with enormous potential for everything concerning the evaluation of prosthesis prototypes and their simulation in artificial environments and scenarios. By transposing models and designs into virtual reality, not only the physician but the patient as well can gain a clear perception of their shape, their functioning and their contextualisation in everyday scenarios, as if they had actually been physically built. Worldwide, virtual reality in medicine is still largely experimental, essentially in on-the-job training applications and in the treatment of anxiety disorders and phobias. In the first case, a 3D camera is typically installed in the operating theatre and/or on the operating surgeon, allowing others to follow the operation immersively.
In the second, the realism offered by virtual reality is used to recreate, in the presence of a physician, situations and scenarios capable of triggering phobias, anxiety attacks, or more serious conditions such as autism, in order to better interpret the actions to be taken. Virtual reality will allow every student to venture into the activity to be learned, not with detached participation but fully immersed in the role they will have to fill. It is not only about headsets: there are further devices, such as special gloves, that provide tactile stimuli, allowing the learner to appreciate the concreteness of the intervention, the contact with a body and all the effort required. In the near future, the affordability of the system will also make it possible to stream these experiences to thousands of users anywhere in the world: an opportunity for equity and uniformity in the training of healthcare professionals worldwide. The goal for the near future will be to develop projects supporting the acquisition and transfer of knowledge, capable of creating out-of-the-ordinary experiences with a pragmatic and effective approach. Intellisystem Technologies intends to participate actively in defining new methods and tools to make the most of these technologies, drawing on the valuable support and collaboration of Italian and foreign universities.
How can we use motion capture systems to improve anything? What are some problems that can be solved with the use of motion analysis? I'm looking for topics, subjects and ideas. It has been used to help the autistic kids and to help injured athletes. If anyone has any ideas I would appreciate it if you share.
My name is Wolfgang, and I am currently working on the evaluation of an inertial sensor system. We have validated it against an optical motion capture system. I have calculated limits of agreement, and now I am looking for references that state acceptable bounds for agreement between two motion capture systems in the field of clinical gait analysis. Right now I can only say that my system agrees with another system, but is the error resolution good enough for clinical applications? Meaning, for example: in clinical gait analysis, limits of agreement for sagittal joint angles should always be inside ± X°...
Do you know any references on this issue, or have any other ideas?
I am currently part of a startup competition, and we are trying to develop wireless IMUs to capture motion of the human body during activities such as exercise and posture.
I am an undergraduate student and this is my first endeavor into this field so I am looking for as much advice as I can receive.
I will start with what we know.
We were inspired by the video linked.
We can use a sensor like the GY-80 10DOF IMU to find the orientation, and a microprocessor like the ESP8266, which is also a WiFi module, to create a node.
What we do not know is:
How do we connect the nodes to a software application/mobile device?
What software would allow us to do three-dimensional motion analysis?
Are there any free or low-cost software applications we can use? Our budget is limited to $1,000.
Thanks in advance.
I'm wondering what would be more suitable software for motion analysis
in terms of animal monitoring, camels to be exact.
How can one get the speed, distance and acceleration, and maybe combine those with vital-sign indicators like heart rate, body temperature, etc.?
I am trying to build a database for camels linking the above information and looking for patterns.
I tried Kinovea, but it's not quite accurate, in my opinion, for a beginner level.
I am looking at using accelerometer(s) as wearable sensors to track the acceleration of someone's leg while they perform various motions. I would like to video/photograph the subject while the accelerometer(s) are collecting data. Is there some way to sync the camera with the data from the accelerometer, in order to draw the acceleration vectors on frames/images from the camera? The camera and accelerometer would therefore have to be synchronised in real time.
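One common low-tech approach, if no hardware trigger is available, is to create a sharp event visible to both systems (e.g. a firm tap on the sensor in view of the camera) and then align the two recordings offline by cross-correlating the accelerometer magnitude with a signal derived from the video, such as frame-difference energy. A sketch of the lag-estimation step, assuming both signals have already been resampled to a common rate:

```python
import numpy as np

def estimate_lag(a, b):
    """Sample lag of signal a relative to b via cross-correlation.

    A positive result means a is delayed relative to b; shift a back
    by that many samples (or b forward) to align the two recordings.
    """
    c = np.correlate(np.asarray(a, float), np.asarray(b, float), mode="full")
    return int(np.argmax(c)) - (len(b) - 1)
```

Once the lag at the sync event is known, every accelerometer sample can be mapped to a video frame index for overlaying the acceleration vectors.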
I would like to conduct a motion capture experiment outdoors. I will be collaborating with a laboratory that has recently bought a VICON system equipped with MXT10S cameras. According to VICON documentation, the T10s system can function even outdoors.
Can anyone please confirm that the system is really functional outdoors ?
Will I be dealing with a lot of "phantom markers"?
Is the accuracy and precision of the marker position measurement affected?
Thank you in advance.
I am looking for a patellar reflex movement dataset with displacement data of the leg and foot, recorded by motion capture or collected in other ways.
In "Real-time human pose recognition in parts from single depth images" by Shotton et al., their method generates "confidence-scored 3D proposals of several body joints" for a single depth image.
I am looking for papers that deal with selecting the best body joints out of these 3D proposals (for example, without using the confidence scores, by searching for body joints that match a pre-defined skeleton).
Can anyone recommend papers on this topic?
Perhaps using Matlab and some toolkit?
I have found two, one from Hendrick Lab, and Mokka (please see attached links). Can someone let me know if it is possible to calibrate videos, and get 3D kinematics through these tools?
If not, and I should look into purchasing software, can you please suggest some?
Beyond Matlab, does anyone have any suggestions on how to track the ball in football? I have video of the matches.
I'd like to start/stop Vicon Nexus recording using Matlab on the host machine. I see there is a network option for remote start/stop, but I don't know how to output the network signals from Matlab, or indeed what they should be.
Any help is much appreciated.
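For what it's worth, my understanding is that Nexus's remote start/stop listens for small XML messages sent as UDP packets when remote triggering is enabled in its options; the port number and the exact XML schema are given in the Vicon Nexus documentation, so treat the port and message below as placeholders to verify there. From Matlab the same packet can be sent with `udpport`; a Python sketch of the idea:

```python
import socket

# Placeholder message: check the Nexus remote-trigger documentation for
# the exact XML schema and the UDP port Nexus is configured to listen on.
START_MSG = '<CaptureStart><Name VALUE="Trial01"/><PacketID VALUE="0"/></CaptureStart>'

def send_trigger(message, host="255.255.255.255", port=30):
    """Broadcast a UDP trigger packet on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(message.encode("ascii"), (host, port))
```

A corresponding CaptureStop message ends the recording; capturing the packets Nexus's own remote trigger sends (e.g. with Wireshark) is a reliable way to confirm the schema.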
I want to compare the performance of my athletes (14-19 years old) to normative data, to know whether their 30 m flying sprint times are above average or not. Thanks in advance!
Does anyone have experience from using motion capture recording and analysis in a research project to measure communicative movements? Anything published?
In rehabilitation, the ability to capture finger motion is a challenging issue.
What are the best choices for ROBUST and ACCURATE 3D measurement of fingers in a less expensive manner?
I am using the MSP6050 accelerometer-gyroscope sensor to detect various hand strokes. For the implementation of one of the algorithms, I need to find the acceleration along the axes, so I have taken the acceleration data and plotted the waveforms. I obtain higher fluctuations on the axis along which I am producing the motion (which is what is expected), but I am also getting motion on the other axis. For example, in Fig. 1, I am drawing a line along the Y-axis and getting fluctuations (around 9K), which are higher than those along the X-axis (3K). Can anyone tell me the probable reason for this? And how can such fluctuations across the other axes be removed?
Also, is decreasing the sensitivity of the accelerometer helpful in such cases?
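Some of that cross-axis signal may be real rather than an artifact: hand tremor, slight misalignment between the drawing direction and the sensor axes, and changing gravity components as the hand tilts all show up on the "wrong" axis, on top of the sensor's own cross-axis sensitivity. The high-frequency jitter component, at least, can be suppressed with a simple smoothing filter; a moving-average sketch (the window length is something to tune against your sampling rate):

```python
import numpy as np

def moving_average(x, window=5):
    """Simple low-pass smoothing of a 1-D acceleration trace."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(x, float), kernel, mode="same")
```

Slower, deliberate tilt components survive this filter, so any residual cross-axis trend after smoothing points to orientation effects rather than noise.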
As you may know, IMU (inertial measurement unit) based motion capture (wearable motion capture) is a recently developed approach to tracking human motion, and its applications are evolving.
I am trying to understand the possibilities and limitations of its application in skating and skiing sports.
However, I am still confused about why these motion capture systems need an additional GPS, DGPS or LPS (local positioning sensor) when capturing sliding motions such as skiing, skating and skateboarding.
There are many papers explaining the theory, but I need to understand the fundamental concept as simply as possible.
Please share your knowledge and experience. Any information would be helpful.
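The fundamental reason is that an IMU measures acceleration and angular velocity, so global position can only come from double integration, and any small accelerometer bias or noise is integrated twice, making the position error grow without bound. During walking, the biomechanical model can re-anchor the estimate (e.g. a foot that is flat on the ground has zero velocity), but during a long glide nothing constrains global position, so an absolute reference (GPS/DGPS/LPS) is fused in. A tiny numerical illustration of how fast even a constant bias blows up:

```python
import numpy as np

fs = 100.0                           # sample rate (Hz)
t = np.arange(0.0, 60.0, 1.0 / fs)   # one minute of data
bias = 0.01                          # constant accelerometer bias (m/s^2)

acc = np.full_like(t, bias)
vel = np.cumsum(acc) / fs            # first integration  -> velocity error
pos = np.cumsum(vel) / fs            # second integration -> position error
# After 60 s, a mere 0.01 m/s^2 bias yields ~18 m of position drift
# (0.5 * bias * t^2), which is why sliding sports need an absolute reference.
```

The drift grows quadratically with time, so the longer the uninterrupted slide, the worse the pure-IMU position estimate becomes.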
Dear fellow researcher,
In a project we aim to measure emotions in real time and loop this information back into a gamification module. We plan to integrate mainly body cues and facial expressions, but at least in the lab we also look at GSR, heart rate and brain signals. Can anyone recommend a solution (software library, development kit, etc., not lab software) which integrates such signals and generates emotional states? We are searching for something like the SHORE kit (facial expressions), only with more modalities.
Hi all, I am using motion capture data to analyze the movement of a person performing lifting activities for my exoskeleton project. The motion capture data is a bit noisy, so I decided to filter it. I converted the motion signal into the frequency domain and saw that most of its power is located at low frequencies. I then calculated the frequency below which 99% of the signal power lies and set it as the cut-off frequency. The results are good, but I am not sure whether this is the right way to design the filter. Is there another way to design it?
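The 99%-of-cumulative-power rule is a legitimate, commonly used heuristic for picking a cut-off; in practice it is usually paired with a Butterworth filter applied forward and backward for zero phase lag (e.g. `scipy.signal.butter` with `filtfilt`) rather than hard-zeroing FFT bins, and residual analysis (Winter's method) is the other standard way to choose the cut-off. A self-contained sketch of the 99%-power idea using a brick-wall FFT filter, just to make the procedure concrete:

```python
import numpy as np

def lowpass_99(x, keep_fraction=0.99):
    """Zero all FFT bins above the frequency containing `keep_fraction`
    of the cumulative signal power (a brick-wall low-pass)."""
    X = np.fft.rfft(np.asarray(x, float))
    power = np.abs(X) ** 2
    cum = np.cumsum(power) / np.sum(power)
    k = int(np.searchsorted(cum, keep_fraction))  # first bin reaching 99%
    X[k + 1:] = 0.0
    return np.fft.irfft(X, n=len(x))
```

A brick-wall cut can ring on transient signals, which is the main argument for the gentler Butterworth roll-off in gait and lifting analyses.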
I am looking for a system to monitor pistoning of the residual limb in a prosthetic socket. There is an approach that relies on a motion capture system, but its results are for static monitoring alone. Has anyone tried this approach in walking trials? Or is Sanders's approach the only choice?
Gholizadeh H, Abu Osman NA, Kamyab M, Eshraghi A, Wan Abas WAB, Azam MN. Transtibial prosthetic socket pistoning: static evaluation of Seal-In® X5 and Dermo® Liner using motion analysis system.
Sanders J, Karchin A, Fergason J, Sorenson E. A noncontact sensor for measurement of distal residual-limb position during walking.
Can somebody explain the process of obtaining ground truth values from a motion capture device, for comparison with a proposed system such as the Kinect? In particular, do we have to run the experiments with both the 3D motion capture system (ground truth) and the proposed system (Kinect) recording at the same time?
I am really interested in the Leap Motion technology. However, as a scientist, I am always careful about what I see in advertising videos, and I have doubts about the reliability of that technology. Does anyone know the reliability of the Leap Motion in terms of finger joint angle estimation? Is there any recent publication that compared it to another method of hand grasp measurement (e.g. a data glove)?
When I opened my TDF or C3D files using the Mokka program, I saw yellow dots on the force plates and ground (as in the attached photo). Are there any methods for removing them from the visualization, and why did they appear?
Hello, I'm working on an offline motion capture algorithm based on Kinect depth images. My research will be applied to postural, gait and other movement analyses, and I need it to be as accurate as possible. I found that iPi Soft is a widely used piece of software for processing depth maps in general and can create mocap from them. So I am asking whether anyone knows of an article describing the algorithms in iPi Soft and their accuracy. Greetings.
I am working on a project using PIR motion sensors. I have collected information about motion in a few rooms, and I am planning to use artificial intelligence to predict the path of motion. Has anyone used an algorithm for such prediction?
I'm doing an experiment in which I would like to get information whether the person was about to click the pad or the finger was still. I'd like to know whether the inhibition was successful before any movement or after initial preparation for the movement. I'm going to use EMG recording for that. Can anyone recommend some papers that describe similar procedures or have experience with such setup? I'm especially interested in the right placement of electrodes. Thank you for any help.
I have currently used 3-D motion capture to conduct gait analysis on children with cerebral palsy before and after an intensive physical intervention (pre- and post-testing). I collected 4 trials for pre- and 4 trials for post-testing and only one testing session occurred for each. There are 9 subjects total, but I need to use an n-of-1 design because the group is heterogeneous and there is a large amount of variability between subjects. I am using the Standard Error Measurement (SEM) to then calculate the Minimal Detectable Change for each child. The equation for the SEM requires the Intraclass Correlation Coefficient (ICC), which SPSS outputs as a negative value for 2 of the 7 variables I am investigating. Is a negative ICC acceptable? If not, how can this be corrected?
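A negative ICC usually signals that between-subject variance is smaller than within-subject (trial-to-trial) variance, which can easily happen with n = 9 heterogeneous subjects; it is generally interpreted as zero reliability rather than used as-is. One common, though debated, convention is to truncate negative ICCs to 0 before computing the SEM, which makes the SEM equal the full SD; if you use this, state the choice explicitly in your methods. The computation itself, for reference:

```python
import math

def sem_and_mdc95(sd, icc):
    """SEM = SD * sqrt(1 - ICC); MDC95 = SEM * 1.96 * sqrt(2).

    Negative ICCs are truncated to 0 (interpreted as zero reliability);
    this truncation is a convention, not the only defensible choice.
    """
    icc = max(icc, 0.0)
    sem = sd * math.sqrt(1.0 - icc)
    mdc95 = sem * 1.96 * math.sqrt(2.0)
    return sem, mdc95
```

It is also worth checking that the ICC model (one-way vs two-way, single vs average measures) matches your four-trial, single-session design, since a mismatched model is another common cause of odd ICC values.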
I'm working on research on environmental awareness and collision avoidance in a fully immersive VR environment with an HMD and motion controllers.
We actually bought the STEM system, but we have been waiting for the shipment for a long, long time. I'm trying to work with the PSMove from the PS3 and the PS Eye, but the psmoveapi is not that precise, especially with quaternions.
Are any of you working with a motion controller that can be bought online?
Could anyone direct me to specific studies indicating adult cut points for wrist-worn accelerometers?
I have spoken with Biometrics, and they say the DataLINK may be connected directly to third-party instrumentation via the R2000i or R2000iBNC analogue and digital output cable. Having looked at both of these on their website, the former needs to be connected to a third-party AD board or data acquisition system, while the BNC one is fitted with 8 BNC connectors for connection to a range of proprietary AD boards. Which of these cables is best, and what AD board do I need to sync the DataLINK EMG with Qualisys?
I am interested in investigating spine kinematics in Rugby Union players during scrums (simulated and live scrums) by using video for photogrammetric processing. I am now looking for 4-6 cameras that can be synchronised, have enough resolution and sampling rate to not blur and that are not too expensive (budget: approx. 1500 Australian dollars, or ~1200 USD). Measurements will take place outdoors.
Can anyone help me with this issue?
Thanks very much in advance for any help!
I assessed biomechanical parameters during running and I would like to estimate joint stiffness. Is it possible? If yes, how should I do it?
Is there some paper talking about this?
Thank you very much!
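Yes, it is possible: joint "quasi-stiffness" during running is usually estimated as the slope of the joint moment-angle curve over the loading phase of stance, while leg stiffness in the spring-mass model literature (Farley & González 1996 and related work) is the analogous ratio of peak vertical force to leg compression. If you have joint angles and inverse-dynamics moments, the slope is a simple least-squares fit; a sketch:

```python
import numpy as np

def joint_quasi_stiffness(angle_rad, moment_nm):
    """Slope of the joint moment-angle curve (N*m/rad) via least squares,
    computed over the loading portion of stance."""
    slope, _intercept = np.polyfit(np.asarray(angle_rad, float),
                                   np.asarray(moment_nm, float), 1)
    return slope
```

The value depends strongly on which part of stance you fit, so the phase definition should be reported alongside the stiffness itself.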
Is it possible and scientifically acceptable?
I'm looking for a low cost system, that's why I'm asking about Kinect.
Greetings, I am interested in creating a full-body model for Vicon (with a marker set) that uses marker clusters to find the joint centres, unlike the Plug-in Gait model, which uses individual and wand markers.
The idea is to find reliable joint centres for all the major joints, thereby reducing the error percentage in the 3D motion analysis (especially in terms of angle interpretation).
After capture, the same data can be imported into Matlab to find the angles and the associated graphs.
I will be more than happy to go on with a collaborative study on this.
I am interested to know any image processing approaches or computer vision technologies in particular related to spatiotemporal saliency in the moving object detection (in video) problem.
The Leap Motion controller only tracks finger/hand gestures (if I'm right). I need to track the motion of reference objects, such as dollies stuck to deforming objects, which is outside the tracking scope of sensors in the Leap Motion's class. Optotrak technology can definitely do this perfectly, but at a huge financial investment. What I need is a very affordable trick in the cost range of the Leap Motion sensor (or a way to make the Leap Motion track a 'non-finger' object) at a similar resolution.
Thanks a lot.
I am trying to classify motion (i.e. walking under different perturbation forces) using accelerometer (x, y, z) and gyroscope (x, y, z) values with the DTW algorithm, but due to the noisy nature of these sensors on a NAO robot I have so far been unsuccessful. I wanted to ask for suggestions/pointers on possible filtering techniques I could use to minimize this noise. Thanks.
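Before DTW, a zero-phase low-pass (e.g. a Butterworth filter applied with `scipy.signal.filtfilt`) with a cut-off around 5-10 Hz is the usual first step for walking data, since the gait content sits well below that. A sliding median is another cheap option that is robust to the impulsive spikes these MEMS sensors produce; a sketch (the window length is something to tune against your sampling rate):

```python
import numpy as np

def median_filter(x, window=5):
    """Sliding-window median; robust to impulsive sensor spikes."""
    x = np.asarray(x, float)
    half = window // 2
    padded = np.pad(x, half, mode="edge")  # repeat edge samples at the ends
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(x))])
```

Filtering each of the six channels the same way before computing DTW distances keeps the warping from being driven by noise spikes rather than the underlying gait pattern.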