Object Tracking - Science topic
Explore the latest questions and answers in Object Tracking, and find Object Tracking experts.
Questions related to Object Tracking
For my dissertation, I am doing a project on SNNs for object tracking. I am confused about how to start and what I need to consider when working on the project.
To count objects in real time, I read that YOLO is usually used. This blog (https://blog.netcetera.com/object-detection-and-tracking-in-2020-f10fb6ff9af3) mentions SSD as an alternative, but it also lists techniques that are too slow for real-time tracking. Is there anything else? Can't CNNs be used in real time yet?
We have several options for tracking an object. What are the options for tracking an object over a large area, such as 100 m x 100 m, with millimetre accuracy?
Looking for some suggestion of recent technologies that can be adapted for solving this problem.
Inside-out
Outside-In
Inside-In
Many tasks performed by autonomous vehicles such as road marking detection, pavement crack detection and object tracking are simpler in bird's-eye view. Hence, Inverse Perspective Mapping (IPM) is often applied to remove the perspective effect from a vehicle's front-facing camera and to remap its images into a 2D domain, resulting in a top-down view. Unfortunately, this leads to unnatural blurring and stretching of objects at further distance, due to the resolution of the camera, limiting applicability.
Is there an improved version of this technique, an improved IPM? How does it work? I need this to validate my new automatic road-crack detection approach.
Greatly appreciate your review
Hello dear researchers.
I ran the SiamFC++ algorithm. It showed better results than the other algorithms, but its accuracy is still not acceptable. One of my ideas is to run an object detection algorithm first and then draw a bounding box around the object I want to track. I think that by doing this I will, in effect, have labeled the selected object, so the results and accuracy should improve compared to the previous case. Do you think this is correct? Is there a better idea for improving the algorithm?
Thanks for your tips
Hello dear researchers.
It seems that the SiamRPN algorithm is one of the very good algorithms for object tracking, with a processing speed of 150 fps on a GPU. The problem is that if your chosen object is, for example, a white phone, and you are dressed in white and move the phone towards yourself, the whole bounding box will mistakenly land on your clothes; the tracker has low sensitivity to color. How do you think I can optimize the algorithm to solve this problem? Of course, there are algorithms with higher accuracy, such as SiamMask, but they have a very low fps. Thank you for your help.
I am working on development of a tool for the analysis of cell migration and tracking. The tool could further be used for different other object tracking purposes. For this work, I am looking for a few publicly available datasets containing microscopy images of cells or small particles.
How can I estimate distance, proximity, and depth via image processing for object tracking? One idea that came to mind was to detect whether the object is moving away or approaching based on its size in the image, but I do not know whether there is an established algorithm I could build on.
In fact, how can I recover x, y, z coordinates from an image taken with a webcam?
Thank you for your help
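A single webcam cannot recover absolute depth by itself, but if the real width of the tracked object is known, the pinhole camera model gives a usable estimate, and the x, y offsets follow from it. A minimal sketch; the focal length, object width, and box values below are made-up illustrations (a real focal length in pixels comes from camera calibration):

```python
def estimate_xyz(focal_px, real_width_m, bbox, cx, cy):
    """Pinhole model: depth Z = f * W / w, then back-project the box
    centre (u, v): X = (u - cx) * Z / f, Y = (v - cy) * Z / f."""
    u, v, w_px, h_px = bbox              # box centre and size in pixels
    z = focal_px * real_width_m / w_px   # metres, if W is in metres
    x = (u - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z

# assumed values: 800 px focal length, 0.07 m wide object, 56 px wide box
x, y, z = estimate_xyz(800, 0.07, bbox=(400, 300, 56, 40), cx=320, cy=240)
# if the box width doubles to 112 px, z halves: the object is approaching
```

Comparing z across frames directly answers the approaching/receding question, which matches the size-based idea in the post.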
Which object tracking algorithms are highly accurate and also have high processing speed?
OpenCV's built-in tracking algorithms do not work well at high speeds.
Thank you for your help
Hi Everyone,
I'm currently training an object detection model that should detect cars, people, trucks, etc. in both daytime and nighttime. I have started gathering data for both. I'm not sure whether to train one model for daylight and a separate model for night, or to combine the data and train a single model.
Can anyone suggest a data distribution for each class across day and night? I presume it should be uniform. Please correct me if I'm wrong.
Eg: for person: 700 images at daylight and another 700 images for nightlight
Any suggestion would be helpful.
Thanks in Advance.
I am currently working on tracking multiple moving objects across frames. At each frame, pixel-wise segmentations are detected by a modified DBSCAN. I need to establish segmentation ID associations across frames. The difficulty is that the detected segmentations are non-rigid, and the shape or contour of each segmentation changes over time, sometimes slightly and sometimes drastically. Besides, segmentations may be partly or totally occluded by one another. I have tried the KLT tracker; it works fine without occlusions, but ID switches happen when a segmentation becomes totally occluded.
I am wondering whether there is any unsupervised algorithm developed specifically for this situation. Since we don't have ground truth, deep-learning-based methods cannot be adopted. I would greatly appreciate any answers!
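As an unsupervised baseline for the ID-association step, frame-to-frame matching on mask overlap (IoU) needs no training data, and a simple motion prediction of each mask's centroid can be layered on top to bridge short occlusions. A minimal sketch, with masks represented as sets of pixel coordinates and a greedy matcher standing in for the optimal Hungarian assignment (the 0.3 threshold is an arbitrary choice):

```python
def mask_iou(a, b):
    """IoU of two segmentations given as sets of (row, col) pixels."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def associate(prev_masks, curr_masks, iou_thresh=0.3):
    """Greedily match each previous segmentation to the unclaimed current
    one with the highest overlap; unmatched current masks get new IDs."""
    pairs, taken = [], set()
    for i, p in enumerate(prev_masks):
        best_j, best_iou = None, iou_thresh
        for j, c in enumerate(curr_masks):
            if j in taken:
                continue
            iou = mask_iou(p, c)
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_j is not None:
            pairs.append((i, best_j))
            taken.add(best_j)
    return pairs
```

For the total-occlusion case, keeping unmatched track IDs alive for a few frames (instead of deleting them immediately) and re-matching against the predicted centroid is the usual trick to suppress ID switches.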
I am looking for animal pose estimation code that predicts the pose of an animal, given 100-200 annotated frames, from scratch using deep learning on a frame-by-frame basis. Is there any such code? I am not looking for tools like DeepLabCut, DeepPoseKit, or LEAP/SLEAP.ai; rather, a simple baseline, preferably written in PyTorch, that could be easily modified.
Is it possible to infer a physical law from a moving object's motion using a deep learning model, or can I train my model to detect one?
For our Thesis, I proposed a computer vision algorithm for vehicular obstruction detection. The prototype was developed using the Computer Vision toolbox in MATLAB with several videos as input.
I need to calculate how accurate the algorithm identifies a vehicle as a violation. My plan is to compare the total violators from the manual observation (ground truth) and the total detected violators(from the prototype). With these, the False Accept Rate (FAR) and the False Reject Rate (FRR) come in also.
Example:
Ground Truth = 100 violators
Total Detected Violators = 80 (False Accepts: 5, False Reject: 25)
My idea:
Overall Accuracy: 75/100 -> 75%
1.) Is the overall accuracy computation correct?
2.) How do I obtain the FAR, FRR and the Equal Error Rate (EER) also?
*Attached is a sample output snapshot of a violator detected by the algorithm
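On the terminology: 75/100 here is the true-positive rate (recall), not overall accuracy; accuracy would also need the count of correctly rejected non-violators, and FAR likewise needs the number of non-violators observed. EER then requires sweeping a decision threshold until FAR = FRR. A small sketch with the figures from the example (the 400 non-violators is an assumed count for illustration, not from the post):

```python
def detection_rates(gt_pos, tp, fp, fn, gt_neg):
    """recall = TP / violators; FRR = FN / violators;
    FAR = FP / non-violators (so negatives must be counted too)."""
    return tp / gt_pos, fp / gt_neg, fn / gt_pos

recall, far, frr = detection_rates(gt_pos=100, tp=75, fp=5, fn=25, gt_neg=400)
# recall = 0.75, frr = 0.25; far = 0.0125 under the assumed 400 non-violators
```

Since the prototype presumably has a fixed decision rule, a single (FAR, FRR) point is all it yields; reporting EER would mean exposing a tunable threshold and recording FAR/FRR at each setting.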

Please share the link or the file. I have tried the NCAC site, but the website is not accessible. Thank you in advance.
Hello. I am an undergraduate student and I need to collaborate with my professor (who specialises in Computer Vision, Image Processing and 3-D medical imaging). I am looking for research ideas mainly in the topics of object detection, visual object tracking, object recognition, semantic segmentation, localization using u-net or medical imaging. Can anybody help me jot down growing research fields in these areas? Thank you!
I have been trying to tackle a problem where I need to track multiple people through multiple camera viewpoints on a real-time basis.
I found a solution DeepCC (https://github.com/daiwc/DeepCC) on DukeMTMC dataset but unfortunately, this solution has been taken down because of data confidentiality issues. They were using Fast R-CNN for object detection, triplet loss for Re-identification and DeepSort for real-time multiple object tracking.
Can someone share some other resources regarding the same problem?
I am particularly looking for a dataset containing cameras capturing videos in a distributed setup. The videos captured by the multiple cameras have to be correlated. Is there any benchmark dataset for that? I found some existing datasets for distributed networks, but the correlation factor is absent. It would be great to get some help.
Can you suggest a paper that discusses an easy-to-implement tracking algorithm for a microcontroller?
I'm doing an internship where I perform several experiments of rubbing a moist cloth over a glass pane and recording it with a high-speed camera. My goal is to calculate the to-and-fro (vibratory) displacement of certain parts of the cloth over the duration of the video. So I have to input the video, read and compare it frame by frame, select the parts of the frames that have to-and-fro movements, follow these parts (there are several folds in the cloth that vibrate) throughout the video, and plot the displacement against time. From these displacements, I should find the frequency of vibration. I have to follow the different points that vibrate (like the red part in the picture I've attached) and get their displacements as well, so there are multiple parts of the video to focus on. I'm an amateur at MATLAB and new to image and video processing. If anybody could help me out with code for this, or guide me toward an alternative if one exists, I'd be very grateful.
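Once the displacement signal of a tracked point has been extracted (e.g. with a KLT point tracker, which MATLAB's Computer Vision Toolbox provides), the vibration frequency is simply the peak of its Fourier spectrum. A language-agnostic sketch of that last step; the 12 Hz signal below is synthetic, just to show the idea:

```python
import numpy as np

def dominant_frequency(displacement, fps):
    """Peak of the displacement amplitude spectrum (DC bin excluded), in Hz."""
    d = np.asarray(displacement, dtype=float)
    d -= d.mean()                          # remove the static offset
    spectrum = np.abs(np.fft.rfft(d))
    freqs = np.fft.rfftfreq(len(d), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]

# a made-up 12 Hz vibration sampled at 500 fps for 2 seconds
t = np.arange(1000) / 500.0
signal = 3.0 * np.sin(2 * np.pi * 12 * t)
f = dominant_frequency(signal, fps=500)
```

The frequency resolution is fps / N, so longer clips give finer estimates; in MATLAB the equivalent is `fft` plus `max` on the magnitude.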
I'm looking for object tracking methods with high accuracy on low-resolution videos. The low resolution may be combined with high similarity between the ROI and the background; to clarify the problem, imagine the movement of a car in foggy weather or of a fish in muddy water. The speed of the method is not so important; only accuracy matters.
Hi
I want to know where to find efficient map-matching source code.
I have a huge GPS dataset, and I want to get the set of road IDs traversed in each GPS trip.
Has anyone tested map-matching software or code before?
Thanks in advance
I've recently found some time to do some research and find this field particularly interesting. It's been a while since I looked at any research. Just wondering what the latest and greatest is.
What new techniques work efficiently for multiple object tracking?
1. I am using the following code; it generates a .mat file of size (frames x 4), but I need size (1 x 12).
positions(frame,:) = [pos target_sz];  % per-frame centre position and target size
results = [floor(positions(:,[2,1]) - positions(:,[4,3])/2), floor(positions(:,[4,3]))];  % convert to [top-left x, top-left y, width, height]
save(strcat('.\result\', video, '_tracker.mat'), 'results');
Hello,
I am conducting some behavioural experiments in which I would like to track the movements of two-spotted spider mites, but EthoVision is a little out of my price range. Does anyone know of any free versions that could be used instead?
Thanks,
Joe
My objective is to track the time-varying delay and, at the same time, design a robust control for the system under time-varying and multiple delays in uncertain conditions.
My research involves tracking space objects using telescope images and orbit determination. I'm quite new to the field of space object tracking and I'm wondering if there is a place I can download optical telescope images of tracking LEOs such as satellites in sidereal frame ?
We have a research project about new methods in walkability, and I think object tracking of people in a public place (where their paths go, how many take each route, etc.) is a good basis for formulating walkability criteria. A little like a "digital Burano method"... I just want to try it out, if it is possible. Open source would be fine.
Thanks a lot
Different image formats exist. In object tracking applications, we generally apply background subtraction, and the resulting image is binary in nature, so that is one application of binary images. Likewise, how many other applications are there?
Is there a decent open source environment or tool for annotating and evaluating video analysis such as object detection, tracking etc? I have found a couple online but they are either old and hardly working, or only do annotation but not evaluation. Evaluation would be to compare the ground truth (from annotation) with the methods result and produce common statistics for it. Many researchers publish evaluations/comparisons of video tracking using common data sets, but are there any common tools for performing these evaluations?
Hi,
I am trying to compute the time to contact (ttc) for moving (non-rigid) objects using moving monocular camera videos by computing the change of object size in each frame.
Theoretically, the method is OK. In practice, I get unstable results (e.g. increasing and decreasing numbers while the object is approaching).
I tried to reduce the temporal noise between consecutive frames by calculating the TTC over a time window (for example, 10 frames). This reduced the inaccuracy, but there is still no clear pattern in the results. I would expect the TTC to increase when the object is moving away and to decrease when it is approaching, but I can't get such a result!
It is worth mentioning that both the camera and the object are moving.
could anyone help me in this?
did I miss something?
Thanks
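One way to stabilize the estimate is to fit the size trend over the whole window instead of differencing consecutive frames, since TTC = s / (ds/dt) divides by a tiny, noisy quantity. A sketch under a constant-approach-rate assumption (note it does not remove scale change caused by the camera's own motion, which may explain part of the remaining irregularity):

```python
import numpy as np

def time_to_contact(sizes, dt):
    """TTC from apparent size s(t): TTC = s / (ds/dt).
    A least-squares slope over the window damps per-frame jitter."""
    t = np.arange(len(sizes)) * dt
    slope, intercept = np.polyfit(t, sizes, 1)
    if slope <= 0:
        return float('inf')          # receding or stationary
    return sizes[-1] / slope

# object widths (px) over 10 frames at 30 fps, growing 2 px per frame
sizes = [50 + 2 * k for k in range(10)]
ttc = time_to_contact(sizes, 1 / 30)
```

A receding object gives a negative slope (here reported as infinite TTC); some formulations instead return a signed TTC so that the approach/recede pattern you expect shows up directly.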
Deep feature, color feature, gray feature, HOG or others?
What are the advantages of these features?
Hi,
Can anybody suggest an object tracking database/benchmark captured with a wide-angle moving/wearable camera?
Thanks,
We need to track at least three 2 mm x 2 mm markers that are near each other (approximately 1 cm apart). We have a Basler camera that can capture up to 513 frames per second, and we need an application to track the markers. I wonder whether a program is available online to track them?
The research content includes a proposed algorithm for image/object matching and two proposed algorithms for multiple object detection.
The algorithms for image/object matching and multiple object detection are not related.
My question is how to organize them to form a PhD thesis. How can I unify them into one big problem to present? What title would be appropriate?
In face alignment or object tracking, what is the superiority of Lie group compared with other geometric transformation such as Affine and Homography?
Does anyone know a popular outlier removal technique other than RANSAC (Random Sample Consensus)? To enhance the performance of image registration, I have proposed a technique to reject false matches after obtaining keypoint matches. To verify its effectiveness, it is necessary to compare it with existing outlier removal techniques. RANSAC is surely the most famous one, but it is old. It would be better to include some other popular outlier removal technique(s) in the performance comparison. If anyone can suggest an outlier removal technique, it would be great if its source code is available. Thanks.
Apart from the four rotor inputs of the quadrotor, I want to know other input and output features must be considered in control of quadrotor using classical control technique.
I want to detect brand of a car by using template matching in matlab.
The AIS data represents information sent by vessels to other vessels while under way.
In "Real-time human pose recognition in parts from single depth images" by Shotton et al., their method generates "confidence-scored 3D proposals of several body joints" for a single depth image.
I am looking for papers that deal with selecting the best body joints out of these 3D proposals (for example, without using the confidence scores, by searching for body joints that match a pre-defined skeleton).
Can anyone recommend papers on this topic?
I want to know which techniques are currently used in highway traffic monitoring, such as RFID, etc.
Please suggest available publications or other sources.
I am working on vein recognition and localization. I found that it is difficult to distinguish capillaries from veins, especially blurred veins. I use the Frangi filter to extract the veins, but dark, thin capillaries become thick after filtering, and my algorithm is then confused about which is the thick vein. I want to recognize the capillaries so that I can make judgments about the thickness of the veins. Can anyone give some advice?
The image below is captured by a common industrial camera with infrared filter under the infrared light.
Greetings
Beyond MATLAB, does anyone have any suggestions on how to track the ball in football? I have video of the matches.
Sincerely
In "Real-time human pose recognition in parts from single depth images" by Shotton et al., their method generates "confidence-scored 3D proposals of several body joints" for a single depth image.
While I have found datasets of human motion that include raw depth images, I cannot find any that have these 3D proposals already computed.
Specifically, I am looking for a dataset that includes multiple 3D proposals for body joints, as well as the ground truth positions (e.g., the best proposals for each image frame). Do any of these exist?
I need some research articles using block-matching techniques for object tracking in thermal videos.
How do I place the bounding box of a tracker in single object tracking?
Hello,
To find occlusion similarity, I have to add structured/block noise to an image. How can I add this noise to an image using OpenCV?
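Since OpenCV images are NumPy arrays, block noise can be added directly by overwriting rectangular regions; no special OpenCV call is needed. A minimal sketch (block count, size, and fill value are arbitrary choices):

```python
import numpy as np

def add_block_noise(img, n_blocks=5, block_size=32, value=0, seed=None):
    """Occlude random square regions of the image with a constant value.
    Works on grayscale (H x W) or color (H x W x C) arrays alike."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = out.shape[:2]
    for _ in range(n_blocks):
        y = rng.integers(0, max(1, h - block_size))
        x = rng.integers(0, max(1, w - block_size))
        out[y:y + block_size, x:x + block_size] = value
    return out
```

Typical use: `noisy = add_block_noise(cv2.imread("frame.png"))` — replacing `value=0` with random pixel values gives structured noise rather than plain black occluders.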
Hi all,
Your suggestions and opinions are highly appreciated in advance.
I'm wondering what the best way is to identify a weight and its location simultaneously.
Example: object A weighs 10 kg while object B weighs 20 kg. Both objects move around a known area, but I need to identify at any moment which is A and which is B by their weight. Any methods or suggestions?
I need the size of the blob to stay fixed and not be affected much by illumination.
Hello all. I'm working with the MOA 16.04 tool, and when I run the task for detecting drift, I get the output shown below. I wanted to know which file or data they used to detect the drift. Is there any way to detect drift with a different (custom) dataset? If yes, how?
I hope I'm clear from my side. Thank you.
I am doing my PhD in image processing. I am to build a system that automatically detects anomalous behavior in crowded scenes. I have knowledge of feature extraction and also of crowd density estimation maps. Which algorithms could I use to achieve my goal?
hi
I need a suggestion for tracking software that could be used to count and follow sperm or other particles in a video. I need something free that can provide sperm trajectory and speed, and saves the results in a CSV (or similar) file.
any suggestion?
thanks Carlo
I need a test to measure shooting accuracy in futsal.
The name in the quote is not a specific article title.
I am searching for such an article, but no answer until now!
I was working on moving-human identification in video. I divided the work into detection, tracking, classification, and identification. I've worked up to tracking, and my approach is unique and novel. Do I need to work on the remaining two stages, or may I leave them and start writing the papers and thesis?
This is an M. Tech Project.
We want to detect bees with a camera attached to a UAV.
After detecting a bee, the UAV will track it, and we can find the bee's location on the map the UAV provides.
The UAV should keep some distance from the bee.
In this situation, what is the most suitable camera for bee detection?
I am trying to do image processing with NI Vision (NI IMAQ). I want to perform white balancing on an image captured from a webcam; which algorithm should I follow?
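A common starting point is the gray-world algorithm: assume the average color of the scene is gray, and scale each channel until its mean matches the global mean. NI Vision has its own primitives, but the math is simple enough to sketch; here it is as a NumPy version operating on an H x W x 3 image (it assumes no channel is entirely black):

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world assumption: the average scene color is gray, so scale
    each channel until its mean matches the global mean intensity."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means   # one gain per channel
    balanced = img * gain
    return np.clip(balanced.round(), 0, 255).astype(np.uint8)
```

If the scene violates the gray-world assumption (e.g. a dominant colored background), the white-patch variant — scaling so the brightest region becomes white — is the usual alternative; both map directly onto per-channel multiply operations in NI Vision.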
I have captured frames of a video of a particle moving through a flume. I wanted first to undistort them, but I didn't capture calibration images. I only know some measurements, for example the width of the flume and the diameter of the particle. Is it possible to undistort the frames?
The next question is detecting this particle in each frame. It's too hard to distinguish the particle from the background; I couldn't do it by subtracting the background. Can anybody help? I attached one of the frames; please find it.
Thanks in advance

We're looking for motion datasets in the form of multi-dimensional time series in which partial similarities exist between different categories, based on some of the dimensions.
For example, consider two people waving, but one is doing so while walking and the other while driving a car. In general they belong to different motion categories, yet both are similar with respect to their arm movements.
Regardless of the human-based example above, the dataset can be any biological dataset!
Thanks!
Hello,
At present, I am working on moving-object tracking, in which I need to detect occlusion in a moving-camera environment. Could you suggest a simple (basic) approach to detect occlusion within a frame or a sample of frames?
Thank you
Hello,
I'm working on a commercial product where I need to estimate the 6DOF pose of a known 3D CAD (closed 2-manifold triangular mesh) in a single 2D image (see attached picture).
In general, this a difficult problem but under our operational conditions, we can impose the following constraints simplifying the problem:
- The CAD object is known and we do NOT aim for generality like recognizing the class of all chairs.
- We could get the user to position the camera approximately to a specific pose (distance from the object, general orientation +/-15deg, etc.)
- If possible, we would use only the edges of the image (like Canny) to find and match against the object viewed from a given position.
- This would be used in an industrial environment with manmade objects (pipes, valves, junctions, etc) without much texture.
All these constraints lead me to think that even relatively old and somewhat basic techniques could work. For example, in the Sonka et al. book (see link below), section 12.3.2, Goad's algorithm, covers a 1986 paper by C. Goad, "Fast 3D model-based vision", which could work relatively well under our assumptions with its top-down "hypothesize-and-verify" approach.
I also know that the industrial vision and robotics community have tackled this problem and its generalization for a long time so there is bound to be something usable out there.
My question is the following: Would somebody know of a commercially usable implementation (like OpenCV, etc.) solving this problem? More specifically:
- I'm NOT looking for deep learning stuff needing an offline learning phase with thousands of viewpoints from our CAD models.
- I'm NOT looking for RGB-D techniques relying on depth sensors. The color is also irrelevant as the CAD is colorless.
- "Old" techniques out of research fashion are OK, even preferred as we will be running on a low performance computer.
- Ideally, using C or C++ (may depend on OpenCV for example).
- Usable commercially (licensed under BSD, MIT, BOOST, etc.), not GPL.
Thanks in advance for any leads anybody could offer.
Bruno


I want to track the eyes of a person in IR images. I was able to track open eyes using blob detection (the iris is the blob), but once the eyes are closed I am unable to track them. Please give me some suggestions.
The input will be a person driving a vehicle.
We need an Indoor Positioning System to study consumer behaviour in real indoor environments (large rooms, e.g. a supermarket), but I'm not sure which is the best technology for that aim (WiFi, radio signals, Bluetooth, infrastructure-based, etc.). Given that there is no recognized standard, I'm a little puzzled.
In any case, the system should be accurate (submeter accuracy, if possible), unobtrusive, user-friendly, easily deployable, stable enough, and, if possible, offer a good quality-price ratio. It is crucial that the system can register and map positions, tracks, dwell times, etc. Any suggestions?
Many thanks in advance!
I want to use it in Object tracking.
I want to use SIFT in Object tracking.
How can I model a camouflaged background to detect objects using statistical models, and are there any alternative methods?
Well, it is about underwater vehicles moving on concentric circles for tracking. Let the vehicles be XA on circle A, XB on circle B, XC on circle C, and so on. (There could be more than one vehicle on each concentric circle.)
Since the circumference of A is smaller than that of the others, the vehicles will finish the tracking on A faster. After finishing, they will move to B to help complete the tracking that the vehicles on B have not yet finished.
When vehicles XA and XB then finish the tracking on B, they will all move to the next circle to assist with the tracking remaining there. This continues until the last circle is tracked.
I am currently calibrating 2D digital camera with 3D tracking system(Optical tracking system or Electromagnetic tracking system).
In most of the literature I studied, the camera is first calibrated with a checkerboard pattern, and then the calibration between the camera and the 3D tracking system is obtained by a hand-eye calibration technique to get the rotation and translation.
I used the Direct Linear Transform to calibrate the 3D tracking system and camera. The position and orientation of the camera are determined in the 3D tracker coordinate frame. Calibration is done by locating the same calibration points in the camera frame and the 3D tracker frame.
Now, if I need to transform a point from the 3D tracker frame to the camera frame, would that be correct with the method I used? Can I bypass the hand-eye calibration with this method?
I would like to apologize for the long post. Thank you for your precious time.
I am using krige.cv() for cross validation of a data set with 1394 locations. In my code, empirical variogram estimation, model fitting, and kriging (by krige()) everything works fine. But when I use krige.cv() I get the following error.
Error: dimensions do not match: locations 2786 and data 1394
One can notice that 1394*2 = 2786. What could I be missing? Please note that there are no NA, NaN, or missing values in the data, variogram, or kriging results. Everything works fine; it's just krige.cv() that does not work.
Cooperative perception may include cooperative localization, mapping, target tracking, and so on.
Hi all,
I want to track particles in a micro-channel using a transient method, but during the simulation only one of the particles moves along the channel; the others stay in the initial container.
I'm really confused. What do you think?
Hello everyone,
I am currently doing research in multiple-object tracking using Image-Based Visual Servoing (IBVS) methods. I have already found some papers on IBVS in general and also on my main concern: keeping multiple objects in the field of view of the camera. Do you know of any must-read papers? Maybe someone is also doing research on a similar topic? I would not mind sharing ideas at all.
I am also interested in methods of extracting the depth of the tracked object, and in any promising ways of adapting to uncertainty in this parameter. I am using monocular vision.
Any help would be appreciated.
Can the Kalman filter achieve low mean square errors in these two scenarios?
I was expecting a good result, since visually the pedestrian tracking looks acceptable to me, but when I evaluate it with MOTA and MOTP, I get 48% and 70%, respectively. Both my TP and FN are accurate enough, but FP is very high and drags MOTA down.
I do not know what to do now to decrease FP, other than trial and error.
Does anyone have any ideas about it?
Thank you.
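For reference, the CLEAR-MOT definitions make it explicit why false positives hurt MOTA one-for-one: every FP is added to the error sum. Filtering detections harder (a higher confidence threshold, or discarding very short tracks) trades FP for FN, so the terms can be rebalanced. A minimal sketch of the two metrics; the counts in the usage line are made up for illustration:

```python
def mota(fn, fp, id_switches, num_gt):
    """MOTA = 1 - (misses + false positives + ID switches) / total GT objects."""
    return 1.0 - (fn + fp + id_switches) / num_gt

def motp(total_error, num_matches):
    """MOTP: mean localization error (distance, or 1 - IoU) over matched pairs."""
    return total_error / num_matches

score = mota(fn=200, fp=450, id_switches=30, num_gt=1500)  # FP dominates the loss
```

Plotting MOTA against the detector's confidence threshold usually reveals a sweet spot where removed FPs outnumber the newly introduced FNs.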
Can anybody suggest a method to track the ball in an indoor soccer game?
The input will be video frames.
Any matlab or opencv code would be very nice.
I have mixed some detection and tracking algorithms to do multiple pedestrian tracking. At this stage I have my results and I can visually see the tracked pedestrians. However, I don't know how to evaluate my results to show how good my method works. Do you have any suggestions?
Thank you.
Is there any way to use single-object tracking methods on video frames for multiple object tracking? If yes, what are the disadvantages of using single-object tracking methods versus dedicated multiple-object tracking methods?
Does anyone has an idea?
Thank you.
I am trying to detect a 3D model in a live video stream. The model should be detected from any face; how can I do that?
Hello all, I want to implement a people-tracking algorithm; please suggest some recent good papers.
location estimation of a person
without GPS
heterogeneous network
We are currently researching the best solution for tracking players at various sporting events, currently American football. The existing solutions are GPS, OCR, and particle-filter-based predictive tracking. We are looking for new solutions and new research that might allow the price point to be reduced.
I am looking for an accurate method for human tracking based on features pre-extracted from our dataset. Does anyone know an algorithm like that?
I am currently working on publishing an applications paper on my automated object tracking and data analysis software, which I will be releasing for free as open source. I have some good experimental footage of various behavioral ecology studies on which the software has been very successful at collecting data, but I am interested in exposing the software to as many different tracking environments as possible to show its capabilities.
If you provide me with footage, you will of course be cited/credited on the resources section of the paper. I am only interested in footage of animals (insects, fish, mammals, etc.) and preferably in an ecological or experimental context (handheld camera footage isn't ideal). Please let me know if you are interested and if you also know of any publicly available datasets of animal behavior footage that fits the criteria I've listed.
Thank you and I hope to see some suggestions!
You can contact me at: jon.patman@enmu.edu
I am also working on the web-site for the software and I will post a link here shortly so that interested researchers can sign-up on a mailing list and be contacted when I release the software this year.
Is it possible to use traditional target tracking methods like JPDA in this situation?
Or under what conditions must measurements be time-stamped (from the control and estimation point of view)?
How do I detect multiple objects which are in motion in front of a camera with opencv code?
occlusion for multiple object tracking
In the context of target tracking, how can one decide whether a received bearing corresponds to a target or to clutter? Can anyone give an example by taking a set of 100 or so measurements and separating (classifying) the target measurements from the clutter?
I have a tracker that outputs the trajectory (x,y,z) of an object (e.g., a can).
I want to use these trajectories to train a classifier (i.e., SVM) in order to infer the activity that the person manipulating the object is performing (e.g., drinking from a can or pouring from a can )
Which kind of features should I use to quantize these trajectories?
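As a baseline before anything learned, fixed-length statistics of the trajectory often separate activities like drinking versus pouring reasonably well, and can be fed straight into an SVM. A sketch of one such hand-crafted descriptor; the particular statistics chosen here are just one reasonable set, not a canonical recipe:

```python
import numpy as np

def trajectory_features(traj):
    """Fixed-length descriptor for a variable-length (x, y, z) trajectory:
    net displacement, path length, and velocity statistics (12 values)."""
    traj = np.asarray(traj, dtype=float)
    v = np.diff(traj, axis=0)               # per-step velocity vectors
    speed = np.linalg.norm(v, axis=1)
    return np.concatenate([
        traj[-1] - traj[0],                 # net displacement        (3)
        [speed.sum()],                      # total path length       (1)
        [speed.mean(), speed.std()],        # speed statistics        (2)
        v.mean(axis=0), v.std(axis=0),      # per-axis velocity stats (6)
    ])
```

Because every trajectory maps to the same 12-dimensional vector regardless of its length, these descriptors can be stacked directly into the feature matrix an SVM expects; quantized approaches (e.g. clustering velocity vectors into a bag-of-words histogram) are the usual next step if these statistics are not discriminative enough.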
Hello forum,
In my project, the object (only one) to be tracked is small (~30 pixels) and has very few features. It does, however, have certain properties (contour shape, rigidity, etc.) that can be utilized. Its motion, however, is completely random.
By performing background subtraction (MOG) (OpenCV function) over multiple frames and by utilizing the properties, I have managed to detect the object of interest on a not-so-noisy background.
I believe the technique is similar to the Greedy Search algorithm where the nodes (object + noise) are detected over multiple frames ( 3 frames in my case ) and the nodes that are connected by the lowest weights ( score utilizing the object properties ) represent the actual object. I have attached a picture showing an example of it.
By running background subtraction continuously, I am able to track the desired object with rather decent accuracy. The program switches to a different algorithm under occlusion with MOG running continuously in the background to re-detect the object in case it is lost.
The problem is, as the project requires multiple cameras, the program lags and hence is no longer real-time.
Should the program switch to a more efficient searching/tracking algorithm after the object is successfully detected, and call up the background subtraction algorithm only when the object is lost, or is it okay to run background subtraction 24/7? (The program could also be lagging because I am running it on a laptop.)
Secondly, since background subtraction (MOG) requires a few frames to initialize, and I require the nodes to be established over at least 3 frames, I am worried that calling MOG only when the object is lost may be too late, thus reducing my accuracy.
Finally, is optical flow more efficient than background subtraction?
Hope the experts here can enlighten me.
Regards,
Haziq
PS: I really like my current work so far and would like to improve/optimize it instead of switching to a different technique.

Can anyone tell me whether the Mean-Shift algorithm is good for tracking people?
Seeking new methods of object tracking .
Actually, in my work I am using the well-known "walk", "hall monitor", and "daria run" frame sequences for optical flow calculation, but I am unable to find ground truth results for these video sequences for performance analysis.
Can anybody help me find ground truth results for these sequences, or other video sequences with ground truth for optical flow?