OpenCV - Science topic
Explore the latest questions and answers in OpenCV, and find OpenCV experts.
Questions related to OpenCV
Hi, I am working on domain adaptation for emotion detection, trained on Hollywood actor/actress data, and I want to adapt it to pictures of Pakistani actors/actresses. Is there any such dataset available online, or does anyone have one to share? It's urgent; I have a research project to complete.
"Dear ResearchGate Community,
I am currently working on a project that involves camera calibration using OpenCV. My goal is to achieve precise calibration by incorporating physical measurements from a ruler or another measuring tool. Can anyone provide insights, tips, or a step-by-step guide on how to perform camera calibration in OpenCV while incorporating real-world measurements? Your expertise and guidance would be greatly appreciated.
Thank you in advance for your assistance.
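For reference, a minimal sketch of how the physical measurement usually enters OpenCV calibration: the chessboard corner coordinates are given in real units (e.g. millimetres measured with a ruler), so the recovered translation vectors become metric. The board size, square size, and file names below are assumptions to adapt.

import cv2
import numpy as np

square_mm = 24.0                                # square size measured with a ruler
pattern = (9, 6)                                # inner corners per row and column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm
obj_pts, img_pts = [], []
for path in ["cal1.jpg", "cal2.jpg"]:           # hypothetical calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
# because objp is in mm, the recovered tvecs are in mm as well
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)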
I want to overlay a processed image onto an elevation view of an ETABS model using OpenCV and the ETABS API in C#!
I am using Python and capturing video with OpenCV. I want to find the position and orientation of an object, plus some coordinates along its edge, so I can fit an equation to the curve of the edge.
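A minimal sketch of one way to get position, orientation, and edge samples from a single frame (the capture source, thresholding choice, and polynomial degree are assumptions to adapt):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)                        # hypothetical camera index
ret, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Otsu threshold assumes reasonable object/background contrast
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
# OpenCV 4.x returns (contours, hierarchy); 3.x returns three values
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
c = max(contours, key=cv2.contourArea)           # largest blob
(cx, cy), (w, h), angle = cv2.minAreaRect(c)     # position and orientation
edge_pts = c.reshape(-1, 2)                      # (x, y) samples along the edge
coeffs = np.polyfit(edge_pts[:, 0], edge_pts[:, 1], 2)  # e.g. fit a quadratic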
I have been reading about the geometry of image formation and have come to the conclusion that, knowing the exact position of a point in the real world, you can calculate the coordinates of that point in the camera coordinate system, and ultimately the pixel coordinates, using the extrinsic and intrinsic parameters. All the information can be found at this link: https://learnopencv.com/geometry-of-image-formation/
My doubts arise when working backward: trying to estimate the real-world coordinates from the pixel coordinates, knowing the rest of the parameters from the OpenCV camera calibration process.
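One caveat worth stating: a single pixel only determines a ray, so the inverse problem needs an extra constraint. A common one is assuming the point lies on a known world plane (say Z = 0), which reduces the back-projection to a homography. A minimal sketch, assuming K, R, and t come from cv2.calibrateCamera or cv2.solvePnP:

import numpy as np

def pixel_to_world_on_plane(u, v, K, R, t):
    # H maps plane coordinates (X, Y, 1) on Z = 0 to homogeneous pixels
    H = K @ np.column_stack((R[:, 0], R[:, 1], t.reshape(3)))
    p = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]   # world X, Y on the Z = 0 plane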
Here is my CV. Could you please review it and point out my mistakes? Your effort would make my CV more impressive.
Thank you in advance for your precious time and suggestions.
- Can you please suggest any online material that uses OpenCV and Python for geospatial datasets?
Which object tracking algorithms are highly accurate and also have high processing speed?
The stock OpenCV tracking algorithms by themselves do not cope well with fast motion.
Thank you for your help
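As a starting point, the KCF and CSRT trackers in opencv-contrib are the usual speed/accuracy trade-off pair (KCF faster, CSRT more accurate). A minimal sketch; note the constructors moved to cv2.legacy in OpenCV 4.5.1+, and the file path is a placeholder:

import cv2

tracker = cv2.TrackerKCF_create()     # or cv2.legacy.TrackerKCF_create() on newer builds
cap = cv2.VideoCapture("video.mp4")   # hypothetical input
ok, frame = cap.read()
bbox = cv2.selectROI(frame)           # initial bounding box, chosen by hand
tracker.init(frame, bbox)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)   # found == False when the target is lost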
Hi,
I have a set of videos and I want to extract natural scene statistics (NSS) features from them using OpenCV and Python. I need source code to extract NSS features.
Can anyone help me?
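Most NSS feature sets (e.g. BRISQUE-style features) are built on MSCN coefficients: local mean subtraction followed by divisive contrast normalization. A minimal sketch of that preprocessing step, applied per frame; the kernel size and sigma follow common practice but are assumptions:

import cv2
import numpy as np

def mscn(gray, ksize=7, sigma=7.0 / 6.0):
    # mean-subtracted contrast-normalized coefficients of one frame
    gray = gray.astype(np.float64)
    mu = cv2.GaussianBlur(gray, (ksize, ksize), sigma)
    var = cv2.GaussianBlur(gray * gray, (ksize, ksize), sigma) - mu * mu
    sigma_map = np.sqrt(np.abs(var))
    return (gray - mu) / (sigma_map + 1.0)   # +1 avoids division by zero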
I've been using YOLOv3 with OpenCV, and now I want to switch to YOLOv5. I saw it uses .pt files instead of .weights files and is implemented in PyTorch.
Is there any way to use yolov5 model in OpenCV?
When I try to use the .pt file in OpenCV with this command:
net = cv2.dnn.readNetFromTorch('./model_6/best.pt')
It gives me this error:
cv2.error: OpenCV(4.4.0) D:\Build\OpenCV\opencv-4.4.0\modules\dnn\src\torch\torch_importer.cpp:1017: error: (-213:The function/feature is not implemented) Unsupported Lua type in function 'cv::dnn::dnn4_v20200609::TorchImporter::readObject'
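For context: readNetFromTorch only reads old Torch7 (Lua) models, not PyTorch .pt checkpoints, which is why the import fails. A common workaround (a sketch, assuming the standard YOLOv5 repository's export script) is to convert the model to ONNX and load that instead:

# in the yolov5 repository: python export.py --weights best.pt --include onnx
import cv2

net = cv2.dnn.readNetFromONNX("./model_6/best.onnx")

Note that you still need to replicate YOLOv5's letterbox preprocessing and decode its output tensor yourself.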
Hi everyone, I'm currently converting video into images, and I noticed that 85% of the frames don't contain the object. Is there an algorithm to check whether an image contains an object or not using an objectness score?
Thanks in advance :)
Hi Everyone,
I'm currently training an object detection model that should detect cars, persons, trucks, etc., in both daytime and nighttime. I have started gathering data for both. I'm not sure whether to train separate models for daylight and night, or to combine the data and train one model.
Can anyone suggest a data distribution for each class across day and night? I presume it should be uniform; please correct me if I'm wrong.
E.g., for 'person': 700 daylight images and another 700 nighttime images.
Any suggestion would be helpful.
Thanks in Advance.
Hi all,
I have a computer vision task at hand. Although it's probably quite simple, I'm very new to this area, so I'm looking for the simplest and fastest method.
There's a laser pointer projected on a screen that keeps bouncing around. I need to capture the location and the velocity of the projected dot with respect to some reference point. I would really appreciate it if someone could elaborate on the procedure in simple terms.
Regards,
Armin
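A minimal sketch of one simple approach: the projected laser dot is usually the brightest spot in the frame, so a blurred global maximum plus frame-to-frame differencing gives location and velocity (the camera index is an assumption, and the reference point here is the image origin):

import cv2

cap = cv2.VideoCapture(0)                  # hypothetical camera index
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
prev = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # blur suppresses single-pixel noise before taking the global maximum
    _, _, _, (x, y) = cv2.minMaxLoc(cv2.GaussianBlur(gray, (5, 5), 0))
    if prev is not None:
        vx, vy = (x - prev[0]) * fps, (y - prev[1]) * fps  # pixels/second
    prev = (x, y)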
I have been trying to track feature points in the video using opencv. However, use of Shi Tomasi Corner Detection along with optical flow algorithm like Lucas Kanade Optical Flow doesn't seem to be able to track objects with faster motion accurately. Any insights on robust feature tracking in videos?
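One widely used robustness trick (a sketch, assuming prev_gray and gray are consecutive grayscale frames) is a forward-backward consistency check: track points forward, track the results back, and drop points that don't return near their origin; a larger window and deeper pyramid also help with fast motion:

import cv2
import numpy as np

lk = dict(winSize=(31, 31), maxLevel=4)   # bigger window/pyramid for fast motion
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                             qualityLevel=0.01, minDistance=7)
p1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None, **lk)
p0r, st2, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, p1, None, **lk)
fb_err = np.abs(p0 - p0r).reshape(-1, 2).max(axis=1)      # forward-backward error
good = (st.ravel() == 1) & (st2.ravel() == 1) & (fb_err < 1.0)
tracked = p1.reshape(-1, 2)[good]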
A monocular camera is to be calibrated; it is mounted on the vehicle and looks forward in the direction of travel. During calibration, the extrinsic parameters (position and rotation between the camera coordinate system and the origin of the vehicle coordinate system, the center of the vehicle's rear axle) should be calculated. The camera parameters are calculated online, i.e., while the camera is taking pictures. The algorithm should automatically calculate the extrinsic parameters from driving-scene images while driving.
I am looking forward to your feedback !
I'm having trouble installing OpenCV with conda. I tried running numerous commands, none of which worked.
For example, when I ran conda install -c anaconda opencv I got this error:
Note: you may need to restart the kernel to use updated packages.
ERROR: Could not find a version that satisfies the requirement opencv (from versions: none)
ERROR: No matching distribution found for opencv
Why is this happening and how can I install OpenCV in Spyder?
Lately, I have been working on image and PDF parsing, and I found that a lot of users rely on modules like Camelot, OpenCV, or Tesseract OCR for that. The only problem is that these modules require installation and are not very flexible.
I was somehow sure that neural networks are used for this.
So basically I'm wondering what you use for solving such a task: modules, or perhaps your own functions?
I am working with the Python programming language; my field is image processing, and I have computed 2-D FFTs via SciPy, NumPy, and OpenCV.
I need to compute the PSF and MTF of an image; could you tell me how to code them in Python?
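One relationship worth coding first: the MTF is the magnitude of the Fourier transform of the PSF, normalized to 1 at zero frequency. A minimal sketch, assuming you already have the PSF as a 2-D array (e.g. an image of a point source):

import numpy as np

def mtf_from_psf(psf):
    # OTF = FFT of the (unit-energy) PSF; MTF = |OTF|, with MTF(0, 0) = 1
    otf = np.fft.fftshift(np.fft.fft2(psf / psf.sum()))
    mtf = np.abs(otf)
    return mtf / mtf.max()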
I am a student, and eye tracking is a part of my project.
Do I need to study CNNs or not?
Can I build it using OpenCV only?
What techniques or algorithms are used?
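For the detection stage, OpenCV alone is enough: it ships Haar cascades for faces and eyes, and CNNs only become necessary for precise gaze estimation. A minimal sketch (the image path is a placeholder):

import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")   # bundled with OpenCV
img = cv2.imread("face.jpg")                         # hypothetical input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in eyes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)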
I am working on a CV project where I am trying to extract key frames from videos. The videos are of bottles with text labels, and the criterion for a key frame in my case is to extract those frames such that, together, they cover all the text on the bottle. So, as you can see, the criterion for choosing key frames is text-driven here.
I know that we generally use frame clustering, shot detection, or histogram comparison to extract key frames, but I am not sure that is the best approach for this particular use case, given that the colour intensity may not vary much from frame to frame (black/white text written on a white label).
Has anyone of you worked on such a problem before, or do you have any pointers as to what could be a better approach?
I'd like to see the output of every layer of a DNN in OpenCV Python. If anyone knows a blog or other resource explaining how to obtain the outputs of individual layers, please post the link in an answer.
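For reference, the cv2.dnn API can return intermediate activations directly: net.forward() accepts a list of layer names. A minimal sketch, assuming net has been loaded with one of the readNet* functions and an input blob prepared with blobFromImage:

import cv2

# net = cv2.dnn.readNet(...) and blob = cv2.dnn.blobFromImage(...) assumed
net.setInput(blob)
layer_names = net.getLayerNames()       # every layer in the network
outputs = net.forward(layer_names)      # one output array per requested layer
for name, out in zip(layer_names, outputs):
    print(name, out.shape)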
I have implemented a neural-network-based mask detection algorithm for different mask colors, and it works with 99% accuracy (around 2 failures in 850 tests), but the CNN-based algorithm is too slow on boards like the Raspberry Pi: around 1-2 FPS at 720p. Should I try PCA/ICA-based techniques?
I am testing several methods for finding the region of interest in hand gesture images. In OpenCV, for example, I found methods like CamShift (for tracking an object of interest) and background-subtraction methods (MOG, MOG2, ...), which are mainly used on video to separate foreground from background and which can also be used when the hand is an object in a video with a complex background. There are also GrabCut and back-projection methods, which can be used for hand postures in a static state. Contours, edge detection, and skin-color methods are other approaches for detecting a hand in an image or video. Finally, I found that Haar cascades can be used as well.
I want to know which algorithm is the best choice for this stage, considering that I use images with complex backgrounds. Some algorithms like GrabCut or back-projection were good, but the most important problem was that I had to manually specify some regions as foreground or background, which is not how it should work.
After choosing a method for the ROI, what are generally the most important features in hand gesture recognition? Which feature extraction method would you suggest that works well with one of the common classifiers (SVM, kNN, etc.) to classify a given image?
Thank you all for your time.
Can a convolutional neural network recognize small targets of only a few pixels? How does its processing speed compare with traditional OpenCV algorithms? I would appreciate some information.
Hello, I have a binary edge-detected picture (using the Canny edge detector). Now I would like to filter out small or unwanted edges (by length, shape, or number of pixels) in Python. Does anyone have suggestions?
thanks in advance
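A minimal sketch using connected components: label the edge pixels and drop components below a size threshold (the file name and threshold value are assumptions to tune):

import cv2
import numpy as np

edges = cv2.imread("edges.png", cv2.IMREAD_GRAYSCALE)    # hypothetical Canny output
n, labels, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
filtered = np.zeros_like(edges)
for i in range(1, n):                                    # label 0 is background
    if stats[i, cv2.CC_STAT_AREA] >= 50:                 # keep edges of >= 50 pixels
        filtered[labels == i] = 255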
I am working on image segmentation using the Python OpenCV package, but I can't work with it because of the error "ModuleNotFoundError: No module named 'cv2'"; the same error is shown in the attached image.
Can anyone suggest how to fix this issue so that the OpenCV package works properly?

- I'm currently working on a project to extract text from document images (like passports and licenses) and store the passport number and driving license number, along with the person's name, in a database.
- I have used Pytesseract for the same.
- Does Pytesseract use any of the Neural Network Algorithms?
- The code with the sample image and output is attached below.
from PIL import Image
import pytesseract

# point pytesseract at the local Tesseract installation (Windows path)
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files (x86)\Tesseract-OCR\tesseract.exe'
im = Image.open('C:/Users/Kiran Lalwani/Desktop/dss/56db21ec-8d4d-4128-889a-948e81eb7127.jpg')
# run OCR with the English language model
text = pytesseract.image_to_string(im, lang='eng')
- Is there any other more efficient method?
- What about Tesseract-OCR or OpenCV or CNN or MATLAB for text extraction?
I tried to install OpenCV using the Anaconda prompt with the commands below, but it does not install. Can anyone help solve this problem?
1. conda install -c conda-forge opencv
2. conda install -c conda-forge/label/broken opencv
Hello,
I am using Python and OpenCV to find the centroids of the blobs in a binary image. I use the cv2.moments() function to identify the centroid when there is only one blob. However, I do not have a way to loop through the blobs when there are multiple objects in the frame. I have tried a contour-based loop, but too many contours are identified. I have also tried Canny edge detection paired with a Gaussian blur to reduce the number of edges picked up. Attached is one of the binary images I am sending to the Python script.
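A minimal sketch of an alternative that avoids contour bookkeeping entirely: cv2.connectedComponentsWithStats labels every blob and returns its centroid and area directly (the file name is a placeholder):

import cv2

binary = cv2.imread("blobs.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
for i in range(1, n):                    # label 0 is the background
    cx, cy = centroids[i]                # centroid of blob i
    area = stats[i, cv2.CC_STAT_AREA]    # useful for filtering tiny blobs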

I am doing my final-year research project on this.
I am calibrating my camera and took 5 images for it. I used OpenCV for the calibration, and I ended up with 1 camera intrinsic matrix, 5 rvecs, and 5 tvecs. I would like to know how to compute a single rotation vector (3x1) and translation vector (3x1) that represent my camera across the 5 images.
Can someone help me with an explanation? OpenCV code would be a great help too.
Thank you.
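Worth noting: each rvec/tvec pair is the pose of the calibration target in that particular image, so a single common pose is only meaningful if the board and camera never moved between shots. Under that assumption, a minimal sketch of averaging them (a chordal mean via SVD handles the rotations):

import cv2
import numpy as np

# rvecs, tvecs: the lists of five (3, 1) arrays from cv2.calibrateCamera
Rs = [cv2.Rodrigues(r)[0] for r in rvecs]
U, _, Vt = np.linalg.svd(sum(Rs))   # project the summed matrices back onto SO(3);
R_mean = U @ Vt                     # valid when the five rotations are close together
rvec_mean = cv2.Rodrigues(R_mean)[0]
tvec_mean = np.mean(tvecs, axis=0)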
Instead of writing the code in MATLAB for this, is there any open-source software I can use for this purpose (apart from OpenCV)? I found a candidate at http://www.opensourcephysics.org/items/detail.cfm?ID=7365, but it does not keep track of the shape. Can anyone recommend software for this?
I am working on a project where I take images of crops and need to analyze them. For this, I have designed a simple spectral camera with different filters that narrow the range of wavelengths. I plan to use ImageJ to analyze the images taken with this camera. I want to compare the images based on each plant's unique spectral reflectance. Can ImageJ pick out these differences; is it capable of that? Are there plug-ins already available for something like this? What are the capabilities of ImageJ in regard to UV/visible/NIR?
Can OpenCV be the savior? If so, how can the captured images be used to measure reflectance?
[image processing]: How do I convert line 58 from MATLAB code to Scilab code?
My original MATLAB program wrote many images (after entering 8 rows and 8 columns) to the specified location.
I'm struggling to do the same thing in Scilab, as I'm not a natural programmer; can you help? File attached: all colour. My operating system is macOS Mojave running on an iMac; the Scilab version in use is 6.0.1 at the time of writing.
Aside: I am using only the image processing module specified on the first line of the program, which works very well in other image-processing programs both on this platform and on Microsoft Windows 10, and on some, but not all, Linux distributions (where the included OpenCV 'bits' don't integrate without too much experimentation).
In the very first frame of the video, I define an ROI by drawing a closed line on the image. The goal is to recognize that ROI in a later frame, but the ROI is not a salient object. It is just a part of an object, and it can deform, rotate, translate, and even be partly outside the frame.
Essentially, this algorithm should be used to reinitialize trackers once they are lost.
I have used a histogram-based algorithm which works somewhat well, but it doesn't "catch" the ROI entirely.
The object is soft and deformable, soft tissue in a way, meaning you can expect deformations and also visual changes due to lighting.
Hello everybody, I hope you're doing fine. I'm working on a project about vehicle detection and counting; I used the Haar cascade classifier provided by the OpenCV library in Python and wrote an algorithm to estimate speed (I'm working in Sublime Text 3). The next step is to build an API to store information about the detected vehicles, perhaps the real-time count of detected vehicles, and a dashboard to visualize this data. Do you have any ideas about how I can do that? Thank you so much.
I am trying to implement a depth computation algorithm on a low-cost single-board computer (ARM ODROID XU4), using C++ with OpenCV (OpenCV only for simple operations such as reading and displaying images). The algorithm is executed and tested on the CPU. In my implementation I use an STL deque to mimic the behaviour of a shift register: each time I'm done processing one pixel, I pop the front of the deque and push a new element at the back. However, the cost of this operation is very high. Am I right in choosing the deque? Note that its size is predefined to be only 8.
Please, how can I resolve this error:
OpenCV(3.4.1) Error: Insufficient memory (Failed to allocate 63489024 bytes)
in cv::OutOfMemoryError, file C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\core\src\alloc.cpp, line 55
OpenCV(3.4.1) Error: Assertion failed (u != 0) in cv::Mat::create, file C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\core\src\matrix.cpp, line 362
Error: OpenCV(3.4.1) C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\core\src\matrix.cpp:362: error: (-215) u != 0 in function cv::Mat::create
here is the code :
import os
import pickle

import cv2
import numpy as np
from scipy.misc import imread   # the mode="RGB" argument is scipy's

def extract_features(image_path, vector_size=32):
    image = imread(image_path, mode="RGB")
    try:
        alg = cv2.KAZE_create()
        kps = alg.detect(image)
        # keep only the strongest keypoints
        kps = sorted(kps, key=lambda x: -x.response)[:vector_size]
        kps, dsc = alg.compute(image, kps)
        dsc = dsc.flatten()
        # pad the descriptor vector to a fixed length
        needed_size = (vector_size * 64)
        if dsc.size < needed_size:
            dsc = np.concatenate([dsc, np.zeros(needed_size - dsc.size)])
    except cv2.error as e:
        print('Error: ', e)
        return None
    print(dsc)
    return dsc

def batch_extractor(images_path, pickled_db_path="features.pck"):
    files = [os.path.join(images_path, p) for p in sorted(os.listdir(images_path))]
    result = {}
    for f in files:
        print('Extracting features from image %s' % f)
        name = f.split('/')[-1].lower()
        result[name] = extract_features(f)
    with open(pickled_db_path, 'wb') as fp:   # pickle requires binary mode
        pickle.dump(result, fp)

extract_features('HR.jpg', vector_size=32)
I am working on eye gaze. Currently I have reached eye-gaze projection using OpenCV and a web camera. Now I want to project the gaze onto the screen (which can be considered imaginary) to determine where exactly the user is looking. Any leads on how to move forward? Thank you.
File "C:/Users/asus/Desktop/Mar1.py", line 30, in <module>
net = cv2.dnn.readNetFromTensorflow(weightsPath, configPath)
error: OpenCV(3.4.1) C:\Miniconda3\conda-bld\opencv-suite_1533128839831\work\modules\dnn\src\tensorflow\tf_importer.cpp:1582: error: (-2) Unknown layer type Sub in op Preprocessor/sub in function cv::dnn::experimental_dnn_v4::`anonymous-namespace'::TFImporter::populateNet
Hello all,
I am new to ResearchGate but feel this will be a good place to discuss project ideas. I am a senior computer science student with a passion for computer vision. I have taken some image processing classes and have some experience using OpenCV in previous projects (face detection). My project has to be research-based. I want to do something computer-vision-based, but I want to extend it with machine learning (neural networks, SVMs, etc.). My advisor suggested LPR (license plate recognition): implementing my own solution with my own data and training my own networks. However, after doing some research, I see that this technology is already in place and that extensive work has already been done in this field. What are some research topics I could pursue where I could contribute new data or a new approach? I understand I am just an undergrad and won't be changing the world with my project, but I would like a topic that is challenging and has perhaps less contributed data, something I could write a technical paper on. I understand it will be challenging; I am not looking for a light project. I want a project that will expose me to some crucial machine learning concepts and make me a strong candidate once I graduate. Any ideas?
In previous versions of OpenCV, there was an option to extract a specific number of keypoints as desired, e.g.:
kp, desc = cv2.SIFT(150).detectAndCompute(gray_img, None)
But in OpenCV 3.1, SIFT and the other "non-free" algorithms were moved to xfeatures2d, so that call gives an error. Kindly tell me how I can set a limit on the number of keypoints extracted using OpenCV 3.1. Thanks!
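For reference, in 3.1 the keypoint budget is passed to the factory function instead (a minimal sketch, assuming the opencv-contrib build is installed):

import cv2

sift = cv2.xfeatures2d.SIFT_create(nfeatures=150)   # cap at 150 keypoints
kp, desc = sift.detectAndCompute(gray_img, None)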
I use OpenCV 2.4.9. I put the .dll files beside the .exe and run my programs; I find these .dll files in the OpenCV directory "opencv\build\x64\vc10\bin".
But to use functions not provided this way, OpenCV must be compiled with the CMake tools, and this method has many pitfalls. Am I doing something wrong? Please help me compile this library.
I have researched this, but unfortunately couldn't find the right approach.
All I have found is to use OpenCV, but how would I integrate OpenCV with NAO?
Any suggestions would be much appreciated.
Hello,
I am building a CBIR system with the Corel database (100 classes of 100 pictures). I have implemented some classical (non-deep-learning) descriptors (SIFT, SURF, HOG, color histogram, HSV histogram, LBP histogram, ORB, Hu moments of the image, GLCM descriptors (contrast, homogeneity, ...)). I also implemented FlannMatcher and BFMatcher for SIFT, SURF, and ORB; OpenCV's compareHist function with its four distances for all the histograms; and NORM_L1/L2/INF for the vectors (Hu & GLCM).
However, I get really poor results and a bad R/P curve. More precisely, it seems to depend heavily on the query. For example, for the bear, the best results (top 50) are reached with GLCM, where I get this:
[image: results with GLCM]
On the other hand, when the query is a playing card (which is rather simple), the results are rather good, at least with some algorithms such as SIFT.
[image: results with SIFT]
I was wondering whether it is normal to have such poor and variable results. Actually, I just used OpenCV functions, so I don't see where I could have gone wrong...
Could it be relevant to make a weighted sum of some descriptors, i.e., normalize the distances, weight them, and sort the combined sum?
Is there another way to improve the results "simply"?
Thank you in advance for your help
ToxTrac is a free Windows program optimized for tracking animals. It uses a computer vision tracking algorithm that is robust and very fast and can handle one or several animals in one or several environments. The program provides useful statistics as output. ToxTrac can be used for fish, insects, rodents, etc.
ToxTrac is currently being used in dozens of institutions and is one of the best tracking programs available for animal studies.
The project is currently being developed by only one person, but there is a large amount of work to be done, so a call for collaboration is open.
What I need is people with knowledge of C++ and expertise in some of the following areas:
• Computer Vision and programming in OpenCV
• Machine Learning (with knowledge of TensorFlow)
• User interface design with QT
Authorship in all related scientific contributions will be shared.
Thank you for your support and patience.
Contact: o_siyeza@hotmail.com
ToxTrac website: https://toxtrac.sourceforge.io
Instruction video: https://youtu.be/RaVTsQ1JwfM
ToxTrac Guestbook: http://pub47.bravenet.com/guestbook/3993433667
Citations:
• Rodriguez, A., Zhang, H., Klaminder, J., Brodin, T., Andersson, P. L. and Andersson, M. (2018). ToxTrac: a fast and robust software for tracking organisms. Methods in Ecology and Evolution. 9(3):460–464.
• Rodriguez, A., Zhang, H., Klaminder, J., Brodin, T., and Andersson, M. (2017). ToxId: an algorithm to track the identity of multiple animals. Scientific Reports. 7(1):14774.
ToxTrac is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
I want to track cars and determine their speed from videos using OpenCV. Which algorithm do you think will provide better results?
I'm currently using an app called IP Camera that sends a live stream through a localhost link when the mobile and PC are connected to the same network. I would like to build a similar application, because I want to add some functionality to it.
To calculate "pixel/mm", principal point (in mm) and focal length (in mm) using openCV's "calibrationMatrixValues" function. The aperture width and aperture height are required as input. Aperture width and aperture height are not provided by manufacturer also. How to find those values?
Thank You :D
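For what it's worth, the aperture width/height here are just the physical sensor dimensions in millimetres. If the datasheet gives a pixel pitch rather than a sensor size, multiply the pitch by the resolution. A minimal sketch (the pitch and resolution are hypothetical; camera_matrix comes from your calibration):

import cv2

pixel_pitch_mm = 0.0014             # e.g. 1.4 um pixels, from the sensor datasheet
w_px, h_px = 1920, 1080
aperture_w = pixel_pitch_mm * w_px  # physical sensor width in mm
aperture_h = pixel_pitch_mm * h_px
fovx, fovy, focal_mm, principal_mm, aspect = cv2.calibrationMatrixValues(
    camera_matrix, (w_px, h_px), aperture_w, aperture_h)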
I wrote a program in C++ with OpenCV for processing three videos for my Master's thesis, but I didn't design and implement a graphical user interface for it. The program received several videos, processed them, and saved the resulting video in the given directory.
Now, my question: I want to write a program that shows video from 6 to 8 cameras on a system with at most 5 cores and processes them at the same time. Firstly, which language is fast enough and suitable for real-time requirements?
Secondly, with which language can I design and create a strong GUI?
And thirdly, which language can communicate with the camera hardware or receive video from the cameras directly?
Which of Java, Linux, C#, C++, or Python is the best?
Which language and which library?
Which language is used to create video-processing programs and their GUIs in big companies?
Are these two concerns dependent and related, or unrelated?
Thanks a lot.
Hi everyone, I want to automatically analyse video of beetle movement. The output should be a matrix with the x/y coordinates (pixels) of the beetle's position over time. I have a problem because I am a beginner in computer vision, so can you give me some advice on how to solve this task?
The analysis is for 1 object; there are more than 400 hours of video from 8 cameras.
Are OpenCV and Python OK for this task?
A screenshot from the video is in the appendix.
Thanks in advance.
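OpenCV/Python is a reasonable fit. A minimal sketch of one standard recipe, background subtraction plus largest-contour centroid, which is often enough for a single animal on a static background (the file name is a placeholder):

import cv2

cap = cv2.VideoCapture("beetle.avi")     # hypothetical video file
bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
positions = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        c = max(contours, key=cv2.contourArea)
        m = cv2.moments(c)
        if m["m00"] > 0:                 # centroid of the largest moving blob
            positions.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))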

After installing TensorFlow and the necessary packages, I installed OpenCV, and when I try to run Spyder the following error occurs. It would be very helpful if anyone could suggest a solution. Thanks in advance.
EigenFaces Face Recognizer
FisherFaces Face Recognizer
Local Binary Patterns Histograms (LBPH) Face Recognizer
I have applied a trained cascade detector, KNN, feature matching, and geometric transform estimation in MATLAB, OpenCV & Python.
Can anyone suggest another method to detect the symbol?

I am interested in segmenting an image using conditional random fields and would like to do it in OpenCV. Is there any inbuilt tool for that, or anything compatible with OpenCV?
I want to know about face detection. I used the Vision API framework for face detection. Is it good or not? Please tell me how it compares.
Hi, everyone,
I built a CNN-LSTM model with Keras to classify videos. The model is already trained and everything works well, but I need to know how to show the predicted class of the video in the video itself.
I searched a lot on the internet, but found nothing... I don't know whether I can do this using the OpenCV library, or some other one.
Here is an example of what I want, in this YouTube video:
Thanks for the attention!
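This can indeed be done with OpenCV: draw the label on each frame with cv2.putText before writing it out. A minimal sketch (file names are placeholders, and label is assumed to hold your model's prediction):

import cv2

cap = cv2.VideoCapture("input.mp4")
w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("labeled.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      25.0, (w, h))      # writer size must match the frames
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)     # green text in the top-left corner
    out.write(frame)
out.release()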
I am working on techniques to obtain high-resolution reconstructed images of license plates. The source of these images is CCTV video footage.
Hello, I am trying to use the 2-D wavelet transform for an image-processing task in OpenCV C++. I found the library at http://wavelet2d.sourceforge.net/ which looks really awesome. I am using Visual Studio 2017 and followed every single word of the instructions on their website. First, I added the header file wavelet2d.h via this menu:
Project| Properties| VC++ Directories| Include Directories
And then I added the same path to
Project| Properties| C/C++ | General| Additional Include Directories. Afterward I add the folders containing “wavelet2d.dll” and “libfftw3-3.dll” to the following menus:
Project| Properties| VC++ Directories| Library Directories
Project| Properties| C/C++ | Linker| Additional Library Directories Finally, I add “wavelet2d.lib” to the following menu:
Project| Properties| C/C++ | Linker| Input| Additional Dependencies.
Well, it seems that it should work, but it doesn't, and I get the following error messages when I try to compile the project:
LNK2019: unresolved external symbol "__declspec(dllimport) void * __cdecl swt_2d(class std::vector<class std::vector<double,class std::allocator<double> >,class std::allocator<class std::vector<double,class std::allocator<double> > > > &,int,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::vector<double,class std::allocator<double> > &)" (__imp_?swt_2d@@YAPEAXAEAV?$vector@V?$vector@NV?$allocator@N@std@@@std@@V?$allocator@V?$vector@NV?$allocator@N@std@@@std@@@2@@std@@HV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@2@AEAV?$vector@NV?$allocator@N@std@@@2@@Z) referenced in function main
and also this one:
LNK1120: 1 unresolved externals
Can anyone help me with solving these errors? What's wrong? Thank you so much.
I have a stack of brain MRI with tumor present in some slices. I wanted to remove the tumor from that slices and fill these pixels with relevant values, such that it will be like non-tumor or normal brain MRI.
I tried OpenCV's image inpainting function for this, but the results are not good enough. Please suggest a direction I could follow.
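For reference, a minimal OpenCV inpainting sketch, assuming you have a binary mask of the tumor region (file names are placeholders); classical inpainting struggles with large regions, which may explain the poor results:

import cv2

slice_img = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
tumor_mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
# INPAINT_TELEA and INPAINT_NS are the two built-in methods
restored = cv2.inpaint(slice_img, tumor_mask, inpaintRadius=5,
                       flags=cv2.INPAINT_TELEA)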

Suppose I have a structure in the MATLAB workspace that contains several parameters and matrices of different data types (e.g., int or double). I need to save this struct as a single .dat file (i.e., an array whose size equals the overall memory occupied by all the variables) so that its contents can be read in a C++ program using fopen and fread.
Useful advice is appreciated.
Regards.
Helal
I want to find the total number of distinct colors in an image. For example, if an image has red, green, blue, and yellow, my answer should be 4.
Please guide me through this problem.
Any help will be appreciated.
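If "distinct colors" means unique RGB triplets, NumPy can count them directly; for a small palette like your example, you may want to quantize first so near-identical shades collapse. A minimal sketch (the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread("image.png")                 # hypothetical input
pixels = img.reshape(-1, 3)                   # one row per pixel (BGR)
n_colors = len(np.unique(pixels, axis=0))     # count of exact unique triplets
print(n_colors)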
Actually, I have the header file in my OpenCV release (<opencv2/dnn.hpp>), but it is not implemented.
I have a Raspberry Pi 3 with a camera module, and I want to send pictures captured using OpenCV to an Ubuntu server over TCP or UDP.
Any ideas?
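A minimal sketch of the sending side over plain TCP: encode the frame as JPEG and length-prefix it so the server knows how many bytes to read (the server address and port are hypothetical):

import socket
import struct

import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
ok, jpg = cv2.imencode(".jpg", frame)        # compress before sending
data = jpg.tobytes()
with socket.create_connection(("192.168.1.10", 5000)) as s:
    s.sendall(struct.pack(">I", len(data)))  # 4-byte big-endian length prefix
    s.sendall(data)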
I want to use the RGB and depth video generated by a Kinect (not the v2 version) and extract real-world coordinates to map them onto a point cloud. I do not possess the Kinect device, only the data. I am relying primarily on OpenCV for this but can use another open-source tool. I have extracted frames from both videos; the frames are 640x480x3 NumPy ndarrays. Can anyone provide some pointers, suggestions, or existing solutions, please? Thanks a ton!
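Given per-pixel depth in metres and the depth camera's intrinsics, each pixel back-projects with the pinhole model. A minimal sketch; the intrinsics below are commonly quoted Kinect v1 calibration values, but treat them as assumptions to verify against your own device:

import numpy as np

fx, fy, cx, cy = 594.21, 591.04, 339.5, 242.7   # approximate Kinect v1 intrinsics

def depth_to_points(depth):        # depth: (480, 640) array in metres
    v, u = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
    z = depth
    x = (u - cx) * z / fx          # pinhole back-projection
    y = (v - cy) * z / fy
    return np.dstack((x, y, z)).reshape(-1, 3)   # N x 3 point cloud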
I'm estimating the distance using a chessboard and the solvePnP function from OpenCV.
I heard that to get a more accurate distance, I shouldn't hold the chessboard parallel to the image plane, because a real change of distance in front of the camera does not produce a proportional change of scale in the image, owing to perspective.
For example, if I move the chessboard 1 cm away from the camera, the change in the chessboard's size in the image will probably not be very noticeable, so the estimated distance won't be very accurate in that situation.
Is that true, and are there any books or references about this?
I finished a simple demo of 2-D rigid image registration in Python using only OpenCV, NumPy, and SciPy, and found it very fast. But when I used Python for a 2-D non-rigid registration algorithm such as FFD with B-splines, it was very slow. I decided to turn to C++ and parallelize the algorithm. However, I find there is no C++ tool or library as convenient as NumPy for operating on multidimensional matrices or tensors. I also could not install TBB on my machine (Windows 10, VS2015), although I tried many times, so I used PPL (Windows-only) and built a multidimensional tensor in a naive way (nesting std::vector<T> from the STL), but I don't expect it to work well or quickly.
I googled some papers and found many algorithms that perform very well, but there are no details about which tools, languages, etc., were used to achieve that performance.
I wonder which C++ libraries or tools could help me construct a multidimensional tensor and speed up image registration. I also tried to install ITK, but it failed; I don't know why. I sent emails to some specialists to ask for help, but the issue is still not fixed. I guess something went wrong when building the ITK source code with the VC++ compiler. I hope someone can give me some advice or source code for image registration.
Thanks a lot!!!
Thanks a lot!!!
I have a bunch of depth images (see attachment). I want to perform skeletal tracking on them. The data was captured with an ASUS Xtion sensor, but I only have access to the depth images, not the videos. Is there a way to perform skeletal tracking on these depth images in MATLAB/OpenCV or something else?
Hello everyone, I'm a beginner. I'm reading a paper where graph cut is used after segmentation for a more accurate result. Please guide me; I have no idea how graph cut is used after segmentation, or about graph cut itself. Please give a few references. My work platform is OpenCV + Visual Studio 2010 + C++. Thanks.
I applied a Butterworth low-pass filter, but it didn't work well. The noise appears as horizontal periodic stripes in the center of the image.
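Periodic noise is usually easier to remove with a notch filter than with a low-pass filter: the stripes show up as isolated peaks in the Fourier spectrum, which can be zeroed before transforming back. A minimal sketch; the peak offsets below are hypothetical and should be located by inspecting the magnitude spectrum:

import cv2
import numpy as np

img = cv2.imread("noisy.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
F = np.fft.fftshift(np.fft.fft2(img))
spectrum = np.log1p(np.abs(F))            # inspect this to find the noise peaks
h, w = img.shape
mask = np.ones((h, w), np.float32)
vv, uu = np.ogrid[:h, :w]
for du, dv in [(0, 40), (0, -40)]:        # hypothetical peak offsets from center
    mask[(uu - (w // 2 + du)) ** 2 + (vv - (h // 2 + dv)) ** 2 <= 25] = 0
clean = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))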

Dear OpenCV Users,
Does anyone have practical results for how much speed improvement can be achieved by using OpenCV's CascadeClassifier::detectMultiScale with the TBB library?
Thanks in advance.
Hi,
I am working on a project that needs to track multiple objects in real time, using the minimum time and processing cost. The goal of the tracking process is to extract each object's location, speed, type, and direction.
I'm using OpenCV on Android in my implementation. Could anyone help me decide which tracker would give the best performance in this case?
Thanks
I have downloaded the train station dataset from the following source.
They provide the images in raw format. I have never used OpenCV, so how can I convert these raw files into RGB images using MATLAB?
Hi. I am currently working on a medical image processing project that segments the coronary artery blobs from axial CT slices of the human heart and converts the segmented coronary blob slices into a 3D coronary artery model. I wish to extend my project by mapping the 3D model back to the 2D axial input slices, i.e., by clicking on a point in the 3D model, it should display the appropriate CT slice and also point to the corresponding position in that slice. Are there any methods (techniques) or software available to do this? If so, how?
I am using OpenCV to implement this project.
I am trying to find the angle, with respect to a horizontal line, of a line detected by HoughLinesP in OpenCV. I have the start and end points of the line from HoughLinesP.
However, I am getting strange results when I use atan2.
Basically, I have multiple lines, all not starting at the same position and I want to find their orientation/ angle w.r.t to a horizontal line.
Here is the code snippet-
Point p1, p2;
p1 = Point(l[0], l[1]);
p2 = Point(l[2], l[3]);
// calculate the angle in radians; note the image y-axis points down,
// so angles appear mirrored compared with the usual math convention
double angle = atan2(p1.y - p2.y, p1.x - p2.x);
double angles = angle * 180.0 / CV_PI;   // convert to degrees
cout << "line coordinates are " << l << endl;
cout << "Angles are " << angles << endl;
The images and the angles obtained are attached. I want to find the angles of the blade lines with respect to a horizontal line through the hub.
Should I instead use acos of the dot product between the normalized line direction and a horizontal unit vector cv::Vec2d(1, 0)? Any help is appreciated. Thank you in advance!


I need help with OpenCV. I have six 1200x1 Mat objects/variables, and I want to make a single Mat variable that is 1200x6. How do I do this? For example, I have six Mat objects a, b, c, d, e, f of size 1200x1, and now I want A = [a, b, c, d, e, f], which is of size 1200x6. Please help.
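OpenCV's hconcat does exactly this, concatenating same-height matrices side by side (shown here in Python with stand-in data; the C++ cv::hconcat call mirrors it):

import cv2
import numpy as np

cols = [np.random.rand(1200, 1).astype(np.float32) for _ in range(6)]  # stand-ins for a..f
A = cv2.hconcat(cols)
print(A.shape)   # (1200, 6)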
I have to add structured/block noise to an image. How can I change the amplitude value in an image using OpenCV?
I need the blob size to stay fixed and not be affected much by illumination.
Hi everyone, I want to research car counting using OpenCV C++ with MFC. I have been trying, but it is very hard. I would appreciate any help. Thanks, all. Contact gmail: doducchien3795@gmail.com
I have to continuously monitor the angle formed by the blade of a wind turbine w.r.t an imaginary horizontal axis. I have looked at Kinovea and Tracker as possible options. With Kinovea, I need some sort of marker on the blade and hub to track the angle, which is not possible. Besides this, we cannot export the tracked data into a spreadsheet. And with Tracker (OSP), it isn't possible to get a stream from a webcam.
Can someone please suggest another software that will be helpful for the same?
- should be able to get a feed from a webcam.
-should provide tracking of multiple angles of different turbines and export to a spreadsheet.
- should have some sort of perspective filter to correct distortions due to the angled position of the web camera.
- preferably open source, but other options can be looked into.
The final option is using OpenCV and coding in C++. However, here I face the problems of how to find the angle with respect to an imaginary horizontal axis, how to handle an angled camera, and what to do when there are multiple turbines to be detected.
Any help/ suggestions would be greatly appreciated. Thank you.
Update-
Here is a more defined version of my question-
An ideal wind energy farm will have all the turbines rotating with the same blade angle*, in a similar fashion. In practice, the blades of different turbines spin at variable speeds; as a result, the blade angle of every wind turbine is different. Considering a case of 4 wind turbines, each placed 100 meters apart and forming blade angles of θ1, θ2, θ3, and θ4, we can use OpenCV to monitor the blade angle of each turbine with suitable computer vision algorithms, taking into account the distance, location, and other properties of the webcam used to monitor them. Computer vision comes into play when the camera is not located directly in front of the turbine, but at an angle and at some distance from it. The idea is to get an accurate value of the blade angles formed.
*Blade Angle(here)- the angle formed between the first blade and an imaginary horizontal axis, measured in an anti-clockwise direction.
I hope this provides better clarity.
In OpenCV, I have the following methodology planned:
get image/frame - Canny edge detection - Hough line transform to find lines - recognize the blade lines - find the blade angles - go to the next frame.
My problem here is that I don't know how to recognize only the blade lines after finding the Hough lines. Another problem I face is how exactly to construct an imaginary horizontal line through the hub to measure the blade angle (see the sketch after this post).
Do you have any thoughts on this? Thanks a lot. Any help is appreciated.
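One heuristic worth trying, sketched below: keep only the Hough lines whose extension passes close to the hub (the hub position is an assumption, e.g. obtained once by hand or by circle detection), then measure each surviving line against a horizontal through the hub; the threshold values are placeholders to tune:

import cv2
import numpy as np

hub = (320, 240)                                  # hypothetical hub pixel position
img = cv2.imread("turbine.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                        minLineLength=60, maxLineGap=10)
blade_angles = []
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        # perpendicular distance from the hub to the infinite line
        d = abs((y2 - y1) * hub[0] - (x2 - x1) * hub[1] + x2 * y1 - y2 * x1) \
            / np.hypot(x2 - x1, y2 - y1)
        if d < 10:                                # passes near the hub: blade candidate
            # negate dy because the image y-axis points down; angle is
            # measured anti-clockwise from the horizontal through the hub
            blade_angles.append(np.degrees(np.arctan2(-(y2 - y1), x2 - x1)) % 360)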
In the case of a circle pattern:
How do I determine the size of, and the distance between, the circle items in the pattern?
What criteria should be considered to get better calibration results using C++/OpenCV?
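For detection itself, OpenCV only needs the grid layout; the physical circle spacing enters through the object points you later pass to calibrateCamera. A minimal sketch of the detection step (the pattern dimensions, spacing, and file name are assumptions):

import cv2

pattern = (4, 11)          # circles per row/column of an asymmetric grid
spacing_mm = 20.0          # centre-to-centre distance, used for the object points
img = cv2.imread("pattern.png", cv2.IMREAD_GRAYSCALE)
found, centers = cv2.findCirclesGrid(img, pattern,
                                     flags=cv2.CALIB_CB_ASYMMETRIC_GRID)

A common rule of thumb is to keep the circles large enough to detect reliably at your working distance while the grid still fills a good portion of the field of view across the calibration images.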
I am working with OpenCV in Java to build an ALPR system, and I need to separate the "X" from the actual plate's frame so I can then take the letter and read it, but I haven't been able to figure out what else to do. This is what I've tried:
1) Otsu threshold.
2) Watershed.
You can see that even in the distance transform the "X" is still merged, and after thresholding this image I can't find any value that separates the "X" without damaging the other letters.
Hope somebody can help me with this.
Thanks in advance!



Hello,
I am working with C++ (OpenCV), and I want to compute the runtime of my method to compare it with other methods. I use clock_t tStart = clock(); and printf("Time taken: %.4fs\n", (double)(clock() - tStart)/CLOCKS_PER_SEC);.
The problem is that I don't know where to put the clock start: before or after image reading and preprocessing? The same question applies to the clock end.
Thank you
Hello,
I want to decompose a homography matrix in OpenCV.
In OpenCV 3.0 and 3.1, the decomposeHomographyMat() function is used for decomposing a homography matrix, but it raises an unhandled exception.
Can anybody help me with how to use this function?
Thank you,
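For reference, a minimal Python sketch of the call (the C++ API is analogous); it needs the camera intrinsic matrix alongside the homography and returns up to four candidate solutions, from which the physically valid one must be selected:

import cv2
import numpy as np

H = np.eye(3)                                  # stand-in: use your findHomography result
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                # hypothetical camera intrinsics
n, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)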
Hello,
Please, I am using OpenCV 2.4.9 on Ubuntu 14.04 and I am trying to read/write an uncompressed video, but I get a segfault in cvQueryFrame(capture) when I try to open a raw video. Can anyone help me?
Thank you
Is it possible to implement watermarking algorithms with OpenCV?
I have converted my MATLAB code for background subtraction to C++ code using MATLAB coder. The integration of C++ code with OpenCV library is done in Visual Studio 2015.
I am getting error "Exception thrown at 0x00007FF61EB25260 in myProject.exe: 0xC0000005: Access violation reading location 0xFFFFFFFFFFFFFFFF."
If there is a handler for this exception, the program may be safely continued.
Dear all,
I would like to extract all the dI/dV curves of a CITS (a series of dI/dV curves) measured by the WSXM program, for further analysis with MATLAB and other programs. I tried MATLAB's fopen to read the binary file, but it does not work. Do you have any trick to overcome this problem? If yes, please share it with me. Thank you in advance.
Regards
I'm new to the field of computer vision, and I want to solve the following task (preferably with OpenCV and C#, but other solutions, e.g. with Scilab, are also gratefully welcome).
I got the following error:
OpenCV Error: The function/feature is not implemented (HOG cascade is not supported in 3.0) in read, file /tmp/opencv3-20160822-4825-e1u8p8/opencv-3.1.0/modules/objdetect/src/cascadedetect.cpp
I successfully found the hand contour thanks to the findContours function in OpenCV, by setting some arguments. But I do not understand how, mathematically, OpenCV can find the outer contour of the hand. I know that it saves the points in an array, but how does it decide which of the hand's points to join? My question is about the concept behind this function and its hierarchy settings. Note that in the case of a hand we do not need to find all the contours, so the first hierarchy level will actually work. I also found the idea of the reference paper somewhat similar to the Freeman chain code, but more advanced. I don't know the concept behind finding connected components in this method. The download link of the paper is provided.
I really appreciate your answering this question. Thank you all.
OpenCV is normally used for face recognition applications. I want to implement an emotion recognition algorithm on top of OpenCV. How should I do this? Also, which version of OpenCV is suitable for Windows 7?
How do I calculate Haralick texture features in OpenCV?
I calculated them in MATLAB, but I am unable to find any code for OpenCV.
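OpenCV has no built-in GLCM/Haralick implementation; a common workaround is to mix in another Python library. A minimal sketch using mahotas (which is an assumption about your toolchain; the file name is a placeholder):

import cv2
import mahotas

img = cv2.imread("texture.png", cv2.IMREAD_GRAYSCALE)
# 13 Haralick features computed for 4 directions -> (4, 13) array;
# averaging over directions gives rotation-robust features
features = mahotas.features.haralick(img).mean(axis=0)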
Hi,
I have the OpenCV and OpenMPI compiler and linker flags set up for use with CMake. I want to include OpenACC compiler flags too, in order to compile an OpenMPI-enabled C++ OpenCV program with OpenACC. Thanks in advance.
I have implemented a program to convert an image from ADTF framework to Opencv and back to ADTF. I have passed this image to a video display plugin.
The program runs without error, but I see only a black screen instead of the image.
Please let me know what the issue could be and how to rectify it.
Thanks in advance
I have an image transmitted to the input (pin type: cVideoPin) of my ADTF plugin. I have also created a buffer that can hold the input image. I need to make the image in this buffer available for OpenCV computation, preferably through a Mat container.
I need suggestions on how to make the image in the ADTF buffer compatible with an OpenCV Mat.
Thanks in advance.
I have a Raspberry Pi 3 with a Raspbian system image installed that includes OpenCV and OpenNI.
Is there any possible way to install the MATLAB 2016a and Simulink Support Packages for Raspberry Pi 3 *without deleting* the content of my SD card?
P.S. I already tried installing the Support Packages on an SD card and it worked fine (i.e., I was able to run the user-green-light sample program), but all the information contained in the *SD card was deleted*.
Again, I need to install the MATLAB 2016a and Simulink Support Packages for Raspberry Pi 3 *without deleting the previous content of my SD card*.