Robot Vision - Science topic

Explore the latest questions and answers in Robot Vision, and find Robot Vision experts.
Questions related to Robot Vision
  • asked a question related to Robot Vision
Question
4 answers
What new occupations, professions and specialties in the workforce are being created, or will soon be created, in connection with the development of generative artificial intelligence applications?
The recent rapid development of generative artificial intelligence applications is increasingly changing labor markets, raising the degree to which work performed within various professions is automated. On the one hand, generative artificial intelligence technologies are finding more and more applications in companies, enterprises and institutions, increasing the efficiency of certain business processes and supporting employees working in various positions. At the same time, there are growing considerations about the possibility of dark futurological scenarios coming true, in which many jobs are completely replaced by autonomous AI-equipped robots, androids, or systems operating in cloud computing. On the other hand, these dark scenarios are countered by more positive projections of labor market development, in which new professions are created thanks to the implementation of generative artificial intelligence technology in various areas of economic activity. Which of these two scenarios will be realized to a greater extent is currently not easy to predict.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What new professions, occupations and specialties in the workforce are being created, or will soon be created, in connection with the development of generative artificial intelligence applications?
What new professions will soon be created in connection with the development of generative artificial intelligence applications?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Relevant answer
Lutsenko E.V., Golovin N.S. The revolution of the beginning of the XXI century in artificial intelligence: deep mechanisms and prospects // February 2024, DOI: 10.13140/RG.2.2.17056.56321, License CC BY 4.0, https://www.researchgate.net/publication/378138050
  • asked a question related to Robot Vision
Question
6 answers
Will the dark scenarios of the futurological visions known from science fiction films, in which autonomous robots equipped with artificial intelligence are able to reproduce and self-improve, come true in the future?
The theoretical foundations of artificial intelligence have been developing since the 1960s. Since then, dark futurological scenarios, in which autonomous robots equipped with artificial intelligence are able to reproduce themselves, self-improve, become independent of human control and become a threat to humans, have been created in science fiction literature and film. Nowadays, given the dynamic development of artificial intelligence and robotics technologies, these considerations have become topical again.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Will the dark scenarios of futurological visions known from science fiction films, in which autonomous robots equipped with artificial intelligence are able to reproduce and self-improve, come true in the future?
Will artificial intelligence-equipped autonomous robots that can reproduce and self-improve emerge in the future?
And what is your opinion about it?
What is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Relevant answer
Answer
While the future remains uncertain, it's essential to consider the potential of autonomous robots with AI. Although they won't be popping out little robot babies any time soon, significant advancements are likely. Self-improvement is already evident in AI research, but let's hope they won't outsmart us entirely! As responsible developers, we must prioritize safety and ethics to avoid any "I, Robot" scenarios. Remember, the future is what we make it, so let's aim for a world where robots and humans coexist harmoniously – a future where even the Jetsons would envy our technological prowess!
  • asked a question related to Robot Vision
Question
27 answers
Can autonomous robots equipped with artificial intelligence that process significantly larger amounts of data and information faster than humans pose a significant threat to humans?
Will autonomous robots equipped with artificial intelligence, which process much larger amounts of data and information faster than humans, only be useful, friendly and helpful to humans, or could they also be a significant threat?
Robots equipped with artificial intelligence are being built to create a new kind of useful, friendly and helpful machine for humans. Already, the latest generations of microprocessors in computers, laptops and smartphones have computing and data processing capacities that exceed those of the human brain at certain tasks. When new generations of artificial intelligence are implemented on computers with high computing and multi-criteria data processing power, intelligent systems are obtained that can process large amounts of data much more quickly, and with a higher level of objectivity and rationality, than what is referred to as natural human intelligence. AI systems are already being developed that process much larger volumes of data and information faster than humans. If such artificial intelligence systems, and autonomous robots equipped with this technology, were to escape human control due to certain errors, a new kind of serious threat to humans could arise.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can autonomous robots equipped with artificial intelligence that process significantly larger amounts of data and information faster than humans pose a significant threat to humans?
What do you think?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Relevant answer
Answer
Dear Dariusz Prokopowicz, I found this fine article relevant to your new question.
Decoding the business of brain–computer interfaces
Brain–computer interfaces could one day allow people with severe paralysis to control robotic arms or generate synthetic speech solely by thinking. One of the major stumbling blocks for the technology is detecting brain activity — it looks as if this can’t be done at high-enough resolution with electrodes on the scalp. Electrodes implanted right into the brain can pinpoint activity to a few dozen neurons, but that comes with safety concerns. Last year, those concerns led the US Food and Drug Administration to reject a human-trials application from Elon Musk’s company Neuralink...
  • asked a question related to Robot Vision
Question
3 answers
Hello,
I have several sensors which are interfaced via ROS and synchronized with ROS time (ROS1). The sensors and their nodes are fully functional. Each sensor does some processing after it senses a detection in its environment, before eventually timestamping this data in ROS. Since the sensors are of different kinds, and each has its own processing before timestamping its data in ROS, a delay is expected between the detection and the timestamping, and also between the different sensors.
I am interested in the delay that takes place between the sensor detecting an event and eventually timestamping this data in ROS. The image shows the different processing for each sensor that takes place before timestamping.
When searching for this specific problem not much could be found, so any advice would be appreciated.
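One practical starting point is a minimal ROS1 Python sketch (the topic name and message type below are placeholders for your sensors) that measures the stamp-to-receipt part of the delay by comparing each message's header stamp against the node's current time. The sensor-internal detection-to-stamp delay is not visible this way; it usually has to come from hardware-triggering a known event or from the vendor's datasheet.
#!/usr/bin/env python
# Log the difference between receipt time and the header stamp that the
# driver assigned. /imu/data and sensor_msgs/Imu are placeholders.
import rospy
from sensor_msgs.msg import Imu

def callback(msg):
    # Transport + queueing portion of the delay only; the detection-to-
    # stamp portion inside the sensor is not observable from here.
    delay = (rospy.Time.now() - msg.header.stamp).to_sec()
    rospy.loginfo("stamp-to-receipt delay: %.6f s", delay)

rospy.init_node("latency_probe")
rospy.Subscriber("/imu/data", Imu, callback, queue_size=100)
rospy.spin()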
  • asked a question related to Robot Vision
Question
20 answers
At present, the economies of developed countries are entering the period of the fourth technological revolution known as Industry 4.0.
The previous three technological revolutions:
1. The industrial revolution of the eighteenth and nineteenth centuries, driven mainly by the industrial application of the invention of the steam engine.
2. The era of electricity, from the late nineteenth century to the early twentieth century.
3. The IT revolution of the second half of the twentieth century, driven by computerization, the widespread use of the Internet and the beginnings of robotization.
The current fourth technological revolution, known as Industry 4.0, is motivated by the development of the following factors:
- artificial intelligence,
- cloud computing,
- machine learning,
- Big Data database technologies,
- Internet of Things.
During every previous technological revolution the same question was asked: will the new technology take away people's jobs? Yet economies developed, changed structurally, and labor markets returned to balance. Periodically, short-term economic crises appeared, but their negative economic effects, such as falling income and rising unemployment, were quickly reduced by active state intervention.
It seems to me that automation and robotization, IT, artificial intelligence and machine learning will change labor markets, but this does not necessarily mean a large increase in unemployment. New professions, occupations and specialties will be created in these areas of knowledge and technology. Someone, after all, must design, create, test, control and implement these machines and robots into production processes.
Therefore, I am asking you:
Will technological development based on automation, robotization, IT, artificial intelligence and machine learning increase unemployment in the future?
Please reply. I invite you to the discussion
Relevant answer
Answer
We value AI and other technological approaches and apply them for a better environment and for humanity, love and affection; but the human must remain at the center, since human intelligence is sentiment-oriented and this would otherwise be lost or decay. Thus, hybrid approach paradigms should be adopted.
  • asked a question related to Robot Vision
Question
1 answer
FMC-9: How is long-term memory (LTM) in the visual cortex retrieved in order to rebuild the image in short-term memory (STM) for perception and reasoning?
Relevant answer
Answer
My guess is that prefrontal cortex STM sets up 'pointers' to the relevant sensory cortices, thus stimulating them. The older work from Miyashita points in that direction.
  • asked a question related to Robot Vision
Question
60 answers
Will the development of artificial intelligence, e-learning, Internet of Things and other information technologies increase the scope of automation and standardization of didactic processes, which could result in the replacement of a teacher by robots?
Unfortunately, there is a danger that, due to the development of artificial intelligence, e-learning, machine learning, the Internet of Things, etc., technology may replace the teacher in the future. This will not happen in the next few years, but the scenario cannot be excluded over a horizon of several decades. On the other hand, the work of the teacher is creative work and a form of social education. It is currently assumed that artificial intelligence will not be able to fully replace a human teacher, because a machine cannot be taught artistry, social sensitivity, emotional intelligence, empathy, etc.
Do you agree with me on the above matter?
In the context of the above issues, the following question is valid:
Will the development of artificial intelligence, e-learning, Internet of Things and other information technologies increase the scope of automation and standardization of didactic processes, which could result in the replacement of a teacher by robots?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Relevant answer
No robot can replace the role of the teacher in the teaching-learning process. The teacher thinks and has a memory superior to that of any robot.
  • asked a question related to Robot Vision
Question
17 answers
Will one of the products of the integration of Industry 4.0 information technologies be the creation of fully autonomous robots capable of self-improvement?
Will the combination of artificial intelligence, machine learning, robotics, the Internet of Things, cloud computing, Big Data databases that automatically acquire information from the Internet, and possibly other advanced information technologies typical of the current Industry 4.0 technological revolution make it possible to create fully autonomous robots capable of self-improvement?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Relevant answer
Answer
During the SARS-CoV-2 (Covid-19) coronavirus pandemic, autonomous robots were used, for example, in the infectious diseases departments of hospitals to help care for people suffering from Covid-19, and in city parks and other public places to monitor citizens' compliance with specific anti-pandemic safety rules. The pandemic could therefore accelerate the improvement of technologies for building autonomous robots equipped with artificial intelligence, machine learning and other Industry 4.0 solutions.
Greetings,
Dariusz Prokopowicz
  • asked a question related to Robot Vision
Question
25 answers
In which branches of industry is the development of robotics, work automation and the implementation of artificial intelligence into production processes, logistics, etc. currently the most dynamic?
Please reply
I invite you to the discussion
Relevant answer
Answer
During the SARS-CoV-2 (Covid-19) coronavirus pandemic, robotics developed effectively in the automation of procurement and delivery processes in logistics centers. In addition, during the pandemic, robots were used in the infectious disease departments of hospitals to help care for people suffering from Covid-19. Robots were also used in shopping malls, city parks and other public places to check, for example, whether citizens were wearing protective masks and maintaining an appropriate social distance.
Best regards,
Dariusz Prokopowicz
  • asked a question related to Robot Vision
Question
15 answers
What kind of scientific research dominates in the field of the development of automation and robotics?
Please provide your suggestions for a question, problem or research thesis concerning the development of automation and robotics.
Please reply.
I invite you to the discussion
Thank you very much
Best wishes
Relevant answer
Answer
In the field of determinants of the development of automation and robotics, I propose the following research topic: analysis of the development of applications of robotics, artificial intelligence, machine learning, etc. for improving anti-pandemic safety systems, i.e. for limiting subsequent waves of infection during the SARS-CoV-2 (Covid-19) coronavirus pandemic, and for improving pandemic risk management and crisis management systems in public institutions, health care institutions, enterprises and corporations.
Best regards,
Dariusz Prokopowicz
  • asked a question related to Robot Vision
Question
25 answers
Dear Researchers, Scientists, Friends,
Will intelligent robots replace humans in all difficult, cumbersome jobs and professions as part of the progress of civilization?
Will robotization, in the course of civilizational progress, replace humans in all difficult, arduous jobs and professions?
Artificial intelligence technology is developing rapidly and finding more and more applications. More and more companies and enterprises are implementing AI-based information systems and applications in their businesses. On the other hand, there are various risks arising from the improper use of AI applications and agents.
Artificial intelligence technology has been rapidly developing and finding new applications in recent years. I have described the main determinants, including potential opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please feel free to respond,
I invite you all to join the discussion,
Thank you very much,
Best wishes,
I would like to invite you to join me in scientific cooperation,
Dariusz Prokopowicz
Relevant answer
Answer
Dear Fatema Miah,
Yes, you gave another example of a perfect application of robots, i.e. in conditions that are difficult for human work. Of course, robotics should be developed in a variety of applications, but only under full human control.
Thank you, Best wishes,
Dariusz Prokopowicz
  • asked a question related to Robot Vision
Question
12 answers
Are there futurological estimates of when autonomous humanoid robots will be mass-produced?
When will autonomous humanoid robots be mass-produced in series, realizing the futurological visions known from such science fiction novels and movies as "I, Robot", "A.I. Artificial Intelligence", "Ex Machina", "Chappie", "Bicentennial Man", "Star Wars", "Star Trek", etc.?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Relevant answer
Answer
...To increase the autonomy of humanoid robots, the visual perception must support the efficient collection and interpretation of visual scene cues by providing task-dependent information.... Grotz, M., Habra, T., Ronsse, R., & Asfour, T. (2017, September). Autonomous view selection and gaze stabilization for humanoid robots. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1427-1434). IEEE.
  • asked a question related to Robot Vision
Question
5 answers
I am currently investigating means to assess human interaction interest on a mobile robotic platform before approaching said humans.
I already found several sources which mainly focus on movement trajectories and body poses.
Do you know of any other observable features which could be used for this?
Could you point me to relevant literature in this field?
Thanks in advance,
Martin
Relevant answer
Answer
You can read this research; I hope it will be useful for you. Good luck.
  • asked a question related to Robot Vision
Question
4 answers
Can someone provide me with information on how to control a physical robot arm with V-Rep?
I am new to robotics and I want to link my V-Rep simulation to a real KUKA arm
Relevant answer
Answer
Zeashan Khan
thanks!
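For others landing on this thread, a minimal sketch of the simulation side using V-REP's legacy remote API for Python (the vrep.py bindings shipped with V-REP must be importable; 'joint1' is a placeholder joint name). Forwarding the same joint targets to a real KUKA controller, e.g. via RSI or FRI, is a separate, vendor-specific step not shown here.
import vrep  # legacy remote API bindings shipped with V-REP

# Connect to a V-REP instance with a continuous remote API server on
# port 19999 (started with simRemoteApi.start(19999) in a child script).
client = vrep.simxStart('127.0.0.1', 19999, True, True, 5000, 5)
if client == -1:
    raise RuntimeError('Could not connect to V-REP')

# 'joint1' is a placeholder for a joint name in your scene.
_, joint = vrep.simxGetObjectHandle(client, 'joint1', vrep.simx_opmode_blocking)

# Command a target position (radians) for the simulated joint.
vrep.simxSetJointTargetPosition(client, joint, 0.5, vrep.simx_opmode_oneshot)
vrep.simxFinish(client)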
  • asked a question related to Robot Vision
Question
2 answers
Regarding unmanned autonomous vehicles (UAVs), and especially drones:
With (deep) reinforcement learning, a common network can be trained to map state directly to actuator commands, making any predefined control structure obsolete for training.
There are now some relevant results in terms of supervised DNNs capable of providing a position with respect to a given image, so that a drone is able to navigate both outdoors and indoors.
Now suppose we are able to memorize places (or at least their relevant features) and create a topological map respecting the temporal sequence (temporality), probably adding some basic movement information like "move straight, turn left, turn right...". I am aware that the task is very complex, because you should be able to recognize a place whatever the point of view and, ideally, under different meteorological conditions. Of course, such a map should be able to achieve loop closure.
This opens the door to a completely new architecture for unmanned autonomous vehicles, because you will no longer need to know your position (very) accurately (SLAM will no longer be required).
You will have loosely coupled deep neural nets working together (I think this is also quite an interesting subject for future research).
Indeed, the movements of the UAV will be computed with respect to the perceived environment, and the UAV will be able to plan a route based on its "topological/semantic" map. It will use this memory to know where it is and where to go, and thus plan movements accordingly (for instance: I have to move straight, a coffee shop on my right, then a stop sign, then move to the left...).
This gets closer to what mammals (at least those studied in laboratories) are doing.
I would like to know whether anyone has heard of ongoing research on such a coupling between place recognition and topological/semantic maps.
Thanks
Relevant answer
Answer
Mohamed-Mourad Lafifi, thank you for the references; some look interesting!
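As a toy illustration of the topological/semantic map sketched in the question (assuming place descriptors come from some recognition network; all names and values below are placeholders), nodes can store place signatures, edges the motion primitive linking them, and route planning then reduces to plain graph search.
import networkx as nx

G = nx.DiGraph()
# Each node holds a place signature (e.g. a CNN embedding); each edge
# holds the motion primitive observed between consecutive places.
G.add_node('coffee_shop', descriptor=[0.12, 0.80, 0.33])
G.add_node('stop_sign',   descriptor=[0.45, 0.10, 0.77])
G.add_node('home',        descriptor=[0.90, 0.25, 0.05])
G.add_edge('coffee_shop', 'stop_sign', action='move straight')
G.add_edge('stop_sign', 'home', action='turn left')

# Planning a route is ordinary graph search over places.
route = nx.shortest_path(G, 'coffee_shop', 'home')
print([G.edges[a, b]['action'] for a, b in zip(route, route[1:])])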
  • asked a question related to Robot Vision
Question
11 answers
In some activities it is already possible, and robots are already produced which replace people in specific repetitive tasks.
However, will robots replace people in the future in all activities and functions?
In my opinion, it is not possible for this type of futurological vision to be realized.
People are afraid of such a scenario for the future development of civilization.
These fears find expression in the predominance of negative futurological visions, known from literature and film, in which a civilization whose autonomous robots replace people in almost all activities, difficult work and production processes, and achieve a high level of artificial intelligence, generates serious threats to humanity.
Please answer
Best wishes
Relevant answer
This will definitely happen. In the maritime industry (marine technology), which is also my specialty, robots have replaced many crew members involved in maritime operations. Of course, this applies to the advanced ports of the world (the port of Singapore or the Port of Rotterdam, etc.). In my opinion, the arrival of autonomous ships from 2020 onwards will increase the dependence of maritime operations on robotics and artificial intelligence. But it must be accepted that, in many situations, the role of the human is crucial and essential.
  • asked a question related to Robot Vision
Question
4 answers
To be used for humanoid robots or personal robots.
Relevant answer
Answer
The term "understanding" is related to humans, it is not a machine category. In this sense, the answer is "No.".
  • asked a question related to Robot Vision
Question
5 answers
Hello, 
I want to use multiple Kinects for localizing a mobile manipulator.
I want to know if that is possible.
Thank you  
Relevant answer
Answer
Hello
ROS is built for this particular kind of integration. Yes, it is possible to integrate an unlimited number of sensors with ROS. All you need is a node that reads the new sensor and publishes on a specific topic.
Cheers
Salah Eddine Ghamri
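A skeleton of the node pattern described above (rospy, with a hypothetical read_range() standing in for the actual driver call); for multiple Kinects specifically, each device is normally launched under its own namespace so that their topics do not collide.
#!/usr/bin/env python
# Skeleton of a ROS1 node that reads a sensor and publishes on a topic.
import rospy
from sensor_msgs.msg import Range

def read_range():
    return 1.0  # placeholder for the real driver call

rospy.init_node('my_sensor_node')
pub = rospy.Publisher('/my_sensor/range', Range, queue_size=10)
rate = rospy.Rate(30)
while not rospy.is_shutdown():
    msg = Range()
    msg.header.stamp = rospy.Time.now()
    msg.range = read_range()
    pub.publish(msg)
    rate.sleep()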
  • asked a question related to Robot Vision
Question
3 answers
I have an application where I want to live-stream video from a camera mounted on a mobile robot. I want to know whether it is possible to transmit video from the mobile robot using XBee and to receive the live stream on the other side with an XBee module.
Relevant answer
Answer
No, XBee cannot practically transmit live streaming video, as its data rate is very low.
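A back-of-the-envelope check with illustrative (assumed) numbers shows why: even a small, heavily compressed stream needs several times the 802.15.4 XBee's nominal 250 kbit/s, of which only a fraction is usable in practice.
# Rough bandwidth check (illustrative numbers, not measurements).
width, height, fps = 320, 240, 10              # a modest stream
bits_per_pixel_jpeg = 1.0                      # heavy JPEG compression
needed = width * height * fps * bits_per_pixel_jpeg   # bits/s
xbee_usable = 250000 * 0.5                     # ~half of nominal 250 kbit/s
print(needed, xbee_usable, needed > xbee_usable)
# -> 768000.0 vs 125000.0: roughly 6x over budget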
  • asked a question related to Robot Vision
Question
13 answers
I am a beginner in the area of computer vision, and I have a basic doubt about 2D perspective projection. When analyzing the 3D-to-2D transform, we take the focal length as the distance to the image plane (in almost all references). The focal length of a lens is actually the distance between the center of the lens and the point on the optical axis where parallel rays converge. So if we place the image plane at this point, how can we get the image? If any clarification of my question is needed, please say so and I will elaborate. I hope for valuable replies; they will help me improve my basic knowledge of the area.
Relevant answer
Answer
See the book "Optical Metrology, 3rd edition", John Wiley & Sons, Chichester 2002.
  • asked a question related to Robot Vision
Question
4 answers
It is well known that the study of computer vision includes acquiring knowledge from images and videos in general. One of its applications is deriving 3D information from 2D images.
I would like to know whether 3D imaging from IR cameras is considered part of the computer vision field as well, or whether it is better described as a machine vision application.
Relevant answer
Answer
IR and visible imaging are both fully part of the computer vision field. All topics of visible stereo imaging apply to IR stereo imaging (feature point/contour detection, feature tracking, 3D reconstruction from image sequences and so on). Nevertheless, some computer vision techniques have to be specifically tuned/adapted to address IR applications.
Some sample references follow:
  • asked a question related to Robot Vision
Question
3 answers
I have to build a rudder control system for an autonomous ship, and I must do it in LabVIEW. I will have GPS and sensors for measuring distances, velocity and depth. I also have to detect other vessels, so I guess I need a GPU, but I'm not very sure about it.
Relevant answer
Answer
Why do you think you need to use a GPU? Are you processing image data? Regular CPUs are extremely fast these days. Take the easiest route until it is clear that it is too slow. Even if you do end up needing a parallel implementation, you will need a correct serial implementation to test it, so it won't be wasted effort. 
  • asked a question related to Robot Vision
Question
3 answers
I'm trying to figure out how to get an indication of torso twist from a camera image taken perpendicular to a six-man canoe. Obviously, mounting a camera above each paddler would get the best results, but is derivation of the torso twist angle possible from simple side-image analysis?
Relevant answer
Answer
Sounds very challenging if you want to use only images from a camera looking at the row of paddlers from the side.
You are measuring/estimating the twist angle for a reason. You should first ask yourself how accurately you need to know the torso twist angle in order to accomplish what you plan to do with the measurements. Presumably you do not need to be very accurate; +/- 5 degrees may suffice, for instance? I have no idea. But the required accuracy goes a long way toward judging the methods and adequacy of measurement.
Suppose that you determine the required accuracy to be +/- X degrees. Then consider whether it is possible for a person, visually inspecting the camera image, to estimate the torso rotation with roughly the same accuracy. If so, then the features that people use visually are candidate features for automatic estimation. If not, then you need to come up with better-than-human-visual estimation methods, based on better quantitative assessment of the visual features, and/or based on entirely different features derived from the scene which people are not using (by training a neural network, for instance).
In any case, knowing the required accuracy for some purposeful objective is the first step to finding an estimation method.
Ronald
.
  • asked a question related to Robot Vision
Question
3 answers
I need it for Windows.
Relevant answer
Answer
  • asked a question related to Robot Vision
Question
3 answers
For tracking finger position in 3D, accuracy is very important.
There are various techniques for 3D measurement.
What is the most accurate one?
Relevant answer
Answer
For non-contact 3D measurement, LIDAR is the most accurate. Dense LIDAR is very expensive; sparse LIDAR (such as automotive) is cheaper, but will return only a sparse set of points with true depth in the scene. Other, less accurate methods are structured light, time-of-flight and stereo/multiview camera 3D.
  • asked a question related to Robot Vision
Question
4 answers
I have a robot arm. An object's coordinates will be captured by the camera and need to be mapped to the robot frame to implement the IK (inverse kinematics) algorithm; the robot then has to move to a location defined in the camera image displayed by the supervising computer.
I'm wondering what kind of vision system I should apply to capture object coordinates, and to measure surface defects in order to characterize surface roughness in a polishing task?
Relevant answer
Answer
You can extract texture features and then run some supervised classification experiments to determine roughness. You can do this with CVIPtools and run experiments with CVIP-FEPC. CVIPtools software is available here: http://cviptools.ece.siue.edu/
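On the first part of the question (mapping image coordinates to robot coordinates): if the objects lie on a flat worktable, one simple calibration is a plane-to-plane homography from four or more known correspondences. A hedged OpenCV sketch with made-up calibration values:
import cv2
import numpy as np

# Calibration pairs: pixel coords -> robot XY on the worktable plane
# (units of your robot frame). All values here are placeholders.
img_pts = np.float32([[100, 80], [520, 90], [510, 400], [110, 390]])
rob_pts = np.float32([[0.10, 0.05], [0.40, 0.05], [0.40, 0.30], [0.10, 0.30]])

H, _ = cv2.findHomography(img_pts, rob_pts)

# Map a detected object pixel into robot coordinates for the IK solver.
pix = np.float32([[[300, 240]]])             # shape (1, 1, 2) as OpenCV expects
xy = cv2.perspectiveTransform(pix, H)[0, 0]
print(xy)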
  • asked a question related to Robot Vision
Question
11 answers
I'm a student with an electrical/mechanical background; in my project I'm searching for a solution for a company that wants to start using 3D cameras for robotics.
At the moment I'm working with Matlab and it works great; the possibility to create your own GUI is a big plus.
But I have read that Matlab is more for development purposes and is slower (overhead).
A second software package that I am trying is Halcon; at the moment I have no overview of its possibilities.
It looks to me like you can program in Halcon's own language, HDevelop, or use their libraries in your own code (like C++).
Programming in HDevelop with its GUI seems easier/faster than low-level programming (e.g. C++), but I don't know the limitations.
A disadvantage is that there is no community for support; you need to rely on their documentation.
A third option I have read a lot about is OpenCV, but with no low-level programming background this seems too ambitious for me.
I'm not searching for the best solution for myself, but for the company (and I know the company doesn't have many computer engineers).
I was hoping to find software with a good GUI to reduce low-level programming; Halcon seems to be the closest match.
Thanks for your help.
Relevant answer
Answer
Hi Mat,
I use Halcon; it's a very powerful tool, mainly for industrial purposes. For research, it may be best used in processes that aren't your main focus, because some functions are like a black box (that's their know-how and marketing advantage).
There is a group on LinkedIn about Halcon with experienced users that gives faster answers than Halcon support.
And yes, you can develop all your code and export it to other languages, or use HDevEngine, in which case any modification requires just replacing a Halcon file rather than recompiling the whole app.
  • asked a question related to Robot Vision
Question
2 answers
I am working on hand gesture recognition and I wanted to know about "skinmodel.bin", because I am supposed to provide it but I don't know exactly what it is.
  • asked a question related to Robot Vision
Question
6 answers
In object recognition, different works in this field use different test objects; how can I compare the performance of my work with an existing method?
Relevant answer
Answer
Dear Zubair, please find the following article, which may help you.
  • asked a question related to Robot Vision
Question
2 answers
I would like to build a one-to-one correspondence between two dense 3D facial meshes (~5000 vertices per face). I tried several methods such as MLS, FFD, Laplacian deformation and MM. However, none of them output satisfying registration results. There are lots of papers on this topic, but most come without public code. Is there any public code or package for 3D face registration available?
Relevant answer
Answer
  • asked a question related to Robot Vision
Question
2 answers
I'm mainly interested in aspects concerning (cognitive) vision.
I know the project lists on Cordis, but it would be great to have an article that summarizes and relates projects to each other.
Relevant answer
Answer
That looks indeed very interesting! Thank you for the link!
  • asked a question related to Robot Vision
Question
2 answers
Currently I am doing a project on an unmanned ground vehicle.
Relevant answer
Answer
Have a look at the attached thesis. It provides a good overview of the design process. 
  • asked a question related to Robot Vision
Question
16 answers
Given two RGB pixels with components R1,G1,B1 and R2,G2,B2, what would a function f(R,G,B) -> X look like for which the comparison f(R1,G1,B1) < f(R2,G2,B2) is noise-robust and has a reasonable interpretation?
Relevant answer
Answer
Hi Jorcy,
if you want to compare ("<") two RGB values, you need a projection function from the 3-dimensional RGB space onto the real axis. Of course, there are infinitely many possibilities to do this. An easy way would be to use the HSV transform, as already suggested. The hue (H), however, is not appropriate as a linear order function because it is circular (i.e. the value 1.0 is identical with 0.0, so you cannot decide if 0.5>0.0 or 0.5<0.0, for example). The saturation (S) or the value (V) are appropriate projection functions for your purpose, however. If you want colored pixels to rank "larger" than monochrome pixels, you will prefer S. If you want lighter pixels to rank larger than darker pixels, you will probably prefer V. Any combination of S and V would also be a valid projection function, e.g. S+V.
Best regards, Ralf
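Ralf's suggestion in code form, a minimal sketch (note that OpenCV scales S and V to 0..255 for 8-bit images):
import cv2
import numpy as np

def brightness_key(rgb):
    # rgb: an (R, G, B) tuple; OpenCV wants a 1x1 BGR image.
    px = np.uint8([[rgb[::-1]]])               # to BGR, shape (1, 1, 3)
    h, s, v = cv2.cvtColor(px, cv2.COLOR_BGR2HSV)[0, 0]
    return int(v)                               # or int(s), or int(s) + int(v)

print(brightness_key((200, 30, 30)) < brightness_key((250, 250, 250)))  # True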
  • asked a question related to Robot Vision
Question
4 answers
Can anyone help me with finding optical flow on a featureless plane?
I am interested in mobile navigation using optical flow, but I recently ran into the following problem:
the optical flow algorithm (I use a pyramidal Lucas-Kanade algorithm) does not find the correct velocity or direction on a featureless plane.
So I would like to seek advice on the above problem:
how can I find velocity or direction using optical flow or another method?
Could you give me your advice or recommend a paper?
Relevant answer
Answer
Dear Gi Dong Kim,
If you *know* for sure that what you are looking at is a featureless plane, then you can compute the optical flow if you know the distance and orientation of your camera relative to the plane. You could get these by projecting a structured light pattern onto the plane (of course this requires prior calibration of your camera + structured light system).
But then, if you end up using structured light, this raises the question of why you want to get the optical flow at all. Generally, the optical flow is computed as a means to extract more "useful" information (e.g. depth or time to collision). Here you would be going the other way, starting from what is generally an "end result" (distance and orientation) to compute the optical flow.
  • asked a question related to Robot Vision
Question
5 answers
I am doing my project in image processing. The project is to restore a face: for example, a person is wearing goggles, but we would like to predict his face without them. We have found some related faces, but how can I replace the goggled portion with the corresponding un-goggled portion of the related matched faces?
Relevant answer
Answer
Hi Sandesh,
The problem you mentioned doesn't have a single solution, I believe.
There would be a trade-off between the complexity of the method and the accuracy of the solution.
Let me propose two solutions. (a) simple (b) not-so-simple
(a) In this solution you'll deal with 2D only. You'll need to find two parameters of the object (i.e. the face, in your case) in both the query image and the retrieved (similar) image:
(i) pose/inclination  (ii) scale
Attached links might help you for that.
You just need to compensate for the difference in pose and scale and you can replace the desired part.
This is an extremely coarse solution.
The other solution includes creating a generative 3D model.
I'll explain that if you are interested.
Hope this helps.
  • asked a question related to Robot Vision
Question
3 answers
What I want to do:
I am using OpenCV to track the bot and I am getting its coordinates. I want the bot to go to a predefined coordinate from its current location. For this, I want the bot to traverse with precision using just 2 DC motors and nothing else.
One way is to use PID to minimize the error (the error value is the perpendicular distance of the bot's current coordinates from the initial path of traversal).
Another way is to draw the path on the GUI display and use the typical PID line-following principle. Please tell me the limitations of the two approaches and, if possible, a better way out.
Constraints:
I can accommodate at most 2 more analog input pins (all other I/O pins are already in use).
I am also very restricted by size: the entire bot should definitely fit in a 9 cm x 9 cm x 5 cm space.
Relevant answer
May I ask which Arduino you are using, and do you have any digital pins left?
I think it would be much easier if you had 4 pins left; otherwise you could use something like the Microchip MCP23008 to expand your I/Os.
This would avoid having to change your whole setup.
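Returning to the asker's first option, a minimal, platform-independent sketch of a PID loop on the cross-track error (the perpendicular distance from the planned line), assuming the OpenCV side supplies the bot's (x, y); the gains are placeholders to be tuned, and the output would be added to one wheel's drive command and subtracted from the other's:
import math

def cross_track_error(p, a, b):
    # Signed perpendicular distance of point p from the line a -> b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    return ((px - ax) * dy - (py - ay) * dx) / math.hypot(dx, dy)

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i, self.prev = 0.0, 0.0
    def step(self, err, dt):
        self.i += err * dt
        d = (err - self.prev) / dt
        self.prev = err
        return self.kp * err + self.ki * self.i + self.kd * d

pid = PID(2.0, 0.0, 0.5)                  # placeholder gains
err = cross_track_error((4.0, 3.2), (0.0, 0.0), (10.0, 0.0))
turn = pid.step(err, 0.05)                # differential drive correction
print(err, turn)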
  • asked a question related to Robot Vision
Question
11 answers
Suppose I have a webcam on the head of a Nao robot and I would like to segment RGB images while the robot is walking. Can anyone propose a good, accurate method for segmenting RGB images?
It should be noted that this is only an example to explain my problem; in fact, I do not use it for soccer. I focus on segmentation of RGB images captured from indoor scenes, especially workrooms and hallways.
Is segmentation based on changes of intensity and color better than segmentation by detecting lines with algorithms like the Hough transform? I think segmentation by detecting lines is better, because the colors are similar (brown) and do not change across scenes, but I am not sure. Can anyone give an opinion on this problem too? Is there another way to solve it?
I have found many codes, but they cannot segment well in the majority of scenes. The speed of the algorithm is also important for me, though not as much as accuracy.
Thanks for your answers.
Relevant answer
Answer
Several segmentation algorithms exist in the literature, such as mean shift, statistical region merging, watershed, superpixels/SLIC, JSEG, Color Structure Code, etc.
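As a quick way to try one of these, a minimal sketch using scikit-image's SLIC superpixels (the file name and parameters are placeholders, not tuned values):
from skimage import io
from skimage.segmentation import slic

img = io.imread('frame.png')                 # placeholder file name
# ~200 superpixels; higher compactness gives more regular regions.
labels = slic(img, n_segments=200, compactness=10)
print(labels.shape, labels.max() + 1)        # label map, number of segments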
  • asked a question related to Robot Vision
Question
6 answers
Interested in doing some research in Computer Vision and Mobile Visual Search. Could you please suggest some novel ideas/issues that are emerging in that research topic?
Relevant answer
  • asked a question related to Robot Vision
Question
2 answers
I have read about ASM and the discrete symmetry operator, and I got the main idea.
But I got confused by the bundle of functions I did not understand. Is there any simplified illustration of both of them?
Relevant answer
Answer
This is a good question.
Discrete symmetry is nicely explained in
S.V. Smirnov, Adler map for Darboux q-chain, Moscow State University:
See mapping (4), page 2 (see also the Proposition on the same page).
More to the point, consider
A.W.M. El Kaffas, Constraining the two-Higgs-doublet model with CP violation, Ph.D. thesis, University of Bergen, Norway, 2008:
Symmetry is related to the harmony, beauty and unity of a system so that under certain transformations of a physical system, parts of the system remain unchanged (p. 6).   A discrete symmetry describes non-continuous changes in a system.  Such a symmetry flips a system from one state to another state.   See Section 2.1.1, starting on page 6. 
  • asked a question related to Robot Vision
Question
2 answers
Can anyone help me to understand the hand-labeled colour space?
Relevant answer
Answer
I am a little confused about the context of the question, but I am assuming that you are asking from the perspective of automatic color segmentation in images for some machine vision application. In this case, when learning algorithms are used, they are trained on some test images which have been segmented and color-labeled by hand, so that the learning algorithms are tuned on them before proceeding to verification and the real application. These hand-labeled images are part of the hand-labeled color space; a more mathematically correct phrasing would be that "these images span the hand-labeled color space".
I am giving the link to a paper in which the perspective I have mentioned above is used, hope you find it useful.
  • asked a question related to Robot Vision
Question
2 answers
I want help with quadcopter programming.
How can I program the 4 motors using an Arduino to control the speed and the direction?
And once it is up, I want to just slow down the speed without affecting the direction.
*Note: my motors are DC, not stepper.
Thanks a lot.
Relevant answer
Answer
Alter the PWM signal to the DC motor to change the motor speed.
  • asked a question related to Robot Vision
Question
10 answers
Does anyone have any experience on the development of a domestic robot's ability to locate itself in an indoor environment?
For your answer, take into account the possibility of having a camera on the robot (image recognition may be a way to go?).
I believe it may be necessary to take multiple inputs. For example, an image recognition algorithm together with a "dead-reckoning" method, such as estimating displacement as a function of the revolutions of the robot's wheels, could be used to estimate the position of the robot.
All feedback would be greatly appreciated, as I am just starting with this investigation.
Thank you very much!
Relevant answer
Answer
You could have a look at RatSLAM, as it would fit your constraints very well (works indoor + outdoor, uses a camera as input). There is an open source version of it available, too: OpenRatSLAM.
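On the dead-reckoning component mentioned in the question, a minimal sketch of differential-drive odometry from encoder ticks (all geometry values are placeholders); its drift grows without bound, which is why it is usually fused with vision or corrected by a SLAM method such as RatSLAM:
import math

TICKS_PER_REV = 360.0    # placeholder robot geometry
WHEEL_RADIUS = 0.03      # m
WHEEL_BASE = 0.20        # m, distance between the wheels

x, y, theta = 0.0, 0.0, 0.0

def update(left_ticks, right_ticks):
    # Convert encoder ticks to wheel travel, then integrate the pose.
    global x, y, theta
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2.0
    theta += (dr - dl) / WHEEL_BASE
    x += d * math.cos(theta)
    y += d * math.sin(theta)

update(10, 12)
print(x, y, theta)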
  • asked a question related to Robot Vision
Question
10 answers
I would like to know if there is a mobile robot simulator which is helpful for research. Some of the main simulators I found are Microsoft Robotics, the Matlab robotics simulator, Webots, Lego, DARPA robotics simulators, etc. Can anyone suggest which simulator is useful in research for robot path planning, obstacle avoidance and autonomous navigation?
Relevant answer
Answer
Gazebo in combination with ROS is very powerful. V-Rep is also an option. Both are open source.
  • asked a question related to Robot Vision
Question
3 answers
In the new MATLAB version (2014a) you can control the RPi's camera module using some classes, but they work by taking snapshots or recording to a file, not continuously. I tried to set up Simulink RPi-USB to use the camera module, but there are some lags. Actually, the only way I have found is to use Linux remote commands from the Matlab command window, but these do not give access to the flowing video signal and require a fixed acquisition duration, whereas I need continuous acquisition. Any advice?
Relevant answer
Answer
Although this answer may come across as a bit of a tangent, I would suggest using Linux to work with the Raspberry Pi. You can take a look at the book "Practical OpenCV", which contains a few basic programs you can run on the Pi with ease.
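Following that suggestion, a minimal continuous-capture sketch with OpenCV's Python bindings on the Pi itself (this assumes the camera module is exposed as a V4L2 device, e.g. after loading the bcm2835-v4l2 kernel module); each frame is available for processing immediately, with no fixed acquisition duration:
import cv2

cap = cv2.VideoCapture(0)          # camera module via the V4L2 driver
if not cap.isOpened():
    raise RuntimeError('camera not available')
while True:
    ok, frame = cap.read()         # grab frames continuously
    if not ok:
        break
    # ... process 'frame' here (a NumPy BGR array) ...
    cv2.imshow('live', frame)
    if cv2.waitKey(1) == 27:       # Esc quits
        break
cap.release()
cv2.destroyAllWindows()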
  • asked a question related to Robot Vision
Question
9 answers
I need to calculate the angle of rotation and the displacement of an object between two frames of video with Matlab. How can I do this to obtain an accurate answer?
Relevant answer
Answer
If your object of interest has enough detail then Harris corners may be detected and tracked over the two frames using LK tracker. Then pose estimation may be used to get the rotation and translation parameters. Refer to http://docs.opencv.org/trunk/doc/py_tutorials/py_calib3d/py_pose/py_pose.html
for more details on the code side of the implementation. For theory refer to 'Learning OpenCV'- there is a chapter on pose estimation.
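A sketch of that pipeline in OpenCV's Python API (the steps map one-to-one to Matlab equivalents): track corners with pyramidal LK, fit a similarity transform, and read the in-plane rotation angle and translation off it. For full 3D pose you would use solvePnP with known 3D points instead, as in the linked tutorial. File names are placeholders.
import cv2
import numpy as np

g1 = cv2.imread('frame1.png', cv2.IMREAD_GRAYSCALE)
g2 = cv2.imread('frame2.png', cv2.IMREAD_GRAYSCALE)

p1 = cv2.goodFeaturesToTrack(g1, maxCorners=200, qualityLevel=0.01, minDistance=7)
p2, st, _ = cv2.calcOpticalFlowPyrLK(g1, g2, p1, None)
p1, p2 = p1[st == 1], p2[st == 1]        # keep successfully tracked points

# Similarity transform: rotation + translation (+ uniform scale).
M, inliers = cv2.estimateAffinePartial2D(p1, p2)
angle_deg = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
tx, ty = M[0, 2], M[1, 2]
print(angle_deg, tx, ty)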
  • asked a question related to Robot Vision
Question
4 answers
I am looking for a local stereo matching algorithm which can be used as a standard comparison for other algorithms. Ideally this algorithm has source code available and is able to mark occluded parts (e.g. as NaN). Definitely no graph cuts or equivalent. Could anyone suggest such a 'standard' local stereo matching algorithm?
Thanks!
Thanks!
Relevant answer
Answer
Check this webpage:
You'll find canonical stereo benchmarking images with the help of which you can compare the performance of whatever you implemented against a lot of other algorithms whose results were already submitted.
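If a concrete baseline implementation helps, OpenCV's block matcher is the usual "standard local" reference; a minimal sketch on an already-rectified pair (file names are placeholders), with invalid/occluded pixels mapped to NaN as the question asks:
import cv2
import numpy as np

imgL = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)    # rectified pair
imgR = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = bm.compute(imgL, imgR).astype(np.float32) / 16.0  # fixed-point to pixels
disp[disp < 0] = np.nan                                  # mark invalid/occluded
print(np.nanmean(disp))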
  • asked a question related to Robot Vision
Question
10 answers
I could see sonar working for detecting an object in the water, but the cost of sonar is also very high. Is there any easy way to find a dead human body in water within 1 km?
Relevant answer
Answer
Let's summarize your options to detect a human:
1. waves around human body parts (2 to 10 Hz)
2. waves through the human body (2250 Hz & 388 Hz)
3. waves on the human surface (30.7 MHz)
These are all the options I could imagine (in, out, surface).
  • asked a question related to Robot Vision
Question
2 answers
I can't understand how the simultaneous positioning and camera controls are being used.
Relevant answer
Answer
Here is a work about image-based visual servoing using depth for each feature or for the set of features. In addition, there are some references to the important authors in this area.
  • asked a question related to Robot Vision
Question
17 answers
I read this article: "Eight pairs of descending visual neurons in the dragonfly give wing motor centers accurate population vector of prey direction". I found it really important and really interesting.
Do you think it could be used to develop the possibilities of an autonomous UAV?
Relevant answer
Answer
So it can see in all directions, like a chameleon.
To build an autonomous drone that has the characteristics of dragonfly vision, use distance sensors (such as the Sharp IR sensor) and motion-capture cameras, and apply learning with image processing software.
  • asked a question related to Robot Vision
Question
2 answers
I want to know whether any newer feature extractor in machine vision was published after David G. Lowe's in 2004.
Is there any shift- and scale-invariant feature extractor for object recognition?
Relevant answer
Answer
Well, there are others like RIFT, G-RIF, SURF, PCA-SIFT that you can read about here: http://is.gd/a1Kb40
As of 2005 (SIFT was first published in 1999), SIFT and SIFT-like techniques apparently outperformed others within limits, according to Mikolajczyk, K., and Schmid, C., "A performance evaluation of local descriptors", IEEE Transactions on Pattern Analysis and Machine Intelligence, 10, 27, pp 1615--1630, 2005. You can see a summary here: http://is.gd/xN3Kj6
Speeded-Up Robust Features (SURF) has become very popular since 2006 and was updated in 2008: http://en.wikipedia.org/wiki/SURF.
Local Energy based Shape Histogram (LESH) is another "new" one as of 2008: http://en.wikipedia.org/wiki/LESH
I don't know about "best" as I don't don't know all of the trade-offs or your application. I've had SIFT used on 3D shape-from-motion projects that worded very well, and SURF-128 (http://www.vision.ee.ethz.ch/~surf/eccv06.pdf) on pose estimation for navigation that worked better than SIFT under those specific conditions in the project.
  • asked a question related to Robot Vision
Question
10 answers
If a robot is equipped with multiple sensors e.g. camera, audio, tactile, force/torque etc., how can its position estimation be improved by sensor fusion?
Relevant answer
Answer
There are plenty of filters to deal with your problem.
The main choices are the Kalman filters (Extended Kalman filter, Unscented Kalman filter, etc) and the particle filters.
The first is useful when you have an initial position (or an idea of where it could be) and linear (or nearly linear) measurements.
The second can handle not knowing the initial position and non-linear measurements (map matching, etc.), but requires a significant amount of computation compared to the Kalman filters.
In all cases you will need to model the robot states (position, velocity, orientation, etc), estimate their propagation (probably using the odometry) and update the estimations with any measurement you receive.
I recommend the book "Probabilistic Robotics", from Sebastian Thrun, Wolfram Burgard, Dieter Fox on the subject.
  • asked a question related to Robot Vision
Question
1 answer
I am trying to use this method for machine learning
Relevant answer
Answer
  • asked a question related to Robot Vision
Question
8 answers
I would like to find an introductory document on the topic of visual servoing that is more oriented toward image processing and uses Matlab as a testing tool.
Relevant answer
Answer
The best, most professional and complete reference is Peter Corke's book.
It contains full documentation and an excellent Matlab toolbox.
  • asked a question related to Robot Vision
Question
4 answers
I'm working with the Kinect in Matlab; it works well, but I need to make it work in near mode. I do the following:
% Create the object for the depth sensor
vid = videoinput('kinect',2);
src = vid.Source;
src.DepthMode='Near';
then use preview command
preview (vid);
Matlab gives the error: "Matlab cannot open depth sensor".
  • asked a question related to Robot Vision
Question
35 answers
Like OpenCV, which are the best alternative open-source tools for developing image processing and computer vision algorithms?
Relevant answer
Answer
It depends on the application, but I would recommend Python.
  • asked a question related to Robot Vision
Question
10 answers
How can I use the data obtained from the Haar wavelet in image processing?
Are there examples of such an application?
Relevant answer
Answer
Dear Hossein,
Depending on your aim and objective, you can directly use the coefficients from the wavelet sub-bands, or you can extract other features from the sub-bands, such as statistical features, local features, etc.
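A minimal sketch of that idea with PyWavelets: one level of the 2-D Haar transform, then simple statistics of each sub-band as a texture feature vector (the image file is a placeholder):
import cv2
import numpy as np
import pywt

img = cv2.imread('texture.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')   # approximation + 3 detail bands

# Simple per-sub-band statistics as a feature vector.
feats = [f(band) for band in (cA, cH, cV, cD) for f in (np.mean, np.std)]
print(feats)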
  • asked a question related to Robot Vision
Question
9 answers
I have a problem "shifting" one view to the other.
Currently I have two images taken by two cameras in different positions, that is, two images that are taken from different views. I have the ground truth disparity maps of those two images, but how can I "shift" the left image to match the right image properly?
My attached result is calculated as follows: if several pixels are mapped to the same position after "shifting" according to the disparity map, I always keep the pixel with the smallest (should be biggest) disparity value. The reason is that I believe a smaller (should be bigger) disparity value means that the pixel comes from an object in front. Otherwise, I just "shift" according to the disparity map.
Since my result is very bad, I do hope someone can give me some advice.
For the attached image, the upper left is the left image, the lower left is the one "shifted" from the right image.
Relevant answer
Answer
For stereo pairs with calculated disparities, it's even easier than figuring out the 3D position, fundamental matrix, etc. By definition, the projected position of a point seen in one image when viewed from the position of the other image is simply the original position plus (or minus, depending on which image) the disparity. This extends very easily to other viewing positions.
Consider a point seen at pixel (x,y) in the right image with disparity d(x,y). That point is (again, by definition) seen at position (x',y') = (x,y) - (d(x,y),0) in the left image. Do this for all of the pixels in the right image, and you have the corresponding image from the left viewpoint. As noted in an earlier answer, keep in mind that nearer objects have larger disparity.
You can extend this to other viewing positions in a similar manner. Parameterize the space of viewing positions (s,t) such that the source camera is at (0,0) and the other camera is at (s,t) = (1,0). Then the projected position of each pixel is given by scaling the disparities by the (s,t) values: (x',y') = (x,y) - (s d(x,y), t d(x,y)). Be careful to flip the subtract to an add depending on which image is the source and what conventions you're using for disparity. And yes, this works for vertical offsets! Remember that the disparity tells you "if you move the camera this much (the baseline separation), the pixel moves that much in the opposite direction".
And of course remember that some pixels won't have things project to them since they correspond to unseen content, and you will most definitely have holes in the result for any reasonably complicated scene.
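Putting the above into code: a minimal forward-warping sketch that shifts the right image to the left view using its ground-truth disparity, keeping the largest disparity (nearest surface) wherever several pixels land on the same target pixel; untouched pixels remain as holes where content was occluded. It follows the (x', y') = (x, y) - (d, 0) convention from the answer above; flip the sign if your maps use the opposite convention.
import numpy as np

def warp_right_to_left(right, disp):
    # right: (H, W, 3) image; disp: (H, W) disparity map of the right image.
    H, W = disp.shape
    out = np.zeros_like(right)
    zbuf = np.full((H, W), -np.inf)      # keep the LARGEST d (nearest surface)
    for y in range(H):
        for x in range(W):
            d = disp[y, x]
            xt = int(round(x - d))       # target column in the left view
            if 0 <= xt < W and d > zbuf[y, xt]:
                zbuf[y, xt] = d
                out[y, xt] = right[y, x]
    return out                           # untouched pixels = occlusion holes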
  • asked a question related to Robot Vision
Question
4 answers
If we want to do advanced image processing like stereo image processing, object tracking, point tracking, etc., then which type of hardware is normally used for real-time response? Which hardware is used by space research companies for advanced image processing?
Relevant answer
Answer
You can use the Xilinx Zynq processor, which has a programmable FPGA plus a dual-core ARM Cortex-A9.
  • asked a question related to Robot Vision
Question
10 answers
The result of 3D reprojection using the StereoSGBM algorithm in Emgu CV (OpenCV) is the X, Y, Z coordinates of each pixel in the depth image.
public void Computer3DPointsFromStereoPair(Image<Gray, Byte> left, Image<Gray, Byte> right, out Image<Gray, short> disparityMap, out MCvPoint3D32f[] points)
{
    // ... the disparity map is computed from the rectified pair here ...
    // Reproject every disparity pixel into 3D camera coordinates using the
    // 4x4 reprojection matrix Q from stereo rectification. The array is in
    // row-major pixel order: points[0] is pixel (row 0, col 0), and its
    // X, Y, Z are 3D camera-frame coordinates, not pixel indices.
    points = PointCollection.ReprojectImageTo3D(disparityMap, Q);
}
by taking the first element of this result:
points[0] = { X= 414.580017 Y= -85.03029 Z= 10000.0 }
I'm confused here: to which pixel does this point refer, and why is it not X=0, Y=0, Z=10000.0?
Relevant answer
Answer
Mark it.
  • asked a question related to Robot Vision
Question
4 answers
I want to study the dynamic response of the simulated model.
Relevant answer
Answer
Thanks for your answer, Mehdi; I really appreciate your help.
  • asked a question related to Robot Vision
Question
1 answer
Disparity images should be produced at 4 fps or better. At my institute we can do some of the engineering ourselves; however, FPGA programming is the limiting factor.
Relevant answer
Answer
Dear Sir,
I am a VHDL/FPGA design expert working in the UK.
I spent 10 years working as an FPGA designer in the Vision Department of Siemens-UK R&D.
There I developed FPGA-based image processing systems that performed the real-time (25 fps) disparity and feature tracking front-end for their unmanned ground and air vehicle navigation systems.
Having done this before, I could efficiently develop this again for you.
Regards,
Nicholas Lee
  • asked a question related to Robot Vision
Question
4 answers
Which methods return high-accuracy depth map information for robot grasping?
Can I get very good results using a stereo camera pair with the OpenCV library to find the disparity and depth map, or can you suggest another method?
Relevant answer
Answer
Finding the right setup for your system depends on what the whole scene looks like, how large the working area is, the detection accuracy you need to grasp the objects, the illumination conditions and, last but not least, the objects themselves.
Stereo vision is a well-known approach that works fine for many applications. It requires good camera calibration and registration to the robot coordinate system. It's also worth considering a Microsoft Kinect camera and the Point Cloud Library (PCL), since the Kinect gives you a depth map directly and is more robust against object and illumination properties than the stereo vision approach. Another option is a PMD sensor, which also directly gives you a depth map but has a low spatial resolution. If you provide more information on your task, the objects and the gripper, you could get more detailed answers on how to get started.
  • asked a question related to Robot Vision
Question
6 answers
Most of the literature uses processing of video/camera images. I need a simpler solution, and I also need to avoid the ethical issues of videoing people.
Relevant answer
Answer
Dear All,
Thanks for your answers - it is really appreciated. The thermitrack is most interesting, seems to be the right mixture between commercial and development tool (still allows to get the numeric data, which are hidden by most commercial devices).