Robot Vision - Science topic
Explore the latest questions and answers in Robot Vision, and find Robot Vision experts.
Questions related to Robot Vision
What new occupations, professions, and specialties in the workforce are being created, or will soon be created, in connection with the development of generative artificial intelligence applications?
The recent rapid development of generative artificial intelligence applications is increasingly changing labor markets, raising the degree to which work performed within various professions is automated and objectified. On the one hand, generative artificial intelligence technologies are finding more and more applications in companies, enterprises and institutions, increasing the efficiency of certain business processes and supporting employees working in various positions. At the same time, there are growing concerns that the dark scenarios of futurological projections may come true, in which many jobs are completely replaced by autonomous AI-equipped robots, androids or systems operating in cloud computing. On the other hand, these dark scenarios are contrasted with more positive futuristic projections of labor market development, in which new professions are created thanks to the implementation of generative artificial intelligence technology in various areas of economic activity. Which of these two scenarios will be realized to a greater extent in the future is currently not easy to predict.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What new professions, occupations, and specialties in the workforce are being created or will soon be created in connection with the development of generative artificial intelligence applications?
What new professions will soon be created in connection with the development of generative artificial intelligence applications?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work, written on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Will the dark scenarios of the futurological visions known from science fiction films, in which autonomous robots equipped with artificial intelligence are able to reproduce and self-improve, come true in the future?
The theoretical basis for the concept of artificial intelligence has been developing since the 1960s. Since then, science fiction literature and film have created dark futurological visions in which autonomous robots equipped with artificial intelligence are able to reproduce themselves, self-improve, become independent of human control and become a threat to humans. Nowadays, given the dynamic development of artificial intelligence and robotics technologies, these considerations have returned to topicality.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Will the dark scenarios of futurological visions known from science fiction films, in which autonomous robots equipped with artificial intelligence are able to reproduce and self-improve, come true in the future?
Will artificial intelligence-equipped autonomous robots that can reproduce and self-improve emerge in the future?
And what is your opinion about it?
What is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Can autonomous robots equipped with artificial intelligence that process significantly larger amounts of data and information faster than humans pose a significant threat to humans?
Will autonomous robots equipped with artificial intelligence, which process much larger amounts of data and information faster than humans, only be useful, friendly and helpful to humans, or could they also be a significant threat?
Robots equipped with artificial intelligence are being built to create a new kind of machine that is useful, friendly and helpful to humans. Already, the latest generations of microprocessors in computers, laptops and smartphones have high computing and data processing capacities that, in some respects, exceed those of the human brain. When new generations of artificial intelligence are implemented on computers with high computing and multi-criteria data processing power, the result is intelligent systems that can process large amounts of data much more quickly, and with a higher level of objectivity and rationality, than what is referred to as natural human intelligence. AI systems are already being developed that process much larger volumes of data and information faster than humans. If such artificial intelligence systems, and autonomous robots equipped with this technology, were to escape human control due to certain errors, a new kind of serious threat to humans could arise.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can autonomous robots equipped with artificial intelligence that process significantly larger amounts of data and information faster than humans pose a significant threat to humans?
What do you think?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
Hello,
I have several sensors which are interfaced with the help of ROS and are synchronized with ROS time (ROS1). The sensors and their nodes are fully functional. Each sensor does some processing after it senses a detection in its environment, before eventually timestamping this data in ROS. Since the sensors are of different kinds and have their own processing before timestamping their data in ROS, a delay is expected between the detection and the timestamping, and also between the different sensors.
I am interested in the delay that takes place between the sensor detecting an event and eventually timestamping this data in ROS. The image shows the different processing for each sensor that takes place before timestamping.
When searching for this specific problem not much could be found, so any advice would be appreciated.
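One measurable piece of this is the gap between a message's header stamp and its arrival time. Below is a minimal ROS1 sketch of such a probe; the topic name and message type are placeholders for your setup, and the sensor-internal delay that occurs before stamping still has to come from the vendor datasheet or a hardware trigger.

#!/usr/bin/env python
# Minimal ROS1 latency probe: logs the difference between each
# message's header stamp and the time it is received. This captures
# transport plus any delay after stamping, not the sensor-internal
# processing that happens before the stamp is applied.
import rospy
from sensor_msgs.msg import Image  # placeholder message type

def callback(msg):
    latency = rospy.Time.now() - msg.header.stamp
    rospy.loginfo("stamp-to-receive latency: %.3f ms", latency.to_sec() * 1e3)

rospy.init_node("latency_probe")
rospy.Subscriber("/camera/image_raw", Image, callback, queue_size=1)  # placeholder topic
rospy.spin()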
At present, the economies of developed countries are entering the period of the fourth technological revolution known as Industry 4.0.
The previous three technological revolutions:
1. The industrial revolution of the eighteenth and nineteenth centuries, driven mainly by the industrial application of the steam engine.
2. The electricity era of the late nineteenth and early twentieth centuries.
3. The IT revolution of the second half of the twentieth century, driven by computerization, the widespread use of the Internet and the beginnings of robotization.
The current fourth technological revolution, known as Industry 4.0, is motivated by the development of the following factors:
- artificial intelligence,
- cloud computing,
- machine learning,
- Big Data database technologies,
- Internet of Things.
In every previous technological revolution the same question was asked: will the new technology increase unemployment? Yet economies developed and changed structurally, and labor markets returned to balance. Short-term economic crises appeared periodically, but their negative economic effects, such as falling income and rising unemployment, were quickly reduced by active state intervention.
It seems to me that automation and robotization, IT, artificial intelligence and machine learning will change labor markets, but this does not necessarily mean a large increase in unemployment. New professions, occupations and specialties in these areas of knowledge and technology will be created. After all, someone must design, create, test, control and implement these machines and robots into production processes.
Therefore, I am asking you:
Will technological development based on automation, robotization, IT, artificial intelligence and machine learning increase unemployment in the future?
Please reply. I invite you to the discussion.
FMC-9: How is long-term memory (LTM) in the visual cortex retrieved in order to rebuild the image in short-term memory (STM) for perception and reasoning?
Will the development of artificial intelligence, e-learning, the Internet of Things and other information technologies increase the scope of automation and standardization of didactic processes, which could result in teachers being replaced by robots?
Unfortunately, there is a danger that, due to the development of artificial intelligence, e-learning, machine learning, the Internet of Things, etc., technology could replace the teacher in the future. This will not happen in the next few years, but such a scenario cannot be excluded over a horizon of several decades. On the other hand, the work of a teacher is creative work and social education. Currently it is assumed that artificial intelligence will not be able to fully replace a human teacher, because it is assumed that a machine cannot be taught artistry, social sensitivity, emotional intelligence, empathy, etc.
Do you agree with me on the above matter?
In the context of the above issues, the following question is valid:
Will the development of artificial intelligence, e-learning, the Internet of Things and other information technologies increase the scope of automation and standardization of didactic processes, which could result in teachers being replaced by robots?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Will one of the products of the integration of Industry 4.0 information technologies be the creation of fully autonomous robots capable of self-improvement?
Will the combination of artificial intelligence, machine learning, robotics, the Internet of Things, cloud data processing and Big Data databases that automatically acquire information from the Internet, and possibly other advanced information technologies typical of the current Industry 4.0 technological revolution, allow the creation of fully autonomous robots capable of self-improvement?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
In which branches of industry is the development of robotics, work automation and the implementation of artificial intelligence into production processes, logistics, etc. currently the most dynamic?
Please reply
I invite you to the discussion
What kind of scientific research dominates in the field of the development of automation and robotics?
Please provide your suggestions for a question, problem or research thesis on the topic of the development of automation and robotics.
Please reply.
I invite you to the discussion
Thank you very much
Best wishes
Dear Researchers, Scientists, Friends,
Will intelligent robots replace humans in all difficult, cumbersome jobs and professions as part of the progress of civilization?
Will robotization, in the course of civilization's progress, replace humans in all difficult, arduous jobs and professions?
Artificial intelligence technology is developing rapidly and finding more and more applications. More and more companies and enterprises are implementing AI-based information systems and applications in their businesses. On the other hand, there are various risks arising from the improper use of AI applications and AI agents.
Artificial intelligence technology has been rapidly developing and finding new applications in recent years. I have described the main determinants, including potential opportunities and threats to the development of artificial intelligence technology, in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please feel free to respond,
I invite you all to join the discussion,
Thank you very much,
Best wishes,
I would like to invite you to join me in scientific cooperation,
Dariusz Prokopowicz
Are there futurological estimates of when autonomous humanoid robots will be mass-produced in series?
When will autonomous humanoid robots be mass-produced in series, realizing the futurological visions known from science fiction novels and movies such as "I, Robot", "A.I. Artificial Intelligence", "Ex Machina", "Chappie", "Bicentennial Man", "Star Wars", "Star Trek", etc.?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
I am currently investigating means to assess human interaction interest on a mobile robotic platform before approaching said humans.
I already found several sources which mainly focus on movement trajectories and body poses.
Do you know of any other observable features which could be used for this?
Could you point me to relevant literature in this field?
Thanks in advance,
Martin
Can someone provide me with information on how to control a physical robot arm with V-REP?
I am new to robotics and I want to link my V-REP simulation to a real KUKA arm.
Regarding unmanned aerial vehicles (UAVs), and especially drones:
With (deep) reinforcement learning, a single network can be trained to map state directly to actuator commands, making any predefined control structure unnecessary for training.
There are now some relevant results in terms of supervised DNNs capable of providing a position with respect to a given image, so that a drone is able to navigate both outdoors and indoors.
Now suppose we are able to memorize places (at least their relevant features) and create a topological map respecting the temporal sequence, probably adding some basic movement information like "move straight, turn left, turn right...". I am aware that the task is very complex, because you should be able to recognize a place whatever the viewpoint is and, ideally, under different weather conditions. Of course, such a map should also be able to achieve loop closure.
This opens the door to a completely new architecture for unmanned aerial vehicles, because you would no longer need to know your position (very) accurately (SLAM would no longer be required).
You will have loosely coupled deep neural nets working together (I think this is also quite an interesting subject of research for the future).
Indeed, the movements of the UAV would be computed with respect to the perceived environment, and the UAV would be able to plan a route based on its "topological/semantic" map. It would use its memory to know where it is and where to go, and plan movements accordingly (for instance: I have to move straight - a coffee shop on my right, then a stop sign, then move to the left...).
This gets closer to what mammals (at least the ones studied in laboratories) are doing.
I would like to know if anyone has heard about ongoing research on such a coupling between place recognition and topological/semantic maps? A small sketch of the data structure I have in mind follows below.
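To make the idea concrete, here is a minimal sketch of such a topological/semantic map, assuming an external place-recognition model that turns an image into a normalized feature vector; nodes are remembered places and edges carry a coarse motion primitive, so planning returns a sequence of movements rather than metric poses.

# Minimal topological-map sketch. Place descriptors are assumed to come
# from some place-recognition network (not shown); matching is a plain
# dot-product similarity against remembered places (loop closure check).
import networkx as nx
import numpy as np

class TopologicalMap:
    def __init__(self, match_threshold=0.8):
        self.graph = nx.DiGraph()
        self.match_threshold = match_threshold

    def localize(self, descriptor):
        """Return the best-matching known place, or None if unseen."""
        best, best_sim = None, self.match_threshold
        for node, data in self.graph.nodes(data=True):
            sim = float(np.dot(descriptor, data["descriptor"]))
            if sim > best_sim:
                best, best_sim = node, sim
        return best

    def add_place(self, name, descriptor, previous=None, motion=None):
        self.graph.add_node(name, descriptor=descriptor / np.linalg.norm(descriptor))
        if previous is not None:
            # Edges preserve the temporal sequence, e.g. motion="turn_left".
            self.graph.add_edge(previous, name, motion=motion)

    def plan(self, start, goal):
        """Route as a sequence of motion primitives, not metric waypoints."""
        nodes = nx.shortest_path(self.graph, start, goal)
        return [self.graph.edges[a, b]["motion"] for a, b in zip(nodes, nodes[1:])]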
Thanks
In some activities this is already possible, and robots are already being produced that replace people in specific repetitive activities.
However, will robots replace people in all activities and functions in the future?
In my opinion, it is not possible for this type of futurological vision to be realized.
People are afraid of such a scenario for the future development of civilization.
These fears are expressed in the predominance of negative futurological visions, known from fiction literature and films, in which a civilization whose autonomous robots replace people in almost all activities, difficult work and production processes, and achieve a high level of artificial intelligence, generates serious threats to humanity.
Please answer
Best wishes
To be used for humanoid robots or personal robots.
Hello,
I want to use multiple Kinects for localizing a mobile manipulator.
I want to know if that is possible.
Thank you
I have an application where I want to live-stream video taken from a camera mounted on a mobile robot. I want to know whether it is possible to transmit video from the mobile robot using XBee and to reconstruct the live stream at the receiver side using XBee.
I am a beginner in the area of computer vision, and I have a basic doubt about 2D perspective projection. When analyzing the 3D-to-2D transform, we take the focal length as the distance to the image plane (in almost all references). The focal length of a lens is actually the distance between the center of the lens and the point on the optical axis where parallel rays converge. So if we place the image plane at this point, how can we get the image? If my question needs any clarification, please mention it and I will elaborate. I hope for a valuable reply; it will help me improve my basic knowledge in this area.
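One way to see why the two notions coexist: for the thin-lens equation 1/f = 1/d_o + 1/d_i, the image distance d_i approaches the focal length f as the object distance d_o grows, which is why the pinhole model simply places the image plane at distance f. A tiny numeric sketch (units arbitrary but consistent):

# Pinhole projection and the thin-lens image distance, side by side.
def project(point3d, f):
    X, Y, Z = point3d
    return (f * X / Z, f * Y / Z)  # pinhole model: image plane at distance f

def image_distance(f, d_o):
    return 1.0 / (1.0 / f - 1.0 / d_o)  # from 1/f = 1/d_o + 1/d_i

print(project((1.0, 0.5, 10.0), f=0.035))  # projection with f = 35 mm
print(image_distance(0.035, 10.0))         # ~0.0351 m: nearly f for a far object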
It is well known that the study of computer vision includes acquiring knowledge from images and videos in general. One of its applications is deriving 3D information from 2D images.
I would like to know whether the use of 3D imaging from IR cameras can also be considered part of the computer vision field, or whether it is better described as a machine vision application.
I have to build a rudder control system for an autonomous ship, and I must do it in LabVIEW. I will have GPS and sensors for measuring distance, velocity and depth. I also have to detect other vessels, so I suspect I need a GPU, but I'm not very sure about it.
I'm trying to figure out how to get an indication of torso twist from a camera image perpendicular to a six-man canoe. Obviously, mounting a camera above each paddler would give the best results, but is derivation of the torso twist angle possible from simple side-image analysis?
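One possible proxy, offered only as a sketch: take two shoulder keypoints from any 2D pose estimator and treat the foreshortening of the shoulder line as a twist cue. The calibration value (shoulder separation when square to the camera) is an assumption you would measure per paddler.

# Torso-twist proxy from two shoulder keypoints in a side-view image.
# left_shoulder, right_shoulder: (x, y) pixel coordinates from a pose
# estimator; shoulder_width_px: separation when square to the camera.
import math

def apparent_twist(left_shoulder, right_shoulder, shoulder_width_px):
    dx = left_shoulder[0] - right_shoulder[0]
    dy = left_shoulder[1] - right_shoulder[1]
    apparent = math.hypot(dx, dy)
    ratio = min(apparent / shoulder_width_px, 1.0)
    return math.degrees(math.acos(ratio))  # 0 deg = shoulders square to the camera

print(apparent_twist((320, 200), (380, 205), shoulder_width_px=120))  # ~59.9 deg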
For tracking finger position in 3D, accuracy is very important.
There are various techniques for 3D measurement.
Which one is the most accurate?
I have an arm robot. An object's coordinates will be captured by the camera and need to be mapped to the robot frame to apply the inverse kinematics (IK) algorithm; the robot then has to move to a location defined in the camera image displayed on the supervising computer.
I'm wondering what kind of vision system I should use to capture object coordinates, and to measure surface defects in order to characterize surface roughness in a polishing task?
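For the camera-to-robot mapping part, if the workspace is roughly planar, a homography is a common first step. Here is a minimal OpenCV sketch under that assumption; the four reference points are placeholders you would measure once by jogging the robot to four marked positions visible in the image.

# Image-to-robot coordinate mapping for a camera over a planar workspace.
import cv2
import numpy as np

# Pixel coordinates of four markers in the camera image (placeholders)...
img_pts = np.array([[100, 80], [540, 90], [530, 420], [110, 410]], dtype=np.float32)
# ...and the matching robot-base XY coordinates in millimetres (placeholders).
robot_pts = np.array([[0, 0], [300, 0], [300, 200], [0, 200]], dtype=np.float32)

H, _ = cv2.findHomography(img_pts, robot_pts)

def pixel_to_robot(u, v):
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # robot XY, ready to hand to the IK solver

print(pixel_to_robot(320, 240))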
I'm a student with an electrical/mechanical background; in my project I'm searching for a solution for a company that wants to start with 3D cameras for robotics.
At the moment I'm working with Matlab and it works great; the possibility to create your own GUI is a big plus.
But I have read that Matlab is more for development purposes and is slower (overhead).
A second software package that I am trying is Halcon; at the moment I have no overview of its possibilities.
But it looks like you can either program in Halcon's own language, HDevelop, or use their libraries in your own code (e.g. C++).
Programming in HDevelop with its GUI seems easier/faster than low-level programming (e.g. C++), but I don't know the limitations.
A disadvantage is that there is no community for support; you need to rely on their documentation.
A third option I have read a lot about is OpenCV, but with no low-level programming background this seems too ambitious for me.
I'm not searching for the best solution for me, but for a company (although I know the company doesn't have many computer engineers).
I was hoping to find software with a good GUI to reduce low-level programming; Halcon seems to be the closest match.
Thanks for your help.
I am working on hand gesture recognition and I want to know about "skinmodel.bin", because I am supposed to provide it but I don't know exactly what it is.
As is usual in object recognition, different work in this field is done with different test objects. How can I compare the performance of my work with an existing method?
I would like to build a one-to-one correspondence between two dense 3D facial meshes (~5000 vertices per face). I have tried several methods such as MLS, FFD, Laplacian deformation and MM; however, none of them produces satisfying registration results. There are many papers on this topic, but most of them come without public code. Is there any public code or package for 3D face registration available?
I'm mainly interested in aspects concerning (cognitive) vision.
I know the project lists on Cordis, but it would be great to have an article that summarizes the projects and relates them to each other.
Currently I am doing a project on an unmanned ground vehicle.
Given two RGB pixels with components (R1, G1, B1) and (R2, G2, B2), what would a function f(R,G,B) -> X look like for which the comparison f(R1,G1,B1) < f(R2,G2,B2) is robust to noise and has a reasonable interpretation?
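One common candidate, offered as a sketch rather than the answer: Rec. 601 luma, a weighted channel sum matched to perceived brightness. Summing the three channels partially averages out independent per-channel noise, and the ordering f(p1) < f(p2) reads naturally as "pixel 1 is darker than pixel 2".

def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b  # Rec. 601 weights

p1, p2 = (120, 200, 40), (90, 95, 230)
print(luma(*p1), luma(*p2), luma(*p1) < luma(*p2))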
Can anyone help me with finding optical flow on a featureless plane?
I am interested in mobile navigation using optical flow, but recently I have had the following problem:
the optical flow algorithm (I use the pyramidal Lucas-Kanade algorithm) does not find the correct velocity or direction on a featureless plane.
So I would like to seek advice on this problem:
how can I find velocity or direction using optical flow or another method?
Could I get your advice, or a paper you would recommend?
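One direction to try, sketched below with OpenCV: dense Farneback flow, which does not depend on corner detection the way pyramidal Lucas-Kanade does. Be aware that on a truly textureless plane no flow method can recover motion; some texture (even projected light) is required.

# Dense Farneback optical flow; a large window helps on weak texture.
import cv2
import numpy as np

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=21,
    iterations=3, poly_n=7, poly_sigma=1.5, flags=0)

# Average motion vector over the image as a crude velocity/direction cue.
mean_dx, mean_dy = flow[..., 0].mean(), flow[..., 1].mean()
print("direction (deg):", np.degrees(np.arctan2(mean_dy, mean_dx)))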
I am doing my project in image processing. The goal is to restore occluded face regions: for example, a person is wearing goggles, but we would like to predict his face without the goggles. We have found some related (matched) faces, but how can I replace the goggled portion with the corresponding un-goggled portion of the matched faces? That is what I want to do.
I am using OpenCV to track the bot and I am getting its coordinates. I want the bot to go from its current location to a predefined coordinate, traversing with precision using just 2 DC motors and nothing else.
One way is to use PID to minimize the error (the error value being the perpendicular distance of the bot's current coordinates from the initial path of traversal); a sketch of this approach follows below the constraints.
Another way is to draw the path on the GUI display and use the typical PID line-following principle. Please tell me the limitations of the two approaches and, if possible, a better way out.
Constraints:
- can accommodate at most 2 more analog input pins (all other I/O pins are already in use);
- I am also very restricted by size: the entire bot must fit in a 9 cm x 9 cm x 5 cm space.
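A minimal sketch of the first approach, with placeholders: get_cross_track_error() stands in for your OpenCV tracking code and set_motors() for whatever motor driver the bot uses; the gains are starting values to tune on the real bot.

# PID on the bot's signed cross-track error, driving two DC motors differentially.
KP, KI, KD = 2.0, 0.0, 0.5   # tune on the real bot
BASE_SPEED = 120             # nominal PWM duty for both motors
DT = 0.02                    # control period in seconds

integral, prev_error = 0.0, 0.0

def pid_step(error):
    global integral, prev_error
    integral += error * DT
    derivative = (error - prev_error) / DT
    prev_error = error
    return KP * error + KI * integral + KD * derivative

def control_step():
    # Signed perpendicular distance (pixels) from the start-to-goal line.
    error = get_cross_track_error()
    correction = pid_step(error)
    # Differential drive: slow one wheel, speed up the other to steer back.
    set_motors(BASE_SPEED - correction, BASE_SPEED + correction)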
Suppose I have a webcam on the head of a NAO robot and I would like an algorithm to segment the RGB images while the NAO robot is walking. Can anyone propose a good, accurate method for segmenting these RGB images?
Note that soccer is only an example to explain my problem; in fact I do not work on soccer. I focus on segmentation of RGB images captured in indoor scenes, especially work rooms and hallways.
Is segmentation based on changes of intensity and color better than segmentation by detecting lines with algorithms like the Hough transform? I think segmentation by line detection is better, because the colors are mostly brown-like and do not change between scenes, but I am not sure. Can anyone give an opinion on this too? Is there another way to solve this problem? A small color-based sketch follows below.
I have found many codes, but they cannot segment the majority of scenes well. The speed of the algorithm is also important for me, but not as much as accuracy.
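For the color-based side of the comparison, a minimal OpenCV sketch: thresholding in HSV space separates hue from brightness and is usually more stable than raw RGB while the robot walks and lighting changes. The brown hue range below is an assumption; sample it from your own hallway images.

# HSV color segmentation with a cheap morphological clean-up.
import cv2
import numpy as np

frame = cv2.imread("nao_frame.png")        # placeholder file
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

lower_brown = np.array([5, 50, 40])        # H, S, V lower bound (assumed)
upper_brown = np.array([25, 255, 220])     # H, S, V upper bound (assumed)
mask = cv2.inRange(hsv, lower_brown, upper_brown)

kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckles
segments = cv2.bitwise_and(frame, frame, mask=mask)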
Thanks for your answers.
Interested in doing some research in Computer Vision and Mobile Visual Search. Could you please suggest some novel ideas/issues that are emerging in that research topic?
I have read about ASM and the discrete symmetry operator, and I understand the main idea.
But I got confused by the bundle of functions I do not understand. Is there a simplified illustration of both of them?
Can anyone help me understand the hand-labeled colour space?
I want help with quadcopter programming.
How can I program the 4 motors using an Arduino to control the speed and the direction?
And once the quadcopter is up, I want to just slow down the speed without affecting the direction.
*Note: my motors are DC, not stepper.
Thanks a lot.
Does anyone have any experience with developing a domestic robot's ability to locate itself in an indoor environment?
For your answer, take into account the possibility of having a camera on the robot (image recognition may be a way to go?).
I believe it may be necessary to take multiple inputs. For example, an image recognition algorithm could be combined with a dead-reckoning method, such as estimating displacement as a function of the revolutions of the robot's wheels, to estimate the position of the robot.
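For the dead-reckoning half, here is a minimal differential-drive odometry sketch; wheel radius, axle track and encoder resolution are assumptions to replace with the real robot's values.

# Integrate wheel-encoder ticks into an (x, y, theta) pose estimate.
import math

WHEEL_RADIUS = 0.035   # metres (assumed)
AXLE_TRACK = 0.20      # distance between wheels, metres (assumed)
TICKS_PER_REV = 360    # encoder ticks per wheel revolution (assumed)

x, y, theta = 0.0, 0.0, 0.0  # pose in the odometry frame

def update_pose(left_ticks, right_ticks):
    """Integrate encoder ticks counted since the last call."""
    global x, y, theta
    dl = 2 * math.pi * WHEEL_RADIUS * left_ticks / TICKS_PER_REV
    dr = 2 * math.pi * WHEEL_RADIUS * right_ticks / TICKS_PER_REV
    d = (dl + dr) / 2.0               # forward distance of the robot centre
    theta += (dr - dl) / AXLE_TRACK   # heading change
    x += d * math.cos(theta)
    y += d * math.sin(theta)
    return x, y, theta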
All feedback would be greatly appreciated, as I am just starting with this investigation.
Thank you very much!
I would like to know if there is a mobile robot simulator that is helpful in research. Some of the main simulators I found are Microsoft Robotics, the MATLAB robotics simulator, Webots, Lego, DARPA robotics tools, etc. Can anyone suggest which simulator is useful in research on robot path planning, obstacle avoidance and autonomous navigation?
In the new MATLAB version (2014a) you can control the RPi's camera module using some classes, but they work by taking snapshots or recording to a file, not continuously. I tried to set up Simulink RPi-USB to use the camera module, but there are some lags. Actually, the only way I have found to do it is using Linux remote commands from the MATLAB command window, but they do not allow any access to the output video stream, and they need a fixed acquisition duration, whereas I need continuous acquisition. Any advice?
I need to calculate the angle of rotation and the displacement distance of an object between two frames of video in MATLAB. How can I do this to obtain an accurate answer?
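One standard pipeline, sketched here in Python/OpenCV (MATLAB's Computer Vision Toolbox offers the same steps, e.g. feature matching plus estimateGeometricTransform): match features between the two frames and robustly fit a similarity transform, whose parameters give the rotation angle and translation.

# Estimate rotation and displacement between two frames with ORB + RANSAC.
import cv2
import numpy as np

f1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
f2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
k1, d1 = orb.detectAndCompute(f1, None)
k2, d2 = orb.detectAndCompute(f2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:100]

pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

M, _ = cv2.estimateAffinePartial2D(pts1, pts2)   # robust rotation+translation fit
angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
shift = np.hypot(M[0, 2], M[1, 2])
print("rotation (deg):", angle, "displacement (px):", shift)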
I am looking for a local stereo matching algorithm that can be used as a standard comparison for other algorithms. Ideally it would have source code available and be able to mark occluded parts (e.g. as NaN). Definitely no graph cuts or equivalent. Could anyone suggest such a 'standard' local stereo matching algorithm?
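One frequently used baseline, offered as a sketch: OpenCV's StereoBM, a purely local window-based matcher. It does not label occlusions explicitly, but unmatched pixels come out with negative disparities, which can be mapped to NaN.

# StereoBM baseline on a rectified pair, with invalid pixels set to NaN.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # (placeholder files)

bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = bm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

disp[disp < 0] = np.nan  # unmatched/occluded pixels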
Thanks!
I could see sonar working, as far as detecting an object in the water goes, but the cost of sonar is also very high. Is there any easy way to find a human dead body in water within 1 km?
I can't understand how simultaneous positioning and camera control are being used.
I read this article: "Eight pairs of descending visual neurons in the dragonfly give wing motor centers accurate population vector of prey direction". I found it really important and really interesting.
Do you think it could be used to develop the possibilities of an autonomous UAV?
I want to know whether any newer feature extractors in machine vision have been published since David G. Lowe's work in 2004.
Is there any shift- and scale-invariant feature extractor for object recognition?
If a robot is equipped with multiple sensors, e.g. camera, audio, tactile, force/torque, etc., how can its position estimation be improved by sensor fusion?
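At its simplest, if each sensor yields a position estimate with a known variance, the classic inverse-variance weighted combination (the core of Kalman-style fusion) already improves the estimate; real systems extend this to full state vectors with an (extended) Kalman filter. A minimal sketch:

# Fuse scalar position estimates by inverse-variance weighting.
def fuse(estimates):
    """estimates: list of (position, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * x for (x, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # never worse than the best single sensor
    return fused, fused_var

# Example: camera says 2.0 m (var 0.04), wheel odometry says 2.3 m (var 0.25).
print(fuse([(2.0, 0.04), (2.3, 0.25)]))  # -> (~2.041, ~0.034)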
I am trying to use this method for machine learning.
I would like to find an introductory document on visual servoing that is oriented more toward image processing and uses MATLAB as the testing tool.
I'm working with the Kinect in MATLAB. It works well, but I need it to work in near mode, so I do the following:
% Create the acquisition object for the depth sensor (device 2 = depth)
vid = videoinput('kinect', 2);
src = vid.Source;
% Select near depth mode (supported by Kinect for Windows hardware)
src.DepthMode = 'Near';
% Then preview the stream
preview(vid);
MATLAB gives an error: MATLAB cannot open the depth sensor.
Apart from OpenCV, which open-source tools are the best alternatives for developing image processing and computer vision algorithms?
How can I use the data obtained from the Haar wavelet transform in image processing?
Are there examples of such applications?
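A minimal sketch with PyWavelets, as one concrete example: a single-level 2D Haar decomposition splits an image into an approximation band and horizontal/vertical/diagonal detail bands, which are commonly used for edge features, denoising or compression.

# Haar wavelet decomposition and a simple detail-thresholding use of it.
import pywt
import numpy as np

image = np.random.rand(128, 128)  # stand-in for a grayscale image

cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')  # approximation + detail bands

# Example use: zero out small detail coefficients (crude denoising),
# then reconstruct the image from the modified bands.
cH, cV, cD = (np.where(np.abs(c) > 0.1, c, 0.0) for c in (cH, cV, cD))
denoised = pywt.idwt2((cA, (cH, cV, cD)), 'haar')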
I have a problem "shifting" one view to the other.
Currently I have two images taken by two cameras at different positions, that is, two images taken from different views. I have the ground-truth disparity maps of both images, but how can I "shift" the left image to match the right image properly?
My attached result is calculated as follows: if several pixels are mapped to the same position after "shifting" according to the disparity map, I always keep the pixel with the smallest (it should be the biggest) disparity value, because I believe a bigger disparity value means the pixel comes from an object in front. Otherwise, I just "shift" according to the disparity map.
Since my result is very bad, I hope someone can give me some advice.
For the attached image, the upper left is the left image, the lower left is the one "shifted" from the right image.
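For reference, a minimal forward-warping sketch with a disparity z-buffer, matching the corrected rule above (largest disparity wins, since it belongs to the nearest surface). Sign conventions vary between datasets; flip the subtraction if your maps use the opposite one.

# Forward-warp the left image to the right view using its disparity map.
import numpy as np

def warp_left_to_right(left, disparity):
    h, w = disparity.shape
    out = np.zeros_like(left)
    zbuf = np.full((h, w), -np.inf)  # best disparity seen per target pixel
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            xr = int(round(x - d))    # rectified-pair convention (assumed)
            if 0 <= xr < w and d > zbuf[y, xr]:
                zbuf[y, xr] = d       # nearer surface overwrites farther one
                out[y, xr] = left[y, x]
    return out  # zero-valued holes remain where the right view is occluded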
If we want to do advanced image processing, such as stereo image processing, object tracking, point tracking, etc., which type of hardware is normally used for real-time response? Which hardware is used by space research companies for advanced image processing?
The result of 3D reprojection using the StereoSGBM algorithm in Emgu CV (OpenCV) is the X, Y, Z coordinates of each pixel in the depth image.
public void Computer3DPointsFromStereoPair(Image<Gray, Byte> left, Image<Gray, Byte> right, out Image<Gray, short> disparityMap, out MCvPoint3D32f[] points)
{
    // Fill the disparity map first; 'stereoSolver' is assumed to be a StereoSGBM instance configured elsewhere.
    disparityMap = new Image<Gray, short>(left.Size);
    stereoSolver.FindStereoCorrespondence(left, right, disparityMap);
    // Reproject to 3D with the 4x4 rectification matrix Q.
    points = PointCollection.ReprojectImageTo3D(disparityMap, Q);
}
Taking the first element of this result:
points[0] = { X = 414.580017, Y = -85.03029, Z = 10000.0 }
I'm confused here: to which pixel does this point refer, and why is it not something like X = 0, Y = 0, Z = 10000.0?
I want to study the dynamic response of the simulated model.
Disparity images should be produced at 4 fps or better. At my institute we can do some of the engineering ourselves; however, FPGA programming is the limiting factor.
Which methods return high-accuracy depth map information for robot grasping?
Can I get very good results using two stereo cameras with the OpenCV library to compute the disparity and depth map, or can you suggest another method to follow?
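For reference, a minimal OpenCV sketch of the pipeline in the question: SGBM disparity from a calibrated, rectified pair, reprojected to metric depth with the Q matrix from cv2.stereoRectify. The parameter values are generic starting points, not grasping-grade settings.

# SGBM disparity and per-pixel 3D reprojection.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified images
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # (placeholder files)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0, numDisparities=128, blockSize=5,
    P1=8 * 5 * 5, P2=32 * 5 * 5, uniquenessRatio=10)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

# Q comes from your own cv2.stereoRectify calibration (loaded here from disk).
Q = np.load("Q.npy")
points3d = cv2.reprojectImageTo3D(disparity, Q)  # per-pixel X, Y, Z
depth = points3d[..., 2]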
Most of the literature uses processing of video/camera images. I would need a simpler solution, and I also need to avoid the ethical issues involved in videoing people.