The proposed paper outlines the design of an economical robotic arm that visualizes the chessboard and plays against an opponent using a visual servoing system. We adopted the mechanical design prototype proposed by FaBLab RUC and used SolidWorks to design the four-jointed gripper. The proposed methodology involves detecting the corner squares of the chessboard and then segmenting the images. A convolutional neural network is then trained to recognize the images in order to determine the movement of the chess pieces. To track the manipulator, the Kanade-Lucas-Tomasi method is used in the visual servoing system. An Arduino interacts with the robotic arm through G-code commands. Game decisions are made with the help of a chess engine, and the pieces on the board are moved accordingly. Thus, a didactic robotic arm capable of decision making and data processing is developed, serving as a good opponent in chess.
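The move-execution step described above can be sketched in a few lines: translating a chess-engine move in UCI notation (e.g. "e2e4") into G-code positioning commands. This is an illustrative sketch only; the square size, gripper command mapping (M3/M5) and workspace origin are assumptions, not the paper's actual firmware values.

```python
# Hypothetical sketch: mapping an engine move to arm G-code.
SQUARE_MM = 40  # assumed side length of one chessboard square, in mm

def square_to_xy(square, origin=(0.0, 0.0)):
    """Map a square like 'e2' to the (x, y) centre of that square in mm."""
    file_idx = ord(square[0]) - ord('a')   # files a..h -> 0..7
    rank_idx = int(square[1]) - 1          # ranks 1..8 -> 0..7
    x = origin[0] + (file_idx + 0.5) * SQUARE_MM
    y = origin[1] + (rank_idx + 0.5) * SQUARE_MM
    return x, y

def move_to_gcode(uci_move, travel_z=50, grip_z=5):
    """Emit a minimal G-code sequence: hover, descend, grip, lift, place."""
    src, dst = uci_move[:2], uci_move[2:4]
    sx, sy = square_to_xy(src)
    dx, dy = square_to_xy(dst)
    return [
        f"G0 X{sx:.1f} Y{sy:.1f} Z{travel_z}",  # hover over source square
        f"G1 Z{grip_z}",                        # descend to the piece
        "M3",                                   # close gripper (assumed mapping)
        f"G1 Z{travel_z}",                      # lift the piece
        f"G0 X{dx:.1f} Y{dy:.1f}",              # move above destination square
        f"G1 Z{grip_z}",                        # lower the piece
        "M5",                                   # open gripper (assumed mapping)
        f"G1 Z{travel_z}",                      # retract
    ]

print(move_to_gcode("e2e4")[0])  # first command hovers over e2
```

A real system would stream these lines over the Arduino's serial link and wait for an acknowledgement between commands.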
... The robot works on a two-stage cleaning mechanism and can be charged automatically using power from a solar panel when needed. An Arduino-based robotic arm designed in  can conceptualize a chessboard and has the ability to play chess. ...
A solar panel is vulnerable to dust accumulating on its surface, and its efficiency gradually decreases as dust builds up. In this paper, an Arduino-based solar panel cleaning system is designed and implemented for dust removal. The proposed solar panel cleaner is waterless, economical and automatic. The two-step mechanism consists of an exhaust fan, which works as an air blower, and a wiper to sweep the dust from the panel surface; a DC motor powers the wiper. Since the system does not need water to clean the solar panel, it avoids wasting water and is effective in desert areas. Experimental results show that the proposed cleaning system operates with an efficiency of 87-96% for different types of sand.
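The two-step sequence above can be sketched as a simple control routine, with the blower and wiper modelled as stub functions. The durations, pass count and battery threshold are illustrative assumptions, not the paper's values.

```python
# Illustrative sketch of the two-step cleaning cycle with stubbed actuators.
def run_blower(seconds):
    return f"blower on for {seconds}s"    # stand-in for driving the exhaust fan

def run_wiper(passes):
    return f"wiper made {passes} passes"  # stand-in for the DC-motor wiper

def clean_panel(battery_pct, min_battery=30):
    """Blow loose dust off first, then wipe the residue; recharge if low."""
    if battery_pct < min_battery:
        return ["charge from solar panel"]  # defer cleaning until charged
    return [run_blower(10), run_wiper(3)]

print(clean_panel(80))
```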
Cable-driven parallel robots are robots with cables instead of rigid links. Cables introduce advantages such as a high payload-to-weight ratio, large workspaces and high velocity capacity, but they also bring drawbacks such as poor accuracy when the robot model is inaccurate. In this paper, a visual servoing control is proposed in order to achieve high accuracy regardless of the precision of the robot model. The stability of the solution is analyzed to determine the tolerable perturbation limits. Experimental validation is performed both in simulation and on a real robot to highlight the differences.
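The core of a classical image-based visual servoing loop, as used in several of the works cited here, is the proportional law v = -&lambda; L&#8314; e, which drives the feature error e toward zero. A minimal numerical sketch follows for a square 2&times;2 interaction matrix L (so the pseudo-inverse reduces to an ordinary inverse); this is the generic textbook law, not the paper's cable-robot formulation.

```python
# Generic IBVS control law v = -lambda * L^-1 * e for a 2x2 case.
LAM = 0.5  # control gain (illustrative value)

def inv2x2(L):
    """Invert a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = L
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def control_law(L, e, lam=LAM):
    """Velocity command driving the image-feature error e toward zero."""
    Li = inv2x2(L)
    return [-lam * (Li[i][0] * e[0] + Li[i][1] * e[1]) for i in range(2)]

# With L = identity the command is simply -lambda * e:
print(control_law([[1.0, 0.0], [0.0, 1.0]], [2.0, -4.0]))  # [-1.0, 2.0]
```

In practice L depends on the feature depths, which is exactly where model errors enter and why stability margins must be analyzed.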
This paper focuses on the design of a 6-degrees-of-freedom (DoF) visual servoing control law. Instead of the geometric visual features used in standard vision-based approaches, the proposed controller uses wavelet coefficients as control signal inputs, i.e., the multiresolution coefficients of the wavelet transform of an image in the spatial domain. The main contributions are the definition of the multiresolution wavelet interaction model, which links the time variation of the wavelet coefficients to the robot's spatial velocity, and the associated task-function controller. The proposed control law was tested and validated experimentally on a commercial micromanipulator in an eye-to-hand configuration. To judge the efficiency of the control law, several validation tests were carried out under different conditions of use, i.e., large illumination variations, noisy images, partial occlusions and unknown 3D scenes. It is also demonstrated experimentally that the proposed approach outperforms the well-known photometric visual servoing as well as a feature-based visual servoing, especially in unfavorable conditions of use.
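To illustrate the kind of multiresolution feature such a controller consumes, here is one level of a 2D Haar wavelet decomposition computed in pure Python on a small grayscale patch. The wavelet family and decomposition depth used in the paper are not specified here; Haar is chosen only because it is the simplest case.

```python
# One level of a 2D Haar transform: averages and detail coefficients
# over non-overlapping 2x2 blocks of a grayscale image.
def haar2d_level(img):
    """Return (approximation LL, horizontal LH, vertical HL, diagonal HH)."""
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = [], [], [], []
    for i in range(0, h, 2):
        ll, lh, hl, hh = [], [], [], []
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll.append((a + b + c + d) / 4)  # local average (coarse image)
            lh.append((a - b + c - d) / 4)  # horizontal edge response
            hl.append((a + b - c - d) / 4)  # vertical edge response
            hh.append((a - b - c + d) / 4)  # diagonal detail
        LL.append(ll); LH.append(lh); HL.append(hl); HH.append(hh)
    return LL, LH, HL, HH

# A patch with a vertical edge: bright left half, dark right half.
patch = [[10, 10, 0, 0]] * 4
LL, LH, HL, HH = haar2d_level(patch)
print(LL)  # coarse approximation: [[10.0, 0.0], [10.0, 0.0]]
```

Stacking several such levels gives the coefficient vector whose time variation the wavelet interaction model relates to the robot's spatial velocity.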
The proposed work outlines the architecture and implementation of an autonomous style of gardening that operates by itself with the support of autonomous robots. The robot operates inside the garden, using sensors to monitor the plants and maintain a database of soil content, nutrients, environmental conditions and fruit locations. In this paper, the architecture of the system is presented and discussed along with experimental results for object recognition, navigation and manipulation. The work is carried out on cherry tomatoes fitted with sensors that keep track of each plant's well-being. The proposed work reduces manual labour and increases the efficiency of the system.
The proposed work deals with the architecture and implementation of an autonomous style of gardening. A mobile robot with an eye-in-hand camera is given features to capture and identify plant locations in the garden. On locating a plant, the robot directs watering of the plant, and when fruits are detected, it also grasps and collects them. The work is carried out on cherry tomatoes fitted with sensors that monitor and keep track of each plant's well-being, including the state of the fruit, soil humidity, weed formation, use of manure, etc. The monitored data is then communicated to the robots for appropriate action. Task allocation, monitoring, manipulation and sensing are distributed and centrally coordinated. In this paper, we present the architecture of the system along with experimental results for object recognition, navigation and manipulation.
This paper proposes a 3D-printed robotic arm that combines a computer vision system with a tracking algorithm, together with the design of an intelligent electromechanical vehicle system intended for operations in various fields. The main purpose of this work is to avoid the complicated process of traditional manual adjustment or teaching. The goal is for the robotic arm to grab a target automatically, classify it and place it in a specified area, and even to learn to distinguish target characteristics through training. The arm's movement is corrected through a real-time image-feedback control system. In the experiments, the computer vision system assists the robotic arm in detecting the color and position of the target. By adding color features to the algorithm training and through human-machine collaboration, it is shown that the accuracy of target tracking depends on two parameters, the object's location and the illumination direction of the light source, with the resulting accuracy ranging from 75.2% to 89.0%.
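The color-feature step can be illustrated with the standard-library colorsys module: classifying a target's RGB color by its hue, which is the usual first stage of color-based target detection. The hue bands and saturation threshold below are illustrative only; the original system's thresholds and camera pipeline are not given.

```python
# Hue-based colour labelling of an RGB sample, stdlib only.
import colorsys

def classify_color(r, g, b):
    """Return a coarse colour label from an RGB triple in 0..255."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if s < 0.2:
        return "neutral"          # low saturation: grey/white/black
    deg = h * 360                 # hue in degrees
    if deg < 30 or deg >= 330:
        return "red"
    if deg < 90:
        return "yellow"
    if deg < 150:
        return "green"
    if deg < 270:
        return "blue"
    return "magenta"

print(classify_color(200, 30, 40))  # a reddish target -> "red"
```

A vision pipeline would apply such a rule per pixel (or per detected blob) and pass the labelled target's position to the arm controller.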
Brain-computer interfaces (BCIs) offer a direct communication and control channel between the human brain and various physical devices. This paper introduces a mind-controlled device that enables monitoring of a patient and their indoor location using the Capsule Network (CapsNet) algorithm. This system offers huge benefits to patients who are restrained to wheelchairs and ventilators but have their cognitive ability intact. The electroencephalogram (EEG) signals are extracted and processed using an Arduino Uno microcontroller and classified with deep learning techniques. The EEG signals are compared by means of the capsule network algorithm for feature extraction and to predict the brain's attention and meditation states. The proposed CapsNet-based architecture outperforms the existing ANN approaches.
This paper is a review of computer vision as it applies to interaction between humans and machines. Computer vision, a subfield of artificial intelligence and machine learning, trains computers to see, interpret and respond to the visual world in much the same way that human vision does. Computer vision has found application in broad areas such as health care, safety and security, and surveillance, owing to progress, developments and innovations in artificial intelligence, deep learning and neural networks. The paper presents the enhanced capabilities of computer vision in various human-machine interaction applications involving artificial intelligence, deep learning and neural networks.
Semantic segmentation is a very active area of research in medical image analysis. The failure of conventional segmentation methods to preserve full resolution throughout the network led to methods designed to protect the resolution of the images. The proposed method is a semantic segmentation model for biomedical images that uses an encoder/decoder structure: it downsamples the spatial resolution of the input to produce lower-resolution feature maps that are effective at distinguishing between classes, and then upsamples back to a full-resolution segmentation map, reducing diagnostic time. The framework uses a pixel-to-pixel, fully trained cascaded convolutional neural network for the segmentation task. Evaluation on biomedical image analysis shows the performance improvement achieved by minimizing testing time and augmenting the analysis performed by the radiologist.
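The resolution flow that such an encoder/decoder relies on can be sketched numerically: each encoder stage halves the spatial size, each decoder stage doubles it back until full resolution is restored. The stage count below is an assumption for illustration (U-Net-style networks commonly use four).

```python
# Spatial sizes through a symmetric encoder/decoder (U-Net-style sketch).
def resolution_flow(size, stages=4):
    """List of feature-map side lengths from input down to the bottleneck
    and back up to the full-resolution segmentation map."""
    down = [size // (2 ** i) for i in range(stages + 1)]      # encoder path
    up = [down[-1] * (2 ** i) for i in range(1, stages + 1)]  # decoder path
    return down + up

print(resolution_flow(256))  # [256, 128, 64, 32, 16, 32, 64, 128, 256]
```

The symmetry is what lets skip connections pair each decoder stage with the encoder stage of matching resolution.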
In this paper, we propose a visual servoing scheme that imposes predefined performance specifications on the image feature coordinate errors and satisfies the visibility constraints that inherently arise owing to the camera’s limited field of view, despite the inevitable calibration and depth measurement errors. Its efficiency is demonstrated via comparative experimental and simulation studies.
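The prescribed-performance idea above can be made concrete with a small sketch: the feature error must stay inside an exponentially shrinking funnel &rho;(t) = (&rho;&#8320; - &rho;&#8734;)e^(-lt) + &rho;&#8734;. The specific funnel parameters below are illustrative, not the paper's.

```python
# Prescribed-performance funnel for an image-feature error.
import math

def funnel(t, rho0=1.0, rho_inf=0.05, decay=2.0):
    """Bound rho(t) = (rho0 - rho_inf) * exp(-decay * t) + rho_inf."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def within_spec(error, t):
    """Does |error| respect the prescribed performance bound at time t?"""
    return abs(error) < funnel(t)

print(funnel(0.0))            # initial bound: 1.0
print(within_spec(0.02, 5.0)) # small late error is inside the funnel
```

The controller's job is to keep the error inside this funnel for all t, which simultaneously enforces transient and steady-state specifications.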
This paper proposes a novel nonlinear geometric hierarchical dynamic visual servoing approach to drive a quadrotor to a desired pose defined by a previously captured image of a planar target. Unlike existing works, the key novelty is to extend position-based nonlinear hierarchical control to image-based nonlinear hierarchical control. More specifically, by seamlessly integrating nonlinear hierarchical control with geometric control, and taking full advantage of the cascade property of the system, the proposed visual servoing strategy does not require the thrust force or its derivative to be measurable, in contrast with existing backstepping methods, which brings much convenience for practical applications. For the attitude loop, the axis-angle rotation representation is adopted to design a tracking control law on the vector space. In the outer loop, perspective image moments in the virtual image plane are employed as image feedback to construct the outer-loop image-based visual servoing controller with geometric control and backstepping techniques. Based on Lyapunov techniques and the theory of cascade systems, it is rigorously proven that the proposed image-based controller achieves asymptotic stability. Comparative experiments show that the proposed approach has the advantages of better transient performance, better steady-state performance and stronger robustness.
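The axis-angle representation used in the attitude loop is conveniently computed with Rodrigues' formula, v' = v cos&theta; + (k &times; v) sin&theta; + k (k &middot; v)(1 - cos&theta;), for a unit axis k and angle &theta;. The sketch below is a pure illustration of that representation; it does not reproduce the paper's control law.

```python
# Rodrigues rotation of a vector v about a unit axis k by angle theta.
import math

def rodrigues(v, k, theta):
    c, s = math.cos(theta), math.sin(theta)
    kxv = (k[1] * v[2] - k[2] * v[1],   # cross product k x v
           k[2] * v[0] - k[0] * v[2],
           k[0] * v[1] - k[1] * v[0])
    kdv = k[0] * v[0] + k[1] * v[1] + k[2] * v[2]  # dot product k . v
    return tuple(v[i] * c + kxv[i] * s + k[i] * kdv * (1 - c)
                 for i in range(3))

# Rotating the x-axis 90 degrees about z yields the y-axis:
print([round(x, 6) for x in rodrigues((1, 0, 0), (0, 0, 1), math.pi / 2)])
```

Working directly with (k, &theta;) avoids the singularities of Euler angles and matches the geometric-control formulation on the rotation group.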
Magnetically actuated microswimmers have attracted researchers to investigate their swimming characteristics and controlled actuation. Although plenty of studies on actuating helical microswimmers have been carried out, robust closed-loop controls still need to be explored for practical applications. In this paper, we propose a data-driven, model-free method using image-based visual servoing (IBVS), which uses features extracted directly in the image space as feedback. The IBVS method can eliminate camera calibration errors. We demonstrate experimentally that the proposed IBVS method enables velocity-independent following of an arbitrarily given path on the plane, which permits a better user-interaction experience. The proposed control method is successfully applied to obstacle avoidance tasks and has potential for application in complex circumstances. This approach is promising for biomedical applications.
Can a large convolutional neural network trained for whole-image classification on ImageNet be coaxed into detecting objects in PASCAL? We show that the answer is yes, and that the resulting system is simple, scalable, and boosts mean average precision, relative to the venerable deformable part model, by more than 40% (achieving a final mAP of 48% on VOC 2007). Our framework combines powerful computer vision techniques for generating bottom-up region proposals with recent advances in learning high-capacity convolutional neural networks. We call the resulting system R-CNN: Regions with CNN features. The same framework is also competitive with state-of-the-art semantic segmentation methods, demonstrating its flexibility. Beyond these results, we execute a battery of experiments that provide insight into what the network learns to represent, revealing a rich hierarchy of discriminative and often semantically meaningful features.
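Two building blocks behind region-proposal detectors of this kind are intersection-over-union (IoU) scoring and greedy non-maximum suppression (NMS), which prunes overlapping detections. The sketch below is the generic formulation with boxes as (x1, y1, x2, y2) tuples, not the paper's exact code.

```python
# IoU between two axis-aligned boxes, and greedy NMS over scored boxes.
def iou(a, b):
    """Intersection-over-union of boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Keep highest-scoring boxes, dropping any that overlap a kept box."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
print(nms(boxes, [0.9, 0.8, 0.7]))  # [0, 2]: overlapping second box suppressed
```

IoU against ground-truth boxes is also how mAP figures like those quoted above are scored.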