Conference Paper

Smart Projector with Wireless Mouse

Conference Paper
Full-text available
This paper presents an analysis of automatically fitting multiple projections to a single screen using a camera as feedback. The projector-camera system consists of two (or more) mobile projectors, a stationary camera, and a planar screen. The method assumes the camera is calibrated and positioned to capture the full planar screen. For an ideal system, the geometrically compensated images projected on the screen by both projectors must coincide, so that the displayed images remain aligned and centered on the screen with the correct aspect ratio. The system automatically performs on-line correction calculations on the captured camera frames and recovers all transformations, either to recommend adjustments for a movable/robot projector or to apply to each projector's frames. To realize these properties, we adopt a software approach while presuming that a hardware one is available on demand. In the proposed software, auto-correction is obtained by measuring the actual coordinates of the projectors and achieved either by finding the maximum intersection rectangle of both (or multiple) projections and auto-adjusting each video frame in the graphics card buffers, or by calculating the transformation matrix to mechanically correct the positions of the mobile/robot projectors. We adopt a fast method to find the maximum intersection rectangle and extend it to deal with temporal changes in the system. We show experimental results obtained with a simulated testing system where the overall execution time is less than the frame display period.
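The intersection-and-fit step described above can be sketched in Python. This is a minimal illustration assuming axis-aligned rectangular projection footprints (the paper's projections are general quadrilaterals after keystone correction, and the function names here are hypothetical):

```python
def intersection_rect(r1, r2):
    """Overlap of two axis-aligned rectangles given as (x0, y0, x1, y1),
    or None if they do not intersect."""
    x0, y0 = max(r1[0], r2[0]), max(r1[1], r2[1])
    x1, y1 = min(r1[2], r2[2]), min(r1[3], r2[3])
    if x0 >= x1 or y0 >= y1:
        return None
    return (x0, y0, x1, y1)

def letterbox(rect, aspect):
    """Shrink rect to the largest centered sub-rectangle with the given
    width/height aspect ratio, mimicking aspect-correct fitting."""
    x0, y0, x1, y1 = rect
    w, h = x1 - x0, y1 - y0
    if w / h > aspect:              # too wide: trim the sides
        new_w = h * aspect
        cx = (x0 + x1) / 2
        return (cx - new_w / 2, y0, cx + new_w / 2, y1)
    new_h = w / aspect              # too tall: trim top and bottom
    cy = (y0 + y1) / 2
    return (x0, cy - new_h / 2, x1, cy + new_h / 2)
```

For two projector footprints, the displayable region is `letterbox(intersection_rect(a, b), 4 / 3)`; each frame is then scaled into that region in the graphics buffer.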
Conference Paper
Full-text available
Teaching complex assembly and maintenance skills to human operators usually requires extensive reading and the help of tutors. In order to reduce the training period and avoid the need for human supervision, an immersive teaching system using spatial augmented reality was developed for guiding inexperienced operators. The system provides textual and video instructions for each task while also allowing the operator to navigate between the teaching steps and control the video playback through a bare-hands natural interaction interface that is projected into the workspace. Moreover, to help the operator during the final validation and inspection phase, the system projects the expected 3D outline of the final product. The proposed teaching system was tested with the assembly of a starter motor and proved to be more intuitive than reading traditional user manuals. This proof-of-concept use case served to validate the fundamental technologies and approaches proposed to achieve an intuitive and accurate augmented reality teaching application. Among the main challenges were the proper modeling and calibration of the sensing and projection hardware, along with the 6 DoF pose estimation of objects for achieving precise overlap between the 3D rendered content and the physical world. The conceptualization of the information flow, and how it can be conveyed on demand to the operator, was also of critical importance for ensuring a smooth and intuitive experience.
Conference Paper
Full-text available
A co-located projector-camera system, in which the projector and camera are placed at the same optical position by a plate beam splitter, enables various spatial augmented reality applications for dynamic three-dimensional scenes. Extremely precise alignment of the projection centers of the camera and projector is necessary for these applications. However, the conventional calibration procedure for a camera and projector cannot achieve high accuracy because it does not include an iterative verification process for the alignment. This paper proposes a novel interactive alignment approach that displays a capture of the projected grid pattern on the calibration screen. Additionally, a misalignment display technique that employs projector-camera feedback is proposed for fine adjustment.
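As a rough illustration of the feedback idea, the residual alignment error between the projected grid corners and their captured counterparts could be quantified as below. This metric and the function name are assumptions for illustration, not the paper's actual procedure:

```python
import math

def mean_misalignment(projected, captured):
    """Average Euclidean distance (in pixels) between corresponding grid
    corners; values near zero indicate good optical co-location."""
    assert len(projected) == len(captured)
    total = sum(math.dist(p, c) for p, c in zip(projected, captured))
    return total / len(projected)
```

An operator adjusting the beam splitter would watch this value shrink toward zero as the projection centers converge.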
Article
Full-text available
In projector-based applications, if the laser spot of a laser pointer can be tracked quickly and accurately, the laser pointer can act as an alternative to a computer mouse, letting a user interact with the computer more conveniently from a distance. Presented is an efficient calibration algorithm for laser pointer tracking that is immune to both the geometry of the projection surface and the distortion of the optical lens. Experiments verify the effectiveness of the proposed algorithm.
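The abstract does not give the algorithm itself, but a lookup-table calibration is one generic way to be insensitive to surface geometry and lens distortion: record camera/screen coordinate pairs at calibration time, then map each detected spot to the nearest recorded sample. A minimal sketch (the names and the nearest-neighbour choice are assumptions):

```python
def calibrate_lookup(samples):
    """Store (camera_xy, screen_xy) calibration pairs. A dense table
    sidesteps explicit models of surface shape and lens distortion."""
    return list(samples)

def camera_to_screen(table, cam_xy):
    """Map a laser-spot camera coordinate to screen coordinates using
    the nearest calibration sample (nearest-neighbour interpolation)."""
    cx, cy = cam_xy
    nearest = min(table, key=lambda s: (s[0][0] - cx) ** 2 + (s[0][1] - cy) ** 2)
    return nearest[1]
```

A denser calibration grid, or bilinear interpolation between the nearest samples, trades calibration effort for tracking accuracy.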
Article
Facial landmark detection is an important but challenging task for real-world computer vision applications. This paper proposes an accurate and robust approach for facial landmark detection by combining data-driven and model-driven methods. Firstly, a fully convolutional network (FCN) is trained to generate response maps of all facial landmark points. Such a data-driven method can make full use of holistic information in a facial image for global estimation of facial landmarks. Secondly, the maximum points in the response maps are fitted with a pre-trained point distribution model (PDM) to generate an initial facial landmark shape. Such a model-driven method can correct the location errors of outliers by considering shape prior information. Thirdly, a weighted version of Regularized Landmark Mean-Shift (RLMS) is proposed to fine-tune facial landmark shapes iteratively. The weighting strategy is based on the confidence of the convolutional response maps, so that the FCN is integrated into the framework of a Constrained Local Model (CLM). Such an Estimation-Correction-Tuning process combines the global robustness advantage of the data-driven method (FCN), the outlier-correction advantage of the model-driven method (PDM), and the non-parametric optimization advantage of RLMS. The experimental results demonstrate that the proposed approach outperforms state-of-the-art solutions on the 300-W dataset. Our approach is well suited for face images with large poses, exaggerated expressions, and occlusions.
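The first (data-driven) stage reduces to taking the peak of each landmark's convolutional response map. A minimal sketch of that step alone, with plain Python lists standing in for the network's output tensors:

```python
def landmarks_from_response_maps(maps):
    """Global estimation step: return the (x, y) peak of each landmark's
    response map, given as a list of 2D lists of activation values."""
    points = []
    for m in maps:
        _, x, y = max((v, x, y)
                      for y, row in enumerate(m)
                      for x, v in enumerate(row))
        points.append((x, y))
    return points
```

In the full pipeline these peaks would then be projected onto the PDM subspace and refined by the weighted RLMS iterations.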
Conference Paper
Most would agree: human vision is the most important of the five senses. Tragically, many elderly people lose their vision to incurable diseases that could have been avoided if diagnosed early enough. Fortunately, some of these diseases can be diagnosed, or at least have their symptoms detected, with simple tests. The use of smartphone or tablet applications has become common for these tests, so eye diseases can be detected early, even at home. However, none of these applications considers the screen-to-face distance of a person doing an eye test to be an important parameter for such tests. In this paper we present an algorithm to derive a new context: the smartphone user's screen-to-face distance. Our algorithm utilizes the smartphone front camera and an eye detection algorithm. After initialization with person-specific values, the algorithm continuously measures the eye-to-eye distance to derive the user's actual screen-to-face distance. We also present an investigation of the algorithm's accuracy and speed, which shows that smartphone-based screen-to-face distance measurement is possible in the range from 19 cm to 94 cm, with a maximum deviation of 2.1 cm, at a rate of three distance measurements per second.
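The distance estimate follows from the pinhole camera model: the eye-to-eye distance in the image shrinks in inverse proportion to the face's distance. A minimal sketch of the two steps, person-specific calibration and continuous measurement (the 6.3 cm interpupillary distance used below is an assumed typical value, not from the paper):

```python
def calibrate_focal(eye_px_at_known, known_distance_cm, ipd_cm):
    """Person-specific initialization: recover the camera's focal length
    in pixels from one measurement taken at a known distance."""
    return eye_px_at_known * known_distance_cm / ipd_cm

def screen_to_face_distance(eye_px, ipd_cm, focal_px):
    """Pinhole-model distance estimate: the farther the face, the smaller
    the measured eye-to-eye distance in pixels."""
    return focal_px * ipd_cm / eye_px
```

For example, if the eyes appear 200 px apart at a known 30 cm, the same model predicts they appear 100 px apart at 60 cm.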
Conference Paper
This paper proposes a laser spot extraction system based on adjusting camera parameters. The system builds on a presentation tool with a laser pointer, and it detects the laser spot overlapping the projected image using a CCD/CMOS camera. In this research, we confirmed a method for extracting the laser spot from the projected image by adjusting camera parameters, and we determined the essential parameters and the optimal threshold for extraction. We constructed a prototype system and demonstrated its effectiveness through laser spot extraction experiments. As a result, our proposal makes it possible to extract the laser spot effectively regardless of the background image.
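The thresholding stage can be illustrated as follows; the threshold values and the red-dominance test below are assumptions for a red laser pointer, not the parameters determined in the paper:

```python
def extract_laser_spot(frame, threshold=240):
    """Centroid of pixels whose red channel exceeds the threshold and
    clearly dominates green/blue; `frame` is a list of rows of (r, g, b)
    tuples. Returns None when no pixel qualifies."""
    hits = [(x, y)
            for y, row in enumerate(frame)
            for x, (r, g, b) in enumerate(row)
            if r >= threshold and r - max(g, b) > 50]
    if not hits:
        return None
    n = len(hits)
    return (sum(x for x, _ in hits) / n, sum(y for _, y in hits) / n)
```

Lowering the camera's exposure, as the paper's parameter adjustment does, darkens the projected background so that only the saturated laser spot survives such a threshold.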
Article
A diffractive optical element (DOE) is applied to effectively locate a laser pointer spot on a projection screen for laser pointer interaction applications. The DOE is placed in front of a digital web camera to blur the background image while transforming the laser spot into a large diffractive pattern, such as a circle. To calculate the diffractive pattern position on the screen, only a simple subtraction method using two successive digital images with the laser ON and OFF, respectively, is needed. This approach also improves the compressed digital image transmission latency.
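The subtraction method is simple enough to sketch directly: with grayscale frames as lists of pixel rows, the diffractive pattern's position is where the brightness gain between the laser-ON and laser-OFF captures is largest (a minimal sketch; real frames would come from the web camera):

```python
def locate_by_subtraction(frame_on, frame_off):
    """Return the (x, y) pixel with the largest brightness increase
    between the laser-ON and laser-OFF grayscale frames, or None if
    no pixel got brighter."""
    best, best_xy = 0, None
    for y, (row_on, row_off) in enumerate(zip(frame_on, frame_off)):
        for x, (a, b) in enumerate(zip(row_on, row_off)):
            diff = a - b
            if diff > best:
                best, best_xy = diff, (x, y)
    return best_xy
```

Because the DOE blurs everything except the enlarged diffractive pattern, the ON/OFF difference is near zero everywhere else, making this single pass sufficient.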
Article
In this paper, we present a system that uses a laser pen for human-computer interaction in a projection display system. The system adds two hardware devices to the original projection display system: a camera and a laser pen with mouse-button functions. The laser point on the projection screen is captured by the camera, and its location is computed by coordinate conversion. The cursor's movement is controlled by the projected laser point, and the mouse buttons are simulated by the function keys of the laser pen. Users can control objects on the projection screen with the laser pen just like a normal mouse.
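The coordinate conversion can be sketched under the simplifying assumption that the camera sees the screen as an axis-aligned rectangle; a real deployment would use a homography to handle perspective, and the names below are illustrative:

```python
def camera_to_cursor(cam_xy, screen_rect_in_cam, resolution):
    """Convert a laser-spot camera coordinate to a cursor position.
    `screen_rect_in_cam` is the screen's bounding box (x0, y0, x1, y1)
    in camera pixels; `resolution` is the desktop size (width, height)."""
    x0, y0, x1, y1 = screen_rect_in_cam
    w, h = resolution
    u = (cam_xy[0] - x0) / (x1 - x0)   # normalized horizontal position
    v = (cam_xy[1] - y0) / (y1 - y0)   # normalized vertical position
    return (round(u * (w - 1)), round(v * (h - 1)))
```

The resulting coordinates would be fed to the operating system's cursor API, while the pen's function keys generate the click events.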
Conference Paper
Due to the development of technology, electronics have become intertwined with our daily lives. Because of our reliance on such products, they need to be user-friendly; thus, improving current technology products is critical. One example of an electronic product with an opportunity for improvement is the projector. In many places, projectors have become an indispensable instrument for presentations and teaching. In this paper, we improve on the traditional projector, whose harsh light may hurt or cause discomfort to human eyes. Our system has been successfully applied to projected PowerPoint presentations, and the experimental results speak to its performance.
Article
Skin color provides a useful cue for detecting faces and reproducing preferred colors. However, skin color detection based on just a static model often decreases the detection rate, as skin color in an image captured by a camera undergoes variations as the illumination changes. Thus, to enhance skin color detection using a static model, the color of an estimated illuminant from images captured under various illumination conditions is converted to that of a canonical illuminant. First, the illuminant color is estimated from the pixels in the sclera region of the eyes, then converted to a canonical color for robust skin color detection. Experimental results show that the proposed skin color detection method increases the detection rate, especially for images taken with a low or high correlated color temperature of illumination.
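The conversion to a canonical illuminant is typically a diagonal (von Kries-style) channel scaling; a minimal sketch, assuming 8-bit RGB and a white canonical illuminant (the paper's estimation of the illuminant from the sclera region is not reproduced here):

```python
def von_kries_correct(pixel, illum_rgb, canonical_rgb=(255, 255, 255)):
    """Scale each channel by the ratio of the canonical to the estimated
    illuminant intensity, clamping to the 8-bit range."""
    return tuple(min(255, round(p * c / i))
                 for p, i, c in zip(pixel, illum_rgb, canonical_rgb))
```

Applying this correction before the static skin-color model makes the model's fixed color boundaries valid across different illumination color temperatures.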