Michael Wilcox

University of Texas at Austin, Austin, Texas, United States

Publications (14) · 2.12 Total Impact

  • ABSTRACT: Traditional machine vision systems have an inherent data bottleneck that arises because data collected in parallel must be serialized for transfer from the sensor to the processor. Furthermore, much of this data is not useful for information extraction. This project takes inspiration from the visual system of the house fly, Musca domestica, to reduce this bottleneck by employing early (up front) analog preprocessing to limit the data transfer. This is a first step toward an all-analog, parallel vision system. While the current implementation has serial stages, nothing would prevent it from being fully parallel. A one-dimensional photo sensor array with analog pre-processing is used as the sole sensory input to a mobile robot. The robot's task is to chase a target car while avoiding obstacles in a constrained environment. Key advantages of this approach include passivity and the potential for very high effective "frame rates."
    Biomedical sciences instrumentation 02/2008; 44:373-9.
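The early-preprocessing idea above can be sketched in software. The block below is a hypothetical illustration, not the project's analog circuit: it emulates a one-dimensional photosensor array whose neighbor-difference (edge) signal is thresholded up front, so only a handful of edge locations, rather than every raw sample, would need to cross the sensor-to-processor link.

```python
import numpy as np

def preprocess_1d(photodetectors, threshold=0.1):
    """Emulate early (up-front) preprocessing on a 1-D photosensor array:
    compute neighbor differences (an edge signal) and report only the
    locations whose response exceeds a threshold, so far less data than
    the raw array needs to be transferred."""
    edges = np.abs(np.diff(photodetectors))
    hits = np.flatnonzero(edges > threshold)
    return hits, edges[hits]

# A bright target against a dark background: only the two edge
# locations survive preprocessing, not all ten raw samples.
scene = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])
locations, strengths = preprocess_1d(scene)
```

The data reduction is the point: the downstream controller sees two (location, strength) pairs instead of the full array.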
  • Journal of Electronic Imaging 01/2006; 15:043005. · 1.06 Impact Factor
  • ABSTRACT: Musca domestica, the common house fly, has a simple yet powerful and accessible vision system. Cajal noted as early as 1885 that the fly's vision system parallels the human retina. The house fly has some intriguing vision system features such as fast, analog, parallel operation. Furthermore, it can detect movement and objects at far better resolution than predicted by photoreceptor spacing, a capability termed hyperacuity. We are investigating the mechanisms behind these features and incorporating them into next-generation vision systems. We have developed a prototype sensor that employs a fly-inspired arrangement of photodetectors sharing a common lens. The Gaussian-shaped acceptance profile of each sensor, coupled with overlapping sensor fields of view, provides the configuration necessary for obtaining hyperacuity data. The sensor detects object movement with far greater resolution than photoreceptor spacing predicts. We have exhaustively tested and characterized the sensor to determine its practical resolution limit. Our tests, coupled with theory from Bucklew and Saleh (1985), indicate that the limit of the hyperacuity response may be related only to target contrast. We have also implemented an array of these prototype sensors which allows for two-dimensional position location. These high-resolution, low-contrast-capable sensors are being developed as a vision system for an autonomous robot and the next generation of smart wheelchairs. However, they are easily adapted for biological endoscopy, downhole monitoring in oil wells, and other applications.
    Proc SPIE 05/2005;
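The overlapping-Gaussian mechanism described above can be illustrated numerically. This is a sketch under assumed parameters (the sensor spacing and profile width below are invented for the illustration, not taken from the paper): when two Gaussian acceptance profiles overlap, the log-ratio of the two responses is linear in target position, so a point target can be localized far more finely than the sensor spacing.

```python
import numpy as np

SIGMA = 0.6   # width of each Gaussian acceptance profile (assumed)
D = 0.5       # the two sensors sit at -D and +D, so their profiles overlap

def responses(x):
    """Responses of two photodetectors with overlapping Gaussian
    acceptance profiles to a point target at position x."""
    r1 = np.exp(-(x + D) ** 2 / (2 * SIGMA ** 2))
    r2 = np.exp(-(x - D) ** 2 / (2 * SIGMA ** 2))
    return r1, r2

def localize(r1, r2):
    """Invert the response ratio to recover target position with
    sub-photoreceptor (hyperacuity) precision:
    ln(r1/r2) = -2*x*D/SIGMA**2, so x = -SIGMA**2 * ln(r1/r2) / (2*D)."""
    return -SIGMA ** 2 * np.log(r1 / r2) / (2 * D)

# A target well inside the spacing between the two sensors.
x_true = 0.123
x_est = localize(*responses(x_true))
```

In this noiseless sketch the inversion is exact; in practice, as the abstract notes, target contrast (signal-to-noise) would bound the achievable precision.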
  • ABSTRACT: Real-time digital imaging for machine vision has proven prohibitive in control systems that employ low-power single processors, unless the scope of vision or the resolution of captured images is compromised. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor has been developed, representing a single facet of the fly's eye. This sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, so that minimal data processing is needed to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision problems.
    Biomedical sciences instrumentation 02/2005; 41:175-80.
  • ABSTRACT: Those studying biological systems are often interested in the morphology of the various microscopic organelles. Three-dimensional reconstruction and visualization of objects provide a powerful tool for understanding the nature of each object and its relationship to other objects. Segmentation is the key to 3D analysis and study of objects that have been recorded as a series of sectioned images, such as from a confocal laser scanning microscope (CLSM). Segmentation is the process of completely separating or isolating the individual objects in an image. A seed-based, semi-automatic segmentation tool has been developed to aid in the 3D visualization of objects recorded with serial sectioned images, including a boundary-creation method that maintains the separate identity of contacting objects. This segmentation tool also allows the user to retain background information as a separate object, providing important reference and landmark information for the object of interest. This paper summarizes the main parts of the segmentation algorithm and presents 3D reconstructions of visual neurons of the housefly, Musca domestica. These reconstructions are compared to typical 3D images produced by other widely used software packages, including standard CLSM imaging software and the popular ImageJ supported by the National Institutes of Health (NIH). Efforts are underway to develop a user-friendly graphical user interface (GUI) for the segmentation algorithm to encourage broader use in research settings.
    Biomedical sciences instrumentation 02/2005; 41:235-40.
  • Robert Madsen, Steven Barrett, Michael Wilcox
    ABSTRACT: The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed that is ultimately intended as a tool to guide the design of an analog real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and contain seven photoreceptors, each with a Gaussian profile. The spacing between photoreceptors is variable, providing for more or less detail as needed. The cartridges provide information on what type of features they see, and neighboring cartridges share information to construct a feature map.
    Biomedical sciences instrumentation 02/2005; 41:335-9.
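As a rough illustration of the cartridge model described above, each photoreceptor can be emulated as a Gaussian-weighted average of the image around its own center. The window size, profile width, and seven-point layout below are assumptions made for the sketch, not the model's actual parameters.

```python
import numpy as np

def gaussian_profile(size, sigma):
    """Gaussian acceptance profile for one photoreceptor, normalized to
    sum to 1 over a size x size window."""
    ax = np.arange(size) - (size - 1) / 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def cartridge_responses(image, centers, size=5, sigma=1.0):
    """Responses of one cartridge: seven photoreceptors, each taking a
    Gaussian-weighted average of the image around its own center."""
    profile = gaussian_profile(size, sigma)
    half = size // 2
    out = []
    for r, c in centers:
        patch = image[r - half:r + half + 1, c - half:c + half + 1]
        out.append(float((patch * profile).sum()))
    return out

# One cartridge (a center photoreceptor plus a ring of six) laid over a
# uniform test image: every response equals the uniform intensity.
img = np.full((20, 20), 0.5)
centers = [(10, 10), (8, 9), (8, 11), (10, 8), (10, 12), (12, 9), (12, 11)]
r = cartridge_responses(img, centers)
```

Varying the spacing in `centers` corresponds to the model's variable photoreceptor spacing for more or less detail.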
  • ABSTRACT: Our understanding of the world around us is based primarily on three-dimensional information because of the environment in which we live and interact. Medical or biological image information is often collected in the form of two-dimensional, serial-section images. As such, it is difficult for the observer to mentally reconstruct the three-dimensional features of each object. Although many image rendering software packages allow 3D views of the serial sections, they lack the ability to segment, or isolate, different objects in the data set. Segmentation is the key to creating 3D renderings of distinct objects from serial slice images, like separate pieces of a puzzle. This paper describes a segmentation method for objects recorded with serial-section images. The user defines threshold levels and object labels on a single image of the data set, which are subsequently used to automatically segment each object in the remaining images of the same data set while maintaining boundaries between contacting objects. The performance of the algorithm is verified using mathematically defined shapes. It is then applied to the visual neurons of the housefly, Musca domestica. Knowledge of the fly's visual system may lead to improved machine vision systems, and this has provided the impetus to develop the segmentation algorithm. The described segmentation method can be applied to any high-contrast serial-slice data set that is well aligned and registered. The medical field alone has many applications for rapid generation of 3D segmented models from MRI and other medical imaging modalities.
    Proc SPIE 01/2005;
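A minimal sketch of this style of propagation follows. The flood-fill and centroid-reseeding details are my assumptions for illustration, not the published algorithm: a threshold and seed defined on one slice are carried through the rest of the stack, with each slice reseeded from the previous slice's result so the object is followed from section to section.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, threshold):
    """Segment one slice: flood-fill from the seed, keeping 4-connected
    pixels whose intensity exceeds the threshold. Returns a boolean mask."""
    mask = np.zeros(image.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if (0 <= r < image.shape[0] and 0 <= c < image.shape[1]
                and not mask[r, c] and image[r, c] > threshold):
            mask[r, c] = True
            queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

def segment_stack(stack, seed, threshold):
    """Propagate a single user-defined seed and threshold through a
    serial-section stack: each slice is seeded at the centroid of the
    previous slice's mask, keeping contacting objects separate because
    growth never crosses below-threshold boundaries."""
    masks = []
    for image in stack:
        mask = region_grow(image, seed, threshold)
        masks.append(mask)
        if mask.any():                  # reseed at the object's centroid
            rows, cols = np.nonzero(mask)
            seed = (int(rows.mean()), int(cols.mean()))
    return np.stack(masks)

# A small synthetic stack: a bright 3x3 object drifting one pixel per slice.
stack = np.zeros((3, 8, 8))
for i in range(3):
    stack[i, 1 + i:4 + i, 1 + i:4 + i] = 1.0
masks = segment_stack(stack, seed=(2, 2), threshold=0.5)
```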
  • ABSTRACT: An algorithm to track a rat swimming in a Morris water maze has been developed. The system automatically configures itself to any pool under suitable lighting conditions. It tracks the rat's position and head pose 10 times per second. The output data are displayed in a bitmap and also written to a text file. The system was tested with an X-Y plotter carrying a simulated rat through the maze: known signals were provided to the model rat and compared to the position and pose information reported by the tracking algorithm. The algorithm was able to track rat velocities up to 2.32 m/s, localize the rat's position to 4 mm within the maze, and provide head-pose information. Early prototypes of the algorithm were also used to track actual rats in a water maze.
    Annals of Biomedical Engineering 09/2004; 32(8):1141-52.
  • ABSTRACT: Two challenges to an effective, real-world computer vision system are speed and reliable object recognition. Traditional computer vision sensors such as CCD arrays take considerable time to transfer all the pixel values of each image frame to a processing unit. One way to bypass this bottleneck is to design a sensor front end with a biologically inspired analog, parallel design, offering preprocessing and adaptive circuitry that can produce edge maps in real time. This biomimetic sensor is based on the eye of the common house fly (Musca domestica). Additionally, the sensor has demonstrated an impressive ability to detect objects at subpixel resolution. However, the image information provided by such a sensor is not a traditional bitmap transfer and therefore requires novel computational manipulations to make best use of the sensor output. The real-world object recognition challenge is being addressed with a subspace method that uses eigenspace object models created from multiple reference object appearances. In past work, the authors have successfully demonstrated image object recognition techniques for surveillance images of various military targets using such eigenspace appearance representations. This work, later extended to partially occluded objects, can be generalized to a wide variety of object recognition applications. The technique is based upon a large body of eigenspace research described elsewhere. Briefly, the technique creates target models by collecting a set of target images and finding a set of eigenvectors that span the target image space. Once the eigenvectors are found, an eigenspace model (also called a subspace model) of the target is generated by projecting target images onto the eigenspace. New images to be recognized are then projected onto the eigenspace for object recognition. For occluded objects, we project the image onto reduced-dimensional subspaces of the original eigenspace (i.e., a "subspace of a subspace" or a "sub-eigenspace") and measure how close a match we can achieve when the occluded target image is projected onto a given sub-eigenspace. We have found that this technique can result in significantly improved recognition of occluded objects. To manage the combinatorial "explosion" associated with selecting the number of subspaces required and then projecting images onto those sub-eigenspaces for measurement, we use a variation on the A* ("A-star") search method. The challenge of tying these two subsystems (the biomimetic sensor and the subspace object recognition module) together into a coherent and robust system is formidable. It requires specialized computational image and signal processing techniques that are described in this paper, along with preliminary results. The authors believe that this approach will result in a fast, robust computer vision system suitable for the non-ideal real-world environment.
    Proc SPIE 01/2004;
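The eigenspace modeling step described above can be sketched with a few lines of linear algebra. This is a generic PCA/subspace illustration, not the authors' code; scoring membership by reconstruction residual is one common way to use such a model.

```python
import numpy as np

def build_eigenspace(train, k):
    """Find k eigenvectors spanning the target image space (images are
    flattened rows of `train`), via SVD of the mean-centered data."""
    mean = train.mean(axis=0)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]                 # k-dimensional subspace model

def residual(image, mean, basis):
    """Project a new image onto the eigenspace and measure how far it
    lies from the subspace; a small residual suggests a target."""
    centered = image - mean
    coeffs = basis @ centered           # projection onto the eigenspace
    return float(np.linalg.norm(centered - basis.T @ coeffs))

# Hypothetical training set: "images" that are combinations of two patterns.
rng = np.random.default_rng(0)
p1, p2 = rng.normal(size=16), rng.normal(size=16)
train = np.array([a * p1 + b * p2 for a, b in
                  [(1, 0), (0, 1), (2, 1), (1, 2), (3, -1), (-1, 3)]])
mean, basis = build_eigenspace(train, k=2)
in_plane = 3.0 * p1 - 1.0 * p2          # lies in the modeled subspace
outlier = rng.normal(size=16)           # almost surely does not
```

A recognizer would compare residuals against models for several target classes; the occlusion extension would repeat the projection over sub-eigenspaces.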
  • ABSTRACT: Our understanding of the world around us and the many objects we encounter is based primarily on three-dimensional information; it is simply part of the environment in which we live and of how we intuitively interpret our surroundings. In biomedical imaging, image information is most often collected in the form of two-dimensional images. Even where serial slice information is obtained, as with MRI images, it is still difficult for the observer to mentally build and understand the three-dimensional structure of the object. Although most image rendering software packages allow 3D views of the serial sections, they lack the ability to segment, or isolate, different objects in the data set. Typically the task of segmentation is performed by knowledgeable persons who tediously outline or label the object of interest in each image slice containing the object [1,2]. It remains a difficult challenge to train a computer to understand an image and aid in this process of segmentation. This article reports on ongoing work in developing a semi-automated segmentation technique. The approach uses a Leica confocal laser scanning microscope (CLSM) to collect serial slice images; image rendering and manipulation software called IMOD (Boulder, Colorado); and Matlab (The MathWorks Inc.) image processing tools for development of the object segmentation routines. The initial objects are simple fluorescent microspheres (Molecular Probes), which are easily imaged and segmented. The second objects are rat enteric neurons, which provide medium complexity in shape and size. Finally, the work will be applied to the biological cells of the housefly, Musca domestica, to further understand how its vision system operates.
    Biomedical sciences instrumentation 02/2003; 39:117-22.
  • ABSTRACT: Machine vision for navigational purposes is a rapidly growing field. Many abilities, such as object recognition and target tracking, rely on vision. Autonomous vehicles must be able to navigate in dynamic environments and simultaneously locate a target position. Traditional machine vision often fails to react in real time because of large computational requirements, whereas the fly achieves complex orientation and navigation with a relatively small and simple brain. Understanding how the fly extracts visual information and how its neurons encode and process information could lead to a new approach for machine vision applications. Photoreceptors in the Musca domestica eye that share the same spatial information converge into a structure called the cartridge. The cartridge consists of the photoreceptor axon terminals and monopolar cells L1, L2, and L4. It is thought that L1 and L2 cells encode edge-related information relative to a single cartridge. These cells are thought to be equivalent to vertebrate bipolar cells, producing contrast enhancement and reduction of the information sent to L4. Monopolar cell L4 is thought to perform image segmentation on the information input from L1 and L2 and also to enhance edge detection. A mesh of interconnected L4 cells would correlate the output from the L1 and L2 cells of adjacent cartridges and provide a parallel network for segmenting an object's edges. The focus of this research is to excite photoreceptors of the common housefly, Musca domestica, with different visual patterns. The electrical responses of monopolar cells L1, L2, and L4 will be recorded using intracellular recording techniques. Signal analysis will determine the neurocircuitry used to detect and segment images.
    Biomedical sciences instrumentation 02/2002; 38:123-8.
  • ABSTRACT: The ability to visualize and understand three-dimensional objects from two-dimensional cross-section or slice images is difficult, even if the observer has a general concept of the object of interest. The focus of this research is to apply image-processing methods to two-dimensional cross-section electron transmission micrographs of the biological cells of the housefly Musca domestica's visual system, in an effort to better understand the cells responsible for processing visual information. The application of knowledge gained from biological systems is known as biomimetics. The first task will be to construct a useful three-dimensional data set from two-dimensional micrographs provided by the U.S. Air Force Academy in Colorado Springs, Colorado. The data set will be constructed by aligning these images in an edge-to-edge fashion to form a layer. Once each layer is reconstructed, the layers will be stacked and registered to form the third dimension of the data set. This task is complicated by the fact that translation, rotation, and scaling mismatches exist between the images. The second task will be to segment and label the biological cells of interest. Computerized segmentation has not yet proved successful, so manual or "brain-powered" approaches are used at many institutions. By using and modifying current computer image-processing techniques, advances leading to a semi-automated segmentation process may result. Finally, the segmented data must be formatted for use with existing software to render and view the cell(s) of interest. A "marching cubes" surface-rendering algorithm is often implemented in current visualization software, along with routines to view, rotate, and scale the resulting surfaces in real time. The result of viewing and manipulating the biological data set will be an increased understanding of the processes of the fly's visual system. Other researchers will use the knowledge gained from the three-dimensional renderings of the cells to further develop an analog vision system based on the fly's compound eye. Much of this research is funded by the Navy Air Warfare Center in an effort to design an analog visual system with real-time target identification and tracking capabilities.
    Biomedical sciences instrumentation 02/2002; 38:363-8.
  • ABSTRACT: A tracking algorithm has been developed to efficiently track a moving object in 2D image space. The algorithm employs a limited exhaustive template-matching scheme that combines the accuracy of an exhaustive search with the computational efficiency of a coarse-fine template-matching scheme. The overall result is an accurate, time-efficient tracking algorithm. After a theoretical discussion of the algorithm, three separate biomedical applications are described: (1) stabilizing an irradiating laser on the retinal surface for photocoagulation treatment, (2) measuring target-fixation eye movement to construct pattern densities at the retina, and (3) tracking a rat swimming in a Morris water maze for psychophysiological studies. Results for each application are provided. The paper concludes with a discussion of the relative merits of the tracking algorithm and recommendations for improving its performance.
    Journal of Electronic Imaging 01/2001; 10:785-793. · 1.06 Impact Factor
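One plausible reading of "limited exhaustive" matching (my interpretation for illustration, not necessarily the paper's exact scheme) is an exhaustive sum-of-squared-differences search restricted to a small window around the previous match, retaining exhaustive-search accuracy at near coarse-fine cost:

```python
import numpy as np

def ssd(patch, template):
    """Sum of squared differences between a candidate patch and the template."""
    return float(((patch - template) ** 2).sum())

def track(frame, template, prev, radius=3):
    """Limited exhaustive matching: score every offset inside a small
    window centered on the previous position `prev`, and return the
    (row, col) of the best-matching patch."""
    th, tw = template.shape
    best, best_pos = np.inf, prev
    for r in range(max(prev[0] - radius, 0),
                   min(prev[0] + radius + 1, frame.shape[0] - th + 1)):
        for c in range(max(prev[1] - radius, 0),
                       min(prev[1] + radius + 1, frame.shape[1] - tw + 1)):
            score = ssd(frame[r:r + th, c:c + tw], template)
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy frame: a 3x3 bright target has moved to (5, 6); the tracker last
# saw it at (4, 5) and searches only a 7x7 neighborhood of offsets.
frame = np.zeros((12, 12))
template = np.ones((3, 3))
frame[5:8, 6:9] = 1.0
pos = track(frame, template, prev=(4, 5))
```

The window radius bounds the per-frame cost regardless of image size, which is what makes a scheme like this viable for real-time use.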

Publication Stats

16 Citations
2.12 Total Impact Points

Institutions

  • 2008
    • University of Texas at Austin
      Austin, Texas, United States
  • 2005
    • University of Wyoming
      • Department of Electrical and Computer Engineering
      Laramie, WY, United States
  • 2004–2005
    • United States Air Force Academy
      Colorado Springs, Colorado, United States