Helio Perroni Filho
Universidade Federal do Espírito Santo | UFES · Laboratório de Computação de Alto Desempenho (LCAD)

Doctor of Philosophy

About

11 Publications · 3,422 Reads · 5 Citations
Additional affiliations
October 2013 - September 2016
University of Tsukuba
Position
  • PhD Student
January 2007 - February 2010
Universidade Federal do Espírito Santo
Position
  • Master's Student
Education
October 2013 - October 2016
University of Tsukuba
Field of study
  • Mobile robotics, computer vision, cognitive architectures
April 2007 - February 2010
Universidade Federal do Espírito Santo
Field of study
  • Computer Vision
April 1999 - November 2004
Universidade Federal do Espírito Santo
Field of study
  • Computer Science

Publications (11)
Data
Part 01/03 of training data for the behavioral learning test project from Udacity.
Data
Part 02/03 of training data for the behavioral learning test project from Udacity.
Data
Part 03/03 of training data for the behavioral learning test project from Udacity.
Data
Validation data for the behavioral learning test project from Udacity.
Article
Visual recognition of previously visited places is a basic cognitive skill for a wide variety of living beings, including humans. This requires a method to extract relevant cues from visual input and successfully match them to memories of known locations, disregarding environmental variations such as lighting changes, viewer pose differences, movin...
Chapter
Self-location—recognizing one’s surroundings and reliably keeping track of current position relative to a known environment—is a fundamental cognitive skill for entities biological and artificial alike. At a minimum, it requires the ability to match current sensory (mainly visual) inputs to memories of previously visited places, and to correlate pe...
Article
Differential Visual Streams (DiVS) is a method to recognize common locations and quantify viewpoint differences between two image sequences, enabling robot self-localization along known paths (for which reference images exist) from images recorded by a single uncalibrated camera. It combines concepts from memory networks and convolutional networks,...
Article
Differential Visual Streams (DiVS) is an image processing method to quantify changes in picture sequences. It works on monocular images captured by a single uncalibrated camera. Experiments show DiVS provides a sound basis for an appearance-based navigation system effective under a variety of lighting conditions (both controlled and natural), landm...
Conference Paper
The multichannel model is a complete reassessment of how neurons work at the biochemical level. Its results can be extended into an overarching theory of how vision, memory and cognition come to be in the living brain. This article documents a first attempt at testing the model's validity, by applying its principles to the construction of an image...
Conference Paper
Full-text available
This article presents Skeye, a platform for the study of neural computation models (particularly in relation to machine vision), and its application to the problem of image template matching in the context of a desktop automation framework. Cases where the template-matching algorithm may fail are investigated, their causes identified and correction...
Conference Paper
Full-text available
We have examined Virtual Generalizing Random Access Memory Weightless Neural Networks (VG-RAM WNN) as platform for depth map inference from static monocular images. For that, we have designed, implemented and compared the performance of VG-RAM WNN systems against that of depth estimation systems based on Markov Random Field (MRF) models. While not...
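The retrieval step of a VG-RAM WNN, in which the network stores input-output pairs during training and answers a query by returning the output associated with the stored input nearest to the query in Hamming distance, can be sketched roughly as follows. The bit patterns and depth labels below are invented toy data for illustration, not taken from the paper:

```python
def vgram_query(memory, x):
    """VG-RAM WNN retrieval: return the output paired with the stored
    input nearest to x in Hamming distance (ties broken by order)."""
    best_in, best_out = min(memory, key=lambda m: bin(m[0] ^ x).count("1"))
    return best_out

# Hypothetical memory mapping 8-bit patterns to depth labels.
memory = [(0b11110000, "near"), (0b00001111, "far")]
print(vgram_query(memory, 0b11100000))  # prints "near"
```

The query 0b11100000 differs from 0b11110000 by one bit but from 0b00001111 by seven, so the "near" label is returned; real systems apply this lookup per-neuron over binary features sampled from the input image.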

Questions (2)
Question
One of the foremost contributions of Turing to computer science is the concept of a "Universal Machine" which can emulate other machines of arbitrary complexity, performance considerations notwithstanding. Indeed, much of the success of modern computers can be attributed to this trait, which allows us to build applications (abstract "machines") of ever increasing complexity on top of relatively simple hardware architectures.
Thinking about this one day, I was struck by a curious similarity between the concept of a Universal Machine and our own cognitive abilities:
* A Universal Machine can simulate other machines of arbitrary complexity without need of any significant change to its own structure;
* Human brains can learn behaviors of seemingly unending variety, yet the rate of morphological change over life doesn't seem to account for this ability – and in fact, a great deal of what change we go through (mostly during childhood) seems more linked to developing the *ability to learn* than to the learning of any specific behavior.
If our brains assimilate new behaviors all the time, yet don't physically change to accommodate these novel abilities, could it be that they work in a way similar to a Universal Machine – that is, by simulating other constructs not unlike themselves? Is the brain a Universal Neural Network, one that learns new behaviors by simulating networks tailored to each task's requirements?
Making a "real" device able to simulate any number of "imaginary" variations of its basic design looks, at first sight, like a simple and elegant way to add extensibility to a system that cannot undergo much morphological change. And it seems a natural step, evolution-wise, for biological neural networks to duplicate their own mechanics on a higher level as they grow in sophistication. So surely someone must have thought of this before? Yet I couldn't find any reference to this idea.
What do you make of this idea? Are there clear arguments against it – things in the "any fool knows" category – from either neuroscience or artificial neural network theory that would outright disprove the conjecture or otherwise make it seem a waste of time? Has anybody thought of this before?
Question
Let ''I'' be a set of bit arrays of length ''n''. Given a bit array ''t'', we want to find the array ''i'' in ''I'' that is closest to ''t'' in terms of their Hamming distance [1] ''h'' – which may be 0 if ''t'' is in ''I'', though most likely ''h > 0''.
Clearly this is an instance of the multidimensional nearest-neighbor problem, but if ''n'' is too large (say, ''n = 256''), traditional nearest-neighbor algorithms such as k-d trees become impractical. However, it also seems certain that the domain allows for some optimizations – after all, only two values (0 and 1) are possible for each dimension along the data points.
Does anyone know of a nearest-neighbor algorithm that is optimized for this case?
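For concreteness, the baseline any such algorithm would be compared against is an exhaustive scan: pack each array into an ''n''-bit integer, then use XOR plus a population count per candidate. This is exact and, for moderate |''I''|, often fast enough in practice, though it is only the naive approach rather than an answer to the question (techniques such as multi-index hashing specialize to exactly this binary setting). A minimal sketch:

```python
import random

def hamming(a: int, b: int) -> int:
    """Hamming distance between two equal-length bit arrays packed as ints."""
    return bin(a ^ b).count("1")

def nearest(t: int, index: list) -> tuple:
    """Exhaustive scan: return the (element, distance) pair minimizing
    the Hamming distance to t over the whole index."""
    return min(((i, hamming(t, i)) for i in index), key=lambda pair: pair[1])

# Toy instance with n = 256: each bit array becomes one 256-bit integer.
random.seed(0)
I = [random.getrandbits(256) for _ in range(10_000)]
t = random.getrandbits(256)
best, h = nearest(t, I)
```

On CPython 3.10+ the popcount can be written as `(a ^ b).bit_count()`, which avoids building the intermediate string and is noticeably faster on large indexes.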