Taro Maeda

Osaka University, Suita, Ōsaka, Japan

Publications (127) · 45.21 Total Impact Points

  • ABSTRACT: The "illusion of agency" arises when individuals alternately view their own and another person's hand motions from a first-person perspective (Takumi Yokosaka, 2014). Agency here is a form of cognition in which people perceive both the congruent movement (CM) and the incongruent movement (IM) as a single, united, continuous motion of their own hand. The earlier report noted that apparent motion (AM) is perceived between CM and IM. We therefore assume that AM occurs repeatedly in two directions (from one's own hand toward the other's hand and vice versa) and that the two are integrated. To make the AM occur more densely, each CM-IM sequence lasted 250 ms instead of the former 500 ms, and to prevent any interaction between the CM:IM duration ratio and the velocity difference between the two AMs, we used a 1:1 ratio instead of the former 1:2. The IM was a recorded movement of a gloved hand, and subjects were asked to keep their own hand motion close to the IM. We tested three conditions (see the timing sketch below): (#1) 125 ms CM followed by 125 ms IM with no black frame; (#2) an 83 ms black frame inserted after a sequence of 83 ms CM and 83 ms IM; (#3) after 83 ms CM, two 41 ms black frames inserted before and after the 83 ms IM. Three of the four subjects reported seeing the recorded movement in each trial under conditions #1 and #3, but hardly ever under #2. The "illusion of agency" was observed to some degree under all conditions, but the fact that CM and IM were not integrated into a single movement under #1 and #3 indicates that the illusion was weaker there than under #2. Supposing that the inserted black frame slows the AM, the difference between the two AM velocities becomes dominant, and the integrated difference yields a directional bias toward the other's hand. This appears to shift the predicted position of one's own hand toward the other's hand unconsciously. Meeting abstract presented at VSS 2015.
    Article · Sep 2015 · Journal of Vision
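As a rough illustration of the three stimulus conditions described above, the following Python sketch (ours, not the authors' experimental code; the frame labels are arbitrary) enumerates one CM-IM cycle per condition:

```python
# A minimal sketch of the three frame-timing conditions from the abstract.
# Durations are in milliseconds, taken directly from the text.

def sequence(condition):
    """Return one CM-IM cycle as a list of (frame, duration_ms) pairs."""
    if condition == 1:      # 125 ms CM + 125 ms IM, no black frame
        return [("CM", 125), ("IM", 125)]
    if condition == 2:      # 83 ms black frame after 83 ms CM + 83 ms IM
        return [("CM", 83), ("IM", 83), ("BLACK", 83)]
    if condition == 3:      # 41 ms black frames before and after the IM
        return [("CM", 83), ("BLACK", 41), ("IM", 83), ("BLACK", 41)]
    raise ValueError("condition must be 1, 2 or 3")

for c in (1, 2, 3):
    cycle = sequence(c)
    total = sum(d for _, d in cycle)
    print(f"condition #{c}: {cycle} -> cycle length {total} ms")
```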
  • ABSTRACT: Galvanic vestibular stimulation (GVS) can be applied to induce the feeling of directional virtual head motion by stimulating the vestibular organs electrically. Conventional studies used two-pole GVS, in which electrodes are placed behind each ear, or three-pole GVS, in which an additional electrode is placed on the forehead. These stimulation methods can induce virtual head roll and pitch motions while the subject's head is upright. Here, we verified our hypothesis that there are current paths between the forehead and the mastoids in the head, and we show that our GVS system using four electrodes succeeded in inducing directional virtual head motion around three perpendicular axes, including yaw rotation, by applying different current patterns. Our method produced subjective virtual head yaw motions and evoked yaw-rotational body sway in participants. These results support the existence of three isolated current paths: between the two mastoids, and between each mastoid and the forehead. Our findings show that, by using these current paths, an additional virtual head yaw motion can be generated (a hypothetical current-pattern sketch follows below).
    Article · May 2015 · Scientific Reports
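The sketch below is illustrative only: the electrode labels and the per-axis current patterns are our assumptions, not the patterns reported in the paper. It only shows the bookkeeping a four-pole GVS pattern requires, namely per-electrode currents that sum to zero, with a different distribution for each axis of virtual head motion.

```python
# Hypothetical four-pole GVS current patterns (mA). Electrode placement and
# pattern values are assumptions for illustration, not the paper's patterns.
ELECTRODES = ("left_mastoid", "right_mastoid", "left_forehead", "right_forehead")

HYPOTHETICAL_PATTERNS = {
    "roll":  (+1.0, -1.0,  0.0,  0.0),   # left/right mastoids opposed
    "pitch": (+0.5, +0.5, -0.5, -0.5),   # mastoids vs. forehead
    "yaw":   (+1.0, -1.0, -1.0, +1.0),   # diagonal pairing
}

def check_pattern(currents_ma):
    """All injected current must return: the four currents must sum to zero."""
    assert abs(sum(currents_ma)) < 1e-9, "currents must balance"
    return dict(zip(ELECTRODES, currents_ma))

for axis, pattern in HYPOTHETICAL_PATTERNS.items():
    print(axis, check_pattern(pattern))
```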
  • ABSTRACT: In developing wearable computing technology, it is often desirable to produce the sensation of tapping a virtual object in mid-air. Such a tapping sensation consists not only of tactile sensations at the fingertips but also of force feedback to the musculoskeletal system; physical interaction with an object involves the integration of the two when the perceived object is contacted. In this study, we propose a method that combines electro-tactile stimulation (ETS) of the tactile receptors with functional electrical stimulation (FES) of the musculoskeletal system. An experimental study showed that participants reported the two sensations as simultaneous when the ETS was delayed by 25 ms relative to the FES while tapping a rigid virtual object (a timing sketch follows below).
    Article · Jan 2015
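A minimal timing sketch of the reported 25 ms offset; the trigger functions are hypothetical placeholders for real stimulator drivers, not the authors' implementation:

```python
# Schedule FES at virtual contact, then ETS 25 ms later (the offset at which
# the abstract reports the two sensations were perceived as simultaneous).
import threading

ETS_DELAY_S = 0.025  # 25 ms delay of electro-tactile stimulation after FES

def trigger_fes():
    print("FES pulse -> musculoskeletal force feedback")

def trigger_ets():
    print("ETS pulse -> fingertip tactile sensation")

def on_virtual_contact():
    """Called when the fingertip crosses the virtual object's surface."""
    trigger_fes()                                       # force first
    threading.Timer(ETS_DELAY_S, trigger_ets).start()   # tactile 25 ms later

on_virtual_contact()
```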
  • ABSTRACT: Neurofeedback is a powerful, direct training method for brain function: brain activity patterns are measured and displayed as feedback, and trainees try to steer the feedback signal toward certain desirable states in order to regulate their own mental states. Here, we introduce a novel neurofeedback method using the mismatch negativity (MMN) responses elicited by similar sounds that cannot be consciously discriminated. Through neurofeedback training, without the participants attending to the auditory stimuli or being aware of what was to be learned, participants unconsciously achieved a significant improvement in auditory discrimination of the applied stimuli. Our method has great potential to provide effortless auditory perceptual training: participants do not need to make an effort to discriminate the auditory stimuli and can engage in a task of their own interest without the boredom of conventional training. In particular, it could be used to train people to recognize speech sounds that do not exist in their native language and thereby facilitate foreign-language learning.
    Article · Oct 2014 · Scientific Reports
  • ABSTRACT: We report a novel illusion whereby people perceive both congruent and incongruent hand motions as a united, single, continuous motion of their own hand (i.e., with a sense of agency). The illusion arises when individuals watch congruent and incongruent hand motions alternately from a first-person perspective, and it can arise even though the individual knows that he or she is not performing the motion. Although a sense of agency is usually thought to require congruency between predicted and actual movements, the united motion is incongruent with the predicted movement because it contains an oscillating component resulting from the switching between the two hand-movement images. This illusion offers new insights into how predicted and observed movements are integrated in agency judgments. We investigated the illusion both in terms of subjective experience and in terms of motion responses.
    Article · Aug 2014 · Scientific Reports
  • ABSTRACT: Glasses-free 3D displays, in which each eye sees a different image without the viewer wearing glasses, can create a fully natural sensation of depth. In conventional approaches, lenticular lenses or parallax barriers are placed in front of an image source, such as a liquid crystal display, to make it show a stereoscopic image. A disadvantage of this technology is that the resolution of the perceived image is limited by diffraction at the lenticular lenses or the parallax barrier. To overcome this limit, we improved the technique by exploiting the human perceptual feature known as slit viewing: when a figure moves behind stationary narrow slits, observers perceive the moving figure as an integrated whole (a simulation sketch follows below).
    Article · Jul 2014
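A toy simulation of slit viewing (our illustration, not the display described in the paper): a pattern moves horizontally behind a fixed one-pixel-wide slit, and integrating what is visible over time, re-registered by the known motion, recovers the whole figure.

```python
import numpy as np

H, W = 8, 12
figure = (np.random.rand(H, W) > 0.5).astype(int)   # the hidden moving figure
slit_x = 6                                           # fixed slit column

percept = np.zeros_like(figure)
for t in range(W):                                   # figure shifts one pixel per frame
    shifted = np.roll(figure, t, axis=1)
    visible_column = shifted[:, slit_x]              # only this column passes the slit
    percept[:, (slit_x - t) % W] = visible_column    # re-register by the known motion

print("recovered whole figure:", np.array_equal(percept, figure))
```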
  • ABSTRACT: Many skills take a long time for us to learn, and it is often not even clear what should be learned or how. For example, one of the biggest problems in language learning is that learners cannot recognize novel sounds that do not exist in their native language, and it is difficult for them to acquire a listening ability for these sounds [1]. Here, we developed a novel neurofeedback (NF) method, using the mismatch negativity (MMN) responses elicited by similar sounds, that can help people improve their auditory perceptual skills unconsciously. In our method, the strength of the participant's MMN, as a measure of perceptual discriminability, is presented as visual feedback, providing a continuous rather than binary cue for learning (see the sketch below). We found evidence that significant improvement in both behavioral auditory discrimination and the neurophysiological measure occurs unconsciously. Based on our findings, the method has great potential to provide effortless auditory perceptual training and to serve as an unconscious learning interface device.
    Conference Paper · Mar 2014
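A minimal sketch of one plausible feedback mapping (our assumption, not the authors' pipeline): MMN strength estimated as the standard-minus-deviant ERP difference in a fixed window, mapped to the size of a continuous visual cue. The sampling rate, window, and scaling below are illustrative.

```python
import numpy as np

FS = 250                     # sampling rate (Hz), assumed
WINDOW = (0.10, 0.25)        # MMN window 100-250 ms after stimulus onset, assumed

def mmn_strength(standard_erp, deviant_erp):
    """Mean (standard - deviant) amplitude in the MMN window; MMN is a negativity."""
    i0, i1 = int(WINDOW[0] * FS), int(WINDOW[1] * FS)
    return float(np.mean(standard_erp[i0:i1] - deviant_erp[i0:i1]))

def feedback_size(strength, max_strength=5.0):
    """Continuous cue: larger value for a stronger (more discriminable) MMN."""
    return max(0.0, min(1.0, strength / max_strength))

# toy ERPs: the deviant response dips below the standard in the MMN window
t = np.arange(0, 0.4, 1 / FS)
standard = np.zeros_like(t)
deviant = -3.0 * np.exp(-((t - 0.17) ** 2) / (2 * 0.03 ** 2))
s = mmn_strength(standard, deviant)
print("MMN strength:", round(s, 2), "-> feedback size:", round(feedback_size(s), 2))
```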
  • ABSTRACT: This study reports that the effect of galvanic vestibular stimulation (GVS) can be enhanced by applying a countercurrent before the normal current stimulation in the forward direction. To investigate the effect of the countercurrent on GVS, we applied countercurrents with various amplitudes and durations before the normal stimulation. The strength of the effect was measured by subjective responses and by body sway. The enhancing effect of the countercurrent appeared not only in the objective response but also in the subjective reports, and the enhancement grew with the amount of charge delivered by the countercurrent before the normal stimulation (a worked charge calculation follows below). Our results imply that there is a capacitive element on the current path through which the countercurrent enhances the GVS effect.
    Conference Paper · Dec 2013
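A worked illustration of the charge relationship: for a rectangular countercurrent, Q = I × t, so amplitude/duration pairs with equal Q would be expected to produce similar enhancement. The numeric values below are our example values, not the paper's.

```python
# Charge delivered by a rectangular countercurrent: Q (mC) = I (mA) * t (s).
countercurrents = [        # (amplitude in mA, duration in s) -- illustrative values
    (0.5, 0.4),
    (1.0, 0.2),
    (2.0, 0.1),
    (2.0, 0.4),
]
for amp_ma, dur_s in countercurrents:
    charge_mc = amp_ma * dur_s   # millicoulombs
    print(f"{amp_ma} mA for {dur_s*1000:.0f} ms -> Q = {charge_mc:.2f} mC")
```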
  • Hiroyuki Iizuka · Hideyuki Ando · Taro Maeda
    ABSTRACT: This study presents an extended dynamic neural network model of homeostatic adaptation as a first step toward constructing a model of mental imagery. In the homeostatic adaptation model, higher-level dynamics that are internally self-organized from sensorimotor dynamics are associated with desired behaviors, and these dynamics are regenerated when drastic changes occur that might break the internal dynamics. Because the link between desired behavior and internal homeostasis is weak in the original homeostatic adaptation model, its adaptivity is limited. In this paper, we improve the homeostatic adaptation model by introducing a metabolic causation into the plasticity mechanism, creating a stronger link between desired behavior and internal homeostasis, and we show that the model becomes more adaptive. Our results show that the model exhibits three different time scales in its adaptive behaviors, which we discuss in relation to cognition and mental imagery.
    Article · Aug 2013 · Adaptive Behavior
  • ABSTRACT: We propose a novel method for guiding a user's head rotation and position when two persons perform cooperative work while wearing head-mounted displays, in what is called a view-sharing system. In such a system it is necessary to synchronize and match the head motions of the two persons, so head motion is transmitted and guided by a target marker shown in the user's view. In the conventional method, the partner's head position was shown in the view as the target marker. In this paper, the marker is modified to reflect two typical human motion patterns: changing the point of view by moving the head around the target object, and changing the gaze point by turning the head. The proposed method was evaluated in a head-tracking task and its effectiveness was confirmed.
    Conference Paper · Mar 2013
  • Junji Watanabe · Taro Maeda · Hideyuki Ando
    ABSTRACT: When a single column of light sources flashes quickly in a temporal pattern during a horizontal saccadic eye movement, two-dimensional images can be perceived in the space neighboring the light source. This perceptual phenomenon has been applied to light devices for visual arts and entertainment. However, a serious drawback in exploiting it for a visual information display is that a two-dimensional image cannot be viewed if there is any discrepancy between the ocular motility and the flicker timing. We overcame this drawback by combining the saccade-based display with an electro-oculography (EOG)-based sensor for detecting the saccade: the saccade onset is measured with the sensor in real time, and the saccade-based display is activated instantaneously as the saccade begins (a trigger-loop sketch follows below). The psychophysical experiments described in this article demonstrate that our method can detect saccades with low latency and allows the saccade-based display to convey visual information more effectively than when the light sources blink continuously regardless of the observer's eye movements.
    Article · Jun 2012 · ACM Transactions on Applied Perception
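A minimal trigger-loop sketch (our simplification, not the authors' device firmware): saccade onset is detected from the EOG slope with a simple threshold, and the column's temporal flash pattern is started immediately. The sampling rate, threshold, and LED-driver stub are assumptions.

```python
import numpy as np

FS = 1000                     # EOG sampling rate (Hz), assumed
THRESHOLD_UV_PER_S = 2000.0   # onset threshold on the EOG slope, assumed

def detect_saccade_onset(eog_uv):
    """Return the first sample index where the EOG slope exceeds the threshold."""
    slope = np.diff(eog_uv) * FS
    idx = np.flatnonzero(np.abs(slope) > THRESHOLD_UV_PER_S)
    return int(idx[0]) if idx.size else None

def flash_column(pattern):
    """Play a binary per-millisecond flash pattern on the LED column (stub)."""
    for frame in pattern:
        pass  # a real driver would set the LEDs for one millisecond here

# toy EOG trace: flat, then a fast ramp standing in for the saccade
eog = np.concatenate([np.zeros(200), np.linspace(0, 300, 40), np.full(100, 300.0)])
onset = detect_saccade_onset(eog)
if onset is not None:
    print(f"saccade onset at sample {onset} -> start flashing the column")
    flash_column(np.random.randint(0, 2, size=(30, 16)))   # 30 ms x 16 LEDs
```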
  • ABSTRACT: In this paper, we present an experiment that integrates a semiotic investigation with a dynamical perspective on embodied social interactions. The primary objective is to study the emergence of a communication system between two interacting individuals where no dedicated communication modalities are predefined and the only possible interaction is very simple, non-directional, and embodied. Throughout the experiment, we observe three phenomena: (1) the spontaneous emergence of turn-taking behaviour that allows communication in non-directional environments; (2) the development of an association between behaviours and perceptual categories; (3) the acquisition of novel meaning by exploiting the notion of complementary sets.
    Article · Feb 2012 · Psychological Research
  • ABSTRACT: If the feeling of presence can be transferred to a place other than the one where we actually exist, our lifestyle will change drastically. By extending robot-human telexistence [1] technology to human-human situations, we are developing an environment in which a skilled person, who actually exists at a different place, can work effectively on site in place of a non-skilled person. In order to realize such a telexistence environment in human interactions, we are developing remote communication technologies that exploit the sharing of senses and motions. In this project, we have developed a view-sharing system to share first-person perspectives between two remote people [2]. The system consists of a head-mounted display and cameras, which together constitute a video see-through HMD (VST-HMD). The user wearing the HMD can see both his own view and the partner's view, and can also send his own view to the partner. Our aim is to share experience and to transmit skills from one person to another by sharing vision and motion [3]. We developed a new view-sharing system to improve effectiveness and expand its applications.
    Article · Jan 2012
  • ABSTRACT: The aim of this paper is to investigate human communication in terms of the following two questions: how can a human know that an interacting partner is human, and how can non-communicative behaviour become communicative? To answer these questions, we performed two experiments exploiting the idea of perceptual-crossing experiments. We show that the turn-taking structure supports human-likeness and human communication in primitive non-verbal interaction, and we discuss our results in relation to ambient interface technology.
    Article · Jan 2012

  • Conference Paper · Jan 2012
  • Hideyuki Ando · Taro Maeda

    Article · Jan 2012
  • ABSTRACT: We developed a new pseudo-attraction force display using four vibrating motors. The advantage of this device is that it uses phase-difference control to alter the time width of the pulses and thereby generate asymmetric oscillation, without altering the frequency of the vibrating motors. In this study, we investigated how the pulse width generated by the device influences the perceived intensity of the pseudo-attraction force. The results show that oscillations with shorter pulse widths, in the range of 10-20 ms, induced the "being-pulled" sensation more strongly (an asymmetric-waveform sketch follows below).
    Article · Jan 2012
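An illustrative construction (ours, not the authors' motor-drive code) of the asymmetric oscillation behind a pseudo-attraction force: a brief strong pulse in the "pull" direction balanced by a longer weak pulse in the opposite direction, so the net impulse per cycle is zero but the sensation is biased toward the strong pulse. The parameter pulse_ms plays the role of the pulse width varied in the experiment; the period and sampling rate are assumed.

```python
import numpy as np

def asymmetric_cycle(pulse_ms, period_ms=50, fs=10_000):
    """Return one cycle of force samples with zero net impulse."""
    n_total = int(period_ms * fs / 1000)
    n_pulse = int(pulse_ms * fs / 1000)
    force = np.empty(n_total)
    force[:n_pulse] = 1.0                                 # short, strong pull
    force[n_pulse:] = -n_pulse / (n_total - n_pulse)      # long, weak push back
    return force

for pulse_ms in (10, 15, 20):
    f = asymmetric_cycle(pulse_ms)
    print(f"pulse {pulse_ms} ms: peak ratio {f.max()/abs(f.min()):.1f}, "
          f"net impulse {f.sum():.2e}")
```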
  • ABSTRACT: Spatial cognition requires the integration of visual perception and self-motion. The lack of motion information in vision, as with image flows on TV, often confuses spatial cognition and makes it hard to understand where surrounding objects are. The aim of this work is to investigate the mechanism by which spatial cognition is constructed from image flows and self-motion. In our experiments, we evaluated spatial cognition performance in VR environments as a function of the amount of visual and motion information. Motion and visual information are closely related, since self-motion can be estimated from the image flow of the background. To modulate visual information, two kinds of head-mounted displays (32×24 and 90×45 deg fields of view) were used. To compensate for motion information explicitly, a marker indicating the viewpoint movement of the images was added to the display and its effect was tested. We found that the marker significantly improved performance and that a larger FOV could compensate for the lack of motion information, although the FOV effect was not significant when the marker was added. We discuss these results in relation to an image stabilization method for maintaining consistency between motion and image flows.
    Article · Oct 2011 · i-Perception
  • Hiroyuki Iizuka · Hideyuki Ando · Taro Maeda
    ABSTRACT: The aim of the present paper was to clarify how the distinction between self-produced behavior (sense of agency, SOA) and other-produced behavior can be synthesized and recognized through multisensory integration in our cognitive processes. To address this issue, we used the tickling paradigm, which exploits the fact that it is hard to tickle oneself. Previous studies show that the tickle sensation produced by one's own motion increases as the delay between the tickling movement and the tactile stimulation grows (Blakemore et al. 1998, 1999). We introduced visual feedback into the tickling experiments. Our hypothesis is that the integration of vision, proprioception, and motor commands forms the SOA, and that its disintegration breaks down the SOA, producing a feeling of otherness and a tickling sensation even when one tickles oneself. We used a video see-through HMD to suddenly delay the real-time images of the participants' own tickling hand motions (see the frame-delay sketch below). The tickle sensation was measured by subjective responses under the following conditions: 1) tickling oneself without any visual modulation, 2) being tickled by another person, 3) tickling oneself with the visual feedback manipulated. Statistical analysis of the ranked evaluations of tickle sensation showed that delaying the visual feedback increases the tickle sensation. The SOA is discussed in light of Blakemore's results and ours.
    Article · Oct 2011 · i-Perception
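A minimal frame-delay sketch (our illustration, not the authors' HMD software): a ring buffer that passes camera frames through in real time until the delay is switched on, after which the frame from delay_frames ago is shown.

```python
from collections import deque

class DelayedVideo:
    def __init__(self, delay_frames):
        self.buffer = deque(maxlen=delay_frames + 1)
        self.delayed = False

    def process(self, frame):
        """Return the frame to display for the current camera frame."""
        self.buffer.append(frame)
        if self.delayed and len(self.buffer) == self.buffer.maxlen:
            return self.buffer[0]      # oldest buffered frame -> delayed view
        return frame                   # real-time view

video = DelayedVideo(delay_frames=6)   # e.g. 6 frames = 200 ms at 30 fps
for i in range(12):
    if i == 6:
        video.delayed = True           # suddenly introduce the delay
    print(f"camera frame {i} -> displayed frame {video.process(i)}")
```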
  • ABSTRACT: Previous studies have shown that modifying visual information contributes to the on-line control of self-motion and that vection can be induced by modifying the velocity of optical flow. In this study, we investigate whether velocity modulation of optical flow affects self-motion perception during whole-body motion. In our experiments, visual stimuli were provided by a virtual wall of dots on a screen. Participants were asked to move their head from a starting point to a goal within 1.5 s (about 33 cm/s). We compared three conditions of visual feedback (optical flow) with and without modification, i.e., accelerated, decelerated, and unchanged optical flow (a gain sketch follows below). The rates of change in velocity were between 0.5 and 2 times that of the control condition. We found that the accelerated condition induced undershooting, while the decelerated condition induced overshooting, relative to the goal point of the control condition. Moreover, the positioning errors were largest at a change rate of 1.5. Our findings suggest that self-motion perception during body motion is influenced by velocity changes in the optical flow and that there is an optimal rate of velocity change for perceiving self-motion. Finally, we discuss the mechanism of integrating visual and proprioceptive information.
    Article · Oct 2011 · i-Perception
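A minimal sketch of one way to read the manipulation (our assumption, not the authors' code): the displayed scene displacement is the tracked head displacement multiplied by a gain, so gains above 1 accelerate the optical flow and gains below 1 decelerate it. The sampled head path is illustrative.

```python
def visual_position(head_positions_cm, gain):
    """Map tracked head positions to displayed scene displacement."""
    start = head_positions_cm[0]
    return [start + gain * (p - start) for p in head_positions_cm]

head_path = [0.0, 10.0, 25.0, 40.0, 50.0]       # cm, sampled along the 1.5 s movement
for gain in (0.5, 1.0, 1.5, 2.0):               # decelerated ... accelerated flow
    print(f"gain {gain}: displayed displacement {visual_position(head_path, gain)}")
```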

Publication Stats

872 Citations
45.21 Total Impact Points

Institutions

  • 2008-2015
    • Osaka University
      • Graduate School of Information Science and Technology
      • Department of Bioinformatic Engineering
      Suita, Ōsaka, Japan
  • 2014
    • National Institute of Information and Communications Technology
      • Center for Information and Neural Networks
      Tokyo, Japan
  • 2010-2013
    • Osaka City University
      Ōsaka, Ōsaka, Japan
  • 2011
    • Le Centre de Recherche en Économie et Statistique
      Malakoff, Île-de-France, France
  • 2002-2011
    • Japan Science and Technology Agency (JST)
      Tokyo, Japan
  • 2002-2008
    • NTT Communication Science Laboratories
      • Human Information Science Laboratory
      Kyoto, Japan
  • 1991-2008
    • The University of Tokyo
      • Graduate School of Information Science and Technology
      • Department of Medical Engineering
      • Department of Mathematical Engineering and Information Physics
      • Department of Mechanical Engineering
      Tokyo, Tokyo-to, Japan