Research Article Open Access
Journal of Marine Science: Research & Development
ISSN: 2155-9910
Kassewitz et al., J Marine Sci Res Dev 2016, 6:4
Volume 6 • Issue 4 • 1000202
A Phenomenon Discovered While Imaging Dolphin Echolocation Sounds
Jack Kassewitz1*, Michael T Hyson2, John S Reid3 and Regina L Barrera4
1Speak Dolphin, Miami, Florida, USA
2Sirius Institute, Puna, Hawaii
3Sonic Age Ltd, UK
4Puerto Aventuras, Mexico
Our overall goal is to analyze dolphin sounds to determine if dolphins utilize language or perhaps pictorial information in their complex whistles and clicks. We have recently discovered a novel phenomenon in images derived from digital recordings of the sounds of dolphins echolocating on submerged objects. Hydrophone recordings of dolphin echolocation sounds were input to a CymaScope, an analog instrument in which a water-filled, fused-quartz cell is acoustically excited in the vertical axis by a voice coil motor directly coupled to the cell. The resulting wave patterns were recorded with a digital video camera. We observed the formation of transient wave patterns in some of the digital video frames that clearly matched the shapes of the objects on which the dolphin echolocated, including a closed cell foam cube, a PVC cross, a plastic flowerpot, and a human subject. As further confirmation of this phenomenon the images were then converted into 3-dimensional computer models, made such that the thickness at any given point was proportional to the brightness of a contrast-enhanced image, with brighter areas thicker and darker areas thinner. These 3-dimensional virtual models were then printed in photopolymers utilizing a high definition 3D printer.
*Corresponding author: Jack Kassewitz, Speak Dolphin, 7980 SW 157 Street, Palmetto
Bay, FL 33157, USA, Tel: 1 (305) 807-5812; E-mail:
Received June 06, 2016; Accepted July 04, 2016; Published July 15, 2016
Citation: Kassewitz J, Hyson MT, Reid JS, Barrera RL (2016) A Phenomenon
Discovered While Imaging Dolphin Echolocation Sounds. J Marine Sci Res Dev 6:
202. doi:10.4172/2155-9910.1000202
Copyright: © 2016 Kassewitz J, et al. This is an open-access article distributed
under the terms of the Creative Commons Attribution License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the
original author and source are credited.
Keywords: Dolphins; Dolphin echolocation; Hydrophone recordings; CymaScope; Faraday waves; Standing waves; 3D printing
We demonstrate that dolphin echolocation sound fields, when reflected from objects, contain embedded shape information that can be recovered and imaged by a CymaScope instrument. The recovered images of objects can be displayed both as 2-D images and as 3-D printed objects.
Bottlenose dolphins, Tursiops truncatus, use directional, high-
frequency broadband clicks, either individually or in “click train”
bursts, to echolocate. Each click has a duration of between 50 and 128 microseconds. The highest frequencies our team has documented were approximately 300 kHz, while other researchers have recorded frequencies up to 500 kHz emitted by Amazon river dolphins [1]. The clicks allow dolphins to navigate, locate and characterize objects by processing the returning echoes. Echolocation ability, or bio-sonar (Sound Navigation and Ranging), is also utilized by other Cetacea, most bats and some humans. Dolphins echolocate on objects, humans, and other life forms both above and below water [2].
The exact biological mechanisms by which dolphins "see" with sound remain an open question, yet that they possess such an ability is well established [3]. For example, Pack and Herman found that "a bottle nosed dolphin… was … capable of immediately recognizing a variety of complexly shaped objects both within the senses of vision or echolocation and, also, across these two senses. The immediacy of recognition indicated that shape information registers directly in the dolphin's perception of objects through either vision or echolocation, and that these percepts are readily shared or integrated across the senses. Accuracy of intersensory recognition was nearly errorless regardless of whether the sample objects were presented to the echolocation sense … the visual sense (E-V matching) or the reverse… Overall, the results suggested that what a dolphin "sees" through echolocation is functionally similar to what it sees through vision" [4].
In addition, Herman and Pack showed that "…cross modal recognition of … complexly shaped objects using the senses of echolocation and vision…was errorless or nearly so for 24 of the 25 pairings under both visual to echoic matching (V-E) and echoic to visual matching (E-V). First trial recognition occurred for 20 pairings under V-E and for 24 under E-V… The results support a capacity for direct echoic perception of object shape… and demonstrate that prior object exposure is not required for spontaneous cross-modal recognition" [4].
Winthrop Kellogg was the first to study dolphin echolocation in depth and found their sonic abilities remarkable [5]. He discovered that dolphins are able to track objects as small as a single "BB" pellet (approximately 0.177 inch) at a range of 80 feet and negotiate a maze of vertical metal rods in total darkness. A recent study showed that a bottlenose dolphin can echolocate a 3-inch water-filled ball at a range of 584 feet, analogous to detecting a tennis ball almost two football fields away [6].
Click trains and whistles cover a spectrum of frequencies in the range 0.2 to 300 kHz and appear to originate from at least two pairs of phonators or phonic lips near the blowhole (Figure 1, Left) [7]. The phonic lips are excited by pumping air between air sacs located above and below the phonators (Figure 2, Right). The phonators' positions can be rapidly adjusted by attached muscles [8].

We suggest that these sounds reflect from a compound parabola [9-11] shape formed by the palatine bones. A compound parabola would ensure equal beam intensity across the output aperture. The now collimated sound is emitted from the dolphin's melon, which is made
up of several types of adipose tissue [12]. The melon is thought to be an acoustic lens with a variable refractive index. Thus, higher frequency sounds will be deflected to higher azimuths, so that the dolphin may interpret the frequency of the returning echoes as an azimuth angle.

The resulting sonic output reflects from objects in the environment, forming echoes that are received and processed by the dolphin. The commonly held view is that the returning echo sounds are received via the lower jaw, passing first through the skin and fat that overlay the pan bones. Two internal fat bodies that are in contact with the pan bones act as wave guides [13,14], conducting the sounds to the tympanic bullae (Figure 2, Left) and causing a thin part of each bulla to flex like a hinge (Figure 3, Left). This transmits sound by way of the ossicles (Figure 3, Right) to the cochleae.
As with stereoscopic visual perception in humans, due to the separation of the eyes on the head, the separation of the left and right fat bodies and tympanic bullae may allow dolphins to process sonic inputs binaurally. Different times of arrival (or phase) at the left and right may allow the dolphin to determine the direction of incoming sounds [15-21].
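The binaural time-of-arrival idea above can be illustrated with a small far-field calculation. This is an illustrative sketch only: the receiver spacing and sound speed below are assumed round numbers, not measurements from a dolphin.

```python
import math

def arrival_angle_deg(delta_t_s, separation_m, c_m_s=1500.0):
    """Estimate the bearing of a sound source from the difference in
    arrival time at two spaced receivers (far-field approximation):
    theta = asin(c * dt / d), measured from the midline."""
    x = c_m_s * delta_t_s / separation_m
    x = max(-1.0, min(1.0, x))  # clamp against rounding overshoot
    return math.degrees(math.asin(x))

# Receivers 10 cm apart in seawater (c ~ 1500 m/s): a 33 microsecond
# delay corresponds to a source roughly 30 degrees off the midline.
print(round(arrival_angle_deg(33e-6, 0.10), 1))  # -> 29.7
```

A zero delay gives a source on the midline; larger delays map to larger bearings until the geometric limit c·dt = d is reached.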
Conjecture has surrounded the issue of whether the tympanic membrane and the malleus, incus, and stapes are functional in dolphins. A prevalent view is that the tympanic membrane lacks function and that the middle ear ossicles are "calcified" in dolphins and therefore lack function [22]. However, according to Gerald Fleischer (Figure 3), while the dolphin ossicle chain may appear calcified, this stiffness actually matches the impedance of the tympanic bullae and ossicles with the density of water. A thin part of each tympanic bulla, with density approximating granite, responds to sounds by flexing. This flexion is tightly coupled to the ossicles, which have evolved their shapes, lever arm moments and stiffness to respond to acoustic signals in water (about 800 times denser than air) with a minimal loss of acoustic energy. The flexion can occur even if the tympanic membrane lacks function. Significant rigidity is required in this system; for example, the stapes in the dolphin has compliance comparable to a Boeing 747 landing gear [23].
Fleischer also suggests that the left/right asymmetry of the skull (Figure 1, Right) and the "break" in the zygomatic arches of echolocating Cetacea are necessary to prevent coupling of resonances from both sides of the skull that would reduce the acoustic isolation of
Figure 1: Left: Dolphin head showing phonic lips or phonators [16,17], Right: Top view of dolphin skull, showing asymmetries [18].
Figure 2: Left: dolphin mandible showing pan bone area [19], Right: Schematic of dolphin air sacs producing sounds [20].
the bullae. In fact, one can determine from fossils if an extinct species
echolocated by examining the zygomatic arch (Figure 4).
Given that the fat bodies in the mandible contact the tympanic bullae, there is a path by which returning echoes flex the tympanic bullae and provide input to the ossicle chain. Thus the apparent contradictions in the work of Fleischer, Norris and Ketten can be resolved, since both the fat bodies of the mandible and other external sounds flex the tympanic bullae relative to the periotic bone. Sounds from either the fat bodies or the tympanic bullae couple to the ossicle chain and conduct sounds to the dolphin's cochleae.
The sensory hair cells along the cochleae's Organ of Corti are stimulated at different positions that depend on a sound's frequency, thus translating frequency to position according to the von Bekesy traveling wave theory [24]. Interesting work by Bell suggests a model in which each hair cell is a complex harmonic oscillator [25-28].

Neural signals from the hair cells are transferred to the brain via the acoustic nerve, which is 3 times larger in dolphins than in humans. The acoustic nerves ultimately project to the brain's acoustic cortex with a tonotopic mapping wherein specific frequencies excite specific locations. The auditory cortex of the dolphin brain is highly developed and allows hearing across a bandwidth approximately 10 times wider than that of a human. The dolphin brain then interprets these data.
Dolphins can also control the direction of their echolocation
sounds. Lilly showed how dolphins use phase to steer their sound
beams [29]. Microphones were placed on either side of the blowhole
of a dolphin. At times only one microphone picked up sound while the other showed zero signal, suggesting that the dolphin was making at least two sounds: sounds to the left were cancelled out while sounds to the right were reinforced, and the same result was found when the sides were reversed. Therefore, a dolphin can aim its echolocation beam while its head is stationary.
To place this work in a larger context, we note that dolphins
recognize themselves in mirrors [30-33], which shows they are self-
aware. Wayne Batteau taught dolphins to recognize 40 spoken Hawaiian
words [34,35] and John C Lilly taught dolphins to imitate English [36].
Louis Herman found that dolphins can recognize approximately 300
hand signs in 2000 combinations [37]. Dolphins perform better at
language tasks than any other creature with the exception of humans.
Research by Markov and Ostrovskaya [38] found that the 4 dolphin phonators are under exquisite and separate control [39]. A dolphin can make at least 4 simultaneous sounds that are all unique. From examining around 300,000 vocalizations, they conclude that dolphins have their own language with up to a trillion (10^12) possible "words," whereas English has only about 10^6 words.
They also found that dolphin communication sounds follow Zipf's laws [40], by which the number of short utterances exceeds that of long utterances in a 1/f ratio. Markov and Ostrovskaya consider this proof that dolphin sounds carry linguistic information, since all other languages and computer codes so far tested also show a 1/f relationship with symbol length [41].
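The 1/f regularity described above can be checked on any corpus of utterance counts: under Zipf's law, the product of rank and frequency is roughly constant. The toy corpus below is invented for illustration and is not dolphin data.

```python
from collections import Counter

def zipf_fit(tokens):
    """Rank token types by frequency and return (rank, freq, rank*freq)
    triples; under Zipf's law the product rank*freq is roughly constant."""
    counts = Counter(tokens).most_common()
    return [(r, f, r * f) for r, (_, f) in enumerate(counts, start=1)]

# Toy corpus whose type frequencies were chosen to follow 1/rank exactly.
corpus = ["a"] * 12 + ["b"] * 6 + ["c"] * 4 + ["d"] * 3
for rank, freq, product in zipf_fit(corpus):
    print(rank, freq, product)  # product stays at 12 for every rank
```

On real data the products only approximate a constant, and a log-log regression of frequency against rank is the usual test.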
Equipment and Methods
To investigate the bio-sonar and communication system of the bottlenose dolphin, Tursiops truncatus, we trained dolphins to echolocate on various objects at a range of 2 to 6 feet. The sounds of dolphins were recorded while they echolocated on submerged objects, including a closed cell foam cube, a PVC cross, a plastic flowerpot, and a human subject. Searching for ways to visualize the information in the recordings, we used a recently developed CymaScope, which has three imaging modules covering different parts of the audio spectrum. The highest frequency imaging module operates by sonically exciting medical-grade water [42] in an 11 mm diameter fused quartz visualizing cell by means of a voice coil motor. The dolphin recordings were input into the CymaScope's voice coil motor, and the water in the visualizing cell transposed the echolocation sounds to 2-D images.
We utilized a calibrated matched pair of Model 8178-7 hydrophones with a frequency response from 20 Hz to 200 kHz. Their spatial response is omnidirectional in orthogonal planes, with minor nulling along an axis looking up the cable. The hydrophone calibration curves show a flat frequency response down to 20 Hz at -168.5 dB re 1 V/µPa. The response from 2 kHz to 10 kHz is -168.5 dB, after which the response rolls off gradually. From 80 kHz to 140 kHz, the sensitivity falls to -174 dB, after which the sensitivity climbs again to -170 dB at 180 kHz. Above 200 kHz, the response rolls off gradually out to 250 kHz, then drops rather sharply at approximately -12 dB per octave. The pair of hydrophones utilized 45-foot underwater cables with four conductors inside a metalized shield and were powered by two 9 volt batteries.
Hydrophone signal processing
The signal from the hydrophones was passed to a Grace Lunatec V3 microphone preamp, a wideband FET preamplifier with an ultra-low distortion 24-bit A/D converter. The preamplifier offers the following sample rates: 44.1, 48, 88.2, 96, 176.4 and 192 kHz. A band pass filter attenuated signals at and below 15 Hz to reduce noise. In addition, we utilized a power supply filter and power input protective circuits. The pre-amplified and filtered signal was sent to an amplifier enclosed in a metal EMI enclosure and encapsulated in a PVC plastic assembly.
Figure 3: Left: The Cetacean Periotic-Tympanic Bulla “hinge” ection [21],
Right: Dolphin ear ossicles (malleus-red, incus-green, stapes-blue) [22].
Figure 4: 30+ Million-Year-Old Dolphin Skull [25].
For our recordings, sampling rates were set at 192 kHz. A solid copper
grounding rod was laid in seawater. We measured 25 ohms or less with
the ground rod installed, thus minimizing noise.
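The low-frequency noise rejection in the chain above can be approximated with a simple one-pole high-pass filter. This is a sketch of the filtering step only, not the actual Grace Lunatec circuitry, and the cutoff value is taken from the 15 Hz figure in the text.

```python
import math

def highpass(samples, cutoff_hz, fs_hz):
    """One-pole high-pass filter: attenuates content at and below the
    cutoff, a simplified stand-in for the 15 Hz noise filter described."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / fs_hz
    alpha = rc / (rc + dt)
    out, prev_x, prev_y = [], samples[0], 0.0
    for x in samples[1:]:
        prev_y = alpha * (prev_y + x - prev_x)
        prev_x = x
        out.append(prev_y)
    return out

# A 5 Hz tone (below the 15 Hz cutoff) is strongly attenuated at fs = 192 kHz.
fs = 192_000
tone = [math.sin(2 * math.pi * 5 * n / fs) for n in range(fs)]
filtered = highpass(tone, 15.0, fs)
print(max(abs(v) for v in filtered) < 0.5)  # -> True
```

A real band-pass stage would also roll off above the audio band; only the low-frequency side is sketched here.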
The echolocation sounds for the plastic cube, cross, and flowerpot were recorded with one channel of a Sound Devices 722 high-definition two-channel digital audio recorder. The Sound Devices recorder writes and reads uncompressed PCM audio at 16 or 24 bits with sample rates in the range 32 kHz to 192 kHz. The 722 implements a non-compressed audio format and includes Sound Devices' microphone preamplifiers, designed for high bandwidth and high bit rate digital recording with linear frequency response, low distortion, and low noise. For recording the echoes from the plastic objects the sample rate was set to 192 kHz.
The echolocation sounds for the human subject were recorded with an SM3BAT, which can sample in the range 1 kHz to 384 kHz on a single channel and up to 256 kHz with two channels. The sampling rate was set to 96 kHz to better match the narrow frequency response range of the CymaScope. Audio data were stored on an internal hard drive, Compact Flash cards, or external FireWire drives.
Recording dolphin object echolocation sounds
Dolphins were trained to echolocate on various objects on cue. We
recorded a dolphin's echolocation sounds and the echoes reflected from a PVC plastic cross, a closed cell foam cube, a plastic flowerpot, and an adult male human subject under water. The signals from the cross, cube and flowerpot were recorded as single monaural tracks from one hydrophone approximately 3 feet in front of the object and about 3 feet to the side, while the dolphin was 1-2 feet from the target. The recording of the submerged, breath-holding human subject utilized two tracks from two hydrophones to the left and right of the subject, about 3 feet in front of the human and about 6 feet apart (Figure 5).
We chose these objects because they were novel shapes to the dolphin and had relatively simple geometries. Once we determined that images of these shapes appeared in CymaGlyphs, we wanted to see if the imaging effect would occur with the more complex shape of a human body (Figure 6).
The average distance from dolphin to submerged object was between 1 and 2 feet, and approximately 6 feet when echolocating the human male (Figure 7).
Environmental conditions
During the experiments the water was clear and calm and about ten feet in depth. We might expect multipath echoes from the surface of the water and the bottom, but we have yet to see evidence of this in our recordings. Perhaps the fact that the dolphin was at a range of 1-2 feet from the objects being echolocated eliminated such effects. For the echolocation on the submerged human subject at a range of about 6 feet we also failed to see evidence of any multipath echoes.
Visualizing dolphin echolocation sounds
We utilized various conventional methods to represent the dolphin echolocation sounds, including oscillographs and FFT sonograms, which display data graphically. We also utilized a CymaScope instrument to provide a pictorial mode of display. The CymaScope's highest frequency imaging module transposes sonic input signals to water wavelet patterns by exciting a circular cell, 11 mm in diameter and containing 310 microliters of water, by means of a voice coil motor direct-coupled to the cell (Figure 8). Images recorded from the CymaScope are called "CymaGlyphs." The resulting wave patterns in the visualizing cell were captured from the objects' echo .wav files as still photographs, and as video recordings for the male human test subject's echo .wav file.
Utilizing recordings of dolphin sounds as inputs to the CymaScope, we discovered 2-D images that clearly resembled the objects being echolocated. To further investigate the images, we converted them to 3-D models using 3D print technology. As far as we know, this is the first time that such methods have been employed to investigate dolphin echolocation signals.
Digital dolphin sound files were supplied to Reid for experimentation in the UK via email attachments. The audio signal was first equalized utilizing a Klark Teknik 30-band graphic equalizer with the 5 kHz band boosted to +12 dB. All bands above 5 kHz were minimized to -12 dB, creating a 24 dB differential between the 5 kHz band and signals above that frequency. Bands below 5 kHz, down to 800 Hz, were also boosted to +12 dB. Bands below 800 Hz were minimized.
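The equalization scheme just described can be summarized as a gain function of frequency. The sketch below idealizes the 30-band unit as a flat passband with hard edges, ignoring the real equalizer's band-edge slopes.

```python
def eq_gain_db(freq_hz):
    """Approximate the graphic-equalizer curve described in the text:
    bands from 800 Hz up to 5 kHz boosted to +12 dB, everything else
    cut to -12 dB (band-edge slopes of the real 30-band unit ignored)."""
    return 12.0 if 800.0 <= freq_hz <= 5000.0 else -12.0

# The passband brackets the CymaScope's stated response (center 1840 Hz,
# upper limit 5 kHz); the net boost-to-cut differential is 24 dB.
for f in (100, 800, 1840, 5000, 10000):
    print(f, eq_gain_db(f))
```

Converting to a linear amplitude factor is 10**(gain_db / 20) if the curve is to be applied to samples directly.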
The equalized signal was then sent to an Inter-M model QD4240 amplifier with a frequency response of 20 Hz - 20 kHz, THD <0.1% over 20 Hz - 20 kHz, and 60 watts RMS per channel; a single channel was used, creating a sound pressure level of 100 dB at the voice coil motor.
Figure 9 shows a block diagram of the signal processing equipment.
Figure 5: Echolocated objects: Cross 1 ft. across, Cube 3×3×3 inches, Flowerpot about 1 ft. high, and a human male.
Figure 6: Echolocation: Left: Cross, Right: Cube.
Figure 7: Left: Schematic of hydrophones recording dolphin echolocating on
a human subject underwater Right: Submerged human male.
Citation: Kassewitz J, Hyson MT, Reid JS, Barrera RL (2016) A Phenomenon Discovered While Imaging Dolphin Echolocation Sounds. J Marine Sci
Res Dev 6: 202. doi:10.4172/2155-9910.1000202
Page 5 of 12
Volume 6 • Issue 4 • 1000202
J Marine Sci Res Dev
ISSN: 2155-9910 JMSRD, an open access journal
The optimum (center) frequency response of the CymaScope is 1840 Hz, while the instrument's upper limit is 5 kHz. Thus, at its current stage of development the CymaScope responds to only a small band of frequencies in the dolphin recordings; however, future development of the instrument is expected to broaden its frequency response.
The resulting wave patterns on the water's surface were photographed and/or captured on video with a Canon EOS 7D camera with a Canon 100 mm macro lens set at f/5.6 and 5500 K white balance. The raw resolution for stills and video is 1920 × 1080 pixels, at a frame rate of 24 fps with a shutter speed of 1/30th second. An LED light ring mounted horizontally above and coaxial with the visualizing cell provided illumination (Figure 10).
Figure 8: Cross section through the CymaScope’s Voice Coil Motor and arrangement of Light Ring and Camera.
Figure 9: Block diagram of signal processing route from signal source to CymaScope.
The .wav sound files sent to Reid were labeled only with letters and numbers, and he had no prior knowledge of what each file contained or which files contained echolocations, if any. Therefore, Reid performed the analyses on the CymaScope "blindly," and in all cases only the sound files containing object echolocations showed images.
e sound le containing the “male human subject” was input to
the CymaScope and the resulting transient wave impressions on the
water were captured at 24 frames per second and compressed to a
QuickTime video le. Our initial investigations indicate that the image
occurs at or aer 19.6 seconds in the sound le and before 20 seconds.
e Canon 7D camera can have a variable audio sync slippage of 0 to 3
video frames, thus we can only localize the image within the video to a
6 frame region. In addition, the water in the cell has a hysteresis eect
in that it takes a nite amount of time for the water molecules to be
imprinted by the input signal; we have yet to measure this time delay,
which is likely to vary with frequency. We hypothesize that there is a
potential image frame for each dolphin click. One interesting aspect
is that the orientations of the images occurred at various angles and
were then adjusted to an upright orientation post shooting; however,
the “human” image is presented in its original orientation.
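The frame-localization arithmetic above can be sketched as a small helper; the function name and its rounding choices are illustrative, not part of the authors' workflow.

```python
def frame_window(t_start_s, t_end_s, fps, max_slip_frames):
    """Convert a time interval in the audio to a candidate range of video
    frames, widened on both sides by the camera's possible audio-sync
    slippage."""
    first = int(t_start_s * fps) - max_slip_frames
    last = int(t_end_s * fps) + max_slip_frames
    return first, last

# The human image formed between 19.6 s and 20.0 s; at 24 fps with up to
# 3 frames of slip, the candidate frame range is:
print(frame_window(19.6, 20.0, 24, 3))  # -> (467, 483)
```

With zero slip the interval 19.6-20.0 s alone already spans about ten 24 fps frames, so the slippage widens rather than creates the search window.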
We discovered transient wave patterns in the water cell that were strikingly similar in shape to the objects being echolocated. To further investigate the shapes in these CymaGlyphs we converted the images to 3-D models. As far as we know, this is the first time such a method has been implemented.
Transient wave images were found for the cube, cross, flowerpot and a human by examining single still frames, or single frames acquired in bursts, usually where there were high power and dense click trains in the recording. The parameters involved in capturing echolocation images with the CymaScope include careful control of the acoustic energy entering the visualizing cell. Water requires a narrow acceptable "power window" of acoustic energy; too little energy results in no image formation, and too much energy results in water being launched from the cell. As a result, many hours of work were involved in capturing the still images of the echolocated objects. The imagery for the human subject was captured in video mode and has been approximately located in time code. The image formed between 19.6 and 20 seconds into the recording and may derive from a set of dense clicks at approximately 20 seconds. The video was shot at 24 frames/second with a possible audio synchronization error of plus or minus 3 frames.
Further, to maximize the coupling of the input sound to the water in the cell, the original recording was slowed by 10%, as well as being heavily equalized in order to boost the high frequency acoustic energy entering the visualizing cell. This signal processing may have introduced phase changes. In addition, there is some hysteresis, yet to be measured, because of the mass, latency and compliance of the voice coil motor, as well as the mass and latency of the water in the visualizing cell. We have yet to measure these time delays, which are likely to vary with frequency.
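One reading of the 10% slow-down is a simple resampling that lowers all frequencies by the same factor. The linear-interpolation routine below is a hedged stand-in for whatever professional audio tool was actually used; the factor and the interpolation scheme are assumptions.

```python
def slow_down(samples, factor=0.9):
    """Resample so playback lasts 1/factor times the original duration,
    lowering all frequencies by `factor` (a crude linear-interpolation
    stand-in for the 10% slow-down applied before the CymaScope)."""
    n_out = int(len(samples) / factor)
    out = []
    for i in range(n_out):
        pos = i * factor            # fractional read position in the input
        j = int(pos)
        frac = pos - j
        a = samples[min(j, len(samples) - 1)]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + frac * (b - a))
    return out

stretched = slow_down([0.0, 1.0, 0.0, -1.0] * 25)  # 100 samples in
print(len(stretched))  # -> 111: about 11% longer, pitch about 10% lower
```

Note such resampling shifts every spectral component, which is one reason the slowed, equalized waveform looks so different from the original.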
We note that the waveform of the equalized and slowed signal appears substantially different from the original recording, further complicating the matching of the sounds to the image. Therefore, the time of occurrence and the signals forming the images are only approximately known at this time. We are replicating our results and will have more accurate data in the future.
Object echolocation results
The Figures that follow show our main results. We display results for echolocations on a cross (Figure 11), a cube (Figure 12), and a flowerpot (Figure 13). Each figure shows a schematic of the dolphin, object, and hydrophone positions, followed by the CymaGlyph image recorded for each object as well as a 3-D print of the image.
• Figure14showsahumansubjectunderwater.
• Figure15showsanFFTofsoundsrecordedfortwoclicksnear
where the human image formed.
• Figure16showsleandrighttracksrecordednearthetimeof
image formation. Note that the tracks are quite dierent and
indicates that sound cancellation with multiple sound sources
may be involved.
• Figure17isaportionofthesignaldrivingtheCymaScopenear
the time of image formation.
• Figure 18’s Le frame shows the CymaScope recording just
before the formation of an image of the human subject; while
the Center frame shows the human subject’s raw image, and
the Right frame shows the image with selected areas’ contrast
Figure 10: CymaScope.
Figure 11: Cross: Left: CymaGlyph (Contrast enhanced), Right: 3-D Print of CymaGlyph.
Figure 12: Cube: Left: CymaGlyph (Contrast enhanced), Right: 3-D Print of CymaGlyph.
Figure 13: Flowerpot: Left: CymaGlyph, Center: CymaGlyph with dotted lines added, Right: 3-D Print of CymaGlyph.
Figure 14: Human subject echolocation; Left: Human subject approx. 6 ft. from dolphin, Right: Close up.
Figure 15: Echolocation of a Human subject, from 0.1 to 0.15 seconds, Frequencies 200 to 5000 Hz, 512 point FFT.
Figure 16: Right (top) and Left (bottom) tracks of human subject echolocation: Time 19.5 to 20.60 seconds. Note the extreme differences between the left and right tracks, implying signal cancellation using multiple sound sources.
enhanced to emphasize the human body shape. Overall contrast was first increased; then further contrast increases were made by selecting areas and increasing their contrast.
• Figure19showsananalysisofthefeaturesofthehumanbody
• Figure20isa3Dprintofthishumanimage.
Cross: A cross-shaped object made of PVC pipe was echolocated
by a dolphin at a range of 1-2 feet and recorded with a matched pair
of hydrophones with a sample rate of 192 kHz with 16-bit precision.
Cube: A cube-shaped object made from blue plastic closed-cell
foam, was echolocated by a dolphin at a range of 1-2 feet and recorded
with a matched pair of hydrophones with a sample rate of 192 kHz with
16-bit precision.
Flowerpot: A flowerpot-shaped object made of plastic was echolocated by a dolphin at a range of 1-2 feet and recorded with a matched pair of hydrophones with a sample rate of 192 kHz with 16-bit precision.
Finding echoes
Adult human body part dimensions (Table 1) range from 10-180 cm, with corresponding scattering frequencies, in seawater, from 833 Hz to 15 kHz. Note that the finer features are above 5000 Hz, so for the CymaScope to see such detail we assume that a mechanism exists in which these details are coded in time. Table 1 indicates the frequencies of echoes expected for different body parts, as calculated by John Kroeker.
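The quoted scattering frequencies follow from matching the acoustic wavelength to the feature size, f = c/d, with the speed of sound in seawater taken as a nominal 1500 m/s; a minimal check of the two endpoints:

```python
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s, a typical nominal value

def scattering_freq_hz(feature_size_m):
    """Frequency whose wavelength in seawater equals the feature size:
    f = c / d. Larger body parts scatter lower frequencies."""
    return SPEED_OF_SOUND_SEAWATER / feature_size_m

print(round(scattering_freq_hz(1.80)))  # -> 833 Hz for a 180 cm body length
print(round(scattering_freq_hz(0.10)))  # -> 15000 Hz for a 10 cm feature
```

This reproduces the 833 Hz to 15 kHz range stated for the 10-180 cm dimensions in Table 1.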
We report the discovery that, by utilizing dolphin echolocation sounds to vibrate a water-containing cell in a CymaScope instrument, the shapes of submerged objects being echolocated by a dolphin appeared transiently as wave patterns in the cell. We found clear images of a PVC cross, a foam cube, a plastic flowerpot and a submerged male test subject.
We converted some of the images into 3-D printed objects where
thickness was proportional to brightness of the image at each point.
This both confirmed the initial results and offered new perspectives and
ways to haptically explore these shapes.
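The brightness-to-thickness mapping described above can be sketched as a simple linear rescaling. The function name and the minimum/maximum plate thicknesses below are illustrative assumptions, not the authors' actual values.

```python
def brightness_to_thickness(pixels, t_min=0.5, t_max=5.0, b_max=255):
    """Map 8-bit brightness values to model thickness (mm):
    brighter -> thicker, darker -> thinner, as described in the paper.
    t_min/t_max are illustrative plate thicknesses, not the authors' values."""
    scale = (t_max - t_min) / b_max
    return [[t_min + b * scale for b in row] for row in pixels]

# Tiny 3x3 example "image": 0 = dark, 255 = bright
image = [[0, 128, 255],
         [64, 192, 32],
         [255, 255, 0]]
heights = brightness_to_thickness(image)  # heights[0] rises from 0.5 to 5.0 mm
```

A height map of this kind can then be extruded into a mesh for printing, which is presumably how the bas-relief models were produced.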
Understanding the mechanism by which the transient images
occur will be the subject of future research. Detailed mathematical
understanding and modeling of these waves is complex and
mathematical techniques have yet to be developed that can model
Faraday Wave events beyond simple geometries. As far as we can
determine, this is the first time such a phenomenon has been observed,
namely, that dolphin echolocation sounds, imaged by a CymaScope,
result in transient wave patterns that are substantially the same as the
objects being echolocated.
The ripple patterns that had been previously observed with the
CymaScope had typically been highly symmetrical and reminiscent of
mandalas, varying with the frequency as well as harmonic structures
and amplitude of the input signal, whereas the images of objects formed
by echolocation are atypical.
Michael Faraday studied patterns that he termed “crispations” on
the surface of liquids, including alcohol, oil of turpentine, white of egg,
Figure 17: Echolocation track from 0 to 0.6 seconds – Right track, equalized. From “What the Dolphin Saw” Audio.
Figure 18: CymaScope recording of human subject. Left: 1/24 second frame before human image appears, Center: Raw image, Right: Enhanced image.
ink and water. These patterns included various geometric forms and
historically have become known as Faraday waves.
“Parametrically driven surface waves (also known as Faraday
waves) appear on the free surface of a fluid layer which is periodically
vibrated in the direction normal to the surface at rest. Above a certain
critical value of the driving amplitude, the planar surface becomes
unstable to a pattern of standing waves. If the viscosity of the fluid is
large, the bifurcating wave pattern consists of parallel stripes. At lower
viscosity, patterns of square symmetry are observed in the capillary
regime (large frequencies). At lower frequencies (the mixed gravity-
capillary regime), hexagonal, eight-fold, and ten-fold patterns have
been observed. These patterns have been simulated with the application
of complex hydrodynamic equations. These equations are highly non-
linear and to achieve a match with observed patterns one needs high
order damping factors and fine adjustment of various variables in the
models” [43].
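The threshold behaviour described in this quotation can be illustrated, far short of full hydrodynamic simulation, with the damped Mathieu equation, a standard single-mode toy model of parametric (Faraday) instability: below a critical driving amplitude the surface mode decays, above it the mode grows. All parameter values below are illustrative.

```python
import math

def mathieu_amplitude(eps, gamma=0.1, w0=2*math.pi, t_end=40.0, dt=1e-3):
    """Integrate x'' + 2*gamma*x' + w0^2*(1 + eps*cos(2*w0*t))*x = 0
    with RK4; return the peak |x| over the last quarter of the run.
    Driving at 2*w0 is the subharmonic (Faraday) resonance."""
    def f(t, x, v):
        return v, -2*gamma*v - w0*w0*(1 + eps*math.cos(2*w0*t))*x

    x, v, t = 1e-3, 0.0, 0.0
    peak, n = 0.0, int(t_end/dt)
    for i in range(n):
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t+dt/2, x+dt/2*k1x, v+dt/2*k1v)
        k3x, k3v = f(t+dt/2, x+dt/2*k2x, v+dt/2*k2v)
        k4x, k4v = f(t+dt, x+dt*k3x, v+dt*k3v)
        x += dt/6*(k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6*(k1v + 2*k2v + 2*k3v + k4v)
        t += dt
        if i > 3*n//4:
            peak = max(peak, abs(x))
    return peak

# The linear threshold is roughly eps_c = 4*gamma/w0 (about 0.064 here):
quiet = mathieu_amplitude(eps=0.01)     # below threshold: mode decays
unstable = mathieu_amplitude(eps=0.30)  # above threshold: mode grows
```

This single-mode sketch reproduces only the existence of a critical amplitude; the stripe/square/hexagon pattern selection the quotation describes requires the full nonlinear hydrodynamic models cited in [43].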
It is most surprising that features such as straight lines, for example,
forming the bas-relief image of a cube, or especially, the rough image
of a human body, form when the input signal to the CymaScope is a
recording of dolphin echolocations on these objects. The formation
of patterns as complex as the image of an object, such as we report
here, is well beyond the capabilities of current Faraday wave models.
Therefore, we have yet to determine exactly how these images form. In
one sense we consider the CymaScope a type of analog computer that
embodies in its characteristics the complex math required for image
formation.
We are interested in understanding how spatial information
arises from a one-dimensional time series since, in these experiments,
the input to the CymaScope was a series of amplitudes over time,
containing no apparent shape or spatial information. Perhaps complex
interactions of the vibrating water form patterns in response to the later
incoming signals, in which the phase of the signals plays an important
part. Related to the matter of spatial information is the degree of detail
that we obtained in the images, particularly that of the submerged
human test subject, in which his weight belt and other small features
are discernible, albeit at low resolution. The frequency response of the
CymaScope is from about 125 Hz to 5 kHz, with a peak response at 1840
Hz. Therefore, the formation of the fine details in the imagery must
be spread through time in some manner. A natural time base exists in
the visualizing cell in that it takes a finite amount of time for a ripple
to travel from the cell’s central axis to the circular boundary, which is
a function of the frequency of the injected signal. This hysteresis in the
water mass can be thought of as a type of memory, whereby existing
waves in the water interact with later signals. Perhaps the finer details
in the images are the result of complex interactions among these waves.
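The frequency-dependent crossing time mentioned above can be estimated from the deep-water capillary-gravity dispersion relation, w^2 = g*k + (sigma/rho)*k^3. This is a rough sketch under assumed conditions: pure-water constants, the deep-water limit, and an illustrative 2.5 cm cell radius (not a CymaScope specification).

```python
import math

G, SIGMA, RHO = 9.81, 0.0728, 1000.0  # gravity, surface tension, density (water)

def wavenumber(freq_hz):
    """Solve the deep-water capillary-gravity dispersion relation
    w^2 = g*k + (sigma/rho)*k^3 for k by bisection (LHS is monotone in k)."""
    w2 = (2*math.pi*freq_hz)**2
    lo, hi = 1e-6, 1e7
    for _ in range(200):
        mid = 0.5*(lo + hi)
        if G*mid + (SIGMA/RHO)*mid**3 < w2:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

def crossing_time(freq_hz, radius_m=0.025):
    """Time for a ripple at freq_hz to travel from the cell's axis to its
    boundary, using the phase speed. The 2.5 cm radius is an assumed
    illustrative cell size, not a CymaScope specification."""
    k = wavenumber(freq_hz)
    phase_speed = 2*math.pi*freq_hz / k
    return radius_m / phase_speed
```

Under these assumptions a 1 kHz ripple takes roughly 30 ms to cross the cell, and lower frequencies cross more slowly, consistent with the idea of a frequency-dependent time base in the water mass.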
We note that the image of the human body is flipped left to right, as
an apparent mirror image. Why this occurs has yet to be determined.
Also, what determines the orientation of the images in the instrument’s
visualizing cell, which is radially symmetric, has yet to be determined.
In the case of the cube and the human subject, the images are “right
side up,” which is curious and perhaps coincidental. We will investigate
this further and plan to conduct experiments with synthetic signals
to determine if images form when high frequency sound pulses
are reflected from objects in a laboratory setting and input to the
CymaScope.
We have yet to arrive at a hypothesis for the mechanism that
underpins this newly discovered phenomenon. An obvious question
that occurred to us is whether these results are a form of pareidolia, that
is, the tendency for people to see familiar shapes, such as faces, even in
random patterns such as clouds. However, based on the circumstances,
this cannot be the case. Each shape was found even when only one
file in several sent to Reid contained the echolocation shape data. In
addition, Reid had no knowledge of what shapes were potentially to be
found in the sound files. This is a strong argument against any form of
pareidolia, given that the crude outline of the flowerpot even shows a
faint outline of the hand that held it and, in the image of the male test
subject, his weight belt and even some of his fingers are evident.
Figure 19: Interpretation of human body image showing features such as the
fingers, elbow, and weight belt. The image areas have been emphasized.
Figure 20: 3-D print of the human subject from a CymaGlyph.
Human dimension         Size (cm)   Frequency (Hz)   Delta (ms)
head                        15          10200           0.098
arm width                   10          15300           0.065
leg width                   18           8500           0.118
torso width                 33           4636           0.216
leg spacing at knees        40           3825           0.261
arm length                  60           2550           0.392
leg length                  80           1913           0.523
torso length                60           2550           0.392
arm spacing                 90           1700           0.588
total height               180            833           1.176

Table 1: Human body feature sizes, expected echo frequencies, and echo delays.
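The tabulated values follow from treating each feature of size L as scattering most strongly at f = c/L, with an echo-time spread of dt = L/c. A nominal seawater sound speed of c = 1530 m/s (our assumption, not stated in the table) reproduces the entries almost exactly; the 180 cm row appears to have used c = 1500 m/s instead.

```python
C = 1530.0  # m/s, nominal seawater sound speed assumed to reproduce Table 1

def echo_figures(size_m, c=C):
    """Scattering frequency (Hz) and echo-time spread (ms) for a feature
    of size L: f = c / L, dt = L / c."""
    return c / size_m, size_m / c * 1000.0

features = {"head": 0.15, "arm width": 0.10, "leg width": 0.18,
            "torso width": 0.33, "total height": 1.80}

for name, size in features.items():
    freq, delta = echo_figures(size)
    print(f"{name:12s} {size*100:5.0f} cm  {freq:7.0f} Hz  {delta:6.3f} ms")
```

For example, the 15 cm head gives 1530/0.15 = 10200 Hz and 0.15/1530 = 0.098 ms, matching the first row of the table.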
Examining dolphin head movements may provide insights
into how dolphins aim their sounds and scan objects. We intend to
measure head position and movement in future studies, as well as
varying the dolphin’s range from the targets. Also, since dolphins can
make up to four and possibly five different sounds simultaneously,
we are considering making multi-hydrophone recordings to better
analyze the sounds. In examining the left and right stereo tracks of
the echolocation on the human test subject, for example, we found
considerable differences between the left and right tracks, suggesting
that the dolphins are employing sound cancellation techniques. For
example, click trains on the right channel may fail to appear on the left
(Figure 16).
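The left/right comparison described here can be sketched on synthetic data: compute windowed RMS per channel and flag windows where one channel carries far more energy than the other, as when a click train appears on one track only. All signal parameters and thresholds below are illustrative, not those of the actual recordings.

```python
import math, random

def window_rms(samples, win):
    """RMS of consecutive non-overlapping windows of length `win`."""
    out = []
    for i in range(0, len(samples) - win + 1, win):
        seg = samples[i:i+win]
        out.append(math.sqrt(sum(s*s for s in seg) / win))
    return out

def one_sided_clicks(left, right, win=96, ratio=5.0):
    """Indices of windows where one channel's RMS exceeds the other's
    by `ratio`, as when a click train appears on one track only."""
    lr, rr = window_rms(left, win), window_rms(right, win)
    return [i for i, (a, b) in enumerate(zip(lr, rr))
            if a > ratio*b or b > ratio*a]

# Synthetic stereo example: broadband noise floor on both channels, plus a
# short click present only on the right channel (window index 5 of 10).
random.seed(0)
n, win = 960, 96
left  = [random.gauss(0, 0.01) for _ in range(n)]
right = [random.gauss(0, 0.01) for _ in range(n)]
for j in range(win//4):
    right[5*win + j] += math.sin(2*math.pi*j/8)  # the one-sided click
flagged = one_sided_clicks(left, right, win)
```

On real 192 kHz stereo hydrophone data the same window-and-compare logic would apply per channel, with window length and ratio tuned to the click statistics.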
The implications of the present study for dolphin perception
are significant. It may be that dolphins and other Cetacea process
auditory information to create a perceived world analogous to human
visual perception. Dolphins possess large acoustic nerves that contain
approximately as many fibers as human optic nerves. Their auditory
cortex is larger than humans’ and their brain is larger. It is plausible
that dolphins experience an auditory perceptual world that is as rich
as our human visual experience. Just as humans enjoy a fully
integrated sensory world while running across a field, and as a bat is
finely coordinated while catching two fruit flies in half a second diving
through the branches of a tree, so too the dolphin likely integrates
sound, vision, touch, taste, and other senses into a complete gestalt.
We suggest that they are integrating sonar and acoustic data with
their visual system so that their sonar inputs are interpreted, in part, as
images. This concept is supported by extant studies. A corollary of this
concept is that dolphins may create sonic shapes with their sounds that
are projected to other dolphins as part of their communication. We
intend further studies to determine if such a communication system
exists.
Recent brain tract maps in the dolphin brain show possible neural
connections between cortical regions A1 and V1. is suggests both
auditory and visual areas of the brain participate in the interpretation
of auditory information [44]. Other studies show similar functioning
in humans and support the concept of similar cross-modality linking
in dolphins.
There are humans who can detect objects in their environment by
listening for reflected echoes. Tapping their canes, stomping their feet,
snapping their fingers, or making clicking noises provide the means to
create echoes. “It is similar in principle to active sonar and to animal
echolocation, which is employed by bats, dolphins and toothed whales
to find prey” [45]. While humans’ visual cortex lacks known inputs
from other sensory modalities, in blind subjects it is active during
auditory or tactile tasks.
“Braille readers blinded in early life showed activation of primary
and secondary visual cortical areas during tactile tasks, whereas
normal controls showed deactivation, thus in blind subjects, cortical
areas normally reserved for vision may be activated by other sensory
modalities” [46].
Blind subjects perceive complex, detailed, information about the
nature and arrangement of objects [47].
“Some blind people showed activity in brain areas that normally
process visual information in sighted people, primarily primary visual
cortex or V1… when the experiment was carried out with sighted
people who did not echolocate… there was no echo-related activity
anywhere in the brain” [48].
Further evidence of visual areas of the human brain being involved
with audition comes from functional MRI (fMRI) studies in blind
echolocating humans that show simultaneous activity in both acoustic
and visual areas of the brain [49,50] (Figure 21).
Pitch perception shows a visual component [51]. Patel reports
on the work of Satori, who used fMRI to image human volunteers
attempting to use pitch perception to identify paired short melodies.
Satori observed surprising neural activation outside the auditory
cortex. Relative pitch tasks showed increased activity in the
intraparietal sulcus, an area of visuospatial processing. In recent research
into spatiality in relative pitch perception in humans, fMRI
studies show visual areas of the brain being activated as part of the
perception of simple sound phrases that rise and decline in pitch.
From these and other studies, we conclude that since mammalian,
human, and dolphin brains are similar, and given that many humans
experience a visual component to their echolocation, it is more than
mere speculation that dolphins have developed similar perceptions that
combine the auditory and visual areas of their brain.
Dolphin echolocation sounds, recorded while echolocating various
objects and a human subject, were input to a CymaScope instrument.
We discovered that patterns arose that closely matched the shapes
of the objects and of the human subject being echolocated. These results
are beyond current Faraday wave modeling techniques, and further
research will be required to determine how such wave patterns occur.
Understanding this phenomenon offers the possibility of
improved sonar and sonar displays, and offers the potential for gaining
further insights into the process by which dolphins echolocate and
communicate. We intend to replicate and extend our experiments to
clarify issues arising from this discovery.
We dedicate this paper to the Cetacea, dolphins and whales and, to
our colleagues in this work and who continue to inspire us: Dr. John C.
Lilly, M.D., Dr. Henry (Hank) M. Truby, Ph.D., Star Newland, Carol
Ely, Dr. Jesse White, DVM and Lewis Brewer.
We acknowledge our Science Advisory Board: V. S. Ramachandran M.D.,
Echolocation Expert Control Participant
Figure 21: Human Echolocation showing visual components.
Ph.D., Suzanne Thigpen, M.D., John Kroeker, Ph.D., Mark Owens, Ph.D., Charles
Crawford, D.V.M., Jason Chatfield, D.V.M. and Elizabeth A. Rauscher, Ph.D.
We thank William H. Lange for his assistance, 3D Systems for their support of
3D printing our objects and Michael Watchulonis for his work to document this
discovery, and Annaliese Reid for her work in editing this paper. The work of Hans
Jenny is the underpinning of our discovery and first led me to the idea of making
dolphin sounds visible.
My profound appreciation goes out to Dolphin Discovery, Mexico and all their
staff, managers, and veterinarians who fully supported our research objectives and
always displayed amazing patience. And of course, I must thank the dolphins-Zeus
and Amaya.
All the interns were remarkable for their hard work
and strong belief in Donna’s and my dreams of dolphin communication-Elisabeth
Dowell, Regina Lobo, Brandon Cassel, Roger Brown, Donna Carey, Jillian
Rutledge, and Jim McDonough.
We must always remember Robert Lingenfelser, who nudged me along and
encouraged me to continue this Cymatic investigation.
Additionally, a special thank you to Art by God in Miami and Gene Harris for
allowing us to photograph their dolphin fossil used for our asymmetry analysis.
This research was conducted with a grant from Global Heart, Inc. and Speak
Dolphin. Jack Kassewitz.
1. Trone M, Glotin H, Balestriero R, Bonnett DE (2016) Enhanced feature
extraction using the Morlet transform on 1 MHz recordings reveals the complex
nature of Amazon River dolphin (Inia geoffrensis) clicks, Presentations,
Jacksonville, FL, Acoustical Society of America. Acoust Soc Am 138: 1904-
2. Jack Kassewitz et al., Use of Air-Based Echolocation by A Bottlenose Dolphin
(Tursiops truncatus).
3. Au, Whitlow WL, Fay, Richard R (2000) (Ed.), Hearing by Whales and Dolphins,
Springer Handbook of Auditory Research, Summary, Springer Verlag p: 403.
4. Pack AA, Herman LM (1995) Sensory integration in the bottlenosed dolphin:
immediate recognition of complex shapes across the senses of echolocation
and vision. J Acoust Soc Am 98: 722-733.
5. Herman LM, Pack AA, Hoffmann KM (1998) Seeing through sound: dolphins
(Tursiops truncatus) perceive the spatial structure of objects through
echolocation. J Comp Psychol 112: 292-305.
6. Winthrop K, Porpoises, Sonar (1965) University of Chicago Press; Third
Printing edition.
7. Au WW, Benoit BKJ, Kastelein RA (2007) Modeling the detection range of fish
by echolocating bottlenose dolphins and harbor porpoises. J Acoust Soc Am
121: 3954-3962.
8. Cranford TW, Elsberry WR, Van Bonn WG, Jeffress JA, Chaplin MS, et al.
(2011) Ridgway, Observation and analysis of sonar signal generation in the
bottlenose dolphin (Tursiops truncatus): Evidence for two sonar sources. J of
Exp Marine Bio and Eco p: 407.
9. Truby HM (1975) Personal communication with Michael Hyson.
10. Fossil Freedom (2016) Solar collector reector geometry.
11. Colina MJA, Lopez VAF, Machuca MF (2010) Modeling of Direct Solar
Radiation in a Compound Parabolic Collector (cPC) with the Ray Tracing
Technique, Dyna rev fac nac minas 77: 163.
12. A single parabola forms a beam where the power is greatest at the center and
falls off at the periphery. A compound parabola insures that the power density
of the beam will have equal power across the output aperture.
13. Zahorodny ZP (2007) Ontogeny and Organization of Acoustic Lipids In Mandible
Fats of the Bottlenose Dolphin (Tursiops truncatus), Thesis Department of
Biology and Marine Biology, University of North Carolina, Wilmington.
14. Whitlow WL Au (2011) The Sonar of Dolphins, Springer pp: 91-94.
15. Rauch A (1983) The Behavior of Captive Bottlenose Dolphins (Tursiops
truncatus), Southern Illinois University at Carbondale.
16. Cranford TW, Amundin M, Norris KS (1996) Functional morphology and
homology in the odontocete nasal complex: implications for sound generation.
J Morphol 228: 223-285.
17. Jack K, Donna K, Stacey A (2015) Cetacean Skull Comparisons: Osteological
Research, Kindle Edition, Open Sci Pub.
18. Koopman HN, Zahorodny ZP (2008) Life history constrains biochemical
development in the highly specialized odontocete echolocation system. Proc
Biol Sci 275: 2327-2334.
19. Hyson MT (2008) Star Newland, Dolphins, Therapy and Autism, Sirius Institute,
Puna, HI.
20. Fleischer G (1978) Evolutionary Principles of the Mammalian Middle Ear,
Advances in Anatomy, Embryology and Cell Biology, Springer-Verlag, Berlin
and Heidelberg GmbH & Co. K.
21. Tubelli AA, Zosuls A, Ketten DR, Mountain DC (2014) Elastic modulus of
cetacean auditory ossicles. Anat Rec (Hoboken) 297: 892-900.
22. Darlene FK (1997) Structure and Function in Whale Ears, Bioacoustics 8: 103-
23. Fleischer G (1973) personal communication to Michael Hyson.
24. Art By God 60 NE 27th St, Miami, FL 33157, 2016.
25. von Bekesy G (1961) Nobel Lecture.
26. Bell A (2004) Hearing: travelling wave or resonance? PLoS Biol 2: e337.
27. James AB (2005) The Underwater Piano: A Resonance Theory of Cochlear
Mechanics, Thesis, The Australian National University.
28. Bell A (2010) The cochlea as a graded bank of independent, simultaneously
excited resonators: Calculated properties of an apparent “travelling wave”.
Proceedings of the 20th International Congress on Acoustics, ICA 2010, Sydney,
Australia p: 2327.
29. Andrew B (2001) The cochlear amplifier is a surface acoustic wave resonator,
PO Box A348, Australian National University, Canberra, ACT 2601, Australia.
30. John C. Lilly, (2015) The Mind of the Dolphin: A Non-Human Intelligence,
Consciousness Classics, Gateway Books and Tapes, Nevada City, CA p: 82.
31. Gordon GG, James RA, Daniel JS (2002) The mirror test, In: Marc Bekoff,
Colin Allen & Gordon M. Burghardt (eds.), The Cognitive Animal: Empirical and
Theoretical Perspectives on Animal Cognition, MIT Press.
32. Delfoura FKM (2001) Mirror image processing in three marine mammal species:
killer whales (Orcinus orca), false killer whales (Pseudorca crassidens) and
California sea lions (Zalophus californianus), Behavioural Processes pp: 181-
33. Plotnik JM, de Waal FB, Reiss D (2006) Self-recognition in an Asian elephant.
Proc Natl Acad Sci U S A 103: 17053-17057.
34. Prior H, Schwarz A, Güntürkün O (2008) Mirror-induced behavior in the magpie
(Pica pica): evidence of self-recognition. PLoS Biol 6: e202.
35. Batteau DW, Markey P (1966) Man/Dolphin Communication, Listening Inc., 6
Garden St., Arlington, MA, Final Report 15 December 1966 - 13 December
1967, Prepared for U.S. Naval Ordinance Test Station, China Lake, CA.
36. Kenneth WL, Dolphin Mental Abilities Paper, The Experiment with Puka and
Maui. Op. cit, John CL, The Mind of the Dolphin.
37. Herman LM, Richards DG, Wolz JP (1984) Comprehension of sentences by
bottlenosed dolphins. Cognition 16: 129-219.
38. Vladimir IM, Vera MO, Organisation of Communication System In Tursiops
truncatus Montague, AN Severtsov Institute of Evolutionary Morphology and
Ecology of Animals, USSR Academy of Sciences, 33 Leninsky Prospect,
Moscow 117071, USSR. From Sensory Abilities of Cetaceans: Laboratory
and Field Evidence, Edited by Jeanette A. Thomas and Ronald Kastelein
(Harderwijck Dolnarium), NATO ASI Series, Series A: Life sciences p: 196.
39. Cranford (2011) Op. cit., Ted W.
40. David MW (1998) Powers, “Applications and explanations of Zipf’s law”.
Association for Computational Linguistics pp: 151-160.
41. Ken Ito (1979) Personal communication to Michael Hyson: “They have a
language.” Yamaha Motors Corporation, Anaheim, CA, in reference to his work
with the US Navy on dolphin linguistics.
42. Baxter Healthcare (2016) Sterile Water f / Irrigation, USP.
43. Peilong C, Jorge V (1997) Amplitude equations and pattern selection in
Faraday waves, Phys Rev Lett 79: 2670.
44. Berns GS, Cook PF, Foxley S, Jbabdi S, Miller KL, et al. (2015) Diffusion tensor
imaging of dolphin brains reveals direct auditory pathway to temporal lobe.
Proc Biol Sci 282.
45. Grifn, Donald R (1959) Echoes of Bats and Men, Anchor Press.
46. Sadato N, Pascual LA, Grafman J, Ibañez V, Deiber MP, et al. (1996) Activation
of the primary visual cortex by Braille reading in blind subjects. Nature 380:
47. Rosenblum LD, Gordon MS, Jarquin L (2000) Echolocating distance by moving
and stationary listeners, Ecol. Psychol 12: 181-206.
48. Cotzin M, Dallenbach KM (1950) “Facial vision:” the role of pitch and loudness
in the perception of obstacles by the blind. Am J Psychol 63: 485-515.
49. Emily C (2011) Blind people echolocate with visual part of brain, CBC News.
50. Thistle A, Brain Image of Blind Echolocator, Western University, Ontario.
51. Aniruddh DP (2014) Echolocation Acts as Substitute Sense for Blind People,
Neuroscience News. Patel AD, Music, Language and the Brain, Oxford
University Press, Oxford.
... Anatomical research of the organization of the toothed whale brain has indicated that primary auditory and visual areas may have neural connections and both modalities are involved in the perception of auditory information (Kassewitz & Hyson, 2016). Toothed whales have well developed inferior olives, a structure that receives inputs from a variety of sensory modalities and contributes to multisensory integration (Oelschläger, Ridgway, & Knauth, 2010). ...
There is limited research discussing comparative multisensory integration in echolocating mammals. Multisensory integration is the combination of redundant stimulus information from multiple sensory modalities. Through multisensory integration, sensory information from multiple modalities can influence an animal or human’s behaviour. The current paper reviews the experimental findings of studies in bats, toothed whales, and humans that test or indirectly suggest multisensory integration of echolocation, vision, and equilibrioception. We focus on these sensory modalities because they dominant the current literature on multisensory integration with echolocation. Experimental evidence to date strongly supports the importance of multisensory integration of echolocation with vision and/or equilibrioception for processes such as navigation, object recognition, and social communication. Additionally, we discuss opportunities for further research and the importance of comparative approaches for directing studies in humans.
... sons supérieurs à 20 kHz) pour l'écholocation (e.g. Hemilä et al. 2001 ;Ketten 2004 ;Popov & Supin 2007 ;Kassewitz et al. 2016 ;Ladegaard et al. 2019). Ainsi, par rapport à leurs cousins terrestres, les néocètes actuels présentent de profondes modifications de leurs organes de l'audition. ...
La mise en évidence par la biologie moléculaire et par les données paléontologiques de l'appartenance des cétacés au groupe des artiodactyles constitue une des avancées majeures de ces 30 dernières années en mammalogie. Il n'y a cependant pas à l'heure actuelle de consensus quant aux relations phylogénétiques basales des artiodactyles fondées sur des caractères morphologiques et l'histoire évolutive du groupe est de fait, ponctuée de nombreux points d'interrogation. Cette thèse explore une source de caractères phylogénétiques prometteuse : la région auditive (os pétreux, bulle auditive, osselets de l'oreille moyenne, oreille interne) à partir notamment des nouvelles perspectives offertes par l'imagerie µCT Scan. Les principaux objectifs de cette thèse sont (1) de déterminer le signal phylogénétique porté par la région auditive chez les artiodactyles afin d’apporter une nouvelle source de caractères aux analyses et (2) d’explorer le signal écologique porté par les différents éléments de cette région sensorielle dédiée à l’audition (oreille externe, moyenne et canal cochléaire du labyrinthe osseux) et à l’équilibrioception (vestibule et canaux semi-circulaires du labyrinthe osseux). La première partie de cette thèse (I) nous emmène au Togo, où de nombreux restes inédits de la région auditive de « baleines à pattes » (Protocetidae Stromer, 1908) ont été récoltés. D’un point de vu anatomique, ces restes fossiles ont permis de documenter et de décrire pour la première fois le stapes, l’incus et le labyrinthe osseux d’un protocète ; des éléments indispensables pour comprendre leur audition. L’analyse morpho-fonctionnelle indique qu’une audition optimale était probablement possible dans l’air et dans l’eau pour ces cétacés semi-aquatiques. De plus, la morphologie de leur cochlée indique que leur capacité auditive était proche de celle de leurs cousins terrestres et que les spécialisations relatives aux capacités auditives remarquables des cétacés modernes (i.e. 
sensibilité aux infra- ou ultrasons) se sont opérées après la séparation historique entre les mysticètes et les odontocètes.La deuxième partie de ce travail (II) se concentre sur les origines de l’amphibiose au sein des Cetancodonta, à travers l’étude de plusieurs familles fossiles, connues pour leurs liens étroits au milieu aquatique. L’étude de la région auditive des hippopotamoïdes (Anthracotheriidae + Hippopotamidae), révèle que l’adaptation à un mode de vie semi-aquatique est apparue plusieurs fois, de façon convergente, dans son histoire évolutive et semble d’ailleurs indiquer une origine terrestre pour ce groupe. Quant au raoellidé Indohyus, son complexe pétro-tympanique présente une combinaison de caractères suggérant un certain degré d’adaptation au milieu aquatique, mais l’étude fonctionnelle de sa cochlée indique que ce taxon ne pouvait très probablement pas entendre de façon efficace sous l’eau. Pour finir, le dernier point de cette thèse explore également le potentiel phylogénétique de la région auditive à travers une analyse construite sur des caractères morphologiques du pétreux et du labyrinthe osseux à l’échelle des artiodactyles. Pour la première fois, les résultats de notre analyse concordent avec ceux des analyses moléculaires. Parmi les points les plus notables, le clade des Cetancodonta est bien soutenu par la morphologie du pétreux et la position d’Indohyus suggère fortement que les raoellidés sont des cétacés.Ainsi, la région auditive s’avère être un élément essentiel d’un point de vu phylogénétique et morphofonctionnel. En effet, comme nous avons pu le voir tout au long de cette thèse, lorsque la nature complexe et variée de la région auditive est appréhendée dans son ensemble, elle permet d’inférer l’écologie d’un taxon donné et d’en apprendre davantage sur ses relations de parenté. Par conséquent, la région auditive est encore loin d’avoir dit ses derniers mots... et nous n’avons pas encore fini d’en entendre parler.
... It is unknown if dolphins process biosonar echoes similar to the ones of Daredevil, although researchers have attempted, in a rather bizarre experiment, to recreate what dolphins could "see" using echolocation (Kassewitz et al., 2016). All fiction aside, their echolocation abilities are remarkable and, using the words of Erulkar, "those animals that use echolocation for their survival and existence represent the epitome of adaptation for sound localization" (Erulkar, 1972). ...
The spatial accuracy of source localization by dolphins has been observed to be equally accurate independent of source azimuth and elevation. This ability is counter-intuitive if one considers that humans and other species have presumably evolved pinnae to help determine the elevation of sound sources, while cetaceans have actually lost them. In this work, 3D numerical simulations are carried out to determine the influence of bone-conducted waves in the skull of a short-beaked common dolphin on sound pressure in the vicinity of the ears. The skull is not found to induce any salient spectral notches, as pinnae do in humans, that the animal could use to differentiate source elevations in the median plane. Experiments are conducted in a water tank by deploying sound sources on the horizontal and median plane around a skull of a dolphin and measuring bone-conducted waves in the mandible. Their full waveforms, and especially the coda, can be used to determine source elevation via a correlation-based source localization algorithm. While further experimental work is needed to substantiate this speculation, the results suggest that the auditory system of dolphins might be able to localize sound sources by analyzing the coda of biosonar echoes. 2D numerical simulations show that this algorithm benefits from the interaction of bone-conducted sound in a dolphin's mandible with the surrounding fats.
Full-text available
Studies of animal bioacoustics, traditionally relying on non-human primate and songbird models, converge on the idea that social life is the main driving force behind the evolution of complex communication. Comparison with cetaceans is also particularly interesting from an evolutionary point of view: they are mammals forming complex social bonds, with capacities for acoustic plasticity, but they had to adapt to marine life, making habitat another determining selective force. Their natural habitat constrains sound production, usage and perception and, in the same way, constrains ethological observation, making studies of captive cetaceans an important source of knowledge on these animals. Beyond the analysis of acoustic structures, the study of the social contexts in which the different vocalizations are used is essential to understanding vocal communication. Compared with primates and birds, the social function of dolphins' acoustic signals remains poorly understood. Moreover, the way cetaceans' vocal apparatus and auditory system adapted morpho-anatomically to underwater life is unique in the animal kingdom, yet their ability to perceive sounds produced in air remains controversial owing to the lack of experimental demonstration. The objectives of this thesis were, on the one hand, to explore the spontaneous contextual usage of acoustic signals in a captive group of bottlenose dolphins and, on the other, to test experimentally their underwater and aerial auditory perception. Our first observational study describes the daily life of our dolphins in captivity and shows that vocal signalling reflects, at a large scale, the temporal distribution of social and non-social activities in a facility under human control. Our second observational study focuses on the immediate context of emission of the three main acoustic categories previously identified in the dolphins' vocal repertoire, i.e. whistles, burst-pulses and click trains. We found preferential associations between each vocal category and specific types of social interaction, and identified context-dependent patterns of sound combination. Our third study experimentally tested, under standardized conditions, the response of dolphins to human-made individual sound labels broadcast under and above water. We found that dolphins were able to recognize and react only to their own label, even when it was broadcast in air. Apart from confirming aerial hearing, these findings are in line with studies suggesting that dolphins possess a concept of identity. Overall, the results obtained during this thesis suggest that some social signals in the dolphin repertoire can be used to communicate specific information about the behavioural contexts of the individuals involved, and that individuals are able to generalize their concept of identity to human-generated signals.
The brains of odontocetes (toothed whales) look grossly different from those of their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocete auditory system is both unique and crucial to their survival. Yet scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by expansion of 'associative' regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical spotted dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory (A1) cortex, we found distinct regions of the thalamus connected to V1 and A1. But in addition to suprasylvian-A1, we report here, for the first time, that auditory cortex also exists in the temporal lobe, in a region near cetacean-A2 and possibly analogous to the primary auditory cortex in related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens for answering basic questions in comparative neurobiology in a way that has not previously been possible, and they show a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species.
It has long been known that human listeners can echolocate a sound-reflecting surface as they walk toward it. There is also evidence that stationary listeners can determine the location, shape, and material of nearby surfaces from reflected sound. This research tested whether there is an advantage of listener movement for echolocating as has been found for localization of emitted sounds. Blindfolded participants were asked to echolocate a 3 x 6 ft wall while either moving or remaining stationary. After echolocating, the wall was removed, and participants were asked to walk to where the wall had been. Results showed that participants were somewhat more accurate with moving than stationary echolocation for some distances. A follow-up experiment confirmed that this moving advantage was not a function of a specific type of training or the multiple stationary positions available during moving echolocation. This subtle moving advantage might be a function of echoic time-to-arrival information.
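The "echoic time-to-arrival" cue invoked here is simply the round-trip travel time of sound to the reflecting surface and back, which shrinks systematically as a walker approaches. A minimal sketch of the geometry (assuming airborne sound at roughly 20 °C; the distances are illustrative only):

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s in air at ~20 degrees C

def echo_delay(distance_m, c=SPEED_OF_SOUND_AIR):
    """Round-trip time (s) for sound to reach a reflecting surface and return."""
    return 2.0 * distance_m / c

for d in (1.0, 2.0, 4.0):
    print(f"wall at {d} m -> echo delay {echo_delay(d) * 1000:.2f} ms")
```

Halving the distance halves the delay, so a moving listener samples a changing delay profile that a stationary listener does not, which is one candidate explanation for the moving advantage reported above.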
Most of us are aware of dolphins as loveable, playful animals that appear in oceanarium shows, on television and in movies. They are subjects of naval and other research into how they use sound to communicate, navigate their environment, employ sonar and stun fish. They are among the most intelligent of all creatures. Throughout history dolphins have helped people: carrying children to school in ancient Greece, fishing with us, guiding ships, saving people from drowning, and, more recently, escorting Elián González when he was adrift at sea on an inner tube between Cuba and the United States.
The sonar of dolphins has undergone evolutionary refinement for millions of years and has become the premier sonar system for short-range applications. It far surpasses the capability of technological sonar; for example, the only sonar system the US Navy has for detecting buried mines is the dolphin. Echolocation experiments with captive animals have revealed many of the basic parameters of dolphin sonar. Features such as signal characteristics, transmission and reception beam patterns, hearing and internal filtering properties will be discussed. Sonar detection range and discrimination capabilities will also be included. Recent measurements of echolocation signals used by wild dolphins have expanded our understanding of their sonar system and its use in the field. A capability to perform time-varying gain has recently been uncovered that is very different from that of technological sonar. A model of killer whales foraging on Chinook salmon will be examined to gain an understanding of the effectiveness of the sonar system in nature. The model examines foraging in both quiet and noisy environments and shows that echo levels are more than sufficient for prey detection at relatively long ranges.
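The detection-range reasoning can be made concrete with the standard active sonar equation, EL = SL − 2·TL + TS, where one-way transmission loss is TL = 20·log10(r) + α·r (spherical spreading plus absorption). The sketch below is a generic textbook calculation, not the foraging model from the abstract; the source level, target strength, and absorption coefficient are illustrative placeholders:

```python
import math

def transmission_loss(r_m, alpha_db_per_m=0.03):
    """One-way transmission loss (dB): spherical spreading plus absorption.
    alpha ~0.03 dB/m is an illustrative value for ~100 kHz sound in seawater."""
    return 20.0 * math.log10(r_m) + alpha_db_per_m * r_m

def echo_level(source_level_db, target_strength_db, r_m):
    """Active sonar equation: EL = SL - 2*TL + TS (dB re 1 uPa at 1 m)."""
    return source_level_db - 2.0 * transmission_loss(r_m) + target_strength_db

# Illustrative numbers: a 220 dB click and a -25 dB target (e.g. a large fish)
for r in (10, 50, 100):
    print(f"range {r} m -> echo level {echo_level(220.0, -25.0, r):.1f} dB")
```

Detection then depends on whether the echo level exceeds the hearing threshold set by ambient noise, which is where the quiet-versus-noisy comparison in the foraging model comes in.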
The Amazon River dolphin lives exclusively in freshwater throughout the Amazon River watershed, a dynamic and acoustically complex habitat. Although generally considered a relatively non-vocal species, recent evidence suggests that these animals are acoustically active, producing tremendous quantities of high-frequency pulsed signals. Moreover, these pulsed signals appear to be considerably more complex than previously believed. This study explored the high-frequency pulsed emanations produced by Amazon River dolphins in Peru. Audio recordings were made in August 2015 using a two-hydrophone array, one channel of which was sampled at 1 MHz. Digitized recordings were analyzed using the FFT and Morlet wavelets, after which unsupervised machine learning attempted to delineate click categories based upon inter-click intervals, the frequency bandwidth of each click, and the formants contained within each click. Although the Morlet transform is much more robust and accurate than the FFT at higher frequencies, its performance was not constant across all frequencies, so the two methods produced different click categories, and formant results above 230 kHz were most likely skewed. These results are the first to clearly demonstrate the heterogeneity of the high-frequency pulsed emanations of the Amazon River dolphin.
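The kind of Morlet-wavelet analysis described here can be sketched generically. Below, a hand-rolled complex Morlet wavelet (this is not the study's pipeline; the click parameters are invented for illustration) is convolved with a synthetic tone burst to recover its carrier frequency:

```python
import numpy as np

fs = 1_000_000                    # 1 MHz sampling, as in the high-rate channel
t = np.arange(0, 0.001, 1 / fs)
# Synthetic "click": a 150 kHz tone burst under a Gaussian envelope
click = np.exp(-((t - 0.0005) ** 2) / (2 * (2e-5) ** 2)) * np.cos(2 * np.pi * 150_000 * t)

def morlet_power(signal, fs, freq, n_cycles=7):
    """Instantaneous power of `signal` at `freq`, via convolution with a
    unit-energy complex Morlet wavelet spanning ~n_cycles cycles."""
    sigma_t = n_cycles / (2 * np.pi * freq)            # wavelet time spread
    tw = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * tw) * np.exp(-tw**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # unit-energy normalisation
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

freqs = np.array([50_000, 100_000, 150_000, 200_000])
peak_power = np.array([morlet_power(click, fs, f).max() for f in freqs])
print(freqs[np.argmax(peak_power)])  # the scan peaks at the 150 kHz carrier
```

Unlike a single FFT over a long window, the wavelet's effective window narrows as frequency rises, which is one reason the two methods can bin the same clicks into different categories, as the abstract reports.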
In order to model the hearing capabilities of marine mammals (cetaceans), it is necessary to understand the mechanical properties, such as elastic modulus, of the middle ear bones in these species. Biologically realistic models can be used to investigate the biomechanics of hearing in cetaceans, much of which is currently unknown. In the present study, the elastic moduli of the auditory ossicles (malleus, incus, and stapes) of eight species of cetacean, two baleen whales (mysticetes) and six toothed whales (odontocetes), were measured using nanoindentation. The two groups of mysticete ossicles overall had lower average elastic moduli (35.2 ± 13.3 GPa and 31.6 ± 6.5 GPa) than the groups of odontocete ossicles (53.3 ± 7.2 GPa to 62.3 ± 4.7 GPa). Interior bone generally had a higher modulus than cortical bone, by up to 36%. The effects of freezing and formalin fixation on elastic modulus were also investigated, although samples were few and no clear trend could be discerned. The high elastic modulus of the ossicles and the differences in elastic moduli between mysticetes and odontocetes are likely specializations of the bone for underwater hearing. Anat Rec, 2014.