Figure 1 - uploaded by Susanne Seltmann
Duration and FM of tet and stack calls in 6 pairs (10 randomly selected measurements per animal). A. Examples of tets and stacks. Tets are clearly much more strongly frequency-modulated than stacks. Note that our wireless microphones show more power in the lower frequencies than external microphones because they record the near field. B. Tet calls had shorter durations than stacks (P = 1.90e–19). In females, duration was slightly longer than in males (P = 0.0025). C. Tets had higher FM-scores than stacks (P = 0.0001), whereas FM-scores were generally lowest in females (P = 0.0003). All tests: REML in JMP 10 with pair as a random factor. Pitch did not differ between tets and stacks (not shown). doi:10.1371/journal.pone.0109334.g001
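The FM-score in the caption quantifies how strongly a call's pitch changes over time. The paper's exact formula is not given in this excerpt; a minimal sketch, assuming the FM-score is the mean absolute frame-to-frame slope of the fundamental-frequency contour (all values below are illustrative), could look like:

```python
def fm_score(f0_contour_hz, frame_step_s=0.001):
    """Mean absolute frequency slope (Hz/s) across a pitch contour.

    NOTE: illustrative definition only; the paper's actual FM metric
    is not specified in this excerpt.
    """
    if len(f0_contour_hz) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(f0_contour_hz, f0_contour_hz[1:])]
    return sum(diffs) / (len(diffs) * frame_step_s)

# A frequency-modulated "tet"-like contour vs. a flat "stack"-like contour:
tet = [4000 + 800 * ((i % 10) - 5) / 5 for i in range(50)]  # zig-zag sweep
stack = [4000.0] * 50                                       # near-constant
print(fm_score(tet) > fm_score(stack))  # prints True: tets score higher
```

Any slope-based metric like this separates strongly modulated tets from nearly flat stacks, regardless of the absolute pitch of the call.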
Source publication
Unlearned calls are produced by all birds whereas learned songs are only found in three avian taxa, most notably in songbirds. The neural basis for song learning and production is formed by interconnected song nuclei: the song control system. In addition to song, zebra finches produce large numbers of soft, unlearned calls, among which "stack" call...
Citations
... These sequences of spikes are not fixed ensembles, as observed in partial out-of-order reproductions during sleep [21]. Even though the zebra finch specializes in a single song, it still produces other non-courtship vocalizations in adulthood using the same motor areas [22][23][24]. Related species such as budgerigars, Bengalese finches, and canaries exhibit more variable songs and even lifelong learning, suggesting that the neuronal substrate underlying the sequences may be more plastic than typically assumed [25][26][27]. ...
Repeating sequences of neural activity exist across diverse brain regions of different animals and are thought to underlie diverse computations. However, their emergence and evolution in the presence of ongoing synaptic plasticity remain poorly understood. To gain mechanistic insights into this process, we modeled how biologically-inspired rules of activity-dependent synaptic plasticity in recurrent circuits interact to produce connectivity structures that support sequential neuronal activity. Even under unstructured inputs, our recurrent networks developed strong unidirectional connections, resulting in spontaneous repeating spiking sequences. During ongoing plasticity these sequences repeated despite turnover of individual synaptic connections, a process reminiscent of synaptic drift. The turnover process occurred over different timescales, with certain connectivity types and motif structures leading to sequences with different volatility. Structured inputs could reinforce or retrain the resulting connectivity structures underlying sequences, enabling stable but still flexible encoding of inputs. Our model unveils the interplay between synaptic plasticity and sequential activity in recurrent networks, providing insights into how brains implement reliable but flexible computations.
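The abstract above describes how activity-dependent plasticity in recurrent circuits carves out strong unidirectional connections that support repeating spike sequences. As a heavily simplified caricature (the actual model uses spiking neurons under unstructured input; the repeating activation order and learning rate below are assumptions), a pair-based, asymmetric STDP-like update driven by repeated sequential activity produces a one-way synaptic chain:

```python
N, lr = 5, 0.1
w = [[0.0] * N for _ in range(N)]  # w[i][j]: synapse from neuron i to j

# Toy drive: the network repeatedly fires in the order 0 -> 1 -> 2 -> 3 -> 4.
# Asymmetric rule: potentiate pre-before-post, depress post-before-pre.
for _ in range(100):
    for t in range(N - 1):
        pre, post = t, t + 1
        w[pre][post] = min(1.0, w[pre][post] + lr)  # forward: potentiate
        w[post][pre] = max(0.0, w[post][pre] - lr)  # reverse: depress

forward = [w[i][i + 1] for i in range(N - 1)]
reverse = [w[i + 1][i] for i in range(N - 1)]
print(all(f > r for f, r in zip(forward, reverse)))  # prints True
```

The asymmetry of the update, not the initial weights, is what breaks the symmetry of the matrix: forward synapses saturate while reverse synapses are pruned, yielding a feed-forward chain embedded in the recurrent network.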
... Traditionally, animal-borne sensors are based on frequency modulation (FM) to transmit data [15], [16]. Although this approach can achieve minimal power consumption [17], there are well-known issues in terms of signal quality and interference when more than a single transmitter is in the same area [18]. ...
... Although this approach can achieve minimal power consumption [17], there are well-known issues in terms of signal quality and interference when more than a single transmitter is in the same area [18]. Moreover, FM transmitters allow only one-way communication from the sensor to the receiver, and there is no way to send information back to the sensor node on the bird [16]. ...
Animal vocalisations serve a wide range of vital functions. Although it is possible to record animal vocalisations with external microphones, more insights are gained from miniature sensors mounted directly on animals' backs. We present TinyBird-ML; a wearable sensor node weighing only 1.4 g for acquiring, processing, and wirelessly transmitting acoustic signals to a host system using Bluetooth Low Energy. TinyBird-ML embeds low-latency tiny machine learning algorithms for song syllable classification. To optimize battery lifetime of TinyBird-ML during fault-tolerant continuous recordings, we present an efficient firmware and hardware design. We make use of standard lossy compression schemes to reduce the amount of data sent over the Bluetooth antenna, which increases battery lifetime by 70% without negative impact on offline sound analysis. Furthermore, by not transmitting signals during silent periods, we further increase battery lifetime. One advantage of our sensor is that it allows for closed-loop experiments in the microsecond range by processing sounds directly on the device instead of streaming them to a computer. We demonstrate this capability by detecting and classifying song syllables with minimal latency and a syllable error rate of 7%, using a light-weight neural network that runs directly on the sensor node itself. Thanks to our power-saving hardware and software design, during continuous operation at a sampling rate of 16 kHz, the sensor node achieves a lifetime of 25 hours on a single size 13 zinc-air battery.
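The duty-cycling idea in the abstract above (not transmitting during silent periods) can be sketched as a simple energy-gated frame selector. The frame length and threshold below are hypothetical, not the sensor's actual parameters:

```python
import math

def frames_to_transmit(samples, frame_len=256, energy_threshold=1e-4):
    """Keep only frames whose mean energy exceeds a (hypothetical)
    threshold, so silent frames are never sent over the radio."""
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    kept = [f for f in frames
            if sum(x * x for x in f) / frame_len > energy_threshold]
    return kept, len(kept) / len(frames)

# Half a second of silence followed by half a second of a 1 kHz tone at 16 kHz:
fs = 16000
silence = [0.0] * (fs // 2)
tone = [0.1 * math.sin(2 * math.pi * 1000 * t / fs) for t in range(fs // 2)]
kept, fraction = frames_to_transmit(silence + tone)
print(round(fraction, 2))  # prints 0.5: only the voiced half is transmitted
```

Because radio transmission typically dominates the power budget of such a node, halving the transmitted frames in this scenario roughly halves the radio's energy draw; the actual savings depend on how often the animal vocalizes.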
... Their vocal repertoire has been assumed to be non-learned based on their simple vocal features, non-territorial and non-courtship vocalisations, and a syrinx that lacks the complex intrinsic muscles found in avian vocal learners (Stidolph 1950; Ames 1971; Sherley 1985; Moran 2021). Furthermore, New Zealand wrens do not produce broadcast territorial songs as most vocal-learning songbirds do; instead, they produce only calls, which are short functional units that have historically been regarded as innate rather than learned (Marler and Mundinger 1975; Sewall 2011; Ter Maat et al. 2014; Sewall et al. 2016). Although call learning is not as well documented as song learning, it has been demonstrated in a variety of species, including parrots (Medina-García et al. 2015; Wright and Dahlin 2017), songbirds (Mundinger 1970; Zann 1985, 1990), and more recently in musk ducks (Ten Cate and Fullagar 2021) and black-headed gulls (Ten Cate 2021) (although more tests are needed in the latter species). ...
... In general, limitations due to sight occlusions and sound superpositions can be overcome with animal-borne sensors such as accelerometers 22,23 , gyroscopes, microphones 24 , and global positioning systems (GPS) 23 . In combination with wireless transmitters 24 and loggers 22 , these sensors enable the detection of behaviors such as walking, grooming, eating, drinking, and flying, for example, in birds 25 , cats 26 , and dogs 27 , though often with low reliability due to noisy and ambiguous sensor signals 15 . ...
... In general, limitations due to sight occlusions and sound superpositions can be overcome with animal-borne sensors such as accelerometers 22,23 , gyroscopes, microphones 24 , and global positioning systems (GPS) 23 . In combination with wireless transmitters 24 and loggers 22 , these sensors enable the detection of behaviors such as walking, grooming, eating, drinking, and flying, for example, in birds 25 , cats 26 , and dogs 27 , though often with low reliability due to noisy and ambiguous sensor signals 15 . In general, animal-borne transmitter devices are designed to achieve high reliability, low weight, small size, and long battery life, giving rise to a complex trade-off. ...
... Among the best transmitters, in terms of battery life, size, and weight, are analog frequency-modulated (FM) radio transmitters. Their low power requirement minimizes the frequency of animal handling and associated handling stress, making them an excellent choice for longitudinal observations of small vertebrates 24,28,29 . ...
In longitudinal observations of animal groups, the goal is to identify individuals and to reliably detect their interactive behaviors, including their vocalizations. However, to reliably extract individual vocalizations from their mixtures and other environmental sounds remains a serious challenge. Promising approaches are multimodal systems that exploit signal redundancy and make use of animal-borne wireless sensors. In this vein, we designed a modular recording system (BirdPark) that yields synchronized data streams. We recorded groups of songbirds with multiple cameras and microphones and recorded their body vibrations with custom low-power frequency-modulated (FM) radio transmitters. We developed a custom software-defined radio receiver with a multi-antenna demodulation technique that increased the signal-to-noise ratio of the received radio signals by 6.5 dB and reduced the signal loss rate due to fading by a factor of 63 to only 0.01% of the recording time compared to single-antenna demodulation. Nevertheless, neither a single vibration sensor nor a single microphone is sufficient by itself to detect the complete vocal output of an individual. Even in the minimal setting of an animal pair, an average of about 3.7% of vocalizations remain undetected within each sensor modality. Our work emphasizes the need for high-quality recording systems and for multimodal analysis of social behavior.
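The 6.5 dB SNR gain reported above comes from the authors' multi-antenna demodulation technique, whose details are not given in this excerpt. As a much simpler illustration of why combining antennas helps at all, coherently averaging two independent, equal-SNR copies of a signal halves the noise power, a ~3 dB gain (the signal, noise level, and antenna count below are illustrative, not the paper's setup):

```python
import math
import random

random.seed(0)

n = 20000
signal = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
noise_sd = 0.5

def noisy_copy(sig):
    # Independent additive white Gaussian noise per antenna.
    return [s + random.gauss(0.0, noise_sd) for s in sig]

def snr_db(received, sig):
    p_sig = sum(s * s for s in sig) / len(sig)
    p_noise = sum((r - s) ** 2 for r, s in zip(received, sig)) / len(sig)
    return 10.0 * math.log10(p_sig / p_noise)

ant1, ant2 = noisy_copy(signal), noisy_copy(signal)
combined = [(a + b) / 2.0 for a, b in zip(ant1, ant2)]

gain_db = snr_db(combined, signal) - snr_db(ant1, signal)
print(f"combining gain: {gain_db:.1f} dB")  # close to the theoretical 3.01 dB
```

Real multi-antenna receivers can do better than this naive average, e.g. by weighting each antenna by its instantaneous channel quality, which also explains how fading-induced signal loss can be nearly eliminated when at least one antenna has a usable channel.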
... The downstream output regions have not been extensively studied in the avian system but could include the song control regions 43,44 ; neighboring output regions of the avian forebrain in the arcopallium 17 ; the ventromedial hypothalamic nucleus (VMHm), which is part of the social behavior network 45 ; or the lateral nidopallium, which has been implicated in higher level cognitive tasks and mate choice. 46,47 In our ensemble analyses, we combined auditory units recorded from the entire auditory pallium. ...
The categorization of animal vocalizations into distinct behaviorally relevant groups for communication is an essential operation that must be performed by the auditory system. This auditory object recognition is a difficult task that requires selectivity to the group identifying acoustic features and invariance to renditions within each group. We find that small ensembles of auditory neurons in the forebrain of a social songbird can code the bird's entire vocal repertoire (∼10 call types). Ensemble neural discrimination is not, however, correlated with single unit selectivity, but instead with how well the joint single unit tunings to characteristic spectro-temporal modulations span the acoustic subspace optimized for the discrimination of call types. Thus, akin to face recognition in the visual system, call type recognition in the auditory system is based on a sparse code representing a small number of high-level features and not on highly selective grandmother neurons.
... Coordinated duetting between the parents may function as negotiation over parental effort, at least during the egg-incubation phase (Boucaud et al., 2016, 2017). Compared to the widely studied courtship songs, little is known about short-distance calls (Ter Maat et al., 2014). The neural substrate responsible for these calls overlaps, at least partially, with the brain's song system (Gobes et al., 2009; Giret et al., 2015); however, the different social function implies the involvement of different regions and/or genes. ...
The current review is an update on experimental approaches in which birds serve as model species for the investigation of typical failure symptoms associated with autism spectrum disorder (ASD). The discussion is focused on deficiencies of social behavior, from social interactions of domestic chicks, based on visual and auditory cues, to vocal communication in songbirds. Two groups of pathogenetic/risk factors are discussed: 1) non-genetic (environmental/epigenetic) factors, exemplified by embryonic exposure to valproic acid (VPA), and 2) genetic factors, represented by a list of candidate genes and signaling pathways of diagnostic or predictive value in ASD patients. Given the similarities of birds as experimental models to humans (visual orientation, vocal learning, social cohesions), avian models usefully contribute toward the elucidation of the neural systems and developmental factors underlying ASD, improving the applicability of preclinical results obtained on laboratory rodents. Furthermore, they may predict potential susceptibility factors worthy of investigation (both by animal studies and by monitoring human babies at risk), with potential therapeutic consequence.
... In general, limitations due to sight occlusions and sound superpositions can be overcome with animal-borne sensors such as accelerometers 16,17 , gyroscopes, microphones 18 , and global positioning systems (GPS) 17 . In combination with wireless transmitters 18 and loggers 16 , these sensors enable the detection of behaviors such as walking, grooming, eating, drinking, and flying, for example, in birds 19 , cats 20 , and dogs 21 , though often with low reliability because of noisy and ambiguous sensor signals 12 . ...
... In general, limitations due to sight occlusions and sound superpositions can be overcome with animal-borne sensors such as accelerometers 16,17 , gyroscopes, microphones 18 , and global positioning systems (GPS) 17 . In combination with wireless transmitters 18 and loggers 16 , these sensors enable the detection of behaviors such as walking, grooming, eating, drinking, and flying, for example, in birds 19 , cats 20 , and dogs 21 , though often with low reliability because of noisy and ambiguous sensor signals 12 . In general, animal-borne transmitter devices are designed to achieve high reliability, low weight, small size, and long battery life, giving rise to a complex tradeoff. ...
... Among the best transmitters, in terms of battery life, size, and weight, are analog frequency-modulated (FM) radio transmitters. Their low power requirement minimizes animal handling frequency and associated handling stress, making them an excellent choice for longitudinal observations of small vertebrates 18,22,23 . ...
The implicit goal of longitudinal observations of animal groups is to identify individuals and to reliably detect their behaviors, including their vocalizations. Yet, to segment fast behaviors and to extract individual vocalizations from sound mixtures remain challenging problems. Promising approaches are multimodal systems that record behaviors with multiple cameras, microphones, and animal-borne wireless sensors. The instrumentation of these systems must be optimized for multimodal signal integration, which is an overlooked steppingstone to successful behavioral tracking.
We designed a modular system (BirdPark) for simultaneously recording small animals wearing custom low-power frequency-modulated radio transmitters. Our custom software-defined radio receiver makes use of a multi-antenna demodulation technique that eliminates data losses due to radio signal fading and that increases the signal-to-noise ratio of the received radio signals by 6.5 dB compared to best single-antenna approaches. Digital acquisition relies on a single clock, allowing us to exploit cross-modal redundancies for dissecting rapid behaviors on time scales well below the video frame period, which we demonstrate by reconstructing the wing stroke phases of free-flying songbirds. By separating the vocalizations among up to eight vocally interacting birds, our work paves the way for dissecting complex social behaviors.
... Consistent with the second hypothesis above is emerging work in songbirds and suboscines that suggests that nuclei within the song circuit can coordinate and modify acoustic features of unlearned vocalizations [15,61,62]. In suboscine birds, for example, an RA-like nucleus is essential for mediating precise motor commands for song output in reproductively active adults, as well as the motor refinement of vocalizations during a protracted song ontogeny period early in life [15]. ...
Vocal learning is thought to have evolved in 3 orders of birds (songbirds, parrots, and hummingbirds), with each showing similar brain regions that have comparable gene expression specializations relative to the surrounding forebrain motor circuitry. Here, we searched for signatures of these same gene expression specializations in previously uncharacterized brains of 7 assumed vocal non-learning bird lineages across the early branches of the avian family tree. Our findings using a conserved marker for the song system found little evidence of specializations in these taxa, except for woodpeckers. Instead, woodpeckers possessed forebrain regions that were anatomically similar to the pallial song nuclei of vocal learning birds. Field studies of free-living downy woodpeckers revealed that these brain nuclei showed increased expression of immediate early genes (IEGs) when males produce their iconic drum displays, the elaborate bill-hammering behavior that individuals use to compete for territories, much like birdsong. However, these specialized areas did not show increased IEG expression with vocalization or flight. We further confirmed that other woodpecker species contain these brain nuclei, suggesting that these brain regions are a common feature of the woodpecker brain. We therefore hypothesize that ancient forebrain nuclei for refined motor control may have given rise to not only the song control systems of vocal learning birds, but also the drumming system of woodpeckers.
... Coordinated call production between partners is a well-described behaviour in birds, where it is thought to influence pair-bond maintenance and mate guarding 42 . Zebra finches show time-locked calling behaviour using two call types: tet and stack calls 26 . Both are short, low-power vocalizations used when the birds are physically close together 25,28 . ...
... Our goal was first to test whether the audiovisual environment was sufficiently realistic to elicit established behavioural responses in social communication, before using the system in biological experiments. Zebra finches communicating within the modular virtual environment emitted calls that were synchronized with and contingent on the calls of their mate, with response latencies as in real-life situations 10,26,27 . Furthermore, our data show that males exhibited high-intensity courtship behaviour and sang directed song to their virtual females. ...
Interactive biorobotics provides unique experimental potential to study the mechanisms underlying social communication but is limited by our ability to build expressive robots that exhibit the complex behaviours of birds and small mammals. An alternative to physical robots is to use virtual environments. Here, we designed and built a modular, audio-visual 2D virtual environment that allows multi-modal, multi-agent interaction to study mechanisms underlying social communication. The strength of the system is an implementation based on event processing that allows for complex computation. We tested this system in songbirds, which provide an exceptionally powerful and tractable model system to study social communication. We show that pair-bonded zebra finches (Taeniopygia guttata) communicating through the virtual environment exhibit normal call timing behaviour, males sing female directed song and both males and females display high-intensity courtship behaviours to their mates. These results suggest that the environment provided is sufficiently natural to elicit these behavioral responses. Furthermore, as an example of complex behavioral annotation, we developed a fully unsupervised song motif detector and used it to manipulate the virtual social environment of male zebra finches based on the number of motifs sung. Our virtual environment represents a first step in real-time automatic behaviour annotation and animal–computer interaction using higher level behaviours such as song. Our unsupervised acoustic analysis eliminates the need for annotated training data thus reducing labour investment and experimenter bias.