Abstract

Photonic neural networks have the potential to revolutionize the speed, energy efficiency and throughput of modern computing—and to give Moore’s law–style scaling a new lease on life.
34 OPTICS & PHOTONICS NEWS JANUARY 2018
NEUROMORPHIC
PHOTONICS
Mitchell A. Nahmias,
Bhavin J. Shastri,
Alexander N. Tait,
Thomas Ferreira de Lima
and Paul R. Prucnal
In an age overrun with information, the ability to
process vast volumes of data has become crucial.
The proliferation of microelectronics has enabled
the emergence of next-generation industries to
support emerging artificial-intelligence services
and high-performance computing. These data-inten-
sive enterprises rely on continual improvements in
hardware—and the demand for data will continue to
grow as smart gadgets multiply and become ever more
integrated into our daily lives. Unfortunately, however,
those prospects are running up against a stark reality:
the exponential hardware scaling in digital electronics,
most famously embodied in Moore’s law, is fundamen-
tally unsustainable.
This situation suggests that the time is ripe for
a radically new approach: neuromorphic photon-
ics. An emerging eld at the nexus of photonics and
neuroscience, neuromorphic photonics combines the
advantages of optics and electronics to build systems
with high efficiency, high interconnectivity and high
information density. In the pages that follow, we take
a look at some of the traditional challenges of photonic
information processing, describe the photonic neural-
network approaches being developed by our lab and
others, and o er a glimpse at the future outlook for
this emerging  eld.
Moving beyond Moore
In the latter half of the 20th century, microprocessors
faithfully adhered to Moore's law, the well-known pre-
diction of exponentially improving performance. As
Gordon Moore originally predicted in 1965, the den-
sity of transistors, clock speed, and power efficiency
in microprocessors doubled approximately every 18
months for most of the past 60 years.
Yet this trend began to languish over the last decade.
A law known as Dennard scaling, which states that
microprocessors would proportionally increase in
performance while keeping their power consumption
constant, has broken down since about 2006; the result
has been a trade-off between speed and power efficiency.
Although transistor densities have so far continued to
grow exponentially, even that scaling will stagnate once
device sizes reach their fundamental quantum limits
in the next ten years.
One route toward resolving this impasse lies in
photonic integrated circuit (PIC) platforms, which have
recently undergone rapid growth. Photonic communication
channels are not bound by the same physical laws
as electronic ones; as a result, photonic interconnects
are slowly replacing electrical wires as communica-
tion bottlenecks worsen. PICs are becoming a key
part of communication systems in data centers, where
[Figure] SYNERGISTIC APPROACH: Neuromorphic photonics uses modern fabrication techniques to implement efficient, scalable analog photonics operations, drawing on scalable computing models, the fabrication industry behind digital electronics and analog photonics. (Free-space networks cannot be integrated; digital electronics faces unsustainable performance scaling; microwave photonics offers limited functionality.)
microelectronic compatibility and high-yield, low-cost
manufacturing are crucial. Because of their integration,
PICs can allow photonic processing at a scale impos-
sible with discrete, bulky optical-fiber counterparts,
and scalable, CMOS-compatible silicon-photonic sys-
tems are on the cusp of becoming a commercial reality.
PICs have several unique traits that could enable
practical, scalable photonic processing and could leap-
frog the current stagnation of Moore’s law–like scaling
in electronic-only settings:
Speed. Electronic microprocessor clock rates cannot
exceed about four GHz before hitting thermal-dissipation
limits, and parallel architectures, such as graphic pro-
cessing units, are limited to even slower timescales. In
contrast, each channel in a photonic system, by default,
can operate at upwards of twenty gigahertz to support
fiber-optic communication rates.
Information density. Paradoxically, despite the large
sizes of on-chip photonic devices—whose lower bound
on size must exceed the wavelength of the light that
travels through them—PICs can pack orders of mag-
nitude more information in every square centimeter.
One reason is that photonic signals operate much
faster, thereby shuffling much more data through the
system per second. Another is that lightwaves exhibit
the superposition property, which allows for optical
multiplexing: waveguides can carry many signals along
different wavelengths or time slots simultaneously
without taking up additional space. This combination
enables an enormous amount of information—easily
more than one terabit per second—to flow through a
waveguide only half a micron wide.
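The throughput claim follows from simple multiplexing arithmetic; the channel count and per-channel rate below are illustrative assumptions rather than measured figures:

```python
# Back-of-envelope WDM throughput: many wavelength channels, each at a
# standard fiber-communication line rate, share a single waveguide.
channels = 50        # assumed number of WDM channels in one waveguide
rate_gbps = 25.0     # assumed per-channel line rate, in Gb/s
total_tbps = channels * rate_gbps / 1000.0
assert total_tbps > 1.0   # comfortably more than a terabit per second
```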
Energy efficiency. Photonic operations have the poten-
tial to consume orders of magnitude less power than
digital approaches. This property comes from so-called
linear photonic operations (that is, those that can be
described using linear algebra). Transmission elements
are sometimes considered to dissipate no energy; how-
ever, it always takes energy to generate, modulate and
receive light signals. Nonetheless, the lack of a funda-
mental energy cost per operation means that photonic
processors may not be subject to the unfavorable scaling
laws that have stymied further performance returns in
electronic systems.
Photonic signal processing
Optical signal processing has a rich history, but optical
systems have had difficulty achieving scalability in
computing. Extensive research has focused on imple-
menting optical-computing operations using both digital
bits and continuous-valued analog signals. Concepts
for neuro-inspired photonic computing originally
Neural nets: The photonic edge
[Figure] Von Neumann architectures (left), relying on sequential input-output through a central processor, differ fundamentally from more decentralized neural-network architectures (middle). Photonic neural nets (right) can solve the interconnect bottleneck by using one waveguide to carry signals from many connections (easily N² ~ 10,000) simultaneously.
envisioned systems that used vertically oriented light
sources or spatial light modulators together with free-
space holographic routing. Many researchers imagined
that an optical computer would consist of a 3-D holo-
graphic cube programmed to route signals between
arrays of LEDs.
Although optical logic devices later developed into
the switches and routers that form today’s telecom-
munications infrastructure, optical computing did not
achieve the same level of success. Researchers realized
that the scaling laws for electronic components could
continue to address the bottlenecks in traditional pro-
cessors for many years to come. The ceaseless march of
Moore’s law meant that, while optical computing sys-
tems might outperform electronics in the short term,
microprocessors would eclipse them in several years.
A close look at the hardware reveals that the past
challenges of optical computing—and, particularly,
optical neural computing—lay chiefly in a few factors:
the continued favorable scaling of electronic devices,
the packaging difficulties associated with free-space
coupling and holographic interconnects, and the dif-
ficulty in shrinking optical devices. Now, about 30 years
later, the landscape has changed tremendously. With
Moore's law confronting fundamental limitations, the
scaling of electronics can no longer be taken for granted.
Meanwhile, large-scale integration techniques are start-
ing to emerge in photonics, driven by telecommunication
applications and a market need for increased informa-
tion flow both between and within processors.
These changes have led to an explosion in PICs,
which are already finding their way into fast Ethernet
switches in servers and data centers. Microwave photonics
is also emerging as a contender for radio-frequency
applications, now enabled by the low cost of microchip
photonic integrated components. Researchers have
implemented digital photonic devices in various tech-
nologies, including fibers, waveguides, semiconductor
devices and resonators.
Both the analog and the digital approaches to optical
computing, however, still face challenges. Increasing
the number of analog operations leads to noise and
degrades signal integrity, limiting the potential com-
plexity of optical processors. And, while digital systems
filter out noise during every step and can fix errors after
they occur—making it easy for engineers to design com-
plex systems with many interacting components—the
high scaling cost of digital photonic devices makes this
approach both prohibitively expensive and impractical.
Photonic neural networks
Neural network approaches represent a hybrid between
the purely digital and analog approaches, allowing for
more efficient processors that are both less resource-
intensive and robust to noise. But what is a neural network?
Most modern microprocessors follow the so-called
von Neumann architecture, in which machine instruc-
tions and data are stored in memory and share a central
communication channel, or bus, to a processing unit.
Instructions define a procedure to operate on data,
which is continually shuffled back and forth between
memory and the processor.
Neural networks function quite differently.
Individually, neurons can perform simple operations
such as adding inputs together or filtering out weaker
signals. In groups, however, they can implement far
more complex operations through the formation of
networks. Instead of using digital 0's and 1's, neural net-
works represent information in analog signals, which
can take the form of either continuous real-number
values or of spikes in which information is encoded in
the timing between short pulses. Rather than abiding
by a sequential set of instructions, neurons process
data in parallel and are programmed by the connec-
tions between them.
The input into a particular neuron is a linear com-
bination—also referred to as a weighted addition—of
Electronic vs. photonic neural nets
[Figure] Neuromorphic architectures potentially sport better speed-to-efficiency characteristics than state-of-the-art electronic neural nets (such as IBM's TrueNorth, Stanford University's Neurogrid, the University of Heidelberg's HICANN), as well as advanced digital electronic systems (such as the University of Manchester's SpiNNaker). The plot maps efficiency (GMAC/s/W) against computational speed (MMAC/s/cm²), with regions for digital electronics, microwave electronics, neuromorphic electronics, silicon photonics, nanophotonics and neuromorphic photonics, bounded by the von Neumann efficiency wall.
the output of other neurons. These connections can be
weighted with negative and positive values, respec-
tively, which are called (borrowing the language of
neuroscience) inhibitory and excitatory synapses. The
weighting is therefore represented as a real number,
and the interconnection network can be expressed as
a matrix.
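This weighted addition is just a matrix-vector product: stacking each neuron's weights as a row gives the interconnection matrix, and multiplying it by the vector of neuron outputs produces every neuron's input at once. A minimal numerical sketch (sizes and values are arbitrary illustrations):

```python
import numpy as np

# N neurons: entry W[i, j] weights the connection from neuron j to
# neuron i. Negative entries model inhibitory synapses, positive
# entries excitatory ones.
N = 4
rng = np.random.default_rng(0)
W = rng.uniform(-1.0, 1.0, size=(N, N))  # N x N interconnection matrix
x = rng.uniform(0.0, 1.0, size=N)        # outputs of the N neurons

u = W @ x   # inputs to all N neurons, computed in one shot

# The same thing, written as an explicit weighted addition for neuron 0:
u0 = sum(W[0, j] * x[j] for j in range(N))
assert np.isclose(u0, u[0])
```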
Photonics appears to be an ideal technology with
which to implement neural networks. The greatest
computational burden in neural networks lies in the
interconnectivity: in a system with N neurons, if every
neuron can communicate with every other neuron (plus
itself), there will be N² connections. Just one more neuron
adds N more connections—a prohibitive situation if N
is large. Photonic systems can address this problem in
two ways: waveguides can boost interconnectivity by
carrying many signals at the same time through optical
multiplexing; and low-energy, photonic operations can
reduce the computational burden of performing linear
functions such as weighted addition. For example, by
associating each node with a color of light, a network
could support N additional connections without nec-
essarily adding any physical wires.
We can understand this better through the example
of a multiply-accumulate (MAC) operation. Each such
operation represents a single multiplication, followed
by an addition. Since dot products, matrix multiplica-
tions, convolutions and Fourier transforms are all built
from MAC operations, such operations underlie much of
high-performance computing. They also constitute the
most costly operations in both hardware-based neural
networks and machine-learning algorithms. In the
digital domain, MACs occur in a serial fashion, which
means that the time and energy costs increase with the
number of inputs.
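The serial cost can be made concrete with a toy dot product; the operation counter below is purely illustrative:

```python
# A dot product of two length-N vectors costs exactly N multiply-
# accumulate (MAC) operations; an N x N matrix-vector product costs N^2.
def dot_with_mac_count(a, b):
    acc, macs = 0.0, 0
    for ai, bi in zip(a, b):
        acc += ai * bi   # one multiply, one accumulate: a single MAC
        macs += 1
    return acc, macs

# In a serial digital processor these MACs happen one after another,
# so both time and energy grow with the input length N.
result, macs = dot_with_mac_count([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
```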
In contrast, passive lightwave devices, such as
wavelength-sensitive filters, do not inherently dissipate
energy and can efficiently perform such operations in
parallel. They can therefore greatly enhance high per-
formance computing, especially systems that rely on
matrix multiplication. In addition, reprogrammabil-
ity is possible with tunable photonic elements. These
advantages have motivated researchers to investigate a
variety of photonic neural models that exhibit a range
of interesting properties.
A spectrum of implementations
One such photonic neural model, currently under inves-
tigation in our lab, involves engineering dynamical
lasers to resemble the biological behavior of neurons.
Laser neurons, operating optoelectronically, can operate
at approximately 100 million times the speed of their
biological counterparts, which are rate-limited by bio-
chemical interactions. These lasers represent neural
spikes via optical pulses by operating under a dynami-
cal regime called excitability. Excitability is a behavior
in feedback systems in which small inputs that exceed
some threshold cause a major excursion from equilib-
rium—which, in the case of a laser neuron, releases an
optical pulse. This event is followed by a recovery back
to equilibrium, the so-called refractory period.
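A minimal sketch of excitable behavior, using a simple leaky integrate-and-fire model as a stand-in for the laser dynamics (the leak and threshold values are assumptions, not device parameters):

```python
# A leaky integrate-and-fire toy model of excitability (a stand-in for
# the laser rate equations; leak and threshold values are assumptions).
def excitable_neuron(inputs, leak=0.9, threshold=1.0):
    state, spikes = 0.0, []
    for u in inputs:
        state = leak * state + u
        if state >= threshold:   # input pushed the system past threshold
            spikes.append(1)     # major excursion: emit a pulse
            state = 0.0          # reset and recover (refractory period)
        else:
            spikes.append(0)     # sub-threshold inputs simply decay away
    return spikes

weak = excitable_neuron([0.3, 0.0, 0.0, 0.0])    # filtered out, no spike
strong = excitable_neuron([0.3, 0.9, 0.0, 0.0])  # crosses threshold once
```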
We have found a theoretical link between the dynam-
ics of semiconductor lasers and a common neuron model
used in computational neuroscience, and have demon-
strated how a laser with an embedded graphene section
could effectively emulate such behavior. Building from
these results, a number of research groups have fabri-
cated, tested and proposed laser neurons with various
feedback conditions. These include two-section models
in semiconductor lasers, photonic-crystal nanocavities,
polarization-sensitive vertical cavity lasers, lasers with
optical feedback or optical injection, and linked photo-
detector–laser systems with receiverless connections or
[Photo] A laser neural network being tested at Princeton University. Princeton University Lightwave Lab, 2017
resonant tunneling. A recently demonstrated approach
based on optical modulators has the potential to exhibit
much lower conversion costs from one processing
stage to another, and to be fully integrated on silicon-
photonic platforms.
Toward scalable networks
Researchers have lately investigated interconnection
protocols that can tune to any desired network con-
figuration. Arbitrary weights allow a wide array of
potential applications based on classical neural net-
works. Several notable approaches use complementary
physical e ects in this regard.
Broadcast-and-weight. A broadcast-and-weight neural
network architecture, demonstrated by our group at the
Princeton Lightwave Lab, uses groups of tunable filters
to implement weights on signals encoded onto multiple
wavelengths. Tuning a given filter on and off resonance
changes the transmission of each signal through that
filter, effectively multiplying the signal with a desired
weight. The resulting weighted signals travel into a
photodetector, which can receive many wavelengths
in parallel to perform a summing operation.
Broadcast-and-weight takes advantage of the enor-
mous information density available to on-chip photonics
through the use of optical multiplexing, and is compat-
ible with a number of laser neuron models. Filter-based
weight banks have also been investigated both theoreti-
cally and experimentally in the form of closely packed
microring filters, prototyped in a silicon-photonic
platform. And the interconnect architecture of a fully
integrated superconducting optoelectronic network
recently proposed by scientists at the U.S. National
Institute of Standards and Technology—and said to
o er potentially unmatched energy e ciency—could
be compatible with broadcast-and-weight.
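A toy numerical model of the weighting step may help. The filter transmission here is modeled as a Lorentzian line shape, an illustrative assumption rather than the lab's device model; the photodetector's sum is then an ordinary dot product:

```python
import numpy as np

# Each WDM channel passes through a tunable filter; detuning the filter
# from resonance sets that channel's transmission, i.e. its weight.
# (The Lorentzian line shape here is an illustrative assumption.)
def filter_transmission(detuning, linewidth=1.0):
    return 1.0 / (1.0 + (detuning / linewidth) ** 2)

signals = np.array([0.5, 1.0, 0.25])      # powers on three wavelengths
detunings = np.array([0.0, 2.0, 1.0])     # how far each filter is tuned
weights = filter_transmission(detunings)  # on resonance -> weight 1.0

# A single photodetector receives all weighted channels at once, so the
# summation costs no extra time or hardware per added channel.
photocurrent = np.sum(weights * signals)
```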
Coherent. A coherent approach, which uses destruc-
tive or constructive interference effects in optical
interferometers to implement a matrix-vector operation
on incoming signals, was recently demonstrated by a
research team led by Marin Soljačić and Dirk Englund
at the Massachusetts Institute of Technology, USA.
In such an architecture there is no need to convert from
the optical domain to the electrical domain; hence, inter-
facing a coherent system with photonic, nonlinear nodes
(for example, based on the Kerr effect) could in principle
allow for energy-efficient, passive all-optical processors.
The coherent approach is, however, limited to only
one wavelength, and requires devices much larger
than tunable filters, which puts a cap on the infor-
mation density that the approach can achieve in its
current form. In addition, all-optical interconnects
must grapple with both amplitude and phase, and no
solution has yet been proposed to prevent phase noise
accumulation from one stage to another. Nonetheless,
the investigation of large-scale networking schemes
is a promising direction for the integration of various
technologies in the field towards highly scalable on-
chip photonic systems.
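The coherent matrix-vector idea can be sketched through the singular-value decomposition: any real matrix factors into two unitaries and a diagonal, the unitaries map onto meshes of Mach-Zehnder interferometers, and the diagonal onto per-channel attenuation. The matrix below is randomly generated purely for illustration:

```python
import numpy as np

# Factor a weight matrix M as U @ diag(s) @ Vh (SVD). The orthogonal
# factors U and Vh correspond to interferometer meshes; the singular
# values s correspond to per-channel attenuators or amplifiers.
rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
U, s, Vh = np.linalg.svd(M)

x = np.array([1.0, -0.5, 2.0])   # an incoming signal vector

# Light traverses mesh Vh, then the diagonal stage, then mesh U,
# implementing M @ x without leaving the optical domain.
y_optical = U @ (s * (Vh @ x))
assert np.allclose(y_optical, M @ x)
```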
Reservoir computing. A contrasting approach to
tunable neural networks being pursued by a number
of labs, reservoir computing extracts useful information
from a fixed, possibly nonlinear system of interacting
nodes. Reservoirs require far fewer tunable elements
than neural-network models to run effectively, making
[Figure] Left: A photonic neural network that can be implemented in silicon photonics. Right: The on-chip system with modulator neurons displays a characteristic oscillation called a Hopf bifurcation, which confirms the presence of an integrated neural network. (The panels plot neuron states s1 and s2 against time for increasing self-weight values WF = 0.449, 0.529 and 0.629.) Princeton University Lightwave Lab, 2017 / A. Tait et al., Sci. Rep. 7, 7430 (2017).
them less challenging to implement in hardware; how-
ever, they cannot be easily programmed. These systems
have utilized optical-multiplexing strategies in both
time and wavelength. Experimentally demonstrated
photonic reservoirs have displayed state-of-the-art per-
formance in benchmark classification problems, such
as speech recognition.
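The contrast with tunable networks can be sketched in an echo-state style: the recurrent "reservoir" below is fixed and random, and only a linear readout is trained (all sizes and scalings are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
W_res = rng.normal(size=(N, N))   # fixed, untrained reservoir weights
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # keep it stable
W_in = rng.normal(size=N)         # fixed input coupling

def run_reservoir(u_seq):
    x, states = np.zeros(N), []
    for u in u_seq:
        x = np.tanh(W_res @ x + W_in * u)  # nonlinear interacting nodes
        states.append(x.copy())
    return np.array(states)

# Train only the linear readout, here to recall the previous input.
u = rng.uniform(-1.0, 1.0, 200)
X = run_reservoir(u)
target = np.roll(u, 1)
W_out, *_ = np.linalg.lstsq(X[1:], target[1:], rcond=None)
pred = X[1:] @ W_out
```

Only W_out is adjusted; the reservoir itself never changes, which is what makes such systems hardware-friendly but hard to reprogram.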
Marching ahead
It remains to be seen in what ways photonic processing
systems will complement microelectronic hardware, but
current technological developments look promising. For
example, the fixed cost of electronic-to-photonic con-
version is no longer as energetically unfavorable as in
the past. A modern silicon-photonic link can transmit
a photonic signal using only femtojoules of energy per
bit of information, whereas thousands of femtojoules of
energy are consumed per operation in even the most
efficient digital electronic processors, including IBM's
TrueNorth cognitive computing chip and Google's ten-
sor processing unit.
The comparisons should get better still as perfor-
mance scaling in optoelectronic devices continues to
improve. New modulators or lasers based on plasmonic
localization, graphene modulation or nanophotonic
cavities have the potential to increase efficiency. The
next generation of photonic devices could potentially
consume only hundreds of attojoules of energy per time
slot, allowing analog photonic MAC-based processors
to consume even less per operation.
In light of these developments, photonic neural net-
works could find a place in many applications. These
systems can act as a coprocessor for performing compu-
tationally intense linear operations—including MACs,
Fourier transforms and convolutions—by implementing
them in the photonic domain, potentially decreasing
the energy consumption and increasing the through-
put of signal processing, high-performance computing
and artificial-intelligence algorithms. This could be a
boon for data centers, which increasingly depend on
such operations and have consistently doubled their
energy consumption every four years.
Photonic processors also have unmatched speeds and
latencies, which make them well suited for specialized
applications requiring either real-time response times
or fast signals. One example is a front-end processor in
radio-frequency transceivers. As the wireless spectrum
becomes increasingly overcrowded, the use of large,
adaptive phased-array antennas that receive many more
radio waves simultaneously may soon become the norm.
Photonic neural networks could perform complex sta-
tistical operations to extract important data, including
the separation of mixed signals or the classification of
recognizable radio-frequency signatures.
Still another application example lies in low-latency,
ultrafast control systems. It’s well understood that
recurrent neural networks can solve various problems
that involve minimizing or maximizing some known
function. A processing method known as Hopfield
optimization requires the solution to such a problem
during each step of the algorithm, and could utilize
the short convergence times of photonic networks for
nonlinear optimization.
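A toy Hopfield network makes the idea concrete: symmetric weights define an energy function, and each update can only lower it, so the network converges to a local minimum (the stored pattern and sizes here are arbitrary illustrations):

```python
import numpy as np

# Hopfield sketch: symmetric weights define an energy function
# E = -0.5 * s @ W @ s; asynchronous updates never raise E, so the
# network settles into a local minimum of that function.
def energy(W, s):
    return -0.5 * s @ W @ s

def settle(W, s, sweeps=10):
    s = s.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Store one pattern with the Hebbian rule.
pattern = np.array([1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)

noisy = np.array([1, 1, 1, -1])   # the stored pattern with one bit flipped
recovered = settle(W, noisy)      # converges back to the stored pattern
```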
Fiber optics once rendered copper cables obsolete
for long-distance communications. Neuromorphic
photonic processing has the potential to one day usher
in a similar paradigm shift in computing—creating a
smarter, more efficient world. OPN
Mitchell A. Nahmias, Bhavin J. Shastri, Alexander N. Tait,
Thomas Ferreira de Lima and Paul R. Prucnal (prucnal@
princeton.edu) are with the Department of Electrical Engi-
neering, Princeton University, Princeton, N.J., USA.
References and Resources
- P. Prucnal and B. Shastri, Neuromorphic Photonics (CRC Press, 2017).
- M. Nahmias et al., J. Sel. Top. Quantum Electron. 19, 1800212 (2013).
- A. Tait et al., J. Lightwave Technol. 32, 4029 (2014).
- B. Shastri et al., Sci. Rep. 5, 19126 (2015).
- P. Prucnal et al., Adv. Opt. Photon. 8, 228 (2016).
- A. Tait et al., Sci. Rep. 7, 7430 (2017).
- Y. Shen et al., Nat. Photon. 11, 441 (2017).
- J.M. Shainline et al., Phys. Rev. Appl. 7, 034013 (2017).
- G. Van der Sande et al., Nanophotonics 6, 561 (2017).
... In neuromorphic electronics, bandwidth and interconnectivity have to be traded off [2]. Photonics offers the opportunity to simultaneously achieve high bandwidth with high interconnectivity, in tandem with low power consumption and low latency [3]. The interconnections between "neurons" in a neural network are known as synapses. ...
... Also, the current PhC synapse architecture, as shown in Fig. 1(c), only allows for positive weights, unlike the add-drop MRR synapse architecture which permits both positive and negative weights. Positive weights suffice for typical artificial neural networks, but spiking networks require negative weights for inhibition [3]. Future work can explore improved PhC synapse architecture that can allow for negative weights in spiking networks. ...
... In this area of study, photonic platforms have emerged as attractive candidates for analog ANNs. Photonics, in contrast to electronics, possess attributes such as ultrafast processing, wave division multiplexing assisted parallelism and high wall-plug efficiency [10]. In particular, photonic integrated silicon platforms are highly desirable schemes due to their low physical footprint and easy co-integration with electronic schemes. ...
... It must also be considered that using more biologically plausible non-linear functions such as the winner takes it all layer, the robustness of a neural network can be enhanced by allowing for phase shifter values with as lower bit precision [22]. Lastly, although the Bayesian treatment of a silicon PIC based on Mach Zehnder interferometers has been presented, the aforementioned functionalities can be also incorporated in different hardware photonic platforms such as spatial [29] or time-delayed reservoir computing [38] implementations or full spiking neural networks [10]. ...
Preprint
Full-text available
Artificial neural networks are efficient computing platforms inspired by the brain. Such platforms can tackle a vast area of real-life tasks ranging from image processing to language translation. Silicon photonic integrated chips (PICs), by employing coherent interactions in Mach-Zehnder interferometers, are promising accelerators offering record low power consumption and ultra-fast matrix multiplication. Such photonic accelerators, however, suffer from phase uncertainty due to fabrication errors and crosstalk effects that inhibit the development of high-density implementations. In this work, we present a Bayesian learning framework for such photonic accelerators. In addition to the conventional log-likelihood optimization path, two novel training schemes are derived, namely a regularized version and a fully Bayesian learning scheme. They are applied on a photonic neural network with 512 phase shifters targeting the MNIST dataset. The new schemes, when combined with a pre-characterization stage that provides the passive offsets, are able to dramatically decrease the operational power of the PIC beyond 70%, with just a slight loss in classification accuracy. The full Bayesian scheme, apart from this energy reduction, returns information with respect to the sensitivity of the phase shifters. This information is used to de-activate 31% of the phase actuators and, thus, significantly simplify the driving system.
... In neuromorphic electronics, bandwidth and interconnectivity have to be traded off [2]. Photonics offers the opportunity to simultaneously achieve high bandwidth with high interconnectivity, in tandem with low power consumption and low latency [3]. The interconnections between "neurons" in a neural network are known as synapses. ...
... Also, the current PhC synapse architecture, as shown in Fig. 1(c), only allows for positive weights, unlike the add-drop MRR synapse architecture which permits both positive and negative weights. Positive weights suffice for typical artificial neural networks, but spiking networks require negative weights for inhibition [3]. Future work can explore improved PhC synapse architecture that can allow for negative weights in spiking networks. ...
Preprint
Full-text available
The bandwidth and energy demands of neural networks has spurred tremendous interest in developing novel neuromorphic hardware, including photonic integrated circuits. Although an optical waveguide can accommodate hundreds of channels with THz bandwidth, the channel count of photonic systems is always bottlenecked by the devices within. In WDM-based photonic neural networks, the synapses, i.e. network interconnections, are typically realized by microring resonators (MRRs), where the WDM channel count (N) is bounded by the free-spectral range of the MRRs. For typical Si MRRs, we estimate N <= 30 within the C-band. This not only restrains the aggregate throughput of the neural network but also makes applications with high input dimensions unfeasible. We experimentally demonstrate that photonic crystal nanobeam based synapses can be FSR-free within C-band, eliminating the bound on channel count. This increases data throughput as well as enables applications with high-dimensional inputs like natural language processing and high resolution image processing. In addition, the smaller physical footprint of photonic crystal nanobeam cavities offers higher tuning energy efficiency and a higher compute density than MRRs. Nanophotonic cavity based synapse thus offers a path towards realizing highly scalable photonic neural networks.
... Neuromorphic photonic [6,8] approaches can be divided into two main categories ( Figure 2): coherent (single wavelength) and incoherent (multiwavelength) approaches. Neuromorphic systems based on reservoir computing [9][10][11] and Mach-Zehnder interferometers [12,13] are example of coherent approaches. ...
... Neuromorphic engineering is broadly concerned with the development of physical hardware systems that can potentially mimic the neuro-biological structure and fundamental operational principles of the nervous system. In relation to that, neuromorphic computing aims to bring the efficacy of biocomputing into engineered computational devices [89,92], and it remains to be an active area of research both in electronics [145][146][147][148][149] and photonics [150][151][152]. ...
Article
Full-text available
Deep learning has been revolutionizing information processing in many fields of science and engineering owing to the massively growing amounts of data and the advances in deep neural network architectures. As these neural networks are expanding their capabilities toward achieving state-of-the-art solutions for demanding statistical inference tasks in various applications, there appears to be a global need for low-power, scalable, and fast computing hardware beyond what existing electronic systems can offer. Optical computing might potentially address some of these needs with its inherent parallelism, power efficiency, and high speed. Recent advances in optical materials, fabrication, and optimization techniques have significantly enriched the design capabilities in optics and photonics, leading to various successful demonstrations of guided-wave and free-space computing hardware for accelerating machine learning tasks using light. In addition to statistical inference and computing, deep learning has also fundamentally affected the field of inverse optical/photonic design. The approximation power of deep neural networks has been utilized to develop optics/photonics systems with unique capabilities, all the way from nanoantenna design to end-to-end optimization of computational imaging and sensing systems. In this review, we attempt to provide a broad overview of the current state of this emerging symbiotic relationship between deep learning and optics/photonics.
... Over the 10 years, several neuromorphic photonic [6], [8] approaches have been proposed as shown in Figure 2. This can be divided into feedforward and recurrent (including random recurrent i.e. reservoir computing [9]- [11]), or coherent (single wavelength) [12], [13] and incoherent (multiwavelength) [14], [15] approaches, or continuous time networks and spiking networks, or integrated approaches and free-space. ...
... In neuromorphic photonics [8], the device responsible for linear computation is the weight bank: a series of resonators with unique resonant wavelengths that selectively weights (multiplies) a set of incoming WDM signals [9] and optically sums the resulting light. These devices have been optimized specifically for real-time weighted summation of high-bandwidth analog signals. ...
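The weighted-summation operation described above amounts to a vector dot product carried out in the optical domain: each WDM channel carries one signal, a resonator sets its effective weight, and the weighted channels add as photocurrent on a detector. The following is a toy numerical sketch of that broadcast-and-weight operation; the function name and the [-1, 1] weight range (realized in hardware with a balanced photodetector pair) are illustrative assumptions, not details from the excerpt.

```python
import numpy as np

def weight_bank(signals, weights):
    """Weighted sum of multiwavelength signals.

    signals: (channels, samples) array of optical power waveforms,
             one row per WDM channel.
    weights: (channels,) array of effective weights, clipped to [-1, 1]
             as a balanced-detection weight bank would enforce.
    """
    signals = np.asarray(signals, dtype=float)
    weights = np.clip(np.asarray(weights, dtype=float), -1.0, 1.0)
    # All weighted channels sum as total detected photocurrent.
    return weights @ signals

# Two WDM channels, three time samples each.
x = np.array([[1.0, 0.5, 0.0],
              [0.2, 0.4, 0.6]])
w = np.array([0.8, -0.5])
print(weight_bank(x, w))  # weighted sum: [0.7, 0.2, -0.3]
```

Because the summation happens in the analog optical/electrical domain, the per-sample cost is independent of the number of channels — the physical motivation for using weight banks as matrix-multiply accelerators.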
Preprint
Neuromorphic photonic processors based on resonator weight banks are an emerging candidate technology for enabling modern artificial intelligence (AI) in high-speed, analog systems. These purpose-built analog devices implement vector multiplications with the physics of resonator devices, offering efficiency, latency and throughput advantages over equivalent electronic circuits. Along with these advantages, however, often come the difficult challenges of compensating for fabrication variations and environmental disturbances. In this paper we review sources of variation and disturbance from our experiments and mathematically define quantities that model them. We then introduce how the physics of resonators can be exploited to weight and sum multiwavelength signals. Finally, we outline the automated design and control methodologies needed to create practical, manufacturable, high-accuracy/precision resonator weight banks that can withstand operating conditions in the field. This represents a road map for unlocking the potential of resonator weight banks in practical deployment scenarios.
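The interplay between resonator physics and fabrication variation can be sketched with a simple model: a microring's through-port transmission near resonance is approximately Lorentzian, so the applied weight is set by tuning the detuning from resonance, and a fabrication-induced resonance offset must be subtracted out by calibration. The linewidth, dip depth and offset values below are illustrative assumptions for this sketch, not parameters from the paper.

```python
import numpy as np

FWHM = 10.0   # resonance linewidth (GHz), illustrative
DEPTH = 0.9   # extinction at resonance (dip depth), illustrative

def through_transmission(detuning):
    """Through-port power transmission vs. detuning from resonance (GHz),
    modeled as a Lorentzian dip."""
    half = FWHM / 2.0
    return 1.0 - DEPTH / (1.0 + (detuning / half) ** 2)

def tuning_for_weight(target_T, fab_offset=0.0):
    """Invert the Lorentzian to find the applied tuning (GHz) that sets a
    target transmission, compensating a known fabrication offset."""
    assert 1.0 - DEPTH < target_T < 1.0, "target outside achievable range"
    half = FWHM / 2.0
    # Solve 1 - DEPTH / (1 + (d/half)^2) = target_T for detuning d < 0.
    detuning = -half * np.sqrt(DEPTH / (1.0 - target_T) - 1.0)
    return detuning - fab_offset  # applied tuning cancels the offset

# A 3 GHz fabrication offset is compensated by the calibration model.
t = tuning_for_weight(0.5, fab_offset=3.0)
print(round(through_transmission(t + 3.0), 3))  # recovers T = 0.5
```

In practice the offset is not known a priori, which is why the paper emphasizes automated measurement and feedback control; this sketch only shows the inverse-model step once the offset has been identified.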
Article
We experimentally demonstrate two types of programmable, low-threshold, optically controlled nonlinear activation functions, which have been challenging to realize in photonic neural networks (PNNs). These devices rely on on-chip integrated Ge-Si photodetectors and silicon electro-optic switches, and they generate rectified-linear-unit (ReLU) or sigmoid functions with arbitrary slopes without additional electrical processing. Both devices operate at an extremely low threshold of 0.2 mW. Embedding these nonlinear activation functions into convolutional neural networks yields inference accuracies of up to 95% on Modified National Institute of Standards and Technology (MNIST) handwritten-digit classification tasks. The devices are suitable for low-power PNNs with an arbitrary number of propagation layers in photonic-computing chips.
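The two activation shapes described above can be written down as functions of optical input power. The 0.2 mW threshold comes from the abstract; the specific slope parameterization below is an illustrative assumption about how "arbitrary slope" enters the transfer functions, not the device model from the paper.

```python
import numpy as np

P_TH = 0.2  # mW, activation threshold from the abstract

def relu(p_in, slope=1.0):
    """Rectified-linear response: zero below the power threshold,
    linear with programmable slope above it."""
    return slope * np.maximum(0.0, p_in - P_TH)

def sigmoid(p_in, slope=1.0):
    """Sigmoid response centered at the power threshold, with the
    slope parameter setting its steepness."""
    return 1.0 / (1.0 + np.exp(-slope * (p_in - P_TH)))

p = np.array([0.0, 0.2, 1.2])  # input powers in mW
print(relu(p, slope=2.0))      # -> [0. 0. 2.]
print(sigmoid(p, slope=2.0))   # 0.5 exactly at the 0.2 mW threshold
```

The key point the abstract makes is that these shapes are produced optically, with no electrical post-processing between layers; the code is only the mathematical target response.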
Chapter
Subwavelength gratings refer to periodic structures whose period is less than half the wavelength of light in the material, so that no Bragg diffraction mode is supported. Instead, the light propagates as if it were in a homogeneous material with anisotropic refractive indices. Subwavelength gratings have attracted great interest recently, as they provide a useful degree of freedom for crafting the effective refractive index of the material in photonic devices. In this chapter, we introduce some applications of subwavelength structures in silicon photonic devices. We start with the background theory of subwavelength gratings and then discuss their applications to the engineering of waveguide grating couplers, suspended membrane devices for mid-infrared (mid-IR) wavelengths, and their use with numerical optimization techniques for optimizing photonic devices. We discuss the classic effective medium theory (EMT) for subwavelength gratings and show how EMT can reduce time-consuming three-dimensional (3D) numerical optimizations to an effective two-dimensional (2D) optimization problem.
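The anisotropy mentioned above follows directly from zeroth-order effective medium theory: for a two-material grating with fill factor f, the permittivity averages arithmetically for the field component parallel to the grating lines and harmonically for the perpendicular component. A minimal sketch, using standard zeroth-order EMT formulas with illustrative Si/SiO2 indices (not values from the chapter):

```python
import numpy as np

def emt_indices(n1, n2, f):
    """Zeroth-order EMT effective indices of a two-material
    subwavelength grating with fill factor f of material 1."""
    eps1, eps2 = n1 ** 2, n2 ** 2
    # E-field parallel to grating lines: arithmetic mean of permittivity.
    n_par = np.sqrt(f * eps1 + (1.0 - f) * eps2)
    # E-field perpendicular to grating lines: harmonic mean of permittivity.
    n_perp = np.sqrt(1.0 / (f / eps1 + (1.0 - f) / eps2))
    return n_par, n_perp

# Si / SiO2 at a 50% duty cycle (indices are illustrative).
n_par, n_perp = emt_indices(3.48, 1.44, f=0.5)
print(round(n_par, 3), round(n_perp, 3))  # -> 2.663 1.882
```

The gap between the two values is the engineered anisotropy, and sweeping f is what lets a designer dial in an intermediate effective index; replacing the grating by this homogeneous anisotropic medium is exactly the 3D-to-2D simplification the chapter exploits.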
References and Resources
• P. Prucnal and B. Shastri, Neuromorphic Photonics (CRC Press, 2017).
• M. Nahmias et al., J. Sel. Top. Quantum Electron. 19, 1800212 (2013).
• A. Tait et al., J. Lightwave Technol. 32, 4029 (2014).
• P. Prucnal et al., Adv. Opt. Photon. 8, 228 (2016).
• Y. Shen et al., Nat. Photon. 11, 441 (2017).
• J.M. Shainline et al., Phys. Rev. Appl. 7, 034013 (2017).
• G. Van der Sande et al., Nanophotonics 6, 561 (2017).
• B. Shastri et al., Sci. Rep. 5, 19126 (2015).
• A. Tait et al., Sci. Rep. 7, 7430 (2017).