The Cloak of Invisibility: Challenges and Applications

Franco Zambonelli and Marco Mamei, Università di Modena e Reggio Emilia

Is it possible to create a cloak of invisibility—a flexible artifact that can make anything inside it invisible and preserve invisibility despite mobility and deformation? Exploring the algorithmic and technological challenges involved reveals tantalizing information.
Anyone who's read the Harry Potter books by J.K. Rowling understands the concept of an invisibility cloak. Humans have dreamt of what invisibility might mean since the beginning of civilization, and the dream persists in today's literature and culture. Although the more improbable methods of invisibility will remain unrealized, an invisibility cloak could become feasible through technology. In this article, we attempt to show this via technical "pencil and paper" arguments and a conceptual implementation.
Generally speaking, we propose a fabric of small computing devices that can receive and retransmit light emissions in a directional way as well as interact with each other in a wireless amorphous network [1]. While someone or something is inside the cloak, the cloak's devices would cause external observers to perceive the light configurations they would see if the wearer wasn't there. Sensors on the side of the cloak not facing the observers could receive such configurations and, via distributed coordination, communicate them to emitters on the observer's side for retransmission.
Building a solid wall exhibiting such properties for a nonmoving observer might be difficult. Even more challenging problems arise when preserving such properties:

- For any observer in any position
- For any shape of the cloak
- Despite movement of the cloak's fabric
Although we haven't identified fully fledged technical solutions for all these complex problems, you could collectively exploit numerous recent findings in different research areas (such as mobile computing [2,3], distributed and peer-to-peer coordination [4,5], self-organization [6,7], MEMS (microelectromechanical systems) [8], and sensor networks [1]) to sketch promising solutions. Following that, you might then achieve the cloak's actual implementation and numerous related, innovative artifacts.
For the sake of readability, we present our arguments incrementally. For each step, we discuss the associated hardware and software challenges and the artifacts you might create after resolving the challenges.
Editor's Note

This is an unusual article. It does not fall neatly into any of the typical genres of papers published in this magazine, such as a research report, retrospective, survey, or tutorial. Rather, it is a carefully reasoned technical speculation on an intriguing idea—the use of sensor-impregnated garments to achieve invisibility through real-time scene manipulation. Although the paper is pure speculation, the Associate Editors in Chief and I felt that IEEE Pervasive Computing's readership would be well served by including this thought-provoking article in this issue. To provide balance, we include brief comments by vision experts Martial Hebert of Carnegie Mellon University and Steve Shafer of Microsoft Research. —M. Satyanarayanan, Editor in Chief
Step 1. The invisible wall

Let's consider the basic scenario of making a rigid, flat wall invisible to an unmoving observer at a known position (for example, centered in front of the wall at a known distance from it). You could do this simply by having a camera capture everything behind the wall (from the observer's known fixed perspective) and project it on the front side.
Alternatively, we propose building the invisible wall by using a network of small computer-based devices. (This serves as a basic building block toward defining the invisibility cloak, for which we will remove the constraints of a flat, rigid shape and of a single, fixed observer.) For this, we will need:

- In devices: sensors that can perceive light emissions and transform them into digital signals, such as a digital camera's CCD sensors
- Out devices: emitters that can appropriately emit light on the basis of specific signals received, such as LCD displays, LEDs, and microlasers

Of course, you might also have devices acting as both In sensors and Out emitters.
To create the wall, we would deploy densely packed In and Out devices on the wall, one type on each side. We do not consider devices placed in regular, possibly wired, grids as in cameras and monitors. Instead, we consider the wall randomly filled with unwired devices to avoid placement and wiring efforts and to enable, say, "painting" or spraying the wall with transparent glue containing sensors and emitters. Such a choice, while increasing flexibility and fault tolerance, requires devices to have short-range wireless communication capabilities (optical- or radio-based) [9] and to be either internally self-powered (for example, via solar energy or a light battery) [1] or laid on a conductive substrate and fed with external power [10].
Once applied, each In device must record its local light information on the wall and transmit it to the corresponding Out device on the opposite side, to globally reconstruct the image. Such transmissions, owing to short-range communication capabilities—and to avoid limiting the wall's size and thickness—must occur in multiple steps, by properly routing messages across the network to their destination (see Figure 1a). Of course, this requires both sides of the wall to be seamlessly part of the same network, so that a message can pass from one side to the other by continuously traveling over the network. For example, in Figure 1a, we assume that devices also fill the wall's gray lateral parts.
Software issues
Deploying sensors and emitters without a predetermined layout can dramatically cut the costs of building our wall. However, it complicates software design. In particular, when dealing with the software to control such an artifact, we must ask:

- To properly establish the In and Out pairs, how can devices determine their location?
- How do you route data across the network from In to Out devices?
For the first question, because we have spread the devices randomly on the wall, they do not have any a priori knowledge about their position. However, fundamentally, each device will find its mate by looking for the device on the opposite side of the wall having the same coordinates. We can't use Global Positioning System technology because it lacks the required accuracy and is costly [3]. Alternatively, thanks to recent research in localization technologies and algorithms [6,11], we can let each device determine its own position in the wall, according to a relative set of coordinates, by relying on a few devices with known positions. Such localization algorithms can rely on the geometrically intuitive fact that you can uniquely determine a point's position on a surface by measuring its distance from three nonaligned reference points (beacons) through triangulation [3,11]. So, for the invisible wall, we can establish the global coordinate system in four main steps.
First, we choose at least three beacons—either via an external stimulus or by a leader election algorithm—defining a coordinate frame (in our case, we might choose devices along the wall's edges, as in Figure 1a).
[Figure 1. The invisible wall: (a) a global view and (b) local routing of a tuple toward a specific point.]

Second, each beacon must produce some sort of signal to let other devices estimate their distance from the three beacons and eventually, via triangulation, their positions [3].
Devices can estimate distance by exploiting either a physical property of the signal the beacons emit (for example, its attenuation, although this method's accuracy at such small scales might be inadequate) or a geometrical property (for example, the devices' average density coupled with expanding-ring hop-count messages [6]).
Third, the triangulation process spreads to further devices—as soon as they know their position with respect to the coordinate system—and diffuses across the whole surface.
The last step is to execute a distributed, iterative algorithm [11] on the surface to improve the position estimates, which could have accumulated notable errors during the diffusion process, until reaching the required accuracy. Although this might take some time, the process's execution time is not critical because it takes place only at bootstrap.
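To make the triangulation step concrete, here is a minimal Python sketch under our own simplifying assumptions (a 2D wall, reasonably accurate distance estimates, a least-squares solve); it is an illustration, not the algorithm of the cited localization papers:

```python
import numpy as np

def trilaterate(beacons, distances):
    """Estimate a device's 2D position from its distances to three
    (or more) beacons at known coordinates. Subtracting the first
    circle equation from the others removes the quadratic terms,
    leaving a linear system A [x, y]^T = b solvable in least squares."""
    beacons = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (beacons[1:] - beacons[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1)
         - np.sum(beacons[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical beacons along the wall's edges; distances as a device
# might estimate them from signal attenuation or hop counts.
beacons = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
true_pos = np.array([0.4, 0.7])
dists = [np.linalg.norm(true_pos - np.array(b)) for b in beacons]
print(trilaterate(beacons, dists))  # ~ [0.4, 0.7]
```

With noisy distances, the least-squares solve degrades gracefully, which is one reason the iterative refinement in the fourth step still matters.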
Given the availability of a coordinate set, the same for both sides of the wall, you can restate the second question as: "How can you route the data sensed by an In device at coordinates (x_In, y_In) to the Out device positioned at the (x_In, y_In) coordinates on the wall's opposite side?" Or, better, because an Out device might not exist at exactly those coordinates (or because such a device could be dead or temporarily unreachable), you might ask, "How can you route such data to the Out device closest to (x_In, y_In)?"
In this case, the answer is rather simple (see Figure 1b). Each In device must propagate a message (that is, a tuple) in the form (x_In, y_In, color), representing the image information it captured at its own coordinates. You can then route such a tuple from device to device toward the wall's closest edge. Because a device knows its own coordinates and can store its neighbors' coordinates (as obtained during localization), it also knows whether it must propagate a message and where, on the basis of simple Euclidean considerations. Once the tuple has reached the wall's opposite side, routing proceeds from device to device toward the (x_In, y_In) coordinates. Propagation stops when the tuple reaches the Out device at (x_In, y_In) or when no emitter closer to the goal can be found in the neighborhood.
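A sketch of one greedy forwarding step in Python; the tuple layout and the neighbor list are illustrative assumptions rather than a prescribed protocol:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ColorTuple:
    """The 56-bit tuple described later: 16-bit x, 16-bit y, 24-bit color."""
    x: int
    y: int
    color: int

def next_hop(own_xy, neighbor_xys, dest_xy):
    """One greedy forwarding step: hand the tuple to the neighbor
    strictly closer to the destination, or return None when no
    neighbor improves on the current device (propagation stops)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    best = min(neighbor_xys, key=lambda n: dist2(n, dest_xy), default=None)
    if best is None or dist2(best, dest_xy) >= dist2(own_xy, dest_xy):
        return None
    return best

# Example: a device at (5, 3) with three neighbors forwards a tuple
# destined for the Out device nearest (2, 3).
print(next_hop((5, 3), [(4, 3), (5, 4), (6, 3)], (2, 3)))  # (4, 3)
```

Repeating this step first toward the nearest edge and then toward (x_In, y_In) on the opposite side reproduces the two-phase routing described above.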
Optical and hardware issues
At what size do the In and Out devices provide a reasonable visual rendering? Following the Listings-Donders model of the human eye [12] (see Figure 2), we can define θ_min as the minimum angle at which an observer perceives two points A and B separately. This angle is approximately 1/60 degree, the minimum at which two light rays hit two distinct cone cells separated by an unhit cone cell.

[Figure 2. The Listings-Donders model of the human eye.]

Thus, considering Figure 2, you would perceive A and B separately, at a distance d from the eye, only if their separation l satisfies

l > d · tan(1/60°) ≈ d/3400

(since tan(1/60°) ≈ 2.9 × 10⁻⁴ ≈ 1/3400).
For a good image, then, the distance between two devices must be less than d/3400 (for example, you would perceive two objects separated by 1 mm as a single object from a distance of 3,400 mm, or 3.4 m).
For instance, to render a 1 m² wall invisible from a distance of 10 m, you'd need 115,600 devices on each side. In fact, the maximum allowed distance between two devices that still provides the impression of smoothness is 10 m/3400 ≈ 2.9 mm. So, each device must be approximately (2.9 mm)² ≈ 8.4 mm² in area; a linear meter must have at least 340 devices, and a square meter 115,600 devices (340²).
Sidebar: Simply Difficult

Developing large arrays of microsensors on flexible surfaces is an interesting area of research that could lead to many applications. In that respect, the article investigates interesting concepts for implementing such devices. However, although the goal of invisibility is quite catchy and sure to motivate much discussion, the claims are at odds with standard results in computer vision and computer graphics, and it would be wise to motivate work on sensor and processor networks without resorting to Harry Potter.

Although it's possible, in principle, to project the image received by an array of input sensors through another array of output sensors, the extrapolation from an invisible wall with a fixed observer to a general cloak overlooks several known facts in the geometry and photometry of image formation. More precisely, serious issues arise when you consider the case of an observer in an arbitrary position in a 3D world.

From a geometric standpoint, assuming that the observed world has a general 3D structure—as opposed to a flat world—it is well known that, because of parallax effects, you cannot reconstruct exactly the view that an observer in an arbitrary position would see. Even if you reconstructed the 3D structure from the input sensors—something the authors do not propose—you still could not reconstruct the view, because parts of the scene that are occluded from the input sensors' view, but visible to the observer, cannot be reconstructed. A vast literature on scene reconstruction and image-based rendering addresses these issues. Shuffling the set of input rays into a set of output rays will not generally generate a correct image.

Ignoring the geometric issues, faithful reconstruction from an arbitrary observer viewpoint implicitly assumes a simple reflection model for the objects observed in the environment. Specifically, as soon as the light reflected from an object depends on the surface orientation and viewing direction, you can no longer recover in a simple manner the intensity that you would see from a different direction. —Martial Hebert

Martial Hebert is a professor at Carnegie Mellon University's Robotics Institute. His interests include computer vision (particularly object recognition and 3D model reconstruction) and mobile robotics (particularly perception for mobility and autonomous driving).
The above requirements appear feasible with regard to the state of the art in optical MEMS technologies. For instance, three years ago, the Smart Dust project at Berkeley developed optical (laser-based), internally powered, computer-based sensors and emitters of 1 mm², which could well serve our purpose [13].
Even the amount of data that each device and its wireless communication channels must deal with is not a challenge. If we assume each device should record and transmit a 24-bit value for the color information (16 million colors) with two 16-bit values representing device coordinates, the result is a 56-bit tuple (24 + 16 + 16). You must update these values 30 times per second, the normal television frequency. For a 1 m² wall, with devices approximately 2.9 mm apart, and assuming that the communication range lets a device connect only to its closest neighbors (a very strict hypothesis), a tuple's routing process takes 170 steps on average and, in the worst case of centrally located pairs, 340 steps. Thus, devices located at the wall's edges (the ones dealing with the highest traffic) will take charge of routing information for the 170 other devices on the line between them and the wall's center. All this considered, a single device's wireless link bandwidth should be (170 × 56 bits) × 30 Hz ≈ 286 kbits/sec to sustain such peaks, which is low enough for modern microdevices to sustain.

We emphasize that we are considering here the human eye's physiological limits. For several applications, less strict constraints (and coarser renderings) might suffice.
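These back-of-the-envelope numbers are easy to reproduce. The following Python sketch restates the article's arithmetic (our own restatement, not code from the authors):

```python
import math

def wall_budget(side_m=1.0, view_dist_m=10.0, fps=30, tuple_bits=56):
    """Reproduce the sizing for a square invisible wall viewed from
    view_dist_m: device pitch from the d/3400 rule, device count,
    and the peak bandwidth an edge device must sustain."""
    pitch_m = view_dist_m / 3400          # max device spacing
    per_meter = math.ceil(1 / pitch_m)    # devices per linear meter
    per_side = (per_meter * side_m) ** 2  # devices per wall face
    # Edge devices relay traffic for half a row (the worst-case line
    # from edge to center), with every value refreshed at fps.
    peak_bps = (per_meter // 2) * tuple_bits * fps
    return pitch_m, per_side, peak_bps

pitch, count, peak = wall_budget()
print(f"pitch ~{pitch*1000:.1f} mm, {count:.0f} devices/side, "
      f"peak ~{peak/1e3:.0f} kbit/s")
# pitch ~2.9 mm, 115600 devices/side, peak ~286 kbit/s
```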
Applications

Because the described wall can render the image only from a fixed perspective, it cannot render invisibility: the image produced by the Out devices would not change as an observer moves, making the illusion vanish. Still, a transparent wall can have several potential applications in cases where you'd like to observe without being observed—for example, in therapy and investigation settings as well as for entertainment. This installation has an advantage over more traditional technologies, such as cameras or mirrors, in its self-containment. It also doesn't require specific infrastructures or skills.
This technology would also allow building further interesting artifacts. Specifically, you could use an analogous amorphous network to produce a paintable television or monitor—that is, a television sold as a paint that, once applied to a wall, works as a normal flat-screen television. In this case, you would need a TV-signal receiver at one of the painted wall's edges to also act as a beacon for the network coordinate system. Once all emitters are localized, the receiver can transmit tuples in the form (x, y, color), to be propagated in the network of emitters to render the TV image.
Sidebar: Kind of Cool

Is this paper a joke or a serious contribution to the literature? This much I know for sure: These authors have an overactive imagination! And that is probably what we all need, because the field of pervasive computing is on the whiz-bang edge of computer science to begin with.

On purely technical grounds, do I believe this proposal can work? Not really. The optical sensors and emitters required would be impossibly dense, especially considering the issues involved in each unit's directionality, determination of the viewer's position, intermingling sensors and emitters on the surface, determination of sensor and emitter positions, and so on. Do I believe we're going to invent cloaks of invisibility soon? No.

But it would be kind of cool, wouldn't it? There's something to be said for papers that go way outside the bounds and inspire people to wild feats of imagination. In that regard, I actually wish this article were more about cool concepts and less about how (not) to achieve them. Maybe we can't make an invisibility cloak, but we could make a "mirror" that does something particularly cool. The "maybes" here are endless.

So, is this paper to be taken seriously? Not by me. But who cares? If I were teaching, I might give it to my class anyway and ask them to think about it. This might not be possible in my lifetime, but could be in theirs. —Steve Shafer

Steve Shafer is a senior researcher in ubiquitous computing at Microsoft Research. His current work is in location awareness for mobile devices and intelligent environments; his previous work was in computer vision, modeling optical properties of materials, and cameras.
Step 2. The invisible object

You can only enable the real power of invisibility, and a larger class of related applications, by making it possible to paint invisible objects of any shape and by enhancing image rendering to relax the constraint of a fixed, single point of observation. You want the object to be completely covered by a sensor network so that, for any point of its surface, a ray of light incident from any direction gets properly captured and retransmitted on the opposite side.
Because one or more observers can move around the object and see all its sides, you must either integrate the In and Out devices in a single device or densely paint both on the whole surface, so that you no longer separate the object into In and Out sides. Moreover, if we want the object to show what's behind it independent of the observers' position, each portion of the surface should be able to retransmit different light configurations in different directions, to virtually extend a reasonable number of light rays through the object (see Figure 3). For this purpose, we can consider a device as a compound object capable of acquiring different light information (In function) and of firing different light rays (Out function) in different directions. Alternatively, we can consider a device as a monodirectional sensor or emitter. By distributing these monodirectional devices densely on the object surface with random orientations, any portion of the surface will have, with high probability, sensors and emitters pointing in all directions.
Software issues

In this case, implementing invisibility closely resembles the previously described invisible wall: each In device must provide light information to the Out device on the object's "opposite" side—that is, to the emitter on the surface that is closest (in surface position and orientation) to the virtual extension of the light ray that the sensor captures. However, in this case, answering how devices determine their position and the In-Out pairs and how you can route data from In to Out devices is more challenging.
For our explanation, it's important to distinguish between extrinsic and intrinsic coordinates. Extrinsic coordinates identify a device's position and orientation with respect to a 3D frame attached to the object (see Figure 4a). You could represent a device's extrinsic coordinates, for instance, by its (x, y, z) coordinates and by the two angles (θ, ϖ) determining its orientation. Intrinsic coordinates specify devices' positions in the object surface. In other words, they are 2D coordinates mapped on the surface, establishing a frame on the object's surface (see Figure 4b).
[Figure 3. The invisible object: multiple sensors pointing in different directions (or a compound multidirectional sensor).]
[Figure 4. Coordinates: (a) extrinsic, position (x, y, z) and orientation angles (θ, ϖ) with respect to a beacon-anchored 3D frame; (b) intrinsic, surface coordinates (ξ, η).]
A device's extrinsic coordinates are fundamentally important in that they unambiguously determine the coefficients of the specific light ray associated with the device—that is, the light ray an In device receives and blocks or the light ray an Out device reproduces. So, you must establish an In-Out pair between two devices whose extrinsic coordinates identify the same straight line—that is, the light ray to be reproduced. Therefore, each device must know its extrinsic coordinates to establish such a pair.
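To illustrate what "identifying the same straight line" could mean computationally, here is a Python sketch. The canonical representation (a sign-normalized direction plus the line's closest point to the origin) is our own choice; the article does not prescribe one:

```python
import math
import numpy as np

def ray_coefficients(pos, theta, phi):
    """Canonical coefficients of the light ray through a device at
    extrinsic position pos with orientation angles (theta, phi): a
    sign-normalized unit direction plus the line's closest point to
    the origin. Any two devices on the same straight line compute
    the same (or nearly the same) coefficients."""
    d = np.array([math.sin(theta) * math.cos(phi),
                  math.sin(theta) * math.sin(phi),
                  math.cos(theta)])
    if d[np.argmax(np.abs(d))] < 0:    # resolve the +/- d ambiguity
        d = -d
    p = np.asarray(pos, dtype=float)
    foot = p - np.dot(p, d) * d        # component of p orthogonal to d
    return np.concatenate([d, foot])

def are_mates(dev_a, dev_b, tol=1e-3):
    """In and Out devices are mates iff their rays coincide."""
    ca, cb = ray_coefficients(*dev_a), ray_coefficients(*dev_b)
    return np.allclose(ca, cb, atol=tol)

# An In device and, farther along the same ray, an Out device that
# points the opposite way (hypothetical numbers).
pos_a, theta, phi = np.array([0.5, 0.0, 0.2]), 1.1, 0.3
direction = np.array([math.sin(theta) * math.cos(phi),
                      math.sin(theta) * math.sin(phi),
                      math.cos(theta)])
pos_b = pos_a + 0.8 * direction
print(are_mates((pos_a, theta, phi),
                (pos_b, math.pi - theta, phi + math.pi)))  # True
```

The sign normalization matters because mates face opposite directions; it collapses antiparallel orientations onto the same canonical direction.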
Determining a device's extrinsic coordinates in a distributed way from local information can take place by extending the beacon-based localization mechanism to consider the object's local curvature and the devices' orientation (see Figure 4a). You can determine local curvature from within the surface in a completely distributed manner by taking into account the following geometrical property: on a planar surface, the ratio of a circle's circumference to its radius is always 2π; on a curved surface, the ratio of the circumference to a circle's "radius," as measured on the surface, decreases as the curvature increases, because the measured radius is actually an arc on the surface. Starting from this property, each device can measure the local curvature of the object on which it is located by measuring the circumference and the radius of a small circle centered on itself. Operatively, each device can probe its neighborhood, determine the number of devices at a given distance (the circumference) and the number of devices on the shortest path from the central device to a neighboring device (the radius), and then examine how far the ratio of these numbers is from 2π [7].
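A minimal sketch of that curvature probe, under our own assumptions that hop counts stand in for distances and that hop length roughly matches device spacing:

```python
import math

def curvature_indicator(ring_size, radius_hops):
    """Compare the measured circle 'circumference' (number of devices
    exactly radius_hops hops away) with the flat-surface expectation
    of 2*pi*radius_hops devices. A ratio near 1 suggests a locally
    flat surface; it drops below 1 as positive (sphere-like)
    curvature grows, because the measured radius is an arc. Counts
    are in devices, so a uniform density roughly cancels out."""
    expected_flat = 2 * math.pi * radius_hops
    return ring_size / expected_flat

print(curvature_indicator(ring_size=31, radius_hops=5))  # ~0.99: flat
print(curvature_indicator(ring_size=24, radius_hops=5))  # ~0.76: curved
```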
Once you do this, you can get the device's orientation on the surface from the curvature information and by epidemically comparing the beacons' orientation with other devices' orientations. To this end, you must equip each device with a system capable of determining relative orientations, such as the Cricket Compass [14]. Once you've gathered the curvature and the devices' orientations, you can easily obtain the extrinsic coordinates by a simple variant of the triangulation procedure.
Unfortunately, even once each device knows its extrinsic coordinates, this knowledge doesn't help establish the correct In-Out pairs that would enable a light ray to be reproduced. In fact, if an In device starts propagating a tuple reporting its extrinsic coordinates and the color to be reproduced (x, y, z, θ, ϖ, color), you cannot exploit this information to properly route the message toward the corresponding Out device. Extrinsic coordinates do not indicate where a light ray entering the object will exit, this being dependent on the object's shape. Without local information about the object's shape (which is impossible to store locally unless the object is very regular), you cannot know a priori where the tuple should eventually arrive or what the right direction is to approach the destination.
To solve this problem without flooding tuples across the whole network, we need a strategy to route information without explicit knowledge of mates' extrinsic coordinates. This is where intrinsic coordinates come in. Evaluating intrinsic coordinates is perfectly analogous to the flat-wall case. Also, you can effectively exploit intrinsic coordinates to route tuples toward a specific point on the surface, by making a tuple progressively approach the needed destination.
We propose that each device, once it has determined its extrinsic coordinates, can determine the coefficients coeffs of the straight line that coincides with the light ray incident on it. Of course, an In device and an Out device are mates if and only if they compute the same (or very close) coeffs. Now suppose that all devices agree on a continuous hash function H that maps a line equation's coefficients into intrinsic coordinates (ξ, η). An In device A can then send the tuple containing the color information to the device at intrinsic coordinates H(coeff_A). An Out emitter B, on its side of the object, can try to collect color information from the device at intrinsic coordinates H(coeff_B). So, if A and B are mates, the calculated intrinsic coordinates H(coeff_A) = H(coeff_B) identify a unique device closest to those coordinates. This device will act as a rendezvous point (see Figure 5) to establish the pair and exchange the needed information. Of course, the message must carry the extrinsic coordinates to deal with hash collisions—that is, to check the correctness of a forming pair in the case of multiple pairs using the same rendezvous node.
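A sketch of such a rendezvous mapping in Python. The text calls for a continuous hash function; this version merely approximates one by quantizing the coefficients before hashing, so nearly equal estimates collide at the same device (the grid size and rounding tolerance are our assumptions):

```python
import hashlib

def rendezvous_coords(coeffs, grid=340, tol=2):
    """Map a light ray's canonical line coefficients to intrinsic
    coordinates (xi, eta) on a grid x grid surface parameterization.
    Quantizing to 'tol' decimals makes mates with nearly identical
    coefficient estimates hash to the same point; the tuple still
    carries the extrinsic coordinates to resolve residual collisions."""
    key = ",".join(f"{round(c, tol):+.{tol}f}" for c in coeffs)
    digest = hashlib.sha256(key.encode()).digest()
    xi = int.from_bytes(digest[:4], "big") % grid
    eta = int.from_bytes(digest[4:8], "big") % grid
    return xi, eta

# Mates computing near-identical coefficients meet at the same point.
print(rendezvous_coords([0.301, -0.954, 0.0, 0.12, 0.08, 0.0]))
print(rendezvous_coords([0.302, -0.953, 0.0, 0.12, 0.08, 0.0]))
```

Coefficients that straddle a quantization boundary would miss each other; a truly continuous H (or hashing a few neighboring quantization cells) would avoid that, at the cost of a more elaborate scheme.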
You can optimize this process: only the first communication between an In-Out pair occurs via the rendezvous, after which the two devices become known to each other and can interact via direct tuple routing. However, such a solution is less reliable: it does not consider that a device can die or run out of power, a problem that the rendezvous solution, establishing In-Out couples dynamically, does not experience.
Internet-scale peer-to-peer computing and content-based routing have already addressed similar problems. There, peers forming an unstructured and dynamic community must exchange data and messages based not on an IP addressing scheme but rather on the messages' content (for example, "I need the mp3 of 'Hey Jude,' no matter who can provide it to me"). Data structuring in a peer-to-peer community defines a sort of extrinsic coordinate scheme: it is one on which messages must be delivered, but it provides no help in routing them. So, to avoid Gnutella's flooding approach, various proposals [5,15] exploit the common idea of relying on a virtual overlay network defining an intrinsic coordinate scheme that facilitates routing between peers. If you need to send or receive a message on the basis of its content, you can hash the message's content into intrinsic coordinates and then route it toward the rendezvous point in the overlay network.
[Figure 5. Rendezvous communication: In device A at (x_A, y_A, z_A) and Out device B at (x_B, y_B, z_B), on the same light ray, meet at the device at intrinsic coordinates (ξ_R, η_R) = H(coeff_A) = H(coeff_B).]
Optical and hardware issues

If you consider each sensor and emitter a separate device, the number of directional devices involved in rendering an object invisible might be very high. In fact, the object surface must be densely filled with sensors and emitters oriented toward all possible directions of the object's outer space.
To provide some quantitative data, we can reapply the previous considerations to a 1-m-diameter sphere. Consider again for a moment a fixed point of observation; in this case, to render invisibility at a 10 m distance, approximately 372,000 devices, each 8.44 mm² in area, must cover the sphere's surface. We derive this number from the Listings-Donders model: from a 10 m distance, a device of 8.44 mm² is seen as a single point, and we can easily calculate that a 1-m-diameter sphere has room for 372,000 such devices. When you have to preserve such a property for any direction of observation, however, each 8.44 mm² area previously occupied by a single device must now include numerous sensors and emitters pointing in different directions. As a first approximation, we can say that in creating a smooth 3D view of an object, the number of directional sensors and emitters in each 8.44 mm² area must be at least one-half the total number of such areas—that is, 186,000 (372,000/2). This is because, to acquire and display a coherent image from all possible points of view, each of the 8.44 mm² areas needs at least one sensor or emitter to support any of the other sensors and emitters directly visible from a common viewpoint (approximately the number of sensors and emitters in half the surface—for example, a hemisphere).
To fill an 8.44 mm² area with 186,000 sensors and a similar number of emitters, each device should occupy an area smaller than 5 µm × 5 µm. It's hard to imagine a single standalone computer-based device of that size. Still, you can think of packing in multiple optical microdevices pointing in different directions and controlling them via a single compound device. Efforts such as those at Texas Instruments, which has produced microdisplays made up of electrostatically actuated mirrors of a few µm, and at Philips Research Laboratories [16], which has shown the possibility of growing µm-scale LCD cells on any type of surface, demonstrate such an approach's potential feasibility.
For bandwidth requirements, we can apply calculations similar to those in the previous section. Coding the (color, x, y, z, θ, ϖ) information requires 104 bits (24 + 5 × 16 bits). Each tuple must carry its extrinsic coordinates to avoid hash collisions; you can dynamically recompute at each hop the intrinsic coordinates required for routing. Considering that, in the worst case, you must route each message for 1.57 m (one-half of a great circle of the sphere), an In-Out communication occurring through a rendezvous device would require, in the worst case, 3.14 m—that is, 1,082 hops. (We assume that the wireless communication range lets a device transmit over the 8.44 mm² area.) So, for a single directional facet, or monodirectional device, the required bandwidth to route 30 messages per second for a worst case of 1,082 other devices is 1,082 × 104 bits × 30 Hz ≈ 3.4 Mbits/sec (although the average required bandwidth for sensors might be lower). If you pack all the devices contained in the 8.44 mm² area (372,000 devices) into a single compound device, the device's bandwidth requirement increases tremendously (to approximately 1.2 Tbits/sec). As challenging as this might be, it's not technically impossible, especially when you consider recent advances in terahertz-band technologies [9]. Additionally, you could exploit AI and image-recognition technologies to have sensors track observers' positions and reduce the sensors' efforts by rendering invisibility from a limited set of viewpoints.
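The arithmetic is again easy to check; a Python sketch under the same assumptions (a 2.9 mm facet pitch and a rendezvous path of half a great circle each way):

```python
import math

def sphere_budget(diam_m=1.0, fps=30, tuple_bits=104, pitch_m=0.0029):
    """Reproduce the sizing for a 1-m-diameter invisible sphere:
    facet count from the ~8.4 mm^2 facet area, worst-case hop count
    through a rendezvous, and bandwidth per facet and per compound
    device. Values match the article's within rounding."""
    facet_area = pitch_m ** 2                     # ~8.4e-6 m^2
    facets = math.pi * diam_m ** 2 / facet_area   # ~372,000
    hops = math.pi * diam_m / pitch_m             # ~1,082 via rendezvous
    facet_bps = hops * tuple_bits * fps           # ~3.4 Mbit/s
    compound_bps = facet_bps * facets             # ~1.2 Tbit/s
    return facets, hops, facet_bps, compound_bps

facets, hops, f_bps, c_bps = sphere_budget()
print(f"{facets:,.0f} facets, {hops:,.0f} hops, "
      f"{f_bps/1e6:.1f} Mbit/s per facet, {c_bps/1e12:.2f} Tbit/s compound")
```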
Applications

Perhaps the most natural applications for invisible objects fall in the military market (for example, invisible cars and tanks) and in the nonintrusive study of natural ecosystems.

You could also paint the interior of an object, such as a room, instead of its exterior. For example, you could produce realistic, immersive virtual reality environments, freeing users from wearing intrusive hardware [17] and from staying immobile at a specific position in a specific room where specific displays are placed [18]. A related application with a great potential market in southern Europe would be to produce paintable windows. Southern European cities are characterized by tall buildings that are very dark inside, due to both narrow windows and narrow streets. Because local laws forbid changing these buildings' structures, painting virtual windows could greatly improve the quality of living there. From the inside, they would look like large, real windows; from the outside, you wouldn't perceive changes to the building.

More generally, you could effectively exploit the outlined technology to produce visibility despite occluding objects. For instance, appropriately painting trucks would improve their rear visibility. Also, you could improve limited visibility on mountain roads by painting portions of the occluding slopes.
Step 3. The invisibility cloak

The last constraint we have to remove to build the cloak of invisibility (or, more generally, a comfortable garment) is rigidity. We need to deploy our network of devices on a flexible fabric, which can assume unpredictable dynamics and shapes due to factors both external (for example, wind) and internal (for example, the wearer's movements).
Software issues

Unlike the previous cases, a flexible cloak's In-Out pairs must be continuously reestablished to reflect the cloak's movements. In particular, although a device's intrinsic coordinates remain fixed, its extrinsic coordinates can change continuously. So, the routing problem becomes particularly challenging, and you must account for the overhead of maintaining a coherent overlay structure over the cloak's amorphous network. We have two strategies for solving this problem.
First, devices could reevaluate their extrinsic coordinates continuously (for example, 30 times per second) to account for cloak reshaping. The communication would then proceed with the rendezvous approach. Unfortunately, this strategy imposes a notable computational and communication overhead on devices, leaving little room for the activities related to image rendering. Also, because the localization process might require several iterations to reach satisfying accuracy, we can't predict whether this process would be fast enough for execution 30 times per second.
Alternatively, you could rely on a small, rigid portion of the cloak (for example, a belt or a necklace) as a central point for geometrical references. Both In and Out devices, without being forced to know their extrinsic coordinates a priori, could send a partially undefined tuple (containing only the color information) toward the rigid reference point. While the tuple is being routed from the source toward the cloak's rigid section, each of the intermediate devices could dynamically compute the geometric information related to the path that the tuple followed and include this information in the tuple before propagating it. Once the tuple arrives at the cloak's rigid section, it can exploit the information collected during its travel to discover the source's extrinsic coordinates. At this point, it can apply the hash-based rendezvous mechanism to meet its mate. A drawback is load imbalance, produced by concentrating computational and communication activity in (and in the proximity of) the cloak's rigid area.
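A sketch of the bookkeeping this second strategy could use, in Python. The article leaves the exact geometric information open; here we assume each hop appends its relative displacement vector so the rigid section can recover the source's position by summation:

```python
import numpy as np

def forward_toward_belt(tuple_msg, hop_displacement):
    """One hop of the rigid-reference strategy: the tuple starts with
    only color information, and each intermediate device appends the
    relative displacement of the hop just taken (an assumption; other
    geometric encodings along the path would work as well)."""
    tuple_msg.setdefault("path", []).append(
        np.asarray(hop_displacement, dtype=float))
    return tuple_msg

def source_position_at_belt(tuple_msg, belt_position):
    """At the rigid section, recover the source's extrinsic position:
    the source lies at the belt position minus the summed path."""
    return np.asarray(belt_position, float) - np.sum(tuple_msg["path"], axis=0)

# A toy two-hop route on the cloak's surface (hypothetical numbers).
msg = {"color": 0xFA8072}
forward_toward_belt(msg, [0.01, 0.00, 0.002])
forward_toward_belt(msg, [0.01, 0.01, 0.000])
print(source_position_at_belt(msg, [0.5, 0.2, 0.9]))  # source coordinates
```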
We do not exclude that better strategies and algorithmic solutions might exist. You might find inspiration in mobile ad hoc networks, which focus on enabling mobile peers to communicate with each other despite the networks' continuously changing topology [2].
Optical and hardware issues

Cloak flexibility does not change the devices' optical and size characteristics. What might change instead is their bandwidth requirement, owing to the additional communication overhead needed to support cloak deformations and dynamic reforming of the In-Out pairs.

If we consider the first proposed solution (dynamic recomputing of positions and orientations), optimistically assuming that it is computationally feasible, the bandwidth requirements will likely increase dramatically, owing to the relocalization process's iterations. If we consider the second proposed solution, the bandwidth requirements do not seem even to double. In fact, before a device can route a message to the appropriate rendezvous place, the message must travel toward the rigid reference point (for example, the belt). So, each message on a flexible cloak travels a total distance less than twice the distance it would have traveled on a rigid object. However, you must carefully check such considerations against the load imbalances introduced.
Additionally, energy recharging can give a mutable, wearable cloak a distinct advantage over a wall or rigid object. Even when lacking an external power supply:

- You can use inertial forces induced by cloak movements to mechanically recharge the devices.
- The human body's thermal energy can provide an alternate energy source for wearable computing systems (for example, Infineon Technologies offers thermal-body-powered clothes).
A final note relates to the artifact's cost. The Institute for Defense Analysis has predicted that large-scale production of MEMS computer-based systems will reduce their cost, in the near future, to well below 1 Euro each [8]. A cloak of invisibility of 3 m² would require approximately 372,000 compound (multidirectional) devices to be invisible at a 10 m distance, implying an overall cost well below a half-million Euros.
Applications

Of course, applying the same technology, you could move from cloaks to any type of clothing. Invisible clothes could significantly impact military operations and investigations, although they raise questions of ethics. We'd rather see them appear in the fashion and entertainment markets. For fashion, you could enrich small (and cheap) portions of clothes and accessories (for example, T-shirts, bracelets, or necklaces) with invisibility frames. Moreover, an invisibility cloak could effectively augment virtual reality games in theme parks such as Disneyland.

Another application we have considered, but whose feasibility we are unsure of, is internal body monitoring. You could have a patient drink (or be injected with) a set of sensors and let them disperse in a zone of the body (for example, the stomach). Then, via an emitter-based cream that you'd paint on the patient (for example, on the belly), you could see the body's interior, despite the internal movements of the body and of the sensors within it. If impossible in a human body, you could probably do it in other types of "interiors" characterized by dynamic internal activity, such as an underground river or a complex pipeline.
Recent advances in MEMS and wireless communication technologies, together with conceptual advances in distributed coordination, let us envision the possibility of building an invisibility cloak in our lifetimes, possibly according to our conceptual implementation.

Our research group currently faces issues related to distributed coordination in pervasive and mobile computing scenarios, which brings us closer to realizing the invisibility cloak and related applications.
First, we are studying the impact of environmental dynamics on the local states of sets of locally connected sensors and on their global behaviors [19]. We've found that you can effectively exploit environmental dynamics (for example, the type and dynamics of the stimuli received) to achieve a globally organized behavior in a large set of distributed sensors. For an invisibility cloak, you could use this to let In sensors in a region of the cloak perform some sort of dynamic and distributed data compression, to drastically reduce the data traveling over the cloak.
Second, we are studying a novel approach to content-based routing, aimed at letting data flow toward its destination in a complex network by relying on distributed computational fields, providing dynamic, application-specific views of a region of physical space [4]. You can then achieve content-based routing by letting data follow the shape of such fields, which will lead it to its destination despite dynamic environmental changes. For the cloak, you could effectively use such computational fields to dynamically route information from In to Out devices despite the cloak's changes in shape.

These approaches, together with approaches from other areas (for example, amorphous computing, multiagent systems [20], and sensor networks), will likely improve the cloak's conceptual design and shorten its time to market and related costs.
ACKNOWLEDGMENTS

We thank Tim Kindberg, Roy Want, and the anonymous referees for helpful comments and corrections on this article's earlier drafts.
REFERENCES

1. D. Estrin et al., "Connecting the Physical World with Pervasive Networks," IEEE Pervasive Computing, vol. 1, no. 1, Jan.–Mar. 2002, pp. 59–69.

2. J. Broch et al., "A Performance Comparison of Multi-Hop Wireless Ad Hoc Network Routing Protocols," Proc. ACM/IEEE Conf. Mobile Computing and Networking, ACM Press, New York, 1998, pp. 85–97.

3. J. Hightower and G. Borriello, "Location Systems for Ubiquitous Computing," Computer, vol. 34, no. 8, Aug. 2001, pp. 57–66.

4. M. Mamei, L. Leonardi, and F. Zambonelli, "A Physically Grounded Approach to Coordinate Movements in a Team," Proc. 1st Int'l Workshop Mobile Teamwork, IEEE CS Press, Los Alamitos, Calif., 2002, pp. 373–378.

5. A. Rowstron and P. Druschel, "Pastry: Scalable, Decentralized Object Location and Routing for Large-Scale Peer-to-Peer Systems," Lecture Notes in Computer Science, no. 2218, Springer-Verlag, Berlin, 2001, pp. 329–350.

6. R. Nagpal, "Programmable Self-Assembly Using Biologically-Inspired Multiagent Control," Proc. 1st Int'l Conf. Autonomous Agents and Multiagent Systems, ACM Press, New York, 2002, pp. 418–425.

7. O.C. Thorpe, Computing Curvature Using Amorphous Computing, tech. report, MIT Artificial Intelligence Lab, Cambridge, Mass., 1998.

8. K.S.J. Pister, "On the Limits and Applicability of MEMS Technology," Defense Science Study Group Report, Inst. for Defense Analysis, Alexandria, Va., 2000.

9. R. Köhler et al., "Terahertz Semiconductor-Heterostructure Laser," Nature, vol. 417, no. 6885, 9 May 2002, pp. 156–159.

10. J. Lifton et al., "Pushpin Computing System Overview: A Platform for Distributed, Embedded, Ubiquitous Sensor Networks," Proc. 1st Int'l Pervasive Computing Conf. 2002, Lecture Notes in Computer Science, no. 2414, Springer-Verlag, Berlin, 2002, pp. 139–151.

11. A. Howard and M.J. Mataric, "Relaxation on a Mesh," Proc. IEEE/RSJ Int'l Conf. Intelligent Robots and Systems, IEEE CS Press, Los Alamitos, Calif., 2001, pp. 740–745.

12. J.F. Fulton, Howell's Textbook of Physiology, W.B. Saunders, Philadelphia, 1946.

13. J.M. Kahn, R.H. Katz, and K.S.J. Pister, "Emerging Challenges: Mobile Networking for Smart Dust," J. Comm. and Networks, vol. 2, no. 3, Sept. 2000, pp. 188–196.

14. N.B. Priyantha et al., "The Cricket Compass for Context-Aware Mobile Applications," Proc. ACM/IEEE Conf. Mobile Computing and Networking, ACM Press, New York, 2001, pp. 1–14.

15. I. Stoica et al., "Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications," Proc. ACM SIGCOMM 2001, ACM Press, New York, 2001, pp. 149–160.

16. R. Penterman et al., "Single-Substrate LCDs by Photo-Enforced Stratification," Nature, vol. 417, no. 6884, 2 May 2002, pp. 55–58.

17. W. Piekarski and B. Thomas, "ARQuake: The Outdoor Augmented Reality Gaming System," Comm. ACM, vol. 45, no. 1, Jan. 2002, pp. 36–38.

18. J. Jacobson and Z. Hwang, "Unreal Tournament for Immersive Interactive Theater," Comm. ACM, vol. 45, no. 1, Jan. 2002, pp. 39–42.

19. F. Zambonelli, M. Mamei, and A. Roli, "What Can Cellular Automata Tell Us about the Behaviour of Large Multiagent Systems?" Proc. 1st Int'l Workshop Software Eng. for Large Agent Systems, to be published in Lecture Notes in Computer Science, Springer-Verlag, Berlin, 2003.

20. F. Zambonelli and V. Parunak, "From Design to Intentions: Signs of a Revolution," Proc. 1st Int'l Joint Conf. Autonomous Agents and Multiagent Systems, ACM Press, New York, 2002, pp. 455–456.
THE AUTHORS

Franco Zambonelli is a professor of computer science at the University of Modena and Reggio Emilia. His research interests include distributed and pervasive computing, multiagent systems, and agent-oriented software engineering. He obtained his Laurea in electronic engineering and his PhD in computer science from the University of Bologna. He is a member of the management committee of the European Network of Excellence "AgentLink II" and, within the same network, is the coordinator of the Special Interest Group on Methodologies and Software Engineering for Agent Systems. He is a member of the IEEE, ACM, AIIA (Italian Association for Artificial Intelligence), and TABOO (Italian Association on Object-Oriented Technologies). Contact him at Dipt. di Scienze e Metodi dell'Ingegneria, Univ. di Modena e Reggio Emilia, Via Allegri 13, Reggio Emilia, Italy; franco.zambonelli@unimo.it.

Marco Mamei is a PhD student in computer science at the University of Modena and Reggio Emilia. His research interests include distributed and pervasive computing, complex and adaptive systems, and multiagent systems. He has a Laurea in computer science from the University of Modena and Reggio Emilia. He is a member of the AIIA (Italian Association for Artificial Intelligence) and TABOO (Italian Association on Object-Oriented Technologies). Contact him at Dipt. di Scienze e Metodi dell'Ingegneria, Univ. di Modena e Reggio Emilia, Via Allegri 13, Reggio Emilia, Italy; mamei.marco@unimo.it.
... The key element leading to such robust, scalable and flexible behaviors is coordination: the agents' fundamental capability to dynamically decide on their own actions in the context of their dynamic operational environment. Autonomous components, in fact, must be able to coordinate their activities' patterns to achieve goals, that possibly exceed their capabilities as singles, despite -and possibly taking advantage -of environment dynamics and unexpected situations [24,31]. ...
... Agents would be driven by the fields, not by moving form one place to another, but by making other kind of activities. In this direction, in the next future it will be interesting to apply the field-based model, in the development of new applications for sensor networks with a particular interest in those algorithms exploiting ideas taken from manifold geometry [31]. ...
Article
Full-text available
Enabling and ruling coordination activities in mobile com-puting scenarios challenge traditional approaches to distributed appli-cation development and software engineering. This chapter focuses on Distributed Motion Coordination, i.e., the problems of coordinating the movements of a group of "agents" (e.g., mobile devices, users carrying on those devices, mobile robots and sensors) and focuses on the idea of field-based approaches to model and engineer such coordinated be-haviors. The key idea in field-based approaches is to have the agents' movements driven by computational force fields, generated by the agents themselves and/or by some infrastructure and propagated across the en-vironment. A globally coordinated and self-organized behavior in agents' movements emerges due to the interrelated effects of agents following the fields' shape and of dynamic fields re-shaping. This chapter shows that field-based approaches enable the definition of adaptive and effective co-ordination schemes, which can be modeled and tested by making use of a dynamical systems formalism, and which can be easily implemented above existing middleware infrastructures. Several applications exam-ples expressing a variety of motion coordination patterns are presented, and both modeled in dynamical systems terms and tested in a simulated implementation. In addition, the programming model to code fields is presented in the context of a reference middleware implementation.
... The key element leading to such robust, scalable and flexible behaviors is coordination: the agents' fundamental capability to dynamically decide on their own actions in the context of their dynamic operational environment. Autonomous components, in fact, must be able to coordinate their activities' patterns to achieve goals, that possibly exceed their capabilities as singles, despite -and possibly taking advantage -of environment dynamics and unexpected situations [24,31]. ...
... Agents would be driven by the fields, not by moving form one place to another, but by making other kind of activities. In this direction, in the next future it will be interesting to apply the field-based model, in the development of new applications for sensor networks with a particular interest in those algorithms exploiting ideas taken from manifold geometry [31]. ...
Article
Full-text available
As computing is becoming pervasive, autonomous computer-based systems are going to be embedded in all everyday objects and in the physical environment. In such a scenario, mobility too, in different forms, will be pervasive. Mobile users, mobile devices, computer-enabled vehicles, as well as mobile software compo- nents, define an open and dynamic networked world that will be the stage of near-future software applications. Traditionally, software applications have always been built by adopting pro- gramming paradigms rooted on not-autonomous components coupled at design time by fixed interaction patterns. Although simple in principle, this approach seems inadequate to deal with the dynamism of the above sketched application scenario and it is likely to lead to even-more brittle and fragile applications. Only in recent years, the research on software agents has fostered new pro- gramming paradigms based on autonomous components (i.e. components with a separated thread of execution and control) interacting to realize an applica- tion (3,6,25). In this chapter, the term agent can refer not only to software components, but also to any autonomous real-world entity with computing and networking capability (e.g., a user carrying on a Wi-Fi PDA, a robot, or a mod- ern car). Autonomous interacting components are intrinsically robust, in that for example, if a component breaks down or goes out of the wireless network, the others can autonomously and dynamically re-organize their interaction patterns to account for such a problem. The key element leading to such robust, scalable and flexible behaviors is coordination: the agents' fundamental capability to dy- namically decide on their own actions in the context of their dynamic operational environment. Autonomous components, in fact, must be able to coordinate their activities' patterns to achieve goals, that possibly exceed their capabilities as singles, despite - and possibly taking advantage - of environment dynamics and unexpected situations (24,31). Unfortunately, general and widely-applicable approaches to program and manage these autonomous interacting systems are unknown. The main concep- tual difficulty is that it is possible to apply a direct engineered control only on the single agents' activities, while the application task is often expressed at the
... 14,15 For perfect cloaking effect, the cloak size, object shape, the movement of the object, and the position of the observer are taken into consideration while designing the invisibility cloak structure. 16 Hopefully, the geometry of cloak also plays an important role in the performance of cloaking principle, but it is rarely discussed in the literature. In many literatures, circular shape was the default shape of the clock. ...
Article
Full-text available
In this research article, a triangular-shaped microwave network cloak made out of interconnected patches was designed at the frequency of 2.88 GHz. Its performances in terms of field distributions and radar cross section scattering were compared with the existing structures. The novel triangular-shaped cloak structure was then applied to cloak a hindering obstacle placed in front of an electromagnetic source. The simulated scattering parameter and radiation pattern of antenna source were then validated through the experimental results. Also, the proposed structure was applied to reduce the mutual interference between two neighboring antennas. The effects of various parameters such as position and size of the cloak, dielectric constant, and substrate thickness on the cloaking effect were also simulated and studied.
... Invisible structures which do not produce scattered fields, letting the incident waves pass through them, are of high interest of researchers, especially as potential cloaks and lowscattering sensors [1][2][3][4][5] . In most cases, the objective is to try to hide an object with the known scattering/radiating properties from incident waves, using metamaterials or metasurfaces 6,7 . ...
Article
Here we discuss possibilities to enclose a volume of space by some penetrable metasurfaces so that the fields outside this volume are not perturbed by the metasurfaces while the fields induced inside the volume can be controlled: for example, made very large or very small. We restrict the analysis to a one-dimensional scenario where the incident field is a single plane wave and focus on realizations using purely reactive metasurfaces. The study is relevant to such applications as low-observable sensors or non-linear devices, and the studied physical phenomena have links to recent research on embedded eigenstates in continuum.
... In other words, object becomes invisible by cancelling the scattered fields from the object with that of cloak. To achieve perfect cloaking, the object should be cloaked irrespective of the incident angles, shape and the object movement [1]. For an object to become invisible in the EM radiation, the EM fields are disturbed either by controlling the material constitutive parameters or by boundary conditions. ...
Conference Paper
Full-text available
In this paper, a modulated metasurface structure is introduced for reducing the interference between two dipole antennas placed close to each other in MIMO application. For a system comprising of two dipole antennas of WLAN frequency of 2.4GHz and 6.3GHz respectively, a single metasurface cloak structure covering the 2.4GHz antenna provides proper isolation. The metasurface cloak is patterned with the modulated (aperiodic) structure. The similarity between the simulated S parameter results of the isolated antennas with that of closely placed cloaked antennas proves the effectiveness of proposed cloaking structure design. Also, the results obtained from the proposed modulated metasurface are compared with the existing capacitive horizontal ring metasurface and observed an increment in bandwidth of nearly 15 % in 6.3GHz with the proposed cloak design structure.
... The quest for invisibility has made great progress in recent years [1][2][3][4][5][6][7][8][9]. Invisibility techniques range from mimesis and camouflaging to modern optical devices, such as electromagnetic cloaking in the visible spectrum (Fig. 1). ...
Article
Full-text available
Determining how invisible an optical cloak intended to hide something is becoming a real problem in recent times. There are many invisibility mechanisms, the performance is quite different from technique to technique, and it is desirable to have a precise metric for their comparison. Here we propose a simple metric that assesses the invisibility in a similar way as humans do. This invisibility index is based on the fact that the human visual system (HVS) is highly sensitive to spatial frequencies, and then uses the Fourier transform and the contrast sensitivity function of the HVS to assess invisibility.
Article
We explore in detail the concept of mantle cloaking, showing that a single, ultrathin, conformal metasurface may drastically suppress the scattering from planar [one-dimensional (1D)], cylindrical [two-dimensional (2D)], and/or spherical [three-dimensional (3D)] objects. We propose realistic designs that may realize this concept at radio (RF) or far-infrared (FIR) frequencies. In addition we discuss properties and physical insights of this cloaking functionality compared to other available techniques involving bulk metamaterials, showing exciting potentials in terms of ultralow-profile, light-weight, and bandwidth of operation. These results may be of great interest for low-observability, camouflaging, noninvasive probing, and low-noise communication applications.
Article
Transformation optics has shown that cloaking an object from incident electromagnetic radiation is possible. However, the required material parameters are inhomogeneous, anisotropic, and, in some instances, singular at various locations. For a cloak to be practically realized, simplified parameter sets are required; these simplified parameters, however, degrade the cloaking function. Constitutive parameters for simplified two-dimensional cylindrical cloaks had previously been developed under two material-property constraints, in the initial belief that satisfying these two constraints would give the simplified cylindrical cloaks the same wave equation as an ideal cloak. Because of this error, the simplified cloaks were not perfect, and no analysis had determined the full set of material-parameter constraints needed for a perfect two-dimensional cylindrical cloak. This research developed a third constraint on the material parameters and showed that, as the material parameters better satisfy this new equation, a two-dimensional cylindrical cloak's hidden region is better shielded from incident radiation. Additionally, a novel way to derive simplified material parameters for two-dimensional cylindrical cloaks was developed: a Taylor series expansion dictated by the new constraint equation leads to simplified cloaks with significantly improved scattering-width performance compared with previously published results. It was also noted that cloak simulations are typically performed with finite-element-method (FEM) numerical codes; here, a Green's function was used to accurately calculate the scattering width of a two-dimensional cylindrical cloak with a perfectly electrically conducting inner shell.
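For context, the sketch below tabulates the well-known ideal material parameters of a linear-map 2-D cylindrical cloak alongside the widely used reduced (simplified) set for TE illumination that work of this kind improves upon; the radii are illustrative, and the cited paper's third constraint and Taylor-series parameters are not reproduced here.

```python
import numpy as np

a, b = 0.5, 1.0                      # inner and outer cloak radii (illustrative)
r = np.linspace(a + 0.05, b, 5)      # radial sample points inside the shell

# Ideal 2-D cylindrical cloak from the linear map r' = a + r*(b - a)/b
# (inhomogeneous AND anisotropic; eps_r -> 0 and eps_theta -> inf at r = a).
eps_r = (r - a) / r
eps_t = r / (r - a)
eps_z = (b / (b - a)) ** 2 * (r - a) / r

# Common reduced set for TE illumination: the dispersion relation is kept,
# the impedance match is sacrificed, so some scattering remains.
mu_r_red  = ((r - a) / r) ** 2
mu_t_red  = np.ones_like(r)
eps_z_red = np.full_like(r, (b / (b - a)) ** 2)

for i in range(len(r)):
    print(f"r = {r[i]:.2f}: ideal (eps_r, eps_t, eps_z) = "
          f"({eps_r[i]:.3f}, {eps_t[i]:.2f}, {eps_z[i]:.3f}) | "
          f"reduced (mu_r, mu_t, eps_z) = "
          f"({mu_r_red[i]:.3f}, {mu_t_red[i]:.1f}, {eps_z_red[i]:.1f})")
```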
Chapter
For some time now, the virtual and the real world have been merging ever more closely. The starting point of this development was the explosive spread of the Internet and the WWW into all areas of life, reinforced by the ubiquitous availability of computers that keep getting smaller while becoming ever more powerful and communication-capable. Notebooks and personal digital assistants are only the beginning; in fields such as ubiquitous computing and wireless sensor networks, computer science is now concerned with far more pervasive technologies that are woven into the real world, as it were, and disappear into it. Whereas many users once had to share a single computer, by the early 1990s at the latest each user had a computer of their own; today, at least in the industrialized nations, every person is surrounded by a multitude of processors without even perceiving them as such.
Article
Full-text available
A fundamental problem that confronts peer-to-peer applications is the efficient location of the node that stores a desired data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis and simulations show that Chord is scalable: Communication cost and the state maintained by each node scale logarithmically with the number of Chord nodes.
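The single operation in this abstract, mapping a key onto a node, is consistent hashing on an identifier circle. The sketch below (illustrative; it omits Chord's finger tables, joins, and failure handling, which are what give the logarithmic bounds) shows just that mapping: node names and keys are hashed onto the circle, and a key belongs to its clockwise successor.

```python
import hashlib
from bisect import bisect_left

M = 2 ** 16  # identifier space size (2^m); kept tiny for illustration

def node_id(name: str) -> int:
    """Hash names/keys onto the identifier circle (Chord uses SHA-1)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

class ChordRing:
    """Key -> node mapping only; routing, joins, and failures are omitted."""
    def __init__(self, node_names):
        self.ids = sorted(node_id(n) for n in node_names)
        self.names = {node_id(n): n for n in node_names}

    def successor(self, key: str) -> str:
        """A key is stored at the first node clockwise from its identifier."""
        k = node_id(key)
        i = bisect_left(self.ids, k)
        return self.names[self.ids[i % len(self.ids)]]  # wrap around the circle

ring = ChordRing([f"node{i}" for i in range(8)])
print(ring.successor("some-data-item"))
```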
Conference Paper
Full-text available
A hardware and software platform has been designed and implemented for modeling, testing, and deploying distributed peer-to-peer sensor networks composed of many identical nodes. Each node possesses the tangible affordances of a commonplace pushpin to meet ease-of-use and power considerations. The sensing, computational, and communication abilities of a "Pushpin", as well as a "Pushpin" operating system supporting mobile computational processes, are treated in detail. Example applications and future work are discussed.
Article
Large-scale networks of wireless sensors are becoming increasingly tractable. Advances in hardware technology and engineering design have led to dramatic reductions in size, power consumption, and cost for digital circuitry, wireless communications, and microelectromechanical systems (MEMS). This has enabled very compact, autonomous, and mobile nodes, each containing one or more sensors, computation and communication capabilities, and a power supply. The missing ingredient is the networking and applications layers needed to harness this revolutionary capability into a complete system. We review the key elements of the emergent technology of "Smart Dust" and outline the research challenges they present to the mobile networking and systems community, which must provide coherent connectivity to large numbers of mobile network nodes co-located within a small volume.
Article
A new technique to produce polymer covered liquid crystal layers on a single substrate, called photo-enforced stratification, allows cost-effective production of ultra-thin LCDs. The two-step photopolymerisation-induced phase separation of a liquid crystal and a polymer precursor can be performed on a variety of substrates and provides freedom in display design.
Conference Paper
The terahertz region (1–10 THz) of the electromagnetic spectrum offers ample opportunities in spectroscopy, free-space communications, remote sensing, and medical imaging. Yet the use of THz waves in all these fields has been limited by the lack of appropriate, convenient sources. Here we report on a semiconductor heterostructure injection laser that emits at THz frequencies (4.44 THz, λ ∼ 67 μm) and has the potential for device-like implementation. It is based on the quantum cascade scheme, employing interminiband transitions in the technologically mature AlGaAs/GaAs material system, and features a novel kind of waveguide loosely based on the surface-plasmon concept. The electron dynamics in the structure is analysed with the help of Monte Carlo simulations including both carrier-carrier and carrier-phonon scattering. The lasers emit more than 2 mW of peak optical power and can be operated in pulsed mode up to 60 K with threshold current densities of a few hundred A/cm². Continuous-wave operation is achieved at temperatures below 30 K with output powers of 150 μW.
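The quoted frequency and wavelength are mutually consistent, as a one-line check confirms:

```python
c, f = 2.998e8, 4.44e12      # speed of light (m/s), emission frequency (Hz)
print(c / f * 1e6)           # wavelength in micrometres: ~67.5, i.e. λ ≈ 67 μm
```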
Conference Paper
This paper presents the design and evaluation of Pastry, a scalable, distributed object location and routing substrate for wide-area peer-to-peer applications. Pastry performs application-level routing and object location in a potentially very large overlay network of nodes connected via the Internet. It can be used to support a variety of peer-to-peer applications, including global data storage, data sharing, group communication, and naming. Each node in the Pastry network has a unique identifier (nodeId). When presented with a message and a key, a Pastry node efficiently routes the message to the node with the nodeId that is numerically closest to the key among all currently live Pastry nodes. Each Pastry node keeps track of its immediate neighbors in the nodeId space and notifies applications of new node arrivals, node failures, and recoveries. Pastry takes network locality into account: it seeks to minimize the distance messages travel, according to a scalar proximity metric such as the number of IP routing hops. Pastry is completely decentralized, scalable, and self-organizing; it automatically adapts to the arrival, departure, and failure of nodes. Experimental results obtained with a prototype implementation on an emulated network of up to 100,000 nodes confirm Pastry's scalability and efficiency, its ability to self-organize and adapt to node failures, and its good network locality properties.
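The routing rule in this abstract, forwarding toward the nodeId numerically closest to the key, is implemented digit by digit over the identifier. The sketch below (illustrative; it collapses Pastry's routing table, leaf set, and proximity metric into a flat list of known nodes) shows a single prefix-guided hop.

```python
def digits(x: int, b: int = 4, n: int = 8):
    """A nodeId as n digits in base 2^b (Pastry interprets ids this way)."""
    out = []
    for _ in range(n):
        out.append(x % (2 ** b))
        x //= 2 ** b
    return out[::-1]

def shared_prefix(x: int, y: int) -> int:
    """Number of leading digits two identifiers have in common."""
    n = 0
    for dx, dy in zip(digits(x), digits(y)):
        if dx != dy:
            break
        n += 1
    return n

def route_step(current: int, key: int, known_nodes) -> int:
    """One Pastry-style hop: prefer a node that extends the shared prefix
    with the key; otherwise move numerically closer (wraparound ignored
    in this sketch)."""
    p = shared_prefix(current, key)
    longer = [n for n in known_nodes if shared_prefix(n, key) > p]
    if longer:
        return max(longer, key=lambda n: shared_prefix(n, key))
    return min(list(known_nodes) + [current], key=lambda n: abs(n - key))

nodes = [0x12AB32F1, 0x12ABFF00, 0x99000000, 0x12AB3300]
print(hex(route_step(0x99000000, 0x12AB3355, nodes)))  # -> 0x12ab3300
```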