Multimodality imaging is becoming increasingly important in medical imaging. Since the motivation for combining multiple imaging modalities is generally to improve diagnostic or prognostic accuracy, the benefits of multimodality imaging cannot be assessed through the display of example images. Instead, we must use objective, task-based measures of image quality to draw valid conclusions about system performance. In this paper, we present a general framework for utilizing objective, task-based measures of image quality in assessing multimodality and adaptive imaging systems. We introduce a classification scheme for multimodality and adaptive imaging systems and provide a mathematical description of the imaging chain, along with block diagrams for visual illustration. We show that the task-based methodology developed for evaluating single-modality imaging can be applied, with minor modifications, to multimodality and adaptive imaging. We discuss strategies for the practical implementation of task-based methods to assess and optimize multimodality imaging systems.
In this paper, we present a retrospective and chronological review of our efforts to revolutionize the way physical medicine is practiced by developing and deploying therapeutic robots. We present a sample of our clinical results with well over 300 stroke patients, both inpatients and outpatients, demonstrating that movement therapy has a measurable and significant impact on recovery following brain injury. Bolstered by this result, we embarked on a two-pronged approach: 1) to determine what constitutes best therapy practice and 2) to develop additional therapeutic robots. We review our robots developed over the past 15 years and their unique characteristics. All are configured not only to deliver reproducible therapy but also to measure outcomes with minimal encumbrance, thus providing critical measurement tools to help unravel the key question posed under the first prong: what constitutes "best practice"? We believe that a "gym" of robots like these will become a central feature of physical medicine and the rehabilitation clinic within the next ten years.
Ambient Intelligence (AmI) is a new paradigm in information technology aimed at empowering people's capabilities by means of digital environments that are sensitive, adaptive, and responsive to human needs, habits, gestures, and emotions. This futuristic vision of the daily environment will enable innovative human-machine interactions characterized by pervasive, unobtrusive, and anticipatory communications. Such innovative interaction paradigms make ambient intelligence technology a suitable candidate for developing various real-life solutions, including in the health care domain. This survey will discuss the emergence of ambient intelligence (AmI) techniques in the health care domain, in order to provide the research community with the necessary background. We will examine the infrastructure and technology required for achieving the vision of ambient intelligence, such as smart environments and wearable medical devices. We will summarize the state-of-the-art artificial intelligence methodologies used for developing AmI systems in the health care domain, including various learning techniques (for learning from user interaction), reasoning techniques (for reasoning about users' goals and intentions), and planning techniques (for planning activities and interactions). We will also discuss how AmI technology might support people affected by various physical or mental disabilities or chronic diseases. Finally, we will point to some of the successful case studies in the area and we will look at the current and future challenges to outline possible future research paths.
Making technological advances in the field of human-machine interactions requires that the capabilities and limitations of the human perceptual system be taken into account. The focus of this report is an important mechanism of perception, visual selective attention, which is becoming more and more important for multimedia applications. We introduce the concept of visual attention and describe its underlying mechanisms. In particular, we introduce the concepts of overt and covert visual attention, and of bottom-up and top-down processing. Challenges related to modeling visual attention, and to validating such models using ad hoc ground truth, are also discussed. Examples of the usage of visual attention models in image and video processing are presented. We emphasize applications in multimedia delivery, retargeting, quality assessment of image and video, medical imaging, and stereoscopic 3D imaging.
The premise of today's drug development is that the mechanism of a disease is highly dependent upon underlying signaling and cellular pathways. Such pathways are often composed of complexes of physically interacting genes, proteins, or biochemical activities coordinated by metabolic intermediates, ions, and other small solutes and are investigated with molecular biology approaches in genomics, proteomics, and metabonomics. Nevertheless, the recent declines in the pharmaceutical industry's revenues indicate such approaches alone may not be adequate in creating successful new drugs. Our observation is that combining methods of genomics, proteomics, and metabonomics with techniques of bioimaging will systematically provide powerful means to decode or better understand molecular interactions and pathways that lead to disease and potentially generate new insights and indications for drug targets. The former methods provide the profiles of genes, proteins, and metabolites, whereas the latter techniques generate objective, quantitative phenotypes correlating to the molecular profiles and interactions. In this paper, we describe pathway reconstruction and target validation based on the proposed systems biologic approach and show selected application examples for pathway analysis and drug screening.
Physics-based simulation is needed to understand the function of biological structures and can be applied across a wide range of scales, from molecules to organisms. Simbios (the National Center for Physics-Based Simulation of Biological Structures, http://www.simbios.stanford.edu/) is one of seven NIH-supported National Centers for Biomedical Computation. This article provides an overview of the mission and achievements of Simbios, and describes its place within systems biology. Understanding the interactions between various parts of a biological system, and integrating this information to understand how biological systems function, is the goal of systems biology. Many important biological systems comprise complex structural systems whose components interact through the exchange of physical forces, and whose movement and function are dictated by those forces. In particular, systems that are made of multiple identifiable components that move relative to one another in a constrained manner are multibody systems; Simbios' focus is on creating methods for their simulation. Simbios is also investigating the biomechanical forces that govern fluid flow through deformable vessels, a central problem in cardiovascular dynamics. In this application, the system is governed by the interplay of classical forces, but the motion is distributed smoothly through the materials and fluids, requiring the use of continuum methods. In addition to these research aims, Simbios is working to disseminate information, software, and other resources relevant to biological systems in motion.
Acquiring neural signals at high spatial and temporal resolution directly from brain microcircuits, and decoding their activity to interpret commands and/or prior planning activity, such as motion of an arm or a leg, is a prime goal of modern neurotechnology. Its practical aims include assistive devices for subjects whose normal neural information pathways are not functioning due to physical damage or disease. On the fundamental side, researchers are striving to decipher the code of the multiple neural microcircuits which collectively make up nature's amazing computing machine, the brain. By implanting biocompatible neural sensor probes directly into the brain, in the form of microelectrode arrays, it is now possible to extract information from interacting populations of neural cells with spatial and temporal resolution at the single-cell level. With parallel advances in the application of statistical and mathematical tools for deciphering the neural code extracted from populations of correlated neurons, significant understanding has been achieved of those brain commands that control, e.g., the motion of an arm in a primate (monkey or human subject). These developments are accelerating work on neural prosthetics in which brain-derived signals may be employed to bypass, e.g., an injured spinal cord. One key element in achieving the goals of practical and versatile neural prostheses is the development of fully implantable wireless microelectronic "brain interfaces" within the body, a point of special emphasis of this paper.
Pathology is a medical subspecialty that practices the diagnosis of disease. Microscopic examination of tissue reveals information enabling the pathologist to render accurate diagnoses and to guide therapy. The basic process by which anatomic pathologists render diagnoses has remained relatively unchanged over the last century, yet advances in information technology now offer significant opportunities in image-based diagnostic and research applications. Pathology has lagged behind other healthcare practices such as radiology where digital adoption is widespread. As devices that generate whole slide images become more practical and affordable, practices will increasingly adopt this technology and eventually produce an explosion of data that will quickly eclipse the already vast quantities of radiology imaging data. These advances are accompanied by significant challenges for data management and storage, but they also introduce new opportunities to improve patient care by streamlining and standardizing diagnostic approaches and uncovering disease mechanisms. Computer-based image analysis is already available in commercial diagnostic systems, but further advances in image analysis algorithms are warranted in order to fully realize the benefits of digital pathology in medical discovery and patient care. In coming decades, pathology image analysis will extend beyond the streamlining of diagnostic workflows and minimizing interobserver variability and will begin to provide diagnostic assistance, identify therapeutic targets, and predict patient outcomes and therapeutic responses.
Physical systems, from galactic clusters to diffusing molecules, often show fractal behavior. Likewise, living systems might often be well described by fractal algorithms. Such fractal descriptions in space and time imply that there is order in chaos, or put the other way around, chaotic dynamical systems in biology are more constrained and orderly than seen at first glance. The vascular network, the syncytium of cells, the processes of diffusion and transmembrane transport might be fractal features of the heart. These fractal features provide a basis which enables one to understand certain aspects of more global behavior such as atrial or ventricular fibrillation and perfusion heterogeneity. The heart might be regarded as a prototypical organ from these points of view. A particular example of the use of fractal geometry is in explaining myocardial flow heterogeneity via delivery of blood through an asymmetrical fractal branching network.
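The flow-heterogeneity claim in the final sentence can be illustrated with a minimal numerical sketch. All parameters here (the flow-split fraction gamma, the number of generations, the piece counts) are hypothetical illustrations, not the paper's data: an asymmetric dichotomous branching network, in which each bifurcation sends a fraction gamma of the incoming flow to one daughter branch, produces regional flows whose relative dispersion grows as the tissue is divided into ever smaller pieces, in rough accord with a fractal power-law relation.

```python
import numpy as np

def terminal_flows(gamma=0.40, generations=12):
    """Leaf flows of an asymmetric dichotomous branching tree (depth-first
    order, so sibling branches stay adjacent): each bifurcation sends a
    fraction gamma to one daughter and 1 - gamma to the other."""
    flows = np.array([1.0])
    for _ in range(generations):
        flows = np.repeat(flows, 2) * np.tile([gamma, 1.0 - gamma], flows.size)
    return flows

def relative_dispersion(flows, pieces):
    """RD = SD/mean of flow after summing leaves into contiguous pieces."""
    agg = flows.reshape(pieces, -1).sum(axis=1)
    return agg.std(ddof=1) / agg.mean()

flows = terminal_flows()
# Heterogeneity grows as the tissue is divided ever more finely,
# approximately as a power law RD(m) ~ m**(D-1) in the piece count m.
rds = [relative_dispersion(flows, m) for m in (4, 16, 64, 256)]
```

The monotone rise of `rds` with piece count is the signature the abstract points to: apparent flow heterogeneity depends on the spatial resolution of the measurement, exactly as a fractal description predicts.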
The successful development of neural prostheses requires an understanding of the neurobiological bases of cognitive processes, i.e., how the collective activity of populations of neurons results in a higher-level process not predictable based on knowledge of the individual neurons and/or synapses alone. We have been studying and applying novel methods for representing nonlinear transformations of multiple spike train inputs (multiple time series of pulse train inputs) produced by synaptic and field interactions among multiple subclasses of neurons arrayed in multiple layers of incompletely connected units. We have been applying our methods to the study of the hippocampus, a cortical brain structure that has been demonstrated, in humans and in animals, to perform the cognitive function of encoding new long-term (declarative) memories. Without their hippocampi, animals and humans retain short-term memory (memory lasting approximately 1 min) and long-term memory for information learned prior to the loss of hippocampal function. Results of more than 20 years of studies have demonstrated that both individual hippocampal neurons and populations of hippocampal cells, e.g., the neurons comprising one of the three principal subsystems of the hippocampus, induce strong, higher order, nonlinear transformations of hippocampal inputs into hippocampal outputs. For one synaptic input, or for a population of synchronously active synaptic inputs, such a transformation is represented by a sequence of action potential inputs being changed into a different sequence of action potential outputs. In other words, an incoming temporal pattern is transformed into a different, outgoing temporal pattern. For multiple, asynchronous synaptic inputs, such a transformation is represented by a spatiotemporal pattern of action potential inputs being changed into a different spatiotemporal pattern of action potential outputs.
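As a concrete, deliberately toy illustration of the kind of nonlinear spike-train transformation described above, consider a discrete second-order Volterra model. The kernels `k1` and `k2` below are hypothetical, chosen only to show how a pairwise nonlinear term makes the response to two closely spaced input spikes differ from the sum of the responses to each spike alone; this is a generic sketch, not the authors' fitted model.

```python
import numpy as np

def volterra_drive(x, k0, k1, k2):
    """Second-order Volterra transformation of a spike train x:
    u(t) = k0 + sum_i k1[i] x(t-i) + sum_{i,j} k2[i,j] x(t-i) x(t-j)."""
    m = len(k1)
    u = np.full(len(x), float(k0))
    for t in range(len(x)):
        past = x[max(0, t - m + 1):t + 1][::-1]      # x(t), x(t-1), ...
        past = np.pad(past, (0, m - past.size))
        u[t] += k1 @ past + past @ k2 @ past
    return u

m = 5
k1 = np.array([1.0, 0.6, 0.3, 0.1, 0.0])             # hypothetical linear kernel
k2 = 0.2 * np.eye(m, k=1)                            # hypothetical 2nd-order kernel
k2 = k2 + k2.T                                       # (symmetric)

x = np.zeros(20)
x[5] = 1.0
x[6] = 1.0                                           # two input spikes, 1 bin apart
u = volterra_drive(x, 0.0, k1, k2)
spikes_out = u > 1.5                                 # output spikes by threshold
# u[5] = 1.0 (single-spike response), but u[6] = 1.0 + 0.6 + 0.4 = 2.0:
# the paired-pulse term pushes the second response past the linear sum (1.6),
# so only the second input spike produces an output spike.
```

The point of the sketch is the abstract's point: the output temporal pattern (one spike) is not a filtered copy of the input pattern (two spikes); the nonlinearity reshapes it.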
Our primary thesis is that the encoding of short-term memories into new, long-term memories represents the collective set of nonlinearities induced by the three or four principal subsystems of the hippocampus, i.e., entorhinal cortex-to-dentate gyrus, dentate gyrus-to-CA3 pyramidal cell region, CA3-to-CA1 pyramidal cell region, and CA1-to-subicular cortex. This hypothesis will be supported by studies using in vivo hippocampal multineuron recordings from animals performing memory tasks that require hippocampal function. The implications of this hypothesis will be discussed in the context of "cognitive prostheses": neural prostheses for cortical brain regions believed to support cognitive functions, regions that often are subject to damage due to stroke, epilepsy, dementia, and closed head trauma.
This special issue focuses on elucidating computational neuroscience: an interdisciplinary field of scientific research in which one of the primary goals is to understand how electrical activity in brain cells and networks enables biological intelligence. The objective is to provide a selection of papers that expose and review research efforts in aspects of computational neuroscience that demonstrate its rapidly growing intersection with electrical, electronic, and computer engineering, and the prospects for interaction in the near- and long-term future.
Technology has advanced to where it is possible to design and grow, with predefined geometry and surprisingly good fidelity, living networks of neurons in culture dishes. Here we overview the elements of design, emphasizing the lithographic techniques that alter the cell culture surface, which in turn influences the attachment and growth of the neural networks. Advanced capability in this area makes it possible to design networks of desired complexity. Other issues addressed include the influence of glial cells and media on activity, and the potential for extending the designs into three dimensions. Investigators are advancing the art and science of analyzing, and of controlling through stimulation, the function of these neural networks, including the ability to take advantage of their geometric form in order to influence functional properties.
We have developed an office-based vector tissue Doppler imaging (vTDI) system that can be used to quantitatively measure muscle kinematics using ultrasound. The goal of this preliminary study was to investigate whether vTDI measures are repeatable and can be used robustly to measure and understand the kinematics of the rectus femoris muscle during a drop jump task. Data were collected from 8 healthy volunteers. Vector TDI, along with high-speed camera video, was used to better understand the dynamics of the drop jump. Our results indicate that the peak resultant vector velocity of the rectus femoris immediately following landing was repeatable across trials (intraclass correlation coefficient = 0.9). The peak velocity had a relatively narrow range in 6 out of 8 subjects (48-62 cm/s), while in the remaining two subjects it exceeded 70 cm/s. The entire drop jump lasted 1.45 ± 0.27 seconds. The waveform of muscle velocity could be used to identify different phases of the jump. Also, the movement of the ultrasound transducer holder was minimal, with a peak deflection of 0.91 ± 0.54 degrees over all trials. Vector TDI can be implemented in a clinical setting using an ultrasound system with a research interface to better understand muscle kinematics in patients with ACL injuries.
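For readers unfamiliar with the repeatability statistic quoted above, a one-way random-effects intraclass correlation, ICC(1,1), can be computed as follows. The velocity numbers here are fabricated for illustration (loosely shaped like the reported 48-62 cm/s range plus two faster subjects) and are not the study's data.

```python
import numpy as np

def icc_1_1(x):
    """One-way random-effects intraclass correlation ICC(1,1).
    x: subjects-by-trials array of repeated measurements."""
    n, k = x.shape
    grand = x.mean()
    # Between-subjects mean square (df = n - 1).
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    # Within-subjects mean square (df = n * (k - 1)).
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical peak-velocity data (cm/s): 8 subjects x 3 drop-jump trials,
# stable within each subject, so repeatability should be high.
rng = np.random.default_rng(0)
subject_means = np.array([48, 52, 55, 58, 60, 62, 72, 75], dtype=float)
trials = subject_means[:, None] + rng.normal(0, 2.0, size=(8, 3))
icc = icc_1_1(trials)
```

With within-subject scatter (SD ≈ 2 cm/s) much smaller than between-subject spread, the ICC comes out close to 1, the situation the abstract's value of 0.9 reflects.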
Magnetic resonance imaging (MRI) has been successfully applied to many of the applications of molecular imaging. This review discusses by example some of the advances in areas such as multimodality MR-optical agents, receptor imaging, apoptosis imaging, angiogenesis imaging, noninvasive cell tracking, and imaging of MR marker genes.
Piezoresistive sensors are among the earliest micromachined silicon devices. The need for smaller, less expensive, higher performance sensors helped drive early micromachining technology, a precursor to microsystems or microelectromechanical systems (MEMS). The effect of stress on doped silicon and germanium has been known since the work of Smith at Bell Laboratories in 1954. Since then, researchers have extensively reported on microscale, piezoresistive strain gauges, pressure sensors, accelerometers, and cantilever force/displacement sensors, including many commercially successful devices. In this paper, we review the history of piezoresistance, its physics and related fabrication techniques. We also discuss electrical noise in piezoresistors, device examples and design considerations, and alternative materials. This paper provides a comprehensive overview of integrated piezoresistor technology with an introduction to the physics of piezoresistivity, process and material selection and design guidance useful to researchers and device engineers.
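As a quick illustration of the physics being reviewed, the first-order piezoresistive relation ΔR/R = π_l·σ_l + π_t·σ_t can be evaluated directly. The coefficients below are commonly quoted textbook values for p-type silicon along <110> (tracing back to Smith's measurements) and are used here only as a sketch, not as design data.

```python
def fractional_resistance_change(sigma_l, sigma_t, pi_l, pi_t):
    """First-order piezoresistive response: delta_R/R from longitudinal and
    transverse stress (Pa) and piezoresistive coefficients (1/Pa)."""
    return pi_l * sigma_l + pi_t * sigma_t

# Textbook coefficients for p-type silicon, <110> direction (illustrative):
#   pi_l ~ +71.8e-11 /Pa, pi_t ~ -66.3e-11 /Pa
dRR = fractional_resistance_change(sigma_l=10e6, sigma_t=0.0,
                                   pi_l=71.8e-11, pi_t=-66.3e-11)
# 10 MPa of purely longitudinal stress -> delta_R/R = 7.18e-3 (~0.7%),
# orders of magnitude larger than the geometric effect in metal gauges.
```

This size of response relative to metal strain gauges is what made the silicon devices the abstract describes commercially attractive.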
Modeling is essential to integrating knowledge of human physiology. Comprehensive, self-consistent descriptions expressed in quantitative mathematical form define working hypotheses in testable and reproducible form, and though such models are always "wrong" in the sense of being incomplete or partly incorrect, they provide a means of understanding a system and improving that understanding. Physiological systems, and models of them, encompass different levels of complexity. The lowest levels concern gene signaling and the regulation of transcription and translation, then biophysical and biochemical events at the protein level, and extend through the levels of cells, tissues, and organs all the way to descriptions of integrated systems behavior. The highest levels of organization represent the dynamically varying interactions of billions of cells. Models of such systems are necessarily simplified to minimize computation and to emphasize the key factors defining system behavior; different model forms are thus often used to represent a system in different ways. Each simplification of complicated lower-level function reduces the range over which the higher-level model operates accurately, reducing robustness: the ability to respond correctly to dynamic changes in conditions. When conditions change so that the complexity reduction drives the solution outside its range of validity, detecting the deviation is critical; special methods are then required to adapt the model formulation, either by switching to alternative reduced-form modules or by decomposing the reduced-form aggregates into the more detailed lower-level modules, so that appropriate behavior is maintained. These processes of error recognition, of mapping between different levels of model complexity, and of shifting the levels of complexity of models in response to changing conditions are essential for adaptive modeling and computer simulation of large-scale systems in reasonable time.
A review and tutorial of the fundamental ideas and methods of joint time-frequency distributions is presented. The objective of the field is to describe how the spectral content of a signal changes in time and to develop the physical and mathematical ideas needed to understand what a time-varying spectrum is. The basic goal is to devise a distribution that represents the energy or intensity of a signal simultaneously in time and frequency. Although the basic notions have been developing steadily over the last 40 years, there have recently been significant advances. This review is intended to be understandable to the nonspecialist, with emphasis on the diversity of concepts and motivations that have gone into the formation of the field.
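The simplest and most widely used member of this family of distributions is the spectrogram, the squared magnitude of the short-time Fourier transform. A minimal numpy sketch (window length, hop, and the test chirp are arbitrary choices for illustration) shows the core idea: localize the signal with a sliding window, take the Fourier transform of each slice, and watch the spectral peak move in time.

```python
import numpy as np

def stft_magnitude(x, fs, win_len=128, hop=32):
    """Short-time Fourier transform magnitude: a basic time-frequency
    distribution showing how spectral content evolves in time."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i*hop:i*hop + win_len] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1))   # frames x frequency bins
    freqs = np.fft.rfftfreq(win_len, d=1.0/fs)
    times = (np.arange(n_frames) * hop + win_len / 2) / fs
    return times, freqs, spec

# A linear chirp: instantaneous frequency 50 + 350*t Hz over 1 s.
fs = 2000
t = np.arange(0, 1, 1/fs)
x = np.sin(2 * np.pi * (50 * t + 175 * t**2))

times, freqs, spec = stft_magnitude(x, fs)
peak_f = freqs[np.argmax(spec, axis=1)]          # ridge of the distribution
# The ridge rises from roughly 50 Hz toward 400 Hz, tracking the chirp:
# the "time-varying spectrum" the field seeks to formalize.
```

The review's subject is precisely the limitations of this naive picture (window-dependent resolution, cross-term behavior of more refined distributions) and the distributions designed to overcome them.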
By doping the substrate of a MOST more heavily near the Si-SiO2 interface than in the bulk of the semiconductor, an improvement in the transconductance can be achieved. Since the gate capacitance after turn-on remains unchanged, an improvement in the cutoff frequency results. Numerical results demonstrating this effect are presented.
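The reasoning in the second sentence follows from the standard cutoff-frequency relation f_T = g_m / (2πC_g): with the gate capacitance fixed, any fractional improvement in transconductance carries over directly to f_T. The numbers below are arbitrary illustrations, not the paper's results.

```python
import math

def cutoff_frequency(gm_siemens, cg_farads):
    """Unity-current-gain cutoff frequency f_T = g_m / (2*pi*C_g)."""
    return gm_siemens / (2 * math.pi * cg_farads)

f1 = cutoff_frequency(10e-3, 1e-12)   # baseline device: 10 mS, 1 pF
f2 = cutoff_frequency(13e-3, 1e-12)   # 30% higher g_m, same gate capacitance
# f2 / f1 == 1.3: the cutoff frequency improves by the same 30%.
```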
Telecommunications networks of the future will exploit two new network architecture concepts that are currently being implemented, or soon will be. These are the Intelligent Network and ISDN, the Integrated Services Digital Network, which together will support a full range of voice, data, and image services that Information Age telecommunications users will demand. These new network architectures, operating synergistically with intelligence in terminal systems, will constitute a framework in which users and service providers will link together standardized functional components to create customized services. These components, along with interfaces and signaling protocols at the interfaces and within the network, will result from continuing national and international standardization efforts. In the planning of these new architectures, a few major goals are of paramount importance:
• the achievement of a flexible network structure in which functionality is distributed among the network components in a way which supports the timely and economic introduction of new services in response to user needs;
• the establishment of industry standards at the interfaces between network elements, such that service suppliers can choose among a set of available systems products in building their networks and avoid dependence on a small set of suppliers;
• the development of standard user interfaces supporting signaling procedures which can provide the user with increased control of, and access to, services to satisfy his needs.
Achievement of these goals will result in the realization of an Open Network Architecture. The ISDN and Intelligent Network architecture concepts are described in this paper.
Using large loop antennas the ultra-low-frequency research group at the National Bureau of Standards has studied the upper atmosphere phenomena of geomagnetic micropulsations. Data taken at a number of world stations on both direct reading chart and magnetic tape indicate a division into three contributing phenomena for the frequency range of 3.0 to 0.02 cps. Very regular oscillations of 2.0 to 0.2 cps are a strange pulsation phenomenon most likely of outer atmospheric origin but apparently unrelated to solar-terrestrial disturbances. Sudden bursts of large amplitude field fluctuations spread throughout the frequency range are closely related to high latitude particle precipitation, enhanced ionospheric absorption, and auroral luminosity; these fluctuations seem to be of ionospheric origin. Regular oscillations between 0.2 and 0.03 cps appearing over broad sections of the earth with related phase on days of high solar-terrestrial activity are presently the best candidate for magneto-hydrodynamic interpretation. During the International Quiet Sun Year (IQSY) a configuration of world stations will be operated along a latitudinal line covering about 180° at three sites in the boreal auroral zone, along a longitudinal line near 75° to 80°W longitude with conjugate stations corresponding to L shell values of about 6.5 and 4, and at an equatorial site.
Technology challenges for silicon integrated circuits with a design rule of 0.1 μm and below are addressed. We begin by reviewing the state-of-the-art CMOS technology at 0.25 μm currently in development, covering logic-oriented processes and dynamic random access memory (DRAM) processes. CMOS transistor structures are compared by introducing a figure of merit. We then examine scaling guidelines for 0.1 μm, which have started to deviate from the classical theory of constant-field scaling in order to optimize performance. This highlights the problem of nontrivial subthreshold current associated with scaled-down CMOS with low threshold voltages. Interconnect issues are then considered to assess the performance of microprocessors in 0.1 μm technology. 0.1 μm technology will enable a microprocessor which runs at 1000 MHz with 500 million transistors. Challenges below 0.1 μm are then addressed. New transistor and circuit possibilities such as silicon on insulator (SOI), dynamic-threshold (DT) MOSFET, and back-gate input MOS (BMOS) are discussed. Two problems below 0.1 μm are highlighted: threshold voltage control and pattern printing. It is pointed out that threshold voltage variations due to doping fluctuations are a limiting factor for scaling CMOS transistors for high performance. The problem with lithography below 0.1 μm is the low throughput of a single probe. The use of massively parallel scanning probe assemblies working over the entire wafer is suggested to overcome the problem of low throughput.
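For reference, the classical constant-field (Dennard) scaling that the 0.1 μm guidelines deviate from can be sketched as a simple rule table: shrink all dimensions and voltages by 1/κ and raise doping by κ, so internal electric fields stay fixed. The parameter names and the κ = 2.5 step from 0.25 μm below are illustrative only; as the abstract notes, real 0.1 μm designs keep supply and threshold voltages higher than these rules dictate precisely because of subthreshold current.

```python
def constant_field_scale(params, kappa):
    """Apply classical constant-field scaling rules to a device described
    by a dict of (illustrative) parameters."""
    rules = {
        "gate_length_um": 1 / kappa,    # all linear dimensions shrink
        "oxide_nm":       1 / kappa,
        "supply_v":       1 / kappa,    # voltages shrink with dimensions
        "doping_cm3":     kappa,        # doping rises to keep fields fixed
        "delay_ps":       1 / kappa,    # circuit delay improves
        "power_per_ckt":  1 / kappa**2,
        "density":        kappa**2,
    }
    return {k: v * rules[k] for k, v in params.items()}

quarter_micron = {"gate_length_um": 0.25, "supply_v": 2.5,
                  "delay_ps": 100.0, "density": 1.0}
scaled = constant_field_scale(quarter_micron, kappa=2.5)   # 0.25 um -> 0.1 um
# Constant-field rules would push the supply to 1.0 V at 0.1 um; holding the
# threshold voltage proportionally low is what makes subthreshold leakage
# the nontrivial problem highlighted in the abstract.
```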
There are several lasers that can provide high-power radiation at deep-UV wavelengths. The only laser that has been successfully used in semiconductor manufacturing as a source for lithography is the excimer laser. Excimer lasers provide direct deep-UV light, are scalable in energy and power, and are capable of operating with narrow spectral widths. Also, by providing three wavelengths at 248, 193, and 157 nm, excimer lasers span three generations. They have large beams and a low degree of coherence. Their physics and chemistry are well understood. Thanks to major technical developments, these lasers have kept up with the ever-tightening specifications of the lithography industry. We will discuss what these specifications are and the advances that have been made in laser technology to meet them. We will also identify possible future limitations in this technology. The success behind the microelectronics explosion is attributed to many factors. The excimer laser is one of them.
A new type of 1.06-µm solid-state detector is discussed, the inverted heterojunction III-V alloy mesa photodiode, which offers quantum efficiencies near 100 percent, extremely low capacitance and transit time, and low dark currents. The characteristics of these detectors allow their use in sensitive 1.06-µm optical receivers which promise better signal-to-noise ratios in a number of applications than any other available 1.06-µm photodetector. In particular, an optimization procedure for selecting photodiode and preamplifier parameters to give the best signal-to-noise ratio under signal conditions is discussed, and this technique is applied to a proposed system application. It is shown that in this laser-illuminated airborne night imaging system, a small-area heterojunction III-V alloy photodiode detector in an optimized receiver should be able to give signal-to-noise ratios much higher than any other 1.06-µm detector approach, even though the other 1.06-µm detectors may have lower noise equivalent power (NEP) values than this receiver. This illustrates the fact that such "magic numbers" for detector comparison as NEP are applicable only to comparing similar types of detectors in certain specific types of applications (such as comparing IR photoconductors in a high-background application), and are of very little value in determining the relative performance of different types of detectors for a given system application (such as comparing photomultipliers, avalanche photodiodes, and low-noise photodiodes for this application).
The convolution of two signals in an acoustic delay line can be detected as an electrical output at suitable electrodes, using a nonlinear crystal, such as lithium niobate, for the delay-line medium. Also, a time-reversed version of an acoustic signal in the delay line can be generated by the application of a short electrical pulse to the electrodes. These forms of signal processing promise to have many applications for signal correlation and pulse compression. Convolution and time reversal in a parametric bulk acoustic-wave device have been demonstrated using a 1.35-GHz carrier modulated with a 7-bit optimum stagger sequence. The use of longitudinal waves in an optimally oriented lithium niobate crystal has produced improved output levels.
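The pulse-compression application mentioned above can be sketched numerically. The 7-bit Barker code below merely stands in for the paper's 7-bit optimum stagger sequence (assumed comparable only in length and binary-phase character); the operation itself, convolving the received code with a time-reversed reference, which equals correlation, is what the nonlinear delay line computes physically.

```python
import numpy as np

# Pulse compression of a 7-bit binary phase code (Barker-7, illustrative):
# convolving the code with its time-reversed replica is the same as
# correlating it, compressing the 7-chip waveform into one sharp peak.
code = np.array([1, 1, 1, -1, -1, 1, -1], dtype=float)
reference = code[::-1]                       # time-reversed replica
compressed = np.convolve(code, reference)    # convolution + reversal = correlation
peak = compressed.max()                      # mainlobe: 7 (the full code energy)
sidelobes = np.abs(np.delete(compressed, np.argmax(compressed))).max()
# Barker codes keep every sidelobe at magnitude <= 1, giving a 7:1
# peak-to-sidelobe ratio from a constant-amplitude transmitted signal.
```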