Advanced Science Letters

Published by American Scientific Publishers
Online ISSN: 1936-7317
Print ISSN: 1936-6612
Publications
Discrete nanodiamond particles of 500 nm and 6 nm average size were seeded onto silicon substrates and plasma treated using chemical vapor deposition to create silicon-vacancy color centers. The resulting narrow-band room-temperature photoluminescence is intense and readily observed even for weakly agglomerated sub-10 nm diamond. This is in contrast to the well-studied nitrogen-vacancy center in diamond, whose luminescence properties are strongly dependent on particle size, with low probability for incorporation of centers in sub-10 nm crystals. We suggest the silicon-vacancy center as a viable alternative to nitrogen-vacancy defects for use as a biomarker in the clinically relevant sub-10 nm size regime, for which nitrogen defect-related luminescent activity and stability is reportedly poor.
 
Since a pneumatic system exhibits compressibility and time-delay nonlinearities, it is difficult to establish an appropriate mathematical model for model-based controller design, especially for a heavy-duty pneumatic actuating table. Although fuzzy logic control is model-free, it still requires time-consuming tuning of the rule bank and fuzzy parameters. Here, a self-adaptation fuzzy controller (SAFC) is proposed to control the up-down motion of a four-legged pneumatic actuating table. This intelligent control strategy combines an adaptive rule with fuzzy and sliding-mode control algorithms. It has on-line learning ability to deal with the system's time-varying, nonlinear, and coupled uncertainties and to adjust the control-rule parameters. Only eleven fuzzy rules are required for this MIMO pneumatic actuating table motion control, and these fuzzy control rules can be established and modified continuously by on-line learning. The experimental results show that this intelligent control algorithm can effectively drive the pneumatic table to track the specified motion trajectories.
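As a concrete illustration of the kind of on-line rule adaptation described above, the following is a minimal single-axis sketch; the sliding variable, triangular memberships, gains, and the toy first-order plant are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Minimal single-axis sketch of a self-adaptation fuzzy controller (SAFC):
# a sliding variable s = tanh(de + LAM*e) fires 11 triangular fuzzy rules,
# and the rule consequents w are tuned on-line by a gradient-style adaptive
# rule that pushes s toward zero. Gains and memberships are illustrative.

LAM, ETA = 2.0, 0.05                      # surface slope, learning rate
CENTERS = np.linspace(-1.0, 1.0, 11)      # centers of the 11 fuzzy rules
WIDTH = CENTERS[1] - CENTERS[0]
w = np.zeros(11)                          # rule consequents, learned on-line

def firing(s):
    """Normalized triangular membership degrees of the 11 rules."""
    mu = np.clip(1.0 - np.abs(s - CENTERS) / WIDTH, 0.0, 1.0)
    return mu / (mu.sum() + 1e-9)

def safc_step(e, de):
    """One control step: compute u, then adapt the rule consequents."""
    global w
    s = np.tanh(de + LAM * e)             # normalized sliding variable
    phi = firing(s)
    u = float(w @ phi)                    # defuzzified control output
    w -= ETA * s * phi                    # adaptive rule: drive s -> 0
    return u

# toy usage: steer a first-order plant x' = -x + u to the setpoint 1.0
x, dt, e_prev = 0.0, 0.01, -1.0
for _ in range(3000):
    e = x - 1.0
    de = (e - e_prev) / dt
    e_prev = e
    x += dt * (-x + safc_step(e, de))
print(f"final position: {x:.3f}")
```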
 
The autonomous underwater vehicle (AUV) is widely used for both military and scientific purposes. AUVs can keep humans out of severe environments and reduce the operating cost of underwater equipment. Generally, AUV hulls are made of composite materials or metal alloys such as aluminum and titanium alloys. Composite materials are well known for their light weight, corrosion resistance, and freedom of shape design. However, composites do not undergo plastic deformation, which can be a disadvantage for underwater equipment. For this reason, this study focuses on the reliability of the material. The study covers material design, material experiments, and verification by finite element analysis (FEA). The material of interest is an Al-CFRP hybrid composite. Because the interlaminar properties between the aluminum alloy and the carbon fiber reinforced composite are very important, two kinds of Al-CFRP were used: one co-cured and the other post-bonded. Tensile and interlaminar tests were performed to determine the material properties. The benefits of the Al-CFRP sandwich material are twofold: first, it can enhance buckling performance, and second, it provides reliability against sudden failure. The mechanical test results for the designed materials were used as inputs to the FEA. This study verifies the feasibility of Al-CFRP hybrid composites for AUVs. Manufacture of an AUV hull and an ocean diving test will be carried out in future work.
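Where the abstract mentions material design feeding into FEA, a back-of-envelope stiffness estimate shows how a hybrid lay-up is typically screened; the sketch below uses a generic rule of mixtures with textbook property values, not the paper's measured data.

```python
# Back-of-envelope sketch: effective in-plane modulus of an Al-CFRP hybrid
# laminate via the rule of mixtures over layer thicknesses (iso-strain).
# The property values below are generic textbook numbers, not the paper's
# measurements, and the lay-up is purely illustrative.

layers = [
    # (name, thickness_mm, E_GPa)
    ("Al 6061",     1.0,  69.0),
    ("CFRP 0-deg",  0.5, 135.0),
    ("CFRP 90-deg", 0.5,  10.0),
    ("Al 6061",     1.0,  69.0),
]

t_total = sum(t for _, t, _ in layers)
E_eff = sum(t * E for _, t, E in layers) / t_total  # thickness-weighted E
print(f"total thickness: {t_total:.1f} mm, effective E: {E_eff:.1f} GPa")
```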
 
This paper presents the implementation of a carrier-based three-dimensional space vector PWM technique for a three-phase four-leg voltage source converter with a microcontroller. The implementation of 3-D SVPWM requires considerable digital logic and computational power, and it can be a software and hardware burden even for recent digital signal processor (DSP) systems. Therefore, this paper presents a simple carrier-based three-dimensional space vector PWM technique for the three-phase four-leg converter that can be implemented on a microcontroller. The performance of the proposed PWM strategy has been investigated and verified through simulations and experimental results for a three-phase four-leg voltage source converter. The proposed method can also be applied to systems that require synchronization between the source voltage and the load voltage.
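For orientation, the carrier-based equivalent of 3-D SVPWM is often realized by injecting a common offset into the phase references, with the fourth leg switching the offset itself; the sketch below shows a generic min-max version of this idea, an assumption rather than the paper's exact algorithm.

```python
import numpy as np

# Generic carrier-based equivalent of 3-D SVPWM for a four-leg converter:
# each phase reference is shifted by a common min-max offset, and the
# fourth (neutral) leg switches the offset itself, so every leg-to-leg
# voltage reproduces its (possibly unbalanced) phase reference.

def four_leg_duties(v_abc, vdc):
    """Map three phase-voltage references to duty cycles of legs a,b,c,f."""
    v = np.asarray(v_abc, dtype=float)
    v_off = -(v.max() + v.min()) / 2.0    # min-max offset injection
    legs = np.append(v + v_off, v_off)    # leg voltages: a, b, c and f
    return 0.5 + legs / vdc               # duties in [0, 1] when feasible

# usage: one sample of an unbalanced reference set (volts), 400 V dc link
print(four_leg_duties([80.0, -50.0, -10.0], vdc=400.0))
```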
 
This paper presents an energy-efficient contention-based cooperative routing scheme for event detection (ECCRD) in wireless sensor networks. In order to improve detection efficiency and reduce energy consumption, ECCRD selects the appropriate relay node by jointly considering detection efficiency, access probability, and energy consumption. In the scheme, we propose a well-defined cooperative access mechanism that captures the detection capability and is used as the criterion for relay-node selection during each contention round. Simulation results show that ECCRD achieves better detection efficiency and lower energy consumption.
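A minimal sketch of the relay-selection idea may help; the weighted score and slot-based contention below are illustrative assumptions, not ECCRD's published rule.

```python
import random

# Illustrative relay selection: each candidate scores itself on detection
# efficiency, access probability and residual energy, then contends with a
# backoff inversely related to the score, so the best node answers first.
# Weights and the scoring form are assumptions, not ECCRD's exact criterion.

def relay_score(detect_eff, access_prob, residual_energy, w=(0.5, 0.3, 0.2)):
    """Weighted score in [0, 1]; higher means a better relay."""
    return w[0] * detect_eff + w[1] * access_prob + w[2] * residual_energy

def contention_round(candidates, slot_us=32, n_slots=16):
    """Best-scoring node picks the earliest backoff slot and wins."""
    def backoff(node):
        return int((1.0 - relay_score(*node["metrics"])) * (n_slots - 1))
    winner = min(candidates, key=lambda node: (backoff(node), random.random()))
    return winner["id"], backoff(winner) * slot_us

nodes = [
    {"id": "n1", "metrics": (0.9, 0.6, 0.8)},
    {"id": "n2", "metrics": (0.7, 0.9, 0.5)},
]
print(contention_round(nodes))   # -> winning node and its backoff delay
```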
 
Web services technology provides a flexible and cost-effective paradigm for constructing highly dynamic systems through service discovery, composition, and ultra-late binding. However, these new features make Web service-based systems much harder to maintain. Locating the fault points in a system from massive testing results is a challenging task. In this paper, a two-level diagnosis framework for Web services systems is proposed. At the service-unit level, the WSDL interface information is used to construct a decision table. At the service-composition level, the decision information system is built by jointly using process specifications and interface information. Then, a rule-mining algorithm from rough-set reasoning is adopted to reveal the input cases associated with service or system failures. How to utilize such rules to locate faults in a Web services system is also discussed. In addition, two cases are introduced to validate the feasibility and effectiveness of our approach.
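The decision-table idea can be illustrated with a toy example; the feature names and the frequency-based rule extraction below are simplified stand-ins for the paper's rough-set machinery.

```python
from collections import Counter

# Toy decision table: rows pair WSDL-derived input attributes of test cases
# with a pass/fail decision, and a simple support count surfaces the input
# conditions most associated with failures. A stand-in for rough-set rule
# mining; the attribute names are hypothetical.

table = [  # (condition attributes, decision)
    ({"param_type": "date", "boundary": "max"}, "fail"),
    ({"param_type": "date", "boundary": "mid"}, "pass"),
    ({"param_type": "int",  "boundary": "max"}, "pass"),
    ({"param_type": "date", "boundary": "max"}, "fail"),
]

fail_counter = Counter()
for cond, decision in table:
    if decision == "fail":
        fail_counter[tuple(sorted(cond.items()))] += 1

for cond, n in fail_counter.most_common():
    print(f"rule: IF {dict(cond)} THEN fail  (support={n})")
```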
 
A set of basic vectors locally describing the metric properties of an arbitrary 2-dimensional (2D) surface is used to construct fundamental algebraic objects having nilpotent and idempotent properties. It is shown that all possible linear combinations of the objects, when multiplied, behave as a set of hypercomplex (in particular, quaternion) units; thus the interior structure of the 3D space dimensions pointed to by the vector units is exposed. Geometric representations of the elementary surfaces (2D-cells) structuring the dimensions are studied in detail. The established mathematical link between a vector quaternion triad, treated as a frame in 3D space, and elementary 2D-cells prompts the idea of a "world screen" having 1/2 of a space dimension but adequately reflecting the kinematical properties of an ensemble of 3D frames.
 
We derive a quantization formula of Bohr-Sommerfeld type for computing quasinormal frequencies for scalar perturbations in an AdS black hole in the limit of large scalar mass or spatial momentum. We then apply the formula to find poles in retarded Green functions of boundary CFTs on $R^{1,d-1}$ and $R\times S^{d-1}$. We find that when the boundary theory is perturbed by an operator of dimension $\Delta \gg 1$, the relaxation time back to equilibrium is given at zero momentum by ${1 \over \Delta \pi T} \ll {1 \over \pi T}$. Turning on a large spatial momentum can significantly increase it. For a generic scalar operator in a CFT on $R^{1,d-1}$, there exists a sequence of poles near the lightcone whose imaginary part scales with momentum as $p^{-{d-2 \over d+2}}$ in the large momentum limit. For a CFT on a sphere $S^{d-1}$ we show that the theory possesses a large number of long-lived quasiparticles whose imaginary part is exponentially small in momentum.
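For orientation, a schematic Bohr-Sommerfeld condition of the kind invoked above reads as follows; the paper's formula fixes the precise contour, constants, and branch choices.

```latex
% Schematic Bohr--Sommerfeld condition of the kind referred to above;
% the paper's formula fixes the precise contour, constants and branch:
\oint p(x)\,\mathrm{d}x \;=\; 2\pi\!\left(n + \tfrac{1}{2}\right),
\qquad n = 0, 1, 2, \dots
% Solving this with complex turning points at large \Delta (or at large
% spatial momentum) yields the quasinormal frequencies \omega_n.
```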
 
Flower-like nanostructures formed by ZnO nanorods were synthesized and deposited on seeded silicon and glass substrates by a hexamethylenetetramine (HMTA, methenamine, (CH2)6N4)-assisted hydrothermal method at low temperature (90 °C), with the HMTA acting as surfactant and catalyst. The substrates were seeded with ZnO nanoparticles. The structure and morphology of the nanostructures were studied by means of X-ray diffraction (XRD), high-resolution transmission electron microscopy (HRTEM), and scanning electron microscopy (SEM). The influence of the seed nanoparticles on the formation of the flower-like ZnO nanostructures is demonstrated. The influence of the organic oxygenated chains on the crystalline habit during the growth process is also observed.
 
Asteroids are leftover pieces from the era of planet formation that help us understand conditions in the early Solar System. Unlike larger planetary bodies that were subject to global thermal modification during and subsequent to their formation, these small bodies have kept at least some unmodified primordial material from the solar nebula. However, the structural properties of asteroids have been modified considerably since their formation. Thus, we can find among them a great variety of physical configurations and dynamical histories. In fact, with only a few possible exceptions, all asteroids have been modified or completely disrupted many times during the age of the Solar System. This picture is supported by data from space mission encounters with asteroids that show much diversity of shape, bulk density, surface morphology, and other features. Moreover, the gravitational attraction of these bodies is so small that some physical processes occur in a manner far removed from our common experience on Earth. Thus, each visit to a small body has generated as many questions as it has answered. In this review we discuss the current state of research into asteroid disruption processes, focusing on collisional and rotational mechanisms. We find that recent advances in modeling catastrophic disruption by collisions have provided important insights into asteroid internal structures and a deeper understanding of asteroid families. Rotational disruption, by tidal encounters or thermal effects, is responsible for altering many smaller asteroids, and is at the origin of many binary asteroids and oddly shaped bodies. Comment: Accepted for publication to Advanced Science Letters, Special Issue on Computational Astrophysics, edited by Lucio Mayer
 
Space-time schematic of the proposed loophole-free Bell experiment. Two atomic traps are separated by 300 m; each atom emits a photon whose polarization is entangled with the atomic spin. The two photons arrive simultaneously at a non-polarizing beamsplitter, where interference takes place. A coincidence detection in the outputs of the beamsplitter (equivalent to a Bell-state measurement (BSM) on the two photons) signals the projection of the atoms onto an entangled state. The signal of a successful BSM is sent back to both setups, where atomic state detection is started. The detection is performed in a randomly chosen basis and has to be finished before any classical signal can reach the other side (i.e. within less than 1 µs).
Number $N$ of events necessary to violate Bell's inequality by 3 standard deviations using fluorescence detection, as a function of the expected atom-atom visibility $V = V_{\mathrm{at-at}}(2a_{\mathrm{det}} - 1)^2$.
Number $N$ of events necessary to violate Bell's inequality by 3 standard deviations with ionization detection, as a function of the electron/ion detection efficiency $p_d$ (including the ionization probability). The assumed atom-atom visibility excluding the ionization detection efficiency is $V_{\mathrm{at-at}}(2a_{\mathrm{ST}} - 1)^2 = 82.6\%$.
Experimental tests of Bell's inequality make it possible to distinguish quantum mechanics from local hidden variable theories. Such tests are performed by measuring correlations of two entangled particles (e.g. polarization of photons or spins of atoms). In order to constitute conclusive evidence, two conditions have to be satisfied. First, strict separation of the measurement events in the sense of special relativity is required ("locality loophole"). Second, almost all entangled pairs have to be detected (for particles in a maximally entangled state the required detector efficiency is 82.8%), which is hard to achieve experimentally ("detection loophole"). By using the recently demonstrated entanglement between single trapped atoms and single photons, it becomes possible to entangle two atoms at a large distance via entanglement swapping. Combining the high detection efficiency achieved with atoms with the space-like separation of the atomic state detection events, both loopholes can be closed within the same experiment. In this paper we present estimations, based on current experimental achievements, which show that such an experiment is feasible in the near future.
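A rough statistics sketch makes the "number of events" curves concrete, assuming a CHSH test with value $S = 2\sqrt{2}\,V$ at visibility $V$ and a simple binomial error model per correlator (a simplification of the paper's analysis).

```python
import math

# Rough estimate of how many events N are needed to exceed the classical
# CHSH bound S = 2 by k standard deviations at visibility V, with four
# correlators each estimated from N/4 events of outcome +/-1:
#   var(S) = 16 * var_E / N,  so  N >= 16 k^2 var_E / (S - 2)^2.
# The error model is a deliberate simplification of the paper's analysis.

def n_events(V, k=3.0):
    S = 2.0 * math.sqrt(2.0) * V
    if S <= 2.0:
        return math.inf                  # no violation at this visibility
    E = S / 4.0                          # per-setting correlator value
    var_E = 1.0 - E ** 2                 # single-event variance of +/-1
    return math.ceil(16.0 * k ** 2 * var_E / (S - 2.0) ** 2)

for V in (0.80, 0.85, 0.90, 0.95):
    print(f"V = {V:.2f}: N ≈ {n_events(V)}")
```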
 
Several scenarios have been proposed in which primordial perturbations could originate from quantum vacuum fluctuations in a collapse phase (in an Einstein frame) preceding the Big Bang. I briefly review three models which could produce scale-invariant spectra during collapse: (1) curvature perturbations during pressureless collapse, (2) axion field perturbations in a pre-big-bang scenario, and (3) tachyonic fields during multiple-field ekpyrotic collapse. In the separate-universes picture one can derive generalised perturbation equations to describe the evolution of large-scale perturbations through a semi-classical bounce, assuming a large-scale limit in which inhomogeneous perturbations can be described by locally homogeneous patches. For adiabatic perturbations there exists a conserved curvature perturbation on large scales, but isocurvature perturbations can change the curvature perturbation through the non-adiabatic pressure perturbation on large scales. Different models for the origin of large-scale structure lead to different observational predictions, including gravitational waves and non-Gaussianity.
 
We study the gravitational collapse of an inhomogeneous scalar field with quantum gravity corrections associated with singularity avoidance. Numerical simulations indicate that there is critical behaviour at the onset of black hole formation as in the classical theory, but with the difference that black holes form with a mass gap. Comment: 8 pages, 3 figures. Typos corrected -- version to appear in a special issue of Adv. Science Lett. (Ed. M. Bojowald)
 
Binary black holes occupy a special place in our quest to understand the evolution of galaxies along cosmic history. If massive black holes grow at the center of (pre-)galactic structures that experience a sequence of merger episodes, then dual black holes form as an inescapable outcome of galaxy assembly. And if the black holes reach coalescence, they become the loudest sources of gravitational waves in the universe. Nature seems to provide a pathway for the formation of these exotic binaries, and a number of key questions need to be addressed: How do massive black holes pair in a merger? Depending on the properties of the underlying galaxies, do black holes always form a close Keplerian binary? If a binary forms, does hardening proceed down to the domain controlled by gravitational-wave back reaction? What role do gas and/or stars play in braking the black holes, and on which timescale does coalescence occur? Can the black holes accrete in flight and shine during their pathway to coalescence? N-body/hydrodynamical codes have proven to be vital tools for studying their evolution, and progress in this field is expected to grow rapidly in the effort to describe, in full realism, the physics of stars and gas around the black holes, starting from the cosmological large scale of a merger. If detected in the new window provided by the upcoming gravitational wave experiments, binary black holes will provide a deep view into the process of hierarchical clustering which is at the heart of the current paradigm of galaxy formation. They will also be exquisite probes for testing General Relativity as the theory of gravity. The waveforms emitted during the inspiral, coalescence and ring-down phases carry in their shape the sign of a dynamically evolving space-time and the proof of the existence of a horizon. Comment: Invited Review to appear on Advanced Science Letters (ASL), Special Issue on Computational Astrophysics, edited by Lucio Mayer
 
Schematic of the experimental setup featuring the Type-II PDC heralded photon source. The idler photon is addressed to an IF (RG) filter, collected, and sent to APD1, opening a coincidence window in the TAC modules; the signal goes through the NF and IF (RG) filters and is then split by the BS, whose outputs are collected and sent to APD2 and APD3 to close the opened coincidence windows. The output of the two TACs is also sent to an AND logic gate whose output gives the number of double coincidences.
In order, from left to right, the three plots show the reconstructed density matrix (rec), $\rho_{n,m}^{\mathrm{rec}}$, for the coherent state, the expected matrix (true), $\rho_{n,m}^{\mathrm{exp}}$, and the absolute …
The knowledge of the density matrix of a quantum state plays a fundamental role in several fields, ranging from quantum information processing to experiments on the foundations of quantum mechanics and quantum optics. Recently, a method has been suggested and implemented to reconstruct the diagonal elements of the density matrix by exploiting the information achievable with realistic on/off detectors, e.g. silicon avalanche photodiodes, which are only able to discriminate the presence or absence of light. The purpose of this paper is to provide an overview of the theoretical and experimental developments of the on/off method, including its extension to the reconstruction of the whole density matrix. Comment: revised version, 11 pages, 6 figures, to appear as a review paper on Adv. Science Lett
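A minimal numerical sketch of the on/off idea may be useful; the cutoff, least-squares inversion, and coherent-state test data below are illustrative, whereas actual reconstructions use maximum-likelihood (EM) iterations.

```python
import math
import numpy as np

# On/off method sketch: a detector with quantum efficiency eta stays silent
# with probability p_off(eta) = sum_n (1 - eta)^n * rho_nn, so "no-click"
# frequencies collected at several efficiencies form a linear system for
# the diagonal elements rho_nn. Here noiseless data for a coherent state
# (|alpha|^2 = 3) are inverted by least squares with a photon-number cutoff.

N_CUT = 12
etas = np.linspace(0.05, 0.95, 15)

true_rho = np.array([math.exp(-3.0) * 3.0 ** n / math.factorial(n)
                     for n in range(N_CUT)])
true_rho /= true_rho.sum()                      # renormalize after cutoff

B = (1.0 - etas[:, None]) ** np.arange(N_CUT)   # p_off model matrix
p_off = B @ true_rho                            # simulated ideal data

M = np.vstack([B, np.ones(N_CUT)])              # append normalization row
y = np.append(p_off, 1.0)
rho_fit, *_ = np.linalg.lstsq(M, y, rcond=None)

print("fit :", np.round(rho_fit[:6], 3))
print("true:", np.round(true_rho[:6], 3))
```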
 
Chemical materials used for the error-correction in the encoded oligonucleotides. 
Scheme showing the steps of the recognition, error correction and amplification of the encoded DNA signal (t° denotes heating).
Design of an XOR logic gate based on the DNA hairpin structure functionalized with the fluorescent dye (D) and quencher (Q) covalently bound to the 5' and 3' ends, respectively, and using the encoded poly-T and poly-C oligonucleotides ("1" and "0", respectively) as input signals.
Design of a NAND logic gate based on a DNAzyme performing the biocatalytic oxidation of NADH by H2O2, and using the encoded poly-T and poly-C oligonucleotides ("1" and "0", respectively) as input signals.
We offer a theoretical design of new systems that show promise for digital biochemical computing, including realizations of error correction by utilizing redundancy, as well as signal rectification. The approach includes information processing using encoded DNA sequences, DNAzyme biocatalyzed reactions and the use of DNA-functionalized magnetic nanoparticles. Digital XOR and NAND logic gates and copying (fanout) are designed using the same components.
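At the logic level, the proposed gates reduce to familiar truth tables; the toy code below mirrors only the Boolean behavior described above, not the chemistry.

```python
# Logic-level sketch of the gates described above: inputs are the encoded
# oligonucleotides ("1" = poly-T, "0" = poly-C) and the outputs model the
# readout (hairpin-beacon fluorescence for XOR, DNAzyme-catalyzed NADH
# oxidation for NAND). Only the truth tables are represented here.

ENCODE = {"1": "poly-T", "0": "poly-C"}

def xor_gate(a: str, b: str) -> str:
    """Hairpin-beacon XOR: fluorescence only when the inputs differ."""
    return "1" if a != b else "0"

def nand_gate(a: str, b: str) -> str:
    """DNAzyme NAND: catalytic output unless both inputs are '1'."""
    return "0" if (a, b) == ("1", "1") else "1"

for a in "01":
    for b in "01":
        print(ENCODE[a], ENCODE[b],
              "-> XOR:", xor_gate(a, b), " NAND:", nand_gate(a, b))
```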
 
A spin network $(\Upsilon, j, m)$ consists of a graph $\Upsilon$ together with labels $j_i$ for the edges
An artist's impression of a black hole in LQG. The edges of the state in the bulk puncture the horizon $S = \Sigma \cap \Delta$, endowing it with area through the labels $j$ and with intrinsic curvature through the $m$'s.
Black holes in equilibrium and the counting of their entropy within Loop Quantum Gravity are reviewed. In particular, we focus on the conceptual setting of the formalism, briefly summarizing the main results of the classical formalism and its quantization. We then focus on recent results for small, Planck scale, black holes, where new structures have been shown to arise, in particular an effective quantization of the entropy. We discuss recent results that employ in a very effective manner results from number theory, providing a complete solution to the counting of black hole entropy. We end with some comments on other approaches that are motivated by loop quantum gravity.
 
This is a review of current theory of black-hole dynamics, concentrating on the framework in terms of trapping horizons. Summaries are given of the history, the classical theory of black holes, the defining ideas of dynamical black holes, the basic laws, conservation laws for energy and angular momentum, other physical quantities and the limit of local equilibrium. Some new material concerns how processes such as black-hole evaporation and coalescence might be described by a single trapping horizon which manifests temporally as separate horizons.
 
Arguments are presented to show that in the case of entangled systems there are certain difficulties in implementing the usual Bohmian interpretation of the wave function in a straightforward manner. Specific examples are given.
 
Insets: Fano factor $F_v$ as a function of $\bar v$ for the different light states. Bars: reconstructed photoelectron distributions $P^{\mathrm{el}}_m$ for some of the data sets used to calculate the Fano factor. Lines: theoretical curves.
Bars: reconstructed photon-number distributions $P^{\mathrm{ph}}_n$ for the different light states. The displayed reconstructions correspond to the measurements performed with $\eta = \eta_{\max}$, whose $P^{\mathrm{el}}_m$ are plotted as white bars in Fig. 2. Dots: theoretical curves $P_n$ (the connecting lines are a guide for the eye). Insets: values of $P^{\mathrm{el}}_0$ as a function of $\eta$ (dots) and the theoretical behavior expected for each field (lines).
We theoretically demonstrate that detectors endowed with internal gain and operated in regimes in which they do not necessarily behave as photon counters, but still ensure linear input/output responses, can allow a self-consistent characterization of the statistics of the number of detected photons without needing to know their gain. We present experiments performed with a photo-emissive hybrid detector on a number of classical fields endowed with non-trivial statistics and show that the method works for both microscopic and mesoscopic photon numbers. The obtained detected-photon probability distributions agree with those expected for the photon numbers, which are also reconstructed by an independent method.
 
We construct the most general perturbatively long-range integrable spin chain with spins transforming in the fundamental representation of gl(N) and open boundary conditions. In addition to the previously determined bulk moduli we find a new set of parameters determining the reflection phase shift. We also consider finite-size contributions and comment on their determination.
 
Different procedures have been developed to recover entanglement after propagation over a noisy channel. Beyond a certain amount of noise, entanglement is completely lost and the channel is called entanglement breaking. Here we investigate, both theoretically and experimentally, an entanglement concentration protocol for a mixed three-qubit state resulting from a strong linear coupling of a two-qubit maximally entangled polarization state with another qubit in a completely mixed state. Thanks to this concentration procedure, the initial entanglement can be probabilistically recovered. Furthermore, we analyse the case of sequential linear couplings with many depolarized photons, showing that, thanks to the concentration, a full recovery of entanglement is still possible. Comment: 16 pages, 7 figures, to be published on Advanced Science Letters
 
We suggest using photon homodyne detection experimental data to check the Heisenberg and Schr\"{o}dinger-Robertson uncertainty relations, by measuring optical tomograms of the photon quantum states.
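For reference, the two relations to be checked from the measured tomograms are the following (schematic, in units with $\hbar = 1$, written for quadrature variances and covariance):

```latex
% Heisenberg relation and its Schrodinger--Robertson strengthening, which
% retains the position-momentum covariance term:
\sigma_{qq}\,\sigma_{pp} \;\ge\; \frac{1}{4}
\quad\text{(Heisenberg)},
\qquad
\sigma_{qq}\,\sigma_{pp} - \sigma_{qp}^{2} \;\ge\; \frac{1}{4}
\quad\text{(Schr\"odinger--Robertson)}.
```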
 
Flowchart of the synthesis of LNN-12 powder by citrate-gel method 
Thermo-gravimetry and differential scanning calorimetry analysis of the LNN-12 gel 
(a) A single grain rhomb-shaped LNN-12 particle, (b) high-resolution TEM (HR-TEM) images taken over the area shown in (a); the inset showing the corresponding SAED image. 
The MPB composition Na0.88Li0.12NbO3 (commonly known as LNN-12) has been synthesized by a recently developed low-cost citrate-gel route in which Nb2O5 acts as the source of Nb. During synthesis, Nb2O5 transforms into a stable and soluble chelate complex, an alternative to expensive metal alkoxides. The thermal decomposition process and the phase formation of the as-prepared gel were studied using thermo-gravimetry (TG) and X-ray diffractometry (XRD). The gels were calcined in the temperature range 500-800 °C, and a pure perovskite phase was obtained at 700 °C, which is 200 °C below the conventional ceramic route (900 °C). The morphology of the phase-pure powders was characterized using scanning electron microscopy (SEM) and high-resolution transmission electron microscopy (HRTEM). The compacted samples showed high sintered density at < 1200 °C, which has been attributed to the small particle size and homogeneity. The lower sintering temperature eliminates the possibility of alkali-element loss, leading to the exact MPB composition and enhanced electrical properties.
 
According to the inflationary scenario of cosmology, all structure in the Universe can be traced back to primordial fluctuations during an accelerated (inflationary) phase of the very early Universe. A conceptual problem arises due to the fact that the primordial fluctuations are quantum, while the standard scenario of structure formation deals with classical fluctuations. In this essay we present a concise summary of the physics describing the quantum-to-classical transition. We first discuss the observational indistinguishability between classical and quantum correlation functions in the closed system approach (pragmatic view). We then present the open system approach with environment-induced decoherence. We finally discuss the question of the fluctuations' entropy for which, in principle, the concrete mechanism leading to decoherence possesses observational relevance. Comment: 12 pages, Revtex, invited contribution to a special issue of Advanced Science Letters, final version
 
We review recent progress in the description of the formation and evolution of galaxy clusters in a cosmological context using numerical simulations. We focus our presentation on the comparison between simulated and observed X-ray properties, while we also discuss numerical predictions for the properties of the galaxy population in clusters. Many of the salient observed properties of clusters, such as X-ray scaling relations, radial profiles of entropy and density of the intracluster gas, and the radial distribution of galaxies, are reproduced quite well. In particular, the outer regions of clusters, at radii beyond about 10 per cent of the virial radius, are quite regular and exhibit scaling with mass remarkably close to that expected in the simplest case, in which only the action of gravity determines the evolution of the intra-cluster gas. However, simulations generally fail at reproducing the observed cool-core structure of clusters: simulated clusters generally exhibit a significant excess of gas cooling in their central regions, which causes an overestimate of the star formation and incorrect temperature and entropy profiles. The total baryon fraction in clusters is below the mean universal value, by an amount which depends on the cluster-centric distance and the physics included in the simulations, with interesting tensions between observed stellar and gas fractions in clusters and predictions of simulations. Besides their important implications for the cosmological application of clusters, these puzzles also point towards the important role played by additional physical processes, beyond those already included in the simulations. We review the role played by these processes, along with the difficulty of their implementation, and discuss the outlook for future progress in numerical modeling of clusters. Comment: Invited Review to appear on Advanced Science Letters (ASL), Special Issue on Computational Astrophysics, edited by Lucio Mayer
 
It has been proposed that measurement in quantum mechanics results from spontaneous breaking of a symmetry of the measuring apparatus and could be a unitary process that preserves coherence. Viewed in this manner, it is argued, non-destructive measurements should preserve this coherence and be reversible. It is shown that experiments with maximally entangled bipartite states can indeed distinguish between projective and unitary measurements.
 
In the standard cosmological model a mysterious cold dark matter (CDM) component dominates the formation of structures. Numerical studies of the formation of CDM halos have produced several robust results that allow unique tests of the hierarchical clustering paradigm. Universal properties of halos, including their mass profiles and substructure properties, are roughly consistent with observational data from the scales of dwarf galaxies to galaxy clusters. Resolving the fine-grained structure of halos has enabled us to make predictions for ongoing and planned direct and indirect dark matter detection experiments. While simulations of pure CDM halos are now very accurate and in good agreement (recently claimed discrepancies are addressed in detail in this review), we are still unable to make robust, quantitative predictions about galaxy formation and about how the dark matter distribution changes in the process. Whilst discrepancies between observations and simulations have been the subject of much debate in the literature, galaxy formation and evolution needs to be understood in more detail in order to fully test the CDM paradigm. Whatever the true nature of the dark matter particle is, its clustering properties must not be too different from those of a cold, neutralino-like particle to maintain all the successes of the model in matching large-scale structure data and the global properties of halos, which are mostly in good agreement with observations.
 
We describe the definition of background independence and the role that it and the closely related notion of diffeomorphism invariance play in modern string theory. These important concepts are transformed by a new understanding of gauge redundancies and their implementation in non-perturbative quantum field theory and quantum gravity. This new understanding also suggests a new role for the so-called background-independent approaches to directly quantizing the gravitational field. This article is intended for a general audience, and is based on a plenary talk given at the Loops 2007 conference in Morelia, Mexico. Comment: 8 pages, 1 figure, to appear in Adv. Sci. Lett
 
The codification in higher-dimensional Hilbert spaces (whose logical basis states are dubbed qudits, in analogy with two-dimensional qubits) presents various advantages both for quantum information applications and for studies on the foundations of quantum mechanics. The purpose of this review is to introduce qudits, to summarize their application to quantum communication and to research on local realism, and, finally, to describe some recent experiments realizing them. In a little more detail: after a short introduction, we consider the advantages of testing local realism with qudits, discussing the 3- and 4-dimensional cases (for both maximal and non-maximal entanglement) and then the extension to arbitrary dimension. Afterwards, we discuss the theoretical results on using qudits for quantum communication, summarizing the outcomes on improved security in quantum key distribution protocols (again considering separately qutrits, ququarts, and the generalization to arbitrary dimension). Finally, we present the experiments performed up to now for producing quantum optical qudits and their applications; in particular, we mention schemes based on interferometric set-ups, orbital-angular-momentum entanglement, and biphoton polarization. We conclude by summarizing what hyperentanglement is and its applications.
 
The formation of disk galaxies is one of the most outstanding problems in modern astrophysics and cosmology. We review the progress made by numerical simulations carried out on large parallel supercomputers. Recent progress stems from a combination of increased resolution and improved treatment of the astrophysical processes modeled in the simulations, such as the phenomenological description of the interstellar medium and of the process of star formation. High mass and spatial resolution is a necessary condition in order to obtain large disks comparable with observed spiral galaxies while avoiding spurious dissipation of angular momentum. A realistic star formation history, gas-to-stars ratio, and morphology of the stellar and gaseous components are instead controlled by the phenomenological description of the non-gravitational energy budget in the galaxy. We show that simulations of gas collapse within cold dark matter halos including a phenomenological description of supernovae blast-waves make it possible to obtain stellar disks with nearly exponential surface density profiles, like those observed in real disk galaxies, counteracting the tendency of gas collapsing in such halos to form cuspy baryonic profiles. However, the ab-initio formation of a realistic rotationally supported disk galaxy with a pure exponential disk in a fully cosmological simulation is still an open problem. We argue that the suppression of bulge formation is related to the physics of galaxy formation during the merger of the most massive protogalactic lumps at high redshift, where the reionization of the Universe likely plays a key role. A sufficiently high resolution during this early phase of galaxy formation is also crucial to avoid artificial angular momentum loss (Abridged). Comment: 41 pages, 15 figures, Invited Review accepted for publication on Advanced Science Letters. High resolution version can be found at http://www.exp-astro.phys.ethz.ch/mayer/galform.ps.gz
 
Many reported studies relating to models and modeling clearly indicate that science teachers need to be knowledgeable about the role of models and modeling in science and need to be engaged in rich modeling activities, so that they become able to use models in science teaching and learning. In view of adequately preparing pre-service teachers to teach science through models, the authors in this chapter discuss how a cohort of pre-service elementary teachers was introduced to model-based teaching/learning and reasoning. Data for this study were collected from 62 pre-service teachers, who had the same background knowledge and similar computing skills. After four lab meetings about the pedagogical uses of computer models and computer-modeling tools, students were asked to propose a science lesson with computer models to be taught in a real classroom. Both qualitative and quantitative analyses were conducted to assess the quality of students’ lessons in terms of the structure of the computer models proposed and the complexity of the entire lesson. Keywords: Teacher education, Science teaching, Science learning, Computer modelling, Instructional design
 
The cosmic reionization of hydrogen was the last major phase transition in the evolution of the universe, which drastically changed the ionization and thermal conditions in the cosmic gas. To the best of our knowledge today, this process was driven by the ultra-violet radiation from young, star-forming galaxies and from first quasars. We review the current observational constraints on cosmic reionization, as well as the dominant physical effects that control the ionization of intergalactic gas. We then focus on numerical modeling of this process with computer simulations. Over the past decade, significant progress has been made in solving the radiative transfer of ionizing photons from many sources through the highly inhomogeneous distribution of cosmic gas in the expanding universe. With modern simulations, we have finally converged on a general picture for the reionization process, but many unsolved problems still remain in this young and exciting field of numerical cosmology.
 
The black hole information paradox is one of the most important issues in theoretical physics. We review some recent progress using string theory in understanding the nature of black hole microstates. For all cases where these microstates have been constructed, one finds that they are horizon sized `fuzzballs'. Most computations are for extremal states, but recently one has been able to study a special family of non-extremal microstates, and see `information carrying radiation' emerge from these gravity solutions. We discuss how the fuzzball picture can resolve the information paradox. We use the nature of fuzzball states to make some conjectures on the dynamical aspects of black holes, observing that the large phase space of fuzzball solutions can make the black hole more `quantum' than assumed in traditional treatments.
 
Comparison of the supernovae data [32] with the model of an accelerated expanding universe.
From a theory of abstract quantum information, the theory of general relativity can be deduced by means of a few physically well-founded arguments. “Abstract” quantum information means that primarily no special meaning is connected with it; therefore it is given a new name: Protyposis. From the Protyposis, using group-theoretical methods, follows a cosmological model which has an isotropic and homogeneous metric and solves the so-called cosmological problems. The Protyposis is subject to an equation of state for energy density and pressure that fulfils all the energy conditions and also gives an explanation for the “dark energy.” If it is demanded that the relations between the spacetime structure and its material content should remain valid also for deviations from this ideal cosmology, then general relativity results from these quantum-theoretical considerations as a description of local inhomogeneities of spacetime.
 
An unsqueezed (top) and a squeezed (bottom) state of a harmonic oscillator, illustrated through the spread $G_{0,2}$ …
The current understanding of structure formation in the early universe is mainly built on a magnification of quantum fluctuations in an initial vacuum state during an early phase of accelerated universe expansion. One usually describes this process by solving equations for a quantum state of matter on a given expanding background space-time, followed by decoherence arguments for the emergence of classical inhomogeneities from the quantum fluctuations. Here, we formulate the coupling of quantum matter fields to a dynamical gravitational background in an effective framework which allows the inclusion of back-reaction effects. It is shown how quantum fluctuations couple to classical inhomogeneities and can thus manage to generate cosmic structure in an evolving background. Several specific effects follow from a qualitative analysis of the back-reaction, including a likely reduction of the overall amplitude of power in the cosmic microwave background, the occurrence of small non-Gaussianities, and a possible suppression of power for odd modes on large scales without parity violation.
 
In this article we review the theory of cosmological inflation with a particular focus on the beautiful connection it provides between the physics of the very small and observations of the very large. We explain how quantum mechanical fluctuations during the inflationary era become macroscopic density fluctuations which leave distinct imprints in the cosmic microwave background (CMB). We describe the physics of anisotropies in the CMB temperature and polarization and discuss how CMB observations can be used to probe the primordial universe. Comment: 18 pages, 12 figures. Invited review to appear in Advanced Science Letters Special Issue on Quantum Gravity, Cosmology and Black Holes. v2: published version, minor clarifications and references added
 
We study the tensor modes of linear metric perturbations within an effective framework of loop quantum cosmology. After a review of inverse-volume and holonomy corrections in the background equations of motion, we solve the linearized tensor modes equations and extract their spectrum. Ignoring holonomy corrections, the tensor spectrum is blue tilted in the near-Planckian superinflationary regime and may be observationally disfavoured. However, in this case background dynamics is highly nonperturbative, hence the use of standard perturbative techniques may not be very reliable. On the other hand, in the quasi-classical regime the tensor index receives a small negative quantum correction, slightly enhancing the standard red tilt in slow-roll inflation. We discuss possible interpretations of this correction, which depends on the choice of semiclassical state.
 
In this paper we present the problem of the quantum-to-classical transition of quantum fluctuations during inflation, and in particular the question of the evolution of entanglement. After a general introduction, three very recent works are discussed in some detail, drawing some conclusions about the present status of this research.
 
The cosmological constant is the most economical candidate for dark energy. No other approach really alleviates the difficulties faced by the cosmological constant because, in all other attempts to model the dark energy, one still has to explain why the bulk cosmological constant (treated as a low-energy parameter in the action principle) is zero. I argue that until the theory is made invariant under the shifting of the Lagrangian by a constant, one cannot obtain a satisfactory solution to the cosmological constant problem. This is impossible in any generally covariant theory with the conventional low-energy matter action, if the metric is varied in the action to obtain the field equations. I review an alternative perspective in which gravity arises as an emergent, long wavelength phenomenon and can be described in terms of an effective theory using an action associated with null vectors in the spacetime. This action is explicitly invariant under the shift of the energy momentum tensor $T_{ab}\to T_{ab}+\Lambda g_{ab}$ and any bulk cosmological constant can be gauged away. Such an approach seems to be necessary for addressing the cosmological constant problem and can easily explain why its bulk value is zero. I describe some possibilities for obtaining its observed value from quantum gravitational fluctuations. Comment: Invited article to appear in Advanced Science Letters Special Issue on Quantum Gravity, Cosmology and Black holes (editor: M. Bojowald)
 
We study the issue of the recovery of diffeomorphism invariance in the recently introduced loop quantum gravity treatment of the exterior Schwarzschild space-time. Although the loop quantization agrees with the quantization in terms of metric variables in identifying the physical Hilbert space, we show that diffeomorphism invariance in space-time is recovered with certain limitations due to the use of holonomic variables in the loop treatment of the model. This resembles behaviors that are expected in the full theory. Comment: 5 pages, no figures, invited paper for a special issue of Advanced Science Letters
 
A three-valent vertex is evolved in discrete steps by erecting a pole and connecting the other vertices to the end of the pole. The different types of line indicate the different time steps. 
We review and discuss the role of diffeomorphism symmetry in quantum gravity models. Such models often involve a discretization of the space-time manifold as a regularization method. Generically this leads to a breaking of the symmetries down to approximate ones; however, there are instances in which the symmetries are exactly preserved. Both kinds of symmetries have to be taken into account in covariant and canonical theories in order to ensure the correct continuum limit. We sketch how to identify exact and approximate symmetries in the action and how to define a corresponding canonical theory in which such symmetries are reflected as exact and approximate constraints.
 
The function $f(x)$ used to analyse the resonance conditions in the interaction $L^+ \leftrightarrow L^+ + I^+$. The value of $\beta$ used in this plot is $\sim 0.0087$.
The function $g(y)$ used to analyse the resonance conditions in the interaction $R^+ \leftrightarrow R^+ + I^+$. The dashed curve gives $|g|$ where $g(y) < 0$ and the solid curve the positive values of $g$. The value of $\beta$ used in this plot is $\sim 14$.
Three-wave interactions of plasma waves propagating parallel to the mean magnetic field at frequencies below the electron cyclotron frequency are considered. We consider Alfv\'en--ion-cyclotron waves, fast-magnetosonic--whistler waves, and ion-sound waves. Especially the weakly turbulent low-beta plasmas like the solar corona are studied, using the cold-plasma dispersion relation for the transverse waves and the fluid-description of the warm plasma for the longitudinal waves. We analyse the resonance conditions for the wave frequencies $\omega$ and wavenumbers $k$, and the interaction rates of the waves for all possible combinations of the three wave modes, and list those reactions that are not forbidden.
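The structure of such a resonance analysis can be sketched numerically: impose wavenumber matching and scan for frequency matching. The dispersion relations below are simple stand-ins (a dispersive Alfvén-like branch and a linear sound branch), not the full cold-plasma relations analysed via $f(x)$ and $g(y)$ in the paper.

```python
import numpy as np

# Toy search for three-wave resonant triads with parallel propagation:
# impose wavenumber matching k1 = k2 + k3 and scan for frequency matching
#   omega(k1) = omega(k2) + omega(k3).
# The branches are illustrative stand-ins, not the paper's relations.

VA, CS = 1.0, 0.1                  # Alfven and sound speeds (low beta)

def w_alfven(k):                   # transverse, dispersive branch stand-in
    return VA * k / np.sqrt(1.0 + k ** 2)

def w_sound(k):                    # longitudinal ion-sound stand-in
    return CS * k

def mismatch(k1, k2):              # frequency mismatch with k3 = k1 - k2
    return w_alfven(k1) - w_alfven(k2) - w_sound(k1 - k2)

k1 = 2.0                           # fixed pump wavenumber
k2 = np.linspace(1e-3, k1 - 1e-3, 100_000)
m = mismatch(k1, k2)
roots = k2[np.where(np.diff(np.sign(m)) != 0)[0]]   # sign changes = triads
print("resonant k2 values for k1 = 2:", np.round(roots, 4))
```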
 
We review the problem of the formation of terrestrial planets, with particular emphasis on the interaction of dynamical and geochemical models. The lifetime of gas around stars in the process of formation is limited to a few million years based on astronomical observations, while isotopic dating of meteorites and the Earth-Moon system suggests that perhaps 50-100 million years were required for the assembly of the Earth. Therefore, much of the growth of the terrestrial planets in our own system is presumed to have taken place under largely gas-free conditions, and the physics of terrestrial planet formation is dominated by gravitational interactions and collisions. The earliest phases of terrestrial-planet formation involve the growth of km-sized or larger planetesimals from dust grains, followed by the accumulation of these planetesimals into ~100 lunar- to Mars-mass bodies that are initially gravitationally isolated from one another in a swarm of smaller planetesimals, but eventually grow to the point of significantly perturbing one another. The mutual perturbations between the embryos, combined with gravitational stirring by Jupiter, lead to orbital crossings and collisions that drive the growth to Earth-sized planets on a timescale of 10-100 million years. Numerical treatment of this process has focussed on the use of symplectic integrators, which can rapidly integrate the thousands of gravitationally interacting bodies necessary to accurately model planetary growth. While the general nature of the terrestrial planets--their sizes and orbital parameters--seems to be broadly reproduced by the models, there are still some outstanding dynamical issues. One of these is the presence of an embryo-sized body, Mars, in our system in place of the more massive objects that simulations tend to yield. [Abridged] Comment: Invited Review to appear on Advanced Science Letters (ASL), Special Issue on Computational Astrophysics, edited by Lucio Mayer. 42 pages, 7 figures
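To illustrate why symplectic integrators are the tool of choice here, a minimal kick-drift-kick leapfrog for a few gravitating bodies is sketched below (units with G = 1; production codes use mixed-variable symplectic schemes of Wisdom-Holman type plus close-encounter handling).

```python
import numpy as np

# Minimal kick-drift-kick leapfrog (a second-order symplectic integrator)
# for softened, self-gravitating point masses in units with G = 1. The
# symplectic structure keeps secular energy errors bounded over many orbits,
# which is why schemes of this family dominate planet-formation codes.

def accel(pos, mass, eps=1e-3):
    d = pos[None, :, :] - pos[:, None, :]            # pairwise separations
    r3 = (np.sum(d * d, axis=-1) + eps ** 2) ** 1.5  # softened |r|^3
    np.fill_diagonal(r3, np.inf)                     # remove self-force
    return np.sum(mass[None, :, None] * d / r3[:, :, None], axis=1)

def leapfrog(pos, vel, mass, dt, n_steps):
    a = accel(pos, mass)
    for _ in range(n_steps):
        vel += 0.5 * dt * a                          # kick
        pos += dt * vel                              # drift
        a = accel(pos, mass)
        vel += 0.5 * dt * a                          # kick
    return pos, vel

# toy usage: star + planet on a circular orbit, integrated for ~16 orbits
m = np.array([1.0, 1e-6])
x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
v = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
x, v = leapfrog(x, v, m, dt=0.01, n_steps=10000)
print("planet distance after ~16 orbits:", np.linalg.norm(x[1] - x[0]))
```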
 
We analyze a system consisting of two spatially separated quantum objects, here modeled as two pseudo-spins, coupled to a mesoscopic environment modeled as a bosonic bath. We show that by engineering the dispersion of the spin-boson coupling, the environment dimensionality, or both, one can in principle tailor the dependence of the induced entanglement on the spatial separation between the two spins. In particular, we consider one-, two- and three-dimensional reservoirs and find that while for a two- or three-dimensional reservoir the induced entanglement shows an inverse power-law dependence on the spin separation, it becomes separation-independent for a one-dimensional reservoir. Comment: 3 pages, no figures
 
This article provides a brief (non-exhaustive) overview of some possibilities for recreating fundamental effects which are relevant for black holes (and other gravitational scenarios) in the laboratory. Via suitable condensed matter analogues and other laboratory systems, it might be possible to model the Penrose process (superradiant scattering), the Unruh effect, Hawking radiation, the Eardley instability, black-hole lasers, cosmological particle creation, the Gibbons-Hawking effect, and the Schwinger mechanism. Apart from an experimental verification of these yet unobserved phenomena, the study of these laboratory systems might shed light onto the underlying ideas and problems and should therefore be interesting from a (quantum) gravity point of view as well. Comment: 15 pages, 4 figures
 
In this paper, a novel fuzzy simplex sliding-mode controller is proposed for controlling a multivariable nonlinear system. The fuzzy logic control (FLC) algorithm and simplex sliding-mode control (SSMC) theory are integrated to form the fuzzy simplex sliding-mode control (FSSMC) scheme, which improves the system's state response and reduces the chattering of the system states. We first introduce the principle of the simplex method and then develop fuzzy controls based on it. Finally, a numerical example is presented to illustrate the advantages of the proposed controller; the simulation results demonstrate that the fuzzy simplex sliding-mode control scheme is a good solution to the chattering problem in simplex sliding-mode control.
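A minimal sketch of the simplex idea with fuzzy smoothing may help; the vertex set, memberships, and gains below are illustrative assumptions, not the paper's FSSMC design.

```python
import numpy as np

# Sketch of simplex sliding-mode control with fuzzy smoothing: in a 2-D
# sliding space, three fixed control vectors (simplex vertices) are blended
# with fuzzy weights based on how strongly each vertex opposes the sliding
# vector s, and a tanh boundary layer replaces the discontinuous switch
# near s = 0 to reduce chattering. All numbers here are illustrative.

VERTICES = 5.0 * np.array([[1.0, 0.0],
                           [-0.5,  np.sqrt(3.0) / 2.0],
                           [-0.5, -np.sqrt(3.0) / 2.0]])

def fssmc(s):
    """Fuzzy blend of simplex vertices steering s toward the origin."""
    s = np.asarray(s, dtype=float)
    ns = np.linalg.norm(s)
    if ns < 1e-9:
        return np.zeros(2)
    align = VERTICES @ s / (np.linalg.norm(VERTICES, axis=1) * ns)
    mu = np.maximum(-align, 0.0) ** 2      # weight vertices opposing s
    mu /= mu.sum()
    return np.tanh(ns) * (mu @ VERTICES)   # boundary layer near s = 0

# crude closed loop with s' = u: the sliding vector is driven to zero
s = np.array([2.0, -1.0])
for _ in range(300):
    s = s + 0.05 * fssmc(s)
print("|s| after 300 steps:", round(float(np.linalg.norm(s)), 4))
```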
 
I review the definition of n-point functions in loop quantum gravity, discussing what has been done and what the main open issues are. Particular attention is dedicated to gauge aspects and renormalization.
 
We present a panoramic view of various attempts to "solve" the problems of quantum measurement and macro-objectivation, i.e., of the transition from a probabilistic quantum-mechanical microscopic world to a deterministic classical macroscopic world. After a general introduction we describe in some detail both schemes that require some change or extension of the formalism (such as hidden-variable models and spontaneous collapse models) and those that do not introduce a real collapse of the wave function (such as many-worlds models, decoherence and quantum-histories approaches, informational and modal interpretations, relational quantum mechanics, etc.). A large bibliography is provided for readers who want to examine these studies closely.
 
A simple non-interferometric "quantum interrogation" method is proposed which uses evanescent-wave sensing with frustrated total internal reflection at a surface. The method has the advantage over the original interferometric Elitzur-Vaidman scheme of being able to detect objects that are neither black nor non-diffracting, as well as objects that cannot be introduced into an arm of an interferometer for whatever reason (e.g. size or sensitivity). The method is intrinsically of high efficiency.
 
Top-cited authors
Bala Murali Krishna Mariserla
  • Indian Institute of Technology Jodhpur
Sathyavathi Ravulapalli
  • K L deemed to be University
Venugopal Rao Soma
  • University of Hyderabad
D. Narayana Rao
  • University of Hyderabad
Davoud Dastan
  • Cornell University