Article

Fundamentals of physics. Translated from the English by Anna Schleitzer. German edition edited by Stephan W. Koch. 2nd ed.


... The potential energy of interaction between the spheres, according to [19], is ...
... Then, based on the principle of linear superposition, the potential at point M will be the sum of the potentials of all charges at M [19]. Therefore ...
... Eqs. (19) are valid because the forming spheres of the torus N_T are situated symmetrically relative to the centre of the sphere P_S. Moving on, we will structure the nuclei of deuterium and tritium. ...
Article
Full-text available
In this paper we model in a new way the nuclei of deuterium and tritium. We consider the nucleons as toroids that rotate at a constant angular velocity around a line perpendicular to their rotation plane and passing through the center of mass of the nuclei. Based on exact analytical formulas obtained by us for the electrostatic interaction between two spheres with arbitrary radii and charges, we obtain that the known binding energy of the deuteron and triton has an electromagnetic nature. We also obtain through these formulas the force of interaction inside these nuclei. Besides that, within the framework of the classical model we use, we calculate the volumes and mass densities of the nucleons. Throughout all that we use the experimentally obtained results for the radii and masses of the nucleons and nuclei under study. Through our toroid model we confirm the main experimental results obtained for the deuteron and triton not only for the binding energy but also for the magnetic moments, spins and stability.
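For non-overlapping, spherically symmetric charge distributions, the sphere-sphere interaction the abstract describes reduces, by the shell theorem, to the familiar point-charge result. A minimal sketch of that special case (not the authors' exact analytical formulas; the 1 fm example is illustrative):

```python
# Hedged sketch: electrostatic interaction of two uniformly charged,
# non-overlapping spheres. By the shell theorem each sphere acts as a
# point charge at its centre, so U = k*q1*q2/d. This is the textbook
# special case, not the paper's more general formulas.
K = 8.9875517923e9  # Coulomb constant, N m^2 / C^2

def interaction_energy(q1, q2, d):
    """Potential energy (J) of two point-like charges separated by d (m)."""
    return K * q1 * q2 / d

def interaction_force(q1, q2, d):
    """Coulomb force magnitude (N); positive means repulsive."""
    return K * q1 * q2 / d ** 2

# Illustrative example: two elementary charges 1 fm apart (nuclear scale)
e = 1.602176634e-19
U = interaction_energy(e, e, 1e-15)  # about 1.44 MeV
```

At nuclear separations this classical energy is of MeV order, which is why an electrostatic treatment of binding energies is even conceivable.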
... As shown in Fig. 1, for a spring whose initial position is at point O ′ and stretched to point O, the elastic force F is explained by the following Hooke's law formula [55][56][57]: ...
... For Newton's second law of motion, a common expression is that the acceleration of an object is proportional to the force and inversely proportional to the mass of the object, and the direction of acceleration is the same as the direction of the force. If an elastic force F is applied to an object with mass M, its acceleration a will be the following [57]: ...
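The two relations quoted above can be sketched directly; the stiffness, displacement and mass values below are illustrative, not taken from the cited work:

```python
# Hedged sketch of the two textbook relations the EDOA builds on:
# Hooke's law F = -k*x and Newton's second law a = F/m.
def elastic_force(k, x):
    """Restoring force (N) of a spring with stiffness k stretched by x."""
    return -k * x

def acceleration(force, mass):
    """Newton's second law: a = F / m, directed along the force."""
    return force / mass

F = elastic_force(k=200.0, x=0.05)  # negative: opposes the stretch
a = acceleration(F, mass=2.0)       # m/s^2, same direction as F
```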
Article
Full-text available
In recent years, a large number of meta-heuristic algorithms have been proposed to efficiently solve various complex optimization problems in reality. Most of these algorithms are based on the intelligent behavior of swarms in the natural world. In this article, we take Hooke's law of elasticity and Newton's second law of motion as the information interaction tools and innovatively propose a new meta-heuristic algorithm that is based on the laws of physics, called the elastic deformation optimization algorithm (EDOA). A new parameter adaptive adjustment mechanism is designed in the EDOA to better explore and exploit the search space. At the same time, we compare the proposed EDOA with six well-known search algorithms and conduct simulation experiments on 23 classical benchmark functions and IEEE CEC 2020 benchmark functions respectively. We have further analyzed the experimental results, used two nonparametric statistical test methods, and drawn iterative curves of the algorithms to prove the powerful comprehensive performance of the proposed EDOA.
... The fixture was then tightly closed and placed into a temperature chamber (625G, Thermo Fisher Scientific Inc., Waltham, MA, USA) and C p and R p values were measured with different frequencies (ranging from 1 to 30 MHz) and temperatures (ranging from 20 to 80 °C with 10 °C interval). The dielectric constant (ε ′ ) and loss factor (ε ″ ) of the corn flour were calculated using the following equations (Agilent Technologies, 2000; Halliday, Resnick, & Walker, 2001; Von Hippel, 1954). ...
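A hedged sketch of the parallel-plate relations typically behind such LCR measurements (e.g. Agilent Technologies, 2000): with empty-cell capacitance C0 = ε0·A/d, the dielectric constant is ε′ = Cp/C0 and the loss factor ε″ = 1/(2πf·Rp·C0). The electrode area, gap and sample values below are assumptions for illustration, not the paper's fixture:

```python
import math

# Hedged sketch: parallel-plate extraction of dielectric properties from
# measured parallel capacitance Cp (F) and resistance Rp (ohm).
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def dielectric_properties(Cp, Rp, f, area, gap):
    """Return (eps', eps'') for a parallel-plate cell of given area/gap."""
    C0 = EPS0 * area / gap                         # empty-cell capacitance
    eps_real = Cp / C0                             # dielectric constant
    eps_imag = 1.0 / (2 * math.pi * f * Rp * C0)   # loss factor
    return eps_real, eps_imag

# Illustrative numbers: 27.12 MHz, 20 mm diameter electrodes, 2 mm gap
area = math.pi * 0.010 ** 2
er, ei = dielectric_properties(Cp=50e-12, Rp=2e3, f=27.12e6,
                               area=area, gap=0.002)
```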
Article
Non-uniform heating is a major challenge for using radio frequency (RF) heat treatment in pasteurization of low-moisture food products. The objective of this study was to evaluate the effect of different electrode gaps, moisture content (MC), bulk density and surrounding materials on RF heating uniformity and rate in corn flour. Additionally, the dielectric and thermal properties of corn flour were determined as affected by MC, temperature (°C), and frequency (MHz). Changes in MC, water activity (aw) and color in the sample after RF heating were measured to evaluate treatment effect on food quality. A precision LCR meter and a liquid test fixture were used to study the dielectric properties (DP) of the sample at RF frequencies ranging from 1 to 30 MHz. The RF heating uniformity and temperature profiles of corn flour exposed to RF heating were obtained with an infrared camera and a data logger using a fiber optic sensor. The DP values increased with increasing MC and temperature, but decreased with increasing frequency. The heating rate increased from 3.5 to 6.8 °C min⁻¹ with increasing MC (from 10.4 to 16.7%), but decreased from 12.7 to 5.2 °C min⁻¹ with increasing electrode gap (from 11 to 15 cm). Corner and edge heating were observed at all layers of the samples for all the distances, and the middle layer at an electrode gap of 15 cm was the hottest and most uniform. The glass petri dish provided better uniformity than the polyester plastic petri dish. Covering with foam led to more uniform RF heating of the corn flour and better moisture and aw distribution. This study provided useful information to develop an effective RF process as an alternative to conventional thermal treatments for pasteurization of low-moisture products.
... The problem called 'vitreous detachment' starts when the vitreous separates from the inner eyeball and the damage goes on propagating, leaving water-like fluid where the jelly-like fluid used to be; this is undesirable. According to Halliday et al. (2007), light enters the eye through a transparent membrane called the cornea (Figure 4). Aqueous humor is the name of the liquid behind it. ...
... The cornea-lens system focuses light onto the back of the eye. As Halliday et al. (2007) highlight, most of the refraction of light takes place at the boundary of the cornea-humor-lens region with the environment; the interior has a nearly constant index of refraction. Rods and cones are sensitive receptors at the back of the eye, called the retina. ...
Article
Full-text available
This paper proposes the simultaneous application of molecular dynamics, systems biology, biomechanics, and computational intelligence for a single aim: understanding numerically the vitreous detachment, an ophthalmologic disorder (pathophysiology). The paper is theoretical and in some sense seminal, leaving the detailed work to further papers. It is somewhat provocative in applying molecular dynamics to a macro-system (the vitreous), and simultaneously audacious in proposing to associate vitreous detachment with genotype consequences (something beyond the aging effect with which it is generally associated). Numerical simulations are not presented herein, apart from references to incipient results. The objective is to make known the efforts of the author, for reasons such as documentation or divulgation. In the biological community, the most recent challenge is how to properly link genotype to phenotype (how to correctly make predictions starting from either of them). This has created two widely accepted approaches in biological modeling: bottom-up and top-down. In the former, one starts from gene-based (or similar-level) information and strives to accurately predict manifestations at a higher hierarchical level, such as disorders; in the latter, one attempts to understand the same, but from high-level pillars, most of the time physiology-based approaches. This kind of research could have great impact in medicine and the life sciences, such as in the modeling of pharmacokinetic/pharmacodynamic systems. The author is so far not aware of any parallel work, except the ones cited, which use somewhat different approaches and guiding principles. To simulate the virtual system, which includes mathematical models and simulations, Java™ is proposed as the programming language, mainly for its features.
To cope with the complexity of correlating the genotype to the phenotype (the undesirable pathophysiology), we propose state-of-the-art computational intelligence. Finally, this work was born from the efforts of a second scientist; if it succeeds, it will be made available as a discrete-continuous simulator, which some argue is required in certain situations. This work is relevant for connecting the most recent endeavors of applied genetics to biomechanics, something demanded in the biomedical sciences.
... Within these models, qualitative as well as quantitative physics knowledge is represented by means of if-then rules. Both models were developed on the basis of data taken from a study conducted by Chi, Bassok, Lewis, Reimann and Glaser (1989). This study included assessments of students' qualitative physics knowledge before and after they were given instruction in classical mechanics on the basis of a standard textbook (Halliday & Resnick, 1981). ...
... The paper concludes with a discussion and potential pedagogical implications of our findings. 2. The Design of the Chi et al. (1989) Study. In order to understand how students acquire physics knowledge, Chi et al. (1989) investigated nine subjects as they studied a common college physics textbook (Halliday & Resnick, 1981). All of them had taken high school physics courses. ...
Article
Several earlier investigations found that teaching standard textbook physics causes only moderate change in qualitative understanding. Many investigations have tried to explain why teaching textbook physics results in so little learning of qualitative physics. In contrast, we examined cases where learning did occur and tried to understand them, hoping that this might help us to understand how to support such learning. We developed computerized simulation models of both qualitative, conceptual problem solving and quantitative problem solving and used them to assess changes in students' qualitative knowledge as they learned textbook physics. In many cases, qualitative knowledge has been acquired on the basis of information explicitly presented in the textbook. We also found cases, however, where learning of qualitative physics took place on the basis of information only implicitly addressed in the instruction. Even more important, in various cases, this newly acquired qualitative knowledge led to a less frequent use of incorrect qualitative preknowledge. This suggests that successful students not only learn what has been explicitly presented in the instruction but also learn by deriving and constructing information left implicit in the instruction, relating this information to their preknowledge and possibly refining and modifying their preknowledge in those cases where conflicts became salient.
... The existence of two clocks A and B such that each of them runs more slowly than the other is the root of a bitter controversy (the H. Dingle case, revisited here in Chapter 52), in my opinion not sufficiently attended to by the scientific community [102] [212, pp. 153-168]. Consider now two sets of identical clocks, the set of A-clocks placed in the reference frame RF_o and the set of B-clocks placed in RF_v (Figure 8.1). ...
Book
Full-text available
This book analyzes the possibility that the relativistic deformations of space and time are only apparent, not real. It also introduces the concept of preinertia, a universal attribute of all physical objects that makes it impossible to detect absolute motion. Preinertia and discrete space and time are then used to propose an alternative to the theory of special relativity compatible with all its experimental support.
... This means that the physical constraints, i.e., Newton's laws, should not be violated. Newton's laws indicate a maximum feasible speed for each road based on its curvature [22]: ...
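The curvature bound alluded to here is usually the friction-limited cornering condition: static friction must supply the centripetal force, m·v²/R ≤ μ·m·g, giving v_max = √(μ·g·R). A minimal sketch, with μ = 0.7 an assumed dry-asphalt value rather than a parameter from the cited work:

```python
import math

# Hedged sketch: maximum feasible speed on a flat curve of radius R,
# limited by static friction (v_max = sqrt(mu * g * R)).
G = 9.81  # gravitational acceleration, m/s^2

def max_curve_speed(radius_m, mu=0.7):
    """Maximum feasible speed (m/s) on a flat curve; mu is assumed."""
    return math.sqrt(mu * G * radius_m)

v = max_curve_speed(50.0)  # roughly 18.5 m/s for a 50 m radius curve
```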
Preprint
Vehicle trajectory prediction is nowadays a fundamental pillar of self-driving cars. Both the industry and research communities have acknowledged the need for such a pillar by running public benchmarks. While state-of-the-art methods are impressive, i.e., they have no off-road predictions, their generalization to cities outside of the benchmark is unknown. In this work, we show that those methods do not generalize to new scenes. We present a novel method that automatically generates realistic scenes causing state-of-the-art models to go off-road. We frame the problem through the lens of adversarial scene generation. We promote a simple yet effective generative model based on atomic scene generation functions along with physical constraints. Our experiments show that more than 60% of the existing scenes from the current benchmarks can be modified in a way that makes prediction methods fail (predicting off-road). We further show that (i) the generated scenes are realistic since they do exist in the real world, and (ii) they can be used to improve the robustness of existing models by 30-40%. Code is available at https://s-attack.github.io/.
... A person can be said to know what they are talking about if they can measure what they are talking about, both quantitatively and qualitatively. In the field of physics, measurement is important because physics is a basic science grounded in experimental observations and quantitative measurements (Halliday, 1997). One of the measurements needed in physics is the measurement of distance. ...
Article
Full-text available
ABSTRACT A distance measuring instrument is essentially a device for determining the distance to, or the length of, an object. Wave modulation is the combination of two wave signals into one. This study aimed to generate modulated AFG 1 and AFG 2 waves, to build a receiver for the sound waves, and to determine the relationship between distance and voltage using the modulated waves. The research was carried out at the Physics Development Laboratory of IKIP PGRI Semarang from January to June 2012 by building a receiver and transmitter circuit (modulating AFG 1 and 2). Data analysis and interpretation were performed with regression analysis and observational error analysis. The ultrasonic sensor was tested with AFG 1 at 40 kHz and AFG 2 at 165 Hz, with a gain of 50. Reflection tests yielded the highest linearity at a distance of 140 cm, with Y = -0.0197X + 3.97143. The linearity of the instrument was 99.92%, with a standard error of estimate of 2.49%. In conclusion, this distance measuring instrument can measure distance by exploiting ultrasonic wave modulation, comparing voltage to distance, with linear results limited to a distance of 140 cm. Keywords: distance measuring instrument, wave modulation, ultrasonic sensor, voltage and distance.
... Both present material that Stephan and Massey label as primary. The text chosen is Fundamentals of Physics by Halliday et al. (1997). Now in its 5th edition, the work is available in several different formats, the largest, the so-called 'extended' edition running to 45 chapters. ...
Chapter
Full-text available
This chapter presents ten general principles for improving the teaching of demography. Support for these principles is found in (a) the ideas of the semantic or model-based school of the philosophy of science, and (b) the design of courses and textbooks in the physical and biological sciences. Demography courses and texts based on these principles would present demography as a complete science, with abundant theoretical models as well as techniques, data and descriptive findings.
... DEMF) [7][8][9]. Furthermore, it has been reported that acute exposure to DEMF affects nociception and analgesia [10]. However, it is worth noting that an external dynamic magnetic field influences neuronal electrical activity independently of the neuronal intrinsic magnetic field. ...
Article
The interference between external magnetic fields and neurophysiology is not new, however, the role of the neuronal magnetic field remains unclear. This study aimed at investigating a possible role of the neuronal magnetic field in nociception. Highly and poorly magnetic reduced graphene oxide (rGO) were injected intrathecally in rats. Nociceptive responsiveness was greater in rats that received highly magnetic-rGO in von Frey electronic or intraplantar capsaicin tests. Furthermore, in vitro experiments demonstrated that the number of KCl-responsive DRG-neurons was greater when treated with highly magnetic-rGO when compared with non-magnetic-rGO. Our data also suggested that the mechanism underlying the increased nociceptive responsiveness involves increased Ca²⁺v activity. Complementary experiments excluded the cytotoxic and inflammatory effects of the magnetic-rGO in neuronal responsiveness. These data suggest that the disturbance of the neuronal magnetic field in spinal cord increases nociceptive responsiveness, suggesting an importance of the magnetic component of the electromagnetic field in neuronal transmission.
... GSA is inspired by the movement of agents under the influence of gravitational forces. Due to these forces, a global movement is generated that drives all agents towards the agents having the heaviest masses [2]. The effectiveness and robustness of these meta-heuristic algorithms depend upon two fundamental processes that navigate the swarm through the search space: exploration, which explores the large search space and ensures that the solution does not converge to a local optimum, and exploitation, which concentrates on the best solutions for convergence to optimality [12]. ...
Conference Paper
Full-text available
Gravitational search algorithm (GSA) is a nature-inspired optimization algorithm, inspired by Newton's law of gravity and laws of motion. In this paper, a new variant of the Gravitational search algorithm is presented. The exploration and exploitation capability of GSA is balanced by splitting the whole swarm into two groups. The search process is modified so that one group better exploits and the other becomes responsible for better exploration. The proposed algorithm is tested on several benchmark functions. The results show that our approach gives a better balance between exploration and exploitation to reach the optimal solution. A comparative study of this algorithm against GSA and some well-known swarm-based meta-heuristic search methods, such as Biogeography-based optimization (BBO), Differential evolution (DE) and Artificial bee colony (ABC), confirms its efficiency and robustness.
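A minimal, hedged sketch of a GSA-style step (not the authors' grouped variant): fitter agents are assigned heavier masses, and every agent is pulled toward the others in proportion to those masses, so the swarm drifts toward the heaviest (best) agents:

```python
import random

# Hedged sketch of one GSA-style update in 1-D, for minimisation.
# Masses are derived from fitness; the pull toward heavier agents plays
# the role of gravity. Not the exact update rule of any cited paper.
def gsa_step(positions, fitnesses, step=0.5, eps=1e-9):
    best, worst = min(fitnesses), max(fitnesses)
    raw = [(f - worst) / (best - worst + eps) for f in fitnesses]
    total = sum(raw) + eps
    masses = [m / total for m in raw]      # normalised masses, best is heaviest
    new_positions = []
    for i, xi in enumerate(positions):
        pull = sum(random.random() * masses[j] * (xj - xi)
                   for j, xj in enumerate(positions) if j != i)
        new_positions.append(xi + step * pull)
    return new_positions

# 1-D example: one step on f(x) = x^2
pos = [3.0, -2.0, 0.5, 4.0]
pos = gsa_step(pos, [x * x for x in pos])
```

The stochastic weights keep the move exploratory; shrinking `step` over iterations would shift the balance toward exploitation, which is the trade-off the abstract addresses by splitting the swarm.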
... If the system includes a set of particles with different masses and distances (discrete system), Equation (1) can be rewritten as Equation (2). Here, m i is the mass of each particle, and r i is its distance from the origin [19]. This concept was first developed by Leonhard Euler in 1765: ...
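Equation (2) as described is the discrete sum I = Σᵢ mᵢ·rᵢ². A minimal sketch with illustrative values:

```python
# Hedged sketch of the discrete moment of inertia, Equation (2) as
# described: I = sum_i m_i * r_i^2, with m_i the mass of each particle
# and r_i its distance from the origin.
def moment_of_inertia(masses, distances):
    """Moment of inertia of a discrete system of point masses."""
    return sum(m * r ** 2 for m, r in zip(masses, distances))

# Example: three unit masses at distances 1, 2, 3 -> I = 1 + 4 + 9 = 14
I = moment_of_inertia([1.0, 1.0, 1.0], [1.0, 2.0, 3.0])
```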
Article
Full-text available
Introduction Many features, emerging from mathematical techniques, have been used in the analysis of brain signals. In this study, the physical quantity of "moment of inertia" (MOI) was introduced as a feature to enhance high-frequency waves (HFWs) in electroencephalography (EEG). Materials and Methods In this research, the recorded EEGs from F3, F4, and Cz points in 20 males were used. A total of 30 noiseless epochs (4 sec with a 1 sec overlap) were selected for each eyes-open and eyes-closed state from each brain signal. After averaging the relative power spectrum (RPS) of 30 epochs and obtaining an RPS with low fluctuation, the MOIs of the power spectrum and each EEG band were calculated. Results The MOI enhanced the HFWs of brain signals; therefore, HFW fluctuations in the power spectrum of MOI were more evaluable and observable than those of RPS. Paired t-test showed no significant difference in the asymmetry of MOI between the eyes-open and eyes-closed states (P=0.227), while the MOIs of alpha and beta bands between these two states were significantly different [F(1, 38)=11.8; P=0.001 and F(1, 38)=12.9; P=0.001, respectively]. Conclusion This study demonstrated that the MOI of different frequency bands might be used as a feature for some patients who are different from healthy subjects in terms of high-frequency bands or performance of two hemispheres. Therefore, in order to ensure the applicability of the obtained results, evaluation of MOI for EEG of some disorders, such as attention-deficit hyperactivity disorder, alcoholism, and autism is suggested in future studies.
... One must keep in mind that R_phot includes not only the WD core, but also an extended accreted envelope on the WD surface, with hydrogen burning at its base. In order to determine the actual radius of the WD core, one can consider the radius R_1 of a zero-temperature WD, which is related to its mass M_1 by (Hamada & Salpeter 1961; Eracleous & Horne 1996) ...
Article
Supersoft X-ray sources (SSSs) are characterized by their low effective temperatures and high X-ray luminosities. The soft X-ray emission can be explained by hydrogen nuclear burning on the surface of a white dwarf (WD) accreting at an extremely high rate. A peculiar 67 s periodicity (P67) was previously discovered in the XMM-Newton light curves of the SSS CAL 83. P67 was detected in X-ray light curves spanning 9 years, but exhibits variability of several seconds on time-scales as short as a few hours, and its properties are remarkably similar to those of dwarf nova oscillations (DNOs). DNOs are short time-scale modulations often observed in dwarf novae during outburst. DNOs are explained by the well established low-inertia mag- netic accretor (LIMA) model. In this paper, we show that P67 and its associated period variability can be satisfactorily explained by an application of the LIMA model to the more extreme environment in a SSS (eLIMA), contrary to another recent study at- tempting to explain P67 and its associated variability in terms of non-radial g-mode oscillations in the extended envelope of the rapidly accreting white dwarf in CAL 83. In the eLIMA model, P67 originates in an equatorial belt in the WD envelope at the boundary with the inner accretion disc, with the belt weakly coupled to the WD core by a 100 000 G magnetic field. New optical light curves obtained with the Sutherland High-speed Optical Camera (SHOC) are also presented, exhibiting quasi-periodic modulations on time-scales of 1000 s, compatible with the eLIMA framework.
... The derivation of these elementary equations is presented in many physics texts (Halliday and Resnick 1981), and these equations, in a slightly modified form, are also given by Chow (1959). They describe the motion of a projectile unaffected by wind resistance. ...
Article
Full-text available
In recent years, design floods have increased beyond spillway capacity at numerous large dams. When additional spillway capacity is difficult or expensive to develop, designers may consider allowing the overtopping of a dam during extreme events. For concrete arch dams, this often raises issues of potential erosion and scour downstream from the dam, where the free jet initiating at the dam crest impacts the abutments and the downstream river channel. A recent review has shown that a commonly cited equation for predicting the trajectory of free jets is flawed, producing jet trajectories that are much too flat in this application. This could lead analysts to underestimate the amount of scour that could occur near a dam foundation, or conversely to overestimate the extent of scour protection required. This technical note presents the correct and incorrect jet trajectory equations, quantifies the errors associated with the flawed equation, and summarizes practical information needed to model the trajectory of free jets overtopping dam crests.
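The correct drag-free trajectory referred to in such analyses is the standard textbook form y(x) = x·tanθ − g·x²/(2·v₀²·cos²θ); the flawed equation the note discusses is not reproduced here. A minimal sketch with illustrative launch values:

```python
import math

# Hedged sketch of the standard drag-free projectile trajectory.
# For a jet leaving a crest horizontally (theta = 0) this reduces to
# y = -g*x^2 / (2*v0^2). Launch values below are illustrative.
G = 9.81  # m/s^2

def trajectory_y(x, v0, theta_rad):
    """Vertical position (m) of a drag-free jet at horizontal distance x."""
    return (x * math.tan(theta_rad)
            - G * x ** 2 / (2 * v0 ** 2 * math.cos(theta_rad) ** 2))

# Horizontal launch at 10 m/s: after 5 m of travel the jet has fallen ~1.23 m
y = trajectory_y(5.0, 10.0, 0.0)
```

A trajectory flatter than this one underestimates how far below the crest the jet lands, which is exactly the scour-location error at issue.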
... Traveling waves are more useful for transport analysis and standing waves are more useful for fitting particular boundary conditions. College physics textbooks [Cutnell and Johnson, 2001; Walker, 2002; Halliday et al., 2003] describe fitting linear waves to specific boundary conditions to make standing waves on a string of finite length, and fitting acoustic waves in organ pipes. Open-end organ pipes are fitted to finite-length boundary conditions assuming antinode behavior at both open ends. ...
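The boundary-condition fitting described in the snippet yields the standard harmonic series: f_n = n·v/(2L) for a string fixed at both ends (and, with antinodes at both ends, for an open-open pipe), and only odd harmonics f_n = n·v/(4L) for a pipe open at one end and closed at the other. A minimal sketch with illustrative values:

```python
# Hedged sketch of standing-wave frequencies fitted to finite-length
# boundary conditions; v is the wave speed (m/s), L the length (m).
def string_harmonics(v, L, n_max=3):
    """First n_max frequencies of a string fixed at both ends: n*v/(2L)."""
    return [n * v / (2 * L) for n in range(1, n_max + 1)]

def open_closed_pipe_harmonics(v, L, n_max=3):
    """Odd harmonics of an open-closed pipe: n*v/(4L), n = 1, 3, 5, ..."""
    return [n * v / (4 * L) for n in range(1, 2 * n_max, 2)]

# Illustrative: speed of sound 343 m/s, length 0.5 m
f_string = string_harmonics(343.0, 0.5)
f_pipe = open_closed_pipe_harmonics(343.0, 0.5)
```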
Article
The fundamental intramolecular frequency of a globular protein can be obtained from the measurements of acoustic velocities of bulk protein matter. This lowest frequency for common size molecules is shown to be above several hundred GHz. All modes below this frequency would then be intermolecular modes or bulk modes of the molecule and surrounding matter or tissue. The lowest frequency modes of an extended DNA double helix are also shown to be bulk modes because of interaction with water. Only DNA modes, whose frequency is well above 4 GHz, can be intrahelical modes, that is, confined to the helix rather than in the helix plus surroundings. Near 4 GHz, they are heavily damped and, therefore, not able to resonantly absorb. Modes that absorb radio frequency (RF) below this frequency are bulk modes of the supporting matter. Bulk modes rapidly thermalize all absorbed energy. The implication of these findings for the possibility of athermal RF effects is considered. The applicability of these findings for other biological molecules is discussed.
... If Schumann resonances generate energy from atoms and small molecules via dipole forces, which as part of Pierre Curie's law has been experimentally verified (Halliday et al., 1993), it seems we can improve that interaction with this side-effect-free technology (Repacholi and Greenbaum, 1999) to provide safe, new interventions superseding current paradigms, as others have predicted (Johnson et al., 2004; Liboff 2004). Technologically driven as we are, it is inexplicable that we remain with such deadly paradigms as drugs and surgery when a safe, non-invasive technology that may replace them awaits our serious interest. ...
Article
First reduced to science by Maxwell in 1865, electromagnetic technology as therapy received little interest from basic scientists or clinicians until the 1980s. It now promises applications that include mitigation of inflammation (electrochemistry) and stimulation of classes of genes following onset of illness and injury (electrogenomics). The use of electromagnetism to stop inflammation and restore tissue seems a logical phenomenology, that is, stop the inflammation, then upregulate classes of restorative gene loci to initiate healing. Studies in the fields of MRI and NMR have aided the understanding of cell response to low energy EMF inputs via electromagnetically responsive elements. Understanding protein iterations, that is, how they process information to direct energy, we can maximize technology to aid restorative intervention, a promising step forward over current paradigms of therapy.
Article
During an earthquake, interior nonstructural components of a building will be damaged. The damaged objects obstruct pedestrians' evacuation routes and increase casualties, but this issue has received scant attention in evacuation simulation research. This paper focuses on this issue and proposes an indoor seismic evacuation model to simulate crowd evacuation in a dynamic environment. A physical model of nonstructural components is presented to simulate the dynamics of the indoor scenario. A flow field algorithm is constructed to guide pedestrians' avoidance behaviors globally, reflecting the impact of environmental changes on indoor crowd path selection, and a modified social force model is built to simulate the joint influence of seismic forces and the environment on pedestrian motion states. The results of the experiments show that the proposed model can generate realistic evacuation scenes and rational evacuation routes in an earthquake. We propose a crowd evacuation simulation method for the dynamic seismic environment, use real seismic data and the physical model to simulate the motion of movable objects, modify the flow field algorithm to guide individuals around ground obstacles, and build an improved social force model to simulate crowd movement.
Article
Full-text available
Community detection refers to the task of finding groups of nodes in a network that share common properties. The identified groups are called communities, which have tight intra-connections and feeble inter-connections. For large-scale networks, we need a stable algorithm that detects communities quickly and does not depend on previous knowledge about the possible communities or any special parameter tuning. Therefore, this paper introduces a novel algorithm for community detection that is inspired by the surface gravity of astronomical objects. In this algorithm, each vertex of a network and its degree and size are metaphors for an astronomical object and its mass and radius, respectively. The algorithm defines the gravity force for each vertex of a network. So, like a dense astronomical object, a dense vertex has a high mass and a low radius and exerts a high gravity force on its neighbours. In this paper, we define a particular modularity gain function to evaluate the merging gain of two vertexes. The algorithm is very fast, and its computational complexity is of the order of O(n log n). Although there is a trade-off between the modularity and speed of algorithms, the results showed that the suggested algorithm is much faster than the current well-known algorithms. Furthermore, it is reliable, stable and free from parameter tuning as well as predefined knowledge about communities. In this paper, the proposed algorithm is extended to run faster than the original Gravity. The extended Gravity algorithm divides a large-scale network into some smaller sub-networks. Then the communities of each part are detected in serial and parallel modes. While preserving the modularity of detected communities, the extended Gravity is extremely faster than the original Gravity algorithm.
Article
Full-text available
Objective: We compared the air- or saline-insufflated endotracheal tube (ETT) cuff pressures and their effects on postoperative respiratory complications in gynecological laparoscopic surgeries in the Trendelenburg position (TP). Patients and Methods: This prospective study was carried out on a total of 60 patients, whose ages ranged from 18 to 65 years and who were classified by the American Society of Anesthesiologists (ASA) as I-III. They were scheduled for gynecological laparoscopic surgery in TP. Patients included in the study were randomly divided into two groups: the saline (Group S, n=30) and air (Group A, n=30) groups. ETT cuff pressures and peak airway pressures were recorded immediately after intubation and at 10-minute intervals during the intraoperative period. Results: The cuff pressure and maximum cuff pressure values in the saline group were significantly lower than in the air group at the 50th minute (p<0.05). Sore throat and analgesic consumption were significantly lower in Group S in the first postoperative 24 hours (p<0.001 for both). Conclusion: The intraoperative cuff pressures, postoperative sore throat, and analgesic consumption were lower in the saline-insufflated group than in the air-insufflated group in gynecological laparoscopic surgery in TP.
Article
Full-text available
A general formulation for both passive and active transmembrane transport is derived from basic thermodynamical principles. The derivation takes into account the energy required for the motion of molecules across membranes and includes the possibility of modeling asymmetric flow. Transmembrane currents can then be described by the general model in the case of electrogenic flow. As it is desirable in new models, it is possible to derive other well-known expressions for transmembrane currents as particular cases of the general formulation. For instance, the conductance-based formulation for current turns out to be a linear approximation of the general formula for current. Also, under suitable assumptions, other formulas for current based on electrodiffusion, like the constant field approximation by Goldman, can be recovered from the general formulation. The applicability of the general formulations is illustrated first with fits to existing data, and after, with models of transmembrane potential dynamics for pacemaking cardiocytes and neurons. The general formulations presented here provide a common ground for the biophysical study of physiological phenomena that depend on transmembrane transport.
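The abstract notes that Goldman's constant field approximation can be recovered from the general formulation. A minimal sketch of that special case (the Goldman-Hodgkin-Katz current equation, not the authors' general model) is given below; the permeability and concentration values in the example are illustrative assumptions.

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def ghk_current(v, z, p, c_in, c_out, temp=310.0):
    """Goldman-Hodgkin-Katz constant-field current density.

    v     -- membrane potential (V)
    z     -- ion valence
    p     -- membrane permeability (m/s)
    c_in  -- intracellular concentration (mol/m^3)
    c_out -- extracellular concentration (mol/m^3)
    """
    xi = z * F * v / (R * temp)
    if abs(xi) < 1e-9:                 # v -> 0 limit, avoids 0/0
        return p * z * F * (c_in - c_out)
    return (p * z * F * xi
            * (c_in - c_out * math.exp(-xi)) / (1.0 - math.exp(-xi)))

def nernst(z, c_in, c_out, temp=310.0):
    """Reversal potential (V): the GHK current vanishes here."""
    return (R * temp) / (z * F) * math.log(c_out / c_in)

# Illustrative K+ example: the current changes sign exactly at the
# Nernst potential (about -89 mV for these concentrations).
e_k = nernst(1, 140.0, 5.0)
```

The constant-field expression is one of the special cases the abstract says can be derived from the general thermodynamic formulation; conductance-based currents arise instead as its linear approximation.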
Article
Full-text available
Background: It seems that using the shuttle balance, which has recently been produced in Iran, would be beneficial in exercise prescription for preventing sports injuries and in recovery. The purpose of this study is to compare the amount of electromyographic (EMG) activity of the muscles involved in the ankle strategy while standing on one leg on a shuttle balance versus a wobble board. Materials and Methods: This is a causal-comparative study. Fifteen female students aged 20-22 years who met the entrance criteria were purposively selected. The EMG activity of selected muscles (tibialis anterior, gastrocnemius, rectus femoris and hamstring) was measured while standing on one leg on the two devices. Differences in mean muscle activity between the two devices were assessed using multivariate analysis of variance. Results: The results showed a significant difference in the EMG activity of the involved muscles (p=0.001). The within-group effects also showed that the EMG activity of the tibialis anterior, rectus femoris and hamstring while standing on the shuttle balance was significantly higher than while standing on the wobble board (p<0.05). Conclusion: Standing on a shuttle balance appears to elicit higher EMG activity in the muscles acting on the ankle and thigh joints, i.e. the tibialis anterior, rectus femoris and hamstring. The shuttle balance is therefore recommended for use in balance training programs.
Chapter
Full-text available
In this chapter, the basic ideas from Chap. 11 are applied to concrete examples of models relating to fertility, specifically the total fertility rate, a measurement model, and the Easterlin socio-economic model of fertility, a behavioral model.
Article
Full-text available
Industrial scroll decanter centrifuge (SDC) separation of the solids in fluid fine tailings (FFT), whose particles are 10 μm and smaller, requires flocculant addition. Accordingly, the mechanism in the centrifuge is flocculation of the fine solid particles followed by sedimentation. Feeding pre-flocculated material therefore obviates the flocculation step inside the SDC, improving process efficiency by reducing the power consumption and increasing the throughput capacity of each SDC.
Article
Full-text available
In this paper, an optimum design method for buckling restrained brace frames subjected to seismic loading is presented. A multi-objective charged system search is developed to optimize the cost of, and the damage caused by, earthquakes for steel frames. Minimum structural weight and minimum seismic energy, defined as the seismic input energy divided by the maximum hysteretic energy of the fuse members, are selected as the two objective functions used to find Pareto solutions that satisfy the stated preferences. The main design constraints, comprising the allowable inter-story drift, the plastic rotations of beam and column members, and the plastic displacement of the buckling restrained braces, are also enforced. Optimum designs for three different frames are obtained and investigated with the developed method.
Chapter
The essay reviews a new approach to the science/physics curriculum. Scientific knowledge is considered as a culture, and disciplinary content knowledge is upgraded to cultural content knowledge (CCK). Physics disciplinary knowledge is viewed as comprising fundamental theories hierarchically structured in a triadic nucleus-body-periphery model representing a discipline-culture. This structure supports displaying the major steps of the scientific discourse in the construction of the particular discipline. Through the contrast between the fundamentals (nucleus) and their alternatives (periphery), the conceptual meaning of the former is established and emphasized. As to the epistemological aspects of knowledge, the cultural approach suggests considering different approaches (rationalist, empiricist and constructivist) as complementary contributions interwoven in the integrated method practiced in science. A CCK-based curriculum involves and organizes the use of the history and philosophy of science. It seeks meta-knowledge (a big picture) of science, appealing to a broad population of learners with different interests and preferences. Three ways to deliver the CCK-oriented curriculum have been empirically explored and are briefly described: a new curriculum, a conceptual excursus, and a summative lecture. Some epistemological features are addressed within the discipline-culture perspective (the theory-model relationship, concept definitions, and the cumulative nature and objectivity of scientific knowledge). Altogether, the suggested curricular perspective provides a paradigm matching the tradition of disseminating scientific literacy and enlightenment.
Conference Paper
This work presents a new Active Fault-Tolerant Control (AFTC) method to keep Unmanned Aerial Vehicles (UAVs) stable and controllable even under the total failure of one propulsion system. The proposed approach is based on reconfiguring the control of the remaining propellers. As the system is no longer controllable in all six degrees of freedom, one dimension must be abandoned; the control of the yaw dynamics was chosen to be ignored because doing so introduces the least crash risk. The remaining variables then become controllable, allowing the aircraft to land safely.
Article
Full-text available
This study deals with the school instruction of the concept of weight. A historical review reveals the major steps in the changing definition of weight, reflecting epistemological changes in physics. The latest change, drawing on the operation of weighing, has not been widely adopted in physics education. We compared the older instruction based on the gravitational definition of weight with the newer one based on the operational definition. The experimental teaching was applied in two versions, simpler and extended. The study examined the impact of this instruction on middle school students in a regular teaching environment. The experiment involved three groups (N = 486) of 14-year-old students (ninth grade). The assessment drew on a written questionnaire and personal interviews. The elicited schemes of conceptual knowledge allowed us to evaluate the impact on students' pertinent knowledge. The advantage of the new teaching manifested itself in a significant decrease in well-known misconceptions such as "space causes weightlessness," "weight is an unchanging property of the body considered," and "heavier objects fall faster." The twofold advantage of the operational definition of weight, epistemological and conceptual, supports the corresponding curricular change of adopting it.
Chapter
There are different matrices associated with a graph, such as the incidence matrix, the adjacency matrix and the Laplacian matrix. One of the aims of algebraic graph theory is to determine how properties of graphs are reflected in the algebraic properties of these matrices. The eigenvalues and eigenvectors of these matrices provide valuable tools for combinatorial optimisation, in particular for the ordering of sparse symmetric matrices such as the stiffness and flexibility matrices of structures. Here, algebraic graph-theoretical methods and metaheuristic-based algorithms are provided for nodal ordering aimed at bandwidth and profile reduction.
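As a concrete baseline for the nodal-ordering problem this chapter addresses, the sketch below implements the classical Cuthill-McKee ordering on a toy graph; it is a simple illustration of bandwidth reduction, not the algebraic or metaheuristic methods of the chapter itself, and the example graph is assumed connected.

```python
from collections import deque

def bandwidth(adj, order):
    """Max |pos(u) - pos(v)| over all edges under a node ordering;
    this equals the matrix bandwidth of the reordered adjacency matrix."""
    pos = {node: i for i, node in enumerate(order)}
    return max(abs(pos[u] - pos[v]) for u in adj for v in adj[u])

def cuthill_mckee(adj):
    """Classical Cuthill-McKee ordering: breadth-first search from a
    minimum-degree node, visiting neighbours by increasing degree."""
    start = min(adj, key=lambda n: len(adj[n]))
    order, seen, queue = [], {start}, deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in sorted(adj[u], key=lambda n: len(adj[n])):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return order

# A path graph 0-2-4-1-3 deliberately labelled in a bad order:
# the natural labelling gives bandwidth 3, the CM ordering gives 1.
adj = {0: [2], 2: [0, 4], 4: [2, 1], 1: [4, 3], 3: [1]}
natural = bandwidth(adj, sorted(adj))
cm = bandwidth(adj, cuthill_mckee(adj))
```

Metaheuristics such as those in the chapter search over orderings like these, scoring each candidate by bandwidth or profile instead of accepting the first BFS result.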
Article
Full-text available
Physical examination of any swelling is the first step in making a diagnosis. We often see a patient with a spherical swelling, which is usually a cyst. The interpretation of physical signs should be based on sound principles of physics. In the present paper, we explain the physical characteristics of a swelling (cyst) using principles of fluid mechanics.
Chapter
This chapter consists of two parts. In the first part, the standard magnetic charged system search (MCSS) is presented and applied to different numerical examples to examine the efficiency of this algorithm. The results are compared to those of the original charged system search method [1].
Chapter
This chapter consists of two parts. In the first part, an optimization algorithm based on principles from physics and mechanics, known as the charged system search (CSS), is presented [1]. The algorithm employs the governing Coulomb law from electrostatics and the Newtonian laws of mechanics. CSS is a multi-agent approach in which each agent is a charged particle (CP). CPs can affect each other based on their fitness values and their separation distances. The magnitude of the resultant force is determined using the laws of electrostatics, and the quality of the movement is determined using the laws of Newtonian mechanics. CSS can be utilized in all fields of optimization; it is especially suitable for non-smooth or non-convex domains. CSS requires neither gradient information nor continuity of the search space.
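A heavily simplified, purely illustrative sketch of the CSS idea (fitness-derived charges, Coulomb-like attraction toward better agents, Newtonian position updates) is given below. It omits many details of the published algorithm, such as the charged memory, the attraction/repulsion radius, and boundary handling, and should not be read as the reference implementation.

```python
import random

def css_minimize(f, dim, n_agents=12, iters=150, seed=1):
    """Toy charged-system-search-style minimizer (illustrative only).

    Each agent carries a charge derived from its fitness; better agents
    attract the rest via a Coulomb-like force, and positions are then
    updated with a simple damped Newtonian step.
    """
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_agents)]
    vel = [[0.0] * dim for _ in range(n_agents)]
    best = list(min(pos, key=f))          # best-so-far position
    for _ in range(iters):
        fits = [f(p) for p in pos]
        fw, fb = max(fits), min(fits)
        spread = (fw - fb) or 1.0
        charge = [(fw - fi) / spread for fi in fits]   # in [0, 1], best = 1
        for i in range(n_agents):
            force = [0.0] * dim
            for j in range(n_agents):
                if fits[j] < fits[i]:                  # only better agents pull
                    d = [pos[j][k] - pos[i][k] for k in range(dim)]
                    r = max(sum(x * x for x in d) ** 0.5, 1e-9)
                    for k in range(dim):
                        force[k] += charge[j] * d[k] / r ** 2  # Coulomb-like
            for k in range(dim):
                vel[i][k] = 0.5 * vel[i][k] + 0.5 * force[k]   # damped step
                pos[i][k] += vel[i][k]
        cand = min(pos, key=f)
        if f(cand) < f(best):
            best = list(cand)
    return best

# No gradient of the objective is ever evaluated, matching the
# derivative-free character described in the chapter.
sphere = lambda p: sum(x * x for x in p)
```

Note that the objective appears only through fitness comparisons, which is why the approach tolerates non-smooth and non-convex search spaces.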
Article
Full-text available
South Korean high school students are taught Einstein's Special Theory of Relativity. In this article, I examine the portrayal of this theory in South Korean high school physics textbooks and discuss an alternative method for solving the analyzed problems. This examination of how these textbooks present the theory reveals two main flaws. First, the textbooks present historically fallacious backgrounds regarding the origin of the theory because of a blind dependence on popular undergraduate textbooks, which ignore the theory's revolutionary aspects within physics. Second, the elements currently used to teach the theory are so sketchily enumerated and conceptually confused that students are not given good opportunities to develop the critical capacities needed for evaluating scientific theories. Reviewing the textbooks used in South Korea, I first claim that the history of science helps students understand not merely the origins of the theory but also its two principles. Second, I argue that we should distinguish not only hypotheses from principles but also phenomena from theoretical consequences and evidence. Finally, I suggest an alternative account in which theory testing occurs through the evaluation of competing theories on the basis of data, rather than in a simple relation between a hypothesis and evidence.
Chapter
Switched-capacitor (SC) DC–DC converters are a class of power converters that are used to convert one voltage level to another, through the use of switches and capacitors. They consist of two parts, a power stage, which is also known as a charge pump, and a feedback/feed-forward controller that regulates the output to the desired voltage value.
Chapter
Effective use of 2D echocardiography requires that the user acquire some understanding of how the images are produced. Although modern technology has resulted in images that quite faithfully represent the structures scanned by the transducer, ultrasound has limitations that should be understood for the correct interpretation of echocardiographic images.
Chapter
This chapter consists of two parts. In the first part, an optimization algorithm based on principles from physics and mechanics, known as the Charged System Search (CSS), is presented [1]. The algorithm employs the governing Coulomb law from electrostatics and the Newtonian laws of mechanics. CSS is a multi-agent approach in which each agent is a Charged Particle (CP). CPs can affect each other based on their fitness values and their separation distances. The magnitude of the resultant force is determined using the laws of electrostatics, and the quality of the movement is determined using the laws of Newtonian mechanics. CSS can be utilized in all fields of optimization; it is especially suitable for non-smooth or non-convex domains. CSS requires neither gradient information nor continuity of the search space.
Article
Full-text available
Explaining is one of the most important everyday practices in science education. In this article, we examine how scientific explanations can serve as cultural tools for members of a group of pre-service physics teachers. Specifically, we focus on their use of explanations about forces of inertia in non-inertial frames of reference. A basic assumption of our study is that explanatory tools (e.g., typical explanations learned) shape the ways we think and speak about the world. Drawing on the theory of mediated action, our analysis illustrates three major claims about scientific explanations: (1) explaining is an act of actively responding to explanations presented by others (and not only to evidence itself); (2) the actual experience of explaining involves the enactment of power and authority; (3) resistance (not acknowledging an explanation as one's own) might be a constitutive part of learning how to explain (hence, teachers could approach scientific explanation in a less dogmatic way). These assertions expand the possibilities of dialogue between studies of scientific explanations and the social sciences. Implications for science teaching and research in science education are presented.
Chapter
Full-text available
A lot of attention has been devoted lately to the philosophical questions raised by phase transitions, and for a good reason. The topic is not only conceptually intriguing, but also generous, inviting interesting connections to issues such as emergence and reduction, idealizations and modeling, as well as the status of fictions in scientific explanations. Here I address this latter topic, and sketch a novel approach to understanding the role of a certain type of fiction (an infinite system) in the explanation of the phenomena of phase-change.
Article
This chapter consists of two parts. In the first part, the standard Magnetic Charged System Search (MCSS) is presented and applied to different numerical examples to examine the efficiency of the algorithm. The results are compared to those of the original charged system search method [1].
Article
For many years, scientists have been searching for nondestructive methods for measuring plant root system parameters. Measurement of electrical capacitance (EC) across the root has been proposed as one such nondestructive method. This article presents a study of the relationships between EC measurements and the shape and size of the electrodes immersed in the medium used for measurement. EC and the parameters characterizing the root systems of 1-year-old seedlings of the common beech Fagus sylvatica L. were measured under laboratory conditions. EC was measured between the seedling root systems and two different electrodes, one in the form of a cylinder and the other a rectangular plate. Statistically significant correlations were found between capacitance and root system parameters in both variants; however, the correlations were higher for the flat rectangular plate. The correlation coefficient (r) between EC and total root length was 0.688 for the cylindrical electrode and 0.802 for the rectangular plate; for total root area, 0.641 and 0.818; and for dry weight of the root system, 0.502 and 0.747. The best-fitted linear regression relationships between EC and the measured parameters were characterized by low determination coefficients in the variants with cylindrical electrodes and by higher coefficients with the flat rectangular plate electrodes. The results indicated that a two-dielectric-media concept is a better model than Dalton's model when attempting to interpret the behavior of root and soil capacitance. The different electrodes probably allow root capacitance measurements to be interpreted from different aspects; however, this hypothesis requires further verification.
Article
Full-text available
This study presents a thermal buckling analysis of a solid circular plate made of porous material bonded with piezoelectric sensor-actuator patches. The porous material properties vary through the thickness according to a specific function. The general mechanical nonlinear equilibrium and linear stability equations are derived using variational formulations to obtain the governing equations of the piezoelectric porous plate. The thermal buckling load is derived for solid circular plates under a uniform temperature load with the clamped edge condition. In the present paper, the effects of the porous plate's thickness, porosity, porous thermal expansion coefficient, piezoelectric layer thickness, piezoelectric thermal expansion coefficient, and feedback gain on the thermal stability of the plate are investigated.
Chapter
Full-text available
This chapter focuses on the detection and evaluation of seismically induced structural damage by means of changes in structural mechanical impedance at high frequencies of the order of kilohertz. Structural mechanical impedance is a direct representation of the structural parameters. However, its measurement at high frequencies is difficult by conventional means owing to practical considerations. This chapter shows how this problem can be alleviated by extracting the mechanical impedance from the electro-mechanical admittance signatures of piezoelectric-ceramic (PZT) patches surface-bonded to the structure. Based on the variation of the extracted impedance elements with respect to frequency, the inherent structural components are identified. This approach eliminates the need for any a priori information about the phenomenological nature of the structure. As proof of concept, the chapter reports a study conducted on a model of a reinforced concrete (RC) frame subjected to seismic vibrations on a shaking table. The piezo-impedance transducers are found to perform better than both the low-frequency vibration techniques and the traditional raw-signature-based damage quantification in the EMI technique.
Chapter
Full-text available
In this work we present an interactive multimedia tutorial that allows learners to comparatively explore the dynamical behavior of the "two-capacitor system" and of other, quite different, systems showing very similar energetic behavior: i) two communicating tanks; ii) two coupled lossy springs "sharing elongation"; iii) a plastic collision between two material points; iv) two coaxial, rotationally coupled disks spinning around an axis. The aim of the work is to give learners useful insights into the fundamental subject of energy transformation. The tutorial, appropriate for high school and first-year university students, is implemented in a form that also makes it suitable for classroom use with an interactive whiteboard.
Chapter
Driven by the ever-growing demands for miniaturization, increased functionality, high performance, and low cost in microelectronic products and packaging, new and unique solutions in IC and system integration, such as system-on-chip (SOC), system-in-package (SiP), and system-on-package (SOP), have been hot topics recently. Despite the high level of integration, the number of discrete passive components (resistors, capacitors, or inductors) remains very high. In a typical microelectronic product, about 80% of the electronic components are passive components, which are unable to add gain or perform switching functions in circuit performance; yet these surface-mounted discrete components occupy over 40% of the printed circuit/wiring board (PCB/PWB) surface area and account for up to 30% of the solder joints and up to 90% of the component placements required in the manufacturing process. Embedded passives, an alternative to discrete passives, can address the issues associated with their discrete counterparts, including substrate board space, cost, handling, assembly time, and yield [1, 2]. Figure 14.1 schematically shows an example of the realization of embedded passive technology by integrating resistor and capacitor films into the laminate substrates (Fig. 14.1: schematic representation of the size advantages of embedded passives as compared to discrete passives). By removing these discrete passive components from the substrate surface and embedding them into the inner layers of the substrate board, embedded passives can provide many advantages, such as reduction in size and weight, increased reliability, improved performance, and reduced cost, which have driven a significant amount of effort toward this technology during the past decade. This chapter provides a review of the most recent developments in embedded inductors, capacitors, and resistors.
Chapter
This chapter reviews RF volume, array, and surface coil modeling, design, construction, control, safety, and human in-vivo application examples for field strengths from 4 to 9.4 T. While a comprehensive variety of coils is included, focus is on the transmission line (TEM) technology head, body, surface, and array coils developed by the author over the past 16 years. References provide a supplement to this material for the many details that cannot be covered in a single book chapter.
Article
Full-text available
Speckle tracking is a new imaging modality capable of providing information about myocardial motion in all three directions: longitudinal, circumferential and radial. There are many software packages on the market, each with its own tracking algorithm and user interface. We aimed to evaluate the feasibility of the QLAB software in clinical practice and speckle-based myocardial velocities in healthy subjects. Thirty-two subjects were enrolled in the study. Images from the apical four-chamber, apical two-chamber, and parasternal short-axis (mitral valve and apical levels) views were acquired and analyzed offline with QLAB. We measured speed and velocity data in the longitudinal, circumferential and radial directions; the time percentages of these events were also calculated. In the final data analysis, 825 of 832 segments (99.2%) were included. The Mann-Whitney U test, Student's t test, and Kendall's tau-b coefficient were used for statistical analysis. We determined that circumferential speed was significantly higher than radial velocity in both parasternal short-axis views (p<0.001). Likewise, longitudinal speed was higher than radial velocity in the apical views (p<0.001). Notwithstanding the speed and velocity data, the time percentages for radial velocity were significantly lower (p<0.001 for all) than their longitudinal or circumferential counterparts. We also noted that the apex was the segment reaching its maximum speed at the earliest time. The QLAB measurement time was relatively long (8.1 ± 1.7 min) and intraobserver agreement was lost in 3% of the segments. In light of these findings, we consider that the QLAB speckle tracking software package needs some improvements to shorten measurement time and decrease user intervention.
Article
Full-text available
Several previous studies have examined the ability to judge the relative mass of objects in idealized collisions. With a newly developed technique of psychological Markov chain Monte Carlo sampling (A. N. Sanborn & T. L. Griffiths, 2008), this work explores participants' perceptions of different collision mass ratios. The results reveal interparticipant differences and a qualitative distinction between the perception of 1:1 and 1:2 ratios. The results strongly suggest that participants' perceptions of 1:1 collisions are described by simple heuristics. The evidence for 1:2 collisions favors heuristic perception models that are sensitive to the sign but not the magnitude of perceived mass differences.
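The psychological MCMC technique cited in this abstract (Sanborn & Griffiths, 2008) lets a participant's two-alternative choices act as the acceptance step of a Markov chain. The sketch below simulates this with a programmatic "participant" following Barker's acceptance rule, under which the chain's stationary distribution equals the participant's subjective density; the Gaussian belief about the mass ratio and all parameter values are illustrative assumptions, not data from the study.

```python
import math
import random

def mcmcp_samples(prefer, start, n, step=0.5, seed=0):
    """Toy 'MCMC with People' chain.

    prefer(a, b) plays the role of the participant: the probability of
    choosing proposal a over the current state b. If choices follow
    Barker's rule p(a)/(p(a)+p(b)) for a subjective density p, the
    chain samples from p itself.
    """
    rng = random.Random(seed)
    x, out = start, []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)   # symmetric random-walk proposal
        if rng.random() < prefer(prop, x):
            x = prop
        out.append(x)
    return out

def participant(a, b, mu=2.0, sigma=0.3):
    """Simulated subject whose subjective mass-ratio belief is
    N(mu, sigma^2); responds according to Barker's acceptance rule."""
    pa = math.exp(-0.5 * ((a - mu) / sigma) ** 2)
    pb = math.exp(-0.5 * ((b - mu) / sigma) ** 2)
    return pa / (pa + pb) if pa + pb else 0.5  # guard against underflow

xs = mcmcp_samples(participant, start=0.0, n=5000)
mean = sum(xs[1000:]) / len(xs[1000:])    # discard burn-in samples
```

With a real subject, `prefer` would be replaced by the recorded binary choices between animated collisions, and the retained samples would trace out the subject's perceived mass-ratio distribution.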
Article
Commercially available ankle-foot prostheses are passive when in contact with the ground surface, and thus their mechanical properties remain fixed across different terrains and walking speeds. The passive nature of these prostheses causes many problems for lower extremity amputees, such as a lack of adequate balance control during standing and walking. The ground reaction force (GRF) and the zero moment point (ZMP) are known to be basic parameters in bipedal balance control. This thesis focuses on the estimation of these parameters using two prostheses: a powered ankle-foot prototype and an instrumented, mechanically passive prosthesis worn by a transtibial amputee. The main goal of this research is to determine the feasibility of estimating the GRF and ZMP primarily from the sensory information of a force/torque transducer positioned proximal to the ankle joint. This sensor location is ideal because it allows a compliant artificial foot to be in contact with the ground, in contrast to the rigid foot structures employed by walking robots. Both the active and the passive instrumented prostheses were monitored with a wearable computing system designed to serve as a portable control unit for the active prototype and as an ambulatory gait analysis tool.