Torture has been gaining impetus in recent years, its horror refined by the perversion of medical and scientific knowledge and techniques. More than 60 national regimes torture their citizens, and all of these nations deny the fact. Hence, medical evidence of torture is essential in the fight against torture and in the struggle to prevent its use. The development of the concept of Human Rights is described in order to comprehend fully the evil and horror of torture. The multifaceted medical aspects of torture are reviewed to draw attention to the significant contribution the medical profession has to offer in the fight against torture.
Science has two aspects: it is a burgeoning integrated knowledge and understanding of the natural world (discovery) and it is a series of human processes and ways of thinking which generate such understanding, and also make use of it for human advantages (invention). The natural world exists apart from humanity, but the processes and types of thinking which we use in investigating and using it represent one of our highest intellectual and ethical activities. To falsify the discovery aspect can only have temporary results, since Nature is the final referee. False accounts of human pathways to discovery and invention are ethical and intellectual sins which may often not be detected, but which misrepresent the way science works and may induce false approaches to investigation.
As a direct consequence of exposure to microgravity, astronauts experience a set of physiological changes which can have serious medical implications when they return to earth. Most immediate and significant are the headward shift of body fluids and the removal of gravitational loading from bone and muscles, which lead to progressive changes in the cardiovascular and musculoskeletal systems. Cardiovascular adaptations result in an increased incidence of orthostatic intolerance (fainting) following flight, decreased cardiac output, and reduced capacity for exercise. Changes in the musculoskeletal system contribute significantly to impaired function experienced in the post-flight period. The underlying factor producing these changes is the absence of gravity, and countermeasures are therefore designed primarily to simulate earthlike movements, stresses, and system interactions. Exercise is one approach that has had wide operational use and acceptance in both the US and Russian space programmes, and it has enabled humans to stay relatively healthy in space for well over a year. Although it remains the most effective countermeasure currently available, significant physiological degradation still occurs. The development of other countermeasures will be necessary for missions of longer duration, for example for human exploration of Mars.
It is increasingly evident that there is more to biological evolution than natural selection; moreover, the concept of evolution is not limited to biology. We propose an integrative framework for characterizing how entities evolve, in which evolution is viewed as a process of context-driven actualization of potential (CAP). Processes of change differ according to the degree of nondeterminism, and the degree to which they are sensitive to, internalize, and depend upon a particular context. The approach enables us to embed phenomena across disciplines into a broad conceptual framework. We give examples of insights into physics, biology, culture and cognition that derive from this unifying framework.
Since World War II, barriers to international trade in industrial commodities have been reduced, while barriers to agricultural commodity trade have become more severe. During the last several decades the world has experienced cycles of "food pessimism" and "food optimism." Nevertheless, as a result of technical change the terms at which the world's consumers can expect to have access to food appear to be more favorable in the future than in the past. If consumers are to have access to the greater abundance that can be made available, it will be necessary for developed market economies to reduce the distortions resulting from agricultural commodity and trade policies. It is in the interest of both producers and consumers, in developed and developing countries, that the world move toward an international trading regime in which agricultural commodities move across national borders at least as freely as financial resources.
What will an artificial Companion be like? Who will need them, and how much good or harm will they do? Will they change our lives and social habits in the radical way technologies have in the past: just think of trains, phones and television? Will they force changes in the law so that things that are not people will be liable for damages? Up till now, it has been the case that if a machine goes wrong, it is always the maker or the programmer, or their company, which is at fault. Above all, how many people with no knowledge of technology at all, such as the old and very young, will want to go about, or sit at home, with a companion that may look like a furry handbag on the sofa, or a rucksack on the back, but which will keep track of their lives by conversation, and be their interface to the rather elusive mysteries we now think of as the Internet or Web?
It is argued that the fundamentals of a “central dogma” for human psychology are already known and involve the bilaterality of the central nervous system and the specializations of the cerebral hemispheres in high-level cognitive functions. The popularity of the cognition/emotion and verbal/nonverbal dichotomies notwithstanding, a denotative-language/context dichotomy more accurately describes the cognitive contributions of the left and right cerebral hemispheres. The empirical basis for this dichotomy is reviewed and its implications for a central dogma for human psychology are outlined.
A systems analysis of the future evolution of man can be conducted by analyzing the biological material of the galaxy into three subsystems: man, intelligent machines, and intelligent extraterrestrial organisms. A binomial interpretation is applied to this system wherein each of the subsystems is assigned a designation of success or failure. For man the two alternatives are, respectively, 'decline' or 'flourish'; for machines they are 'become intelligent' or 'stay dumb'; while for extraterrestrial intelligence the dichotomy is that of 'existence' or 'nonexistence'. The choices for each of the three subsystems yield a total of eight possible states for the system. The relative lack of integration between brain components makes man a weak evolutionary contestant compared to machines. It is judged that machines should become dominant on earth within 100 years, probably by means of continuing development of existing man-machine systems. Advanced forms of extraterrestrial intelligence may exist but are too difficult to observe. The prospects for communication with extraterrestrial intelligence are reviewed.
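The binomial bookkeeping behind the eight states is easy to make concrete. A minimal Python sketch (an illustration of the counting, not code from the paper) enumerates the 2 × 2 × 2 = 8 possible states of the three-subsystem system:

```python
# Enumerate the eight states implied by the binomial interpretation:
# each of the three subsystems is assigned one of two outcomes, so the
# system as a whole has 2**3 = 8 possible states.
from itertools import product

outcomes = {
    "man": ("decline", "flourish"),
    "machines": ("stay dumb", "become intelligent"),
    "extraterrestrial intelligence": ("nonexistence", "existence"),
}

for state in product(*outcomes.values()):
    print(dict(zip(outcomes, state)))  # one of the eight system states
```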
The paper reviews efforts undertaken to explore the moon and the results obtained, noting that such efforts have involved a successful interdisciplinary approach to solving a number of scientific problems. Attention is given to the interactions of astronomers, cartographers, geologists, geochemists, geophysicists, physicists, mathematicians and engineers. Earth-based remote sensing and unmanned spacecraft such as the Ranger and Surveyor programs are discussed. Emphasis is given to the manned Apollo missions and the results obtained. Finally, the information gathered by these missions is reviewed with regard to how it has increased understanding of the moon, and future exploration is considered.
Tian Ji's horse racing strategy, a famous Chinese legend, offers a promising concept for important issues in today's competitive environment; the strategy is elaborated and analysed by examining the general case. A mathematical formulation for calculating the winning, drawing and losing combinations and their probabilities is presented to illustrate the insights ancient philosophies can bring to thinking about business competitiveness, in particular the wisdom behind sacrificing the part for the benefit of the whole, or sacrificing short-term objectives in order to attain a long-term goal.
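The general-case counting can be sketched directly. The following minimal Python example (the horse strengths and the scoring rule are my illustrative assumptions, not the paper's formulation) enumerates every racing order for one side and tallies how many orders yield a winning, drawn or losing match:

```python
from itertools import permutations

def match_outcomes(ours, theirs):
    """Tally match results over every ordering of our horses.

    Strengths are plain numbers (higher wins a race); the match goes to
    whichever side wins more of the individual races.
    """
    tally = {"win": 0, "draw": 0, "loss": 0}
    for order in permutations(ours):
        won = sum(a > b for a, b in zip(order, theirs))
        lost = sum(a < b for a, b in zip(order, theirs))
        result = "win" if won > lost else "loss" if lost > won else "draw"
        tally[result] += 1
    return tally

# The legend (strengths assumed for illustration): each of Tian Ji's horses
# is slightly weaker than the king's horse of the same class.
tian_ji = (1, 3, 5)        # low, middle, high
king = (2, 4, 6)           # king's horses race in a fixed order
print(match_outcomes(tian_ji, king))   # {'win': 1, 'draw': 0, 'loss': 5}
# Only the order (3, 5, 1) wins the match: sacrificing the weakest horse
# against the king's strongest secures victory in the other two races.
```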
Worldwide, there are two conceptual models of pregnancy and childbirth. In the first, ‘male’ model, pregnancy and the birth of a baby are biomedical processes. In the second, ‘female’ model, pregnancy and childbirth are major psychosocial events for the woman. The research agenda of obstetricians is based on the biomedical model. It mainly focuses on studying the effectiveness of interventions aimed at diminishing the risk of morbidity and mortality. Midwives’ and nurses’ research agenda centres on ‘normal birth’ and also takes into account psychosocial outcomes such as women’s experiences and satisfaction with different types of care. Midwifery and nursing are relatively young fields of science. Research training and opportunities are not as widely available to midwives as to obstetricians. As a consequence, the leading research into pregnancy and birth care focuses primarily on the application of technical, medical, ‘male’ solutions. A growing body of evidence, however, shows that a healthy baby alone is not enough to guarantee a woman’s satisfaction with her pregnancy, birth and postpartum period. To improve women’s and babies’ well-being, the biomedical and psychosocial models of pregnancy and birth need to be reconciled and integrated.
The economic and social pressures on universities will require the evolution of a more flexible approach to education if the system is to survive. An increasingly multicultural and multiethnic mix of students and the growing number of older students demand the ability to tailor programmes to the individuals concerned. Use of fibre optic and other high speed communications technology will be crucial in this endeavour, allowing courses to be pooled between institutions, while multimedia facilities and virtual reality offer students unrivalled opportunities to expand their educational experience. The future must also rest on increased collaboration with industry and on closer integration of universities into their local communities.
The chapter presents methods for analyzing series of statistical observations taken at regular intervals in time. The methods have a wide range of applications, such as astronomy, meteorology, seismology, oceanography, communications engineering and signal processing, the control of continuous process plants, neurology and electroencephalography, and economics. The methods apply to stationary or nonevolutionary time series. Such series manifest statistical properties which are invariant throughout time, so that the behavior during one epoch is the same as it would be during any other. There are two distinct yet broadly equivalent modes of time-series analysis which may be pursued. On the one hand are the time-domain methods that have their origin in the classical theory of correlation. Such methods deal preponderantly with the autocovariance functions and the cross-covariance functions of the series, and they lead inevitably towards the construction of structural or parametric models of the autoregressive moving-average type for single series and of the transfer-function type for two or more causally related series. On the other hand are the frequency-domain methods of spectral analysis. These are based on an extension of the methods of Fourier analysis which originate in the idea that, over a finite interval, any analytic function can be approximated, to whatever degree of accuracy is desired, by taking a weighted sum of sine and cosine functions of harmonically increasing frequencies. Furthermore, this chapter discusses a simple technique, smoothing the periodogram, that should provide a theoretical resolution to the problems encountered in attempts to detect the hidden periodicities in economic and astronomical data.
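As an illustration of the frequency-domain approach, here is a minimal Python/NumPy sketch (my own, not the chapter's notation) of a smoothed periodogram, using a uniform Daniell-type window to average neighbouring periodogram ordinates:

```python
import numpy as np

def smoothed_periodogram(x, window=5):
    """Raw periodogram of a series, smoothed with a uniform moving average."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                    # remove the mean (zero-frequency term)
    pgram = np.abs(np.fft.rfft(x)) ** 2 / len(x)   # raw periodogram ordinates
    kernel = np.ones(window) / window   # uniform (Daniell-type) weights
    return np.convolve(pgram, kernel, mode="same")

# Example: a sinusoid of frequency 0.1 cycles per observation buried in noise;
# the smoothed periodogram should peak near the hidden frequency.
rng = np.random.default_rng(0)
t = np.arange(512)
series = np.sin(2 * np.pi * 0.1 * t) + rng.normal(size=t.size)
spectrum = smoothed_periodogram(series)
freqs = np.fft.rfftfreq(t.size)
print(f"spectral peak near frequency {freqs[spectrum.argmax()]:.3f}")
```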
Methods of structure selection that have evolved in nature to achieve efficient survival of a species can provide valuable guidance to engineers designing structures or components. The principle of adaptive growth which biological structures use to minimise stress concentrations is discussed with reference to the different priorities of trees and mammalian bones. These criteria have been incorporated into a computer aided optimisation design procedure and into the soft kill option, which provides a means of eliminating non-load carrying areas from structures. Both of these techniques have found application in industry. Another procedure, computer aided internal optimisation, which attempts to optimise the performance of composite materials by aligning the fibre distribution with the stress flow, again mimicking the structure of trees, is still in the development stage. Finally, the ethical consequences of adopting such a severe strategy of natural selection with the low safety factors this implies are discussed. This theme is developed with reference to a visual assessment technique developed to estimate the likelihood that a given tree will fall.
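To make the idea of the soft kill option concrete, here is a toy sketch (entirely illustrative: the stress "analysis" is a placeholder standing in for a real finite element computation) of a stress-adaptive update in which under-stressed elements are progressively softened and finally removed:

```python
import numpy as np

def soft_kill(stress_fn, n_elements=100, steps=50, rate=0.1,
              sigma_ref=1.0, kill_below=0.05):
    """Soften under-stressed elements and remove them once negligible."""
    stiffness = np.ones(n_elements)
    alive = np.ones(n_elements, dtype=bool)
    for _ in range(steps):
        sigma = stress_fn(stiffness, alive)      # placeholder "FE analysis"
        stiffness[alive] += rate * (sigma[alive] - sigma_ref)
        stiffness = stiffness.clip(0.0, 1.0)
        alive &= stiffness > kill_below          # the "soft kill"
    return alive

# Placeholder analysis: stress is concentrated along a load path through the
# middle of the domain and negligible elsewhere.
def toy_stress(stiffness, alive):
    x = np.linspace(-1.0, 1.0, stiffness.size)
    return 2.0 * np.exp(-8.0 * x**2) * alive

kept = soft_kill(toy_stress)
print(f"{kept.sum()} of {kept.size} elements retained near the load path")
```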
In 1993, Rothamsted Experimental Station, the oldest agricultural research institute in the world, celebrated 150 years of experimental work on the production of farm crops. Most of the station's 'classical experiments', begun by its founder John (later Sir John) Lawes between 1843 and 1856, continue today and provide useful information for contemporary agriculture and ecology which Lawes could never have envisaged. These include development of a model for the turnover of organic matter in soil, assessments of the increasing pollution of soil by toxic metals and organic carcinogens resulting from twentieth century industrial activities, and insights into the ecological consequences of changes in agricultural policies. The experiments also provide many examples of the value of long term, systematic data collection and interdisciplinary research in agricultural production, ecology and environmental pollution. Facilities for this work became available through the scientific flair and foresight of Lawes, and since his death have been maintained and extended by generations of dedicated scientists.
Just over 150 years ago, in 1826, James Smithson FRS, a distinguished English chemist and mineralogist, made a will in which he left his fortune to a small wooden-hutted village in a swamp, Washington D.C., there to create the Smithsonian Institution. To this day no-one knows exactly why he made this, then eccentric, will. After a century and a half, his Institution has grown into a world-famous interdisciplinary establishment attracting over 20 million visitors each year. This development is due entirely to the eight Secretaries who have been in charge of the Institution since its formal establishment in 1846. Here the eighth Secretary reviews the work of his predecessors and states his own philosophy.
Following his early voyages to Labrador and Newfoundland, and then with Cook to the South Pacific, Banks was elected in 1778, at the age of 35, President of The Royal Society, a position held until his death 42 years later. From a present day perspective, his career appears to reflect the 'amateur' approach to science seen as typical of that time, but there are also contemporary resonances in his career in administration, his close links with government, and his emphasis on utility and public understanding of science. Banks' efforts to keep science politically acceptable during a period of rapid social change and revolution and to maintain The Royal Society's monopoly of natural knowledge, resisting pressure for fragmentation into more specialised groups, are considered in a review of his influence on the development of English science over this period.
The practice of doctoring wines with lead additives in order to sweeten and preserve them was widespread in Europe from Roman times on and was responsible for numerous epidemics. However, it was not until the 17th century that it was recognised that a common and frequently fatal disease, known as the colic of Poitou and by many other names, was caused by the consumption of leaded wines. The correct aetiology of the disease was discovered by Eberhard Gockel, city physician of Ulm, which city was at the time the centre of the German wine trade. The local colic outbreaks of the 1690s and Gockel's findings had resulted in serious economic losses to Ulm, and in response to this crisis Duke Eberhard Ludwig of Württemberg issued, in 1693, a strict edict against adulterating wines with lead. This law, which is here presented in facsimile and translation, is an early example of consumer protection legislation. Apart from attempting to control the distribution of litharge (lead oxide), the edict requires that witnesses report offenders to the authorities on pain of equal culpability ('whistle blowing') and prescribes the death penalty for convicted perpetrators.
This is the first of three articles about the campaign to abate smoke in the cities of England. It began early in the 19th century and culminated in the Clean Air Act, 1956. Between 1844 and 1850 no fewer than six Bills were introduced into parliament to compel furnaces to ‘consume their own smoke’. All failed to pass into law, although enough was known about the science and technology of combustion to justify legislation for furnaces used to raise steam-power. In 1853 Palmerston succeeded in putting on the statute book the first really effective clean air act for the metropolis of London. It did not cover dwelling houses; the campaign to bring these under the law, to be described in the second essay, had to await improvements in the design of domestic grates. It was during the decade 1843–1853 that the public conscience was awakened to the need for laws to protect the environment against pollution.
In the years between 1912 and 1916, the Danish artist and graphic designer Gerhard Heilmann published a series of articles in the journal of the Danish Ornithological Society. From the outset, Heilmann's work aroused international interest, and in 1926 it was published in English as The origin of birds, setting the international agenda for research in bird evolution for the next 40 years. In Denmark, however, Heilmann's highly original work was generally ignored or even ridiculed by zoologists. This article demonstrates how Heilmann's artistic abilities played an important role in securing him international renown as a palaeontologist, while at the same time his lack of scientific credentials led to his complete isolation from the Danish zoological establishment. And it suggests that Heilmann's unyielding efforts to solve the riddle of bird evolution in the borderland between art and science, reflected a deeply felt emotional and spiritual need to counteract the religious dogmatism that had permeated his childhood and early youth, leaving memories and experiences that remained vividly painful throughout his life.
This is the second of three articles about the campaign to abate smoke in the cities of England which began early in the 19th century and culminated in the Clean Air Act, 1956. By the 1880s it had become evident that a major cause of the severe ‘pea soup’ fogs in London was smoke from domestic fires, which were not covered by any law to abate smoke. A succession of severe fogs, coupled with the publication of mortality rates which turned out to be as severe as those caused by cholera, stimulated the creation of a smoke abatement lobby. This essay describes the work of that lobby and its sustained efforts (ten attempts in nine years) to put a smoke-abatement Bill through parliament. The choice before Londoners was either to change from open fires to closed stoves burning anthracite or coke, as the social price to pay for cleaner air; or to continue to enjoy the ‘pokeable, companionable’ open grate, at the cost of fogs which caused death and illness and paralysed transport. In the 1880s the technology for smoke control was already available; it was social resistance which prevented its application. The anti-smoke lobby failed to get more effective laws through parliament, but it did valuable service in keeping the issue before the public and lifting social norms for the environment toward the ‘threshold level’ which, two generations later, made stricter smoke abatement laws acceptable.
The period 1900 to 1930 saw fundamental changes in the basic laws of physics. The discoveries of the special and general theories of relativity and those of quanta and quantum mechanics profoundly transformed physicists' understanding of the nature of space and time, as well as of the fundamentals of physics at the atomic level, which have no counterpart in classical physics. Almost coincidentally, major changes took place in the processes of musical composition – that same period seeing the development of atonality, the liberation of rhythm, and twelve-tone music. This essay reviews in non-technical terms the profound changes in the thinking of physicists and compares the intellectual struggles involved with the extraordinary parallel changes in the approaches of composers to musical composition. No causal connection is suggested, but the common theme of the processes of innovation and creativity within very strict sets of rules in both physics and music is emphasised.
The general principles of submarine design are outlined, and the close interaction between operational requirements and technical solutions is illustrated. These solutions involve an exceptional range of physical and social sciences. The engineering development of submarines between 1905 and 1945 is described, and it is shown how steady improvement was made and special requirements met. The main theme follows British developments, but frequent comparisons are made with the best designs from other countries.
The 1960s, a golden age for academia in the USA, witnessed an unparalleled expansion of disciplines, among them the history of science and quantitative studies of science. A major pioneer in developing those fields was Derek Price, whose leadership of Yale University's newly created Department of History of Science and Medicine helped to bring national prominence to research about science and scientists. Price's legacy to the history of science, science policy, and scientometrics continues to be influential today. Three of Price's students recall the chemistry of the first years of the department, and reflect on their experiences with Price the scholar, teacher, and mentor.
Biological discussion in the 1920s was inspired by the seemingly limitless potential of genetics and the possibilities for human development; and the debates around it attended simultaneously to the promise and danger of human intervention. At the core of these discussions was the idea of development, what it constituted and how it could be predicted. Several papers published in the To-day and To-morrow series explore the tensions between optimism and scientific pragmatism, between progress and apparent randomness in development, and between human development and the potential for regression. Turning to the mythological figures of Daedalus, Galatea, and Prometheus — each a creator and visionary — the authors of these essays (Haldane, W.R. Brain, Crookshank, Jennings, Macfie, Bernal, Sullivan) reflect the diversity of 'Darwinisms' employed during the early twentieth century and the implications each offered.
To honour the distinguished Members of our Editorial Board, it has now become customary to publish on the occasion of their 80th birthday a contribution to this Journal chosen by them. This may be a summary of their life's work, as Dr Joseph Needham decided (Volume 5, Number 4, page 263, 1980). Lord Ashby suggested a re-publication of his Compton Memorial Lecture of 1964, and this appeared in 1984 (Volume 9, Number 3, page 205). Here Sir William McCrea recalls a fascinating period of what has now become a decisive development in the history of science. Sir William outlines his career subsequent to the period 1925–1929 in the last section of this review. He calls it: The Start of Another Story.
This paper describes Roger Money-Kyrle's contribution to the To-day and To-morrow series, Aspasia: the Future of Amorality. Although many contributions to the series mentioned psychoanalysis in passing, Money-Kyrle's alone attempted to describe the difference Freud's science might make to the future of human society. Money-Kyrle's overlapping connections with the Moral Sciences ambience of Cambridge in the early 1920s, British psychoanalytic anthropology and the eugenicist movement associated with UCL and the Galton Institute are presented as a prelude to the discussion of his book. Aspasia emerges as an ambitious attempt to fuse these three elements of British scientific culture in the postwar period. If, in the final analysis, it doesn't quite succeed, it is still of interest as a record of Money-Kyrle's own intellectual commitments and for what it tells us about the wider British psychoanalytic scene in the early 1930s.
A fire at a chemical manufacturing plant at Schweizerhalle (near Basel) in November 1986 and the subsequent release of toxic agrochemicals into the River Rhine is taken as a basis for discussion of some problems and needs in ecotoxicological research. The 5–8 tons of pesticides that entered the river killed a great portion of the eel population and injured other fish species as well as macroinvertebrates as far downstream as The Netherlands. Fundamental scientific research must provide better means to deal with such problems as contamination of drinking water and toxicity to whole biocenoses, both of which were handled with great uncertainty during and after the accident. Especially serious is the lack of knowledge about the chronic effects of mixtures of chemicals on individuals and entire ecosystems. Considering the number of anthropogenic chemicals (∼70 000) and the great variety of species (some 2–3 million), there seems to be no hope of ever being able to monitor or test all of them. Thus, there is an urgent need for generally applicable principles and concepts. A discussion of the relationship between toxic effects on fish, exposure time and concentration provides some indication of the direction in which research might proceed. Finally, eight postulates pertaining to ecotoxicology summarise what has been learnt from the analysis of this chemical spill.
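For readers unfamiliar with concentration-time relationships, a Haber-type relation of the form c^n · t = k is one standard starting point; the sketch below (my assumption of that standard form, not the authors' model) shows how it trades concentration against tolerable exposure time:

```python
def time_to_effect(concentration, k=100.0, n=1.0):
    """Exposure time at which a fixed effect is reached, from c**n * t = k."""
    return k / concentration ** n

for c in (0.5, 1.0, 2.0, 4.0):
    print(f"c = {c}: effect after t = {time_to_effect(c):.0f} time units")
# With n = 1, halving the concentration doubles the tolerable exposure time;
# fitted exponents n != 1 capture deviations from this simple reciprocity.
```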
Founded 200 years ago, the Royal Institution has been and continues to be the site of a large number of major chemical and physical discoveries. Such work includes Humphry Davy's discovery of several chemical elements and his invention of the miners' safety lamp, Michael Faraday's fundamental discoveries in electromagnetism (including the manufacture of the first electric motor, transformer, and dynamo and the development of field theory) as well as his liquefaction of gases and discovery of benzene, John Tyndall's pioneering efforts in understanding why the sky is blue and how glaciers behave, and, more recently, the crucial contributions made to crystallography by William and Lawrence Bragg. The Royal Institution is also a place where science has been brought to the public over the past two centuries through a variety of popular lecture courses. Davy, Faraday, Tyndall, and the Braggs not only undertook research, but were also scientific communicators of the first rank who would frequently lecture to an audience of more than a thousand in the Royal Institution's lecture theatre. One of the courses Faraday founded was the Christmas lectures for young people which continue to this day and are now televised, reaching an audience of millions. How all this came to be achieved in an eighteenth century town house in Mayfair is the subject of this paper.
The predominating characteristic in world food and agriculture is a high degree of national self-sufficiency. Divergencies in national situations are, however, so great that accurate aggregate statements are difficult to make. The domestic food problems of OECD countries are and should remain relatively minor; those of the USSR and East Europe are more severe, but they too should be fully manageable in the medium to longer run. China is likely to remain basically self-sufficient but could become one of the world's greatest importers. The food problem is most critical in low income countries. A continuation of the trends of the last two decades would worsen some aspects of their situation. Such an outcome is not inevitable. A just-completed 90-country study by FAO assessed that developing country food output could grow at around 3.7% over the years 1980–2000, compared with a trend of just under 3%. If this is accompanied by some redistribution of purchasing power and improvements in the international policy framework, including larger food stocks, by the year 2000 their food problems could be largely solved. The inescapable modernization of developing country agriculture will be difficult and very expensive. Developing countries must receive at least one dollar in six of their agricultural investment requirements as external assistance.
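The difference between the two growth rates compounds substantially over two decades. A back-of-envelope check (my arithmetic, not figures from the FAO study):

```python
trend, assessed = 0.03, 0.037           # annual growth rates
for label, rate in (("trend (~3%)", trend), ("FAO study (3.7%)", assessed)):
    factor = (1 + rate) ** 20           # compounded over 1980-2000
    print(f"{label}: output multiplies by {factor:.2f}")
# ~1.81x versus ~2.07x: the assessed path delivers roughly 14% more food
# output by 2000 than a simple continuation of the trend.
```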
After introducing influences which will affect mechanization of agriculture over the next 50 years, the review deals separately with three aspects of equipment on farms. These are machinery for arable farming, equipment used in livestock husbandry, and the hardware and software likely to assist farm management. The bulk of crop production is predicted to be carried out by a system of field gantries which will replace today's tractor and implements. The gantries will be fitted with automatic guidance but will still carry an operator who will be provided with extensive monitoring instrumentation. Second, monitoring of livestock growth or yield will be computer based. Health and breeding will also be automatically monitored and controlled, and farm staff will be aided by a robot, whose functions will include remote viewing and sensing, routine animal and feed handling operations, and assistance with animal herding. Computers will be used extensively in farm management with on-farm machines linked permanently to national information data banks, to suppliers and organizations who purchase from the farm and to consultant organizations. Computers will also be used for planning decisions based on operational research predictive methods and particularly for daily work planning.
Macroengineering projects are not normally constructed with accuracies measured in fractions of a millimetre, nor do they take the curvature of the Earth into consideration. CERN's Super Proton Synchrotron near Geneva, Switzerland, required both, and yet it was built within the time and cost estimated. When commissioned it worked first time and has since exceeded expectations. Here the details of its interdisciplinary design and construction are recorded by the Project Team who were responsible for this success story. The articles deal with the magnets, the vacuum system, the radiofrequency acceleration system, the civil engineering aspects of the 6.9 km tunnel, its survey and alignment, the voltage stabilization of the power supply, the cooling water system, and, last but by no means least, the novel computer control philosophy and techniques which may well find application in other complex projects needing monitoring and control.
Since the dawn of the nuclear age, there has been interest in putting the nuclear genie back into the bottle. In the intervening decades there have been persistent efforts to promote the elimination of nuclear weapons. The International Pugwash Movement, for example, has struggled for more than 40 years to establish zero as the proper goal of nuclear arms control and to examine seriously the prerequisites and conditions that would permit nuclear elimination to become a plausible policy option. But such voices were always very much in the minority, and their preferred course was always distant from the main lines of debate about nuclear weapons policy and nuclear arms control. Indeed, as is quite evident, during the Cold War the superpower protagonists built vast nuclear arsenals, numbering tens of thousands on each side, and enshrined nuclear weapons at the centre of their defence strategies. The notion of eliminating nuclear weapons did not stir wide interest or support. In the last few years, this has begun to change rather dramatically. There has been a remarkable upsurge of interest in, and support for, the abolition of nuclear weapons. This upsurge has included a series of high profile studies examining the prospects for nuclear elimination as well as a widening web of prominent supporters of the idea of abolition. The mid and late 1990s have witnessed an unprecedented focus on nuclear abolition as a desirable objective worthy of serious policy consideration.
Although the celestial observations made by the Aborigines were precise, the significance attached to them was conceptual rather than perceptual. It could not be derived from observation but only from knowledge gained by initiation into tribal values. The legends which embodied the astronomical knowledge had a threefold pragmatic role in tribal culture: they functioned as a predictive calendar for terrestrial events; they were associated with stories which reinforced the moral values pertaining to tribal identity; and they contributed to the belief system which provided a philosophical rationale for a tribe's understanding of the universe. Selected myths relating to the sun, moon, the Milky Way, the Magellanic Clouds, Venus, and various constellations are outlined and illustrated by traditional bark painting designs to provide examples of these general statements. Parallels are drawn with the theories of some contemporary philosophers of science.
In attempting to understand scientific community sociologically, issues of participation and belonging come quickly to the fore. Here, I suggest that scientific texts offer a useful route into exploring these aspects of community life. Through analysis of the textual construction of scientific community we find, however, that only certain forms of participation, and certain kinds of participant, are included. This raises the question of whether existing inequalities within the scientific community are being perpetuated by the self-image projected by the community's introductory texts.
Santiago Ramón y Cajal's neurohistological work marked a turning point in ongoing debates on the morphology of the nervous system. From 1888 onwards, he published extensively on the anatomic unity of the nerve cell. His experiments with the chrome silver stain resulted in highly particular ways of seeing and visualising neurons. In the current article, Cajal's practices of manipulating, observing, and drawing tissue will be juxtaposed with his photographic and cinematographic experiments. In addition, his epistemic stances on observation will be discussed, in order to better understand his commitment to the image. Lastly, the role of Cajal's drawings will be analysed in the process of attuning locally established findings to the expertise of his peers.
The past few decades have seen bad blood between biologists and social scientists. Each camp has seen social evolution as its own preserve: the biologists confident that evolutionary biology would deliver insights about culture and social behaviour; the social anthropologists, psychologists, and linguists resenting intrusion on to what they have seen as their patch. The two groups have fiercely debated questions about genes and culture: how behaviour is transmitted from one generation to another, where language comes from and how it is learnt. This report summarises a meeting organised by The Royal Society which attempted to bring the two sides together. It set out to discuss how far the emergence of new patterns of behaviour represents learning or results from genetically transmitted mechanisms; how cognitive development interacted with social organisation during critical transitions in human evolution; how far primates have culture but not language; and when, and how, language itself developed in human but not other primate species.
Einstein did 'basic' research, and to very good effect. Everybody nowadays says that 'basic' science should be fostered. But what do they mean? The conventional responses to this important question are confused and contradictory. Historical accounts are out of date. Philosophical criteria are too reductionist. Sociologists deconstruct basic research entirely. Psychological interpretations are too self indulgent. Populists deplore its elitism. Economic theory discounts it heavily. Industry merely wants to exploit it. Academia celebrates its pure irrelevance – and yet policy makers imagine it can be planned. Magritte tells us that the nature of basic scientific research is a suitable theme for basic metascientific research. Let us explore it in that spirit.
The processes of research in universities are evaluated as a source of expertise of value to the economy. The survey goes beyond straightforward engineering departments and looks at centres of excellence in areas such as astronomy, archaeology, physics, computer science, and chemistry that are not necessarily associated with technology transfer. The traditional model of technology transfer is described. The efficiency and limitations of the model which has led to the development of organisations with titles similar to University Business Development Office are discussed. The evaluation includes consideration of the impact of organisational structure on the type and areas of technology transfer that are possible and the opportunities for commercially valuable innovation within these collaborations. Suggestions are made for alternative approaches. The experience at Bradford which led to a proposal to set up an Office of Innovation and Technology is discussed and evaluated as a different approach to building collaborations with industry.
Drug discovery has traditionally been based on a process of molecular roulette in which large numbers of compounds are tested for biological activity in animals. This is slow and expensive, and does not have very good odds for success. Typically, introduction of a new drug costs $100m–200m in research and development, and only about 40 new chemical entities are introduced each year despite an estimated $20 000m annual expenditure on pharmaceutical research worldwide. Increasingly, research on drug discovery involves highly integrated interdisciplinary teams, and the use of modern technology. Three themes can be discerned in drug discovery research: computer aided drug design, development of new therapeutic targets, and exploitation of new sources of lead compounds. Most benefits will result from synergistic interactions between these three themes. Computer based drug design can focus either on postulated targets or on postulated lead compounds. Although gene cloning provides a vast database of sequences of potential target proteins, modelling of structure from ab initio calculations has yet to be convincingly achieved. There have been advances based on crystal structures, but these are limited to enzymes (for example, thymidylate synthase and purine nucleoside phosphorylase) and DNA. Other approaches concentrate on lead compounds. When these are also proteins, a wide range of evolutionary analyses can be used to predict active site structures, leading to the design of simpler analogues. Gene cloning also provides the means to a molecular dissection of disease, revealing new targets for therapeutic intervention. Transgenic animals may offer models of human diseases in laboratory mice. Receptors, enzymes, and second messenger systems can be obtained in quantities sufficient to form the basis of new assays for testing compounds. The use of robotics and microelectronic biosensors enables large numbers of compounds to be tested routinely. However, such molecular assays are absolutely dependent on the correct choice of target, and the screening methods lose the subtlety of integrated biological systems. New compounds can come from biotechnology, from randomly generated peptides, or from exploiting natural biodiversity. The therapeutic protein 'revolution' has still to arrive, but new approaches to natural products may be more promising. Combination of high throughput micro-screening technology with computer aided structural refinement should generate novel lead compounds.
In this paper we re-examine the relationship and possibilities for discourse between the academic disciplines called 'sciences' and those known as 'arts'. Do they represent one culture or two? An apparent diversity of views emerges in two contemporary writers, George Steiner and Nicholas Lash, the former differentiating the two, the latter insisting that the 'two cultures' debate itself is misconstrued. We follow the principal threads of both arguments in the light of an intimate involvement with the practice of science and its communication in public and academic contexts. Visiting aspects of both arts and sciences that distinguish them from other disciplines, the role of theory, and the twin purposes of function and contemplation, we find that much of the pain of discourse between them arises from a failure to recognise common structures and functions. As a result, either function or contemplation may be overemphasised at the expense of the other. We suggest directions in which the tensions might be resolved in both public and academic arenas.
Innovation is defined as the commercial exploitation of a new idea generated by technology push or market pull. The essentials of innovation are outlined, and two industrial examples are offered: a process innovation in sausage manufacture and a product innovation in oil palm cloning. The role of academia in fostering innovation is discussed.
Part I of this series used the idea, more properly hypothesis, that perception works by model fitting. The hypothesis is an important key to clarifying perennial issues about science and the arts and about, for instance, consciousness and free will. What is involved goes beyond what practical people call 'mere semantics' and 'mere philosophy'. For instance, it has practical implications for scientists' professional codes of conduct and for the social experiment we call free market democracy, a theme to be developed in the third and final part of this series. Here in Part II, the model fitting hypothesis is discussed in more detail along with some key evidence. That evidence – much of it checkable by any observant person, with no need for specialist equipment – includes a class of perceptual phenomena to be referred to here as 'acausality illusions', in which, in some cases, perceived times precede the arrival of relevant sensory data. Such phenomena are consistent with the model fitting hypothesis, which predicts that perceived times of outside-world events must be earlier than, and perceived times of internal decisions later than, associated physical events in the nervous system. Associated timespans are typically a few tenths of a second.
From memorial lectures given recently in honour of pioneering personalities of nuclear physics, one gains the impression that electrons and light ions, emitted by radioactive samples, were adequate radiation sources to probe nuclear structure, and that experimental limitations existed primarily in the detecting instruments of the time. Ideas for electrotechnical devices for accelerating various charged particle species are said to have been described in the literature, but more as a principle than as a usable tool. Moreover, it was said that E. O. Lawrence, the famous inventor of the cyclotron in 1932, used nuclear reactions to demonstrate the accelerating performance of his rapidly growing machines rather than immediately commissioning one machine as a working tool for nuclear physics experiments. From a more modern standpoint, the technical background did not exist in those days to build beam production machines. It was not until World War II and after, when nuclear data were urgently needed, that every conceivable technical effort went into the development of particle accelerators. This is specifically true for high-power radiofrequency (rf) amplifiers, which then became available from radar applications or were specifically developed for building large ion linear accelerators for breeding fissile material. Along with the vacuum, high voltage and rf developments of the post-war nuclear age, two theoretical inventions were indispensable for building large accelerators: the principle of phase focussing in rf linear accelerators, and the strong focussing of particle beams in linear and circular accelerators. By the mid-1950s all the tools were on hand to build the electron, proton and heavy-ion accelerators that nuclear and high energy physics experiments demanded. Rolf Wideröe, in his later talks, called accelerators a ‘child of physics and electrical engineering'. However, this interdisciplinary view did not exist until the late forties. Before this, accelerators were homemade devices in nuclear physics institutes: physicists wound magnet coils themselves, and designed their own vacuum pumps and rf tubes. Today, the larger accelerators are still planned for a specific research goal in the nuclear physics institutes, but electrical engineers take part in the design stage, and all components are entirely subcontracted to industry.
The last decade has seen a dramatic increase in public concern about nuclear energy. As a consequence, it has become recognised that the future of nuclear energy will not only depend on technical and economic factors, but that public acceptability of this technology will play a crucial role in the long-term future of nuclear energy. Research has shown a considerable divergence in public and expert assessment of the risks associated with nuclear energy. Qualitative aspects of risks play a dominant role in the public's perception of risks, and it seems necessary for experts to recognise this in order to improve relations with the general public. It is also clear, however, that differences in the perception of risks do not embrace all the relevant aspects of the public's assessment of nuclear energy. Public reaction is also related to more general beliefs and values, and the issue of nuclear energy is embedded in a much wider moral and political domain.
This paper is an extended outline of the author's presidential address to the Chemistry Section of the British Association for the Advancement of Science at the BAAS's Annual Festival of Science held in Sheffield in September 1999. One theme of the chemistry component of the festival was 'following nature's way', which was designed to show how chemists can learn from biology and apply that knowledge to generate advances in chemistry; in this context attention was drawn to the interface between inorganic chemistry and biology. Chemists have utilised coordination compounds to serve as metallobiosite analogues and so gain insight into the nature of the sites. It is also possible to take the information acquired through protein crystallography, which gives a precise definition of the metal environment, to develop new chemistry. This paper shows how this synergistic interplay has helped to reveal the nature of the metallobiosites that transport dioxygen in nature and how an understanding of the nature of the metallobiosite in urease has helped in the generation of new coordination chemistry.