The escalating costs of health care and other recent trends have made health care decisions of great societal import, with decision-making responsibility often being transferred from practitioners to health economists, health plans, and insurers. Health care decision making increasingly is guided by evidence that a treatment is efficacious, effective–disseminable, cost-effective, and scientifically plausible. Under these conditions of heightened cost concerns and institutional–economic decision making, psychologists are losing the opportunity to play a leadership role in mental and behavioral health care: Other types of practitioners are providing an increasing proportion of delivered treatment, and the use of psychiatric medication has increased dramatically relative to the provision of psychological interventions.
Research has shown that numerous psychological interventions are efficacious, effective, and cost-effective. However, these interventions are used infrequently with patients who would benefit from them, in part because clinical psychologists have not made a convincing case for the use of these interventions (e.g., by supplying the data that decision makers need to support implementation of such interventions) and because clinical psychologists do not themselves use these interventions even when given the opportunity to do so.
Clinical psychologists' failure to achieve a more significant impact on clinical and public health may be traced to their deep ambivalence about the role of science and their lack of adequate science training, which leads them to value personal clinical experience over research evidence, use assessment practices that have dubious psychometric support, and not use the interventions for which there is the strongest evidence of efficacy. Clinical psychology resembles medicine at a point in its history when practitioners were operating in a largely prescientific manner. Prior to the scientific reform of medicine in the early 1900s, physicians typically shared the attitudes of many of today's clinical psychologists, such as valuing personal experience over scientific research. Medicine was reformed, in large part, by a principled effort by the American Medical Association to increase the science base of medical school education. Substantial evidence shows that many clinical psychology doctoral training programs, especially PsyD and for-profit programs, do not uphold high standards for graduate admission, have high student–faculty ratios, deemphasize science in their training, and produce students who fail to apply or generate scientific knowledge.
A promising strategy for improving the quality and clinical and public health impact of clinical psychology is through a new accreditation system that demands high-quality science training as a central feature of doctoral training in clinical psychology. Just as strengthening training standards in medicine markedly enhanced the quality of health care, improved training standards in clinical psychology will enhance health and mental health care. Such a system will (a) allow the public and employers to identify scientifically trained psychologists; (b) stigmatize ascientific training programs and practitioners; (c) produce aspirational effects, thereby enhancing training quality generally; and (d) help accredited programs improve their training in the application and generation of science. These effects should enhance the generation, application, and dissemination of experimentally supported interventions, thereby improving clinical and public health. Experimentally based treatments not only are highly effective but also are cost-effective relative to other interventions; therefore, they could help control spiraling health care costs. The new Psychological Clinical Science Accreditation System (PCSAS) is intended to accredit clinical psychology training programs that offer high-quality science-centered education and training, producing graduates who are successful in generating and applying scientific knowledge. Psychologists, universities, and other stakeholders should vigorously support this new accreditation system as the surest route to a scientifically principled clinical psychology that can powerfully benefit clinical and public health.
Policy decisions at the organizational, corporate, and governmental levels should be more heavily influenced by issues related to well-being—people's evaluations and feelings about their lives. Domestic policy currently focuses heavily on economic outcomes, although economic indicators omit, and even mislead about, much of what society values. We show that economic indicators have many shortcomings, and that measures of well-being point to important conclusions that are not apparent from economic indicators alone. For example, although economic output has risen steeply over the past decades, there has been no rise in life satisfaction during this period, and there has been a substantial increase in depression and distrust. We argue that economic indicators were extremely important in the early stages of economic development, when the fulfillment of basic needs was the main issue. As societies grow wealthy, however, differences in well-being are less frequently due to income, and are more frequently due to factors such as social relationships and enjoyment at work.
Important noneconomic predictors of the average levels of well-being of societies include social capital, democratic governance, and human rights. In the workplace, noneconomic factors influence work satisfaction and profitability. It is therefore important that organizations, as well as nations, monitor the well-being of workers, and take steps to improve it.
Assessing the well-being of individuals with mental disorders casts light on policy problems that do not emerge from economic indicators. Mental disorders cause widespread suffering, and their impact is growing, especially in relation to the influence of medical disorders, which is declining. Although many studies now show that the suffering due to mental disorders can be alleviated by treatment, a large proportion of persons with mental disorders go untreated. Thus, a policy imperative is to offer treatment to more people with mental disorders, and more assistance to their caregivers.
Supportive, positive social relationships are necessary for well-being. There are data suggesting that well-being leads to good social relationships and does not merely follow from them. In addition, experimental evidence indicates that people suffer when they are ostracized from groups or have poor relationships in groups. The fact that strong social relationships are critical to well-being has many policy implications. For instance, corporations should weigh employee relocations carefully, because relocation can sever friendships and thereby be detrimental to well-being.
Desirable outcomes, even economic ones, are often caused by well-being rather than the other way around. People high in well-being later earn higher incomes and perform better at work than people who report low well-being. Happy workers are better organizational citizens, meaning that they help other people at work in various ways. Furthermore, people high in well-being seem to have better social relationships than people low in well-being. For example, they are more likely to get married, stay married, and have rewarding marriages. Finally, well-being is related to health and longevity, although the pathways linking these variables are far from fully understood. Thus, well-being not only is valuable because it feels good, but also is valuable because it has beneficial consequences. This fact makes national and corporate monitoring of well-being imperative.
In order to facilitate the use of well-being outcomes in shaping policy, we propose creating a national well-being index that systematically assesses key well-being variables for representative samples of the population. Variables measured should include positive and negative emotions, engagement, purpose and meaning, optimism and trust, and the broad construct of life satisfaction. A major problem with using current findings on well-being to guide policy is that they derive from diverse and incommensurable measures of different concepts, in a haphazard mix of respondents. Thus, current findings provide an interesting sample of policy-related findings, but are not strong enough to serve as the basis of policy. Periodic, systematic assessment of well-being will offer policymakers a much stronger set of findings to use in making policy decisions.
It is understandable that, in times of financial crisis, the general public asks how this could happen. And since the market actors appear so irrational, it is also understandable that people – lay people and experts alike – believe that “psychological” factors play a decisive role. Is there evidence for this, and what is that evidence? It is true that, in general, people individually use their cognitive and other resources in sensible ways, and that they have collectively developed institutions that effectively regulate economic and other transactions. It is likewise true that extreme circumstances sometimes exceed people's capacity, individually as well as collectively. It is therefore essential that scientific knowledge of people's cognitive and other limitations be brought to bear on the issue of how to prevent such extreme circumstances from occurring. Arguably, financial markets such as those for stocks and credit overtax actors' capacity to make rational judgments and decisions. In product markets with full competition, prices represent the true value of the products offered. This does not, however, hold in stock markets, where stock prices, due to excessive trading, are more volatile than they should be if they reflected the true value of the stocks. Psychological explanations include cognitive biases such as overconfidence and overoptimism, risk aversion in the face of sure gains, risk taking and loss aversion in the face of possible losses, and influences of the nominal representation of stock prices (money illusion). If no cognitive biases (strengthened by affective influences) existed, or if only some actors were susceptible to such biases, individual irrationality in stock markets would be eliminated. This is, however, not what the evidence indicates. Still, in order to understand stock market booms and busts, it is necessary to take into account the tendency among actors to imitate each other. In destabilized stock markets, experts are less likely to lose money than lay actors are.
This monograph discusses research, theory, and practice relevant to how children learn to read English. After an initial overview of writing systems, the discussion summarizes research from developmental psychology on children's language competency when they enter school and on the nature of early reading development. Subsequent sections review theories of learning to read, the characteristics of children who do not learn to read (i.e., who have developmental dyslexia), research from cognitive psychology and cognitive neuroscience on skilled reading, and connectionist models of learning to read. The implications of the research findings for learning to read and teaching reading are discussed. Next, the primary methods used to teach reading (phonics and whole language) are summarized. The final section reviews laboratory and classroom studies on teaching reading. From these different sources of evidence, two inescapable conclusions emerge: (a) Mastering the alphabetic principle (that written symbols are associated with phonemes) is essential to becoming proficient in the skill of reading, and (b) methods that teach this principle directly are more effective than those that do not (especially for children who are at risk in some way for having difficulty learning to read). Using whole-language activities to supplement phonics instruction does help make reading fun and meaningful for children, but ultimately, phonics instruction is critically important because it helps beginning readers understand the alphabetic principle and learn new words. Thus, elementary-school teachers who make the alphabetic principle explicit are most effective in helping their students become skilled, independent readers.
This editorial introduces a report by A. W. Kruglanski et al. in the same issue of the journal (see record 2008-15295-002), which identifies many of the inadequacies of the war metaphor. Using a psychological perspective, that report examines the use of metaphors in framing counterterrorism. The four metaphors examined are war, law enforcement, containment of a social epidemic, and a process of prejudice reduction. The current author argues that we need to reframe the conflict with terrorists if we are ever to prevail in the campaign against terrorism. Instead of reaching for a simplifying metaphor, our leaders ought to acknowledge the complexities of the problems we face and use the opportunities afforded by their position to educate their followers about the realities of the world around them.
The term “learning styles” refers to the concept that individuals differ in regard to what mode of instruction or study is most effective for them. Proponents of learning-style assessment contend that optimal instruction requires diagnosing individuals' learning style and tailoring instruction accordingly. Assessments of learning style typically ask people to evaluate what sort of information presentation they prefer (e.g., words versus pictures versus speech) and/or what kind of mental activity they find most engaging or congenial (e.g., analysis versus listening), although assessment instruments are extremely diverse. The most common—but not the only—hypothesis about the instructional relevance of learning styles is the meshing hypothesis, according to which instruction is best provided in a format that matches the preferences of the learner (e.g., for a “visual learner,” emphasizing visual presentation of information).
The learning-styles view has acquired great influence within the education field, and is frequently encountered at levels ranging from kindergarten to graduate school. There is a thriving industry devoted to publishing learning-styles tests and guidebooks for teachers, and many organizations offer professional development workshops for teachers and educators built around the concept of learning styles.
The authors of the present review were charged with determining whether these practices are supported by scientific evidence. We concluded that any credible validation of learning-styles-based instruction requires robust documentation of a very particular type of experimental finding, one that meets several necessary criteria. First, students must be divided into groups on the basis of their learning styles, and then students from each group must be randomly assigned to receive one of multiple instructional methods. Next, students must sit for a final test that is the same for all students. Finally, in order to demonstrate that optimal learning requires that students receive instruction tailored to their putative learning style, the experiment must reveal a specific type of interaction between learning style and instructional method: Students with one learning style achieve the best educational outcome when given an instructional method that differs from the instructional method producing the best outcome for students with a different learning style. In other words, the instructional method that proves most effective for students with one learning style is not the most effective method for students with a different learning style.
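The crossover criterion can be made concrete with a small numerical sketch. The mean scores below are invented for illustration (they are not data from any study), but they show the pattern that would be required: each hypothetical learning style is best served by a different instructional method.

```python
# Hypothetical mean test scores for a 2 (learning style) x 2 (instructional
# method) experiment. All numbers are invented for illustration only.
scores = {
    ("visual_learner", "visual_instruction"): 80,
    ("visual_learner", "verbal_instruction"): 65,
    ("verbal_learner", "visual_instruction"): 60,
    ("verbal_learner", "verbal_instruction"): 78,
}

METHODS = ["visual_instruction", "verbal_instruction"]

def best_method(style):
    """Instructional method with the highest mean score for a given style."""
    return max(METHODS, key=lambda m: scores[(style, m)])

def shows_crossover(styles):
    """True only if different styles are best served by different methods."""
    return len({best_method(s) for s in styles}) > 1

# For these invented means, the required crossover interaction is present:
print(shows_crossover(["visual_learner", "verbal_learner"]))  # True
```

If one method instead produced the best scores for both groups (e.g., visual instruction topping both rows), `shows_crossover` would return `False`, and the meshing hypothesis would be unsupported even if students reported different preferences.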
Our review of the literature disclosed ample evidence that children and adults will, if asked, express preferences about how they prefer information to be presented to them. There is also plentiful evidence that people differ in the degree to which they have some fairly specific aptitudes for different kinds of thinking and for processing different types of information. However, we found virtually no evidence for the interaction pattern mentioned above, which was judged to be a precondition for validating the educational applications of learning styles. Although the literature on learning styles is enormous, very few studies have even used an experimental methodology capable of testing the validity of learning styles applied to education. Moreover, of those that did use an appropriate method, several found results that flatly contradict the popular meshing hypothesis.
We therefore conclude that, at present, there is no adequate evidence base to justify incorporating learning-styles assessments into general educational practice. Thus, limited education resources would be better devoted to adopting other educational practices that have a strong evidence base, of which there are an increasing number. However, given the lack of methodologically sound studies of learning styles, it would be an error to conclude that all possible versions of learning styles have been tested and found wanting; many have simply not been tested at all. Further research on the use of learning-styles assessment in instruction may in some cases be warranted, but such research needs to be performed appropriately.
One of the most persistently vexing problems in society is the variability with which citizens support endeavors that are designed to help a great number of people. In this article, we examine the twin roles of cooperative and antagonistic behavior in this variability. We find that each plays an important role, though their contributions are, understandably, at odds. It is this opposition that produces seeming unpredictability in citizen response to collective need. In fact, we suggest that careful consideration of the research often allows one to predict when efforts to provide a collectively beneficial good will succeed and when they will fail.
To understand the dynamics of participation in response to collective need, it is necessary to distinguish between the primary types of need situations. A public good is an entity whose provision depends, in whole or in part, on contributions. Examples of public goods are charities and public broadcasting. Public goods require that citizens experience a short-term loss (of their contribution) in order to realize a long-term gain (of the good). However, because everyone can use the good once it is provided, there is also an incentive to not contribute, let others give, and then take advantage of their efforts. This state of affairs introduces a conflict between doing what is best for oneself and what is best for the group. In a public goods situation, cooperation and antagonism affect how one resolves this conflict.
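The free-rider incentive described above can be sketched as a minimal public-goods game. The parameters here (group size, endowment, multiplier) are invented for illustration; the point is only that contributing benefits the group while withholding benefits the individual.

```python
# Minimal public-goods game sketch (invented parameters, illustration only):
# each of N players holds an endowment; all contributions are pooled,
# multiplied, and shared equally among players, contributors or not.
N = 4             # players
ENDOWMENT = 10    # each player's starting amount
MULTIPLIER = 1.6  # pooled contributions are multiplied by this, then split N ways

def payoff(my_contribution, others_contributions):
    """What one player ends up with, given everyone's contributions."""
    pot = (my_contribution + sum(others_contributions)) * MULTIPLIER
    return ENDOWMENT - my_contribution + pot / N

# Suppose the other three players each contribute their full endowment:
others = [ENDOWMENT] * (N - 1)
cooperate = payoff(ENDOWMENT, others)  # contribute everything
free_ride = payoff(0, others)          # contribute nothing, still share the pot

print(cooperate)  # 16.0
print(free_ride)  # 22.0 -> free riding pays more for the individual
```

Because each contributed unit returns only MULTIPLIER / N = 0.4 units to the contributor, withholding always pays more individually, even though a group of full contributors (16.0 each) ends up better off than a group of full defectors (10 each).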
The other major type of need situation is a common-pool resource problem. Here, a good is fully provided at the outset, and citizens may sample from it. The resource is usually, but not necessarily, partially replenished. Examples of replenished resources are drinking water and trees; examples of resources that are functionally not replenished are oil and minerals. Common-pool resources allow citizens to experience a short-term gain (by getting what they want in the early life of the resource) but also present the possibility of a long-term loss (if the resource dries up). As with public goods, there is thus a conflict between, on the one hand, acting in one’s best interest and taking as much as one wants all the time and, on the other, acting for the good of the group, which requires taking a lesser amount so that the replenishment rate can keep up with the rate of use. As with public goods, both cooperation and antagonism affect this decision.
With these situations in mind, we can now dig deeply into the dynamics of both cooperation and antagonism. Cooperation is one of the most heavily studied aspects of human behavior, yet despite this attention, there is much that is not understood about it, including its fundamental basis. There are a number of different perspectives on this basis. Interdependence theory argues that cooperation is driven by how one interprets the subjective value of the outcomes that will result from various combinations of behaviors. A person who sees a potential result of “50 to you, 50 to me” as “We both would do well” is more likely to cooperate than the person who sees it as “I would not outgain the other person.” Self-control theory suggests that cooperation is a function of how well a person can resist the impulse to benefit now and delay gratification. Evolutionary theory takes many forms but revolves around the extent to which cooperation is adaptive. Finally, the appropriateness framework takes a cognitive approach and assumes that cooperation is determined by a combination of social–cognitive (interpretation of self and the situation) and decision-heuristic factors. We propose that it is possible to integrate across these approaches and understand cooperation as a behavior that is influenced by all of these factors as well as other dynamics, such as cultural mores and personality traits.
Antagonism, as it relates to the collective welfare, is a phenomenon with a shorter research history but one that is clearly influential. A number of facets of antagonism are relevant. Power, and its abuse, is a major factor, and a specific application to collective goods is the notion of a “gatekeeper,” or a person who can completely determine whether a public good exists or a common-pool resource can be used. Gatekeepers tend to demand ample compensation from others in order for the good or resource to go forward. If this demand is resisted, as it often is, the end result is that the good is not provided or the resource not accessed. Another facet is the desire to see an out-group be harmed. Sometimes, this motivation is so strong that people will deny themselves a good outcome in order to see the harm occur. Why someone would want to see an out-group be harmed is debatable, but it may be attributable to a desire to be seen as a winner, or it may be a strategy designed to produce a net benefit for one’s in-group. Emotions also play a role, with people tending to assume that out-group members have just basic emotions such as happiness and sadness and not secondary emotions such as guilt and shame. Because out-group members are perceived as emotionally simple, it is seen as acceptable to treat them badly. Complicating matters even further, antagonism is sometimes directed at in-group members who deviate, in either direction, from the group norm and at individuals who behave in a clearly selfless manner, such as volunteers.
A number of approaches have been proposed to the resolution of public goods problems. Structural solutions act to alter the basic dynamic of the dilemma by means of interventions such as rewards for cooperation, punishment for noncooperation, and selection of a single group member to chart a course of action for everyone. Third-party solutions involve the bringing in of an external agent to help determine how group members should behave. These agents may be more passive and merely suggest solutions, or they may be more active and dictate how decisions will be made, what decision will be made, or both. Finally, psychological solutions involve changing how people view the situation.
We finish by discussing how policy makers can improve the chances of a publicly valuable good being supported. We particularly emphasize creation of a felt connection with future generations; clear demonstration of immediate and concrete consequences as a result of failure to provide the good; instillation of a sense of community; and isolation of the good from other, related issues. We also take up the general problem of distrust of those who establish policy and discuss some methods for helping minimize distrust.
While trying to think of an interesting way to introduce this major review of the field of lie detection, I did what lots of people do these days. I typed “catching liars” into the Google search bar, and up came 305,000 results in 0.17 seconds. The first page was “10 ways to catch a liar” from WebMD. The essay featured J.J. Newberry, a trained federal agent purported to be skilled at detecting deception in people he interviewed. One of his success stories was the lie he spotted when a witness to a shooting tried to tell him that she heard gunshots and, without looking, just ran away. Newberry was suspicious: For him, this was inconsistent with how people respond to situations like this. And to prove his point, he banged on the table and the witness instantly looked right at him. This story helped motivate the tip at the top of the list for catching liars. The number-1 tip, Newberry said, is to look for inconsistencies in what witnesses are saying. Number 2: Ask the unexpected.
Posttraumatic stress disorder (PTSD) poses monumental public health challenges because of its contribution to mental health, physical health, and both interpersonal and social problems. Recent military engagements in Iraq and Afghanistan and the multitude of resulting cases of PTSD have highlighted the public health significance of these conditions.
There are now psychological treatments that are effective for most individuals with PTSD, including active-duty military personnel, veterans, and civilians. We begin by reviewing the effectiveness of these treatments, with a focus on prolonged exposure (PE), a cognitive-behavioral therapy (CBT) for PTSD. Many studies conducted in independent research labs have demonstrated that PE is highly efficacious in treating PTSD across a wide range of trauma types, survivor characteristics, and cultures. Furthermore, therapists without prior CBT experience can readily learn and implement the treatment successfully.
Despite the existence of highly effective treatments like PE, the majority of individuals with PTSD receive treatments of unknown efficacy. Thus, it is crucial to identify the barriers and challenges that must be addressed in order to promote the widespread dissemination of effective treatments for PTSD. In this review, we first discuss some of the major challenges, such as a professional culture that often is antagonistic to evidence-based treatments (EBTs), a lack of clinician training in EBTs, limited effectiveness of commonly used dissemination techniques, and the significant cost associated with effective dissemination models.
Next, we review local, national, and international efforts to disseminate PE and similar treatments and illustrate the challenges and successes involved in promoting the adoption of EBTs in mental health systems. We then consider ways in which the barriers discussed earlier can be overcome, as well as the difficulties involved in effecting sustained organizational change in mental health systems. We also present examples of efforts to disseminate PE in developing countries and the attendant challenges when mental health systems are severely underdeveloped.
Finally, we present future directions for the dissemination of EBTs for PTSD, including the use of newer technologies such as web-based therapy and telemedicine. We conclude by discussing the need for concerted action among multiple interacting systems in order to overcome existing barriers to dissemination and promote widespread access to effective treatment for PTSD. These systems include graduate training programs, government agencies, health insurers, professional organizations, healthcare delivery systems, clinical researchers, and public education systems like the media. Each of these entities can play a major role in reducing the personal suffering and public health burden associated with posttraumatic stress.
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation.
We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread.
We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles to debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to those factors that aid in debiasing.
We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.
For nearly a century, scholars have sought to understand, measure, and explain giftedness. Succeeding theories and empirical investigations have often built on earlier work, sometimes complementing earlier conceptions of talent, sometimes clashing with them, and sometimes contesting the mechanisms of talent development. Some have even suggested that giftedness itself is a misnomer, mistaken for the results of endless practice or social advantage. In surveying the landscape of current knowledge about giftedness and gifted education, this monograph will advance a set of interrelated arguments: The abilities of individuals do matter, particularly their abilities in specific talent domains; different talent domains have different developmental trajectories that vary as to when they start, peak, and end; and opportunities provided by society are crucial at every point in the talent-development process. We argue that society must strive to promote these opportunities but that individuals with talent also have some responsibility for their own growth and development. Furthermore, the research knowledge base indicates that psychosocial variables are determining influences in the successful development of talent. Finally, outstanding achievement or eminence ought to be the chief goal of gifted education. We assert that aspiring to fulfill one’s talents and abilities in the form of transcendent creative contributions will lead to high levels of personal satisfaction and self-actualization as well as produce yet unimaginable scientific, aesthetic, and practical benefits to society.
To frame our discussion, we propose a definition of giftedness that we intend to be comprehensive. Giftedness is the manifestation of performance that is clearly at the upper end of the distribution in a talent domain even relative to other high-functioning individuals in that domain. Further, giftedness can be viewed as developmental in that in the beginning stages, potential is the key variable; in later stages, achievement is the measure of giftedness; and in fully developed talents, eminence is the basis on which this label is granted. Psychosocial variables play an essential role in the manifestation of giftedness at every developmental stage. Both cognitive and psychosocial variables are malleable and need to be deliberately cultivated.
Our goal here is to provide a definition that is useful across all domains of endeavor and acknowledges several perspectives about giftedness on which there is a fairly broad scientific consensus. Giftedness (a) reflects the values of society; (b) is typically manifested in actual outcomes, especially in adulthood; (c) is specific to domains of endeavor; (d) is the result of the coalescing of biological, pedagogical, psychological, and psychosocial factors; and (e) is relative not just to the ordinary (e.g., a child with exceptional art ability compared to peers) but to the extraordinary (e.g., an artist who revolutionizes a field of art).
In this monograph, our goal is to review and summarize what we have learned about giftedness from the literature in psychological science and suggest some directions for the field of gifted education. We begin with a discussion of how giftedness is defined (see above). In the second section, we review the reasons why giftedness is often excluded from major conversations on educational policy and then offer rebuttals to these arguments. In spite of concerns for the future of innovation in the United States, the education research and policy communities have been generally resistant to addressing academic giftedness in research, policy, and practice. The resistance derives from the assumption that academically gifted children will be successful no matter what educational environment they are placed in and from the belief that their families are more highly educated and have above-average access to human capital and wealth. These arguments run counter to psychological science indicating that all students need to be challenged in their schoolwork and that effort and appropriate educational programming, training, and support are required to develop a student’s talents and abilities. In fact, high-ability students in the United States are not faring well in international comparisons. The scores of advanced students in the United States with at least one college-educated parent were lower than the scores of students in 16 other developed countries, regardless of parental education level.
In the third section, we summarize areas of consensus and controversy in gifted education, using the extant psychological literature to evaluate these positions. Psychological science points to several variables associated with outstanding achievement. The most important of these include general and domain-specific ability, creativity, motivation and mindset, task commitment, passion, interest, opportunity, and chance. Consensus has not been achieved in the field, however, in four main areas: What are the most important factors that contribute to the acuities or propensities that can serve as signs of potential talent? What are potential barriers to acquiring the “gifted” label? What are the expected outcomes of gifted education? And how should gifted students be educated?
In the fourth section, we provide an overview of the major models of giftedness from the giftedness literature. Four models have served as the foundation for programs used in schools in the United States and in other countries. Most of the research associated with these models focuses on the precollegiate and early university years. Other talent-development models described are designed to explain the evolution of talent over time, going beyond the school years into adult eminence (but these have been applied only by out-of-school programs as the basis for educating gifted students).
In the fifth section we present methodological challenges to conducting research on gifted populations, including definitions of giftedness and talent that are not standardized, test ceilings that are too low to measure progress or growth, comparison groups that are hard to find for extraordinary individuals, and insufficient training in the use of statistical methods that can address some of these challenges.
In the sixth section, we propose a comprehensive model of trajectories of gifted performance from novice to eminence using examples from several domains. This model takes into account when a domain can first be expressed meaningfully—whether in childhood, adolescence, or adulthood. It also takes into account what we currently know about the acuities or propensities that can serve as signs of potential talent. Budding talents are usually recognized, developed, and supported by parents, teachers, and mentors. Those individuals may or may not offer guidance for the talented individual in the psychological strengths and social skills needed to move from one stage of development to the next. We developed the model with the following principles in mind: Abilities matter, domains of talent have varying developmental trajectories, opportunities need to be provided to young people and taken by them as well, psychosocial variables are determining factors in the successful development of talent, and eminence is the aspired outcome of gifted education.
In the seventh section, we outline a research agenda for the field. This agenda, presented in the form of research questions, focuses on two central variables associated with the development of talent—opportunity and motivation—and is organized according to the degree to which access to talent development is high or low and whether an individual is highly motivated or not.
Finally, in the eighth section, we summarize implications for the field in undertaking our proposed perspectives. These include a shift toward identification of talent within domains, the creation of identification processes based on the developmental trajectories of talent domains, the provision of opportunities along with monitoring for response and commitment on the part of participants, provision of coaching in psychosocial skills, and organization of programs around the tools needed to reach the highest possible levels of creative performance or productivity.
Many doctors, patients, journalists, and politicians alike do not understand what health statistics mean or draw wrong conclusions without noticing. Collective statistical illiteracy refers to the widespread inability to understand the meaning of numbers. For instance, many citizens are unaware that higher survival rates with cancer screening do not imply longer life, or that the statement that mammography screening reduces the risk of dying from breast cancer by 25% in fact means that 1 fewer woman out of 1,000 will die of the disease. We provide evidence that statistical illiteracy (a) is common to patients, journalists, and physicians; (b) is created by nontransparent framing of information that is sometimes an unintentional result of lack of understanding but can also be a result of intentional efforts to manipulate or persuade people; and (c) can have serious consequences for health.
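The arithmetic behind the mammography example can be made concrete. The sketch below uses illustrative baseline rates (an assumption chosen to match the 25% figure cited above, not data from the article) to show how the same result looks as a relative versus an absolute risk reduction:

```python
# Convert a relative risk reduction into absolute (per-1,000) terms.
# Baseline rates are illustrative assumptions chosen to match the 25% figure:
# roughly 4 in 1,000 unscreened women vs. 3 in 1,000 screened women die of breast cancer.
deaths_per_1000_unscreened = 4
deaths_per_1000_screened = 3

relative_reduction = (deaths_per_1000_unscreened - deaths_per_1000_screened) / deaths_per_1000_unscreened
absolute_reduction_per_1000 = deaths_per_1000_unscreened - deaths_per_1000_screened

print(f"Relative risk reduction: {relative_reduction:.0%}")                 # 25%
print(f"Absolute risk reduction: {absolute_reduction_per_1000} in 1,000")   # 1 in 1,000
```

The same one-death difference sounds far larger when framed as "25% fewer deaths" than as "1 fewer death per 1,000 women," which is exactly the nontransparent framing the authors describe.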
The causes of statistical illiteracy should not be attributed to cognitive biases alone but also to the emotional nature of the doctor–patient relationship and to conflicts of interest in the health care system. The classic doctor–patient relation is based on (the physician's) paternalism and (the patient's) trust in authority, which make statistical literacy seem unnecessary; so does the traditional combination of determinism (physicians who seek causes, not chances) and the illusion of certainty (patients who seek certainty when there is none). We show that information pamphlets, Web sites, leaflets distributed to doctors by the pharmaceutical industry, and even medical journals often report evidence in nontransparent forms that suggest big benefits of featured interventions and small harms. Without understanding the numbers involved, the public is susceptible to political and commercial manipulation of their anxieties and hopes, which undermines the goals of informed consent and shared decision making.
What can be done? We discuss the importance of teaching statistical thinking and transparent representations in primary and secondary education as well as in medical school. Yet this requires familiarizing children early on with the concept of probability and teaching statistical literacy as the art of solving real-world problems rather than applying formulas to toy problems about coins and dice. A major precondition for statistical literacy is transparent risk communication. We recommend using frequency statements instead of single-event probabilities, absolute risks instead of relative risks, mortality rates instead of survival rates, and natural frequencies instead of conditional probabilities. Psychological research on transparent visual and numerical forms of risk communication, as well as training of physicians in their use, is called for.
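The recommendation to use natural frequencies instead of conditional probabilities can be illustrated with a short sketch. The screening figures below (1% prevalence, 90% sensitivity, 9% false-positive rate) are illustrative assumptions, not numbers taken from the article; the point is the translation from probabilities to counts in a reference population:

```python
# Translate conditional probabilities into natural frequencies (Bayes' rule as counting).
# Illustrative assumptions: prevalence 1%, sensitivity 90%, false-positive rate 9%.
population = 1000
prevalence, sensitivity, false_positive_rate = 0.01, 0.90, 0.09

with_cancer = population * prevalence                    # 10 women have cancer
true_positives = with_cancer * sensitivity               # 9 of them test positive
without_cancer = population - with_cancer                # 990 women do not have cancer
false_positives = without_cancer * false_positive_rate   # ~89 of them also test positive

# Natural-frequency statement: of the ~98 women who test positive, only 9 have cancer.
ppv = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} of {true_positives + false_positives:.0f} positive results "
      f"are true positives (about {ppv:.0%})")
```

Stated as counts ("9 of roughly 98 women who test positive actually have cancer"), the result is transparent; stated as a conditional probability, the same fact is routinely misread, by physicians as well as patients, as a near-certain diagnosis.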
Statistical literacy is a necessary precondition for an educated citizenship in a technological democracy. Understanding risks and asking critical questions can also shape the emotional climate in a society so that hopes and anxieties are no longer as easily manipulated from outside and citizens can develop a better-informed and more relaxed attitude toward their health.
Teams of people working together for a common purpose have been a centerpiece of human social organization ever since our ancient ancestors first banded together to hunt game, raise families, and defend their communities. Human history is largely a story of people working together in groups to explore, achieve, and conquer. Yet, the modern concept of work in large organizations that developed in the late 19th and early 20th centuries is largely a tale of work as a collection of individual jobs. A variety of global forces unfolding over the last two decades, however, has pushed organizations worldwide to restructure work around teams, to enable more rapid, flexible, and adaptive responses to the unexpected. This shift in the structure of work has made team effectiveness a salient organizational concern.
Teams touch our lives every day, and their effectiveness is important to well-being across a wide range of societal functions. There are more than 50 years of psychological research—literally thousands of studies—focused on understanding and influencing the processes that underlie team effectiveness. Our goal in this monograph is to sift through this voluminous literature to identify what we know, what we think we know, and what we need to know to improve the effectiveness of work groups and teams.
We begin by defining team effectiveness and establishing the conceptual underpinnings of our approach to understanding it. We then turn to our review, which concentrates primarily on topics that have well-developed theoretical and empirical foundations, to ensure that our conclusions and recommendations are on firm footing. Our review begins by focusing on cognitive, motivational/affective, and behavioral team processes—processes that enable team members to combine their resources to resolve task demands and, in so doing, be effective. We then turn our attention to identifying interventions, or “levers,” that can shape or align team processes and thereby provide tools and applications that can improve team effectiveness. Topic-specific conclusions and recommendations are given throughout the review. There is a solid foundation for concluding that there is an emerging science of team effectiveness and that findings from this research foundation provide several means to improve team effectiveness. In the concluding section, we summarize our primary findings to highlight specific research, application, and policy recommendations for enhancing the effectiveness of work groups and teams.
Many students are being left behind by an educational system that some people believe is in crisis. Improving educational outcomes will require efforts on many fronts, but a central premise of this monograph is that one part of a solution involves helping students to better regulate their learning through the use of effective learning techniques. Fortunately, cognitive and educational psychologists have been developing and evaluating easy-to-use learning techniques that could help students achieve their learning goals. In this monograph, we discuss 10 learning techniques in detail and offer recommendations about their relative utility. We selected techniques that were expected to be relatively easy to use and hence could be adopted by many students. Also, some techniques (e.g., highlighting and rereading) were selected because students report relying heavily on them, which makes it especially important to examine how well they work. The techniques include elaborative interrogation, self-explanation, summarization, highlighting (or underlining), the keyword mnemonic, imagery use for text learning, rereading, practice testing, distributed practice, and interleaved practice.
To offer recommendations about the relative utility of these techniques, we evaluated whether their benefits generalize across four categories of variables: learning conditions, student characteristics, materials, and criterion tasks. Learning conditions include aspects of the learning environment in which the technique is implemented, such as whether a student studies alone or with a group. Student characteristics include variables such as age, ability, and level of prior knowledge. Materials vary from simple concepts to mathematical problems to complicated science texts. Criterion tasks include different outcome measures that are relevant to student achievement, such as those tapping memory, problem solving, and comprehension.
We attempted to provide thorough reviews for each technique, so this monograph is rather lengthy. However, we also wrote the monograph in a modular fashion, so it is easy to use. In particular, each review is divided into the following sections:
1. General description of the technique and why it should work
2. How general are the effects of this technique?
   2a. Learning conditions
   2b. Student characteristics
   2c. Materials
   2d. Criterion tasks
3. Effects in representative educational contexts
4. Issues for implementation
5. Overall assessment
The review for each technique can be read independently of the others, and particular variables of interest can be easily compared across techniques.
To foreshadow our final recommendations, the techniques vary widely with respect to their generalizability and promise for improving student learning. Practice testing and distributed practice received high utility assessments because they benefit learners of different ages and abilities and have been shown to boost students’ performance across many criterion tasks and even in educational contexts. Elaborative interrogation, self-explanation, and interleaved practice received moderate utility assessments. The benefits of these techniques do generalize across some variables, yet despite their promise, they fell short of a high utility assessment because the evidence for their efficacy is limited. For instance, elaborative interrogation and self-explanation have not been adequately evaluated in educational contexts, and the benefits of interleaving have just begun to be systematically explored, so the ultimate effectiveness of these techniques is currently unknown. Nevertheless, the techniques that received moderate-utility ratings show enough promise for us to recommend their use in appropriate situations, which we describe in detail within the review of each technique.
Five techniques received a low utility assessment: summarization, highlighting, the keyword mnemonic, imagery use for text learning, and rereading. These techniques were rated as low utility for numerous reasons. Summarization and imagery use for text learning have been shown to help some students on some criterion tasks, yet the conditions under which these techniques produce benefits are limited, and much research is still needed to fully explore their overall effectiveness. The keyword mnemonic is difficult to implement in some contexts, and it appears to benefit students for a limited number of materials and for short retention intervals. Most students report rereading and highlighting, yet these techniques do not consistently boost students’ performance, so other techniques should be used in their place (e.g., practice testing instead of rereading).
Our hope is that this monograph will foster improvements in student learning, not only by showcasing which learning techniques are likely to have the most generalizable effects but also by encouraging researchers to continue investigating the most promising techniques. Accordingly, in our closing remarks, we discuss some issues for how these techniques could be implemented by teachers and students, and we highlight directions for future research.
Ginkgo biloba is an herb often used as an alternative treatment to improve cognitive functions. Like most herbal treatments, the use of ginkgo is poorly regulated by government agencies, on the basis of either its efficacy or its health risks. This article reviews the experimental evidence available regarding efficacy, neurobiological actions, and health risks. Findings obtained in studies of humans often include demonstrations of rather mild cognitive enhancement. Interpretation of these findings is complicated by somewhat inconsistent findings, by experimental designs that do not permit identification of cognitive functions susceptible to the influence of ginkgo, and by the paucity of direct comparisons with other treatments. The number of peer-reviewed reports of studies in nonhuman animals is surprisingly small. In this small set, the findings reveal mild behavioral effects that might be attributable to actions on cognitive functions. However, these experiments in rodents, like those in humans, do not involve the use of designs to assess ginkgo's effects on particular cognitive attributes, and generally do not include direct comparisons with other treatments. Interpretation of the findings is further complicated by evidence, obtained in studies of both humans and rats, showing that a single administration of the treatment enhances performance on cognitive measures.
If ginkgo has effects on cognition, there should be effects evident on biological processes as well. Neurobiological studies have largely examined the effects of chronic ginkgo administration, mirroring the most common design in behavioral studies. However, the addition of findings that single administration of ginkgo may influence behavior directs biological investigations to short-term actions of the treatment. Biological effects of ginkgo include vasodilation, protection of neurons from oxidative stress, and actions mediated by neurotransmitter systems. Adverse reactions to ginkgo consumption have been observed but are relatively rare.
Collectively, the behavioral literature reviewed cannot be used conclusively to document or to refute the efficacy of ginkgo in improving cognitive functions. At best, the effects seem quite modest. In particular, it is questionable whether effects of ginkgo, if present, are equal to those obtained by administration of acetylcholinesterase inhibitors, hearing an arousing story, or ingesting glucose.
The diagnosis of mental disorder initially appears relatively straightforward: Patients present with symptoms or visible signs of illness; health professionals make diagnoses based primarily on these symptoms and signs; and they prescribe medication, psychotherapy, or both, accordingly. However, despite a dramatic expansion of knowledge about mental disorders during the past half century, understanding of their components and processes remains rudimentary. We provide histories and descriptions of three systems with different purposes relevant to understanding and classifying mental disorder. Two major diagnostic manuals—the International Classification of Diseases and the Diagnostic and Statistical Manual of Mental Disorders—provide classification systems relevant to public health, clinical diagnosis, service provision, and specific research applications, the former internationally and the latter primarily for the United States. In contrast, the National Institute of Mental Health’s Research Domain Criteria provides a framework that emphasizes integration of basic behavioral and neuroscience research to deepen the understanding of mental disorder. We identify four key issues that present challenges to understanding and classifying mental disorder: etiology, including the multiple causality of mental disorder; whether the relevant phenomena are discrete categories or dimensions; thresholds, which set the boundaries between disorder and nondisorder; and comorbidity, the fact that individuals with mental illness often meet diagnostic requirements for multiple conditions. We discuss how the three systems’ approaches to these key issues correspond or diverge as a result of their different histories, purposes, and constituencies. Although the systems have varying degrees of overlap and distinguishing features, they share the goal of reducing the burden of suffering due to mental disorder.
Bailey et al. (2016) have provided an excellent, state-of-the-art overview that is a major contribution to our understanding of sexual orientation. However, whereas Bailey and his coauthors have examined the physiological, behavioral, and self-report data of sexual orientation and see categories, I see a sexual and romantic continuum. After noting several objections concerning the limitations of the review and methodological shortcomings characteristic of sexual-orientation research in general, I present evidence from research investigating in-between sexualities to support an alternative, continuum-based perspective regarding the nature of sexual orientation for both women and men. A continuum conceptualization has potential implications for investigating the prevalence of nonheterosexuals, sexual-orientation differences in gender nonconformity, causes of sexual orientation, and political issues.
Much has been written in the past two decades about women in academic science careers, but this literature is contradictory. Many analyses have revealed a level playing field, with men and women faring equally, whereas other analyses have suggested numerous areas in which the playing field is not level. The only widely agreed-upon conclusion is that women are underrepresented in college majors, graduate school programs, and the professoriate in those fields that are the most mathematically intensive, such as geoscience, engineering, economics, mathematics/computer science, and the physical sciences. In other scientific fields (psychology, life science, social science), women are found in much higher percentages.
In this monograph, we undertake extensive life-course analyses comparing the trajectories of women and men in math-intensive fields with those of their counterparts in non-math-intensive fields in which women are close to parity with or even exceed the number of men. We begin by examining early-childhood differences in spatial processing and follow this through quantitative performance in middle childhood and adolescence, including high school coursework. We then focus on the transition of the sexes from high school to college major, then to graduate school, and, finally, to careers in academic science.
The results of our myriad analyses reveal that early sex differences in spatial and mathematical reasoning need not stem from biological bases, that the gap between average female and male math ability is narrowing (suggesting strong environmental influences), and that sex differences in math ability at the right tail show variation over time and across nationalities, ethnicities, and other factors, indicating that the ratio of males to females at the right tail can and does change. We find that gender differences in attitudes toward and expectations about math careers and ability (controlling for actual ability) are evident by kindergarten and increase thereafter, leading to lower female propensities to major in math-intensive subjects in college but higher female propensities to major in non-math-intensive sciences, with overall science, technology, engineering, and mathematics (STEM) majors at 50% female for more than a decade. Post-college, although men with majors in math-intensive subjects have historically chosen and completed PhDs in these fields more often than women, the gap has recently narrowed by two thirds; among non-math-intensive STEM majors, women are more likely than men to go into health and other people-related occupations instead of pursuing PhDs.
Importantly, of those who obtain doctorates in math-intensive fields, men and women entering the professoriate have equivalent access to tenure-track academic jobs in science, and they persist and are remunerated at comparable rates—with some caveats that we discuss. The transition from graduate programs to assistant professorships shows more pipeline leakage in the fields in which women are already very prevalent (psychology, life science, social science) than in the math-intensive fields in which they are underrepresented but in which the number of females holding assistant professorships is at least commensurate with (if not greater than) that of males. That is, invitations to interview for tenure-track positions in math-intensive fields—as well as actual employment offers—reveal that female PhD applicants fare at least as well as their male counterparts in math-intensive fields.
Along these same lines, our analyses reveal that manuscript reviewing and grant funding are gender neutral: Male and female authors and principal investigators are equally likely to have their manuscripts accepted by journal editors and their grants funded, with only very occasional exceptions. There are no compelling sex differences in hours worked or average citations per publication, but there is an overall male advantage in productivity. We attempt to reconcile these results amid the disparate claims made regarding their causes, examining sex differences in citations, hours worked, and interests.
We conclude by suggesting that although in the past, gender discrimination was an important cause of women’s underrepresentation in scientific academic careers, this claim has continued to be invoked after it has ceased being a valid cause of women’s underrepresentation in math-intensive fields. Consequently, current barriers to women’s full participation in mathematically intensive academic science fields are rooted in pre-college factors and the subsequent likelihood of majoring in these fields, and future research should focus on these barriers rather than misdirecting attention toward historical barriers that no longer account for women’s underrepresentation in academic science.
The U.S. legal system increasingly accepts the idea that the confidence expressed by an eyewitness who identified a suspect from a lineup provides little information as to the accuracy of that identification. There was a time when this pessimistic assessment was entirely reasonable because of the questionable eyewitness-identification procedures that police commonly employed. However, after more than 30 years of eyewitness-identification research, our understanding of how to properly conduct a lineup has evolved considerably, and the time seems ripe to ask how eyewitness confidence informs accuracy under more pristine testing conditions (e.g., initial, uncontaminated memory tests using fair lineups, with no lineup administrator influence, and with an immediate confidence statement). Under those conditions, mock-crime studies and police department field studies have consistently shown that, for adults, (a) confidence and accuracy are strongly related and (b) high-confidence suspect identifications are remarkably accurate. However, when certain non-pristine testing conditions prevail (e.g., when unfair lineups are used), the accuracy of even a high-confidence suspect ID is seriously compromised. Unfortunately, some jurisdictions have not yet made reforms that would create pristine testing conditions and, hence, our conclusions about the reliability of high-confidence identifications cannot yet be applied to those jurisdictions. However, understanding the information value of eyewitness confidence under pristine testing conditions can help the criminal justice system to simultaneously achieve both of its main objectives: to exonerate the innocent (by better appreciating that initial, low-confidence suspect identifications are error prone) and to convict the guilty (by better appreciating that initial, high-confidence suspect identifications are surprisingly accurate under proper testing conditions).
Contents: Introduction; Quasi-Experimental Studies; Experimental and Quasi-Experimental Studies; Why Does Class Size Matter? Inferences from Existing Research; Implications of the Class-Size Findings; References; Appendix.
There is intense public interest in questions surrounding how children learn to read and how they can best be taught. Research in psychological science has provided answers to many of these questions but, somewhat surprisingly, this research has been slow to make inroads into educational policy and practice. Instead, the field has been plagued by decades of “reading wars.” Even now, there remains a wide gap between the state of research knowledge about learning to read and the state of public understanding. The aim of this article is to fill this gap. We present a comprehensive tutorial review of the science of learning to read, spanning from children’s earliest alphabetic skills through to the fluent word recognition and skilled text comprehension characteristic of expert readers. We explain why phonics instruction is so central to learning in a writing system such as English. But we also move beyond phonics, reviewing research on what else children need to learn to become expert readers and considering how this might be translated into effective classroom practice. We call for an end to the reading wars and recommend an agenda for instruction and research in reading acquisition that is balanced, developmentally informed, and based on a deep understanding of how language and writing systems work.
Cognitive abilities are important predictors of educational and occupational performance, socioeconomic attainment, health, and longevity. Declines in cognitive abilities are linked to impairments in older adults’ everyday functions, but people differ from one another in their rates of cognitive decline over the course of adulthood and old age. Hence, identifying factors that protect against compromised late-life cognition is of great societal interest. The number of years of formal education completed by individuals is positively correlated with their cognitive function throughout adulthood and predicts lower risk of dementia late in life. These observations have led to the propositions that prolonging education might (a) affect cognitive ability and (b) attenuate aging-associated declines in cognition. We evaluate these propositions by reviewing the literature on educational attainment and cognitive aging, including recent analyses of data harmonized across multiple longitudinal cohort studies and related meta-analyses. In line with the first proposition, the evidence indicates that educational attainment has positive effects on cognitive function. We also find evidence that cognitive abilities are associated with selection into longer durations of education and that there are common factors (e.g., parental socioeconomic resources) that affect both educational attainment and cognitive development. There is likely reciprocal interplay among these factors, and among cognitive abilities, during development. Education–cognitive ability associations are apparent across the entire adult life span and across the full range of education levels, including (to some degree) tertiary education. However, contrary to the second proposition, we find that associations between education and aging-associated cognitive declines are negligible and that a threshold model of dementia can account for the association between educational attainment and late-life dementia risk. 
We conclude that educational attainment exerts its influences on late-life cognitive function primarily by contributing to individual differences in cognitive skills that emerge in early adulthood but persist into older age. We also note that the widespread absence of educational influences on rates of cognitive decline puts constraints on theoretical notions of cognitive aging, such as the concepts of cognitive reserve and brain maintenance. Improving the conditions that shape development during the first decades of life carries great potential for improving cognitive ability in early adulthood and for reducing public-health burdens related to cognitive aging and dementia.
Vaccination is one of the great achievements of the 20th century, yet persistent public-health problems include inadequate, delayed, and unstable vaccination uptake. Psychology offers three general propositions for understanding and intervening to increase uptake where vaccines are available and affordable. The first proposition is that thoughts and feelings can motivate getting vaccinated. Hundreds of studies have shown that risk beliefs and anticipated regret about infectious disease correlate reliably with getting vaccinated; low confidence in vaccine effectiveness and concern about safety correlate reliably with not getting vaccinated. We were surprised to find that few randomized trials have successfully changed what people think and feel about vaccines, and those few that succeeded were minimally effective in increasing uptake. The second proposition is that social processes can motivate getting vaccinated. Substantial research has shown that social norms are associated with vaccination, but few interventions examined whether normative messages increase vaccination uptake. Many experimental studies have relied on hypothetical scenarios to demonstrate that altruism and free riding (i.e., taking advantage of the protection provided by others) can affect intended behavior, but few randomized trials have tested strategies to change social processes to increase vaccination uptake. The third proposition is that interventions can facilitate vaccination directly by leveraging, but not trying to change, what people think and feel. These interventions are by far the most plentiful and effective in the literature. To increase vaccine uptake, these interventions build on existing favorable intentions by facilitating action (through reminders, prompts, and primes) and reducing barriers (through logistics and healthy defaults); these interventions also shape behavior (through incentives, sanctions, and requirements). 
Although identification of principles for changing thoughts and feelings to motivate vaccination is a work in progress, psychological principles can now inform the design of systems and policies to directly facilitate action.
Two major questions about addictive behaviors need to be explained by any worthwhile neurobiological theory. First, why do people seek drugs in the first place? Second, why do some people who use drugs seem to eventually become unable to resist drug temptation and so become “addicted”? We will review the theories of addiction that address negative-reinforcement views of drug use (i.e., taking opioids to alleviate distress or withdrawal), positive-reinforcement views (i.e., taking drugs for euphoria), habit views (i.e., growth of automatic drug-use routines), incentive-sensitization views (i.e., growth of excessive “wanting” to take drugs as a result of dopamine-related sensitization), and cognitive-dysfunction views (i.e., impaired prefrontal top-down control), including those involving competing neurobehavioral decision systems (CNDS), and the role of the insula in modulating addictive drug craving. In the special case of opioids, particular attention is paid to whether their analgesic effects overlap with their reinforcing effects and whether the perceived low risk of taking legal medicinal opioids, which are often prescribed by a health professional, could play a role in the decision to use. Specifically, we will address the issue of predisposition or vulnerability to becoming addicted to drugs (i.e., the question of why some people who experiment with drugs develop an addiction, while others do not). Finally, we review attempts to develop novel therapeutic strategies and policy ideas that could help prevent opioid and other substance abuse.
This monograph describes research findings linking intelligence and personality traits with health outcomes, including health behaviors, morbidity, and mortality. The field of study of intelligence and health outcomes is called cognitive epidemiology, and the field of study of personality traits and health outcomes is known as personological epidemiology. Intelligence and personality traits are the principal research topics studied by differential psychologists, so the combined field could be called differential epidemiology. This research is important for the following reasons: The findings overviewed are relatively new, and many researchers and practitioners are unaware of them; the effect sizes are on par with better-known, traditional risk factors for illness and death; mechanisms of the associations are largely unknown, so they must be explored further; and the findings have yet to be applied, so we write this to encourage diverse interested parties to consider how applications might be achieved. To make this research accessible to as many relevant researchers, practitioners, policymakers, and laypersons as possible, we first provide an overview of the basic discoveries regarding intelligence and personality. We describe the nature and structure of the measured phenotypes (i.e., the observable characteristics of an individual) in both fields. Although both areas of study are well established, we recognize that this may not be common knowledge outside of experts in the field. Human intelligence differences are described by a hierarchy that includes general intelligence (g) at the pinnacle, strongly correlated broad domains of cognitive functioning at a lower level, and specific abilities at the foot. The major human differences in personality are described by five personality factors that are widely agreed on with respect to their number and nature: neuroticism, extraversion, openness, agreeableness, and conscientiousness. 
As a foundation for health-related findings, we provide a summary of research showing that intelligence and personality differences can be measured reliably and validly and are stable across many years (even decades), substantially heritable, and related to important life outcomes. Cognitive and personality traits are fundamental aspects of a person, and they have relevance to life chances and outcomes, including health outcomes. We provide an overview of major and recent research on the associations between intelligence and personality traits and health outcomes. These outcomes include mortality from all causes, specific causes of death, specific illnesses, and others, such as health-related behaviors. Intelligence and personality traits are significantly and substantially (by comparison with traditional risk factors) related to all of these outcomes. The studies we describe are unusual in psychology: They have large sample sizes (typically thousands of subjects, sometimes ~ 1 million), the samples are more representative of the background population than in most studies, the follow-up times are long (sometimes many decades, almost the whole human life span), and the outcomes are objective health measures (including death), not just self-reports. In addition to the associations, possible mechanisms for the associations are described and discussed, and some attempts to test these mechanisms are illustrated. It is relatively early in this research field, so a significant amount of work remains to be done. Finally, we make some preliminary remarks about possible applications, with the knowledge that the psychological predictors addressed are somewhat stable aspects of the person, with substantial genetic causes. Nevertheless, we believe differential epidemiology can be a useful component of interventions to improve individual and public health. 
Intelligence and personality differences are possible causes of later health inequalities; the eventual aim of cognitive and personological epidemiology is to reduce or eliminate these inequalities, to the extent that it is possible, and provide information to help people toward their own optimal health through the life course. We present these findings to a wider audience so that more associations will be explored, a better understanding of the mechanisms of health inequalities will be produced, and inventive applications will follow on the basis of what we hope will be seen as practically useful knowledge.
The high prevalence and societal burden of chronic pain, its undertreatment, and disparities in its management have contributed to the acknowledgment of chronic pain as a serious public-health concern. The concurrent opioid epidemic, and increasing concern about overreliance on opioid therapy despite evidence of limited benefit and serious harms, has heightened attention to this problem. The biopsychosocial model has emerged as the primary conceptual framework for understanding the complex experience of chronic pain and for informing models of care. The prominence of psychological processes as risk and resilience factors in this model has prompted extensive study of psychological treatments designed to alter processes that underlie or significantly contribute to pain, distress, or disability among adults with chronic pain. Cognitive-behavioral therapy is acknowledged to have strong evidence of effectiveness; other psychological approaches, including acceptance and commitment therapy, mindfulness, biofeedback, hypnosis, and emotional-awareness and expression therapy, have also garnered varying degrees of evidence across multiple pain conditions. Mechanistic studies have identified multiple pathways by which these treatments may reduce the intensity and impact of pain. Despite the growing evidence for and appreciation of these approaches, several barriers limit their uptake at the level of organizations, providers, and patients. Innovative methods for delivering psychological interventions and other research, practice, and policy initiatives hold promise for overcoming these barriers. Additional scientific knowledge and practice gaps remain to be addressed to optimize the reach and effectiveness of these interventions, including tailoring to address individual differences, concurrently addressing co-occurring disorders, and incorporating other optimization strategies.
Collaborative problem solving (CPS) has been receiving increasing international attention because much of the complex work in the modern world is performed by teams. However, systematic education and training on CPS is lacking for those entering and participating in the workforce. In 2015, the Programme for International Student Assessment (PISA), a global test of educational progress, documented the low levels of proficiency in CPS. This result not only underscores a significant societal need but also presents an important opportunity for psychological scientists to develop, adopt, and implement theory and empirical research on CPS and to work with educators and policy experts to improve training in CPS. This article offers some directions for psychological science to participate in the growing attention to CPS throughout the world. First, it identifies the existing theoretical frameworks and empirical research that focus on CPS. Second, it provides examples of how recent technologies can automate analyses of CPS processes and assessments so that substantially larger data sets can be analyzed and so students can receive immediate feedback on their CPS performance. Third, it identifies some challenges, debates, and uncertainties in creating an infrastructure for research, education, and training in CPS. CPS education and assessment are expected to improve when supported by larger data sets and theoretical frameworks that are informed by psychological science. This will require interdisciplinary efforts that include expertise in psychological science, education, assessment, intelligent digital technologies, and policy.
Telecommuting has become an increasingly popular work mode that has generated significant interest from scholars and practitioners alike. With recent advances in technology that enable mobile connections at ever-affordable rates, working away from the office as a telecommuter has become increasingly available to many workers around the world. Since the term telecommuting was first coined in the 1970s, scholars and practitioners have debated the merits of working away from the office, as it represents a fundamental shift in how organizations have historically done business. Complicating efforts to truly understand the implications of telecommuting have been the widely varying definitions and conceptualizations of telecommuting and the diverse fields in which research has taken place.
Our objective in this article is to review existing research on telecommuting in an effort to better understand what we as a scientific community know about telecommuting and its implications. In so doing, we aim to bring to the surface some of the intricacies associated with telecommuting research so that we may shed light on the debate regarding telecommuting’s benefits and drawbacks. We attempt to sift through the divergent and at times conflicting literature to develop an overall sense of the status of our scientific findings, in an effort to identify not only what we know and what we think we know about telecommuting, but also what we must yet learn to fully understand this increasingly important work mode.
After a brief review of the history of telecommuting and its prevalence, we begin by discussing the definitional challenges inherent within existing literature and offer a comprehensive definition of telecommuting rooted in existing research. Our review starts by highlighting the need to interpret existing findings with an understanding of how the extent of telecommuting practiced by participants in a study is likely to alter conclusions that may be drawn. We then review telecommuting’s implications for employees’ work-family issues, attitudes, and work outcomes, including job satisfaction, organizational commitment and identification, stress, performance, wages, withdrawal behaviors, and firm-level metrics. Our article continues by discussing research findings concerning salient contextual issues that might influence or alter the impact of telecommuting, including the nature of the work performed while telecommuting, interpersonal processes such as knowledge sharing and innovation, and additional considerations that include motives for telecommuting such as family responsibilities. We also cover organizational culture and support that may shape the telecommuting experience, after which we discuss the community and societal effects of telecommuting, including its effects on traffic and emissions, business continuity, and work opportunities, as well as the potential impact on societal ties. Selected examples of telecommuting legislation and policies are also provided in an effort to inform readers regarding the status of the national debate and its legislative implications. Our synthesis concludes by offering recommendations for telecommuting research and practice that aim to improve the quality of data on telecommuting as well as identify areas of research in need of development.
In this article, we report the results of a two-part investigation of psychological assessments by psychologists in legal contexts. The first part involves a systematic review of the 364 psychological assessment tools psychologists report having used in legal cases across 22 surveys of experienced forensic mental health practitioners, focusing on legal standards and scientific and psychometric theory. The second part is a legal analysis of admissibility challenges with regard to psychological assessments. Results from the first part reveal that, consistent with their roots in psychological science, nearly all of the assessment tools used by psychologists and offered as expert evidence in legal settings have been subjected to empirical testing (90%). However, we were able to clearly identify only about 67% as generally accepted in the field and only about 40% have generally favorable reviews of their psychometric and technical properties in authorities such as the Mental Measurements Yearbook. Furthermore, there is a weak relationship between general acceptance and favorability of tools’ psychometric properties. Results from the second part show that legal challenges to the admission of this evidence are infrequent: Legal challenges to the assessment evidence for any reason occurred in only 5.1% of cases in the sample (a little more than half of these involved challenges to validity). When challenges were raised, they succeeded only about a third of the time. Challenges to the most scientifically suspect tools are almost nonexistent. Attorneys rarely challenge psychological expert assessment evidence, and when they do, judges often fail to exercise the scrutiny required by law.
Congratulations to Deary, Weiss, and Batty (2010, this issue) for an encyclopedic and judicious survey of the literature and for their sensible recommendations as to how medical practitioners must tailor prescriptions to the personality and cognitive ability of the patients they address. The suggestions for future research are state of the art in terms of analysis of the kind of data psychologists are likely to collect. However, I suggest that we approach sociologists with hypotheses that might motivate them to supplement our knowledge. The curse of any model is that it is underidentified and encourages us to think that we know what human behavior lies behind the numbers it generates. I will comment on what etiology might lie behind the correlation between low IQ and hospitalization for violence-inflicted injury. Others with broader knowledge will, I hope, make similar contributions.
Almost everyone struggles to act in their individual and collective best interests, particularly when doing so requires forgoing a more immediately enjoyable alternative. Other than exhorting decision makers to “do the right thing,” what can policymakers do to reduce overeating, undersaving, procrastination, and other self-defeating behaviors that feel good now but generate larger delayed costs? In this review, we synthesize contemporary research on approaches to reducing failures of self-control. We distinguish between self-deployed and other-deployed strategies and, in addition, between situational and cognitive intervention targets. Collectively, the evidence from both psychological science and economics recommends psychologically informed policies for reducing failures of self-control.