Journal of Marketing Research

Published by American Marketing Association
Online ISSN: 1547-7193
Print ISSN: 0022-2437
Publications
Increasing the amount of family planning in less-developed countries is crucial to their economic development and is basically a marketing job. Much important marketing research has been done in this area; its history is described here. But much more needs to be done.
 
A five-equation model of the Jamaican distribution structure is fitted to data on four branded personal products. Decision rules appear oriented toward sales for retailers, wholesalers, importers, and manufacturers, and some policy implications for stimulating the structure are suggested. Problems associated with procedures for fitting sets of equations and examples of the effects of multicollinearity on simultaneous equation estimation are discussed.
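As a rough companion to the estimation issues raised here, the sketch below fits one equation of a toy simultaneous system by two-stage least squares and contrasts it with naive OLS. The data, variable names, and coefficients are invented for illustration and do not reproduce the article's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical system: retailer sales depend on wholesaler shipments (endogenous),
# and shipments depend on import volume (exogenous instrument).
imports = rng.normal(10, 2, n)                      # exogenous instrument
shock = rng.normal(0, 1, n)                         # common shock -> endogeneity
shipments = 2.0 + 0.8 * imports + shock + rng.normal(0, 1, n)
sales = 1.0 + 0.5 * shipments + shock + rng.normal(0, 1, n)

def ols(y, x):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Naive OLS of sales on shipments is biased by the common shock.
print("OLS slope:  %.3f" % ols(sales, shipments)[1])

# Two-stage least squares: replace shipments with their projection on the instrument.
shipments_hat = np.column_stack([np.ones(n), imports]) @ ols(shipments, imports)
print("2SLS slope: %.3f" % ols(sales, shipments_hat)[1])
```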
 
How do arousal-inducing contexts, such as frightening or romantic television programs, influence the effectiveness of basic persuasion heuristics? Different predictions are made by three theoretical models: a general arousal model predicts that arousal should increase the effectiveness of heuristics; an affective valence model predicts that effectiveness should depend on whether the context elicits positive or negative affect; an evolutionary model predicts that persuasiveness should depend on both the specific emotion that is elicited and the content of the particular heuristic. Three experiments examined how fear-inducing versus romantic contexts influenced the effectiveness of two widely used heuristics: social proof (e.g., "most popular") and scarcity (e.g., "limited edition"). Results supported predictions from an evolutionary model, showing that fear can lead scarcity appeals to be counter-persuasive and that romantic desire can lead social proof appeals to be counter-persuasive. The findings highlight how an evolutionary theoretical approach can lead to novel theoretical and practical marketing insights.
 
Many people fail to save what they need for retirement (Munnell, Webb, and Golub-Sass 2009). Research on excessive discounting of the future suggests that removing the lure of immediate rewards by pre-committing to decisions, or elaborating on the value of future rewards, can both make decisions more future-oriented. In this article, we explore a third and complementary route, one that deals not with present and future rewards but with present and future selves. In line with thinkers who have suggested that people may fail, through a lack of belief or imagination, to identify with their future selves (Parfit 1971; Schelling 1984), we propose that allowing people to interact with age-progressed renderings of themselves will cause them to allocate more resources toward the future. In four studies, participants interacted with realistic computer renderings of their future selves using immersive virtual reality hardware and interactive decision aids. In all cases, those who interacted with virtual future selves exhibited an increased tendency to accept later monetary rewards over immediate ones.
 
Cutting advertising budgets has traditionally been a popular reaction by companies around the globe when faced with a slackening economy. Still, anecdotal evidence suggests considerable cross-country variability in the cyclical sensitivity of advertising expenditures. We conduct a systematic investigation into the cyclical sensitivity of advertising expenditures in 37 countries across all continents, covering up to 25 years and four key media: magazines, newspapers, radio, and television. While our findings confirm that advertising moves in the same direction as general economic activity, we also show that advertising is considerably more sensitive to business-cycle fluctuations than the economy as a whole, with an average co-movement elasticity of 1.4. Interestingly, advertising’s cyclical dependence is systematically related to the cultural context in which companies operate. Advertising behaves less cyclically in countries high on long-term orientation and power distance, while advertising is more cyclical in countries high on uncertainty avoidance. Further, advertising is more sensitive to the business cycle in countries characterized by significant stock-market pressure and few foreign-owned multinationals. These results have important strategic implications for both global advertisers and their ad agencies.
 
The authors propose a Web-based adaptive self-explicated approach for multiattribute preference measurement (conjoint analysis) with a large number (ten or more) of attributes. The proposed approach overcomes some of the limitations of previous self-explicated approaches. The authors develop a computer-based self-explicated approach that breaks down the attribute importance question into a ranking of attributes followed by a sequence of constant-sum paired comparison questions. In the proposed approach, the questions are chosen adaptively for each respondent to maximize the information elicited from each paired comparison question. Unlike the traditional self-explicated approach, the proposed approach provides standard errors for attribute importance. In two studies involving digital cameras and laptop computers described on 12 and 14 attributes, respectively, the authors find that the ability to correctly predict validation choices of the proposed adaptive approach is substantially and significantly greater than that of adaptive conjoint analysis, the fast polyhedral method, and the traditional self-explicated approach. In addition, the adaptive self-explicated approach yields a significantly higher predictive validity than a nonadaptive fractional factorial constant-sum paired comparison design.
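The sketch below illustrates, in simplified form, how constant-sum paired comparisons can yield attribute importances with standard errors. The adaptive question-selection step and the authors' actual estimator are not reproduced; the answers and attribute indices are invented.

```python
import numpy as np

# Hypothetical constant-sum answers: (attribute i, attribute j, points given to i out of 100).
pairs = [(0, 1, 70), (1, 2, 60), (0, 2, 80), (2, 3, 55), (1, 3, 65)]
n_attr = 4

# Each answer implies log(w_i) - log(w_j) = log(points_i / points_j) + error.
X = np.zeros((len(pairs), n_attr))
y = np.zeros(len(pairs))
for row, (i, j, pts_i) in enumerate(pairs):
    X[row, i], X[row, j] = 1.0, -1.0
    y[row] = np.log(pts_i / (100.0 - pts_i))

# Fix attribute 0 as the reference (log-weight 0) for identification.
Xr = X[:, 1:]
beta, res, *_ = np.linalg.lstsq(Xr, y, rcond=None)
dof = len(pairs) - Xr.shape[1]
sigma2 = float(res[0]) / dof if res.size else 0.0
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(Xr.T @ Xr)))

log_w = np.concatenate([[0.0], beta])
weights = np.exp(log_w) / np.exp(log_w).sum()       # normalized importances
print("importances:", np.round(weights, 3))
print("std. errors of log-importances (attributes 1-3):", np.round(se, 3))
```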
 
The authors propose and test a new "polyhedral" choice-based conjoint analysis question-design method that adapts each respondent's choice sets on the basis of previous answers by that respondent. Polyhedral "interior-point" algorithms design questions that quickly reduce the sets of partworths that are consistent with the respondent's choices. To identify domains in which individual adaptation is promising (and domains in which it is not), the authors evaluate the performance of polyhedral choice-based conjoint analysis methods with Monte Carlo experiments. They vary magnitude (response accuracy), respondent heterogeneity, estimation method, and question-design method in a 4 × 2³ experiment. The estimation methods are hierarchical Bayes and analytic center. The latter is a new individual-level estimation procedure that is a by-product of polyhedral question design. The benchmarks for individual adaptation are random designs, orthogonal designs, and aggregate customization. The simulations suggest that polyhedral question design does well in many domains, particularly those in which heterogeneity and partworth magnitudes are relatively large. The authors test feasibility, test an important design criterion (choice balance), and obtain empirical data on convergence by describing an application to the design of executive education programs in which 354 Web-based respondents answered stated-choice tasks with four service profiles each.
 
An agenda (either implicit or imposed) is a set of constraints on the order of selecting or eliminating choice alternatives. It can be "top down," "bottom up" (as a tournament), or more general. The author's analytic results identify which probabilistic choice rules are affected by agendas and when they are affected. The results also illustrate how agendas might be used to enhance target products. Examples and behavioral hypotheses are provided and the implications of the results for marketing management are discussed.
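A small numerical illustration (not the author's analytic results) of how an agenda can matter: under a stage-wise Luce rule, aggregating branches by summing item values reproduces the unconstrained probabilities, whereas aggregating by the branch maximum shifts them. The values and aggregation rules below are purely illustrative.

```python
# Toy item values for three alternatives.
v = {"a": 3.0, "b": 2.0, "c": 1.0}

def luce(values):
    total = sum(values.values())
    return {k: x / total for k, x in values.items()}

# Unconstrained Luce choice probabilities.
flat = luce(v)

# Top-down agenda: first choose between {a} and {b, c}, then within the winning branch.
def agenda_probs(aggregate):
    branch_vals = {"a": aggregate([v["a"]]), "bc": aggregate([v["b"], v["c"]])}
    p_branch = luce(branch_vals)
    within = luce({"b": v["b"], "c": v["c"]})
    return {"a": p_branch["a"],
            "b": p_branch["bc"] * within["b"],
            "c": p_branch["bc"] * within["c"]}

print("flat Luce:            ", {k: round(p, 3) for k, p in flat.items()})
print("agenda, sum branches: ", {k: round(p, 3) for k, p in agenda_probs(sum).items()})
print("agenda, max branches: ", {k: round(p, 3) for k, p in agenda_probs(max).items()})
```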
 
Because most conjoint studies are conducted in hypothetical situations with no consumption consequences for the participants, the extent to which the studies are able to uncover "true" consumer preference structures is questionable. Experimental economics literature, with its emphasis on incentive alignment and hypothetical bias, suggests that more realistic incentive-aligned studies result in stronger out-of-sample predictive performance of actual purchase behaviors and provide better estimates of consumer preference structures than do hypothetical studies. To test this hypothesis, the authors design an experiment with conventional (hypothetical) conditions and parallel incentive-aligned counterparts. Using Chinese dinner specials as the context, the authors conduct a field experiment in a Chinese restaurant during dinnertime. The results provide strong evidence in favor of incentive-aligned choice conjoint analysis, in that incentive-aligned choice conjoint outperforms hypothetical choice conjoint in out-of-sample predictions. To determine the robustness of the results, the authors conduct a second study that uses snacks as the context and considers only the choice treatments. This study confirms the results by providing strong evidence in favor of incentive-aligned choice analysis in out-of-sample predictions. The results provide a strong motivation for conjoint practitioners to consider conducting studies in realistic settings using incentive structures that require participants to "live with" their decisions.
 
The compromise effect denotes the finding that brands gain share when they become the intermediate rather than an extreme option in a choice set (Simonson 1989). Despite the robustness and importance of this phenomenon, choice modelers have neglected to incorporate the compromise effect within formal choice models and to test whether such models outperform the standard value maximization model. In this article, we suggest four context-dependent choice models that can conceptually capture the compromise effect. Although these models are motivated by theory from economics and behavioral decision research, they differ with respect to the particular mechanism that underlies the compromise effect (e.g., contextual concavity vs. loss aversion). Using two empirical applications, we (1) contrast the alternative models and show that incorporating the compromise effect by modeling the local choice context leads to superior predictions and fit relative to the traditional value maximization model and a stronger (naive) model that adjusts for possible biases in utility measurement; (2) generalize the compromise effect by demonstrating that it systematically affects choice in larger sets of products and attributes than previously shown; (3) show the theoretical and empirical equivalence of loss aversion and local (contextual) concavity; and (4) demonstrate the superiority of models that use a single reference point over "tournament models" in which each option serves as a reference point. We discuss the theoretical and practical implications of this research, as well as the ability of the proposed models to predict other behavioral context effects.
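As a minimal sketch of one mechanism named above (contextual concavity), the following toy computation shows how concavely rescaling attribute values against the local set range lifts the intermediate option's logit share. The profiles and parameters are invented, and the paper's estimated models are not reproduced.

```python
import numpy as np

# Hypothetical two-attribute profiles (higher is better on both); B is the compromise.
options = {"A": np.array([10.0, 2.0]),
           "B": np.array([6.0, 6.0]),
           "C": np.array([2.0, 10.0])}
weights = np.array([1.0, 1.0])

def logit_shares(utils):
    e = np.exp(np.array(list(utils.values())))
    return dict(zip(utils, e / e.sum()))

# Context-free linear utility: all three options tie here.
linear = {k: float(weights @ x) for k, x in options.items()}

# Contextual concavity: attribute values are rescaled against the local set minimum
# and range, then concavely transformed (rho < 1), rewarding the balanced middle option.
X = np.array(list(options.values()))
lo, span = X.min(axis=0), X.max(axis=0) - X.min(axis=0)
rho = 0.5
concave = {k: float(weights @ ((x - lo) / span) ** rho) for k, x in options.items()}

print("linear utility shares:", {k: round(p, 3) for k, p in logit_shares(linear).items()})
print("contextual concavity: ", {k: round(p, 3) for k, p in logit_shares(concave).items()})
```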
 
The author is concerned with whether surveys of consumer anticipations can improve predictions of purchase behavior relative to predictions that use only objective variables obtainable at the same date. The basic objective of the study is improved predictions of changes over time.
 
Using data from surveys of automobile buyers collected in 1990 and 2000 in a natural experiment setting, the authors study the determinants of use of the Internet as a source of information on automobiles, its impact on the use of other sources, and its impact on total search effort. The results indicate that the Internet draws attention in approximately the same proportion from other sources. The results also show that those who use the Internet to search for automobiles are younger and more educated and search more in general. However, the analysis also indicates that they would have searched even more if the Internet had not been present.
 
This paper provides a survey of studies that analyze the macroeconomic effects of intellectual property rights (IPR). The first part of the paper introduces different patent policy instruments and reviews their effects on R&D and economic growth. This part also discusses the distortionary effects and distributional consequences of IPR protection, as well as empirical evidence on the effects of patent rights. The second part considers the international aspects of IPR protection. In summary, the paper draws the following conclusions from the literature. First, different patent policy instruments have different effects on R&D and growth. Second, there is empirical evidence supporting a positive relationship between IPR protection and innovation, but the evidence is stronger for developed countries than for developing countries. Third, the optimal level of IPR protection should trade off the social benefits of enhanced innovation against the social costs of multiple distortions and income inequality. Finally, in an open economy, achieving the globally optimal level of protection requires international coordination (rather than harmonization) of IPR protection.
 
Marketing is an applied science that tries to explain and influence how firms and consumers actually behave in markets. Marketing models are usually applications of economic theories. These theories are general and produce precise predictions, but they rely on strong assumptions of rationality of consumers and firms. Theories based on rationality limits could prove similarly general and precise, while being grounded in psychological plausibility and explaining facts that are puzzles for the standard approach. Behavioral economics explores the implications of limits of rationality. The goal is to make economic theories more plausible while maintaining formal power and accurate prediction of field data. This review focuses selectively on six types of models used in behavioral economics that can be applied to marketing.
 
Valid predictions for the direction of nonresponse bias were obtained from subjective estimates and extrapolations in an analysis of mail survey data from published studies. For estimates of the magnitude of bias, the use of extrapolations led to substantial improvements over a strategy of not using extrapolations.
 
The creation of online consumer communities to provide product reviews and advice has been touted as an important, albeit somewhat expensive component of Internet retail strategies. In this paper, we characterize reviewer behavior at two popular Internet sites and examine the effect of consumer reviews on firms' sales. We use publicly available data from the two leading online booksellers, Amazon.com and BarnesandNoble.com, to construct measures of each firm's sales of individual books. We also gather extensive consumer review data at the two sites. First, we characterize the reviewer behavior on the two sites such as the distribution of the number of ratings and the valence and length of ratings, as well as ratings across different subject categories. Second, we measure the effect of individual reviews on the relative shares of books across the two sites. We argue that our methodology of comparing the sales and reviews of a given book across Internet retailers allows us to improve on the existing literature by better capturing a causal relationship between word of mouth (reviews) and sales since we are able to difference out factors that affect the sales and word of mouth of both retailers, such as the book's quality. We examine the incremental sales effects of having reviews for a particular book versus not having reviews and also the differential sales effects of positive and negative reviews. Our large database of books also allows us to control for other important confounding factors such as differences across the sites in prices and shipping times.
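The following sketch mimics, on synthetic data, the differencing logic described here: regressing a single site's sales on its own reviews confounds reviews with unobserved book quality, while differencing across the two sites removes book-level factors common to both. The variable names and coefficients are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_books = 500

quality = rng.normal(0, 1, n_books)                 # unobserved book quality
stars_a = quality + rng.normal(0, 0.5, n_books)     # average star rating at site A
stars_b = quality + rng.normal(0, 0.5, n_books)     # average star rating at site B

# Log sales at each site load on quality (the confound) and on the site's own reviews.
log_sales_a = 1.0 * quality + 0.4 * stars_a + rng.normal(0, 0.3, n_books)
log_sales_b = 1.0 * quality + 0.4 * stars_b + rng.normal(0, 0.3, n_books)

def slope(y, x):
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# Regressing one site's sales on its own reviews picks up quality as well...
print("single-site slope (confounded): %.2f" % slope(log_sales_a, stars_a))
# ...whereas differencing across sites removes book-level factors common to both.
print("cross-site difference slope:    %.2f" % slope(log_sales_a - log_sales_b,
                                                     stars_a - stars_b))
```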
 
This research examines the dynamic process of inference updating. We present a framework that delineates two mechanisms that guide the updating of personality trait inferences about brands. The results of three experiments show that chronics (those for whom the trait is accessible) update their initial inferences based on the trait implications of new information. Interestingly, nonchronics (those for whom the trait is not accessible) also update their initial inferences, but they do so based on the evaluative implications of new information. The framework adds to the inference-making literature by uncovering two distinct paths of inference updating and highlighting the moderating role of trait accessibility. The findings have direct implications for marketers seeking to understand the construction of brand personality, and they highlight the constantly evolving nature of brand perceptions, as well as the notion that both the consumer and the marketer have important roles to play in this process.
 
Contrary to predictions based on cognitive accessibility, heightened gender identity salience resulted in lower perceived vulnerability and reduced donation behavior to identity-specific risks (e.g., breast cancer). No such effect was manifest with identity-neutral risks. Establishing the importance of self-identity, perceived breast cancer vulnerability was lower when women were primed with their own gender, but not with the general category of gender. Establishing the involvement of unconscious defense mechanisms, fear appraisal prior to the risk rating task eliminated the effect of a gender identity prime on perceived breast cancer vulnerability. The findings have direct implications for health communication and donation campaigns.
 
In forecasting demand for expensive consumer goods, direct questioning of potential consumers about their future purchasing plans has had considerable predictive success [1, 2, 4]. Any attempt to apply such "intention to purchase" methods to forecast demand for proposed products or services must determine some way to convey product information to the potential consumer [3]. Indeed, all the prospective consumer knows about the product or service is what he may infer from the information given to him by the researcher. This paper presents a study of the effect upon intention to purchase of this seemingly crucial element—the extent and type of description of the new service. How extensive must the description of the new service be in order to measure intention to purchase?
 
Our study investigates the overall effects of in-store displays (ISD) on category sales and brand market share in an online shopping context and compares the differences in effectiveness between ISD types. Using data from an online grocer, we examine three online ISD types that match traditional ones: first screen (entrance), banner (end-of-aisle), and shelf tag (in-aisle) displays. Empirical results for 10 categories confirm that online ISD may substantially increase brand market share and, to a lesser extent, category sales. Our results also demonstrate that not all types are equally effective. First screen displays clearly have the strongest effect on market share: they benefit from their placement on the ‘entrance’ location, central on-screen position, and direct purchase link. While first screen displays feature only one SKU, banner displays typically feature all SKUs of a brand; however, banner displays are placed in border-screen positions on traveling-zone pages without a direct purchase link. Based on our results, the advantages of banner displays do not outweigh those of first screen displays in most cases. Shelf tags, finally, may be very useful in attracting attention to interesting promotions but appear to have no or at most a limited effect on their own.
 
A stochastic model of individual buyer behavior is developed from a set of postulates about the buying process. The postulates are shown to imply a linear learning model modified by a term to explain response to pricing stimuli. Thus, a customer's purchasing probability is modelled as a combination of the effect of his past purchasing behavior and the effect of price variation in the market. Methods are developed to calculate short- and long-term probabilistic properties of the process. A method for parameter estimation is included. The model differs from past modelling efforts in this area in that a controllable variable, product price, is explicitly included in the model structure, allowing the model to be used to aid pricing decision making under a certain set of assumptions about competitive behavior in a market situation.
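A minimal sketch of a linear learning update with an added price term, in the spirit of the model described; the article's exact functional form and parameter values are not reproduced, and everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative linear-learning parameters: intercept, carry-over, and purchase feedback,
# plus a term shifting probability when price deviates from a reference price.
a, b, c = 0.05, 0.75, 0.20
gamma, ref_price = 0.30, 1.00

def next_prob(p, bought, price):
    """One step of the modified linear learning model, clipped to [0, 1]."""
    p_next = a + b * p + c * bought - gamma * (price - ref_price)
    return min(max(p_next, 0.0), 1.0)

p, history = 0.3, []
for t in range(20):
    price = 0.8 if t % 5 == 0 else 1.1          # periodic price promotion
    bought = rng.random() < p
    history.append((t, price, int(bought), p))
    p = next_prob(p, bought, price)

for t, price, bought, prob in history:
    print(f"t={t:2d}  price={price:.2f}  bought={bought}  prob={prob:.3f}")
```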
 
Previous research has suggested that households and individuals may possess multiple preferences for a given product category. These multiple preferences may be the result of multiple individuals, different uses and usage occasions, and/or variety seeking. As such, single ideal point models that assume a single invariant ideal point may be operating from a false and misleading assumption. We propose a Multiple Ideal Point Model to capture these multiple preference effects. The basic premise of the model is that consumers may possess a set of ideal points, each of which represents a distinct preference. At any given purchase occasion, one of these points is "activated" with some probability and choices are made with respect to its characteristics. In this paper, the Multiple Ideal Point Model and an associated estimation procedure are assessed with respect to their ability to recover a true choice structure. We then empirically test the model on IRI panel data from the powdered soft drink category. The results are discussed and directions for future research are introduced.
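A toy sketch of the activation idea: choice probabilities are a mixture over latent ideal points, each active with some probability, with distance-based logit choice given the active point. All numbers are invented and the authors' estimation procedure is not reproduced.

```python
import numpy as np

# Two latent ideal points for one household (e.g., "for the kids" vs. "for guests"),
# activated with fixed probabilities on any purchase occasion. Values are illustrative.
ideal_points = np.array([[0.2, 0.8],
                         [0.9, 0.3]])
activation = np.array([0.6, 0.4])

# Product locations in the same two-dimensional attribute space.
products = np.array([[0.25, 0.75],
                     [0.85, 0.35],
                     [0.55, 0.55]])

def choice_probs_given_ideal(ideal, beta=8.0):
    """Logit choice with utility = -beta * squared distance to the active ideal point."""
    util = -beta * ((products - ideal) ** 2).sum(axis=1)
    e = np.exp(util - util.max())
    return e / e.sum()

# Marginal choice probabilities mix over which ideal point is active.
marginal = sum(w * choice_probs_given_ideal(ip) for w, ip in zip(activation, ideal_points))
print("conditional on ideal 1:", np.round(choice_probs_given_ideal(ideal_points[0]), 3))
print("conditional on ideal 2:", np.round(choice_probs_given_ideal(ideal_points[1]), 3))
print("marginal (mixture):    ", np.round(marginal, 3))
```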
 
Two current trends, information overload combined with increased control of marketers (e.g., on the Internet) over the manner in which their products are sold and presented to buyers, suggest that deciding what information to provide or not to provide can determine a product's success in the marketplace. Although it has long been recognized that most purchase decisions are made with incomplete information, we still know very little about the effect of missing information on consumer choice. Building on earlier work by Slovic and MacPhillamy (1974), we demonstrate that a tendency to give more weight to attributes on which all considered options have values ("common attributes"), relative to attributes for which not all options have values ("unique attributes"), can often lead to intransitive preferences. Using process measures, it is further shown that buyers tend to interpret missing attribute values in a way that supports the purchase of the option that is superior on the common attribute. The results indicate that information presentation format and inferences about missing values cannot account for the observed effects of missing information on consumer choice. We also show that the purchase decisions of buyers who consider attribute importance prior to making a choice and those with high need for cognition are less susceptible to influence by missing information. Finally, the findings indicate that choosing from sets with missing information can impact buyer tastes and purchase decisions made subsequently. We discuss the theoretical and practical implications of this research.
 
Why are similar workers paid differently? I review and compare two lines of research that have recently witnessed great progress in addressing “unexplained” wage inequality: (i) worker unobserved heterogeneity in, and sorting by, human capital; (ii) firms’ monopsony power in labor markets characterized by job search frictions. Both lines share a view of wage differentials as an equilibrium phenomenon. Despite their profound conceptual and technical differences, they remain natural competitors in this investigation. Unlike other hypotheses, they provide natural and unifying explanations for job and worker flows, unemployment duration and incidence, job-to-job quits, and the shape of the wage distribution.
 
The conventional wisdom in economic theory holds that switching costs make markets less competitive. This paper challenges this claim. We find that steady-state equilibrium prices may fall as switching costs are introduced into a dynamic pricing model. To assess whether this finding is of empirical relevance, we consider a general model with differentiated products, imperfect lock-in and a large number of consumer types. We calibrate this model with data from a frequently purchased packaged goods market, where consumers exhibit brand loyalty, a specific form of switching costs. We are able to estimate the level of switching costs from the brand choice behavior in this data. At switching costs of the order of magnitude found in our data, prices are lower than in the situation without switching costs.
 
We propose a new model to describe consideration, consisting of a multivariate probit model component for consideration and a multinomial probit model component for choice, given consideration. The approach allows one to analyze stated consideration set data, revealed consideration set (choice) data, or both, while at the same time allowing for unobserved dependence in consideration among brands. In addition, the model accommodates different effects of the marketing mix on consideration and choice, an error process that is correlated over time, and unobserved consumer heterogeneity in both processes. We attempt to establish the validity of the existing practice of inferring consideration sets from observed choices in panel data. To this end, we collect data in an online choice experiment involving interactive supermarket shelves and post-choice questionnaires to measure the choice protocol and stated consideration levels. We show with these experimental data that underlying consideration sets can be reliably retrieved from choice data alone. Next, we estimate the model on IRI panel data. We have two main results. First, compared with the single-stage multinomial probit model, promotion effects are larger when they are included in the consideration stage of the two-stage model. Second, we find that consideration of brands does not covary greatly across brands once we account for observed effects.
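A simulation-style sketch of the two-stage structure described above, assuming a multivariate-normal consideration stage and a normal-error choice stage; it illustrates the data-generating logic only, not the authors' estimation.

```python
import numpy as np

rng = np.random.default_rng(3)
n_households, n_brands = 5000, 3

# Stage 1 (consideration): correlated latent propensities; a brand enters the
# consideration set when its latent value exceeds zero (multivariate probit style).
cons_mean = np.array([0.5, 0.0, -0.3])
cons_cov = np.array([[1.0, 0.4, 0.1],
                     [0.4, 1.0, 0.2],
                     [0.1, 0.2, 1.0]])
latent_cons = rng.multivariate_normal(cons_mean, cons_cov, size=n_households)
considered = latent_cons > 0

# Stage 2 (choice given consideration): normal-error utilities, and the highest utility
# among considered brands wins; occasions with an empty consideration set are skipped.
brand_value = np.array([0.2, 0.5, 0.1])
utility = brand_value + rng.normal(0, 1, (n_households, n_brands))
utility[~considered] = -np.inf

choices = utility.argmax(axis=1)
valid = considered.any(axis=1)
shares = np.bincount(choices[valid], minlength=n_brands) / valid.sum()
print("consideration rates:", np.round(considered.mean(axis=0), 3))
print("choice shares:      ", np.round(shares, 3))
```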
 
The marketing literature suggests several phenomena that may contribute to the shape of the relationship between sales and price discounts. These phenomena can produce severe nonlinearities and interactions in the curves, and we argue that those are best captured with a flexible approach. Since a fully nonparametric regression model suffers from the curse of dimensionality, we propose a semiparametric regression model. Store-level sales over time are modeled as a nonparametric function of own- and cross-item price discounts and a parametric function of other predictors (all indicator variables). We compare the predictive validity of the semiparametric model with that of two parametric benchmark models and obtain better performance on average. The results for three product categories indicate, among other things, threshold and saturation effects for both own- and cross-item temporary price cuts. We also show how the own-item curve depends on other items’ price discounts (flexible interaction effects). In a separate analysis, we show how the shape of the deal effect curve depends on own-item promotion signals. Our results indicate that prevailing methods for the estimation of deal effects on sales are inadequate.
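A rough sketch of a partial-linear (semiparametric) deal-effect regression on synthetic data: indicator variables enter parametrically, the own-item discount enters through a kernel smoother, and the two parts are estimated by simple backfitting. This illustrates the general idea, not the authors' estimator; all names and parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000

# Synthetic store-week data: log sales respond to an own-item discount through a
# threshold/saturation curve, plus parametric effects of feature and display dummies.
discount = rng.uniform(0, 0.5, n)
feature = rng.integers(0, 2, n)
display = rng.integers(0, 2, n)
true_curve = 1.5 / (1 + np.exp(-20 * (discount - 0.2)))   # S-shaped deal effect
log_sales = 2.0 + true_curve + 0.3 * feature + 0.5 * display + rng.normal(0, 0.2, n)

def kernel_smooth(x, y, grid, bandwidth=0.05):
    """Nadaraya-Watson smoother with a Gaussian kernel."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bandwidth) ** 2)
    return (w @ y) / w.sum(axis=1)

# Backfitting between the parametric part (dummies) and the nonparametric part (discount).
D = np.column_stack([np.ones(n), feature, display])
m_hat = np.zeros(n)
for _ in range(20):
    beta = np.linalg.lstsq(D, log_sales - m_hat, rcond=None)[0]
    m_hat = kernel_smooth(discount, log_sales - D @ beta, discount)
    m_hat -= m_hat.mean()          # identification: the curve has zero mean

print("feature and display effects:", np.round(beta[1:], 2))
grid = np.linspace(0.0, 0.5, 6)
curve = kernel_smooth(discount, log_sales - D @ beta, grid)
print("centered deal-effect curve on a discount grid:", np.round(curve, 2))
```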
 
To push a customer and market orientation deep into the organization, many firms have adopted systems by which internal customers evaluate internal suppliers. The internal supplier receives a larger bonus for a higher evaluation. The authors examine two internal customer-internal supplier incentive systems. In one system, the internal customer provides the evaluation implicitly by selecting the percentage of its bonus that is based on market outcomes (e.g., a combination of net sales and customer satisfaction if these measures can be tied to incremental profits). The internal supplier's reward is based on the percentage that the internal customer chooses. In the second system, the internal customer selects target market outcomes, and the internal supplier is rewarded on the basis of the target. In each incentive system, some risk is transferred from the firm to the employees, and the firm must pay for this; but in return, the firm need not observe either the internal supplier's or the internal customer's actions. The incentive systems are robust even if the firm guesses wrongly about what employees perceive as costly and about how employee actions affect profit. The authors discuss how these systems relate to internal customer satisfaction systems and profit centers.
 
We focus on three critical areas of future research on regulatory fit. The first focuses on how regulatory orientation gets sustained. We argue that there are two distinct approaches that bring about the ‘just right feeling’: (1) process-based (involving the interaction between regulatory orientation and decision making processes) and (2) outcome-based (involving the interaction between regulatory orientation and framed outcomes offered). Second, we discuss possible boundary conditions of regulatory fit effects, highlighting in particular the apparent paradoxical role of involvement. We suggest that the antecedents giving rise to regulatory fit (e.g., lowered motivation) may differ from its consequences (e.g., increased motivation). Finally, we discuss broader implications of regulatory fit, proposing three possible mechanisms by which regulatory fit may lead to improved health and discussing the degree to which the ‘just right feeling’ may play a role in goal-sustaining experiences related to subjective well-being (e.g., flow).
 
This paper introduces a general, formal treatment of dynamic constraints, i.e., constraints on the state changes that are allowed in a given state space. Such dynamic constraints can be seen as representations of "real world" constraints in a managerial context. The notions of transition, reversible and irreversible transition, and transition relation will be introduced. The link with Kripke models (for modal logics) is also made explicit. Several (subtle) examples of dynamic constraints will be given. Some important classes of dynamic constraints in a database context will be identified, e.g. various forms of cumulativity, non-decreasing values, constraints on initial and final values, life cycles, changing life cycles, and transition and constant dependencies. Several properties of these dependencies will be treated. For instance, it turns out that functional dependencies can be considered as "degenerated" transition dependencies. Also, the distinction between primary keys and alternate keys is reexamined, from a dynamic point of view.
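A small sketch of the flavor of such dynamic constraints (not the paper's formalism): a cumulative, non-decreasing attribute and a life-cycle constraint on status transitions, checked against a proposed state change. The field and state names are hypothetical.

```python
# Illustrative dynamic constraints on order records: a cumulative (non-decreasing)
# attribute and a life-cycle constraint restricting which status transitions are legal.
ALLOWED_STATUS_MOVES = {
    "quoted":    {"quoted", "ordered", "cancelled"},
    "ordered":   {"ordered", "shipped", "cancelled"},
    "shipped":   {"shipped", "invoiced"},
    "invoiced":  {"invoiced"},          # final state: no further change
    "cancelled": {"cancelled"},
}

def transition_allowed(old, new):
    """Return (ok, reasons) for a state change from record `old` to record `new`."""
    reasons = []
    if new["total_invoiced"] < old["total_invoiced"]:
        reasons.append("total_invoiced must be non-decreasing")
    if new["status"] not in ALLOWED_STATUS_MOVES[old["status"]]:
        reasons.append(f"illegal life-cycle move {old['status']!r} -> {new['status']!r}")
    return (not reasons, reasons)

before = {"status": "shipped", "total_invoiced": 100.0}
after = {"status": "cancelled", "total_invoiced": 90.0}
print(transition_allowed(before, after))
# -> (False, ['total_invoiced must be non-decreasing',
#             "illegal life-cycle move 'shipped' -> 'cancelled'"])
```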
 
The substantial failure rate of new packaged goods in test markets has stimulated firms to seek improved methods of pre-test-market evaluation. A set of measurement procedures and models designed to produce estimates of the sales potential of a new packaged good before test marketing is presented. A case application of the system also is discussed.
 
An intuitively appealing decision rule is to allocate a company's scarce marketing resources where they have the greatest long-term benefit. This principle, however, is easier to accept than it is to execute, because long-run effects of marketing spending are difficult to estimate. We address this problem by examining the over-time behavior of market response and marketing spending, and we identify four commonly occurring strategic scenarios: business as usual, hysteresis in response, escalating expenditures, and evolving business practice. We explain and illustrate why each scenario can occur in practice and describe its positive and negative consequences for long-term profitability. When good time-series data on revenue and marketing spending are available, it is possible to apply multivariate persistence measures to identify which of the four strategic scenarios is taking place. We apply these ideas to data from two major companies in the packaged-foods and pharmaceuticals industries. We observe several long-term marketing effects, some with profitable and some with unprofitable consequences, and offer recommendations for each case. We conclude that high-quality databases along with modern time-series methods can be instrumental in extracting vital long-term marketing-effectiveness information from readily available data. Therefore, managing marketing resources with long-run performance in mind need no longer be a pure act of faith on the part of the executive. We hope that this and future work will contribute toward an improved allocation of scarce marketing resources in our companies.
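A hedged sketch of the kind of persistence analysis described, assuming statsmodels is available: unit-root tests flag hysteresis candidates, and a VAR's impulse responses summarize the over-time effect of a spending shock on revenue. The series are simulated and the workflow is illustrative, not the authors' procedure.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
T = 200

# Synthetic weekly series: spending shocks have a carry-over effect on revenue.
spend = 10 + rng.normal(0, 1, T)
revenue = np.zeros(T)
for t in range(1, T):
    revenue[t] = 0.6 * revenue[t - 1] + 0.8 * spend[t] + rng.normal(0, 1)

data = pd.DataFrame({"revenue": revenue, "spend": spend})

# Step 1: unit-root (persistence) diagnostics; a unit root in revenue would signal
# hysteresis, i.e. permanent effects of marketing shocks.
for col in data:
    print(col, "ADF p-value: %.3f" % adfuller(data[col])[1])

# Step 2: fit a VAR and trace the over-time (impulse) response of revenue to a
# one-unit shock in spending; the cumulative response summarizes the long-term effect.
results = VAR(data).fit(maxlags=4, ic="aic")
irf = results.irf(12)
rev_idx = data.columns.get_loc("revenue")
spend_idx = data.columns.get_loc("spend")
cum_effect = irf.irfs[:, rev_idx, spend_idx].sum()
print("cumulative 12-week response of revenue to a spend shock: %.2f" % cum_effect)
```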
 
Argues that the conclusions of the article by J. Jacoby et al (see record 1975-01138-001) on the effects of information load on consumer purchases do not accurately follow from their data, and that their procedures for constructing the experimental conditions and their method of evaluating the quality of S's decisions are questionable. The exploratory nature of the Jacoby article is emphasized.
 
Examined the construct validation results of 70 published data sets, which showed that, on average, traits accounted for less than 50% of the variance in construct measures. Findings raise questions about the application of statistical techniques that assume minimal measurement error or do not properly model systematic measurement error.
 
This volume has been planned to provide a wide range of potential readers with a readily accessible compendium of illustrative descriptions of techniques that have been developed to assess individual construct systems. A variety of these techniques have been developed by theorists and practitioners who have described their work in widely spread outlets. In many instances the techniques have been described incidentally as investigators have reported the results of investigations. The presentations in this volume will also illustrate the constant interplay of applied and theoretical problems that have informed the work of those people who have developed the described technologies. As will be seen, some of the chapters focus primarily on measurement and theoretical considerations. Other chapters are written from an applied perspective. Nonetheless, the activity described herein attests to the investigator's belief that technologies can usefully guide thought about day-to-day psychological functioning. Nine chapters in this volume describe programs the authors have developed for use on microcomputers. (The programs described in a tenth chapter have been used on a large computer, and are not yet translated for microcomputer use.) Each of the authors has worked toward the goal of devising a suite of programs that would allow a quantitative representation of the cognitive system a person would use as he or she processes the input from one or another set of events.
 
Investigated whether the response rate and responses of a group of 238 technically trained professional employees differed between those who received the questionnaire at work and those who received it at home. Response rates were found to be independent of address, and the frequency of significant differences on questionnaire items did not exceed chance levels.
 
Tested the hypothesis that comparative advertisements were processed centrally and noncomparative ads were processed peripherally in 178 students, using a 2-group LISREL model. A 2 × 2 factorial design was used in which the 1st factor was comparative vs noncomparative copy and the 2nd was product attribute vs market standing copy. Results show that positive attitudes toward the ad were a significant predictor of attitude toward the brand (AB) only in the noncomparative case, whereas consistency between AB and the inclination to act in response to the ad (conation) was higher for comparative ads.
 
Suggests 4 weaknesses that could have contributed to the results and conclusions that S. Goodwin and M. Etgar (see record 1981-13788-001) drew from their experiment on comparative advertising. The 4 problems involve confounding of an independent variable, use of the omnibus F test, low statistical power, and failure to check the validity of the message appeal treatments.
 
Investigated the behavior of alternative covariance structure estimation procedures in the presence of nonnormal data. Monte Carlo simulation experiments were conducted with a factorial design involving 3 levels of skewness, 3 levels of kurtosis, and 3 different sample sizes. For normal data, among all the elliptical estimation techniques, elliptical reweighted least squares (ERLS) was equivalent in performance to maximum likelihood (ML) estimates. However, as expected for nonnormal data, parameter estimates were unbiased for ML and the elliptical estimation techniques, whereas the bias in standard errors was substantial for generalized least squares and ML. Among elliptical estimation techniques, ERLS was superior in performance. On the basis of the simulation results, it is recommended that researchers use ERLS for both normal and nonnormal data.
 
Proposes an alternative to the view that rates of repeat purchases are lower after a promotion purchase because of undermining by media-coupon or "cents-off" promotions. The present authors suggest instead that promotions temporarily attract a disproportionate number of households with low purchase probabilities. When the repeat rates of these households are averaged with the repeat rates of those that would have bought the brand even without a promotion, the average rate after a promotion purchase is lower. This effect is demonstrated by means of a numerical example, a closed-form equation for repeat rates, a Monte Carlo purchase simulation, and a logit choice model. An exploratory empirical analysis supports these arguments.
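The numerical-example logic can be reproduced in a few lines: with two segments whose purchase probabilities never change, a promotion draws in more low-probability households, so the average repeat rate after a promotion purchase is mechanically lower. The segment sizes and probabilities below are invented for illustration.

```python
# Two household segments with fixed (unchanging) brand purchase probabilities.
segments = [
    {"name": "loyal",    "share": 0.3, "p_buy": 0.8, "p_buy_on_promo": 0.9},
    {"name": "switcher", "share": 0.7, "p_buy": 0.1, "p_buy_on_promo": 0.5},
]

def repeat_rate(buy_key):
    # Among households that bought in period 1 (weighted by segment size),
    # the period-2 repeat rate equals each segment's ordinary purchase probability.
    buyers = [(s["share"] * s[buy_key], s["p_buy"]) for s in segments]
    total = sum(w for w, _ in buyers)
    return sum(w * p for w, p in buyers) / total

print("repeat rate after non-promotion purchase: %.3f" % repeat_rate("p_buy"))
print("repeat rate after promotion purchase:     %.3f" % repeat_rate("p_buy_on_promo"))
```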
 
Few scholars would dispute the need to quantify and report the reliability of measurement in marketing research. There appears to be considerable confusion, however, about the appropriateness of various reliability measurement techniques. A comparison is made of the results from several alternative techniques applied to a lifestyle questionnaire. One technique rarely used to date in marketing research is suggested as superior to the others on the basis of the likelihood of its assumptions in an actual measurement setting.
 
Examined the ability of prospect theory (D. Kahneman and A. Tversky, 1979) to explain industrial buyer decision behavior and explored the benefits of using organizational climate as one of the factors affecting the decision-framing process of 170 fleet managers. The authors employed a conceptual model of the industrial buying decision process, hypothesizing that factors such as the organizational climate and the buyer's general orientation toward risk affect the decision frame and, subsequently, the buyer's choice. Results for the experimentally manipulated factors support the hypotheses about the way industrial buyers form decision reference points, compare alternatives, and eventually make choices. Results for organizational climate factors, which were measured rather than experimentally manipulated, were less clear.
 
Different streams of research offer seemingly conflicting predictions as to the effects of analyzing reasons for preferences on the attitude-behavior link. Our paper applies these different theoretical accounts to a new product scenario and identifies conditions under which analyzing reasons for brand preferences can increase or decrease the predictive value of reported preferences. Consistent with dual process theories of persuasion, Study 1 finds that reasons analysis increases the link between attitude and behavior, when the measure of behavior closely follows attitude measurement. By contrast, and consistent with research by Wilson and his colleagues on the disruptive effects of reasons analysis, we find that thinking about reasons significantly decreases the attitude-behavior correlation when the observed behavior occurs after a substantial delay. A second study not only replicates this finding, but also suggests that the timing of the reasons task can be an important moderator of...
 
Prior consumer research has demonstrated the ability of promotion and prevention regulatory orientations to moderate a variety of consumer and marketing phenomena, but has used several different scales to measure chronic regulatory focus. This paper assesses five different chronic regulatory focus measures using criteria of theoretical coverage, internal consistency, homogeneity, stability, and predictive ability. The results reveal a lack of convergence among the measures and variation in their performance along these criteria. Specific guidance for choosing a particular measure in regulatory focus research is provided.
 
The authors propose a multicategory brand choice model based on the conceptualization that the intrinsic utility for a brand is a function of underlying attributes, some of which are common across categories. The premise is that household preferences for attributes that are common across categories are likely to be correlated. The model that the authors develop projects the unobserved preferences for attributes to a lower dimensional space of unobserved factors. The factors are interpretable as household "traits" that transcend categories, and they can be used to predict preferences for attributes in new categories. The authors apply the proposed model to household panel data for three closely related snack categories and for two less-related food categories. The authors find strong correlations in preferences for product attributes such as brand names and low fat or fat free. This study demonstrates that these high correlations in product attribute preferences across categories are useful in targeting activities in existing and new categories.
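A toy sketch of the projection idea: household attribute preferences in several categories load on a small number of latent traits, and factor scores recovered from observed categories help predict preferences in a new category. The dimensions, loadings, and noise levels are all illustrative, and the authors' model and estimation are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)
n_households, n_factors = 2000, 2

# Latent household "traits" that transcend categories, plus illustrative loadings of
# attribute preferences on those traits in observed categories and in a new category.
traits = rng.normal(0, 1, (n_households, n_factors))
loadings_observed = rng.normal(0, 1, (n_factors, 6))   # 6 attributes seen in panel data
loadings_new = rng.normal(0, 1, (n_factors, 1))        # e.g., "low fat" in a new category

prefs_observed = traits @ loadings_observed + rng.normal(0, 0.5, (n_households, 6))
prefs_new = traits @ loadings_new + rng.normal(0, 0.5, (n_households, 1))

# Recover household factor scores from observed-category preferences via SVD (PCA).
centered = prefs_observed - prefs_observed.mean(axis=0)
U, S, _ = np.linalg.svd(centered, full_matrices=False)
scores = U[:, :n_factors] * S[:n_factors]

# Use half the households to learn how the new-category preference relates to the
# recovered scores, then predict it out of sample for the other half.
train, test = slice(0, n_households // 2), slice(n_households // 2, None)
X = np.column_stack([np.ones(n_households), scores])
beta = np.linalg.lstsq(X[train], prefs_new[train], rcond=None)[0]
pred = X[test] @ beta
corr = np.corrcoef(pred.ravel(), prefs_new[test].ravel())[0, 1]
print("out-of-sample correlation with the new-category preference: %.2f" % corr)
```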
 
Introduces the marketing community to carryover and backfire effects in surveys by providing the theoretical and empirical definitions and by illustrating how these effects might influence marketing decisions. Two factors related to question order effects (product knowledge and attribute diagnosticity) are examined. 279 undergraduates participated in an experiment examining the effects of these factors. Ss learned information about an unfamiliar brand and completed a survey in which they rated the brand overall and on a single attribute. Results suggest that carryover is likely when respondents are moderately knowledgeable about the product category and the rated attribute is diagnostic for the overall brand evaluation. Backfire is likely when respondents are highly knowledgeable about the product category and the rated attribute is nondiagnostic.
 
Intended to implement a course in multivariate analysis, this book presents the mathematics and behavioral science illustrations of these multivariate techniques: (a) multiple and canonical correlation, (b) multivariate analysis of variance and covariance, (c) multiple-discriminant analysis, (d) classification procedures, and (e) factor analysis. Provided for each procedure are "flow charts for programming any digital computer… [and] FORTRAN-coded, tested, and proven computer programs," where FORTRAN is the IBM coding language acceptable to IBM 704, 709, and 7090 computers. Utility subroutines for matrix inversion are shown in the last chapter, as well as methods for extracting latent roots and vectors used in the computer programs contained in the book.
 
This article examines methods for estimating nonresponse bias. Predictions of the direction of nonresponse bias are evaluated, and estimates are made of the magnitude of this bias. An attempt was made to include all relevant previously published studies.
 
Top-cited authors
Joan Meyers-Levy
  • University of Minnesota Twin Cities
Julie A Ruth
  • Rutgers, The State University of New Jersey
James Wilcox
  • Texas Tech University
Shelby Hunt
  • Texas Tech University
Mark Heitmann
  • University of Hamburg