Article
PDF available

Optimal Selection for Direct Mail

Authors:

Abstract

Direct marketing (mail) is a growing area of marketing practice, yet the academic journals contain very little research on this topic. The most important issue for direct marketers is how to sample targets from a population for a direct mail campaign. Although some selection methods are described in the literature, there does not seem to be a single paper discussing the analytical and statistical aspects involved. The objective of this paper is to introduce a comprehensive methodology for the selection of targets from a mailing list for direct mail. At least theoretically, this methodology leads to more efficient selection procedures than the existing ones. The latter are not based on an optimal selection strategy, whereas we explicitly take the profit function into account. By equating marginal costs and marginal returns, we determine which households should receive a mailing in order to maximize expected profit. In the empirical part we show that our methodology has great predictive accuracy and generates higher net returns than traditional approaches.
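The selection rule sketched here can be made explicit with a standard expected-profit argument; the notation below is an illustrative assumption, not the paper's own (p_i is household i's response probability, m the net margin earned per response, c the unit mailing cost):

    E[\pi_i] = p_i\,m - c, \qquad \text{mail household } i \iff p_i\,m > c \iff p_i > c/m .

The marginal household is the one whose expected return p_i m just equals the mailing cost c, which is the equality of marginal costs and marginal returns referred to in the abstract.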
Chapter
Collaborative work leads to better organizational performance. However, a team leader's view on collaboration does not always match reality. Due to the increased adoption of (online) collaboration systems in the wake of the COVID pandemic, more digital traces of collaboration are available for a wide variety of use cases. These traces allow for the discovery of accurate and objective insights into a team's inner workings. Existing social network discovery algorithms, however, are often not tailored to discovering collaborations. These techniques often take a different view of collaboration, mostly focusing on handover of work, resource profile similarity, or establishing relationships between resources when they work on the same cases or activities without any restrictions. Furthermore, only the frequency with which patterns appear is typically used as a measure of interestingness, which limits the kind of insights one can discover. We therefore propose an algorithm to discover collaborations from event data using a more realistic approach than basing collaboration on the sequence of resources that carry out activities for the same case. In addition, a new research path is explored by adopting the Recency-Frequency-Monetary (RFM) concept, used in marketing research to assess customer value, to value both the resources and the collaborations along these three dimensions. Our approach and the benefits of adopting RFM to gain insights are empirically demonstrated on a use case of collaboratively developing a curriculum.
Preprint
Full-text available
Actionable Knowledge Discovery (AKD) is a crucial aspect of data mining that is gaining popularity and being applied in a wide range of domains. This is because AKD can extract valuable insights and information, also known as knowledge, from large datasets. The goal of this paper is to examine different research studies that focus on various domains and have different objectives. The paper will review and discuss the methods used in these studies in detail. AKD is a process of identifying and extracting actionable insights from data, which can be used to make informed decisions and improve business outcomes. It is a powerful tool for uncovering patterns and trends in data that can be used for various applications such as customer relationship management, marketing, and fraud detection. The research studies reviewed in this paper will explore different techniques and approaches for AKD in different domains, such as healthcare, finance, and telecommunications. The paper will provide a thorough analysis of the current state of AKD in the field and will review the main methods used by various research studies. Additionally, the paper will evaluate the advantages and disadvantages of each method and will discuss any novel or new solutions presented in the field. Overall, this paper aims to provide a comprehensive overview of the methods and techniques used in AKD and the impact they have on different domains.
Chapter
Analyzing and predicting customer behavior, and aligning business strategies and marketing activities accordingly, has become more unavoidable for companies than ever before. In this context, customer segmentation has become a necessary activity for firms around the world. This study performs customer segmentation using the invoice data of an e-commerce company in Turkey. Segmentation is carried out by applying the RFM (Recency, Frequency, Monetary) model, one of the most widely used models for identifying valuable customers. In addition, clustering methods are applied to the data derived from the RFM model, and the characteristics of each resulting customer group are analyzed. For this purpose, the K-Means and Fuzzy C-Means algorithms, the most widely used in the literature, were selected. Finally, using the Silhouette and Dunn indexes, the best-performing algorithm and the optimum number of clusters for this e-commerce company in Turkey are provided as input for strategy.
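As a rough illustration of the pipeline described here (not the study's actual code), the sketch below builds RFM features from invoice data and clusters customers with K-Means, choosing the number of clusters by the silhouette score; scikit-learn ships neither Fuzzy C-Means nor the Dunn index, so those parts of the study are omitted. The column names CustomerID, InvoiceDate, and Amount are assumptions.

    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score
    from sklearn.preprocessing import StandardScaler

    def rfm_kmeans(invoices: pd.DataFrame, max_k: int = 8):
        """Cluster customers on RFM features; expects columns CustomerID, InvoiceDate, Amount."""
        snapshot = invoices["InvoiceDate"].max()
        rfm = invoices.groupby("CustomerID").agg(
            recency=("InvoiceDate", lambda d: (snapshot - d.max()).days),
            frequency=("InvoiceDate", "count"),
            monetary=("Amount", "sum"),
        )
        X = StandardScaler().fit_transform(rfm)
        best_k, best_score, best_labels = None, -1.0, None
        for k in range(2, max_k + 1):                 # choose k by the silhouette criterion
            labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
            score = silhouette_score(X, labels)
            if score > best_score:
                best_k, best_score, best_labels = k, score, labels
        rfm["cluster"] = best_labels
        return rfm, best_k, best_score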
Article
Cosmic.id is a business in the IoT industry. Established in 2018, it sells its products via e-commerce (Tokopedia). Cosmic.id has three product categories: in-house products (LoRa and development boards), resale products (resistors, connectors, capacitors, etc.), and custom products or services (bridge monitoring systems, temperature sensors, etc.). Currently, Cosmic.id serves two customer segments, B2C and B2B. Both segments are profitable, but because internal resources are limited, Cosmic.id has to decide which segment to focus on so that the company can earn more revenue and keep that revenue stable each month. The research objective of this thesis is to identify the ideal business model for Cosmic.id, B2B or B2C, so that the company can focus on growing its revenue. A further purpose is to examine whether the existing Unique Value Proposition is still relevant to the target model and what needs to be fixed. The research is a qualitative study based on interviews with existing and prospective customers, both B2C and B2B. The research found that the B2B segment offers a large opportunity for Cosmic.id's revenue streams: it could generate more revenue for the business, but some effort is required to reach that market. In addition, some parts of the existing Unique Value Proposition are no longer relevant to the current situation. The outcome of this study is a focus on B2B customers and a new Unique Value Proposition, which will be applied in the business and is expressed as OKRs and KPIs in accordance with customer needs. It is hoped that Cosmic.id and its team will do their best to provide the products the market really needs and apply the new Unique Value Proposition for both B2B and B2C customers, so that the company will have stable revenue and can be sustained in the future. It is also hoped that Cosmic.id can contribute to IoT development in Indonesia, since there are not many LoRa manufacturers in the country.
Article
Full-text available
An e-commerce website provides a platform for merchants to sell products to customers. Most existing research focuses on providing customers with personalized product suggestions through recommender systems. In this paper, we consider the role of merchants and introduce a parallel problem: how to select the most valuable customers for a merchant? Accurately answering this question not only helps merchants gain more profit but also benefits the ecosystem of e-commerce platforms. In this paper, customer lifetime value (CLV) is used for customer segmentation of a retail store data set. The RFM approach is used to segment customers, and clustering is performed with the K-means algorithm. CLV is then calculated from the RFM results for each segment. The resulting CLV values for the different segments can be used to inform the company's marketing and sales strategies.
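The abstract does not state how CLV is computed from the RFM output; one common shortcut in this literature, given here purely as an assumed illustration, is a weighted RFM score used as a CLV proxy for each segment s:

    \mathrm{CLV}_s \approx w_R\,\bar{R}_s + w_F\,\bar{F}_s + w_M\,\bar{M}_s, \qquad w_R + w_F + w_M = 1,

where \bar{R}_s, \bar{F}_s, \bar{M}_s are the segment's normalized average recency, frequency, and monetary scores and the weights reflect their relative importance.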
Article
When acquiring consumer data for marketing or new business initiatives, it is important to decide what attributes or features of potential customers should be acquired. We study a new feature selection problem in the context of customer data acquisition in which different features have different acquisition costs. This feature selection problem is studied for linear regression and logistic regression. We formulate the feature selection and acquisition problems as nonlinear discrete optimization problems that minimize prediction errors subject to a budget constraint. We derive the analytical properties of the solutions for the problems, develop a computational procedure for solving the problems, provide an intuitive interpretation for the feature selection criteria, and discuss managerial implications of the solution approach. The results of the experimental study demonstrate the effectiveness of our approach. This paper was accepted by Kartik Hosanagar, information systems.
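Schematically (a restatement for illustration, not the paper's exact formulation), the budgeted feature-acquisition problem is

    \min_{S \subseteq \{1,\dots,p\}} \; \mathrm{Err}\big(\hat{f}_S\big) \quad \text{subject to} \quad \sum_{j \in S} c_j \le B,

where \hat{f}_S is the linear or logistic regression fitted on feature subset S, \mathrm{Err} its prediction error, c_j the acquisition cost of feature j, and B the data-acquisition budget.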
Article
The current study aims to illustrate the usefulness of customer data in the retail industry, and in particular in the supermarket sector. This piece of work tries to fill the gap in the literature regarding the application of database marketing techniques to real-life examples, and to serve as a starting point for further research on the Cypriot retail context. The main objective is to analyse the customer database of a supermarket chain based in Cyprus in order to segment its customers into homogeneous groups and then identify the most valuable customers. RFM analysis was employed to segment the customer database and score each customer group according to its Recency, Frequency, and Monetary values. The findings suggest that the most valuable group consisted of 3,657 customers. These customers account for more than 34% of total gross sales while comprising more than 10% of all cardholders. A number of other interesting findings are also discussed, such as other valuable segments, middle-ranked segments, and the least valuable customers.
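RFM scoring of this kind typically bins each dimension into quantiles; the sketch below, an assumed illustration rather than the study's actual procedure, assigns 1-5 scores with pandas and concatenates them into a segment label. The column names recency_days, n_purchases, and total_spend are hypothetical.

    import pandas as pd

    def rfm_scores(customers: pd.DataFrame) -> pd.DataFrame:
        """Add R, F, M quintile scores (5 = best) and a combined RFM segment label."""
        out = customers.copy()
        # Lower recency is better, so the quintile labels for R are reversed.
        out["R"] = pd.qcut(out["recency_days"].rank(method="first"), 5, labels=[5, 4, 3, 2, 1]).astype(int)
        out["F"] = pd.qcut(out["n_purchases"].rank(method="first"), 5, labels=[1, 2, 3, 4, 5]).astype(int)
        out["M"] = pd.qcut(out["total_spend"].rank(method="first"), 5, labels=[1, 2, 3, 4, 5]).astype(int)
        out["segment"] = out["R"].astype(str) + out["F"].astype(str) + out["M"].astype(str)
        return out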
Article
Full-text available
The authors examine the effects of a manufacturer coupon on brand choice behavior. The level of coupon redemption and changes in brand choice behavior after redemption are examined as a function of the household's prior probability of purchasing the promoted brand, likelihood of buying a favorite competitive brand, and coupon face value. A model of the coupon redemption decision is developed to predict response to the coupon promotion by different consumer segments. Predictions from the model are tested by using scanner panel data from a field experiment on coupon face values. Coupon redemption rates are found to be much higher among households that have purchased the brand on a regular basis in the past. The results also suggest that most consumers revert to their precoupon choice behavior immediately after their redemption purchase. These and other findings have important implications for the profitability of coupon promotions.
Article
Full-text available
Several authors have noted that the profitability of a coupon promotion depends on the incremental sales generated by the coupon. However, most prior research on coupon promotions has focused on redemption rates and little is known about the characteristics of households that make incremental purchases. The authors develop and test several hypotheses about the characteristics of households that make incremental purchases in response to a direct mail coupon promotion. For the product tested, coupons produced greater incremental sales among households that were larger, more educated, and were homeowners. The findings suggest that directing coupons to the most responsive market segments can increase profits significantly.
Article
Full-text available
Simulated annealing is a stochastic strategy for searching the ground state. A fast simulated annealing (FSA) is a semi-local search and consists of occasional long jumps. The cooling schedule of the FSA algorithm is inversely linear in time which is fast compared with the classical simulated annealing (CSA) which is strictly a local search and requires the cooling schedule to be inversely proportional to the logarithmic function of time. A general D-dimensional Cauchy probability for generating the state is given. Proofs for both FSA and CSA are sketched. A double potential well is used to numerically illustrate both schemes.
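For reference, the two cooling schedules contrasted here have the standard forms (T_0 the initial temperature, k the annealing step), and the D-dimensional Cauchy visiting distribution used to generate new states is usually written as follows:

    T^{\mathrm{CSA}}_k = \frac{T_0}{\ln(1+k)}, \qquad T^{\mathrm{FSA}}_k = \frac{T_0}{1+k}, \qquad g_T(\Delta x) \propto \frac{T}{\big(\lVert\Delta x\rVert^2 + T^2\big)^{(D+1)/2}} .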
Article
Full-text available
In many regression applications, users are often faced with difficulties due to nonlinear relationships, heterogeneous subjects, or time series which are best represented by splines. In such applications, two or more regression functions are often necessary to best summarize the underlying structure of the data. Unfortunately, in most cases, it is not known a priori which subset of observations should be approximated with which specific regression function. This paper presents a methodology which simultaneously clusters observations into a preset number of groups and estimates the corresponding regression functions' coefficients, all to optimize a common objective function. We describe the problem and discuss related procedures. A new simulated annealing-based methodology is described as well as program options to accommodate overlapping or nonoverlapping clustering, replications per subject, univariate or multivariate dependent variables, and constraints imposed on cluster membership. Extensive Monte Carlo analyses are reported which investigate the overall performance of the methodology. A consumer psychology application is provided concerning a conjoint analysis investigation of consumer satisfaction determinants. Finally, other applications and extensions of the methodology are discussed.
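In the simplest (nonoverlapping, univariate) case, the problem described here can be written as the joint optimization

    \min_{\{C_1,\dots,C_K\},\;\beta_1,\dots,\beta_K} \; \sum_{k=1}^{K} \sum_{i \in C_k} \big( y_i - x_i^{\top}\beta_k \big)^2 ,

where the partition {C_k} of the observations and the cluster-specific coefficients \beta_k are chosen together, and simulated annealing is used to search over cluster assignments. (The notation is illustrative, not the article's own.)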
Article
Full-text available
When the binary choice probability model is derived from a random utility maximization model, the choice probability for one alternative has the form F[V(z, θ)]. Here V(z, θ) is a given function of the exogenous variables z and unknown parameters θ, representing the systematic component of the utility difference, and F is the distribution function of the random component of the utility difference. This paper describes a method of estimating the parameters θ without assuming any functional form for the distribution function F, and proves that this estimator is consistent. F is also consistently estimated. The method uses maximum likelihood estimation in which the likelihood is maximized not only over the parameter θ but also over a space which contains all distribution functions.
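Written out in symbols, with P(y_i = 1 | z_i) = F(V(z_i, \theta)), the estimator maximizes the binary-response likelihood over both arguments:

    \max_{\theta,\;F\in\mathcal{F}} \; \sum_{i=1}^{n} \Big[ y_i \ln F\big(V(z_i,\theta)\big) + (1-y_i)\ln\big(1 - F(V(z_i,\theta))\big) \Big],

where \mathcal{F} is the set of all distribution functions; this restates the abstract rather than reproducing the paper's own notation.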
Article
In this paper we are concerned with estimation of a classification model using semiparametric and parametric methods. Benefits and limitations of semiparametric models in general, and of Manski's maximum score method in particular, are discussed. The maximum score method yields consistent estimates under very weak distributional assumptions. The maximum score method can very easily be used in situations where it is more serious to make one kind of classification error than another. In this paper, we use a so-called threshold-crossing model to discriminate between credit card holders and nonholders. The estimated parameters of the logit model differ significantly from the estimates of maximum score. Given an asymmetric loss function, maximum score performs better than the logit model.
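For concreteness, the maximum score estimator for a threshold-crossing model y_i = 1{x_i'\beta + \epsilon_i \ge 0} maximizes the number of correctly classified observations; with asymmetric misclassification costs the two error types receive different weights. One standard way to write this (not necessarily the paper's notation) is

    \hat{\beta} = \arg\max_{\beta:\;\lVert\beta\rVert = 1} \; \sum_{i=1}^{n} \Big[ w_1\, y_i\, \mathbf{1}\{x_i^{\top}\beta \ge 0\} + w_0\,(1-y_i)\, \mathbf{1}\{x_i^{\top}\beta < 0\} \Big],

where the ratio w_1/w_0 reflects how much more costly it is to misclassify a cardholder than a nonholder, or vice versa.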
Article
In this paper an approach is developed that accommodates heterogeneity in Poisson regression models for count data. The model developed assumes that heterogeneity arises from a distribution of both the intercept and the coefficients of the explanatory variables. We assume that the mixing distribution is discrete, resulting in a finite mixture model formulation. An EM algorithm for estimation is described, and the algorithm is applied to data on customer purchases of books offered through direct mail. Our model is compared empirically to a number of other approaches that deal with heterogeneity in Poisson regression models.
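The model can be written compactly: each customer belongs to one of K latent segments, each with its own Poisson regression, so the marginal probability of a count y_i given covariates x_i is

    P(y_i \mid x_i) = \sum_{k=1}^{K} \pi_k\, \frac{\lambda_{ik}^{\,y_i} e^{-\lambda_{ik}}}{y_i!}, \qquad \lambda_{ik} = \exp\big(x_i^{\top}\beta_k\big), \qquad \sum_{k=1}^{K} \pi_k = 1,

with the mixing proportions \pi_k and segment-level coefficients \beta_k estimated by the EM algorithm. (The notation is a standard restatement, not copied from the article.)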
Article
The multivariable techniques most prevalent in direct marketing research today for modeling response to a mailing (responders versus non-responders) are multiple regression, discriminant analysis, and AID (abbreviations are explained in the Introduction). This article shows why these traditional approaches may provide erroneous and misleading results for direct marketing use, which tend to be corrected by newer, more statistically appropriate methods, known as CHAID, logit, and log-linear models. We conclude with an application of the CHAID technique that resulted in a substantial lift in response for an Amoco Oil Company promotion to its credit card file.
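Of the methods named here, the logit model for mailing response is the easiest to state: with household characteristics x_i,

    P(\text{response}_i = 1 \mid x_i) = \frac{\exp(x_i^{\top}\beta)}{1 + \exp(x_i^{\top}\beta)},

which, unlike ordinary least squares applied to a 0/1 outcome, keeps predicted response probabilities between 0 and 1. (This is a generic illustration, not an equation taken from the article.)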
Article
A general customer purchase model is presented for segmenting direct marketing customer files for direct mailings. The model specifies the functional relationships among the traditional recency, frequency, and monetary value (RFMV) variables. Because of the generic nature of the model, it can be used by different direct marketing businesses for a variety of different customer mailings. Strategic implications of the model are discussed for: a) defining active versus inactive customers, b) selecting customers for special mailings, c) comparing multiple customer lists for a single company, d) an overall company purchase score for a company with multiple programs/customer files, and e) product purchase scores for special product offers.
Article
This paper reports on the operational characteristics of maximum score estimation of a linear model from binary response data. A series of previous articles have shown that in theory the maximum score method makes possible binary response analysis under very weak distributional assumptions. Here, we present evidence on the properties of maximum score estimation in practice. After reviewing the known asymptotic theory of maximum score estimation, the paper describes an algorithm for maximum score estimation and characterizes its performance. Then findings from a Monte Carlo study comparing maximum score and logit maximum likelihood estimation are reported. Finally, the accuracy of bootstrap estimation of maximum score root mean square errors is evaluated.
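The paper's own algorithm and Monte Carlo design are not reproduced here, but the mechanics of maximum score estimation and of bootstrapping its sampling error can be sketched in a few lines: for a two-parameter model the score (the number of correctly predicted responses) can simply be maximized over a grid on the unit circle, and the bootstrap root-mean-square error is computed around the full-sample estimate. Everything below, including the data-generating process, is a hypothetical illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def max_score(X, y, n_grid=720):
        """Maximum score estimate for a two-column design, with beta normalized to the unit circle."""
        angles = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
        betas = np.column_stack([np.cos(angles), np.sin(angles)])
        preds = (X @ betas.T >= 0)                    # n x n_grid matrix of predicted choices
        scores = (preds == y[:, None]).sum(axis=0)    # correct classifications per candidate beta
        return betas[np.argmax(scores)]

    def bootstrap_rmse(X, y, n_boot=200):
        """Bootstrap RMSE of the maximum score estimate, centered at the full-sample estimate."""
        b_hat = max_score(X, y)
        n = len(y)
        draws = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)          # resample observations with replacement
            draws.append(max_score(X[idx], y[idx]))
        return np.sqrt(((np.array(draws) - b_hat) ** 2).mean(axis=0))

    # Hypothetical data-generating process: logistic errors, true beta proportional to (0.5, 1.0).
    n = 500
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = (X @ np.array([0.5, 1.0]) + rng.logistic(size=n) >= 0).astype(int)
    print("estimate:", max_score(X, y), "bootstrap RMSE:", bootstrap_rmse(X, y))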