Artificial Intelligence (AI) is shaping marketing in unprecedented ways. Empowered by AI, voice assistants are increasingly capable of speaking and listening like humans, offering a great opportunity for a new marketing approach: voice marketing. This research examines how the conversation attributes of voice assistants determine consumer trust and intention to engage in voice shopping. Using a sequential mixed-method design, three studies consistently show that consumers perceive the speaking attribute of voice assistants as more human-like than the listening attribute. We find that such incongruence between the two conversation attributes can undermine consumers' trust in voice assistants, reducing their willingness to accept product recommendations from voice assistants and to shop via voice assistants, which would hamper the development of voice marketing. Accordingly, this research suggests that AI giants with strong technological capabilities and capital support should allocate more resources to advancing the underlying technologies that enable human-like listening (e.g., natural language understanding and voice recognition). AI startups with limited financing and technical talent, by contrast, may consider moderately reducing investment in the underlying technologies that enable human-like speaking (e.g., natural language generation and voice synthesis) to improve the congruence between the conversation attributes of voice assistants.
The present article contributes to the theory of Business Model Innovation by incumbent firms via digital servitization. Our research explores the conditions affecting manufacturers' ability to innovate their business models by developing and supplying advanced, digitally-based services. The authors performed a Qualitative Comparative Analysis via a qualitative investigation of the novel business models adopted by 19 Italian small- and medium-sized incumbent manufacturers. Our study found a series of theoretically relevant causal factors for the targeted outcome variable: size and investments, customer intimacy, and external service suppliers are crucial paths for developing successful digitally-based advanced services. The findings suggest three managerial implications: first, managers must capitalise on corporate knowledge and assets, mapping and leveraging useful people and technologies. Second, they should seek external service providers related to technology and strategy/organisation to help them update the value proposition. Third, they must build and foster customer intimacy and capitalise on key customers, either leveraging the extant sales/field service structures or envisioning new direct data exchange channels.
We introduce a new model to address three methodological biases in research on new venture growth and survival. The biases are identified through a systematic review of 96 papers using longitudinal data published over a period of 20 years. They are: (1) the distributional properties of new ventures; (2) selection bias; and (3) causal asymmetry. These biases make the popular use of normal distribution models problematic. As a potential solution, we introduce and test an event magnitude regression model (EMM) approach, which offers entrepreneurship scholars numerous benefits. In this two-stage model, the first stage estimates the probability of four events: a firm staying the same size, expanding, contracting, or exiting. In the second stage, if the firm contracts or expands, we estimate the magnitude of the change. A key benefit is that researchers can better separate the likelihood of an event from its magnitude, thereby opening new avenues for research. We illustrate our model by analyzing an example data set of longitudinal venture-level data, and we provide a new package for the statistical software R. Our findings show that EMM outperforms the widely adopted normal distribution model. We discuss the benefits and consequences of our model, identify areas for future research, and offer recommendations for research practice.
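The two-stage logic described above can be sketched as follows. This is a minimal illustration on synthetic data with invented variable names, not the authors' R package: the first stage here uses simple empirical event shares rather than a full multinomial model with covariates, and the second stage is an ordinary least-squares regression of change magnitude on covariates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic venture-year panel (hypothetical variables, for illustration only).
n = 1000
age = rng.integers(1, 15, n)                       # venture age in years
# Event codes: 0 = same size, 1 = expand, 2 = contract, 3 = exit
event = rng.choice(4, size=n, p=[0.4, 0.3, 0.2, 0.1])

# Stage 1: estimate event probabilities (here, empirical shares; a full EMM
# would model them as a function of covariates).
p_hat = np.bincount(event, minlength=4) / n

# Stage 2: conditional on expansion or contraction, regress the magnitude
# of the size change on covariates via ordinary least squares.
mask = np.isin(event, [1, 2])
magnitude = np.abs(rng.normal(0.1 * age[mask], 1.0))  # synthetic |size change|
X = np.column_stack([np.ones(mask.sum()), age[mask]])
beta, *_ = np.linalg.lstsq(X, magnitude, rcond=None)

print("event probabilities:", np.round(p_hat, 3))
print("magnitude coefficients (intercept, age):", np.round(beta, 3))
```

Separating the two stages means the event probabilities and the conditional magnitudes can be studied (and driven by covariates) independently, which is the core of the approach the abstract describes.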
Warehouses are becoming increasingly robotized. Autonomous rack-climbing robots have recently been introduced in e-commerce fulfillment centers. These robots not only retrieve loads from any level of a rack but also roam the warehouse and bring the loads to order picking stations without using conveyors or lifts. This paper models and analyzes this system under both single and dual commands with different robot assignment (dedicated versus shared) and storage location assignment (class-based versus random) policies. We study these policies in the presence of robot congestion and evaluate the impact of two blocking protocols, a wait-outside-aisle policy and a block-and-recirculate policy, on the order throughput time. The system is modeled using semiopen queuing networks (SOQNs) for the different operating policies, and the analytical models are validated using simulation. We also use this model to compare the system with a shuttle-based system. The results show that (1) the choice between the wait-outside-aisle policy and the block-and-recirculate policy mainly depends on the number of robots in the system and the throughput requirement and that (2) the dedicated robot assignment policy can be attractive, especially for a large system.
The automated warehouses currently in widespread use maintain highly efficient operations at the cost of consuming large amounts of energy. This study considers the problem of operation optimization in multi-shuttle automated storage and retrieval systems (AS/RSs) to reduce both time and energy consumption. A time-and-energy bi-objective integer programming model is proposed to jointly optimize storage/retrieval (S/R) location assignment and S/R scheduling for a multi-shuttle AS/RS. An adaptive variable neighbourhood search algorithm and a Lagrangian relaxation algorithm are developed to solve large instances. The results demonstrate that the bi-objective model is effective at finding compromise solutions that consider travel time and energy consumption simultaneously: a decrease in travel time is obtained at the expense of increased energy consumption, and vice versa. The study provides valuable insights for warehouse managers seeking to operate a time- and energy-efficient automated warehouse.
We analyze the impact of risk aversion and ambiguity aversion on the competing demands for annuities and bequeathable savings using a lifecycle recursive utility model. Our main finding is that risk aversion and ambiguity aversion have similar effects: an increase in either of the two reduces annuity demand and increases bond holdings. We obtain this unequivocal result in the flexible intertemporal framework of Hayashi and Miao (2011) by assuming that the agent’s preferences are monotone with respect to first-order stochastic dominance. Our contribution is thus twofold. First, from a decision-theoretic point of view, we show that monotonicity allows one to obtain clear-cut results about the respective roles of risk and ambiguity aversion. Second, from an insurance point of view, our finding that the demand for annuities decreases with risk and ambiguity aversion stands in contrast to what is usually found for other insurance products. As such, it may help explain the low annuitization levels observed in the data.
Autonomous robots have been increasingly used in warehouses over the past decade, owing to their flexible throughput capacity and low operating cost. Sorting may be the newest warehouse application in which autonomous robots are adopted. We consider a robotic sorting system with a two-tier layout in which robots drive on the top mezzanine and sort parcels from loading stations (inputs) to drop-off points (outputs) via spiral conveyors connected to roll containers at the lower tier. We investigate the optimal assignment of arriving parcels to loading stations so as to minimize the system throughput time. We first build an open queueing network to estimate system performance and validate its accuracy by simulation. We then formulate an integer programming model that minimizes the throughput time. We establish the computational complexity of the model by transforming it into an order batching problem and design a Tabu search algorithm to solve it. We evaluate the efficiency of the algorithm through numerical experiments, using the Gurobi solver and the random and closest assignment rules as benchmarks, and through a real case study. The results show that our algorithm can reduce the system throughput time by 7.09% and 8.76% and lower the manual cost by 11.99% and 17.50% relative to the random and the closest assignment rule, respectively. Moreover, it outperforms the Gurobi solver on large instances in terms of throughput time. The real case study shows that the system throughput time and the manual cost can be reduced by about 25% and 16%, respectively, compared with the assignment rule used in practice.
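The abstract does not specify the Tabu search itself, so the sketch below is a generic, minimal version of the idea: assign parcels to loading stations while forbidding recently reversed moves, using the busiest station's total handling time as a crude proxy for throughput time. All data, parameters, and the cost proxy here are invented for illustration.

```python
import random

def tabu_assign(parcel_times, n_stations, iters=200, tabu_len=10, seed=0):
    """Minimal Tabu search: assign parcels to stations so that the busiest
    station's total handling time (a crude throughput proxy) is minimized."""
    rng = random.Random(seed)
    assign = [rng.randrange(n_stations) for _ in parcel_times]

    def cost(a):
        loads = [0.0] * n_stations
        for t, s in zip(parcel_times, a):
            loads[s] += t
        return max(loads)

    best, best_cost = assign[:], cost(assign)
    tabu = []  # recently vacated (parcel, station) pairs, temporarily forbidden
    for _ in range(iters):
        # Neighborhood: move one parcel to another, non-tabu station.
        moves = [(i, s) for i in range(len(parcel_times))
                 for s in range(n_stations)
                 if s != assign[i] and (i, s) not in tabu]
        if not moves:
            break
        cand = []
        for i, s in rng.sample(moves, min(50, len(moves))):
            old = assign[i]
            assign[i] = s
            cand.append((cost(assign), i, s, old))
            assign[i] = old
        c, i, s, old = min(cand)       # best sampled move (may worsen cost)
        assign[i] = s
        tabu.append((i, old))          # forbid moving parcel i straight back
        if len(tabu) > tabu_len:
            tabu.pop(0)
        if c < best_cost:
            best, best_cost = assign[:], c
    return best, best_cost

parcels = [3.0, 1.0, 4.0, 1.5, 5.0, 2.0, 2.5, 3.5]  # handling times (invented)
plan, makespan = tabu_assign(parcels, n_stations=3)
print(plan, makespan)
```

The tabu list lets the search accept non-improving moves without immediately cycling back, which is what distinguishes Tabu search from plain local search; the paper's actual algorithm, neighborhood, and objective are more elaborate.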
The aim of this paper is to study the performance of carbon-based portfolios when all emissions scopes are accounted for. We formalize low-carbon mean-variance portfolio strategies by integrating a carbon penalty into a constrained mean-variance optimization framework. We use direct and indirect emissions, split between Scopes 1–2 and Scope 3, across geographical zones (Europe and the US) and data providers (Refinitiv and Carbon4 Finance). Our results show that it is possible to cut emission intensities at least in half with virtually no loss in Sharpe ratio for reasonable levels of the carbon constraint. These results hold across various choices of risk aversion and irrespective of the emissions data provider. For a sustainability-aware investor, these low-carbon portfolios are associated with a higher level of welfare. We find that the corresponding allocations are shifted towards assets with higher returns while keeping the portfolio's volatility unchanged. Our results add to the literature contending that sustainable investing is not costly.
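A carbon-penalized mean-variance program of this kind can be sketched as below. The numbers are toy values, not tied to Refinitiv or Carbon4 Finance data, and the linear penalty form, risk aversion, and penalty strength are assumptions for illustration; the paper's actual constrained formulation may differ.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative inputs: expected returns, covariance, carbon intensities.
mu = np.array([0.06, 0.08, 0.05, 0.07])
Sigma = np.diag([0.04, 0.09, 0.02, 0.06])   # diagonal toy covariance
carbon = np.array([0.9, 0.2, 1.5, 0.4])     # Scope 1-3 emission intensities
lam = 4.0                                   # risk aversion

def solve(gamma):
    """Maximize mu'w - (lam/2) w'Sigma w - gamma * carbon'w on the simplex."""
    neg_obj = lambda w: -(mu @ w - 0.5 * lam * w @ Sigma @ w
                          - gamma * carbon @ w)
    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]
    res = minimize(neg_obj, np.full(4, 0.25), bounds=[(0.0, 1.0)] * 4,
                   constraints=cons, method="SLSQP")
    return res.x

w_plain = solve(0.0)    # standard mean-variance portfolio
w_green = solve(0.05)   # carbon-penalized portfolio
print("carbon intensity, no penalty:", round(float(carbon @ w_plain), 3))
print("carbon intensity, penalized: ", round(float(carbon @ w_green), 3))
```

By construction, raising the penalty weight cannot increase the optimal portfolio's carbon intensity, which mirrors the abstract's point that emission intensities can be reduced while the rest of the mean-variance trade-off is largely preserved.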
We attempt to replicate a seminal paper that offered support for the rational expectations hypothesis and reported evidence that markets with certain features aggregate dispersed information. The original results are based on only a few observations, and our attempt to replicate the key findings with an appropriately powered experiment largely fails. The resulting poststudy probability that market performance is better described by rational expectations than the prior information (Walrasian) model under the conditions specified in the original paper is very low. As a result of our failure to replicate, we investigate an alternate set of market features that combines aspects of the original experimental design. For these markets, which include both contingent claims and homogeneous dividend payments (as in many prediction markets), we do find robust evidence of information aggregation in support of the rational expectations model. In total, our results indicate that information aggregation in asset markets is fragile and should only be expected in limited circumstances. This paper was accepted by Bruno Biais, finance.
I define “organized numbness” as the organized inability to perceive sensations, a learned desensitization operating in the way our (1) bodies, (2) language, and (3) knowledge are organized. I propose poetic synesthesia’s power to associate several sensory perceptions as a way to unlearn this sort of disembodied habituation. Inspired by the so-called “accursed” French poets of the 19th century, the “long, prodigious, and rational disorganization of all the senses” of synesthesia helps me propose a method for unlearning organized numbness. I illustrate this by “a study in scarlet,” that is, by plunging into the depths of a synesthetic exploration of blood as my “fil rouge” to infuse our working bodies with renewed sensorial and embodied—or rather “embloodied”—life. I end by discussing how cultivating poetic synesthesia can help us unlearn organized numbness in the body, in language, and in knowledge, and how it can instead respectively foster resonance by learning (1) a different embodied habituation of sensorial sensitivity, (2) a language that instead of abstracting us from the senses actually allows us to reconnect with them and to delve deeply into their combined and thereby potentiated power, and (3) an epistemological gateway to the “unknown.”
Data-driven innovation enables firms to design products that are more responsive to market needs, which greatly reduces the risk of innovation. Customer data within the same supply chain share certain commonalities, but data separation makes it difficult to maximize data value. The selection of an appropriate mode of cooperative innovation should be based on the firms' particular big data analytics capabilities. This paper focuses on the influence of big data analytics capability on the choice of cooperation mode, and on how the match between them affects cooperation performance. Specifically, using game-theoretic models, we examine two cooperation modes, in which data analytics is implemented individually by either firm (loose cooperation) or jointly by both firms (tight cooperation), and further examine the addition of coordination contracts under the loose mode. Several important conclusions are obtained. First, both firms' big data capabilities have a positive effect on the selection of the tight cooperation mode. Second, as big data capability improves, the gap in innovative performance between the loose and tight modes increases significantly. Finally, when capabilities meet certain conditions, a cost subsidy contract can narrow the gap between the two cooperation modes.