
An Experimental Study of Competitive Market Behavior

... Early empirical work found ambiguous evidence on whether assignment markets effectively converge. Chamberlin (1948) was pessimistic, motivating Smith (1962) to conduct experiments in markets for homogeneous goods. After as few as three trading periods, prices were close to the market-clearing price. ...
... Price discovery (Chamberlin, 1948; Smith, 1962) and, more generally, the question of decentralized dynamics leading to equilibrium (Hayek, 1945) have been studied by economists for at least a century. With more and more marketplaces moving online, this topic has sparked renewed interest. ...
We study the dynamics of price discovery in decentralized two-sided markets. We show that there exist memoryless dynamics converging to the core of the underlying assignment game, in which agents' actions depend only on their current payoff. However, we show that for any such dynamic the convergence time can grow exponentially in the population size. We present a natural dynamic in which a player's reservation value provides a summary of his past information, and we show that this dynamic converges to the core in polynomial time in homogeneous markets.
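The flavour of such a reservation-value dynamic can be sketched in a few lines. The following is our own toy illustration, not the paper's construction: each agent keeps a single reservation value summarising past experience and nudges it after every encounter (conceding when no trade occurs, pushing for better terms after a trade), which contracts prices toward a common level in a homogeneous market.

```python
import random

def reservation_dynamic(n=50, rounds=5000, step=0.1, seed=1):
    """Toy reservation-value dynamic in a homogeneous market.

    Each agent's state is one number: a buyer's maximum acceptable price
    or a seller's minimum acceptable price. Returns the spread of all
    reservation values before and after the dynamic runs.
    """
    rng = random.Random(seed)
    buyers = [rng.random() for _ in range(n)]   # max price each buyer will pay
    sellers = [rng.random() for _ in range(n)]  # min price each seller accepts

    def spread():
        vals = buyers + sellers
        return max(vals) - min(vals)

    start = spread()
    for _ in range(rounds):
        b, s = rng.randrange(n), rng.randrange(n)
        if buyers[b] >= sellers[s]:
            # trade at the midpoint; each side tries for better terms next time
            price = (buyers[b] + sellers[s]) / 2
            buyers[b] -= step * (buyers[b] - price)
            sellers[s] += step * (price - sellers[s])
        else:
            # no trade: both sides concede toward each other
            gap = sellers[s] - buyers[b]
            buyers[b] += step * gap
            sellers[s] -= step * gap
    return start, spread()
```

Every pairwise encounter strictly shrinks the gap between the two agents involved, so the population's reservation values herd toward a single price, loosely mirroring the convergence the paper proves for its (more carefully specified) dynamic.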
... In 1962, Vernon Smith published an article in the prestigious Journal of Political Economy (JPE) on the experimental study of competitive market behaviour [19]. The article described a number of laboratory-style market simulation experiments in which human subjects were given the job of trading in a simple open-outcry continuous double auction (CDA) where an arbitrary asset was traded, while the experimenters looked on and recorded their observations. ...
... DeepTrader takes as input 14 numeric values that are either directly available on BSE's LOB or tape outputs, or directly derivable from them: these 14 values make up the 'snapshot' that is fed as input to DeepTrader's LSTM network for each trade that occurred within a market session. The 14 values are as follows (the +/- prefixes on input values are used in Section V): ...
We present results demonstrating that an appropriately configured deep learning neural network (DLNN) can automatically learn to be a high-performing algorithmic trading system, operating purely from training-data inputs generated by passive observation of an existing successful trader T. That is, we can point our black-box DLNN system at trader T and successfully have it learn from T's trading activity, such that it trades at least as well as T. Our system, called DeepTrader, takes inputs derived from Level-2 market data, i.e. the market's Limit Order Book (LOB) or Ladder for a tradeable asset. Unusually, DeepTrader makes no explicit prediction of future prices. Instead, we train it purely on input-output pairs where in each pair the input is a snapshot S of Level-2 LOB data taken at the time when T issued a quote Q (i.e. a bid or an ask order) to the market; and DeepTrader's desired output is to produce Q when it is shown S. That is, we train our DLNN by showing it the LOB data S that T saw at the time when T issued quote Q, and in doing so our system comes to behave like T, acting as an algorithmic trader issuing specific quotes in response to specific LOB conditions. We train DeepTrader on large numbers of these S/Q snapshot/quote pairs, and then test it in a variety of market scenarios, evaluating it against other algorithmic trading systems in the public-domain literature, including two that have repeatedly been shown to outperform human traders. Our results demonstrate that DeepTrader learns to match or outperform such existing algorithmic trading systems. We analyse the successful DeepTrader network to identify what features it is relying on, and which features can be ignored. We propose that our methods can in principle create an explainable copy of an arbitrary trader T via "black-box" deep learning methods.
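The imitation setup can be sketched in miniature. The code below is our own illustrative stand-in, not DeepTrader itself: a linear model trained by stochastic gradient descent plays the role of the LSTM, synthetic (S, Q) pairs stand in for recorded Level-2 snapshots and quotes, and `TRUE_W` is a made-up rule for the observed trader T.

```python
import random

# Hypothetical trader T's quoting rule (an invented linear function of the
# 14 snapshot features; a real T would of course be far more complex).
TRUE_W = [0.1 * (i + 1) for i in range(14)]

def make_pair(rng):
    """One synthetic (snapshot S, quote Q) training pair: 14 LOB-style
    features, and the quote T issues for them, plus a little noise."""
    s = [rng.uniform(-1.0, 1.0) for _ in range(14)]
    q = sum(w * x for w, x in zip(TRUE_W, s)) + rng.gauss(0.0, 0.01)
    return s, q

def fit(pairs, lr=0.05, epochs=200):
    """Least-squares imitation by SGD: learn weights so model(S) ~ Q.

    This mirrors the paper's training objective (reproduce T's quote Q
    when shown the snapshot S that T saw), with a linear model standing
    in for the LSTM.
    """
    w = [0.0] * 14
    for _ in range(epochs):
        for s, q in pairs:
            err = sum(wi * xi for wi, xi in zip(w, s)) - q
            w = [wi - lr * err * xi for wi, xi in zip(w, s)]
    return w
```

After training, the fitted model issues quotes close to T's on the snapshots it was shown, which is the sense in which the black-box learner "comes to behave like T".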
... Smith's initial set of experiments were run in the late 1950s and described in his first paper on EE [12], published in the prestigious Journal of Political Economy (JPE) in 1962. The experimental methods laid out in that 1962 paper would subsequently come to dominate the methodology of researchers working to build adaptive autonomous automated trading agents by combining tools and techniques from Artificial Intelligence (AI) and Machine Learning (ML). ...
... Our preliminary experimental results reported in [12], and the much more extensive results reported here, are motivated by and extend this progression of past research. In particular, we noted that Vach's results, which first revealed that the ratio of different trading algorithms in a market could affect the dominance hierarchy, came from experiments he ran using the OpEx market simulator [6], which is a true parallel asynchronous distributed system: OpEx involves a number of individual trader computers (discrete laptop PCs) communicating over a local-area network with a central exchange server (a desktop PC). ...
There is a long tradition of research using computational intelligence (methods from artificial intelligence (AI) and machine learning (ML)) to automatically discover, implement, and fine-tune strategies for autonomous adaptive automated trading in financial markets, with a sequence of research papers on this topic published at AI conferences such as IJCAI and in journals such as Artificial Intelligence. We show here that this strand of research has taken a number of methodological missteps, and that some of the reportedly best-performing public-domain AI/ML trading strategies can routinely be outperformed by extremely simple trading strategies that involve no AI or ML at all. The results we highlight could easily have been revealed at the time the relevant key papers were published, more than a decade ago, but the accepted methodology at the time involved a somewhat minimal approach to experimental evaluation of trader-agents, making claims on the basis of a few thousand test sessions in a small number of market scenarios. In this paper we present results from exhaustive testing over wide ranges of parameter values, using parallel cloud-computing facilities: we conduct millions of tests and thereby create much richer data from which firmer conclusions can be drawn. We show that the best public-domain AI/ML traders in the published literature can be routinely outperformed by a "sub-zero-intelligence" trading strategy that at face value appears so simple as to be financially ruinous, but which interacts with the market in such a way that in practice it is more profitable than the well-known AI/ML strategies from the research literature. That such a simple strategy can outperform established AI/ML-based strategies is a sign that perhaps the AI/ML trading strategies were good answers to the wrong question.
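The exhaustive-testing methodology can be sketched as a harness: a grid over strategies and market parameters, crossed with many independently seeded sessions per cell. In the sketch below, `run_session` is a hypothetical placeholder returning seeded pseudo-profits; a real study would run a full double-auction simulation at that point.

```python
import itertools
import random
import statistics

def run_session(strategy, params, seed):
    """Hypothetical placeholder for one simulated market session.

    A real study would run a complete market simulation here; this stub
    just returns a deterministic pseudo-profit in [0, 1] so the harness
    is runnable end to end.
    """
    rng = random.Random(f"{strategy}|{params}|{seed}")
    return rng.random()

def sweep(strategies, param_grid, sessions=100):
    """Grid over strategies x market parameters, with many independently
    seeded sessions per cell; report mean and spread of profit."""
    results = {}
    for strat, params in itertools.product(strategies, param_grid):
        profits = [run_session(strat, params, s) for s in range(sessions)]
        results[(strat, params)] = (statistics.mean(profits),
                                    statistics.stdev(profits))
    return results
```

Scaling `sessions` and the parameter grid up to millions of cells is what turns a "few thousand test sessions" into the kind of evidence from which firmer conclusions can be drawn; the per-cell spread also makes clear when two strategies' profits are statistically indistinguishable.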
... A line of research was started by Vernon Smith's experiments [Smith, 1962] studying the trading behaviour of humans and the allocative efficiency of the market as a whole in a Continuous Double Auction, the style of market mechanism used in almost all financial exchanges around the world. This work was groundbreaking for its experimental approach to economic theory, which had previously often rested on unclear or inaccurate prior beliefs about its claims. ...
Modern financial market dynamics warrant detailed analysis due to their significant impact on the world. This, however, often proves intractable: massive numbers of agents and strategies, changing over time in reaction to each other, lead to difficulties for both theoretical and simulation-based approaches. Notable work has been done on strategy dominance in stock markets with respect to the ratios of agents employing certain strategies. Perfect knowledge of the strategies employed could then put an individual agent at a consistent trading advantage. This research reports the effects on the system of imperfect oracles that dispense noisy information about strategies, information which would normally be hidden from market participants. The effect and achievable profits of a single trader with access to an oracle were tested exhaustively, including previously unexplored factors such as changing order schedules. Additionally, the effect of noise on strategic information was traced through its effect on trader efficiency.
... The seminal work of Smith (Smith, 1962; Smith, 1981) empirically established that coupling buyers and sellers who hold decentralized private information about individual preferences and costs with a CDA robustly generates competitive equilibrium market outcomes. There were subsequent efforts to develop accurate models of trader behavior (such as Wilson (1986), Friedman (1991), and Easley and Ledyard (1993)). ...
Using laboratory experiments, we illustrate that trading algorithms that prioritize low latency are subject to certain pitfalls in a variety of market structures and configurations. In hybrid double-auction markets with human traders and trading agents, we find that trading agents outperform human traders only in balanced markets with equal numbers of human and Zero Intelligence Plus (ZIP) buyers and sellers, thus providing a partial replication of Das et al. (2001). However, in unbalanced markets and extreme market structures, such as monopolies and duopolies, fast ZIP agents fall into a speed trap, and both human participants and slow ZIP agents outperform fast ZIP agents. For human traders, faster reaction time significantly improves trading performance, while Theory of Mind can be detrimental for human buyers but beneficial for human sellers.
The aim of this study is to examine the work of the scientists honored with the Nobel Prize in economics in two closely related research programs: behavioral economics and experimental economics. Among the laureates in these fields, psychologist Daniel Kahneman (2002) and economist Richard Thaler (2017) have been pioneering names in behavioral economics, while Vernon Smith (2002) has been a pioneer in experimental economics. Friedrich A. von Hayek (1974), with his views on limited human knowledge and incomplete rationality, and Herbert A. Simon (1978), who developed the concept of bounded rationality, are regarded as among the first forerunners of behavioral economics. George Akerlof, Michael Spence, and Joseph Stiglitz, awarded the Nobel Prize in 2001, contributed indirectly to behavioral economics through their work on the economics of information, and in particular on asymmetric information. Robert J. Shiller is a pioneering economist in the field of behavioral finance. In this study, we discuss the views of the economists who have made important contributions to behavioral and experimental economics.
The authors develop a two-stage classroom experiment to illustrate convergence to long-run equilibrium in a market where price-taking firms are capacity-constrained. Once equilibrium in the first stage is established, capacity constraints are introduced by imposing discontinuities in the fixed costs of several firms. The experiment demonstrates that this supply shock yields a higher market price and that, under the assumed parameterization, several higher-cost firms that otherwise would not be able to survive in long-run equilibrium enter the market and earn positive profits.
There are two efficiency effects of price controls: an “output effect” measured by the standard welfare loss triangles, and an “imperfect selection effect” that arises when controls prevent price from excluding high-cost sellers or low-value buyers. Although not discussed in most textbooks, the imperfect selection effect can be as large as the standard Harberger triangle welfare loss in symmetric designs, as confirmed by a class experiment described in this paper. The experiment also permits an analysis of the ways random non-price allocations shift the relevant supply function, and the related effects of rent-seeking competition that can arise with price controls.
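The two effects can be computed directly in a small unit-trader example (our own illustrative numbers, not the paper's design): the output effect is the surplus lost from trades a price ceiling forecloses, and the selection effect is the further loss when the remaining units go to randomly chosen willing buyers rather than the highest-value ones.

```python
def surplus_effects(values, costs, ceiling):
    """Toy decomposition of a price ceiling's efficiency losses with
    unit buyers/sellers. Assumes the ceiling is below the clearing
    price but above at least one seller's cost.
    """
    v = sorted(values, reverse=True)
    c = sorted(costs)
    # competitive benchmark: all efficient trades (value >= cost) occur
    w_star = sum(vi - ci for vi, ci in zip(v, c) if vi >= ci)
    # under the ceiling, only low-cost sellers are willing to supply
    supply = [ci for ci in c if ci <= ceiling]
    q = len(supply)
    willing = [vi for vi in v if vi >= ceiling]  # buyers who want a unit
    # best case: the q highest-value willing buyers get the units
    w_best = sum(willing[:q]) - sum(supply)
    # random rationing: expected buyer value is the mean over willing buyers
    w_random = q * (sum(willing) / len(willing)) - sum(supply)
    output_effect = w_star - w_best        # the standard triangle
    selection_effect = w_best - w_random   # loss from *who* gets the units
    return output_effect, selection_effect
```

For example, with ten buyers valued 10 down to 1, ten sellers with costs 1 up to 10, and a ceiling of 3, the function returns an output effect of 4 and a selection effect of 7.5: here misallocating the scarce units among too many eager buyers costs more than the foregone trades themselves.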