Article

Ethical Use of Algorithms in Times of Uncertainty and Crisis: The Case of High-Frequency Trading

Article
Full-text available
This study explores the ethical perceptions of employees in the financial industry. Focusing on the high frequency trading (HFT) industry, it analyses a series of interviews with HFT employees (managers, computer programmers and traders). It shows that regulations and firm rules profoundly affect HFT practices. However, they do not provide employees with answers for their ethical questions. To judge the ethicality of HFT, employees choose reference stakeholder groups and assess the way HFT impacts them. The perception that HFT has a positive effect on stakeholder groups is associated with moral satisfaction, whereas the perception that it has a negative effect is related to emotional detachment, sense of meaninglessness and turnover intent. The high variance in employees’ choices of stakeholder reference groups emphasizes the subjectivity and uncertainty that HFT ethicality entails. Therefore, this study suggests that the financial industry may lack moral leadership. It makes empirical and theoretical contributions to the ‘business ethics as practice’ theory and examines management and regulatory applications.
Article
Full-text available
High Frequency Trading (HFT) is the automation of conventional securities trading on exchanges: limit buy or sell orders are placed, buyer and seller are connected, and the transaction is executed for profit. HFT began in the wake of the millennium and grew rapidly until 2005, then declined after the 2007-2009 financial crisis, igniting a huge debate. I argue that HFT caused neither the 2007-2009 financial crisis, which was actually occasioned by the mispricing of subprime mortgages, nor the May 6, 2010 flash crash, which was actually caused by the immediacy problem. HFT is simply an algorithmic practice that attracted mistrust from a section of exchange stakeholders by reason of its high-speed trade execution. I finally forecast that HFT can only gain more ground after reaching its lowest point in 2014, but that it requires regulation to operate with stability.
Article
Full-text available
Crowdsourcing practices have generated much discussion on their ethics and fairness, yet these topics have received little scholarly investigation. Some have criticized crowdsourcing for worker exploitation and for undermining workplace regulations. Others have lauded crowdsourcing for enabling workers' autonomy and allowing disadvantaged people to access previously unreachable job markets. In this paper, we examine the ethics in crowdsourcing practices by focusing on three questions: a) what ethical issues exist in crowdsourcing practices? b) are ethical norms emerging or are issues emerging that require ethical norms? and, more generally, c) how can the ethics of crowdsourcing practices be established? We answer these questions by engaging with Jürgen Habermas' (Habermas 1990; Habermas 1993) discourse ethics theory to interpret findings from a longitudinal field study (from 2013-2016) involving key crowdsourcing participants (workers, platform organizers and requesters) of three crowdsourcing communities. Grounded in this empirical study, we identify ethical concerns and discuss the ones for which ethical norms have emerged as well as others which remain unresolved and problematic in crowdsourcing practices. Furthermore, we provide normative considerations of how ethical concerns can be identified, discussed and resolved based on the principles of discourse ethics.
Article
Full-text available
Wearables paired with data analytics and machine learning algorithms that measure physiological (and other) parameters are slowly finding their way into our workplaces. Several studies have reported positive effects from using such "physiolytics" devices and advanced the notion that they may lead to significant workplace safety improvements or to increased awareness among employees concerning unhealthy work practices and other job-related health and well-being issues. At the same time, physiolytics may cause an overdependency on technology and create new constraints on privacy, individuality, and personal freedom. While it is easy to understand why organizations are implementing physiolytics, it remains unclear what employees think about using wearables at their workplace. Using an affordance theory lens, we therefore explore the mental models of employees who are faced with the introduction of physiolytics as part of corporate wellness or security programs. We identify five distinct user types, each of which characterizes a specific viewpoint on physiolytics at the workplace: the freedom loving, the individualist, the cynical, the tech independent, and the balancer. Our findings allow for a better understanding of the wider implications and possible user responses to the introduction of wearable technologies in occupational settings and address the need for opening up the "user black box" in IS use research.
Article
Full-text available
Algorithms silently structure our lives. Algorithms can determine whether someone is hired, promoted, offered a loan, or provided housing, as well as determine which political ads and news articles consumers see. Yet the responsibility for algorithms in these important decisions is not clear. This article identifies whether developers have a responsibility for their algorithms later in use, what those firms are responsible for, and the normative grounding for that responsibility. I conceptualize algorithms as value-laden, rather than neutral, in that algorithms create moral consequences, reinforce or undercut ethical principles, and enable or diminish stakeholder rights and dignity. In addition, algorithms are an important actor in ethical decisions and influence the delegation of roles and responsibilities within these decisions. As such, firms should be responsible not only for the value-ladenness of an algorithm but also for designing who-does-what within the algorithmic decision; in other words, firms developing algorithms are accountable for designing how large a role individuals will be permitted to take in the subsequent algorithmic decision. Counter to current arguments, I find that if an algorithm is designed to preclude individuals from taking responsibility within a decision, then the designer of the algorithm should be held accountable for the ethical implications of the algorithm in use.
Article
Full-text available
As a way to address both ominous and ordinary threats of artificial intelligence (AI), researchers have started proposing ways to stop an AI system before it has a chance to escape outside control and cause harm. A so-called “big red button” would enable human operators to interrupt or divert a system while preventing the system from learning that such an intervention is a threat. Though an emergency button for AI seems to make intuitive sense, that approach ultimately concentrates on the point when a system has already “gone rogue” and seeks to obstruct interference. A better approach would be to make ongoing self-evaluation and testing an integral part of a system’s operation, to diagnose how the system is in error, and to prevent chaos and risk before they start. In this paper, we describe the demands that recent big red button proposals have not addressed, and we offer a preliminary model of an approach that could better meet them. We argue for an ethical core (EC) that consists of a scenario-generation mechanism and a simulation environment that are used to test a system’s decisions in simulated worlds, rather than the real world. This EC would be kept opaque to the system itself: through careful design of memory and the character of the scenario, the system’s algorithms would be prevented from learning about its operation and its function, and ultimately its presence. By monitoring and checking for deviant behavior, we conclude, a continual testing approach will be far more effective, responsive, and vigilant toward a system’s learning and action in the world than an emergency button which one might not get to push in time.
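A minimal sketch of the kind of pre-deployment check the ethical core proposal envisions is given below. The scenario structure, the toy policy, and the harm measure are illustrative assumptions, not the authors' design; the point is only that decisions are probed in simulation and deviant behavior is flagged before the system acts in the real world.

```python
import random

def ethical_core_check(policy, n_scenarios=1000, harm_threshold=0.0, seed=42):
    """Probe a system's policy in simulated scenarios (never the real world)
    and flag decisions whose expected harm exceeds a threshold.
    The scenario fields and the harm measure are hypothetical."""
    rng = random.Random(seed)
    violations = []
    for i in range(n_scenarios):
        scenario = {"id": i, "bystanders": rng.randint(0, 5), "urgency": rng.random()}
        action = policy(scenario)                       # the system under test
        expected_harm = scenario["bystanders"] * action.get("force", 0.0)
        if expected_harm > harm_threshold:
            violations.append((scenario, action))
    return violations

# A toy policy that applies force whenever 'urgency' is very high,
# regardless of how many bystanders are present:
def toy_policy(scenario):
    return {"force": 1.0 if scenario["urgency"] > 0.95 else 0.0}

flagged = ethical_core_check(toy_policy)
print(f"{len(flagged)} deviant decisions flagged in simulation")
```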
Article
Full-text available
The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we provide an overview of available technical solutions to enhance fairness, accountability, and transparency in algorithmic decision-making. We also highlight the criticality and urgency of engaging multi-disciplinary teams of researchers, practitioners, policy-makers, and citizens to co-develop, deploy, and evaluate, in the real world, algorithmic decision-making processes designed to maximize fairness and transparency. In doing so, we describe the Open Algorithms (OPAL) project as a step towards realizing the vision of a world where data and algorithms are used as lenses and levers in support of democracy and development.
Article
Full-text available
In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.
Article
Full-text available
Calls for greater transparency as well as corporate and individual accountability have emerged in response to the recent turbulence in financial markets. In the field of high-frequency trading (HFT), suggested solutions have involved a call for increased market information, for example, or better access to the inner workings of algorithmic trading systems. Through a combination of fieldwork conducted in HFT firms and discourse analysis, I show that the problem may not always stem from a lack of information. Instead, my comparative analysis of different market actors (regulators, market analysts and traders) shows that the diverse and complex ways in which they access and construct knowledge out of information in fact lead to what I call different epistemic regimes. An understanding of how epistemic regimes work will enable us to explain not only why the same market event can be viewed as very different things – as market manipulation, predation or error – but also why it is so difficult to arrive at a unified theory or view of HFT. The comparative perspective introduced by the idea of epistemic regimes might also serve as a starting point for the development of a cultural approach to the study of financial markets.
Article
Full-text available
Automated high-frequency trading has grown tremendously in the past 20 years and is responsible for about half of all trading activities at stock exchanges worldwide. Geography is central to the rise of high-frequency trading due to a market design of “continuous trading” that allows traders to engage in arbitrage based upon informational advantages built into the socio-technical assemblages that make up current capital markets. Enormous investments have been made in creating transmission technologies and optimizing computer architectures, all in an effort to shave milliseconds off order travel time (or latency) within and between markets. We show that as a result of the built spatial configuration of capital markets, “public” is no longer synonymous with “equal” information. High-frequency trading increases information inequalities between market participants.
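As a back-of-the-envelope illustration of why such spatial investments matter (the route length and fibre speed below are rough assumptions, not figures from the study), the minimum one-way signal travel time over optical fibre can be estimated from distance and the speed of light in glass:

```python
# Illustrative latency arithmetic; the figures are rough assumptions.
C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_FACTOR = 0.66          # light in optical fibre travels at roughly 2/3 c

def one_way_latency_ms(distance_km: float) -> float:
    """Minimum one-way signal travel time over fibre, in milliseconds."""
    return distance_km / (C_VACUUM_KM_S * FIBER_FACTOR) * 1000

# Roughly a New York <-> Chicago fibre route, assumed here to be ~1,200 km:
print(f"{one_way_latency_ms(1200):.2f} ms one way")   # about 6 ms
```

Shortening the route or switching to microwave links (closer to vacuum speed) shaves milliseconds, which is exactly the informational advantage the abstract describes.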
Article
Full-text available
Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: (i) the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; (ii) moral and legal responsibility; and (iii) decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
Article
Full-text available
Codes of conduct in autonomous vehicles: When it becomes possible to program decision-making based on moral principles into machines, will self-interest or the public good predominate? In a series of surveys, Bonnefon et al. found that even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles (see the Perspective by Greene). Respondents would also not approve regulations mandating self-sacrifice, and such regulations would make them less willing to buy an autonomous vehicle. Science, this issue p. 1573; see also p. 1514.
Article
Full-text available
The ethics of high frequency trading are obscure, due in part to the complexity of the practice. This article contributes to the existing literature of ethics in financial markets by examining a recent trend in regulation in high frequency trading, the prohibition of deception. We argue that in the financial markets almost any regulation, other than the most basic, tends to create a moral hazard and increase information asymmetry. Since the market’s job is, at least in part, price discovery, we argue that simplicity of regulation and restraint in regulation are virtues to a greater extent than in other areas of finance. This article proposes criteria for determining which high-frequency trading strategies should be regulated.
Article
Full-text available
Automated vehicles have received much attention recently, particularly the Defense Advanced Research Projects Agency Urban Challenge vehicles, Google's self-driving cars, and various others from auto manufacturers. These vehicles have the potential to reduce crashes and improve roadway efficiency significantly by automating the responsibilities of the driver. Still, automated vehicles are expected to crash occasionally, even when all sensors, vehicle control components, and algorithms function perfectly. If a human driver is unable to take control in time, a computer will be responsible for precrash behavior. Unlike other automated vehicles, such as aircraft, in which every collision is catastrophic, and unlike guided track systems, which can avoid collisions only in one dimension, automated roadway vehicles can predict various crash trajectory alternatives and select a path with the lowest damage or likelihood of collision. In some situations, the preferred path may be ambiguous. The study reported here investigated automated vehicle crashing and concluded the following: (a) automated vehicles would almost certainly crash, (b) an automated vehicle's decisions that preceded certain crashes had a moral component, and (c) there was no obvious way to encode complex human morals effectively in software. The paper presents a three-phase approach to develop ethical crashing algorithms; the approach consists of a rational approach, an artificial intelligence approach, and a natural language requirement. The phases are theoretical and should be implemented as the technology becomes available.
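The selection of "a path with the lowest damage or likelihood of collision" can be illustrated with a minimal expected-harm calculation. The trajectory names, probabilities, and severity weights below are hypothetical, and the snippet reflects only the simplest 'rational approach' style of reasoning, not the paper's full three-phase proposal.

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    name: str
    collision_probability: float   # estimated likelihood of a collision
    expected_damage: float         # estimated severity if a collision occurs

def expected_harm(t: Trajectory) -> float:
    # Simplest possible cost function: likelihood times severity.
    return t.collision_probability * t.expected_damage

def choose_trajectory(candidates: list[Trajectory]) -> Trajectory:
    return min(candidates, key=expected_harm)

# Hypothetical pre-crash options:
options = [
    Trajectory("brake in lane", 0.9, 2.0),
    Trajectory("swerve left", 0.4, 6.0),
    Trajectory("swerve right", 0.25, 9.0),
]
print(choose_trajectory(options).name)   # "brake in lane" (0.9 * 2.0 = 1.8)
```

The ethical difficulty the paper identifies lies precisely in choosing and justifying the cost function, not in the minimization itself.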
Article
Full-text available
In 1999, Carruthers and Stinchcombe provided the classic discussion of ‘the social structure of liquidity’: the institutional arrangements that support markets in which ‘exchange occurs easily and frequently’. Our argument in this paper is that the material aspects of these arrangements – and particularly the materiality of prices – need far closer attention than they normally receive. We develop this argument by highlighting two features of new assemblages that have been created in financial markets since 1999. First, these assemblages give sharp economic significance to spatial location and to physical phenomena such as the speed of light (the physics of these assemblages is Einsteinian, not Newtonian, so to speak). Second, they have provoked fierce controversy focusing on ultra-fast ‘high-frequency trading’, controversy in which issues of materiality are interwoven intimately with questions of legitimacy, particularly of fairness.
Article
Full-text available
The competitive nature of algorithmic trading (AT), the scarcity of expertise, and the vast profit potential make for a secretive community where implementation details are difficult to find. AT presents huge research challenges, especially given the economic consequences of getting it wrong, such as the May 6, 2010 Flash Crash, in which the Dow Jones Industrial Average plunged 9%, wiping $600 billion off market value, and the Knight Capital loss of $440 million on August 1, 2012, due to erratic behavior of its trading algorithms. Current research challenges include data challenges covering the quantity and quality of data, processing data at ultra-high frequency, and increasingly incorporating new types of data such as social media and news. Dealers generally execute their orders through a shared centralized order book that lists the buy and sell orders for a specific security, ranked by price and order arrival time.
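The shared centralized order book mentioned above can be sketched in a few lines. The toy implementation below assumes price-time priority (best price first; earlier arrival first at equal prices) and is purely illustrative.

```python
import heapq
from itertools import count

class OrderBook:
    """Toy limit order book with price-time priority (illustrative only)."""
    def __init__(self):
        self._seq = count()    # arrival order breaks ties at equal prices
        self.bids = []         # max-heap on price (stored as negated price)
        self.asks = []         # min-heap on price

    def add(self, side: str, price: float, qty: int):
        key = -price if side == "buy" else price
        book = self.bids if side == "buy" else self.asks
        heapq.heappush(book, (key, next(self._seq), price, qty))

    def best_bid(self):
        return self.bids[0][2:] if self.bids else None   # (price, qty)

    def best_ask(self):
        return self.asks[0][2:] if self.asks else None

book = OrderBook()
book.add("buy", 100.10, 200)
book.add("buy", 100.10, 300)     # same price, arrived later -> queued behind
book.add("sell", 100.12, 500)
print(book.best_bid(), book.best_ask())   # (100.1, 200) (100.12, 500)
```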
Article
Full-text available
Within sociology, risk and uncertainty have become a major interest, most notably with the publication of Beck's Risk Society (1992). There are, however, a number of different approaches available, which define the object of research slightly differently. This has raised concerns as to whether risk sociology has a shared object of research. While it might be contested whether the diversity of risk research is a strength or a weakness, I suggest that more conceptual work could help to consolidate its basis. I will argue that whilst there have been a number of controversies in risk research, these debates have neglected a more fundamental difference. While most approaches can agree on the idea that risk is a possible threat in the future, they conceptualise risk in connection to at least three different core ideas: risk in (rational) decision-making, risk in calculative-probabilistic calculation, and risk as part of a modern worldview. These ideas are part of risk research, but they direct how we examine the social world. I suggest that it is important to clarify this conceptual basis of the sociology of risk and uncertainty as a step towards further theoretical advancement.
Article
Full-text available
The paper investigates the ethics of information transparency (henceforth transparency). It argues that transparency is not an ethical principle in itself but a pro-ethical condition for enabling or impairing other ethical practices or principles. A new definition of transparency is offered in order to take into account the dynamics of information production and the differences between data and information. It is then argued that the proposed definition provides a better understanding of what sort of information should be disclosed and what sort of information should be used in order to implement and make effective the ethical practices and principles to which an organisation is committed. The concepts of “heterogeneous organisation” and “autonomous computational artefact” are further defined in order to clarify the ethical implications of the technology used in implementing information transparency. It is argued that explicit ethical designs, which describe how ethical principles are embedded into the practice of software design, would represent valuable information that could be disclosed by organisations in order to support their ethical standing.
Article
Full-text available
IT failures abound but little is known about the financial impact that these failures have on a firm’s market value. Using the resource-based view of the firm and event study methodology, this study analyzes how firms are penalized by the market when they experience unforeseen operating or implementation-related IT failures. Our sample consists of 213 newspaper reports of IT failures by publicly traded firms, which occurred during a 10-year period. The findings show that IT failures result in a 2% average cumulative abnormal drop in stock prices over a 2-day event window. The results also reveal that the market responds more negatively to implementation failures affecting new systems than to operating failures involving current systems. Further, the study demonstrates that more severe IT failures result in a greater decline in firm value and that firms with a history of IT failures suffer a greater negative impact. The implications of these findings for research and practice are discussed.
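As a sketch of the event-study logic behind such estimates, the snippet below fits a market model on an estimation window and sums abnormal returns over a two-day event window. The simulated return series and window choices are hypothetical; the study's actual methodology and sample are as described in the abstract above.

```python
import numpy as np

def cumulative_abnormal_return(stock_ret, market_ret, est_window, event_window):
    """Market-model event study: estimate alpha/beta on the estimation window,
    then sum abnormal returns over the event window (a minimal sketch)."""
    beta, alpha = np.polyfit(market_ret[est_window], stock_ret[est_window], 1)
    expected = alpha + beta * market_ret[event_window]
    abnormal = stock_ret[event_window] - expected
    return abnormal.sum()

# Hypothetical daily returns; the "IT failure" occupies the last two observations.
rng = np.random.default_rng(0)
mkt = rng.normal(0, 0.01, 120)
stk = 0.0001 + 1.1 * mkt + rng.normal(0, 0.01, 120)
stk[-2:] -= 0.01                  # simulated negative market reaction
car = cumulative_abnormal_return(stk, mkt, slice(0, 118), slice(118, 120))
print(f"2-day CAR: {car:.2%}")    # roughly -2%, plus simulation noise
```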
Article
We study the impact that algorithmic trading, computers directly interfacing at high frequency with trading platforms, has had on price discovery and volatility in the foreign exchange market. Our dataset represents a majority of global interdealer trading in three major currency pairs in 2006 and 2007. Importantly, it contains precise observations of the size and the direction of the computer-generated and human-generated trades each minute. The empirical analysis provides several important insights. First, we find evidence that algorithmic trades tend to be correlated, suggesting that the algorithmic strategies used in the market are not as diverse as those used by non-algorithmic traders. Second, we find that, despite the apparent correlation of algorithmic trades, there is no evident causal relationship between algorithmic trading and increased exchange rate volatility. If anything, the presence of more algorithmic trading is associated with lower volatility. Third, we show that even though some algorithmic traders appear to restrict their activity in the minute following macroeconomic data releases, algorithmic traders increase their provision of liquidity over the hour following each release. Fourth, we find that non-algorithmic order flow accounts for a larger share of the variance in exchange rate returns than does algorithmic order flow. Fifth, we find evidence that supports the recent literature that proposes to depart from the prevalent assumption that liquidity providers in limit order books are passive.
Article
In a talk in 2013, Karin Knorr Cetina referred to ‘the interaction order of algorithms’, a phrase that implicitly invokes Erving Goffman's ‘interaction order’. This paper explores the application of the latter notion to the interaction of automated-trading algorithms, viewing algorithms as material entities (programs running on physical machines) and conceiving of the interaction order of algorithms as the ensemble of their effects on each other. The paper identifies the main way in which trading algorithms interact (via electronic ‘order books’, which algorithms both ‘observe’ and populate) and focuses on two particularly Goffmanesque aspects of algorithmic interaction: queuing and ‘spoofing’, or deliberate deception. Following Goffman's injunction not to ignore the influence on interaction of matters external to it, the paper examines some prominent such matters. Empirically, the paper draws on documentary analysis and 338 interviews conducted by the author with high-frequency traders and others involved in automated trading.
Article
Sofia Olhede and Russell Rodrigues discuss recent efforts to ensure greater scrutiny of machine-generated decisions.
Article
Section 90(1) of the UK Financial Services Act 2012 criminalises the creation of a false or misleading impression in financial markets. In the absence of any criminal prosecutions under this section to date, the potential scope of the new criminal offence remains moot, especially in the context of high frequency trading, where market participants develop trading strategies using algorithmic computer programs designed to profit from very small movements in share prices that have been generated by a series of high-speed purchases and sales, or short sales and subsequent purchases. Notwithstanding the fact that section 90 does not reference high frequency trading, the statutory language is sufficiently broad to capture high frequency trading strategies where it can be shown that they have created a false or misleading impression as to the price or value of the company share which has been, or is being, traded.
Article
I review the recent high-frequency trader (HFT) literature to single out the economic channels by which HFTs affect market quality. I first group the various theoretical studies according to common denominators and discuss the economic costs and benefits they identify. For each group, I then review the empirical literature that speaks to either the models’ assumptions or their predictions. This enables me to come to a data-weighted judgement on the economic value of HFTs.
Chapter
This chapter is a reprint of Frank P. Ramsey’s seminal paper “Truth and Probability”, written in 1926 and first published posthumously in 1931 in The Foundations of Mathematics and Other Logical Essays, ed. R.B. Braithwaite, London: Routledge & Kegan Paul Ltd. The paper lays the foundations for the modern theory of subjective probability. Ramsey argues that degrees of belief may be measured by the acceptability of odds on bets, and provides a set of decision-theoretic axioms which jointly imply the laws of probability.
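The betting interpretation can be stated compactly; the following is a standard textbook rendering rather than a quotation from the chapter.

```latex
% An agent who regards a price of pS as fair for a bet paying S if E occurs
% (and nothing otherwise) thereby reveals a degree of belief
\[
  \deg(E) \;=\; \frac{pS}{S} \;=\; p .
\]
% Fair betting quotients that cannot be combined into a sure loss
% (a ``Dutch book'') must obey the laws of probability, e.g.
\[
  0 \le p(E) \le 1, \qquad p(E) + p(\neg E) = 1 .
\]
```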
Article
Experimental psychologists and economists construct an individual or interactive decision situation in the laboratory. They find non-negligible differences between the observed behavior of participants and the theoretically implied behavior. We refer here to expected utility theory and to strategic equilibrium in non-cooperative game theory. We comment on the question of whether rationality implies these theoretical behaviors and whether the non-negligible differences noted above imply that participants in experiments are irrational. We also comment on the relation between rationality and consistency, in particular in situations of uncertainty.
Article
Modern electronic markets have been characterized by a relentless drive toward faster decision making. Significant technological investments have led to dramatic improvements in latency, the delay between a trading decision and the resulting trade execution. We describe a theoretical model for the quantitative valuation of latency. Our model measures the trading frictions created by the presence of latency, by considering the optimal execution problem of a representative investor. Via a dynamic programming analysis, our model provides a closed-form expression for the cost of latency in terms of well-known parameters of the underlying asset. We implement our model by estimating the latency cost incurred by trading on a human time scale. Examining NYSE common stocks from 1995 to 2005 shows that median latency cost across our sample roughly tripled during this time period. Furthermore, using the same data set, we compute a measure of implied latency and conclude that the median implied latency decreased by approximately two orders of magnitude. Empirically calibrated, our model suggests that the reduction in cost achieved by going from trading on a human time scale to a low latency time scale is comparable with other execution costs faced by the most cost efficient institutional investors, and it is consistent with the rents that are extracted by ultra-low latency agents, such as providers of automated execution services or high frequency traders.
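The paper's closed-form expression is not reproduced here. As a heavily simplified stand-in, the cost of latency is often illustrated as the typical price move an asset can make while an order is in flight, which scales with volatility times the square root of the delay; the sketch below uses that stylized view with hypothetical parameters.

```python
import math

def stylized_latency_cost_bps(annual_vol: float, latency_seconds: float,
                              trading_seconds_per_year: float = 252 * 6.5 * 3600) -> float:
    """Order-of-magnitude cost of latency: the typical price move the asset
    can make while an order is in flight (NOT the paper's closed form)."""
    per_second_vol = annual_vol / math.sqrt(trading_seconds_per_year)
    return per_second_vol * math.sqrt(latency_seconds) * 1e4   # basis points

# Hypothetical stock with 30% annualized volatility, at three latency scales:
for latency in (1.0, 0.1, 0.001):      # human, early electronic, low-latency
    print(f"{latency:>6} s -> {stylized_latency_cost_bps(0.30, latency):.3f} bps")
```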
Article
This paper characterizes the trading strategy of a large high frequency trader (HFT). The HFT incurs a loss on its inventory but earns a profit on the bid-ask spread. Sharpe ratio calculations show that performance is very sensitive to cost of capital assumptions. The HFT employs a cross-market strategy, as half of its trades materialize on the incumbent market and the other half on a small, high-growth entrant market. Its trade participation rate in these markets is 8.1% and 64.4%, respectively. In both markets, four out of five of its trades are passive, i.e., its price quote was consumed by others.
Article
The article discusses the use of algorithmic models in finance (algo or high frequency trading). Algo trading is widespread but also somewhat controversial in modern financial markets. It is a form of automated trading technology, which critics claim can, among other things, lead to market manipulation. Drawing on three cases, this article shows that manipulation also can happen in the reverse way, meaning that human traders attempt to make algorithms ‘make mistakes’ by ‘misleading’ them. These attempts to manipulate are very simple and immediately transparent to humans. Nevertheless, financial regulators increasingly penalize such attempts to manipulate algos. The article explains this as an institutionalization of algo trading, a trading practice which is vulnerable enough to need regulatory protection.
Article
We consider a dynamic equilibrium model of algorithmic trading (AT) for limit order markets. We show that AT improves market performance ‘only’ under specific conditions. For instance, AT traders with only an informational (only a trading speed) advantage increase (reduce) global welfare. AT traders act as liquidity demanders with ‘predatory’ strategies when ‘less-skilled’ investors are in the majority, which may deteriorate liquidity and welfare. AT reduces waiting costs but ultimately damages traditional traders’ profits and changes their trading behaviour. AT traders prefer volatile assets, and we report that cancellation fees may be better policy instruments to control AT activity than latency restrictions.
Article
The literature on advanced manufacturing technologies (AMTs) shows that a wide range of outcomes have been experienced by organizations that have adopted these technologies, ranging from implementation failure to increased productivity and enhanced organizational flexibility. This article examines the roles that organization design and culture play in the varying levels of success experienced by AMT-adopting organizations. Several hypotheses are presented on the relationships among culture, structure, and implementation outcomes based on the competing values model of organizational culture.
Article
History demonstrates that hysteria is optional only in a bear market because the market always recovers given enough time. With people's life savings at stake, however, the influence of panic can't be brushed aside. Market conditions in 2008 are unique in that they're far more volatile and seem to inspire the greatest fear factor in the history of the modern market. Moreover, because of extensive global networking and border-transcending fiscal interdependence, initial fluctuations in a single market resonate almost simultaneously world wide.
Article
A quantitative definition of risk is suggested in terms of the idea of a “set of triplets”. The definition is extended to include uncertainty and completeness, and the use of Bayes' theorem is described in this connection. The definition is used to discuss the notions of “relative risk”, “relativity of risk”, and “acceptability of risk”.
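In the notation commonly used for this definition, risk is the set of scenario-likelihood-consequence triplets:

```latex
% Risk as a set of triplets: scenario s_i, likelihood \ell_i, consequence x_i.
\[
  R \;=\; \{\, \langle s_i,\ \ell_i,\ x_i \rangle \,\}, \qquad i = 1, 2, \ldots, N,
\]
% where s_i answers ``what can happen?'', \ell_i ``how likely is it?''
% and x_i ``what are the consequences?''.
```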
Article
All of finance is now automated, most notably high frequency trading. This paper examines the ethical implications of this fact. As automation is an interdisciplinary endeavor, we argue that the interfaces between the respective disciplines can lead to conflicting ethical perspectives; we also argue that existing disciplinary standards do not pay enough attention to the ethical problems automation generates. Conflicting perspectives undermine the protection those who rely on trading should have. Ethics in finance can be expanded to include organizational and industry-wide responsibilities to external market participants and society. As a starting point, quality management techniques can provide a foundation for a new cross-disciplinary ethical standard in the age of automation.
Article
With the advent of algorithmic trading, it is essential that investors become more proactive in the decision making process to ensure selection of the most appropriate algorithm. Investors need to specify a benchmark price, an implementation goal, and a preferred deviation strategy (i.e., how the optimally prescribed algorithm is to react to changing market conditions or prices). In this paper we describe an analytical process to assess the impact of these decisions on the profit and loss distribution of the algorithm. (Kissell & Malamut, JPMorgan, “Understanding the Profit and Loss Distribution of Trading Algorithms”; originally published in Institutional Investor, Guide to Algorithmic Trading, Spring 2005.)
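A minimal Monte Carlo sketch of how benchmark choice and execution parameters shape an algorithm's profit-and-loss (slippage) distribution is shown below. All parameters are hypothetical, and the calculation is far simpler than the analytical process the paper describes.

```python
import numpy as np

def simulate_slippage_vs_arrival(n_paths=10_000, horizon_steps=60, vol_per_step=0.0005,
                                 participation=0.1, impact_coeff=0.0002, seed=1):
    """Monte Carlo sketch of a buy algorithm's slippage versus the arrival price.
    Parameters are hypothetical; positive values mean execution worse than arrival."""
    rng = np.random.default_rng(seed)
    # Random-walk mid-price paths, expressed as returns relative to arrival.
    paths = rng.normal(0, vol_per_step, (n_paths, horizon_steps)).cumsum(axis=1)
    # Execute an equal slice each step and pay a simple linear impact penalty.
    slippage = paths.mean(axis=1) + impact_coeff * participation * horizon_steps
    return slippage * 1e4   # in basis points

slippage_bps = simulate_slippage_vs_arrival()
print(f"mean {slippage_bps.mean():.1f} bps, "
      f"95th percentile {np.percentile(slippage_bps, 95):.1f} bps")
```

Varying the participation rate, horizon, or benchmark in such a simulation is one way to see the trade-off between market impact and price risk that drives algorithm selection.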
Article
Many news reports and economic experts talk about uncertainty. But what does the word mean in an economic context? Specifically, what do economists have in mind when they talk about it? In this article, Pablo Guerron-Quintana discusses the concepts of risk and uncertainty, what the difference is between the two terms, and why their presence in the economy may have widespread effects. He also talks about measuring risk at the aggregate level — that is, risk that affects all participants in the economy — and he reviews the various types of risk measures that economists have proposed.
Article
This article argues that the problem of uncertainty represents the central limitation of efficiency-based approaches to the explanation and prediction of economic outcomes. The problem of uncertainty reintroduces the Hobbesian problem of order into economics and makes it possible to connect questions of economic decision-making with social theory. The emphasis lies not, as in the behavioral theories of the Carnegie School, in the influence of uncertainty on the actual decision process, but in those social devices that actors rely on in decision-making, i.e., that structure the situation for the agents. If agents cannot anticipate the benefits of an investment, optimizing decisions become impossible, and the question opens up how intentionally rational actors reach decisions under this condition of uncertainty. This provides a systematic starting point for economic sociology. Studies in economic sociology that argue from different theoretical perspectives point to the significance of uncertainty and goal ambiguity. This contribution reflects theoretically why economic sociology can develop a promising approach by building upon these insights. It becomes understandable why culture, power, institutions, social structures, and cognitive processes are important in modern market economies. But it should be equally emphasized that the maximizing paradigm in economics will not be dethroned without a causal theory of the relationship of intentional rationality and social rigidities.
Article
The paper presents a systems view of the organizational preconditions to technological accidents and disasters, and in particular the seminal “Man-made Disasters model” proposed by the late Professor Barry Turner. Events such as Chernobyl, the Challenger and Bhopal have highlighted the fact that in seeking the causes of many modern large-scale accidents we must now consider as key the interaction between technology and organizational failings. Such so-called ‘organizational accidents’ stem from an incubation of latent errors and events which are at odds with the culturally taken for granted, accompanied by a collective failure of organizational intelligence. Theoretical models have also moved on now, from purely post hoc descriptions of accidents and their causes, in the attempt to specify ‘safe’ cultures and ‘high-reliability’ organizations. Recent research, however, has shown us that while effective learning about hazards is a common assumption of such attempts, organizations can be very resistant to learning the full lessons from past incidents and mistakes. Two common barriers to learning from disasters are: (1) information difficulties; and (2) blame and organizational politics. Ways of addressing these barriers are discussed, and the example of aviation learning systems, as an illustration of institutional self-design, is outlined.
Article
Information systems development is a high-risk undertaking, and failures remain common despite advances in development tools and technologies. In this paper, we argue that one reason for this is the collapse of organizational intelligence required to deal with the complexities of systems development. Organizations fail to learn from their experience in systems development because of limits of organizational intelligence, disincentives for learning, organizational designs and educational barriers. Not only have many organizations failed to learn, but they have also learned to fail. Over time they accept and expect poor performance while creating organizational myths that perpetuate short-term optimization. This paper illustrates learning failure in systems development and recommends tactics for overcoming it.
Article
Electronic markets have been a core topic of information systems (IS) research for the last three decades. We focus on a more recent phenomenon: smart markets. This phenomenon is starting to draw considerable interdisciplinary attention from researchers in the computer science, operations research, and economics communities. The objective of this commentary is to identify and outline fruitful research areas where IS researchers can provide valuable contributions. The idea of smart markets revolves around using theoretically supported computational tools to both understand the characteristics of complex trading environments and multiechelon markets and help human decision makers make real-time decisions in these complex environments. We outline the research opportunities for complex trading environments primarily from the perspective of design of computational tools to analyze individual market organization and provide decision support in these complex environments. In addition, we present broad research opportunities that computational platforms can provide, including implications for policy and regulatory research.
Article
Algorithmic trading has sharply increased over the past decade. Equity market liquidity has improved as well. Are the two trends related? For a recent five-year panel of New York Stock Exchange (NYSE) stocks, we use a normalized measure of electronic message traffic (order submissions, cancellations, and executions) as a proxy for algorithmic trading, and we trace the associations between liquidity and message traffic. Based on within-stock variation, we find that algorithmic trading and liquidity are positively related. To sort out causality, we use the start of autoquoting on the NYSE as an exogenous instrument for algorithmic trading. Previously, specialists were responsible for manually disseminating the inside quote. As stocks were phased in gradually during early 2003, the manual quote was replaced by a new automated quote whenever there was a change to the NYSE limit order book. This market structure change provides quicker feedback to traders and algorithms and results in more message traffic. For large-cap stocks in particular, quoted and effective spreads narrow under autoquote and adverse selection declines, indicating that algorithmic trading does causally improve liquidity.
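The liquidity measures referred to above, quoted and effective spreads, can be computed as follows; the quote and trade in the example are hypothetical.

```python
def quoted_spread_bps(bid: float, ask: float) -> float:
    """Quoted spread relative to the midpoint, in basis points."""
    mid = (bid + ask) / 2
    return (ask - bid) / mid * 1e4

def effective_spread_bps(price: float, bid: float, ask: float, buyer_initiated: bool) -> float:
    """Effective spread: twice the signed distance of the trade price from the midpoint."""
    mid = (bid + ask) / 2
    sign = 1 if buyer_initiated else -1
    return 2 * sign * (price - mid) / mid * 1e4

# Hypothetical quote and a buyer-initiated trade inside the spread:
print(quoted_spread_bps(100.00, 100.05))                    # ~5.0 bps
print(effective_spread_bps(100.04, 100.00, 100.05, True))   # ~3.0 bps
```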
Algorithmic Trading and the Market for Liquidity
  • T Hendershott