Article

Visualisation of financial time series by linear principal component analysis and nonlinear principal component analysis


Abstract

The main goal of this dissertation is the visualisation of financial time series. We expect such visualisation to be a useful auxiliary tool for technical analysis. First, we review technical analysis methods and test trading rules built from the essential concepts of technical analysis. Next, we compare the quality of linear principal component analysis and nonlinear principal component analysis for financial market visualisation, and we compare different methods of data preprocessing for visualisation purposes. Using visualisation, we demonstrate the difference between normal and crisis periods. The visualisation of financial markets can thus serve as a tool to support technical analysis.
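A minimal sketch of the preprocessing step discussed above: converting raw prices to log returns and standardising them, a common first step before applying PCA to financial time series. The price series here is illustrative, not data from the dissertation.

```python
import math

prices = [100.0, 102.0, 101.0, 105.0, 107.0, 104.0]

# Log returns: r_t = ln(p_t / p_{t-1})
returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# Standardise to zero mean and unit variance (z-scores)
mean = sum(returns) / len(returns)
var = sum((r - mean) ** 2 for r in returns) / len(returns)
std = math.sqrt(var)
z = [(r - mean) / std for r in returns]

print([round(v, 3) for v in z])
```

Standardising each series this way puts all assets on a comparable scale before any principal component projection.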


Article
Principal Component Analysis (PCA) is a popular multivariate analytic tool that can be used for dimension reduction without losing much information. Data vectors containing a large number of features and arriving sequentially may be correlated with each other; an effective algorithm for such situations is online PCA. Existing online PCA research focuses on efficient, scalable updating algorithms that consider compression loss only. It does not take into account the dataset size at which the arrival of further data vectors can be terminated and dimension reduction applied. It is well known that dataset size affects compression loss: the smaller the dataset, the larger the compression loss, and the larger the dataset, the smaller the compression loss. However, reducing compression loss by increasing the dataset size increases the total data collection cost. In this paper, we move beyond the scalability and updating problems of online PCA and focus on optimising a cost-compression loss that accounts for both compression loss and data collection cost. We minimise the corresponding risk using a two-stage PCA algorithm. The resulting two-stage algorithm is a fast and efficient alternative to online PCA and is shown to exhibit attractive convergence properties with no assumptions on specific data distributions. Experimental studies demonstrate similar results, and further illustrations are provided using real data. As an extension, a multi-stage PCA algorithm is also discussed. Given the time complexity, the two-stage PCA algorithm is preferred over the multi-stage PCA algorithm for online data.
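A minimal sketch of the online-PCA building block the abstract refers to: Oja's rule updates an estimate of the leading principal direction one data vector at a time, so no full dataset needs to be stored. The data stream, learning rate, and dimensions are illustrative assumptions; this is not the paper's two-stage algorithm.

```python
import math
import random

random.seed(0)

def normalise(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

w = normalise([1.0, 1.0])   # initial guess for the principal direction
eta = 0.01                  # learning rate

for _ in range(5000):
    # Stream of 2-D vectors whose variance is largest along the first axis
    x = [random.gauss(0.0, 3.0), random.gauss(0.0, 0.5)]
    y = sum(wi * xi for wi, xi in zip(w, x))   # projection onto w
    # Oja update: w <- w + eta * y * (x - y * w), then renormalise
    w = [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]
    w = normalise(w)

# The estimate aligns with the high-variance axis (first coordinate)
print([round(c, 2) for c in w])
```

Each update touches only the current vector, which is what makes online PCA attractive when data arrive sequentially.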
Article
Full-text available
# Risk Management Magazine Vol. 12 N. 3 # The presence of negative interest rates in the major financial markets makes the derivation of Black log-normal volatility surfaces problematic. These surfaces are obtained by applying a reverse-engineering process to the Black formula, using as main inputs the market quotes of financial instruments and interest rates. To guarantee a correct derivation of the volatility surfaces, the interest rates must be positive, given the presence of logarithmic terms in the Black formula. When interest rates are negative, one faces a so-called missing-data problem, since a substantial portion of the volatility surface is missing. Missing-data reconstruction problems are often addressed with traditional methods such as interpolation, which in this case are too coarse and unsuitable for reproducing a large portion of the surface. This work proposes reconstructing the volatility surfaces with auto-associative neural networks, tools suited to a nonlinear principal component analysis of the data. It is shown that these models yield a complete reconstruction of the surfaces, especially in the presence of negative interest rates, where traditional models are unable to derive the Black volatility surface. The study is organised in four sections: Section 2 presents the modelling of nonlinear principal component analysis and its implementation via auto-associative neural networks; Section 3 validates the MATLAB code on a fairly complex analytic test surface; Section 4 presents an empirical study on the reconstruction of the Black volatility surface for European swaptions; Section 5 draws the conclusions.
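A minimal sketch of the auto-associative (autoencoder) idea used above: a network trained to reproduce its input through a narrow bottleneck. This toy version is a linear 2-1-2 autoencoder trained by stochastic gradient descent on synthetic 2-D points (linear autoencoders recover the PCA subspace); the paper's MATLAB networks are nonlinear and operate on volatility surfaces.

```python
import random

random.seed(1)
# Correlated 2-D data lying near the line y = 0.5 x
data = [(x, 0.5 * x + random.gauss(0, 0.1))
        for x in [random.gauss(0, 1) for _ in range(200)]]

w = [0.3, 0.1]   # encoder weights: 2 -> 1 bottleneck
v = [0.2, 0.4]   # decoder weights: 1 -> 2
eta = 0.01

def mse(w, v):
    err = 0.0
    for p in data:
        h = w[0] * p[0] + w[1] * p[1]    # encode to 1-D code
        rec = (v[0] * h, v[1] * h)       # decode back to 2-D
        err += (rec[0] - p[0]) ** 2 + (rec[1] - p[1]) ** 2
    return err / len(data)

loss_before = mse(w, v)
for _ in range(300):
    for p in data:
        h = w[0] * p[0] + w[1] * p[1]
        rec = [v[0] * h, v[1] * h]
        e = [rec[0] - p[0], rec[1] - p[1]]     # reconstruction error
        grad_h = v[0] * e[0] + v[1] * e[1]     # error propagated to the code
        # Gradient descent on squared reconstruction error
        v = [v[i] - eta * e[i] * h for i in range(2)]
        w = [w[i] - eta * grad_h * p[i] for i in range(2)]
loss_after = mse(w, v)
print(round(loss_before, 4), round(loss_after, 4))
```

Training drives the reconstruction error down, so points near the data manifold are reproduced from the low-dimensional code, which is the mechanism behind filling in missing surface values.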
Article
Principal component analysis (PCA) is a multivariate technique that analyzes a data table in which observations are described by several inter‐correlated quantitative dependent variables. Its goal is to extract the important information from the table, to represent it as a set of new orthogonal variables called principal components, and to display the pattern of similarity of the observations and of the variables as points in maps. The quality of the PCA model can be evaluated using cross‐validation techniques such as the bootstrap and the jackknife. PCA can be generalized as correspondence analysis (CA) in order to handle qualitative variables and as multiple factor analysis (MFA) in order to handle heterogeneous sets of variables. Mathematically, PCA depends upon the eigen‐decomposition of positive semi‐definite matrices and upon the singular value decomposition (SVD) of rectangular matrices. Copyright © 2010 John Wiley & Sons, Inc. This article is categorized under: Statistical and Graphical Methods of Data Analysis > Multivariate Analysis Statistical and Graphical Methods of Data Analysis > Dimension Reduction
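A minimal sketch of the eigen-decomposition view of PCA described above: the leading principal component is the dominant eigenvector of the data's covariance matrix, found here by power iteration on a tiny illustrative 2-D dataset.

```python
import math

data = [(2.5, 2.4), (0.5, 0.7), (2.2, 2.9), (1.9, 2.2),
        (3.1, 3.0), (2.3, 2.7), (2.0, 1.6), (1.0, 1.1)]

n = len(data)
mx = sum(p[0] for p in data) / n
my = sum(p[1] for p in data) / n
centred = [(p[0] - mx, p[1] - my) for p in data]

# 2x2 covariance matrix of the centred data
cxx = sum(x * x for x, _ in centred) / n
cyy = sum(y * y for _, y in centred) / n
cxy = sum(x * y for x, y in centred) / n
C = [[cxx, cxy], [cxy, cyy]]

# Power iteration: repeatedly apply C and renormalise
v = [1.0, 0.0]
for _ in range(100):
    v = [C[0][0] * v[0] + C[0][1] * v[1],
         C[1][0] * v[0] + C[1][1] * v[1]]
    norm = math.sqrt(v[0] ** 2 + v[1] ** 2)
    v = [v[0] / norm, v[1] / norm]

print([round(c, 3) for c in v])  # leading principal direction
```

In practice the full decomposition is computed by SVD of the centred data matrix, as the abstract notes; power iteration is shown only to make the eigenvector view concrete.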
Article
Research on this project was supported by a grant from the National Science Foundation. I am indebted to Arthur Laffer, Robert Aliber, Ray Ball, Michael Jensen, James Lorie, Merton Miller, Charles Nelson, Richard Roll, William Taylor, and Ross Watts for their helpful comments.
Article
This paper tests two of the simplest and most popular trading rules--moving average and trading range break--by utilizing the Dow Jones Index from 1897 to 1986. Standard statistical analysis is extended through the use of bootstrap techniques. Overall, the results provide strong support for the technical strategies. The returns obtained from these strategies are not consistent with four popular null models: the random walk, the AR(1), the GARCH-M, and the Exponential GARCH. Buy signals consistently generate higher returns than sell signals, and further, the returns following buy signals are less volatile than returns following sell signals. Moreover, returns following sell signals are negative, which is not easily explained by any of the currently existing equilibrium models. Copyright 1992 by American Finance Association.
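A minimal sketch of the moving-average rule tested in the paper: emit a buy signal when the price is above its n-day moving average and a sell signal otherwise. The price series and window length are illustrative.

```python
prices = [100, 101, 103, 102, 99, 98, 101, 104, 103, 106]
window = 3

signals = []
for t in range(window, len(prices)):
    ma = sum(prices[t - window:t]) / window   # moving average of prior prices
    signals.append("buy" if prices[t] > ma else "sell")

print(signals)
# ['buy', 'sell', 'sell', 'buy', 'buy', 'buy', 'buy']
```

The trading-range-break rule works analogously, comparing the current price with recent local maxima and minima instead of a moving average.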
Article
Assistant Professor and Director of Computing Services respectively at the College of Business Administration, University of Rochester. This Research was supported by the Security Trust Company, Rochester, New York. We wish to express our appreciation to David Besenfelder for his help in the computer programming effort.
Article
The authors indicate an apparently novel method for computing an inverse discrete Fourier transform (IDFT) through the use of a forward DFT program. They point out that, in many cases, this is obtained without any additional cost, either in terms of program length or in terms of computational time.
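One standard way to realise the idea above is via conjugation: the inverse DFT equals the conjugate of the forward DFT of the conjugated input, divided by N. A naive O(N^2) forward DFT stands in here for an FFT routine; the specific trick used by the authors may differ.

```python
import cmath

def dft(x):
    # Forward DFT: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft_via_dft(X):
    # IDFT(X) = conj(DFT(conj(X))) / N -- only the forward routine is needed
    N = len(X)
    y = dft([v.conjugate() for v in X])
    return [v.conjugate() / N for v in y]

x = [1 + 0j, 2 - 1j, 0 + 1j, -1 + 2j]
roundtrip = idft_via_dft(dft(x))
print([complex(round(v.real, 6), round(v.imag, 6)) for v in roundtrip])
```

The identity follows because conjugating the input flips the sign of the exponent, turning the forward kernel into the inverse one.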
Article
Principal manifolds are defined as lines or surfaces passing through "the middle" of a data distribution. Linear principal manifolds (Principal Component Analysis) are routinely used for dimension reduction, noise filtering and data visualization. Recently, methods for constructing non-linear principal manifolds were proposed, including our elastic maps approach, which is based on a physical analogy with elastic membranes. We have developed a general geometric framework for constructing "principal objects" of various dimensions and topologies with the simplest quadratic form of the smoothness penalty, which allows very effective parallel implementations. Our approach is implemented in three programming languages (C++, Java and Delphi) with two graphical user interfaces (VidaExpert http://bioinfo.curie.fr/projects/vidaexpert and ViMiDa http://bioinfo-out.curie.fr/projects/vimida applications). In this paper we overview the method of elastic maps and present in detail one of its major applications: the visualization of microarray data in bioinformatics. We show that the method of elastic maps outperforms linear PCA in terms of data approximation, representation of between-point distance structure, preservation of local point neighborhood and representation of point classes in low-dimensional spaces.
Agarwal, A.: 'Testing Weak Form of Market Efficiency by Application of Simple Technical Trading Rules on the Indian Stock Market', Dissertation, 2006
Yin, H.: 'Learning Nonlinear Principal Manifolds by Self-Organising Maps', in Principal Manifolds for Data Visualization and Dimension Reduction, Lecture Notes in Computational Science and Engineering, vol. 58, Berlin, Germany: Springer, 2007, Ch. 3, pp. 68-95