DANSKE BANK
  • Copenhagen, Denmark
Recent publications
Contrary to the extensive literature pioneered by James Hamilton in the early 1980s that focuses on analyzing the relationship between changes in the price of crude oil and the U.S. real gross domestic product (GDP) growth rate, Herrera et al. (2011) is essentially the first study that explores the in-sample predictive impact of the price of crude oil on the U.S. industrial production index. To date, almost nothing is known about the nature and degree of the out-of-sample predictive impact of the price of crude oil on the U.S. industrial production index. This study fills the gap. Using various nonlinear transformations of the price of crude oil widely employed in the crude oil price/GDP predictability literature as well as crude oil price volatility measures, we document (rather surprisingly) that the form of nonlinearity that delivers the most consistent pattern of out-of-sample population-level predictability gains relative to the benchmark when forecasting ex-post revised as well as real-time U.S. industrial production has to do with crude oil price decreases below the minimum price in recent memory. In contrast to the GDP predictability literature, crude oil price increases beyond the maximum in recent memory do not afford any predictive power; on the contrary, they deteriorate relative forecast performance. These results go directly against the sense of déjà vu one would expect given the degree of affinity between industrial production and GDP. The predictive power afforded by crude oil price net decreases also translates into economic gains.
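The "decrease below the minimum price in recent memory" transformation mirrors Hamilton's well-known net oil price increase, with the sign flipped. A minimal sketch of how such a net-decrease variable might be constructed from log oil prices (the three-year monthly window and the function name are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def net_oil_price_decrease(log_price, window=36):
    """Illustrative 'net decrease' transform: the (negative) gap between the
    current log oil price and its minimum over the previous `window` months,
    set to zero whenever the current price sits above that minimum."""
    p = np.asarray(log_price, dtype=float)
    nopd = np.zeros_like(p)
    for t in range(window, len(p)):
        recent_min = p[t - window:t].min()   # minimum price in recent memory
        nopd[t] = min(0.0, p[t] - recent_min)
    return nopd
```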
Given the important role of the petroleum industry in the Norwegian economy, one would assume that changes in the price of crude oil would greatly improve the accuracy of Norwegian real gross domestic product growth rate point (density) forecasts out-of-sample. Surprisingly, evidence of one-quarter-ahead out-of-sample point (density) forecast accuracy gains relative to the benchmark model is very weak, at best close to 3%. Furthermore, results from the unconditional equal predictive ability test suggested in Diebold and Mariano (J Bus Econ Stat 13:253–263, 1995) document that these modest gains are not statistically significant. However, the null hypothesis of equal conditional predictive ability as specified in Giacomini and White (Econometrica 74:1545–1578, 2006) is rejected for a number of models. Moreover, by relying on the information provided by the conditioning variables used in the Giacomini and White (2006) test and devising a forecast selection strategy following Granziera and Sekhposyan (Int J Forecast 35:1636–1657, 2019), we succeed in obtaining point forecast accuracy gains as high as 12% relative to the benchmark one-quarter ahead.
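For readers unfamiliar with the Diebold and Mariano (1995) test invoked above, a minimal sketch of the unconditional equal-predictive-ability statistic under squared-error loss; the Newey-West truncation lag is an illustrative choice, not one taken from the paper:

```python
import numpy as np

def diebold_mariano(e_bench, e_model, lags=4):
    """DM statistic for equal predictive ability under squared-error loss.
    Positive values favour the candidate model over the benchmark;
    the statistic is asymptotically standard normal under the null."""
    d = np.asarray(e_bench) ** 2 - np.asarray(e_model) ** 2  # loss differential
    n = len(d)
    d_bar = d.mean()
    # Newey-West (HAC) long-run variance of the loss differential
    lrv = np.mean((d - d_bar) ** 2)
    for k in range(1, lags + 1):
        gamma_k = np.mean((d[k:] - d_bar) * (d[:-k] - d_bar))
        lrv += 2 * (1 - k / (lags + 1)) * gamma_k
    return d_bar / np.sqrt(lrv / n)
```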
In a recent study, Maheu et al. (Int J Forecast 36: 570–587, 2020) suggest a predictive regression model where, besides the conditional mean, the lagged value of the predictor of interest can also impact the dependent variable through the conditional volatility process. Their out-of-sample study focusing on predicting the conditional distribution of the US real GDP growth rate by conditioning on the price of crude oil finds strong evidence in favor of the suggested specification with respect to density forecast accuracy. In this study, we demonstrate that their framework is also very useful with regard to predicting aggregate equity returns by conditioning on macroeconomic variables. Using the well-known Goyal and Welch dataset, we show that the suggested framework results in statistically significantly more accurate density predictions relative to the stochastic volatility benchmark as well as competitors where the lagged value of the predictor of interest impacts aggregate equity returns exclusively through the conditional mean process. Evidence of statistical predictability also results in VaR accuracy gains.
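A stylized version of the kind of specification described here, in which the lagged predictor $x_t$ enters both the conditional mean and the log conditional volatility (a sketch for intuition only; the exact functional form in Maheu et al. may differ):

```latex
\begin{aligned}
y_{t+1} &= \beta_0 + \beta_1 x_t + \exp(h_{t+1}/2)\,\varepsilon_{t+1},
  \qquad \varepsilon_{t+1} \sim \mathcal{N}(0,1),\\
h_{t+1} &= \gamma_0 + \gamma_1 h_t + \gamma_2 x_t + \sigma_h \eta_{t+1},
  \qquad \eta_{t+1} \sim \mathcal{N}(0,1).
\end{aligned}
```

Setting $\gamma_2 = 0$ recovers a standard stochastic-volatility benchmark in which $x_t$ acts through the conditional mean alone, which is the class of competitors referred to in the abstract.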
This study revisits the topic of predicting aggregate equity returns out-of-sample by conditioning on economic variables through Bayesian model averaging (BMA). Besides simultaneously addressing parameter instability and model uncertainty, I suggest a new model feature, namely, that predictors in a given model can also impact the dependent variable through the conditional volatility process. The suggested econometric framework is straightforward to implement without requiring simulation. Likewise, the user can easily decide which aspects of the predictive channel should be switched on, off, or altered. I apply the suggested framework to the well-known [Goyal, A. and Welch, I., A comprehensive look at the empirical performance of equity premium prediction. Rev. Financial Stud., 2008, 21, 1455–1508] dataset. An extensive out-of-sample prediction evaluation demonstrates that averaging over predictor combinations in a model that allows lagged predictors to impact aggregate equity returns exclusively through the conditional volatility process results in statistically significantly more accurate density predictions relative to the benchmark, especially when predicting the left tail of the conditional distribution. One also observes economic gains in favor of certain BMAs. Here, the BMA that allows predictors to impact equity returns through the conditional mean as well as the conditional volatility process is the top performer.
We identify long-lived pricing errors through a model in which inattentive investors arrive stochastically to trade. The model's parameters are structurally estimated using daily NYSE market-maker inventories, retail order flows, and prices. The estimated model fits empirical variances, autocorrelations, and cross-autocorrelations among our three data series from daily to monthly frequencies. Pricing errors for the typical NYSE stock have a standard deviation of 3.2 percentage points and a half-life of 6.2 weeks. These pricing errors account for 9.4%, 7.0%, and 4.5% of the respective daily, monthly, and quarterly idiosyncratic return variances.
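As a point of reference for the reported persistence, if pricing errors decayed like a first-order autoregression with daily persistence $\rho$, the half-life would satisfy the back-of-the-envelope relation below (an illustrative calculation, not the paper's structural model):

```latex
\text{half-life} = \frac{\ln(1/2)}{\ln \rho}
\quad\Longrightarrow\quad
\rho = 0.5^{1/31} \approx 0.978 \text{ per trading day}
```

for a 6.2-week (roughly 31 trading-day) half-life.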
Dynamic model averaging (DMA) has become a widely used estimation technique in macroeconomic applications. Since its introduction in econom(etr)ics by Gary Koop and Dimitris Korobilis in 2009, applications of DMA have increased in unimaginable ways. Besides applying the original (univariate) framework suggested by Koop and Korobilis to the data of interest, for example, the inflation rate of the country of choice or the rate of return on equity, practitioners have been able to use DMA‐based techniques to extend current models, thereby further improving out‐of‐sample forecast accuracy, overcome computational bottlenecks, and even help improve our understanding of economic phenomena by introducing new models. These include using Google search data in combination with the predictive likelihood to govern switching between different predictive regressions in the model set or specifying large time‐varying parameter vector autoregressions that can be estimated without resorting to simulation‐based techniques. This study provides an overview of DMA techniques and the ways in which they have evolved since the contribution of Koop and Korobilis.
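At the heart of the DMA machinery popularized by Koop and Korobilis is a recursive updating of model probabilities with a forgetting factor, which avoids simulation entirely. A minimal sketch of one step of that probability recursion (the variable names and forgetting-factor value are illustrative assumptions):

```python
import numpy as np

def dma_update(prior_probs, pred_likelihoods, alpha=0.99):
    """One step of dynamic model averaging.

    prior_probs      -- model probabilities pi_{t-1|t-1} from the last step
    pred_likelihoods -- predictive likelihood f_k(y_t | data up to t-1) of the
                        new observation under each model k
    alpha            -- forgetting factor; alpha < 1 lets model weights drift
    """
    # Prediction step: flatten yesterday's probabilities via the forgetting factor
    predicted = prior_probs ** alpha
    predicted /= predicted.sum()
    # Update step: reweight each model by how well it predicted y_t
    posterior = predicted * pred_likelihoods
    return posterior / posterior.sum()
```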
Apart from the percentage change in the price of crude oil, there is a growing tradition of using various nonlinear transformations of the price of crude oil to forecast real GDP growth rates, equity returns, inflation and other macroeconomic variables. This study attempts to quantify the additional potential predictive power afforded by crude oil price volatility relative to widely used crude oil price‐based variables for more than three hundred U.S. macroeconomic time‐series at the monthly and the quarterly sampling frequency. We observe that predictive regressions employing crude oil price realized volatility and crude oil price realized semivolatilities tend to afford a more consistent pattern of out‐of‐sample prediction gains relative to competitors using well‐known crude oil price measures and the autoregressive benchmark at the quarterly and monthly sampling frequency. While it is somewhat harder to find evidence of finite‐sample predictive gains relative to the benchmark, the evidence is stronger with respect to population‐level predictability one‐quarter (one‐month) ahead for the model with crude oil price realized semivolatilities across the considered data and models. Furthermore, point (density) forecasts employing crude oil price realized volatility tend to be more accurate than corresponding forecasts produced under the crude oil price‐based predictive regressions in a horse race.
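Realized semivolatilities of the kind used in the study split realized variance by the sign of returns. A minimal sketch of the computation from a vector of within-period (e.g. daily) crude oil returns; the function name is an assumption for illustration:

```python
import numpy as np

def realized_semivolatilities(returns):
    """Upside/downside realized semivolatility from within-period returns:
    the square root of the sum of squared positive (negative) returns."""
    r = np.asarray(returns, dtype=float)
    rs_up = np.sqrt(np.sum(r[r > 0] ** 2))    # 'good' volatility
    rs_down = np.sqrt(np.sum(r[r < 0] ** 2))  # 'bad' volatility
    return rs_up, rs_down  # rs_up**2 + rs_down**2 equals realized variance
```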
We evaluate the impact of changes in the price of crude oil on the United Kingdom (U.K.) real gross domestic product (GDP) growth rate by way of an out-of-sample forecasting analysis. We compare the performance of several nonlinear models and determine which aspects of nonlinearity are most useful for obtaining forecast improvements. Likewise, our approach takes into account the possibility that relative predictive performance can vary over the out-of-sample period. Results based on quarterly data from 1974q1 through 2018q4 illustrate that our conclusions depend on the definition of forecast improvement and whether we rely on pairwise or multiple forecast comparison. For instance, it is very difficult to find evidence that point forecasts exploiting crude oil price variables are statistically significantly more accurate than point forecasts produced under the benchmark. On the other hand, the null hypothesis of no population-level predictability is borderline rejected for certain nonlinear crude oil price variables. We also observe notable differences between using real-time and ex-post revised GDP data with regard to local out-of-sample performance. The predictive power associated with the more successful crude oil price measures appears to concentrate in the early 1990s and around the onset of the Great Recession.
We investigate if crude oil price volatility is predictable by macroeconomic variables. We consider a large number of predictors, take into account the possibility that relative predictive performance varies over the out‐of‐sample period and shed light on the economic drivers of crude oil price volatility. Results using monthly data from 1983m1 to 2018m12 document that variables related to crude oil production, economic uncertainty and variables that either describe the current stance or provide information about the future state of the economy forecast crude oil price volatility at the population level one month ahead. On the other hand, the evidence of finite‐sample predictability is very weak. A detailed examination of our out‐of‐sample results using the fluctuation test suggests that this is because relative predictive performance changes drastically over the out‐of‐sample period. The predictive power associated with the more successful macroeconomic variables concentrates around the Great Recession until 2015. They also generate the strongest signal of a decrease in the price of crude oil towards the end of 2008.
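The fluctuation test mentioned here tracks relative performance over time by computing a standardized loss differential over rolling windows rather than once over the full sample. A simplified sketch of the rolling statistic (the window length is an illustrative choice, and the full test uses a HAC variance estimator and dedicated critical values):

```python
import numpy as np

def fluctuation_statistics(e_bench, e_model, window=60):
    """Rolling standardized mean loss differential under squared-error loss.
    Large positive values in a window indicate that the candidate model
    outperforms the benchmark locally in that part of the sample."""
    d = np.asarray(e_bench) ** 2 - np.asarray(e_model) ** 2
    stats = []
    for start in range(len(d) - window + 1):
        w = d[start:start + window]
        stats.append(np.sqrt(window) * w.mean() / w.std(ddof=1))
    return np.array(stats)
```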
In a major IT development organization, individuals were too often working alone and sub-optimizing, and their performance was not anywhere near the desired level. Analysis indicated that this was due to a number of common factors which lacked attention. To address this, a framework with eight factors grounded in cross-disciplinary theory was developed and evaluated in the organization. It emerged that a self-assessment approach to the eight factors, coupled with training in team theory and facilitated workshops, can provide significant value. In the paper, we present the overall framework and details of how to apply it. Furthermore, we present relevant examples and lessons learned. We conclude that you can "bootstrap" yourself to become a higher-performing team through facilitated self-assessments using the eight factors.
Purpose – Due to changes in the market, the shift to proactive and self-directed career management is evident, resulting in the emergence of contemporary career attitudes, namely protean and boundaryless ones. Individuals with protean career (PC) and boundaryless career (BC) attitudes may be more inclined to switch jobs, which is associated with decreased organizational commitment. The purpose of this paper is to analyze whether PC and BC attitudes affect the organizational commitment of young adults in the finance sector. Design/methodology/approach – The data of 177 young Lithuanian adults from the finance sector were collected in a quantitative study. Findings – The results indicate that young adults in the finance sector exhibit strongly expressed contemporary career attitudes. The regression analysis shows that affective commitment is positively predicted by self-directed career management and a boundaryless mindset, and negatively predicted by values-driven career orientation and organizational mobility preference. Continuance commitment is negatively predicted by self-directed career management and organizational mobility preference. Originality/value – This research is valuable as few if any studies cover the contemporary career attitudes and organizational commitment of already-working young adults in the finance sector in a European country, namely Lithuania.
Microservices have seen their popularity blossoming with an explosion of concrete applications in real-life software. Several companies are currently involved in a major refactoring of their back-end systems in order to improve scalability. This article presents an experience report of a real-world case study, from the banking domain, in order to demonstrate how scalability is positively affected by reimplementing a monolithic architecture into microservices. The case study is based on the FX Core system for converting from one currency to another. FX Core is a mission-critical system of Danske Bank, the largest bank in Denmark and one of the leading financial institutions in Northern Europe.
We present techniques and protocols for the preprocessing of secure multiparty computation (MPC), focusing on the so-called SPDZ MPC scheme [14] and its derivatives [1, 11, 13]. These MPC schemes consist of a so-called preprocessing or offline phase where correlated randomness is generated that is independent of the inputs and the evaluated function, and an online phase where such correlated randomness is consumed to securely and efficiently evaluate circuits. In recent years, it has been shown that such protocols (e.g., [5, 17, 18]) turn out to be very efficient in practice. While much research has been conducted towards optimizing the online phase of MPC protocols, there seems to have been less focus on the offline phase of such protocols (except for [11]). With this work, we want to close this gap and give a toolbox of techniques that aim at optimizing the preprocessing. We support both instantiations over small fields and large rings using somewhat homomorphic encryption and the Paillier cryptosystem [19], respectively. In the case of small fields, we show how the preprocessing overhead can basically be made independent of the field characteristic. In the case of large rings, we present a protocol based on the Paillier cryptosystem which has a lower message complexity than previous protocols and employs more efficient zero-knowledge proofs that, to the best of our knowledge, were not presented in previous work.
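The offline/online split described here is easiest to see in the classic Beaver-triple technique for secure multiplication: the offline phase produces additively shared random triples (a, b, c) with c = a·b, and the online phase consumes one triple per multiplication gate. A toy two-party sketch over a prime field, with the dealer simulated locally (illustrative only; real SPDZ shares additionally carry MACs for active security):

```python
import random

P = 2**61 - 1  # toy prime field modulus

def share(x):
    """Additively share x between two parties."""
    s0 = random.randrange(P)
    return s0, (x - s0) % P

def beaver_multiply(x_sh, y_sh, triple_sh):
    """Online multiplication of shared x and y, consuming one preprocessed
    triple (a, b, c = a*b). The opened values d and e leak nothing about
    x and y because a and b are uniformly random one-time masks."""
    a_sh, b_sh, c_sh = triple_sh
    # Parties open d = x - a and e = y - b
    d = (sum(x_sh) - sum(a_sh)) % P
    e = (sum(y_sh) - sum(b_sh)) % P
    # Local computation of shares of x*y = c + d*b + e*a + d*e
    z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) % P
    z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) % P
    return z0, z1

# Offline phase (simulated by a trusted dealer): generate one triple
a, b = random.randrange(P), random.randrange(P)
triple = (share(a), share(b), share(a * b % P))

# Online phase: multiply shared inputs using the triple
x_sh, y_sh = share(7), share(6)
z_sh = beaver_multiply(x_sh, y_sh, triple)
assert sum(z_sh) % P == 42
```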
In this paper, we show how process thinking enables analysis of change in a world of forces and flows, bringing out the contingent nature of change, the importance of activating inherent forces, the power of heterogeneity of factors, and the temporality of change. We apply an extended sensemaking framework to a concrete case of change in a Multinational Corporation, in which we demonstrate and explain how two separate processes under the same change programme involving the same actors and under the same management achieved significantly different degrees of momentum. Our contribution to the sensemaking literature lies in relating social interacts with commitment and the narratives that underlie the change processes. At a more general level, the analysis shows that what drives organizational change may be the dynamics inherent in the process rather than its initial rationale or its context.
This article reports on a study that explored whether evidence can be found of a shared evaluation tradition among evaluation researchers and practitioners working in institutions in the Nordic countries. The study focused on articles in peer-reviewed, international, designated evaluation journals in the period 2000-12; it found little evidence from the analysis of these sources to support this claim. Meanwhile, the study found a clear preference of Nordic evaluators for publishing in European journals, with Sweden being the dominant source country in terms of number of publications, selection of journals in which they were published, and institutions and authors publishing the most.
Dalton is a powerful general-purpose program system for the study of molecular electronic structure at the Hartree-Fock, Kohn-Sham, multiconfigurational self-consistent-field, Møller-Plesset, configuration-interaction, and coupled-cluster levels of theory. Apart from the total energy, a wide variety of molecular properties may be calculated using these electronic-structure models. Molecular gradients and Hessians are available for geometry optimizations, molecular dynamics, and vibrational studies, whereas magnetic resonance and optical activity can be studied in a gauge-origin-invariant manner. Frequency-dependent molecular properties can be calculated using linear, quadratic, and cubic response theory. A large number of singlet and triplet perturbation operators are available for the study of one-, two-, and three-photon processes. Environmental effects may be included using various dielectric-medium and quantum-mechanics/molecular-mechanics models. Large molecules may be studied using linear-scaling and massively parallel algorithms. Dalton is distributed at no cost from http://www.daltonprogram.org for a number of UNIX platforms.
Purpose – To provide a review of the winning case study from the professional development category of EFMD's Excellence in Practice Awards, 2013. Design/methodology/approach – An independent review of the winning case. Findings – A strategic review at Danske Bank Sweden led to a decision to enhance its position as a premium bank by strengthening the advisory concept within personal banking. In response a working group proposed that a new group of investment advisors be created for individuals with up to 600,000 to invest. Originality/value – Provides insight for practitioners into a learning and development initiative of proven impact.
Text messaging on smartphones uses a full soft keyboard instead of the numeric buttons on traditional mobile phones. While being more intuitive, the lack of tactile feedback from physical buttons increases the need for user focus, which may compromise safety in certain settings. This paper reports on an empirical study of the effect of text messaging on road safety. We compared the use of a traditional mobile phone and a smartphone for writing text messages during simulated driving. The results confirm that driver performance when texting decreases considerably, as there are significant increases in reaction time, car-following distance, lane violation, number of crash/near-crash incidents, perceived task load and the amount of time the driver is looking away from the road. The results also show that smartphones make this even worse: on key performance parameters they increase the threat from text messaging while driving. These results suggest that drivers should never text while driving, especially not with a smartphone.
Open innovation has been recognised by the IT industry as a novel way to create innovation, where organisations open their innovation processes and cooperate with others to develop new products and services. We study open innovation by looking at another new trend, innovation through customising standard software as a business model. We investigate the open innovation activities of an inter-organisational network which consists of a small customising company, a large global software producer and other involved companies. We integrate formally separate aspects of open innovation and inter-organisational networks, broaden the view from one focal firm to the relations in a network of companies and underline the importance of balanced formal and informal relations, and 'coopetitive' and opportunistic behaviour for the open innovation process.
13 members
Antoine Savine
  • Superfly Analytics
Hanna Zoon
  • Business Development
Alice Buccioli
  • Asset Management
Morten Bjerregaard Pedersen
  • Enterprise Architecture Centre of Excellence
Information
Address
Copenhagen, Denmark
Website
http://www.danskebank.com/en-uk/Pages/default.aspx