Preprint

Foundations of Descriptive and Inferential Statistics (version 4)

  • parcIT GmbH

Abstract and Figures

These lecture notes were written with the aim of providing an accessible though technically solid introduction to the logic of systematic analysis of statistical data to both undergraduate and postgraduate students, in particular in the Social Sciences, Economics, and the Financial Services. They may also serve as a general reference for the application of quantitative-empirical research methods. In an attempt to encourage the adoption of an interdisciplinary perspective on quantitative problems arising in practice, the notes cover four broad topics: (i) descriptive statistical processing of raw data, (ii) elementary probability theory, (iii) the operationalisation of one-dimensional latent statistical variables according to Likert's widely used scaling approach, and (iv) null hypothesis significance testing within the frequentist approach to probability theory, concerning (a) distributional differences of variables between subgroups of a target population and (b) statistical associations between two variables. The relevance of effect sizes for making inferences is emphasised. These lecture notes are fully hyperlinked, thus providing a direct route to original scientific papers as well as to interesting biographical information. They also list many commands for running statistical functions and data analysis routines in the software packages R, SPSS, EXCEL and OpenOffice. Immediate involvement in actual data analysis practice is strongly recommended.
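As a minimal illustration of topic (iv), and of the emphasis on reporting an effect size alongside a test statistic, the following Python sketch runs a two-sample Welch t-test and computes Cohen's d. It is my own hedged example with made-up data, not code from the lecture notes, which give their commands in R, SPSS, EXCEL and OpenOffice.

```python
# Sketch (not from the lecture notes): testing a distributional difference
# between two subgroups and quantifying it with Cohen's d.
import math
from scipy import stats

group_a = [5.1, 4.9, 6.0, 5.5, 5.8, 5.2]  # hypothetical subgroup scores
group_b = [4.2, 4.0, 4.8, 4.4, 4.6, 4.1]

# Welch's t-test: does not assume equal variances between subgroups
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)

def cohens_d(x, y):
    """Cohen's d with the pooled sample standard deviation."""
    nx, ny = len(x), len(y)
    vx, vy = stats.tvar(x), stats.tvar(y)  # sample variances (ddof = 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (sum(x) / nx - sum(y) / ny) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d(group_a, group_b):.2f}")
```

The point the notes stress survives even in this toy example: the p-value alone says only that a difference is detectable, while d expresses how large it is in standardised units.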
These lecture notes provide a self-contained introduction to the mathematical methods required in a Bachelor degree programme in Business, Economics, or Management. In particular, the topics covered comprise real-valued vector and matrix algebra, systems of linear algebraic equations, Leontief's stationary input-output matrix model, linear programming, elementary financial mathematics, as well as differential and integral calculus of real-valued functions of one real variable. A special focus is placed on applications in quantitative economic modelling.
In its current 4th edition, Bortz/Döring rightly counts as the standard work for both study and everyday research practice, whether in psychology, medicine, business administration, or sociology. Hardly any other work on research methods and evaluation has developed as continuously with the growing demands of modern social research over the 20 years since its first edition. Its main focus is still psychology, although the methods presented transfer readily to other research questions. Fundamentally a textbook for students who want to learn the foundations and methods of empirical research, Bortz/Döring has also become an established reference work for advanced researchers. Alongside 156 figures and 87 tables, the textbook offers numerous exercises with model solutions at the ends of the chapters, and the authors also provide tools, instructions, and assistance for readers' own practical work, from data collection through to analysis.
Statistical Rethinking: A Bayesian Course with Examples in R and Stan builds readers' knowledge of and confidence in statistical modeling. Reflecting the need for even minor programming in today's model-based statistics, the book pushes readers to perform step-by-step calculations that are usually automated. This unique computational approach ensures that readers understand enough of the details to make reasonable choices and interpretations in their own modeling work. The text presents generalized linear multilevel models from a Bayesian perspective, relying on a simple logical interpretation of Bayesian probability and maximum entropy. It covers everything from the basics of regression to multilevel models. The author also discusses measurement error, missing data, and Gaussian process models for spatial and network autocorrelation. By using complete R code examples throughout, this book provides a practical foundation for performing statistical inference. Designed for both PhD students and seasoned professionals in the natural and social sciences, it prepares them for more advanced or specialized statistical modeling. Web Resource: The book is accompanied by an R package (rethinking) that is available on the author's website and GitHub. The two core functions (map and map2stan) of this package allow a variety of statistical models to be constructed from standard model formulas.
In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed “the New Statistics” (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
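To make the estimation-with-uncertainty idea concrete, here is a hedged Python sketch of a Bayesian credible interval for a proportion, the Bayesian counterpart of a frequentist confidence interval discussed in the abstract above. The data and the uniform prior are my own illustrative assumptions, not taken from the article.

```python
# Illustration (mine, not the article's): a 95% credible interval for a
# binomial proportion under a uniform Beta(1, 1) prior.
from scipy.stats import beta

successes, failures = 27, 13          # hypothetical observed data
a, b = 1 + successes, 1 + failures    # conjugate Beta posterior parameters

# Central 95% credible interval from the posterior quantiles
lo, hi = beta.ppf([0.025, 0.975], a, b)
print(f"95% credible interval for the proportion: [{lo:.3f}, {hi:.3f}]")
```

Unlike a frequentist confidence interval, this interval admits the direct reading that the New Statistics literature asks for: given the prior and the data, the proportion lies in it with 95% posterior probability.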
This package contains functions to compute standardized effect sizes for experiments (Cohen's d, Hedges' g, Cliff's delta, Vargha and Delaney's A). The computation algorithms have been optimized to allow efficient computation even with very large data sets. The package is available on the CRAN web site.
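To show what one of these measures computes, here is a plain-Python sketch of Vargha and Delaney's A, the probability-of-superiority effect size named above. This is my own naive O(n·m) illustration of the definition, not the optimized implementation from the effsize package.

```python
# Sketch (not the effsize package): Vargha and Delaney's A statistic.
def vargha_delaney_a(x, y):
    """Estimate P(X > Y) + 0.5 * P(X == Y); 0.5 indicates no effect."""
    wins = sum(1 for xi in x for yi in y if xi > yi)
    ties = sum(1 for xi in x for yi in y if xi == yi)
    return (wins + 0.5 * ties) / (len(x) * len(y))

print(vargha_delaney_a([3, 4, 5], [1, 2, 3]))  # close to 1: x dominates y
```

Cliff's delta, also listed above, is related to A by delta = 2A - 1, so the same pairwise comparisons underlie both statistics.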