Alexandra Posekany’s research while affiliated with University of Vienna and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (3)


Graphical representation of (a) a feed-forward neural network (FFNN) with one hidden layer, and (b) a convolution of a filter $(v_1, v_2, v_3)$, with stride = 2, on the input channel $(x_1, x_2, \dots)$. The result is in the output channel $(y_1, y_2, \dots)$
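The stride-2 convolution described in the caption above can be sketched in a few lines of plain Python (a minimal illustration, not the paper's implementation; the filter and input values are invented for demonstration):

```python
def conv1d(x, v, stride=2):
    """Slide filter v over input channel x with the given stride (no padding).
    Each output value is the dot product of v with one window of x."""
    k = len(v)
    return [sum(v[j] * x[i + j] for j in range(k))
            for i in range(0, len(x) - k + 1, stride)]

# Example: a difference filter (v1, v2, v3) = (1, 0, -1) on a short input channel
x = [1, 2, 3, 4, 5, 6]   # input channel (x1, x2, ...)
v = [1, 0, -1]           # filter (v1, v2, v3)
print(conv1d(x, v))      # output channel (y1, y2, ...): [-2, -2]
```

With stride = 2 the filter window advances two positions at a time, so the output channel has roughly half as many entries as the input.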
Prediction accuracy (PA) of the regularized, adaptive regularized and Bayesian regularized methods, computed as the Pearson correlation coefficient between the true breeding values (TBVs) and the predicted breeding values (PBVs), for the simulated dataset, where $T_1$–$T_3$ refer to three quantitative milk traits. The choice of $\lambda$, where applicable, was based on 10-fold CV. The mean squared and absolute prediction errors are also provided. See Table S2 for details
Prediction accuracy (PA) of the group regularized methods (mean and range of PA across the different groupings), computed as the Pearson correlation coefficient between the true breeding values (TBVs) and the predicted breeding values (PBVs), for the simulated dataset, where $T_1$–$T_3$ refer to three quantitative milk traits. The choice of $\lambda$ was based on 10-fold CV. The display refers to the mean, maximum and minimum values of PA across all 10 grouping schemes. The mean squared and absolute prediction errors are also provided. See Table S3 for details
Prediction accuracy (PA) of the ensemble, instance-based and deep learning methods, computed as the Pearson correlation coefficient between the true breeding values (TBVs) and the predicted breeding values (PBVs), for the simulated dataset, where $T_1$–$T_3$ refer to three quantitative milk traits. See Tables S4–S5 for details
Predictive ability (PA; mean and range of values computed across the 5-fold validation datasets and 10 replicates) of the regularized and adaptive regularized methods, computed as the Pearson correlation coefficient between the observed breeding values (OBVs) and the predicted breeding values (PBVs), for the KWS datasets. The choice of $\lambda$, where applicable, was based on 4-fold CV. See Table S8 for details
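The accuracy measure used throughout these captions, the Pearson correlation between true (or observed) and predicted breeding values, can be computed from first principles as follows (a self-contained sketch; the breeding values below are invented toy numbers, not data from the study):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences:
    covariance of a and b divided by the product of their standard deviations."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Toy example: prediction accuracy (PA) between TBVs and PBVs
tbv = [1.2, 0.4, -0.3, 2.1, 1.0]   # true breeding values
pbv = [1.0, 0.6, -0.1, 1.8, 1.1]   # predicted breeding values
pa = pearson(tbv, pbv)             # value in [-1, 1]; closer to 1 is better
```

A PA of 1 means the predicted ranking and scaling of breeding values match the true values perfectly; the captions report this alongside mean squared and absolute prediction errors.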
Genomic prediction using machine learning: a comparison of the performance of regularized regression, ensemble, instance-based and deep learning methods on synthetic and empirical data
  • Article
  • Full-text available

February 2024 · 189 Reads · 19 Citations · BMC Genomics
Background: The accurate prediction of genomic breeding values is central to genomic selection in both plant and animal breeding studies. Genomic prediction involves the use of thousands of molecular markers spanning the entire genome and therefore requires methods able to efficiently handle high-dimensional data. Not surprisingly, machine learning methods are becoming widely advocated for and used in genomic prediction studies. These methods encompass different groups of supervised and unsupervised learning methods. Although several studies have compared the predictive performances of individual methods, studies comparing the predictive performance of different groups of methods are rare. However, such studies are crucial for (i) identifying groups of methods with superior genomic predictive performance and (ii) assessing the merits and demerits of such groups of methods relative to each other and to the established classical methods. Here, we comparatively evaluate the genomic predictive performance, and informally assess the computational cost, of several groups of supervised machine learning methods, specifically regularized regression methods and deep, ensemble and instance-based learning algorithms, using one simulated animal breeding dataset and three empirical maize breeding datasets obtained from a commercial breeding program.

Results: Our results show that the relative predictive performance and computational expense of the groups of machine learning methods depend on both the data and the target traits, and that, for classical regularized methods, increasing model complexity can incur huge computational costs without necessarily improving predictive accuracy. Thus, despite their greater complexity and computational burden, neither the adaptive nor the group regularized methods clearly improved upon the results of their simpler regularized counterparts. This rules out the selection of any single machine learning procedure for routine use in genomic prediction. The results also show that, because of their competitive predictive performance, computational efficiency, simplicity and consequently relatively few tuning parameters, the classical linear mixed model and regularized regression methods are likely to remain strong contenders for genomic prediction.

Conclusions: The dependence of predictive performance and computational burden on the target datasets and traits calls for increased investment in enhancing the computational efficiency of machine learning algorithms and in computing resources.
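The regularization parameter $\lambda$ referred to in the abstract and captions is typically chosen by k-fold cross-validation: fit the model for each candidate $\lambda$ on all folds but one, score on the held-out fold, and keep the $\lambda$ with the lowest total prediction error. A minimal sketch, using a single-predictor ridge estimator on invented data rather than the study's actual genomic pipeline:

```python
def ridge_1d(x, y, lam):
    """Closed-form ridge estimate for one centred predictor:
    beta = sum(x_i * y_i) / (sum(x_i^2) + lambda)."""
    return sum(a * b for a, b in zip(x, y)) / (sum(a * a for a in x) + lam)

def cv_lambda(x, y, lambdas, k=10):
    """Pick the lambda minimising squared prediction error over k folds."""
    n = len(x)
    folds = [list(range(i, n, k)) for i in range(k)]  # simple interleaved folds
    best, best_err = None, float("inf")
    for lam in lambdas:
        err = 0.0
        for fold in folds:
            train = [i for i in range(n) if i not in fold]
            beta = ridge_1d([x[i] for i in train], [y[i] for i in train], lam)
            err += sum((y[i] - beta * x[i]) ** 2 for i in fold)
        if err < best_err:
            best, best_err = lam, err
    return best
```

Larger $\lambda$ shrinks the coefficient toward zero; the cross-validated choice trades this shrinkage against out-of-fold prediction error, which is the 10-fold (simulated data) and 4-fold (KWS data) CV procedure the captions mention.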



Assessing Students’ Motivation in a University Course on Digital Education

March 2023 · 8 Reads · 2 Citations

In this paper we explore aspiring teachers' motivation to use digital technology. Drawing on self-determination theory, we examine changes in their perceived competence, autonomy and relatedness before and after a university course on "Digital Education" in the two consecutive winter terms of 2020/21 and 2021/22. This course aims to contribute to students' digital empowerment and to a critical perspective on digitalisation. To gain insight into students' motivation, we designed a questionnaire that, in addition to motivation, asked about students' intention to continue using the technology. The validation and evaluation of the questionnaire are based on several statistical approaches: we compare the internal validity of content-related groups of questions through correlation and factor analyses, and we explore the outcomes for the two student cohorts in the two succeeding years. Students' perceived competence and their recognition of digital media's usefulness increased in both cohorts after course participation.

Keywords: Digitalisation · Digital media use · Media use continuance · Self-determination theory · Design-based research · Pre-post test

Citations (2)


... Supervised machine learning can be applied to both regression and classification tasks. Free from statistical distributional assumptions, these algorithms effectively handle heterogeneous effects and complex interactions in high-dimensional data, enhancing the accuracy of genomic prediction for both oligogenic and polygenic traits (Alemu et al. 2024; Lourenço et al. 2024; Montesinos López et al. 2022; Merrick et al. 2022; Azodi et al. 2019). ...

Reference:

Comparative genomic prediction in winter wheat
Genomic prediction using machine learning: a comparison of the performance of regularized regression, ensemble, instance-based and deep learning methods on synthetic and empirical data

BMC Genomics

... Concepts from computer science, such as computational thinking, are combined with ethical questions (e.g., regarding the use of robots) and economic, social, historical and legal perspectives in education. The common thread is created by the interplay of topics, cross-references within and between the MOOCs, the small group sessions, and discussion and reflection tasks that each cover several units (Haselberger et al., 2021; Posekany et al., 2023; Yüksel-Arslan et al., 2023). ...

Analyzing Students' Motivation for Acquiring Digital Competences
  • Citing Conference Paper
  • October 2023