Jan-Ole Perschewski’s research while affiliated with Otto-von-Guericke University Magdeburg and other places

Publications (4)


Figures: Performance of DVAEs on the test set in teacher-forcing mode · Performance for generation from the prior on the Voice Bank data set.
T-DVAE: A Transformer-Based Dynamical Variational Autoencoder for Speech
  • Chapter
  • Full-text available

September 2024 · 20 Reads · Jan-Ole Perschewski

In contrast to Variational Autoencoders, Dynamical Variational Autoencoders (DVAEs) learn a sequence of latent states for a time series. Initially, they were implemented with recurrent neural networks (RNNs), which are known for challenging training dynamics and problems with long-term dependencies. This led to the recent adoption of Transformers in architectures that otherwise stay close to the RNN-based implementations. These implementations still use RNNs as part of the architecture, even though the Transformer can solve the task as the sole building block. Hence, we improve the LigHT-DVAE architecture by removing its dependence on RNNs and cross-attention. Furthermore, we show that a trained LigHT-DVAE ignores output-to-hidden connections, which allows us to simplify the overall architecture by removing them. We demonstrate the capability of the resulting T-DVAE on LibriSpeech and Voice Bank, with improvements in training time, memory consumption, and generative performance.
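As a rough illustration of the idea in the abstract, the following is a minimal sketch of a DVAE built from a causal Transformer alone, with no RNN anywhere. All names, dimensions, and the standard-normal latent prior are illustrative assumptions for a toy model; the actual T-DVAE uses a learned sequential prior and a more elaborate architecture.

    import torch
    import torch.nn as nn

    class TinyTransformerDVAE(nn.Module):
        """Illustrative transformer-only DVAE (not the exact T-DVAE):
        a causally masked Transformer produces per-step posterior
        parameters q(z_t | x_{1:t}), and a decoder reconstructs x_t
        from the sampled latent sequence."""

        def __init__(self, x_dim=64, z_dim=16, d_model=128, n_layers=2):
            super().__init__()
            self.embed = nn.Linear(x_dim, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.to_mu = nn.Linear(d_model, z_dim)
            self.to_logvar = nn.Linear(d_model, z_dim)
            self.decoder = nn.Linear(z_dim, x_dim)

        def forward(self, x):                      # x: (batch, time, x_dim)
            T = x.size(1)
            # Causal mask so step t only attends to steps 1..t.
            mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
            h = self.encoder(self.embed(x), mask=mask)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            x_hat = self.decoder(z)
            # ELBO with a standard-normal prior; a simplification, since
            # DVAEs typically learn an autoregressive prior p(z_t | z_{<t}).
            recon = (x_hat - x).pow(2).mean()
            kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
            return recon + kl

    model = TinyTransformerDVAE()
    loss = model(torch.randn(4, 20, 64))   # toy batch: 4 sequences of length 20
    loss.backward()

The point of the sketch is structural: both inference and generation run through the masked Transformer, so no output-to-hidden RNN connections are needed.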



Figure: Correlations between CSA gain, retrospective/post assessment for each factor, and the usage of other courses or other AI education.
Evaluating AI Courses: A Valid and Reliable Instrument for Assessing Artificial-Intelligence Learning through Comparative Self-Assessment

September 2023 · 524 Reads · 11 Citations

Education Sciences

A growing number of courses seek to increase the basic artificial-intelligence skills (“AI literacy”) of their participants. At this time, there is no valid and reliable measurement tool that can be used to assess AI-learning gains. However, the existence of such a tool would be important to enable quality assurance and comparability. In this study, a validated AI-literacy assessment instrument, the “scale for the assessment of non-experts’ AI literacy” (SNAIL), was adapted and used to evaluate an undergraduate AI course. We investigated whether the scale can be used to reliably evaluate AI courses and whether mediator variables, such as attitudes toward AI or participation in other AI courses, had an influence on learning gains. In addition to the traditional mean comparisons (i.e., t-tests), the comparative self-assessment (CSA) gain was calculated, which allowed for a more meaningful assessment of the increase in AI literacy. We found preliminary evidence that the adapted SNAIL questionnaire enables a valid evaluation of AI-learning gains. In particular, distinctions among different subconstructs and the differentiation from related constructs, such as attitudes toward AI, seem to be possible with the help of the SNAIL questionnaire.
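For intuition, here is a small sketch of how a CSA gain can be computed. It follows one common formulation (the realized fraction of the maximum possible improvement over the retrospective pre rating, averaged over respondents); the paper's exact variant, scale anchors, and all data below are assumptions for illustration.

    # Minimal sketch of a comparative self-assessment (CSA) gain computation.
    # Assumes a 6-point scale where higher ratings mean higher self-assessed
    # AI literacy; the ratings are made up for illustration.

    def csa_gain(retro, post, scale_max=6.0):
        """Mean percentage gain relative to the maximum possible improvement
        over each respondent's retrospective pre ("then") rating."""
        gains = [(p - r) / (scale_max - r)
                 for r, p in zip(retro, post) if r < scale_max]
        return 100.0 * sum(gains) / len(gains)

    retro = [2, 3, 2, 4, 3]   # retrospective pre ratings ("then")
    post  = [4, 5, 3, 5, 5]   # post-course ratings ("now")
    print(f"CSA gain: {csa_gain(retro, post):.1f}%")   # about 51.7%

Because each gain is normalized by the room left for improvement, a respondent moving from 5 to 6 counts as much as one moving from 2 to 6, which is what makes the CSA gain more informative than a raw mean difference.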


Neural-Gas VAE

September 2022 · 21 Reads

Lecture Notes in Computer Science

Most deep learning models are known to be black-box models due to their overwhelming complexity. One approach to making models more interpretable is to reduce their representations to a finite number of objects. This can be achieved by clustering latent spaces or by training models that include quantization by design, such as the Vector Quantised-Variational AutoEncoder (VQ-VAE). However, if the architecture is not chosen carefully, a phenomenon called index collapse can be observed: a large part of the codebook containing the prototypes goes unused, which limits the achievable performance. Approaches to circumvent this rely either on data-dependent initialization or on decreasing the dimensionality of the codebook vectors. In this paper, we present a novel variant of the VQ-VAE, the Neural-Gas VAE, which adapts the codebook loss, inspired by neural gas, to avoid index collapse. We show that the Neural-Gas VAE achieves competitive performance on CIFAR and Speech Commands for different codebook sizes and dimensions. Moreover, we show that the resulting architecture learns a meaningful latent space and topology for both features and objects.

Keywords: index collapse, vector quantization
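The core idea, a codebook loss with neural-gas-style rank weighting so that every prototype keeps receiving gradient, can be sketched as follows. This is an illustrative approximation under assumed shapes and a made-up neighborhood width, not the paper's exact loss (which also involves the usual VQ-VAE reconstruction and commitment terms).

    import torch

    def neural_gas_codebook_loss(z_e, codebook, lam=2.0):
        """Rank-weighted codebook loss in the spirit of neural gas.

        Instead of pulling only the nearest prototype toward each encoder
        output (as in the plain VQ-VAE), every codebook vector is attracted
        with a weight that decays exponentially with its distance rank, so
        unused prototypes keep receiving gradient and index collapse is
        less likely. `lam` controls the neighborhood width."""
        # z_e: (batch, dim) encoder outputs; codebook: (K, dim) prototypes.
        dists = torch.cdist(z_e, codebook)            # (batch, K) distances
        ranks = dists.argsort(dim=1).argsort(dim=1)   # rank of each prototype
        weights = torch.exp(-ranks.float() / lam)     # neural-gas weighting
        return (weights * dists.pow(2)).mean()

    # Illustrative usage with random data.
    z_e = torch.randn(8, 16)                          # pretend encoder outputs
    codebook = torch.randn(32, 16, requires_grad=True)
    loss = neural_gas_codebook_loss(z_e.detach(), codebook)
    loss.backward()                                   # every prototype gets gradient

As the rank weighting decays, the loss approaches the plain VQ-VAE codebook loss; a wider neighborhood spreads the update over more prototypes, which is what counteracts index collapse.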

Citations (1)


... In the third study (Laupichler et al., 2023c), the aforementioned scale was adapted to evaluate AI literacy among non-experts to assess AI courses for university students from various disciplines. The evaluation relied on participants' self-assessments. ...

Reference: Artificial intelligence literacy among university students—a comparative transnational survey

Cites: Evaluating AI Courses: A Valid and Reliable Instrument for Assessing Artificial-Intelligence Learning through Comparative Self-Assessment · Education Sciences