Vladimir Andreevich Marochko

  • Bachelor of Science
  • PhD candidate at Technological University Dublin

About

  • 10 publications
  • 1,298 reads
  • 18 citations
Introduction
Vladimir Andreevich Marochko joined the PhD research team at the Artificial Intelligence and Cognitive Load Research Lab, TU Dublin. He does research in data mining, artificial neural networks, and artificial intelligence. His most recent publication is 'An application of Integrated Gradients for the analysis of the P3b event-related potential on a convolutional neural network trained with superlets.'
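The Integrated Gradients method named in that publication attributes a model's prediction to its input features by integrating the model's gradient along a straight-line path from a baseline to the input, approximated with a Riemann sum. As an illustration only (not the paper's code, which analysed a convolutional network on EEG data), a minimal sketch on a toy differentiable function where the gradient is known in closed form:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    # Midpoint-rule approximation of the path integral from baseline to x:
    # IG_i(x) = (x_i - baseline_i) * mean over alpha of dF/dx_i at the
    # interpolated point baseline + alpha * (x - baseline).
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy "model": f(x) = sum(x**2), whose gradient is 2x.
f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2.0 * x

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_f, x, baseline)
```

A useful sanity check is the completeness axiom: the attributions sum to `f(x) - f(baseline)`, which for this quadratic gives per-feature attributions equal to `x**2`.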
Current institution
Technological University Dublin
Current position
  • PhD candidate

Publications

Publications (10)
Chapter
Full-text available
Virtual reality (VR) is gaining traction in many contexts, allowing users to have a real-life-like experience in a virtual world. However, its application in the field of neuroscience, above all probing neural activity with the analysis of electroencephalographic (EEG) event-related potentials (ERPs), is underexplored. This article reviews the state-of...
Preprint
Full-text available
Catastrophic forgetting has a significant negative impact in reinforcement learning. The purpose of this study is to investigate how pseudorehearsal can change the performance of an actor-critic agent with neural-network function approximation. We tested the agent in a pole balancing task and compared different pseudorehearsal approaches. We have found tha...
Preprint
Full-text available
Catastrophic forgetting has a significant negative impact in reinforcement learning. The purpose of this study is to investigate how pseudorehearsal can change the performance of an actor-critic agent with neural-network function approximation. We tested the agent in a pole balancing task and compared different pseudorehearsal approaches. We have found tha...
Conference Paper
Catastrophic forgetting has a significant negative impact in reinforcement learning. The purpose of this study is to investigate how pseudorehearsal can change the performance of an actor-critic agent with neural-network function approximation. We tested the agent in a pole balancing task and compared different pseudorehearsal approaches. We have found tha...
Conference Paper
Full-text available
Catastrophic forgetting is of special importance in reinforcement learning, as the data distribution is generally non-stationary over time. We study and compare several pseudorehearsal approaches for Q-learning with function approximation in a pole balancing task. We have found that pseudorehearsal seems to assist learning even in such very simple...
Technical Report
Full-text available
Catastrophic forgetting has a serious impact in reinforcement learning, as the data distribution is generally sparse and non-stationary over time. The purpose of this study is to investigate whether pseudorehearsal can increase the performance of an actor-critic agent with neural-network-based policy selection and function approximation in a pole balan...
Thesis
Full-text available
Catastrophic forgetting has a serious impact in reinforcement learning, as the data distribution is generally sparse and non-stationary over time. The purpose of this study is to investigate whether pseudorehearsal can increase the performance of an actor-critic agent with neural-network-based policy selection and function approximation in a pole balan...
Article
Full-text available
Catastrophic forgetting has a serious impact in reinforcement learning, as the data distribution is generally sparse and non-stationary over time. The purpose of this study is to investigate whether pseudorehearsal can increase the performance of an actor-critic agent with neural-network-based policy selection and function approximation in a pole balan...
Article
Full-text available
Catastrophic forgetting is of special importance in reinforcement learning, as the data distribution is generally non-stationary over time. We study and compare several pseudorehearsal approaches for Q-learning with function approximation in a pole balancing task. We have found that pseudorehearsal seems to assist learning even in such very simple...
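Pseudorehearsal, the recurring technique in the publications above, protects a network against catastrophic forgetting by generating random pseudo-inputs, labelling them with the network's own current outputs, and interleaving those pseudo-items with new training data. A minimal supervised sketch of the mechanism only (not the authors' actor-critic or Q-learning code; the tiny network, tasks, and all hyperparameters here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-8-1 tanh network acting as the function approximator.
W1 = rng.normal(scale=0.5, size=(8, 1)); b1 = np.zeros((8, 1))
W2 = rng.normal(scale=0.5, size=(1, 8)); b2 = np.zeros((1, 1))

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def train_step(x, y, lr=0.05):
    # One squared-error SGD step, backpropagated through the tanh layer.
    global W1, b1, W2, b2
    out, h = forward(x)
    err = out - y
    dW2 = err @ h.T
    dh = (W2.T @ err) * (1.0 - h ** 2)
    dW1 = dh @ x.T
    W1 -= lr * dW1; b1 -= lr * dh
    W2 -= lr * dW2; b2 -= lr * err

# Task A: fit y = x on [-1, 0].
for _ in range(2000):
    x = rng.uniform(-1.0, 0.0, size=(1, 1))
    train_step(x, x)

# Pseudorehearsal: random pseudo-inputs labelled by the network itself,
# freezing a snapshot of its current behaviour.
pseudo = []
for _ in range(50):
    px = rng.uniform(-1.0, 1.0, size=(1, 1))
    pseudo.append((px, forward(px)[0].copy()))

# Task B: fit y = -x on [0, 1], interleaving pseudo-items so the
# behaviour captured before task B is rehearsed alongside the new data.
for _ in range(2000):
    x = rng.uniform(0.0, 1.0, size=(1, 1))
    train_step(x, -x)
    px, py = pseudo[rng.integers(len(pseudo))]
    train_step(px, py)
```

The key design point is that pseudo-items need no access to the original task-A data: random inputs passed through the trained network stand in for the old experience, which is what makes the method attractive when past observations cannot be stored.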
