Charalampos Koniaris
Disney Research

Doctor of Engineering

About

25 Publications
2,702 Reads
303 Citations (302 since 2016, across 19 research items)
[Citations-per-year chart, 2016–2022]
Additional affiliations
January 2015 – present: Disney Research, Postdoc

Publications (25)
Article
Full-text available
The lifetime of proteins in synapses is important for their signaling, maintenance, and remodeling, and for memory duration. We quantified the lifetime of endogenous PSD95, an abundant postsynaptic protein in excitatory synapses, at single-synapse resolution across the mouse brain and lifespan, generating the Protein Lifetime Synaptome Atlas. Excit...
Article
Brain synapses through the life span. Excitatory synapses connect neurons in the brain to build the circuits that enable behavior. Cizeron et al. surveyed synapses in the mouse brain from birth to old age and present the data as a community resource, the Mouse Lifespan Synaptome Atlas (see the Perspective by Micheva et al.). Molecular and morpholog...
Preprint
Full-text available
How synapses change molecularly during the lifespan and across all brain circuits is unknown. We analyzed the protein composition of billions of individual synapses from birth to old age on a brain-wide scale in the mouse, revealing a program of changes in the lifespan synaptome architecture spanning individual dendrites to the systems level. Three...
Article
Full-text available
Synapses are found in vast numbers in the brain and contain complex proteomes. We developed genetic labeling and imaging methods to examine synaptic proteins in individual excitatory synapses across all regions of the mouse brain. Synapse catalogs were generated from the molecular and morphological features of a billion synapses. Each synapse subty...
Data
Each row and column in the table represent one delineated mouse brain subregion and values in the cell represent the similarity of synaptome parameters for the two subregions. Abbreviations: ACAd, Anterior Cingulate Area, dorsal part; ACAv, Anterior Cingulate Area, ventral part; ACB, nucleus Accumbens; AId, Agranular Insular area, dorsal part; AIv,...
Data
Table S4. PSD95 Synaptome Parameters in Wild-Type, Psd93−/−, and Sap102−/− Mice, Related to Figure 7A. Average PSD95 punctum density (left tables, in puncta per 100 μm²), intensity (middle tables, mean gray value, 16 bits), and size (right tables, in μm²) in wild-type (orange, n = 13), Psd93−/− (blue, n = 6), and Sap102−/− mice (green, n = 11). The...
Data
Table S1. The PSD95 and SAP102 Synaptome Parameters in Different Brain Subregions on the Whole-Mouse-Brain Scale, Related to Figure 2B. Average PSD95 and SAP102 punctum density, intensity, size, and colocalization values in different delineated brain subregions defined in the Allen Reference Atlas. The main overarching area is also indicated for ea...
Article
Pre-calculated depth information is essential for efficient light field video rendering, due to the prohibitive cost of depth estimation from color when real-time performance is desired. Standard state-of-the-art video codecs fail to satisfy such performance requirements when the amount of data to be decoded becomes too large. In this paper, we pro...
Article
We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive representation, we display the content in real-time according to the tracked head pose. For each frame,...
Conference Paper
Lightfield video, as a high-dimensional function, is very demanding in terms of storage. As such, lightfield video data, even in compressed form, do not typically fit in GPU or main memory unless the capture area, resolution, or duration is sufficiently small. Additionally, latency minimization, critical for viewer comfort in use-cases such as vir...
Conference Paper
Full-text available
We present immersive storytelling in VR enhanced with non-linear sequenced sound, touch and light. Our Deep Media [Rose 2012] aim is to allow for guests to physically enter rendered movies with novel non-linear storytelling capability. With the ability to change the outcome of the story through touch and physical movement, we enable the agency of g...
Conference Paper
Full-text available
We propose an end-to-end solution for presenting movie quality animated graphics to the user while still allowing the sense of presence afforded by free viewpoint head motion. By transforming offline rendered movie content into a novel immersive representation, we display the content in real-time according to the tracked head pose. For each frame,...
Article
We propose a new real-time temporal filtering and antialiasing (AA) method for rasterization graphics pipelines. Our method is based on Pixel History Linear Models (PHLM), a new concept for modeling the history of pixel shading values over time using linear models. Based on PHLM, our method can predict per-pixel variations of the shading function b...
Conference Paper
Full-text available
With a recent rise in the availability of affordable head mounted gear sets, various sensory stimulations (e.g., visual, auditory and haptics) are integrated to provide seamlessly embodied virtual experience in areas such as education, entertainment, therapy and social interactions. Currently, there is an abundance of available toolkits and applica...
Conference Paper
Compelling virtual reality experiences require high quality imagery as well as head motion with six degrees of freedom. Most existing systems limit the motion of the viewer (prerecorded fixed position 360 video panoramas), or are limited in realism, e.g. video game quality graphics rendered in real-time on low powered devices. We propose a solution...
Conference Paper
Perceptually lossless foveated rendering methods exploit human perception by selectively rendering at different quality levels based on eye gaze (at a lower computational cost) while still maintaining the user's perception of a full quality render. We consider three foveated rendering methods and propose practical rules of thumb for each method to...
Conference Paper
Parameterisation of models is typically generated for a single pose, the rest pose. When a model deforms, its parameterisation characteristics change, leading to distortions in the appearance of texture-mapped mesostructure. Such distortions are undesirable when the represented surface detail is heterogeneous in terms of elasticity (e.g. texture wi...
Article
In this paper we present a novel approach to author vegetation cover of large natural scenes. Unlike stochastic scatter-instancing tools for plant placement (such as multi-class blue noise generators), we use a simulation based on ecological processes to produce layouts of plant distributions. In contrast to previous work on ecosystem simulation, h...
Conference Paper
Animation of models often introduces distortions to their parameterisation, as these are typically optimised for a single frame. The net effect is that under deformation, the mapped features, i.e. UV texture maps, bump maps or displacement maps, may appear to stretch or scale in an undesirable way. Ideally, what we would like is for the appearance...
Conference Paper
Full-text available
Rendering realistic outdoor scenes in realtime applications is a difficult task to accomplish since the geometric complexity of the objects, and most notably of trees, is too high for current hardware to handle efficiently in large amounts. Our method generates trees with self-similarity, and later exploits this property by heavily sharing prerende...

Projects (4)
Project
A project investigating embodied online dancing and partying supported by Extended Reality techniques and AI-powered digital characters. The Carousel project investigates novel, original, and imaginative combinations of Artificial Intelligence and immersive interaction technologies that allow people to feel each other's presence, touch, and movement even when they are not in the same physical space. The project's final goal is to help people increase happiness by combating loneliness.
More details are available on our website: https://www.carouseldancing.org
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101017779. Details on the support of the European Union are provided here: https://cordis.europa.eu/project/id/101017779