Figure - available from: Nature Communications
The deep convolutional neural network (DCNN) model naturally encodes low-level and high-level features and predicts participants' choice behavior. a Schematic of the deep convolutional neural network (DCNN) model and the results of the decoding analysis⁴². The DCNN model was first trained on ImageNet object classification and then trained to predict the average ratings of the art stimuli. We computed correlations between each of the LFS model features and activity patterns in each of the hidden layers of the DCNN model. Some low-level visual features exhibited significantly decreasing predictive accuracy over hidden layers (e.g., the mean hue and the mean saturation), whereas a few computationally demanding low-level features showed the opposite trend (see the main text). Some high-level visual features exhibited significantly increasing predictive accuracy over hidden layers (e.g., concreteness and dynamics). Results reproduced from⁴². b The DCNN model predicted human participants' liking ratings significantly better than chance across all participants. Statistical significance (p < 0.001, indicated by three stars) was determined by a one-sided permutation test. Credit: Jean Metzinger, Portrait of Albert Gleizes (public domain; RISD Museum).
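The layer-wise decoding analysis described in panel a, and the permutation test used in panel b, can be sketched as follows: for one hidden layer, fit a linear readout from the layer's activations to a given feature (or rating), score it on held-out stimuli, and compare the observed score against a null distribution built by shuffling labels. This is a minimal sketch under illustrative choices (ridge penalty, split-half evaluation), not the paper's exact pipeline.

```python
import numpy as np

def layer_decoding_accuracy(acts, target, n_perm=1000, lam=1.0, seed=0):
    """Decode `target` (a feature value or rating per stimulus) from one
    layer's activations `acts` (n_stimuli x n_units).

    Fits a ridge regression on the first half of the stimuli, evaluates
    the Pearson correlation on the second half, and computes a one-sided
    permutation p-value by refitting with shuffled training labels.
    """
    rng = np.random.default_rng(seed)
    half = len(target) // 2
    X_tr, X_te = acts[:half], acts[half:]
    y_tr, y_te = target[:half], target[half:]

    def fit_and_score(y):
        # Ridge solution: w = (X'X + lam*I)^-1 X'y
        w = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(X_tr.shape[1]),
                            X_tr.T @ y)
        return np.corrcoef(X_te @ w, y_te)[0, 1]

    r_obs = fit_and_score(y_tr)
    null = np.array([fit_and_score(rng.permutation(y_tr))
                     for _ in range(n_perm)])
    # One-sided p-value with the standard +1 correction.
    p = (1 + np.sum(null >= r_obs)) / (1 + n_perm)
    return r_obs, p
```

Running this per layer and per LFS feature yields the accuracy-over-layers profiles shown in panel a; applying it with liking ratings as the target corresponds to the chance-level comparison in panel b.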
Source publication
Little is known about how the brain computes the perceived aesthetic value of complex stimuli such as visual art. Here, we used computational methods in combination with functional neuroimaging to provide evidence that the aesthetic value of a visual stimulus is computed in a hierarchical manner via a weighted integration over both low and high lev...
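The "weighted integration over both low and high level features" described in the abstract can be sketched as a simple linear readout: each stimulus is summarized by a feature vector, and the predicted aesthetic value is a weighted sum of those features. The feature names and weights below are purely illustrative, not the paper's fitted values.

```python
# Hypothetical z-scored features for one painting: low-level
# (hue, saturation, contrast) and high-level (concreteness, dynamics).
features = {
    "mean_hue": 0.4,
    "mean_saturation": -0.2,
    "contrast": 1.1,
    "concreteness": 0.8,
    "dynamics": -0.5,
}

# Illustrative per-participant weights, e.g. obtained by regressing
# liking ratings on the features across stimuli.
weights = {
    "mean_hue": 0.1,
    "mean_saturation": 0.05,
    "contrast": 0.3,
    "concreteness": 0.6,
    "dynamics": 0.2,
}

# Predicted aesthetic value = weighted sum over all features.
value = sum(weights[k] * features[k] for k in features)
```

Individual differences in taste then correspond to different weight vectors over the same shared feature space.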
Citations
... Herein, we investigated the effect of congenital differences in colour vision on visual attention and viewers' impressions of complex images. Using artistic paintings, in which a range of visual features (from low to high levels) have been shown to explain impressions, we extracted a variety of impressions, including colour and aesthetic sensations [7][8][9][10]. We devised an experiment wherein trichromatic observers were shown images that simulated dichromatic perception (hereafter, 'simulated dichromats') to compare the experience of congenital dichromacy with the short-term experience of simulated dichromacy. ...
Humans exhibit colour vision variations due to genetic polymorphisms, with trichromacy being the most common, while some people are classified as dichromats. Whether genetic differences in colour vision affect how complex images are viewed remains unknown. Here, we investigated how people with different colour vision focused their gaze on aesthetic paintings, using eye-tracking while they freely viewed digital renderings of paintings, and assessed individual impressions through a decomposition analysis of adjective ratings for the images. Gaze-concentrated areas were more highly correlated among trichromats than among dichromats. However, compared with the brief dichromatic experience with the simulated images, innate colour vision differences had little effect on impressions. These results indicate that chromatic information serves as a cue for guiding attention, whereas each person's impression is generated according to their own sensory experience and normalized within their own colour space.
... The interaction of the three dimensions leads to aesthetic experiences, and the mechanisms by which these dimensions influence one another tend to mimic their interactions in other, non-aesthetic engagements with objects (Chatterjee &amp; Vartanian, 2014). Take visual aesthetics as an example: regions such as the medial prefrontal cortex (mPFC) and lateral prefrontal cortex (lPFC) are first activated by aesthetic objects such as faces and paintings, and these regions are then involved in evaluating visual features via personal associations, producing responses such as pleasure or a sense of ugliness (Iigaya et al., 2023). ...
In this study, we investigated whether a colourful design would foster the aesthetic appreciation of classic Chinese poetry, and the role of learners' empathy trait in this process. Middle school students (N = 159) were assigned to one of four conditions in a 2 (Colour Design: Colourful vs. Grayscale) × 2 (Empathy Trait: High vs. Low) between-subjects design. Results showed that learners with high empathy attained higher scores on the comprehension test and higher aesthetic appreciation ratings than learners with low empathy. Moreover, a significant interaction of the two factors on aesthetic emotion and evaluation ratings was found: with grayscale learning materials, aesthetic ratings were higher in the high-empathy group than in the low-empathy group, whereas with the colourful design the two groups' ratings were comparable. These results indicate that a colourful design can foster low-empathy learners' aesthetic appreciation, and suggest that individual differences should be taken into consideration in instructional design for the multimedia learning of poetry.
... Yet, space is not the only dimension across which efficient integration can mediate perceived beauty. For example, a recent study used DNN models to quantify the integration of visual features across hierarchical levels to model aesthetic perception, and in turn linked such hierarchical integration processes to parietal and frontal brain systems (Iigaya et al., 2023). Future studies could also link perceived beauty to integration across time: recent studies in neuroaesthetics increasingly focus on more naturalistic and dynamic ...
Previous research indicates that the beauty of natural images is already determined during perceptual analysis. However, it is still largely unclear which perceptual computations give rise to the perception of beauty. Theories of processing fluency suggest that the ease of processing for an image determines its perceived beauty. Here, we tested whether perceived beauty is related to the amount of spatial integration across an image, a perceptual computation that reduces processing demands by aggregating image elements into more efficient representations of the whole. We hypothesized that higher degrees of integration reduce processing demands in the visual system and thereby predispose the perception of beauty. We quantified integrative processing in an artificial deep neural network model of vision: We compared activations between parts of the image and the whole image, where the degree of integration was determined by the amount of deviation between activations for the whole image and its constituent parts. This quantification of integration predicted the beauty ratings for natural images across four studies, which featured different stimuli and task demands. In a complementary fMRI study, we show that integrative processing in human visual cortex predicts perceived beauty in a similar way as in artificial neural networks. Together, our results establish integration as a computational principle that facilitates perceptual analysis and thereby mediates the perception of beauty.
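The part–whole integration measure described in this abstract can be approximated as follows: pass the whole image and its parts through a network, and quantify how much the whole-image response deviates from the combined part responses. Here `toy_net` is a stand-in for a layer of a pretrained DNN, and quartering the image is one possible choice of parts; both are illustrative assumptions, not the study's actual model.

```python
import numpy as np

def integration_score(image, net):
    """Deviation between the network's response to the whole image and
    the averaged responses to its parts (here: four quadrants).
    A larger deviation indicates more integrative processing."""
    h, w = image.shape[0] // 2, image.shape[1] // 2
    parts = [image[:h, :w], image[:h, w:], image[h:, :w], image[h:, w:]]
    whole_act = net(image)
    part_act = np.mean([net(p) for p in parts], axis=0)
    # Deviation as 1 minus the correlation between the two responses.
    r = np.corrcoef(whole_act, part_act)[0, 1]
    return 1.0 - r

def toy_net(img):
    """Stand-in 'network': a fixed-size intensity histogram, so inputs of
    any spatial size map to activation vectors of equal length."""
    return np.histogram(img, bins=16, range=(0.0, 1.0))[0].astype(float)
```

Under the processing-fluency account sketched in the abstract, images whose whole-image responses are well predicted by their parts (low deviation) are processed more efficiently; how this score relates to beauty ratings is then an empirical question.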
As subjective experiences go, beauty matters. Although aesthetics has long been a topic of study, research in this area has not resulted in a level of interest and progress commensurate with its import. Here, we briefly discuss two recent advances, one computational and one neuroscientific, and their pertinence to aesthetic processing. First, we hypothesize that deep neural networks provide the capacity to model representations essential to aesthetic experiences. Second, we highlight the principal gradient as an axis of information processing that is potentially key to examining where and how aesthetic processing takes place in the brain. In concert with established neuroimaging tools, we suggest that these advances may cultivate a new frontier in the understanding of our aesthetic experiences.