Delrick Nunes De Oliveira’s scientific contributions


Publications (5)


Study and Development of Machine Learning Models Designed for Extended Reality Interactivity in Real-Time
Chapter · December 2024 · 7 Reads

Geovana Amorim Abensur · Agustín Alejandro Ortiz Díaz · Delrick Nunes De Oliveira


Experimental Comparative Study of Three Models of Convolutional Neural Networks for Emotion Recognition

November 2023 · 8 Reads · 1 Citation

Lecture Notes in Computer Science

Many researchers agree that facial expressions are one of the main non-verbal ways that human beings communicate and express emotions. For this reason, there has been a significant increase in interest in capturing facial movements to generate realistic digital animations incorporated into virtual environments. Our overall project has the final objective of developing and evaluating different convolutional neural network (CNN) models. These models will fit a linear “blendshapes” model of facial expression from images obtained by a head-mounted display (HMD). At this stage of the project, our goal in this paper is to compare, through different evaluation metrics (accuracy, runtime, and so on), three CNN models designed to detect emotions. These models differ in how they treat input images. Two of the models partition the input images: one divides the images into 3 fundamental parts (forehead, eyes + nose, and mouth + chin); the other divides the images, right in the middle, into two presumably symmetrical parts. The third model works with full images. In these first proposals, we used frontal, black-and-white 2D face images to train all the CNN models. The main experiments are carried out on the Japanese Female Facial Expression (JAFFE) database, which contains 213 images categorized into 7 facial expressions: 6 basic facial expressions (Happiness, Sadness, Surprise, Anger, Disgust, and Fear) plus 1 neutral. The three proposed models yielded satisfactory results in terms of accuracy. In addition, the training time remained within acceptable values.
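The three input treatments the abstract describes can be sketched as simple array crops. This is a minimal illustration assuming equal-sized regions, since the paper does not specify exact partition boundaries; the function names and the 48×48 grayscale image size are assumptions, not details from the paper.

```python
import numpy as np

def split_three_bands(img):
    """Partition a face image into three horizontal bands, roughly
    forehead, eyes + nose, and mouth + chin (equal thirds assumed)."""
    h = img.shape[0]
    return img[: h // 3], img[h // 3 : 2 * h // 3], img[2 * h // 3 :]

def split_halves(img):
    """Split a face image down the middle into two presumably
    symmetric left/right halves."""
    w = img.shape[1]
    return img[:, : w // 2], img[:, w // 2 :]

# Stand-in for a grayscale frontal face image; the third model in the
# paper would consume `face` whole, without any partitioning.
face = np.zeros((48, 48), dtype=np.uint8)
forehead, eyes_nose, mouth_chin = split_three_bands(face)
left, right = split_halves(face)
```

Each crop would then feed a separate CNN branch, which is why the models can differ in accuracy and runtime despite sharing the same input images.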


Study on Different Methods for Recognition of Facial Expressions from the Data Generated by Modern HMDs

July 2023 · 46 Reads · 1 Citation

Communications in Computer and Information Science

Facial expression is one of the non-verbal channels that human beings use most, often unconsciously, to communicate and convey emotions. Recognizing and tracking facial expressions is among the main challenges for companies that intend to enter virtual social environments. Virtual worlds are becoming viable thanks to head-mounted displays (HMDs), which allow people to interact in these environments with a great deal of realism. However, recognizing and tracking facial expressions on HMDs has been challenging due to optical occlusion: the device itself occludes the eyes, which are a fundamental part of facial expressions. In general, the first HMDs did not have cameras or sensors that captured what was happening behind the device. Because of this, previous research has often proposed working with partial facial features (for example, mouth, cheeks, chin, and so on). However, as of 2021, some of the latest HMDs incorporate cameras and/or sensors for face and hand tracking. Among these modern devices are the HTC Vive Focus 3 (HTC), the HP Reverb G2 Omnicept Edition (HP), the Meta Quest Pro (Meta), and the Pico 4 Pro (Pico). This work studies the main methods for recognizing facial expressions, whether traditional, based on deep learning, or hybrid, using as input the complete facial data provided by the new HMD devices that offer cameras and/or sensors for face tracking.

Keywords: Recognition Methods, Facial Expressions


Study of Different Methods to Design and Animate Realistic Objects for Virtual Environments on Modern HMDs

July 2023 · 4 Reads

Communications in Computer and Information Science

Head-mounted displays (HMDs) are making virtual environments increasingly viable and realistic. As of 2021, some of the latest HMDs incorporate cameras and/or sensors for recognizing and tracking hands and facial expressions. These new devices include the HTC Vive Focus 3 (HTC), the HP Reverb G2 Omnicept Edition (HP), the Meta Quest Pro (Meta), and the Pico 4 Pro (Pico). A human's facial expressions convey emotional and non-verbal information, and transferring these expressions to build more realistic designs is a long-standing problem in computer animation. Recently, 2D and 3D facial reconstruction has achieved high performance while remaining tractable in real time. There are different types of models for design and animation: realistic human models, stylized cartoon character models, and non-human models with different facial structures. Regardless of the design, there must be guarantees of a smooth transition between expressions so that the facial animation does not look choppy. This work studies the main models for the design and animation of objects that can reflect and support various types of human facial expressions, obtained from the complete facial data provided by HMDs that incorporate cameras and/or sensors for face recognition and tracking.

Keywords: Virtual Object Design, Virtual Object Animation, Virtual Environments, Head-mounted Displays
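One common way to guarantee the smooth expression transitions the abstract calls for is to animate the weights of a linear blendshape model, the same kind of model the project's related chapter fits from HMD images. The sketch below is a minimal illustration with a toy mesh and made-up expression deltas; every name and dimension here is an assumption for illustration only.

```python
import numpy as np

def blend(neutral, deltas, weights):
    """Linear blendshape model: the neutral mesh plus a weighted sum
    of per-expression vertex offsets."""
    return neutral + np.tensordot(weights, deltas, axes=1)

def interpolate_weights(w_from, w_to, t):
    """Linearly interpolate blendshape weights so the transition between
    two expressions is smooth rather than choppy (t in [0, 1])."""
    return (1.0 - t) * w_from + t * w_to

neutral = np.zeros((4, 3))  # toy mesh: 4 vertices in 3D
# Two toy expression deltas (e.g. "smile" and "frown" vertex offsets).
deltas = np.stack([np.full((4, 3), 1.0), np.full((4, 3), -1.0)])
# Halfway through a transition from the first expression to the second.
w = interpolate_weights(np.array([1.0, 0.0]), np.array([0.0, 1.0]), 0.5)
mesh = blend(neutral, deltas, w)
```

Stepping `t` a little each frame yields a continuous sequence of meshes, which is what keeps the animation from looking choppy.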