Gestures - Science topic
Movement of a part of the body for the purpose of communication.
Questions related to Gestures
Generative AI can visualize various gestures, postures, pictures, and maps; but when a MedGPT model is employed to assist medical students in learning the various patterns of infections, diseases, etc., how effective will it be for the students?
Soccer robots have both proprioception to note the position of their bodies as well as a visual sense that is egocentric to detect the ball, the goals, and the position of other robots (Behnke and Strucker 2008). To communicate the location of the ball and other items with other robots, an allocentric coordinate system is used, much like that utilized by a group of electric fish (who use electricity to communicate), a pack of wolves (who use gestures and sounds to communicate), or a pod of killer whales (who use sounds to communicate) in pursuit of prey. Perhaps, language evolved to enhance allocentric communication, as is required by soccer robots.
A staunch critic of Noam Chomsky, Daniel Everett has argued that language started some two million years ago (rather than 60,000 years ago; Chomsky 2012) with (bipedal) Homo erectus, who inhabited the South Pacific, used tools, and is suspected of having had the navigational skill to travel between islands (Everett 2016, 2017). To facilitate this travel, Everett has proposed that Homo erectus used allocentric communication, perhaps starting with gestures before evolving into verbal behavior some 500,000 years ago in Homo sapiens (Kimura 1993). It is believed that Homo erectus evolved into Homo sapiens.
I am doing research on Non-Verbal Communication: Gestures and Body Language in the Classroom.
I'm thinking about starting new work on the topic of gesture application development for AI SoCs. If anybody is already working on this topic, or would like to start, please reply.
I am planning to research the relationship between framing effects and gestures in the field of cognitive psychology. I would appreciate links to related field research or similar resources.
We are a group of researchers from the Federal University of Sergipe (Brazil), Universidad Nacional de San Juan (Argentina), Universidad Nacional de Tucumán (Argentina) and Queen’s University (Canada), and we are conducting research to assess whether robots can be commanded to perform tasks through gestures in an easy and intuitive way. We would like to ask for your help.
Please, answer the electronic questionnaire that can be accessed here: https://forms.gle/RBy75MbwhJcUoYJh6.
The estimated time to answer all questions is just over 10 minutes.
It would also help us even more if you share this message with your entire network of contacts. To participate, it is not necessary to have any prior knowledge of robotics and your collaboration will assist us in the search for a simpler way for anyone to use robots in their daily lives.
Thanks for your cooperation!
Dear Energy and Environmental Researchers,
Kindly assist me with yearly Environmental Performance Index (EPI) data from 1980 to 2020. I checked the SEDAC website, but it appears only the EPI for 2020 is available.
If you have the YEARLY time series from 1980 to 2020, I would appreciate it if you could share it. I am also open to collaborating with you using the data as a way of reciprocating the gesture.
Hoping for favourable feedback, thanks.
Ngozi
For an upcoming study, I am in search of a quick Spanish placement test that can be taken by L2 learners (preferably online) to determine their L2 Spanish proficiency level.
Ideally, the test would not be longer than 10 minutes and can be used for free, but please also contact me with recommendations for longer or paid tests. These could still be a useful starting point for us.
Thank you in advance!
Lieke
I have been reviewing and reading articles which all claim to use AI to identify movement patterns in animals and humans, but they seem to miss the key point that CNS activity and the output seen as a motor event, like walking or speech, differ in temporal scale. Neuronal firing is always in milliseconds, while the output or endpoint (the motor event) is in seconds even at its best.
Another key issue is that signals recorded along axons and at sites away from the source have a delay, as these signals are not transmitted at the speed of light; each axon has its own conduction velocity, and it matters!!
Neurophysiologists have been examining these things for over a century now, so please do check some of this material before jumping to conclusions. This is also for the reviewers.
I was wondering whether some primate vocalizations, hand gestures, etc. are universal. For example, a thumbs up in America means you are doing well; in other countries, even if people don't know exactly what it means, they can tell it means something good. Since humans are such close relatives of primates, especially chimpanzees and bonobos, has any research been done on whether some vocalizations and/or hand gestures are universally understood among all nonhuman primates, or does each species have vocalizations and/or hand gestures that are unique to it alone?
I am Nohemy Navarro and I am currently conducting the study for my Master's thesis on the digital skills and competences demanded to support a digital transformation. I was therefore wondering whether you would like to participate in this study by sparing a bit of your time to complete the online survey above.
As the population of my study is managers working in Germany, it has been a challenge to recruit participants due to the COVID situation. I would highly appreciate any sharing of this link or QR code among colleagues or friends, as this gesture contributes a lot of value to my research.
We know that body language in general, and gesticulation in particular, is culturally dependent. This is especially clear across different languages, since co-speech gestures are part of the language.
But do we know how much gesticulation differs between different cultures that use the same language, such as the USA and the UK?
Children with ASD need a clear and effective way to communicate in order to reduce frustration and replace challenging or unacceptable behaviours (Beukelman and Mirenda, 1998; Webb, 2000).
Augmentative and alternative communication (AAC) interventions for these children have traditionally focused on unaided communication (i.e. signs and gestures). Subsequently, the focus shifted to aided communication systems utilizing the visual–spatial processing strengths of individuals with ASD (Light, Roberts, Dimarco, and Greiner, 1998). The PECS is an aided, picture-based communication system aiming to develop the social exchange underlying all communicative acts.
Hi RG,
There are a lot of papers using the HDsEMG database CapgMyo to test gesture recognition algorithms (http://zju-capg.org/myo/data/).
However, it seems that there is a missing file on the original server (http://zju-capg.org/myo/data/dbc-preprocessed-010.zip).
I wonder if anyone knows of an alternative source for the database?
All the best.
The difficulty of using paralinguistic features such as body language, gestures, facial expressions, and eye contact in online teaching has affected class control at various levels across the globe. Some naughty students, who are digital natives, outwit their teachers in online classes. How can teachers ensure better cooperation and active participation from students? What are the best strategies and techniques for effective classroom management in online education?
I am looking for a good dataset containing a lot of representational gestures with limited domain (in terms of text), which was recorded in an interaction.
Just having videos would be enough.
Ideally, the dataset would be in English. But German or Spanish would be fine as well.
Here are some examples of such datasets:
1. The Bielefeld speech and gesture alignment corpus (SaGA). 2010
2. "Verbal or visual: How information is distributed across speech and gesture in spatial dialog." 2006
3. Natural Media Motion-Capture Corpus (NM-MoCap-Corpus), 2014
Are there some more datasets I am not aware of yet?
As researchers on religion have noted, most religions preach adherence to commonly shared and cherished fundamental values, encouraging and reflecting on gestures, person-to-person contact, respectful interactions, honesty, and sincere understanding of the many issues, duties, and responsibilities that people encounter in day-to-day engagements with others through actions, interactions, and reactions. I understand that, globally, a great deal of money is invested in promoting religious teaching to generate and enhance value-based communication between groups of people. If my presumption is correct, then this factor should be considered as playing a participatory role in the success of economic development. I would encourage researchers to initiate a fact-finding study on this issue.
Hi,
I am trying to use Rapid Upper Limb Assessment (RULA) method for assessing the physical workload due to gestures and body movements based on a stimulus.
I am referring to the following link for the assessment:
Based on the stimulus presented, body movements are made with the neck, back, palm, wrist, and elbow.
Is there any other way I can assess the impact of the stimulus on the workload of the body parts mentioned above?
I am testing several methods for finding the region of interest (ROI) in hand gesture images. In OpenCV, for example, I found methods such as CamShift (for tracking an object of interest); background subtraction methods (MOG, MOG2, ...), which are used in video to separate foreground from background and can also be applied when a hand appears against a complex background; and GrabCut and back-projection, which can be used for hand posture in a static state. Contours, edge detection, and skin-colour methods are other approaches for detecting a hand in an image or video, and lastly I found that Haar cascades can be used as well. Given that I use images with complex backgrounds, which algorithm is the best choice for this stage? Algorithms like GrabCut and back-projection were good, but their most important problem is that I must manually specify some regions as foreground or background, which is not what it should be. After choosing a method for the ROI, what are generally the most important features in hand gesture recognition? Which feature extraction method would you suggest that works well with a general classifier, such as SVM or kNN, to classify a given image?
Thank you all for taking your time
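For the skin-colour route mentioned above, here is a minimal NumPy-only sketch of the idea; the Cr/Cb thresholds are the commonly cited skin ranges and all function names are illustrative assumptions (in practice OpenCV's `cv2.cvtColor` would do the colour conversion):

```python
import numpy as np

# Commonly cited skin ranges in the Cr/Cb plane; tune them for your data.
CR_RANGE = (133, 173)
CB_RANGE = (77, 127)

def rgb_to_ycrcb(img):
    """Convert an (H, W, 3) RGB image with 0-255 values to YCrCb (BT.601)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return np.stack([y, cr, cb], axis=-1)

def skin_mask(img):
    """Boolean mask of pixels whose Cr/Cb values fall in the skin range."""
    ycrcb = rgb_to_ycrcb(img.astype(np.float64))
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    return ((CR_RANGE[0] <= cr) & (cr <= CR_RANGE[1]) &
            (CB_RANGE[0] <= cb) & (cb <= CB_RANGE[1]))

def bounding_box(mask):
    """Tight (top, bottom, left, right) box around True pixels, or None."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0:
        return None
    return rows[0], rows[-1], cols[0], cols[-1]
```

Since skin detection alone will also pick up the face, the resulting mask is usually combined with another cue (motion, depth, or position priors) before taking the ROI box.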
The mixed-method type of writing in my Ph.D. project.
We are witnessing religious denominations make changes without precedent in their dogmatic history in response to the threat of the Coronavirus. As it spreads worldwide, COVID-19 is changing many departments of our society on levels we never thought about. For example, the Roman Catholic Church has suspended masses here and there (https://qz.com/1808390/religion-is-at-the-heart-of-koreas-coronavirus-outbreak/) and banned crucial gestures in its rituals (https://abc13.com/5976098/): suspending the distribution of Holy Communion from the Chalice, distributing the Eucharist preferably into the hands of the faithful, avoiding physical contact such as a peaceful handshake, forgoing ash crosses on the forehead, suspending the placing of water in holy water fonts at the entrance of churches, asking churchgoers to refrain from kissing or touching the cross for veneration, or even cancelling masses. Buddhist temples and Protestant churches around Korea have also suspended religious gatherings. The Romanian Orthodox Church did the same (https://basilica.ro/patriarhia-romana-masuri-sanitare-si-spirituale-in-timp-de-epidemie/), but only at first: after recommending that believers not kiss public icons in churches (only their own at home) and receive Holy Communion with single-use teaspoons, the same Church reconsidered these recommendations and withdrew its decision (perhaps under pressure from civil fundamentalists).
How can we qualify these measures, and moreover their withdrawal on behalf of religious believers: as weakness, populism, diligence, an acknowledgement of human limits, or something else?
I have a project in which I need to use social signal processing techniques to automatically detect mimicking behavior (e.g. gestures, vocal pitch, or facial expressions) between a pitcher (i.e., an entrepreneur) and a panel of investors.
Research suggests that behavioral cues such as an open posture, mimicry, and frequent eye contact can positively influence the perceived coachability of an entrepreneur. Moreover, mimicry, the imitation of someone's behavior, is typically associated with affiliation and liking. For these reasons, a positive correlation between mimicking behavior and entrepreneurial success (i.e. the decision to invest) is expected.
I am struggling to come up with a research question on this topic. Does anyone have suggestions or related articles that could inspire me?
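One simple, hedged operationalisation of mimicry in this setting is lagged correlation between the two speakers' feature tracks (e.g. per-frame pitch or gesture energy): a correlation peak at a positive lag suggests the investor follows the pitcher with a delay. The function below is an illustrative sketch, not an established toolkit API:

```python
import numpy as np

def lagged_correlation(leader, follower, max_lag):
    """Pearson correlation of the follower track against the leader
    track shifted by each lag (in frames).

    A peak at a positive lag suggests the follower imitates the
    leader after a delay, one simple signature of mimicry.
    Returns a dict {lag: correlation}.
    """
    out = {}
    for lag in range(0, max_lag + 1):
        a = leader[:len(leader) - lag] if lag else leader
        b = follower[lag:]
        n = min(len(a), len(b))
        a, b = a[:n], b[:n]
        if np.std(a) == 0 or np.std(b) == 0:
            out[lag] = 0.0  # constant segment: correlation undefined
        else:
            out[lag] = float(np.corrcoef(a, b)[0, 1])
    return out
```

In a real study this would be computed in sliding windows and compared against a shuffled baseline, so that chance synchrony is not counted as mimicry.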
Is it possible to know whether a gesture is controlled voluntarily or automatically, or partially automatically? Is there a difference in this assessment between a single gesture and a cyclic activity like walking?
I have previously worked on detecting hand motion gestures using ultrasonic sensors. Sound is absorbed differently by different objects. But is it possible to quantify the absorption and determine the type of deflector?
Yule (2010) assumes that animals use systems of communication that are substantially different from human language. For instance, unlike human language, animals use linguistic systems that relate to the immediate time and place of communication. Also, animals are said to use “fixed and limited sets of vocal or gestural forms” (p. 13). Evidence from religious sources (Holy Quran) suggests that animals can competently communicate with human beings in systems that are both complex and productive. It is legitimate to ask therefore: Is it possible to integrate religious evidence into the linguistic theory to account for animal language?
Farmers are the creators........ Creator-producers of food for all, from our breakfast to dinner. But they are probably the most neglected ones. The media focuses on artists, scientists, industrialists, social reformers, political persons,...........!
Farmers are never glorified (not even in history).
Now, in many countries, the issue of doubling the farmers' income has been taken into account by the governments. Many seminars, meetings, global meets are being organised, money is being spent to make it a global slogan.
But what is the baseline income to double? What socioeconomic status is the benchmark?
Some of them do not have the bare minimum to live a normal life with all the normal requirements of a person.
Can mere "doubling the income" do justice?
Even if the answer is YES......
Considering all our sincere efforts and good gestures for doing the same......
Is it still a DREAM or at least a near-future REALITY?
I require a facial gesture dataset captured in the real world.
Physically, is it possible that the form of those human corpses could be preserved for 18 centuries (they were discovered in 1863) below 15 m of ash and volcanic rock? What would have prevented stones and ash from filling this cavity? The human "soul", maybe? Gently preserving the exact form of the body with its so humanly expressive gesture? How could Giuseppe Fiorelli, the numismatist, spot those "cavities" under 15 m of ash? See the scholarly drawing explaining his "marvellous" discovery.
Key Features
Human computer interaction--historical, intellectual, and social
Developing interactive systems, including design, evaluation methods, and development tools
The interaction experience, through a variety of sensory modalities including vision, touch, gesture, audition, speech, and language
Theories of information processing and issues of human-computer fit and adaptation
We plan to conduct interviews with Bt cotton farmers in Telangana.
How to start? Who must be informed regarding permission (administratively or as a respectful gesture)? Whom should we ask for a list of households? How should we approach the villages in advance (if necessary)? What else should be considered?
Many thanks
I am trying to teach children to recognise people's emotions from body language and gestures as well as facial expressions. However, whilst there is plenty of information available describing facial expressions for emotions, I'm struggling to find any information that links specific body language or gestures to particular emotions. I'm aware this is probably because there is far more cultural variation in body language than in facial expression, but I'm sure there must be some research out there somewhere, so I'm hoping someone can give me a good starting point, please.
The "waving officer problem" describes a situation in which an autonomously driven car comes to a stop when approaching a police officer; when the officer waves the car onward, it needs to recognize the gesture and act on it by slowly driving past the incident.
I've written a paper about it, though it is not done yet. I'm working on gestures in the classroom. I found that only a small group of researchers/scholars has worked on it (e.g., Langacker).
I am examining parent-child interactions between parents and children (0 to 4 y) and assessing 3 phases of communication (following the Hanen Centre in Toronto): contact, interactions, and the extent of stimulating communication skills in the child. Here we look at the skills of parents: sensitivity and responsiveness, affection; letting the child take the lead and following the child's lead; providing reinforced stimulation (exaggerated facial expressions, stressing target words, using gestures and movement to accompany verbal language, etc.). To measure the validity of our instrument, it would be very interesting to measure its concurrent validity with your instrument. Best regards, Mie
How can dances be choreographed from body gestures?
I want to study the cultural significance of gestures to really make gesture recognition work for my current research
I came across an interesting article "Information Management Issues and Business Model Analysis of O2O Games: A Review of Pokémon Go".
I would be interested to find similar ones on VR applications or gesture based interfaces.
I'm researching how the smartphone may change dramatically from what we have today with augmented reality, virtual reality, zero touch interfaces, IoT, etc.
Can anyone recommend good research papers in this area?
Dear Colleagues
We have developed much software and many machines with smart intelligence. Artificial intelligence gives a view of the cognitive analysis and learning phases. After learning an individual's cognition and perception through the body's four sensors, i.e. ears, eyes, skin, and nose, I think it may be possible to infer another person's point of view by means of various processing techniques such as image processing, natural language processing (speech), biomedical signal processing, neurological analysis, and cognitive analysis.
Our project entails the evaluation of the "best" ASR software that runs in the Cloud and, preferably in embedded devices.
While we will start with grammar-command applications, we want to quickly migrate to more applications that require NLU & NLP processing at a "state-of-the-art" level. This is a commercial platform-- but not a "toy".
We need a test set which includes hand gestures representing the letters of the English alphabet (A-Z).
We are doing research about a gesture system called Visual Phonics. The hand shapes, corresponding to sounds, can be useful in literacy instruction with young children, both with and without disabilities. The individual hand shapes are fairly abstract initially, but, with repetition, do take on meaning. Does this indicate a shift from one type of gesture to another?
We want to observe elementary school teachers during a literacy instruction lesson and record gesture use (iconic, metaphoric, etc.). We would like to determine whether there is any correlation between gesture use and the efficacy of including a gestural system (see the sound/visual phonics) into literacy instruction. This system provides a hand shape (metaphoric gesture) for each sound/ letter combination of English. This is part of larger project on the relation of follow-up to the efficacy of visual phonics use.
I'm looking for a good open-source gesture recognizer (in particular, pointing gesture) that would work on Linux. Either using skeleton tracking (OpenNI) or other RGB-D methods.
While the theory is rather simple (typically, you want to train a hidden Markov model), I'm looking for a robust implementation plus a trained model.
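For reference, the scoring core of the classic HMM recogniser is just the forward algorithm: train one model per gesture class, score an observation sequence against each, and take the argmax. A minimal NumPy sketch for discrete observations (variable names and the quantised-codeword setup are illustrative assumptions, not any particular library's API):

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM.

    obs: sequence of symbol indices (e.g. quantised hand-direction
         codewords from skeleton tracking)
    pi:  (S,) initial state probabilities
    A:   (S, S) transition matrix, A[i, j] = P(state j | state i)
    B:   (S, K) emission matrix, B[i, k] = P(symbol k | state i)
    """
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    log_p = np.log(c)
    alpha = alpha / c            # rescale to avoid numerical underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        log_p += np.log(c)
        alpha = alpha / c
    return log_p

def classify(obs, models):
    """models: {gesture_name: (pi, A, B)}; returns the best-scoring name."""
    return max(models, key=lambda g: forward_log_likelihood(obs, *models[g]))
```

A production recognizer would add Baum-Welch training and continuous (e.g. Gaussian) emissions, but the decision rule is exactly this argmax over per-class forward scores.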
It is well known that co-expressive gesticulation gestures develop their movement structure in step with the verbal and melodic structure of the utterance, and that they are linked to it semantically and pragmatically.
When analysing gesticulation gestures linked to verbal language and intonation, it seems to me somewhat difficult to establish a close descriptive relation between the movement structure of the gestures and the verbal and melodic structure of the utterance in spontaneous speech.
I think that one of these categories or aspects has to do with the close relation between the most prominent segment in the pitch range of the utterance and the most prominent phase in the gesticulation structure.
Could you suggest other categories or aspects I should pay attention to?
On the Cambridge hand gesture dataset, I am getting 100% accuracy using 10-fold cross-validation. How should I confirm whether this prediction accuracy is correct?
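One common cause of a perfect score is leakage: when near-identical frames from the same subject or recording land in both the training and test folds of a random 10-fold split. A minimal sketch of a grouped split that rules this out, assuming each sample carries a subject or recording id (the function name and round-robin assignment are illustrative; scikit-learn's `GroupKFold` does the same job):

```python
import numpy as np

def group_folds(groups, n_folds):
    """Assign whole groups (e.g. subjects) to folds so that no
    subject's samples ever appear in both train and test, the
    leakage that random k-fold CV on per-frame data silently allows.
    Returns an array with one fold index per sample.
    """
    uniq = sorted(set(groups))
    fold_of_group = {g: i % n_folds for i, g in enumerate(uniq)}
    return np.array([fold_of_group[g] for g in groups])
```

If accuracy drops sharply under a subject-wise split, the original 100% reflected memorised subjects rather than gesture recognition.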
I have tried classification of static hand gestures using Hu moments, and the accuracy with this feature is very low. I am using an SVM for classification.
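A frequent pitfall with Hu moments is feeding the raw values to an SVM: they span many orders of magnitude, so a signed-log transform (plus the usual feature scaling) often helps considerably. A pure-NumPy sketch of the first two invariants for illustration (OpenCV's `cv2.moments` and `cv2.HuMoments` compute all seven):

```python
import numpy as np

def hu_moments(mask):
    """First two Hu invariant moments of a binary shape mask."""
    ys, xs = np.nonzero(mask)
    m00 = float(len(xs))
    cx, cy = xs.mean(), ys.mean()

    def mu(p, q):  # central moment
        return (((xs - cx) ** p) * ((ys - cy) ** q)).sum()

    def eta(p, q):  # scale-normalised central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

def signed_log(h, eps=1e-30):
    """Signed-log transform: compresses the huge dynamic range of raw
    Hu moments, which otherwise cripples distance-based classifiers."""
    return -np.sign(h) * np.log10(np.abs(h) + eps)
```

Also note that only seven numbers per image is a very compact descriptor; combining them with, say, contour or HOG features usually lifts accuracy further.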
The Leap Motion controller only tracks finger/hand gestures (if I'm right). I need to track the motion of reference objects, such as dollies stuck to deforming objects, which is outside the tracking scope of sensors in the Leap Motion class. Optotrak technology can certainly do this perfectly, but at a huge financial investment. What I need is a very affordable trick in the cost range of the Leap Motion sensor (or a way to make the Leap Motion sensor track a 'non-finger' object) at a similar resolution.
Thanks a lot.
In this paper, it seems that the only kind of ambiguity permitted is in the bindings of variables (and it's not clear to me how competing candidate bindings are maintained through a parse after being detected). How is structural ambiguity handled; for example, when a gesture might be either "identifying" or "visualizing"? There would be multiple competing parse trees for as long as the ambiguity persists in the discourse, and thus the "right frontier" could have several competing candidate constituents at each of its nodes.
When using a skin detection algorithm, I need to take the hand as an object, but the algorithm detects the face too. I therefore need a heuristic more powerful than simply taking the biggest contour, so I would welcome your suggestions.
Background subtraction is achieved by a running-average method which continuously updates the background model. Hence, if the hand is still for long enough, it is treated as background and the gesture is not detected.
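The running-average update just described, in a minimal NumPy sketch (the alpha and threshold values are illustrative; OpenCV's `cv2.accumulateWeighted` implements the same update). A common fix for the still-hand problem is to lower alpha, or to freeze the update inside the currently detected foreground region so a stationary hand is never absorbed:

```python
import numpy as np

def update_background(bg, frame, alpha=0.02):
    """Running-average background model: bg <- (1 - alpha)*bg + alpha*frame.
    Smaller alpha adapts more slowly, so a briefly stationary hand
    survives longer before being absorbed into the background."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Pixels differing from the background model by more than thresh."""
    return np.abs(frame - bg) > thresh
```

With the freeze variant, the update is applied only where `foreground_mask` is False, which removes the time limit on how long the hand may stay still.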
The goal of my study is to demonstrate behavioral differences between therapists in CBT with ADHD patients that might influence the effectiveness of the treatment.
The focus of this analysis might be on empathy of the therapist, using certain techniques (e.g. frequency of paraphrasing), showing interest and commitment, body posture and gestures (certain movements), the way in which the therapist structures the session, etc.
It is about a crane guidance control system using gesture recognition.
The book "Body Language" by Allan Pease is very widespread, but I have never seen any studies that tested the explicit/implicit influence on people's perception of each other when they use gestures/postures from that repertoire. Maybe you have?
Hand gesture detection sensors, such as the Microsoft Kinect, are soon going to be part of operating rooms, enabling surgeons to scroll through and modify medical images during surgery. This will help them access images and files faster without compromising sterile procedures.
I am working on a project about hand gesture recognition/tracking. I don't know if you are familiar with SixthSense technology; I want to implement some of the hand gesturing used in it. The inventor of this technology is Pranav Mistry.
The signals are gestures, images, and video streams. The camera captures the image and sends it to a smartphone. The smartphone does the processing and sends the information to a projector. The projector projects the image onto a mirror, and finally the mirror reflects the image onto the wall or the object in view. Can anyone suggest technically detailed information about the hand gesturing techniques used in gestural recognition technology?
Regards
I need to find the coordinates of the fingertip in an image using a convex hull algorithm for fingertip tracking.
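A self-contained sketch of that pipeline: build the hull with Andrew's monotone chain and take the topmost hull vertex as the fingertip candidate for an upright hand (image y grows downward). All names here are illustrative; with OpenCV one would use `cv2.convexHull` on the hand contour instead:

```python
import numpy as np

def convex_hull(points):
    """Andrew's monotone-chain convex hull.
    Returns the hull vertices as a list of (x, y) tuples."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def fingertip_candidate(mask):
    """Topmost hull vertex of a binary hand mask: image y grows
    downward, so an upright hand's fingertip has the minimal y."""
    ys, xs = np.nonzero(mask)
    hull = convex_hull(zip(xs.tolist(), ys.tolist()))
    return min(hull, key=lambda p: p[1])
```

For multiple fingers, the usual extension is convexity defects (`cv2.convexityDefects`): deep valleys between hull points separate individual fingertips.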