Recent years have seen a major change in views on language and language use. Over the last decades, language use has been increasingly recognized as intentional action (Grice 1957). In the form of speech acts (Austin 1962; Searle 1969), language expresses the speaker's attitudes and communicative intents to shape the listener's reaction. Notably, the speaker's intention is often not directly coded in the lexical meaning of a sentence, but rather conveyed implicitly, for example via nonverbal cues such as facial expressions, body posture, and speech prosody. The theoretical work of intonational phonologists seeking to define the meaning of specific vocal intonation profiles (Bolinger 1986; Kohler 1991) demonstrates the role of prosody in conveying the speaker's conversational goal. However, to date little is known about the neurocognitive architecture underlying the comprehension of communicative intents in general (Holtgraves 2005; Egorova, Shtyrov, Pulvermüller 2013), and the distinctive role of prosody in particular. The present study therefore aimed to investigate this interpersonal role of prosody in conveying the speaker's intents and its underlying acoustic properties. Taking speech act theory as a framework for intention in language (Austin 1962; Searle 1969), we created a novel set of short (non-)word utterances intoned to express different speech acts. Adopting an approach from emotional prosody research (Banse, Scherer 1996; Sauter, Eisner, Calder, Scott 2010), we employed this stimulus set in a combination of behavioral ratings and acoustic analyses to test the following hypotheses: if prosody codes for the communicative intention of the speaker, we expect (1) above-chance behavioral recognition of different intentions that are expressed merely via prosody, (2) acoustic markers in the prosody that identify these intentions, and (3) independence of acoustics and behavior from the overt lexical meaning of the utterance. The German words "Bier" (beer) and "Bar" (bar) and the non-words "Diem" and "Dahm" were recorded from four speakers (two female) expressing six different speech acts in their prosody: criticism and wish (expressives), warning and suggestion (directives), and doubt and naming (assertives). Acoustic measures of pitch, duration, intensity, and spectral characteristics were extracted with PRAAT. These measures were subjected to discriminant analyses, separately for words and non-words, to test whether the acoustic features have enough discriminative power to classify the stimuli into their corresponding speech act categories. Furthermore, 20 participants were tested for behavioral recognition of the speech act categories with a six-alternative forced-choice task. Finally, a new group of 40 participants performed subjective ratings of the different speech acts (e.g., "How much does the stimulus sound like criticism?") to obtain more detailed information on the perception of the different intentions and to allow, as a quantitative variable, further analyses in combination with the acoustic measures. The discriminant analyses of the acoustic features yielded predictions well above chance for each speech act category, with an overall classification accuracy of about 90 % for both words and non-words (chance level: 17 %). Likewise, participants were well able to classify the stimuli into the correct category, with slightly lower accuracy for non-words (73 %) than for words (81 %).
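As a rough illustration of the acoustic analysis pipeline described above, the following sketch shows how prosodic measures could be extracted with Praat (here via the parselmouth Python wrapper) and submitted to a linear discriminant analysis. The specific feature set, file handling, and leave-one-out scheme are assumptions made for this example, not the exact procedure used in the study.

```python
# Illustrative sketch (not the study's exact pipeline): extract prosodic measures
# with Praat via the parselmouth wrapper and test how well they discriminate the
# six speech act categories. File paths, the feature set, and the leave-one-out
# cross-validation scheme are assumptions for this example.
import numpy as np
import parselmouth
from parselmouth.praat import call
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

SPEECH_ACTS = ["criticism", "wish", "warning", "suggestion", "doubt", "naming"]

def prosodic_features(wav_path):
    """Return a small vector of pitch, duration, intensity, and spectral measures."""
    snd = parselmouth.Sound(wav_path)
    f0 = snd.to_pitch().selected_array["frequency"]
    voiced = f0[f0 > 0]                                   # drop unvoiced frames
    return np.array([
        voiced.mean(),                                    # mean F0 (Hz)
        voiced.max() - voiced.min(),                      # F0 range (Hz)
        snd.duration,                                     # utterance duration (s)
        call(snd, "Get intensity (dB)"),                  # mean intensity (dB)
        call(snd.to_spectrum(), "Get centre of gravity", 2.0),  # spectral centre of gravity (Hz)
    ])

def classification_accuracy(stimuli):
    """stimuli: list of (wav_path, speech_act_label) pairs (assumed to exist)."""
    X = np.vstack([prosodic_features(path) for path, _ in stimuli])
    y = np.array([label for _, label in stimuli])
    # Leave-one-out accuracy of a linear discriminant analysis,
    # to be compared against the 1/6 (about 17 %) chance level.
    return cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut()).mean()
```

Running such an analysis separately for the word and non-word recordings would mirror the separate discriminant analyses reported above.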
Multiple regression analyses of participants' ratings of the different speech acts and the acoustic measures further identified distinct patterns of physical features that predicted the behavioral perception. These findings indicate that prosodic cues convey sufficient detail to classify short (non-)word utterances according to their underlying intention, at acoustic as well as perceptual levels. Lexical meaning seems to be supportive but not necessary for the comprehension of different intentions, given that participants showed high performance for the non-words but scored higher for the words. In sum, our results show that prosodic cues are powerful indicators of the speaker's intentions in interpersonal communication. The present carefully constructed stimulus set will serve as a useful tool for studying the neural correlates of intentional prosody in the future.
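A minimal sketch of how the rating-by-acoustics regressions described above could be set up is given below, assuming per-stimulus mean ratings and an acoustic feature matrix like the one in the previous example; the ordinary-least-squares formulation and variable names are illustrative, not the study's exact model.

```python
# Illustrative sketch: regress per-stimulus ratings of one speech act
# (e.g. "sounds like criticism", averaged across raters) onto the stimuli's
# acoustic feature vectors to see which physical features predict perception.
import numpy as np
import statsmodels.api as sm

def rating_regression(acoustic_features, mean_ratings):
    """acoustic_features: (n_stimuli, n_features) array; mean_ratings: (n_stimuli,) array."""
    X = sm.add_constant(np.asarray(acoustic_features))    # add an intercept term
    model = sm.OLS(np.asarray(mean_ratings), X).fit()
    return model.params, model.rsquared                   # feature weights and model fit
```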