
Isa Maks - Vrije Universiteit Amsterdam
About
23 Publications
6,675 Reads
404 Citations
Current institution: Vrije Universiteit Amsterdam
Publications (23)
This report describes a study into age discrimination in job-vacancy texts, which we conducted for "The Netherlands Institute for Human Rights". Using automated content analysis techniques, we analysed almost all Dutch-language vacancy texts published on the internet in 2017. This amounts to more than 1.8 million unique job-vacancy texts. The develope...
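The excerpt does not describe the study's detection method in detail; purely as a minimal illustration of automated content analysis on vacancy texts, the sketch below flags age-related Dutch phrases with regular expressions. The pattern list is invented for illustration and is not taken from the report.

```python
import re

# Hypothetical Dutch phrases that may signal an age requirement;
# the study's actual indicators are not given in this excerpt.
AGE_PATTERNS = [
    r"\bjong(e)? team\b",                # "young team"
    r"\bmaximaal \d{2} jaar\b",          # "at most NN years old"
    r"\bstarter\b",
    r"\btussen de \d{2} en \d{2} jaar\b",
]

def flag_age_references(vacancy_text: str) -> list[str]:
    """Return the age-related patterns found in a single vacancy text."""
    text = vacancy_text.lower()
    return [p for p in AGE_PATTERNS if re.search(p, text)]

print(flag_age_references("Wij zoeken een starter voor ons jonge team."))
```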
Sentiment analysis is a well-known Natural Language Processing task that has been studied in domains such as movies, phones and hotels. However, other areas, such as the medical domain, remain largely unexplored. In this paper we study different polarity classification techniques applied to the health domain. We present a corpus of patient reviews compo...
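The paper's classification techniques and corpus are not reproduced in this excerpt; the following is only a minimal polarity-classification sketch on invented patient-review snippets, using a generic TF-IDF plus logistic regression pipeline from scikit-learn as a stand-in.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for a corpus of patient reviews; the paper's data is not used here.
reviews = [
    "The doctor listened carefully and the treatment worked.",
    "Long waiting times and nobody explained the side effects.",
    "Very satisfied with the care I received.",
    "The medication made me feel worse and the staff were dismissive.",
]
labels = ["positive", "negative", "positive", "negative"]

# Train a simple polarity classifier and apply it to a new review.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["The nurse was helpful but the wait was terrible."]))
```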
Humanities scholars agree that the visualization of their data should bring order and insight, reveal patterns and provide leads for new research questions. However, simple two-dimensional visualizations are often too static and too generic to meet these needs. Visualization tools for the humanities should be able to deal with the observer dependen...
Recently, emotions and their history have become a focus point for research in different academic fields. Traditional sentiment analysis approaches generally try to fit relatively simple emotion models (e.g., positive/negative emotion) to contemporary data. However, this is not sufficient for Digital Humanities scholars who are interested in resear...
In this paper we focus on the creation of general-purpose (as opposed to domain-specific) polarity lexicons in five languages: French, Italian, Dutch, English and Spanish using WordNet propagation. WordNet propagation is a commonly used method to generate these lexicons as it gives high coverage of general purpose language and the semantically rich...
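As a rough illustration of the propagation idea (not the paper's algorithm), the sketch below spreads polarity labels from two English seed words through NLTK's WordNet, copying the label to lemmas in the same synset and flipping it for antonyms. The seed set and iteration count are invented for illustration.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

# Hypothetical seed words; real lexicon building uses many more seeds.
seeds = {"good": "positive", "bad": "negative"}

def propagate(seeds, iterations=2):
    """Spread polarity labels to synset members (same label) and antonyms (flipped)."""
    lexicon = dict(seeds)
    flip = {"positive": "negative", "negative": "positive"}
    for _ in range(iterations):
        for word, label in list(lexicon.items()):
            for syn in wn.synsets(word):
                for lemma in syn.lemmas():
                    lexicon.setdefault(lemma.name(), label)
                    for ant in lemma.antonyms():
                        lexicon.setdefault(ant.name(), flip[label])
    return lexicon

polarity = propagate(seeds)
print(polarity.get("goodness"), polarity.get("badness"))
```

A real general-purpose lexicon needs careful seed selection, iteration control and manual validation; this only shows the mechanics of walking synset and antonym links.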
This chapter explores how three methods of political text analysis can complement each other to differentiate parties in detail. A word-frequency method and corpus linguistic techniques are joined by critical discourse analysis in an attempt to assess the ideological relation between election manifestos and a coalition agreement. How does this agre...
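As a toy illustration of the word-frequency side of such an analysis (the chapter's actual method and texts are not given here), the sketch below compares relative word frequencies between two invented documents standing in for a manifesto and a coalition agreement.

```python
from collections import Counter
import re

def rel_freq(text):
    """Relative word frequencies for one document."""
    tokens = re.findall(r"[a-zà-ÿ]+", text.lower())
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Invented stand-ins for an election manifesto and a coalition agreement.
manifesto = "lower taxes, smaller government, lower taxes on labour"
agreement = "balanced budget, moderate taxes, investment in labour market"

m, a = rel_freq(manifesto), rel_freq(agreement)
# Words the manifesto uses relatively more often than the agreement.
overused = sorted(m, key=lambda w: m[w] - a.get(w, 0.0), reverse=True)
print(overused[:3])
```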
One of the goals of the STEVIN programme is the realisation of a digital infrastructure that will reinforce the position of the Dutch language in modern information and communication technology. A semantic database makes it possible to go from words to concepts and, consequently, to develop technologies that access and use knowledge rather than tex...
This paper presents a lexicon model for the description of verbs, nouns and adjectives to be used in applications like sentiment analysis and opinion mining. The model aims to describe the detailed subjectivity relations that exist between the actors in a sentence expressing separate attitudes for each actor. Subjectivity relations that exist betwe...
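The paper's lexicon model is only summarised above; the sketch below shows one hypothetical way such an entry might be represented, with a separate attitude frame per actor pair. All names and the example entry are invented for illustration and do not reproduce the paper's model.

```python
from dataclasses import dataclass, field

@dataclass
class AttitudeFrame:
    """One attitude expressed by a lexical item, tied to actors in the sentence."""
    holder_role: str   # e.g. the grammatical subject
    target_role: str   # e.g. the direct object
    polarity: str      # "positive" or "negative"

@dataclass
class LexiconEntry:
    lemma: str
    pos: str
    attitudes: list[AttitudeFrame] = field(default_factory=list)

# Hypothetical entry: "to accuse" expresses a negative attitude of the
# subject (the accuser) towards the object (the accused).
accuse = LexiconEntry(
    lemma="accuse",
    pos="verb",
    attitudes=[AttitudeFrame("subject", "object", "negative")],
)
print(accuse)
```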
In recent years techniques have been developed to mine wordnets for sentiment-bearing words. They annotate synsets with labels for subjectivity and polarity. These techniques assume that all members of a synset are similar with respect to these annotation labels. In this paper we show that this is often not true, especially not when fine-grained p...
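To make the synset-uniformity assumption concrete, the sketch below checks, for the adjective "cheap", whether all lemmas of a WordNet synset receive the same label in a small invented word-level polarity lexicon; members of the same synset can end up with different labels.

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

# Hypothetical word-level polarity lexicon; the paper's data is not reproduced.
word_polarity = {"cheap": "negative", "inexpensive": "positive", "economical": "positive"}

for syn in wn.synsets("cheap"):
    labels = {l.name(): word_polarity.get(l.name()) for l in syn.lemmas()}
    known = {w: p for w, p in labels.items() if p}
    if len(set(known.values())) > 1:
        # Members of this synset disagree on polarity in the word-level lexicon.
        print(syn.name(), known)
```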
The Dutch HLT agency for language and speech technology (known as TST-centrale) at the Institute for Dutch Lexicology is responsible for the maintenance, distribution and accessibility of (Dutch) digital language resources. In this paper we present a project which aims to standardise the format of a set of bilingual lexicons in order to make them a...
Cornetto is a two-year Stevin project (project number STE05039) in which a lexical semantic database is built that combines Wordnet with Framenet-like information for Dutch. The combination of the two lexical resources (the Dutch Wordnet and the Referentie Bestand Nederlands) will result in a much richer relational database that may improve natural...
The goal of this paper is to describe how adjectives are encoded in Cornetto, a semantic lexical database for Dutch. Cornetto combines two existing lexical resources with different semantic organisation, i.e. Dutch Wordnet (DWN) with a synset organisation and Referentie Bestand Nederlands (RBN) with an organisation in Lexical Units. Both resources...
Cornetto is a two-year Stevin project (project number STE05039) in which a lexical semantic database is built that combines Wordnet with Framenet-like information for Dutch. The combination of the two lexical resources (the Dutch wordnet and the Referentie Bestand Nederlands) will result in a much richer relational database that may improve natural...
In this paper we present a quantitative analysis of a bilingual lexical database which has been produced with OMBI, a tool for creating and editing bilingual dictionaries. OMBI has proven to be a valuable tool in the creation of rich bilingual multi-purpose lexical databases. One of the most distinctive features of the tool is reversal of source la...
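The excerpt breaks off at OMBI's reversal feature; purely to illustrate the general idea of reversing a bilingual resource (OMBI's entries are far richer than a flat word list), here is a toy Dutch-to-English lexicon turned into an English-to-Dutch one.

```python
from collections import defaultdict

# Invented Dutch->English word list; OMBI's actual data model is far richer.
nl_en = {
    "huis": ["house", "home"],
    "woning": ["house", "dwelling"],
    "bank": ["bank", "sofa"],
}

def reverse(lexicon):
    """Turn a source->target lexicon into a target->source one."""
    reversed_lex = defaultdict(list)
    for source, targets in lexicon.items():
        for target in targets:
            reversed_lex[target].append(source)
    return dict(reversed_lex)

print(reverse(nl_en)["house"])  # ['huis', 'woning']
```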
In this paper MULTITALE, a system for the semantic tagging of medical neurosurgical texts and for the semi-automatic expansion of the medical lexicon, will be presented. Given the textual information explosion (in particular in, though not restricted to, specialized domains) there is an urgent need for tools enabling the exploitation of the information avai...
In this paper we present some of the features of the model used in the DOT-project. The aim of this pilot project is to find out how to deal with official governmental terminological data in an efficient, consistent and multifunctional way, ensuring maximum accessibility and user-friendliness. The project started on 1 January 1999 and wi...