Javad Pourmostafa Roshan Sharami
PhD Candidate in Artificial Intelligence
Tilburg University (UVT) · Department of Cognitive Science and Artificial Intelligence
About
18 Publications · 13,120 Reads
17 Citations
Introduction
Personal home page: https://javad.pourmostafa.me
Education
March 2021 - March 2025
Publications (18)
We first describe cloud computing, its deployment models, and the types of existing clouds. Finally, we elaborate on several well-known design patterns in this area.
This presentation covers the various modules that make up a deep Persian sentiment analysis framework, based on our paper accepted at the 5th National Conference on Computational Linguistics of Iran.
This paper focuses on how to extract opinions from Persian text at the sentence level. Deep learning models provide a new way to boost output quality. However, these architectures need to be fed with large annotated datasets as well as an accurate design. To the best of our knowledge, we do not merely suffer from a lack of well-annotated Persian sentimen...
Continuously-growing data volumes lead to larger generic models. Specific use-cases are usually left out, since generic models tend to perform poorly in domain-specific cases. Our work addresses this gap with a method for selecting in-domain data from generic-domain (parallel text) corpora, for the task of machine translation. The proposed method r...
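Not the paper's exact pipeline, but a minimal sketch of the general idea behind similarity-based in-domain data selection: embed an in-domain sample and the generic pool, score each generic sentence by its similarity to the in-domain sample, and keep the top-ranked sentences. The model name, example sentences, and top_k value below are illustrative, and the sketch assumes the sentence-transformers package.

```python
# Generic sketch of similarity-based in-domain data selection (not the paper's exact method).
# Assumes the sentence-transformers package; model name, data, and top_k are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

in_domain = ["The patient was given 20 mg of the drug twice daily."]  # small in-domain sample
generic = [
    "Stocks fell sharply on Monday.",
    "The tablet should be taken with food.",
    "The weather will be sunny tomorrow.",
]  # generic-domain pool to select from

emb_dom = model.encode(in_domain, convert_to_tensor=True)
emb_gen = model.encode(generic, convert_to_tensor=True)

# Score each generic sentence by its highest cosine similarity to the in-domain sample.
scores = util.cos_sim(emb_gen, emb_dom).max(dim=1).values

top_k = 1
ranked = scores.argsort(descending=True)
selected = [generic[int(i)] for i in ranked[:top_k]]
print(selected)  # expected: the drug-related sentence
```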
While quality estimation (QE) can play an important role in the translation process, its effectiveness relies on the availability and quality of training data. For QE in particular, high-quality labeled data is often lacking due to the high cost and effort associated with labeling such data. Aside from the data scarcity challenge, QE models should...
The quality of output from large language models (LLMs), particularly in machine translation (MT), is closely tied to the quality of in-context examples (ICEs) provided along with the query, i.e., the text to translate. The effectiveness of these ICEs is influenced by various factors, such as the domain of the source text, the order in which the IC...
Businesses and customers can gain valuable information from product reviews. The sheer number of reviews often necessitates ranking them based on their potential helpfulness. However, only a few reviews ever receive any helpfulness votes on online marketplaces. Sorting all reviews based on the few existing votes can cause helpful reviews to go unno...
The effectiveness of Neural Machine Translation (NMT) models largely depends on the vocabulary used at training; small vocabularies can lead to out-of-vocabulary problems -- large ones, to memory issues. Subword (SW) tokenization has been successfully employed to mitigate these issues. The choice of vocabulary and SW tokenization has a significant...
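Not tied to the paper's experiments, but a small sketch of how a subword vocabulary is typically trained and applied with a standard SW tokenizer, and how the vocabulary size is set. It assumes the sentencepiece package; the corpus path and vocab_size are illustrative.

```python
# Minimal illustration of training and applying a subword (BPE) tokenizer.
# Assumes the sentencepiece package; corpus path and vocab_size are illustrative.
import sentencepiece as spm

# Train a BPE model on a plain-text corpus with a chosen vocabulary size.
spm.SentencePieceTrainer.train(
    input="train.en.txt", model_prefix="bpe_en", vocab_size=8000, model_type="bpe"
)

sp = spm.SentencePieceProcessor(model_file="bpe_en.model")
print(sp.encode("The effectiveness of NMT depends on the vocabulary.", out_type=str))
# Rare words are split into smaller, in-vocabulary pieces, avoiding OOV tokens.
```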
Businesses and customers can gain valuable information from product reviews. The sheer number of reviews often necessitates ranking them based on their potential helpfulness. However, only a few reviews ever receive any helpfulness votes on online marketplaces. Sorting all reviews based on the few existing votes can cause helpful reviews to go unno...
With the increase in machine translation (MT) quality over the latest years, it has now become a common practice to integrate MT in the workflow of language service providers (LSPs) and other actors in the translation industry. With MT having a direct impact on the translation workflow, it is important not only to use high-quality MT systems, but a...
Machine Translation (MT) has become an irreplaceable part of translation industry workflows. With a direct impact on productivity, it is very important for human post-editors and project managers to be informed about the translation quality of MT.
MT Quality estimation (QE) is the task of predicting the quality of a translation without human refer...
These slides are from my talk at the 13th TAISIG meeting, where I presented one of my papers, titled "Selecting Parallel In-domain Sentences for NMT Using Monolingual Texts".
For more information about the TAISIG talks, visit this link: https://www.tilburguniversity.edu/research/institutes-and-research-groups/taisig
Continuously-growing data volumes lead to larger generic models. Specific use-cases are usually left out, since generic models tend to perform poorly in domain-specific cases. Our work addresses this gap with a method for selecting in-domain data from generic-domain (parallel text) corpora, for the task of machine translation. The proposed method r...
General-domain corpora are becoming increasingly available for Machine Translation (MT) systems. However, using those that cover the same or comparable domains allows domain-specific MT to achieve high translation quality. It is often the case that domain-specific corpora are scarce and cannot be used in isolation to effectively train (domain-spec...
Sentiment Analysis, or Opinion Mining, is one of the developing fields in text mining, used to identify and extract people's opinions and emotions toward entities, issues, events, or topics. A lot of research has been done to improve the performance of sentiment analysis systems, such as using simple linear models in machine learning and more complex...
This presentation introduces a sentiment analysis system and addresses long-term dependencies by using Long Short-Term Memory (LSTM) networks in deep learning.
The aim of the presentation is to clarify why LSTM models are effective; to that end, RNNs and LSTMs are described first.
Eventually, we used a plain stacked LSTM model with the dropout technique i...
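Not the exact model from the talk, but a minimal sketch of what a plain stacked LSTM with dropout for binary sentiment classification might look like. It assumes TensorFlow/Keras, and all layer sizes, the vocabulary size, and the dropout rate are illustrative.

```python
# Minimal stacked-LSTM-with-dropout sketch for binary sentiment classification.
# Assumes TensorFlow/Keras; all sizes are illustrative, not the talk's settings.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, Dense

model = Sequential([
    Embedding(input_dim=20000, output_dim=128),  # token ids -> dense vectors
    LSTM(64, return_sequences=True),             # first LSTM layer feeds the next one
    Dropout(0.5),                                # dropout between stacked layers
    LSTM(64),
    Dropout(0.5),
    Dense(1, activation="sigmoid"),              # positive / negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```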
An overview of optical-fiber telecommunication networks, which involve numerous concepts and methods.
With EvalVid we present a complete framework and tool-set for evaluation of the quality of video transmitted over a real or simulated communication network. Besides measuring QoS parameters of the underlying network, like loss rates, delays, and jitter, we support also a subjective video quality evaluation of the received video based on the frame-b...
Questions (5)
It has previously been shown that the proximity of two embedding vectors can be computed with various distance/similarity measures, such as cosine similarity, Manhattan distance, Euclidean distance, etc. In one of our projects, we aim to employ one of them to compute the similarity score of two embedded vectors. But before choosing one intuitively, I thought it would be a good idea to discuss here whether these measures have specific use cases, i.e., whether method X is preferable to method Y for task/use case Z (e.g., if the magnitude of the vectors matters for the task, method X should be used, or if the vector dimension is greater than x, method D is recommended). Please, if possible, reply to this thread with references to academic papers alongside your explanation. Thanks in advance.
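As a starting point for the discussion, here is a minimal sketch (assuming NumPy and SciPy; the vectors are purely illustrative) that contrasts the three measures on two vectors pointing in the same direction but with different magnitudes, which is exactly the situation where they disagree.

```python
# Compare cosine similarity, Euclidean distance, and Manhattan distance
# on two vectors that differ only in magnitude. Assumes NumPy and SciPy.
import numpy as np
from scipy.spatial.distance import cosine, euclidean, cityblock

a = np.array([0.2, 0.8, 0.5, 0.1])
b = np.array([0.4, 1.6, 1.0, 0.2])  # same direction as a, twice the magnitude

print("cosine similarity:", 1 - cosine(a, b))  # 1.0 -> magnitude is ignored
print("euclidean distance:", euclidean(a, b))  # > 0 -> magnitude matters
print("manhattan distance:", cityblock(a, b))  # > 0 -> magnitude matters
```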
Greetings,
The number of citations of our arXiv papers keeps increasing (if it were just a few, I wouldn't care). The problem is that they have not been indexed by ResearchGate yet. I assume the most likely reason is that the citing papers are arXiv preprints as well.
I was wondering whether anyone has faced the same scenario and could explain how they solved it, if that is feasible at all.
It is worth mentioning that I have seen people add citations manually using data files, but that sounds clunky to me and I am looking for a smarter way.
Regards,
Javad
I am working on text extraction from street-level images. There are some pre-processing steps, such as segmentation and fast filtering, and then the pipeline moves on to pattern classification.
The related article mentions that, due to the variability of the analyzed regions, the descriptors must be invariant to rotation and scale, so the authors tested many different shape descriptors (such as Hu moments, Fourier moments, ...). Among them, they selected two families of moments: the Fourier moments and the pseudo-Zernike moments.
To make a long story short, the main questions are: what do these moments mean, and how can we compute the pseudo-Zernike, Fourier, and polar moments and then integrate them into an SVM to obtain the final decision, as in the attached picture?
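Not the article's method, but a minimal sketch of the general pattern (compute a shape descriptor per region, stack the descriptors into feature vectors, train an SVM, and let it make the final decision). Hu moments via OpenCV stand in for the pseudo-Zernike and Fourier moments, since those are not available in common libraries; the region images, labels, and SVM parameters are illustrative.

```python
# Generic "shape descriptors -> SVM" sketch, not the article's exact method.
# Hu moments (OpenCV) stand in for the pseudo-Zernike / Fourier moments.
import cv2
import numpy as np
from sklearn.svm import SVC

def describe(region_img):
    """Binary region image -> rotation/scale-tolerant descriptor vector."""
    moments = cv2.moments(region_img)
    hu = cv2.HuMoments(moments).flatten()
    # Log-scale the Hu moments so their magnitudes are comparable.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

# Illustrative data: binary masks of candidate regions and text / non-text labels.
regions = [np.random.randint(0, 2, (32, 32), dtype=np.uint8) * 255 for _ in range(20)]
labels = np.random.randint(0, 2, 20)

X = np.array([describe(r) for r in regions])
clf = SVC(kernel="rbf", probability=True).fit(X, labels)

# The final decision for a new region comes from the trained SVM.
print(clf.predict([describe(regions[0])]))
```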
Sentiment Analysis or Opinion Mining
Hello,
I have installed EvalVid and its relevant components on Ubuntu 16.04, but when I try to run commands such as PSNR, MOS, or etmp4, the terminal responds with "command not found"; it seems they were not installed at all, even though I followed all of the installation steps.
When I list the current directory, PSNR, MOS, and the rest are shown, but when I try to execute them the terminal says there is no such file.
NB: I even watched a YouTube video tutorial and followed its instructions, but at the final stage, as described above, it still does not work.
Thanks in advance