Myeongjun Jang
University of Oxford · Department of Computer Science

Master of Engineering

About

21 Publications · 1,766 Reads · 167 Citations
Introduction
I am currently working in the Intelligent System Labs at the Department of Computer Science, University of Oxford. My research interests lie in Natural Language Processing, Knowledge Representation, and Semantic Analysis. I previously worked at AI-Language Tech Labs, SK Telecom, South Korea, from April 2019 to September 2020.
Additional affiliations
April 2019 - September 2020
SK Telecom
Position
  • Engineer
Education
March 2017 - March 2019
Korea University
Field of study
  • Natural Language Processing
March 2013 - February 2017
Korea University
Field of study
  • Industrial Engineering

Publications

Publications (21)
Preprint
Full-text available
While recent works have considerably improved the quality of the natural language explanations (NLEs) generated by a model to justify its predictions, there is very limited research on detecting and alleviating inconsistencies among generated NLEs. In this work, we leverage external knowledge bases to significantly improve on an existing adve...
Preprint
Full-text available
ChatGPT, a question-and-answer dialogue system based on a large language model, has gained huge popularity since its introduction. Its positive aspects have been reported across many media platforms, and some analyses have even shown that ChatGPT achieved a decent grade in professional exams, including the law, medical, and finance domains, adding ext...
Preprint
Full-text available
The logical negation property (LNP), which requires a model to generate different predictions for semantically opposite inputs, is an important property that a trustworthy language model must satisfy. However, much recent evidence shows that large pre-trained language models (PLMs) do not satisfy this property. In this paper, we perform experiments usin...
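As a rough illustration of what an LNP probe can look like in practice, the sketch below queries a masked language model with an affirmative sentence and its negation and checks whether the top prediction changes. The model name and prompts are illustrative placeholders, not the exact setup of the paper.

```python
# Minimal sketch of probing the logical negation property (LNP):
# a model satisfying the LNP should change its prediction when the
# input is negated. Model and prompts are illustrative only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

affirmative = "A bird can [MASK]."
negated = "A bird cannot [MASK]."

top_affirm = fill_mask(affirmative)[0]["token_str"]
top_negate = fill_mask(negated)[0]["token_str"]

print(f"affirmative -> {top_affirm}")
print(f"negated     -> {top_negate}")

# If the two top predictions coincide, the model has ignored the
# negation, i.e. it violates the LNP for this probe.
if top_affirm == top_negate:
    print("LNP violated: same prediction for semantically opposite inputs")
```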
Preprint
Full-text available
A well-formulated benchmark plays a critical role in spurring advancements in the natural language processing (NLP) field, as it allows objective and precise evaluation of diverse models. As modern language models (LMs) have become more elaborate and sophisticated, more difficult benchmarks that require linguistic knowledge and reasoning have been...
Article
Full-text available
The recent development in pretrained language models that are trained in a self-supervised fashion, such as BERT, is driving rapid progress in natural language processing. However, their brilliant performance is based on leveraging syntactic artefacts of the training data rather than fully understanding the intrinsic meaning of language. The excess...
Preprint
Full-text available
The recent development in pretrained language models trained in a self-supervised fashion, such as BERT, is driving rapid progress in the field of NLP. However, their brilliant performance is based on leveraging syntactic artifacts of the training data rather than fully understanding the intrinsic meaning of language. The excessive exploitation of...
Preprint
Full-text available
Natural language free-text explanation generation is an efficient approach to training explainable language processing models for tasks that require commonsense knowledge. The predominant form of these models is the explain-then-predict (EtP) structure, which first generates explanations and then uses them for making decisions. The performance of EtP mod...
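For readers unfamiliar with the explain-then-predict pattern, here is a minimal sketch of the two-stage flow, assuming a generic instruction-tuned text-to-text model; the model name, prompts, and task are illustrative placeholders, not the paper's configuration.

```python
# Hedged sketch of the explain-then-predict (EtP) pattern: one pass
# generates a free-text explanation, and the label is then predicted
# conditioned on that explanation. Prompts and model are illustrative.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-small")

premise = "A dog is running in the park."
hypothesis = "An animal is outside."

# Step 1 (explain): generate a free-text explanation for the input.
explanation = generator(
    f"Explain why. premise: {premise} hypothesis: {hypothesis}"
)[0]["generated_text"]

# Step 2 (predict): make the decision using the explanation.
label = generator(
    f"premise: {premise} hypothesis: {hypothesis} "
    f"explanation: {explanation} Does the premise entail the hypothesis? "
    "Answer yes or no."
)[0]["generated_text"]

print(explanation, "->", label)
```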
Preprint
Full-text available
Consistency, which refers to the capability of generating the same predictions for semantically similar contexts, is a highly desirable property for a sound language understanding model. Although recent pretrained language models (PLMs) deliver outstanding performance in various downstream tasks, they should exhibit consistent behaviour provided th...
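A minimal sketch of what a consistency probe can look like: classify two paraphrases of the same sentence and flag a violation when the labels differ. The classifier and sentence pair are illustrative placeholders, not the paper's benchmark.

```python
# Minimal consistency probe: a sound model should assign the same
# label to semantically equivalent inputs. Model and sentences are
# illustrative only.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

original = "The movie was a complete waste of time."
paraphrase = "Watching the movie was a total waste of time."

pred_a = classifier(original)[0]["label"]
pred_b = classifier(paraphrase)[0]["label"]

# Differing labels for paraphrases constitute a consistency violation.
consistent = pred_a == pred_b
print(f"{pred_a} vs {pred_b} -> consistent: {consistent}")
```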
Article
Full-text available
Previous methods for system intrusion detection have mainly been based on pattern matching, which employs prior knowledge extracted from experts' domain knowledge. However, pattern-matching-based methods have a major drawback: they can be bypassed through various modified techniques. These advanced persistent threats cause limitation...
Article
Full-text available
Text summarization is an information condensation technique that abbreviates a source document to a few representative sentences, with the intention of creating a coherent summary containing the relevant information of the source corpora. This promising subject has developed rapidly since the advent of deep learning. However, summarization models based o...
Article
Sentence embedding is an important research topic in natural language processing. It is essential to generate a good embedding vector that fully reflects the semantic meaning of a sentence in order to achieve an enhanced performance for various natural language processing tasks, such as machine translation and document classification. Thus far, var...
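As background, a common baseline for producing such embedding vectors is to mean-pool a pretrained encoder's token representations and compare sentences by cosine similarity. The sketch below shows this generic baseline (not the method proposed in the paper), with bert-base-uncased as an illustrative model.

```python
# Generic sentence-embedding baseline: mean-pool BERT token
# embeddings and compare sentences with cosine similarity.
# This is a common baseline, not the method proposed above.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)   # mask out padding
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

a = embed("A man is playing a guitar.")
b = embed("Someone is strumming a guitar.")
print(f"cosine similarity: {torch.cosine_similarity(a, b).item():.3f}")
```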
Article
The Vehicle Dependability Study (VDS) is a survey of customer satisfaction with vehicles three years after sale. VDS data analytics plays an important role in the vehicle development process because it can contribute to enhancing the brand image and sales of an automobile company by properly reflecting customer requirements retrie...
Preprint
Full-text available
Sentence embedding is a significant research topic in the field of natural language processing (NLP). Generating sentence embedding vectors reflecting the intrinsic meaning of a sentence is a key factor to achieve an enhanced performance in various NLP tasks such as sentence classification and document summarization. Therefore, various sentence emb...
Preprint
Full-text available
Sentence embedding is an important research topic in natural language processing. It is essential to generate a good embedding vector that fully reflects the semantic meaning of a sentence in order to achieve an enhanced performance for various natural language processing tasks, such as machine translation and document classification. Thus far, var...
Article
Full-text available
Sequence-to-sequence (Seq2seq) models have played an important role in the recent success of various natural language processing methods, such as machine translation, text summarization, and speech recognition. However, current Seq2seq models have trouble preserving global latent information from a long sequence of words. Variational autoencoder (VAE)...
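To make the VAE connection concrete, the sketch below shows the reparameterization trick that lets a sentence VAE sample a global latent code differentiably from the encoder's summary vector. Dimensions and layer shapes are illustrative, not the paper's architecture.

```python
# Minimal sketch of the VAE reparameterization trick: the encoder's
# summary vector is mapped to a mean and log-variance, and a latent
# code is sampled so gradients flow through the sampling step.
# Dimensions are illustrative placeholders.
import torch
import torch.nn as nn

class LatentSampler(nn.Module):
    def __init__(self, hidden_dim: int = 512, latent_dim: int = 64):
        super().__init__()
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, h: torch.Tensor):
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # KL term pulls the posterior toward the standard normal prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return z, kl

sampler = LatentSampler()
h = torch.randn(2, 512)   # e.g. final encoder states for a batch of 2
z, kl = sampler(h)
print(z.shape, kl.shape)  # torch.Size([2, 64]) torch.Size([2])
```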
