Liliana Barrio-Alvers

Technische Universität Dresden, Dresden, Saxony, Germany

Publications (5) · 2.58 total impact

  • An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition
    ABSTRACT: This article provides an overview of the first BIOASQ challenge, a competition on large-scale biomedical semantic indexing and question answering (QA), which took place between March and September 2013. BIOASQ assesses the ability of systems to semantically index very large numbers of biomedical scientific articles, and to return concise and user-understandable answers to given natural language questions by combining information from biomedical articles and ontologies. The 2013 BIOASQ competition comprised two tasks, Task 1a and Task 1b. In Task 1a participants were asked to automatically annotate new PubMed documents with MeSH headings. Twelve teams participated in Task 1a, with a total of 46 system runs submitted, and one of the teams performed consistently better than the MTI indexer used by the NLM to suggest MeSH headings to curators. Task 1b used benchmark datasets containing 29 development and 282 test English questions, along with gold standard (reference) answers, prepared by a team of biomedical experts from around Europe, and participants had to produce answers automatically. Three teams participated in Task 1b, with 11 system runs. The BIOASQ infrastructure, including benchmark datasets, evaluation mechanisms, and the results of the participants and baseline methods, is publicly available. A publicly available evaluation infrastructure for biomedical semantic indexing and QA has been developed, which includes benchmark datasets, and can be used to evaluate systems that: assign MeSH headings to published articles or to English questions; retrieve relevant RDF triples from ontologies, and relevant articles and snippets from PubMed Central; and produce "exact" and paragraph-sized "ideal" answers (summaries). The results of the systems that participated in the 2013 BIOASQ competition are promising. In Task 1a one of the systems performed consistently better than the NLM's MTI indexer. In Task 1b the systems received high scores in the manual evaluation of the "ideal" answers; hence, they produced high-quality summaries as answers. Overall, BIOASQ helped obtain a unified view of how techniques from text classification, semantic indexing, document and passage retrieval, question answering, and text summarization can be combined to allow biomedical experts to obtain concise, user-understandable answers to questions reflecting their real information needs.
    Full-text · Article · Apr 2015 · BMC Bioinformatics
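    To make the Task 1a style of evaluation concrete, here is a minimal sketch of one flat multi-label measure, micro-averaged F1 over assigned MeSH headings. The headings in the example are invented, and this is not the official BIOASQ evaluation code (which also includes hierarchical measures).

    ```python
    # Minimal sketch (not the official BIOASQ evaluation): micro-averaged
    # precision/recall/F1 for multi-label MeSH heading assignment.

    def micro_f1(gold, predicted):
        """gold, predicted: lists of sets of MeSH headings, one set per document."""
        tp = fp = fn = 0
        for g, p in zip(gold, predicted):
            tp += len(g & p)   # headings correctly assigned
            fp += len(p - g)   # headings assigned but not in the gold standard
            fn += len(g - p)   # gold headings the system missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        if precision + recall == 0:
            return 0.0
        return 2 * precision * recall / (precision + recall)

    # Hypothetical example: two documents with gold and predicted headings.
    gold = [{"Humans", "Neoplasms"}, {"Mice", "Gene Expression"}]
    pred = [{"Humans", "Neoplasms", "Adult"}, {"Mice"}]
    print(micro_f1(gold, pred))  # 0.75
    ```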
  • M R Alvers · H J Götze · L Barrio-Alvers · C Plonka · S Schmidt · B Lahmeyer
    ABSTRACT: We present a technique whereby triangulated facets and voxel cubes are treated in parallel, allowing integrated models for seismic, magnetic and EM data. It is a commonly accepted truth in the oil industry that 'the easy oil has been found'. Finding the remaining hydrocarbons requires better technologies. Examples are exploration projects below salt and basalt, which are difficult to image with seismic methods. The main exploration method is still seismic, but it has become more important to integrate seismic with other methods in order to improve imaging. In areas of strong lateral velocity and density changes, gravity modelling can help to improve the velocity models used for seismic imaging. Efforts at joint interpretation of, e.g., seismic, gravity and EM methods lead to more and more realistic and therefore more complex models. At first the word 'complex' appears only as a rather fashionable replacement for 'complicated'. 'Complexity' characterizes a system that is difficult to overlook, but where profound analysis allows decomposition into sub-units, e.g., an analysis of the 'entanglement'. By dealing with single parts and understanding their system behaviour, managing complexity can become possible. A joint interpretation, on the other hand, can be called emergent modelling. This more holistic approach leads to more insight than analysing all single aspects separately. The goal should therefore be to take advantage of emergent effects by modelling different aspects simultaneously. 3D gravity and magnetic field and full tensor modelling are used to improve the results of seismic imaging projects. This applies especially to areas of strong lateral velocity and density contrast with corresponding imaging problems. Typical areas where gravity and magnetic modelling have been used successfully are sub-salt (for example O'Brian et al., 2005; Fichler et al., 2007) and sub-basalt (e.g., Reynisson et al., 2007). The integration of human geo-expertise, different techniques and different geo-disciplines plays another important role. This aspect becomes particularly manifest in interactive modelling. In the following sections we describe a novel user-software interaction, automated interpretation, and hybrid techniques in which triangulated facets and voxel cubes are treated in parallel. These ideas allow integrating models for different data such as seismic, gravity, magnetic and EM (planned). This approach is in itself complex, but it allows us to address nature's complexity better than before.
    Modelling vs inversion, integration is needed: In general, a distinction is made between forward modelling and inversion. An example of forward modelling is the solution of differential equations under the assumption of constraints and/or initial conditions in a region whose geometry and physical properties are well known. In theory, the Earth body can generally be interpreted as a data producer (e.g., of gravity) or as a filter, which receives data (e.g., seismic waves), changes these data and outputs them as a product of the filter process. The generated and/or output data are the measured geophysical data. The solution of the inverse problem, however, works from measured and processed data to derive the physical properties and structures of the Earth filter. Mathematically, inversion of geophysical data is always an ill-posed problem because it usually suffers from ambiguity: within the experimental accuracy, many different models 'cause' the same data. This ambiguity can be, and must be, reduced by constraining the model with a priori information; however, it can never be completely ruled out. Therefore, independent data should be interpreted in a joint approach. Data inversion is a central step in the processing and interpretation of measured data (among many other references, e.g., Clauser, 2014). Through the inversion of data, the Earth-filter process is reversed; the aim is the determination of the characteristics of the filter. This holds also for data intrinsically produced by the Earth body itself, such as gravity or magnetism. Various inversion methods are used to interpret geophysical data, some 4D (e.g., time-dependent stress modelling), some 3D, most still 2D, and some even 1D. Examples include seismic 2D ray-tracing models, 2D and 3D density modelling
    Full-text · Article · Apr 2014 · First Break
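    The ambiguity argument above can be made concrete with a toy example. The sketch below solves a small underdetermined linear problem d = Gm with Tikhonov (damped least-squares) regularization, one standard way of injecting a priori information; the matrix, model, and noise level are invented, and this is not the modelling software discussed in the article.

    ```python
    # Illustration only: a tiny linear inverse problem d = G m solved with
    # Tikhonov regularization. Many models fit the data within the noise;
    # the regularization term selects among them (a priori information).
    import numpy as np

    rng = np.random.default_rng(0)
    G = rng.normal(size=(20, 50))      # underdetermined: 20 data, 50 model cells
    m_true = np.zeros(50)
    m_true[10:15] = 1.0                # a compact anomaly, say a density excess
    d = G @ m_true + 0.01 * rng.normal(size=20)   # noisy synthetic data

    def tikhonov(G, d, alpha):
        """Minimize ||G m - d||^2 + alpha * ||m||^2 (damped least squares)."""
        n = G.shape[1]
        return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)

    for alpha in (1e-6, 1e-2, 1e2):
        m = tikhonov(G, d, alpha)
        misfit = np.linalg.norm(G @ m - d)
        print(f"alpha={alpha:g}: misfit {misfit:.3f}, model norm {np.linalg.norm(m):.3f}")
    ```

    Increasing alpha trades data fit for a smaller (more constrained) model, which is exactly the trade-off joint interpretation tries to resolve with independent data rather than with the regularizer alone.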
  • ABSTRACT: The aim of the joint research project is to generate information from airborne geophysical measurements that is properly transferred from physically quantitative descriptions of the subsurface (electrical conductivities, densities, susceptibilities) into spatial structures and information matching the understanding of end-users: geologists, hydrogeologists, engineers and others. We suggest new types of inversion, which are integrated into the interactive workflow to support the typical trial-and-error approach of inverse and forward EM and gravity/magnetic field modelling for 1D and 3D cases. Subsequently, we combine resistivity and density models with geological 3D subsurface models. The integrated workflow minimizes uncertainties in the interpretation of geophysical data and allows a significantly improved and fast interpretation and imaging of the 3D subsurface architecture. The results of the AIDA project demonstrate that combined 3D geological and geophysical models enable a much better reconstruction of the subsurface space. AIDA stands for "From Airborne Data Inversion to In-Depth Analysis" and is part of the R&D program Tomography of the Earth's Crust—From Geophysical Sounding to Real-Time Monitoring.
    No preview · Chapter · Jan 2014
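    As a hedged illustration of the kind of consistency check such a combined geological-geophysical workflow enables, the sketch below flags model cells whose inverted resistivity or density falls outside the range expected for their geological unit. The units, ranges, and values are invented for illustration, not AIDA project data.

    ```python
    # Sketch: cross-check inverted physical properties against ranges
    # expected for each geological unit (hypothetical values).
    EXPECTED = {  # unit -> (resistivity range in ohm*m, density range in g/cm^3)
        "clay":   ((1.0, 30.0),     (1.8, 2.1)),
        "sand":   ((50.0, 500.0),   (1.9, 2.2)),
        "gravel": ((100.0, 2000.0), (2.0, 2.3)),
    }

    def inconsistent_cells(cells):
        """cells: list of (unit, resistivity, density) tuples per model voxel."""
        flagged = []
        for unit, rho, dens in cells:
            (rho_lo, rho_hi), (d_lo, d_hi) = EXPECTED[unit]
            if not (rho_lo <= rho <= rho_hi and d_lo <= dens <= d_hi):
                flagged.append((unit, rho, dens))
        return flagged

    cells = [("clay", 12.0, 1.95), ("sand", 700.0, 2.05), ("gravel", 300.0, 2.1)]
    print(inconsistent_cells(cells))  # [('sand', 700.0, 2.05)]: resistivity too high
    ```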
  • AIDA - From Airborne Data Inversion to In-Depth Analysis (oral presentation A-326, session S1: Geotomography, 73rd Annual Meeting of the Deutsche Geophysikalische Gesellschaft)
    ABSTRACT: From Airborne Data Inversion to In-Depth Analysis (AIDA) is a joint project of selected university institutions and federal state services. It aims to investigate new and advanced methods and applications for joint electromagnetic, electric and magnetic data sets. A current special interest of the project is to develop a decision support system for whether 1D or 3D inversion schemes are needed. To this end, profile (1D) or grid (2D) gradients are evaluated for decision making. In parallel to this work, an enhanced 1D inversion and a new 3D inversion scheme are being developed. Their main commonalities are boundary conditions and best model-to-data fits. In order to estimate the quality of the numerical models, geological models based on borehole and other structural data, with given internal physical parameters such as conductivity, susceptibility etc., are developed and used for
    Full-text · Conference Paper · Mar 2013
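    One plausible reading of the gradient-based decision criterion is sketched below: if lateral gradients along a flight profile stay small, a layered (1D) inversion may suffice, while strong gradients suggest 3D effects. The threshold and data are hypothetical, not values from the project.

    ```python
    # Hedged sketch of a 1D-vs-3D decision criterion based on lateral
    # gradients along a profile (threshold and data are invented).
    import numpy as np

    def needs_3d(profile_values, spacing_m, threshold_per_km):
        """profile_values: e.g. apparent resistivities sampled along a profile."""
        grad = np.abs(np.gradient(profile_values, spacing_m)) * 1000.0  # per km
        return grad.max() > threshold_per_km

    rho = np.array([100, 102, 101, 99, 180, 320, 150, 103, 100], dtype=float)
    print(needs_3d(rho, spacing_m=50.0, threshold_per_km=200.0))  # True: sharp contrast
    ```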
  • ABSTRACT: With the ever-increasing size of the scientific literature, finding relevant documents and answering questions has become even more of a challenge. Recently, ontologies (hierarchical, controlled vocabularies) have been introduced to annotate genomic data. They can also improve question answering and the selection of relevant documents in literature search. Search engines such as GoPubMed.org use ontological background knowledge to give an overview of large query results and to answer questions. We review the problems and solutions underlying these next-generation intelligent search engines and give examples of the power of this new search paradigm.
    Keywords: PubMed · Literature search · Ontology · Intelligent search
    No preview · Chapter · Dec 2008
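    As a simplified illustration of the ontology-based search idea (not GoPubMed's actual implementation), the sketch below expands a query concept to all of its descendants in a toy hierarchical vocabulary, so that documents annotated with more specific terms still match a broader query.

    ```python
    # Toy ontology-based query expansion: match documents annotated with any
    # descendant of the query concept. The vocabulary fragment is invented.
    ONTOLOGY = {  # child -> parent
        "apoptosis": "cell death",
        "necrosis": "cell death",
        "cell death": "biological process",
    }

    def descendants(concept, ontology):
        """The concept itself plus all concepts below it in the hierarchy."""
        result = {concept}
        changed = True
        while changed:
            changed = False
            for child, parent in ontology.items():
                if parent in result and child not in result:
                    result.add(child)
                    changed = True
        return result

    print(descendants("cell death", ONTOLOGY))
    # {'cell death', 'apoptosis', 'necrosis'}: documents annotated with either
    # specific term now match the broader query.
    ```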