Rik Van de Walle

Ghent University, Ghent, Flanders, Belgium

Publications (637) · 194.75 Total Impact Points

  • ABSTRACT: The production of animation is a resource-intensive process in game companies, so techniques to synthesize animations have been developed. However, these procedural techniques offer limited adaptability to animation artists. To address this, a fuzzy neural network model of the animation is proposed, whose parameters can be tuned either by machine learning techniques that use motion capture data as training data or by the animation artist directly. This paper illustrates how such a real-time procedural animation system can be developed, taking the human gait on flat terrain and inclined surfaces as an example. Currently, the parametric model is capable of synthesizing animations for various limb sizes and step sizes.
    10/2014
  • ABSTRACT: Scientific publications point to many associated resources, including videos, prototypes, slides, and datasets. However, discovering and accessing these resources is not always straightforward: links can be broken, readers may be offline, or the number of associated resources may make it difficult to keep track of the viewing order. In this paper, we explore the potential integration of such resources into the digital version of a scientific publication. Specifically, we evaluate the most common scientific publication formats (PDF, HTML, EPUB2, and EPUB3) in terms of their capability to implement the desirable attributes of an enhanced publication and to meet the functional goals of an enhanced publication information system. In addition, we present an EPUB3 version of an exemplary publication in the field of computer science, integrating and interlinking an explanatory video and an interactive prototype. Finally, we introduce a demonstrator that can output customized scientific publications in EPUB3. By using EPUB3 to create an integrated and customizable representation of a scientific publication and its associated resources, we believe we can augment the reading experience of scholarly publications, and thus the effectiveness of scientific communication.
    Proceedings of the 4th Workshop on Linked Science; 10/2014
  • Proceedings of the 13th International Semantic Web Conference (Oct 2014); 10/2014
  • ABSTRACT: Semantically annotating and interlinking Open Data results in Linked Open Data, which concisely and unambiguously describes a knowledge domain. However, the uptake of Linked Data depends on its usefulness to non-Semantic Web experts: failing to help data consumers understand the added value of Linked Data and its possible exploitation opportunities could inhibit its diffusion. In this paper, we propose an interactive visual workflow for discovering and exploring Linked Open Data. We implemented the workflow for academic library metadata and carried out a qualitative evaluation. We assessed the workflow's potential impact on data consumers, bridging the offer (published Linked Open Data) and the demand (requests for (i) higher-quality data and (ii) more applications that reuse data). More than 70% of the test users agreed that the workflow fulfills its goal: it helps non-Semantic Web experts understand the potential of Linked Open Data.
    Proceedings of the 3rd Workshop Intelligent Exploration of Semantic Data; 10/2014
  • Proceedings of the 13th International Semantic Web Conference: Posters and Demos; 10/2014
  • ABSTRACT: In this paper, we investigate the feasibility of using collective knowledge to predict the winner of a soccer game. Specifically, we developed different methods that extract and aggregate the information contained in over 50 million Twitter microposts to predict the outcome of soccer games, considering methods that use the Twitter volume, the sentiment towards teams, and the score predictions made by Twitter users. Apart from collective knowledge-based prediction methods, we also implemented traditional statistical methods. Our results show that the combination of different types of methods, using both statistical knowledge and large sources of collective knowledge, can beat both expert and bookmaker predictions. For instance, we were able to realize a monetary profit of almost 30% when betting on soccer games of the second half of the 2013-2014 English Premier League season.
    Workshop on Large-Scale Sports Analytics; 08/2014
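The abstract above combines micropost volume and sentiment into an outcome prediction. A minimal sketch of that idea follows; the function name, the volume weight, and the decision threshold are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: aggregating per-team sentiment and volume from
# microposts into a match-outcome prediction. Weights are invented.

def predict_outcome(posts, home, away):
    """Return 'home', 'away', or 'draw' from (team, sentiment) pairs,
    where sentiment lies in [-1, 1]."""
    score = {home: 0.0, away: 0.0}
    for team, sentiment in posts:
        if team in score:
            # Each micropost contributes its sentiment plus a small
            # volume bonus, so frequently mentioned teams gain weight.
            score[team] += sentiment + 0.1
    margin = score[home] - score[away]
    if abs(margin) < 0.5:  # too close to call
        return "draw"
    return "home" if margin > 0 else "away"

posts = [("Arsenal", 0.8), ("Arsenal", 0.4), ("Chelsea", -0.2), ("Chelsea", 0.6)]
print(predict_outcome(posts, "Arsenal", "Chelsea"))
```

A real system would of course learn such weights from historical matches rather than hard-code them.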
  • Tarek Beji, Steven Verstockt, Rik Van de Walle, Bart Merci
    ABSTRACT: The potential of combining video data analysis and numerical simulations for numerical fire forecasting is illustrated for the case of a burning sofa in an ISO room. The fire is monitored by means of a video camera, and the temporal evolution of smoke layer height, flame height, and flame width is obtained from real-time video data analysis. The fire heat release rate, estimated from the flame height and width, serves as input for the numerical simulations. The two-zone model approach is adopted because the calculations are very fast. This is necessary for forecasting: time scales in fire development are on the order of seconds or minutes, not the hours that are typical calculation times in CFD simulations. Data assimilation, with real-time adjustments to sudden changes in the observed fire development, improves the predictions of the two-zone model and makes it possible to forecast the fire development and possible subsequent hazards in terms of the evolution of smoke layer height and temperature.
    Fire Technology 07/2014 · 0.70 Impact Factor
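The abstract estimates the heat release rate from video-derived flame height and width. One standard way to do this is to invert the classical Heskestad flame-height correlation, L = 0.235 Q^(2/5) - 1.02 D (L and D in metres, Q in kW); the paper's actual estimator may differ, so the sketch below only illustrates the idea of turning flame geometry into an HRR input for a two-zone model.

```python
# Sketch: estimating heat release rate Q (kW) from flame height L (m)
# and flame width D (m) by inverting the Heskestad correlation
# L = 0.235 * Q**(2/5) - 1.02 * D. Illustrative, not the paper's code.

def heat_release_rate(flame_height_m, flame_width_m):
    """Q = ((L + 1.02 * D) / 0.235) ** 2.5, in kW."""
    return ((flame_height_m + 1.02 * flame_width_m) / 0.235) ** 2.5

# e.g. a 1.5 m tall, 0.8 m wide flame:
q = heat_release_rate(1.5, 0.8)
print(round(q))  # heat release rate in kW
```

Feeding such an estimate into a zone model at each video frame is what makes the real-time forecasting loop described above feasible.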
  • ABSTRACT: Web resources can be linked directly to their provenance, as specified in W3C PROV-AQ. On its own, this solution places all responsibility on the resource's publisher, who hopefully maintains and publishes provenance information. In reality, however, most publishers lack incentives to publish their resources' provenance, even if the authors would like such information to be published. Currently, it is impossible to link existing resources to new provenance information provided by either the author or a third party. In this paper, we present a solution to this problem by implementing a lightweight, read/write provenance query service, integrated with a pingback mechanism, following PROV-AQ.
    Proceedings International Provenance and Annotation Workshop; 06/2014
  • ABSTRACT: Incorporating structured data into the Linked Data cloud is still complicated, despite the numerous existing tools. In particular, hierarchical structured data (e.g., JSON) are underrepresented due to their processing complexity. A uniform mapping formalization for data in different formats, which would enable reuse and exchange between tools and across datasets, is missing. This paper describes a novel approach to mapping heterogeneous and hierarchical data sources into RDF using the RML mapping language, an extension of R2RML (the W3C standard for mapping relational databases into RDF). To facilitate those mappings, we present a toolset for producing RML mapping files using the Karma data modelling tool and for consuming them using a prototype RML processor. A use case shows how RML facilitates the definition and execution of mapping rules over several heterogeneous sources.
    2014 IEEE International Conference on Semantic Computing (ICSC); 06/2014
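RML's core model pairs a logical source iterator with a subject template and predicate-object maps. The toy sketch below mimics that model over JSON; the rule dictionary and all names are invented for illustration and are not RML syntax or the paper's processor.

```python
# Toy illustration of the RML mapping model (iterator + subject template
# + predicate-object maps) applied to hierarchical JSON. Rule syntax is
# invented; real RML rules are expressed in RDF (Turtle).

import json

rule = {
    "iterator": "people",                         # which JSON array to iterate
    "subject": "http://example.org/person/{id}",  # subject IRI template
    "predicate_object": [("http://xmlns.com/foaf/0.1/name", "name")],
}

def map_to_triples(doc, rule):
    triples = []
    for record in doc[rule["iterator"]]:
        subject = rule["subject"].format(**record)
        for predicate, field in rule["predicate_object"]:
            triples.append((subject, predicate, record[field]))
    return triples

doc = json.loads('{"people": [{"id": "1", "name": "Alice"}]}')
print(map_to_triples(doc, rule))
# [('http://example.org/person/1', 'http://xmlns.com/foaf/0.1/name', 'Alice')]
```

The point of the uniform formalization is that the same rule structure works whether the iterator walks JSON objects, XML nodes, or CSV rows.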
  • ABSTRACT: To inform citizens when they can use government services, governments publish the services' opening hours on their websites. If opening hours were published in a machine-interpretable manner, software agents would be able to answer queries about when it is possible to contact a certain service. We introduce an ontology for describing opening hours and use this ontology to create an input form. Furthermore, we explain a reasoning approach that can answer queries about which government services are open or closed, with the data modeled according to this ontology. The principles discussed and applied in this paper are the first steps towards a design pattern for the governance of Open Government Data.
    Eighth IEEE International Conference on Semantic Computing; 06/2014
  • ABSTRACT: This paper presents a framework for processing visual and auditory textures in an augmented reality environment that enables real-time artistic creativity without imposing predefined interaction rules or constraints. It integrates knowledge from multiple problem domains (sonification, real-time rendering, object tracking, and object recognition) in a collaborative art installation built around a familiar Carom Billiards game table, motion tracking cameras, a table-top digital projector, and a digital audio installation. A demonstrator was presented at a 10-day annual innovation exhibition in Belgium and was perceived as innovative, intuitive, and very easy to interact with.
    Human Computer Interaction International Conference, Crete, Greece; 06/2014
  • ABSTRACT: In this paper, we propose the Normalized Freebase Distance (NFD), a new measure for determining semantic concept relatedness that is based on principles similar to those of the Normalized Web Distance (NWD). We illustrate that the NFD is more effective when comparing ambiguous concepts.
    Extended Semantic Web Conference; 05/2014
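The NWD that the NFD builds on is computed from occurrence and co-occurrence counts: NWD(x, y) = (max(log f(x), log f(y)) - log f(x, y)) / (log N - min(log f(x), log f(y))). The sketch below evaluates that formula with made-up counts; a real NFD would derive the counts from Freebase, which this sketch does not do.

```python
# Sketch of the Normalized Web Distance formula underlying the NFD,
# fed with invented counts. x_count / y_count: occurrences of each
# concept; xy_count: co-occurrences; n: total collection size.

from math import log

def normalized_distance(x_count, y_count, xy_count, n):
    fx, fy, fxy = log(x_count), log(y_count), log(xy_count)
    return (max(fx, fy) - fxy) / (log(n) - min(fx, fy))

# Concepts that always co-occur are at distance 0; rarely
# co-occurring concepts yield larger values.
print(normalized_distance(1000, 1000, 1000, 10**9))  # 0.0
```

Swapping the Web page counts of the NWD for Freebase entity counts is what makes the measure better behaved on ambiguous concepts, per the abstract.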
  • ABSTRACT: Despite the significant number of existing tools, incorporating data into the Linked Open Data cloud remains complicated, discouraging data owners from publishing their data as Linked Data. Unlocking the semantics of published data, even when they are not provided by the data owners, can help surpass the barriers posed by the low availability of Linked Data and come closer to the realisation of the envisaged Semantic Web. RML, a generic mapping language based on an extension of R2RML (the W3C standard for mapping relational databases into RDF), offers a uniform way of defining mapping rules for data in heterogeneous formats. In this paper, we present how we adjusted our prototype RML processor, taking advantage of RML's scalability, to extract and map data of workshop proceedings published in HTML to the RDF data model for the needs of the Semantic Publishing Challenge.
    Semantic Publishing Challenge of the 11th Extended Semantic Web Conference; 05/2014
  • ABSTRACT: To date, there are almost no tools that support the elaboration and research of project ideas in media pre-production; the typical tools in use are merely a browser and a simple text editor. Our goal is therefore to improve the pre-production process by structuring the multimedia and accompanying annotations found by the creator, by providing functionality that makes it easier to find appropriate multimedia more efficiently, and by providing the possibility to work together. To achieve these goals, intelligent multimedia mind maps are introduced. These mind maps make it possible to structure multimedia information and accompanying annotations by creating relations between the multimedia items. By automatically connecting to external sources, the user can rapidly search different information sources without visiting them one by one. Furthermore, the content added to the mind map is analyzed and enriched; these enrichments are then used to give the user extra recommendations based on the content of the current mind map. Subsequently, an architecture for these needs has been designed and implemented as an architectural concept. Finally, this architectural concept was evaluated positively by several people active in the media production industry.
    Proceedings of the Sixth International Conference on Creative Content Technologies; 05/2014
  • ABSTRACT: Resources for research are not always easy to explore, and rarely come with strong support for identifying, linking, and selecting those that can be of interest to scholars. In this work we introduce a model that uses state-of-the-art semantic technologies to interlink structured research data with data from Web collaboration tools, social media, and Linked Open Data. We use this model to build a platform that connects scholars, using their profiles as a starting point to explore novel and relevant content for their research. Scholars can easily adapt to evolving trends by synchronizing new social media accounts or collaboration tools and integrating them with new datasets. We evaluate our approach with a scenario of personalized exploration of research repositories, in which we analyze real-world scholar profiles and compare them to a reference profile.
    Proceedings of the companion publication of the 23rd international conference on World wide web companion; 04/2014
  • ABSTRACT: Describing multimedia content in general, and TV programs in particular, is a hard problem. Relying on subtitles to extract named entities that can be used to index fragments of a program is a common method. However, this approach is limited to what is being said in a program and written in a subtitle, and therefore lacks a broader context; furthermore, this type of index is restricted to a flat list of entities. In this paper, we combine the power of non-structured documents with structured data coming from DBpedia to generate much richer, context-aware metadata for a TV program. We demonstrate that we can harvest a rich context by expanding an initial set of named entities detected in a TV fragment. We evaluate our approach on a TV news show.
    Proceedings of the companion publication of the 23rd international conference on World wide web companion; 04/2014
  • ABSTRACT: As the Web evolves into an integrated and interlinked knowledge space thanks to the growing amount of published Linked Open Data, the need emerges for solutions that enable scholars to discover, explore, and analyse the underlying research data. Scholars, typically non-expert technology users, lack an in-depth understanding of the underlying semantic technology, which limits their ability to interpret and query the data. We present a visual workflow to connect scholars and scientific resources on the Web of Data. We allow scholars to move from exploratory analysis in academic social networks to exposing relations between these resources, revealing experts in a particular field and discovering relations in and beyond their research communities. This paper evaluates the potential of such a visual workflow to let non-expert users interact with the semantically enriched data and familiarize themselves with the underlying dataset.
    Proceedings of the companion publication of the 23rd international conference on World wide web companion; 04/2014
  • ABSTRACT: Many organisations publish their data through a Web API. This stimulates use by Web applications, enabling reuse and enrichment. Recently, resource-oriented APIs have been increasing in popularity because of their scalability. However, for organisations subject to data archiving, creating such an API raises certain issues. Often, datasets are stored in different files and different formats, so tracking revisions is a challenging task and the API has to be custom built. Moreover, standard APIs only provide access to the current state of a resource, which creates time-based inconsistencies when resources are combined. In this paper, we introduce an end-to-end solution for publishing a dataset as a time-based, versioned REST API with minimal input from the publisher. Furthermore, it publishes the provenance of each created resource. We propose a technology stack composed of prior work, which versions datasets, generates provenance, creates an API, and adds Memento datetime negotiation.
    Proceedings of the companion publication of the 23rd international conference on World wide web companion; 04/2014
  • ABSTRACT: The missing feedback loop is considered the reason for broken Data Cycles in current Linked Open Data ecosystems. Read/write platforms have been proposed, but they are restricted to capturing modifications after the data is released as Linked Data. Triggering a new iteration then results in losing the data consumers' modifications, as a new version of the source data is mapped, overwriting the currently published one. We propose a solution that interprets the data consumers' feedback to update the mapping rules. This way, data publishers initiate a new iteration of the Data Cycle that takes the data consumers' feedback into account when they map a new version of the published data.
    Proceedings of the companion publication of the 23rd international conference on World wide web companion; 04/2014
  • ABSTRACT: To unlock the full potential of Linked Data sources, we need flexible ways to query them. Public SPARQL endpoints aim to fulfill that need, but their availability is notoriously problematic. We therefore introduce Linked Data Fragments, a publishing method that allows efficient offloading of query execution from servers to clients through a lightweight partitioning strategy. It enables servers to maintain availability rates as high as any regular HTTP server, allowing querying to scale reliably to much larger numbers of clients. This paper explains the core concepts behind Linked Data Fragments and experimentally verifies their Web-level scalability, at the cost of increased query times. We show how trading server-side query execution for inexpensive data resources with relevant affordances enables a new generation of intelligent clients.
    Proceedings of the 7th Workshop on Linked Data on the Web; 04/2014
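The offloading idea above can be sketched in a few lines: the server answers only single triple patterns, and the client performs the join itself. The in-memory "server" below stands in for the HTTP fragment interface; all data and function names are illustrative.

```python
# Minimal sketch of client-side querying over Linked Data Fragments:
# the server resolves one triple pattern at a time, the client joins.

DATA = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("bob", "livesIn", "Ghent"),
]

def fragment(s=None, p=None, o=None):
    """Server side: all triples matching a single pattern (None = wildcard)."""
    return [t for t in DATA
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

def friends_living_in(person, city):
    """Client side: a two-pattern join built from fragment requests only."""
    results = []
    for _, _, friend in fragment(s=person, p="knows"):
        if fragment(s=friend, p="livesIn", o=city):
            results.append(friend)
    return results

print(friends_living_in("alice", "Ghent"))  # ['bob']
```

Because each request is a cheap, cacheable pattern lookup, the server's load stays close to that of static file hosting, which is the availability argument the abstract makes.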

Publication Stats

2k Citations
194.75 Total Impact Points

Institutions

  • 1996–2014
    • Ghent University
      • Department of Electronics and Information Systems
      • Department of Electrical Energy, Systems and Automation
      • Multimedia Lab (MMLab)
      Ghent, Flanders, Belgium
  • 2013
    • University of Vigo
      Vigo, Galicia, Spain
  • 2010–2013
    • Universitair Ziekenhuis Ghent
      Ghent, Flanders, Belgium
    • Hogeschool West-Vlaanderen
      Brussels, Brussels Capital Region, Belgium
    • MIT Portugal
      Porto Salvo, Lisbon, Portugal
    • Free University of Brussels
      • Electronics and Informatics (ETRO)
      Brussels, BRU, Belgium
    • University of the West of Scotland
      • School of Computing
      Paisley, Scotland, United Kingdom
    • Brunel University London
      Uxbridge, England, United Kingdom
  • 2012
    • University of California, Santa Barbara
      Santa Barbara, California, United States
    • Boston College
      Boston, Massachusetts, United States
  • 2011
    • The Police Academy of the Czech Republic in Prague
      Prague, Czech Republic
    • Loughborough University
      • Department of Computer Science
      Loughborough, ENG, United Kingdom
  • 2010–2011
    • University of Castilla-La Mancha
      • Instituto de Investigación en Informática de Albacete
      Ciudad Real, Castille-La Mancha, Spain
  • 2009
    • Konstantopoulio General Hospital of Nea Ionia (Agia Olga)
      Athens, Attica, Greece
  • 2008
    • Information and Communications University
      South Korea
    • Universiteit Hasselt
      • Expertise Centre for Digital Media (EDM)
      Diepenbeek, VLG, Belgium
  • 2006–2007
    • University of Wollongong
      City of Greater Wollongong, New South Wales, Australia
  • 2004–2006
    • imec Belgium
      Leuven, Flanders, Belgium
    • École Polytechnique Fédérale de Lausanne
      Lausanne, Vaud, Switzerland
  • 2005
    • National Technical University of Athens
      • School of Electrical and Computer Engineering
      Athens, Attiki, Greece