Multimed Tools Appl (2017) 76:13425–13454
DOI 10.1007/s11042-016-3743-1
X3DOM volume rendering component for web content developers
Ander Arbelaiz1 · Aitor Moreno1 · Luis Kabongo1,2 · Alejandro García-Alonso3
Received: 2 December 2014 / Revised: 4 April 2016 / Accepted: 30 June 2016 /
Published online: 15 July 2016
© Springer Science+Business Media New York 2016
Abstract We present a real-time volume rendering component for the Web, which provides a set of illustrative and non-photorealistic styles. Volume data is used in many scientific disciplines, requiring the visualization of the inner data, features for enhancing extracted characteristics, or even coloring the volume. The Medical Working Group of X3D published a volume rendering specification. The next step is to build a component that realizes the functionalities defined by the specification. We have designed and built a volume rendering component integrated in the X3DOM framework. This component allows content developers to use the X3D specification. It combines and applies multiple rendering styles to several volume data types, offering a suitable tool for declarative volume rendering on the Web. As we show in the results section, the proposed component can be used in many fields that require the visualization of multi-dimensional data, such as medicine and science. Our approach is based on WebGL and X3DOM, providing content developers with an easy and flexible declarative way of sharing and visualizing volumetric content over the Web.
Keywords Volume rendering · WebGL · Declarative 3D · X3DOM · X3D
Ander Arbelaiz
aarbelaiz@vicomtech.org
Aitor Moreno
amoreno@vicomtech.org
Luis Kabongo
lkabongo@vicomtech.org
Alejandro García-Alonso
alex.galonso@ehu.es
1 Vicomtech-IK4, 20009 Donostia / San Sebastián, Spain
2 Biodonostia Health Research Institute, Donostia / San Sebastián, Spain
3 University of the Basque Country, Donostia / San Sebastián, Spain
... X3DOM is a DOM-based implementation of X3D (Fraunhofer IGD 2014) that enables declarative X3D in the Web. Arbelaiz et al. (2016b) presented a volume rendering component for X3DOM based on the approach of Congote et al. (2012). This component implementation offers X3D's volume visualization reproducible and declarative features, and it has been the reference to obtain feedback from the community (X3DOM Community 2015a, b, 2016a, b, 2017). ...
... As stated before, WebGL 1.0 does not support this texture type, so we have to make use of the ImageTextureAtlas again. Arbelaiz et al. (2016b) have extended the ImageTextureAtlas approach to adapt its use to the other nodes described in the current X3D v3.3 ISO specification (Web3D Consortium 2017). In this manner, we can proceed to apply illustrative and non-photorealistic styles to the volume visualization. ...
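The ImageTextureAtlas workaround mentioned above can be sketched briefly: since WebGL 1.0 offers no 3D textures, the volume's slices are packed into a 2D mosaic, and every 3D texture coordinate must be remapped to a position inside that mosaic. The function below is an illustrative JavaScript version of that remapping (in X3DOM the equivalent computation runs per fragment in GLSL); the name `atlasLookup` and the row-major layout are assumptions, not the component's actual API.

```javascript
// Map a normalized 3D texture coordinate (u, v, w in [0, 1)) to the
// 2D coordinate of the corresponding texel inside a slice atlas.
// The atlas packs `slicesX * slicesY` axial slices in row-major order.
function atlasLookup(u, v, w, slicesX, slicesY) {
  const numSlices = slicesX * slicesY;
  // Select the slice that contains depth w (clamped to the last slice).
  const slice = Math.min(Math.floor(w * numSlices), numSlices - 1);
  const col = slice % slicesX;             // column of the slice in the grid
  const row = Math.floor(slice / slicesX); // row of the slice in the grid
  // Offset the in-slice coordinate by the slice's cell origin,
  // then rescale to the atlas' normalized [0, 1) range.
  return {
    s: (col + u) / slicesX,
    t: (row + v) / slicesY,
  };
}
```

A production shader typically also samples the two adjacent slices and interpolates between them to avoid banding along the depth axis.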
Conference Paper
Recent developments in Web-based volume rendering have gained recognition from Web users and professionals in several fields. The ISO/IEC standard Extensible 3D (X3D) version 3.3 specifies the integration and visual styling of volumetric data for real-time interaction. The specification is an important milestone describing a framework for expressive presentation. However, it was written before the emergence of WebGL and the HTML5 platform. This paper describes our work to adapt the X3D volume rendering nodes to the Web platform and to enhance their functionality based on feedback provided by the X3D and X3DOM open source communities. These enhancements include a description of a new volume data node and an application of that node to create real-time 4D volume rendering visualizations. We present functionalities that are currently not part of the standard: the editing of transfer functions, Multi-Planar Reconstruction (MPR), intersection of the volume with 3D objects, clipping planes with volume data, and control of the quality of the generated volume visualization. These additions should be considered for inclusion in future revisions of the X3D ISO volume rendering component.
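One of the extensions listed above, transfer function editing, reduces at its core to a lookup from scalar value to color and opacity that the user can modify at run time; the renderer then re-uploads the table as a small texture. The sketch below is a minimal, hypothetical illustration of that lookup; the actual X3DOM node interface is not reproduced here.

```javascript
// Map a normalized scalar value through an editable transfer function,
// represented as a lookup table (LUT) of { r, g, b, a } entries.
// When the user edits the LUT, only this small table changes; the
// volume data itself stays untouched.
function applyTransferFunction(scalar, lut) {
  // scalar is expected in [0, 1]; clamp the bin index to the table.
  const i = Math.min(Math.floor(scalar * lut.length), lut.length - 1);
  return lut[i];
}
```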
... Additionally, the online version features an integrated virtual reality viewer for Google Cardboard. We used X3DOM libraries to achieve rapid volume rendering and to enable users to select the data resolution that best matches their connection speed, so that the browser-based implementation remains highly responsive (Arbelaiz et al., 2017a, 2017b). Both local and web-based versions include hyper-links to ...
Article
Decoding the functional connectivity of the nervous system is facilitated by transgenic methods that express a genetically encoded reporter or effector in specific neurons; however, most transgenic lines show broad spatiotemporal and cell-type expression. Increased specificity can be achieved using intersectional genetic methods which restrict reporter expression to cells that co-express multiple drivers, such as Gal4 and Cre. To facilitate intersectional targeting in zebrafish, we have generated more than 50 new Cre lines, and co-registered brain expression images with the Zebrafish Brain Browser, a cellular resolution atlas of 264 transgenic lines. Lines labeling neurons of interest can be identified using a web-browser to perform a 3D spatial search (zbbrowser.com). This resource facilitates the design of intersectional genetic experiments and will advance a wide range of precision circuit-mapping studies.
... Thus, for example, in the works of D. Lv [5] and M. Pignatelli [6], the problem of scientific visualization based on SVG technology is described. A. Arbelaiz [7] demonstrated the possibilities of modern web technologies, using the WebGL library to construct three-dimensional scenes. The technology of using jqPlot tools for scientific visualization is widely represented in the work of K. Wang [8]. ...
Article
The selection of optimal tools for the design of a web-based visual mining client for real-time fraud detection systems was discussed. The features of modern real-time fraud detection software were analyzed. The necessity of transitioning to web-based technologies for client software design was demonstrated. The market of web frameworks and browser-to-web-server data exchange technologies was investigated. Based on experimental research, the most efficient toolset for the design of web-client software for real-time fraud detection systems was proposed. Keywords: fraud detection, Visual Mining, real time data exchange, web-visualization, webSockets, MessageBus.
... via https://doi.org/10.1007/978-3-319-59397-5_25 [25], [26] using the X3DOM library, which offers implementations for the most common representation use cases of volumetric medical imaging data [27]. Table 2 depicts mean preparation times for datasets with different resolutions before interactive rendering. ...
Conference Paper
Traditional Picture Archiving and Communication Systems (PACS) were designed for vendor-specific environments, dedicated radiology workstations and scanner consoles. These kinds of systems are becoming obsolete for two main reasons. Firstly, they don't satisfy the long-standing need in healthcare to put all the resources related to the patient into a single solution rather than a multitude of partial solutions. Secondly, communication, storage and security technologies have demonstrated that they are mature enough to support this demand in other fields. “Vendor Neutral Archives” are becoming the new trend in medical imaging storage, and “deconstructed PACS” goes one step further by proposing a totally decoupled implementation. Our work combines this implementation with the scalability and ubiquitous availability of cloud solutions and internet technologies to provide the architecture of a PACS-as-a-service system that handles a simple enterprise workflow orchestration of tele-radiology.
Article
The development of medical device technology has led to the rapid growth of medical imaging data. The reconstruction from two-dimensional images to three-dimensional volume visualization not only shows the location and shape of lesions from multiple views but also provides intuitive simulation for surgical treatment. However, the three-dimensional reconstruction process requires high-performance execution of image data acquisition and reconstruction algorithms, which limits its application on equipment with limited resources. It is therefore difficult to apply in many online scenarios, and mobile devices cannot meet the high-performance hardware and software requirements. This paper proposes an online medical image rendering and real-time three-dimensional (3D) visualization method based on the Web Graphics Library (WebGL). The method is based on a four-tier client-server architecture and uses medical image data synchronization to reconstruct on both the client and the server side. The reconstruction method is designed to achieve the dual requirements of reconstruction speed and quality. The real-time 3D reconstruction visualization of large-scale medical images is tested in real environments. While interacting with the reconstruction model, users can obtain the reconstructed results in real time and observe and analyze them from all angles. The proposed four-tier client-server architecture will provide instant visual feedback and interactive information for many medical practitioners in collaborative therapy and tele-medicine applications. The experiments also show that the online 3D image reconstruction method can be applied in clinical practice on large-scale image data while maintaining high reconstruction speed and quality.
Article
Web3D sites which use the standard format have begun to present difficulties for some users, compounded by browser compatibility that is increasingly narrowing. This has also happened on sites which represent an institution in the form of a virtual college campus. For the campus Web3D site to keep providing its services and to facilitate access for more prospective users in the future, an update requiring major changes is needed. This can be done by using a different technology approach with wider features and possibilities of use. In this research, an experiment was first carried out to assess the prospect of its use on the campus Web3D site. Furthermore, a prototype world using the newer format was developed and then tested in a variety of browsers that are currently widely used. The resulting world could be displayed on more browsers and platforms, and it kept the same complexity, leading to an appearance and functionality similar to the previous one. A performance decrease did occur, so further optimization was needed.
Article
The rapid development of the Internet and various mobile communication media has initiated demands for access to medical image visualization systems. Medical image reading and interpretation at any time, any place and on any device has become an urgent need for radiologists. Current medical image online visualization methods have disadvantages in environments where computing and storage resources are restrained. This study presents a novel framework for medical image online visualization based on a shadow proxy, which gives applications cross-platform ability and universal environmental adaptability, especially for devices with restricted running resources. The framework can be adapted to multiple client architectures, including pure web applications, mobile applications and regular desktop applications. It is easy to integrate into third-party software, and there are no restrictions on the communication protocols between the client and server side, thanks to two innovations of the framework: the shadow proxy mechanism and shadow data. The shadow proxy performs only lightweight tasks on shadow data, while the ultimate processing of computing tasks is moved to the server side. The size of the shadow data is small enough that the shadow proxy speeds up local display and processing tasks. Finally, the framework takes advantage of the high performance of the server side to render high-quality image results. The performance of the proposed work is evaluated in a web-based medical image visualization system, and the results show that the framework allows the system to deliver smooth, quasi-real-time interaction. This study therefore keeps local client operations fluent and fast while preserving the quality of the visualization, giving the best user experience.
Conference Paper
In this paper, we describe a client-server-oriented, open, layered reference model for three-dimensional reconstruction of medical images (named R4). The model is divided into four layers: a remote transmission protocol layer, a reshaping volume data layer, a reconstruction scene algorithm layer and a rendering visual model layer. The innovations of this model consist of a unified layered reference architecture and an adaptive reconstruction mode switching method based on a decision tree. We review the models published over nearly two decades, and each existing model can be seen as a specific instance of the R4 model proposed in this paper. Another contribution of this paper is an analysis of the different phases of medical image three-dimensional reconstruction solutions, together with their main technologies over the years, according to the proposed R4 model.
Conference Paper
Due to a lack of ubiquitous tools for volume data visualization, 3D rendering of volumetric content is shared and distributed as 2D media (video and static images). This work shows how using open web technologies (HTML5, JavaScript, WebGL and SVG), high quality volume rendering is achievable in an interactive manner with any WebGL-enabled device. In the web platform, real-time volume rendering algorithms are constrained to small datasets. This work presents a WebGL progressive ray-casting volume rendering approach that allows the interactive visualization of larger datasets with a higher rendering quality. This approach is better suited for devices with low compute capacity such as tablets and mobile devices. As a validation case, the presented method is used in an industrial quality inspection use case to visually assess the air void distribution of a plastic injection mould component in the web browser.
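The progressive scheme described above can be pictured as an ordinary front-to-back compositing loop run at varying quality: cast rays with few samples while the user interacts, then recast with more samples once the camera is idle and replace the preview. The sketch below is an illustrative JavaScript version of one such ray, not the paper's implementation; `sampleVolume` and `transferFn` are hypothetical stand-ins for the dataset and the transfer function.

```javascript
// Front-to-back compositing of one ray through a scalar volume,
// sampled `steps` times. A progressive renderer calls this with a small
// `steps` value during interaction and a larger one for refinement.
function castRay(sampleVolume, transferFn, steps) {
  let color = 0, alpha = 0;
  for (let i = 0; i < steps; i++) {
    const t = (i + 0.5) / steps;          // position along the ray in [0, 1]
    const { c, a } = transferFn(sampleVolume(t));
    color += (1 - alpha) * a * c;         // front-to-back accumulation
    alpha += (1 - alpha) * a;
    if (alpha > 0.99) break;              // early ray termination
  }
  return { color, alpha };
}
```

A real implementation would also correct opacity for the step size so that a ray composited with 64 samples converges to the same result as one with 512.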
Conference Paper
Rough set theory is an approach to handle vagueness or uncertainty. We propose methods that apply rough set theory in the context of segmentation (or partitioning) of multichannel medical imaging data. We put this approach into a semi-automatic framework, where the user specifies the classes in the data by selecting respective regions in 2D slices. Rough set theory provides means to compute lower and upper approximations of the classes. The boundary region between the lower and the upper approximations represents the uncertainty of the classification. We present an approach to automatically compute segmentation rules from the rough set classification using a k-means approach. The rule generation removes redundancies, which allows us to enhance the original feature space attributes with a number of further feature and object space attributes. The rules can be transferred from one 2D slice to the entire 3D data set to produce a 3D segmentation result. The result can be refined by the user by interactively adding more samples (from the same or other 2D slices) to the respective classes. Our system allows for a visualization of both the segmentation result and the uncertainty of the individual class representations. The methods can be applied to single- as well as multichannel (or multimodal) imaging data. As a proof of concept, we applied it to medical imaging data with RGB color channels.
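The lower and upper approximations used above are straightforward to state in code: pixels with identical discretized attribute tuples form an equivalence class, a user-labelled class is approximated from below by the equivalence classes that lie entirely inside it and from above by those that merely touch it, and the difference between the two is the uncertain boundary region. The data layout below is an illustrative assumption, not the paper's representation.

```javascript
// Rough-set lower and upper approximations of a user-labelled class.
// `samples` is a list of { key, labelled } records, where `key` is the
// discretized attribute tuple of a pixel and `labelled` says whether the
// user marked that pixel as belonging to the class.
function roughApproximations(samples) {
  const groups = new Map(); // equivalence classes keyed by attribute tuple
  for (const s of samples) {
    if (!groups.has(s.key)) groups.set(s.key, []);
    groups.get(s.key).push(s.labelled);
  }
  const lower = [], upper = [];
  for (const [key, labels] of groups) {
    if (labels.some(Boolean)) upper.push(key);  // class touches this group
    if (labels.every(Boolean)) lower.push(key); // group certainly in class
  }
  return { lower, upper }; // boundary region = upper minus lower
}
```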
Conference Paper
This paper proposes and compares several methods for interactive volume rendering on mobile devices. This kind of device has several restrictions and limitations both in performance and in storage capacity. The paper reviews the suitability of some existing direct volume rendering methods, and proposes a novel approach that takes advantage of the graphics capabilities of modern OpenGL ES 2.0 enabled devices. Several experiments have been carried out to test the behaviour of the described method.
Conference Paper
Platforms based on OpenGL ES 2.0, such as mobile devices and WebGL, have recently been used to render 3D volumetric models. However, the texture storage limitations of these platforms mean that only low-resolution models can be visualized. This paper describes a novel technique that overcomes these limitations and allows us to render detailed high-resolution volumes on these platforms. Additionally, we propose a software architecture that permits existing volume rendering techniques to be adapted to mobile devices and WebGL. A set of experiments has been carried out to assess the performance of the proposed architecture on these platforms with volumes of increasing resolution. Results prove that our proposal is feasible, robust and achieves visualization of very large volumes on constrained platforms.
Conference Paper
This paper presents new algorithms to trace objects represented by densities within a volume grid, e.g. clouds, fog, flames, dust, particle systems. We develop the light scattering equations, discuss previous methods of solution, and present a new approximate solution to the full three-dimensional radiative scattering problem suitable for use in computer graphics. Additionally we review dynamical models for clouds used to make an animated movie.
Article
A major difficulty in volume rendering has been the recognition of different semantic regions, which is crucial for the appropriate assignment of optical properties. This difficulty arises from the fact that different semantic regions may share the same input value ranges. In this paper, we introduce the concept of ray-feature analysis and propose an on-the-fly state transition framework for the recognition of different semantic regions during volume rendering, without the need for explicit segmentation information. In this framework, we consider the values along the path of a ray as a 1D signal, and by analyzing the features of these 1D signals, semantic information about the current ray sample is extracted. To define the condition of a state transition, we propose a method called “threshold-based state transition”. Since the parameters of the threshold-based state transition method are not intuitive, an automatic learning method which enables an interactive user labeling routine is proposed. Experimental results show that our proposed framework is cost-effective for on-the-fly semantic region recognition, and is especially suitable for closed, mostly convex, multi-layered objects.
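The threshold-based state transition idea can be sketched as a walk over the ray's 1D signal in which a layer state advances whenever the sample value crosses the next threshold, so samples in different semantic layers can receive different optical properties without an explicit segmentation. This simplified, one-directional version is illustrative only; the paper's transition conditions and learned parameters are richer.

```javascript
// Assign a semantic layer index to each sample along a ray by advancing
// a state whenever the value crosses the next threshold. `values` is the
// 1D signal of ray samples; `thresholds` is sorted ascending.
function classifyRaySamples(values, thresholds) {
  let state = 0;
  return values.map(v => {
    // Advance to a deeper layer while the value exceeds the next threshold.
    while (state < thresholds.length && v >= thresholds[state]) state++;
    return state; // semantic layer assigned to this sample
  });
}
```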
Article
Direct volume rendering is one of the most effective ways to visualize volume data sets, employing intuitive 2D images to display internal structures in 3D space. However, opaque features always occlude other parts of the volume and make some features of interest invisible in the final rendered images. Although a class of highly transparent transfer functions is capable of revealing all features at once, it is still laborious and time-consuming to specify appropriate transfer functions that reduce occlusions of less important structures and highlight features of interest, even for experts. Thus, research on simpler volume visualization techniques which do not rely on complex transfer functions has been a hotspot in many practical applications. In this paper, an occlusion-free feature exploration approach is proposed that consists of modifying the traditional volume rendering integral, and which can achieve better visibility of all internal features of interest with simple linear transfer functions. During ray casting, a modulation parameter is derived to reduce the contributions of previous samples along the viewing ray whenever the accumulated opacity value is close to overflow. In addition, several relevant functions are introduced to refine the modulation parameter and highlight the features of interest identified according to attributes such as scalar value, gradient magnitude, occurrence and depth. Thereby, the proposed approach is capable of generating informative rendered images and enhancing the visual perception of features of interest without resorting to complex transfer functions.
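The modified rendering integral can be approximated as an ordinary front-to-back loop in which, whenever the accumulated opacity nears saturation, the accumulated color and opacity are scaled down so that deeper samples still contribute to the pixel. The `limit` and `modulation` constants below are illustrative assumptions, not the paper's derived modulation parameter.

```javascript
// Occlusion-reducing compositing: when the accumulated opacity nears
// saturation, previously accumulated color and opacity are damped by a
// modulation factor, reopening the ray for deeper features of interest.
function composite(samples, { limit = 0.95, modulation = 0.5 } = {}) {
  let color = 0, alpha = 0;
  for (const { c, a } of samples) {
    if (alpha > limit) {
      color *= modulation; // damp earlier contributions
      alpha *= modulation; // let deeper samples show through
    }
    color += (1 - alpha) * a * c; // standard front-to-back accumulation
    alpha += (1 - alpha) * a;
  }
  return { color, alpha };
}
```

With `modulation = 1` the loop degenerates to classic front-to-back compositing, which makes the effect of the damping easy to compare.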
Article
We introduce a novel remote volume rendering pipeline for medical visualization targeted for mHealth (mobile health) applications. The necessity of such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners with respect to the complexity of the volumetric rendering algorithms. For example, the resolution of typical CT Angiography (CTA) data easily reaches 512^3 voxels and can exceed 6 gigabytes in size by spanning over the time domain while capturing a beating heart. This explosion in data size makes data transfers to mobile devices challenging, and even when the transfer problem is resolved the rendering performance of the device still remains a bottleneck. To deal with this issue, we propose a thin-client architecture, where the entirety of the data resides on a remote server where the image is rendered and then streamed to the client mobile device. We utilize the display and interaction capabilities of the mobile device, while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware accelerated, resulting in faster compression and lower power requirements. The choice of low-latency CPU- and GPU-based encoders is particularly important in enabling the interactive nature of our system. We demonstrate a prototype of our framework using various medical datasets on commodity tablet devices.
Article
This article describes how to use level sets to represent and compute deformable surfaces. A deformable surface is a sequence of surface models obtained by taking an initial model and incrementally modifying its shape. Typically, we can parameterize ...