Article

The visualization toolkit: an object oriented approach to computer graphics (2nd edition)

... For this reason, we use separate visualization clusters to render the data. Output volumes are sent from the simulation machine to the remote visualization machine, so that the simulation can proceed independently of the visualization; these are then rendered using the open source VTK (Schroeder et al., 2003) visualization library into bitmap images, which can in turn be multicast over the AccessGrid, so that the state of the simulation can be viewed by scientists around the globe. In particular, this was demonstrated by performing and interacting with a simulation in front of a live worldwide audience, as part of the SCGlobal track of the SuperComputing 2004 conference. ...
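The server-side rendering step described in this excerpt (VTK turning simulation output into bitmap images on a visualization cluster) can be sketched with VTK's off-screen rendering path. The snippet below is only an illustration under assumed file names and iso-values, not the RealityGrid code itself:

```python
# Minimal sketch (assumed inputs) of off-screen VTK rendering to a bitmap image.
import vtk

reader = vtk.vtkStructuredPointsReader()      # assumed legacy .vtk output volume
reader.SetFileName("lb3d_output.vtk")         # placeholder file name

contour = vtk.vtkContourFilter()              # extract an iso-surface of the field
contour.SetInputConnection(reader.GetOutputPort())
contour.SetValue(0, 0.5)                      # assumed iso-value

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(contour.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.SetOffScreenRendering(1)               # no display needed on the render node
window.AddRenderer(renderer)
window.Render()

to_image = vtk.vtkWindowToImageFilter()       # grab the frame buffer
to_image.SetInput(window)
writer = vtk.vtkPNGWriter()                   # write the bitmap that can be multicast
writer.SetFileName("frame.png")
writer.SetInputConnection(to_image.GetOutputPort())
writer.Write()
```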
... This mesophase was observed to form from a homogeneous mixture, without any external constraints imposed to bring about the gyroid geometry, which is an emergent effect of the mesoscopic fluid parameters. It is important to note that this method allows examination of the dynamics of mesophase formation, since most treatments to date have focussed on properties or mathematical description (Seddon and Templer, 1993; Schwarz and Gompper, 1999; Gandy and Klinowski, 2000; Große-Brauckmann, 1997) of the static equilibrium state. In addition to its biological importance, there have been recent attempts (Chan et al., 1999) to use self-assembling gyroids to construct nanoporous materials. ...
Preprint
During the last two years the RealityGrid project has allowed us to be one of the few scientific groups involved in the development of computational grids. Since smoothly working production grids are not yet available, we have been able to substantially influence the direction of software development and grid deployment within the project. In this paper we review our results from large scale three-dimensional lattice Boltzmann simulations performed over the last two years. We describe how the proactive use of computational steering and advanced job migration and visualization techniques enabled us to do our scientific work more efficiently. The projects reported on in this paper are studies of complex fluid flows under shear or in porous media, as well as large-scale parameter searches, and studies of the self-organisation of liquid cubic mesophases. Movies are available at http://www.ica1.uni-stuttgart.de/~jens/pub/05/05-PhilTransReview.html
... (4.32) and (4.33). These 3D objects are generated using superquadric primitives from the VTK library (Will Schroeder et al. 2006). The convergence study has been conducted by gradually reducing the size of the triangles composing the discretized surface of these objects and calculating the cumulative relative area-weighted $\ell_1$-error over all the points (Eq. 4.34). ...
... The details are thoroughly explained in chapter 4 in terms of formal algorithms and related results. The library offers higher-level abstractions on top of the VTK graphics toolkit (Will Schroeder et al. 2006), therefore the underlying data structures are for the most part borrowed from VTK. ...
Thesis
Full-text available
In current times we are witnessing a “second space race”: private companies like SpaceX are paving the way to a new generation of space launcher systems optimized for cost effectiveness and extreme performance that will bring humankind to Mars for the first time in its existence. A key aspect of those systems is to provide a high level of reusability, leading to a drastic drop in launch costs. This translates into propulsion systems that need to operate on wider flight envelopes, with more advantageous propellant pairs like cryogenic methane and liquid oxygen, therefore requiring tighter designs for the injection systems. The injectors are responsible for the correct nebulization of fuel and oxidizer and they have a direct impact on the performance of the engines. These kinds of problems are shared across different applications and are, to a large extent, generic. Current state-of-the-art modeling strategies fail at predicting the correct distributions of droplets in the combustion chamber. Therefore, the target of this thesis is to contribute to the design of a unified modeling framework addressing the derivation of systems of equations governing two-phase flow systems characterized by a sound mathematical structure via a variational approach named the Stationary Action Principle (SAP) coupled to the second principle of thermodynamics. This effort is backed by a tailored computational toolset that allows the rational choice of modeling assumptions and the effective simulation of the developed models, possibly on modern computing architectures. This work identifies three main points of improvement: the development of reduced-order models via a variational procedure named the Stationary Action Principle (SAP) featuring a set of equations that include geometrical properties such as the interfacial surface density and the mean and Gauss curvatures; the implementation of a geometric DNS post-processing tool that is used to collect useful insight from high-fidelity simulations in order to craft an accurate reduced-order model; and the development of a Python library that acts as a prototyping playbook aimed at quickly testing ideas in the context of numerical schemes, boundary conditions, and domain configurations, with the potential ability to leverage modern computational architectures such as GPUs.
... Small arteries branching off in an irregular and more or less perpendicular fashion are known as supernumerary arteries but are rather an exception from the general branching pattern. In larger animals including humans, the general branching pattern of airways and arteries is mainly dichotomous; whereas in smaller animals, like rodents or rabbits, a monopodial branching pattern prevails (Ochs and Weibel 2008; Singhal et al. 1973; Townsley 2012; Horsfield 1978, 1984). ...
... Schroeder et al. 2006), SimpleElastix [https://simpleelastix.github.io/] (Marstal et al. 2016), SGEXT [https://github.com/phcerdan/SGEXT] and ParaView [http://www.paraview.org] ...
Article
Full-text available
Various lung diseases, including pulmonary hypertension, chronic obstructive pulmonary disease or bronchopulmonary dysplasia, are associated with structural and architectural alterations of the pulmonary vasculature. The light microscopic (LM) analysis of the blood vessels is limited by the fact that it is impossible to identify which generation of the arterial tree an arterial profile within a LM microscopic section belongs to. Therefore, we established a workflow that allows for the generation-specific quantitative (stereological) analysis of pulmonary blood vessels. A whole left rabbit lung was fixed by vascular perfusion, embedded in glycol methacrylate and imaged by micro-computed tomography (µCT). The lung was then exhaustively sectioned and 20 consecutive sections were collected every 100 µm to obtain a systematic uniform random sample of the whole lung. The digital processing involved segmentation of the arterial tree, generation analysis, registration of LM sections with the µCT data as well as registration of the segmentation and the LM images. The present study demonstrates that it is feasible to identify arterial profiles according to their generation based on a generation-specific color code. Stereological analysis for the first three arterial generations of the monopodial branching of the vasculature included volume fraction, total volume, lumen-to-wall ratio and wall thickness for each arterial generation. In conclusion, the correlative image analysis of µCT and LM-based datasets is an innovative method to assess the pulmonary vasculature quantitatively.
... 2.9.1 Geometry of the volume conductor. A high resolution T1-MRI was taken from a patient and used to construct a 3D volume through a custom rendering program developed with VTK [Sch18]. Then, a virtual electrode was placed in the 3D model at the left frontal lobe within the grey-white matter interface (see Fig. 2.12). ...
... Depth lead placement was performed in regions adjacent to the epileptic circuit identified with SISCOM, EEG, MEG, pspiDTI, and observed semiology. To aid with this process, a custom Java program was created to allow 3D rendering of the T1, T2, and functional imaging volumes using methods provided by the Visualization Toolkit (VTK) [Sch18]. ...
Thesis
Full-text available
A critical step towards applying direct brain stimulation therapy in focal onset epilepsy is to effectively interface with epileptogenic neural circuits using a limited set of active contacts. This takes special relevance when interacting with networks that exhibit two or more foci. A strategy to influence the maximum extent of the epileptogenic circuit is to stimulate white matter pathways to enhance propagation to distant epileptic tissue. A significant number of elements must be considered in the clinical response to stimulation delivered directly to neuronal populations. These variables include: stimulation parameter settings, number and interdependence of anatomical targets, electrode number, electrode location and orientation, geometry or shape of the electrode contacts, contact polarity, biophysical properties of stimulated medium, and trajectory of axonal bundles adjacent to the stimulation site. This document addresses the development of a computational model which takes into consideration all the mentioned variables to predict activation of distant sites via white matter pathways. A method to calculate the extracellular potential field, induced by the application of time-dependent stimulation waveforms, is discussed. Such a method considers both the anisotropic conductivity nature of neural tissue and the electrochemical phenomena of the electrode-tissue interface. The response of white matter fibers is then evaluated by solving a compartmental cable model based in the Hodgkin and Huxley membrane description. The model was integrated into a pre-surgical workflow and was used prospectively to guide stereotactic implantation of depth leads to apply direct neurostimulation therapy in four patients with refractory focal onset epilepsy.
... The interaction between these components is realised via remote procedure calls (RPC) and TCP sockets. As our task is to bring together interactive simulations and visualisations with HPC applications, i.e., large systems of equations to be solved and large data sets to be visualised, the visualisation and simulation are parallel processes themselves, as displayed in Fig. 2. The visualisation is based on the Visualization Toolkit (VTK, [10,7]). For scalar data sets, it provides a colour mapping as well as iso-lines or iso-surfaces enhanced by cutting planes that can be displaced and rotated interactively. ...
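The iso-surface and cutting-plane features mentioned in this excerpt map onto standard VTK filters. A minimal sketch, assuming a .vti scalar data set and an arbitrary iso-value range (neither taken from the cited framework), is:

```python
# Sketch of VTK iso-surfaces plus a movable cutting plane for a scalar data set.
import vtk

reader = vtk.vtkXMLImageDataReader()          # assumed .vti scalar field
reader.SetFileName("pressure.vti")            # placeholder file name

iso = vtk.vtkContourFilter()                  # iso-surfaces of the scalar field
iso.SetInputConnection(reader.GetOutputPort())
iso.GenerateValues(5, 0.0, 1.0)               # five iso-values over an assumed range

plane = vtk.vtkPlane()                        # implicit plane; origin/normal can be
plane.SetOrigin(0.0, 0.0, 0.0)                # updated interactively at run time
plane.SetNormal(0.0, 0.0, 1.0)

cutter = vtk.vtkCutter()                      # cutting plane through the volume
cutter.SetCutFunction(plane)
cutter.SetInputConnection(reader.GetOutputPort())

for alg in (iso, cutter):
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputConnection(alg.GetOutputPort())
    mapper.Update()
```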
Preprint
Computational Steering, the combination of a simulation back-end with a visualisation front-end, offers great possibilities to exploit and optimise scenarios in engineering applications. Due to its interactivity, it requires fast grid generation, simulation, and visualisation and, therefore, mostly has to rely on coarse and inaccurate simulations typically performed on rather small interactive computing facilities and not on much more powerful high-performance computing architectures operated in batch mode. This paper presents a steering environment that intends to bring these two worlds - the interactive and the classical HPC world - together in an integrated way. The environment consists of efficient fluid dynamics simulation codes and a steering and visualisation framework providing a user interface, communication methods for distributed steering, and parallel visualisation tools. The gap between steering and HPC is bridged by a hierarchical approach that performs fast interactive simulations for many scenario variants, increasing the accuracy via hierarchical refinements depending on how long the user is willing to wait. Finally, the user can trigger large simulations for selected setups on an HPC architecture, exploiting the pre-computations already done on the interactive system.
... file using vtk 9.1.0 (Schroeder et al., 1996). The resulting file was used in the subsidence risk assessment Section 2.3 and Section 2.4. ...
Article
Full-text available
A full-scale topology optimisation formulation has been developed to automate the design of cages used in instrumented transforaminal lumbar interbody fusion. The method incorporates the mechanical response of the adjacent bone structures in the optimisation process, yielding patient-specific spinal fusion cages that both anatomically and mechanically conform to the patient, effectively mitigating subsidence risk compared to generic, off-the-shelf cages and patient-specific devices. In this study, in silico medical device testing on a cohort of seven patients was performed to investigate the effectiveness of the anatomically and mechanically conforming devices using titanium and PEEK implant materials. A median reduction in the subsidence risk by 89% for titanium and 94% for PEEK implant materials was demonstrated compared to an off-the-shelf implant. A median reduction of 75% was achieved for a PEEK implant material compared to an anatomically conforming implant. A credibility assessment of the computational model used to predict the subsidence risk was provided according to the ASME V&V40–2018 standard.
... file using vtk 9.1.0 (Schroeder et al., 1996). The 'as-built' design was used for additive manufacturing and the subsidence risk assessment. ...
... the 'as-built' design was achieved by applying the 'Iso volume' filter to the intermediate design and exporting to an .stl file using vtk 9.1.0 [46]. The 'as-built' design was used for additive manufacturing and the subsidence risk assessment. ...
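In plain VTK, a ParaView-style 'Iso volume' step followed by an STL export can be approximated with vtkThreshold, a surface-extraction filter and vtkSTLWriter. The following is a hedged sketch with assumed file names and thresholds, not the authors' pipeline:

```python
# Sketch: keep cells within a value range ('Iso volume'-like) and export an .stl file.
import vtk

reader = vtk.vtkXMLImageDataReader()                  # assumed voxel design field (.vti)
reader.SetFileName("intermediate_design.vti")         # placeholder file name

threshold = vtk.vtkThreshold()                        # analogue of ParaView's Iso Volume
threshold.SetInputConnection(reader.GetOutputPort())
threshold.SetLowerThreshold(0.5)                      # assumed density cut-offs
threshold.SetUpperThreshold(1.0)
threshold.SetThresholdFunction(vtk.vtkThreshold.THRESHOLD_BETWEEN)

surface = vtk.vtkDataSetSurfaceFilter()               # unstructured grid -> outer surface
surface.SetInputConnection(threshold.GetOutputPort())

tri = vtk.vtkTriangleFilter()                         # STL requires triangles
tri.SetInputConnection(surface.GetOutputPort())

writer = vtk.vtkSTLWriter()
writer.SetFileName("as_built.stl")
writer.SetInputConnection(tri.GetOutputPort())
writer.Write()
```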
Preprint
Full-text available
Cage subsidence after instrumented lumbar spinal fusion surgery remains a significant cause of treatment failure, specifically for posterior or transforaminal lumbar interbody fusion. Recent advancements in computational techniques and additive manufacturing have enabled the development of patient-specific implants and implant optimization to specific functional targets. This study aimed to introduce a novel full-scale topology optimization formulation that takes the structural response of the adjacent bone structures into account in the optimization process. The formulation includes maximum and minimum principal strain constraints that lower strain concentrations in the adjacent vertebrae. This optimization approach resulted in anatomically and mechanically conforming spinal fusion cages. Subsidence risk was quantified in a commercial finite element solver for off-the-shelf, anatomically conforming and the optimized cages, in two representative patients. We demonstrated that the anatomically and mechanically conforming cages reduced subsidence risk by 91% compared to an off-the-shelf implant with the same footprint for a patient with normal bone quality and 54% for a patient with osteopenia. Prototypes of the optimized cage were additively manufactured and mechanically tested to evaluate the manufacturability and integrity of the design and to validate the finite element model.
... Post-processing was done with ParaView [12,13]. Based on the vtk library [14,15], ParaView is a software package with a clear graphical interface and a powerful toolkit for building the necessary graphs and animated video files and for presenting the distribution of the solution on the computational domain. ...
Article
The basis of the research hypothesis is the assumption that in wooden structures, deformations and stresses propagate in waves. The numerical experiment demonstrated a correct qualitative visual picture of the wave propagation of deformations, with wave manifestations and characteristic effects on the surface of the sample, at axial and corner points. Visually, the numerical model showed Rayleigh waves on the surface layer of the sample, depending on the ratio of the external geometric dimensions of the sample model, with pronounced wave interference on the outer shell. The visual manifestation of deformation on the outer sides (faces) and the reflection of deformation waves from the outer boundaries of the elastic medium of the sample in the form of Rayleigh waves confirm the correctness of the general hypothesis and the implemented model. Visualization of the process of emergence, propagation and attenuation of deformation waves on the surface of the sample shows that in the quantitative description of the deformation gradient, areas dangerous for the material can be identified.
... Data characterization and processing are essential steps in analyzing and visualizing simulation results in fields such as laser processing. One widely used file format for 3D spatial data with global field variables like temperature and phase parameters is VTK [41]. The Python library Meshio [42] simplifies the process of reading and writing VTK files, making it easier to analyze spatial data. ...
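The meshio workflow referred to here is compact in practice; a small sketch with assumed file names is shown below.

```python
# Sketch of reading/writing VTK files with meshio (file names assumed).
import meshio

mesh = meshio.read("temperature_field.vtk")   # legacy and XML VTK formats are supported
print(mesh.points.shape)                      # (n_points, 3) node coordinates
print(list(mesh.point_data.keys()))           # e.g. temperature, phase-field variables

# convert the same data to the XML .vtu format for further analysis
meshio.write("temperature_field.vtu", mesh)
```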
Conference Paper
Full-text available
Directed Energy Deposition (DED) is crucial in additive manufacturing for various industries like aerospace, automotive, and biomedical. Precise temperature control is essential due to high-power lasers and dynamic environmental changes. Employing Reinforcement Learning (RL) can help with temperature control, but challenges arise from standardization and sample efficiency. In this study, a model-based Reinforcement Learning (MBRL) approach is used to train a DED model, improving control and efficiency. Computational models evaluate melt pool geometry and temporal characteristics during the process. The study employs the Allen-Cahn phase field (AC-PF) model using the Finite Element Method (FEM) with the Multi-physics Object-Oriented Simulation Environment (MOOSE). MBRL, specifically Dyna-Q+, outperforms traditional Q-learning, requiring fewer samples. Insights from this research aid in advancing RL techniques for laser metal additive manufacturing.
... Subsequently, the 'as-built' design is achieved by smoothing the voxel geometry, using vtk 9.1.0 (Schroeder et al., 1996). The 'as-built' design is used for manufacturing and FE-analysis. ...
Article
Full-text available
A promising new treatment for large and complex bone defects is to implant specifically designed and additively manufactured synthetic bone scaffolds. Optimizing the scaffold design can potentially improve bone in-growth and prevent under- and over-loading of the adjacent tissue. This study aims to optimize synthetic bone scaffolds over multiple length scales using the full-scale topology optimization approach, and to assess the effectiveness of this approach as an alternative to the currently used mono- and multi-scale optimization approaches for orthopaedic applications. We present a topology optimization formulation which matches the scaffold's mechanical properties to the surrounding tissue in compression. The scaffold's porous structure is tuneable to achieve the desired morphological properties to enhance bone in-growth. The proposed approach is demonstrated in-silico, using PEEK, cortical bone and titanium material properties in a 2D parameter study and on 3D designs. Full-scale topology optimization indicates a design improvement of 81% compared to the multi-scale approach. Furthermore, 3D designs for PEEK and titanium are additively manufactured to test the applicability of the method. With further development, the full-scale topology optimization approach is anticipated to offer a more effective alternative for optimizing orthopaedic structures compared to the currently used multi-scale methods.
... The visualization frameworks commonly use dataflow programming to construct visualization workflows [SLM04]. These frameworks represent workflows as pipelines or directed graphs, where nodes stand for low-level visualization components and data is processed hierarchically. ...
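For readers unfamiliar with the dataflow model mentioned in this excerpt, the sketch below shows a minimal VTK pipeline in which sources, filters and mappers are nodes of a demand-driven graph (the specific filters are arbitrary examples):

```python
# Minimal illustration of VTK's dataflow pipeline: source -> filter -> mapper.
import vtk

source = vtk.vtkSphereSource()                # node: data source
source.SetThetaResolution(32)

shrink = vtk.vtkShrinkPolyDataFilter()        # node: processing filter
shrink.SetInputConnection(source.GetOutputPort())
shrink.SetShrinkFactor(0.8)

mapper = vtk.vtkPolyDataMapper()              # node: sink mapping data to graphics
mapper.SetInputConnection(shrink.GetOutputPort())
mapper.Update()                               # pulling on the sink executes the graph
```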
Preprint
Full-text available
In situ visualization and steering of computational modeling can be effectively achieved using reactive programming, which leverages temporal abstraction and data caching mechanisms to create dynamic workflows. However, implementing a temporal cache for large-scale simulations can be challenging. Implicit neural networks have proven effective in compressing large volume data. However, their application to distributed data has yet to be fully explored. In this work, we develop an implicit neural representation for distributed volume data and incorporate it into the DIVA reactive programming system. This implementation enables us to build an in situ temporal caching system with a capacity 100 times larger than previously achieved. We integrate our implementation into the Ascent infrastructure and evaluate its performance using real-world simulations.
... Besides various professional and semi-professional tools such as VTK [Schroeder et al. 1998] and its front end ParaView... Note that the list of authors releasing their data, code and toolkits is in constant evolution. ...
... Besides various professional and semi-professional tools such as VTK [Schroeder et al. 1998] and its front end ParaView [Ayachit 2015], Cubit [CUBIT 2022], MeshGems [Distene SAS 2022], Gmsh [Geuzaine and Remacle 2009], CoreForm [CoreForm 2022a], CGAL [Fabri and Pion 2009] and many others, over the years academics have released both data and a variety of open-source tools to aid not only their research, but also the activities of other practitioners in the field. This section summarizes the most prominent available resources for hexahedral and hex-dominant meshing. ...
Article
Full-text available
In this article, we provide a detailed survey of techniques for hexahedral mesh generation. We cover the whole spectrum of alternative approaches to mesh generation, as well as post processing algorithms for connectivity editing and mesh optimization. For each technique, we highlight capabilities and limitations, also pointing out the associated unsolved challenges. Recent relaxed approaches, aiming to generate not pure-hex but hex-dominant meshes, are also discussed. The required background, pertaining to geometrical as well as combinatorial aspects, is introduced along the way.
... Besides various professional and semi-professional tools such as VTK [Schroeder et al. 1998] and its front end ParaView [Ayachit 2015], Cubit [CUBIT 2021], MeshGems [Distene SAS 2020], Gmsh [Geuzaine and Remacle 2009], CoreForm [CoreForm 2021a], CGAL [Fabri and Pion 2009] and many others, over the years academics have released both data and a variety of open-source tools to aid not only their research, but also the activities of other practitioners in the field. This section summarizes the most prominent available resources for hexahedral and hex-dominant meshing. ...
Preprint
Full-text available
In this article, we provide a detailed survey of techniques for hexahedral mesh generation. We cover the whole spectrum of alternative approaches to mesh generation, as well as post processing algorithms for connectivity editing and mesh optimization. For each technique, we highlight capabilities and limitations, also pointing out the associated unsolved challenges. Recent relaxed approaches, aiming to generate not pure-hex but hex-dominant meshes, are also discussed. The required background, pertaining to geometrical as well as combinatorial aspects, is introduced along the way.
... Visualization and post-processing were performed with ParaView (Kitware, Inc., Clifton Park, NY, USA) (Schroeder et al., 2004). Equations for the haemodynamic indices PI (Gosling & King, 1974), RI (Pourcelot, 1974), and TPV (Kitabatake et al., 1983; Kosturakis et al., 1984) and for ReMean (Reynolds, 1883) are as follows. ...
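The excerpt truncates before the equations themselves; for reference, the standard textbook forms of these indices (as defined in the cited references, not copied from the paper) are:

```latex
% Standard definitions of the haemodynamic indices (PI: Gosling & King,
% RI: Pourcelot) and of the Reynolds number.
\[
\mathrm{PI} = \frac{v_{\max} - v_{\min}}{\bar{v}}, \qquad
\mathrm{RI} = \frac{v_{\max} - v_{\min}}{v_{\max}}, \qquad
\mathrm{Re} = \frac{\rho \, \bar{v} \, D}{\mu},
\]
% where v_max and v_min are the peak-systolic and end-diastolic velocities,
% \bar{v} is the time-averaged mean velocity, \rho the blood density,
% D the vessel diameter and \mu the dynamic viscosity.
```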
Article
Full-text available
Haemodynamic correlations among the pulsatility index (PI), resistive index (RI), time to peak velocity (TPV), and mean Reynolds number (ReMean) were numerically investigated during the progression of carotid stenosis (CS), a highly prevalent condition. Fifteen patient-specific CS cases were modeled in the SimVascular package using computed tomography angiography data for the aortic-cerebral vasculature. Computational fluid domains were solved with a stabilized Petrov–Galerkin scheme under Newtonian and incompressible assumptions. A rigid vessel wall was assumed, and the boundary conditions were pulsatile inflow and three-element lumped Windkessel outlets. During the progression, the increase in the TPV resembled that during aortic stenosis, and the parameter was negatively correlated with PI, RI, and ReMean in the ipsilateral cerebral region. The ReMean was inversely related to PI and RI on the contralateral side. In particular, PI and RI in cerebral arteries showed three second-order regression patterns: ‘constant (Group A)’, ‘moderately decreasing (Group B)’, and ‘decreasing (Group C)’. The patterns were defined using a new parameter, mean ratio (lowest mean index/mean index at 0% CS). This parameter could effectively indicate stenosis-driven tendencies in local haemodynamics. Overall, the haemodynamic indices changed drastically during severe unilateral CS, and they reflected both regional and aortic-cerebral flow characteristics.
... The results of the optimization process can be written to a .vtr file and viewed in ParaView (https://www.paraview.org/) (Schroeder et al. 1996). The first ten designs are output by default. ...
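The .vtr files mentioned here are VTK XML rectilinear-grid files; a hedged sketch of writing one from Python (grid sizes and array names are placeholders, not the wrapper's actual output code) is:

```python
# Sketch: write a scalar field on a rectilinear grid to a ParaView-readable .vtr file.
import numpy as np
import vtk
from vtk.util import numpy_support

nx, ny, nz = 64, 32, 32                       # assumed grid dimensions
grid = vtk.vtkRectilinearGrid()
grid.SetDimensions(nx, ny, nz)
grid.SetXCoordinates(numpy_support.numpy_to_vtk(np.linspace(0.0, 1.0, nx), deep=True))
grid.SetYCoordinates(numpy_support.numpy_to_vtk(np.linspace(0.0, 0.5, ny), deep=True))
grid.SetZCoordinates(numpy_support.numpy_to_vtk(np.linspace(0.0, 0.5, nz), deep=True))

density = np.random.rand(nx * ny * nz)        # stand-in for the optimized densities
arr = numpy_support.numpy_to_vtk(density, deep=True)
arr.SetName("xPhys")                          # assumed array name
grid.GetPointData().AddArray(arr)

writer = vtk.vtkXMLRectilinearGridWriter()    # produces the .vtr format
writer.SetFileName("design_0.vtr")
writer.SetInputData(grid)
writer.Write()
```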
Article
Full-text available
This paper presents a Python wrapper and extended functionality of the parallel topology optimization framework introduced by Aage et al. (Topology optimization using PETSc: an easy-to-use, fully parallel, open source topology optimization framework. Struct Multidiscip Optim 51(3):565–572, 2015). The Python interface, which simplifies the problem definition, is intended to expand the potential user base and to ease the use of large-scale topology optimization for educational purposes. The functionality of the topology optimization framework is extended to include passive domains and local volume constraints among others, which contributes to its usability in real-world design applications. The functionality is demonstrated via the cantilever beam, bracket and torsion ball examples. Several tests are provided which can be used to verify the proper installation and for evaluating the performance of the user’s system setup. The open-source code is available at https://github.com/thsmit/, in the repository TopOpt_in_PETSc_wrapped_in_Python.
... The resulting triangular mesh was further processed and analysed using the open-source Visualization Toolkit. 26 First, a smoothing filter (vtkSmoothPolyDataFilter) was applied to the mesh with 3,000 iterations, a convergence threshold of 0.1, and a feature angle of 90°. Volumes and surface areas of each of the resulting 3D reconstructions were computed using the vtkMassProperties filter. ...
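For concreteness, the smoothing and measurement step described above can be written in a few lines of Python VTK using the stated filter settings; the input file name below is a placeholder:

```python
# Sketch: smooth a triangulated surface and compute its volume and surface area.
import vtk

reader = vtk.vtkSTLReader()                   # assumed triangulated reconstruction
reader.SetFileName("reconstruction.stl")      # placeholder file name

smoother = vtk.vtkSmoothPolyDataFilter()      # settings quoted in the excerpt
smoother.SetInputConnection(reader.GetOutputPort())
smoother.SetNumberOfIterations(3000)
smoother.SetConvergence(0.1)
smoother.SetFeatureAngle(90.0)

props = vtk.vtkMassProperties()               # volume and surface area of the mesh
props.SetInputConnection(smoother.GetOutputPort())
props.Update()
print("volume:", props.GetVolume())
print("surface area:", props.GetSurfaceArea())
```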
Article
Full-text available
Aims: Macrophages (MΦ), known for immunological roles such as phagocytosis and antigen presentation, have been found to electrotonically couple to cardiomyocytes (CM) of the atrio-ventricular node via Cx43, affecting cardiac conduction in isolated mouse hearts. Here, we characterise passive and active electrophysiological properties of murine cardiac resident MΦ, and model their potential electrophysiological relevance for CM. Methods and results: We combined classic electrophysiological approaches with 3D fluorescence imaging, RNA-sequencing, pharmacological interventions and computer simulations. We used Cx3cr1eYFP/+ mice wherein cardiac MΦ were fluorescently labelled. FACS-purified fluorescent MΦ from mouse hearts were studied by whole-cell patch-clamp. MΦ electrophysiological properties include: membrane resistance 2.2 ± 0.1 GΩ (all data mean±SEM), capacitance 18.3 ± 0.1 pF, resting membrane potential -39.6 ± 0.3 mV, and several voltage-activated, outward or inwardly-rectifying potassium currents. Using ion channel blockers (barium, TEA, 4-AP, margatoxin, XEN-D0103, DIDS), flow cytometry, immuno-staining and RNA-sequencing, we identified Kv1.3, Kv1.5 and Kir2.1 as channels contributing to observed ion currents. MΦ displayed four patterns for outward and two for inward-rectifier potassium currents. Additionally, MΦ showed surface expression of Cx43, a prerequisite for homo- and/or heterotypic electrotonic coupling. Experimental results fed into the development of an original computational model to describe cardiac MΦ electrophysiology. Computer simulations to quantitatively assess plausible effects of MΦ on electrotonically coupled CM showed that MΦ can depolarise resting CM, shorten early and prolong late action potential duration, with effects depending on coupling strength and individual MΦ electrophysiological properties, in particular resting membrane potential and presence/absence of Kir2.1. Conclusions: Our results provide a first electrophysiological characterisation of cardiac resident MΦ, and a computational model to quantitatively explore their relevance in the heterocellular heart. Future work will be focussed on distinguishing electrophysiological effects of MΦ-CM coupling on both cell types during steady-state and in patho-physiological remodelling, when immune cells change their phenotype, proliferate, and/or invade from external sources. Translational perspective: Cardiac tissue contains resident macrophages (MΦ) which, beyond immunological and housekeeping roles, have been found to electrotonically couple via connexins to cardiomyocytes (CM), stabilising atrio-ventricular conduction at high excitation rates. Here, we characterise structure and electrophysiological function of murine cardiac MΦ and provide a computational model to quantitatively probe the potential relevance of MΦ-CM coupling for cardiac electrophysiology. We find that MΦ are unlikely to have major electrophysiological effects in normal tissue, where they would hasten early and slow late CM-repolarisation. Further work will address potential arrhythmogenicity of MΦ in patho-physiologically remodelled tissue containing elevated MΦ-numbers, incl. non-resident recruited cells.
... It provides much of the same functionality as commercial programs while remaining free and open source. 3D Slicer allows unrestricted use for all users under a BSD-style license [7], even for commercial use. It is designed to allow for custom modules to be easily integrated with full access to the underlying scientific toolkits, which can be distributed and then downloaded from within Slicer itself. ...
Article
Full-text available
As the interest in image-guided medical interventions has increased, so too has the necessity for open-source software tools to provide the required capabilities without exorbitant costs. A common issue encountered in these procedures is the need to compare computed tomography (CT) data with X-ray data, for example, to compare pre-operative CT imaging with intraoperative X-rays. A software approach to solve this dilemma is the production of digitally reconstructed radiographs (DRRs) which computationally simulate an X-ray-type image from CT data. The resultant image can be easily compared to an X-ray image and can provide valuable clinical information, such as small anatomical changes that have occurred between the pre-operative and operative imaging (i.e., vertebral positioning). To provide an easy way for clinicians to make their own DRRs, we propose DRR generator, a customizable extension for the open-source medical imaging application three-dimensional (3D) Slicer. DRR generator provides rapid computation of DRRs through a highly customizable user interface. This extension provides end-users a free, open-source, and reliable way of generating DRRs. This program is integrated within 3D Slicer and thus can utilize its powerful imaging tools to provide a comprehensive segmentation and registration application for clinicians and researchers. DRR generator is available for download through 3D Slicer’s in-app extension manager and requires no additional software.
... The warmer (e.g., red) colors or lighter shades represent higher stresses (compression is positive), and the cooler (e.g., blue) colors or darker shades indicate lower stresses. We implemented the visualization using the tools provided by VTK [29]. One example of the visualization is shown in Fig. 7b. ...
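Scalar-to-colour mapping of this kind is typically done in VTK with a lookup table; the sketch below (an assumed example, not the authors' implementation) maps a stand-in scalar field onto a blue-to-red colour range:

```python
# Sketch: map a scalar field to warm/cool colours with a VTK lookup table.
import vtk

lut = vtk.vtkLookupTable()
lut.SetHueRange(0.667, 0.0)                   # blue (low values) to red (high values)
lut.SetNumberOfTableValues(256)
lut.Build()

sphere = vtk.vtkSphereSource()                # stand-in geometry for a stressed part
elevation = vtk.vtkElevationFilter()          # generates a stand-in scalar array
elevation.SetInputConnection(sphere.GetOutputPort())
elevation.SetLowPoint(0.0, 0.0, -0.5)
elevation.SetHighPoint(0.0, 0.0, 0.5)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(elevation.GetOutputPort())
mapper.SetLookupTable(lut)
mapper.SetScalarRange(0.0, 1.0)               # range of the stand-in scalar
mapper.Update()
```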
Article
Full-text available
Understanding stress distributions over 3D models is a highly desired feature in many scientific and engineering fields. The stress is mathematically a second-order tensor, and it is typically visualized using either color maps, tensor glyphs, or streamlines. However, none of these methods is physically intuitive to the end user, and they become even more awkward when dealing with the volumetric tensor field over a complicated 3D shape. In this paper, we present a virtual perception system, which leverages a multi-finger haptic interface to help users intuitively perceive 3D stress fields. Our system allows the user to navigate the interior of the 3D model freely and maps the stress tensor to the haptic rendering along the direction of the finger’s trajectory. Doing so provides the user with a natural and straightforward understanding of the stress distribution without interacting with the parameters in the mapped visual representations. Experimental results show that our system is preferred in navigating stress fields inside an object and is applicable for different design tasks.
... The measure of complexity we chose for our algorithm is derived from the work of Alyassin et al. (1994), which is specifically designed for medical data, and is implemented in the VTK toolkit (Schroeder et al. 2004) in the filter called vtkMassProperties. This algorithm calculates the normalized shape index (NSI, see Eq. 2 and Fig. 5), which characterizes the deviation of the shape of an object from a sphere. ...
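The normalized shape index mentioned here is exposed directly by vtkMassProperties; a small sketch (using a sphere as a stand-in structure) is:

```python
# Sketch: compute the normalized shape index (NSI) of a closed triangulated surface.
import vtk

sphere = vtk.vtkSphereSource()                # NSI of a sphere is 1 by definition
sphere.SetThetaResolution(64)
sphere.SetPhiResolution(64)

tri = vtk.vtkTriangleFilter()                 # vtkMassProperties expects triangles
tri.SetInputConnection(sphere.GetOutputPort())

props = vtk.vtkMassProperties()
props.SetInputConnection(tri.GetOutputPort())
props.Update()
print("normalized shape index:", props.GetNormalizedShapeIndex())
```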
Article
Full-text available
Voxelizing three-dimensional surfaces into binary image volumes is a frequently performed operation in medical applications. In radiation therapy (RT), dose-volume histograms (DVHs) calculated within such surfaces are used to assess the quality of an RT treatment plan in both clinical and research settings. To calculate a DVH, the 3D surfaces need to be voxelized into binary volumes. The voxelization parameters may considerably influence the output DVH. An effective way to improve the quality of the voxelized volume (i.e., increasing similarity between that and the original structure) is to apply oversampling to increase the resolution of the output binary volume. However, increasing the oversampling factor raises computational and storage demand. This paper introduces a fuzzy inference system that determines an optimal oversampling factor based on relative structure size and complexity, finding the balance between voxelization accuracy and computation time. The proposed algorithm was used to automatically calculate the oversampling factor in four RT studies: two phantoms and two real patients. The results show that the method is able to find the optimal oversampling factor in most cases, and the calculated DVHs show a good match to those calculated using a manual overall oversampling factor of two. The algorithm can potentially be adopted by RT treatment planning systems based on the open-source implementation to maintain high DVH quality, enabling the planning system to find the optimal treatment plan faster and more reliably.
... In a way, the CindyJS project [vGKRS16] followed the first path and now provides a set of mathematical illustrations via its gallery [Pro]. Regarding the second aspect of more complex web applications, the Visualization Toolkit (VTK) [SML04] expanded its functionality via a JS add-on called VTK-JS [MSL]. ...
Conference Paper
Full-text available
The JavaView visualization framework was designed at the end of the 1990s as a software package that provides, among other services, easy, interactive geometry visualizations on web pages. We discuss how this and other design goals were met and present several applications to highlight the contemporary use-cases of the framework. However, as JavaView's easy web export was based on Java Applets, the deprecation of this technology disabled one main functionality of the software. The remainder of the article uses JavaView as an example to highlight the effects of changes in the underlying programming language on a visualization toolkit. We discuss possible reactions of software to such challenges, where the JavaView framework serves as an example to illustrate development decisions. These discussions are guided by the broader, underlying question as to how long it is sensible to maintain a piece of software.
... In this section we present some results obtained exploiting the numerical strategies to compute geometrical parameters that are described in section 2. First of all, in section 3.1 we benchmark the accuracy and convergence rate of the methods on some canonical objects for which the expressions of the curvatures in Cartesian coordinates are available. These 3D objects are generated using superquadric primitives from the VTK library (Schroeder et al. 2006). We then apply the computation to a DNS and we compute an area-weighted Probability Density Function (PDF) to highlight interesting footprints in the H − G phase space of the topological objects produced at each simulation time step. ...
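As an illustration of the kind of benchmark geometry and curvature fields discussed in this excerpt, VTK itself provides a superquadric source and a curvature filter; the sketch below uses those built-in filters rather than the 1-ring estimator developed in the cited work:

```python
# Sketch: mean and Gaussian curvature of a superquadric test object with VTK filters.
import vtk

superquadric = vtk.vtkSuperquadricSource()    # canonical benchmark object
superquadric.SetThetaResolution(64)
superquadric.SetPhiResolution(64)
superquadric.SetThetaRoundness(0.3)           # assumed shape parameters
superquadric.SetPhiRoundness(0.3)

tri = vtk.vtkTriangleFilter()                 # curvature filter expects triangles
tri.SetInputConnection(superquadric.GetOutputPort())

mean_curv = vtk.vtkCurvatures()
mean_curv.SetInputConnection(tri.GetOutputPort())
mean_curv.SetCurvatureTypeToMean()            # H field, stored as point data
mean_curv.Update()

gauss_curv = vtk.vtkCurvatures()
gauss_curv.SetInputConnection(tri.GetOutputPort())
gauss_curv.SetCurvatureTypeToGaussian()       # G field, stored as point data
gauss_curv.Update()
```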
Conference Paper
Full-text available
This work presents a methodology to collect useful flow statistics over DNS simulations exploiting geometrical properties maps and topological invariants. The procedure is based on estimating curvatures on triangulated surfaces as averaged values around a given point and its first neighbours (the 1-ring of such a point). In the case of two-phase flow high-fidelity simulations, the surfaces are obtained after an iso-contouring procedure of the volumetric level-set field. The estimation of the curvatures on the surface allows the characterization of the 3D objects that are created in a high-fidelity simulation in terms of their area-weighted geometrical maps. In this work we provide an assessment of the robustness of the curvature estimation algorithm applied to some canonical 3D objects and to the Direct Numerical Simulation of the collision of two droplets. We provide the tracking of the topological evolution of such objects in terms of geometrical maps and we highlight the effect of mesh resolution on those topological changes.
... When the parent object's rotation is no longer (0, 0, 0), that is, the parent coordinate system has rotated, then Eq. (3.6) will no longer apply. Rotating coordinate transformation is the most complicated of the three transformations of translation (Schroeder et al. 2004), rotation and scaling. The difference between the parent coordinate system before and after the rotation is shown in Fig. 3. ...
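A small numerical illustration of the parent–child coordinate transformation discussed in this excerpt (written with SciPy rather than Unity3D, and with made-up numbers) is:

```python
# Sketch: composing a parent rotation and translation to map local to world coordinates.
import numpy as np
from scipy.spatial.transform import Rotation as R

parent_rotation = R.from_euler("xyz", [30.0, 45.0, 0.0], degrees=True)  # assumed pose
parent_position = np.array([1.0, 2.0, 0.0])

local_point = np.array([0.5, 0.0, 0.0])       # point expressed in the parent's frame
world_point = parent_position + parent_rotation.apply(local_point)

# the inverse transform recovers the local coordinates from world coordinates
recovered_local = parent_rotation.inv().apply(world_point - parent_position)
print(world_point, recovered_local)
```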
Article
Full-text available
As a new human–computer interaction technology, virtual reality technology has been widely used in education, military, industry, art and entertainment, etc. Unity3D is one of the most popular virtual reality product development engines in the world. Coordinate transformation is the mathematical basis for space transformation in virtual reality technology. This paper explains the mechanism by which Euler rotation causes the phenomenon of gimbal lock from the perspective of mathematical principles and derives the importance of Unity3D using quaternions for rotation calculations. Based on the Unity3D quaternion rotation calculation, the world–local coordinate transformation relationship of the child object under the Unity3D engine is derived in detail and verified, which lays a theoretical foundation for the in-depth development of virtual reality products based on Unity3D.
... Although domain-specific tools exist that support efficient methods for rendering streamlines [BSG*09, GKM*15], off-the-shelf visualization tools, such as ParaView [Aya15] and VisIt [CBW*12], default to tessellating them. For example, in the visualization toolkit (VTK) [SLM04], the default method for rendering streamlines is to tessellate them. Similarly, in the field of neuroscience, we are aware of at least one major project that originally rendered large neuron datasets by tessellating them [BMB*13], and dealt with the large number of triangles produced using parallel rendering [Eil13]. ...
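The tessellation the excerpt refers to is what VTK's tube filter does: it wraps each polyline in a triangulated tube before rendering. A minimal sketch with an assumed stand-in line is:

```python
# Sketch: default tessellation-based streamline rendering via vtkTubeFilter.
import vtk

line = vtk.vtkLineSource()                    # stand-in for a traced streamline
line.SetPoint1(0.0, 0.0, 0.0)
line.SetPoint2(1.0, 1.0, 0.0)
line.SetResolution(50)

tubes = vtk.vtkTubeFilter()                   # wraps the polyline in triangles
tubes.SetInputConnection(line.GetOutputPort())
tubes.SetRadius(0.02)
tubes.SetNumberOfSides(12)                    # triangulated sides per segment

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(tubes.GetOutputPort())
mapper.Update()
```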
Article
Full-text available
We present a general high‐performance technique for ray tracing generalized tube primitives. Our technique efficiently supports tube primitives with fixed and varying radii, general acyclic graph structures with bifurcations, and correct transparency with interior surface removal. Such tube primitives are widely used in scientific visualization to represent diffusion tensor imaging tractographies, neuron morphologies, and scalar or vector fields of 3D flow. We implement our approach within the OSPRay ray tracing framework, and evaluate it on a range of interactive visualization use cases of fixed‐ and varying‐radius streamlines, pathlines, complex neuron morphologies, and brain tractographies. Our proposed approach provides interactive, high‐quality rendering, with low memory overhead.
... There are visualization tools designed for expert use in climate data analysis, such as UV-CDAT [WBD*13], which is an extended version of an earlier tool that was unable to handle large datasets. There are other general-purpose tools that are used in the ocean and atmospheric domain, like Met.3D [RKSW15], Paraview [AGL05], VTK [SLM04], and VisIt [CBW*12]. There are in-situ analysis techniques as well that are useful to address the computational and storage requirements for large-scale datasets [AJO*14, WPS*16, LAA*17]. ...
Article
Full-text available
The analysis of ocean and atmospheric datasets offers a unique set of challenges to scientists working in different application areas. These challenges include dealing with extremely large volumes of multidimensional data, supporting interactive visual analysis, ensembles exploration and visualization, exploring model sensitivities to inputs, mesoscale ocean features analysis, predictive analytics, heterogeneity and complexity of observational data, representing uncertainty, and many more. Researchers across disciplines collaborate to address such challenges, which led to significant research and development advances in ocean and atmospheric sciences, and also in several relevant areas such as visualization and visual analytics, big data analytics, machine learning and statistics. In this report, we perform an extensive survey of research advances in the visual analysis of ocean and atmospheric datasets. First, we survey the task requirements by conducting interviews with researchers, domain experts, and end users working with these datasets on a spectrum of analytics problems in the domain of ocean and atmospheric sciences. We then discuss existing models and frameworks related to data analysis, sense‐making, and knowledge discovery for visual analytics applications. We categorize the techniques, systems, and tools presented in the literature based on the taxonomies of task requirements, interaction methods, visualization techniques, machine learning and statistical methods, evaluation methods, data types, data dimensions and size, spatial scale and application areas. We then evaluate the task requirements identified based on our interviews with domain experts in the context of categorized research based on our taxonomies, and existing models and frameworks of visual analytics to determine the extent to which they fulfill these task requirements, and identify the gaps in current research. In the last part of this report, we summarize the trends, challenges, and opportunities for future research in this area. (see http://www.acm.org/about/class/class/2012)
Article
Full-text available
Driven by the urge to expand renewable energy generation and mitigate the effects of intensifying extreme climatic events on crops, development of agrivoltaics is currently accelerating. However, harmonious deployment requires assessing both photovoltaic and crop yields to ensure simultaneous compliance with the energetic and agricultural objectives of stakeholders within evolving local legal contexts. Based on the community’s priority modelling needs, this paper presents the Python Agrivoltaic Simulation Environment (PASE), an MIT-licensed framework developed in partnership to assess the land productivity of agrivoltaic systems. The various expected benefits of this development are outlined, along with the open-source business model established with partners and the subsequent developments stemming from it. Examples illustrate how PASE effectively fulfils two primary requirements encountered by agrivoltaics stakeholders: predict irradiation on relevant surfaces and estimate agricultural and energy yields. In a dedicated experiment, PASE light model assumptions resulted in 1% error in the daily irradiation received by a sensor under two contrasting types of sky conditions. PASE’s ability to predict photovoltaic and crop yields and land equivalent ratio over several years is demonstrated for wheat on the BIODIV-SOLAR pilot. Ultimately, a sensitivity analysis of inter-row spacing demonstrates its usefulness to optimise systems according to different criteria.
Article
Full-text available
This paper presents the design, implementation, and evaluation of VR-EX, a combination of a virtual field trip and a serious game in immersive virtual reality. The application’s purpose is the communication of research conducted in the Mont Terri underground research laboratory in Switzerland. VR-EX enables users to actively attend electrical resistivity tomography measurements within a geological experiment, from planning to execution to analysis of the results, and in this way implements an active and playful learning approach. The work conducted in underground research laboratories has a high relevance for society as it contributes to research on the final disposal of nuclear waste. Therefore, the active communication of research methodology and results is crucial to increase understanding of scientific processes and boost interest. VR-EX was evaluated in a user study with 35 participants to measure its overall quality and its effectiveness of the knowledge transfer. Taking the evaluation’s qualitative results into account, the application was improved in an iterative process. Overall, the results prove the good quality of the application and its high effectiveness in terms of knowledge transfer. The reported high engagement, joy, and immersion indicate the benefits of employing immersive virtual reality for vivid science communication.
Article
Full-text available
Architectural parameters of skeletal muscle such as pennation angle provide valuable information on muscle function, since they can be related to the muscle force generating capacity, fiber packing, and contraction velocity. In this paper, we introduce a 3D ultrasound-based workflow for determining 3D fascicle orientations of skeletal muscles. We used a custom-designed automated motor driven 3D ultrasound scanning system for obtaining 3D ultrasound images. From these, we applied a custom-developed multiscale-vessel enhancement filter-based fascicle detection algorithm and determined muscle volume and pennation angle. We conducted trials on a phantom and on the human tibialis anterior (TA) muscle of 10 healthy subjects in plantarflexion (157 ± 7°), neutral position (109 ± 7°, corresponding to neutral standing), and one resting position in between (145 ± 6°). The results of the phantom trials showed a high accuracy with a mean absolute error of 0.92 ± 0.59°. TA pennation angles were significantly different between all positions for the deep muscle compartment; for the superficial compartment, angles are significantly increased for neutral position compared to plantarflexion and resting position. Pennation angles were also significantly different between superficial and deep compartment. The results of constant muscle volumes across the 3 ankle joint angles indicate the suitability of the method for capturing 3D muscle geometry. Absolute pennation angles in our study were slightly lower than recent literature. Decreased pennation angles during plantarflexion are consistent with previous studies. The presented method demonstrates the possibility of determining 3D fascicle orientations of the TA muscle in vivo.
Article
Full-text available
Combined magnetic resonance imaging (MRI) and positron emission tomography/computed tomography (PET/CT) may enhance diagnosis, aid surgical planning and intra-operative orientation for prostate biopsy and radical prostatectomy. Although PET-MRI may provide these benefits, PET-MRI machines are not widely available. Image fusion of prostate-specific membrane antigen PET/CT and MRI acquired separately may be a suitable clinical alternative. This study compares CT-MR registration algorithms for urological prostate cancer care. Paired whole-pelvis MR and CT scan data were used (n = 20). A manual prostate CTV contour was performed independently on each patient's MR and CT images. A semi-automated rigid-, automated rigid- and automated non-rigid registration technique was applied to align the MR and CT data. Dice Similarity Index (DSI), 95% Hausdorff distance (95%HD) and average surface distance (ASD) measures were used to assess the closeness of the manual and registered contours. The automated non-rigid approach had a significantly improved performance compared to the automated rigid- and semi-automated rigid-registration, having better average scores and decreased spread for the DSI, 95%HD and ASD (all p < 0.001). Additionally, the automated rigid approach had similarly improved performance compared to the semi-automated rigid registration across all accuracy metrics observed (all p < 0.001). Overall, all registration techniques studied here demonstrated sufficient accuracy for exploring their clinical use. While the fully automated non-rigid registration algorithm in the present study provided the most accurate registration, the semi-automated rigid registration is a quick, feasible, and accessible method to perform image registration for prostate cancer care by urologists and radiation oncologists now.
Article
Full-text available
In this paper, we present a set of improved algorithms for recovering CAD-type surface models from 3D images. The goal of the proposed framework is to generate B-Spline or NURBS surfaces, which are standard mathematical representations of solid objects in digital engineering. To create a NURBS surface, we first compute a triangular mesh using the Marching Cubes algorithm; the control network (a quadrilateral mesh) is then determined from the triangular mesh using discrete Morse theory. Discrete Morse theory uses the critical points of a specific scalar field defined over the triangulation to generate a quad mesh. Such a scalar field is obtained by solving a graph Laplacian eigenproblem over the triangulation. However, the resulting surface is not optimal. We therefore introduce an optimisation algorithm to better approximate the geometry of the object. In addition, we propose a statistical method for selecting the most appropriate eigenfunction of the graph Laplacian to generate a control network that is neither too coarse nor too fine, given the precision of the 3D image. To do this, we set up a regression model and use an information criterion to choose the best surface. Finally, we extend our approach by taking into account both model and data uncertainty using probabilistic regression and sampling the posterior distribution with Hamiltonian MCMC.
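The first stage of the pipeline described in this abstract, extracting a triangular mesh from a 3D image, corresponds to a standard Marching Cubes call in VTK; the sketch below uses assumed file names and iso-values:

```python
# Sketch: extract a triangular mesh from a 3D image with Marching Cubes.
import vtk

reader = vtk.vtkXMLImageDataReader()          # assumed 3D image (.vti)
reader.SetFileName("scan.vti")                # placeholder file name

mc = vtk.vtkMarchingCubes()
mc.SetInputConnection(reader.GetOutputPort())
mc.SetValue(0, 127.5)                         # assumed grey-level iso-value
mc.ComputeNormalsOn()
mc.Update()

triangular_mesh = mc.GetOutput()              # vtkPolyData fed to the quad-meshing
print(triangular_mesh.GetNumberOfCells())     # and NURBS-fitting stages
```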
Article
Purpose: Computer-assisted surgical planning methods help to reduce the risks and costs in transpedicular fixation surgeries. However, most methods do not consider the speed and versatility of the planning as factors that improve its overall performance. In this work, we propose a method able to generate surgical plans in minimal time, within the required safety margins and accounting for the surgeon's personal preferences. Methods: The proposed planning module takes as input a CT image of the patient, initial-guess insertion trajectories provided by the surgeon and a reduced set of parameters, delivering optimal screw sizes and trajectories in a very reduced time frame. Results: The planning results were validated with quantitative metrics and feedback from surgeons. The whole planning pipeline can be executed at an estimated time of less than 1 min per vertebra. The surgeons remarked that the proposed trajectories remained in the safe area of the vertebra, and a Gertzbein-Robbins ranking of A or B was obtained for 95 % of them. Conclusions: The planning algorithm is safe and fast enough to perform in both pre-operative and intra-operative scenarios. Future steps will include the improvement of the preprocessing efficiency, as well as consideration of the spine's biomechanics and intervertebral rod constraints to improve the performance of the optimisation algorithm.
Article
Numerical simulation is the most powerful computational and analysis tool for a large variety of engineering and physical problems. For a complex problem involving multiple fields, processes and scales, different computing tools have to be developed to solve particular fields at different scales and for different processes. Therefore, the integration of different types of software is inevitable. However, it is difficult to transfer meshes and simulated results among software packages because of the lack of shared data formats or because of encrypted data formats. An image-processing-based method for three-dimensional model reconstruction for numerical simulation is proposed, which presents a solution to the integration problem via a series of slice or projection images obtained by the post-processing modules of the numerical simulation software. By means of mapping image pixels to meshes of either finite difference or finite element models, the geometry contour can be extracted to export the stereolithography model. The values of results, represented by color, can be deduced and assigned to the meshes. All the models with data can be directly or indirectly integrated into other software as a continued or new numerical simulation. The three-dimensional reconstruction method has been validated in numerical simulation of castings, and case studies are provided in this study.
Article
Full-text available
Statistical data summarization can significantly reduce the data storage footprint for large-scale scientific simulations while maintaining data accuracy. However, the high-resolution reconstructed data causes a memory bottleneck in graphics processing unit (GPU)-based post-hoc visualization using limited graphics memory. In this paper, we propose a statistical summarization model-driven adaptive data reconstruction method for large-scale statistical visualization on GPUs. It uses the spatial Gaussian mixture model to iteratively compute the Shannon entropy on multi-level grids, driving an adaptive mesh refinement that retains complex physical features. A graphics shader-based data reconstruction algorithm is used to efficiently generate the scalar field on the adaptive grid while seamlessly integrating with GPU-accelerated rendering algorithms. The experimental tests used data generated by five real-world scientific simulations with a maximum grid resolution of 134 million. Qualitative and quantitative analysis results show that our method can achieve efficient and high-quality reconstruction of the statistical summary data on a GPU, and the maximum data compression ratio is close to two orders of magnitude.
Article
Multiscale modeling of marine and aerial plankton has traditionally been difficult to address holistically due to the challenge of resolving individual locomotion dynamics while being carried with larger-scale flows. However, such problems are of paramount importance, e.g., dispersal of marine larval plankton is critical for the health of coral reefs, and aerial plankton (tiny arthropods) can be used as effective agricultural biocontrol agents. Here we introduce the open-source, agent-based modeling software Planktos targeted at 2D and 3D fluid environments in Python. Agents in this modeling framework are relatively tiny organisms in sufficiently low densities that their effect on the surrounding fluid motion can be considered negligible. This library can be used for scientific exploration and quantification of collective and emergent behavior, including interaction with immersed structures. In this paper, we detail the implementation and functionality of the library along with some illustrative examples. Functionality includes arbitrary agent behavior obeying either ordinary differential equations, stochastic differential equations, or coded movement algorithms, all under the influence of time-dependent fluid velocity fields generated by computational fluid dynamics, experiments, or analytical models in domains with static immersed mesh structures with sliding or sticky collisions. In addition, data visualization tools provide images or animations with kernel density estimation and velocity field analysis with respect to deterministic agent behavior via the finite-time Lyapunov exponent.
Article
Purpose: Communicating complex blood flow patterns generated from computational fluid dynamics (CFD) simulations to clinical audiences for the purposes of risk assessment or treatment planning is an ongoing challenge. While attempts have been made to develop new software tools for such clinical visualization of CFD data, these often overlook established medical imaging/visualization practice and data infrastructures. Here, leveraging the clinical ubiquity of the DICOM file format, we present techniques for the translation of CFD data to DICOM series, facilitating interactive visualization in standard radiological software. Methods: Unstructured CFD data (volumetric fields of velocity magnitude, Q-criterion, and pathlines) are resampled to structured grids. Novel raster-based techniques that simulate experimental optical blurring are presented for bringing simulated pathlines into structured image volumes. DICOM series are created by strategically encoding these data into the file's PixelArray tag. Lumen surface information is also strategically encoded into a different range of pixel intensities, allowing hemodynamics and morphology to be co-visualized in a single volume using opacity-based rendering transfer functions. Results: We show that 3D temporal CFD data represented as structured DICOM series can be rendered interactively in Horos, a widely-used medical imaging/radiology software. Our transfer function-based approach allows for representations of scalar isosurfaces, volumetric rendering, and tubular pathlines to be modified in real-time, resembling conventional unstructured visualizations. Careful selection of voxelization ROIs helps to ensure that data are kept lightweight for real-time rendering and minimal storage. Conclusion: While our approach inherently sacrifices some of the advanced visualization capabilities of specialized software tools, we believe our closer consideration of standardization can help to facilitate meaningful clinical interaction. This work opens up possibilities for the complete integration of measured and simulated data in established radiological software environments and workflows from PACS storage to 3D/4D visualization.
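The intensity-range encoding idea can be sketched with NumPy as follows: one scalar field and a lumen mask are packed into disjoint integer ranges of a single voxel volume, which could then be written slice by slice into a DICOM series. The grid size, intensity ranges, and placeholder arrays are assumptions for illustration and do not reproduce the paper's resampling or pathline rasterization.

```python
import numpy as np

# Hypothetical structured resampling of one time frame: velocity magnitude on a
# 128^3 voxel grid plus a boolean lumen-surface mask (both would normally come
# from resampling the unstructured CFD data).
vel = np.random.default_rng(0).random((128, 128, 128)).astype(np.float32)  # placeholder field
lumen = np.zeros((128, 128, 128), dtype=bool)
lumen[60:68, :, :] = True                                                   # placeholder surface voxels

# Encode both quantities into disjoint integer ranges so a single transfer
# function can show morphology and hemodynamics together:
#   0-1999 -> velocity magnitude (linearly rescaled)
#   3000   -> lumen surface voxels
pixels = np.round(1999 * (vel - vel.min()) / (np.ptp(vel) + 1e-12)).astype(np.uint16)
pixels[lumen] = 3000

# Each z-slice of `pixels` would then be written as one DICOM instance
# (e.g. via pydicom, filling the pixel data, Rows, Columns and position tags).
print(pixels.dtype, pixels.min(), pixels.max())
```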
Chapter
Fluorescence microscopy has enabled imaging of the spatial proteome (the morphological pattern of subcellular protein localization). Automated (and even manual) high-resolution fluorescence image acquisition generates large amounts of complex image data, and analyzing such data manually to distill biologically meaningful information is challenging. Automated image analysis is therefore essential to quantify phenotypic changes, avoid subjective bias, and provide accurate and reproducible results; it requires image processing, image analysis, and data analysis tools. Chapter 9 presents our customized system for automated analysis of fluorescently stained blood cells. Our aim was to develop an automatic image analysis system that minimizes the manual operations required, not only when inspecting segmentation results but also when initially tuning the parameters of the segmentation algorithm. Being freed from this initial parameter tuning allows faster switching to data analysis of a new experiment when imaging settings or other conditions change. The development of the fluorescence image analysis algorithms drew on good practice from the development of the 2DE image analysis system (see Chap. 6). The developed tools were applied to the automated analysis of confocal microscopy images to evaluate changes in histone modifications across cell populations.
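A minimal sketch of parameter-light fluorescence image segmentation, using scikit-image rather than the authors' customized system: automatic Otsu thresholding, small-object removal, connected-component labelling, and per-object intensity statistics. The synthetic image and the minimum object size are illustrative assumptions.

```python
import numpy as np
from skimage import filters, measure, morphology

# Hypothetical single-channel fluorescence image (stands in for a confocal slice).
rng = np.random.default_rng(1)
image = rng.normal(0.1, 0.02, size=(256, 256))
image[64:96, 64:96] += 0.8          # two bright "cells"
image[160:200, 150:190] += 0.6

# Parameter-light segmentation: automatic Otsu threshold, small-object removal,
# connected-component labelling, then per-object intensity statistics.
mask = image > filters.threshold_otsu(image)
mask = morphology.remove_small_objects(mask, min_size=50)
labels = measure.label(mask)
props = measure.regionprops(labels, intensity_image=image)

for p in props:
    print(f"cell {p.label}: area={p.area}, mean intensity={p.mean_intensity:.3f}")
```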
Article
Full-text available
Computational models are increasingly used for diagnosis and treatment of cardiovascular disease. To provide a quantitative hemodynamic understanding that can be effectively used in the clinic, it is crucial to quantify the variability in the outputs from these models due to multiple sources of uncertainty. To quantify this variability, the analyst invariably needs to generate a large collection of high-fidelity model solutions, typically requiring a substantial computational effort. In this paper, we show how an explicit-in-time ensemble cardiovascular solver offers superior performance with respect to the embarrassingly parallel solution with implicit-in-time algorithms, typical of an inner-outer loop paradigm for non-intrusive uncertainty propagation. We discuss in detail the numerics and efficient distributed implementation of a segregated FSI cardiovascular solver on both CPU and GPU systems, and demonstrate its applicability to idealized and patient-specific cardiovascular models, analyzed under steady and pulsatile flow conditions.
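The contrast drawn above between an explicit-in-time ensemble solver and an inner-outer loop of independent implicit solves can be illustrated with a toy lumped-parameter model: all parameter samples are advanced together inside one explicit time loop. The two-element Windkessel analogue, sample sizes, and inflow waveform below are illustrative assumptions, not the paper's FSI solver.

```python
import numpy as np

# Lumped two-element Windkessel analogue: C dP/dt = Q(t) - P / R, with uncertain
# resistance R and compliance C. All ensemble members advance together inside a
# single explicit time loop instead of launching one implicit solve per sample.
rng = np.random.default_rng(0)
n_samples = 10_000
R = rng.lognormal(mean=0.0, sigma=0.1, size=n_samples)   # uncertain resistance
C = rng.lognormal(mean=0.0, sigma=0.1, size=n_samples)   # uncertain compliance

dt, n_steps = 1e-3, 5_000
P = np.ones(n_samples)                                    # initial pressure (nondimensional)
for k in range(n_steps):
    t = k * dt
    Q = 1.0 + 0.5 * np.sin(2.0 * np.pi * t)               # pulsatile inflow waveform
    P = P + dt * (Q - P / R) / C                          # explicit Euler, vectorised over samples

print(f"mean pressure {P.mean():.3f}, std {P.std():.3f}")  # output uncertainty
```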
Article
Full-text available
In forensic autopsy, medical examiners (MEs) and diagnostic radiologists (DRs) cooperate to examine the corpse. Effective computational assistance tools are imperative for facilitating this intricate collaborative work. In this paper, we present an integrated visual analysis environment named FORSETI (forensic autopsy system for e-court instruments), whose technical essence is twofold. First, it is designed around an extended version of the legal medicine mark-up language for authoring reports on physical autopsy (PA) as well as virtual autopsy (VA). Second, it provides autopsy juxtaposition, which seamlessly assists the MEs and DRs in cross-referencing each other's VA and PA work. A fictitious case based on the Visible Female Dataset is used to demonstrate the effectiveness of an initial prototype of the FORSETI system.
Thesis
Full-text available
In this thesis, we are concerned with the mathematical modelling of cardiac electrophysiology and, more precisely, the numerical study of the electrical activity of the heart. One of the challenges in this field is to reconstruct the electrical information on the cardiac surface from measurements taken on the thoracic surface; such a problem is called an inverse problem. We first analyse several methods from the literature for solving the inverse problem and propose a new regularization approach based on the electrical current flux across the cardiac surface; the results are illustrated with simulated and experimental data. We then turn to machine learning methods: several artificial neural network models are designed and trained to solve the inverse problem, and we show that this approach improves the reconstruction of the cardiac electrical potential compared with classical inverse methods. The main contribution of this thesis is the development of an artificial neural network model for cardiac activation mapping, which is highly robust to the noise present in the thoracic electrical signals. The last part is devoted to comparing the models developed previously in order to determine the best numerical approach for mapping cardiac activation. The study is carried out on a simulated data set, and we show that the machine-learning-based methods provide the best results.
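As a baseline for the inverse problem described above, a zero-order Tikhonov reconstruction can be sketched in a few lines of NumPy; the transfer matrix, regularization weight, and synthetic potentials are illustrative assumptions, and the thesis's flux-based regularization and neural-network models are not reproduced here.

```python
import numpy as np

def tikhonov_solve(A, b, lam, L=None):
    """Solve min_x ||A x - b||^2 + lam * ||L x||^2 via the normal equations.

    A maps (unknown) cardiac-surface potentials to torso measurements; L
    defaults to the identity (zero-order Tikhonov)."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

# Synthetic test: ill-conditioned transfer matrix, smooth "true" potentials.
rng = np.random.default_rng(0)
A = rng.standard_normal((300, 200)) @ np.diag(np.logspace(0, -3, 200))
x_true = np.sin(np.linspace(0, 3 * np.pi, 200))
b = A @ x_true + 1e-3 * rng.standard_normal(300)

x_hat = tikhonov_solve(A, b, lam=1e-4)
print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```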
Article
Full-text available
Knowledge of the static and morphodynamic components of the river bed is important for the maintenance of waterways. Under the action of a current, parts of the river bed sediments can move in the form of dunes. Recordings of the river bed obtained by multibeam echosounding are used as input to a morphological analysis that computes the bedload transport rate from the detected dune shape and migration. Before the morphological analysis, suitable processing of the measurement data is essential to minimize inherent uncertainties. This paper presents a simulation-based evaluation of suitable data processing concepts for vertical sections of bed forms, based on a case study at the river Rhine. For the presented spatial approaches, suitable parameter sets are found that reproduce the nominal dune parameters to within a few centimetres. However, if parameter sets are chosen inadequately, the derived dune parameters can deviate from the simulated truth by several decimetres. A simulation-based workflow is presented to find the optimal hydrographic data processing strategy for a given dune geometry.
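A minimal sketch of how dune parameters and migration might be derived from processed vertical sections, assuming SciPy's peak detection: crests and troughs give dune height and length, and the lag maximizing the cross-correlation between two epochs gives the migration distance. The synthetic profiles, peak-separation distance, and lag range are illustrative assumptions, not the paper's workflow.

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical longitudinal bed section (metres): sinusoidal dunes plus noise,
# standing in for a processed multibeam profile along the fairway axis.
x = np.arange(0.0, 500.0, 0.5)
rng = np.random.default_rng(0)
bed_t0 = 0.4 * np.sin(2 * np.pi * x / 40.0) + 0.02 * rng.standard_normal(x.size)
bed_t1 = 0.4 * np.sin(2 * np.pi * (x - 3.0) / 40.0)   # same dunes migrated 3 m downstream

# Dune geometry: crest/trough picking on the first epoch.
crests, _ = find_peaks(bed_t0, distance=40)
troughs, _ = find_peaks(-bed_t0, distance=40)
height = bed_t0[crests].mean() - bed_t0[troughs].mean()
length = np.diff(x[crests]).mean()

# Dune migration between epochs from the lag that maximises the cross-correlation.
lags = np.arange(-100, 101)
corr = [np.corrcoef(bed_t0, np.roll(bed_t1, lag))[0, 1] for lag in lags]
migration = -lags[int(np.argmax(corr))] * (x[1] - x[0])

print(f"height ~ {height:.2f} m, length ~ {length:.1f} m, migration ~ {migration:.1f} m")
```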
Article
Full-text available
Purpose Image-guided surgery (IGS) is an integral part of modern neuro-oncology surgery. Navigated ultrasound provides the surgeon with reconstructed views of ultrasound data, but no commercial system presently permits its integration with other essential non-imaging-based intraoperative monitoring modalities such as intraoperative neuromonitoring. Such a system would be particularly useful in skull base neurosurgery. Methods We established functional and technical requirements of an integrated multi-modality IGS system tailored for skull base surgery with the ability to incorporate: (1) preoperative MRI data and associated 3D volume reconstructions, (2) real-time intraoperative neurophysiological data and (3) live reconstructed 3D ultrasound. We created an open-source software platform to integrate with readily available commercial hardware. We tested the accuracy of the system’s ultrasound navigation and reconstruction using a polyvinyl alcohol phantom model and simulated the use of the complete navigation system in a clinical operating room using a patient-specific phantom model. Results Experimental validation of the system’s navigated ultrasound component demonstrated accuracy of < 4.5 mm and a frame rate of 25 frames per second. Clinical simulation confirmed that system assembly was straightforward, could be achieved in a clinically acceptable time of < 15 min and performed with a clinically acceptable level of accuracy. Conclusion We present an integrated open-source research platform for multi-modality IGS. The present prototype system was tailored for neurosurgery and met all minimum design requirements focused on skull base surgery. Future work aims to optimise the system further by addressing the remaining target requirements.
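The phantom accuracy evaluation reported above amounts to comparing navigated landmark positions with ground truth; a minimal sketch of such an RMS error computation is given below, with entirely hypothetical coordinates.

```python
import numpy as np

def rms_error(predicted, truth):
    """Root-mean-square Euclidean distance between corresponding 3D landmarks."""
    d = np.linalg.norm(predicted - truth, axis=1)
    return float(np.sqrt((d ** 2).mean()))

# Hypothetical phantom landmarks (mm): ground-truth positions versus positions
# localised in the reconstructed, navigated ultrasound volume.
truth = np.array([[10.0, 20.0, 30.0], [42.0, 18.0, 25.0], [33.0, 55.0, 12.0]])
navigated = truth + np.random.default_rng(0).normal(0.0, 1.5, size=truth.shape)

print(f"navigation RMS error: {rms_error(navigated, truth):.2f} mm")
```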
Article
Full-text available
Purpose: An intraoperative real-time respiratory tumor motion prediction system based on magnetic tracking technology is presented. Using respiratory movements measured in different body regions, it provides patient-specific prediction for single or multiple tumors to facilitate the guidance of treatments. Methods: A custom-built phantom patient model replicates respiratory cycles similar to those of a human body, while a custom-built sensor holder concept is applied to the patient's surface to find the optimum number of sensors and their best placement locations for real-time surgical navigation and motion prediction of internal tumors. Automatic marker localization applied to the patient's 4D-CT data, feature selection, and Gaussian process regression enable off-line prediction in the preoperative phase to increase the accuracy of real-time prediction. Results: Two evaluation methods with three different registration patterns (at fully inhaled, half inhaled, and fully exhaled positions) were used quantitatively at all internal target positions in the phantom: the static method evaluates accuracy with simulated breathing stopped, and the dynamic method with breathing continued. The overall root mean square (RMS) error for both methods was between [Formula: see text] and [Formula: see text]. The overall registration RMS error was [Formula: see text]. The best prediction errors were observed for registrations at half-inhaled positions, with minimum [Formula: see text] and maximum [Formula: see text]. The resulting accuracy satisfies most radiotherapy treatments or surgeries, e.g., for lung, liver, prostate and spine. Conclusion: The system is proposed to predict the respiratory motion of internal structures while the patient breathes freely during treatment. The custom-built sensor holders are compatible with magnetic tracking. Our approach reduces known technological and human limitations of commonly used methods, for both physicians and patients.
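A minimal sketch of the preoperative regression step, assuming scikit-learn's Gaussian process regressor: a surface-sensor signal is mapped to an internal target position learned from phantom training data. The kernel, the synthetic breathing signals, and the one-dimensional output are illustrative assumptions rather than the paper's feature-selection and multi-sensor setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data from the preoperative phase: surface-sensor
# displacement (input) versus internal target position along one axis (output),
# both sampled over a few breathing cycles of the phantom.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 400)
surface = np.sin(2 * np.pi * t / 4.0)                       # chest-wall sensor signal
target = 8.0 * surface + 0.5 * surface ** 3 + 0.1 * rng.standard_normal(t.size)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.5) + WhiteKernel(1e-2),
                               normalize_y=True)
gpr.fit(surface.reshape(-1, 1), target)

# Real-time phase: predict the internal target position from a new surface reading.
pred, std = gpr.predict(np.array([[0.4]]), return_std=True)
print(f"predicted target offset {pred[0]:.2f} mm ± {std[0]:.2f} mm")
```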
Chapter
Full-text available
Definitions of three types of bioimage analysis software—Component, Collection, and Workflow—are introduced in this chapter. The aim is to promote the structured design of bioimage analysis methods, and to improve related learning and teaching.
Article
Full-text available
Purpose A robotic intraoperative laser guidance system with hybrid optic-magnetic tracking for skull base surgery is presented. It provides in situ augmented reality guidance for microscopic interventions at the lateral skull base, with minimal mental and workload overhead for surgeons, who work without a monitor or dedicated pointing tools. Methods Three components were developed: a registration tool (Rhinospider), a hybrid magneto-optic-tracked robotic feedback control scheme, and a modified robotic end-effector. Rhinospider optimizes registration of the patient and preoperative CT data by using magnetic tracking to exclude user errors in fiducial localization. The hybrid controller uses an integrated microscope HD camera for robotic control, with a guidance beam shining on a dual-plate setup that avoids magnetic field distortions. A robotic needle insertion platform (iSYS Medizintechnik GmbH, Austria) was modified to position a laser beam with high precision in a surgical scene, compatible with microscopic surgery. Results System accuracy was evaluated quantitatively at various target positions on a phantom. The accuracy found is 1.2 mm ± 0.5 mm, with errors primarily due to magnetic tracking. This application accuracy seems suitable for most surgical procedures in the lateral skull base. The system was evaluated quantitatively during a mastoidectomy of an anatomic head specimen and was judged useful by the surgeon. Conclusion A hybrid robotic laser guidance system with direct visual feedback is proposed for navigated drilling and intraoperative structure localization. The system provides visual cues directly on/in the patient anatomy, reducing standard limitations of AR visualization such as impaired depth perception. The custom-built end-effector for the iSYS robot does not interfere with the use of surgical microscopes and is compatible with magnetic tracking. The cadaver experiment showed that the guidance was accurate and the end-effector unobtrusive. This laser guidance has the potential to aid the surgeon in finding the optimal mastoidectomy trajectory in more difficult interventions.
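Fiducial-based registration of patient and preoperative CT data, as performed by the Rhinospider tool described above, is typically a least-squares rigid alignment; a generic Kabsch/SVD sketch is given below with hypothetical fiducial coordinates. This is not the Rhinospider implementation.

```python
import numpy as np

def rigid_register(fiducials_ct, fiducials_tracked):
    """Least-squares rigid transform (R, t) mapping CT-space fiducials onto
    magnetically tracked fiducials (Kabsch/Umeyama without scaling)."""
    mu_a = fiducials_ct.mean(axis=0)
    mu_b = fiducials_tracked.mean(axis=0)
    H = (fiducials_ct - mu_a).T @ (fiducials_tracked - mu_b)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = mu_b - R @ mu_a
    return R, t

# Hypothetical fiducial sets (mm): CT coordinates and the same points measured
# by the magnetic tracker, with a small amount of localisation noise.
rng = np.random.default_rng(0)
ct = rng.uniform(0.0, 60.0, size=(6, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
tracked = ct @ R_true.T + np.array([5.0, -2.0, 10.0]) + rng.normal(0.0, 0.2, size=ct.shape)

R, t = rigid_register(ct, tracked)
fre = np.sqrt(((ct @ R.T + t - tracked) ** 2).sum(axis=1).mean())
print(f"fiducial registration error: {fre:.2f} mm")
```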
Article
Full-text available
'Big data' refers to massive amounts of information with considerable untapped potential, and it has been a topic of special interest for the past two decades. Various public and private sector industries generate, store, and analyze big data with the aim of improving the services they provide. In the healthcare industry, sources of big data include hospital records, patients' medical records, results of medical examinations, and devices that are part of the internet of things. Biomedical research also generates a significant portion of the big data relevant to public healthcare. These data require proper management and analysis to yield meaningful information; otherwise, seeking a solution by analyzing big data becomes comparable to finding a needle in a haystack. Each step of handling big data poses challenges that can only be overcome with high-end computing solutions, which is why healthcare providers need appropriate infrastructure to systematically generate and analyze big data if they are to deliver relevant solutions for improving public health. Efficient management, analysis, and interpretation of big data can open new avenues for modern healthcare, and various industries, including healthcare, are taking vigorous steps to convert this potential into better services and financial advantages. With strong integration of biomedical and healthcare data, modern healthcare organizations could revolutionize medical therapies and personalized medicine.