Preprint

Distributed Objective Function Evaluation for Optimization of Radiation Therapy Treatment Plans


Abstract

The modern workflow for radiation therapy treatment planning involves mathematical optimization to determine optimal treatment machine parameters for each patient case. These optimization problems can be computationally expensive and require iterative optimization algorithms to solve. In this work, we investigate a method for distributing the calculation of objective functions and gradients for radiation therapy optimization problems across computational nodes. We test our approach on the TROTS dataset -- which consists of optimization problems from real clinical patient cases -- using the IPOPT optimization solver in a leader/follower approach to parallelization. We show that our approach can utilize multiple computational nodes efficiently, with a speedup of approximately 2-3.5 times over the serial version.
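The distributed evaluation described above can be sketched as a leader/follower reduction over a separable objective. The sketch below is illustrative only, not the paper's implementation: it uses threads in place of MPI ranks across nodes, and the per-structure least-squares terms standing in for dose-deposition blocks are hypothetical.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Hypothetical separable objective f(x) = sum_i 0.5 * ||A_i x - b_i||^2, where
# each (A_i, b_i) stands in for one dose-deposition block of a TROTS-style problem.
rng = np.random.default_rng(0)
TERMS = [(rng.standard_normal((40, 10)), rng.standard_normal(40)) for _ in range(8)]

def term_value_and_grad(i, x):
    """Follower task: evaluate one objective term and its gradient."""
    A, b = TERMS[i]
    r = A @ x - b
    return 0.5 * (r @ r), A.T @ r

def objective(x, executor):
    """Leader: scatter the terms to followers, then reduce (sum) the results."""
    futures = [executor.submit(term_value_and_grad, i, x) for i in range(len(TERMS))]
    values, grads = zip(*(f.result() for f in futures))
    return sum(values), np.sum(grads, axis=0)

x = rng.standard_normal(10)
with ThreadPoolExecutor(max_workers=4) as ex:
    f_par, g_par = objective(x, ex)

# The parallel reduction matches a serial evaluation of the same sum.
f_ser = sum(0.5 * np.sum((A @ x - b) ** 2) for A, b in TERMS)
```

An actual multi-node setup would replace the thread pool with an inter-node mechanism (e.g. MPI), but the scatter/reduce structure the solver sees is the same.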


References
Article
The Radiotherapy Optimisation Test Set (TROTS) is an extensive set of problems originating from radiotherapy (radiation therapy) treatment planning. The dataset was created for two purposes: (1) to supply a large-scale dense dataset for measuring the performance and quality of mathematical solvers, and (2) to supply a dataset for investigating the multi-criteria optimisation and decision-making nature of the radiotherapy problem. The dataset contains 120 problems (patients), divided over 6 different treatment protocols/tumour types. Each problem contains numerical data, a configuration for the optimisation problem, and data required to visualise and interpret the results. The data is stored as HDF5-compatible Matlab files, and includes scripts to work with the dataset.
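Since MATLAB v7.3 files are HDF5 containers, the TROTS data can be read from Python with `h5py`. The sketch below first writes a tiny stand-in file so it is self-contained; the file name and group/dataset paths are hypothetical, not the actual TROTS layout.

```python
import h5py
import numpy as np

# Write a tiny stand-in file so the read below is self-contained; in practice
# you would open one of the TROTS .mat files directly (v7.3 .mat files are HDF5).
with h5py.File("trots_example.h5", "w") as f:
    f.create_dataset("data/matrix/A", data=np.arange(12.0).reshape(3, 4))

# h5py exposes the file as nested groups and datasets; the path used here is
# illustrative only.
with h5py.File("trots_example.h5", "r") as f:
    A = f["data/matrix/A"][()]   # load a dense block into a NumPy array
```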
Chapter
This paper gives an overview of the Score-P performance measurement infrastructure, which is being jointly developed by leading HPC performance tools groups. It motivates the advantages of the joint undertaking from both the developer and the user perspectives, and presents the design and components of the newly developed Score-P performance measurement infrastructure. Furthermore, it gives first evaluation results in comparison with existing performance tools and presents an outlook to the long-term cooperative development of the new system.
Article
We present a primal-dual interior-point algorithm with a filter line-search method for nonlinear programming. Local and global convergence properties of this method were analyzed in previous work. Here we provide a comprehensive description of the algorithm, including the feasibility restoration phase for the filter method, second-order corrections, and inertia correction of the KKT matrix. Heuristics are also considered that allow faster performance. This method has been implemented in the IPOPT code, which we demonstrate in a detailed numerical study based on 954 problems from the CUTEr test set. An evaluation is made of several line-search options, and a comparison is provided with two state-of-the-art interior-point codes for nonlinear programming.
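The core idea of a filter line search is that a trial iterate is accepted if it is not dominated by any stored pair of (constraint violation, objective). A minimal sketch of that acceptance test follows; the margin constants are illustrative, not IPOPT's exact parameter values.

```python
def filter_acceptable(filt, theta, f, gamma_theta=1e-5, gamma_f=1e-5):
    """Accept a trial point (theta, f) = (constraint violation, objective)
    if, against EVERY filter entry (theta_j, f_j), it sufficiently improves
    either theta or f. Margins gamma_theta/gamma_f are illustrative."""
    return all(theta <= (1 - gamma_theta) * th_j or f <= f_j - gamma_f * th_j
               for th_j, f_j in filt)

filt = [(1.0, 10.0), (0.5, 12.0)]
ok = filter_acceptable(filt, theta=0.4, f=11.0)    # improves theta vs. both entries
bad = filter_acceptable(filt, theta=1.1, f=13.0)   # dominated: worse on both axes
```

The full method layers feasibility restoration, second-order corrections, and filter augmentation on top of this test; the sketch only shows the dominance check itself.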
Article
This paper attempts to motivate and justify quasi-Newton methods as useful modifications of Newton's method for general and gradient nonlinear systems of equations. References are given to ample numerical justification; here we give an overview of many of the important theoretical results, each accompanied by sufficient discussion to make the results, and hence the methods, plausible.
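The best-known instance of the quasi-Newton idea is the BFGS update, which revises the Hessian approximation from the step and gradient difference so that the secant condition holds. A minimal sketch:

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of the Hessian approximation B from step s and
    gradient difference y; requires curvature s @ y > 0. After the update,
    B satisfies the secant condition B @ s == y."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

B = np.eye(3)
s = np.array([1.0, 0.0, 2.0])
y = np.array([2.0, 1.0, 1.0])
B1 = bfgs_update(B, s, y)   # B1 @ s equals y, and B1 stays symmetric
```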
Conference Paper
The number partitioning problem is to divide a given set of integers into a collection of subsets, so that the sums of the numbers in the subsets are as nearly equal as possible. While a very efficient algorithm exists for optimal two-way partitioning, it is not nearly as effective for multi-way partitioning. We develop two new linear-space algorithms for multi-way partitioning, and demonstrate their performance on three-, four-, and five-way partitioning. In each case, our algorithms outperform the previous state of the art by orders of magnitude, in one case by over six orders of magnitude. Empirical analysis of the running times of our algorithms strongly suggests that their asymptotic growth is less than that of previous algorithms. The key insight behind both our new algorithms is that if an optimal k-way partition includes a particular subset, then optimally partitioning the numbers not in that subset k − 1 ways results in an optimal k-way partition.
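To make the problem concrete, here is the classic greedy (longest-processing-time) heuristic for k-way partitioning. It is only a simple baseline, not the paper's linear-space optimal algorithms: place each number, largest first, into the subset with the currently smallest sum.

```python
import heapq

def greedy_partition(nums, k):
    """Greedy k-way number partitioning: assign each number, largest first,
    to the subset with the smallest running sum. Returns the sorted subset
    sums. A baseline heuristic only; it is not guaranteed optimal."""
    heap = [(0, i, []) for i in range(k)]   # (subset sum, tie-break index, members)
    heapq.heapify(heap)
    for n in sorted(nums, reverse=True):
        total, i, members = heapq.heappop(heap)
        members.append(n)
        heapq.heappush(heap, (total + n, i, members))
    return sorted(total for total, _, _ in heap)

sums = greedy_partition([8, 7, 6, 5, 4], 2)   # greedy gives [13, 17]; optimal is [15, 15]
```

The gap between the greedy result and the optimum ([15, 15] here, via 8+7 and 6+5+4) is exactly what the exact multi-way algorithms close.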
Article
A plot of a cumulative dose-volume frequency distribution, commonly known as a dose-volume histogram (DVH), graphically summarizes the simulated radiation distribution within a volume of interest of a patient that would result from a proposed radiation treatment plan. DVHs show promise as tools for comparing rival treatment plans for a specific patient by clearly presenting the uniformity of dose in the target volume and any hot spots in adjacent normal organs or tissues. However, because of the loss of positional information in the volume(s) under consideration, a DVH should not be the sole criterion for plan evaluation. DVHs can also be used as input data to estimate tumor control probability (TCP) and normal tissue complication probability (NTCP). The sensitivity of TCP and NTCP calculations to small changes in the DVH shape points to the need for an accurate method for computing DVHs. We present a discussion of the methodology for generating and plotting DVHs, some caveats, limitations on their use, and the general experience of four hospitals using DVHs.
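The cumulative DVH itself is a simple computation once the per-voxel doses are known: for each dose level, record the fraction of the structure's volume receiving at least that dose. A minimal sketch, assuming equal-volume voxels:

```python
import numpy as np

def cumulative_dvh(dose, levels):
    """Cumulative DVH: for each dose level d, the fraction of the structure's
    volume receiving at least d. Assumes all voxels have equal volume."""
    dose = np.asarray(dose, dtype=float)
    return np.array([(dose >= d).mean() for d in levels])

# Four voxels at 10/20/30/40 Gy: all receive >= 0 Gy, half receive >= 25 Gy,
# none receive >= 50 Gy.
dvh = cumulative_dvh([10.0, 20.0, 30.0, 40.0], levels=[0.0, 25.0, 50.0])
```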
Article
In this work a novel plan optimization platform is presented where treatment is delivered efficiently and accurately in a single dynamically modulated arc. Improvements in patient care achieved through image-guided positioning and plan adaptation have resulted in an increase in overall treatment times. Intensity-modulated radiation therapy (IMRT) has also increased treatment time by requiring a larger number of beam directions, increased monitor units (MU), and, in the case of tomotherapy, a slice-by-slice delivery. In order to maintain a similar level of patient throughput it will be necessary to increase the efficiency of treatment delivery. The solution proposed here is a novel aperture-based algorithm for treatment plan optimization where dose is delivered during a single gantry arc of up to 360 deg. The technique is similar to tomotherapy in that a full 360 deg of beam directions are available for optimization but is fundamentally different in that the entire dose volume is delivered in a single source rotation. The new technique is referred to as volumetric modulated arc therapy (VMAT). Multileaf collimator (MLC) leaf motion and the number of MU per degree of gantry rotation are restricted during the optimization so that gantry rotation speed, leaf translation speed, and dose rate maxima do not excessively limit the delivery efficiency. During planning, investigators model continuous gantry motion by a coarse sampling of static gantry positions and fluence maps or MLC aperture shapes. The technique presented here is unique in that gantry and MLC position sampling is progressively increased throughout the optimization. Using the full gantry range will theoretically provide increased flexibility in generating highly conformal treatment plans. In practice, the additional flexibility is somewhat negated by the additional constraints placed on the amount of MLC leaf motion between gantry samples.
A series of studies are performed that characterize the relationship between gantry and MLC sampling, dose modeling accuracy, and optimization time. Results show that gantry angle and MLC sample spacing as low as 1 deg and 0.5 cm, respectively, are desirable for accurate dose modeling. It is also shown that reducing the sample spacing dramatically reduces the ability of the optimization to arrive at a solution. The competing benefits of having small and large sample spacing are mutually realized using the progressive sampling technique described here. Preliminary results show that plans generated with VMAT optimization exhibit dose distributions equivalent or superior to static gantry IMRT. Timing studies have shown that the VMAT technique is well suited for on-line verification and adaptation with delivery times that are reduced to approximately 1.5-3 min for a 200 cGy fraction.
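The progressive-sampling idea described above can be illustrated by the sampling schedule alone: start with a coarse set of static gantry control points on the arc and repeatedly refine it. This sketch only shows the schedule; the actual VMAT optimization interleaves each refinement with replanning, and the starting count and doubling rule here are illustrative assumptions.

```python
import numpy as np

def progressive_samples(levels, arc_deg=360.0, start=8):
    """Illustrative progressive sampling of a single arc: each refinement
    level doubles the number of static gantry control points approximating
    the continuous rotation."""
    out, n = [], start
    for _ in range(levels):
        out.append(np.linspace(0.0, arc_deg, n, endpoint=False))
        n *= 2
    return out

schedule = progressive_samples(4)   # 8, 16, 32, 64 control points over 360 deg
```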