Optimizing Large-Scale Linear Energy System
Problems with Block Diagonal Structure by
Using Parallel Interior-Point Methods
Thomas Breuer¹, Michael Bussieck², Karl-Kiên Cao³, Felix Cebulla³, Frederik Fiand², Hans Christian Gils³, Ambros Gleixner⁴, Dmitry Khabi⁵, Thorsten Koch⁴, Daniel Rehfeldt⁴, and Manuel Wetzel³
¹ Jülich Supercomputing Centre (JSC), Forschungszentrum Jülich GmbH
² GAMS Software GmbH
³ German Aerospace Center (DLR)
⁴ Zuse Institute Berlin / Technical University Berlin
⁵ High Performance Computing Center Stuttgart (HLRS)
Abstract. Current linear energy system models (ESMs), which aim to provide sufficient detail and reliability, frequently give rise to problems of both high complexity and large scale. Unfortunately, the size and complexity of these problems often prove to be intractable even for commercial state-of-the-art linear programming solvers. This article describes an interdisciplinary approach to exploit the intrinsic structure of these large-scale linear problems in order to solve them on massively parallel high-performance computers. A key aspect is the extension of the parallel interior-point solver PIPS-IPM, which was originally developed for stochastic optimization problems. Furthermore, a newly developed GAMS interface to the solver as well as some GAMS language extensions to model block-structured problems are described.
Keywords: energy system models, linear programming, interior-point
methods, parallelization, high performance computing
1 Introduction
Energy system models (ESMs) have versatile fields of application. For example, they can be used to gain insights into the design of future energy supply systems. Increasing decentralization and the need for more flexibility caused by the temporal fluctuations of solar and wind power lead to an increasing spatial and temporal granularity of ESMs. As a consequence, state-of-the-art solvers reach their limits for certain model instances.
A distinctive characteristic of many linear programs (LPs) arising from ESMs is their block-diagonal structure with both linking variables and linking constraints. This article sketches an extension of the parallel interior-point solver PIPS-IPM [5] to handle LPs with this characteristic. The extended solver is designed to make use of the massively parallel power of high performance computing (HPC) platforms.
Furthermore, this article introduces an interface between PIPS-IPM (including its new extension) and energy system models implemented in GAMS, a high-level modeling language for mathematical optimization problems. In particular, it describes how users can annotate their models to communicate the problem structure to PIPS-IPM. Since finding a proper block structure annotation for a complex ESM is not trivial, we exemplify the annotation process for the ESM REMix [3]. With many ESMs implemented in GAMS, the new interface between GAMS and PIPS-IPM makes the solver available to the energy modeling community.
2 A Specialized Parallel Interior Point Solver
When it comes to solving linear programs (LPs), the two predominant algorithmic approaches to choose from are the simplex method and interior-point methods, see e.g. [6]. Since interior-point methods are often more successful for large problems, in particular for ESMs [1], this approach was chosen for the LPs at hand. Mathematically, a salient characteristic of these LPs is their block-diagonal structure with both linking constraints and linking variables, as depicted below:
\[
\begin{array}{llllllll}
\min\; & c^T x \\
\text{s.t.}\; & T_0 x_0 & & & & & = h_0 & (eq_0)\\
 & T_1 x_0 & +\, W_1 x_1 & & & & = h_1 & (eq_1)\\
 & T_2 x_0 & & +\, W_2 x_2 & & & = h_2 & (eq_2)\\
 & \;\;\vdots & & & \ddots & & \;\;\vdots \\
 & T_N x_0 & & & & +\, W_N x_N & = h_N & (eq_N)\\
 & F_0 x_0 & +\, F_1 x_1 & +\, F_2 x_2 & \;\cdots\; & +\, F_N x_N & = h_{N+1}, & (eq_{N+1})
\end{array}
\]
with $x = (x_0, x_1, \ldots, x_N)$. The linking variables are represented by the vector $x_0$, whereas the linking constraints are described by the matrices $F_0, \ldots, F_N$ and the vector $h_{N+1}$. The approach to solve this LP is based on the parallel interior-point solver PIPS-IPM [5], which was originally developed for solving stochastic linear programs. However, PIPS-IPM in its original form cannot handle problems with linking constraints. In recent months, the authors of this paper have extended PIPS-IPM such that it can now handle LPs with both linking constraints and linking variables.
PIPS-IPM and its new extension make use of the Message Passing Interface (MPI) for communication between their parallel processes, which will in the following be referred to as MPI-processes. While the details of the solution process are beyond the scope of this article, an important feature is that the whole LP can be distributed among the MPI-processes, with no process needing to store the entire problem. This makes it possible to tackle problems that are too large to be stored in the main memory of a single desktop machine. The main principle is that for each index $i \in \{0, 1, \ldots, N\}$ all $x_i$, $h_i$, $T_i$, and $W_i$ (for $i > 0$) need to be available in the same MPI-process; $h_{N+1}$ needs to be assigned to the MPI-process handling $i = 0$. Moreover, each MPI-process needs access to the current value of $x_0$. The distribution is exemplified below for the case that the data for both $i = 0$ and $i = 1$ is assigned to the same MPI-process: this process then stores $x_0$, $x_1$, $h_0$, $h_1$, $h_{N+1}$, $T_0$, $T_1$, and $W_1$:
\[
\begin{array}{lllllll}
\min\; & \multicolumn{6}{l}{c_0^T x_0 + c_1^T x_1 + c_2^T x_2 + \cdots + c_N^T x_N} \\
\text{s.t.}\; & T_0 x_0 & & & & & = h_0\\
 & T_1 x_0 & +\, W_1 x_1 & & & & = h_1\\
 & T_2 x_0 & & +\, W_2 x_2 & & & = h_2\\
 & \;\;\vdots & & & \ddots & & \;\;\vdots \\
 & T_N x_0 & & & & +\, W_N x_N & = h_N\\
 & F_0 x_0 & +\, F_1 x_1 & +\, F_2 x_2 & \;\cdots\; & +\, F_N x_N & = h_{N+1}
\end{array}
\]
The maximum number of MPI-processes that can be used is $N$; in the opposite extreme case, the whole LP is assigned to a single MPI-process.
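Stated slightly more formally, the distribution rule just described can be summarized as follows (the assignment map $\pi$ is our notation, not taken from the original solver description): if blocks are assigned to MPI-processes by a map $\pi : \{0, 1, \ldots, N\} \to \{1, \ldots, P\}$, then process $p$ holds the data
\[
D_p \;=\; \{\, x_i,\ h_i,\ T_i,\ W_i \ (\text{for } i > 0) \ :\ \pi(i) = p \,\}\ \cup\ \{\, h_{N+1} \ :\ \pi(0) = p \,\},
\]
and, in addition, every process has access to the current value of the linking variables $x_0$.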
The extension of PIPS-IPM has already been successfully tested on small-scale ESM problems with several thousand constraints and variables, using up to 32 MPI-processes.
3 Communicating Block Structured GAMS Models to
PIPS-IPM
A recently implemented GAMS/PIPS-IPM interface that takes the special characteristics of HPC platforms into account makes the solver available to a broader audience. This section serves two purposes: it outlines how users can annotate their GAMS models to provide a processable representation of the model's block structure, and it provides insights into some technical aspects of the GAMS/PIPS-IPM-Link.
3.1 Annotating GAMS Models to Communicate Block Structures
Automatic detection of block structures in models is challenging, and hence processable block structure information provided by the user, based on their deep understanding of the model, is often preferable. It is important to note that there is no unique block structure in a model; there are many possible ones, depending on how the rows and columns of the corresponding matrix are permuted. For ESMs, blocks may for example be formed by regions or time steps, as elaborated in section 4.
GAMS provides facilities that allow complex processable model annotations [2]. The modeler can assign stages to variables via the attribute <variable name>.stage. That functionality originates from multistage stochastic programming and can also be used to annotate the block structure of a model to be solved with PIPS-IPM. Once the block membership of all variables is annotated, the block membership of the constraints can in principle be derived from that annotation.
However, manual annotation of constraints in a similar fashion is also possible and allows consistency checks on the annotation to detect potential mistakes. Without diving deeply into the details of the GAMS language, the functionality can be demonstrated with a simple example based on the block structure introduced in section 2. The following pseudo-annotation would assign stages to all variables $x_i$ to indicate their block membership:
\[
x_i.\text{stage} = i, \qquad i \in \{0, 1, \ldots, N\}.
\]
Linking variables are those assigned to stage 0. Similarly, constraints could also be annotated, where stage-0 constraints are those containing only linking variables. Constraints assigned to stages $1, \ldots, N$ are those incorporating only variables from the corresponding block plus linking variables, and finally constraints assigned to stage $N+1$ are the linking ones. Note that this exemplary pseudo-annotation may seem obvious and simple, but finding a good block structure annotation for a complex model is not trivial. The main challenge is not to find an annotation that is correct in the mathematical sense, but to find one that exploits the power of PIPS-IPM best. A desirable annotation reveals a block structure with many independent blocks of similar size, while the set of linking variables and linking constraints remains small.
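As an illustration, the following minimal GAMS sketch annotates a small hypothetical model following the stage convention described above. All sets, variables, and equations are invented for illustration, and the exact stage numbering and solver invocation expected by the GAMS/PIPS-IPM-Link may differ.

* Minimal sketch of a block-structure annotation (hypothetical model);
* stage convention as in the text: 0 = linking variables,
* 1..N = blocks, N+1 = linking constraints.
Set b "blocks, e.g. regions or time slices" / b1*b4 /;

Positive Variable x0   "linking variable (stage 0)";
Positive Variable x(b) "block-local variables";
Variable          z    "objective value";

Equation obj        "objective definition";
Equation blockEq(b) "block-local constraints";
Equation linkEq     "linking constraint";

obj..        z =e= x0 + sum(b, x(b));
blockEq(b).. x(b) + x0 =g= 1;
linkEq..     sum(b, x(b)) =l= 3;

* block membership via the stage attribute
x0.stage         = 0;
x.stage(b)       = ord(b);
blockEq.stage(b) = ord(b);
linkEq.stage     = card(b) + 1;
* z and obj are left unannotated here; the handling of the objective
* row is specific to the solver link and omitted in this sketch.

Model blockModel / all /;
solve blockModel using lp minimizing z;

In a real ESM, the stage assignment would of course be driven by the model's own sets, for example assigning all variables of a region, or of a group of time steps, to the same stage.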
3.2 The GAMS/PIPS-IPM-Link
Currently, the GAMS/PIPS-IPM-Link implements the connection between the modeling language and the solver as a two-phase process. Phase 1, the model generation, is followed by phase 2, in which PIPS-IPM pulls the previously generated model via its callback interface and solves the problem.
So far, model generation has been a sequential process in which GAMS generates one constraint after another. For the majority of applications this is fine, as model generation is usually fast and its time consumption is negligible compared to the time needed to solve the actual problem. However, some ESMs may result in sizeable LPs for which the model generation time becomes relevant. Hence, it is worth mentioning that the previously introduced annotation can also serve as a basis for generating the model in a distributed fashion. Instead of generating one large monolithic model, many small model blocks can be generated in parallel to exploit the power of HPC architectures already during model generation.
4 Structuring Energy System Models for PIPS-IPM
In order to distribute all blocks of the full-scale ESM to the computing nodes of an HPC architecture, a problem-specific model annotation has to be provided. Based on the modeler's knowledge about the problem at hand, the number of blocks and the block structure have to be decided upon, which corresponds directly to the assignment of variables to blocks. The semantic information of variables can help during this process to distinguish between variables belonging to the same block and linking variables connecting multiple blocks.
The concurrency of supply and demand of electrical energy necessitates a balancing constraint for every region and time step. While in theory these balancing constraints could be solved independently, the transport of energy between regions and the storage of energy require an integrated optimization of all regions and time steps. The number of variables and constraints that become linking due to the assignment of the block structure depends strongly on these spatial and temporal interconnections. Transport of energy between two regions is typically represented by dispatch variables, which become linking variables if their respective regions have been assigned to different blocks. State-of-charge variables for energy storage consider the state of charge of the previous time step and therefore lead to a large number of linking constraints if each time step is represented by a single block. Typically, ESMs also comprise boundary conditions that link both regions and time steps, e.g. global and annual emission limits. Details on the REMix model enhanced here are provided in [3]. The high number of linking variables and constraints leads to a trade-off between speedup and parallelism that will have to be studied systematically in future numerical experiments.
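To make the temporal coupling concrete, a generic storage balance for region $r$ and time step $t$ can be written as (a sketch with generic symbols, not the exact REMix formulation)
\[
SOC_{r,t} \;=\; SOC_{r,t-1} \;+\; \eta^{\mathrm{ch}}\, P^{\mathrm{ch}}_{r,t}\,\Delta t \;-\; \frac{1}{\eta^{\mathrm{dis}}}\, P^{\mathrm{dis}}_{r,t}\,\Delta t .
\]
Whenever $t$ and $t-1$ are assigned to different blocks, this constraint couples two blocks and therefore becomes a linking constraint; analogously, a transport flow between two regions becomes a linking variable if the regions belong to different blocks.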
Figure 1 shows the non-zero entries of the ESM matrix on the left-hand side and the underlying block structure revealed after permutation of the matrix on the right-hand side. Linking variables and constraints are marked in dark gray, while PIPS-IPM blocks are marked in light gray.
Fig. 1. Non-zero entries of the ESM and permuted matrix with block structure
5 Summary and Outlook
Large-scale LPs emerging from ESMs that are computationally intractable for today's state-of-the-art LP solvers motivate the need for new solution approaches. To serve these needs, extensions to the parallel interior-point solver PIPS-IPM that exploit the parallel power of high performance computers have been implemented. In the future, the solver will be made available to the ESM community via a GAMS/PIPS-IPM interface.
The integration of HPC specialists in the development process ensures that the peculiarities of the targeted HPC platforms are considered at an early stage of development. PIPS-IPM is developed and tested on several target platforms such as the petaflops systems Hazel Hen at HLRS and JURECA at JSC, as well as on many-core platforms like JUQUEEN and modern Intel Xeon Phi processors. Workflow automation tools explicitly designed for HPC applications, such as JUBE [4], support development and execution by simplifying the usage of workload managers like PBS and Slurm.
Initial computational experiments already show the capability of the extended PIPS-IPM version to solve the ESM problems at hand, although so far only at a small scale. However, the good scaling behavior and the results of the original PIPS-IPM in solving large-scale problems [5] suggest that the approach described in this article may ultimately lead to a solver that can tackle currently unsolvable large-scale ESMs.
Extensions to the GAMS/PIPS-IPM-Link will finally integrate the current multi-phase workflow (see section 3.2) into one seamless process, giving energy system modelers a workflow similar to that of conventional LP solvers.
Acknowledgements
The described research activities are funded by the Federal Ministry for Economic Affairs and Energy within the BEAM-ME project (ID: 03ET4023A-F). Ambros Gleixner was supported by the Research Campus MODAL Mathematical Optimization and Data Analysis Laboratories funded by the Federal Ministry of Education and Research (BMBF Grant 05M14ZAM).
References
1. Cao, K., Gleixner, A., Miltenberger, M.: Methoden zur Reduktion der Rechenzeit linearer Optimierungsmodelle in der Energiewirtschaft - Eine Performance-Analyse. In: EnInnov 2016: 14. Symposium Energieinnovation (2016)
2. Ferris, M.C., Dirkse, S.P., Jagla, J., Meeraus, A.: An Extended Mathematical Programming Framework. Computers & Chemical Engineering 33, 1973-1982 (2009). doi:10.1016/j.compchemeng.2009.06.013
3. Gils, H.C., Scholz, Y., Pregger, T., de Tena, D.L., Heide, D.: Integrated modelling of variable renewable energy-based power supply in Europe. Energy 123, 173-188 (2017). doi:10.1016/j.energy.2017.01.115
4. Luehrs, S., et al.: Flexible and Generic Workflow Management. doi:10.3233/978-1-61499-621-7-431
5. Petra, C.G., Schenk, O., Anitescu, M.: Real-time Stochastic Optimization of Complex Energy Systems on High Performance Computers. Computing in Science & Engineering 16(5), 32-42 (2014)
6. Vanderbei, R.J.: Linear Programming: Foundations and Extensions. Springer (2014)
7. Schenk, O., Gärtner, K.: On Fast Factorization Pivoting Methods for Sparse Symmetric Indefinite Systems. Technical Report, Department of Computer Science, University of Basel (2004)