Figure - available from: Computing and Software for Big Science
Example of a Jupyter notebook mixing cells with Julia and Python code

Source publication
Article
Full-text available
Research in high energy physics (HEP) requires huge amounts of computing and storage, putting strong constraints on the code speed and resource usage. To meet these requirements, a compiled high-performance language is typically used; while for physicists, who focus on the application when developing the code, better research productivity pleads fo...

Similar publications

Article
Full-text available
We present tfQMRgpu, a GPU-accelerated iterative linear solver based on the transpose-free quasi-minimal residual (tfQMR) method. Designed for large-scale electronic structure calculations, particularly in the context of Korringa–Kohn–Rostoker density functional theory, tfQMRgpu efficiently handles block-sparse complex matrices arising from multipl...

Citations

... We chose the Julia language because it is performant, high level, has an overwhelmingly open source community, and has a mature and feature-full software ecosystem that supports nonlinear timeseries analysis ( §2.5). In several scientific domains authors provide similar arguments for the adoption of Julia for computational research [41][42][43]. ...
Article
Full-text available
In the nonlinear timeseries analysis literature, countless quantities have been presented as new “entropy” or “complexity” measures, often with similar roles. The ever-increasing pool of such measures makes creating a sustainable and all-encompassing software for them difficult both conceptually and pragmatically. Such software, however, would be an important tool that can help researchers make an informed decision about which measure to use and for which application, as well as accelerate novel research. Here we present ComplexityMeasures.jl, an easily extendable and highly performant open-source software that implements a vast selection of complexity measures. The software provides 1,638 measures with 3,841 lines of source code, averaging only 2.3 lines of code per exported quantity (version 3.7). This is made possible by its mathematically rigorous composable design. In this paper we discuss the software design and demonstrate how it can accelerate complexity-related research in the future. We carefully compare it with alternative software and conclude that ComplexityMeasures.jl outclasses the alternatives in several objective aspects of comparison, such as computational performance, overall number of measures, reliability, and extendability. ComplexityMeasures.jl is also a component of the DynamicalSystems.jl library for nonlinear dynamics and nonlinear timeseries analysis and follows open source development practices for creating a sustainable community of developers and contributors.
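The composable design the abstract credits for this economy can be sketched with Julia's multiple dispatch. This is a minimal illustration with hypothetical types (`OutcomeSpace`, `Histogram`, `Shannon`, `information` are our placeholder names, not the actual ComplexityMeasures.jl API): a discretization step and an information measure are written independently, so M measures and N discretizations need only M + N implementations instead of M × N.

```julia
# Hypothetical sketch of a composable measure design. An "outcome space"
# discretizes the data into a probability distribution; any information
# measure can then be computed on those probabilities.
abstract type OutcomeSpace end
struct Histogram <: OutcomeSpace
    nbins::Int
end

# Estimate a probability distribution over equally sized bins.
function probabilities(o::Histogram, x::AbstractVector)
    lo, hi = extrema(x)
    counts = zeros(Int, o.nbins)
    for v in x
        i = min(o.nbins, 1 + floor(Int, (v - lo) / (hi - lo + eps()) * o.nbins))
        counts[i] += 1
    end
    return counts ./ length(x)
end

abstract type InformationMeasure end
struct Shannon <: InformationMeasure end

# Any measure composes with any outcome space through this single entry point.
information(m::InformationMeasure, o::OutcomeSpace, x) =
    information(m, probabilities(o, x))
information(::Shannon, p::AbstractVector) =
    -sum(pi * log(pi) for pi in p if pi > 0)
```

A new measure or a new discretization is then a single short method definition, which is the kind of economy the line-count figures above suggest.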
... Finally, note that we compare our Algorithm 2, implemented in Julia, with two well-known libraries, Qhull [64] and CGAL [65], both of which are implemented in C++. [69] provides an extensive efficiency comparison between C++ and Julia, showing that the two languages are generally comparable in performance, with each being faster in certain scenarios. It is worth noting that when Julia is faster, the speedup factor is usually small. ...
Article
Full-text available
For a given finite set X and an approximation parameter δ ≥ 0, a convex polygon or polyhedron P^inner is called an inner δ-approximation of the convex hull conv X of X if conv X contains P^inner and the Hausdorff distance between them is not greater than δ. In this paper, two algorithms for computing inner δ-approximations in 2D are developed. This approximation approach can reduce the computation time. For example, if X consists of 1,000,000 random points in an ellipse, the computation time can be reduced by 11.20% if one chooses δ equal to 10⁻⁴ times the diameter of this ellipse. By choosing δ = 0, our algorithms can be applied to quickly determine the exact convex hull conv X. Numerical experiments confirm that their time complexity is linear in n if X consists of n random points in ellipses or rectangles. Compared to others, our Algorithm 2 is much faster than the Quickhull algorithm in the Qhull library, which is faster than all 2D convex hull functions in CGAL (the Computational Geometry Algorithms Library). If X consists of n = 100,000 random points in an ellipse or a rectangle, Algorithm 2 is 5.17 or 18.26 times faster than Qhull, respectively. The speedup factors of our algorithms increase with n. E.g., if X consists of n = 46,200,000 random points in an ellipse or a rectangle, the speedup factors of Algorithm 2 compared to Qhull are 8.46 and 22.44, respectively.
... This difference in linear solvers could also contribute to the performance difference. In terms of the performance differences between the programming languages C++ and Julia, some studies suggest that Julia might be slightly faster for some operations like matrix multiplication, but not fast enough to explain the relatively large differences between CADET-DG and CADET-Julia in simulation time (Eschle et al., 2023). Regarding the different SMA implementations, the speed-ups for the SMA case studies align with the general speed-up trend, meaning the differences in the SMA implementation are probably not contributing significantly to the differences in the speed-up. ...
... Julia's flexibility and speed make it an ideal choice for developing new lattice QCD algorithms, as it allows for rapid prototyping without sacrificing performance. Thanks to these advantages, Julia is an attractive language for high energy physics [8]. ...
Preprint
Full-text available
We develop a new lattice gauge theory code set, JuliaQCD, using the Julia language. Julia is well-suited for integrating machine learning techniques and enables rapid prototyping and execution of algorithms for four-dimensional QCD and other non-Abelian gauge theories. The code leverages LLVM for high-performance execution and supports MPI for parallel computations. Julia's multiple dispatch provides a flexible and intuitive framework for development. The code implements existing algorithms such as Hybrid Monte Carlo (HMC), supports many colors and flavors, lattice fermions, smearing techniques, and full QCD simulations. It is designed to run efficiently across various platforms, from laptops to supercomputers, allowing for seamless scalability. The code set is currently available on GitHub https://github.com/JuliaQCD.
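The HMC update mentioned in the abstract can be illustrated on a toy single-variable Gaussian action S(ϕ) = ϕ²/2 (this is our own pedagogical sketch, not JuliaQCD code; a full QCD implementation applies the same leapfrog plus accept/reject structure to gauge links):

```julia
# Toy Hybrid Monte Carlo: leapfrog integration of fictitious dynamics,
# followed by a Metropolis accept/reject on the energy change ΔH.
S(ϕ) = ϕ^2 / 2          # action
dSdϕ(ϕ) = ϕ             # its derivative (the molecular-dynamics "force")

function hmc_step(ϕ, ϵ, nsteps)
    p = randn()                        # refresh the conjugate momentum
    H0 = p^2 / 2 + S(ϕ)
    ϕ′, p′ = ϕ, p
    p′ -= ϵ/2 * dSdϕ(ϕ′)               # leapfrog: half momentum step
    for _ in 1:nsteps-1
        ϕ′ += ϵ * p′
        p′ -= ϵ * dSdϕ(ϕ′)
    end
    ϕ′ += ϵ * p′                       # final position step
    p′ -= ϵ/2 * dSdϕ(ϕ′)               # final half momentum step
    ΔH = p′^2 / 2 + S(ϕ′) - H0
    return rand() < exp(-ΔH) ? ϕ′ : ϕ  # Metropolis accept/reject
end

function run_chain(n; ϵ=0.2, nsteps=10)
    ϕ = 0.0
    samples = Vector{Float64}(undef, n)
    for i in 1:n
        ϕ = hmc_step(ϕ, ϵ, nsteps)
        samples[i] = ϕ
    end
    return samples
end
```

For this action the chain should reproduce ⟨ϕ²⟩ = 1; the leapfrog's reversibility and the accept/reject step make the update exact even at finite step size ϵ.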
... The Julia programming language, which was specifically designed to let researchers use a single language offering both high performance and ease of programming, has attracted the attention of our community in the last couple of years [1,2]. ...
... In the comparison performed in Ref. [2] of the running speed for different algorithms implemented in Julia, Python, and C/C++, whose results are reproduced in Fig. 4, we see that Julia provides performance similar to C/C++, while Python can be up to two orders of magnitude slower. ...
... The dynamic multiple dispatch design allows for a massive code reuse and sharing. A detailed comparison of polymorphism mechanisms in C++, Python, and Julia and a discussion of its impact on code reuse can be found in Ref. [2]. ...
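The code-reuse benefit of multiple dispatch noted in this snippet can be sketched in a few lines (a minimal illustration with HEP-flavoured types of our own invention, not code from Ref. [2]): a generic algorithm is written once against an abstract type, and a more specific method can be added later without touching existing code, something that requires inheritance hierarchies in C++ or explicit type checks in Python.

```julia
# One generic implementation shared by every Particle subtype, plus an
# optional specialized method selected automatically by the dispatcher.
abstract type Particle end
struct Electron <: Particle; px::Float64; py::Float64; pz::Float64; E::Float64 end
struct Photon   <: Particle; px::Float64; py::Float64; pz::Float64; E::Float64 end

# Generic invariant mass, valid for any pair of particles.
invmass(a::Particle, b::Particle) = sqrt(max(0.0,
    (a.E + b.E)^2 - (a.px + b.px)^2 - (a.py + b.py)^2 - (a.pz + b.pz)^2))

# Added later without modifying the generic code: photons are massless,
# so the diphoton mass simplifies to 2(E₁E₂ - p⃗₁·p⃗₂).
invmass(a::Photon, b::Photon) = sqrt(max(0.0,
    2 * (a.E * b.E - a.px * b.px - a.py * b.py - a.pz * b.pz)))
```

The dispatcher picks the most specific applicable method at each call site, so downstream code calling `invmass` never needs to change when new particle types or specializations are introduced.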
Article
Full-text available
The Julia programming language was created 10 years ago and is now a mature and stable language with a large ecosystem including more than 8,000 third-party packages. It was designed for scientific programming to be a high-level and dynamic language, as Python is, while achieving runtime performance comparable to C/C++ or even faster. With this, we ask ourselves whether the Julia language and its ecosystem are now ready for adoption by the High Energy Physics community. We report on a number of investigations and studies of the Julia language that have been done for various representative HEP applications, ranging from computing-intensive initial processing of experimental data and simulation to final interactive data analysis and plotting. Aspects of collaborative development of large software within a HEP experiment have also been investigated: scalability with large development teams, continuous integration and code tests, code reuse, language interoperability to enable an adiabatic migration of packages and tools, software installation and distribution, training of the community, and benefit from developments in industry and academia in other fields.
... Such time steps result in orders of magnitude speedups when simulating collapse of an inflaton field in the early universe (Musoke et al., 2020). Julia (Bezanson et al., 2017) has seen increasing use in scientific computing; see for example Eschle et al. (2023) and Roesch et al. (2021) for overviews of its use in high energy physics and biology. The use of Julia is one of the choices that separates UltraDark.jl ...
Article
Full-text available
In this mini review, we propose the use of the Julia programming language and its software as a strong candidate for reproducible, efficient, and sustainable physiological signal analysis. First, we highlight available software and Julia communities that provide top-of-the-class algorithms for all aspects of physiological signal processing despite the language’s relatively young age. Julia can significantly accelerate both research and software development due to its high-level interactive language and high-performance code generation. It is also particularly suited for open and reproducible science. Openness is supported and welcomed because the overwhelming majority of Julia software programs are open source and developed openly on public platforms, primarily through individual contributions. Such an environment increases the likelihood that an individual not (originally) associated with a software program would still be willing to contribute their code, further promoting code sharing and reuse. On the other hand, Julia’s exceptionally strong package manager and surrounding ecosystem make it easy to create self-contained, reproducible projects that can be instantly installed and run, irrespective of processor architecture or operating system.
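The self-contained, reproducible project workflow this abstract describes rests on Julia's built-in `Pkg` standard library. A minimal sketch (the temporary directory stands in for a real project folder; `Pkg.activate` and `Pkg.instantiate` are the standard API):

```julia
# Pkg records exact dependency versions in Project.toml and Manifest.toml,
# so an environment can be recreated byte-for-byte on another machine.
using Pkg

dir = mktempdir()        # placeholder for a real project directory
Pkg.activate(dir)        # create/activate an isolated project environment
# Pkg.add("Example")     # would record the dependency in Project.toml/Manifest.toml

# On another machine (any OS or architecture), the identical environment is
# reinstalled from the recorded manifest with:
#   Pkg.activate(dir); Pkg.instantiate()
```

Because every project carries its own manifest, two analyses with conflicting dependency versions coexist on one machine without interference, which is what makes the "instantly installed and run" claim above practical.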
Article
Full-text available
We present tools for high-performance analysis written in pure Julia, a just-in-time (JIT) compiled dynamic programming language with a high-level syntax and performance. The packages we present center around UnROOT.jl, a pure Julia ROOT file I/O package that is optimized for speed, lazy reading, flexibility and thread safety. We discuss what affects performance in Julia, the challenges, and their solutions during the development of UnROOT.jl. We highlight type stability as a challenge and discuss its implication whenever any “compilation” happens (incl. Numba, Jax, C++) as well as Julia’s specific ones. We demonstrate the performance and “easy to use” claim by comparing UnROOT.jl against popular alternatives (RDataFrame, Uproot, etc.) in medium-size realistic benchmarks, comparing both performance and code complexity. Finally, we also showcase real ATLAS analysis workflows both locally and on an HPC system, highlighting the composability of UnROOT.jl with multithread/process and out-of-core distributed computing libraries.
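The type-stability challenge this abstract highlights is a general property of JIT-compiled Julia and can be illustrated without any ROOT I/O (our own minimal sketch, not UnROOT.jl internals): when a variable's type changes mid-function, the compiler must emit generic boxed code instead of tight machine code.

```julia
# Type-unstable: `acc` starts as Int and becomes Float64 on the first
# addition, so the compiler infers a Union type for it.
function sum_unstable(xs)
    acc = 0
    for x in xs
        acc += x
    end
    return acc
end

# Type-stable: the accumulator's type is fixed from the input's element
# type, so the compiler emits specialized code for the whole loop.
function sum_stable(xs)
    acc = zero(eltype(xs))
    for x in xs
        acc += x
    end
    return acc
end
```

Running `@code_warntype sum_unstable(rand(10))` highlights the inferred `Union{Int64, Float64}` accumulator; the stable variant infers a concrete `Float64` throughout. Analogous instabilities arise in any JIT setting (Numba, JAX) whenever inferred types depend on runtime values.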
Article
Full-text available
The evaluation of new computing languages for a large community, like HEP, involves comparison of many aspects of the languages’ behaviour, ecosystem and interactions with other languages. In this paper we compare a number of languages using a common, yet non-trivial, HEP algorithm: the anti-kT clustering algorithm used for jet finding. We compare specifically the algorithm implemented in Python (pure Python and accelerated with numpy and numba), and Julia, with respect to the reference implementation in C++, from Fastjet. As well as the speed of the implementations we describe the ergonomics of the language for the coder, as well as the efforts required to achieve the best performance, which can directly impact code readability and sustainability.
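The distance measures at the heart of the anti-kT algorithm compared here fit in a few lines of Julia (an illustrative sketch, not the benchmarked implementation; the `PseudoJet` type and function names are ours). At each clustering step, the algorithm merges the pair with the smallest d_ij, or promotes pseudojet i to a final jet if its beam distance d_iB is smallest.

```julia
struct PseudoJet
    pt::Float64   # transverse momentum
    y::Float64    # rapidity
    ϕ::Float64    # azimuthal angle
end

# anti-kT corresponds to the exponent p = -1, i.e. weights 1/pt²,
# so clustering is driven by the hardest particles.
diB(j::PseudoJet) = 1 / j.pt^2

function dij(a::PseudoJet, b::PseudoJet, R::Float64)
    Δϕ = mod(a.ϕ - b.ϕ + π, 2π) - π    # wrap angle difference into (-π, π]
    ΔR² = (a.y - b.y)^2 + Δϕ^2
    return min(1 / a.pt^2, 1 / b.pt^2) * ΔR² / R^2
end
```

The benchmarked implementations differ mainly in how efficiently they maintain and update the set of pairwise minima around this distance, which is where the language-level performance differences discussed in the paper show up.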