
Abstract

A measurement of the inclusive b-jet production cross section is presented in pp and p-Pb collisions at √s_NN = 5.02 TeV, using data collected with the ALICE detector at the LHC. The jets were reconstructed in the central rapidity region |η| < 0.5 from charged particles using the anti-kT algorithm with resolution parameter R = 0.4. Identification of b jets exploits the long lifetime of b hadrons, using the properties of secondary vertices and impact parameter distributions. The pT-differential inclusive production cross section of b jets, as well as the corresponding inclusive b-jet fraction, are reported for pp and p-Pb collisions in the jet transverse momentum range 10 ≤ pT,ch jet ≤ 100 GeV/c, together with the nuclear modification factor, R_pPb^b-jet. The analysis thus extends the lower pT limit of b-jet measurements at the LHC. The nuclear modification factor is found to be consistent with unity, indicating that the production of b jets in p-Pb collisions at √s_NN = 5.02 TeV is not affected by cold nuclear matter effects within the current precision. The measurements are well reproduced by POWHEG NLO pQCD calculations with PYTHIA fragmentation.


... The production of charm and beauty jets in pp collisions at √s = 5.02 TeV and 13 TeV was measured with ALICE [1,2]. Charm jets were identified by the presence of a prompt D⁰ meson (reconstructed via its hadronic decay D⁰ → K⁻π⁺) among their constituents, while beauty jets were tagged by exploiting the wider impact parameter distribution of beauty-hadron decay particles. ...

... Within the experimental and theoretical uncertainties, the measurements are also in agreement with the POWHEG + PYTHIA 8 calculations. The pT,ch jet-differential inclusive production cross section of b jets, as well as the corresponding inclusive b-jet fraction, are reported in [2], and the measurements are well reproduced by POWHEG calculations with PYTHIA 8 fragmentation. ...

... The beauty-jet production cross section has been measured down to pT,ch jet = 10 GeV/c in p-Pb collisions [2]. The overall impact of cold nuclear matter effects on the resulting pT,ch jet-differential cross section can be quantified by means of the nuclear modification factor R_pPb^b-jet, defined as the ratio of the yield measured in p-Pb collisions to the expected yield that would be obtained from a superposition of independent pp collisions. ...
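The ratio just defined can be made concrete with a minimal numerical sketch. All cross-section values below are hypothetical and only illustrate the arithmetic: the p-Pb spectrum is divided by A times the pp reference, so a result near 1 signals the absence of cold nuclear matter effects.

```python
import numpy as np

A_PB = 208  # mass number of the lead nucleus

# Hypothetical pT-differential cross sections (arbitrary units), matching bins.
dsigma_dpt_pp  = np.array([50.0, 12.0, 3.0, 0.8])        # pp reference
dsigma_dpt_ppb = np.array([1.0e4, 2.5e3, 6.3e2, 1.6e2])  # p-Pb, unscaled

def r_ppb(ppb, pp, a=A_PB):
    """R_pPb = (1/A) * dsigma_pPb/dpT / (dsigma_pp/dpT), bin by bin."""
    return ppb / (a * pp)

print(r_ppb(dsigma_dpt_ppb, dsigma_dpt_pp))  # values close to unity
```

With these invented inputs every bin comes out within a few percent of unity, mimicking the consistency with unity reported above.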

The early production of heavy-flavor (HF, charm and beauty) quarks makes them an excellent probe of the dynamical evolution of quantum chromodynamics (QCD) systems. Jets tagged by the presence of a HF hadron give access to the kinematics of the heavy quarks, and along with correlation measurements involving HF hadrons allow for comparisons of their production, propagation and fragmentation across different systems. In this contribution the latest results on HF jets and correlations measured with the ALICE detector in pp, p-Pb and Pb-Pb collisions from the LHC Run 2 data are reported.

... In the LHC era this is used almost routinely for b or b̄ jets. Corresponding measurements were performed by the ATLAS [35], CMS [36], and (very recently) ALICE [37] collaborations. The ALICE experiment can reconstruct b-flavored jets down to extremely low transverse momenta, such as pT ≈ 10 GeV [37]. ...

... Corresponding measurements were performed by the ATLAS [35], CMS [36], and (very recently) ALICE [37] collaborations. The ALICE experiment can reconstruct b-flavored jets down to extremely low transverse momenta, such as pT ≈ 10 GeV [37]. The CMS collaboration recently measured jet shapes for b jets in pp collisions for the first time [38]. ...

We calculate differential cross sections for cc̄- and bb̄-dijet production in pp scattering at √s = 13 TeV in the kT-factorization and hybrid approaches with different unintegrated parton distribution functions (uPDFs). We present distributions in the transverse momentum and pseudorapidity of the leading jet, the rapidity difference between the jets, and the dijet invariant mass. Our results are compared to recent LHCb data on forward production of heavy-flavor dijets, measured for the first time individually for both charm and bottom flavors. We find that agreement between the predictions and the data within the full kT-factorization is strongly related to the modeling of the large-x behavior of the gluon uPDFs, which is usually not well constrained. The problem may be avoided by following the hybrid-factorization approach. Then we obtain a good description of the measured distributions with the parton-branching, Kimber-Martin-Ryskin, Kutak-Sapeta, and Jung setA0 models for the gluon uPDF. We also calculate differential distributions for the ratio of the cc̄ and bb̄ cross sections. In all cases we obtain a ratio close to 1, which is caused by the condition on the minimal jet transverse momentum (pT,jet > 20 GeV) introduced in the experiment, which makes the role of the heavy-quark mass almost negligible. The LHCb experimental ratio seems a bit larger than the theoretical predictions. We discuss the effect, potentially important for the ratio, of gluon radiative corrections from c or b quarks related to emission outside of the jet cone. The effect found seems rather small. A more refined calculation requires a full simulation of c and b jets, which goes beyond the scope of this paper.

... (Note that R influences the amount of background picked up by the jet clustering algorithm, as well as the fraction of the full parton shower that is typically contained in the jet. This analysis uses R = 0.4, a choice employed by several inclusive and heavy-flavour jet analyses [27,28].) The jets were required to be fully contained within the pseudorapidity region |η| < 0.8. ...

We present a systematic analysis of heavy-flavour production in the underlying event in connection to a leading hard process in pp collisions at √s = 13 TeV, using the PYTHIA 8 Monte Carlo event generator. We compare results from events selected by triggering on the leading hadron, as well as those triggered with reconstructed jets. We show that the kinematics of heavy-flavour fragmentation complicates the characterisation of the underlying event, and the usual method, which uses the leading charged final-state hadron as a trigger, may wash away the connection between the leading process and the heavy-flavour particle created in association with it. Events triggered with light or heavy-flavour jets, however, retain this connection and bring more direct information on the underlying heavy-flavour production process, but may also import unwanted sensitivity to gluon radiation. The methods outlined in the current work provide means to verify model calculations for light and heavy-flavour production in the jet and the underlying event in great detail.

... The reconstructed jets were categorized in 20 different pT,jet ranges, from 15 GeV up to 400 GeV. In the case of the charm and beauty jet samples, the corresponding heavy quark was required to fall within the cone of the selected jet, similarly to jet-tagging methods that are utilized in the experiment [29,30]. ...

It has recently been shown that a KNO-like scaling is fulfilled inside the jets, which indicates that the KNO scaling is violated by complex vacuum-QCD processes outside the jet development, such as single and double parton scattering or softer multiple-parton interactions. In the current work we investigated the scaling properties of heavy-flavor jets using Monte-Carlo simulations. We found that while jets from leading-order flavor-creation processes exhibit a flavor-dependent pattern, heavy-flavor jets from production in the parton shower follow the inclusive-jet pattern. This suggests that the KNO-like scaling is driven by initial hard parton production and not by processes in the later stages of the reaction.

Abstract

We present a model-independent determination of the nuclear parton distribution functions (nPDFs) using machine learning methods and Monte Carlo techniques based on the NNPDF framework. The neutral-current deep-inelastic nuclear structure functions used in our previous analysis, nNNPDF1.0, are complemented by inclusive and charm-tagged cross-sections from charged-current scattering. Furthermore, we include all available measurements of W and Z leptonic rapidity distributions in proton-lead collisions from ATLAS and CMS at √s = 5.02 TeV and 8.16 TeV. The resulting nPDF determination, nNNPDF2.0, achieves a good description of all datasets. In addition to quantifying the nuclear modifications affecting individual quarks and antiquarks, we examine the implications for strangeness, assess the role that the momentum and valence sum rules play in nPDF extractions, and present predictions for representative phenomenological applications. Our results, made available via the LHAPDF library, highlight the potential of high-energy collider measurements to probe nuclear dynamics in a robust manner.

Jet quenching in heavy ion collisions is expected to be accompanied by recoil effects, but unambiguous signals for the induced medium response have been difficult to identify so far. Here, we argue that modern jet substructure measurements can improve this situation qualitatively since they are sensitive to the momentum distribution inside the jet. We show that the groomed subjet shared momentum fraction z_g, and the girth of leading and subleading subjets, signal recoil effects with dependencies that are absent in a recoilless baseline. We find that recoil effects can explain most of the medium modifications to the z_g distribution observed in data. Furthermore, for jets passing the Soft Drop condition, recoil effects induce in the differential distribution of subjet separation ΔR_12 a characteristic increase with ΔR_12, and they introduce a characteristic enhancement of the girth of the subleading subjet with decreasing z_g. We explain why these qualitatively novel features, which we establish in JEWEL+PYTHIA simulations, reflect generic physical properties of recoil effects that should therefore be searched for as telltale signatures of jet-induced medium response.
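The groomed observables named above can be made concrete with a small sketch. This is an illustrative calculation, not the JEWEL+PYTHIA analysis itself: given the transverse momenta of the two leading subjets and their separation, it evaluates z_g and the standard Soft Drop condition; the kinematic values are invented.

```python
def z_g(pt1, pt2):
    """Groomed momentum-sharing fraction of the two leading subjets."""
    return min(pt1, pt2) / (pt1 + pt2)

def passes_soft_drop(pt1, pt2, dr12, R=0.4, z_cut=0.1, beta=0.0):
    """Soft Drop condition: z_g > z_cut * (dR12 / R)^beta."""
    return z_g(pt1, pt2) > z_cut * (dr12 / R) ** beta

# Example: a 60 GeV leading subjet with a 15 GeV partner at dR12 = 0.2
# gives z_g = 0.2, which passes the default (z_cut = 0.1, beta = 0) cut.
print(z_g(60.0, 15.0), passes_soft_drop(60.0, 15.0, 0.2))
```

For beta = 0 the condition reduces to a flat cut z_g > z_cut, which is the setting most commonly used in the measurements discussed here.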

We introduce a global analysis of collinearly factorized nuclear parton distribution functions (PDFs) including, for the first time, direct data constraints from the LHC proton-lead collisions. In comparison to our previous analysis, EPS09, where data only from charged-lepton-nucleus deep inelastic scattering (DIS), Drell-Yan (DY) dilepton production in proton-nucleus collisions and inclusive pion production in deuteron-nucleus collisions were used as input, we now increase the variety of data constraints to cover also neutrino-nucleus DIS as well as low-mass DY production in pion-nucleus collisions. The new LHC data significantly extend the kinematic reach of the available data constraints. The larger number of data points allows, in particular, to let much more freedom for the flavour dependence of nuclear effects than in other currently available analyses. As a result, especially the uncertainty estimates are now less biased and more objectively reflect the uncertainties flavour by flavour. From the new data, the neutrino DIS plays a pivotal role in obtaining a mutually consistent behaviour for both up and down valence quarks, and the LHC dijet data place clear constraints for the gluons at large momentum fraction. Mainly for insufficient statistics, the data for pion-nucleus DY and heavy gauge boson production in proton-lead collisions impose less visible constraints. The outcome of the analysis - a new set of next-to-leading order nuclear PDFs we call EPPS16 - is made available for applications in high-energy nuclear collisions.

We argue that contemporary jet substructure techniques might facilitate a more direct measurement of hard medium-induced gluon bremsstrahlung in heavy-ion collisions, and focus specifically on the "soft drop declustering" procedure that singles out the two leading jet substructures. Assuming coherent jet energy loss, we find an enhancement of the distribution of the energy fractions shared by the two substructures at small subjet energy caused by hard medium-induced gluon radiation. Departures from this approximation are discussed, in particular, the effects of colour decoherence and the contamination of the grooming procedure by soft background. Finally, we propose a complementary observable, that is the ratio of the two-pronged probability in Pb-Pb to proton-proton collisions and discuss its sensitivity to various energy loss mechanisms.

We present new parton distribution functions (PDFs) at next-to-next-to-leading order (NNLO) from the CTEQ-TEA global analysis of quantum chromodynamics. These differ from previous CT PDFs in several respects, including the use of data from LHC experiments and the new D0 charged-lepton rapidity asymmetry data, as well as the use of a more flexible parametrization of PDFs that, in particular, allows a better fit to different combinations of quark flavors. Predictions for important LHC processes, especially Higgs boson production at 13 TeV, are presented. These CT14 PDFs include a central set and error sets in the Hessian representation. For completeness, we also present the CT14 PDFs determined at the LO and the NLO in QCD. Besides these general-purpose PDF sets, we provide a series of (N)NLO sets with various αs values and additional sets in general-mass variable flavor number schemes, to deal with heavy partons, with up to three, four, and six active flavors.

This report reviews the study of open heavy-flavour and quarkonium production
in high-energy hadronic collisions, as tools to investigate fundamental aspects
of Quantum Chromodynamics, from the proton and nucleus structure at high energy
to deconfinement and the properties of the Quark-Gluon Plasma. Emphasis is
given to the lessons learnt from LHC Run 1 results, which are reviewed in a
global picture with the results from SPS and RHIC at lower energies, as well as
to the questions to be addressed in the future. The report covers heavy flavour
and quarkonium production in proton-proton, proton-nucleus and nucleus-nucleus
collisions. This includes discussion of the effects of hot and cold strongly
interacting matter, quarkonium photo-production in nucleus-nucleus collisions
and perspectives on the study of heavy flavour and quarkonium with upgrades of
existing experiments and new experiments. The report results from the activity
of the SaporeGravis network of the I3 Hadron Physics programme of the European
Union 7th Framework Programme.

In 2013, the Large Hadron Collider provided proton-lead and lead-proton collisions at the center-of-mass energy per nucleon pair √s_NN = 5.02 TeV. Van der Meer scans were performed for both configurations of colliding beams, and the cross section was measured for two reference processes, based on particle detection by the T0 and V0 detectors, with pseudorapidity coverage 4.6 < η < 4.9 and -3.3 < η < -3.0, and 2.8 < η < 5.1 and -3.7 < η < -1.7, respectively. Given the asymmetric detector acceptance, the cross section was measured separately for the two configurations. The measured visible cross sections are used to calculate the integrated luminosity of the proton-lead and lead-proton data samples, and to indirectly measure the cross section for a third, configuration-independent reference process, based on neutron detection by the Zero Degree Calorimeters.
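The last step described above, turning a measured visible cross section into an integrated luminosity, amounts to a one-line division; a hedged sketch with entirely hypothetical numbers:

```python
# Once a van der Meer scan fixes the visible cross section sigma_vis of a
# reference process, the integrated luminosity of a data sample is the number
# of recorded reference triggers divided by sigma_vis.

def integrated_luminosity(n_triggers, sigma_vis):
    """L_int = N / sigma_vis; units are the inverse of sigma_vis's units."""
    return n_triggers / sigma_vis

# Hypothetical: 4.2e8 reference triggers, sigma_vis = 2.1 b -> L_int in b^-1.
print(integrated_luminosity(4.2e8, 2.1))
```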

ALICE is the heavy-ion experiment at the CERN Large Hadron Collider. The experiment continuously took data during the first physics campaign of the machine from fall 2009 until early 2013, using proton and lead-ion beams. In this paper we describe the running environment and the data handling procedures, and discuss the performance of the ALICE detectors and analysis methods for various physics observables.

We study the incoherent multiple scattering effects on heavy meson production
in the backward rapidity region of p+A collisions within the generalized
high-twist factorization formalism. We calculate explicitly the double
scattering contributions to the heavy meson differential cross sections by
taking into account both initial-state and final-state interactions, and find
that these corrections are positive. We further evaluate the nuclear
modification factor for muons that come from the semi-leptonic decays of heavy
flavor mesons. Phenomenological applications in d+Au collisions at a
center-of-mass energy $\sqrt{s}=200$ GeV at RHIC and in p+Pb collisions at
$\sqrt{s}=5.02$ TeV at the LHC are presented. We find that incoherent multiple
scattering can describe rather well the observed nuclear enhancement in the
intermediate $p_T$ region for such reactions.

At the Large Hadron Collider, the identification of jets originating from b
quarks is important for searches for new physics and for measurements of
standard model processes. A variety of algorithms has been developed by CMS to
select b-quark jets based on variables such as the impact parameters of
charged-particle tracks, the properties of reconstructed decay vertices, and
the presence or absence of a lepton, or combinations thereof. The performance
of these algorithms has been measured using data from proton-proton collisions
at the LHC and compared with expectations based on simulation. The data used in
this study were recorded in 2011 at sqrt(s) = 7 TeV for a total integrated
luminosity of 5.0 inverse femtobarns. The efficiency for tagging b-quark jets
has been measured in events from multijet and t-quark pair production. CMS has
achieved a b-jet tagging efficiency of 85% for a light-parton misidentification
probability of 10% in multijet events. For analyses requiring higher purity, a
misidentification probability of only 1.5% has been achieved, for a 70% b-jet
tagging efficiency.

We present an updated set of parameters for the PYTHIA 8 event generator. We reevaluate the constraints imposed by LEP and SLD on hadronization, in particular with regard to heavy-quark fragmentation and strangeness production. For hadron collisions, we combine the updated fragmentation parameters with a new NNPDF2.3 LO PDF set. We use minimum-bias, Drell-Yan, and underlying-event data from the LHC to constrain the initial-state-radiation and multi-parton-interaction parameters, combined with data from SPS and the Tevatron to constrain the energy scaling. Several distributions show significant improvements with respect to the current defaults, for both ee and pp collisions, though we emphasize that interesting discrepancies remain in particular for strange particles and baryons. The updated parameters are available as an option starting from PYTHIA 8.185, by setting Tune:ee = 7 and Tune:pp = 14.
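The tune selection quoted above can be activated in a PYTHIA 8 (version 8.185 or later) command file; a minimal fragment, with all other run settings left to the user:

```
! Enable the updated tune parameters described above (PYTHIA >= 8.185)
Tune:ee = 7
Tune:pp = 14
```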

ALICE is an LHC experiment devoted to the study of strongly interacting
matter in proton--proton, proton--nucleus and nucleus--nucleus collisions at
ultra-relativistic energies. The ALICE VZERO system, made of two scintillator
arrays at asymmetric positions, one on each side of the interaction point,
plays a central role in ALICE. In addition to its core function as a trigger,
the VZERO system is used to monitor LHC beam conditions, to reject beam-induced
backgrounds and to measure basic physics quantities such as luminosity,
particle multiplicity, centrality and event plane direction in nucleus-nucleus
collisions. After describing the VZERO system, this publication presents its
performance over more than four years of operation at the LHC.

EPOS is a Monte-Carlo event generator for minimum bias hadronic interactions,
used for both heavy ion interactions and cosmic ray air shower simulations.
Since the last public release in 2009, the LHC experiments have provided a
number of very interesting data sets comprising minimum bias p-p, p-Pb and
Pb-Pb interactions. We describe the changes required to the model to reproduce
in detail the new data available from LHC and the consequences in the
interpretation of these data. In particular we discuss the effect of the
collective hadronization in p-p scattering. A different parametrization of flow
has been introduced in the case of a small volume with high density of
thermalized matter (core) reached in p-p compared to large volume produced in
heavy ion collisions. Both parametrizations depend only on the geometry and the
amount of secondary particles entering the core, and not on the beam mass or
energy. The transition between the two flow regimes can be tested with p-Pb
data. EPOS LHC is able to reproduce all minimum bias data.

It is the restriction on the phase space of emitted gluons, connected with the kinematics of a heavy quark Q = c, b, ..., which determines the difference between the QCD jet produced by Q and that of ordinary light (practically massless) quarks q = u, d, s. The authors consider the Q quark as relativistic (E_Q ≫ M_Q, Θ₀ ≡ M_Q/E_Q ≪ 1), taking care of logarithmic effects while ignoring small (power) corrections of the order of Θ₀.

We argue that high energy proton–nucleus (p + A) collisions provide an excellent laboratory for studying nuclear size enhanced parton multiple scattering where power corrections to the leading twist perturbative QCD factorization approach can be systematically computed. We identify and resum these corrections and calculate the centrality- and rapidity-dependent nuclear suppression of single and double inclusive hadron production at moderate transverse momenta. We demonstrate that both spectra and dihadron correlations in p + A reactions are sensitive measures of such dynamical nuclear attenuation effects.

We present an implementation of the next-to-leading order dijet production
process in hadronic collisions in the framework of POWHEG, which is a method to
implement NLO calculations within a shower Monte Carlo context. In constructing
the simulation, we have made use of the POWHEG BOX toolkit, which makes light
work of many of the most technical steps. The majority of this article is concerned
with the study of the predictions of the Monte Carlo simulation. In so doing,
we validate our program for use in experimental analyses, elaborating on some
of the more subtle features which arise from the interplay of the NLO and
resummed components of the calculation. We conclude our presentation by
comparing predictions from the simulation against a number of Tevatron and LHC
jet-production results.

The standard method used for tagging b-hadrons in the DELPHI experiment at the CERN LEP Collider is discussed in detail. The main ingredient of b-tagging is the impact parameters of tracks, whose measurement relies mostly on the vertex detector. Additional information, such as the mass of particles associated to a secondary vertex, significantly improves the selection efficiency and the background suppression. The paper describes various discriminating variables used for the tagging and the procedure of their combination. In addition, applications of b-tagging to some physics analyses, which depend crucially on the performance and reliability of b-tagging, are described briefly.

Modification of parton fragmentation functions by multiple scattering and gluon bremsstrahlung in nuclear media is shown to describe very well the recent HERMES data in deeply inelastic scattering, giving the first evidence of the A^(2/3) dependence of the modification. The energy loss is found to be ⟨dE/dL⟩ ≈ 0.5 GeV/fm for a 10-GeV quark in an Au nucleus. Including the effect of expansion, analysis of the π⁰ spectra in central Au+Au collisions at √s = 130 GeV yields an averaged energy loss equivalent to ⟨dE/dL⟩ ≈ 7.3 GeV/fm in a static medium. Predictions for central Au+Au collisions at √s = 200 GeV are also given.

The Pythia program can be used to generate high-energy-physics `events', i.e. sets of outgoing particles produced in the interactions between two incoming particles. The objective is to provide as accurate as possible a representation of event properties in a wide range of reactions, within and beyond the Standard Model, with emphasis on those where strong interactions play a rôle, directly or indirectly, and therefore multihadronic final states are produced. The physics is then not understood well enough to give an exact description; instead the program has to be based on a combination of analytical results and various QCD-based models. This physics input is summarized here, for areas such as hard subprocesses, initial- and final-state parton showers, underlying events and beam remnants, fragmentation and decays, and much more. Furthermore, extensive information is provided on all program elements: subroutines and functions, switches and parameters, and particle and process data. This should allow the user to tailor the generation task to the topics of interest.
The code and further information may be found on the Pythia web page: http://www.thep.lu.se/~torbjorn/Pythia.html.

A new generation of parton distribution functions with increased precision
and quantitative estimates of uncertainties is presented. This work
significantly extends previous CTEQ and other global analyses on two fronts:
(i) a full treatment of available experimental correlated systematic errors for
both new and old data sets; (ii) a systematic and pragmatic treatment of
uncertainties of the parton distributions and their physical predictions, using
a recently developed eigenvector-basis approach to the Hessian method. The new
gluon distribution is considerably harder than that of previous standard fits.
A number of physics issues, particularly relating to the behavior of the gluon
distribution, are addressed in more quantitative terms than before. Extensive
results on the uncertainties of parton distributions at various scales, and on
parton luminosity functions at the Tevatron RunII and the LHC, are presented.
The latter provide the means to quickly estimate the uncertainties of a wide
range of physical processes at these high-energy hadron colliders, based on
current knowledge of the parton distributions. In particular, the uncertainties
on the production cross sections of the W and Z at the Tevatron and the LHC are
estimated to be ±4% and ±5% respectively, and that of a light Higgs
at the LHC to be ±5%.

We study open heavy flavor meson production in proton–nucleus (pA) collisions at RHIC and LHC energies within the Color Glass Condensate framework. We use the unintegrated gluon distribution at small Bjorken x in the proton, obtained by solving the Balitsky–Kovchegov equation with running-coupling correction and constrained by a global fit of HERA data. We change the initial saturation scale of the gluon distribution for the heavy nucleus. The gluon distribution with the McLerran–Venugopalan model initial condition is also used for comparison. We present transverse momentum spectra of single D and B meson production in pA collisions, and the so-called nuclear modification factor. The azimuthal-angle correlation of open heavy flavor meson pairs is also computed to study the modification due to gluon saturation in the heavy nucleus at the LHC.

The kt and Cambridge/Aachen inclusive jet finding algorithms for hadron-hadron collisions can be seen as belonging to a broader class of sequential recombination jet algorithms, parametrised by the power of the energy scale in the distance measure. We examine some properties of a new member of this class, for which the power is negative. This "anti-kt" algorithm essentially behaves like an idealised cone algorithm, in that jets with only soft fragmentation are conical, active and passive areas are equal, the area anomalous dimensions are zero, the non-global logarithms are those of a rigid boundary and the Milan factor is universal. None of these properties hold for existing sequential recombination algorithms, nor for cone algorithms with split-merge steps, such as SISCone. They are however the identifying characteristics of the collinear-unsafe plain "iterative cone" algorithm, for which the anti-kt algorithm provides a natural, fast, infrared- and collinear-safe replacement.
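The algorithm class described above can be sketched in a toy implementation. This is an illustrative O(N³) version (real analyses use FastJet), with particles as (pt, y, phi) tuples and a simplified pt-weighted recombination; setting the power p to 1, 0, or -1 selects kt, Cambridge/Aachen, or anti-kt respectively.

```python
import math

def delta_r2(a, b):
    """Squared rapidity-azimuth distance, with phi wrap-around."""
    dy = a[1] - b[1]
    dphi = abs(a[2] - b[2])
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return dy * dy + dphi * dphi

def cluster(particles, R=0.4, p=-1):
    """Sequential recombination with d_ij = min(pti^2p, ptj^2p)*dR^2/R^2
    and beam distance d_iB = pt^2p; p = -1 is the anti-kt algorithm."""
    parts = list(particles)
    jets = []
    while parts:
        best_ib = min((pt ** (2 * p), i) for i, (pt, y, phi) in enumerate(parts))
        best_ij = None
        for i in range(len(parts)):
            for j in range(i + 1, len(parts)):
                d = min(parts[i][0] ** (2 * p), parts[j][0] ** (2 * p)) \
                    * delta_r2(parts[i], parts[j]) / (R * R)
                if best_ij is None or d < best_ij[0]:
                    best_ij = (d, i, j)
        if best_ij is not None and best_ij[0] < best_ib[0]:
            _, i, j = best_ij
            pi, pj = parts[i], parts[j]
            w = pi[0] + pj[0]
            merged = (w, (pi[0] * pi[1] + pj[0] * pj[1]) / w,
                         (pi[0] * pi[2] + pj[0] * pj[2]) / w)
            parts = [x for k, x in enumerate(parts) if k not in (i, j)]
            parts.append(merged)
        else:
            jets.append(parts.pop(best_ib[1]))
    return jets

# A hard particle with two soft neighbours clusters into one conical jet:
# the hard particle "eats" the soft ones first, as anti-kt is designed to do.
jets = cluster([(100.0, 0.0, 0.0), (2.0, 0.1, 0.1), (1.5, -0.2, 0.05)])
print(len(jets), jets[0][0])
```

The example illustrates the property stated above: with p = -1 the smallest distances involve the hardest particle, so soft fragments are absorbed around it and the resulting jet is effectively cone-like.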

We have calculated the ground-state properties of a quark gas to second order in the quark-gluon coupling constant. Asymptotic freedom has been taken into account by using the renormalized coupling constant of Politzer and Gross and Wilczek. We find that this asymptotically free perturbation theory leads to an equation of state for a quark gas which for pressure P>0 is very similar to the equation of state obtained from the MIT bag model of hadrons. In particular, we can identify a "bag pressure" term in the perturbation theory expression for the pressure as a function of density. We obtain estimates for the baryon-quark transition pressure by comparing the perturbation theory results with the Gibbs energy per baryon of baryonic matter. Our calculations show that the baryon-quark transition takes place at densities on the order of 10-20 times that in ordinary nuclei. These transition densities are higher than the maximum central density calculated for a neutron star.

We calculate the lowest-order charm and beauty parton distribution functions in and fragmentation functions into D and B mesons using the operator definitions of factorized perturbative quantum chromodynamics (QCD). In the vacuum, we find the leading corrections that arise from the structure of the final-state hadrons. Quark-antiquark potentials extracted from the lattice are employed to demonstrate the existence of open heavy flavor bound-state solutions in the quark-gluon plasma in the vicinity of the critical temperature. We provide first results for the in-medium modification of the heavy-quark distribution and decay probabilities in a comoving plasma. In an improved perturbative QCD description of heavy-flavor dynamics in the thermal medium, we combine D- and B-meson formation and dissociation with parton-level charm and beauty quark quenching to obtain predictions for the heavy-meson and nonphotonic-electron suppression in Cu+Cu and Pb+Pb collisions at the Relativistic Heavy Ion Collider and the Large Hadron Collider, respectively.

The scale dependence of the ratios of parton distributions in a proton of a nucleus A and in the free proton, R_i^A(x, Q²) ≡ f_i^A(x, Q²)/f_i(x, Q²), is studied within the framework of the lowest-order leading-twist DGLAP evolution. By evolving the initial nuclear distributions obtained with the GRV-LO and CTEQ4L sets at an initial scale Q₀, we show that the ratios are only moderately sensitive to the choice of a specific modern set of free parton distributions. We propose that, to a good first approximation, this parton-distribution-set dependence of the nuclear ratios can be neglected in practical applications. With this result, we offer a numerical parametrization of R_i^A(x, Q²) for all parton flavours i, any nucleus A, and any x and Q² for computing cross sections of hard processes in nuclear collisions.

These lectures present an overview of the current status of the QCD based phenomenology for open and hidden heavy flavor production at high energies. A unified description based on the light-cone color-dipole approach is employed in all cases. A good agreement with available data is achieved without fitting to the data to be explained, and nontrivial predictions for future experiments are made. The key phenomena under discussion are: (i) formation of the wave function of a heavy quarkonium; (ii) quantum interference and coherence length effects; (iii) Landau-Pomeranchuk suppression of gluon radiation leading to gluon shadowing and nuclear suppression of heavy flavors; (iv) higher twist shadowing related to the finite size of heavy quark dipoles; (v) higher twist corrections to the leading twist gluon shadowing making it process dependent.

Measurements of different physical quantities are often correlated when they are performed by the same experiment, using the same data or the same detector. Correlations may also exist between the results of different experiments, for instance if they rely on the use of the same theoretical models. All these correlations must be properly taken into account to provide the best combined estimate of each measured quantity. A procedure used to combine the correlated results of different high-energy physics experiments is reviewed in this paper.

Experiments to measure a single physical quantity often produce several estimates based on the same data, and which are hence correlated. We describe how to combine these correlated estimates in order to provide the best single answer, and also how to check whether the correlated estimates are mutually consistent. We discuss the properties of our technique, and illustrate its application by using it for a specific experiment which measured the lifetime of charmed particles.
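For two correlated estimates, the combination described above reduces to closed-form BLUE (Best Linear Unbiased Estimate) weights. The sketch below is a hypothetical minimal illustration for two measurements with a single correlation coefficient, not the full N-measurement formalism:

```python
import math

def blue_combine(x1, x2, s1, s2, rho):
    """Best Linear Unbiased Estimate of two correlated measurements.

    x1, x2 : central values; s1, s2 : their uncertainties;
    rho    : correlation coefficient between the two measurements.
    Returns (combined value, combined uncertainty).
    """
    cov = rho * s1 * s2                       # off-diagonal covariance term
    denom = s1 ** 2 + s2 ** 2 - 2.0 * cov     # proportional to 1^T V^{-1} 1
    w1 = (s2 ** 2 - cov) / denom              # BLUE weight of measurement 1
    xbar = w1 * x1 + (1.0 - w1) * x2
    var = s1 ** 2 * s2 ** 2 * (1.0 - rho ** 2) / denom
    return xbar, math.sqrt(var)
```

With rho = 0 this reduces to the familiar inverse-variance weighting; a positive correlation shifts weight toward the more precise measurement.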

QCD calculations of the production rate in a quark-gluon plasma and account of the space-time picture of hadronic collisions lead to estimates of the dilepton mass spectrum, p⊥ distributions of e±, μ±, γ, π±, production cross sections of charm and psions.

We review the leading 1/Q^2 corrections to hadronic structure functions measured in deeply inelastic scattering, and we calculate the leading 1/Q^2 corrections to the Drell-Yan cross section. We find that the leading 1/Q^2 correction to the Drell-Yan cross section is given by the convolution of a calculable short-distance hard part, a twist-2 matrix element and a twist-4 matrix element. At leading order in αs, the normalization of the 1/Q^2 longitudinal structure function for the Drell-Yan cross section is determined by higher-twist longitudinal structure functions in deeply inelastic scattering. Other experimental tests of 1/Q^2 corrections are also discussed.

One of the major challenges for the LHC will be to extract precise information from hadronic final states in the presence of the large number of additional soft pp collisions, pileup, that occur simultaneously with any hard interaction in high luminosity runs. We propose a novel technique, based on jet areas, that provides jet-by-jet corrections for pileup and underlying-event effects. It is data driven, does not depend on Monte Carlo modelling and can be used with any jet algorithm for which a jet area can be sensibly defined. We illustrate its effectiveness for some key processes and find that it can be applied also in the context of the Tevatron, low-luminosity LHC and LHC heavy-ion collisions.
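The jet-area correction described above amounts to pt_corr = pt − ρ·A, with ρ estimated event-by-event as the median of pt/A over the jets. A minimal sketch, using hypothetical jet dictionaries rather than any actual library interface:

```python
import statistics

def subtract_pileup(jets):
    """Area-based pileup subtraction: jets is a list of {'pt': .., 'area': ..}.

    rho is the event median of pt/area, which is robust against the few
    hard jets; each jet then loses rho times its own catchment area.
    """
    rho = statistics.median(j['pt'] / j['area'] for j in jets)
    return [max(j['pt'] - rho * j['area'], 0.0) for j in jets]
```

Because the median ignores the tails, a handful of hard jets barely bias ρ, which is the key design choice behind the data-driven nature of the method.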

The modification and amplification of the gluon angular distribution produced along with hard jets in nuclear collisions is computed. We consider the limit of a thin quark–gluon plasma, where the number of rescatterings of the jet and gluons is small. The focus is on jet quenching associated with the formation of highly off-shell partons in hard scattering events involving nuclei. The interference between the initial hard radiation amplitude, the multiple induced Gunion–Bertsch radiation amplitudes, and gluon rescattering amplitudes leads to an angular distribution that differs considerably from both the standard DGLAP evolution and from the classical limit parton cascading. The cases of a single and double rescattering are considered in detail, and a systematic method to compute all matrix elements for the general case is developed. A simple power law scaling of the angular distribution with increasing number of rescatterings is found and used for estimates of the fractional energy loss as a function of the plasma thickness.

We present a calculation of the fully exclusive parton cross sections for heavy-quark production at order O(α_S^3) in QCD. Our result includes the Born cross section for producing a heavy-quark pair, of order O(α_S^2), the virtual corrections to the Born cross section, of order O(α_S^3), and the cross section for producing a heavy-quark pair plus a light parton, of order O(α_S^3). We can therefore compute distributions in which correlations among the heavy quarks (and, if present, the associated jet) are correctly taken into account up to order O(α_S^3). We present some applications of phenomenological interest to top, bottom and charm production at hadron colliders.

Distributions measured in high energy physics experiments are usually distorted and/or transformed by various detector effects. A regularization method for unfolding these distributions is re-formulated in terms of the Singular Value Decomposition (SVD) of the response matrix. A relatively simple, yet quite efficient unfolding procedure is explained in detail. The concise linear algorithm results in a straightforward implementation with full error propagation, including the complete covariance matrix and its inverse. Several improvements upon widely used procedures are proposed, and recommendations are given how to simplify the task by the proper choice of the matrix. Ways of determining the optimal value of the regularization parameter are suggested and discussed, and several examples illustrating the use of the method are presented.

We analyze contributions to inclusive hadron-hadron scattering cross sections that decrease as 1/Q2 relative to leading behavior, with Q2 a large momentum transfer. We show why this behavior may be treated in perturbative QCD, in terms of generalized factorization theorems. We suggest that this makes possible a unified treatment of first nonleading power corrections in a large class of processes.

The exponentially increasing spectrum proposed by Hagedorn is not necessarily connected with a limiting temperature, but it is present in any system which undergoes a second order phase transition. We suggest that the “observed” exponential spectrum is connected to the existence of a different phase of the vacuum in which quarks are not confined.

The RooUnfold package provides a common framework to evaluate and use
different unfolding algorithms, side-by-side. It currently provides
implementations or interfaces for the Iterative Bayes, Singular Value
Decomposition, and TUnfold methods, as well as bin-by-bin and matrix inversion
reference methods. Common tools provide covariance matrix evaluation and
multi-dimensional unfolding. A test suite allows comparisons of the performance
of the algorithms under different truth and measurement models. Here I outline
the package, the unfolding methods, and some experience of their use.
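The Iterative Bayes method mentioned above can be sketched in a few lines. The function below is a hypothetical toy version of D'Agostini-style unfolding (not the RooUnfold API), starting from a flat prior and assuming the response matrix, including efficiencies, is known:

```python
def bayes_unfold(measured, response, iterations=4):
    """D'Agostini-style iterative Bayesian unfolding (toy version).

    measured : observed counts per reco bin j.
    response : response[j][i] = P(reco bin j | true bin i).
    """
    n_true = len(response[0])
    n_reco = len(measured)
    # efficiency of each true bin: probability to land in any reco bin
    eff = [sum(response[j][i] for j in range(n_reco)) for i in range(n_true)]
    prior = [1.0 / n_true] * n_true                    # flat starting prior
    for _ in range(iterations):
        # expected reco-bin probabilities under the current prior
        folded = [sum(response[j][k] * prior[k] for k in range(n_true))
                  for j in range(n_reco)]
        unfolded = [sum(measured[j] * response[j][i] * prior[i] / folded[j]
                        for j in range(n_reco) if folded[j] > 0) / eff[i]
                    for i in range(n_true)]
        total = sum(unfolded)
        prior = [u / total for u in unfolded]          # becomes the next prior
    return unfolded
```

Each iteration applies Bayes' theorem with the previous result as prior; the iteration count acts as the regularization parameter, which is why stopping early matters in practice.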

FastJet is a C++ package that provides a broad range of jet finding and
analysis tools. It includes efficient native implementations of all widely used
2-to-1 sequential recombination jet algorithms for pp and e+e- collisions, as
well as access to 3rd party jet algorithms through a plugin mechanism,
including all currently used cone algorithms. FastJet also provides means to
facilitate the manipulation of jet substructure, including some common boosted
heavy-object taggers, as well as tools for estimation of pileup and
underlying-event noise levels, determination of jet areas and subtraction or
suppression of noise in jets.
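The 2-to-1 sequential recombination that FastJet implements can be illustrated with a deliberately naive anti-kT loop. The sketch below is a hypothetical pure-Python toy, using massless (pt, y, phi) tuples, pt-weighted recombination instead of the full E-scheme, and an O(n^3) scan instead of FastJet's optimized strategies:

```python
import math

def antikt(particles, R=0.4):
    """Toy anti-kT clustering: particles are (pt, y, phi) tuples.

    Distances follow the anti-kT measure d_ij = min(1/pt_i^2, 1/pt_j^2)
    * dR^2 / R^2 and d_iB = 1/pt_i^2.  Returns jets sorted by pt.
    """
    jets, ps = [], list(particles)
    while ps:
        best, pick = None, None
        for i, (pt_i, y_i, phi_i) in enumerate(ps):
            diB = 1.0 / pt_i ** 2                      # beam distance
            if best is None or diB < best:
                best, pick = diB, (i, None)
            for j in range(i + 1, len(ps)):
                pt_j, y_j, phi_j = ps[j]
                dphi = math.pi - abs(abs(phi_i - phi_j) - math.pi)
                dr2 = (y_i - y_j) ** 2 + dphi ** 2
                dij = min(1.0 / pt_i ** 2, 1.0 / pt_j ** 2) * dr2 / R ** 2
                if dij < best:
                    best, pick = dij, (i, j)
        i, j = pick
        if j is None:                                  # closest to the beam: final jet
            jets.append(ps.pop(i))
        else:                                          # merge the pair (pt-weighted)
            (pa, ya, fa), (pb, yb, fb) = ps[i], ps[j]
            ps.pop(j); ps.pop(i)
            pt = pa + pb
            ps.append((pt, (pa * ya + pb * yb) / pt, (pa * fa + pb * fb) / pt))
    return sorted(jets, key=lambda jet: -jet[0])
```

Because the anti-kT distance is smallest around the hardest particles, soft particles cluster onto hard cores first, which is what gives the algorithm its characteristically circular jets.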

We calculate the transverse momentum dependence of the medium-induced gluon energy distribution radiated off massive quarks in spatially extended QCD matter. In the absence of a medium, the distribution shows a characteristic mass-dependent depletion of the gluon radiation for angles smaller than m/E, the so-called dead cone effect. Medium-modifications of this spectrum are calculated as a function of quark mass, initial quark energy, in-medium pathlength and density. Generically, medium-induced gluon radiation is found to fill the dead cone, but it is reduced at large gluon energies compared to the radiation off light quarks. We quantify the resulting mass-dependence for momentum-averaged quantities (gluon energy distribution and average parton energy loss), compare it to simple approximation schemes and discuss its observable consequences for nucleus-nucleus collisions at RHIC and LHC. In particular, our analysis does not favor the complete disappearance of energy loss effects from leading open charm spectra at RHIC.
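The vacuum dead-cone suppression referred to above can be stated compactly; a standard form of the suppression factor for gluon radiation off a quark of mass m and energy E is:

```latex
% gluon spectrum off a massive quark relative to a massless one
\mathrm{d}N_g^{\text{massive}}
  = \left(\frac{\theta^{2}}{\theta^{2}+\theta_{0}^{2}}\right)^{2}
    \mathrm{d}N_g^{\text{massless}},
\qquad \theta_{0} \equiv \frac{m}{E},
```

so radiation at angles θ ≲ θ₀ = m/E is suppressed, which is the dead cone that the medium-induced radiation is found to partially fill.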

We present a next-to-leading order (NLO) global DGLAP analysis of nuclear parton distribution functions (nPDFs) and their uncertainties. Carrying out an NLO nPDF analysis for the first time with three different types of experimental input -- deep inelastic $\ell$+A scattering, Drell-Yan dilepton production in p+$A$ collisions, and inclusive pion production in d+Au and p+p collisions at RHIC -- we find that these data can well be described in a conventional collinear factorization framework. Although the pion production has not been traditionally included in the global analyses, we find that the shape of the nuclear modification factor $R_{\rm dAu}$ of the pion $p_T$-spectrum at midrapidity retains sensitivity to the gluon distributions, providing evidence for shadowing and EMC-effect in the nuclear gluons. We use the Hessian method to quantify the nPDF uncertainties which originate from the uncertainties in the data. In this method the sensitivity of $\chi^2$ to the variations of the fitting parameters is mapped out to orthogonal error sets which provide a user-friendly way to calculate how the nPDF uncertainties propagate to any factorizable nuclear cross-section. The obtained NLO and LO nPDFs and the corresponding error sets are collected in our new release called {\ttfamily EPS09}. These results should find applications in precision analyses of the signatures and properties of QCD matter at the LHC and RHIC.
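The orthogonal error sets mentioned above propagate to any observable through the standard symmetric Hessian master formula. A minimal sketch, with a hypothetical function name and generic observable values X(S_k^±) evaluated on the paired error sets:

```python
import math

def hessian_uncertainty(plus, minus):
    """Symmetric Hessian uncertainty from paired error sets:

    dX = (1/2) * sqrt( sum_k (X(S_k^+) - X(S_k^-))^2 ),

    where plus[k] and minus[k] are the observable evaluated on the
    k-th pair of orthogonal error sets.
    """
    return 0.5 * math.sqrt(sum((p - m) ** 2 for p, m in zip(plus, minus)))
```

This is what makes Hessian error sets user-friendly: the uncertainty on any factorizable cross section follows from re-evaluating it once per error set, with no refitting.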

Nuclear parton distribution functions (NPDFs) are determined by global analyses of experimental data on structure-function ratios F_2^A/F_2^{A'} and Drell-Yan cross-section ratios \sigma_{DY}^A/\sigma_{DY}^{A'}. The analyses are done in the leading order (LO) and next-to-leading order (NLO) of running coupling constant \alpha_s. Uncertainties of the NPDFs are estimated in both LO and NLO for finding possible NLO improvement. Valence-quark distributions are well determined, and antiquark distributions are also determined at x<0.1. However, the antiquark distributions have large uncertainties at x>0.2. Gluon modifications cannot be fixed at this stage. Although the advantage of the NLO analysis, in comparison with the LO one, is generally the sensitivity to the gluon distributions, gluon uncertainties are almost the same in the LO and NLO. It is because current scaling-violation data are not accurate enough to determine precise nuclear gluon distributions. Modifications of the PDFs in the deuteron are also discussed by including data on the proton-deuteron ratio F_2^D/F_2^p in the analysis. A code is provided for calculating the NPDFs and their uncertainties at given x and Q^2 in the LO and NLO.

This report introduces general ideas and some basic methods of the Bayesian probability theory applied to physics measurements. Our aim is to make the reader familiar, through examples rather than rigorous formalism, with concepts such as: model comparison (including the automatic Ockham's Razor filter provided by the Bayesian approach); parametric inference; quantification of the uncertainty about the value of physical quantities, also taking into account systematic effects; role of marginalization; posterior characterization; predictive distributions; hierarchical modelling and hyperparameters; Gaussian approximation of the posterior and recovery of conventional methods, especially maximum likelihood and chi-square fits under well defined conditions; conjugate priors, transformation invariance and maximum entropy motivated priors; Monte Carlo estimates of expectation, including a short introduction to Markov Chain Monte Carlo methods. Invited paper for Reports on Progress in Physics.
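Of the concepts listed, conjugate priors admit a one-line worked example: a Beta prior updated with binomial data remains Beta. A minimal sketch with a hypothetical helper function:

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Conjugate update: a Beta(alpha, beta) prior combined with a binomial
    likelihood (given counts of successes and failures) yields a
    Beta(alpha + successes, beta + failures) posterior.
    Returns the posterior parameters and the posterior mean."""
    a, b = alpha + successes, beta + failures
    return (a, b), a / (a + b)
```

Starting from the flat prior Beta(1, 1) and observing 7 successes in 10 trials gives a Beta(8, 4) posterior with mean 2/3, slightly pulled toward 1/2 by the prior relative to the raw frequency 0.7.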

After a brief review of the various scenarios for quarkonium production in ultra-relativistic nucleus-nucleus collisions we focus on the ingredients and assumptions underlying the statistical hadronization model. We then confront model predictions for J/$\psi$ phase space distributions with the most recent data from the RHIC accelerator. Analysis of the rapidity dependence of the J/$\psi$ nuclear modification factor yields first evidence for the production of J/$\psi$ mesons at the phase boundary. We conclude with predictions for charmonium production at the LHC.

This is a review of the theoretical background, experimental techniques, and phenomenology of what is called the "Glauber Model" in relativistic heavy ion physics. This model is used to calculate "geometric" quantities, which are typically expressed as impact parameter (b), number of participating nucleons (N_part) and number of binary nucleon-nucleon collisions (N_coll). A brief history of the original Glauber model is presented, with emphasis on its development into the purely classical, geometric picture that is used for present-day data analyses. Distinctions are made between the "optical limit" and Monte Carlo approaches, which are often used interchangeably but have some essential differences in particular contexts. The methods used by the four RHIC experiments are compared and contrasted, although the end results are reassuringly similar for the various geometric observables. Finally, several important RHIC measurements are highlighted that rely on geometric quantities, estimated from Glauber calculations, to draw insight from experimental observables. The status and future of Glauber modeling in the next generation of heavy ion physics studies is briefly discussed.
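A stripped-down Monte Carlo Glauber sketch shows how N_part and N_coll arise from sampled nucleon positions. The function below is hypothetical and deliberately crude: nucleons are thrown uniformly in a transverse disc as a stand-in for a Woods-Saxon profile, and the default sigma_nn and radius are illustrative numbers only:

```python
import math
import random

def glauber_event(A, B, b, sigma_nn=4.2, radius=6.6, seed=None):
    """Toy Monte Carlo Glauber event for impact parameter b (fm).

    Two nucleons "collide" when their transverse distance squared is below
    sigma_nn / pi (sigma_nn in fm^2, a black-disc criterion).
    Returns (N_part, N_coll).
    """
    rng = random.Random(seed)

    def nucleus(n, x0):
        # uniform transverse disc: crude stand-in for a Woods-Saxon profile
        pts = []
        for _ in range(n):
            r = radius * math.sqrt(rng.random())
            phi = rng.uniform(0.0, 2.0 * math.pi)
            pts.append((x0 + r * math.cos(phi), r * math.sin(phi)))
        return pts

    target = nucleus(A, -b / 2.0)
    proj = nucleus(B, b / 2.0)
    d2max = sigma_nn / math.pi
    ncoll, hit_t, hit_p = 0, set(), set()
    for i, (xt, yt) in enumerate(target):
        for j, (xp, yp) in enumerate(proj):
            if (xt - xp) ** 2 + (yt - yp) ** 2 < d2max:
                ncoll += 1                 # every binary encounter counts
                hit_t.add(i)               # a nucleon participates once
                hit_p.add(j)
    return len(hit_t) + len(hit_p), ncoll
```

Since a nucleon can scatter many times but participate only once, N_coll grows much faster than N_part in central collisions, which is the geometric fact behind binary scaling of hard probes.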

We discuss the factorization theorems that enable one to apply perturbative calculations to many important processes involving hadrons. In this introductory section we state briefly what the theorems are, and in Sects. 2 to 4, we indicate how they are applied in calculations. In subsequent sections, we present an outline of how the theorems are established, both in the simple but instructive case of scalar field theory and in the more complex and physically interesting case of quantum chromodynamics (QCD).

We perform a next to leading order QCD global analysis of nuclear deep inelastic scattering and Drell-Yan data using the convolution approach to parameterize nuclear parton densities. We find both a significant improvement in the agreement with data compared to previous extractions, and substantial differences in the scale dependence of nuclear effects compared to leading order analyses.

We study Drell-Yan (DY) dilepton production in proton(deuterium)-nucleus and in nucleus-nucleus collisions within the light-cone color dipole formalism. This approach is especially suitable for predicting nuclear effects in the DY cross section for heavy ion collisions, as it provides the impact parameter dependence of nuclear shadowing and transverse momentum broadening, quantities that are not available from the standard parton model. For p(D)+A collisions we calculate nuclear shadowing and investigate nuclear modification of the DY transverse momentum distribution at RHIC and LHC for kinematics corresponding to coherence length much longer than the nuclear size. Calculations are performed separately for transversely and longitudinally polarized DY photons, and predictions are presented for the dilepton angular distribution. Furthermore, we calculate nuclear broadening of the mean transverse momentum squared of DY dileptons as a function of the nuclear mass number and energy. We also predict nuclear effects for the cross section of the DY process in heavy ion collisions. We find substantial nuclear shadowing for valence quarks, stronger than for the sea.

Broadening of the transverse momentum of a parton propagating through a medium is treated using the color dipole formalism, which has the advantage of being a well developed phenomenology in deep-inelastic scattering and soft processes. Within this approach, nuclear broadening should be treated as color filtering, i.e. absorption of large-size dipoles leading to diminishing (enlarged) transverse separation (momentum). We also present a more intuitive derivation based on the classic scattering theory of Molière. This derivation helps to understand the origin of the dipole cross section, part of which comes from attenuation of the quark, while another part is due to multiple interactions of the quark. It also demonstrates that the lowest-order rescattering term provides an A-dependence very different from the generally accepted A^{1/3} behavior. The effect of broadening increases with energy, and we evaluate it using different phenomenological models for the unintegrated gluon density. Although the process is dominated by soft interactions, the phenomenology we use is tested using hadronic cross section data.

We present an improved leading-order global DGLAP analysis of nuclear parton distribution functions (nPDFs), supplementing the traditionally used data from deep inelastic lepton-nucleus scattering and Drell-Yan dilepton production in proton-nucleus collisions, with inclusive high-$p_T$ hadron production data measured at RHIC in d+Au collisions. With the help of an extended definition of the $\chi^2$ function, we now can more efficiently exploit the constraints the different data sets offer, for gluon shadowing in particular, and account for the overall data normalization uncertainties during the automated $\chi^2$ minimization. The very good simultaneous fit to the nuclear hard process data used demonstrates the feasibility of a universal set of nPDFs, but also limitations become visible. The high-$p_T$ forward-rapidity hadron data of BRAHMS add a new crucial constraint into the analysis by offering a direct probe for the nuclear gluon distributions -- a sector in the nPDFs which has traditionally been very badly constrained. We obtain a strikingly stronger gluon shadowing than what has been estimated in previous global analyses. The obtained nPDFs are released as a parametrization called EPS08.

The aim of this work is to describe in detail the POWHEG method, first suggested by one of the authors, for interfacing parton-shower generators with NLO QCD computations. We describe the method in its full generality, and then specify its features in two subtraction frameworks for NLO calculations: the Catani-Seymour and the Frixione-Kunszt-Signer approach. Two examples are discussed in detail in both approaches: the production of hadrons in e+e- collisions, and the Drell-Yan vector-boson production in hadronic collisions.