Alejandro Pozas-Kerstjens’s research while affiliated with University of Geneva and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (51)


Figure 3.1. Performance of TT-RSS applied to different functions when varying the number of pivots N. From left to right, the columns show: the relative error on the N pivots x used in TT-RSS, R(x); the relative error on a set of M test samples s from the functions' domains, R(s); and the time (in seconds) required to perform the decompositions. For each value of N, the decomposition is performed 10 times. The figures display the mean values with error bars at ±0.5σ, where σ denotes the standard deviation. Additionally, three configurations with different numbers of variables, n = 100, n = 200, and n = 500, are shown in different colors.
Figure 3.2. Performance of TT-RSS applied to different NN models when varying the number of pivots N. From left to right, the columns show: the relative error on the N pivots x used in TT-RSS, R(x); the relative error on a set of M test samples s from the test sets, R(s); the percentage of classifications of the TT models that differ from those of the original NN models; and the time (in seconds) taken to perform the decompositions. For each value of N, the decomposition is performed 10 times; the figures display the mean values with error bars at ±0.5σ, where σ denotes the standard deviation. Three configurations with different numbers of variables, n = 144, n = 256, and n = 400, are shown in different colors.
Figure 3.3. Distribution of the accuracies of 50 tensorized models for each configuration, represented using box plots. Each configuration is obtained by varying one of the hyperparameters d, r, and N from the baseline point d = 2, r = 5, N = 50. The orange lines connect the medians across all values of a given hyperparameter. The horizontal dashed black lines represent the original accuracy of the NN model being tensorized.
Figure 4.1. Accuracy of the different models evaluated on a test dataset consisting of 300 points, equally distributed across the 4 subgroups: English woman, Canadian woman, English man, and Canadian man. From top to bottom, the models are: the original NN models, TT models output by TT-RSS, and TT models re-trained for 10 epochs. Accuracies are measured for all percentages q of English speakers present in the datasets.
Figure 5.1. Evolution of training accuracies of TT models trained via the Adam optimizer (learning rate = 10⁻⁵, weight decay = 10⁻¹⁰) for a binary classification task. The top figure shows results using the polynomial embedding, while the bottom figure corresponds to the unit embedding. Different initialization methods are employed for both embeddings, with each method represented by a distinct color.


Tensorization of neural networks for improved privacy and interpretability
  • Preprint
  • File available

January 2025

·

5 Reads

José Ramón Pareja Monturiol

·

Alejandro Pozas-Kerstjens


We present a tensorization algorithm for constructing tensor train representations of functions, drawing on sketching and cross interpolation ideas. The method only requires black-box access to the target function and a small set of sample points defining the domain of interest. Thus, it is particularly well-suited for machine learning models, where the domain of interest is naturally defined by the training dataset. We show that this approach can be used to enhance the privacy and interpretability of neural network models. Specifically, we apply our decomposition to (i) obfuscate neural networks whose parameters encode patterns tied to the training data distribution, and (ii) estimate topological phases of matter that are easily accessible from the tensor train representation. Additionally, we show that this tensorization can serve as an efficient initialization method for optimizing tensor trains in general settings, and that, for model compression, our algorithm achieves a superior trade-off between memory and time complexity compared to conventional tensorization methods of neural networks.
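As background for readers unfamiliar with the tensor-train format targeted by the algorithm, the sketch below shows the standard TT-SVD baseline in plain NumPy, which builds the train of cores by sequential SVDs of the full tensor. This is only an illustration of the format; it is not the TT-RSS procedure of the preprint, which constructs the cores from black-box evaluations at a small set of sample pivots and never materializes the full tensor. The example function and rank choices are arbitrary.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a full n-way tensor into tensor-train cores via sequential SVDs.

    Illustrative baseline only: this requires the full tensor, whereas TT-RSS
    works from black-box samples of the target function.
    """
    dims = tensor.shape
    cores = []
    unfolding = tensor
    rank_prev = 1
    for d in dims[:-1]:
        unfolding = unfolding.reshape(rank_prev * d, -1)
        u, s, vT = np.linalg.svd(unfolding, full_matrices=False)
        rank = min(max_rank, len(s))
        cores.append(u[:, :rank].reshape(rank_prev, d, rank))
        unfolding = s[:rank, None] * vT[:rank]
        rank_prev = rank
    cores.append(unfolding.reshape(rank_prev, dims[-1], 1))
    return cores

# Toy target: samples of f(x1,...,x4) = cos(x1 + x2 + x3 + x4) on an 8-point grid.
grid = np.linspace(0.0, 1.0, 8)
full = np.cos(sum(np.meshgrid(*[grid] * 4, indexing="ij")))
cores = tt_svd(full, max_rank=4)
print([c.shape for c in cores])  # [(1, 8, 4), (4, 8, 4), (4, 8, 4), (4, 8, 1)]
```

Functions of a sum of the variables, like the one above, have small tensor-train rank, which is why a small max_rank already gives an accurate decomposition.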



Guarantees on the structure of experimental quantum networks

November 2024

·

4 Reads

npj Quantum Information

Andrés Ulibarrena

·

Jonathan W. Webb

·

Alexander Pickston

·

[...]

·

Alejandro Pozas-Kerstjens

Quantum networks connect and supply a large number of nodes with multi-party quantum resources for secure communication, networked quantum computing and distributed sensing. As these networks grow in size, certification tools will be required to answer questions regarding their properties. In this work we demonstrate a general method to guarantee that certain correlations cannot be generated in a given quantum network. We apply quantum inflation methods to data obtained in quantum group encryption experiments, guaranteeing the impossibility of producing the observed results in networks with fewer optical elements. Our results pave the way for scalable methods of obtaining device-independent guarantees on the network structure underlying multipartite quantum protocols.
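As a rough illustration of the inflation technique underlying this certification (standard background, not the specific analysis of the paper): in the triangle network, duplicating every source yields an inflated network in which copies of two parties that share no source copy must be statistically independent, e.g.

$$P_{\mathrm{inf}}(A_{22}=a,\ B_{11}=b) \;=\; P_{\mathrm{obs}}(A=a)\,P_{\mathrm{obs}}(B=b),$$

while parties fed by matching source copies must reproduce the observed marginals. If no distribution on the inflated network satisfies all such constraints, the observed correlations are certified to be incompatible with the assumed network structure; the quantum version of the argument replaces probability distributions with moment matrices constrained via semidefinite programming.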


Hierarchical certification of nonclassical network correlations

August 2024

·

6 Reads

·

5 Citations

Physical Review A

With the increased availability of quantum technological devices, it becomes more important to have tools to guarantee their correct nonclassical behavior. This is especially important for quantum networks, which constitute the platforms where multipartite cryptographic protocols will be implemented, and where guarantees of nonclassicality translate into security proofs. We derive linear and nonlinear Bell-like inequalities for networks, whose violation certifies the absence of a minimum number of classical sources in them. We do so, first, without assuming that nature is ultimately governed by quantum mechanics, providing a hierarchy interpolating between network nonlocality and full network nonlocality. Second, we insert this assumption, which leads to results more amenable to certification in experiments.
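A well-known example of a nonlinear network Bell-type inequality, for the simpler two-source chain (bilocality) scenario rather than the hierarchy derived here, is the Branciard–Rosset–Gisin–Pironio inequality

$$\sqrt{|I|}+\sqrt{|J|}\;\le\;1,\qquad I=\frac{1}{4}\sum_{x,z\in\{0,1\}}\langle A_x B^0 C_z\rangle,\qquad J=\frac{1}{4}\sum_{x,z\in\{0,1\}}(-1)^{x+z}\langle A_x B^1 C_z\rangle,$$

where $B^0$ and $B^1$ denote the two bits of the central party's outcome. Its violation certifies that the correlations cannot be produced by two independent classical sources, the simplest instance of the kind of guarantee on the number of classical sources that the inequalities of this work provide.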


Classification of joint quantum measurements based on entanglement cost of localization

August 2024

·

26 Reads

Despite their importance in quantum theory, joint quantum measurements remain poorly understood. An intriguing conceptual and practical question is whether joint quantum measurements on separated systems can be performed without bringing them together. Remarkably, by using shared entanglement, this can be achieved perfectly when disregarding the post-measurement state. However, existing localization protocols typically require unbounded entanglement. In this work, we address the fundamental question: "Which joint measurements can be localized with a finite amount of entanglement?" We develop finite-resource versions of teleportation-based schemes and analytically classify all two-qubit measurements that can be localized in the first steps of these hierarchies. These include several measurements with exceptional properties and symmetries, such as the Bell state measurement and the elegant joint measurement. This leads us to propose a systematic classification of joint measurements based on entanglement cost, which we argue directly connects to the complexity of implementing those measurements. We illustrate how to numerically explore higher levels and construct generalizations to higher dimensions and multipartite settings.
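For reference, the Bell state measurement mentioned above is the projective measurement onto the four maximally entangled two-qubit states (standard definitions, not specific to this work):

$$|\Phi^{\pm}\rangle=\frac{1}{\sqrt{2}}\big(|00\rangle\pm|11\rangle\big),\qquad |\Psi^{\pm}\rangle=\frac{1}{\sqrt{2}}\big(|01\rangle\pm|10\rangle\big).$$

It is the prototypical joint measurement that can be localized on separated systems by consuming shared entanglement in teleportation-based schemes.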


Privacy-preserving machine learning with tensor networks

July 2024

·

24 Reads

·

6 Citations

Quantum

Tensor networks, widely used to provide efficient representations of low-energy states of local quantum many-body systems, have recently been proposed as machine learning architectures which could present advantages with respect to traditional ones. In this work we show that tensor-network architectures have especially promising properties for privacy-preserving machine learning, which is important in tasks such as the processing of medical records. First, we describe a new privacy vulnerability that is present in feedforward neural networks, illustrating it in synthetic and real-world datasets. Then, we develop well-defined conditions to guarantee robustness to such vulnerability, which involve the characterization of models equivalent under gauge symmetry. We rigorously prove that such conditions are satisfied by tensor-network architectures. In doing so, we define a novel canonical form for matrix product states, which has a high degree of regularity and fixes the residual gauge that is left in the canonical forms based on singular value decompositions. We supplement the analytical findings with practical examples where matrix product states are trained on datasets of medical records, which show large reductions in the probability of an attacker extracting information about the training dataset from the model's parameters. Given the growing expertise in training tensor-network architectures, these results imply that one need not choose between accuracy in prediction and privacy of the processed information.
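The gauge symmetry invoked above is the standard redundancy of matrix product states: inserting an invertible matrix and its inverse between neighbouring cores changes the parameters but not the represented function,

$$f(s_1,\dots,s_n)=A_1^{s_1}A_2^{s_2}\cdots A_n^{s_n}=\big(A_1^{s_1}G_1\big)\big(G_1^{-1}A_2^{s_2}G_2\big)\cdots\big(G_{n-1}^{-1}A_n^{s_n}\big),$$

so many different parameter sets encode exactly the same model. Canonical forms fix most of this freedom, and the canonical form introduced in the paper additionally fixes the residual freedom left by the SVD-based canonical forms.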


TensorKrowch: Smooth integration of tensor networks in machine learning

June 2024

·

26 Reads

·

1 Citation

Quantum

Tensor networks are factorizations of high-dimensional tensors into networks of smaller tensors. They have applications in physics and mathematics, and recently have been proposed as promising machine learning architectures. To ease the integration of tensor networks in machine learning pipelines, we introduce TensorKrowch, an open source Python library built on top of PyTorch. Providing a user-friendly interface, TensorKrowch allows users to construct any tensor network, train it, and integrate it as a layer in more intricate deep learning models. In this paper, we describe the main functionality and basic usage of TensorKrowch, and provide technical details on its building blocks and the optimizations performed to achieve efficient operation.
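A rough usage sketch is given below. It is only meant to convey the PyTorch-style workflow described in the abstract; the specific class and argument names (tk.models.MPSLayer, n_features, in_dim, out_dim, bond_dim, tk.embeddings.unit) are assumptions that should be checked against the TensorKrowch documentation.

```python
import torch
import tensorkrowch as tk  # the library introduced in this paper

# Hypothetical sketch of an MPS used as a classifier layer; class, argument and
# helper names are assumptions, not verified against the actual API.
mps = tk.models.MPSLayer(n_features=28 * 28,  # one core per input pixel
                         in_dim=2,            # local dimension of the data embedding
                         out_dim=10,          # number of output classes
                         bond_dim=5)          # bond dimension of the MPS

images = torch.rand(32, 28 * 28)              # a batch of flattened images in [0, 1]
embedded = tk.embeddings.unit(images)         # assumed helper for a "unit embedding" of the inputs
logits = mps(embedded)                        # shape (32, 10); trainable like any torch.nn.Module
```

Because the resulting object behaves as a standard PyTorch module, it can be optimized with the usual torch.optim training loop or placed inside a larger deep learning model, as described in the abstract.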


The simplest triangle scenario. Alice, Bob and Charlie are pairwise connected by sources that each emit a pair of particles. They each perform a selected measurement and obtain binary outcomes.
The hexagon-web inflation of the triangle scenario. There are two copies of each of the sources in the original triangle scenario, and four copies of each of the parties, one for each combination of copies of sources that send systems to the corresponding party. Note that this inflation assumes that information originating from the sources can be cloned, and thus it constrains triangle-local models. The shaded sub-network is the hexagon inflation, used for instance in [15]. Since in this case the sources do not make copies of the information sent, this inflation constrains NSI correlations.
Random two-outcome distributions, parameterised by E₁, E₂ and E₃, in the region of interest of [15, figure 2]. The turquoise points denote the distributions that are incompatible with (a) the hexagon inflation, i.e. with any triangle model, and (b) the hexagon-web inflation, i.e. with any triangle-local model. The grey region marks the values of E₁ and E₂ that do not produce a valid probability distribution. Notably, when the LPI constraints are not added to the classical hexagon-web inflation, the resulting figure is exactly the same as in (a).
Rotation of figure 3(b) to depict the boundary between distributions not admitting (in light blue) and not known to admit (in dark blue) triangle-local models. The boundary cannot be described by a simple polynomial in the variables E₁, E₂, E₃ up to degree 5.
Summary of results for triangle-local and triangle-quantum models for the distributions in equation (3) in the minimal triangle scenario. The arrows pointing rightwards represent lower bounds via the construction of explicit models, while the arrows pointing leftwards represent upper bounds obtained via inflation. For the inflations, green denotes results obtained via linear programming and pink denotes results obtained via semidefinite programming. The highlighted values are the best upper bounds for classical models (green) and quantum models (pink).
Post-quantum nonlocality in the minimal triangle scenario

November 2023

·

98 Reads

·

7 Citations

We investigate network nonlocality in the triangle scenario when all three parties have no input and binary outputs. Through an explicit example, we prove that this minimal scenario supports nonlocal correlations compatible with no-signaling and independence of the three sources, but not with realisations based on independent quantum or classical sources. This nonlocality is robust to noise. Moreover, we identify the equivalent to a Popescu-Rohrlich box in the minimal triangle scenario.
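For context, the bipartite Popescu–Rohrlich box referenced above is the no-signaling behaviour (standard background, not the triangle-scenario analogue identified in the paper)

$$P(a,b\,|\,x,y)=\begin{cases}1/2, & a\oplus b=xy,\\ 0, & \text{otherwise},\end{cases}\qquad a,b,x,y\in\{0,1\},$$

which attains the algebraic maximum of the CHSH expression while remaining non-signaling. The paper identifies an analogous object in the input-free minimal triangle scenario.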


Bell Inequalities with Overlapping Measurements

August 2023

·

4 Reads

·

1 Citation

Physical Review Letters

Which nonlocal correlations can be obtained when a party has access to more than one subsystem? While traditionally nonlocality deals with spacelike-separated parties, this question becomes important with quantum technologies that connect devices by means of small shared systems. Here, we study Bell inequalities where the measurements of different parties can have overlap. This allows us to accommodate problems in quantum information, such as the existence of quantum error correction codes, in the framework of nonlocality. The scenarios considered show an interesting behavior with respect to Hilbert space dimension, overlap, and symmetry.
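For reference, the standard non-overlapping two-party setting is captured by the CHSH inequality,

$$\langle A_0B_0\rangle+\langle A_0B_1\rangle+\langle A_1B_0\rangle-\langle A_1B_1\rangle\;\le\;2,$$

satisfied by all local hidden-variable models and violated up to $2\sqrt{2}$ in quantum theory. The inequalities studied in this work generalize such expressions to the case where different parties' measurements act on overlapping sets of subsystems.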


Semidefinite programming relaxations for quantum correlations

July 2023

·

57 Reads

·

1 Citation

Semidefinite programs are convex optimisation problems involving a linear objective function and a domain of positive semidefinite matrices. Over the last two decades, they have become an indispensable tool in quantum information science. Many otherwise intractable fundamental and applied problems can be successfully approached by means of relaxation to a semidefinite program. Here, we review such methodology in the context of quantum correlations. We discuss how the core idea of semidefinite relaxations can be adapted for a variety of research topics in quantum correlations, including nonlocality, quantum communication, quantum networks, entanglement, and quantum cryptography.
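As a minimal, self-contained illustration of the reviewed methodology (a sketch of the standard level-1 NPA relaxation for the CHSH inequality, not an example taken from the paper): the moment matrix of the operators {1, A0, A1, B0, B1} is required to be positive semidefinite with unit diagonal, and maximizing the CHSH combination of its entries with CVXPY recovers Tsirelson's bound 2√2.

```python
import cvxpy as cp
import numpy as np

# Level-1 NPA moment matrix for the operator list [1, A0, A1, B0, B1].
# All operators are dichotomic (square to the identity), so the diagonal is 1.
G = cp.Variable((5, 5), symmetric=True)

constraints = [G >> 0]                        # positive semidefiniteness
constraints += [G[i, i] == 1 for i in range(5)]

# CHSH expression built from the correlators <A_x B_y> = G[1 + x, 3 + y].
chsh = G[1, 3] + G[1, 4] + G[2, 3] - G[2, 4]

prob = cp.Problem(cp.Maximize(chsh), constraints)
prob.solve()

print(prob.value, 2 * np.sqrt(2))             # both approximately 2.8284
```

Higher levels of the hierarchy add products of operators to the list, tightening the relaxation; the same core idea is what the review adapts to nonlocality, quantum communication, networks, entanglement, and cryptography.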


Citations (24)


... Hence, addressing noise robustness conclusively is an important open problem, which we address in this work. Much progress has been made in studying correlations within multipartite network structures [26, 31–49], and a general framework has been developed to investigate non-local correlations in networks featuring independent sources. But unlike the standard Bell scenario, these multipartite networks have non-convex local boundaries owing to source independence, making optimization a hard problem. ...

Reference:

Is genuine nonlocality in the triangle network exclusive to pure states?
Semidefinite programming relaxations for quantum correlations
  • Citing Article
  • December 2024

Reviews of Modern Physics

... However, recent research reveals that these barriers can be overcome to extract training data, potentially compromising privacy [28,29]. Additionally, gradient-based optimization techniques have been shown to cause data leakage, as information from the training dataset can create identifiable patterns within the NN parameters [30]. ...

Privacy-preserving machine learning with tensor networks

Quantum

... Our algorithm is well-suited for scenarios of high dimensionality and sparsity, such as NN models, while also being applicable to general functions to accelerate computations. An implementation of our method is available in the open-source Python package TensorKrowch [66]. • Using the proposed algorithm, we extend the privacy experiments conducted in Ref. [30] to more realistic scenarios. ...

TensorKrowch: Smooth integration of tensor networks in machine learning

Quantum

... The simplest example is the triangle network, obtained from the entanglement-swapping network by adding a new source connecting the extremal parties. This network, being the simplest one that explicit factorizations fail to characterize, has been subject to intense study [15, 32, 54–57]. ...

Post-quantum nonlocality in the minimal triangle scenario

... We resolve two open questions from Ref. [24]. First, in Appendix A, using semidefinite programming (SDP) relaxation tools [46], we obtain tight maximal violations of such inequalities up to m = 7, and lower bounds up to m = 20. Secondly, in Appendix B, we provide a proof of the following theorem. ...

Semidefinite programming relaxations for quantum correlations
  • Citing Preprint
  • July 2023

... In this sense, quantum inflation becomes the most suitable tool, since it allows one to take into account both the network structure and the fact that the distributed systems are quantum, and the strength of both types of constraints can be tuned independently [ref. 65, Ch. 5]. ...

Experimental Full Network Nonlocality with Independent Sources and Strict Locality Constraints
  • Citing Article
  • May 2023

Physical Review Letters

... By contrast, in the exogenized case, operators associated with different parties always commute. In our Semidefinite Program for the AC intermediate node scenario we set the objective function to be maximized to be the left-hand side of Eq. (3), using (X, Y) = (0, 0) for the correlator of B and C. Our open-source implementation can be found online [12], and is accomplished by manually adapting the current version of the Inflation package [13] to capture the new commutation rules. We find that with an AC intermediate latent we have [...] For completeness, we note that the bound in Eq. (4) is tight, as it can be achieved by setting Charlie and Alice to always report the same value, namely that produced by the measurement at the intermediate latent position. ...

Inflation: a Python library for classical and quantum causal compatibility

Quantum

... Moreover, our work substantiates the hypothesis [82,83] that quantum mechanics is as nonlocal as it is without violating causality, due to the existence of uncertainty. This result, following a rich and flourishing research path [84–94], represents a significant step toward understanding nonlocality in our most fundamental theory of nature [95]. On the practical side, such a measurement capability provides the key to investigating novel applications of quantum theory combining both local and nonlocal correlations. ...

Certification of non-classicality in all links of a photonic star network without assuming quantum mechanics

... These sets of correlations were really striking as they cannot be traced back to the standard Bell scenario as in most cases, demonstrating genuine network non-locality [26]. Progress has been made in studying these correlations, but all current conclusive proofs are noiseless [26–30]. That is, they consider a set- ...

Proofs of Network Quantum Nonlocality in Continuous Families of Distributions
  • Citing Article
  • February 2023

Physical Review Letters