Orange Labs
  • Meylan, France
Recent publications
The emergence of spatial immersive technologies allows new ways to collaborate remotely. However, they still need to be studied and enhanced in order to improve their effectiveness and usability for collaborators. Remote Physical Collaborative Extended Reality (RPC-XR) consists of solving augmented physical tasks with the help of remote collaborators. This paper presents our RPC-AR system and a user study evaluating this system during a network hardware assembly task. Our system offers verbal and non-verbal interpersonal communication functionalities. Users embody avatars and interact with their remote collaborators through hand, head and eye tracking, and voice. Our system also captures an environment spatially, in real time, and renders it in a shared virtual space. We designed it to be lightweight and to avoid instrumenting collaborative environments or requiring preliminary setup steps. It performs capture, transmission and remote rendering of real environments in less than 250 ms. We ran a cascading user study to compare our system with a commercial 2D video collaborative application. We measured mutual awareness, task load, usability and task performance. We present an adapted Uncanny Valley questionnaire to compare the perception of remote environments between systems. We found that our application resulted in better empathy between collaborators, a higher cognitive load, and a lower, though still acceptable, level of usability for the remote user. We did not observe any significant difference in performance. These results are encouraging, as participants' observations provide insights to further improve the performance and usability of RPC-AR.
This article introduces the distributed and intelligent integrated sensing and communications (DISAC) concept, a transformative approach for 6G wireless networks that extends the emerging concept of integrated sensing and communications (ISAC). DISAC addresses the limitations of the existing ISAC models and, to overcome them, it introduces two novel foundational functionalities for both sensing and communications: a distributed architecture (enabling large-scale and energy-efficient tracking of connected users and objects, leveraging the fusion of heterogeneous sensors) and a semantic and goal-oriented framework (enabling the transition from classical data fusion to the composition of semantically selected information).
Optimizing networks to meet user needs has been a long-standing goal for the various key players in the network sector. To this end, a large number of studies have addressed the case of classifying network traffic into a set of activities (e.g., streaming) and applications (e.g., Spotify). Nonetheless, the fast-paced growth of the digital market has favored the advent of new consumption habits, such as performing multiple activities simultaneously. This concept is referred to as multi-activity situations or media multitasking. Conceiving solutions that can cope with these emerging consumption patterns may enable network operators and service providers to better adapt their network management solutions and commercial plans. In this paper, we propose a novel approach that can deal with a challenging scenario comprising both single-activity and multi-activity situations. The proposed approach pre-processes a network trace over a time window and then determines to which situation type it belongs. Furthermore, it identifies the type of the activities being performed and the applications being used (e.g., chat on Facebook & streaming on Spotify). Our experiments show that our solution achieves a satisfactory level of performance despite the complexity of the scenario we target. Indeed, our results are comparable to those of state-of-the-art techniques addressing less challenging scenarios that involve only single-activity situations.
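The two-stage pipeline the abstract describes can be illustrated with a minimal sketch: aggregate a trace over a time window into per-flow features, decide whether the window is a single- or multi-activity situation, then label each active flow. All function names, thresholds, and labels below are invented for illustration; the paper's actual classifiers are learned, not rule-based.

```python
# Hypothetical sketch of the windowed, two-stage classification pipeline.
# Stage 0: pre-process the trace over a time window into per-flow features.
# Stage 1: decide single- vs multi-activity. Stage 2: label each activity.

def window_features(packets, t0, t1):
    """Aggregate per-flow byte counts inside the window [t0, t1)."""
    flows = {}
    for ts, flow_id, size in packets:
        if t0 <= ts < t1:
            flows[flow_id] = flows.get(flow_id, 0) + size
    return flows

def classify_window(flows, multi_threshold=2):
    """Stage 1: situation type; stage 2: one (toy) label per active flow."""
    situation = "multi-activity" if len(flows) >= multi_threshold else "single-activity"
    labels = {fid: ("streaming" if nbytes > 10_000 else "chat")
              for fid, nbytes in flows.items()}
    return situation, labels

# Toy trace: (timestamp, flow id, bytes); two apps active in the same window.
packets = [(0.1, "spotify", 50_000), (0.4, "facebook", 800), (2.5, "spotify", 60_000)]
situation, labels = classify_window(window_features(packets, 0.0, 1.0))
```

Here the 1 s window contains two concurrent flows, so the sketch reports a multi-activity situation with one activity label per flow.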
Energy management in datacenters is a major challenge today due to the environmental and economic impact of increasing energy consumption. Efficient placement of virtual machines on physical machines within modern datacenters is crucial for their effective management. In this context, five algorithms, named CNN-GA, CNN-greedy, CNN-ABC, CNN-ACO and CNN-PSO, have been developed to minimize hosts’ power consumption and ensure service quality with relatively low response times. We propose a comparative approach between the developed algorithms and other existing methods for virtual machine placement. These algorithms combine optimization methods with Convolutional Neural Networks to build predictive models of virtual machine placement. The models were evaluated based on their accuracy and complexity to select the optimal solution. The necessary data was collected using the CloudSim Plus simulator, and the prediction results were used to allocate virtual machines according to the predictions of the models. The main objective of this research is to optimize the management of Information Technology resources within datacenters. This is achieved by seeking a virtual machine placement policy that minimizes hosts’ power consumption and ensures an appropriate level of service for users' needs. It considers the imperatives of sustainability, performance, and availability by reducing energy consumption and response times. We studied six scenarios under specific constraints to determine the best model for virtual machine placement. This approach aims to address current challenges in energy management and operational efficiency.
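For context, a classical greedy baseline for this placement problem places each VM on the host whose power increase is smallest, subject to capacity. The sketch below is a generic illustration with an assumed linear power model; it is not one of the paper's CNN-* algorithms, which replace such heuristics with learned predictive models.

```python
# Illustrative power-aware greedy VM placement (an assumed baseline, not the
# paper's method). Demands and capacities are normalised CPU fractions.

def host_power(util, idle=100.0, peak=200.0):
    """Hypothetical linear power model: idle power plus utilisation-dependent part."""
    return idle + util * (peak - idle)

def greedy_place(vms, hosts):
    """vms: list of CPU demands in [0, 1]; hosts: list of current utilisations.
    Returns, for each VM, the index of the chosen host (None if none fits)."""
    placement = []
    for demand in vms:
        best, best_delta = None, float("inf")
        for h, used in enumerate(hosts):
            if used + demand <= 1.0:                       # capacity constraint
                delta = host_power(used + demand) - host_power(used)
                if delta < best_delta:
                    best, best_delta = h, delta
        placement.append(best)
        if best is not None:
            hosts[best] += demand
    return placement

# Two empty hosts, three VMs: the third VM no longer fits on host 0.
plan = greedy_place([0.5, 0.4, 0.3], [0.0, 0.0])
```

With the linear model the power delta is the same on every feasible host, so the heuristic consolidates onto host 0 until its capacity is exhausted.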
Recent innovations in online advertising facilitate the use of a wide variety of data sources to build micro-segments of consumers, and delegate the manufacture of audience segments to machine learning algorithms. Both techniques promise to replace demographic targeting, as part of a post-demographic turn driven by big data technologies. This article empirically investigates this transformation in online advertising. We show that targeting categories are assessed along three criteria: efficiency, communicability, and explainability. The relative importance of these objectives helps explain the lasting role of demographic categories, the development of audience segments specific to each advertiser, and the difficulty in generalizing interest categories associated with big data. These results underline the importance of studying the impact of advanced big data and AI technologies in their organizational and professional contexts of appropriation, and of paying attention to the permanence of the categorizations that make the social world intelligible.
The deployment of RISs in future 6G networks is expected to substantially improve mobile network coverage. This paper introduces a new cross-layer low-complexity scheme for the online optimization of RIS-assisted communication systems. It jointly optimizes BS and RIS configuration and fair UE scheduling, which is critical for high-performance deployments. In particular, a RIS beam synthesis method is proposed for RIS configuration. The proposed solution embeds two nested control loops: i) a fast control loop working at the OFDMA slot scale and consisting of a standard UE proportional fair scheduler, and ii) a slow control loop operating at the OFDMA frame scale which adapts the RIS configuration to the UEs’ spatial distribution and maximizes the UEs’ aggregated performance. The slow control loop is based on an online stochastic approximation algorithm whose convergence to the optimal rest point is proved. In a reference scenario, the proposed scheduler achieves a gain of 47% in mean spectral efficiency for NLOS UEs over a baseline scheme.
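The fast control loop is a standard proportional fair scheduler, which can be sketched in a few lines: each slot, serve the UE with the highest ratio of instantaneous rate to exponentially averaged throughput, then update the averages. This is a textbook PF sketch under assumed rates and EWMA constant; the slow RIS-configuration loop is not modelled here.

```python
# Minimal proportional-fair scheduler for one OFDMA slot (generic sketch).

def pf_schedule(inst_rates, avg_thr, beta=0.1):
    """Pick the UE maximising inst_rate / avg_throughput, then update the
    exponentially weighted average throughput of every UE."""
    scores = [r / max(t, 1e-9) for r, t in zip(inst_rates, avg_thr)]
    chosen = scores.index(max(scores))
    new_avg = [(1 - beta) * t + (beta * r if u == chosen else 0.0)
               for u, (r, t) in enumerate(zip(inst_rates, avg_thr))]
    return chosen, new_avg

# Two UEs with equal average throughput; UE 0 has the better channel this slot.
avg = [1.0, 1.0]
chosen, avg = pf_schedule([2.0, 1.0], avg)
```

UE 0 is served first, but its average throughput rises while UE 1's decays, so UE 1's PF score grows over successive slots: the scheduler trades peak rate against fairness.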
In low-power wireless networks, every byte sent by an embedded device causes its radio to stay on a little longer, which eats into its limited energy reserve. And because the radio is often the most power-hungry circuit in the device, reducing the number of bytes to be sent and received automatically increases the battery lifetime of the device, resulting in a lower total cost of ownership for the end-user, hence better adoption. Low-power wireless devices tend to generate short data payloads, typically on the order of 2–50 B. This means that protocol headers make up a large portion of the bytes inside a wireless frame; 30–70% is not uncommon. Compressing those headers, i.e., removing bytes that can be reconstructed anyway or that are not needed, makes perfect sense. This article serves as a primer on header compression in constrained networks. We start by describing exactly why it is needed, then survey the different standards doing header compression. We indicate how today's approach requires expert input for every deployment, severely hindering the rollout of such approaches. Instead, we argue that an automated approach based on machine learning and artificial intelligence is the right way to go, and provide blueprints for such approaches.
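The core idea behind context-based header compression, as in the standards this primer surveys (e.g., SCHC), can be shown in a toy round-trip: fields whose values are fixed by a rule shared between device and network are elided and replaced by a small rule identifier, and the decompressor restores them from its copy of the rule. The field names and rule contents below are invented for the example.

```python
# Toy context-based header compression (illustrative, not a real SCHC codec).
# A shared rule pins the values of fields that never change for this device.

RULE_5 = {"version": 6, "next_header": 17, "dst_port": 5683}  # shared context

def compress(header, rules):
    """Elide every field matched by a rule; send the rule id plus the residue."""
    for rule_id, ctx in rules.items():
        if all(header.get(k) == v for k, v in ctx.items()):
            residue = {k: v for k, v in header.items() if k not in ctx}
            return rule_id, residue
    return None, header                      # no rule matches: send as-is

def decompress(rule_id, residue, rules):
    """Rebuild the full header from the shared rule and the residue."""
    return {**rules[rule_id], **residue}

rules = {5: RULE_5}
header = {"version": 6, "next_header": 17, "dst_port": 5683, "src_port": 40001}
rid, residue = compress(header, rules)
assert decompress(rid, residue, rules) == header   # lossless round-trip
```

Only one varying field crosses the air interface instead of four, which is exactly the saving the article quantifies, and writing the rules is the expert step that the proposed ML-based approaches aim to automate.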
The outcomes of extensive-form games are the realisation of an exponential number of distinct strategies, which may or may not be Nash equilibria. The aim of this work is to determine whether an outcome of an extensive-form game can be the realisation of a Nash equilibrium, without resorting to the cumbersome notion of normal-form strategy. We focus on the minimal example of pure Nash equilibria in two-player extensive-form games with perfect information. We introduce a new representation of an extensive-form game as a graph of its outcomes and we provide a new lightweight algorithm to enumerate the realisations of Nash equilibria. It is the first of its kind not to use normal-form brute force. The algorithm can be easily modified to provide intermediate results, such as lower and upper bounds on the utility of Nash equilibria. We compare this modified algorithm to the only existing method providing an upper bound on the utility of any outcome of a Nash equilibrium. The experiments show that our algorithm is faster by some orders of magnitude. We finally test the method to enumerate the Nash equilibria on a new instance library, which we introduce as a benchmark representing all structures and properties of two-player extensive-form games.
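As background for the setting (two-player perfect-information games), the classical way to obtain one Nash-equilibrium realisation is backward induction over the game tree. The sketch below does exactly that with an ad hoc tree encoding; it is not the paper's outcome-graph algorithm, which enumerates all equilibrium realisations rather than computing a single subgame-perfect one.

```python
# Backward induction on a two-player perfect-information game tree.
# Node encoding (ad hoc): ("leaf", (u1, u2)) or (player, [children]),
# where player is 0 or 1 and each child is reached by one move.

def backward_induction(node):
    """Return (payoff pair, move path) of the subgame-perfect play from node."""
    kind, data = node
    if kind == "leaf":
        return data, []
    player = kind
    best = None
    for i, child in enumerate(data):
        payoffs, path = backward_induction(child)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [i] + path)   # mover picks the child best for them
    return best

# Tiny game: player 0 moves first, then player 1 replies.
game = (0, [
    (1, [("leaf", (3, 1)), ("leaf", (0, 0))]),   # after 0's first move
    (1, [("leaf", (2, 2)), ("leaf", (1, 3))]),   # after 0's second move
])
payoffs, path = backward_induction(game)
```

Player 1 would answer the second move with the (1, 3) leaf, so player 0 prefers the first move, yielding the outcome (3, 1); this single realisation is a Nash equilibrium, while characterising all outcomes realisable by some equilibrium is the harder problem the paper addresses.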
Self-supervised learning enables the training of large neural models without the need for large, labeled datasets. It has been generating breakthroughs in several fields, including computer vision, natural language processing, biology, and speech. In particular, the state of the art in several speech processing applications, such as automatic speech recognition or speaker identification, is set by models whose latent representations are learned using self-supervised approaches. Several configurations exist in self-supervised learning for speech, including contrastive, predictive, and multilingual approaches. There is, however, a crucial limitation in the majority of existing approaches: their high computational costs. These costs limit the deployment of models, the size of the training dataset, and the number of research groups that can afford research with large self-supervised models. Likewise, we should consider the environmental costs that high energy consumption implies. Efforts in this direction comprise optimization of existing models, neural architecture efficiency, improvements in fine-tuning for speech processing tasks, and data efficiency. But despite current efforts, more work could be done to address high computational costs in self-supervised representation learning.
Competing service providers in the cloud environment must ensure services are delivered under the promised security requirements. This is crucial for mobile services, where a user's movement results in the service migrating between edge servers or clouds in the Continuum. Maintaining service sovereignty before, during, and after the migration is a real challenge, especially when the service provider has committed to ensuring its quality following the Service Level Agreement. In this paper, we present the main challenges mobile service providers face in a cloud environment to guarantee the required level of security and digital sovereignty as described in the Security Service Level Agreement, with emphasis on challenges resulting from the service migration between the old and new locations. We present the security and sovereignty context intended for migration and the steps of the migration algorithm. We also analyze three specific service migration cases for three vertical industries with different service quality requirements.
Due to the fast growth of data that are measured on a continuous scale, functional data analysis has undergone many developments in recent years. Regression models with a functional response involving functional covariates, also called “function-on-function” models, are thus becoming very common. Studying this type of model in the presence of heterogeneous data can be particularly useful in various practical situations. We mainly develop in this work a function-on-function Mixture of Experts (FFMoE) regression model. Like most inference approaches for models on functional data, we use basis expansion (B-splines) both for covariates and parameters. A regularized inference approach is also proposed, which accurately smooths functional parameters in order to provide interpretable estimators. Numerical studies on simulated data illustrate the good performance of FFMoE as compared with competitors. The usefulness of the proposed model is illustrated on two data sets: the reference Canadian weather data set, in which precipitation is modeled according to temperature, and a cycling data set, in which the developed power is explained by the speed, the cyclist's heart rate, and the slope of the road.
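The B-spline basis expansion used for both covariates and parameters can be evaluated with the classical Cox–de Boor recursion; the sketch below is a generic illustration of that basis (knot vector and degree chosen arbitrarily), not the paper's inference code.

```python
# Cox-de Boor recursion: value of the i-th B-spline of degree k at x,
# for knot vector t. Zero-width knot intervals are guarded against.

def bspline_basis(i, k, t, x):
    if k == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k] != t[i]:
        left = (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(i, k - 1, t, x)
    right = 0.0
    if t[i + k + 1] != t[i + 1]:
        right = ((t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1])
                 * bspline_basis(i + 1, k - 1, t, x))
    return left + right

knots = [0, 0, 0, 0.5, 1, 1, 1]   # clamped knot vector: 4 quadratic splines
vals = [bspline_basis(i, 2, knots, 0.25) for i in range(4)]
```

A functional covariate or coefficient is then represented by a finite vector of coefficients on this basis; a useful sanity check is that the basis functions form a partition of unity, i.e. `vals` sums to 1 at any interior point.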
Recent studies in active learning, particularly in uncertainty sampling, have focused on the decomposition of model uncertainty into reducible and irreducible uncertainties. In this paper, the aim is to simplify the computational process while eliminating the dependence on observations. Crucially, the inherent uncertainty in the labels is considered, i.e. the uncertainty of the oracles. Two strategies are proposed, sampling by Klir uncertainty, which tackles the exploration–exploitation dilemma, and sampling by evidential epistemic uncertainty, which extends the concept of reducible uncertainty within the evidential framework, both using the theory of belief functions. Experimental results in active learning demonstrate that our proposed method can outperform uncertainty sampling.
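For reference, the baseline that these Klir- and evidential-uncertainty strategies refine is classical uncertainty sampling: query the unlabelled point whose predictive distribution has maximum entropy. The sketch below shows that baseline only (probabilities assumed to come from any probabilistic classifier); the belief-function machinery of the paper is not reproduced.

```python
import math

# Classical entropy-based uncertainty sampling (the baseline, not the
# paper's evidential strategies).

def entropy(probs):
    """Shannon entropy of a predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def query_most_uncertain(pool_probs):
    """Index of the unlabelled point with maximum predictive entropy."""
    return max(range(len(pool_probs)), key=lambda i: entropy(pool_probs[i]))

# Pool of three unlabelled points with class probabilities from a classifier.
pool = [[0.9, 0.1], [0.5, 0.5], [0.7, 0.3]]
idx = query_most_uncertain(pool)
```

The 50/50 point is queried first; the paper's contribution is to decompose and replace this single entropy score with Klir and evidential epistemic uncertainties within the theory of belief functions.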
Segmentation of anatomical structures on 2D images of cardiac exams is necessary for performing 3D volumetric analysis, enabling the computation of parameters for diagnosing cardiovascular disease. In this work, we present robust algorithms to automatically segment cardiac imaging data and generate a volumetric anatomical reconstruction of a patient-specific heart model by propagating active contour output within a patient stack through a self-supervised learning model. Contour initializations are automatically generated, then output segmentations on sparse image slices are transferred and merged across a stack of images within the same heart data set during the segmentation process. We demonstrate whole-heart segmentation and compare the results with ground truth manual annotations. Additionally, we provide a framework to represent segmented heart data in the form of implicit surfaces, allowing interpolation operations to generate intermediary models of heart sections and volumes throughout the cardiac cycle and to estimate ejection fraction.
671 members
Fabrice Clerot
  • Innovation, Marketing and Technologies
Ivan Bedini
  • Orange Labs Products & Services
Marc Lacoste
  • Department of Security
Gilles Privat
  • M2M, Internet of Things, Smart Cities
Information
Address
Meylan, France