Recent publications
We present a secret-key encryption scheme based on random rank-metric ideal linear codes with a simple decryption circuit. It supports unlimited homomorphic additions and plaintext multiplications (i.e., the homomorphic multiplication of a clear plaintext with a ciphertext), as well as a fixed but arbitrary number of homomorphic multiplications. We study a candidate bootstrapping algorithm that requires no homomorphic multiplications, only additions and plaintext multiplications. The latter operation is very efficient in our scheme, whereas bootstrapping is usually the main factor penalizing the performance of other fully homomorphic encryption schemes. However, the security reduction of our scheme restricts the number of independent ciphertexts that can be published. In particular, this prevents the secure evaluation of the bootstrapping algorithm, as the number of ciphertexts in the key-switching material is too large. Our scheme is nonetheless the first somewhat homomorphic encryption scheme based on random ideal codes and a first step towards full homomorphism. Random ideal codes give stronger security guarantees than the highly structured codes of existing constructions. We give concrete parameters showing that our scheme achieves competitive sizes and performance, with a key size of 3.7 kB and a ciphertext size of 0.9 kB when a single multiplication is allowed.
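The additive and plaintext-multiplicative homomorphisms described above can be illustrated with a toy linear secret-key scheme over the integers mod p. This is a deliberately insecure stand-in, not the paper's rank-metric code-based construction; all names and parameter values here are illustrative:

```python
import numpy as np

p, n = 65537, 8                        # toy modulus and key length (illustrative)
rng = np.random.default_rng(0)
s = rng.integers(0, p, n)              # secret key vector

def enc(m):
    """Encrypt m as (a, <a, s> + m mod p): a linear mask-and-add toy."""
    a = rng.integers(0, p, n)
    return a, (int(a @ s) + m) % p

def dec(ct):
    a, b = ct
    return (b - int(a @ s)) % p

def add(c1, c2):
    """Homomorphic addition: add ciphertexts component-wise."""
    return (c1[0] + c2[0]) % p, (c1[1] + c2[1]) % p

def pmul(k, c):
    """Plaintext multiplication: scale a ciphertext by a clear constant k."""
    return (k * c[0]) % p, (k * c[1]) % p
```

Because encryption is linear in the message, sums and clear-constant scalings of ciphertexts decrypt to the corresponding sums and scalings of plaintexts; supporting even a bounded number of ciphertext-by-ciphertext multiplications is the hard part the paper addresses.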
In symmetric cryptography, vectorial Boolean functions over finite fields are used to construct strong S-boxes. A strong S-box must meet various criteria to resist known attacks, including differential, linear, and boomerang attacks and their variants. To evaluate an S-box’s resistance, several tables are utilized, such as the Difference Distribution Table (DDT) and the Boomerang Connectivity Table (BCT). Recent developments in boomerang attacks have revisited the concept of the boomerang switch effect, illustrating the effectiveness of this technique. As a result, a new tool called the Boomerang Difference Table (BDT) was introduced as an alternative to the traditional BCT. Additionally, two novel tables have been proposed: the Upper Boomerang Connectivity Table (UBCT) and the Lower Boomerang Connectivity Table (LBCT). These tables are enhancements over the BCT and facilitate a systematic evaluation of boomerangs that can return over multiple rounds. This paper focuses on the new tools for measuring the revisited version of boomerang attacks and the related tables UBCT and LBCT, as well as the so-called Extended Boomerang Connectivity Table (EBCT). Specifically, we examine the properties of these novel tools and investigate the corresponding tables. We also study their interconnections, their links to the DDT, and their values for affine-equivalent vectorial functions and compositional inverses of permutations. Moreover, we introduce the concept of nontrivial boomerang connectivity uniformity and determine the explicit values of all the entries of the UBCT, LBCT, and EBCT for the important cryptographic case of the inverse function.
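The DDT and BCT mentioned above are straightforward to compute for small S-boxes. A minimal Python sketch, using the 4-bit PRESENT S-box as a small worked example (the paper's own results concern the inverse function, not this S-box):

```python
# 4-bit PRESENT S-box, a standard small example
S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
n = 16
Sinv = [0] * n
for x, y in enumerate(S):
    Sinv[y] = x

# Difference Distribution Table: DDT[a][b] = #{x : S(x ^ a) ^ S(x) = b}
DDT = [[0] * n for _ in range(n)]
for a in range(n):
    for x in range(n):
        DDT[a][S[x ^ a] ^ S[x]] += 1

# Boomerang Connectivity Table:
# BCT[a][b] = #{x : Sinv(S(x) ^ b) ^ Sinv(S(x ^ a) ^ b) = a}
BCT = [[0] * n for _ in range(n)]
for a in range(n):
    for b in range(n):
        for x in range(n):
            if Sinv[S[x] ^ b] ^ Sinv[S[x ^ a] ^ b] == a:
                BCT[a][b] += 1
```

The largest DDT entry over nonzero input differences is the differential uniformity (4 here, the best possible for a 4-bit permutation), and the first row and column of the BCT are always 2^n, which is why uniformity notions for boomerang tables are defined over nontrivial entries.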
The establishment of BIM raises issues at the transition between sketches, still mostly used in ideation, and digital project models. With the latest advances in AI, one way of supporting this transition is to provide sketch-interpretation software that generates digital representations. However, the question arises as to whether these two modes of representation can coexist, and whether digital representations can be a resource for designers in the preliminary phase. In this paper, we investigate the impact of auto-generated representations on the preliminary design activity of architects. We implement a Wizard of Oz protocol in which nine participants sketch their designs and are provided with CAD plans, 3D models, and inspirational images. We study reflexive conversations by analyzing and categorizing behavior patterns preceding and following the reception of the auto-generated representations. We identify 23 patterns and show that the digital representations play a flexible role and have a strong impact on the process.
In this article, we investigate the notion of model-based deep learning in the realm of music information research (MIR). Loosely speaking, we use the term model-based deep learning for approaches that combine traditional knowledge-based methods with data-driven techniques, especially those based on deep learning, within a differentiable computing framework. In music, prior knowledge, for instance related to sound production, music perception, or music composition theory, can be incorporated into the design of neural networks and associated loss functions. We outline three specific scenarios to illustrate the application of model-based deep learning in MIR, demonstrating the implementation of such concepts and their potential.
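As a minimal illustration of the idea (not one of the article's three scenarios), a parametric signal model can be fit by gradient descent: the sinusoid plays the role of the knowledge-based component and the gradient loop the data-driven one. All signal and optimizer parameters here are arbitrary:

```python
import numpy as np

sr, dur = 100, 1.0
t = np.arange(0, dur, 1.0 / sr)
target = np.sin(2 * np.pi * 5.0 * t)    # observed signal, true frequency 5 Hz

def loss(f):
    """Mean-squared error between the sinusoidal model and the observation."""
    return np.mean((np.sin(2 * np.pi * f * t) - target) ** 2)

f = 4.8                                  # initial frequency guess
initial_loss = loss(f)
for _ in range(2000):
    # analytic gradient of the loss with respect to the model frequency f
    r = np.sin(2 * np.pi * f * t) - target
    grad = np.mean(2 * r * 2 * np.pi * t * np.cos(2 * np.pi * f * t))
    f -= 2e-3 * grad
```

In a full model-based system the gradient would be supplied by automatic differentiation through the signal model rather than derived by hand, which is precisely what a differentiable computing framework provides.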
ModelHamiltonian is a free, open source, and cross-platform Python library designed to express model Hamiltonians, including spin-based Hamiltonians (Heisenberg and Ising models) and occupation-based Hamiltonians (Pariser–Parr–Pople, Hubbard, and Hückel models) in terms of 1- and 2-electron integrals, so that these systems can be easily treated by traditional quantum chemistry software programs. ModelHamiltonian was originally intended to facilitate the testing of new electronic structure methods using HORTON but emerged as a stand-alone research tool that we recognize has wide utility, even in an educational context. ModelHamiltonian is written in Python and adheres to modern principles of software development, including comprehensive documentation, extensive testing, continuous integration/delivery protocols, and package management. While we anticipate that most users will use ModelHamiltonian as a Python library, we include a graphical user interface so that models can be built without programming, based on connectivity/parameters inferred from, for example, a SMILES string. We also include an interface to ChatGPT so that users can specify a Hamiltonian in plain language (without learning ModelHamiltonian’s vocabulary and syntax). This article marks the official release of the ModelHamiltonian library, showcasing its functionality and scope.
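To give a flavor of what one of these occupation-based models looks like in practice (plain NumPy here, not the ModelHamiltonian API), the Hückel model for a benzene-like six-site ring is just a site energy α on the diagonal and a hopping β between bonded neighbors:

```python
import numpy as np

alpha, beta = 0.0, -1.0          # illustrative Hückel parameters (energies in |beta|)
nsites = 6                       # six-membered ring, benzene-like

# adjacency of the ring: site i is bonded to site (i + 1) mod nsites
A = np.zeros((nsites, nsites))
for i in range(nsites):
    j = (i + 1) % nsites
    A[i, j] = A[j, i] = 1.0

H = alpha * np.eye(nsites) + beta * A
levels = np.sort(np.linalg.eigvalsh(H))   # one-electron energy levels
```

Diagonalizing recovers the familiar benzene spectrum α ± β, α ± 2β with two-fold degeneracies at α ± β; expressing such models in terms of standard 1- and 2-electron integrals is what lets quantum chemistry codes consume them directly.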
Platooning-based vehicle-to-vehicle (V2V) integrated sensing and communication (ISAC) frameworks have emerged as an attractive strategy in recent years. In this work, we present an optimal time partitioning (OTP) framework in V2V ISAC systems. We propose a novel sensing measure for quantifying radar sensing performance as a function of the maximum detectable range and velocity of the radar. With the communication operation following the sensing operation, an OTP problem is formulated and solved as a convex problem, constrained by sensing and communication performance guarantees. Optimal bounds on the time duration for sensing and communication are derived, along with the maximum achievable communication throughput. Furthermore, analytical insights on the inherent trade-offs associated with the design parameters are presented. The simulation results demonstrate that the proposed OTP framework achieves a communication throughput gain of up to 12.6% over the equal time partitioning framework, in addition to meeting the sensing performance requirements.
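The structure of such an optimum can be seen in a toy version of the trade-off (all numbers illustrative, with a simple minimum-sensing-time constraint standing in for the paper's sensing measure): throughput decreases monotonically in the sensing duration, so the optimal split sits exactly at the sensing constraint.

```python
import numpy as np

T = 1e-3          # frame duration in seconds (illustrative)
B = 10e6          # communication bandwidth in Hz
snr = 10.0        # communication SNR, linear scale
tau_min = 0.3e-3  # minimum sensing time meeting the detection requirement

taus = np.linspace(tau_min, T, 1000)          # feasible sensing durations
rate = (T - taus) * B * np.log2(1 + snr)      # bits delivered in the remaining slot
tau_star = taus[np.argmax(rate)]              # optimum: sensing constraint is tight
```

The real formulation couples the constraint to detectable range and velocity and proves convexity, but the qualitative conclusion is the same: sense just long enough to meet the guarantee, then communicate.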
ABSTRACT One of the principles of digital democracy is to actively inform citizens and mobilize them to participate in political debate. This article presents a tool for processing public political documents in order to make their information more accessible to citizens and specific professional groups. In particular, we investigate and develop Artificial Intelligence techniques for text mining of the Diário da Assembleia da República of Portugal, to partition, analyze, extract, and summarize the information in the minutes of parliamentary sessions. We also develop dashboards that present the extracted information in a simple, visual way, such as summaries of speeches and the topics discussed. Our main goal, beyond characterizing political behavior, is to increase transparency and accountability for voters and elected authorities.
The increasing availability and use of big data analytics (BDA) in the creative industries engender powerful new opportunities for enhancing consumer-driven innovation. But they also raise significant organizational challenges, as they require redesigning creative processes so that analytics-based logics are reconciled with legacy creativity-based ones. This paper explores the ways in which BDA leads to the redesign of creative processes, as well as the organizational tensions it may induce. Building on a case study of video game development projects, our findings show that while big data analytics can contribute to the exploration of new ideas, support decisions, and provide negotiation power, it also induces several organizational tensions. We uncover eight of these tensions, which we group into three themes: coordination, decision making, and control. The paper contributes to research on organizational transformation through big data and on creative industries, and offers practical implications for the management of creative projects in the age of big data.
Market sentiment analysis (MSA) has evolved significantly over nearly four decades, growing in relevance and application in economics and finance. This paper extensively reviews MSA, encompassing methodologies ranging from lexicon‐based techniques to traditional Machine Learning (ML), Deep Learning (DL), and hybrid approaches. Emphasizing the transition from rudimentary word counters to sophisticated feature extraction from diverse sources such as news, social media, and share prices, the study presents an updated state‐of‐the‐art review of sentiment analysis. Furthermore, using network analysis, a bibliometric and scientometric lens is applied to map the expanding footprint of sentiment research within economics and finance, revealing key trends, dominant research hubs, and potential areas for interdisciplinary collaboration. This exploration consolidates the foundational and emerging methods in MSA and underscores its dynamic interplay with global financial ecosystems and the imperative for future integrative research trajectories.
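The "rudimentary word counter" baseline that this line of research starts from amounts to a few lines of code (the polarity lexicon below is a toy; real systems use curated financial lexicons):

```python
# toy polarity lexicon -- purely illustrative word lists
POSITIVE = {"gain", "growth", "bullish", "strong", "profit"}
NEGATIVE = {"loss", "bearish", "weak", "decline", "risk"}

def lexicon_sentiment(text):
    """Signed count of polarity words: > 0 positive, < 0 negative, 0 neutral."""
    tokens = text.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
```

The ML, DL, and hybrid approaches surveyed replace these fixed word lists with learned features, but the counter remains a useful interpretable baseline.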
The diversity of nanosatellite applications is increasingly attracting the scientific community’s attention. The main component of these satellites is the OnBoard Computer (OBC), which is responsible for all control and processing. The OBC also encompasses memory elements that are highly susceptible to failure: due to space radiation, errors in these memories can cause severe damage. As integrated circuit technology advances, cluster errors become more and more frequent. Error Correction Codes (ECCs) are among the most used techniques for mitigating errors, and two-dimensional ECCs are used to reach higher error-correction power. This paper assesses how many check-bit regions to include for code enhancement; our analysis investigates the impact of incorporating up to three check-bit regions. The results are analyzed through adjacent and exhaustive error injection tests and compared to other ECCs. In addition, reliability, redundancy, and hardware implementation costs are investigated, and an evaluation metric is proposed to choose the best ECC. Experiments with random error patterns show that the proposal with three crossed check-bit regions achieves 100% correction for up to four bitflips and greater than 90% correction for up to seven bitflips. Additionally, for adjacent error patterns, the proposal achieves correction greater than 97.4% with up to five bitflips.
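The simplest member of this two-dimensional ECC family, with one row-parity region and one column-parity region, already corrects any single bitflip by intersecting the failing row and column parities. A sketch of that principle (not the paper's three-region code):

```python
import numpy as np

def encode(data):
    """Compute row and column check bits for a 2D bit matrix."""
    return data.sum(axis=1) % 2, data.sum(axis=0) % 2

def correct_single(data, row_par, col_par):
    """Flip the single upset located at the crossing of the failing parities."""
    bad_rows = (data.sum(axis=1) % 2) != row_par
    bad_cols = (data.sum(axis=0) % 2) != col_par
    if bad_rows.sum() == 1 and bad_cols.sum() == 1:
        data[np.argmax(bad_rows), np.argmax(bad_cols)] ^= 1
    return data

# inject a single upset into a 4x4 data word and recover it
rng = np.random.default_rng(1)
word = rng.integers(0, 2, (4, 4))
rp, cp = encode(word)
corrupted = word.copy()
corrupted[2, 3] ^= 1
recovered = correct_single(corrupted, rp, cp)
```

Multi-bit cluster upsets defeat this two-region scheme, because several failing rows and columns no longer identify unique bit positions; adding further, crossed check-bit regions is exactly how the paper extends the correction power.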