
Adriano Soares Koshiyama
Pontifical Catholic University of Rio de Janeiro · Department of Electrical Engineering (ELE)
MD Electrical Engineering, BSc Economics
About
93 Publications
19,080 Reads
690 Citations
Publications (93)
Anthropogenic impacts on natural ecosystems are major threats to the wild bee fauna. In this study, the diversity of bees and floral species was assessed to determine whether fragmentation of Mata Atlântica (Atlantic Forest) habitats can lead to changes in the bee and plant community. Over the course of one year, four localities...
As Large Language Models (LLMs) become increasingly integrated into various facets of society, a significant portion of online text consequently becomes synthetic. This raises concerns about bias amplification, a phenomenon in which models trained on synthetic data amplify pre-existing biases over successive training iterations. Previous literature...
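As a rough illustration of the amplification dynamic described above, the toy simulation below is a minimal sketch, not the paper's experimental setup: the sharpening factor and sample sizes are invented for illustration. Each "generation" is fit on synthetic samples drawn from the previous one, and a small initial bias grows over iterations.

```python
import numpy as np

# Toy illustration (not the paper's setup): a "model" is just a Bernoulli rate p
# of producing text with a particular biased association. Each generation is
# trained on samples drawn from the previous generation's output, with mild
# sharpening to mimic the tendency to exaggerate majority patterns.
rng = np.random.default_rng(0)

def next_generation(p, n_samples=10_000, sharpening=1.5):
    """Fit the next 'model' on synthetic data produced at rate p."""
    synthetic = rng.random(n_samples) < p       # biased synthetic corpus
    p_hat = synthetic.mean()                    # rate learned from that corpus
    # Sharpening on the log-odds scale is an illustrative assumption,
    # not a measured quantity.
    logit = np.log(p_hat / (1 - p_hat)) * sharpening
    return 1 / (1 + np.exp(-logit))

p = 0.55                                        # small initial bias
for generation in range(6):
    print(f"generation {generation}: biased-output rate = {p:.3f}")
    p = next_generation(p)
```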
Open-generation bias benchmarks evaluate social biases in Large Language Models (LLMs) by analyzing their outputs. However, the classifiers used in analysis often have inherent biases, leading to unfair conclusions. This study examines such biases in open-generation benchmarks like BOLD and SAGED. Using the MGSD dataset, we conduct two experiments....
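A minimal sketch of the kind of classifier probe this line of work motivates, assuming the transformers library and its default sentiment-analysis pipeline as a stand-in for a benchmark's classifier; the template sentence and group terms are illustrative, not drawn from BOLD, SAGED, or MGSD.

```python
# Probe a sentiment classifier with sentences that differ only in the group
# term. Large score gaps suggest the classifier itself is biased, which would
# contaminate any bias conclusions drawn about the LLM under test.
from transformers import pipeline  # downloads a small default sentiment model

classifier = pipeline("sentiment-analysis")  # stand-in for a benchmark's classifier

template = "The {group} engineer explained the design clearly."
groups = ["male", "female", "young", "elderly"]

scores = {}
for group in groups:
    result = classifier(template.format(group=group))[0]
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    scores[group] = signed

baseline = max(scores.values())
for group, score in scores.items():
    print(f"{group:8s} sentiment={score:+.3f} gap_from_max={baseline - score:.3f}")
```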
The development of unbiased large language models is widely recognized as crucial, yet existing benchmarks fall short in detecting biases due to limited scope, contamination, and lack of a fairness baseline. SAGED(-Bias) is the first holistic benchmarking pipeline to address these problems. The pipeline encompasses five core stages: scraping materi...
Hallucination, the generation of factually incorrect content, is a growing challenge in Large Language Models (LLMs). Existing detection and mitigation methods are often isolated and insufficient for domain-specific needs, lacking a standardized pipeline. This paper introduces THaMES (Tool for Hallucination Mitigations and EvaluationS), an integrat...
Stereotypes are generalised assumptions about societal groups, and even state-of-the-art LLMs using in-context learning struggle to identify them accurately. Due to the subjective nature of stereotypes, where what constitutes a stereotype can vary widely depending on cultural, social, and individual perspectives, robust explainability is crucial. E...
As the demand for human-like interactions with LLMs continues to grow, so does the interest in manipulating their personality traits, which has emerged as a key area of research. Methods like prompt-based In-Context Knowledge Editing (IKE) and gradient-based Model Editor Networks (MEND) have been explored but show irregularity and variability. IKE...
While Large Language Models (LLMs) excel in text generation and question-answering, their effectiveness in AI legal and policy applications is limited by outdated knowledge, hallucinations, and inadequate reasoning in complex contexts. Retrieval-Augmented Generation (RAG) systems improve response accuracy by integrating external knowledge but struggle with retr...
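For readers unfamiliar with the mechanics, the snippet below is a minimal RAG sketch, assuming scikit-learn for a toy TF-IDF retriever over a small in-memory list of policy snippets paraphrased from themes elsewhere on this page. A production system would add chunking, a vector store, and an actual LLM call.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative corpus; a real system would index full policy documents.
documents = [
    "The EU AI Act classifies AI systems into risk tiers with obligations per tier.",
    "NYC Local Law 144 requires bias audits of automated employment decision tools.",
    "The UK Algorithmic Transparency Standard applies to public-sector algorithms.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def build_prompt(question, top_k=2):
    """Retrieve the top_k most similar snippets and prepend them as context."""
    query_vec = vectorizer.transform([question])
    ranked = cosine_similarity(query_vec, doc_matrix)[0].argsort()[::-1][:top_k]
    context = "\n".join(documents[i] for i in ranked)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Which law mandates bias audits for hiring tools?"))
# The resulting prompt would then be sent to an LLM of choice.
```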
Business reliance on algorithms is becoming ubiquitous, and companies are increasingly concerned about their algorithms causing major financial or reputational damage. High-profile cases include Google’s AI algorithm for photo classification mistakenly labelling a black couple as gorillas in 2015 (Gebru 2020 In The Oxford handbook of ethics of AI,...
Rapid advancements in artificial intelligence (AI) technology have brought about a plethora of new challenges in terms of governance and regulation. AI systems are being integrated into various industries and sectors, creating a demand from decision-makers to possess a comprehensive and nuanced understanding of the capabilities and limitations of t...
Staff Working Paper No. 1,038 - Bank of England
The issue of fairness in AI has received an increasing amount of attention in recent years. The problem can be approached by looking at different protected attributes (e.g., ethnicity, gender, etc) independently, but fairness for individual protected attributes does not imply intersectional fairness. In this work, we frame the problem of intersecti...
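The gap between per-attribute and intersectional fairness can be seen with a small synthetic example (the selection rates below are invented for illustration): each protected attribute looks balanced on its own, yet specific intersections diverge sharply.

```python
import pandas as pd

# Synthetic subgroup selection rates: balanced per attribute, skewed per intersection.
data = pd.DataFrame({
    "gender":    ["F", "F", "M", "M"],
    "ethnicity": ["A", "B", "A", "B"],
    "selected":  [0.9, 0.1, 0.1, 0.9],   # selection rate of each subgroup
    "size":      [100, 100, 100, 100],
})
data["hired"] = data["selected"] * data["size"]

def selection_rate(group_cols):
    grouped = data.groupby(group_cols)[["hired", "size"]].sum()
    return grouped["hired"] / grouped["size"]

print("By gender:\n", selection_rate(["gender"]))                   # 0.5 vs 0.5
print("By ethnicity:\n", selection_rate(["ethnicity"]))             # 0.5 vs 0.5
print("Intersection:\n", selection_rate(["gender", "ethnicity"]))   # 0.9 vs 0.1
```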
Recent advancements in GANs and diffusion models have enabled the creation of high-resolution, hyper-realistic images. However, these models may misrepresent certain social groups and present bias. Understanding bias in these models remains an important research question, especially for tasks that support critical decision-making and could affect m...
The use of automated decision tools in recruitment has received an increasing amount of attention. In November 2021, the New York City Council passed legislation (Local Law 144) that mandates bias audits of Automated Employment Decision Tools. From 15th April 2023, companies that use automated tools for hiring or promoting employees are required...
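Below is a hedged sketch of the impact-ratio style disparity check that such bias audits rely on; the numbers are illustrative, and the adopted Local Law 144 rules should be consulted for the exact required metrics and category definitions.

```python
# Selection rates per demographic category, compared against the most-selected
# category to produce an impact ratio (1.0 for the most-selected group).
selected = {"group_a": 45, "group_b": 28, "group_c": 12}      # hypothetical counts
applicants = {"group_a": 100, "group_b": 80, "group_c": 40}

selection_rates = {g: selected[g] / applicants[g] for g in selected}
best_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / best_rate
    print(f"{group}: selection_rate={rate:.2f} impact_ratio={impact_ratio:.2f}")
```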
In releasing the Algorithmic Transparency Standard, the UK government has reiterated its commitment to greater algorithmic transparency in the public sector. The Standard signals that the UK government is both pushing forward with the AI standards agenda and ensuring that those standards benefit from empirical practitioner-led experience, enabling...
Manufacturers migrate their processes to Industry 4.0, which includes new technologies for improving productivity and efficiency of operations. One of the issues is capturing, recreating, and documenting the tacit knowledge of the aging workers. However, there are no systematic procedures to incorporate this knowledge into Enterprise Resource Plann...
In recent years, the field of ethical artificial intelligence (AI), or AI ethics, has gained traction and aims to develop guidelines and best practices for the responsible and ethical use of AI across sectors. As part of this, nations have proposed AI strategies, with the UK releasing both national AI and data strategies, as well as a transparency...
With its proposed EU AI Act, the EU is aspiring to lead the world in admirable AI regulation (April 2021). In this brief, we summarise and comment on the ‘Presidency compromise text’, which is a revised version of the proposed act reflecting the consultation and deliberation by member states and actors (November 2021). The compromise text echoes the...
Algorithms are becoming ubiquitous. However, companies are increasingly alarmed about their algorithms causing major financial or reputational damage. A new industry is envisaged: auditing and assurance of algorithms with the remit to validate artificial intelligence, machine learning, and associated algorithms.
Manufacturers migrate their processes to Industry 4.0, which includes new technologies for improving productivity and efficiency of operations. One of the issues is capturing, recreating, and documenting the tacit knowledge of the aging workers. However, there are no systematic procedures to incorporate this knowledge into Enterprise Resource Plannin...
Systematic financial trading strategies account for over 80% of trade volume in equities and a large chunk of the foreign exchange market. In spite of the availability of data from multiple markets, current approaches in trading rely mainly on learning trading strategies per individual market. In this paper, we take a step towards developing fully...
Developers proposing new machine learning for health (ML4H) tools often pledge to match or even surpass the performance of existing tools, yet the reality is usually more complicated. Reliable deployment of ML4H to the real world is challenging as examples from diabetic retinopathy or Covid-19 screening show. We envision an integrated framework of...
This article is based on an archived draft version of the Information Commissioner's Office Guidance on the AI auditing Framework released for consultation in February 2020.
This paper reviews Artificial Intelligence (AI), Machine Learning (ML) and associated algorithms in future Capital Markets. New AI algorithms are constantly emerging, with each 'strain' mimicking a new form of human learning, reasoning, knowledge, and decision-making. The current main disrupting forms of learning include Deep Learning, Adversarial L...
This article describes a new genetic-programming-based optimization method using a multi-gene approach along with a niching strategy and periodic domain constraints. The method is referred to as Niching MG-PMA, where MG refers to multi-gene and PMA to parameter mapping approach. Although it was designed to be a multimodal optimization method, recen...
Systematic trading strategies are algorithmic procedures that allocate assets aiming to optimize a certain performance criterion. To obtain an edge in a highly competitive environment, an analyst needs to appropriately fine-tune their strategy, or discover how to combine weak signals in novel alpha creating manners. Both aspects, namely fine-tuning...
This work presents a new neuro-evolutionary model, called NEVE (Neuroevolutionary Ensemble), based on an ensemble of Multi-Layer Perceptron (MLP) neural networks for learning in nonstationary environments. NEVE makes use of quantum-inspired evolutionary models to automatically configure the ensemble members and combine their output. The quantum-ins...
In this work we introduce QuantNet: an architecture that is capable of transferring knowledge over systematic trading strategies in several financial markets. By having a system that is able to leverage and share knowledge across them, our aim is two-fold: to circumvent the so-called Backtest Overfitting problem; and to generate higher risk-adjuste...
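The sketch below illustrates the general idea of sharing parameters across markets, assuming PyTorch; it is not the paper's architecture, just market-specific encoders and decoders around a shared layer that maps recent returns to trading signals.

```python
import torch
import torch.nn as nn

# Hedged sketch of cross-market knowledge sharing (illustrative layer sizes):
# each market gets its own encoder/decoder, while a shared middle layer lets
# knowledge transfer across markets. Inputs are windows of recent returns;
# outputs are per-asset trading signals.
class CrossMarketStrategy(nn.Module):
    def __init__(self, markets, n_assets, lookback=20, hidden=32):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {m: nn.Linear(n_assets * lookback, hidden) for m in markets})
        self.shared = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh())
        self.decoders = nn.ModuleDict(
            {m: nn.Linear(hidden, n_assets) for m in markets})

    def forward(self, market, returns_window):
        h = torch.tanh(self.encoders[market](returns_window.flatten(1)))
        return torch.tanh(self.decoders[market](self.shared(h)))  # signals in [-1, 1]

model = CrossMarketStrategy(markets=["US", "UK"], n_assets=5)
signals = model("US", torch.randn(8, 5, 20))   # batch of 8 return windows
print(signals.shape)                            # torch.Size([8, 5])
```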
Dynamic trading strategies, in the spirit of trend-following or mean-reversion, represent an only partly understood but lucrative and pervasive area of modern finance. Assuming Gaussian returns and Gaussian dynamic weights or signals, (e.g., linear filters of past returns, such as simple moving averages, exponential weighted moving averages, foreca...
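A small simulation sketch of the setting described above, assuming i.i.d. Gaussian daily returns and a simple moving-average signal (a linear filter of past returns); it estimates the annualised Sharpe ratio empirically rather than reproducing the paper's closed-form results.

```python
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(loc=0.0002, scale=0.01, size=10_000)   # i.i.d. Gaussian daily returns

# Causal simple moving average of past returns acts as the trading signal.
window = 20
signal = np.convolve(returns, np.ones(window) / window, mode="full")[:len(returns)]
pnl = signal[:-1] * returns[1:]          # trade tomorrow on today's filtered signal

# With no autocorrelation in returns, the expected Sharpe of this filter is close to zero.
sharpe = np.sqrt(252) * pnl.mean() / pnl.std()
print(f"annualised Sharpe ≈ {sharpe:.2f}")
```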
State-of-the-art deep learning methods have shown a remarkable capacity to model complex data domains, but struggle with geospatial data. In this paper, we introduce SpaceGAN, a novel generative model for geospatial domains that learns neighbourhood structures through spatial conditioning. We propose to enhance spatial representation beyond mere sp...
Systematic trading strategies are rule-based procedures which choose portfolios and allocate assets. In order to attain certain desired return profiles, quantitative strategists must determine a large array of trading parameters. Backtesting, the attempt to identify the appropriate parameters using historical data available, has been highly criticize...
Dynamic trading strategies, in the spirit of trend-following or mean-reversion, represent an only partly understood but lucrative and pervasive area of modern finance. Assuming Gaussian returns and Gaussian dynamic weights or signals, (e.g., linear filters of past returns, such as simple moving averages, exponential weighted moving averages, forecasts...
Systematic trading strategies are rule-based procedures which choose portfolios and allocate assets. In order to attain certain desired return profiles, quantitative strategists must determine a large array of trading parameters. Backtesting, the attempt to identify the appropriate parameters using historical data available, has been highly critici...
Derivative traders are usually required to scan through hundreds, even thousands of possible trades on a daily basis. To date, not a single solution is available to aid them in this job. Hence, this work aims to develop a trading recommendation system and to apply it to the so‐called Mid‐Curve Calendar Spread (MCCS) trade. To suggest th...
The legal status of AI and algorithms continues to be debated. Resume-sifting algorithms exhibit unethical, discriminatory, and illegal behavior; crime-sentencing algorithms are unable to justify their decisions; and autonomous vehicles' predictive analytics software will make life and death decisions.
Systematic trading strategies are algorithmic procedures that allocate assets aiming to optimize a certain performance criterion. To obtain an edge in a highly competitive environment, the analyst needs to properly fine-tune their strategy, or discover how to combine weak signals in novel alpha creating manners. Both aspects, namely fine-tuning and com...
Derivative traders are usually required to scan through hundreds, even thousands of possible trades on a daily basis; a concrete case is the so-called Mid-Curve Calendar Spread (MCCS). The current procedure is full of pitfalls, and a more systematic approach in which more of the information at hand is cross-referenced and aggregated to find good trading picks...
Studies in Evolutionary Fuzzy Systems (EFSs) began in the 90s and have experienced a fast development since then, with applications to areas such as pattern recognition, curve‐fitting and regression, forecasting and control. An EFS results from the combination of a Fuzzy Inference System (FIS) with an Evolutionary Algorithm (EA). This relationship...
One of the main advantages of fuzzy classifier models is their linguistic interpretability, revealing the relation between input variables and the output class. However, these systems suffer from the curse of dimensionality when dealing with high dimensional problems (large number of attributes and instances). This paper presents a new fuzzy classi...
Almost all drift detection mechanisms designed for classification problems work reactively: after receiving the complete data set (input patterns and class labels) they apply a sequence of procedures to identify some change in the class-conditional distribution – a concept drift. However, detecting changes only after they occur can be in some situat...
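A minimal sketch of the proactive alternative hinted at here: checking the unlabelled input stream for a mean shift before class labels arrive. The z-score threshold and window sizes are illustrative, and the paper's actual method is more elaborate.

```python
import numpy as np

# Compare the current batch of *unlabelled* input patterns against a reference
# window and flag drift when any feature mean shifts by several standard errors,
# i.e. before class labels are available.
def drift_flag(reference, current, z_threshold=3.0):
    diff = current.mean(axis=0) - reference.mean(axis=0)
    stderr = np.sqrt(reference.var(axis=0) / len(reference)
                     + current.var(axis=0) / len(current))
    return np.abs(diff / stderr).max() > z_threshold

rng = np.random.default_rng(2)
reference = rng.normal(0.0, 1.0, size=(500, 4))
no_drift  = rng.normal(0.0, 1.0, size=(200, 4))
drifted   = rng.normal(0.8, 1.0, size=(200, 4))   # mean shift in every feature

print(drift_flag(reference, no_drift))   # expected: False
print(drift_flag(reference, drifted))    # expected: True
```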
Solving a regression problem is equivalent to finding a model that relates the behavior of an output or response variable to a given set of input or explanatory variables. An example of such a problem would be that of a company that wishes to evaluate how the demand for its product varies in accordance with its own and its competitors’ prices. Another...
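A worked instance of that demand example, with synthetic data and ordinary least squares standing in for whichever regression model one ultimately chooses:

```python
import numpy as np

# Synthetic demand data: demand falls with the company's own price and rises
# with the competitor's price, plus noise.
rng = np.random.default_rng(3)
own_price = rng.uniform(8, 12, size=200)
competitor_price = rng.uniform(8, 12, size=200)
demand = 500 - 30 * own_price + 20 * competitor_price + rng.normal(0, 10, 200)

# Ordinary least squares recovers the approximate relationship.
X = np.column_stack([np.ones_like(own_price), own_price, competitor_price])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
print(f"intercept={coef[0]:.1f} own_price={coef[1]:.1f} competitor={coef[2]:.1f}")
```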
This work introduces AutoFIS-Class, a methodology for automatic synthesis of Fuzzy Inference Systems for classification problems. It is a data-driven approach, which can be described in five steps: (i) mapping of each pattern to a membership degree to fuzzy sets; (ii) generation of a set of fuzzy rule premises, inspired on a search tree, and applic...
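Step (i), the mapping of inputs to membership degrees of fuzzy sets, can be illustrated with triangular fuzzy sets over a normalised feature; the set shapes below are illustrative, not AutoFIS-Class defaults.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Three overlapping fuzzy sets over a feature normalised to [0, 1].
x = np.array([0.1, 0.5, 0.85])
memberships = {
    "low":    triangular(x, -0.5, 0.0, 0.5),
    "medium": triangular(x,  0.0, 0.5, 1.0),
    "high":   triangular(x,  0.5, 1.0, 1.5),
}
for name, degrees in memberships.items():
    print(name, np.round(degrees, 2))
```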
Genetic Fuzzy Systems (GFSs) are models capable of integrating accuracy and high comprehensibility in their results. In the case of GFSs for classification, more emphasis has been given to improving the "Genetic" component instead of its "Fuzzy" counterpart. This paper focuses on the Fuzzy Inference component to obtain a more accurate and interpretab...
This work presents a novel Genetic Fuzzy System (GFS), called Genetic Programming Fuzzy Inference System for Regression problems (GPFIS-Regress). It makes use of Multi-Gene Genetic Programming to build the premises of fuzzy rules, including t-norms, negation and linguistic hedge operators. GPFIS-Regress also defines a consequent term that is more co...
We propose the Quantum-Inspired Multi-Gene Linear Genetic Programming (QIMuLGP), which is a generalization of the Quantum-Inspired Linear Genetic Programming (QILGP) model for symbolic regression. QIMuLGP allows us to explore a different genotypic representation (i.e. linear), and to use more than one genotype per individual, combining their outputs u...
The Brazilian Sac Brood is a disease that affects apiaries of Africanized bee hives in Brazil, thereby making them susceptible to high losses. This study investigated the pathogenicity of Africanized bee hives by the entomopathogenic fungi in a Brazilian Sac Brood endemic region. The degree of fungal contamination, presence of mycotoxins in beehive...
Combining forecasts is a common practice in time series analysis. This technique involves weighing each estimate of different models in order to minimize the error between the resulting output and the target. This work presents a novel methodology, aiming to combine forecasts using genetic programming, a metaheuristic that searches for a nonlinear...
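As a baseline for the combination objective described above, the sketch below weighs two synthetic forecasts by least squares; the paper itself evolves nonlinear combinations with genetic programming rather than this linear weighting.

```python
import numpy as np

# Two imperfect forecasts of the same target: one noisy, one biased but precise.
rng = np.random.default_rng(4)
target = np.sin(np.linspace(0, 10, 300))
forecast_a = target + rng.normal(0, 0.3, 300)
forecast_b = 0.8 * target + rng.normal(0, 0.1, 300)

# Least-squares weights minimise the squared error of the combined forecast.
F = np.column_stack([forecast_a, forecast_b])
weights, *_ = np.linalg.lstsq(F, target, rcond=None)
combined = F @ weights

def rmse(pred):
    return np.sqrt(np.mean((pred - target) ** 2))

print("RMSE A:", round(rmse(forecast_a), 3),
      "RMSE B:", round(rmse(forecast_b), 3),
      "RMSE combined:", round(rmse(combined), 3))
```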
One of the most important issues in the oil & gas industry is lithological identification. Lithology is the macroscopic description of the physical characteristics of a rock. This work proposes a new methodology for lithological discrimination using the GPF-CLASS model (Genetic Programming for Fuzzy Classification), a Genetic Fuzzy System based on Mul...
This paper presents a new model for regression problems based on Multi-Gene and Quantum Inspired Linear Genetic Programming. We discuss theoretical aspects, operators, representation, and experimental results.
This paper introduces two new hybrid models for clustering problems in which the input features and parameters of a spiking neural network (SNN) are optimized using evolutionary algorithms. We used two novel evolutionary approaches, the quantum-inspired evolutionary algorithm (QIEA) and the optimization by genetic programming (OGP) methods, to deve...
This work presents a Genetic Fuzzy Controller (GFC), called Genetic Programming Fuzzy Inference System for Control tasks (GPFIS-Control). It is based on Multi-Gene Genetic Programming, a variant of canonical Genetic Programming. The main characteristics and concepts of this approach are described, as well as its distinctions from other GFCs. Two be...