Institute for Systems and Computer Engineering, Technology and Science (INESC TEC)
Recent publications
This paper evaluates the impact of line repair and maintenance on generation-transmission expansion planning (GTEP), considering both transmission and generation reliability. The objective is to strike a balance between transmission and generation expansion and operational costs, reliability, and line repair and maintenance costs. For this purpose, transmission system reliability is represented by the loss of load (LOL) and load shedding owing to line outages, while generation reliability is formulated through LOL and load-shedding indices caused by transmission congestion and outages of generating units. The implementation results of the model on the IEEE RTS show that including line repair and maintenance, as well as line loading, in GTEP leads to optimal generation and transmission plans and significant savings in expansion and operational costs.
Nowadays, decentralized microgrids (DC-MGs) have become a popular topic due to their effectiveness and lower complexity. In fact, DC-MGs are reluctant to share their internal information with the distribution system operator (DSO) in order to protect their privacy and compete in the electricity market. This lack of information sharing among MGs under normal operating conditions leads to the formation of a competitive market. Under emergency operating conditions, however, it creates numerous challenges in managing network outages. Therefore, this paper presents a hierarchical model consisting of three stages to enhance the resilience of DC-MGs. In all stages, network outage management is performed considering the data reported by the MGs. In the first stage, proactive actions are taken with the aim of increasing network readiness against an upcoming windstorm. In the second stage, generation scheduling, allocation of mobile units, and distribution feeder reconfiguration (DFR) are carried out by the DSO to minimize operating costs. In the final stage, the repair crew is allocated to minimize the energy not served (ENS). Uncertainties in load demand, wind speed, and solar radiation are considered, and the effectiveness of the proposed model is investigated by applying it to the 118-bus distribution network. Finally, the simulation results indicate that DFR and proactive actions decrease the ENS by 19,124 kWh and 4101 kWh, respectively. Further, information sharing among MGs leads to a 48.16% increase in the level of supply to critical loads and, consequently, a 3.47% increase in the resilience index.
In the last few decades, Portugal has witnessed an extraordinary quantitative and qualitative transformation in housing provision. The pace of housing construction was so intense that the contemporary real estate market is characterized by excessive supply relative to the resident population. In this study, we discuss the impact of the financial process on the housing sector in comparison with tenancy. We consider the transaction prices of housing, assigned either through acquisition or through tenancy. The recent shock resulting from the pandemic did not slow down house prices, but it caused a slight drop in rents. The model used analyzes fluctuations in prices and rents in the face of external shocks. In the residential market, estimation is complex due to the many heterogeneous attributes of residential assets. Non-fluctuating variables, such as size, location, and external demand for homes, explain a large part of the variation in the price levels included in the model.
Nowadays, social networks are one of the main channels for sharing real-time information. These networks host several groups focused on sharing information about road incidents and other traffic events. The work presented here aims at creating an AI model capable of identifying publications related to traffic events on a specific road, based on posts shared on social networks. A predictive model was obtained by training a deep learning model for the detection of publications related to road incidents, with an average accuracy of 95%. Deployed as a service, the model is already fully functional and operates 24/7 while awaiting final integration with the road management system of a company, where it will support the Control Center team in decision making.
The European rabbit (Oryctolagus cuniculus) was the first animal model used to understand human diseases like rabies and syphilis. Nowadays, the rabbit is still used to study several human infectious diseases such as syphilis, HIV, and papillomavirus. However, for several, mainly practical, reasons, it has been replaced as an animal model by the mouse (Mus musculus). The rabbit and mouse share a recent common ancestor and are classified in the superorder Glires, which arose approximately 82 million years ago (mya). These species diverged from the ancestor of Primates around 92 mya and, as such, one expects the rabbit–human and mouse–human genetic distances to be very similar. To evaluate this hypothesis, we developed a set of tools for automatic data extraction, sequence alignment, and similarity analysis, and a web application for visualization of the resulting data. We aligned and calculated the genetic distances for 2793 innate immune system genes from human, rabbit, and mouse using sequences available in the NCBI database. The results show that the rabbit–human genetic distance is lower than the mouse–human genetic distance for 88% of these genes. Furthermore, when we consider only genes with a difference in genetic distance higher than 0.05, this figure increases to 93%. These results can be explained by the increased mutation rate in the mouse lineage suggested by some authors and clearly show that, at least in terms of genetic distance to human genes, the European rabbit is a better model than the mouse for studying innate immune system genes.
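The gene-by-gene comparison described above can be sketched as follows. The p-distance (proportion of differing aligned sites) is used here as an illustrative stand-in for the paper's distance measure, and the sequences and gene name are made up for the example:

```python
def p_distance(a: str, b: str) -> float:
    """Proportion of differing sites between two aligned sequences
    (gap positions excluded)."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    diffs = sum(1 for x, y in pairs if x != y)
    return diffs / len(pairs)

# Toy aligned coding fragments (hypothetical, for illustration only).
human  = "ATGGCTATCG"
rabbit = "ATGGCTATCA"   # 1 difference from human
mouse  = "ATGACTTTCA"   # 3 differences from human

genes = {"TLR_demo": (human, rabbit, mouse)}
closer = sum(
    1 for h, r, m in genes.values()
    if p_distance(h, r) < p_distance(h, m)
)
fraction = closer / len(genes)  # fraction of genes where rabbit is closer
```

Running the same tally over all 2793 genes yields the 88% figure reported in the abstract.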
This paper describes two different approaches to sentiment analysis. The first is a form of symbolic approach that exploits a sentiment lexicon together with a set of shifter patterns and rules. The sentiment lexicon includes single words (unigrams) and is developed automatically by exploiting labeled examples. The shifter patterns include intensification, attenuation/downtoning and inversion/reversal and are developed manually. The second approach exploits a deep neural network, which uses a pre-trained language model. Both approaches were applied to texts on economics and finance domains from newspapers in European Portuguese. We show that the symbolic approach achieves virtually the same performance as the deep neural network. In addition, the symbolic approach provides understandable explanations, and the acquired knowledge can be communicated to others. We release the shifter patterns to motivate future research in this direction.
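A minimal sketch of the symbolic approach's core loop, assuming a unigram lexicon and window-based shifter matching; all lexicon entries, weights, and window sizes below are illustrative, not the released resources:

```python
# Illustrative resources (not the paper's actual lexicon or patterns).
LEXICON = {"growth": 1.0, "profit": 1.0, "loss": -1.0, "crisis": -1.5}
INTENSIFIERS = {"very": 1.5, "strong": 1.5}
ATTENUATORS = {"slight": 0.5, "mild": 0.5}
INVERTERS = {"no", "not", "without"}

def score(text: str) -> float:
    """Sum lexicon polarities, applying intensification, attenuation,
    and inversion shifters found near each sentiment word."""
    tokens = text.lower().split()
    total = 0.0
    for i, tok in enumerate(tokens):
        if tok not in LEXICON:
            continue
        value = LEXICON[tok]
        prev = tokens[i - 1] if i > 0 else ""
        if prev in INTENSIFIERS:
            value *= INTENSIFIERS[prev]
        elif prev in ATTENUATORS:
            value *= ATTENUATORS[prev]
        # inversion: any inverter within a two-token window before the word
        if any(t in INVERTERS for t in tokens[max(0, i - 2):i]):
            value = -value
        total += value
    return total
```

For example, `score("strong growth")` yields 1.5, while `score("no profit")` yields -1.0; an explanation for any prediction can be read directly off the matched lexicon entries and shifters.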
Energy hub systems improve energy efficiency and reduce emissions through the coordinated operation of different infrastructures. Given that these systems meet customers' needs for different forms of energy, their optimal design and operation is one of the main challenges in the field of energy supply. Hence, this paper presents a two-stage stochastic model for the integrated design and operation of an energy hub in the presence of electrical and thermal energy storage systems. As the electrical, heating, and cooling loads, as well as the wind turbine's (WT's) output power, are subject to severe uncertainties, their impacts are addressed in the proposed model. In addition, demand response (DR) and integrated demand response (IDR) programs are incorporated in the model. Furthermore, a real-coded genetic algorithm (RCGA) and a binary-coded genetic algorithm (BCGA) are deployed to tackle the problem through continuous and discrete methods, respectively. The simulation results show that considering the uncertainties leads to the installation of larger asset capacities and thus an 8.07% increase in investment cost. The results also indicate that implementing the shiftable IDR program modifies the demand curve of electrical, cooling, and heating loads, thereby reducing operating cost by 15.1%. Finally, the results substantiate that storage systems discharging during peak hours not only increase system flexibility but also reduce operating cost.
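A toy real-coded genetic algorithm of the kind deployed for the continuous variables might look like the sketch below; the operators (blend crossover, Gaussian mutation, truncation selection) and the sphere objective are assumptions for illustration, not the paper's exact configuration:

```python
import random

def rcga(fitness, bounds, pop_size=30, generations=200,
         crossover_rate=0.9, mutation_rate=0.1, seed=0):
    """Toy real-coded GA (minimization): truncation selection,
    blend (arithmetic) crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x, lo, hi):
        return min(max(x, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        elite = scored[: pop_size // 2]          # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = rng.sample(elite, 2)
            child = []
            for j in range(dim):
                lo, hi = bounds[j]
                if rng.random() < crossover_rate:
                    a = rng.random()              # blend crossover
                    g = a * p1[j] + (1 - a) * p2[j]
                else:
                    g = p1[j]
                if rng.random() < mutation_rate:  # Gaussian mutation
                    g += rng.gauss(0, 0.1 * (hi - lo))
                child.append(clip(g, lo, hi))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Minimize a 2-variable sphere function as a stand-in objective.
best = rcga(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

A BCGA differs only in representation: each gene is a bit string decoded to a discrete decision (e.g. install/skip an asset), with bit-flip mutation replacing the Gaussian step.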
Demand Response (DR) programs are essential for easing end-user demand on the power system, adding benefits across the power sector by reducing peak demand and power flow congestion. With the modernization of power grids, DR programs ensure the integration of Distributed Energy Resources (DER) in a controlled manner through Advanced Metering Infrastructure (AMI), which enables communication between grid operators, prosumers, and consumers. However, the diversity of DR programs, the spread of DERs, the advent of prosumers, and the several types of power trading among system entities make network and market operation more complex. In this context, optimization methods have been widely applied in distribution grids and market operation, assisting in decision-making for DER management, prosumers' and consumers' welfare, and DR program applications. This work addresses the main market models comprising DR programs to assess opportunities in prosumers' decision-making with the help of optimization tools. Thus, different optimization techniques addressed in the literature are introduced, aiming at the application of market models that take the prosumer framework into account. As a whole, this review paper aims to present the main perspectives of energy market models with demand-side management actions considering the prosumer design.
Businesses that are growing by supplying more services or reaching more customers may need to create or relocate a facility to expand their geographical coverage and improve their services. This decision is complex, and it is crucial to analyse client locations and journeys and to be aware of the factors that may affect the geographical decision and its impact on the business strategy. Therefore, the decision-maker needs to ensure that the chosen location is the most profitable site according to the business scope and future perspectives. In this paper, we propose a decision support system to help businesses with this complex decision, capable of providing facility location suggestions based on the analysis of their journeys and the factors that the decision-makers consider most relevant to the company. The system helps business managers make better decisions by returning facility locations with the potential to maximise the company's profit, by reducing costs, and to maximise the number of covered customers, by expanding territorial coverage. To verify and validate the decision support system, a system evaluation was carried out: a survey was answered by decision-makers in order to evaluate the efficiency, understandability, accuracy, and effectiveness of the suggestions.
The controlled growth of organic crystalline materials in predefined locations still poses a challenge for functional device applications such as phototransistors, photoconductors, or photovoltaic solar cells. This work demonstrates the use of optical lithography and a fluoroalkylsilane to selectively modify the surface energy and create a wettability micro-patterned structure. These patterns are then combined with non-contact printing of the organic solution, providing custom-shaped films. To deliver printed films with improved morphological quality, key process parameters for high-performance organic materials are optimized. Particular attention is given to the adjustment of the concentration and solvent mixture to tune the jetting properties and, consequently, slow down the evaporation rate. Continuous films are obtained for an optimized number of droplets and spacing between them. Micro-Raman spectroscopy imaging confirms the crystalline nature of the printed films and the absence of impurities. To validate the method, rubrene and triisopropylsilylethynyl (TIPS)-pentacene are tested using two-terminal optoelectronic devices. A TIPS-pentacene rectangular printed micrometric photosensor presents linear behavior and no hysteresis, reaching 0.33 nA under 18.1 mW cm−2. The structural and optoelectronic characterization is in line with other micro-patterned examples, opening doors for new industrial applications. Micropatterning of organic photoactive materials is achieved using printing and local chemical modification of the surface by 1H,1H,2H,2H-perfluorodecyltriethoxysilane. The proposed wettability-assisted method is extendable to, but not limited to, multiple photonic devices such as phototransistors or resistive memories.
Doing internships is increasingly common in Portugal, often being regarded as a prerequisite for entering the labour market. This trend reinforces the need to understand interns' perceptions of the ideal characteristics for this type of experience, in order to stop (or at least slow down) the brain drain that is currently felt. This exploratory research reflects on how organisations can contribute to successful internships. Themes such as the leadership and organisational culture of the host entity are addressed, areas which have been debated less often in the current literature. Our quantitative research was based on a survey, which obtained 143 responses from individuals with internship experience. Data were analysed using descriptive, reliability, inferential, and multiple linear regression analyses. The results from this study showed a tendency toward servant leadership (as opposed to paternalistic or autocratic leadership) as the most appropriate leadership style for an internship. It was also possible to highlight some important points in an internship experience, including remuneration (desired by the interns) and the tasks performed (interns want to be given responsibilities and meaningful work during their internship). Some of the results obtained are in line with the state of the art; however, others diverge. Keywords: survey; interns' perceptions; servant leadership; horizontal culture; brain drain
Funding Acknowledgements: Type of funding sources: Public grant(s) – National budget only. Main funding source(s): Health Research Council of New Zealand (HRC); National Heart Foundation of New Zealand (NHF). Segmentation of the left ventricular myocardium and cavity in 3D echocardiography (3DE) is a critical task for the quantification of systolic function in heart disease. Continuing advances in 3DE have considerably improved image quality, prompting increased clinical uptake in recent years, particularly for volumetric measurements. Nevertheless, analysis of 3DE remains a difficult problem due to inherently complex noise characteristics, anisotropic image resolution, and regions of acoustic dropout. One of the primary challenges associated with the development of automated methods for 3DE analysis is the requirement of a sufficiently large training dataset. Historically, ground truth annotations have been difficult to obtain due to the high degree of inter- and intra-observer variability associated with manual 3DE segmentation, thus limiting the scope of AI-based solutions. To address the lack of expert consensus, we instead used labels derived from cardiac magnetic resonance (CMR) images of the same subjects. By spatiotemporally registering CMR labels to corresponding 3DE image data on a per-subject basis (Figure 1), we collated 520 annotated 3DE images from a mixed cohort of 130 human subjects (2 independent single-beat acquisitions per subject at end-diastole and end-systole) consisting of healthy controls and patients with acquired cardiac disease. Comprising images acquired across a range of patient demographics, this curated dataset exhibits variation in image quality, 3DE acquisition parameters, and left ventricular shape and pose within the 3D image volume. To demonstrate the utility of such a dataset, nnU-Net, a self-configuring deep learning method for semantic segmentation, was employed.
An 80/20 split of the dataset was used for training and testing, respectively, and data augmentations were applied in the form of scaling, rotation, and reflection. The trained network was capable of reproducing measurements derived from CMR for end-diastolic volume, end-systolic volume, ejection fraction, and mass, while outperforming an expert human observer in terms of accuracy as well as scan-rescan reproducibility (Table I). As part of ongoing efforts to improve the accuracy and efficiency of 3DE analysis, we have leveraged the high resolution and signal-to-noise ratio of CMR (relative to 3DE) to create a novel, publicly available benchmark dataset for developing and evaluating 3DE labelling methods. This approach not only significantly reduces the effects of observer-specific bias and variability in training data arising from conventional manual 3DE analysis methods, but also improves the agreement between cardiac indices derived from 3DE and CMR.
Figure 1. Data annotation workflow
Table I. Results
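The split-and-augment step described above can be sketched as follows; the array shapes, dataset size, and specific transforms are stand-ins (real 3DE volumes are far larger, and the scaling augmentation is omitted here):

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for labelled 3DE volumes: 10 tiny random arrays.
volumes = [rng.random((8, 8, 8)) for _ in range(10)]

# 80/20 train/test split over subjects.
idx = rng.permutation(len(volumes))
split = int(0.8 * len(volumes))
train = [volumes[i] for i in idx[:split]]
test = [volumes[i] for i in idx[split:]]

def augment(vol: np.ndarray) -> list[np.ndarray]:
    """Reflection and 90-degree rotation augmentations, a subset of the
    scaling/rotation/reflection transforms mentioned in the text."""
    return [
        vol,                               # original
        np.flip(vol, axis=0),              # reflection
        np.rot90(vol, k=1, axes=(1, 2)),   # rotation in one image plane
    ]

augmented_train = [a for v in train for a in augment(v)]
```

In practice rotations by arbitrary angles (with interpolation) would be used rather than 90-degree steps, which are shown here only because they preserve the voxel grid exactly.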
Modern power grids have high levels of distributed energy resources, automation, and inherent flexibility. These characteristics have proven favorable from an environmental, social, and economic perspective. Despite the increased versatility, modern grids are becoming more vulnerable to high-impact low-probability (HILP) threats, particularly in distribution networks. On one hand, this is due to the increasing frequency and severity of weather events and natural disasters. On the other hand, it is aggravated by the increased complexity of smart grids. Resilience is broadly defined as the capability of a system to mitigate the effects of, and recover from, HILP events; it is often confused with reliability, which concerns low-impact high-probability (LIHP) events. In this paper, a distribution system in Portugal is simulated to showcase how the utilization of flexibility and mobile energy resources (MERs) should be considered differently for HILP versus LIHP threats.
Optimizing research on the developmental origins of health and disease (DOHaD) involves implementing initiatives maximizing the use of the available cohort study data; achieving sufficient statistical power to support subgroup analysis; and using participant data presenting adequate follow-up and exposure heterogeneity. It also involves being able to undertake comparison, cross-validation, or replication across data sets. To answer these requirements, cohort study data need to be findable, accessible, interoperable, and reusable (FAIR), and more particularly, it often needs to be harmonized. Harmonization is required to achieve or improve comparability of the putatively equivalent measures collected by different studies on different individuals. Although the characteristics of the research initiatives generating and using harmonized data vary extensively, all are confronted by similar issues. Having to collate, understand, process, host, and co-analyze data from individual cohort studies is particularly challenging. The scientific success and timely management of projects can be facilitated by an ensemble of factors. The current document provides an overview of the ‘life course’ of research projects requiring harmonization of existing data and highlights key elements to be considered from the inception to the end of the project.
The expansion of autonomous driving operations requires the research and development of accurate and reliable self-localization approaches. These include visual odometry methods, whose accuracy is potentially superior to GNSS-based techniques while also working in signal-denied areas. This paper presents an in-depth review of state-of-the-art visual and point cloud odometry methods, along with a direct performance comparison of some of these techniques in the autonomous driving context. The evaluated methods include camera, LiDAR, and multi-modal approaches, featuring knowledge- and learning-based algorithms, which are compared from a common perspective. This set is subjected to a series of tests on public road driving datasets, from which the performance of these techniques is benchmarked and quantitatively measured. Furthermore, we closely discuss their effectiveness against challenging conditions such as pronounced lighting variations, open spaces, and the presence of dynamic objects in the scene. The research demonstrates increased accuracy in point cloud-based methods, which surpass visual techniques by roughly 33.14% in trajectory error. This survey also identifies a performance stagnation in state-of-the-art methodologies, especially in complex conditions. We also examine how multi-modal architectures can circumvent individual sensor limitations. This aligns with the benchmarking results, where the multi-modal algorithms exhibit greater consistency across all scenarios, outperforming the best LiDAR method (CT-ICP) by 5.68% in translational drift. Additionally, we address how current AI advances constitute a way to overcome the current development plateau.
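Trajectory error of the kind compared above is commonly computed as an RMSE over per-pose position errors (absolute trajectory error); a minimal sketch, omitting the usual rigid alignment step and using made-up toy trajectories:

```python
import math

def ate_rmse(estimated, ground_truth):
    """Absolute trajectory error: RMSE of per-pose position errors.
    Assumes both trajectories are already time-aligned and expressed
    in the same frame (the alignment step is omitted)."""
    assert len(estimated) == len(ground_truth)
    sq_errors = [
        sum((e - g) ** 2 for e, g in zip(est_pos, gt_pos))
        for est_pos, gt_pos in zip(estimated, ground_truth)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Toy 2D trajectories (illustrative, not from the benchmarked datasets):
# the estimate drifts laterally by 0.1 m per pose.
gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
est = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2), (3.0, 0.3)]
error = ate_rmse(est, gt)
```

Translational drift, as reported for the KITTI-style benchmarks, instead averages relative pose errors over fixed path-length segments, normalizing by distance travelled.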
Cutting and packing problems are challenging combinatorial optimization problems with many relevant industrial applications; they arise whenever a raw material has to be cut into smaller parts while minimizing waste, or products have to be packed while minimizing empty space. Thus, optimal solutions to these problems have a positive economic and environmental impact. In many practical applications, both the raw material and the cut parts have a rectangular shape, and cutting plans are generated for one raw-material rectangle (also known as a plate) at a time. This is known in the literature as the (two-dimensional) rectangular cutting problem. Many variants of this problem arise, driven by cutting technology constraints, raw-material characteristics, and different planning goals, the most relevant of which is the imposition of guillotine cuts. Removing the guillotine-cut restriction makes the problem harder to solve to optimality. Based on the Floating-Cuts paradigm, a general and flexible mixed-integer programming model for the general rectangular cutting problem is proposed. To the best of our knowledge, it is the first mixed-integer linear programming model in the literature for both non-guillotine and guillotine problems. The basic idea of this model is a tree search in which branching occurs through successive first-order non-guillotine-type cuts. The exact position of each cut is not fixed; instead, it remains floating until a concrete small rectangle (also known as an item) is assigned to a child node. The model includes no decision variables for the position coordinates of the items or the cuts. Under this framework, it was possible to address several variants of the problem. Extensive computational experiments were run to evaluate the model's performance across 16 problem variants and to compare it with the state-of-the-art formulations for each variant.
The results confirm the power of this flexible model: for some variants it outperforms the state-of-the-art approaches, and for the others it presents results fairly close to the best approaches. Even more importantly, this is a new way of looking at these problems, which may trigger even better approaches, with the consequent economic and environmental benefits.
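Independently of how a cutting plan is generated, its feasibility reduces to containment and pairwise non-overlap of the placed rectangles. A minimal validity check (not the Floating-Cuts model itself, and with an invented plate and plan for illustration) can be sketched as:

```python
def fits(plate, items):
    """Check that axis-aligned items (x, y, w, h) lie inside the
    plate (W, H) and do not overlap -- validates a cutting plan."""
    W, H = plate
    for x, y, w, h in items:
        if x < 0 or y < 0 or x + w > W or y + h > H:
            return False  # item sticks out of the plate
    for i, (x1, y1, w1, h1) in enumerate(items):
        for x2, y2, w2, h2 in items[i + 1:]:
            # two rectangles overlap unless separated along some axis
            if x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1:
                return False
    return True

# Three items placed on a 6x5 plate (a valid, non-overlapping plan).
plan = [(0, 0, 4, 3), (4, 0, 2, 3), (0, 3, 6, 2)]
ok = fits((6, 5), plan)
```

A guillotine-feasibility check would additionally require that the plan be reproducible by recursive edge-to-edge cuts, which is a strictly stronger condition than the non-overlap test above.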
Breast cancer is the most common malignancy in women worldwide and is responsible for more than half a million deaths each year. The appropriate therapy depends on the evaluation of the expression of various biomarkers, such as the human epidermal growth factor receptor 2 (HER2) transmembrane protein, through specialized techniques such as immunohistochemistry or in situ hybridization. In this work, we present the HER2 on hematoxylin and eosin (HEROHE) challenge, a parallel event of the 16th European Congress on Digital Pathology, which aimed to predict HER2 status in breast cancer based only on hematoxylin–eosin-stained tissue samples, thus avoiding specialized techniques. The challenge provided a large, annotated dataset of 509 whole-slide images, specifically collected for the challenge. Models for predicting HER2 status were presented by 21 teams worldwide. The best-performing models are presented, detailing the network architectures and key parameters. Methods are compared, and approaches, core methodologies, and software choices are contrasted. Different evaluation metrics are discussed, as well as the performance of the presented models on each of these metrics. Potential differences in ranking that would result from different choices of evaluation metric highlight the need for careful consideration at the time of selection, as the results show that some metrics may misrepresent the true potential of a model to solve the problem for which it was developed. The HEROHE dataset remains publicly available to promote advances in the field of computational pathology.
Telecommunication operators compete not only for new clients, but, above all, to maintain current ones. The modelling and prediction of the top‐up behaviour of prepaid mobile subscribers allows operators to anticipate customer intentions and implement measures to strengthen customer relationship. This research explores a data set from a Portuguese operator, comprising 30 months of top‐up events, to predict the top‐up monthly frequency and average value of prepaid subscribers using offline and online multi‐target regression algorithms. The offline techniques adopt a monthly sliding window, whereas the online techniques use an event sliding window. Experiments were performed to determine the most promising set of features, analyse the accuracy of the offline and online regressors and the impact of sliding window dimension. The results show that online regression outperforms the offline counterparts. The best accuracy was achieved with adaptive model rules and a sliding window of 500,000 events (approximately 5 months). Finally, the predicted top‐up monthly frequency and average value of each subscriber were converted to individual date and value intervals, which can be used by the operator to identify early signs of subscriber disengagement and immediately take pre‐emptive measures.
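As a simple illustration of the event-sliding-window idea, the following baseline predicts the two targets (monthly top-up frequency and average value) from a subscriber's most recent events; the window size, the assumed time span, and the class itself are arbitrary stand-ins for the learned multi-target regressors:

```python
from collections import deque

class TopUpPredictor:
    """Event-sliding-window baseline: predicts a subscriber's monthly
    top-up frequency and average value from the last-window events.
    An illustrative stand-in, not the paper's regression algorithms."""

    def __init__(self, window_events=6, months_spanned=3.0):
        self.window = deque(maxlen=window_events)  # drops oldest events
        self.months = months_spanned               # assumed window span

    def observe(self, value: float):
        self.window.append(value)

    def predict(self):
        if not self.window:
            return 0.0, 0.0
        freq = len(self.window) / self.months       # top-ups per month
        avg = sum(self.window) / len(self.window)   # average top-up value
        return freq, avg

p = TopUpPredictor(window_events=6, months_spanned=3.0)
for v in [10.0, 10.0, 20.0]:
    p.observe(v)
freq, avg = p.predict()
```

The point predictions would then be widened into the date and value intervals mentioned above, so that a subscriber falling outside their expected interval flags possible disengagement.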
360 members
Orlando Frazão
  • Optoelectronics Unit (UOSE)
Luis Coelho
  • CAP – Centre for Applied Photonics
Hugo Almeida Ferreira
  • CRAS - Centre for Robotics and Autonomous Systems
Pedro A. S. Jorge
  • Optoelectronics Unit
Manuel J B Marques
  • Optoelectronics and Electronics Systems Unit (UOSE)
Rua Dr. Roberto Frias, s/n, 4200-465, Porto, Portugal
Head of institution
José Manuel Mendonça
+351 222 094 000