Kepco
  • Naju, South Korea
Recent publications
Perovskite ink based on a green or non‐toxic solvent meets industrial requirements for efficient perovskite solar cells (PSCs). Perovskite inks must be developed with non‐toxic solvents, or with only a limited amount of toxic solvent, to fabricate efficient inkjet‐printed (IJP) perovskite photovoltaics. Herein, γ‐valerolactone is used as a solvent with a low environmental impact, and the resulting ink falls under category 3 toxicity even though a small quantity of toxic solvent is employed to dissolve the perovskite salts. The structural, optical, and electronic properties of IJP perovskite films are improved by adding 1,3‐dimethyl‐2‐imidazolidinone (DMI) to the green perovskite ink. The IJP perovskite films developed with the green solvent and 15 vol% DMI exhibited high thickness uniformity (≈97%) as well as thicker films and smoother surfaces than their counterparts. An additive‐modified IJP‐PSC device achieved a maximum power conversion efficiency (PCE) of 17.78%, higher than that of an unmodified device (14.75%). The superior performance of the IJP‐PSC device stems primarily from its exceptional film‐thickness homogeneity, larger grains, and appropriate film structure. These attributes significantly suppressed unwanted reactions of the perovskite with solvents, ensuring phase purity and enhancing overall efficiency. The green‐solvent ink‐engineering strategy for producing large‐scale perovskite films shows great promise for advancing perovskite solar module technology (module PCE of 13.14%).
As the power grid expands to accommodate increasing loads, looped transmission line (T/L) systems become increasingly complex. In addition, the growing number of grid-integrated renewable energy sources causes the infeed effect, which arises when fault current flows into an intermediate point during a fault event. The infeed effect is an important factor to consider when determining the Zone 2 and Zone 3 settings of the distance relay. Moreover, numerous other factors must be considered to ensure correct operation of the distance relay in the event of a fault in various power system configurations. As a result, the process of determining the Zone 2 and Zone 3 settings is time consuming, increasing the risk of human error. In this paper, we present universal Zone 2 and Zone 3 setting criteria that can be applied to various power system configurations in South Korea to prevent distance relays from misoperating or failing to operate. Finally, we introduce an automated distance relay settings program designed to minimize the time required and reduce human error when determining Zone 2 and Zone 3 settings in accordance with these criteria.
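The infeed effect mentioned above can be illustrated with the classical apparent-impedance relation for a relay looking through an intermediate bus. The sketch below is a minimal, hypothetical calculation; the line impedances, infeed ratios, and reach comparison are illustrative assumptions, not the setting criteria from the paper.

```python
# Minimal sketch of the classical infeed effect on distance-relay reach.
# All impedances and infeed ratios are illustrative assumptions.

Z_A = 4.0 + 2.0j   # ohms, protected line (relay location to remote bus)
Z_B = 6.0 + 3.0j   # ohms, adjacent line beyond the remote bus

def apparent_impedance(z_a, z_b, infeed_ratio):
    """Impedance seen by the relay for a fault at the end of the adjacent line.

    infeed_ratio = I_infeed / I_relay, the ratio of current injected at the
    intermediate bus to the current flowing through the relay.
    """
    return z_a + (1.0 + infeed_ratio) * z_b

for k in (0.0, 0.5, 1.0, 2.0):
    z_app = apparent_impedance(Z_A, Z_B, k)
    print(f"infeed ratio {k:3.1f}: |Z_apparent| = {abs(z_app):6.2f} ohm")

# A Zone 3 reach chosen while ignoring infeed (e.g. |Z_A + Z_B|) underreaches
# once infeed is present, which is why the infeed ratio must enter the
# Zone 2 / Zone 3 setting criteria.
print(f"Zone 3 reach ignoring infeed: {abs(Z_A + Z_B):.2f} ohm")
```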
The TBM disc cutter, the main cutting tool of a tunnel boring machine (TBM), is replaced when it becomes excessively worn during the boring process. Disc cutters are usually monitored manually by workers in the cutterhead chamber, who check their status and wear. Manual measurement occasionally yields inaccurate results. To overcome these limitations, real-time disc cutter monitoring techniques have been developed with different types of sensors. This study evaluates the distance measurement performance of an eddy-current sensor for measuring disc cutter wear through a series of laboratory experiments, focusing on how various measurement environments affect the sensor's accuracy. The study considered conditions that the eddy-current sensor may encounter in shield TBM chambers, including air, water, slurry, and excavated muck. Experiments were conducted using both a small-scale disc cutter and a 17-inch full-scale disc cutter. The results indicate that the eddy-current sensor can accurately measure the distance to the disc cutter within a specific range and that its performance remains unaffected by the different measurement environments.
This study presents a concise yet effective approach for vertically extrapolating Weibull parameters, using the power law approximation of the wind velocity profile, which describes how mean wind speed increases with height, as specified in IEC 61400-1. A robust method is needed to extrapolate wind data collected at lower mast heights to greater heights. Current extrapolation methods are typically constrained in their applicable height range, so a new model is required to accommodate the trend toward larger wind turbines. The proposed formulation extrapolates the Weibull shape and scale parameters from a reference height, assuming a power law velocity profile controlled by the wind shear exponent. The extrapolation function was derived by stretching the Weibull distribution to align with a power law relating average wind speed to height, followed by normalization of the result. In addition, a revised empirical formula for the vertical extrapolation of Weibull parameters to heights exceeding 100 m is proposed and validated for accuracy. The Weibull parameter extrapolation method introduced in this study is particularly useful for wind farm development and for estimating conditions relevant to the flight testing of unmanned aerial vehicles.
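As a point of reference, the sketch below shows the baseline power-law extrapolation that such methods build on: the Weibull scale parameter is scaled with height using the wind-shear exponent while the shape parameter is held fixed, and the mean speed follows from c·Γ(1 + 1/k). This is not the revised empirical formula proposed in the paper; the exponent and reference parameters are illustrative assumptions.

```python
# Baseline vertical extrapolation of Weibull parameters using a power-law
# wind profile. This is a simple reference model, not the revised empirical
# formula proposed in the paper; parameter values are illustrative.
from math import gamma

alpha = 0.14                             # wind-shear exponent (assumed)
z_ref, c_ref, k_ref = 80.0, 8.5, 2.1     # reference height [m], scale [m/s], shape

def extrapolate_weibull(z, z_ref, c_ref, k_ref, alpha):
    """Scale the Weibull scale parameter with the power law; keep the shape
    parameter unchanged (simplest consistent choice)."""
    c_z = c_ref * (z / z_ref) ** alpha
    return c_z, k_ref

def weibull_mean(c, k):
    """Mean wind speed of a Weibull(c, k) distribution."""
    return c * gamma(1.0 + 1.0 / k)

for z in (80.0, 100.0, 140.0, 160.0):
    c_z, k_z = extrapolate_weibull(z, z_ref, c_ref, k_ref, alpha)
    print(f"z = {z:5.1f} m: c = {c_z:5.2f} m/s, k = {k_z:4.2f}, "
          f"mean speed = {weibull_mean(c_z, k_z):5.2f} m/s")
```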
Achieving zero carbon emissions by 2050 in South Korea necessitates significant restrictions on coal and natural gas power plants, along with increased reliance on renewable energy sources (RES). This transition drives the development of high voltage dc (HVdc) systems to efficiently transmit renewable energy. The intermittent nature of RES leads to real-time power imbalances, prompting research into HVdc systems for essential system services such as frequency control and power balancing. As RES expands, a multi-terminal dc network will be needed to enhance inter-grid power sharing, with centralized and decentralized control strategies managing power flow and grid stability.
Phasor Measurement Units (PMUs) are critical devices in modern power grids, providing precise voltage and current phasor measurements (synchrophasors) for real-time monitoring, fault detection, and stability assessment. While previous research suggested that arbitrary time manipulation through GPS spoofing could disrupt grid operations, our study reveals that successful attacks require specific conditions, contrary to earlier assumptions. Through careful analysis of the synchrophasor data specification (IEEE Standard C37.118.x), we demonstrate that arbitrary time manipulation does not directly lead to phase manipulation. Instead, arbitrary manipulations can cause GPS holdover (loss of lock), alert operators with erroneous timing, and ultimately invalidate the received synchrophasors. An experiment with a commercial PMU confirms our specification analysis. We identify the time spoofing conditions to avoid GPS holdover and discover that nanosecond-scale signal alignment (approximately 375 ns error) and gradual time manipulation (around 50 ns/s error) are required. Experiments on a commercial Wide Area Monitoring System (WAMS) testbed demonstrate that GPS spoofing meeting the identified criteria results in a 500-microsecond time error (10.8-degree phase error) after 12 hours without triggering alarms. Given that a 60-degree phase variation is considered a fault, triggering protection mechanisms, this GPS spoofing technique could potentially induce false faults within 70 hours. To counter this threat, we propose a practical method to distinguish GPS spoofing-induced false faults from actual faults caused by events like lightning strikes or ground shorts. Analysis of 10 real-world incidents from the past six months demonstrates that genuine faults consistently exhibit instantaneous phase variations within three electrical cycles, providing a basis for differentiation.
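The timing-to-phase numbers quoted above follow from simple arithmetic for a 60 Hz system, reproduced in the sketch below; the 60 Hz nominal frequency and the assumption of a linear drift are the only inputs beyond the figures stated in the abstract.

```python
# Worked arithmetic behind the numbers quoted above, assuming a 60 Hz system.
F_NOMINAL = 60.0              # Hz

def phase_error_deg(time_error_s, f=F_NOMINAL):
    """Phase error caused by a timing error: one period (1/f) equals 360 deg."""
    return time_error_s * f * 360.0

# 500 microseconds of accumulated time error after 12 hours
print(phase_error_deg(500e-6))          # -> 10.8 degrees

# Time error corresponding to a 60-degree phase variation (treated as a fault)
t_fault = 60.0 / (360.0 * F_NOMINAL)    # ~2.78 ms
print(t_fault)

# At the observed drift of 500 us per 12 h, hours needed to reach 60 degrees
drift_per_hour = 500e-6 / 12.0
print(t_fault / drift_per_hour)         # ~66.7 h, i.e. "within 70 hours"
```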
As perovskite solar cells (PSCs) approach the higher standards required for commercial applications, all‐vacuum‐processed PSCs should become key to future manufacturing of scalable PSCs, in contrast to the currently dominant solution‐processed research devices. In particular, vacuum deposition of high‐quality organic hole‐transport layers (HTLs) is crucial for the successful fabrication of all‐vacuum‐processed scalable PSCs. Here, the study develops a triarylamine‐based single oligomer (TAA‐tetramer), a miniaturized molecular form of the well‐known poly(triarylamine) (PTAA), as a vacuum‐processable HTL in inverted PSCs. The well‐defined structure and monodisperse nature of the TAA‐tetramer promote strong intermolecular π−π interactions and/or molecular ordering, resulting in simultaneously enhanced quasi‐Fermi level splitting and hole‐transport efficiency of the perovskite. The resulting all‐vacuum‐processed inverted PSCs exhibit a high power conversion efficiency (PCE) of 23.2%, the highest reported among all‐vacuum‐processed PSCs, with exceptional device stability. Furthermore, the all‐vacuum‐deposition process allows the fabrication of efficient PSCs and modules with reliable scalability and minimal efficiency loss during scale‐up. Notably, the proposed HTL enabled a high‐efficiency large‐area (25 cm²) single cell with a PCE of 12.3%, representing one of the largest active areas and the highest performance reported for devices of this size. A promising strategy for developing efficient, stable, and scalable all‐vacuum‐processed PSCs is presented.
This study presents a novel surface acoustic wave (SAW)-based solar-blind ultraviolet-C (UV-C) corona sensor, marking the first reported use of HfO₂ as a sensing material for UV-C corona sensing. A 222 MHz two-port SAW delay line was selected as the sensor platform, and its optimal parameters were determined through coupling-of-modes (COM) modeling. COMSOL simulations were conducted to investigate the effect of UV-C exposure on the HfO₂ thin film, highlighting its contribution to conductivity changes. A 30 nm-thick HfO₂ thin film was deposited by atomic layer deposition (ALD) within the cavity of the two-port SAW delay line, providing a sufficient volume and density of absorption sites for UV-C exposure. Comprehensive material characterization of the HfO₂ thin film was performed using X-ray diffraction (XRD), scanning electron microscopy (SEM), and energy-dispersive X-ray spectroscopy (EDS). The effect of annealing temperature was analyzed in detail, with results confirming that 500 °C is the optimal temperature for the best performance of a SAW-based UV-C corona sensor. The sensor characteristics were measured using custom-made interface electronics, allowing frequency shifts to be observed on a PC monitor with compensation for environmental factors such as humidity and temperature. The developed sensor demonstrated response and recovery times of 2.8 s and 4 s, respectively, with a measured sensitivity of 563 ppm/(mW·cm⁻²). Furthermore, the effect of HfO₂ film thickness on the sensor's response to UV-C exposure was examined in detail, showing that increased thickness leads to a larger frequency shift and thereby higher sensitivity. The feasibility of the sensor for real-world applications was validated through successful testing under simulated corona discharge conditions.
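Assuming the reported sensitivity is expressed as a fractional shift of the 222 MHz operating frequency per unit UV-C irradiance (an interpretation, not stated explicitly above), it can be converted to an absolute frequency shift as follows.

```python
# Rough conversion of the reported sensitivity into an absolute frequency
# shift, assuming the ppm figure refers to the 222 MHz operating frequency
# (an interpretation, not stated explicitly in the abstract).
f0_hz = 222e6                     # SAW delay-line operating frequency
sensitivity_ppm = 563.0           # ppm per (mW/cm^2)

shift_hz_per_mw_cm2 = sensitivity_ppm * 1e-6 * f0_hz
print(f"{shift_hz_per_mw_cm2 / 1e3:.0f} kHz shift per mW/cm^2")   # ~125 kHz
```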
Accurate timing services are important for effectively operating power grids. Traditionally, supervisory control and data acquisition (SCADA) systems make use of time stamps to quickly identify and respond to faults in the event of incidents. However, with the increasing penetration of renewable energy sources such as solar and wind, the grid has become more susceptible to even minor disturbances. In response, utilities have begun deploying wide-area monitoring systems (WAMS) to monitor real-time voltage and frequency oscillations, which mainly consist of phasor measurement units (PMUs). These systems require a time stamp at every measurement, with a high level of accuracy, which means that the timing services must guarantee 24/7 availability.
As power facilities age, accurately assessing the remaining lifespan or failure risk of distribution transformers has become increasingly important for a stable power supply. However, unlike typical transformer failure modes, the extent to which failures are also influenced by weather conditions such as temperature, humidity, and precipitation has not been quantified. We therefore propose a method for investigating the impact of weather conditions on distribution transformer failures to improve asset management. A survival analysis framework is employed to quantify the spatial influence of weather conditions on distribution transformer failures, and the resulting weather risk factors are integrated into lifespan indicators to improve maintenance strategies and reliability assessments. More than 11 years of distribution transformer failure cases in South Korea and the corresponding historical weather records are assembled to enable the survival analysis and derive meaningful real-world results. Specifically, the weather data are quantified and interpolated to construct spatial data. The most notable results quantify the significant impact of high summer temperature and humidity in terms of hazard ratios. These findings indicate that incorporating historical weather data into asset management indices can significantly enhance the accuracy of transformer lifespan predictions and inform maintenance planning. This study represents the first long-term survival analysis correlating historical weather conditions with transformer failures and provides a guideline for improving power-facility management in the context of environmental changes.
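A minimal sketch of the survival-analysis step is shown below, using the open-source lifelines implementation of the Cox proportional hazards model; the column names, covariates, and toy data are hypothetical placeholders rather than the study's actual dataset.

```python
# Minimal sketch of a survival analysis with weather covariates, using the
# open-source lifelines package. Columns and values are toy placeholders.
import pandas as pd
from lifelines import CoxPHFitter

# Expected layout: one row per transformer, with service duration in years,
# a failure indicator, and weather exposure summaries for its location.
df = pd.DataFrame({
    "duration_years":   [3.2, 7.5, 11.0, 5.8, 9.1, 2.4, 6.3, 8.8],
    "failed":           [1,   0,   1,    0,   1,   1,   0,   1],
    "summer_temp_mean": [29.5, 27.1, 30.2, 26.8, 29.9, 26.5, 30.8, 28.0],
    "humidity_mean":    [78.0, 70.5, 82.1, 69.0, 80.3, 71.2, 83.0, 75.5],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration_years", event_col="failed")

# Hazard ratios quantify how much each weather covariate scales the failure
# hazard (values > 1 indicate an elevated risk).
print(cph.hazard_ratios_)
```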
This study investigates the mechanical wear characteristics of 170 kV gas-insulated switchgear (GIS) circuit breaker contacts through accelerated life testing. Samples identical to those used in 170 kV GIS were subjected to up to 20,000 operating cycles, the maximum guaranteed lifespan. Visual inspections, contact resistance measurements, mass change assessments, and operational characteristic evaluations, including trip and close coil excitation currents, were performed. Considerable surface damage and increased contact resistance were observed after approximately 12,000 operations, indicating that mechanical wear occurring before arc formation has a substantial impact on the electrical performance and lifespan of circuit breaker contacts. The results show that the lifetime and reliability of GIS circuit breakers can be improved by addressing purely mechanical wear, before any arcing occurs, during maintenance.
The pole drop test is the most widely accepted test for detecting short circuits in the field winding of salient pole synchronous machines (SPSMs). Although it is a simple and effective test, there are concerns about its reliability due to the inherent asymmetry between the poles, especially when the percentage of shorted turns is low. Considering that this is the primary test used in industry for detecting shorted field turns in SPSMs, improving its sensitivity is highly desirable for reliable fault detection. In this paper, it is shown that the sensitivity of the pole drop test can be significantly improved by increasing the excitation frequency of the test voltage above the 50/60 Hz currently used, an approach that has not been reported previously. An electrical equivalent circuit analysis and an experimental study on a 4-pole, 30 kVA synchronous motor and a 14-pole, 100 kVA hydro-generator prototype are given to verify these claims.
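The intuition behind raising the test frequency can be seen with a simplified equivalent circuit in which each pole is R + jωL and shorted turns are approximated by removing them from the circuit (L scales with N², R with N), ignoring the current induced in the shorted loop. The parameter values below are illustrative assumptions, not those of the tested machines.

```python
# Simplified equivalent-circuit illustration of why a higher test frequency
# sharpens the pole drop test: reactance is more sensitive than resistance to
# shorted turns, so the voltage-drop asymmetry grows as reactance dominates.
# Parameter values are illustrative, not taken from the tested machines.
import math

R, L = 2.0, 2e-3          # healthy pole: resistance [ohm], inductance [H]
s = 0.02                  # 2 % of the pole's turns shorted

def pole_impedance(freq, shorted_fraction=0.0):
    n = 1.0 - shorted_fraction            # remaining fraction of turns
    return complex(R * n, 2 * math.pi * freq * L * n**2)

for f in (60.0, 120.0, 400.0, 1000.0):
    z_healthy = abs(pole_impedance(f))
    z_faulted = abs(pole_impedance(f, s))
    # Relative reduction in the measured voltage drop across the faulted pole
    print(f"{f:6.0f} Hz: drop reduced by {100 * (1 - z_faulted / z_healthy):.2f} %")
```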
Fermi polarons are quasiparticles that emerge when a bosonic impurity is immersed in a fermionic bath. Depending on the boson-fermion interaction strength, the Fermi-polaron resonances exhibit either attractive or repulsive interactions, which impose further experimental challenges on understanding the subtle light-driven dynamics. Here, we report the light-driven dynamics of attractive and repulsive Fermi polarons in monolayer WSe₂ devices. Time-resolved polaron resonances are probed using femtosecond below-gap Floquet engineering with tunable exciton-Fermi sea interactions. While conventional optical Stark shifts are observed in the weak-interaction regime, the resonance shift of attractive polarons increases, whereas that of repulsive polarons decreases, with increasing Fermi-sea density. A model Hamiltonian using the Chevy ansatz suggests that the off-resonant pump excitation influences the free carriers that interact with excitons in the opposite valley, thereby reducing the binding energy of attractive polarons. Our findings may enable coherent Floquet engineering of Bose-Fermi mixtures on ultrafast time scales.
In this paper, we propose the XPaC (XGBoost Prediction and Cumulative Prospect Theory (CPT)) model to minimize the operational losses of the power grid, taking into account both the prediction of electric vehicle (EV) charging demand and the associated uncertainties, such as when customers will charge, how much electric energy they will need, and for how long. Given that power utilities supply electricity with limited resources, it is crucial to efficiently control EV charging peaks or predict charging demand during specific periods to maintain stable grid operations. While the total amount of EV charging is a key factor, when and where the charging occurs can be even more critical for effective management of the grid. Although numerous studies have focused on individually predicting EV charging patterns or demand and evaluating the effectiveness of EV charging control, comprehensive assessments of the actual operational benefits and losses resulting from charging control based on predicted charging behavior remain limited. In this study, we first compare the performance of LSTM (Long Short-Term Memory), GRU (Gated Recurrent Unit), and decision tree-based XGBoost regression models in predicting hourly charging probabilities and the need for grid demand control. Using the predicted results, we then apply the CPT algorithm to analyze the optimal operational scenarios and assess the expected profit and loss for the power grid. Since the charging control optimizer with XPaC incorporates real-world operational data and uses actual records for analysis, it is expected to provide a robust solution for managing the demand arising from the rapid growth of electric vehicles while operating within the constraints of limited energy resources.
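A minimal sketch of the prediction stage of an XPaC-style pipeline is given below, using an XGBoost classifier to estimate hourly charging probability; the features and data are synthetic placeholders, and the CPT-based profit-and-loss evaluation stage is not shown.

```python
# Minimal sketch of the prediction stage only: an XGBoost model estimating
# hourly charging probability from calendar features. Features and data are
# synthetic placeholders; the CPT evaluation stage is omitted.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 5000
hour = rng.integers(0, 24, size=n)
weekday = rng.integers(0, 7, size=n)
temp_c = rng.normal(15, 8, size=n)

# Synthetic label: charging is more likely in the evening on weekdays.
p_charge = 0.15 + 0.5 * ((hour >= 18) & (hour <= 22)) * (weekday < 5)
y = (rng.random(n) < p_charge).astype(int)
X = np.column_stack([hour, weekday, temp_c])

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)

# Predicted probability of charging at 19:00 on a Tuesday, 10 degrees C
print(model.predict_proba(np.array([[19, 1, 10.0]]))[:, 1])
```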
In low-inertia systems with a high penetration of renewable energy, the rotational kinetic energy and inertia constant are significant factors in determining frequency stability. The energy released as the frequency decreases during a contingency represents only a portion of the inertia that a synchronous machine possesses in the normal state. However, when securing inertia or planning additional resources to secure frequency stability, inertia in the normal state is used as the standard rather than the amount of energy actually released during a fault. Therefore, in this paper, we define the energy actually emitted from a synchronous machine as Effective inertia. To evaluate Effective inertia under various operating conditions, we conducted a comprehensive review of approximately 24,627 cases from 2019, 2020, and 2021. In systems with low rotational kinetic energy, both low and high frequency nadirs were observed, indicating high uncertainty; Effective inertia, in contrast, showed a consistent relationship between the energy released and the minimum frequency. For instance, the rotational kinetic energy required to satisfy the frequency standard was 23 GWs, whereas the required Effective inertia was 858 MWs. We emphasize that securing inertia based on rotational kinetic energy includes additional imaginary energy that does not contribute to the frequency response, resulting in an energy requirement greater than that needed for Effective inertia. Therefore, to secure the frequency stability of future systems, the actual required energy based on Effective inertia is presented for use in the inertia market and in the design of FFR (Fast Frequency Response) resources.
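One way to read the distinction above is that rotational kinetic energy scales with frequency squared, so only the fraction 1 - (f_nadir/f0)² of the stored energy is actually released during a frequency dip. The sketch below works through this relation with illustrative numbers; the paper's exact definition of Effective inertia may differ.

```python
# Kinetic energy actually released as system frequency falls from f0 to the
# nadir, as one interpretation of "Effective inertia" (the paper's exact
# definition may differ). Nadir values are illustrative.
def released_energy(e_kinetic_mws, f0_hz, f_nadir_hz):
    """Rotational kinetic energy scales with frequency squared, so only the
    fraction 1 - (f_nadir / f0)^2 is released during the frequency dip."""
    return e_kinetic_mws * (1.0 - (f_nadir_hz / f0_hz) ** 2)

E_KIN = 23_000.0          # MWs of rotational kinetic energy (23 GWs)
for f_nadir in (59.9, 59.7, 59.2):
    e_rel = released_energy(E_KIN, 60.0, f_nadir)
    print(f"nadir {f_nadir} Hz: {e_rel:7.1f} MWs released")
```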
As power systems age, identifying the remaining useful life (RUL) of power facilities, including environmental effects, has become an important issue for reliable power system operation. However, it is difficult to identify the effects of spatially distributed environmental factors on each individual facility and to manage the individual histories of millions of distribution facilities. As a practical solution, we propose a spatiotemporal analysis that derives the comprehensive effect of lightning strikes, one such environmental factor, on the RUL of distribution transformers and provides a guideline for lightning protection by identifying lightning-prone areas. We acquired data on distribution transformer failures and lightning strikes in South Korea, and a spatiotemporally weighted regression analysis was applied to these data to derive the spatially cumulative effects of lightning, which are difficult to identify through individual field post-analyses. In addition, we propose a method for validating the regression model by evaluating it against transformer failures categorized through field post-analysis. All the proposed processes are performed on real-world data, and practical results are presented for reinforcing lightning protection within the Korean power system.
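A generic sketch of a spatiotemporally weighted regression is shown below: each record is weighted by a Gaussian kernel over its distance in space and time from a target location and date, and a weighted least-squares fit yields a local lightning coefficient. The data and bandwidths are synthetic, and this is not the paper's exact model.

```python
# Generic spatiotemporally weighted regression sketch: a weighted
# least-squares fit in which each failure record is weighted by a Gaussian
# kernel over its distance in space and time from the target point.
# Data and bandwidths are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 400
coords = rng.uniform(0, 100, size=(n, 2))            # km
times = rng.uniform(0, 11 * 365, size=n)             # observation day
lightning = rng.poisson(2.0, size=n).astype(float)   # nearby strike counts
y = 0.4 * lightning + rng.normal(0, 1.0, size=n)     # failure-risk proxy

def weighted_fit(target_xy, target_t, h_space=15.0, h_time=180.0):
    d_space = np.linalg.norm(coords - target_xy, axis=1)
    d_time = np.abs(times - target_t)
    w = np.exp(-0.5 * ((d_space / h_space) ** 2 + (d_time / h_time) ** 2))
    X = np.column_stack([np.ones(n), lightning])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta   # [local intercept, local lightning coefficient]

print(weighted_fit(np.array([50.0, 50.0]), 5 * 365.0))
```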
This study analyzes recloser operation in the South Korean distribution system to propose effective operational strategies for improving safety and efficiency. This research is based on actual data, such as recloser operation data and fault statistics provided by the Ministry of the Interior and Safety and the Korea Electric Power Corporation, without the use of simulation tools or experiments. Key operational elements, such as reclosure counts, sequence settings, and high-current interruption features, were analyzed. First, an analysis of reclosure counts revealed that over 73% of faults were cleared after the first reclosure, and when the second reclosure was included, more than 90% were successfully restored. This finding suggests that reducing the number of reclosures from the standard three to one or two would not significantly impact fault restoration performance while simultaneously reducing arc generation, thereby improving safety. Additionally, a review of recloser sequence settings highlighted the fact that the traditional 2F2D (two fast, two delayed) sequence often led to frequent instantaneous tripping, increasing the risk of arc generation. The 1F1D (one fast, one delayed) sequence, which applies a delayed trip after an initial fast trip, offers a better fault-clearing performance and reduces the risk of arc generation. Lastly, an analysis of the high-current interruption feature suggested that enabling this function for faults with low reclosing success rates, particularly in cases of short-circuit faults, and setting an immediate trip threshold for fault currents exceeding 3 kA would enhance both safety and efficiency. This operational strategy was implemented in the South Korean distribution system over a three-year period, starting in 2021. While there was a 2.1% decrease in reclosure success rates, this strategy demonstrated that similar success levels could be maintained while reducing the number of reclosures, thus mitigating equipment damage risks and improving safety measures. The refined recloser operation plan derived from this study is expected to enhance the overall stability and reliability of distribution systems.
240 members
Koudjo M. Koumadi
  • KEPCO Research Institute, Power Distribution ICT Group
Jungjoo Kim
  • Next Generation Transmission & Substation Laboratory
B.N. Ha
  • KEPCO Research Institute
Information
Address
Naju, South Korea