Jongseok Kim’s research while affiliated with Sungkyunkwan University and other places


Publications (7)


Persistent Memory I/O-Aware Task Placement for Mitigating Resource Contention
  • Conference Paper
  • September 2024

Hyunwoo Ahn · Jongseok Kim · Euiseong Seo






[Preview figures from "ezswap": FIGURE 1. Design overview of compressed swap schemes; FIGURE 2. Interaction among swap front-end, zswap and zpool; FIGURE 3. zpool management with the z3fold scheme; FIGURE 4. Number of file pages and anonymous pages evicted from the main memory during the execution of each application; FIGURE 7. Categorization of swap pages and victim selection weight of each category.]

ezswap: Enhanced Compressed Swap Scheme for Mobile Devices
  • Article
  • Full-text available
  • September 2019
  • 2,498 Reads · 13 Citations

IEEE Access

The limited memory capacity of mobile devices has led to the popular use of compressed swap schemes, which reduce the I/O operations involved in swapping infrequently accessed pages in and out. However, most current compressed swap schemes indiscriminately compress and store all swapped-out pages. Considering that both energy and computing power are scarce resources in mobile devices, and that modern applications frequently deal with already-compressed multimedia data, this blind approach may have adverse effects. In addition, these schemes focus only on anonymous pages and not on file-mapped pages, because the latter are backed by on-disk files. However, our observations revealed that, in mobile devices, file-mapped pages consume significantly more memory than anonymous pages. Last but not least, most current compressed swap schemes blindly follow the least-recently-used (LRU) discipline when choosing victim pages for replacement, without considering the compression ratio or data density of the cached pages. To overcome the aforementioned problems and maximize memory efficiency, we propose a compressed swap scheme, called enhanced zswap (ezswap), for mobile devices. ezswap accommodates not only anonymous pages but also clean file-mapped pages. It estimates the compression ratio of incoming pages from their information entropy, and selectively compresses and caches only the pages with beneficial compression ratios. In addition, its admission control and cache replacement algorithms are based on a cost-benefit model that considers not only the access recency of cached pages but also their information density and expected eviction cost. The proposed scheme was implemented in the Linux kernel for Android. Our evaluation with a series of commercial applications demonstrated that it reduced the amount of flash memory read by up to 55%, thereby improving application launch time by up to 22% in comparison to the original zswap.
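The entropy-based admission control described in the abstract can be sketched as follows. This is a minimal illustration, not ezswap's kernel implementation: the threshold value, function names, and the byte-level Shannon entropy estimator are all assumptions made for the example; the actual scheme runs in the Linux kernel and uses a cost-benefit model beyond this single check.

```python
import math
import random
from collections import Counter

PAGE_SIZE = 4096

# Hypothetical cutoff (bits per byte): ezswap derives a compression-ratio
# estimate from information entropy; the exact threshold is not given in
# the abstract, so 6.0 here is an illustrative value only.
ENTROPY_THRESHOLD_BITS = 6.0

def byte_entropy(page: bytes) -> float:
    """Shannon entropy of the page's byte distribution, in bits per byte (0..8)."""
    n = len(page)
    return -sum((c / n) * math.log2(c / n) for c in Counter(page).values())

def should_compress(page: bytes) -> bool:
    # High-entropy pages (e.g., already-compressed multimedia data) are
    # unlikely to shrink; skipping them saves CPU time and energy.
    return byte_entropy(page) < ENTROPY_THRESHOLD_BITS

# A zero-filled page is maximally compressible (entropy 0)...
zero_page = bytes(PAGE_SIZE)
# ...while pseudo-random bytes approximate already-compressed data.
random.seed(0)
random_page = bytes(random.randrange(256) for _ in range(PAGE_SIZE))
```

On this sketch, `should_compress(zero_page)` admits the page to the compressed cache, while `should_compress(random_page)` rejects it, mirroring the paper's point that blindly compressing incompressible data wastes scarce energy and CPU cycles.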


Citations (3)


... We observed such drift and will address this challenge in future work. Power/Energy Characterization and Optimization: Researchers used the tools and methods above to characterize the energy efficiency of critical workloads and primitives in AI and HPC running at different scales [38]-[44], and to study the efficiency of the latest innovations in GPUs and other accelerators [45], [46]. Prior work also investigated the impact of frequency capping, power capping, DVFS, and input data composition on energy efficiency [24], [47]-[51]. ...

Reference:

Methodology for Fine-Grain GPU Power Visibility and Insights
Know Your Enemy To Save Cloud Energy: Energy-Performance Characterization of Machine Learning Serving
  • Citing Conference Paper
  • February 2023

... Traditionally, in High-Performance Computing (HPC), CNN inference is performed solely on GPUs. Therefore, multiple works only focus on power efficiency for inference on GPUs, such as [18] and [22]. PELSI, on the other hand, focuses on HMPSoCs where embedded CPUs and GPUs are comparable in performance, and both are used for inference to maximize efficiency. ...

A DNN Inference Latency-aware GPU Power Management Scheme
  • Citing Conference Paper
  • October 2021

... To fulfill user expectations of seamless and rapid application relaunch, mobile systems preserve all execution-related data (called anonymous data in Linux [4]), such as stack and heap, in main memory. This practice, known as keeping applications alive in the background [1], [5]-[8], enables faster relaunches. However, it also results in significant main memory capacity requirements for each application. ...

ezswap: Enhanced Compressed Swap Scheme for Mobile Devices

IEEE Access