September 2024
Publications (7)
- November 2023 · 10 Reads
- February 2023 · 18 Reads · 8 Citations
- October 2022 · 6 Reads
- October 2021 · 15 Reads · 3 Citations
- June 2021 · 2 Reads · Journal of KIISE
- September 2019 · 2,498 Reads · 13 Citations · IEEE Access
The limited memory capacity of mobile devices has led to the widespread use of compressed swap schemes, which reduce the I/O operations incurred by swapping infrequently accessed pages in and out. However, most current compressed swap schemes indiscriminately compress and store all swapped-out pages. Given that both energy and computing power are scarce resources on mobile devices, and that modern applications frequently handle already-compressed multimedia data, this blind approach can have adverse effects. In addition, these schemes handle only anonymous pages, not file-mapped pages, because the latter are backed by on-disk files. Our observations revealed, however, that on mobile devices file-mapped pages consume significantly more memory than anonymous pages. Last but not least, most current compressed swap schemes blindly follow the least-recently-used (LRU) discipline when choosing victim pages for replacement, without considering the compression ratio or data density of the cached pages. To overcome these problems and maximize memory efficiency, we propose a compressed swap scheme for mobile devices called enhanced zswap (ezswap). ezswap accommodates not only anonymous pages but also clean file-mapped pages. It estimates the compression ratio of incoming pages from their information entropy, and selectively compresses and caches only pages with beneficial compression ratios. In addition, its admission control and cache replacement algorithms are based on a cost-benefit model that considers not only the access recency of cached pages but also their information density and expected eviction cost. The proposed scheme was implemented in the Linux kernel for Android. Our evaluation with a series of commercial applications demonstrated that it reduced the amount of flash memory read by up to 55%, thereby improving application launch time by up to 22% compared to the original zswap.
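The entropy-based admission control and cost-benefit replacement described in the abstract can be sketched as follows. This is a minimal illustration only: the 6.0 bits/byte threshold and the victim-scoring formula are assumptions for demonstration, not ezswap's actual parameters.

```python
import math
from collections import Counter

PAGE_SIZE = 4096  # typical page size on Linux/Android

def byte_entropy(page: bytes) -> float:
    """Shannon entropy of the page's byte histogram, in bits per byte (0..8)."""
    n = len(page)
    counts = Counter(page)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def should_compress(page: bytes, threshold_bits: float = 6.0) -> bool:
    """Admission control: cache a page in compressed form only if its entropy
    suggests a beneficial compression ratio. High-entropy pages (e.g. data from
    already-compressed multimedia files) compress poorly, so they are swapped
    out uncompressed. The 6.0 bits/byte cutoff is an illustrative assumption,
    not the threshold used by ezswap."""
    return byte_entropy(page) < threshold_bits

def victim_score(age: float, compressed_size: int, eviction_cost: float) -> float:
    """Cost-benefit replacement score: prefer victims that are stale (high age),
    occupy much compressed space (low information density), and are cheap to
    evict -- e.g. clean file-mapped pages, which need no writeback. The
    weighting here is a hypothetical sketch of the cost-benefit idea, not
    ezswap's actual formula."""
    return (age * compressed_size) / eviction_cost
```

As a sanity check on the entropy heuristic: a zero-filled page has entropy 0 bits/byte and is admitted, while a page cycling through all 256 byte values has entropy 8 bits/byte and is rejected.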
Citations (3)
... We observed such drift and will address this challenge in future work. Power/Energy Characterization and Optimization: Researchers used the tools and methods above to characterize the energy efficiency of critical workloads and primitives in AI and HPC running on different scales [38]- [44], and to study the efficiency of the latest innovations in GPUs and other accelerators [45], [46]. Prior work also investigated the impact of frequency capping, power capping, DVFS, and input data composition on energy efficiency [24], [47]- [51]. ...
- Citing Conference Paper
February 2023
... Traditionally, in High-Performance Computing (HPC), CNN inference is performed solely on GPUs. Therefore, multiple works only focus on power efficiency for inference on GPUs, such as [18] and [22]. PELSI, on the other hand, focuses on HMPSoCs where embedded CPUs and GPUs are comparable in performance, and both are used for inference to maximize efficiency. ...
- Citing Conference Paper
October 2021
... To fulfill user expectations of seamless and rapid application relaunch, mobile systems preserve all execution-related data (called anonymous data in Linux [4]), such as stack and heap, in main memory. This practice, known as keeping applications alive in the background [1], [5]-[8], enables faster relaunches. However, it also results in significant main memory capacity requirements for each application. ...
- Citing Article
September 2019
IEEE Access