Kuo-Chan Huang
National Taichung University of Education · Department of Computer Science

Ph.D.

About

83 Publications
4,065 Reads
582 Citations
Citations since 2017: 17 Research Items, 305 Citations
[Citations per year chart, 2017–2023]
Introduction
Additional affiliations
August 2007 - July 2021
National Taichung University of Education
Position
  • Professor
Education
September 1994 - July 1998
National Chiao Tung University
Field of study
  • Computer Science

Publications (83)
Article
Full-text available
Workflow computing has become an essential part of many scientific and engineering fields, while workflow scheduling has long been a well-known NP-complete research problem. Major previous works can be classified into two categories: heuristic-based and guided random-search-based workflow scheduling methods. Monte Carlo Tree Search (MCTS) is a rece...
Article
Full-text available
Usually, a large number of concurrent bag-of-tasks (BoT) application execution requests are submitted to cloud data centers (CDCs), which need to be optimally scheduled on the physical cloud resources to obtain maximal performance. In this paper, the NP-Hard cloud task scheduling (CTS) problem for scheduling concurrent BoT applications is mode...
Article
Full-text available
Resource allocation is vital for improving system performance in big data processing. The resource demand of various applications can be heterogeneous in cloud computing. Therefore, a resource gap occurs when some resource capacities are exhausted while other resource capacities on the same server are still available. This phenomenon is more appare...
Chapter
Fully probing plays an important role in the nonogram solving algorithm developed by Wu et al., whose implementation, named LalaFrogKK, has won several nonogram tournaments since 2011. Different fully probing methods affect the overall nonogram solving performance greatly as shown in previous studies. In this paper, we explore fully probing efficie...
Article
Full-text available
Most modern parallel programs are written with the moldable property. However, most existing parallel computing systems treat such parallel programs as rigid jobs for scheduling, resulting in two drawbacks. The first is inflexibility and inefficiency in processor allocation, leading to resource fragmentation and thus poor performance. The second is...
Article
Nonogram is a typical two-dimensional logical puzzle game in which each pixel involves two constraints associated with the intersecting row and column. Recently proposed approaches can efficiently solve many puzzles via logical deduction based on 2-SAT formulas. This paper proposes a set of new logical properties for inferring the consisten...
Article
This paper presents our experimental studies on improving the speed of solving nonogram puzzles. Our approach is based on the algorithm developed by Wu et al. A puzzle solving program, named LalaFrogKK, implementing the algorithm has won several nonogram tournaments since 2011. The algorithm consists of three major parts: propagation based on line...
Article
Cloud computing and services have greatly changed the way people develop and use software, and have also raised many new research issues. In this paper, we investigate two such important issues, service deployment and service request scheduling, for composite cloud services in dynamic cloud environments. We present load-aware service deployment...
Chapter
The Internet of Things is an emerging paradigm for enabling easy data collection and exchange among a wide variety of devices. As the scale of the Internet of Things grows, cloud computing systems can be applied to mine the big data it generates. This paper proposes a task scheduling approach for time-critical data streaming app...
Conference Paper
This paper addresses the distributed resource allocation problem in the federated cloud environment, in which the deployment and management of multiple clouds aim to meet the clients’ requirements. In such an environment, users can optimize service delivery by selecting the most suitable provider in terms of cost, efficiency, flexibility, and ava...
Article
Traditionally, high-performance computing (HPC) systems usually deal with so-called best-effort jobs, which do not have deadlines and are scheduled in an as-quick-as-possible manner. Recently, the concept of HPC as a service (HPCaaS) was proposed, aiming to transform HPC facilities and applications into a more convenient and accessible service mod...
Article
This paper presents a Platform of Extensible Workflow Simulation Service (Pewss), which we have developed to provide a cloud service for aiding research work in workflow scheduling. Simulation has been a major tool for performance evaluation and comparison in workflow scheduling research. However, researchers usually have to develop their own s...
Article
Full-text available
Cloud computing is an emerging technology for rapidly provisioning and releasing resources on-demand from a shared resource pool. When big data is analyzed/mined on the cloud platform, the efficiency of resource provisioning would affect the system performance. This work proposes a framework for proactive resource provisioning in IaaS (Infrastructu...
Article
Full-text available
Cloud computing is an emerging technology which relies on virtualization techniques to achieve the elasticity of shared resources for providing on-demand services. When the service demand increases, more resources are required to satisfy the service demand. A single cloud generally cannot provide unlimited services with limited physical resources; th...
Article
Nowadays, many large-scale scientific and engineering applications are usually constructed as dependent task graphs, also called workflows, to describe complex interrelated computation and communication among constituent software modules or programs. Therefore, scheduling workflows efficiently becomes an important issue in modern parallel computin...
Conference Paper
Cloud computing and services have greatly changed the way people develop and use software, and have also raised many new research issues. In this paper, we investigate two such important issues, service deployment and service request scheduling, for composite cloud services in dynamic cloud environments. We present load-aware service deployment...
Article
Full-text available
Composite cloud services based on the methodologies of Software as a Service and Service-Oriented Architecture are transforming how people develop and use software. Cloud service providers are confronting the service selection problem when composing composite cloud services. This paper deals with an important type of service selection problem, mini...
Article
Full-text available
Parallel computation has been widely applied in a variety of large-scale scientific and engineering applications. Many studies indicate that exploiting both task and data parallelisms, i.e. mixed-parallel workflows, to solve large computational problems can get better efficacy compared with either pure task parallelism or pure data parallelism. Sch...
Conference Paper
Task ranking and allocation are two major steps in list-based workflow scheduling. This paper explores various possibilities, evaluates recent approaches in the literature, and proposes several new task ranking and allocation heuristics. A series of simulation experiments have been conducted to evaluate the proposed heuristics. Experimental results...
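To make the two steps of list-based workflow scheduling concrete, the following is a small illustrative sketch with hypothetical task-graph data: tasks are ranked by a generic upward rank (longest remaining path) and then allocated, in rank order, to the processor giving the earliest finish time. It is not the ranking or allocation heuristics evaluated or proposed in this paper, and communication costs are ignored for brevity.

    def upward_rank(task, succ, cost, memo):
        # rank = task cost plus the longest remaining path to an exit task
        if task not in memo:
            memo[task] = cost[task] + max(
                (upward_rank(s, succ, cost, memo) for s in succ[task]), default=0)
        return memo[task]

    def list_schedule(tasks, succ, cost, num_procs):
        memo = {}
        order = sorted(tasks, key=lambda t: upward_rank(t, succ, cost, memo), reverse=True)
        proc_free = [0.0] * num_procs      # time at which each processor becomes free
        finish = {}
        for t in order:
            # earliest start: all predecessors of t must have finished
            ready = max((finish[p] for p in tasks if t in succ[p]), default=0.0)
            best = min(range(num_procs), key=lambda i: max(proc_free[i], ready))
            finish[t] = max(proc_free[best], ready) + cost[t]
            proc_free[best] = finish[t]
        return finish

    # Toy diamond-shaped workflow on 2 processors (hypothetical costs).
    tasks = ["A", "B", "C", "D"]
    succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    cost = {"A": 2, "B": 3, "C": 1, "D": 2}
    print(list_schedule(tasks, succ, cost, 2))   # {'A': 2, 'B': 5, 'C': 3, 'D': 7}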
Article
Workflow scheduling on parallel systems has long been known to be an NP-complete problem. As modern grid and cloud computing platforms emerge, it becomes indispensable to schedule mixed-parallel workflows in an online manner in a speed-heterogeneous multi-cluster environment. However, most existing scheduling algorithms were not developed for online...
Article
Workflow scheduling has long been an important research topic in the field of parallel computing. Clustering-based methods are one of the major types of workflow scheduling approaches which have been shown superior to other kinds of methods in many cases due to their advantage of minimizing inter-task communication costs. Most previous research dea...
Article
This study proposes a novel efficient resource allocation mechanism for federated clouds, which takes the communication overhead into consideration, to improve system throughput and reduce resource repacking overhead in the auto-scaling mechanism. In general, when the amount of service requests increases, more and more resources are allocated to sa...
Article
Cloud computing has caused a revolution in our way of developing and using software. Software development and deployment based on the new models of Software as a Service (SaaS) and Service-Oriented Architecture (SOA) are expected to bring a lot of benefits for users. However, software developers and service providers have to address new challenging...
Article
List-based workflow scheduling has received much research attention and been implemented in many existing workflow computing systems as more and more scientific and engineering applications need to exploit task parallelism for performance improvement. In this paper, we propose two task-ranking mechanisms and one task allocation method for the two m...
Article
In a social network, individuals often simultaneously belong to multiple social communities; therefore, the detection of relationships among individuals is very important. However, most community detection methods apply only a single relationship in dynamic social networks with multiple relationships among individuals. Therefore, this study propose...
Article
As cloud computing emerges and gains acceptance, more and more software applications of various domains are transforming into the SaaS model. Recently, the concept of HPC as a Service (HPCaaS) was proposed to bring the traditional high performance computing field into the era of cloud computing. One of its goals aims to allow users to get easier ac...
Article
In a heterogeneous multi-cluster (HMC) system, processor allocation is responsible for choosing available processors among clusters for job execution. Traditionally, processor allocation in HMC considers only resource fragmentation or processor heterogeneity, which leads to heuristics such as Best-Fit (BF) and Fastest-First (FF). However, those heu...
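As a point of reference for the two baseline heuristics named above, here is a minimal sketch with made-up cluster data (free processor counts and relative speeds); it does not reflect the improved allocation method this paper develops.

    # Hypothetical clusters: (name, free_processors, relative_speed)
    clusters = [("C1", 16, 1.0), ("C2", 8, 2.0), ("C3", 32, 1.5)]

    def best_fit(clusters, job_procs):
        # Best-Fit: the smallest cluster that still fits the job, minimizing fragmentation
        fitting = [c for c in clusters if c[1] >= job_procs]
        return min(fitting, key=lambda c: c[1], default=None)

    def fastest_first(clusters, job_procs):
        # Fastest-First: the fastest cluster that fits the job, minimizing execution time
        fitting = [c for c in clusters if c[1] >= job_procs]
        return max(fitting, key=lambda c: c[2], default=None)

    print(best_fit(clusters, 12))       # ('C1', 16, 1.0) -- least leftover capacity
    print(fastest_first(clusters, 12))  # ('C3', 32, 1.5) -- highest speed

The two choices can differ for the same job, which is why neither criterion alone suffices in a heterogeneous multi-cluster system.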
Article
Recently, the concept of HPC as a Service (HPCaaS) was proposed to bring the traditional high performance computing field into the era of cloud computing. One of its goals aims to allow users to get easier access to HPC facilities and applications. This paper deals with related job submission and scheduling issues to achieve such goal. Traditionall...
Article
Traditionally, users who submit parallel jobs to supercomputing centers need to specify the number of processors that each job requires. Job schedulers then allocate resources to each job according to the processor requirement. However, this kind of allocation has been shown to lead to degraded system utilization and job turnaround time when mismat...
Conference Paper
Cloud computing has caused a revolution in our way of using software. Software development and deployment based on the new models of Software as a Service (SaaS) and Service-Oriented Architecture (SOA) are expected to bring a lot of benefits for users. However, software developers and software service providers will have to address new challenging...
Article
Full-text available
Grid performance is usually measured by the average turnaround time of all jobs in the system. A job’s turnaround time consists of two parts: queue waiting time and actual execution time, which, in a heterogeneous grid environment, are severely affected by the resource fragmentation and speed heterogeneity factors. Most existing processor allocatio...
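The decomposition mentioned here is simply turnaround time = queue waiting time + actual execution time, averaged over all jobs; a tiny sketch with hypothetical job records:

    # (wait_seconds, execution_seconds) per job -- hypothetical values
    jobs = [(120, 300), (0, 4500), (3600, 600)]
    avg_turnaround = sum(w + e for w, e in jobs) / len(jobs)
    print(avg_turnaround)   # (420 + 4500 + 4200) / 3 = 3040.0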
Conference Paper
Most previous workflow scheduling research focused on scheduling a single workflow on parallel systems. Recent research shows that utilizing idle time slots between scheduled tasks is a promising direction for efficient multiple workflow scheduling. Stavrinides and Karatza proposed a list scheduling approach to efficient utilization of the idle ti...
Chapter
This chapter elaborates on the quality of service (QoS) aspect of load sharing activities in a computational grid environment. Load sharing is achieved through appropriate job scheduling and resource allocation mechanisms. A computational grid usually consists of several geographically distant sites, each with a different amount of computing resources. D...
Conference Paper
Many large-scale scientific applications are usually constructed as workflows due to large amounts of interrelated computation and communication. Workflow scheduling has long been a research topic in parallel and distributed computing. However, most previous research focuses on single workflow scheduling. As cloud computing emerges, users can now h...
Article
Scheduling workflow applications in grid environments is a great challenge, because it is an NP-complete problem. Many heuristic methods have been presented in the literature and most of them deal with a single workflow application at a time. In recent years, several heuristic methods have been proposed to deal with concurrent workflows or online w...
Conference Paper
This paper proposes a processor allocation technique named temporal look-ahead processor allocation (TLPA), which makes allocation decisions by evaluating the allocation effects on subsequent jobs in the waiting queue. TLPA has two strengths. First, it takes multiple performance factors into account when making allocation decisions. Second, it can be u...
Article
Resource discovery is an important mechanism in P2P applications. Chord is usually one of the structured overlays applied in the resource discovery mechanism. Chord adopts the finger table to record the connection between the node and its successors in order to support resource discovery in O(log(N)) (N is the number of nodes). However, Chord has s...
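For background on the O(log(N)) lookup claim, below is a simplified sketch of a standard Chord finger table and greedy lookup, with hypothetical node identifiers and wrap-around handling omitted; it illustrates plain Chord, not the improvement proposed in this paper.

    M = 6                                                     # 2**M identifiers on the ring
    nodes = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])    # hypothetical node ids

    def successor(ident):
        # first node clockwise from ident on the identifier ring
        ident %= 2 ** M
        return next((n for n in nodes if n >= ident), nodes[0])

    def finger_table(n):
        # the i-th finger of node n points to successor(n + 2**i), i = 0..M-1
        return [successor(n + 2 ** i) for i in range(M)]

    def lookup(start, key):
        # greedily forward to the farthest finger not past the key,
        # so the remaining distance roughly halves at each hop
        node, hops = start, 0
        while successor(key) not in (node, successor(node + 1)):
            preceding = [f for f in finger_table(node) if node < f <= key]
            node = max(preceding) if preceding else successor(node + 1)
            hops += 1
        return successor(key), hops

    print(finger_table(8))   # [14, 14, 14, 21, 32, 42]
    print(lookup(8, 54))     # (56, 2): key 54 is stored at node 56, reached in 2 hops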
Chapter
In a computational Grid environment, a common practice is to try to allocate an entire parallel job onto a single participating site. Sometimes a parallel job, upon its submission, cannot fit in any single site due to the occupation of some resources by running jobs. How the job scheduler handles such situations is an important issue which has the...
Article
Multi-cluster is the common underlying architecture of most grid and cloud environments, which usually consist of multiple clusters located at different places. One important characteristic of such computing environments is the performance difference between intra-cluster and inter-cluster communications. Intra-cluster communication networks usuall...
Article
Workflow management systems have been widely used in many business process management (BPM) applications. There are also a lot of companies offering commercial software solutions for BPM. However, most of them adopt a simple client/server architecture with one single centralized workflow-management server only. As the number of incoming workflow re...
Conference Paper
Cloud computing opens new opportunities for application providers because with the policy “add as needed and pay as used” they can economize the cost for computing resources. In cloud environments, issues such as resource allocation and dynamic resource provisioning based on users’ QoS constraints are yet to be addressed for interactive workflow ap...
Conference Paper
Scheduling workflow applications in grid environments is a great challenge, because it is an NP-complete problem. Many heuristic methods have been presented in the literature and most of them deal with a single workflow application at a time. In recent years, several heuristic methods have been proposed to deal with concurrent workflows or online w...
Article
Full-text available
Cloud computing is growing increasingly popular and appears well-suited to meet the demand of resource sharing. Peer-to-Peer networking is an emerging technique for resource discovery, which is an important mechanism in Cloud Computing. Chord is usually one of the structured overlays applied in the resource discovery mechanism. Chord adopts the fi...
Conference Paper
Most workflow management systems nowadays are based on centralized client/server architecture. Under this architecture, the response time of requests might increase unacceptably when the number of users who log in to the system increases quickly and a large number of requests are sent to the centralized server within a short time period. Parallel serv...
Article
In a computational grid environment, a common practice is to try to allocate an entire parallel job onto a single participating site. Sometimes a parallel job, upon its submission, cannot fit in any single site due to the occupation of some resources by running jobs. How the job scheduler handles such situations is an important issue which has the pot...
Conference Paper
Many parallel computer systems installed in computing centers worldwide, which adopt backfilling-based job scheduling policies, require users to provide an estimated job execution time when submitting a job to the system. This paper presents an approach that takes advantage of the estimated job execution time to effectively allocate processo...
Conference Paper
Full-text available
In a heterogeneous grid environment, there are two major factors which would severely affect overall system performance: speed heterogeneity and resource fragmentation. Moreover, the relative effect of these two factors changes with different workload and resource conditions. Processor allocation methods have to deal with this issue. However, most...
Conference Paper
Job scheduling has attracted much research attention in recent years. Various job scheduling methods have been proposed and evaluated under different workload and grid conditions. However, few efforts have been made to analyze the underlying causes that lead to the performance results of the proposed scheduling methods. This paper presents our stud...
Chapter
This chapter elaborates on the quality of service (QoS) aspect of load sharing activities in a computational grid environment. Load sharing is achieved through appropriate job scheduling and resource allocation mechanisms. A computational grid usually consists of several geographically distant sites, each with a different amount of computing resources. D...
Chapter
Most current grid environments are established through collaboration among a group of participating sites which volunteer to provide free computing resources. Therefore, feasible load sharing policies that benefit all sites are an important incentive for attracting computing sites to join and stay in a grid environment. Moreover, a grid environment...
Article
Grid computing systems integrate geographical computing resources across virtual organizations. In grid systems, one of the most important challenges is how to efficiently exploit shared computing resources. This study addresses the job migration policies for exploiting computing resources in grid computing systems. In this study, we propose two jo...
Conference Paper
Full-text available
In a computational grid environment, a common practice is to try to allocate an entire parallel job onto a single participating site. Sometimes a parallel job, upon its submission, cannot fit in any single site due to the occupation of some resources by running jobs. How the job scheduler handles such situations is an important issue which has the pot...
Conference Paper
Taiwan UniGrid (Taiwan University Grid) is a Grid computing platform founded by a community of educational and research organizations in Taiwan interested in Grid computing technologies. In this paper, we present the design and development of a middleware for Taiwan UniGrid. Taiwan UniGrid middleware consists of three primary modules: 1)...
Conference Paper
Multi-core CPUs have become the trend of microprocessor technology. This development will not only influence the architecture of computer systems, but also bring new approaches to software programming, most importantly, parallel processing. Task mapping plays an important role in the overall performance of a parallel numerical algorithm. In this paper, we...
Conference Paper
In this paper, we propose an improved model for predicting HPL (High Performance Linpack) performance. In order to accurately predict the maximal LINPACK performance, we first divide the performance model into two parts: computational cost and message passing overhead. For the message passing overhead, we adopt Xu and Hwang’s broadcast model instead...
Conference Paper
Full-text available
A grid has to provide strong incentives for participating sites to join and stay in it. Participating sites are concerned with the performance improvement brought by the grid for the jobs of their own local user communities. Feasible and effective load sharing is key to fulfilling such a concern. This paper explores the load-sharing policies concern...
Article
This paper presents our experience in parallelizing a Monte Carlo/fluid hybrid simulation of radio frequency glow discharge in plasma physics. The application is a dynamic loosely-synchronous system with time-increasing workload. We adopt adaptive processor allocation to effectively parallelize the time-increasing workload. The results show that ad...
Article
This paper proposes waiting ratio as a basis in evaluating various scheduling methods for dynamic workloads consisting of multi-processor jobs on parallel computers. We evaluate commonly used methods as well as several methods proposed in this paper by simulation studies. The results indicate that some commonly used methods do not improve the waiti...
Article
Full-text available
On parallel processors or in distributed computing environments, generating and sharing one stream of random numbers for all parallel processing elements is usually impractical. A more attractive method is to allow each processing element to generate random numbers independently. This paper investigates parallel use of multiplicative congruential g...
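As a hedged illustration of the generator family discussed: a multiplicative congruential generator computes x_{n+1} = (a * x_n) mod m, and each processing element can run its own independently seeded stream. The Park-Miller "minimal standard" parameters are used below and the per-element seeds are made up; whether such independently seeded streams are statistically safe is exactly the kind of question the paper studies.

    A, M = 16807, 2 ** 31 - 1          # Park-Miller multiplicative congruential generator

    def mcg_stream(seed):
        # yields an endless stream of uniform values in (0, 1): x_{n+1} = (A * x_n) mod M
        x = seed
        while True:
            x = (A * x) % M
            yield x / M

    # one independent stream per processing element, distinguished only by its seed
    streams = {pe: mcg_stream(seed) for pe, seed in enumerate([12345, 67890, 13579, 24680])}
    print({pe: next(s) for pe, s in streams.items()})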
Conference Paper
Processor allocation and job scheduling are two complementary techniques for improving the performance of parallel systems. This paper presents an effort in studying the issues of processor allocation and job scheduling on the emerging computing grid platform and developing an integrated approach to efficient workload management. The experimental r...
Article
SAN clusters are usually built for high-performance storage systems. This paper evaluates the execution performance of SAN-PC Cluster for CFD applications. Feasibility exploration of this architecture and comparisons with other parallel computers are provided as useful references for those people who are planning to build PC clusters. The experimen...
Article
Recently, the superior and continuously improving cost-performance ratio of commodity hardware and software has made PC clustering a popular alternative for high-performance computing in both academic institutes and industrial organizations. The purpose of this work is to use PC clusters to solve a weather-prediction model in parallel mode, and the...
Article
Full-text available
When a die is cast, the outcome is one of the six sides, i.e. the outcome is discrete and uniformly distributed over the range R = {1, 2, 3, 4, 5, 6}. Generating random numbers with such a distribution is very easy: obtain a random number w ∈ W, the domain of the random numbers, and take (w mod |R|) + 1. However, many uniform discrete distributions have a rathe...
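As a small sketch of the mapping described in the abstract (the die example): draw a raw random word w and return (w mod |R|) + 1. The generator below is Python's standard library, used purely for illustration; note that when the generator's domain size is not a multiple of |R|, this simple mapping carries a slight modulo bias, which is part of why more general uniform discrete ranges are harder to handle.

    import random

    R = range(1, 7)                     # die faces {1, ..., 6}

    def die_roll():
        w = random.getrandbits(32)      # stand-in for a raw random word w from the domain W
        return (w % len(R)) + 1         # (w mod |R|) + 1, the mapping described above

    print([die_roll() for _ in range(10)])   # e.g. [3, 6, 1, 2, 5, 4, 6, 2, 1, 3]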
Conference Paper
This paper presents the benchmarking results and performance evaluation of the PC cluster built at the National Center for High-Performance Computing (NCHC) in Taiwan. The evaluation compares different cluster architecture and software platforms. The results indicate that PC cluster in general has the advantage of better cost/performance ratio, and...
Article
An adaptive parallel computing model of a plasma source for semiconductor processing is developed. The simulation, which uses the Monte Carlo method in the low-pressure plasma regime, is characterized by a time-varying problem size. We propose an adaptive processor allocation methodology for the parallel simulation to dynamically adapt to the workload variation and...
Article
This paper presents two design patterns useful for parallel computations of the master-slave model. These patterns are concerned with task management and parallel and distributed data structures. They can be used to help address the issues of data partition and mapping, dynamic task allocation and management in parallel programming with the benefit...
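A minimal, hypothetical sketch of the master-slave idea these patterns address, using Python's multiprocessing pool as a stand-in for a distributed setting; the data partitioning and the work function are made up and are not the patterns' actual implementation from the paper.

    from multiprocessing import Pool

    def work(chunk):
        # slave side: process one task (here, just sum a chunk of partitioned data)
        return sum(chunk)

    if __name__ == "__main__":
        # master side: partition the data, hand chunks to idle workers, gather the results
        data = list(range(1000))
        chunks = [data[i:i + 100] for i in range(0, len(data), 100)]
        with Pool(processes=4) as pool:
            partial = pool.map(work, chunks)
        print(sum(partial))   # 499500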
Article
Full-text available
This paper presents a parallel pattern compiled code logic simulator which can handle the transport delay as well as the inertial delay of the logic gate. It uses Potential-Change Frame, incorporating inertial functions, to execute event-canceling operation of gates, thus eliminating the conventional time wheel mechanism. As a result, it can adopt...
Conference Paper
Full-text available
This work proposes an approach to generate weighted random patterns which can maximally excite a circuit during its burn-in testing. The approach is based on a probability model and a maximization procedure to obtain signal transition probability distribution for primary inputs and to generate weighted random patterns according to the obtained prob...
Conference Paper
This paper presents two design patterns useful for parallel computations of master-slave model. These patterns are concerned with task management and parallel and distributed data structures. They can be used to help address the issues of data partition and mapping, dynamic task allocation, and load balancing in parallel programming with the benefi...
Conference Paper
Full-text available
This paper presents a parallel pattern compiled code logic simulator which can handle the transport delay as well as the inertial delay of the logic gate. It uses Potential-Change Frame, incorporating inertial functions, to execute the event-cancelling operation for gates, thus eliminating the conventional time wheel mechanism. As a result, it can...
Article
Full-text available
Cache performance in modern computers is important for program efficiency. A cache is thrashing if a significant amount of time is spent moving data between the memory and the cache. This paper presents two cache thrashing examples, one in scientific computing and one in image processing, both of which involve several one-dimensional arrays that ar...
Article
Full-text available
LAN-connected workstations are a heterogeneous environment, where each workstation provides time-varying computing power, and thus dynamic load balancing mechanisms are necessary for parallel applications to run efficiently. Parallel basic linear algebra subprograms (BLAS) have recently shown promise as a means of taking advantage of parallel compu...
Conference Paper
Full-text available
This paper presents our experience and future plan for exploring high-performance scientific computing in distributed computing environments. The purpose of our research is to ease the development of high-performance scientific applications in distributed computing environments. This paper addresses issues arising when applying object-oriented prog...
Article
Full-text available
Parallel BLAS libraries have recently shown promise as a means of taking advantage of parallel computing in solving scientific problems. However, little work has been done on providing such a parallel library in LAN-connected workstations. Our motivation for this research lies in the strong belief that since LAN-connected workstations are highly co...
Conference Paper
A new testing scheme, transient transition count (TTC) testing, which is able to detect hard-to-test faults and redundant faults, in addition to conventional stuck-at faults, is proposed. The scheme is based on applying a pair of transition patterns and observing the transition count of the transient output response of a circuit to detect faults. A...
