
Oscar H. Mondragon
- PhD
- Professor (Full) at Universidad Autónoma de Occidente
About
16 Publications
5,172 Reads
91 Citations
Introduction
Oscar H. Mondragon currently works at the Automatics and Electronics Department, Universidad Autónoma de Occidente. Oscar does research in HPC Systems, Cloud Computing, and Distributed Computing.
Current institution: Universidad Autónoma de Occidente
Additional affiliations
- August 2011 - July 2016
- August 2007 - present
- June 2006 - December 2006: Telecom Italia Labs, Research Intern
Education
- August 2011 - July 2016
- August 2011 - December 2013
- June 2005 - December 2006
Publications (16)
Moving high-performance computing (HPC) applications from HPC clusters to cloud computing clusters, also known as the HPC cloud, has recently been proposed by the HPC research community. Migrating these applications from the former environment to the latter can have an important impact on their performance, due to the different technologies used an...
In the above article [1], the Abstract is incorrect. It should read as follows:
Fault tolerance and the availability of applications, computing infrastructure, and communications systems during unexpected events are critical in cloud environments. The microservices architecture, and the technologies that it uses, should be able to maintain acceptable service levels in the face of adverse circumstances. In this paper, we discus...
Cloud computing systems are rapidly evolving toward multicloud architectures supported on heterogeneous hardware. Cloud service providers are widely offering different types of storage infrastructures and multi-NUMA architecture servers. Existing cloud resource allocation solutions do not comprehensively consider this heterogeneous infrastructure....
There is a lack of computing platforms to collect and analyze key data from traffic videos in an automatic and speedy way. Computer vision can be used in combination with parallel distributed systems to provide city authorities tools for automatic and fast processing of stored videos to determine the most significant driving patterns that cause tra...
The dramatic urban population growth worldwide brings several problems to cities. Urban centers must be prepared to face critical everyday situations and sudden risk events. Information systems play a fundamental role in achieving resilience in cities, allowing quick and effective responses to threatening events. This paper presents a software arch...
Every day the number of traffic cameras in cities rapidly increases and huge amounts of video data are generated. Parallel processing infrastructure, such as Hadoop, and programming models, such as MapReduce, are being used to promptly process that amount of data. The common approach for video processing by using Hadoop MapReduce is to process an enti...
In next-generation cloud computing clusters, performance of data-intensive applications will be limited, among other factors, by disk data transfer rates. In order to mitigate performance impacts, cloud systems offering hierarchical storage architectures are becoming commonplace. The Hadoop Distributed File System (HDFS) offers a collection of storage policie...
We present a detailed examination of time agreement characteristics for nodes within extreme-scale parallel computers. Using a software tool we introduce in this paper, we quantify attributes of clock skew among nodes in three representative high-performance computers sited at three national laboratories. Our measurements detail the statistical pro...
Scientific workloads running on current extreme-scale systems routinely generate tremendous volumes of data for postprocessing. This data movement has become a serious issue due to its energy cost and the fact that I/O bandwidths have not kept pace with data generation rates. In situ analytics is an increasingly popular alternative in which post-si...
Next-generation applications increasingly rely on in situ analytics to guide computation, reduce the amount of I/O performed, and perform other important tasks. Scheduling where and when to run analytics is challenging, however. This paper quantifies the costs and benefits of different approaches to scheduling applications and analytics on nodes in...
The move towards high-performance computing (HPC) applications comprised of coupled codes and the need to dramatically reduce data movement is leading to a reexamination of time-sharing vs. space-sharing in HPC systems. In this paper, we discuss and begin to quantify the performance impact of a move away from strict space-sharing of nodes for HPC a...
Ubiquitous computing is a concept that has gained great importance worldwide, with major projects being developed in this area. Ubiquitous systems have diverse applications in fields such as industry, education, and healthcare. They support healthcare professionals in improving care by providing efficient access to information essential for patient diagnosis, treatment, and follow-up. The current evolution of networks of d...