Category Archives: cloud

NGScloud: RNA-seq analysis of non-model species using cloud computing

Bioinformatics has just published online our recent work on cloud computing tools for Next-Generation Sequencing (NGS). The paper can be accessed here.

RNA-seq analysis usually requires large computing infrastructures. NGScloud is a bioinformatics system developed to analyze RNA-seq data using Amazon's cloud computing services, which provide access to ad hoc computing infrastructure scaled to the complexity of the experiment, so that cost and runtime can be optimized. The application provides a user-friendly front-end to operate Amazon's hardware resources and to control an RNA-seq analysis workflow oriented to non-model species. It incorporates the cluster concept, which allows common RNA-seq analysis programs to run in parallel on several virtual machines for faster analysis.
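
NGScloud's own provisioning code is not reproduced here, but the core idea of instantiating ad hoc infrastructure scaled to the experiment can be sketched with Amazon's EC2 API. A minimal sketch, assuming a hypothetical sizing rule, AMI and function names (these are illustrative, not NGScloud internals):

```python
import boto3  # AWS SDK for Python

def instance_type_for(num_samples):
    """Hypothetical sizing rule: more samples, more capable machines."""
    if num_samples <= 4:
        return "m5.large"
    if num_samples <= 16:
        return "r5.xlarge"
    return "r5.4xlarge"

def launch_cluster(num_samples, num_nodes, ami_id):
    """Launch num_nodes identical virtual machines for parallel RNA-seq runs."""
    ec2 = boto3.client("ec2", region_name="us-east-1")
    response = ec2.run_instances(
        ImageId=ami_id,  # an AMI with the RNA-seq toolchain preinstalled
        InstanceType=instance_type_for(num_samples),
        MinCount=num_nodes,
        MaxCount=num_nodes,
    )
    return [i["InstanceId"] for i in response["Instances"]]
```

Scaling the instance type and node count to the experiment is what keeps cost and runtime balanced: small experiments run on cheap machines, while large ones get memory-heavy nodes only for as long as they are needed.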

NGScloud is freely available at https://github.com/GGFHF/NGScloud/. A manual detailing installation and how-to-use instructions is available with the distribution.

This work is part of the PhD work from Fernando Mora-Márquez, which I’m currently co-advising.

J.L. Vázquez-Poletti

Modeling and Simulation of the Atmospheric Dust Dynamic: Fractional Calculus and Cloud Computing

The International Journal of Numerical Analysis and Modeling has just made available our latest work, which combines fractional calculus and cloud computing to address one of the key challenges in Martian research. It can be accessed here.

Dust aerosols have an important effect on solar radiation in the Martian atmosphere and on both surface and atmospheric heating rates, which are also basic drivers of atmospheric dynamics. Aerosols attenuate the solar radiation traversing the atmosphere, and this attenuation is modeled by the Lambert-Beer-Bouguer law, in which the aerosol optical thickness plays an important role. Through the Angstrom law, the aerosol optical thickness can be approximated, which allows the attenuation of solar radiation traversing the atmosphere to be modeled by a fractional diffusion equation. The analytical solution is available in the case of one space dimension. When we extend the fractional diffusion equation to two or more space variables, massive computations are needed to approximate the solutions numerically. In this case a suitable strategy is to use cloud computing to carry out the simulations. We present an introduction to cloud computing applied to the fractional diffusion equation in one dimension.
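
For reference, and with the caveat that the paper's exact formulation may differ, the laws mentioned above take the following standard textbook forms (the notation is assumed here, not taken from the paper):

```latex
% Lambert-Beer-Bouguer attenuation: I_0 is the incident irradiance,
% m the relative optical air mass, \tau the aerosol optical thickness.
\[ I = I_0 \, e^{-m\tau} \]

% Angstrom law: \beta is the turbidity coefficient and \alpha the
% Angstrom exponent; \tau decays as a power of the wavelength \lambda.
\[ \tau(\lambda) = \beta\,\lambda^{-\alpha} \]

% A one-dimensional time-fractional diffusion equation of order
% 0 < \gamma \le 1 (Caputo derivative), the kind that admits an
% analytical solution in one space dimension.
\[ \frac{\partial^{\gamma} u(x,t)}{\partial t^{\gamma}}
   = D\,\frac{\partial^{2} u(x,t)}{\partial x^{2}} \]
```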

J.L. Vázquez-Poletti

CloudMix: Generating Diverse and Reducible Workloads for Cloud Systems

Our latest contribution to the 10th IEEE International Conference on Cloud Computing (CLOUD 2017) is available online and can be accessed here.

The prosperity of cloud computing offers common infrastructures to a wide range of applications. Understanding these applications’ workload behaviors is a prerequisite for designing, managing, and optimizing cloud systems. Considering the heterogeneity and diversity of cloud workloads, and for the sake of fairness, cloud benchmarks must be able to accurately replicate their behaviors in cloud systems, including both the usage of cloud resources and the microarchitectural behaviors beyond the virtualization layer. Furthermore, workloads spanning long durations are usually required to achieve representativeness in evaluation. Hence the more challenging issue is to significantly reduce the evaluation duration while still preserving workload characteristics.

This paper presents our efforts towards generating cloud workloads of diverse behaviors and reducible durations. Our benchmark tool, CloudMix, employs a repository of reducible workload blocks (RWBs) as a high-level abstraction of workload behaviors, covering the usage of the two most important cloud resources (CPU and memory) and their paired microarchitectural operations. CloudMix further introduces an efficient methodology that combines RWBs to synthesize and replicate the diverse cloud workloads found in real-world traces. The effectiveness of CloudMix is demonstrated by generating a variety of reducible workloads according to a Google cluster trace and by applying these workloads to job-scheduling optimization on Hadoop YARN. The evaluation results show: (i) when the workload durations are reduced by 100 times, the replication errors of workload behaviors are smaller than 2.08%; (ii) when providing fast evaluations (workload durations reduced by 10 to 100 times) to recommend the optimal setting in YARN job scheduling, the performance degradation of the recommended setting is just 0.69% compared to that of the actual optimal setting.
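
The RWB internals are best read in the paper itself; still, the abstraction can be sketched. A minimal sketch in Python, assuming a simplified block that records only CPU and memory targets (the names and the linear duration scaling are illustrative assumptions, not CloudMix code):

```python
from dataclasses import dataclass

@dataclass
class RWB:
    """A reducible workload block: target resource usage over a duration.
    CloudMix's real RWBs also pair the usage with microarchitectural
    operations; this sketch keeps only the resource dimensions."""
    cpu_util: float    # target CPU utilization, 0.0-1.0
    mem_bytes: int     # target resident memory footprint
    duration_s: float  # how long the block runs

    def reduced(self, factor):
        """Shrink the duration while preserving the behavioral profile."""
        return RWB(self.cpu_util, self.mem_bytes, self.duration_s / factor)

def synthesize(trace_segments, factor=100.0):
    """Map trace segments (cpu, mem, seconds) to a reduced RWB sequence."""
    return [RWB(c, m, s).reduced(factor) for (c, m, s) in trace_segments]

# Replicate a three-segment trace at 1/100 of its original duration.
workload = synthesize([(0.8, 2 << 30, 600), (0.3, 1 << 30, 1200), (0.9, 4 << 30, 300)])
```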

J.L. Vázquez-Poletti

Performance study of a signal-extraction algorithm using different parallelisation strategies for the Cherenkov Telescope Array’s real-time-analysis software

Concurrency and Computation: Practice and Experience has just published our latest work on parallelisation strategies in the context of the Cherenkov Telescope Array project. This is the result of an ongoing collaboration with CIEMAT (Spain) and INAF (Italy), and it can be accessed here.

In this work, a signal-extraction algorithm belonging to the Cherenkov Telescope Array’s real-time-analysis pipeline has been parallelised using SSE, POSIX Threads and CUDA. Because of the observatory’s constraints, the online analysis has to be conducted on site, on hardware located at the telescopes, which compels a search for efficient computing solutions to handle the huge amount of measured data. This work is part of a series of studies that benchmark several algorithms of the real-time-analysis pipeline on different architectures to gain insight into the suitability and performance of each platform.
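
The paper's implementations are written against SSE intrinsics, POSIX Threads and CUDA; none of that is reproduced here. Purely as an illustration of the data-parallel structure of the problem, the following Python sketch splits per-pixel signal extraction across worker processes, with a simplified sliding-window extractor standing in for the actual algorithm:

```python
import numpy as np
from multiprocessing import Pool

def extract_pixel(waveform, window=6):
    """Simplified stand-in for signal extraction: return the largest sum
    over a sliding window of time samples for one camera pixel."""
    sums = np.convolve(waveform, np.ones(window), mode="valid")
    return float(sums.max())

def extract_camera(waveforms, workers=4):
    """Extract the signal of every camera pixel in parallel."""
    with Pool(workers) as pool:
        return pool.map(extract_pixel, list(waveforms))

if __name__ == "__main__":
    # A hypothetical camera: 2048 pixels, 40 time samples each.
    camera = np.random.poisson(4.0, size=(2048, 40)).astype(float)
    signals = extract_camera(camera)
```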

J.L. Vázquez-Poletti

Synopsis-Based Approximate Request Processing for Low Latency and Small Correctness Loss in Cloud Online Services

The International Journal of Parallel Programming has just made available online our latest work on approximate request processing in cloud online services. This is the result of our collaboration with the Institute of Computing Technology of the Chinese Academy of Sciences, and it can be accessed here.

Despite the importance of quick responsiveness to user requests in online services, such request processing is very resource-expensive when dealing with large-scale service datasets, and its cost often exceeds the service providers’ budget when services are deployed on a cloud, where resources are charged in monetary terms. Producing approximate results is a feasible solution to this problem: it trades result correctness (e.g. prediction or query accuracy) for response time reduction. However, existing techniques in this area either use parts of datasets or skip expensive computations to produce approximate results, thus incurring large losses in result correctness on a tight resource budget. In this paper, we propose Synopsis-based Approximate Request Processing (SARP), a framework that produces approximate results with small correctness losses even when using a small amount of resources. To achieve this, SARP conducts computations over synopses, which aggregate the statistical information of the entire service dataset at different approximation levels, based on two key ideas: (1) offline synopsis management, which generates and maintains a set of synopses representing the aggregated information of the dataset at different approximation levels; (2) online synopsis selection, which considers both the current resource allocation and the workload status so as to select the synopsis of maximal length that can be processed within the required response time. We demonstrate the effectiveness of our approach by testing recommendation services of e-commerce sites using a large, real-world dataset. Using prediction accuracy as the result correctness metric, the results demonstrate: (i) SARP achieves significant response time reductions with very small correctness losses compared to exact processing; (ii) for the same processing time, SARP shows a considerable reduction in correctness loss compared to existing approximation techniques.
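
The online selection step lends itself to a compact illustration. A minimal sketch, assuming a linear cost model and a dictionary-based synopsis catalogue (both are assumptions for illustration, not SARP's actual data structures):

```python
def select_synopsis(synopses, cores, deadline_ms):
    """Pick the synopsis of maximal length whose estimated processing time
    fits the required response time under the current resource allocation.
    The linear cost model (length * per-item cost / cores) is illustrative."""
    best = None
    for s in synopses:
        est_ms = s["length"] * s["cost_per_item_ms"] / cores
        if est_ms <= deadline_ms and (best is None or s["length"] > best["length"]):
            best = s
    return best

# Three precomputed approximation levels of one service dataset.
levels = [
    {"name": "coarse", "length": 10_000,    "cost_per_item_ms": 0.001},
    {"name": "medium", "length": 100_000,   "cost_per_item_ms": 0.001},
    {"name": "fine",   "length": 1_000_000, "cost_per_item_ms": 0.001},
]
print(select_synopsis(levels, cores=4, deadline_ms=100.0)["name"])  # "medium"
```

Longer synopses give better correctness, so choosing the longest one that still meets the deadline is what keeps the correctness loss as small as the latency requirement allows.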

J.L. Vázquez-Poletti

A Cloud for Clouds: Weather Research and Forecasting on a Public Cloud Infrastructure

Springer has finally made available our latest paper in collaboration with the Spanish State Meteorological Agency. It can be accessed here.

The Weather Research & Forecasting (WRF) Model is a high performance computing application used by many meteorological agencies worldwide. Its execution may benefit from the cloud computing paradigm, and from public cloud infrastructures in particular, but only if the parameters are chosen wisely. A cost-optimal infrastructure can be instantiated for a given deadline, and a performance-optimal infrastructure can be instantiated for a given budget. With this in mind, we provide the optimal parameters for executing WRF on a public cloud infrastructure such as Amazon Web Services.
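
The two optimization directions can be illustrated with a small enumeration over candidate setups. A minimal sketch; the candidate table and its numbers are made up for illustration, whereas the paper derives them from WRF benchmarks on real instances:

```python
# (instance type, nodes, estimated runtime in hours, price in $/hour/node)
CANDIDATES = [
    ("c4.large",   8, 6.0, 0.10),
    ("c4.xlarge",  8, 3.2, 0.20),
    ("c4.2xlarge", 8, 1.8, 0.40),
]

def cost(c):
    """Total cost of a setup: nodes * hours * price per node-hour."""
    _, nodes, hours, price = c
    return nodes * hours * price

def cheapest_within_deadline(deadline_h):
    """Cost-optimal infrastructure for a given deadline."""
    feasible = [c for c in CANDIDATES if c[2] <= deadline_h]
    return min(feasible, key=cost, default=None)

def fastest_within_budget(budget_usd):
    """Performance-optimal infrastructure for a given budget."""
    feasible = [c for c in CANDIDATES if cost(c) <= budget_usd]
    return min(feasible, key=lambda c: c[2], default=None)

print(cheapest_within_deadline(4.0))  # ("c4.xlarge", 8, 3.2, 0.20)
print(fastest_within_budget(5.0))     # ("c4.large", 8, 6.0, 0.10)
```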

J.L. Vázquez-Poletti

Special Issue on Cloud-based Simulations and Data Analysis

Scientific Programming (Hindawi, JCR:0.559) has just announced the call for papers for a Special Issue on cloud-based simulations and data analysis in which I’m participating as Guest Editor together with Dr. Fabrizio Messina (University of Catania, Catania, Italy) and Dr. Lars Braubach (Nordakademie, Elmshorn, Germany).

Cloud-based Simulations and Data Analysis

In many areas, including commercial as well as scientific fields, the generation and storage of large amounts of data have become essential. Manufacturing and engineering companies use cloud-based high-performance computing technologies and simulation techniques to model, simulate, and predict the behavior of complicated models, involving the preliminary analysis of existing data as well as the generation of data during the simulations. Given such large amounts of data, the question arises: how can they be processed efficiently? Cloud computing holds the promise of elastic computational resources that adapt to concrete application needs, and it is thus a promising base technology for such processing techniques. However, cloud computing itself is not the complete solution: in order to exploit its underlying power, novel algorithms and techniques have to be conceived.

In this special issue we invite original contributions providing novel ideas towards simulation and data processing in the context of cloud computing approaches. The aim of this special issue is to assemble visions, ideas, experiences, and research achievements in these areas.

Potential topics include, but are not limited to:

  • Techniques for cloud-based simulations
  • Computational Intelligence for cloud-based simulations
  • Service composition for cloud-based simulations
  • Computational Intelligence for data analysis
  • Software architectures for cloud-based simulations
  • Cloud-based data mining
  • Big data analytics for predictive modeling

Authors can submit their manuscripts via the Manuscript Tracking System at http://mts.hindawi.com/submit/journals/sp/csda/.

Manuscript Due: Friday, 29 April 2016
First Round of Reviews: Friday, 22 July 2016
Publication Date: Friday, 16 September 2016

J.L. Vázquez-Poletti

Special Issue on High Performance Computing for Big Data

I’m very happy to announce that I’m serving as Guest Editor for the journal Computers for a Special Issue on “High Performance Computing for Big Data”. The deadline for submissions is March 31st, 2016.

Big Data is right now one of the hottest topics in computing research. This is because of:

  • the numerous challenges, which include (but are not limited to) the capture, search, storage, sharing, transfer, representation and privacy of the data;
  • and the wide spectrum of areas covered, which range from Bioinformatics to Space Science and are research challenges by themselves.

New technologies and algorithms have emerged from Big Data to efficiently manage and process great quantities of data within reasonable elapsed times. However, there are computing barriers that cannot be crossed without the proper resources.

The many ways that High Performance Computing can be delivered for facing Big Data challenges offer a wide spectrum of research opportunities. From FPGAs to cloud computing, technologies and algorithms can be brought to a whole different level and foster incredible insights from massive information repositories.

The papers accepted for publication in this Special Issue cover both fundamental issues and new concepts related to the application of High Performance Computing to the Big Data area.

J.L. Vázquez-Poletti

A multi-dimensional job scheduling

Since March, Future Generation Computer Systems has made available online our paper entitled “A multi-dimensional job scheduling”. This work is the result of a collaboration with the research group led by Prof. Lucio Grandinetti (University of Calabria, Italy), and it can be accessed here.

With the advent of new computing technologies, such as cloud computing and contemporary parallel processing systems, the building blocks of computing systems have become multi-dimensional. Traditional scheduling systems based on single-resource optimization, such as processors, fail to provide near-optimal solutions. The efficient use of new computing systems depends on the efficient use of several resource dimensions, so scheduling systems have to make full use of all resources. In this paper, we address the problem of multi-resource scheduling via multi-capacity bin-packing. We propose the application of multi-capacity-aware resource scheduling at the host-selection and queuing-mechanism layers of a scheduling system. The experimental results demonstrate scheduling performance improvements in terms of wait-time and slowdown metrics.
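
The host-selection idea can be sketched compactly. A minimal sketch of multi-capacity-aware selection using a dot-product alignment heuristic, in the spirit of multi-capacity bin-packing but not claimed to be the paper's exact algorithm:

```python
def fits(free, req):
    """True if the job's demand fits in every resource dimension."""
    return all(f >= r for f, r in zip(free, req))

def select_host(hosts, req):
    """Among feasible hosts, prefer the one whose free-capacity vector is
    most aligned with the job's demand vector (largest dot product), so
    that jobs with complementary demands pack together."""
    feasible = [h for h in hosts if fits(h["free"], req)]
    if not feasible:
        return None
    return max(feasible, key=lambda h: sum(f * r for f, r in zip(h["free"], req)))

# Free (cpu_cores, mem_gb) per host; the job needs 4 cores and 8 GB.
hosts = [{"id": 1, "free": (16, 8)},
         {"id": 2, "free": (4, 64)},
         {"id": 3, "free": (8, 16)}]
print(select_host(hosts, (4, 8))["id"])  # host 2: memory-rich, matching the demand
```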

J.L. Vázquez-Poletti

Special Issue on HPC for Advanced Modeling and Simulation of Materials

At the beginning of 2016, the journal Computer Physics Communications (Elsevier, JCR:3.122, Q1) will close a Special Issue in which I’m very honored to serve as Guest Editor. You may be interested in the following Call for Papers.

Supercomputers are rapidly evolving thanks to advances in architecture and semiconductor technology. High performance computing has been applied to accelerate the advanced modeling and simulation of materials. The current trend will pose challenges in parallelism because of the increased number of processing units, accelerators, complex hierarchical memory systems, interconnection networks, storage, and uncertainties in programming models. Interdisciplinary collaboration is becoming more and more important in high performance computing. Realistic material modeling and simulation need to combine material modeling methods, mathematical models, parallel algorithms and tools to exploit supercomputers effectively.

Topics

Topics include but are not limited to:

  • Numerical methods and parallel algorithms for the advanced modeling and simulation of materials
  • Use of hardware accelerators (MIC, GPUs, FPGA) and heterogeneous hardware in computational material science
  • Mathematical modeling and high performance computing tools in large-scale material simulation
  • Programming model for material algorithm scalability and resilience
  • Visualization of material data
  • Multi-scale modeling and simulation in materials science
  • Performance modeling and auto-tuning methods in material simulation
  • Big data of materials science
  • Accelerating dissipative particle dynamics with hardware accelerators
  • Large-scale material modeling based on the new features of message passing programming model

Submission Format and Guideline

All submitted papers must be clearly written in excellent English and contain only original work, which has not been published in, and is not currently under review for, any other journal or conference. Papers must not exceed 25 pages (one column, at least 11pt fonts) including figures, tables, and references. A detailed submission guideline is available as “Guide to Authors” at: http://www.journals.elsevier.com/computer-physics-communications/

All manuscripts and any supplementary material should be submitted through the Elsevier Editorial System (EES). The authors must select “SI: CPC_HPCME 2015” when they reach the “Article Type” step in the submission process. The EES website is located at: http://ees.elsevier.com/cpc/

All papers will be peer-reviewed by three independent reviewers. Requests for additional information should be addressed to the guest editors.

Editor in Chief

N. Stanley Scott

Guest Editors

Fei Gao
Jose Luis Vazquez-Poletti
Jianjiang Li
Jue Wang

Important dates

Submission deadline: 2016.1.20
Acceptance deadline: 2016.4.20
Publication: 2016.6.20

J.L. Vázquez-Poletti