Terms such as high performance and high availability have usually been the domain of big corporations and institutions; however, something has changed in recent years, as a real revolution is emerging from university classrooms. This week HPCwire published an article describing some of the promising work being carried out by my students.
This year I have been honored to advise projects that address three critical areas making media headlines nowadays: communications security, emergency medical services, and P2P digital currencies.
Access the article here. And if you are curious about the rest of the projects, access the complete list here.
Last week, the School of Computing at Queen’s University (Canada) published our latest work in the form of a technical report. This technical report, the result of a collaboration with Prof. Patrick Martin’s research group, is entitled “Estimating Resource Costs of Executing Data-Intensive Workloads in Public Clouds” and can be accessed here.
The promise of “infinite” resources given by the cloud computing paradigm has led to recent interest in exploiting clouds for large-scale data-intensive computing. In this technical report, we present an analytical model to estimate the resource costs for executing data-intensive workloads in a public cloud. The cost model quantifies the cost-effectiveness of a resource configuration for a given workload with consumer performance requirements expressed as Service Level Agreements (SLAs), and is a key component of a larger framework for resource provisioning in clouds. We instantiate the cost model for the Amazon cloud, and experimentally evaluate the impact of key factors on the accuracy of the model.
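For intuition only, a back-of-the-envelope sketch of this kind of estimate might look as follows; the prices, throughput, and SLA deadline below are made-up parameters, and the formula is far simpler than the model in the report:

```python
import math

# Hypothetical sketch of an SLA-aware cost estimate for a data-intensive
# workload on a public cloud (illustrative only, not the report's model).

def estimate_cost(n_instances, hourly_price, work_units,
                  units_per_instance_hour, sla_deadline_hours,
                  penalty_per_hour=0.0):
    """Return (makespan_hours, total_cost) for one resource configuration."""
    # Time to process the workload, assuming perfect load balancing.
    makespan = work_units / (n_instances * units_per_instance_hour)
    # Public clouds typically bill by whole instance-hours.
    cost = n_instances * hourly_price * math.ceil(makespan)
    # Add a penalty when the SLA deadline is missed.
    if makespan > sla_deadline_hours:
        cost += penalty_per_hour * (makespan - sla_deadline_hours)
    return makespan, cost

# Pick the cheapest configuration that still meets the SLA.
results = [(n,) + estimate_cost(n, hourly_price=0.45, work_units=10_000,
                                units_per_instance_hour=120,
                                sla_deadline_hours=24)
           for n in range(1, 33)]
feasible = [(cost, n) for n, makespan, cost in results if makespan <= 24]
if feasible:
    cost, n = min(feasible)
    print(f"cheapest SLA-compliant configuration: {n} instances, ${cost:.2f}")
```

A real model would also need to account for factors such as data transfer and storage costs, which this toy version omits.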
The IEEE Xplore Digital Library has made available our paper entitled “Applications of neural-based spot market prediction for cloud computing”, which was presented at the IEEE 7th International Conference on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS 2013) last September. It can be accessed here.
Advances in service-oriented architectures (SOA), virtualization, high-speed networks, and cloud computing have resulted in attractive pay-as-you-go services. Job scheduling on these systems results in commodity bidding for computing time. This bidding is institutionalized by Amazon for its Elastic Compute Cloud (EC2) environment, and bidding methods exist for other cloud computing vendors as well as for multi-cloud and cluster computing brokers such as SpotCloud. Commodity bidding for computing has resulted in complex spot price models with ad-hoc strategies to generate demand for excess capacity. In this paper we discuss vendors who provide spot pricing and bidding, and we present a neural-network-based predictive model for future spot prices, giving users high confidence in future prices and aiding bidding for commodity computing.
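As a rough illustration of the idea (not the model from the paper), one could train a small feed-forward network to predict the next hourly spot price from a sliding window of past prices; the synthetic price series, window size, and network shape below are all made up:

```python
# Minimal illustration of neural-network spot price prediction using a
# sliding window over a synthetic price history (not the paper's model).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
# Synthetic hourly spot prices: a base price with a daily cycle plus noise.
hours = np.arange(2000)
prices = (0.05 + 0.01 * np.sin(2 * np.pi * hours / 24)
          + rng.normal(0, 0.002, hours.size))

# Build (window of past prices) -> (next price) training pairs.
W = 24
X = np.array([prices[i:i + W] for i in range(len(prices) - W)])
y = prices[W:]

split = int(0.8 * len(X))
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                     random_state=0)
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"test MAE: ${mae:.4f}/hour")
# A bidder might then bid slightly above the predicted next-hour price
# to reduce the risk of losing an instance.
```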
This is another work resulting from a collaboration with Prof. Lucio Grandinetti’s research group at the University of Calabria, Italy.
Last week the 4CaaSt project faced its final year review at the European Commission.
This project aims to create an advanced PaaS cloud platform that supports the optimized and elastic hosting of Internet-scale multi-tier applications. 4CaaSt embeds all the necessary features to ease the programming of rich applications and to enable a true business ecosystem where applications coming from different providers can be tailored to different users, mashed up, and traded together.
The result? We passed our final review!
It has been three years of hard work, and we have held general assemblies almost all over Europe.
We have fostered innumerable interesting collaborations.
And of course, we also established many new friendships!
Bye bye, 4CaaSt project… and bye bye, for now, to all of you whom I have been honored to call “my colleagues” for the past three years.
“Painful though parting be, I bow to you as I see you off to distant clouds” (Emperor Saga)
Today a new book on Cloud Computing and Big Data has been published by IOS Press. I had the pleasure and honor of teaming up with Dr. Charlie Catlett, Dr. Wolfgang Gentzsch, Prof. Lucio Grandinetti and Prof. Gerhard R. Joubert as its editors.
Cloud computing offers many advantages to researchers and engineers who need access to high performance computing facilities for solving particular compute-intensive and/or large-scale problems, but whose overall high performance computing (HPC) needs do not justify the acquisition and operation of dedicated HPC facilities. There are, however, a number of fundamental problems which must be addressed, such as the limitations imposed by accessibility, security and communication speed, before these advantages can be exploited to the full.
This book presents 14 contributions selected from the International Research Workshop on Advanced High Performance Computing Systems, held in Cetraro, Italy, in June 2012. The papers are arranged in three chapters. Chapter 1 includes five papers on cloud infrastructures, while Chapter 2 discusses cloud applications.
The third chapter in the book deals with big data, which is nothing new – large scientific organizations have been collecting large amounts of data for decades – but what is new is that the focus has now broadened to include sectors such as business analytics, financial analyses, Internet service providers, oil and gas, medicine, automotive and a host of others.
This book will be of interest to all those whose work involves them with aspects of cloud computing and big data applications.
- Title: Cloud Computing and Big Data
- Editors: Catlett, C., Gentzsch, W., Grandinetti, L., Joubert, G.R., Vazquez-Poletti, J.L.
- Pub. date: October 2013
- Pages: 264
- Volume: 23 of Advances in Parallel Computing
- ISBN: 978-1-61499-321-6
J.L. Vázquez-Poletti
This month the Journal of Software: Practice and Experience has published online our paper entitled “Autonomic resource contention-aware scheduling”. It can be accessed here.
The complexity of computing systems introduces issues and challenges such as poor performance and high energy consumption. In this paper, we first define and model a resource contention metric for high performance computing workloads, used as a performance metric by scheduling algorithms and systems at the highest level of the resource management stack in order to address the main issues in computing systems. Second, we propose a novel autonomic resource contention-aware scheduling approach architected across the various layers of the resource management stack. We establish the relationship between distributed resource management layers in order to optimize the resource contention metric. The simulation results confirm the effectiveness of our approach.
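To make the idea concrete, here is a toy sketch of contention-aware placement under an invented contention metric (the paper’s actual metric and its multi-layer autonomic architecture are, of course, more elaborate): each host exposes a contention score, and the scheduler places each job on the host where it adds the least contention.

```python
# Toy sketch of contention-aware job placement. The contention metric here
# (max of CPU and I/O saturation) is invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_load: float = 0.0      # fraction of cores busy
    io_load: float = 0.0       # fraction of I/O bandwidth busy
    jobs: list = field(default_factory=list)

    def contention(self, extra_cpu=0.0, extra_io=0.0):
        # Simple proxy: contention grows as any shared resource saturates.
        return max(self.cpu_load + extra_cpu, self.io_load + extra_io)

def schedule(job, hosts):
    """Place `job` on the host where it adds the least contention."""
    best = min(hosts, key=lambda h: h.contention(job["cpu"], job["io"]))
    best.cpu_load += job["cpu"]
    best.io_load += job["io"]
    best.jobs.append(job["name"])
    return best

hosts = [Host("h1"), Host("h2"), Host("h3")]
for job in [{"name": "sim", "cpu": 0.6, "io": 0.1},
            {"name": "etl", "cpu": 0.2, "io": 0.7},
            {"name": "viz", "cpu": 0.3, "io": 0.2}]:
    h = schedule(job, hosts)
    print(f"{job['name']} -> {h.name} (contention {h.contention():.2f})")
```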
This work is the result of a collaboration with Prof. Lucio Grandinetti’s research group at the University of Calabria, Italy.
The journal Concurrency and Computation: Practice and Experience has published online our work entitled “A performance/cost model for a CUDA drug discovery application on physical and public cloud infrastructures”, while it awaits inclusion in a special issue on distributed, parallel, and GPU-accelerated approaches to Computational Biology.
Virtual Screening (VS) methods can considerably aid Drug Discovery research by predicting how ligands interact with drug targets. BINDSURF is an efficient and fast blind VS methodology for the determination of protein binding sites depending on the ligand, which uses the massively parallel architecture of GPUs for fast, unbiased pre-screening of large ligand databases. In this contribution, we provide a performance/cost model for the execution of this application on both a physical and a public cloud infrastructure. With our model it is possible to determine the best infrastructure, in terms of execution time and cost, for any given problem to be solved by BINDSURF. Conclusions obtained from our study can be extrapolated to other GPU-based VS methodologies.
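To give a flavour of what such a performance/cost comparison involves (with made-up numbers, not BINDSURF’s actual throughput or real instance prices), consider a run over a large ligand database on n GPUs, billed per instance-hour in the cloud versus an amortized per-GPU-hour cost in-house:

```python
# Illustrative comparison of a GPU virtual-screening run on an in-house
# cluster vs. a public cloud. All parameters are hypothetical.
import math

def run_time_hours(ligands, ligands_per_gpu_hour, n_gpus):
    return ligands / (ligands_per_gpu_hour * n_gpus)

def cloud_cost(hours, n_instances, price_per_instance_hour):
    # Clouds typically bill by whole instance-hours.
    return n_instances * price_per_instance_hour * math.ceil(hours)

def cluster_cost(hours, amortized_cost_per_gpu_hour, n_gpus):
    # Hardware amortization plus power, spread over the run.
    return n_gpus * amortized_cost_per_gpu_hour * hours

ligands = 1_000_000          # database size (hypothetical)
throughput = 5_000           # ligands per GPU-hour (hypothetical)

for n in (4, 16, 64):
    t = run_time_hours(ligands, throughput, n)
    print(f"{n:3d} GPUs: {t:7.1f} h  "
          f"cloud ${cloud_cost(t, n, 2.10):8.2f}  "
          f"cluster ${cluster_cost(t, 0.90, n):8.2f}")
```

Under assumptions like these the in-house cluster looks cheaper per run, but the picture can flip once cluster idle time, maintenance, and the cloud’s ability to scale out on demand are taken into account, which is exactly the kind of trade-off such a model has to capture.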
This work is the result of a collaboration with a multidisciplinary research group from the Catholic University of Murcia (Spain). Also, this is the first paper published by my PhD student Richard M. Wallace with our research group. Congratulations!
I had the honor of being invited by the GridKa School organization to give a plenary talk at this year’s edition, which took place from 26th to 30th August at the Karlsruhe Institute of Technology (KIT).
My talk, entitled “Cloud Computing: Expanding Humanity’s Limits to Planet Mars”, focused on the use of cloud computing for the exploration of Planet Mars.
Like other tools that Humanity has used to expand its limits, cloud computing was born and has evolved in consonance with the different challenges to which it has been applied.
Due to its seamless provision of resources, dynamism and elasticity, this paradigm has been brought into the spotlight by the space science community, and in particular by that devoted to the exploration of Planet Mars. This is the case of space agencies in need of great amounts of on-demand computing resources while keeping an eye on their budgets.
The Red Planet represents the next limit to be reached by Humanity, attracting the attention of many countries as a destination for the next generation of manned spaceflights. However, there is still much research to do on Planet Mars and many computational needs to fulfill.
My talk reviewed NASA’s cloud computing approach and then focused on the Mars MetNet Mission, with which our research group is actively collaborating. This Mission is being put together by Finland, Russia and Spain, and aims to deploy several tens of weather stations on the Martian surface. Atmospheric science research is a crucial area in the exploration of the Red Planet and represents a great opportunity for harnessing and improving current computing tools, and for establishing interesting collaborations between countries.
The feedback I received was great, and some collaboration opportunities have also arisen, making this trip another successful one.
I had the honor of spending the last two weeks of July in Mendoza (Argentina), invited by the Universidad Nacional de Cuyo on the occasion of the VI Latin American Symposium on High Performance Computing (HPCLatAm 2013).
My job in Mendoza was twofold:
- The second week I gave a keynote talk on how, since the end of 2009, we have been providing HPC-in-the-cloud solutions for critical applications pertaining to the exploration of Planet Mars. I also presented recent results from the latest application we are working on.
The event was a total success, and collaborations with some Argentinian academic institutions are on the way!
The Journal of Systems and Software will publish our work entitled “Solidifying the foundations of the cloud for the next generation Software Engineering”. Right now it is in the “In Press” state, but it can already be accessed here.
Infrastructure clouds are expected to play an important role in next-generation Software Engineering, but currently there are some drawbacks. These clouds are too infrastructure-oriented and lack advanced service-oriented capabilities, such as service elasticity, quality of service, or admission control, needed to perform holistic management of a whole application. The deployment of complex multi-tier applications on top of IaaS infrastructures requires providing the IaaS platforms with an extra service layer that offers advanced service management functionality.
In the present contribution we introduce the benefits of a cloud-based service-oriented architecture, which raises a set of research and scientific challenges. Then, current efforts to address these challenges are described, and finally some conclusions are drawn on the work that still needs to be done at the IaaS level.
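To give a feel for the kind of functionality such an extra service layer must provide, here is a toy threshold-based elasticity rule; the policy, thresholds, and instance limits are all hypothetical:

```python
# Toy threshold-based elasticity rule of the kind a service layer on top of
# an IaaS cloud might implement (hypothetical policy and parameters).

def desired_instances(current, avg_cpu, min_n=1, max_n=20,
                      scale_out_at=0.75, scale_in_at=0.30):
    """Return the new instance count for one application tier."""
    if avg_cpu > scale_out_at:          # tier is saturated: add capacity
        return min(current + 1, max_n)
    if avg_cpu < scale_in_at:           # tier is idle: release capacity
        return max(current - 1, min_n)
    return current                      # within the band: no change

n = 3
for cpu in (0.82, 0.91, 0.65, 0.22, 0.18):
    n = desired_instances(n, cpu)
    print(f"avg CPU {cpu:.2f} -> {n} instances")
```

A production-grade layer would add hysteresis and cooldown periods to avoid oscillation, and would combine such rules with QoS monitoring and admission control across the whole multi-tier application.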