Spot Price prediction for Cloud Computing using Neural Networks

The International Journal of Computing has made available our paper entitled “Spot Price prediction for Cloud Computing using Neural Networks”. This work is the result of a collaboration with the research groups led by Prof. Lucio Grandinetti (University of Calabria, Italy) and Associate Prof. Volodymyr O. Turchenko (Ternopil National Economic University, Ukraine).


Advances in service-oriented architectures, virtualization, high-speed networks, and cloud computing have resulted in attractive pay-as-you-go services. Job scheduling on such systems results in commodity bidding for computing time. Amazon institutionalizes this bidding for its Elastic Compute Cloud (EC2) environment. Similar bidding methods exist for other cloud-computing vendors as well as multi-cloud and cluster computing brokers such as SpotCloud. Commodity bidding for computing has resulted in complex spot price models that have ad-hoc strategies to provide demand for excess capacity. In this paper we discuss vendors who provide spot pricing and bidding, and present predictive models for short-term and medium-term spot prices based on neural networks, giving users high confidence in future prices and aiding bidding on commodity computing.
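To make the approach concrete, here is a minimal sketch (not the paper's actual model) of a feed-forward network that maps a sliding window of recent spot prices to the next price. The window size, network shape, and synthetic price series are illustrative assumptions standing in for real EC2 spot-price history.

```python
# Minimal sketch, not the paper's model: predict the next spot price from a
# sliding window of past prices with a small feed-forward network.
import numpy as np
from sklearn.neural_network import MLPRegressor

WINDOW = 12  # assumption: use the last 12 observed prices as input features

# Synthetic hourly spot-price series standing in for real EC2 spot data.
rng = np.random.default_rng(0)
prices = 0.05 + 0.01 * np.sin(np.arange(500) / 24.0) + rng.normal(0, 0.002, 500)

# Turn the series into (window -> next price) training pairs.
X = np.array([prices[i:i + WINDOW] for i in range(len(prices) - WINDOW)])
y = prices[WINDOW:]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:-48], y[:-48])  # hold out the last 48 hours

next_price = model.predict(prices[-WINDOW:].reshape(1, -1))[0]
print(f"Predicted next spot price: ${next_price:.4f}/hour")
```

A longer forecasting horizon (the medium-term case) would typically feed the network more history and predict several steps ahead rather than a single next price.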

 

J.L. Vázquez-Poletti

Chapter in the Handbook of Research on Architectural Trends in Service-Driven Computing

At the end of June, the Handbook of Research on Architectural Trends in Service-Driven Computing was released by IGI Global. This publication, divided into two volumes, explores, delineates, and discusses recent advances in architectural methodologies and development techniques in service-driven computing. The handbook is an inclusive reference source for organizations, researchers, students, enterprise and integration architects, practitioners, software developers, and software engineering professionals engaged in the research, development, and integration of the next generation of computing.


We contributed Chapter 28 of this publication, entitled “Admission Control in the Cloud: Algorithms for SLA-Based Service Model”.

Cloud Computing is a paradigm that allows the flexible and on-demand provisioning of computing resources. For this reason, many institutions have moved their systems to the Cloud, and in particular to public infrastructures. Unfortunately, an increase in the demand for Cloud services results in resource shortages affecting both providers and consumers. With this factor in mind, Cloud service providers need Admission Control algorithms in order to make sound business decisions about which requests to fulfill. At the same time, Cloud providers want to maximize the net income derived from provisioning the accepted service requests and to minimize the impact of un-provisioned resources. This chapter introduces and compares Admission Control algorithms and proposes a service model that allows the definition of Service Level Agreements (SLAs).
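As a rough illustration of the kind of decision an Admission Control algorithm makes (not the chapter's actual algorithms), the sketch below accepts an SLA-backed request only if spare capacity covers it and the expected revenue exceeds the provisioning cost plus the expected SLA penalty. All field names and figures are assumptions.

```python
# Minimal sketch, not the chapter's algorithms: a greedy admission-control check.
from dataclasses import dataclass

@dataclass
class Request:
    cpus: int               # resources demanded
    hours: float            # requested duration
    price_per_hour: float   # what the consumer pays
    sla_penalty: float      # payout if the SLA is violated
    violation_risk: float   # provider's estimate of violation probability

def admit(req: Request, free_cpus: int, cost_per_cpu_hour: float) -> bool:
    if req.cpus > free_cpus:          # not enough spare capacity
        return False
    revenue = req.price_per_hour * req.hours
    cost = req.cpus * req.hours * cost_per_cpu_hour
    expected_penalty = req.violation_risk * req.sla_penalty
    # Admit only if the expected net income is positive.
    return revenue - cost - expected_penalty > 0

# Example: 4 CPUs for 10 hours at $0.20/hour, with a 5% risk of a $5 penalty.
print(admit(Request(4, 10, 0.20, 5.0, 0.05), free_cpus=16, cost_per_cpu_hour=0.03))
```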

Title: Handbook of Research on Architectural Trends in Service-Driven Computing
Editors: Raja Ramanathan and Kirtana Raja
Pub. date: June 2014
Pages: 759
ISBN13: 9781466661783
J.L. Vázquez-Poletti

Regulated Condition-Event Matrices for Cloud Environments

Scalable Computing: Practice and Experience has just published our recent paper entitled “Regulated Condition-Event Matrices for Cloud Environments”. This work is the result of a collaboration with Prof. Patrick Martin (Queen’s University, Canada) and presents the core of the PhD thesis of my student Richard M. Wallace. The paper can be accessed here.


Distributed event-based systems (DEBS) are networks of computing devices. These systems have been successfully implemented by commercial vendors. Cloud applications depend on message passing and inter-connectivity methods for exchanging data and performing inter-process communication. Both DEBS and Clouds need time-coordinated methods of control that do not depend on a single time domain. While DEBS have specific implementation languages for complex events, Cloud systems do not. Neither Clouds nor DEBS have yet presented an explicit separation of temporally based event processing from computation. Using a regulated, isomorphic, temporal architecture (RITA), a specific language and a separation of temporal event processing from computation are achieved. RITA gives developers a functional programming style with familiar language constructs for integrating with existing processing code, without forcing them to work across multiple coding paradigms that require extensive “glue code” to work together. This paper introduces RITA as a guarded condition-event system that explicitly separates event processing from computation, with constructs allowing the integration of time-aware events for the multiple time domains found in Cloud or existing distributed computing systems.
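To illustrate the general idea of keeping temporal guards separate from computation (a generic sketch, not RITA's actual constructs or syntax), the example below stores condition-event rules as pairs of a guard over time-stamped events and a plain handler function. Time domains, guards, and the sample event are assumptions.

```python
# Generic sketch of a guarded condition-event table; not RITA's actual API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    name: str
    domain: str       # which clock/time domain stamped the event
    timestamp: float  # seconds in that domain
    payload: dict

# Each rule pairs a condition guard with the computation it triggers, so
# handlers remain ordinary functions with no event-processing logic inside.
Rule = tuple[Callable[[Event], bool], Callable[[Event], None]]

rules: list[Rule] = [
    (lambda e: e.domain == "sensor" and e.name == "temp" and e.payload["value"] > 30.0,
     lambda e: print(f"[{e.domain} t={e.timestamp}] high temperature: {e.payload['value']}")),
]

def dispatch(event: Event) -> None:
    # Guards are evaluated first; only matching rows reach the computation.
    for condition, computation in rules:
        if condition(event):
            computation(event)

dispatch(Event("temp", "sensor", 1042.5, {"value": 31.2}))
```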

 

J.L. Vázquez-Poletti

Invited talk at HPC2014 (Cetraro, Italy)

From July 7th to 11th Cetraro (Italy) will host again its famous International Advanced Workshop on High Performance Computing. Its main aim is to present and debate advanced topics, open questions, future developments, and challenging applications related to advanced high-performance distributed computing and data systems, encompassing implementations ranging from traditional clusters to warehouse-scale data centers, and with architectures including hybrid, multicore, distributed, and cloud models.


This year’s motto is “from Clouds and Big Data to Exascale and Beyond”, which is a statement of intent in itself.

For the second time, I’m very honored to attend as an invited speaker. This year I’ll give a talk entitled “Clouds for Meteorology: Two Case Studies”.

Clouds for Meteorology: Two Case Studies

Meteorology is among the most promising areas that can benefit from cloud computing, due to its intersection with critical aspects of society. Executing meteorological applications involves HPC and HTC challenges, but also economic ones. My talk will introduce two case studies with different backgrounds and motivations that share a similar cloud methodology: the first concerns weather forecasting in the context of the exploration of planet Mars; the second deals with processing data from weather sensor networks in the context of an agricultural improvement plan in Argentina.

I’ll of course take advantage of this trip to meet again with many colleagues from the previous edition of HPC, in order to continue and expand current collaborations, which have been very productive over the past two years.

 

J.L. Vázquez-Poletti

Student Cloud Computing projects featured by HPCwire

Terms such as high performance and high availability are usually associated with big corporations and institutions; however, something has changed over the past few years, as a real revolution is emerging from university classrooms. This week HPCwire published an article describing some of the promising work being carried out by my students.


This year I have been honored to advise projects that address three critical areas making media headlines nowadays: communications security, emergency medical services, and P2P digital currencies.


Access the article here. And if you are curious about the rest of the projects, access the complete list here.

J.L. Vázquez-Poletti

Estimating Resource Costs of Executing Data-Intensive Workloads in Public Clouds

Last week, the School of Computing at Queen’s University (Canada) published our latest work in the form of a technical report. This technical report, the result of a collaboration with Prof. Patrick Martin’s research group, is entitled “Estimating Resource Costs of Executing Data-Intensive Workloads in Public Clouds” and can be accessed here.

Technical Report No. 2013-613

The promise of “infinite” resources given by the cloud computing paradigm has led to recent interest in exploiting clouds for large-scale data-intensive computing. In this technical report, we present an analytical model to estimate the resource costs for executing data-intensive workloads in a public cloud. The cost model quantifies the cost-effectiveness of a resource configuration for a given workload with consumer performance requirements expressed as Service Level Agreements (SLAs), and is a key component of a larger framework for resource provisioning in clouds. We instantiate the cost model for the Amazon cloud, and experimentally evaluate the impact of key factors on the accuracy of the model.
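As a toy illustration of what such a cost model quantifies (a back-of-the-envelope sketch, not the report's model), the function below combines compute, data-transfer, and SLA-penalty charges for a data-intensive workload. All prices, throughput figures, and the penalty clause are assumptions, not actual Amazon rates.

```python
# Back-of-the-envelope sketch, not the report's cost model.
import math

def estimate_cost(data_gb: float, gb_per_instance_hour: float, instances: int,
                  instance_price: float, transfer_price_per_gb: float,
                  deadline_hours: float, sla_penalty: float) -> float:
    # Hours needed for the fleet to process the whole data set.
    hours = math.ceil(data_gb / (gb_per_instance_hour * instances))
    compute = instances * hours * instance_price
    transfer = data_gb * transfer_price_per_gb
    # Flat penalty if the configuration misses the SLA deadline.
    penalty = sla_penalty if hours > deadline_hours else 0.0
    return compute + transfer + penalty

# Example: 2 TB processed at 50 GB per instance-hour on 8 instances at $0.35/hour.
print(estimate_cost(2000, 50, 8, 0.35, 0.09, deadline_hours=6, sla_penalty=25.0))
```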

J.L. Vázquez-Poletti

Applications of neural-based spot market prediction for cloud computing

The IEEE Xplore Digital Library has made available our paper entitled “Applications of neural-based spot market prediction for cloud computing”, which was presented at the IEEE 7th International Conference on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS 2013) last September. It can be accessed here.


Advances in service-oriented architectures (SOA), virtualization, high-speed networks, and cloud computing have resulted in attractive pay-as-you-go services. Job scheduling on these systems results in commodity bidding for computing time. This bidding is institutionalized by Amazon for its Elastic Compute Cloud (EC2) environment, and bidding methods exist for other cloud computing vendors as well as multi-cloud and cluster computing brokers such as SpotCloud. Commodity bidding for computing has resulted in complex spot price models that have ad-hoc strategies to provide demand for excess capacity. In this paper we discuss vendors who provide spot pricing and bidding and present a predictive model for future spot prices based on neural networks, giving users high confidence in future prices and aiding bidding on commodity computing.
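One way a consumer might act on such a prediction (a sketch under my own assumptions, not the paper's method) is to bid slightly above the forecast price and fall back to on-demand instances when the hedged bid would no longer save money:

```python
# Sketch of a bidding rule driven by a predicted spot price; not the paper's method.
from typing import Optional

def choose_bid(predicted_price: float, margin: float, on_demand_price: float) -> Optional[float]:
    bid = predicted_price * (1.0 + margin)  # e.g. bid 20% above the forecast
    # Only bid if it still undercuts the on-demand price; otherwise skip the spot market.
    return bid if bid < on_demand_price else None

print(choose_bid(predicted_price=0.052, margin=0.2, on_demand_price=0.10))
```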

This is another work resulting from a collaboration with Prof. Lucio Grandinetti‘s research group at the University of Calabria, Italy.

J.L. Vázquez-Poletti

CloudCatalyst Project Officially Launched

Lisbon, 21st October 2013 – CloudCatalyst was officially launched in Lisbon, Portugal. The kick-off meeting gathered the consortium partners, who are motivated to promote the potential of cloud computing throughout Europe.

The project’s main objectives are to support startups and SMEs moving into the cloud and to provide them with innovative tools. Communities in at least 7 EU countries will be supported to produce innovative and sustainable businesses through cloud computing offers that help boost the productivity, efficiency and competitiveness of the European economy.

The project is funded by the European Commission under the 7th Framework Programme and brings together 7 partners from 4 member states.

CloudCatalyst partners recognize, as also stated in the “ENTREPRENEURSHIP 2020 ACTION PLAN”[i] recently presented by the European Commission, that in order to bring Europe back to growth and higher levels of employment, more entrepreneurs are needed. From the CloudCatalyst partners’ point of view, the 2020 action plan for entrepreneurship is a unique chance to transform the challenges of Cloud Computing deployment into opportunities, fostering innovation and new business creation within the European entrepreneurial ecosystem.

The CloudCatalyst project will be led by Portugal Telecom, which has solid experience in the promotion and management of European funded projects. Portugal Telecom has also recently presented a very ambitious Cloud Computing roadmap focused on accelerating the development and deployment of Cloud Computing technologies and Internet services.

To promote entrepreneurship and job creation, this project will be aligned with the business and knowledge transfer strategy promoted by UPTEC, the University of Porto Science and Technology Park. UPTEC is the structure of the University of Porto dedicated to incubating start-ups and hosting Business Innovation Centres, supporting effective knowledge and technology transfer among academia, companies and markets. In the last 5 years, UPTEC has helped create 115 new start-ups and more than 1200 jobs. UPTEC has also attracted highly innovative companies to the city of Porto, creating a flourishing ecosystem of innovators and entrepreneurs.

The CloudCatalyst project also has a solid technical background, led by UCM, Universidad Complutense de Madrid, founders of the OpenNebula open-source project. OpenNebula, a success story in the exploitation of FP7 research results, has played an important role in driving and supporting the transition to cloud computing and thus accelerating the pace of innovation in Europe. With thousands of deployments worldwide, OpenNebula has a very wide user base that includes leading companies in banking, technology, telecom and hosting, as well as research and supercomputing centers.

As a central hub for the exchange of knowledge, best practices and guidelines in the European Cloud ecosystem, EuroCloud’s role in this project will be fundamental for implementing an efficient dissemination action plan; above all, EuroCloud will be responsible for exploiting the results and for guaranteeing the future sustainability of the developed support services.

Finally, Si.mobil will provide important contributions based on the experience gathered during its start:Cloud.si program, a competitive educational programme for entrepreneurs and start-up companies on developing and bringing new innovative cloud services to the market. The program ran in 2013 for the second consecutive year, with 48 teams participating over the two years and an astonishing 20 solutions developed and commercialized. Given the small scale of the Slovenian market, these numbers are very encouraging. In addition, Si.mobil has gathered wide experience while developing and implementing a cloud services brokerage platform and business model. Its approach was recognized as the preferred one within the Telecom Austria Group in 2012 and will be implemented by other members of the TAG group, with Si.mobil taking the lead in the cloud services business for the whole group.

 

CloudCatalyst contacts:

  • Dissemination Manager: Dalibor Baskovc – dbaskovc@eurocloud.org
  • Project Coordination: Andreia Jesus / Frederico Melo Santos – info@cloudcatalyst.eu
  • For further information please visit: www.cloudcatalyst.eu

 


[i] ENTREPRENEURSHIP 2020 ACTION PLAN – http://ec.europa.eu/enterprise/policies/sme/public-consultation/files/report-pub-cons-entr2020-ap_en.pdf

So long 4CaaSt!

Last week the 4CaaSt project faced its final year review at the European Commission.


This project aims to create an advanced PaaS Cloud platform that supports the optimized and elastic hosting of Internet-scale multi-tier applications. 4CaaSt embeds all the necessary features, easing the programming of rich applications and enabling the creation of a true business ecosystem where applications coming from different providers can be tailored to different users, mashed up, and traded together.

The result? We passed our final review!


It has been three years of hard work, and we have held general assemblies almost all over Europe.


We have fostered innumerable and interesting collaborations.


And of course, we also established many new friendships!


Bye bye 4CaaSt project… and bye bye, for now, to all of you whom I have been honored to call “my colleagues” for the past three years.


“Painful though parting be, I bow to you as I see you off to distant clouds” (Emperor Saga)

J.L. Vázquez-Poletti

New book on Cloud Computing and Big Data

Today a new book on Cloud Computing and Big Data has been published by IOS Press. I had the pleasure and honor of teaming up with Dr. Charlie Catlett, Dr. Wolfgang Gentzsch, Prof. Lucio Grandinetti and Prof. Gerhard R. Joubert to edit it.


Cloud computing offers many advantages to researchers and engineers who need access to high performance computing facilities for solving particular compute-intensive and/or large-scale problems, but whose overall high performance computing (HPC) needs do not justify the acquisition and operation of dedicated HPC facilities. There are, however, a number of fundamental problems which must be addressed, such as the limitations imposed by accessibility, security and communication speed, before these advantages can be exploited to the full.

This book presents 14 contributions selected from the International Research Workshop on Advanced High Performance Computing Systems, held in Cetraro, Italy, in June 2012. The papers are arranged in three chapters. Chapter 1 includes five papers on cloud infrastructures, while Chapter 2 discusses cloud applications.

The third chapter in the book deals with big data, which is nothing new – large scientific organizations have been collecting large amounts of data for decades – but what is new is that the focus has now broadened to include sectors such as business analytics, financial analyses, Internet service providers, oil and gas, medicine, automotive and a host of others.

This book will be of interest to all those whose work involves them with aspects of cloud computing and big data applications.

Detailed Information

Title: Cloud Computing and Big Data
Editors: Catlett, C., Gentzsch, W., Grandinetti, L., Joubert, G.R., Vazquez-Poletti, J.L.
Pub. date: October 2013
Pages: 264
Volume: 23 of Advances in Parallel Computing
ISBN: 978-1-61499-321-6
J.L. Vázquez-Poletti