Category Archives: IT Research Management

International Workshop on Clouds for Business and Business for Clouds (C4BB4C)

The International Workshop on Clouds for Business and Business for Clouds (C4BB4C) will be held at the 10th IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA2012).

Cloud Computing has acquired enough maturity to expand its field of application to business: not only do institutions use this paradigm in their production lines, but many also offer services through the cloud.

This workshop intends to bring together the efforts of service producers and consumers so that Cloud Computing can add value to the economy of any kind of institution. Technologies, policies and heuristics will be shared, including those from other areas that could benefit Cloud Computing.

The workshop also intends to focus on how services are delivered through the cloud, a popular strategic technology choice for businesses that provides a flexible, ubiquitous and consistent platform accessible from anywhere at any time.

The interface between software services and cloud computing provides a rich area for research and experience, giving a unique insight into how cloud-based services can be delivered in practice. We encourage submissions of research papers on work in the areas of Cloud Computing and Service Engineering, and especially welcome papers describing developments in cloud-enabled business process management and all related areas, such as deployment techniques, business models for cloud-based enterprises and experience reports from practitioners.

Extended versions of selected papers will be included in a special issue of Scalable Computing: Practice and Experience.

This workshop would not be possible without the following projects: MEDIANET (Comunidad de Madrid S2009/TIC-1468), HPCcloud (MICINN TIN2009-07146) and 4CaaSt (Gr. Agree. 258862).

J.L. Vázquez-Poletti

New Book on European Research Activities in Cloud Computing

What’s new in the European research and development area?

The research and development community involved in distributed computing is searching for viable solutions that will increase the adoption of cloud computing. This is the case for the collaborative work done by multi-national teams in the context of the FP7 programme of the European Commission.

Students, researchers and developers working in the field of distributed computing will find in this book a snapshot of the on-going activities in research and development of cloud computing undertaken at the European level. These activities are organized by the latest hot topics of cloud computing research, which include services, management, automation and adoption.

This book is the result of a collaboration with Prof. Dana Petcu from West University of Timisoara (Romania). Among its chapters, the reader will find one devoted to 4CaaSt and another to StratusLab, projects in which the DSA-Research group actively participates.

Detailed Information

Title: European Research Activities in Cloud Computing

Editors: Dana Petcu and José Luis Vázquez-Poletti

Publisher: Cambridge Scholars Publishing

Date of Publication: January 2012

ISBN-13: 978-1-4438-3507-7

ISBN-10: 1-4438-3507-2

J.L. Vázquez-Poletti

Key Research Challenges in Cloud Computing

In a recent keynote at the 3rd EU-Japan Symposium on Future Internet and New Generation Networks we presented our view about key research challenges in cloud computing. We briefly covered the state of the art and the open challenges in the main cloud layers that provide the tools and the infrastructure to develop and to run the applications in the Future Internet of Services. In this post we try to summarize the main points of the presentation.

Cloud Computing as an Enabler for the Internet of Services

The Internet of Services is a vision of the Future Internet where not only software applications are available as services on the Internet, but also the tools to develop the software and the platform (servers, storage and communication) to run it. In this scenario, SaaS represents the software applications available as services on the Internet, while PaaS and IaaS are the enablers of the Internet of Services, providing the tool services to develop applications and the infrastructure services to run them.

Research Challenges

Cloud Computing research addresses the challenges of meeting the requirements of next-generation private, public and hybrid cloud computing architectures, and the challenges of allowing applications and development platforms to take advantage of the benefits of cloud computing. We are at the beginning of the road; there are still many technology challenges to be researched and adoption barriers to be overcome. Fortunately, because cloud solution architectures include technology components from different fields, many research challenges in Cloud Computing have already been addressed to a certain degree by other research communities, mostly those working on virtualization, Grid and autonomic computing.

Here I do not try to give an exhaustive list of challenges, but to briefly describe those that I think should be addressed first to unleash the full potential of cloud computing. I have organized the challenges into the following six categories.

Platform Management. Challenges in delivering middleware capabilities for building, deploying, integrating and managing applications in a multi-tenant, elastic and scalable environment.

  • Scalability and multi-tenancy of application containers
  • Placement optimization algorithms of containers in resources

Cloud-enabled Applications. Challenges in building cloud-enabled applications and platforms to take advantage of the scalability, agility and reliability of the cloud.

  • Elastic and scalable applications and frameworks on very large-scale environments
  • Self-scaling, self-awareness, self-knowledge, and self-management capabilities of services
  • Novel applications of cloud computing
  • Power-efficient applications and platforms

Cloud Aggregation. Research challenges in the aggregation of resources from diverse cloud providers adding additional layers of service management.

  • Novel architectural models for aggregation of cloud providers
  • Brokering algorithms for high availability, performance, proximity, legal domains, price, or energy efficiency
  • Sharing of resources between cloud providers
  • Networking in the deployment of services across multiple cloud providers
  • SLA negotiation and management between cloud providers
  • Additional privacy, security and trust management layers atop providers
  • Support of context-aware applications
  • Automatic management of service elasticity
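
To make the brokering challenge above concrete, the sketch below ranks cloud providers by a weighted score combining price, availability and energy efficiency. All provider names, figures and weights are invented for illustration; a real broker would also account for proximity, legal domains and SLA constraints:

```python
# Toy cloud broker: rank providers by a weighted score.
# All provider names and figures below are invented for illustration.

def broker(providers, weights):
    """Return providers sorted best-first.

    Each provider dict has 'price' (per hour, lower is better),
    'availability' and 'energy_efficiency' (fractions, higher is better).
    """
    def score(p):
        return (weights["availability"] * p["availability"]
                + weights["energy_efficiency"] * p["energy_efficiency"]
                - weights["price"] * p["price"])
    return sorted(providers, key=score, reverse=True)

providers = [
    {"name": "provider-a", "price": 0.10, "availability": 0.999,  "energy_efficiency": 0.6},
    {"name": "provider-b", "price": 0.08, "availability": 0.990,  "energy_efficiency": 0.7},
    {"name": "provider-c", "price": 0.15, "availability": 0.9999, "energy_efficiency": 0.9},
]
weights = {"price": 2.0, "availability": 1.0, "energy_efficiency": 0.5}

best = broker(providers, weights)[0]  # with these weights, price dominates
```

Changing the weights changes which provider wins, which is exactly why brokering across legal, price and energy criteria is an open research question rather than a solved optimization.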

Cloud Management. Research challenges in delivering infrastructure resources on-demand in a multi-tenant, secure, elastic and scalable environment.

  • Scalable management of network, computing and storage capacity
  • Scalable orchestration of virtualized resources and data
  • Placement optimization algorithms for energy efficiency, load balancing, high availability and QoS
  • Accounting, billing, monitoring and pricing models
  • Security, privacy and trust issues in the cloud
  • Energy efficiency models, metrics and tools at system and datacenter levels
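
The placement challenge above can be illustrated with a classic bin-packing heuristic: first-fit decreasing, which consolidates virtual machines onto as few hosts as possible so that idle hosts can be powered down. This is a minimal sketch with made-up capacities, not a production scheduler:

```python
# First-fit-decreasing placement: pack VMs (by CPU demand) onto as few
# hosts as possible so that idle hosts can be powered down. A classic
# bin-packing heuristic with hypothetical figures.

def place_vms(vm_demands, host_capacity):
    """Return a list of hosts, each host being the list of VM demands on it."""
    hosts = []  # each entry: [free_capacity, list_of_vm_demands]
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:
            if host[0] >= demand:      # first already-open host with room
                host[0] -= demand
                host[1].append(demand)
                break
        else:                          # no open host fits: power on a new one
            hosts.append([host_capacity - demand, [demand]])
    return [vms for _, vms in hosts]

# Six VMs consolidated onto two hosts of capacity 10; unused hosts stay off.
placement = place_vms([4, 8, 1, 4, 2, 1], host_capacity=10)
```

Real placement engines must additionally trade consolidation off against load balancing, high availability and QoS, which is what makes the problem a research challenge rather than plain bin packing.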

Cloud Enablement. Research challenges in enhancing platform infrastructure to support cloud management requirements.

  • Technologies for virtualization of infrastructure resources
  • Virtualization of high performance infrastructure components
  • Autonomic and intelligent management of resources
  • Implications of Cloud paradigm on networking and storage systems
  • Support for vertical elasticity
  • Provision of service related metrics

Cloud Interoperability. Challenges to ensure that the available cloud services can work together and interoperate successfully.

  • Common and standard interfaces for cloud computing
  • Portability of virtual appliances across diverse cloud providers

Building an Open Cloud Ecosystem

There are several large projects funded by the European Commission that are already addressing these challenges or building testbeds to bridge the usability and cultural gaps of cloud computing. Most of these projects re-use existing open-source components, thus actively contributing to building an open cloud ecosystem. A very good example is the high number of innovative projects using and contributing to the OpenNebula open-source community.

Ignacio M. Llorente

Open Source Tools Released by RESERVOIR to Support Cloud Deployment and Usage

Brussels, September 27, 2010 – Are you looking for the delivery of services on an on-demand basis, across countries, at competitive costs and without requiring a large capital investment in infrastructure? Are you interested in the latest technologies in Cloud Computing? RESERVOIR enables the migration of resources across distributed administrative domains, maximizing resource exploitation, and minimizing costs to the end-user with guaranteed quality of service. How does it work? RESERVOIR defines an open federated infrastructure cloud architecture and delivers a framework of open source components you can download from the RESERVOIR website and integrate to build your own open source cloud infrastructure.

Open Source Components

Several key components of the RESERVOIR architecture are being released as open source middleware. The Claudia platform offers a Service Management toolkit to deploy and control the scalability of services on a public, private or hybrid IaaS cloud. It provides a Dashboard and a standard TCloud API based on OVF to support provisioning of PaaS and SaaS. The Claudia platform is available through the Morfeo open source community. The Claudia platform can also be integrated with the OpenNebula cloud management framework.

OpenNebula is an open source toolkit with excellent performance and scalability to manage tens of thousands of virtual machines, high integration capabilities to fit into any existing data center, and the most advanced functionality for building private, public and hybrid clouds. It provides the most common cloud interfaces to expose its functionality for virtual machine, storage and network management. The OpenNebula platform is available under the Apache license on its community site and on the Morfeo open source community. Explanations are available on how to integrate the Claudia and OpenNebula platforms.

To help secure the integrated Claudia and OpenNebula platforms, security services are also planned for release on Morfeo. These services provide access control for the public interfaces of the IaaS cloud and allow an IaaS federation to be secured. Role-based access control, provided in combination with X.509 certificates for authorisation, authentication and integrity checks, will protect both the Claudia and OpenNebula public interfaces. Further security services secure the IaaS federation itself, providing authentication between data centres within a cloud federation and enforcing global security policies across the federation.

RESERVOIR at ICT2010 – Learn More about these Components Developed for Building Clouds

The RESERVOIR R&D stand called “Deploying Complex Multi-tier Applications on a Federated Cloud Infrastructure” will be placed in the ICT Connects zone of the ICT2010 conference in Brussels on September 27-29, 2010. This exhibit will show, via an interactive demonstration, how complex multi-tier applications can be securely deployed on a federated cloud infrastructure. It will also demonstrate how virtualisation and business service management techniques can be used to manage resources and services on-demand, at competitive costs with a high quality of service. This demonstration will present how RESERVOIR innovation will improve consumers’ accessibility to government and business services. The networking session “Research/Industry Collaboration on Open Source Cloud Middleware” is presented by RESERVOIR in collaboration with SLA@SOI and takes place on September 28, from 11:00 to 12:30. The session aims to identify better ways to collaborate for developing European open source technology for building clouds.

Want to Build a Cloud Infrastructure Using the RESERVOIR Framework?

RESERVOIR is also providing training, giving insight into the RESERVOIR Framework. Our experts offer consulting on the architecture, the individual RESERVOIR components, and how to integrate these to build an open source cloud infrastructure. Training also aims to teach users how to create service definitions and submit them to a RESERVOIR infrastructure for deployment. Details on RESERVOIR training can be accessed from the ‘Technical Information’ section on www.reservoir-fp7.eu.

RESERVOIR, Open Source and European Perspectives

The European Commission recently highlighted, in an Expert Group’s Report on the Future of Cloud Computing, the need for coordinating open source initiatives between Research and Industry, aimed at promoting the emergence of flexible cloud-based infrastructure-as-a-service offerings. The FP7 RESERVOIR project is making significant contributions in this direction by defining an open federated infrastructure cloud architecture, and delivering a framework of open source components for building infrastructure clouds.

Further information on RESERVOIR can be found at www.reservoir-fp7.eu

The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 215605.

DSA-Research in EU’s BonFIRE Project to Build a Multi-Site Cloud

BonFIRE is an 8.5-million-euro EU-funded initiative (EU grant agreement 257386), funded by the 7th Framework Programme (FP7) under the Future Internet Experimental Facility and Experimentally-driven Research (ICT-2009.1.6) area, aimed at designing, building and operating a multi-site cloud facility to support applications, services and systems research targeting the Internet of Services community within the Future Internet.

BonFIRE will operate a Cloud facility based on an Infrastructure as a Service delivery model with guidelines, policies and best practices for experimentation. BonFIRE will adopt a federated multi-platform approach providing interconnection and interoperation between novel service and networking testbeds. The platform will offer advanced services and tools for services research including cloud federation, virtual machine management, service modelling, service lifecycle management, service level agreements, quality of service monitoring and analytics.

DSA-Research will join a consortium of world-leading industrial and academic organisations in cloud computing to deliver a robust, reliable and sustainable facility for large-scale, experimentally-driven cloud research. Multinational companies (ATOS, HP, SAP), renowned universities and supercomputing centres (EPCC, HLRS Stuttgart, IBBT, TUB), research centres (IT Innovation, FhG Fokus, INRIA, i2CAT) and technology analysts (451 Group) provide the complementary expertise and infrastructure resources necessary to accelerate research and development within the Internet of Services community.

This news consolidates DSA-Research’s position at the cutting edge of cloud computing research worldwide, following two recent announcements of its participation in the EU’s StratusLab project, aimed at bringing cloud and virtualization to grid computing, and its participation in the EU’s 4CaaSt project, aimed at building the PaaS cloud of the future.

Ignacio M. Llorente

The Next Generation of Cloud Computing Platforms

Cloud computing is transforming the way we use the web but there’s still a long way to go before we make full use of the promise it offers. Projects Magazine – the leading research and development magazine in the areas of science and technology – interviews Professor Ignacio M. Llorente, the head of the DSA-Research group.

New Research Projects on Cloud Computing: HPCcloud and NUBA

We are happy to announce that our proposal “HPCcloud: Distributed Virtual Infrastructures to Provision Computing Resources” (MICINN TIN2009-07146), submitted to the National Program in Basic Research of the Spanish Ministry of Science and Innovation, has been accepted. This project, which was endorsed by several relevant companies and research centers, aims at conducting research on the elastic management of computing clusters on geographically-distributed virtual infrastructures as an instrument for the on-demand provision of computing resources.

DSA-Research will also participate in the NUBA strategic research program. The “NUBA: Normalized Usage of Business-oriented Architectures” (MITyC TSI-020301-2009-30) program will be funded by the Avanza R&D Plan of the Spanish Ministry of Industry, Tourism and Trade. This research initiative will be coordinated by Telefonica I+D with 8 partners: Atos Origin, Catón, Xeridia, EyeOS, Centro de Supercomputación de Galicia, Barcelona Supercomputing Center/Centro Nacional de Supercomputación and DSA-Research at Universidad Complutense de Madrid. The aim of NUBA is to advance the state-of-the-art in business models and technology for the dynamic deployment of federated Cloud platforms, integrating infrastructure from different providers, to execute elastic and configurable business services with the required Quality of Service.

This funding will contribute to the development of new research lines on Cloud computing and to the improvement and maintenance of the OpenNebula Virtual Infrastructure Manager for the next three years (2010-2012).

Ignacio Martin Llorente

Mathematica Goes Cloudy

Since the release of gridMathematica at the end of 2002, Wolfram Research has clearly stated its interest in making parallel computing available for its flagship product, Mathematica. Nevertheless, it was not until the release of Mathematica 7 (six years later) that they seriously tackled the usability of parallel computing. Why now?

Curiously enough, the profitability of cloud computing as an on-demand service (e.g. Amazon EC2) has only recently convinced IT analysts of what the future technology for business will look like, as companies seek more flexible solutions in a global world marked by deep economic instability.

Research cannot be put aside either, and computing shortages for some scientific and industrial activities will be unavoidable. Wolfram Research has identified this issue and prepared for the forthcoming events, allowing users to benefit from Cloud Computing. Problems like protein folding, DNA sequencing or Monte Carlo simulations are conceivable as “embarrassingly parallel problems” solvable with the help of Amazon EC2. On the other hand, CFD, heat transfer and other multi-physics simulations could very well enter the category of “middle-to-highly coupled problems” to be handled by R-Systems supercomputing services. This approach complements the High Performance Computing capabilities for multicore processors already implemented in the latest version of Mathematica. Still, gridMathematica remains a separate product.
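
As a rough illustration of what “embarrassingly parallel” means here, the following sketch (in Python, not Mathematica; all parameters are arbitrary) estimates pi by Monte Carlo sampling. Each worker draws its samples independently, with no communication, and the partial counts are simply merged at the end; on a real cloud each worker would be a separate machine such as an EC2 instance:

```python
# "Embarrassingly parallel" Monte Carlo: each worker draws its own random
# samples independently (no communication between workers), and partial
# results are merged at the end. On a real cloud each worker would run on
# a separate machine; threads are used here only to show the structure.
import random
from concurrent.futures import ThreadPoolExecutor

def count_hits(seed, n):
    """Count random points falling inside the unit quarter-circle."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def parallel_pi(workers=4, samples_per_worker=100_000):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(count_hits, seed, samples_per_worker)
                   for seed in range(workers)]
        hits = sum(f.result() for f in futures)
    return 4.0 * hits / (workers * samples_per_worker)

print(parallel_pi())  # roughly 3.14
```

Because the workers never exchange data, adding more of them scales almost linearly, which is precisely why this class of problems is a natural fit for on-demand cloud resources.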

I am unaware of how Wolfram plans to merge its per-site license philosophy with the new on-demand accounting but, surely, as stated by Schoeller Porter (technical development specialist in the Wolfram Partnerships Group), they want to keep their customers satisfied by making it easy, although not necessarily cheap. Moreover, the webMathematica interface could very well be the first approach to a complete on-demand web service. Imagine: “Just log in, let us compute your problem and pay the bill”. All in one, independent of how difficult the problem is or which architecture lies underneath.

Probably plenty of users, from medical research institutions and financial stock analysts to scientists simulating biological ecosystems, will be interested. They would rather focus more on their actual domain of interest and bother less about how to reach it.

For this reason alone, auditing mechanisms showing how useful the parallelization was and how many resources were used will be mandatory, as will open-source solutions showing us what the real price/effort balance of the whole thing is.

Alejandro Lorca Extremera

Cloud and Grid are Complementary Technologies

There is a growing number of posts and articles trying to show how cloud computing is a new paradigm that supersedes Grid computing by extending its functionality and simplifying its exploitation, some even announcing that Grid computing is dead. It seems that new technologies and paradigms always have the mission of substituting existing ones. Some of these contributions do not fully understand what Grid computing is, focusing their comparative analysis on simplicity of interfaces, implementation details or basic computing aspects. Other posts define Cloud in the same terms as Grid, or create a taxonomy which includes Grid and cluster computing technologies.

Grid is an interoperability technology, enabling the integration and management of services and resources in a distributed, heterogeneous environment. The technology provides support for the deployment of different kinds of infrastructures joining resources which belong to different administrative domains. In the special case of a Compute Grid infrastructure, such as EGEE or TeraGrid, Grid technology is used to federate computing resources spanning multiple sites for job execution and data processing. There are many success cases demonstrating that Grid technology provides the support required to fulfill the demands of several collaborative scientific and business processes.
On the other hand, I do not think there is a single definition for cloud computing, as it denotes multiple meanings for different communities (SaaS, PaaS, IaaS…). From my point of view, the only new feature offered by cloud systems is the provision of virtualized resources as a service, with virtualization as the enabling technology. In other words, the relevant contribution of cloud computing is the Infrastructure as a Service (IaaS) model. Virtualization, rather than less significant issues such as the interfaces, is the key advance. At this point, I should remark that virtualization was used by the Grid community before the arrival of the “Cloud”.

Once I have clearly stated my position about Cloud and Grid, let me show how I see Cloud (and virtualization as enabling technology) and Grid as complementary technologies that will coexist and cooperate at different levels of abstraction in future infrastructures.

There will be a Grid on top of the Cloud

Before explaining the role of cloud computing as a resource provider for Grid sites, we should understand the benefits of virtualizing the local infrastructure (an Enterprise or Local Cloud?). How can I access a cloud provider on demand if I have not previously virtualized my local infrastructure?

Existing virtualization technologies allow a full separation of resource provisioning from service management. A new virtualization layer between the service and infrastructure layers decouples a server not only from the underlying physical resource but also from its physical location, without requiring any modification within the service layers from either the service administrator or the end-user perspective. Such decoupling is the key to supporting the scale-out of an infrastructure in order to supplement local resources with cloud resources to satisfy peak or fluctuating demands.
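
The scale-out idea can be reduced to a minimal sketch (all capacity and demand figures are hypothetical; a real system would provision actual virtual machines rather than abstract capacity units): local virtualized capacity serves the base load, and only the overflow bursts to an external cloud provider:

```python
# Cloudbursting sketch: local virtualized capacity serves the base load;
# only the overflow of each period "bursts" to an external cloud provider.
# Capacity units and demand figures are hypothetical.

def schedule(demands, local_capacity):
    """Split each period's demand into a local share and a cloud share."""
    plan = []
    for demand in demands:
        local = min(demand, local_capacity)
        cloud = demand - local            # peak overflow goes to the cloud
        plan.append({"local": local, "cloud": cloud})
    return plan

# Local resource base sized for the average workload; the peak period (180)
# exceeds it, so 80 units are provisioned from the cloud for that period only.
plan = schedule([40, 60, 180, 90], local_capacity=100)
```

The point of the sketch is economic: the local infrastructure is sized for the average demand, and cloud resources are paid for only during the peaks, which is exactly what the decoupling described above makes possible.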

Getting back to the Grid computing case, the virtualization of a Grid site provides several benefits, which overcome many of the technical barriers for Grid adoption:

  • Easy support for VO-specific worker nodes
  • Reduce gridification cycles
  • Dynamic balance of resources between VOs
  • Fault tolerance of key infrastructure components
  • Easier deployment and testing of new middleware distributions
  • Distribution of pre-configured components
  • Cheaper development nodes
  • Simplified training machines deployment
  • Performance partitioning between local and grid services
  • On-demand access to cloud providers

If you are interested in more details about how virtualization and cloud computing can support compute Grid infrastructures you can have a look at my presentation “An Introduction to Virtualization and Cloud Technologies to Support Grid Computing” (EGEE08). I also recommend the report “An EGEE Comparative study: Clouds and grids – evolution or revolution?”.

Technology already exists to support the above use case. The OpenNebula engine enables the dynamic deployment and re-allocation of virtual machines on a pool of physical resources, providing support for on-demand access to Amazon EC2 resources. On the other hand, Globus Nimbus provides a free, open source infrastructure for remote deployment and management of virtual machines, allowing you to create compute clouds.

There will be a Grid under the Cloud

There is a growing interest in the federation of cloud sites. Cloud providers are opening new infrastructure centers at different geographical locations (see IBM or Amazon Availability Zones), and it is clear that no single facility/provider can create a seemingly infinite infrastructure capable of serving massive numbers of users at all times, from all locations. David Wheeler once said, “Any problem in computer science can be solved with another layer of indirection… But that usually will create another problem”. Along the same lines, the federation of cloud sites involves many technological and research challenges, but the good news is that some of them are not new, and have already been studied and solved by the Grid community.

As stated above, Grid is not only about computing; Grid is a technology for federation. In recent years, there has been a huge investment in research and development of technological components for the sharing of resources across sites. Several middleware components for file transfer, SLA negotiation, QoS, accounting, monitoring… are available, most of them open source. As also predicted by Ian Foster in his post “There’s Grid in them thar Clouds”, those will be the components that could enable the federation of cloud sites. On the other hand, other components have to be defined and developed from scratch, mainly those related to the efficient management of virtual machines and services within and across administrative domains. That is exactly the aim of the Reservoir project, the European initiative in Cloud Computing.

Conclusions

In order to conclude this post let me venture some predictions about the coexistence of Grid and Cloud computing in future infrastructures:

  • Virtualization, cloud, grid and cluster are complementary technologies that will coexist and cooperate at different levels of abstraction
  • Although there are early adopters of virtualization in the Grid/cluster/HPC community, its full potential has not been exploited yet
  • In a few years, the separation of job management from resource management through a virtualized infrastructure will be a common practice
  • Emerging open-source VM managers, such as OpenNebula, will contribute to speed up the adoption
  • Grid/cluster/HPC infrastructures will maintain a resource base scaled to meet the average workload demand and will transparently access cloud providers to meet peak demands
  • Grid technology will be used for the federation of clouds

In summary, let’s try to forget about hype and concentrate on the complementary functionality provided by both paradigms. My message to the user community: the relevant issue is to evaluate which technology meets your requirements; it is unlikely that a single technology will meet all needs. My message to the Grid community: please do not see Cloud as a threat; virtualization and Cloud are needed to solve many of the technical barriers to wider Grid adoption. My message to the Cloud community: please try to take advantage of the research and development performed by the Grid community over the last decade.

Ignacio Martín Llorente