Since the concepts of cloud computing are relatively new, we will first introduce a minimal background for the reader, then dive headlong into the OpenShift project, which is split into two main areas:
The OpenShift Express service, which will be your starting point for leveraging cloud applications
The OpenShift Flex service, which advanced users can use to roll their cloud applications into production
What is cloud computing? We hear this term everywhere, but what does it really mean? We have all used the cloud, knowingly or unknowingly. If you have Gmail, Hotmail, or any other popular e-mail service, then you have used the cloud. Simply put, cloud computing is a set of pooled computing resources and services delivered over the Web. When you diagram the relationships between all the elements, the picture resembles a cloud.
Client-server computing, however, is not a completely new thing in the computer industry. Those of you who have been in the trenches of IT for a decade or two should remember that the first type of client-server applications were mainframe and terminal applications. At that time, storage and CPU were very expensive, and the mainframe pooled both types of resources and served them to thin-client terminals.
With the advent of the PC revolution, which brought mass storage and cheap CPUs to the average corporate desktop, the file server gained popularity as a way to enable document sharing and archiving. True to its name, the file server served storage resources to the clients in the enterprise, while the CPU cycles needed to do productive
work were all produced and consumed within the confines of the PC client.
In the early 1990s, the budding Internet finally had enough computers attached to it that academics began seriously thinking about how to connect those machines together to create massive, shared pools of storage and compute power that would be much larger than what any one institution could afford to build. This is when the idea of "the grid" began to take shape.
In general, the terms grid and cloud seem to be converging due to some similarities; however, there are a number of important differences between them which are often misunderstood, generating confusion and clutter within the marketplace.
Grid computing requires software that can divide and farm out pieces of a program, as one large system image, to several thousand computers. Hence, a grid may or may not be in the cloud, depending on how you use it. One concern about the grid is that if one piece of the software on a node fails, other pieces of the software on other nodes may fail too. This is alleviated if that component has a failover component on another node, but problems can still arise if components rely on other pieces of software to accomplish one or more grid computing tasks.
Cloud computing evolved from grid computing and provides on-demand resource provisioning. With cloud computing, companies can scale up to massive capacities in an instant without having to invest in new infrastructure, train new personnel, or license new software. If the users are systems administrators and integrators, they care about how things are maintained in the cloud: they upgrade, install, and virtualize the servers and applications. If the users are consumers, they do not care how the system is run.
Cloud computing and grid computing do bear some similarities, and they are not always mutually exclusive; both are used to economize computing by maximizing existing resources.
However, the difference between the two lies in the way the tasks are computed in each respective environment. In a computational grid, one large job is divided into many small portions and executed on multiple machines. This characteristic is fundamental to a grid; not so much to a cloud.
Cloud computing is intended to let users consume various services without investing in the underlying architecture. Cloud services include the delivery of software, infrastructure, and storage over the Internet (either as separate components or as a complete platform), based on actual user demand.
Having gone through the basics of cloud computing, we should now account for the benefits you can expect when you transition to a cloud computing approach:
On demand service provisioning: By using self-service provisioning, customers can get cloud services easily, without going through a lengthy process. The customer simply requests a number of computing, storage, software, process, or other resources from the service provider.
Elasticity: Thanks to the elasticity of cloud computing, customers no longer need to predict traffic, but can promote their sites aggressively and spontaneously. Engineering for peak traffic becomes a thing of the past.
Cost reduction: Companies are often challenged to increase the functionality of IT while minimizing capital expenditures. By purchasing just the right amount of IT resources on demand, the organization can avoid paying for unnecessary equipment.
Application programming interfaces (APIs): Access to software that enables machines to interact with cloud software in the same way a user interface facilitates interaction between humans and computers. Cloud computing systems typically expose REST-based APIs.
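As a minimal sketch of what such machine-to-machine interaction looks like, the snippet below parses the kind of JSON body a REST-based cloud API might return; the payload and its field names are purely hypothetical, not taken from any real provider.

```python
import json

# Hypothetical JSON body returned by a cloud provider's REST API;
# the field names are illustrative only, not a real payload.
response_body = '{"application": "myapp", "state": "running", "instances": 1}'

# A client program parses the structured response directly,
# with no human-oriented user interface in between.
data = json.loads(response_body)
print(data["state"])  # → running
```

Because the response is plain structured data delivered over HTTP, any language or tool can automate the same operations a human would otherwise perform through the provider's web console.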
Along with these advantages, cloud computing also bears some disadvantages or potential risks, which you must account for.
The most significant threat is that sensitive data processed outside the enterprise brings with it an inherent level of risk, because outsourced services bypass the "physical, logical, and personnel controls" that IT shops exert over in-house programs. In addition, when you use the cloud, you probably won't know exactly where your data is hosted. In fact, you might not even know what country it will be stored in, leading to potential issues with local jurisdiction.
As the Gartner Group suggests (http://www.gartner.com), you should always ask providers to supply specific information on the hiring and oversight of privileged administrators. Besides this, the cloud provider should provide evidence that its encryption schemes were designed and tested by experienced specialists. It is also important to understand whether the provider will make a contractual commitment to obey local privacy requirements on behalf of its customers.
Another classification of cloud resources can be made on the basis of the location where the cloud is hosted: in a public cloud, the infrastructure is hosted by a third-party provider and shared among customers, while a private cloud is operated within an organization's own data center.
The decision between the different kinds of cloud computing is a matter of discussion between experts, and it generally depends on several key factors. For example, as far as security is concerned, although public clouds can offer a very secure environment, private clouds offer an inherent level of security that meets even the highest of standards. In addition, you can add security services such as Intrusion Detection Systems (IDS) and dedicated firewalls.
A private cloud might be the right choice for a large organization running a well-managed data center with a lot of spare capacity. In that case, it would be more expensive to use a public cloud, even though you have to add new software to transform that data center into a cloud.
On the other hand, as far as scalability is concerned, one drawback of private clouds is that their capacity is limited to the number of machines in your cloud cluster. Should you max out your computing power, another physical server will need to be added. Besides this, public clouds typically deliver a pay-as-you-go model, where you pay by the hour for the computing resources you use. This kind of utility pricing is an economical way to go if you're spinning up and tearing down development servers on a regular basis.
In practice, the majority of public cloud deployments are used for web servers or development systems, where the security and compliance requirements of larger organizations and their customers are not an issue.
As opposed to public clouds, private clouds are generally preferred by mid-size and large enterprises because they meet the security and compliance requirements of those larger organizations that also need dedicated high-performance hardware.
Cloud computing can be broadly classified into three layers of the cloud stack, also known as cloud service models or the SPI service model:
Infrastructure as a Service (IaaS): This is the base layer of the cloud stack and serves as the foundation on which the other two layers execute. It includes the delivery of computer hardware (servers, networking technology, storage, and data center space) as a service. It may also include the delivery of operating systems and virtualization technology to manage the resources. IaaS makes the acquisition of hardware easier, cheaper, and faster.
Platform as a Service (PaaS): This layer offers a development platform for developers. The end users write their own code, and the PaaS provider uploads that code and presents it on the Web.
By using PaaS, you don't need to invest money to get a project environment ready for your developers. The PaaS provider delivers the platform on the Web, and in most cases you can consume the platform using your browser, with no need to download any software. This combination of simplicity and cost efficiency empowers small and mid-size companies, or even individual developers, to launch their own SaaS in the cloud.
The final segment in cloud computing is Software as a Service (SaaS), which is based on the concept of renting software from a service provider rather than buying it yourself. The software is hosted on centralized network servers to make its functionality available over the Web or an intranet. Also known as "software on demand", it is currently the most popular type of cloud computing because of its high flexibility, great services, enhanced scalability, and lower maintenance. Yahoo! Mail, Google Docs, and CRM applications are all instances of SaaS.
You might wonder whether some services can be defined both as a platform and as software. The answer is, of course, yes! Facebook, for example, can be defined both as a platform where various services can be delivered and as a set of business applications, built on the Facebook API by end users.
Until a few months ago, it was common to hear that JBoss AS was still missing a cloud platform, while competitors such as SpringSource already had a solid cloud infrastructure.
Although it's true that the application server was missing a consolidated cloud offering, this does not mean that there was little or no interest in the subject. If you have a look at the JBoss World 2010 labs, there was a lot of discussion about the cloud. One early effort exhibited at JBoss labs was CirrAS (http://www.jboss.org/stormgrind/projects/cirras), a set of appliances that could automatically deploy a clustered JBoss AS server in the cloud. Built using the BoxGrinder project (http://boxgrinder.org/), CirrAS was composed of three appliances: a front-end appliance, a back-end appliance, and a management appliance. Unfortunately, the project didn't grow any further and, up to August 2011, the portfolio of JBoss cloud applications was still small.
At that time, Red Hat announced the availability of the OpenShift platform for deploying and managing Java EE applications on JBoss AS 7 servers running in the cloud. Finally, it's time for the application server to spread its wings over the clouds!
OpenShift is the first PaaS to run CDI applications and plans support for Java EE 6, extending the capabilities of PaaS to even the richest and most demanding applications. OpenShift delivers two kinds of services for rapidly deploying Java applications on the cloud:
OpenShift Express enables you to create, deploy, and manage applications in the cloud. It provides disk space, CPU resources, memory, network connectivity, and an Apache or JBoss server. Depending on the type of application you are building, you also have access to a template filesystem layout for that type (for example, PHP, WSGI, and Rack/Rails). OpenShift Express also generates a DNS entry for you.
The first thing needed to get started with OpenShift Express is an account, which can be obtained with a very simple registration procedure at:
Once you've registered and confirmed your e-mail address, the next step is to install, on your Linux distribution, the client tools needed to deploy and manage your applications in the cloud.
For this purpose, we suggest you use either Fedora 14 (or higher) or Red Hat Enterprise Linux 6 (or higher).
Then you need to grab a copy of the openshift.repo file, which contains the base URL of the RPM files and the keys necessary to validate them. This file should be available at:
Now, move this file into the /etc/yum.repos.d/ directory using either sudo or root access privileges:
$ sudo mv openshift.repo /etc/yum.repos.d/
And then install the client tools:
$ sudo yum install rhc
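With the client tools installed, a typical first session looks like the following sketch. The exact command names and flags depend on the version of the rhc tools you have installed, so treat this as illustrative rather than definitive; the namespace, application name, and login shown here are placeholders:

```shell
# Illustrative only: command names and flags vary across rhc releases.
# Register a domain (namespace) tied to your OpenShift login:
$ rhc-create-domain -n mynamespace -l user@example.com

# Create a new JBoss AS 7 application in the cloud:
$ rhc-create-app -a myapp -t jbossas-7.0
```

Each command prompts for your account password and, on success, the tools report where your new application is reachable.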