Saturday, March 31, 2012

Data location in the cloud

After data goes into the cloud, you may not have control over where it’s stored geographically. Consider these issues:

✓ Specific country laws: Laws governing data differ across geographic boundaries. Your own country’s legal protections may not apply if your data is located outside of the country. A foreign government may be able to access your data or keep you from fully controlling your data when you need it.

✓ Data transfer across country borders: A global company with subsidiaries or partners (or clients for that matter) in other countries may be concerned about cross-border transfer of data due to local laws. Virtualization makes this an especially tough problem because the cloud provider might not know where the data is at any particular moment.

✓ Co-mingling of data: Even if your data is in a country that has laws you’re comfortable with, your data may be physically stored in a database along with data from other companies. This raises concerns about virus attacks or hackers trying to get at another company’s data.

✓ Secondary data use: In public cloud situations, your data or metadata may be vulnerable to alternative or secondary uses by the cloud service provider.

• Without proper controls or service level agreements, your data may be used for marketing purposes (and merged with data from other organizations for these alternative uses). The recent uproar about Facebook mining data from its network is an example.

• The service provider may own any metadata (see the “Sorting Out Metadata Matters” section later in this chapter for a description of metadata) it has created to help manage your data, lessening your ability to maintain control over your data. You should always be aware of where your data is and how it is being used; taking a few precautions against these types of issues may be exactly what you need to protect yourself.

Source of Information: Cloud Computing For Dummies (2010)

Tuesday, March 27, 2012

Considering cloud hardware

When your company is establishing a cloud data center, think about the hardware elements in a different way. The following sections summarize considerations.


Cooling
Cloud data centers have the luxury of being able to engineer the way systems (boards, chips, and more) are cooled. When systems are cooled via air conditioning, they require tremendous amounts of power. However, purpose-built cloud data centers can be engineered to be cooled by water, for example (which is 3,000 times more efficient than air in cooling equipment).


CPU, memory, and local disk
Traditional data centers tend to be filled with a lot of surplus equipment (either to support unanticipated workloads or because an application or process wasn’t engineered to be efficient). Surplus memory, CPUs, and disks take up valuable space and, of course, they need to be cooled. The cloud data center typically supports self-service provisioning of resources, so capacity is added only when you need it.


Data storage and networking
Data storage and networking need to be managed collectively if they’re going to be efficient. This problem has complicated the way traditional data centers are managed and has forced organizations to buy a lot of additional hardware and software. The cloud data center can be engineered to overcome this problem: because it manages its workloads so efficiently, the cloud knows where its data needs to be. It is, in effect, engineered to manage data efficiently.


Redundancy
Data centers must always move data around the network for backup and disaster recovery. Traditional data centers support so many different workloads that many approaches to backup and recovery have to be taken. This makes backing up and recovering data complicated and expensive. The cloud, in contrast, is designed to handle data workloads consistently. For example, in a cloud data center you can establish a global policy about how and when backups will be handled. This can be then handled in an automated manner, reducing the cost of handling backup and recovery.
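To make the idea of a single global backup policy concrete, here is a minimal sketch in Python (not from the book; the policy fields and function names are illustrative assumptions) showing how one policy could drive identical, automated backup jobs for every workload:

# Minimal illustrative sketch: one global backup policy applied identically to
# every workload, as a cloud data center might automate it. All names are
# hypothetical and not from the source.

GLOBAL_BACKUP_POLICY = {
    "frequency_hours": 24,               # one backup per day
    "retention_days": 30,                # keep each backup for 30 days
    "replica_target": "secondary-site",  # copy to a second site for disaster recovery
}

def schedule_backups(workloads, policy=GLOBAL_BACKUP_POLICY):
    """Produce one identical backup job per workload, driven by the global policy."""
    return [
        {
            "workload": name,
            "every_hours": policy["frequency_hours"],
            "keep_days": policy["retention_days"],
            "replicate_to": policy["replica_target"],
        }
        for name in workloads
    ]

if __name__ == "__main__":
    for job in schedule_backups(["billing-db", "web-frontend", "email"]):
        print(job)

Because every workload follows the same rule, adding a new workload means adding one entry to the list rather than designing yet another backup approach.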


Software embedded within the data center
We talk a lot about software in the context of applications, but a considerable amount of software operates at the systems level. This system-level software is a big cost in the traditional data center simply because there are so many workloads, each with its own operating system and related software elements.

As you know, cloud data centers have fewer of these elements because they run simpler workloads. There are some differences in how software costs are managed depending on the type of cloud model. Cloud providers understand these costs well and design their offerings to maximize revenue. Understanding the cost factors for each model will, in turn, help you understand their pricing.

The following gives you a sense of the difference between IaaS, PaaS, and SaaS when it comes to embedded software costs:

✓ An Infrastructure as a Service (IaaS) operation is likely to have higher software costs because although it provides only an environment for running applications, it has to build that environment to match equivalent environments in corporate data centers. Therefore, the IaaS vendor has to spend a lot of resources on management and security software in addition to the operating systems.

✓ With a Platform as a Service (PaaS) operation, the provider delivers a full software stack. To reduce cost, the PaaS vendor is likely to provide a software stack consisting of proprietary components. The licensing costs may be lower than in the IaaS environment because the operator is likely to require the use of specific software products. However, the PaaS vendor must maintain and support the software stack it provides.

✓ With Software as a Service (SaaS), the SaaS vendor provides a proprietary application as its value to customers. While the vendor invests in this software, it typically relies on partners to support many of the other functions. These vendors also take advantage of open-source components.

Source of Information: Cloud Computing For Dummies (2010)

Friday, March 23, 2012

Scaling the Cloud

From the provider’s point of view, the whole point of cloud computing is to achieve economies of scale by managing a very large pool of computing resources in a highly economic and efficient fashion.

An important point to note is that the Y-axis of user populations is logarithmic. That means that the curve is much less steep than if we drew it on a proportional scale of equal steps. If we drew it on a proportional scale, we’d need miles of paper.

We deliberately didn’t put units on the X-axis. Instead, note the following:

✓ One end of the X-axis shows data center costs between $1–$50 per user per annum. That reflects, for example, the prices that Google charges for Google Apps or even the cost of providing free email (from Google, Microsoft, or Yahoo, which is paid for by ads). The cost per user is extremely low.

✓ The other end of the X-axis shows data center costs between $1,000– $5,000 per user per annum. That might be the cost of, for example, providing a print server that’s almost always idle.

Basically, on the left you have very efficient use of computing resources and, on the right, very inefficient use of resources. Points on the line indicate the kind of computing resources that serve specific group sizes:

✓ Inefficient servers: This is a 1:1 user-to-server ratio (or close to 1). The cost of managing a single server in a data center will be thousands of dollars per year and this is as expensive as computing ever gets per user.

✓ Virtual machines: Applications and user numbers that can’t use a whole server get virtualized (split among several virtual servers). This is efficient (making better use of underused servers), but also inefficient (virtualization requires significant overhead, as does running multiple guest operating systems).

✓ Efficient servers (and small clusters): User populations from the hundreds to 1,000 can be served reasonably efficiently with a single or multiple servers if there’s only one application being run on a server; servers can be highly efficient, yielding a relatively low cost per user.

✓ Mainframe and large Unix clusters: They’re shown separately on the chart only for the sake of space. Both can handle very large database applications serving from thousands to tens of thousands of users.

✓ Grids: From the hundreds of thousands to a million users, you’re in the area where Software as a Service (SaaS) vendors such as Salesforce.com operate. Business applications offered by SaaS vendors present a thorny scaling problem because they are transactional database applications. The main Salesforce.com CRM application runs on a grid of about 1,000 computers.

✓ Large grids: Concurrent users above one million. This is still a very heavy workload and is only possible via a scale-out approach with a grid (which lets a single workload expand by using more identical, inexpensive resources). Twitter and LinkedIn are examples.

✓ Massively scaled grid: This is for user populations in the tens of millions. Example: Each query on Google search is resolved by a purpose-built grid of up to 1,000 servers; Google routes queries to many such grids. Yahoo also has a massively scaled-out email system. It caters to more than 260 million users, of which tens of millions must be active at a time.

The same servers used in corporate environments could be used just as easily in scaled-out arrangements, where workloads aren’t mixed at all. The reduction in per-user costs doesn’t, at the moment, come from using different computer equipment or different operating systems: it comes from running a small number of workloads (or even just one) and scaling each up as much as possible. That’s how cloud computing reduces costs dramatically.
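As a rough worked example of the per-user arithmetic behind that curve (the dollar figures below are my own illustrative assumptions, not numbers from the book), a few lines of Python show how scaling one workload over more users collapses the cost per user:

# Illustrative arithmetic only: how annual cost per user falls as a single
# workload is spread over more users. All dollar figures are assumptions.

scenarios = [
    # (description, servers, annual cost per server, users served)
    ("Dedicated, mostly idle server", 1, 5_000, 1),
    ("Efficient single-application server", 1, 5_000, 500),
    ("SaaS-style grid", 1_000, 5_000, 1_000_000),
]

for name, servers, cost_per_server, users in scenarios:
    cost_per_user = servers * cost_per_server / users
    print(f"{name}: ${cost_per_user:,.2f} per user per annum")

The dedicated server works out to thousands of dollars per user, while the grid lands in the $1 to $50 per user range described above.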

No corporation that runs a mixed workload is ever going to achieve cloud computing’s economies of scale.

Source of Information: Cloud Computing For Dummies (2010)

Tuesday, March 20, 2012

Comparing Financial Damage: Traditional versus Cloud

How much does a data center cost to run? It depends on these things:

✓ How big it is: Is the data center massive? How much square footage does it occupy? How many servers (physical and virtual) does it hold? Does it cost $5 million a year to run?

✓ Where it is: How much does office space cost? What about the cost of staff? Is the data center close to inexpensive power sources?

✓ What it’s doing: Does the data center protect sensitive data? What kind of business does it support? What level of compliance must it adhere to?

Clearly, you have many ways to look at the situation.



Traditional data center
Although each data center is a little different, the average cost per year to operate a large data center is usually between $10 million and $25 million. Where’s the bulk of the money going? This might surprise you.

✓ 42 percent: Hardware, software, disaster recovery arrangements, uninterrupted power supplies, and networking. (Costs are spread over time, amortized, because they are a combination of capital expenditures and regular payments.)

✓ 58 percent: Heating, air conditioning, property and sales taxes, and labor costs. (In fact, as much as 40 percent of annual costs are labor alone.)

The reality of the traditional data center is further complicated because most of the costs maintain existing (and sometimes aging) applications and infrastructure. Some estimates show 80 percent of spending on maintenance.

Before you conclude that you need to throw out the data center and just move to the cloud, know the nature of the applications and the workloads at the core of data centers:

✓ Most data centers run a lot of different applications and have a wide variety of workloads.

✓ Many of the most important applications running in data centers are actually used by only a relatively few employees. For example, transaction management applications (which are critical to a company’s relationship to customers and suppliers) might only be used by a few employees.

✓ Some applications that run on older systems are taken off the market (no longer sold) but are still necessary for business.

Because of the nature of these applications, it probably wouldn’t be cost effective to move these environments to the cloud.



Cloud data center
In this case, a cloud data center means a data center with 10,000 or more servers on site, all devoted to running very few applications that are built with consistent infrastructure components (such as racks, hardware, OS, networking, and so on).

Cloud data centers are
✓ Constructed for a different purpose.
✓ Created at a different time than the traditional data center.
✓ Built to a different scale.
✓ Not constrained by the same limitations.
✓ Designed to perform different workloads than traditional data centers.

Because of this design approach, the economics of a cloud data center are significantly different.

To create a basis for analyzing this, we used figures on the costs of creating a cloud data center described in a Microsoft paper titled “The Cost of a Cloud: Research Problems in Data Center Networks” by Albert Greenberg, James Hamilton, David A. Maltz, and Parveen Patel.

We took estimates for how much it cost to build a cloud data center and looked at three cost factors:
✓ Labor costs were 6 percent of the total costs of operating the cloud data center.
✓ Power distribution and cooling were 20 percent.
✓ Computing costs were 48 percent.

Of course, the cloud data center has some different costs than the traditional data center (such as buying land and construction).

This explanation of costs is designed to give you an idea of where the differences between the traditional data center and the cloud data center lie. The upfront costs of constructing cloud data centers are actually spread across hundreds of thousands of individual users. Therefore, after they’re constructed, these cloud data centers are well positioned to be profitable because they support so many customers with a large number of servers executing a single application.
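To see how differently each dollar is distributed, here is a small illustrative calculation in Python using the percentages quoted above; the $20 million annual budget is an assumed example figure, not a number from the source:

# Illustrative comparison of where each dollar goes, using the percentages
# quoted above. The $20M annual budget is an assumed example figure.

ANNUAL_BUDGET = 20_000_000

traditional = {
    "hardware, software, DR, and networking": 0.42,
    "heating, cooling, taxes, and labor": 0.58,
    "labor alone (a subset of the 58 percent)": 0.40,
}

cloud = {
    "computing": 0.48,
    "power distribution and cooling": 0.20,
    "labor": 0.06,
}

for label, shares in (("Traditional data center", traditional),
                      ("Cloud data center", cloud)):
    print(label)
    for item, share in shares.items():
        print(f"  {item}: ${share * ANNUAL_BUDGET:,.0f}")

The striking difference is labor: up to 40 percent of the traditional budget versus roughly 6 percent in the cloud data center.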

Source of Information: Cloud Computing For Dummies (2010)

Friday, March 16, 2012

Seeing the Many Aspects of Your Cloud Strategy

You have to think about several issues before sending your organization into the cloud. There isn’t just one approach. You might choose one or more of these approaches at different times for different reasons.

Consider a few simple examples:
✓ Your company is building a new application that will change the way you sell products online. You want to stress test this new application before releasing it to customers. Although you have a few extra resources inside your firewall, they aren’t extensive enough to demonstrate if the new application will really scale. Using a cloud Infrastructure as a Service enables you to test the application effectively.

✓ Your company has run its own email internally for more than 20 years. It takes up a lot of space in the data center and requires a staff of ten people. Money is tight and the CIO must cut staff and capital expenses. The CIO finds a Software as a Service platform that can run the corporate email for a fraction of the cost of running email internally. Your company makes the move and the savings are dramatic.

✓ Your company is building a new but highly experimental application that might transform its business model. It might not be worth spending a lot of money on software and hardware upfront. In fact, if the project succeeds, the new application may be deployed in the cloud (and not within your company’s own data center). Therefore, the company uses a Platform as a Service (PaaS) that includes its own well-designed and fully vetted development environment, new generation tools, and interfaces that allow it to connect to many different environments. No need to pretest all the components provided by the PaaS vendor — they’re well designed and have been tested. The new application built on this platform is completed in record time and deployed to a test group of customers directly from the cloud service.

✓ Your company has started using a third-party SaaS solution for its customer-management application. It has successfully replaced the on-premises customer-relationship management package that you’ve been running in the data center for years. Now your company wonders what else could be moved out of the data center into the cloud. How about the mainframe transaction processing system that handles all orders worth more than $1 million? After some investigation, you realize that because the system is only used by a few individuals in the company and the information needs to be carefully governed, the cloud isn’t a good choice.

✓ Your CIO has seen some new software that could solve a serious problem, but you aren’t convinced that the solution is right. Instead of buying a license, your company decides to use it as a service. After six months, it proves valuable. The software company then offers you the choice of continuing to use the Software as a Service or deploying it on premises.

As you can see, planning your cloud strategy has many different dimensions — maybe more than what you might have thought about in the past. You need a road map to think about how a cloud strategy can be used to support your company’s business goals.

Source of Information: Cloud Computing For Dummies (2010)

Monday, March 12, 2012

Salesforce.com and automation application

Salesforce.com built and delivered a sales force automation application (which automates sales functions such as tracking sales leads and prospects and forecasting sales) that was suitable for the typical salesperson and built a business around making that application available over the Internet through a browser.

The company then expanded by encouraging the growth of a software ecosystem around its extended set of customer relationship management (CRM) applications, prompting other companies to integrate their business applications with those of Salesforce.com (or build components to add to Salesforce.com). It began, for example, by allowing customers to change tabs and create their own database objects. Next, the company added what it called the AppExchange, which added published application programming interfaces (APIs) so that third-party software providers could integrate their applications into the Salesforce.com platform.

Most AppExchange applications are more like utilities than full-fledged packaged apps. Many of the packages sold through the AppExchange are for tracking. For example, one tracks information about commercial and residential properties; another optimizes the sales process for media/advertising companies; still another analyzes sales data.

Salesforce.com took its offerings a step further by offering its own language called Apex. Apex is used only within the Salesforce.com platform and lets users build business applications and manage data and processes. A developer can use Apex to change the way the application looks. It is, in essence, the interface as a service.

With the advent of cloud computing, Salesforce.com has packaged these offerings into what it calls Force.com, which provides a set of common services its partners and customers can use to integrate into their own applications. Salesforce.com has thus started to also become a Platform as a Service vendor. Among the hundreds of applications that run on Force.com are a variety of HR, financial, supply chain, inventory, and risk management components. Just as Amazon is currently the trailblazer among Infrastructure as a Service vendors, Salesforce.com is the trailblazer among Software as a Service vendors. However, many vendors are now providing applications as a service; it has become a popular option for selling software.

Source of Information: Cloud Computing For Dummies (2010)

Friday, March 9, 2012

Understanding Infrastructure as a Service

Infrastructure as a Service (IaaS) is the delivery of computer hardware (servers, networking technology, storage, and data center space) as a service. It may also include the delivery of operating systems and virtualization technology to manage the resources.

The IaaS customer rents computing resources instead of buying and installing them in its own data center. The service is typically paid for on a usage basis. The service may include dynamic scaling so that if the customer winds up needing more resources than expected, it can get them immediately (probably up to a given limit).

Dynamic scaling as applied to infrastructure means that the infrastructure can be automatically scaled up or down, based on the requirements of the application.

Additionally, the arrangement involves an agreed-upon service level. The service level states what the provider has agreed to deliver in terms of availability and response to demand. It might, for example, specify that the resources will be available 99.999 percent of the time and that more resources will be provided dynamically if greater than 80 percent of any given resource is being used.
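A minimal sketch of how such a threshold rule might be automated (illustrative only; the 80 percent trigger mirrors the example service level above, and the provisioning call is a hypothetical placeholder rather than any provider's real API):

# Illustrative sketch of the dynamic-scaling rule described above: request more
# capacity when more than 80 percent of a resource is in use. The provisioning
# callback is a hypothetical placeholder, not a real provider API.

SCALE_UP_THRESHOLD = 0.80   # from the example service level above
INSTANCE_LIMIT = 20         # the agreed-upon upper bound on resources

def check_and_scale(used_units, capacity_units, current_instances, provision):
    """Ask for one more instance whenever utilization exceeds the threshold."""
    utilization = used_units / capacity_units
    if utilization > SCALE_UP_THRESHOLD and current_instances < INSTANCE_LIMIT:
        provision(current_instances + 1)   # placeholder for the provider's call
        return current_instances + 1
    return current_instances

# Example: 85 percent utilization on 4 instances triggers a request for a fifth.
instances = check_and_scale(85, 100, 4,
                            provision=lambda n: print(f"requesting scale-up to {n} instances"))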

Currently, the most high-profile IaaS operation is Amazon’s Elastic Compute Cloud (Amazon EC2). It provides a Web interface that allows customers to access virtual machines. EC2 offers scalability under the user’s control, with the user paying for resources by the hour. The use of the term elastic in the naming of Amazon’s EC2 is significant. The elasticity refers to the ability that EC2 users have to easily increase or decrease the infrastructure resources assigned to meet their needs. Because the user must initiate the request, the service isn’t dynamically scalable. Users of EC2 can request the use of any operating system as long as the developer does all the work; Amazon itself supports a more limited number of operating systems (Linux, Solaris, and Windows). For an up-to-the-minute description of this service, go to http://aws.amazon.com/ec2.
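For a sense of what user-initiated elasticity looks like in practice, here is a hedged sketch using the modern boto3 Python SDK (which postdates the book); the AMI ID is a placeholder, and the call sequence is just an illustration of requesting and then releasing a virtual machine:

# Hedged illustration only: launching and releasing one EC2 virtual machine with
# the boto3 SDK (a present-day library, not something described in the book).
# The AMI ID is a placeholder; real use needs AWS credentials and a valid image.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The user initiates the request; billing accrues only while the instance runs.
response = ec2.run_instances(
    ImageId="ami-00000000000000000",   # placeholder image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)

# Releasing the resource when it is no longer needed stops the charges.
ec2.terminate_instances(InstanceIds=[instance_id])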

Companies with research-intensive projects are a natural fit for IaaS. Cloud-based computing services allow scientific and medical researchers to perform testing and analysis at levels that aren’t possible without additional access to computing infrastructure.

Other organizations with similar needs for additional computing resources may boost their own data centers by renting the computer hardware — appropriate allocations of servers, networking technology, storage, and data center space — as a service. Instead of laying out the capital expenditure for the maximum amount of resources to cover their highest level of demand, they purchase computing power when they need it.

Source of Information: Cloud Computing For Dummies (2010)

Tuesday, March 6, 2012

Computing on the Cloud

What is cloud computing? Cloud computing is the next stage in the evolution of the Internet. The cloud in cloud computing provides the means through which everything — from computing power and computing infrastructure to applications, business processes, and personal collaboration — can be delivered to you as a service wherever and whenever you need it.

Cloud computing is offered in different forms:
✓ Public clouds
✓ Private clouds
✓ Hybrid clouds, which combine both public and private

In general, the cloud — like its cumulus namesake — is fluid and can easily expand and contract. This elasticity means that users can request additional resources on demand and just as easily deprovision (or release) those resources when they’re no longer needed. This elasticity is one of the main reasons individual, business, and IT users are moving to the cloud.

In the traditional data center it has always been possible to add and release resources. However, this process couldn’t be done in an automated or self-service manner.

This evolution to cloud computing — already underway — can completely change the way companies use technology to service customers, partners, and suppliers. Some businesses already have IT resources almost entirely in the cloud. They feel that the cloud model provides a more efficient, cost-effective IT service delivery.

This doesn’t mean that all applications, services, and processes will necessarily be moved to the cloud. Many businesses are much more cautious and are taking a hard look at their most strategic business processes and intellectual property to determine which computing assets need to remain under internal company control and which computing assets could be moved to the cloud.

Source of Information: Cloud Computing For Dummies (2010)

Saturday, March 3, 2012

Enable AirPrint Support for Shared Printers

If you happen to have an HP ePrint-compatible printer already on your network, you can skip ahead to Print From an AirPrint-Capable App. Otherwise, you must perform a preliminary step to make printing from your iPad possible: install a third-party sharing tool.

Fundamentally, all these utilities do approximately the same thing: they make local and network printers visible, and available, to AirPrint over your Wi-Fi network. However, they go about it in somewhat different ways, and some of them offer useful additional features too.


Mac OS X AirPrint Utilities:
• AirPrint Activator: This no-frills utility lets you make any printer that your Mac can see available to AirPrint. Unfortunately, it also requires you to remove, re-add, and share each printer manually — a tedious process, especially if you have several printers. AirPrint Activator requires both Mac OS X 10.6.5 Snow Leopard or later and iTunes 10.1 or later. (http://netputing.com/airprintactivator/, donationware)

• FingerPrint: After installing FingerPrint, any printers that are already shared on your network become available to AirPrint — no additional configuration required. In addition to printing, it also lets you send any document from your iPad to your Dropbox, to a specified folder on your Mac, or, for graphics, directly to iPhoto — wirelessly, without having to connect via iTunes. (http://www.collobos.com/, $7.99)

• Printopia: Printopia lets AirPrint work with any printer connected to your Mac, whether or not it’s already shared. Like FingerPrint, it can also send documents directly to your Dropbox or to a folder on your Mac. And, although it lacks a direct-to-iPhoto feature, it adds another useful capability—“printing” to a PDF or PNG file on your Mac. And, it even works with Mac OS X 10.5 Leopard. For all these reasons it’s my current favorite of these utilities. (http://ecamm.com/mac/printopia/, $9.95)


Windows AirPrint Utilities:
• AirPrint Installer: This utility requires iTunes 10.1 or later and works with printers that you’ve shared from your Windows PC. (http://jaxov.com/2010/11/download-airprint-installer-forwindows-7-xp-vista/, free)

• AirPrint Activator for Windows: The description is in German, but the software is available in English, German, and French. (http://www.iblueray.de/viewforum.php?f=21, free)

Source of Information: Take Control of Working with Your iPad (TidBITS, 2011)