From the provider’s point of view, the whole point of cloud computing is to achieve economies of scale by managing a very large pool of computing resources in a highly economic and efficient fashion.
An important point to note is that the Y-axis of user populations is logarithmic. That means the curve is much less steep than it would be on a linear scale of equal steps. If we drew it on a linear scale, we'd need miles of paper.
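To see why the log scale matters, here's a minimal back-of-the-envelope sketch in Python. The axis range (50 million users at the top) and the 1 mm per user figure are illustrative assumptions, not numbers from the figure itself:

```python
import math

# How long would the Y-axis be on a linear scale? Assumed figures:
# the axis tops out at 50 million users and we allot 1 mm per user.
users_max = 50_000_000           # top of the Y-axis (assumption)
mm_per_user = 1                  # linear scale: 1 mm per user (assumption)

linear_km = users_max * mm_per_user / 1_000_000   # mm -> km
print(f"Linear scale: about {linear_km:,.0f} km of paper")  # ~50 km

# A logarithmic scale needs only one equal step per power of ten.
decades = math.log10(users_max)
print(f"Log scale: about {decades:.0f} equal steps")        # ~8 steps
```

Fifty kilometers of paper versus eight tick marks: that's the whole case for the log scale.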
We deliberately didn’t put units on the X-axis. Instead, note the following:
✓ One end of the X-axis shows data center costs of $1 to $50 per user per annum. That reflects, for example, what Google charges for Google Apps, or even the cost of providing free, ad-supported email (from Google, Microsoft, or Yahoo). The cost per user is extremely low.
✓ The other end of the X-axis shows data center costs of $1,000 to $5,000 per user per annum. That might be the cost of, for example, providing a print server that's almost always idle.
Basically, on the left you have very efficient use of computing resources and, on the right, very inefficient use. Points on the line indicate the kind of computing resources that serve specific group sizes:
✓ Inefficient servers: This is a 1:1 user-to-server ratio (or close to it). Managing a single server in a data center costs thousands of dollars per year, and this is as expensive as computing ever gets per user (see the cost sketch after this list).
✓ Virtual machines: Applications and user populations that can't use a whole server get virtualized (split among several virtual servers on one physical machine). This is efficient (it makes better use of underused servers) but also carries a cost: virtualization imposes significant overhead, as does running multiple guest operating systems.
✓ Efficient servers (and small clusters): User populations from the hundreds up to about 1,000 can be served reasonably efficiently by one or a few servers, provided each server runs only one application; such servers can be highly utilized, yielding a relatively low cost per user.
✓ Mainframes and large Unix clusters: They're shown separately on the grid only for the sake of space. Both can handle very large database applications serving thousands to tens of thousands of users.
✓ Grids: From hundreds of thousands up to a million users, you're in the territory where Software as a Service (SaaS) vendors such as Salesforce.com operate. Business applications offered by SaaS vendors present a thorny scaling problem because they're transactional database applications. The main Salesforce.com CRM application runs on a grid of about 1,000 computers.
✓ Large grids: Concurrent user populations above one million. This is still a very heavy workload and is only possible via a scale-out approach (which lets a single workload expand by adding more identical, inexpensive resources) on a grid. Twitter and LinkedIn are examples.
✓ Massively scaled grids: This is for user populations in the tens of millions. For example, each Google search query is resolved by a purpose-built grid of up to 1,000 servers, and Google routes queries to many such grids. Yahoo also has a massively scaled-out email system; it caters to more than 260 million users, tens of millions of whom may be active at any one time.
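To make the two ends of the X-axis concrete, here's a rough cost-per-user calculation in the same spirit. Every dollar figure and user count below is an illustrative assumption, not an actual vendor number:

```python
# Back-of-the-envelope cost per user per annum for the scenarios on the
# curve. All figures are assumptions, chosen only for illustration.
scenarios = {
    # name: (annual data-center cost in $, users served)
    "dedicated print server":    (3_000, 1),            # ~1:1 user-to-server
    "efficient single server":   (3_000, 500),          # one app, hundreds of users
    "SaaS grid (1,000 servers)": (3_000 * 1_000, 300_000),
    "massively scaled grid":     (3_000 * 10_000, 30_000_000),
}

for name, (annual_cost, users) in scenarios.items():
    per_user = annual_cost / users
    print(f"{name:27s} ${per_user:>10,.2f} per user per annum")
```

Under these assumed numbers, the print server lands at $3,000 per user per annum (the expensive end of the axis), while the massively scaled grid lands at $1 (the cheap end), even though every scenario uses comparably priced servers.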
The same servers used in corporate environments can be used just as easily in scaled-out arrangements, where workloads aren't mixed at all. The reduction in per-user costs doesn't, at the moment, come from using different computer equipment or different operating systems: It comes from running a small number of workloads (or even just one) and scaling each up as much as possible. That's how cloud computing reduces costs so dramatically.
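The utilization argument in that paragraph can be sketched numerically, too. The 10 percent and 80 percent utilization figures below are assumptions, picked only to show the shape of the effect:

```python
import math

# Why one workload at scale beats a mixed workload on identical hardware.
# Capacity, cost, and utilization figures are illustrative assumptions.
server_capacity = 100            # arbitrary work units per server
cost_per_server = 3_000          # assumed annual cost, as above

# Mixed corporate workload: 20 apps, each on its own server at ~10% load.
mixed_servers = 20
total_work = mixed_servers * server_capacity * 0.10

# One scaled-out workload: the same total work packed onto servers
# driven to ~80% utilization.
scaled_servers = math.ceil(total_work / (server_capacity * 0.80))

print(f"Mixed workload: {mixed_servers} servers, "
      f"${mixed_servers * cost_per_server:,} per year")
print(f"Scaled-out:     {scaled_servers} servers, "
      f"${scaled_servers * cost_per_server:,} per year")
```

Same hardware, same total work; the scaled-out arrangement needs 3 servers instead of 20 simply because nothing sits idle.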
No corporation that runs a mixed workload is ever going to achieve cloud computing’s economies of scale.
Source: Cloud Computing For Dummies (2010)