Wednesday, August 31, 2011

Surfing the Web

Browsers have actually been around for a long time, but were never really called browsers. Instead, they were called text readers or read-only applications, because what these programs did was open simple files of text and let someone read them—like a book. These programs were on computers called dumb terminals.

It seems odd to call a computer dumb, but compared to the computers used today, these computers weren’t very smart. All they did was display information from big, monster servers called mainframes that were the size of an average living room. These servers weren’t all that smart, either, but they were good enough to take a lot of information and help businesspeople and scientists make sense of it.

The problem was that all these dumb terminals could only talk to the servers they were connected to. There was an Internet back then, but there was no World Wide Web; Internet traffic was mainly limited to messaging and file transfers, using tools such as Usenet, Archie, or Gopher.

Then, in 1990, Tim Berners-Lee, a scientist working at CERN in Switzerland, had a brilliant idea. What if you could read files on any computer connected to the Internet any time you wanted? You could put those files on a special server that had one job—showing those files to anyone who asked for them. Berners-Lee, who was later knighted for this work, knew the idea would only work if all of these files could be read by any computer. File compatibility was (and still can be) a huge obstacle for users to overcome.

So Berners-Lee suggested that people use HyperText Markup Language (HTML) files. Because they are essentially ASCII text files, HTML files could be read by any computer, would let people create any content they wanted, and would have hyperlinks—something that would revolutionize the way people absorbed material.

Browsers came about as instruments to read all of these new HTML files. As with the dumb terminals, Berners-Lee just wanted people to read information in files quickly—not to change their content. So he and his colleagues figured out a way to make a program that did nothing but read and display HTML. Other people got involved and made the application read more complicated HTML code.

People began reading the information on Web pages and calling the process of reading those pages browsing—and that's where the browser name comes from. Later, when the general public started using the Web, the verb browsing morphed into surfing. The name browser stuck, though, because it more accurately describes what this type of application does. You can call any program like this a browser, of course. A program that does nothing but show pictures could be a picture browser. But these days the name is synonymous with Web browsers, such as the most famous open-source browser today: Firefox.

Source of Information : Cengage-Introducing Fedora 2010

Saturday, August 27, 2011

Introducing Firefox

In the olden days of the Internet (all of 17 years ago), life was uncomplicated. The simple concept of hyperlinks on a text page was just emerging. Some links went to other pages; others went to files to be downloaded—perhaps a picture or two. Browsers such as Lynx only had to contend with text—life was good.

In 1993, everything changed forever. The National Center for Supercomputing Applications (NCSA) at the University of Illinois created Mosaic—a browser capable of displaying text and pictures. Suddenly, users could see illustrated Web pages, which facilitated the flow of information. A year later, one of the Mosaic developers left NCSA and launched his own browser—Netscape Navigator 1.1.

Since then, the capability of browsers has grown even more in response to more complex content. Need to hear a sound file? The browser will take care of it. Need to view a Flash animation? Not only will a browser display it for you, but the browser can also automatically go get the required viewer if you don’t already have it.

These sophisticated features are a long way from the early Internet days, that’s for sure.

One of the direct descendants of that early Netscape browser is Firefox, a cross-platform open-source browser that has taken the desktop world by storm, no matter what the platform. Even on Windows, traditionally the bailiwick of Internet Explorer, Firefox has a 10+ percent browser share, which may not seem like a lot, except when you consider it has only been around for a couple of years.

What makes Firefox special is its speed, stability, and security. Unlike Internet Explorer, which is tied closely to the Windows operating system on a code level, Firefox is a separate application. So, even if someone can figure out how to maliciously hack Firefox, it won’t damage anything beyond the browser. When Internet Explorer is hacked, all of Windows can become vulnerable.

Another unique feature of Firefox is the available extensions. Because Firefox is open, developers can create small add-on programs that can handle a variety of tasks, like displaying newsfeeds, synchronizing a user’s settings with any Firefox browser they use, blocking advertising . . . it’s a long list.

Finally, Firefox offered a tabbed interface for quite some time before Internet Explorer picked it up. Tabs let you display multiple pages in a single window, a very useful feature for power surfers.

Source of Information : Cengage-Introducing Fedora 2010

Tuesday, August 23, 2011

Fedora Software Repositories

One of the brilliant features of Fedora is that it comes on a single CD. Not every operating system can brag about that. Indeed, many Linux distributions are delivered to users on multiple discs.

Formerly, the strategy for delivering a Linux distribution to your home or office was not very complex: all of the applications a user needed, or might ever need in the future, were delivered on one complete set of CDs (or, later, one or two DVDs). The advantage was that once you downloaded and burned all of those CDs, you were all set to run that distribution without having to download additional software later. But a single CD image is roughly 700MB. That's a lot of data, even for today's broadband connections. Having to do this for three, five, or even seven CDs is a very time-consuming undertaking for most users, unless they are willing to pay to have those same CDs delivered by mail.

Fedora flips the model around a bit. Working with the knowledge that a majority of Internet users now have broadband access, the Fedora Project has decided to send out just the absolutely necessary Fedora applications on one CD and leave the rest online on servers scattered around the world for users to download as needed.

This may seem inefficient, since you must have Internet access of some kind to make this work. But consider that most operating systems update themselves over the Internet anyway, so online access is already needed just to keep Fedora up to date. And downloading and burning one CD is a lot faster than downloading and burning several. Delivering a "core" distribution also gives users much greater flexibility in picking and choosing which software applications they want on their system. It also means their hard drives won't be loaded with stuff they don't need.

To give you an idea of just how much more software is available, consider these numbers: a standard installation of Fedora has around 1,100 packages. Currently, there are over 12,000 total application packages available.

Fedora, like its sibling Red Hat Enterprise Linux, organizes its software in repositories. There are three primary repositories for Fedora, each holding a specific class of software.

The three official Fedora repositories are pretty clearly named, but let's walk through them anyway (a quick way to list them from the command line follows the list).
» Fedora. This repository holds all of Fedora’s officially supported software. Everything that Fedora must have to actually run is in here, and all of the software is under a free software license. Additional applications in this repository include AbiWord, Evolution, Firefox, Gaim, OpenOffice.org, and Thunderbird.

» Updates. This repository contains any software that has been updated because of a bug or security fix.

» Source. All of the source code packages for Fedora software are found in this repository.
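
If you want to see which of these repositories your own system actually has enabled, yum can list them. A minimal sketch (run from a Terminal; it assumes a default Fedora install where only the official repositories have been set up):

   # Show every repository yum knows about, enabled or disabled
   yum repolist all

   # Show only the enabled repositories (typically fedora and updates)
   yum repolist enabled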


Adding Repositories
These are not the only repositories that Fedora can use. There are many community-run repositories on the Internet for Fedora, each holding specialized software that the Fedora Project does not want to host.

There are many software applications out there that can run on Linux, but because their licenses are completely proprietary, some Linux distributions won’t touch them with a 10-foot pole. By virtue of its Linux origins, Fedora’s makers feel obligated to abide by this philosophy, keeping totally commercial packages away from Fedora.

But there is an important distinction here. While the Fedora Project does not release commercial software with Fedora, that does not preclude letting users have access to such a repository after they have downloaded and installed Fedora. A fine distinction, to be sure, but it gives users the advantage of making their own choices about what software they want to use.

All of the package managers in Fedora work off a common set of repository definitions stored on your PC. From these files, kept in the /etc/yum.repos.d directory (one .repo file per repository), the package managers know which repositories to check for new software and whether updates are available for software already installed on your system. If you want these managers to peruse another repository, you will need to add a .repo file with the new information.

Fedora users in the know are aware of three such third-party repositories that will get you access to the latest in cutting-edge software for Fedora: Dribble, Freshrpms, and rpm.livna.org. Fortunately, you won't have to add these repositories one at a time. Instead, you can use one command to add RPM Fusion, the project formed by the merger of those three repositories, to your system's repository configuration, which accomplishes the same thing.

This operation is done using a command-line application. Command-line applications are always run in a Terminal window, one of the plainest and most versatile tools found in Fedora. To start Terminal, click on the Applications | System Tools | Terminal menu command.
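
The exact command comes from the RPM Fusion project itself, so check rpmfusion.org for the current instructions for your Fedora release; a commonly cited form that enables both the free and nonfree repositories looks roughly like this (su will prompt you for the root password):

   su -c 'rpm -Uvh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-stable.noarch.rpm'

   # Once the release packages are installed, yum picks up the new repositories automatically:
   yum repolist enabled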

Source of Information : Cengage-Introducing Fedora 2010

Thursday, August 18, 2011

How Fedora Installs Software

In the Windows world, there is usually one way to install software: clicking on an installation application that starts up and runs the whole setup for you from start to finish.

In Fedora, as in most Linux distributions, there are three methods of software installation. Admittedly, having just one way to install software sounds attractive and less confusing, but the one-size-fits-all installation service comes with a potentially steep price: Windows installation routines can often overwrite important underpinnings of the operating system for the sake of the application that's currently being installed. This is good for the application being installed, but potentially very bad for any pre-existing application on your system that was using that same section of Windows code.

In Fedora, all three installation methods take great pains to install applications using only what's already in Fedora. If something the application needs is not already installed in Fedora, it has what is known as a dependency. The installing user (that would be you) will be told about any dependencies and asked how to proceed. Here is a description of the three installation methods:

» Self-Contained Installation Program. This methodology is very much like the method used by Windows. A special installation application is run that automagically handles the application’s setup on your PC. This type of installation is not common on Fedora machines, though some of the larger consumer applications (OpenOffice.org or Firefox) can be installed in this manner. There is one important difference from Windows: no existing software is changed by the installation application. Dependencies are usually handled well, but it’s not foolproof.

» Compiled from Source. Remember how any user can get to the source code of any free software application? Well, once you have that code, you can perform what's known as a compilation to turn that code (which only humans, at least the smart ones, can read) into something the PC can read and work with. Software compilation isn't hard, but it can be time-consuming, and dependencies are not automatically handled. (A rough sketch of the usual commands follows this list.)

» Package Management. This method is unique to UNIX-based systems. All of the files and settings needed to install and run an application are included in one package. Fedora uses RPM-based, or .rpm, packages. (Other Linux distributions, such as Debian or Ubuntu, use Debian-based, or .deb, packages.)
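
To make the compile-from-source route a little more concrete, here is a rough sketch of the classic build sequence. The exact steps vary from project to project, and some-app-1.0.tar.gz is just a hypothetical example archive:

   tar xzf some-app-1.0.tar.gz     # unpack the source archive
   cd some-app-1.0
   ./configure                     # check for build dependencies and prepare the build
   make                            # compile the source into a runnable program
   su -c 'make install'            # copy the finished program into place (as root)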

As you may have guessed, package management is the preferred method of software installation in Fedora. Package installation is actually performed by an application known as a package manager. It helps keep track of all of the applications that are already installed on your PC and also helps keep track of those dependencies we mentioned. If you install a package that needs some additional software tools to properly operate on your Fedora system, it’s the package manager that will figure out what other packages you need.

In Fedora, there are actually three package managers that will assist you in your installation needs:

» PackageKit. This robust graphical package manager lists every package available for Fedora, which lets you search for software applications from a very big list. Applications are categorized by type, status on your system (installed or not), or origin.

» Software Update. Another graphical tool, this package manager has one job to do: keep your system as up to date as possible. If there’s a new version of any of your installed applications out there, Software Update will know about it and flag it for you to download and install.

» yum. The core package manager for Fedora, this command-line application makes getting new packages as easy as typing one line of text and pressing the Enter key (see the short example after this list).
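
To give you a taste of how simple that one line can be, here is a minimal sketch of everyday yum use. The commands are run as root (or via su -c), and gimp is just an example package name:

   yum search photo editor     # search package names and descriptions for keywords
   yum install gimp            # install a package, pulling in any dependencies it needs
   yum check-update            # list installed packages that have newer versions available
   yum update                  # download and install all available updates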

Each of these three package managers is configured to find all of the packages from Fedora’s package repositories. In the next section, we’ll walk through repositories and how they work.

Source of Information : Cengage-Introducing Fedora 2010

Thursday, August 11, 2011

Application virtualization layer

The application layer consists of operating systems and business applications that run in Hyper-V partitions. Again, not all software behaves well in a virtualized environment. However, Microsoft puts candidate applications and operating systems through a certification process. Applications that pass are listed at www.windowsservercatalog.com.


System Center Virtual Machine Manager
Once you select the right applications, you have to manage them. The traditional method of sliding an installation CD into the server's CD/DVD drive is not a good practice for a virtualized environment. To take full advantage of the fast server build and process migration capabilities inherent in Microsoft's virtualization strategy, you need to manage your VMs and hardware resources from a central console. Out-of-the-box management of your VM environment is possible with Server Manager. However, as the sophistication of your virtualization environment increases, management becomes more challenging. Managing more complex VM and host environments is the role of System Center Virtual Machine Manager (VMM).

In a simple VMM configuration, the components of VMM communicate with each other and with host systems via the VMM Server. The VMM Admin Console is an MMC snap-in that allows the performance of the following:

» Configuring the VMM environment
» Creating, deleting, starting, and stopping VMs
» Converting physical servers to virtualized systems
» Monitoring VMs

The VMM Library contains the profiles used to create VMs, including templates, virtual disks, and CD/DVD ISO images. Besides the VMM Server, the library is also accessed by the VMM Self-Service Portal. The portal is used by IT staff to create and manage VMs using predefined profiles. Portal rights and permissions are configurable to control who can access the portal and what they can do there.

The VMM Agent resides on Hyper-V hosts. The VMM Server uses the agent to effect changes to the virtualization environment and to monitor its health.

Centralized management also has to cope with heterogeneous environments. For example, some datacenters might house both Microsoft and VMware VMs. To ensure a single management solution, Microsoft has designed VMM to support the following:

» MS Virtual Server
» Hyper-V
» VMware ESX

VMM also supports PowerShell scripting. In fact, any operation you perform with VMM, including ESX operations, can be automated with PowerShell scripts.

Other capabilities supported by VMM include:

» Analysis of which server is the best choice for a new VM
» Assessment of the impact of a workload migration
» Automation of workload migration
» Automation of VM cluster placement when high availability is specified

Finally, VMM integrates with System Center Operations Manager (SCOM).


System Center Operations Manager
SCOM allows the administrator to monitor physical and virtual environments from a single console, including:

» Overall performance
» Processor
» Memory
» I/O


System Center Data Protection Manager
Data Protection Manager (DPM) plays a key role in the dynamic datacenter by backing up critical systems. Using the Microsoft Volume Shadow Copy Service (VSS), it is capable of performing block-level synchronization of VMs in as little as 2 to 3 minutes, with a repeat cycle as short as every 30 minutes. In addition to VMs, DPM also backs up physical (nonvirtual) machines.

To enable backups, a DPM agent is placed on each target device. The nature of the desired backup and restore determines where the agent is installed. Agent installation options include:

» In the VM—This allows backup of the virtual workload only. The administrator can restore the backups to the same VM or to another protected location.

» On the virtual host—Placing an agent on the virtual host enables backup of the VMs themselves, allowing an administrator to restore an entire VM if necessary.

Source of Information : Elsevier-Microsoft Virtualization Master Microsoft Server Desktop Application and Presentation

Monday, August 8, 2011

Hardware virtualization layer

The hardware virtualization layer is created by installing Microsoft Hyper-V on one or more compatible hardware platforms. Hyper-V, Microsoft’s entry into the hypervisor market, is a very thin layer that presents a small attack surface. It can do this because Microsoft does not embed drivers. Instead, Hyper-V uses vendor-supplied drivers to manage VM hardware requests.

Each VM exists within a partition, starting with the root partition. The root partition must run Windows Server 2008 x64 or Windows Server 2008 Server Core x64. Subsequent partitions, known as child partitions, usually communicate with the underlying hardware via the root partition. Some calls directly from a child partition to Hyper-V are possible using WinHv (defined below) if the OS running in the partition is "enlightened." An enlightened OS understands how to behave in a Hyper-V environment. Communication is limited for an unenlightened OS partition, and applications there tend to run much more slowly than those in an enlightened one. The performance penalty is largely due to the emulation software required to service the unenlightened partition's hardware requests.

The Hyper-V components responsible for managing VM, hypervisor, and hardware communication are the VMBus, VSCs, and VSPs. These and other Hyper-V components are described below.

» Advanced Programmable Interrupt Controller (APIC)—An APIC allows priority levels to be assigned to interrupt outputs.

» Hypercalls—Hypercalls are made to Hyper-V to optimize partition calls for service. An enlightened partition may use WinHv or UnixHv to speak directly to the hypervisor instead of routing certain requests through the root partition.

» Integration Component (IC)—An IC allows child partitions to communicate with other partitions and the hypervisor.

» Memory Service Routine (MSR)

» Virtualization Infrastructure Driver (VID)—The VID provides partition management services, virtual processor management services, and memory management services.

» VMBus—The VMBus is a channel-based communication mechanism. It enables interpartition communication and device enumeration. It is included in and installed with Hyper-V Integration Services.

» Virtual Machine Management Service (VMMS)—The VMMS is responsible for managing the state of all VMs in child partitions.

» Virtual Machine Worker Process (VMWP)—The VMWP is a user-mode component of the virtualization stack. The VMMS spawns a separate worker process for each running VM; the worker process provides management services from the root partition to the guest operating system in its child partition.

» Virtualization Service Client (VSC)—The VSC is a synthetic device instance residing in a child partition. It uses hardware resources provided by VSPs. A VSC and VSP communicate via the VMBus.

» Virtualization Service Provider (VSP)—The VSPs reside in the root partition. They work with VSCs to provide device support to child partitions over the VMBus.

» Windows Hypervisor Interface Library (WinHv)—The WinHv is a bridge between a hosted operating system’s drivers and the hypervisor. It allows drivers to call the hypervisor using standard Windows calling conventions when an enlightened environment is running within the partition.

» Windows Management Instrumentation (WMI)—The WMI exposes a set of APIs for managing virtual machines.

Source of Information : Elsevier-Microsoft Virtualization Master Microsoft Server Desktop Application and Presentation

Saturday, August 6, 2011

Building a Dynamic Datacenter

Evolution of IT to a strategic asset is impossible unless your datacenter is configured and managed to react quickly to changing business needs. In addition to agility, the cost of maintaining a mature datacenter should fall well below that associated with traditional datacenter infrastructures. Finally, the dynamic datacenter must minimize business impact when critical systems fail. Server virtualization plays an important role in reaching these objectives by providing the following:

» Increasing utilization of server processing, memory, input-output (I/O), and storage resources

» Decreasing server sprawl by aggregating applications on fewer hardware platforms

» Improving IT service levels by enabling quick deployment of new servers and operating systems while providing the means to recover failed systems well within maximum downtime constraints

» Supporting legacy systems by allowing older software solutions to run on newer hardware platforms

» Streamlining management and security

» Reducing application compatibility challenges, whether with hardware or with other applications

» Improving time to recovery during business continuity events


Virtualization layers in a dynamic datacenter
Microsoft’s vision of a virtualized datacenter consists of three layers above the physical layer. Positioned immediately above the hardware layer is Hyper-V, abstracting hardware resources from future VMs. Above Hyper-V is the application layer in which virtualization-capable operating systems and applications come together to form a collection of VMs. The Model layer comes last, providing the tools and processes necessary to bring together the other three layers, configure them in a standard way, and create a cohesive processing environment. Tools used in the Model layer include System Center Operations Manager (SCOM), System Center Virtual Machine Manager, System Center Configuration Manager (SCCM), and Visual Studio development tools.

Source of Information : Elsevier-Microsoft Virtualization Master Microsoft Server Desktop Application and Presentation

Wednesday, August 3, 2011

Microsoft’s IT Maturity Model

Microsoft’s maturity model consists of four levels: Basic, Standardized, Rationalized, and Dynamic. Each level above Basic steps IT closer to the role of strategic asset.

» Basic—This is a fire-fighting mode. Most organizations find themselves at this level, with IT agility and cost containment constrained by complete reliance on manual, uncoordinated implementation and management of resources.

» Standardized—IT managers, realizing the futility of using manual processes to dislodge IT from its traditional role of electronic data processing, begin to automate critical processes to enable the business to compete more effectively. They also implement enterprise monitoring and management tools.

» Rationalized—Technology begins to remove itself as a business constraint. Instead, IT is able to quickly react to changes in strategic or operational business requirements. IT teams no longer struggle to stay ahead of the changing business environment. In fact, they actually begin to drive business process improvements.

» Dynamic—The final stage in Microsoft’s model is IT’s arrival as a strategic asset. No longer perceived as a hole into which management dumps budget, it is seen by business units as a partner in the development and maintenance of competitive advantage.

Two elements are worth adding to the model. The first is our addition of "Continuous Improvement." Arriving at the final maturity level should never result in a sense of completion, secure in the knowledge that we have "arrived," while the business moves forward and our position as a strategic asset rests on an increasingly unstable foundation. There is always room to improve, with new opportunities and challenges arising every day.

The second is the triumvirate that forms the foundation upon which an organization drives IT improvements: people, process, and technology. It is not enough to throw virtualization and additional management tools at a struggling IT environment. People must be convinced of the need for change. Maturing an IT organization requires more than intent. It also requires changes in IT culture.

Processes are often documented and never reviewed again, even if they are followed religiously. Managers and staff should regularly assess each process by asking the following questions:

1. Is every task in the process necessary? Are we doing things because of reasons we can no longer remember?

2. Are we doing enough to meet customer or stakeholder expectations?

3. Are we doing more than expected, incurring unnecessary costs?

4. Can someone else do it cheaper or with better outcomes?

The final element is technology. One of the building blocks of a dynamic IT organization is the proper use of virtualization. To help organizations move along the maturity continuum, Microsoft provides virtualization across all components of the IT infrastructure. Using these tools helps arrive at a dynamic datacenter with centralized, optimized desktop management.

Source of Information : Elsevier-Microsoft Virtualization Master Microsoft Server Desktop Application and Presentation