Friday, December 31, 2010

Windows Server 2008 R2 Disks

Windows Server 2008 R2 enables administrators to define how disks are presented and used within the system. Depending on a disk's type and size, administrators can determine which disk and volume types are best suited to their systems.

Windows disks can be defined as basic or dynamic disks. Furthermore, these same disks can be defined as Master Boot Record (MBR) or GUID Partition Table (GPT) disks. A simple way to differentiate between the first two is that basic disks support only simple volumes, whereas dynamic disks allow logical volumes to be created across multiple physical disks. Choosing between MBR and GPT disks depends on the size of the disk, as well as on how many partitions you will need to create on it.

Windows Server 2008 R2 also supports virtual hard disks (VHDs) for Hyper-V virtual machines. VHDs can now also be created and mounted directly within the Windows host operating system, regardless of whether the Windows Server 2008 R2 system is hosting the Hyper-V role.


Master Boot Record Disks
Master Boot Record (MBR) disks utilize the traditional disk configuration. The configuration of the disk, including partition configuration and disk layout, is stored on the first sector of the disk in the MBR. Traditionally, if the MBR became corrupted or overwritten, the data on the disk became inaccessible. MBR disks are limited to four primary partitions, or three primary partitions plus a single extended partition that can contain several logical drives. Creating an MBR disk gives administrators a more compatible disk that can easily be mounted and/or managed across different operating system platforms and third-party disk management tools.


GUID Partition Table (GPT) Disks
GPT disks were first introduced in Windows with Windows Server 2003 Service Pack 1. GPT is recommended for disks that exceed 2TB in size, which is beyond what MBR partitioning can address. GPT disks in Windows can support up to 128 primary partitions, which is useful when administrators are leveraging large external disk arrays and need to segment data for security, hosting, or distributed management and access. GPT disks are recognized only by Windows Server 2003 SP1 and later Windows operating systems; an older operating system or a third-party MBR disk management tool sees only a single protective partition and cannot access the data.


Basic Disk
A Windows disk is defined as a basic or a dynamic disk regardless of whether the disk is an MBR or a GPT disk. A basic disk supports only simple volumes, or volumes that exist on a single disk and partition within Windows. Basic disks contain no fault tolerance managed by the Windows operating system, but they can be fault tolerant if the disk presented to Windows is managed by an external disk controller and is configured in a fault-tolerant array of disks. Basic disks are easier to move across different operating systems and usually are more compatible with Windows and third-party disk and file system services and management tools. Basic disks also support booting to different operating systems stored in separate partitions. Most important, if the disk presented to Windows comes from a SAN with multiple paths to the disk, a basic disk provides the most reliable operation; an alternate path to the disk might not be recognized if the disk is defined within Windows as a dynamic disk.


Dynamic Disk
Dynamic disks extend Windows disk functionality when multiple disks must be managed within a single Windows Server 2008 R2 system. Windows administrators can configure dynamic disks to host volumes that span multiple partitions and disks within that system. This allows administrators to build fault-tolerant and better-performing volumes when RAID controllers are not available or when a number of smaller disks need to be grouped together to form a larger volume.

In some server deployments, dynamic disks are required because the disk controllers do not provide the performance, fault tolerance, or volume sizes needed to meet the recommended system specifications. In these cases, dynamic disks can be used to create larger volumes, fault-tolerant volumes, or volumes that read and write data across multiple physical disks to achieve higher performance and higher reliability. Dynamic disks are managed by the operating system using the Virtual Disk Service (VDS).
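
For illustration, converting disks to dynamic and building a mirrored volume can be scripted with the built-in diskpart utility. The following Python sketch simply drives diskpart with a generated script; the disk numbers are placeholders, and it assumes two empty, non-system data disks and an elevated session.

import subprocess, tempfile, os

# Hypothetical example: convert disks 1 and 2 to dynamic and create a
# mirrored volume across them. Disk numbers are placeholders; run only
# against empty, non-system disks and with administrative rights.
script = """\
select disk 1
convert dynamic
select disk 2
convert dynamic
create volume mirror disk=1,2
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

try:
    subprocess.run(["diskpart", "/s", script_path], check=True)
finally:
    os.remove(script_path)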


Virtual Hard Disks
Virtual hard disks (VHDs) are files that virtual machines use as emulated Windows disks. Virtual hard disks can be created on an existing Windows Server 2008 R2 system using the Hyper-V Management console, or they can be created directly using the Disk Management console. A VHD is created on the Windows host system as a file with a .vhd extension on an existing Windows volume. VHDs can be created as fixed size or dynamically expanding. A fixed-size VHD of 10GB equates to a 10GB file on the host server volume, whereas a dynamically expanding VHD file grows only as files are stored on it. VHD files can easily be moved across servers and between virtual machines, and they can also be expanded quite easily, provided that the VHD is not in use and there is ample free space on the host volume. Unlike in previous releases, which required scripts to mount the file, VHD files can be attached directly to a Windows Server 2008 R2 host using the Disk Management console. This added functionality is a needed improvement that complements the integrated VSS Hyper-V backup functionality included with Windows Server Backup and available to third-party backup software vendors.
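
On Windows Server 2008 R2, creating and attaching a VHD can likewise be scripted through diskpart. The Python sketch below is illustrative only; the file path and 10GB size are placeholders, and it assumes an elevated session on a host with a C:\VHDs folder.

import subprocess, tempfile, os

# Hypothetical example: create a 10GB fixed-size VHD and attach it so it
# appears as a new disk in the Disk Management console.
script = """\
create vdisk file="C:\\VHDs\\data01.vhd" maximum=10240 type=fixed
select vdisk file="C:\\VHDs\\data01.vhd"
attach vdisk
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    script_path = f.name

try:
    subprocess.run(["diskpart", "/s", script_path], check=True)
finally:
    os.remove(script_path)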

Source of Information : Sams - Windows Server 2008 R2 Unleashed

Wednesday, December 29, 2010

File System Access Services and Technologies

Windows Server 2008 R2 provides administrators with many different options for presenting file data to end users. These, of course, include the traditional file sharing methods, but also include presenting file data using web services. By default, Windows Server 2008 R2 systems running the File Services role support Windows 2000 and later clients. Supporting legacy Windows clients, UNIX clients, or legacy Apple Mac clients might require additional services and security modifications to the data. Several of the options available for presenting file data to end users are covered in the following sections.


Windows Folder Sharing
This is the traditional and most commonly used method of accessing server data, using the Server Message Block (SMB) protocol over TCP/IP. Windows systems, many UNIX systems, and current Apple Mac systems can access Microsoft servers using this protocol. Data is accessed using a Universal Naming Convention (UNC) path of the form \\server\sharename.
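
Because the share is just a path, any script or tool that understands UNC paths can work against it directly. A minimal Python sketch, using placeholder server, share, and file names:

import shutil

# Placeholder UNC path; replace \\server\sharename with a real share.
source = r"\\server\sharename\handbook.docx"
destination = r"C:\Temp\handbook.docx"

# Copies the file over SMB just as if it were on a local disk.
shutil.copy(source, destination)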


Distributed File System (DFS) Namespaces and Replication
This method utilizes Windows folder sharing under a unified namespace. The main difference between standard Windows Server folder sharing and DFS shares is that the actual server name is masked behind a unified name, commonly the Active Directory domain name; in some cases, a single server name and share can be used to access data stored on several servers. With DFS, the underlying data can also be replicated or synchronized between servers. One limitation is that the client accessing the DFS namespace must be DFS-aware in order to take advantage of these benefits and, in some cases, even to locate and access the data.


WWW Directory Publishing
Using this method, administrators can make folders and files available through a web browser for read and/or write operations. This can be a useful way to make files available to remote users who have only Internet access. Common types of files published on websites include employee handbooks, time sheets, vacation requests, company quarterly reports, and newsletters. Additionally, file publishing through the web can be performed using Windows SharePoint Services and Microsoft Office SharePoint Server. Microsoft Exchange 2007 and 2010 also enable administrators to provide access to designated file shares through the Outlook Web Access interface.


File Transfer Protocol Service
The File Transfer Protocol (FTP) service is one of the oldest services available for transferring files between systems. FTP is still commonly used to make large files available and to give remote users and customers alike a simple way to send data to the organization. FTP is very efficient, which is why it still has a place in today’s computer and network infrastructure. Standard FTP, however, is not secure; credentials and data are sent in clear text, so it should only be used over trusted, monitored connections. FTP is compatible with most web browsers, making it easy to include links to FTP data within websites to improve file transfer performance. Common uses of FTP sites include distributing company virtual private network (VPN) clients, software packages, and product manuals, and providing a repository where customers and vendors can transfer reports, large databases, and other types of data.
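
Because FTP is so widely supported, most scripting languages can talk to an FTP site out of the box. A minimal Python sketch using the standard ftplib module, with a hypothetical host and credentials:

from ftplib import FTP

# Hypothetical host and credentials; standard FTP transmits them in clear text.
ftp = FTP("ftp.example.com")
ftp.login("user", "password")

# Download a product manual from the server.
with open("manual.pdf", "wb") as local_file:
    ftp.retrbinary("RETR manual.pdf", local_file.write)

ftp.quit()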


Secure File Transfer Protocol (FTPS)
As security becomes an expectation rather than an afterthought for even simple services, Microsoft supports Secure FTP (FTPS), which runs FTP over an SSL/TLS-encrypted connection, for data transfer services. By encrypting the session for data security and integrity, FTPS provides a significantly more secure way to upload and download data than the unsecured FTP transfers that were typical in the past.
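
For comparison with the plain FTP sketch above, Python's standard ftplib also supports FTP over TLS. The host and credentials below are again placeholders; the prot_p() call switches the data connection to an encrypted channel:

from ftplib import FTP_TLS

# Hypothetical host and credentials; the control channel is encrypted at login.
ftps = FTP_TLS("ftp.example.com")
ftps.login("user", "password")
ftps.prot_p()  # encrypt the data connection as well

# Upload a report over the protected session.
with open("quarterly_report.xlsx", "rb") as local_file:
    ftps.storbinary("STOR quarterly_report.xlsx", local_file)

ftps.quit()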


Windows SharePoint Services (WSS)
Windows SharePoint Services (WSS) can be used to present files in document libraries, but the data is stored in Microsoft SQL Server databases rather than in the file system. Because WSS stores file data in SQL databases, separate backups are required, and the data stored in WSS is not directly accessible in the file system except in the form of web links. WSS does have some benefits for managing file data, including document management features such as version history, check-in and check-out functionality, and the ability to notify users or groups when a document has been added, updated, or removed from a WSS document library.

Services for NFS
Services for NFS is a suite of services that helps Windows administrators integrate Windows systems into legacy UNIX networks. In previous versions of Windows, Services for NFS (or Services for UNIX, SFU) included User Name Mapping, Gateway for NFS, Client for NFS, and Server for PCNFS (PC-NFS being Sun's NFS implementation for PC clients). With Windows Server 2008 R2, the only components included are the client and server for NFS. Mapping UNIX users to Active Directory users is now available through the Identity Management for UNIX role services, which are part of the Active Directory Domain Services role. Server for NFS allows UNIX systems running the NFS protocol to access data stored on Windows Server 2008 R2 systems; Client for NFS allows the Windows system to access data stored on UNIX systems running the NFS protocol.

Services for Mac
This service was removed in Windows Server 2008 as current Apple Mac devices can connect to Microsoft servers by default using the SMB protocol. To support legacy Apple Mac clients, Windows administrators would need to deploy Windows Server 2003 systems with file and/or print services for Mac installed or provide alternate ways for Mac users to access data, such as FTP or web access.

Source of Information : Sams - Windows Server 2008 R2 Unleashed

Monday, December 27, 2010

Windows Server 2008 R2 File System Technologies - Remote Storage Service (RSS)

The Remote Storage Service was included with Windows 2000 Server and Windows Server 2003. The Remote Storage Service enables administrators to migrate or archive data to lower-cost, slower disks or tape media to reduce the required storage space on file servers.

This service, however, was discontinued in Windows Server 2008 and is not included in Windows Server 2008 R2 either. Many organizations that required this sort of functionality have turned to third-party vendors for hierarchical storage management. However, the new File Management Tasks node within the File Server Resource Manager console provides a function called file expiration, which allows administrators to schedule processes that report on files that might be candidates for moving to alternate storage. File expiration can be configured to notify both administrators and end-user file owners of files that will soon be expired and moved to alternate volumes. One main difference, however, is that file expiration does not leave a link in the original file location as the Remote Storage Service previously did.


Distributed File System (DFS)
As the file services needs of an organization change, designing a migration plan to support the new requirements can be a challenging task for administrators. In many cases, when file servers need additional space or need to be replaced, the result is extensive migration time frames, scheduled outages, and, sometimes, heavy user impact.

In an effort to create highly available file services that reduce end-user impact and simplify file server management, Windows Server 2008 R2 includes the Distributed File System (DFS) service. DFS provides access to file data from a single namespace that can represent a single server or a number of servers that store different sets, or replicated sets, of the same data. For example, when using DFS in an Active Directory domain, a DFS namespace named \\companyabc.com\UserShares could redirect users to \\Server10\UserShares or to a replicated copy of the data stored at \\Server20\UserShares.

Users and administrators both can benefit from DFS because they only need to remember a single server or domain name to locate all the necessary file shares.


Distributed File System Replication (DFSR)
With the release of Windows Server 2003 R2, and continuing with Windows Server 2008 and Windows Server 2008 R2, DFS replication has been upgraded. In previous versions, DFS replication was performed by the File Replication Service (FRS). Starting with Windows Server 2003 R2, DFS replication is performed by the Distributed File System Replication (DFSR) service. DFSR uses the Remote Differential Compression (RDC) protocol to replicate data. RDC improves upon FRS with better replication stability, more granular administrative control, and additional replication and access options. RDC also improves replication by replicating only the portions of files that have changed rather than entire files, and replication can be secured in transmission.

Source of Information :  Sams - Windows Server 2008 R2 Unleashed

Saturday, December 25, 2010

Windows Server 2008 R2 File System Technologies - Volume Shadow Copy Service (VSS)

Windows Server 2003 introduced a file system service called the Volume Shadow Copy Service (VSS). VSS enables administrators and third-party independent software vendors to take snapshots of the file system to allow for faster backups and, in some cases, point-in-time recovery without the need to access backup media. VSS copies of a volume can also be mounted and accessed just like another Windows volume if that should become necessary.


Shadow Copies of Shared Folders
Volume shadow copies of shared folders can be enabled on Windows volumes to allow administrators and end users to recover data deleted from a network share without having to restore from backup. The shadow copy runs on a scheduled basis and takes a snapshot of the data currently stored on the volume. In versions of Windows prior to Windows Server 2003, if a user mistakenly deleted data in a network shared folder, it was immediately deleted from the server and had to be restored from backup. A Windows Server 2003, Windows Server 2008, or Windows Server 2008 R2 NTFS volume with shadow copies enabled allows a user with the correct permissions to restore deleted or overwritten data from a previously stored shadow copy. It is important to note that shadow copies are stored on local volumes, and if the volume hosting the shadow copy becomes inaccessible or corrupted, so does the shadow copy. Shadow copies are not a replacement for backups and should not be considered a disaster recovery tool.
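
On Windows Server, shadow copies can also be created and listed from the command line with the vssadmin utility. The short Python sketch below simply wraps those commands; the drive letter is a placeholder, and an elevated session is assumed:

import subprocess

# Create an on-demand shadow copy of the C: volume (placeholder drive letter),
# then list the shadow copies that exist for it.
subprocess.run(["vssadmin", "create", "shadow", "/for=C:"], check=True)
subprocess.run(["vssadmin", "list", "shadows", "/for=C:"], check=True)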


Volume Shadow Copy Service Backup
The Volume Shadow Copy Service in Windows Server 2008 R2 also enables Windows Server Backup and third-party backup software to use this technology to improve backup performance and integrity. A VSS-compatible backup program can call on the Volume Shadow Copy Service to create a shadow copy of a particular volume or database, and the backup can then be created from that shadow copy. A benefit of VSS-aware backups is increased reliability and performance: the backup window is shorter, and the load on the system disk is reduced during the backup.

Source of Information : Sams - Windows Server 2008 R2 Unleashed

Friday, December 24, 2010

2010 Motherboards You Care The Most About

Chipsets often follow the lead of processor technology, so support for six-core processors was a big addition in high-end motherboards. Intel introduced the H55 and H57 chipsets, similar to the P55 chipset but with the H models letting you take advantage of the graphics capabilities built into some Core i3 and Core i5 processors. AMD announced the 890GX, which offers a built-in Radeon HD 4290 graphics core and six native 6Gbps SATA ports, as well as the 890FX chipset, which supports AMD’s six-core processors and two graphics cards in CrossFire at full x16 speed.

Winner: Evga X58 FTW3
$289.99; www.evga.com
The Evga X58 FTW3 was by far the most popular choice for builders in our Dream PC roundup, and it’s certainly designed for enthusiasts. The X58 FTW3 includes Evga’s EVBot for handheld overclocking, an onboard CPU temperature monitor, and Evga’s EZ Voltage read points. The 8-phase power design combines with a large passive heatsink to offer high overclocking potential, and there’s a handy onboard Clear CMOS button for when your frequencies go too high. If you run into trouble, we also like the onboard diagnostics LED readout, and the feature came in handy to help us figure out what was wrong with a test system. With Evga’s E-LEET tuning utility, you can adjust the system overclock within the OS.

The motherboard features three PCI-E slots, and you can run two cards in SLI or CrossFire at full x16 speed or three graphics cards with one running at x16 speed and the other two at x8. This X58 board also offers a wide variety of connectivity for your storage devices and peripherals. There are two 6Gbps SATA ports, six 3Gbps SATA ports, two USB 3.0 ports, and support for 12 USB 2.0 ports (eight external, four internal). RAID 0/1/0+1/5 configurations are available. For memory, the X58 FTW3 offers six DIMM slots that can hold up to 24GB of DDR3-1600. Evga tops it off with a three-year limited warranty.


First Runner-Up: Gigabyte 890FXA-UD7
$245.99; www.gigabyte.com
This 890FX motherboard offers an XL-ATX form factor, which gives it the space to support up to 4-way CrossFire and another add-in card. In total, there are six PCI-E x16 slots. Gigabyte designed the board so that you install graphics cards in the first, third, and fifth slots, which work at either x16 or x8 speed depending upon how the other PCI-E slots are filled. The 890FXA-UD7 also natively supports Phenom II X6 processors, giving you the ability to use AMD’s top performers.

There are six 6Gbps SATA ports and two 3Gbps SATA ports, along with 14 USB 2.0 ports (eight external, six internal) and two USB 3.0 ports. On/Off Charge technology lets Apple products (iPhone/iPad/iPod touch) draw more power than standard USB ports allow in order to charge faster. Another feature we like is Gigabyte’s Auto Unlock support to turn on a previously disabled core on AMD Phenom II dual- or tri-core processors. The built-in power, Clear CMOS, and reset buttons make it easy for you to tinker with BIOS settings and return your PC to working order if something goes wrong.


Second Runner-Up: MSI P55A-GD85
$229.99; www.msi.com
MSI started the year off with a bang by releasing the P55-GD85, which includes a Marvell chip to add two 6Gbps SATA ports, an NEC chip to add two USB 3.0 ports, and a PLX PCI-E bridge chip to improve PCI-E bandwidth and maintain the capacity necessary for the 6Gbps SATA and USB 3.0 connectivity. Power users will also appreciate the P55-GD85’s DrMos (a technology to stabilize power delivery) and SuperPipe heat dissipater, which is a full copper 8mm heatpipe. In short, the board is ideal for those who want to overclock their Intel LGA1156 processor.

The P55-GD85 supports up to 16GB of DDR3 memory running at up to 2,133MHz, and there are four DIMM slots. There are two PCI-E x16 slots (one runs at x16 speed, while the second can run at x8), and the board supports either SLI or CrossFire. We also like that those not familiar with overclocking can use the OC Genie to easily boost their computer’s speed.


Source of Information : Computer Power User (CPU) January 2011

Thursday, December 23, 2010

Windows Server 2008 R2 File System Technologies

Windows Server 2008 R2 provides many services that can be leveraged to deploy a highly reliable, manageable, and fault-tolerant file system infrastructure.


Windows Volume and Partition Formats
When a new disk is added to a Windows Server 2008 R2 system, it must be configured by choosing what type of disk, type of volume, and volume format will be used. Before introducing the file system services available in Windows Server 2008 R2, it is worth reviewing the volume format types a disk can use.

Windows Server 2008 R2 enables administrators to format Windows disk volumes with the file allocation table (FAT) format, the FAT32 format, or the NT File System (NTFS) format. FAT-formatted partitions are legacy partitions used by older operating systems and floppy disks and are limited to 2GB in size. FAT32 is an enhanced version of FAT that can accommodate partitions up to 2TB and is more resilient to disk corruption. Data stored on FAT or FAT32 partitions is not secure, and these formats provide few advanced features. NTFS-formatted partitions have been available since Windows NT 3.1 and give administrators the ability to secure files and folders, as well as the ability to leverage many of the services provided with Windows Server 2008 R2.


NTFS-Formatted Partition Features
NTFS enables many features that can be leveraged to provide a highly reliable, scalable, secure, and manageable file system. Base features of NTFS-formatted partitions include support for large volumes, configuring permissions or restricting access to sets of data, compressing or encrypting data, configuring per-user storage quotas on entire partitions and/or specific folders, and file classification tagging.

Several Windows services require NTFS volumes; as a best practice, we recommend that all partitions created on Windows Server 2008 R2 systems be formatted using the NT File System (NTFS).


File System Quotas
File system quotas enable administrators to configure storage thresholds on particular sets of data stored on server NTFS volumes. This can be handy in preventing users from inadvertently filling up a server drive or taking up more space than is designated for them. Also, quotas can be used in hosting scenarios where a single storage system is shared between departments or organizations and storage space is allocated based on subscription or company standards.

The Windows Server 2008 R2 file system quota service provides more functionality than was included in versions older than Windows Server 2008. Quotas were introduced as an included service in Windows 2000 Server, but they could be enabled and managed at the volume level only. This did not provide granular control; furthermore, because quotas applied at the volume level, deploying a functional quota-managed file system required administrators to create several volumes with different quota settings. Windows Server 2003 also included the volume-managed quota system, and one of its limitations was that data size was not calculated in real time, so users could exceed their quota threshold once a large copy completed. Windows Server 2008 and Windows Server 2008 R2 include the volume-level quota management feature but can also be configured to enable and/or enforce quotas at the folder level on any particular NTFS volume using the File Server Resource Manager service. Included with this service is the ability to screen out certain file types, as well as real-time calculation of file copies to stop operations that would exceed quota thresholds. Reporting and notifications regarding quotas can also be configured to inform end users and administrators at scheduled intervals, when nearing a quota threshold, or when the threshold is actually reached.
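
The real enforcement is done by the File Server Resource Manager service itself, but the idea of a folder-level threshold with warning notifications is easy to sketch. The Python example below is purely conceptual, with placeholder paths and limits; it totals a folder's size and flags it as the soft and hard thresholds are crossed:

import os

# Conceptual illustration only; FSRM performs this in real time on the server.
FOLDER = r"C:\Shares\UserData\jsmith"   # placeholder folder
QUOTA_BYTES = 500 * 1024 * 1024         # placeholder 500MB limit
WARN_AT = 0.85                          # warn at 85% of the quota

def folder_size(path):
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

used = folder_size(FOLDER)
if used >= QUOTA_BYTES:
    print("Quota exceeded: further writes should be blocked.")
elif used >= QUOTA_BYTES * WARN_AT:
    print("Warning: approaching the quota threshold.")
else:
    print(f"{used / QUOTA_BYTES:.0%} of the quota is in use.")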


Data Compression
NTFS volumes support data compression, and administrators can enable this functionality at the volume level, allowing users to compress data at the folder and file level. Data compression reduces the required storage space for data. Data compression, however, does have some limitations, as follows:

• Additional load is placed on the system during read, write, and compression and decompression operations.

• Compressed data cannot be encrypted.
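
At the command line, NTFS compression can be enabled with the built-in compact utility. The Python sketch below simply wraps it, using a placeholder folder on an NTFS volume:

import subprocess

# Compress all files in a placeholder folder and its subfolders.
subprocess.run(["compact", "/c", "/s:C:\\Data\\Archive"], check=True)

# Report the resulting compression state.
subprocess.run(["compact", "/s:C:\\Data\\Archive"], check=True)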


Data Encryption
NTFS volumes support the ability for users and administrators to encrypt an entire volume, a folder, or a single file. This provides a higher level of security for data: if the disk, workstation, or server on which the encrypted data is stored is stolen or lost, the encrypted data cannot be accessed. Enabling, supporting, and using data encryption on Windows volumes and in Active Directory domains needs to be considered carefully, as there are administrative functions and basic user issues that can render previously encrypted data inaccessible.


File Screening
File screening enables administrators to define the types of files that can be saved within a Windows volume or folder. With a file screen template enabled, all file write or save operations are intercepted and screened, and only files that pass the file screen policy are allowed to be saved to that particular volume or folder. One caveat of the file screening functionality is that if a new file screen template is applied to an existing volume, files already stored on the volume that would normally not be allowed are not removed. File screening is a function of the File Server Resource Manager service.
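
The enforcement itself happens inside the File Server Resource Manager service as files are written, but the screening decision is simple to illustrate. The Python sketch below is conceptual only, with a placeholder list of blocked extensions:

import os

# Conceptual file screen: block common audio/video extensions (placeholder list).
BLOCKED_EXTENSIONS = {".mp3", ".avi", ".mkv", ".wmv"}

def allowed_by_screen(filename):
    """Return True if the file passes the screen and may be saved."""
    extension = os.path.splitext(filename)[1].lower()
    return extension not in BLOCKED_EXTENSIONS

for name in ("budget.xlsx", "movie.mkv"):
    print(name, "->", "allowed" if allowed_by_screen(name) else "blocked")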


File Classification Infrastructure
Windows Server 2008 R2 includes a new feature called the File Classification Infrastructure (FCI). FCI enables administrators to create classification policies that identify files and tag, or classify, them according to properties and policies defined by the file server administrators. FCI is managed through the File Server Resource Manager console and allows file server administrators to classify files by setting specific FCI property values based on the folder the files are stored in and/or the content stored within the files themselves. When a file is classified by FCI, if the file is a Microsoft Office file, the FCI information is stored within the file itself and follows the file wherever it is copied or moved. For other file types, the FCI information is stored within the NTFS volume, but it still follows the file to any location it is copied or moved to, provided that the destination is an NTFS volume hosted on a Windows Server 2008 R2 system.

Source of Information :  Sams - Windows Server 2008 R2 Unleashed

Tuesday, December 21, 2010

CPU’s Year-Sized Review of The Best in Tech

We’ve come a long way since the first issue of CPU. In 2001, Chief Fruit Man Steve Jobs was trying to convince the world it needed to listen to music on an iPod. Intel’s flagship desktop CPU was a Pentium 4; AMD’s counterpart was an Athlon XP. Forget about multicore computing, unless you wanted to operate a system with each hand. Double-data rate SDRAM was finding its way into enthusiasts’ machines, and a little GPU called Radeon had been challenging Nvidia’s GeForce for about a year. Speaking of challengers, Microsoft jumped into the videogame console business, introducing its Xbox to face off against the mighty PlayStation 2, with Nintendo’s GameCube seemingly left to eat the table scraps.

Online, it’s a similar story. People were Googling, to be sure, but they were also Asking and Lycos. . . ing. No one was Facebooking, tweeting, or YouTube-ing. A healthy chunk of folks were still dialing up the Internet. So, nine years later, how we doin’? Nowadays, it’s iPod iconoclasts who are working at least as hard to persuade you not to buy an iSomething. “More cores” is 2001’s “more gigahertz.” Both Intel and AMD have solidly embraced DDR3, and it won’t be too long before they’ll embrace DDR4. This year, we saw Nvidia finally introduce Fermi, its answer to AMD’s Cypress. Microsoft and Sony, four years after the Wii, leveled the motion-sensitive playing field with Kinect and Move, respectively, as the casual gamer became too big to ignore.

Although all the big names in silicon gave us plenty to talk about (such as multiple hexa-core processors and a 3-billion-transistor GPU architecture, for starters), some of the top tech storylines developed online. Google decided that picking fights against other tech companies wasn’t a big enough challenge and took on the People’s Republic of China itself, announcing it would no longer filter its search results for Google.cn. And Google didn’t stop with China, either, when a number of countries raised concerns after discovering Google’s fleet of Street View cars harvested—inadvertently or not—data from a number of private Wi-Fi networks. The Net Neutrality movement saw its efforts throttled when the U.S. Court of Appeals ruled that the FCC had no authority to regulate Comcast’s activity. Facebook became the most popular Web site in the world. It also became a movie and an email address.

Mobile and gaming had big years, too. The iPad had people talking tablets, and The Steve told us we were holding our phones the wrong way. Android gained ground without, interestingly enough, a lot of help from the much-hyped yet woefully undersold Nexus One (you might remember it as the Googlephone); don’t hold your breath for a Nexus Two. The videogame industry continued to state its case as mainstream entertainment, with more than 20 titles shipping 1 million units in 2010 (saying nothing of this year’s late additions, Call of Duty: Black Ops and World of Warcraft: Cataclysm). And, for the first time, it also stated its case to the Supreme Court in Schwarzenegger v. Entertainment Merchants Association, a case involving the legality of banning the sale of certain violent videogames to minors.

But all of this, really, is periphery. You’re here to hear about the best hardware and software of 2010. Rest assured, we have that—pages of that. We have the best processors, motherboards, graphics cards, LCDs, printers, media apps, and more. So, if you somehow discovered how to hibernate for the last 12 months, fear not. We’ll fill you in on what you missed. And in “Best Of The Next” on page 72, we declare our predictions for the best loot of 2011, just in case you’re debating whether you should slip into another yearlong slumber.

So, as we put 2010 in our rearview, get ready to kick off another wild decade in tech. A lot can change in a little time.

Source of Information : Computer Power User (CPU) January 2011

Sunday, December 19, 2010

When considering the whole system (hardware, software, network and applications) isn’t the cost of the hardware negligible?

Hardware cost is just a fraction of total system cost, although it must be noted that the cost of storage is an increasing portion of hardware cost. This has given rise to some new storage approaches (SAN, for Storage Area Network, and NAS, for Network-Attached Storage). And hardware cost is far from negligible for high-end systems.

Just because hardware cost is falling does not mean that the choice of hardware is irrelevant. Indeed, the choice of a platform sets compatibility constraints within which the company will have to live for many years; migrating to another platform brings costs and risks. As indicated in some earlier FAQs, hardware costs, and even more so price/performance, continually improve. Software prices have also dropped significantly, although not to the same degree, because of volume effects (compare the price of proprietary operating systems to those of UNIX and NT).

The price reduction in software has been less than for hardware because standardization of software is much less than for hardware. It is worth noting that well standardized fields generally show very low prices (operating systems, for example), while more fragmented domains see much higher prices, offering comfortable margins to their vendors (as with cluster-based systems).

It seems likely that current software trends (such as free software, Java, software components, etc.) will drive software cost erosion faster than in the past. This will shift the major portion of IT expenditure toward running the systems: integration, operations, administration, safety, performance tuning, and so on.

Source of Information : Elsevier Server Architectures 2005

Saturday, December 18, 2010

Given that they are extremely complex to implement, will MPP architectures be confined to scientific and technical markets?

To take advantage of a massively parallel architecture, one must have applications written for it. The cost of developing such applications—or of adapting other applications for MPP use—restricts the number of applications. For applications needing intense numerical computation, companies computed the ROI (Return On Investment) and decided that the investment was worthwhile.

MPP architectures force the development of applications using the message-passing paradigm. For a long time, the major barrier was the lack of interface standards and development environments. Some candidate standards have appeared (such as PVM, MPI, or OpenMP), and there is a steadily-increasing number of applications built on these.

We should note here that the efficiency of such applications is usually dependent on the performance (in particular, the latency) of message-passing (including all relevant hardware and software times). Not unreasonably, the first MPP applications to appear were those that could straightforwardly be decomposed into more or less independent subtasks, which needed only a small number of synchronizations to interwork effectively. Thus, mainly scientific and technical applications (in fields like hydrodynamics, thermodynamics, and seismology) were first to appear for MPP.
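
As a concrete illustration of the message-passing paradigm behind MPI, the short Python sketch below uses the mpi4py package (an assumption; it is not part of the standard library) to hand a subtask from one rank to another. It would be launched with something like mpiexec -n 2 python example.py:

from mpi4py import MPI  # assumes mpi4py and an MPI runtime are installed

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    # Rank 0 decomposes the problem and hands a subtask to rank 1.
    comm.send({"subtask": "block-0", "size": 1024}, dest=1, tag=11)
elif rank == 1:
    # Rank 1 receives its subtask; the cost of this exchange is dominated
    # by the latency of the underlying interconnect.
    work = comm.recv(source=0, tag=11)
    print("rank 1 received", work)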

There is another brake on the flourishing of this class of machine: the lack of low-level standards providing access to the interconnect. This lack means that there is substantial difficulty in adapting higher-level libraries to the various hardware (and interconnect) platforms; effecting such a port requires deep technical knowledge and skill, so moving from one manufacturer to another is difficult. Various industry initiatives, such as VIA or InfiniBand, could bring sufficient standardization for MPP to spread.

MPP products are challenged in yet another direction: the appearance of peer-to-peer systems, grid computing, and clusters of PCs. A key factor of such solutions is low cost, driving the development of applications that can quickly turn into industry standards. The rise of these approaches could condemn the MPP proper to an early grave.

To finish this discussion, recall that in 1984 Teradata brought to market what were probably the first MPP systems. These systems, deploying Teradata’s own proprietary DBMS, were used in decision support, needing the processing of very large volumes of data (several terabytes). With the advent of large clusters and MPP systems from other manufacturers, the major database vendors subsequently produced versions of their software to run on this class of machine.


Source of Information : Elsevier Server Architectures 2005

Friday, December 17, 2010

GOOGLE BUG LETS MALICIOUS WEBSITES HARVEST YOUR EMAIL

A bug in the Google Apps Script API could have allowed emails to be sent to Gmail users without their permission, according to Google. The bug was demonstrated by a web page hosted on Google Blogger: the page gathered the Google account email address of any user who visited it and sent an email to that address. Google released a fix after an article exposing the privacy threat appeared on TechCrunch, written by Michael Arrington, who in turn had given voice to the exploit's author, who had complained about the lack of interest from Google's security response team.

Sunday, December 12, 2010

PAM

PAM (actually Linux-PAM, or Linux Pluggable Authentication Modules) allows a system administrator to determine how applications use authentication to verify the identity of a user. PAM provides shared libraries of modules (located in /lib/security) that, when called by an application, authenticate a user. The term “Pluggable” in PAM’s name refers to the ease with which you can add and remove modules from the authentication stack. The configuration files kept in the /etc/pam.d directory determine the method of authentication and contain a list (i.e., stack) of calls to the modules. PAM may also use other files, such as /etc/passwd, when necessary.

Instead of building the authentication code into each application, PAM provides shared libraries that keep the authentication code separate from the application code. The techniques of authenticating users stay the same from application to application. In this way, PAM enables a system administrator to change the authentication mechanism for a given application without ever touching the application.

PAM provides authentication for a variety of system-entry services (login, ftp, and so on). You can take advantage of PAM’s ability to stack authentication modules to integrate system-entry services with different authentication mechanisms, such as RSA, DCE, Kerberos, and smartcards.

From login through using su to shutting the system down, whenever you are asked for a password (or not asked for a password because the system trusts that you are who you say you are), PAM makes it possible for the system administrator to configure the authentication process. It also makes the configuration process essentially the same for all applications that use PAM for authentication.

The configuration files stored in /etc/pam.d describe the authentication procedure for each application. These files usually have names that are the same as or similar to the names of the applications that they authenticate for. For example, authentication for the login utility is configured in /etc/pam.d/login. The name of the file is the name of the PAM service that the file configures. Occasionally one file may serve two programs. PAM accepts only lowercase letters in the names of files in the /etc/pam.d directory.
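
Each line in a service file names a module type, a control flag, a module, and optional arguments. The Python sketch below embeds a simplified, hypothetical login-style stack (not copied from any particular distribution) and prints that structure:

# Simplified, hypothetical PAM stack for illustration only; a real
# /etc/pam.d/login differs between distributions and releases.
SAMPLE_STACK = """\
auth       required     pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
session    include      system-auth
"""

for line in SAMPLE_STACK.splitlines():
    if not line.strip() or line.lstrip().startswith("#"):
        continue
    module_type, control, module, *arguments = line.split()
    print(f"type={module_type:<9} control={control:<9} module={module} {' '.join(arguments)}")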

PAM warns you about errors it encounters, logging them to the /var/log/messages or /var/log/secure files. Review these files if you are trying to figure out why a changed PAM file is not working properly. To prevent a malicious user from seeing information about PAM unnecessarily, PAM sends error messages to a file rather than to the screen.


Caution: Do not lock yourself out of the system
Editing PAM configuration files correctly takes care and attention. It is all too easy to lock yourself out of the computer with a single mistake. To avoid this problem, always keep backup copies of the PAM configuration files you edit, test every change thoroughly, and make sure you can still log in once the change is installed. Keep a Superuser session open until you have finished testing. When a change fails and you cannot log in, use the Superuser session to replace the newly edited files with their backup copies.

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Thursday, December 2, 2010

A Fork In The Open Office Road

The open-source office productivity suite space has gotten turbulent since Oracle acquired Sun Microsystems and, with it, the rights associated with the free OpenOffice.org suite. First, several OpenOffice.org developers broke ties to form The Document Foundation and release the LibreOffice productivity suite, reportedly downloaded 80,000 times in its first week. Oracle, which sells a paid OpenOffice version, later stated it will continue “investing substantial resources in OpenOffice.org” and “continue developing, improving, and supporting OpenOffice.org as open source.” TDF took this to mean that “Oracle has no immediate plans to support” TDF or “transfer community assets such as the OpenOffice.org trademark” to TDF, adding that it hoped Oracle would change its tune when it “sees the volunteer community—an essential component of OpenOffice’s past success—swing its support behind the new Foundation.” In late October, however, several TDF members were asked to vacate their OOo Community Council seats due to conflict of interest, and a number of them did just that via email.