Wednesday, September 15, 2010

Diskeeper 2010 keeps disk defrags to a minimum

Diskeeper’s software makes a valiant attempt at preventing fragmentation from ever occurring

In the life of a Windows machine, nothing is certain but malware sweeps and disk defragmentation. Both chores are deployed regularly in hopes of holding off bit rot, and each has a knack for sucking up resources and generally getting in everyone's way—despite ongoing vendor efforts to tuck these tasks into the background. I suspect that anti-malware efforts will always be messy—at least until application whitelisting tactics like the ones Andrew Garcia describes become more broadly accepted. However, there may be new hope on the disk fragmentation front: Diskeeper's Diskeeper 2010 software.

Diskeeper, the company responsible for the disk defrag tool that ships along with Windows, is out to attack disk fragmentation by preventing it from occurring in the first place. Diskeeper 2010 ships with a new feature called IntelliWrite that works by intercepting disk writes and carrying them out with a focus on keeping files contiguous. By keeping fragmentation to a minimum— the company claims that up to 85 percent of fragmentation can be avoided—Diskeeper 2010 promises to cut power consumption, disk wear and redundant I/O operations. Cutting down on unnecessary I/O can be particularly beneficial when dealing with virtual machines situated on shared storage, or with systems running on solid-state drives, with their inherent write-cycle limitations.

This all depends, of course, on how regularly IT administrators conduct disk defragmentation jobs on the Windows desktops and servers in their care. Opinions on the exact performance implications of disk fragmentation fall all over the map, and much depends on a company's particular environment. Few deny, however, that disk fragmentation catches up with all Windows machines eventually. With that said, IT administrators who turn frequently to disk defragmentation tools should have a look at Diskeeper 2010, which is available in 30-day trial versions for each of its editions: Professional, Pro Premier, Server, EnterpriseServer and Administrator. For a breakdown of each of these versions and their prices, check out this comparison page: www.diskeeper.com/Diskeeper/comparisonchart.aspx.



All editions include the IntelliWrite feature
For this review, I tested the Pro Premier with HyperFast edition, which sells for $99.95. The least costly edition, Professional, sells for $59.95 per seat. All editions include the IntelliWrite feature. I conducted most of my Diskeeper 2010 tests on Windows XP SP3 virtual machines with NTFS-formatted drives running under the VMware vSphere 4 setup in our lab. Since Diskeeper 2010's IntelliWrite feature works by inserting itself between Windows and users' write operations, I was interested in figuring out how much overhead Diskeeper added in this new, more active role. I measured the overhead by running the suite of hard-drive benchmarks from Futuremark's PCMark05 both with and without IntelliWrite enabled. Over multiple runs, I found that the overhead introduced by IntelliWrite hovered around an acceptable 2 percent. I was also interested in measuring the amount of fragmentation prevention that IntelliWrite would buy me. After casting around for a repeatable means of fragmenting my test disk, I settled on extracting all the files and folders from a Windows XP ISO image onto a defragmented test drive.

I then recorded the number of low-performing fragments as reported by Diskeeper, deleted the extracted files and manually defragmented the disk. Over multiple runs, with IntelliWrite enabled and disabled, I found that my disk ended up with 87 percent fewer low-performing fragments on the IntelliWrite runs. For the fragmentation that remained following my IntelliWrite-enabled runs, a feature introduced in an earlier version of Diskeeper, InvisiTasking, was ready to sweep in and finish the job. InvisiTasking monitors the utilization of the machine on which Diskeeper is installed, and jumps in to defragment files when the system is idle. I had automatic defragmentation turned off for most of my IntelliWrite test runs, but when I ran the two in tandem, I found that what fragmentation IntelliWrite left behind was dealt with by InvisiTasking within 5 minutes.

Diskeeper 2010's graphical interface sports a status dashboard that's packed with information about disk status, about which resources are available for use by InvisiTasking and about the rate at which IntelliWrite is avoiding disk fragmentation.
In fact, I found the interface a bit too packed with information: All of the information on the dashboard is arranged on a single page, and I had to do a lot of scrolling to find the information I was after. The dashboard page contains links for jumping to different sections, but I’d rather see the whole thing exploded out into something like a navigation tree.

I found the properties dialogs for my individual disks easier to navigate, and I appreciated the options for scheduling when to enable or disable InvisiTasking.


Source of Information : eWeek February 15 2010

Tuesday, September 14, 2010

SlickEdit has enough new features to replace an IDE—at a price

SlickEdit’s mix of capabilities might have what it takes to entice me to tool unification

I’ve always been the type of programmer who’s used two editors: one for quick edits and another for larger-scale work. On Windows, my preference for quick edits has been SciTE, which I regard as a “better than Notepad” editor that does just what I need: syntax highlighting, quick open and close times (that’s very important for us impatient programmer types), line numbers, and a good search and replace that includes regular expressions.

For bigger work, I turn to full-blown integrated development environments (IDEs), where I typically tap Visual Studio or Eclipse, depending on the project. Then along comes SlickEdit 2009, which offers some of the best attributes of these two classes of editor. I’ve always been happy with my two-worlds approach to text editing, but SlickEdit’s mixture of capabilities might have just what it takes to entice me to tool unification. While SlickEdit is an editor by name, the venerable tool (the release I tested is the product’s 14th) has long hovered a bit closer to the IDE end of the spectrum. For instance, where it’s common with slimmer editors to drop out of the application to debug or compile from the command line, SlickEdit features such broad language support and support for creating projects that developers of all stripes can do most of their coding
and compiling from within the tool.

In fact, with Version 2009, SlickEdit has piled on enough new features— including a complete debugger for Python, Perl and PHP code—that it can actually replace your IDE for all practical purposes. I expect most programmers to love SlickEdit 2009 once they try it, but there is one significant downside to this product: its cost. A single-developer license on Windows or Linux costs $299; on Solaris, it’s $399.

This kind of pricing is clearly intended for businesses that determine they can save money by using the product, and there’s certainly an argument to be made that productivity will increase with an editor like this. However, the availability of free options, such as Eclipse and the express versions of Visual Studio, might make it tough for programmers to convince their CFOs that the gain will be significant enough to justify the cost. That debate could be tougher still in shops that have already invested in costly Visual Studio enterprise licenses.



Slick editing
SlickEdit 2009 sports a set of new features that jumped out at me early in my testing. The first was a new symbol coloring capability that takes syntax highlighting a step further, for example, by coloring a private member variable differently from a member function.

Another cool SlickEdit 2009 feature called Smart Open helped me locate and open files more quickly by taking the first few file name characters I typed into the smart open dialog and pulling up a list of matching files from various directories within my workspace. Since large software projects often end up with dozens or even hundreds of directories, this could be a big time-saver.

Also impressive was SlickEdit’s new Source Diff feature, which works like a traditional diff/merge utility with one important difference: This source diff program will ignore whitespace differences, and that’s a big help when comparing two files with few code differences but several layout and formatting differences. This is particularly common when dealing with files that have been run through an editor that “beautified” the code by replacing tabs with spaces, moving braces to or from a new line, and the like.

I’ve seen cases in which diff utilities have practically choked while displaying the changes between two files that had just one code change between them. SlickEdit’s Source Diff, on the other hand, can see past superficial whitespace and formatting changes, and show you only those lines that have actual code changes.

I tried out the Source Diff by taking an existing C# source file that had all the opening braces on their own lines and then manually moving the braces to the end of the previous line—a superficial change that does not impact the code. Then I removed several comments.

Finally, I made a “real” change and modified the characters in a string constant. I toggled the Source Diff feature on, compared the files, and, sure enough, the program ignored the superficial brace changes and highlighted the lines where I removed the comments and changed the string constants. Other new features include the ability to export and import some or all of your settings. This is important to any programmer who has ever had to move from one workstation to another. And the ability to export only some settings is useful if you want to share settings with somebody else without replacing all the settings.

Although not new with the 2009 version, there are several features in SlickEdit that tempt me to give up my old standby editors. One feature lets you create macros for SlickEdit, as you would with any good IDE. SlickEdit has a powerful feature that enables you to create a template (or use an existing one) that can be used for creating new source files. These templates are categorized.

For example, there’s a C++ template for a singleton pattern. If you create a new file based on this template, the file will start out with existing code for the beginnings of a singleton class. Other not-so-new features include member list pop-ups; easy code navigation, such as jumping to an identifier’s definition; auto-completion (something I’ve come to rely on quite heavily, as it makes coding much faster); and an impressive regular expression evaluator.

Source of Information : eWeek February 15 2010

Monday, September 13, 2010

BlockMaster SafeStick SuperSonic provides blazing speed

This small, light USB removable flash memory storage outperforms the competition and is well-controlled by centralized management software

BlockMaster, headquartered in Lund, Sweden, has been making rugged, encrypted USB removable storage for more than five years, and its take on the market is slightly different from that of Lexar and IronKey, whose devices we’ve recently reviewed. Weighing in at a svelte 9 grams, the SafeStick SuperSonic is much smaller and lighter than the others. It’s more of an armor-plated USB key than a USB key immersed in a brick of epoxy. The drive also ships with a very capable management console, SafeConsole, which can be used to lock down every aspect of removable media that I could think of. The SafeStick SuperSonic took a beating during my testing. It survived most acts of torture, but overall was not as unflappable as the IronKey and Lexar drives. Provided with three units, I placed one suspended under a buoy in Barnegat Bay for almost two winter months. I had to chip through the ice to get it, but once I dried it off and allowed it to reach room temperature, it functioned perfectly. And I did my usual throwing it off the roof of a four-story building, spiking it down a flight of stairs, running it through the washing machine and giving it a brief stint in the toaster oven—all of which it survived with nary a scratch.

The SafeStick SuperSonic was lucky enough to join me on my vacation—I mean field testing—in the Philippines in January. It survived days tied to a car’s bumper, a ferry ride during a tropical depression, and tied to the outside of my backpack during hikes up the Mayon and Pinatubo volcanoes. It partied for 18 straight hours in honor of Santo Nino and then rested on the beach in Boracay for three days. When we returned home, I performed my favorite test—smashing it between two 20-pound dumbbells. After a few whacks, I was left with a very thin—and unusable— memory key.

The SafeStick SuperSonic will definitely take more abuse than a regular USB key and is no slouch when it comes to durability, but the IronKey and Lexar units are more solidly constructed. If you’ve got normal users, this is not an issue. However, if you’re equipping a team of U.S. Navy SEALs who might actually need bulletproof USB removable storage, then skip the SafeStick SuperSonic.



Impressive performance
What the SafeStick SuperSonic lacks in durability, it more than makes up for in performance. It did extremely well in all performance tests, which is expected because it uses SLC flash memory. Using the ATTO Disk Benchmark, throughput maxed out at 25.5MB/s write and 33.1MB/s read using an 8,192KB transfer size. Copying a 691MB file to the encrypted volume took 33.9 seconds (20.4MB/s), and copying it back took 22.8 seconds (30.3MB/s). For reference, a “normal” or “el cheapo” USB stick does 6.6MB/s write and 24.0MB/s read in ATTO and takes 3 minutes and 18.15 seconds to write and 46.84 seconds to read a 987MB file.

BlockMaster provides highly configurable management software, SafeConsole, in several versions. Each contains different features, ranging from the free Intro version, which only lets you set up password policy, to Enforce&Enable, which packs the whole enchilada. After installing SafeConsole on my Windows 2003 Server SE test machine, I ran the configuration wizard. I integrated SafeConsole with Microsoft Active Directory and imported the users and OUs (organizational units) from my directory. Then I created various management accounts (administrator, manager, support), assigned passwords and applied restrictions so that SafeConsole could be managed only from my local IP address range. From there, it’s reasonably straightforward to create policies such as password creation and recovery, and device backup, and assign them to users and/or OUs.

The lack of help is troubling. Most things are easy to figure out with the decent descriptions that usually appear below an option, and top-level general settings are explained in the PDF documentation, but I did encounter settings with no explanation.

Lost drive management is worthy of note. Administrators can require SafeSticks to connect to SafeConsole at least once every x number of days and, if a drive doesn’t check in, can set it to “lost” or “disabled.” I was dutifully warned in the PDF manual that this is based on the system clock and can easily be tricked. Lost drives can be unlocked with data intact; disabled drives wipe their data and must be reprovisioned.

There are some helpful features to control removable USB storage. An administrator can configure authentication via Windows credentials (the user name is mapped to the device before deployment, so when a user gets the device, he or she just plugs it in), while preventing all other removable storage from being mounted. Files can be published securely to SafeSticks over a network from the management console. Usage can be restricted to certain file types. The drives currently support Windows and Mac OS, but not Linux.

BlockMaster SafeStick SuperSonic and SafeConsole Enforce&Enable are very good solutions for deploying and managing rugged and encrypted removable USB media. They meet MIL-STD-810F waterproof standards and are going for FIPS 140-2 certification. Although less rugged than the Lexar and IronKey, the SafeStick SuperSonic is smaller and speedier.

Pricing for a 4GB SafeStick SuperSonic USB key is $139; an 8GB version costs $219 (volume discounts apply). The Intro version of SafeConsole is free for all orders of more than 100 devices, while pricing begins at $14 per device per year for the Enforce version and $18 per device per year for SafeConsole Enforce&Enable.

Source of Information : eWeek February 15 2010

Sunday, September 12, 2010

Tough-love security

AppLocker blocks everything except code that is expressly permitted by policy.

When I started using Windows 7 full time on my primary system, I wanted to take better advantage of the new operating system’s baked-in security features. I had been running as a limited-rights user who needed a separate administrator password to effect system changes throughout my time with Windows Vista, and I had gotten used to the routine of right-click/Run as Administrator/password to install anything. Since I was going to use Windows 7 Ultimate, I decided to give the new AppLocker a try to see if such a lockdown was a feasible option on a heavily used workstation. AppLocker is Microsoft’s take on application whitelisting. It blocks everything from running except code that is expressly permitted by policy.

Initially, I set up AppLocker with the default rules. My everyday, limited-rights user account could only run executables and scripts installed to either the Program Files or Windows directories and could only install signed Windows installers (or unsigned ones saved to a specific folder in the Windows directory). After a period of acclimation, I deleted those exceptions for Windows installer packages as well. In sum, to run any application from a different directory or to install anything, I had to expressly run it as an administrator. So AppLocker dictates that my user account can only run apps installed in two approved locations, and Least Privilege/User Account Control says my user account cannot save things to those two locations.

It’s pretty good security, provided I don’t do anything stupid with my administrator password. After six months of use, I generally forget that AppLocker is running in the background, since I’ve trained myself to install new programs or updates in the new manner. Indeed, I’ve found it works well most of the time. There is still code that can’t deal with this type of security, and the most glaring examples are Web browser add-ons.
WebEx has been the most troublesome application.

Neither in Internet Explorer nor Firefox has my limited-rights user account been able to join a conference. The only solution I’ve found is to run IE as administrator (it doesn’t work in Firefox), but that defeats the purpose of locking down my security, as I am exempting one of the most commonly attacked platforms from my security policy. So I’ve started joining WebEx conferences from my iPhone instead. I know software developers have little impetus to design their code to work under such circumstances because hardly anyone is going to use their computer in that way.

AppLocker likely has an unheralded future ahead of it, if only because the
majority of Windows 7 users don’t have access to the feature. In January, Microsoft announced that it had moved more than 60 million copies of Windows 7 in the last two months of 2009. But what percentage of those are the Ultimate SKU, the only consumer edition to include the AppLocker feature? The volume licensed Enterprise edition also comes with AppLocker functionality, and I see some companies leveraging the feature for kiosks or other limited-use workstations. But I can’t see many companies deploying it to their user base. Many IT professionals I’ve talked to confide that they still haven’t taken away local admin rights from their users, so AppLocker isn’t even on their radar. Are there any corporations out there trying to implement AppLocker across their user base? If so, I’d love to hear your story.

Source of Information : eWeek February 15 2010

Saturday, September 11, 2010

Salesforce.com assembles an array of development tools for Force.com

Force.com is a very promising addition to Salesforce’s line of services and is well worth further evaluation

About two years ago, Salesforce.com introduced its Force.com offering, through which the software-as-a-service giant invited developers to create cloud-based applications that would run on Salesforce.com’s own infrastructure. When I first learned about Salesforce.com’s efforts to allow development for the cloud through its own Force.com site, I was a bit skeptical about the initiative. However, as I started to use the tools and feel my way around, my doubts gave way to intrigue.

Salesforce.com has assembled an almost overwhelming array of development tools for the service— enough to ensure that developers of various skill levels and tool persuasions should find a suitable path to developing a Force.com application. I found options that ranged from filling out form-based applications through a Web interface to writing raw code right from my desktop. All told, Force.com is a very promising addition to Salesforce’s line of services and is well worth further evaluation, particularly for organizations already using the company’s applications.



Force.com and Eclipse
Among the numerous tools provided with this service is a powerful plug-in for building Force.com applications from within the Eclipse IDE (integrated development environment), which I recently put to the test. I focused on the Force.com IDE, an Eclipse plug-in that enables developers to work from their desktops as they develop software for deployment on Force.com servers. It’s an approach that reminded me of the Amazon AWS Toolkit for Eclipse that I recently reviewed. As with that Amazon plug-in—and most other Eclipse plug-ins I’ve tested—installation of the Force.com IDE was a snap. The version of the plug-in I tested didn’t yet support the latest Eclipse release, Version 3.5, but since Eclipse lives inside its own isolated directory on my computer, I had no trouble maintaining simultaneous 3.5 and 3.4 installations.

Development for the Force.com platform is a bit unusual compared with traditional Eclipse development because almost all aspects of the development take place on the Force.com servers. Therefore, when I created a new Force.com project in Eclipse, I was asked to provide my Force.com credentials, which were freely available through the Force.com developer site. As I worked in Eclipse, my code was compiled not on my local machine, but on the remote Force.com servers. Likewise, when I tested and ran the code, it all took place remotely. Yet, I found the integration seamless and felt as if the code I was working with could have been running locally.



Yet another language: Apex
Development for Force.com requires programmers to use a new language called Apex, which runs on top of the Apex framework. This framework is fundamental to the Salesforce.com servers and is not new to Force.com: It’s been part of Salesforce.com for several years. At first, I was a little concerned when I heard that the development would require the use of a new language. Fortunately, the language has traditional C-style syntax and looks very similar to Java and C#, so I felt at home using it.

However, Apex does differ from C# and Java in certain respects: Conceptually it behaves similar to a database language, such as T-SQL or PL/SQL. Apex includes database constructs such as data select statements, transaction support and triggers. Salesforce has done a beautiful job of integrating data access right into the language, which is something I’ve longed for more than once in my programming career. (Take a look at the Apex developer site for more information.)

Because Apex code runs on multitenant servers that are in the cloud, it’s vital that the code doesn’t get out of control, thereby hogging resources and causing other problems on the platform. With that in mind, the Apex language was built to be monitored by what Salesforce calls a “governor.” If you’re familiar with .NET managed code or the Java run-time, think of either of those on steroids. The governor carefully manages and monitors the running code—not just its data aspects, but all aspects, including loops and other control structures, to keep it within isolated boundaries. The documentation refers to Apex as a “virtual virtual machine,” which seems fitting.



Other features
The Eclipse plug-in is certainly full-featured. One particularly cool feature is for running “anonymous” code right on the remote server, on the fly. Most developers have probably done this kind of thing before in other languages (albeit in a locally run debugger) when the developer inspects an object, calls a member function right in the inspector, and watches the code run right then and there. Similarly, with the “Execute Anonymous” pane that the Force.com plug-in provides for Eclipse, I could execute code on the fly on the server and see the results in a Results text box.

The Apex language includes support for unit testing, and the Eclipse plug-in supports this as well. By creating unit tests in my classes, I could easily test out my classes on the remote server from within Eclipse. Another interesting feature is the Schema Explorer, which allows developers to browse and inspect their database model on the server, graphically, from within Eclipse.

The Force.com development site is huge, and the Eclipse plug-in is only one small piece of it. For those who aren’t keen on writing code, the site makes available various designers that can be used to piece together an entire application right on the Website. The resulting application will run on the Force.com cloud as a set of Web pages. There are other design tools available for building Force.com applications that run inside Adobe Flash or Adobe Air run-times.

What’s more, Force.com applications can be readily integrated with other providers’ cloud services, including Amazon’s Web Services, Google’s App Engine and Facebook. There are tool kits that work together with Java, as well as .NET. And there are plenty of developer-contributed libraries (such as AJAX support), as well as stand-alone tools such as a data explorer.

Source of Information : eWeek February 15 2010

Friday, September 10, 2010

Microsoft Forefront UAG 2010 makes DirectAccess feasible

UAG 2010 addresses many of the shortcomings of DirectAccess

Microsoft’s Forefront Unified Access Gateway 2010 addresses many of the shortcomings of the company’s new always-on remote connectivity solution, DirectAccess, providing sorely needed measures of performance and availability scaling, global management and backward compatibility to help move DirectAccess beyond mere pilot projects to actual deployment on real networks.

I tested DirectAccess last October (tinyurl.com/yelgw6m) and found that the product (which is baked into Windows 7 Enterprise and Ultimate on the client side, as well as Windows Server 2008 R2) made for an interesting and effective pilot project. However, its lack of scale, global manageability and backward OS compatibility on both the client and server sides would effectively limit its usefulness on most live domains and networks. Into the breach steps UAG, which addresses each of those concerns.

Administrators who install UAG on each DirectAccess server in the network can scale DirectAccess management and performance beyond a single server by creating an array to aggregate all the servers. UAG’s NAT64 and DNS64 implementations provide DirectAccess connectivity to IPv4-only intranet servers and applications, while SSL (Secure Sockets Layer) VPN functionality provides access to remote clients using older operating systems or those not joined to the domain. For the purposes of this test, I focused on the enhancements to DirectAccess that UAG affords, and did not look at UAG’s SSL VPN implementation.

Forefront UAG 2010, which started shipping in December, is licensed through Microsoft’s volume licensing program and requires both per-server licenses and CALs (Client Access Licenses). Each Forefront UAG server license costs $6,341 (which does not include the license cost for the underlying Windows Server 2008 R2 OS), while CALs (which can be purchased per user or per device) are $15 each. Large customers ordering more than 10,000 access licenses are eligible for a volume discount.



Making use of IPv6
On its own, DirectAccess makes use of IPv6 to route traffic from a remote Windows 7 client over the Internet to the DirectAccess server that bridges the traffic to a protected intranet server. IPv6 support is incomplete throughout the Internet, so DirectAccess employs transition technologies such as 6to4 and Teredo to traverse the IPv4 Internet or NAT (Network Address Translation) networks. But if the intranet server doesn’t support IPv6 with a dual-layer IP stack, DirectAccess can’t complete the connection. Forefront UAG 2010 solves this problem by implementing NAT64 and DNS64 at the network edge. When a remote client tries to access an intranet server, UAG sends two DNS (Domain Name System) lookups to the intranet DNS server—one for an IPv4 A record and one for an IPv6 AAAA record. If the DNS server has both records, it will return the AAAA record to UAG, and standard DirectAccess communications will be employed. If an application does not support IPv6 while the server itself does, administrators should disable the IPv6 support on the server or remove its AAAA record from DNS to avoid complications.

If only an A record gets returned, UAG assumes the server uses only IPv4, and NAT64 must be employed. NAT64 adds a prefix to the server’s IPv4 address and returns the full value (prefix plus IPv4 address) to the requesting client. When the client begins communications with the server, UAG strips off the prefix and creates a new IPv4 packet to send to the server. When the server responds through the same UAG gateway, UAG recrafts the packet for IPv6 with the prefix and sends it to the client. To test UAG’s ability to deliver DirectAccess connectivity to legacy applications and servers, I added a Windows Server 2003-based member server running Exchange 2003 to my test network. Although Windows Server 2003 does support IPv6, it has a dual-stack IP implementation that doesn’t work with DirectAccess.

Through UAG, my remote client was able to access Exchange as if the machine were connected directly to the intranet. I was able to connect to Outlook Web Access and to Exchange from Outlook without needing to change settings on the client. I also tested UAG’s DirectAccess functionality with a non-Microsoft Web application. I added an old firewall appliance that doesn’t support IPv6 to the intranet and added the appropriate A record to my DNS server. Again, from my remote client, I was able to successfully access the appliance’s Web-based management console via UAG DirectAccess. For my tests, I deployed UAG DirectAccess in an end-to-edge configuration, terminating encryption and authentication at the UAG server at the network edge.



Load balancing
Forefront UAG 2010 allows administrators to scale the management and performance of DirectAccess, which, by itself, requires administrators to individually configure each DirectAccess server. UAG allows administrators to define one UAG DirectAccess server as an array master, effectively replacing the DirectAccess management snap-in with a UAG snap-in, through which a policy created on the master will be automatically replicated to all member servers in the array.

When I added a second UAG server, I used Windows’ built-in Network Load Balancing technology. I had to define virtual IP addresses (one on the intranet and two on the Internet) to represent the cluster; create a certificate for the VIP; and ensure a certificate was exported to the store on each UAG server. With two servers in my UAG array, from my remote client I initiated a connection to my Exchange server on the intranet. By looking at certificate information on the client, I determined which UAG server in the array was parsing the connection. I paused that UAG server’s virtual machine, simulating a server failure.

After a minute, the connection failed over to the second UAG server, re-establishing the connection between the remote client and Exchange server. The delay is due to a
60-second wait period before Windows will break an IP Security association. This delay is put in place to avoid excessive IPSec negotiations with clients on lousy network connections, but the wait period can mean a minute-long lack of remote connectivity that could lead to some support calls.

Although the IPSec timeout is not configurable, Microsoft officials have said there are programmatic workarounds that can be done on the client end to break the connection. If this timeout becomes an issue for customers, they said, Microsoft will look into providing a fix to do that.

Source of Information : eWeek February 15 2010

Thursday, September 9, 2010

Ubuntu - Sharing a Printer with Windows

It is usually best to use a native printing protocol. For Ubuntu, LPD and CUPS are native. Most versions of Windows support network printing to LPD servers, so sharing with LPD should be enough, but it requires users to configure their printers.

Native Windows environments can share printers using the Server Message Block (SMB) protocol. This allows Windows users to browse the Network Neighborhood and add any shared printers; very little manual configuration is required. Sharing an Ubuntu printer with Windows users requires installing SAMBA, an open source SMB server.

On the print server:

1. Install SAMBA on the print server. This provides Windows SMB support:

sudo apt-get install samba

2. Create a directory for the print spool:

sudo mkdir /var/spool/samba/

3. Edit the SAMBA configuration file: /etc/samba/smb.conf.

4. Under the [global] section, change workgroup = to match your Windows workgroup. For example, my office workgroup is SLUGGO:

[global]
workgroup = SLUGGO

5. Under the [global] section is an area for printer configuration. Uncomment (remove the leading ;) the load printers = yes and CUPS printing lines.

########## Printing ##########
# If you want to automatically load your printer list rather
# than setting them up individually then you’ll need this
load printers = yes
# lpr(ng) printing. You may wish to override the location of the
# printcap file
; printing = bsd
; printcap name = /etc/printcap
# CUPS printing. See also the cupsaddsmb(8) manpage in the
# cupsys-client package.
printing = cups
printcap name = cups

6. Set the [printers] section to look like this:

[printers]
comment = All Printers
browseable = no
security = share
use client driver = yes
guest ok = yes
path = /var/spool/samba/
printable = yes
public = yes
writable = yes
create mode = 0700

This setting allows any Windows client to access the printers without a password.

7. (Optional) Under the [printers] section, set browseable = yes. This allows Windows systems to see the printers through the Network Neighborhood.

8. Restart the SAMBA server:

sudo /etc/init.d/samba restart

On the Windows client, you can add the printer as if it were a Windows printer. For example, if the server’s name is printer.home.com and the printer is Okidata, then the shared printer resource would be \\printer.home.com\Okidata. Windows clients will need to install their own print drivers.
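Before moving to the Windows clients, it can be worth confirming that SAMBA is actually exporting the printer. A quick check from any Linux host with the smbclient package installed, using the server and workgroup names from this example, looks roughly like this:

# List the shares offered by the print server; the printer should appear as a print share
smbclient -L printer.home.com -W SLUGGO -N

If the printer does not show up in the list, recheck the [printers] section of smb.conf and restart SAMBA.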

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Wednesday, September 8, 2010

Ubuntu - Sharing a Printer with LPD

Enabling LPD support is a little more complex, since Ubuntu does not normally include servers.

On the print server:

1. Install xinetd on the print server. This is the extended Internet daemon, which starts network services on demand.

sudo apt-get install xinetd

2. Create a configuration file for the printer service. This requires creating a file called /etc/xinetd.d/printer. The contents should look like this:

service printer
{
socket_type = stream
protocol = tcp
wait = no
user = lp
group = sys
server = /usr/lib/cups/daemon/cups-lpd
server_args = -o document-format=application/octet-stream
}

3. Restart the xinetd server:

sudo /etc/init.d/xinetd restart
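LPD uses TCP port 515, so a quick way to confirm that xinetd picked up the new printer service is to look for a listener on that port (run this on the print server):

# Port 515 should be in the LISTEN state once xinetd has loaded /etc/xinetd.d/printer
sudo netstat -tln | grep :515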

On the printer client:

1. Go to System -> Administration -> Printing to open the printer applet.

2. Double-click New Printer to configure the device.

3. Select a Network Printer and the Unix Printer (lpd) protocol.

4. Enter the print server hostname (or IP address) in the Host field and the CUPS printer name under the Queue field.

5. Continue through the remaining screens to select the printer type and configure it.
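If the client has no GUI, or you simply prefer the command line, the same queue can be created with lpadmin instead of the printer applet. This is a minimal sketch using the server and printer names from the example above; the local queue name is arbitrary:

# Create a local CUPS queue that forwards jobs to the remote LPD printer
sudo lpadmin -p Okidata-remote -E -v lpd://printer.home.com/Okidata
# Send a quick test job
lpr -P Okidata-remote /etc/hosts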

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Tuesday, September 7, 2010

Ubuntu - Sharing a Printer with CUPS

To share the printer with CUPS, you will need to configure both the printer server and the client.

On the print server:

1. Edit /etc/cups/cupsd.conf and change the line that reads Listen localhost:631 to Port 631. This tells CUPS to allow printing from any remote system, and not just localhost.

2. (Optional) Edit /etc/cups/cupsd.conf and change Browsing off to Browsing on. This allows the server to announce the printer’s availability to other hosts on the network.

3. Restart the CUPS subsystem on the print server:
sudo /etc/init.d/cupsys restart # Hardy Heron (8.04 LTS) and older
sudo /etc/init.d/cups restart # Jaunty Jackalope (9.04) and newer

The default is an announcement every 30 seconds. You can change this by specifying a BrowseInterval. For example, BrowseInterval 15 will announce every 15 seconds, and a value of 300 will announce every five minutes.
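Taken together, the server-side changes amount to a handful of lines in /etc/cups/cupsd.conf. The values below are only an illustration; the interval shown is the default:

# Accept print jobs from the network, not just from localhost
Port 631
# Announce shared printers to other hosts
Browsing on
BrowseInterval 30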

On the print client:

1. Go to System -> Administration -> Printing to open the printer applet.

2. Add a New Printer.

3. Select a Network Printer and the CUPS Printer (IPP) protocol.

4. Enter the printer hostname and printer name as a URL. For example, if the server is named printer.home.com and the printer is called Okidata, then you would use ipp://printer.home.com/printers/Okidata.

5. Click the Forward button and select the printer model.

6. Create a description for the printer.

7. Click on the Apply button to create the printer.

If you enabled browsing in Step 2 of the server configuration, then Ubuntu clients will try to automatically discover and configure the remote printer.
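You can also check from the client, without the GUI, that the server is exporting the queue and that jobs go through. Two quick commands, using the example hostname and printer name:

# Ask the remote CUPS server which printers it shares
lpstat -h printer.home.com:631 -p
# Send a small test file to the shared queue
lp -h printer.home.com:631 -d Okidata /etc/hosts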

CUPS RUNNETH OVER
CUPS provides many configuration options, but it has a long history of being a security risk. The CUPS installation includes a web-based administration interface. By default, it is not accessible remotely. (But if you followed the steps under Sharing With CUPS, then it is remotely accessible.) The URL for this interface is http://localhost:631/.

Although you can use the CUPS web interface to view and manage the print queue, the default administration interface does not permit adding new printers or changing configurations. This functionality is disabled in Ubuntu, primarily because of security risks. Enabling this interface is not recommended. Instead, if you need to modify printer configurations, use the System -> Preferences -> Printing application.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Monday, September 6, 2010

Adding a Printer in Ubuntu

Adding a printer under Ubuntu is straightforward. Go to System -> Administration -> Printing to open the printer applet. From there, you can double-click New Printer to configure the device.

The first step in adding a printer requires specifying which kernel device communicates with the printer. The system will search for local and network printers. You also have the option to configure a local printer using a USB or parallel port, or a network printer. Although the local printer configuration is easy (select the detected USB printer or parallel port), networked printers require additional information.

CUPS Printer (IPP)—The Common Unix Printing System allows the sharing of printers between different Unix computers. You will need to provide a URL for the printer, such as ipp://server/printers/printername.

Windows Printer (SMB)—Windows printers are very common. In small offices, a user with a printer directly connected to a Windows host can share the printer with the network. You will need to provide the Windows hostname, printer name, and any username and password needed to access the device.

Unix Printer (LPD)—The Line Printer Daemon protocol is one of the oldest and most reliable network printing options. Most standalone network printers support LPD. For this option, you will need to provide the hostname and the name of the LPD print queue.

HP JetDirect—This is another common protocol for standalone printers. You only need to provide the hostname (and port number if it’s not the default 9100).
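The exact dialog fields change between releases, but behind the scenes each option maps to a CUPS device URI with a predictable form. The examples below use placeholder host, share and queue names:

ipp://printer.home.com/printers/Okidata (CUPS Printer)
smb://SLUGGO/winhost/Okidata (Windows Printer)
lpd://printer.home.com/Okidata (Unix Printer)
socket://printer.home.com:9100 (HP JetDirect)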

The printer configuration applet’s layout changes with each Ubuntu version. However, the core functionality remains the same. Use sudo apt-get install cups-pdf to add a printer for generating PDF files.

The second step for adding a printer requires you to specify the type of printer. If your exact printer model is not listed, chances are good that there is a model that is close enough. In the worst case, you can always select one of the generic printer options. Finally, you should name the printer. Give it a descriptive name so that you can recognize it later.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Sunday, September 5, 2010

Enabling Multiple CPUs (SMP)

Many of today’s computers have multiple CPUs. Some are physically distinct and others are virtual, such as hyperthreading and dual-core CPUs. In any case, these processors support symmetric multiprocessing (SMP) and can dramatically speed up Linux.

The kernel supports multiple CPUs and hyperthreading. If your computer has two CPUs that both support hyperthreading, the system will appear to have a total of four CPUs.

Older versions of Ubuntu, such as Hoary and Breezy, had different kernels available for SMP. To take advantage of multiple processors, you would need to install the appropriate kernel.

sudo apt-get install kernel-image-2.4.27-2-686-smp

Without installing an SMP kernel, you would only use one CPU on an SMP system.

Dapper Drake (6.06 LTS) changed this requirement. Under Dapper, all of the default kernels have SMP support enabled. The developers found that there was no significant speed impact from using an SMP kernel on a non-SMP system, and this simplified the number of kernels they needed to maintain.

There are a couple of ways to tell if your SMP processors are enabled in both the system hardware and kernel:

• /proc/cpuinfo—This file contains a list of all CPUs on the system. Alternatively, you can use sudo lshw -class cpu.

• top—The top command shows what processes are running. If you run top and press 1, the header provides a list of all CPUs individually and their individual CPU loads. (This is really fun when running it on a system with 32 CPUs. Make sure the terminal window is tall enough to prevent scrolling!)

• System Monitor—The System Monitor applet can be added to the Gnome panels. When you click it, it shows the different CPU loads.

In each of these cases, if only one CPU is listed, then you are not running SMP. Multiple CPUs in the listings indicate SMP mode.
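For a quick scripted check, you can simply count the processor entries in /proc/cpuinfo; anything greater than 1 means the kernel is running with SMP:

# Each logical CPU (core or hyperthread) appears as one "processor" entry
grep -c '^processor' /proc/cpuinfo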



Disabling SMP
In some situations, such as application benchmarking or hardware debugging, you may want to disable SMP support. This can be done with the kernel parameters nosmp or maxcpus=1. If this is a temporary need, then you can boot the system, catch GRUB at the menu by pressing Esc, and enter boot nosmp maxcpus=1 at the prompt. If you have multiple boot options, then you may need to edit the kernel line and add nosmp maxcpus=1 to the kernel boot line.

Some kernels may not work with nosmp, but in my experience maxcpus=1 always works.

The default boot loader gives you three seconds to press the escape key before it boots the operating system.
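To make the change stick across reboots on these GRUB-legacy releases, append the options to the kernel line of the relevant entry in /boot/grub/menu.lst. The stanza below is only a sketch; the kernel version and root device will differ on your system:

title Ubuntu, kernel 2.6.24-19-generic (single CPU)
root (hd0,0)
kernel /boot/vmlinuz-2.6.24-19-generic root=/dev/sda1 ro quiet splash nosmp maxcpus=1
initrd /boot/initrd.img-2.6.24-19-generic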



Missing SMP?
If you find that you have only one active CPU on a multiple-CPU system, then there are a few generic debugging options. The problem is unlikely to be related to Ubuntu—it is probably a general Linux kernel problem.

• Check with the motherboard’s manufacturer to see if Linux supports the chipset. For example, I have an old dual-CPU motherboard that is not supported by Linux.

• Check the Linux Hardware FAQ for the motherboard or chipset. This will tell you if other people managed to get it to work. Web sites such as https://wiki.ubuntu.com/HardwareSupport and www.faqs.org/docs/Linux-HOWTO/SMP-HOWTO.html are good places to start.

• If all else fails, post a query to any of the Linux or Ubuntu hardware forums. Maybe someone else knows a workaround. Some good forums include ubuntuforums.org, www.linuxhardware.org, and www.linuxforums.org. Be sure to include details such as the make and model of the motherboard, Ubuntu version, and other peripherals. It is generally better to provide too much information when asking for help, rather than too little.

Unfortunately, if SMP is not enabled after the basic installation, then it probably will not work. You might get lucky and find that someone has a patch, but you will probably need to recompile the kernel to apply it.

Compiling the kernel is not for the weak-of-heart. Many aspects of Linux are now automated or have easy-to-use graphical interfaces, but compiling the kernel is not one of them. There are plenty of help pages and FAQs for people daring enough to compile their own kernel.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Saturday, September 4, 2010

Configuring Services from the Command Line

The GUI applications work well when you have a GUI, but they are not ideal for remote system administration or for managing the Ubuntu Server installation (which lacks a GUI). Usually administrators need to use the command line to create, delete, or rename links in the /etc/rc*.d/ directories in order to modify system services. However, there is an alternative. The sysv-rc-conf tool offers a middle ground by allowing easy access to the boot services without requiring manual modification of the different startup files found in /etc/init.d/ and /etc/rc*.d/.

sudo apt-get install sysv-rc-conf

Running this tool (sudo sysv-rc-conf) brings up a text list of all services and runlevels. Using this tool, you can immediately start or stop services by pressing + or -, and the spacebar enables or disables a service in specific runlevels. The tool also supports the mouse; clicking a check box enables or disables the service.

As with the default Services applet, selecting or clearing a service will immediately change the service’s running status and alter the service’s boot-up configuration.

The sysv-rc-conf tool only recognizes services in the /etc/rc*.d/ and /etc/init.d/ directories. It is not Upstart-aware. Services that have been converted to Upstart, such as cron, hal, and bootclean, still have compatibility scripts in /etc/init.d/, but sysv-rc-conf lists them as not being used in any runlevel.
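If you need to script a change rather than step through the sysv-rc-conf screen, the underlying links can also be managed with update-rc.d. A minimal sketch using a hypothetical service name:

# Drop all of the service's /etc/rc*.d/ links (the script in /etc/init.d/ is left alone)
sudo update-rc.d -f myservice remove
# Recreate the links with the package defaults (start in runlevels 2-5, stop in 0, 1, and 6)
sudo update-rc.d myservice defaults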

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Friday, September 3, 2010

Configuring Boot-Up Services with bum

The Boot-Up Manager (bum) is a powerful GUI for managing startup services. Unlike the default Services applet, bum lists all startup services, including ones that you created. bum also includes an advanced menu for changing the startup priorities and viewing the startup sequence by runlevel. And best yet: bum is available for all Ubuntu platforms and is even Upstart-aware.

To use bum, you first need to install it: sudo apt-get install bum. To run it, use sudo bum. bum might take a minute to start up; it looks for package descriptions related to each startup service. The basic window shows the service name with a one-line description. An icon indicates whether the service is currently running, and a check box allows you to enable or disable it.

The most powerful part of bum comes from the tiny check box labeled Advanced at the bottom of the screen. This check box creates three tabs: Summary; Services; and Startup and shutdown scripts. The Summary tab contains the basic window. However, the Services tab is just plain awesome. It lists every service and the startup order for each runlevel. The table permits sorting by service name, runlevel startup order, or even current status. But the best part happens when you highlight any service: the text box provides a description of the service, so you can tell exactly what it does.

The final advanced tab shows you the services found in /etc/rcS.d/. These are generally system-critical startup and shutdown scripts that are needed regardless of the runlevel. Because these are critical (like keyboard setup and console drivers), bum does not allow you to modify the settings. (You can look, but don’t touch.) To modify these, you will need to use the command line.

With the default Services applet, changes take effect as soon as you click on a check box. With bum, alterations are not performed until you press the Apply button.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Thursday, September 2, 2010

Configuring Services with the GUI

Managing services by hand can be time-consuming. Ubuntu includes an easy applet for enabling and disabling some system services: System -> Administration -> Services. Enabling or disabling services only requires changing a check box.

Checking or unchecking a service will immediately change the service’s current running status. It will also alter the service’s boot status. This way, if you uncheck a service, you don’t need to manually stop any running processes and it will not start at the next boot. Checking a service starts it immediately and schedules it to start with each reboot.

Although this tool does identify some of the better-known services, it does not list custom services and does not identify different runlevels. Since Ubuntu normally runs at runlevel 2, you are only modifying services that start during runlevel 2. In order to control more of the boot options, you either need to modify the files in the /etc/init.d and /etc/rc*.d directories, or you need a better tool, like bum or sysv-rc-conf.

The Services applet was removed from Karmic Koala (9.10). To configure startup services, either use the command line or install bum.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Wednesday, September 1, 2010

Understanding Upstart

Starting with Edgy Eft (Ubuntu 6.10), the System V init was replaced with an event-driven process manager called Upstart. The location of the Upstart configuration files varies with the Ubuntu version. (It’s new, so you can expect it to move around until it becomes more standard.) Under Jaunty Jackalope (9.04), the configuration files are in /etc/event.d/. With Karmic Koala (9.10), they are under /etc/init/.

Each Upstart control file lists the runlevels where it should start and stop, which commands to execute, and the set of dependencies. Besides the required command to run, Upstart supports optional pre- and post-command scripts. For example, a pre-start script may prepare the environment, the command (script or exec) starts the service, and a post-start script can clean up any unnecessary files after running the command.

Karmic’s dbus.conf file is a good example of a simple Upstart script. This file uses a pre-start script block to create the required directories and initialize the system. It runs a single command to start the service (exec dbus-daemon --system --fork), and then it runs a single command after starting the service.

# dbus - D-Bus system message bus
#
# The D-Bus system message bus allows system daemons and user
# applications to communicate.

description "D-Bus system message bus"

start on local-filesystems
stop on runlevel [06]

expect fork
respawn

pre-start script
mkdir -p /var/run/dbus
chown messagebus:messagebus /var/run/dbus
exec dbus-uuidgen --ensure
end script

exec dbus-daemon --system --fork

post-start exec kill -USR1 1

To control Upstart scripts, you can use the start, stop, and restart commands. In addition, the status command lets you know if a service is running. For example, the following commands exercise the cron daemon.

$ sudo stop cron # same as: sudo /etc/init.d/cron stop
cron stop/waiting
$ sudo status cron
cron stop/waiting
$ sudo start cron # same as: sudo /etc/init.d/cron start
cron start/running, process 30166
$ sudo status cron
cron start/running, process 30166
$ sudo restart cron # same as: sudo /etc/init.d/cron restart
cron start/running, process 30182
$ sudo status cron
cron start/running, process 30182

For backward compatibility, Upstart includes scripts that will run any init scripts located under the /etc/rc*.d/ directories. You do not need to
port scripts from init to Upstart, and you don’t need to worry about installing software that is not configured for using Upstart.

If the service is started with a single command, then use exec. More complex services can be started using a script block that contains the script to run.
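As an illustration, a hypothetical job that needs a little shell setup before launching its daemon could use a script block like this. The file would live in /etc/init/myjob.conf on Karmic; the service name and paths are made up for the example:

# /etc/init/myjob.conf -- hypothetical example
description "example job started from a script block"

start on runlevel [2345]
stop on runlevel [016]

script
    # pull in settings, then hand control to the daemon
    . /etc/default/myjob
    exec /usr/sbin/myjobd --config "$MYJOB_CONF"
end script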

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations