Tuesday, September 30, 2008

Audio Rippers and Encoders in Ubuntu Linux

The application you use to rip audio files from CD and encode them into space-saving MP3 or Ogg Vorbis formats is commonly referred to as a ripper. For some time now, the most commonly used non–command-line ripper has been Grip, which still has its dedicated following. Other simpler-to-use rippers, however, have surfaced more recently, such as RipperX, Goobox, and the one that comes bundled with Ubuntu: Sound Juicer.


Sound Juicer
Sound Juicer is a relatively new program that is pretty straightforward to use and quite capable in terms of what it does. It isn’t perfect, though, and it still can be a bit quirky. In addition, unlike many of its ripping cousins, Sound Juicer does not automatically create a playlist for the songs you rip and encode, and it lacks a simple means by which to adjust the encoding bitrate.

Despite these limitations, there are still many people who prefer Sound Juicer to the competition, so you might as well give it a try to see how you like it. To get started, just place the CD you want to rip in your drive; Sound Juicer will start up automatically and display the title of your CD, the artist’s name, and titles of all the tracks in the application window.

You can go about things another way by going to Applications -> Sound & Video -> Sound Juicer CD Extractor and then placing your CD in the drive. In this case, however, you might have to go to the Sound Juicer Disc menu and select Re-read Disc before your album and track information will appear.

If you are not connected to the Internet, these bits of album information will not appear because album and track data are not embedded in the CD itself. What happens instead is that the audio ripper or player on your machine sends the digital ID of the CD you’re playing to an online database, such as freeDB.org or CD DataBase (CDDB), which is now officially known as Gracenote. In turn, the online database sends the album information for that CD back to the player or ripper.


Setting the Default Folder for Ripped Files in Sound Juicer
By default, Sound Juicer is set up to rip your CDs and encode audio tracks in Ogg Vorbis format, and the default location in which Sound Juicer saves these files is your home folder. Because Sound Juicer, like all other rippers, will create an artist folder for each CD you rip, you will end up with a lot of folders in your home folder if you rip albums from a large number of artists. It is best to create a Music folder within your home folder, as you did for your graphika account, and then make that folder your default location for ripped music.

To set the default from within Sound Juicer, go to the Edit menu and select Preferences. This will open the Preferences window where you can change the output path by clicking the menu button next to the word Folder (the button itself should say Home at this point) and then selecting Other in the menu that appears. After that, browse to your Music folder, click it once in the list to select it, and then click the Open button.


Ripping and Encoding Sound Files in Sound Juicer
To start ripping the audio tracks from the CD you have in your drive, you first need to select the format in which you wish to encode the tracks. To do this, go to Edit -> Preferences, and then in the Output Format menu select the encoding format of your liking. The most common of these are MP3 (MP3 audio) and, in the Linux world, CD Quality, Lossy (Ogg multimedia). Once you’ve made your choice, click Close.

After that, all you need to do is click the Extract button at the bottom of the Sound Juicer window to rip and encode all of the tracks on the CD. If there are certain tracks you do not care to rip and encode, just uncheck the checkboxes next to the names of those songs before you click Extract. If you only want to rip and encode a few of the songs in the list, it might be better to first go to the Edit menu, select Deselect All, and then check the checkboxes next to the songs you do want to rip before clicking Extract.

While the songs are being ripped and encoded, Sound Juicer will show you its progress in the lower-left corner of the window, and when it’s all done, it will tell you so in a small window. Once you get that message, click Close. You can eject the CD by going to the File menu and selecting Eject. If you want to rip and encode another CD, pop it into the drive; just as with the first CD, the album, artist, and song titles will all appear in the program window, and you can rip away yet again.

Once you’re done with your ripping chores, you could check out the results of your efforts using one of the players discussed in the following sections, of course. But the quickest and perhaps the most interesting way to play back your newly ripped files is to open a Nautilus window and then navigate to the new tracks within your Music folder. Once you’re there, place your cursor over any one of the tracks; a little eighth note in one of those comic bubbles will appear, and the track, without so much as a single mouse click, will mysteriously start playing.

Source of Information : Ubuntu for Non-Geeks

Monday, September 29, 2008

How Do Companies Make Money with Linux?

Open source enthusiasts believe that better software can result from an open source software development model than from proprietary development models. So in theory, any company creating software for its own use can save money by adding its software contributions to those of others to gain a much better end product for themselves. Companies that want to make money selling software need to be more creative than they did in the old days. While you can sell the software you create that includes GPL software, you must pass the source code of that software forward. Of course, others can then recompile that product, basically using your product without charge. Here are a few ways that companies are dealing with that issue:

• Software subscriptions—Red Hat, Inc. sells its Red Hat Enterprise Linux products on a subscription basis. For a certain amount of money per year, you get binary code to run Linux (so you don’t have to compile it yourself), guaranteed support, tools for tracking the hardware and software on your computer, and access to the company’s knowledge base. While Red Hat’s Fedora project includes much of the same software and is also available in binary form, there are no guarantees associated with the software or future updates of that software. A small office or personal user might take the risk on Fedora (which is itself an excellent operating system), but a big company that’s running mission-critical applications will probably put down a few dollars for RHEL.

• Donations—Many open source projects accept donations from individuals or open source companies that use code from their projects. Amazingly, many open source projects support one or two developers and run exclusively on donations.

• Bounties—Software bounties are a fascinating way for open source software companies to make money. Let’s say that you are using XYZ software package and you need a new feature right away. By paying a software bounty to the project itself, or to other software developers, you can have your needed improvements moved to the head of the queue. The software you pay for will remain covered by its open source license, but you will have the features you need, at probably a fraction of the cost of building the project from scratch.

• Boxed sets, mugs, and T-shirts—Many open source projects have online stores where you can buy boxed sets (some people still like physical CDs and hard copies of documentation) and a variety of mugs, T-shirts, mouse pads, and other items. If you really love a project, for goodness sake, buy a T-shirt!

This is in no way an exhaustive list, because more creative ways are being invented every day to support those who create open source software. Remember that many people have become contributors to and maintainers of open source software because they needed or wanted the software themselves. The contributions they make for free are worth the return they get from others who do the same.

Source of Information : Linux Bible 2008 Edition

Sunday, September 28, 2008

Tracking Process Performance Statistics

The tools to analyze the performance of applications are varied and have existed in one form or another since the early days of UNIX. It is critical to understand how an application interacts with the operating system, CPU, and memory system in order to analyze its performance. Most applications are not self-contained and make many calls to the Linux kernel and different libraries. These calls to the Linux kernel (or system calls) may be as simple as "what's my PID?" or as complex as "read 12 blocks of data from the disk." Different system calls will have different performance implications. Correspondingly, the library calls may be as simple as memory allocation or as complex as graphics window creation. These library calls may also have different performance characteristics.

Kernel Time Versus User Time
The most basic split of where an application may spend its time is between kernel and user time. Kernel time is the time spent in the Linux kernel, and user time is the amount of time spent in application or library code. Linux has tools such as time and ps that can indicate (appropriately enough) whether an application is spending its time in application or kernel code. It also has commands such as oprofile and strace that enable you to trace which kernel calls are made on behalf of the process, as well as how long each of those calls took to complete.
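For example, a quick first look at this split might use the time command and strace's summary mode (here myapp is just a placeholder for whatever program you are measuring):

$ time ./myapp        # prints real, user, and sys times when the program finishes
$ strace -c ./myapp   # summarizes which system calls were made, how often, and how long they took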

Library Time Versus Application Time
Any application with even a minor amount of complexity relies on system libraries to perform complex actions. These libraries may cause performance problems, so it is important to be able to see how much time an application spends in a particular library. Although it might not always be practical to modify the source code of the libraries directly to fix a problem, it may be possible to change the application code to call different or fewer library functions. The ltrace command and oprofile suite provide a way to analyze the performance of libraries when they are used by applications. Tools built into the Linux loader, ld, help you determine whether the use of many libraries slows down an application's start time.
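As a rough sketch (again, myapp is just a placeholder), the following commands summarize library usage and loader behavior:

$ ltrace -c ./myapp            # counts the library calls made and the time spent in each
$ LD_DEBUG=statistics ./myapp  # asks the dynamic loader to print relocation and startup statistics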

Subdividing Application Time
When the application is known to be the bottleneck, Linux provides tools that enable you to profile an application to figure out where time is spent within an application. Tools such as gprof and oprofile can generate profiles of an application that pin down exactly which source line is causing large amounts of time to be spent.
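A minimal gprof session, assuming you can rebuild the program from source, looks something like this:

$ gcc -pg -o myapp myapp.c   # compile with profiling support
$ ./myapp                    # running the program writes a gmon.out file in the current directory
$ gprof ./myapp gmon.out     # prints a flat profile and call graph showing where the time went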

Source of Information : Optimizing Linux® Performance

Friday, September 26, 2008

Linux in the Real World - Linux in Small Business

Often a small business can consolidate the Web services it needs into one or two Linux servers. It can meet its basic office computing needs with mature open source applications such as OpenOffice.org, GIMP, and a Firefox browser. But can a small business run entirely on open source software alone? When Jim Nanney started his Coast Grocery business (www.coastgrocery.com), where residents of the Mississippi Gulf Coast can order groceries online for delivery, he set out to do just that. In part, he just wanted to see if he could rely solely on open source software. But he also figured that cost savings of at least $10,000 by not buying commercial software could help make his small business profitable a lot faster.

To allow customers to order groceries online, Jim selected the open source e-commerce software called osCommerce (www.oscommerce.com). The osCommerce software is built with the PHP Web scripting language and uses a MySQL database. Jim runs the software from a Linux system with an Apache Web server. On the office side of the business, Jim relies entirely on Fedora Linux systems. He uses OpenOffice.org Writer for documents, GIMP and Inkscape for logos and other artwork, and GnuCash for accounting. For Web browsing, Firefox is used. So far, there has been no need to purchase any commercial software.

Here are some of the advantages that Jim has derived from his all–open source business:

• Community support—The communities surrounding osCommerce and Fedora have been very helpful. With active forums and 24-hour IRC channels, it has been easier to get help with those projects than with any proprietary software. Also, unlike with proprietary software, the participants are generally quite knowledgeable and often include the developers of the software themselves.

• Long-term security—Jim disputes conventional wisdom that betting your business on proprietary software is safer than relying on open source. If a software company goes out of business, the small business could go down, too. But with open source, you have the code, so you could always pay someone to update the code when necessary or fix it yourself.

• Easier improvements—By doing some of his own PHP programming, Jim had a lot of flexibility related to adding features. In some cases, he could take existing code and modify it to suit his needs. To create a special shopping list feature, he found it easiest to write code from scratch. In the process of using the software, when he found exploitable bugs, he submitted the code fixes back to the project.

• No compatibility problems—On those occasions where he needed to provide information to others, compatibility has not been a problem. When he makes business cards, door hangers, or other printed material, he saves his artwork to PDF or SVG formats to send to a commercial printer. Regular documents can be exported to Word, Excel, or other common formats.

For businesses starting on a shoestring, in many cases open source software can offer both the cost savings and flexibility needed to help the business survive during the difficult start-up period. Later, it can help those same businesses thrive, because open source solutions can often be easily scaled up as the business grows.

Source of Information : Linux Bible 2008 Edition

Thursday, September 25, 2008

Linux in the Real World - Schools

Cost savings, flexibility, and a huge pool of applications have made Linux a wonderful alternative to proprietary systems for many schools. One project has been particularly successful in schools: the K12 Linux Terminal Server Project (www.k12ltsp.org).

K12LTSP is based on the Linux Terminal Service Project (www.ltsp.org) and Fedora
(www.fedoraproject.org), but is tuned to work particularly well in schools. With K12LTSP, you centralize all your school’s applications on one or more server machines. Then you can use low-end PCs (old Pentiums or thin clients) as workstations. With thin clients starting under $200 or old PCs already hanging around your school, you can service a whole class or even a whole school for little more than the cost of the servers and some networking hardware.

By centralizing all the school’s software on a limited number of servers, K12LTSP can offer both security (only a few servers to watch over) and convenience (no need to reinstall hundreds of Windows machines to upgrade or enhance the software). Each client machine controls the display, mouse, and keyboard, while all of the user’s applications and files are stored on and run from the server.

The K12LTSP distribution contains many battle-tested open source applications, including full GNOME and KDE desktops, Evolution e-mail, Firefox browser, OpenOffice.org office suite, and the GNU Image Manipulation Program (GIMP) image application. It also adds DansGuardian (open source Web content filtering) and educational software (such as Gcompris). Applications that are not available in Linux can often be replaced with similar Linux applications or may be run from a Web browser.

Many schools in Oregon have adopted K12LTSP, including those attended by Linus Torvalds’ children in Portland, Oregon. Adoption of K12LTSP has also begun in Atlanta, Georgia and many other cities across the United States.

Source of Information : Linux Bible 2008 Edition

Wednesday, September 24, 2008

Managing E-mail with Evolution in Ubuntu

The default e-mail reader for Dapper Drake is called Evolution. This program is an open source clone of Microsoft Outlook. Besides viewing and composing e-mail, it also manages your calendar, task list, and contacts. Evolution also enables you to manage multiple e-mail accounts. While it natively supports many different mail server configurations, it does have a couple of quirks.


Configuring an Account
The most powerful part of Evolution is its list of supported mail protocols. It natively supports the Post Office Protocol (POP, also called POP3) and Internet Message Access Protocol (IMAP), as well as Microsoft Exchange and Novell GroupWise. This means that you should be able to use Evolution at home and in most corporate and small office environments.

When you first run Evolution (by clicking the mail icon in the default top panel or by selecting Applications -> Internet -> Evolution Mail), it asks you to set up an account. You can later add or edit accounts by running Evolution and selecting Edit -> Preferences -> Mail Accounts. You will be asked to provide three main types of information.

• Identity-This specifies the e-mail address and the name of the person on the address.

• Receiving options-This identifies how you retrieve your e-mail. For example, if you use a POP mail server, then you will specify the server's address and your account name.

• Sending options-The way you receive mail is not necessarily the same as the way you send mail. For example, you may receive mail using POP, but send using SMTP.

Your specific configuration will depend on your mail server. Most ISPs provide some type of mail server and instructions for configuring mail readers. Although they are unlikely to specify the configuration for Evolution, they should list the server's host name, protocol (for example, POP3 or IMAP), and any required security steps such as using SSL (or TLS) for encryption.

There are other options you can configure after creating a new account (select the Edit option under Mail Accounts). For example, you can specify how often to check for new mail and whether to save a copy of every out-going e-mail message.

Besides using e-mail from your local ISP, you will probably want to manage your free e-mail accounts. Some of the most common free e-mail accounts are Google Gmail, Yahoo! Mail, and Microsoft MSN Hotmail. Knowing how to configure e-mail for these free mail services will help you configure mail for most other mail services.
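As an illustration, these are the commonly documented Gmail POP settings (they may change, so check your provider's current help pages; POP access must also first be enabled in the Gmail web interface):

• Receiving-POP server pop.gmail.com, port 995, with SSL encryption; the user name is your full Gmail address.

• Sending-SMTP server smtp.gmail.com, port 587, with TLS encryption and server authentication required.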

Source of Information : Hacking Ubuntu by Neal Krawetz

Tuesday, September 23, 2008

Tracking Linux Memory Performance Statistics

Each system-wide Linux performance tool provides different ways to extract similar statistics. Although no tool displays all the statistics, some of the tools display the same statistics.

Memory Subsystem and Performance
In modern processors, saving information to and retrieving information from the memory subsystem usually takes longer than the CPU executing code and manipulating that information. The CPU usually spends a significant amount of time idle, waiting for instructions and data to be retrieved from memory before it can execute them or operate based on them. Processors have various levels of cache that compensate for the slow memory performance. Tools such as oprofile can show where various processor cache misses can occur.

Memory Subsystem (Virtual Memory)
Any given Linux system has a certain amount of RAM or physical memory. When addressing this physical memory, Linux breaks it up into chunks or "pages" of memory. When allocating or moving around memory, Linux operates on page-sized pieces rather than individual bytes. When reporting some memory statistics, the Linux kernel reports values in numbers of pages per second, and the page size can vary depending on the architecture it is running on.

On the IA32 architecture, the page size is 4KB. In rare cases, these page-sized chunks of memory can cause too much overhead to track, so the kernel manipulates memory in much bigger chunks, known as HugePages. These are on the order of 2048KB rather than 4KB and greatly reduce the overhead for managing very large amounts of memory. Certain applications, such as Oracle, use these huge pages to load an enormous amount of data in memory while minimizing the overhead that the Linux kernel needs to manage it. If HugePages are not completely filled with data, these can waste a significant amount of memory. A half-filled normal page wastes 2KB of memory, whereas a half-filled HugePage can waste 1,024KB of memory.
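You can check whether HugePages are configured on a system, and how many are in use, by looking at /proc/meminfo:

$ grep -i huge /proc/meminfo   # shows HugePages_Total, HugePages_Free, and Hugepagesize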

The Linux kernel can take a scattered collection of these physical pages and present to applications a well laid-out virtual memory space.

Swap (Not Enough Physical Memory). All systems have a fixed amount of physical memory in the form of RAM chips. The Linux kernel allows applications to run even if they require more memory than is physically available. In that case, the Linux kernel uses the hard drive as temporary memory. This hard drive space is called swap space.

Although swap is an excellent way to allow processes to run, it is terribly slow. It can be up to 1,000 times slower for an application to use swap rather than physical memory. If a system is performing poorly, it usually proves helpful to determine how much swap the system is using.
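A couple of quick ways to see how much swap a system is using (the 5 tells vmstat to print a new sample every five seconds):

$ cat /proc/swaps   # lists each swap device or file and how much of it is in use
$ vmstat 5          # the si and so columns show pages swapped in and out per second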

Buffers and Cache (Too Much Physical Memory). Alternatively, if your system has much more physical memory than required by your applications, Linux will cache recently used files in physical memory so that subsequent accesses to that file do not require an access to the hard drive. This can greatly speed up applications that access the hard drive frequently, which, obviously, can prove especially useful for frequently launched applications. The first time the application is launched, it needs to be read from the disk; if the application remains in the cache, however, subsequent launches read it from the much quicker physical memory. This disk cache differs from the processor cache mentioned earlier. Other than oprofile, valgrind, and kcachegrind, most tools that report statistics about "cache" are actually referring to disk cache.

In addition to cache, Linux also uses extra memory as buffers. To further optimize applications, Linux sets aside memory to use for data that needs to be written to disk. These set-asides are called buffers. If an application has to write something to the disk, which would usually take a long time, Linux lets the application continue immediately but saves the file data into a memory buffer. At some point in the future, the buffer is flushed to disk, but the application can continue immediately.

It can be discouraging to see very little free memory in a system because of the cache and buffer usage, but this is not necessarily a bad thing. By default, Linux tries to use as much of your memory as possible. This is good. If Linux detects any free memory, it caches applications and data in the free memory to speed up future accesses. Because it is usually a few orders of magnitude faster to access things from memory rather than disk, this can dramatically improve overall performance. When the system needs the cache memory for more important things, the cache memory is erased and given to the system. Subsequent access to the object that was previously cached has to go out to disk to be filled.
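To see how much memory is currently being used for buffers and disk cache, you can use:

$ free -m                                    # the buffers and cached columns are memory used for disk caching
$ grep -E '^(Buffers|Cached)' /proc/meminfo  # the same figures, reported in kilobytes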

Active Versus Inactive Memory. Active memory is currently being used by a process. Inactive memory is memory that is allocated but has not been used for a while. There is essentially no difference between the two types of memory. When required, the Linux kernel takes a process's least recently used memory pages and moves them from the active to the inactive list. When choosing which memory will be swapped to disk, the kernel chooses from the inactive memory list.

High Versus Low Memory. For 32-bit processors (for example, IA32) with 1GB or more of physical memory, Linux must manage the physical memory as high and low memory. The high memory is not directly accessible by the Linux kernel and must be mapped into the low-memory range before it can be used. This is not a problem with 64-bit processors (such as AMD64/EM64T, Alpha, or Itanium) because they can directly address additional memory that is available in current systems.

Kernel Usage of Memory (Slabs). In addition to the memory that applications allocate, the Linux kernel consumes a certain amount for bookkeeping purposes. This bookkeeping includes, for example, keeping track of data arriving from network and disk I/O devices, as well as keeping track of which processes are running and which are sleeping. To manage this bookkeeping, the kernel has a series of caches that contains one or more slabs of memory. Each slab consists of a set of one or more objects. The amount of slab memory consumed by the kernel depends on which parts of the Linux kernel are being used, and can change as the type of load on the machine changes.
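To see how much memory the kernel's slab caches are consuming, and which caches are the largest, you can use:

$ cat /proc/slabinfo   # raw per-cache object counts and sizes (may require root)
$ slabtop              # an interactive, sorted view of the largest slab caches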

Source of Information : Optimizing Linux® Performance: A Hands-On Guide to Linux® Performance Tools

Sunday, September 21, 2008

Enabling NFS in Ubuntu

Under Linux and most Unix operating systems, the network file system (NFS) is the common way to share directories. With other Unix and Linux operating systems, NFS is part of the core installation. But with Ubuntu, you need to install it as a package. There are three main components required by NFS:

• portmap-This package provides support for remote procedure calls (RPC) and is used by NFS. You don't need to install portmap by itself-the apt-get commands for the other two components will install portmap as a requirement.

• nfs-common-Although portmap provides support for RPC functions, this package actually provides the RPC functions for NFS. This package is required for NFS clients and servers. It provides basic RPC functions like file locking and status. If you only need to install an NFS client (meaning you will mount a directory exported by some other server), then you can use: sudo apt-get install nfs-common. Installing nfs-common will generate an error message, "Not starting NFS kernel daemon: No exports." This is expected since it is not configured.

• nfs-kernel-server-This package adds kernel modules so you can actually export a directory for use by a remote host; with this package, you get a server. You can install it using: sudo apt-get install nfs-kernel-server. This brings in portmap and nfs-common as required packages.

NFS is a great collaboration tool because entire file systems can be shared transparently. Everyone sees the same files and file changes are immediately accessible by everyone. The main limitation is operating system support. Although NFS exists for Linux, BSD, HP-UX, AIX, Solaris, BeOS, Mac OS X, and even OS/2, Windows does not natively include it. If you want to use NFS with Windows, consider installing the Windows Services for UNIX (http://www.microsoft.com/technet/interopmigration/unix/sfu/). This free product from Microsoft includes NFS server and client support.


Acting as an NFS Client
Mounting a remote file system with NFS is really easy. Just as the mount command can be used to access a hard drive, CD-ROM, or other block device, it can be used to mount a remote file system. You just need three items: the server's name, the directory name on the server that is being exported, and the mount point on your local system (a directory) for the connection. For example, to mount the directory /home/project from the server sysprj1 and place it at /mnt/project on your local computer, you would use:

sudo mkdir /mnt/project # to make sure it exists
sudo mount -t nfs sysprj1:/home/project /mnt/project

Now, all the files under /home/project on the host sysprj1 are accessible from the local directory /mnt/project. The access is completely transparent-anything you can do on your local file system can be done over this NFS mount.

If you don't know the name of the exported directory, NFS enables you to browse the list of exported partitions using the showmount -e command. This lists the exported directories and the clients that can access each one. The client list returned from the server can be an entire domain (for example, *.local.net) or a list of clients. Access restrictions are set by the NFS server and follow the Unix permissions. If you find that you cannot access the directory after mounting it, check the permissions with ls -l. If you do not have permission, then talk to the administrator for the NFS server.

$ showmount -e sysprj1
/home/projects *.local.net
/media/cdrom *.local.net

When you are done with the mounted partition, you can remove it using sudo umount /mnt/project.

For short-term access, you will probably want to use mount and umount to access the directory as needed. For long-term collaboration, you can add the entry in /etc/fstab. For example:

sysprj1:/home/project /mnt/project nfs defaults 0 0

Having the entry in /etc/fstab will make sure the directory is mounted every time you reboot. You can also use sudo mount /mnt/project (specifying only the mount point) as a shortcut since mount consults /etc/fstab when determining devices. NFS has one huge limitation. If the server goes down, then all file accesses to the network partition will hang, possibly for hours, before failing. The hang is due to network timeouts and retries. If your connection to the server is unstable, then don't use NFS.


Acting as an NFS Server
NFS servers export directories for use by NFS clients. This is a two-step process. First, you need to create a file called /etc/exports. This file contains a list of directories to export and clients that are permitted to access the directories. Special access permissions can also be specified such as ro for read-only, rw for read-write, and sync for synchronous writes.
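A hypothetical /etc/exports might look like the following (the paths and client names are only examples; adjust them for your own network):

/home/project  *.local.net(rw,sync)
/media/cdrom   192.168.1.0/255.255.255.0(ro)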

The NFS server will not start if /etc/exports is missing or contains no exported directories. Since the default file contains only a few comments, the server will not start out of the box. After you create your first entries, you will need to start the server. The easy way to start it is with the command sudo /etc/init.d/nfs-kernel-server start.

After modifying the /etc/exports file, you need to tell the NFS server to actually export the entries.
sudo exportfs -r # re-export all entries in /etc/exports

The exportfs command can also be used for other tasks:
• List the current export table-Run exportfs without any parameters.

• Export a specific directory once-This is useful if the export is not intended to be permanent (/etc/exports is really for permanent mounts). You will need to specify options, and the list of clients is specified before the directory. For example:

sudo exportfs -o ro,async '*.local.net:/media/cdrom'

• Un-export directory-If the entry is still listed in /etc/exports, then the removal is temporary; the mount will be re-exported the next time you reboot or restart the NFS server.

sudo exportfs -u '*.local.net:/media/cdrom'

You can export anything that is mounted. This includes CD-ROM drives, USB thumb drives, and even mounted NFS partitions from other servers! Although you cannot export single files or block devices, you can export the entire /dev directory (not that you would want to). NFS offers no security, encryption, or authentication. Furthermore, established NFS connections can be easily hijacked. NFS is fine for most internal, corporate networks and for use within your home, but don't use it to share files across the Internet.

Source of Information : Hacking Ubuntu Serious Hacks Mods and Customizations

Saturday, September 20, 2008

OSI Open Source Definition

For software developers, Linux provides a platform that lets them change the operating system as they like and get a wide range of help creating the applications they need. One of the watchdogs of the open source movement is the Open Source Initiative (www.opensource.org). This is how the OSI Web site describes open source software:

The basic idea behind open source is very simple: When programmers can read, redistribute, and modify the source code for a piece of software, the software evolves. People improve it, people adapt it, people fix bugs. And this can happen at a speed that, if one is used to the slow pace of conventional software development, seems astonishing.
We in the open source community have learned that this rapid evolutionary process produces better software than the traditional closed model, in which only a very few programmers can see the source and everybody else must blindly use an opaque block of bits.

While the primary goal of open source software is to make source code available, other goals are also defined by OSI in its Open Source Definition. Most of the following rules for acceptable open source licenses are to protect the freedom and integrity of the open source code:

• Free distribution—An open source license can’t require a fee from anyone who resells the software.

• Source code—The source code has to be included with the software and not be restricted from being redistributed.

• Derived works—The license must allow modification and redistribution of the code under the same terms.

• Integrity of the author’s source code—The license may require that those who use the source code remove the original project’s name or version if they change the source code.

• No discrimination against persons or groups—The license must allow all people to be equally eligible to use the source code.

• No discrimination against fields of endeavor—The license can’t restrict a project from using the source code because it is commercial or because it is associated with a field of endeavor that the software provider doesn’t like.

• Distribution of license—No additional license should be needed to use and redistribute the software.

• License must not be specific to a product—The license can’t restrict the source code to a particular software distribution.

• License must not restrict other software—The license can’t prevent someone from including the open source software on the same medium as non–open source software.

• License must be technology-neutral—The license can’t restrict methods in which the source code can be redistributed.


Open source licenses used by software development projects must meet these criteria to be accepted as open source software by OSI. More than 40 different licenses are accepted by OSI to be used to label software as “OSI Certified Open Source Software.” In addition to the GPL, other popular OSI-approved licenses include:

• LGPL—The GNU Lesser General Public License (LGPL) is a license that is often used for distributing libraries that other application programs depend upon.

• BSD—The Berkeley Software Distribution License allows redistribution of source code, with the requirement that the source code keep the BSD copyright notice and not use the names of contributors to endorse or promote derived software without written permission.

• MIT—The MIT license is like the BSD license, except that it doesn’t include the endorsement and promotion requirement.

• Mozilla—The Mozilla license covers use and redistribution of source code associated with the Mozilla Web browser and related software. It is a much longer license than the others just mentioned because it contains more definitions of how contributors and those reusing the source code should behave. This includes submitting a file of changes when submitting modifications and that those making their own additions to the code for redistribution should be aware of patent issues or other restrictions associated with their code.

The end result of open source code is software that has more flexibility to grow and fewer boundaries in how it can be used. Many believe that the fact that many people look over the source code for a project will result in higher quality software for everyone. As open source advocate Eric S. Raymond puts it in an often-quoted line, “Given enough eyeballs, all bugs are shallow.”

Source of Information : Linux Bible 2008 Edition

Friday, September 19, 2008

LINUS TORVALDS

Linus Benedict Torvalds was born in Helsinki, Finland, in 1969. A member of the minority Swedish-speaking population, he attended the University of Helsinki from 1988 to 1996, graduating with a master’s degree in Computer Science.

He started Linux not through a desire to give the world a first-class operating system but with other goals in mind. Its inspiration is in part due to Helsinki winters being so cold. Rather than leave his warm flat and trudge through the snow to the university’s campus in order to use its powerful minicomputer, he wanted to be able to connect to it from home! He also wanted to have a platform to use to experiment with the properties of the Intel 386, but that’s another story. Torvalds needed an operating system capable of such tasks. Linux was born.

It took Torvalds the better part of a year to come up with the very first version of Linux, during which he worked alone in a darkened room. In 1991, he announced his creation to the world, describing Linux as “just a hobby,” and saying it would never be big. It wouldn’t be until 1994 that it reached version 1.0.

In the early days, Torvalds’s creation was fairly primitive. He was passionate that it should be free for everyone to use, and so he released it under a software license that said that no one could ever sell it.

However, he quickly changed his mind, adopting the GNU General Public License (GPL). Torvalds was made wealthy by his creation, courtesy of the dot-com boom of the late 1990s, even though this was never his intention; he was driven by altruism. Nowadays, he lives in Portland, Oregon, with his wife and children, having moved to the United States from Finland in the late 1990s.

Initially, Torvalds worked for Transmeta, developing CPU architectures as well as overseeing kernel development, although this wasn’t part of his official work. He still programs the kernel, but currently he oversees the Open Source Development Labs, an organization created to encourage open source adoption in industry, which is also referred to as the home of Linux.

Source of Information : Apress Beginning Ubuntu Linux 3rd Edition

In the Beginning of Linux

Linux was created 16 years ago, in 1991. A period of 16 years is considered a lifetime in the world of computing, but the origin of Linux actually harks back even further, into the early days of modern computing in the mid-1970s. Linux was created by a Finnish national named Linus Torvalds. At the time, he was studying in Helsinki and had bought a desktop PC. His new computer needed an operating system. Torvalds’s operating system choices were limited: there were various versions of DOS and something called Minix. It was the latter that Torvalds decided to use.

Minix was a clone of the popular Unix operating system. Unix was used on huge computers in businesses and universities, including those at Torvalds’s university. Unix was created in the early 1970s and has evolved since then to become what many considered the cutting edge of computing. Unix brought to fruition a large number of computing concepts in use today and, many agree, got almost everything just right in terms of features and usability. Versions of Unix were available for smaller computers like Torvalds’s PC, but they were considered professional tools and were very expensive. This was in the early days of the home computer craze, and the only people who used IBM PCs were businesspeople and hobbyists.

Torvalds liked Unix because of its power, and he liked Minix because it ran on his computer. Minix was created by Andrew Tanenbaum, a professor of computing, to demonstrate the principles of operating system design to his students. Because Minix was also a learning tool, people could also view the source code of the program—the original listings that Tanenbaum had entered to create the software.

But Torvalds had a number of issues with Minix. Although it’s now available free of charge, at the time Minix was only available for a fee, though in many universities it was possible to obtain copies free of charge from professors who paid a group licensing fee. Nevertheless, the copyright issue meant that using Minix in the wider world was difficult, and this, along with a handful of technical issues, inspired Torvalds to create from scratch his own version of Unix, just as Tanenbaum had done with Minix.

From day one, Torvalds intended his creation to be shared among everyone who wanted to use it. He encouraged people to copy it and give it to friends. He didn’t charge any money for it, and he also made the source code freely available. The idea was that people could take the code and improve it.

This was a master stroke. Many people contacted Torvalds, offering to help out. Because they could see the program code, they realized he was onto a good thing. Soon, Torvalds wasn’t the only person developing Linux. He became the leader of a team that used the fledgling Internet to communicate and share improvements.

It’s important to note that when we talk here about Linux, we’re actually talking about the kernel—the central program that runs the PC hardware and keeps the computer ticking. This is all that Torvalds initially produced back in 1991. It was an impressive achievement, but needed a lot of extra add-on programs to take care of even the most basic tasks. Torvalds’s kernel needed additional software so that users could enter data, for example. It needed a way for users to be able to enter commands so they could manipulate files, such as deleting or copying them. And that’s before you even consider more complicated stuff like displaying graphics on the screen or printing documents. Linux itself didn’t offer these functions. It simply ran the computer’s hardware. Once it booted up, it expected to find other programs. If they weren’t present, then all you saw was a blank screen.

Linux is a pretty faithful clone of Unix. If you were to travel back in time 20 or 30 years, you would find that using Unix on those old mainframe computers, complete with their teletype interfaces, would be similar to using Linux on your home PC. Many of the fundamental concepts of Linux, such as the file system hierarchy and user permissions, are taken directly from Unix.

Most clones or implementations of Unix are named so that they end in an “x.” One story has it that Torvalds wanted to call his creation Freax, but a containing directory was accidentally renamed Linux on an Internet server. The name stuck.

The popular conception of Linux is that it is created by a few hobbyists who work on it in their spare time. This might have been true in the very early days. Nowadays, in addition to these “bedroom programmers,” Linux is programmed by hundreds of professionals around the world, many of whom are employed specifically for the task. Torvalds adds to the effort himself and also coordinates the work.

Source of Information : Apress Beginning Ubuntu Linux 3rd Edition

Thursday, September 18, 2008

Linux Terminal Emulation Graphics capabilities

The most important part of terminal emulation is how it displays information on the monitor. When you hear the phrase ‘‘text mode,’’ the last thing you’d think to worry about is graphics. However, even the most rudimentary dumb terminals supported some method of screen manipulation (such as clearing the screen and displaying text at a specific location on the screen).


Character sets
All terminals must display characters on the screen (otherwise, text mode would be pretty useless). The trick is in what characters to display, and what codes the Linux system needs to send to display them. A character set is a set of binary commands that the Linux system sends to a monitor to display characters. There are several character sets that are supported by various terminal emulation packages:

• ASCII The American Standard Code for Information Interchange. This character set contains the English characters stored using a 7-bit code and consists of 128 characters: English letters (both upper and lower case), numbers, special symbols, and control codes. This character set was adopted by the American National Standards Institute (ANSI) as US-ASCII. You will often see it referred to in terminal emulators as the ANSI character set.

• ISO-8859-1 (commonly called Latin-1) An extension of the ASCII character set developed by the International Organization for Standardization (ISO). It uses an 8-bit code to support the standard ASCII characters as well as special foreign language characters for most Western European languages. The Latin-1 character set is popular in multinational terminal emulation packages.

• ISO-8859-2 ISO character set that supports Eastern European language characters.

• ISO-8859-6 ISO character set that supports Arabic language characters.

• ISO-8859-7 ISO character set that supports Greek language characters.

• ISO-8859-8 ISO character set that supports Hebrew language characters.

• ISO-10646 (commonly called Unicode) ISO 2-byte character set that contains codes for most English and non-English languages. This single character set contains all of the codes defined in all of the ISO-8859-x series of character sets. The Unicode character set is quickly becoming popular among open source applications.

By far the most common character set in use today in English-speaking countries is the Latin-1 character set. The Unicode character set is becoming more popular, and may very well one day become the new standard in character sets. Most popular terminal emulators allow you to select which character set to use in the terminal emulation.


Control codes
Besides being able to display characters, terminals must have the ability to control special features on the monitor and keyboard, such as the cursor location on the screen. They accomplish this using a system of control codes. A control code is a special code not used in the character set, which signals the terminal to perform a special, nonprintable operation.

Common control code functions are the carriage return (return the cursor to the beginning of the line), line feed (put the cursor on the next horizontal row), horizontal tab (shift the cursor over a preset number of spaces), arrow keys (up, down, left, and right), and the page up/page down keys. While these codes mainly emulate features that control where the cursor is placed on the monitor, there are also several other codes, such as clearing the entire screen, and even a bell ring (emulating the old typewriter end-of-carriage bell).

Control codes were also used in controlling the communication features of dumb terminals. Dumb terminals were connected to the computer system via some type of communication channel, often a serial communication cable. Sometimes data needed to be controlled on the communication channel, so developers devised special control codes just for data communication purposes. While these codes aren’t necessarily required in modern terminal emulators, most support these codes to maintain compatibility. The most common codes in this category are the XON and XOFF codes, which start and stop data transmission to the terminal, respectively.


Block mode graphics
As dumb terminals became more popular, manufacturers started experimenting with rudimentary graphics capabilities. By far the most popular type of ‘‘graphical’’ dumb terminal used in the Unix world was the DEC VT series of terminals. The turning point for dumb terminals came with the release of the DEC VT100 in 1978. The DEC VT100 terminal was the first terminal to support the complete ANSI character set, including block mode graphic characters.

The ANSI character set contains codes that not only allowed monitors to display text but also rudimentary graphics symbols, such as boxes, lines, and blocks. By far one of the most popular dumb terminals used in Unix operations during the 1980s was the VT102, an upgraded version of the VT100. Most terminal emulation programs emulate the operation of the VT102 display, supporting all of the ANSI codes for creating block mode graphics.
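You can see a couple of these ANSI control sequences in action from any modern terminal emulator; \033 is the escape character that introduces each sequence:

$ printf '\033[2J'        # clear the entire screen
$ printf '\033[10;20H*'   # move the cursor to row 10, column 20 and print an asterisk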


Vector graphics
The Tektronix company produced a popular series of terminals that used a display method called vector graphics. Vector graphics deviated from the DEC method of block mode graphics by making all screen images (including characters) a series of line segments (vectors). The Tektronix 4010 terminal was the most popular graphical dumb terminal produced. Many terminal emulation packages still emulate its capabilities.

The 4010 terminal displays images by drawing a series of vectors using an electron beam, much like drawing with a pencil. Since vector graphics doesn’t use dots to create lines, it has the ability to draw geometric shapes using higher precision than most dot-oriented graphics terminals. This was a popular feature among mathematicians and scientists.

Terminal emulators use software to emulate the vector graphics drawing capabilities of the Tektronix 4010 terminals. This is still a popular feature for people who need precise graphical drawings, or those who still run applications that used the vector graphics routines to draw complicated charts and diagrams.


Display buffering
A key to graphics displays is the ability of the terminal to buffer data. Buffering data requires having additional internal memory within the terminal itself to store characters not currently being displayed on the monitor.

The DEC VT series of terminals utilized two types of data buffering:

• Buffering data as it scrolled off of the main display window (called a history)

• Buffering a completely separate display window (called an alternate screen)

The first type of buffering is known as a scroll region. The scroll region is the amount of memory the terminal has that enables it to ‘‘remember’’ data as it scrolls off of the screen. A standard DEC VT102 terminal contained a viewing area for 25 lines of characters. As the terminal displays a new line of characters, the previous line is scrolled upward. When the terminal reaches the bottom line of the display, the next line causes the top line to scroll off the display.

The internal memory in the VT102 terminal allowed it to save the last 64 lines that had scrolled off of the display. Users had the ability to lock the current screen display and use arrow keys to scroll backward through the previous lines that had ‘‘scrolled off’’ of the display. Terminal emulation packages allow you to use either a side scrollbar or a mouse scroll button to scroll through the saved data without having to lock the display. Of course, for full emulation compatibility, most terminal emulation packages also allow you to lock the display and use arrow and page up/page down to scroll through the saved data.

The second type of buffering is known as an alternative screen. Normally, the terminal writes data directly to the normal display area on the monitor. A method was developed to crudely implement animation by using two screen areas to store data. Control codes were used to signal the terminal to write data to the alternative screen instead of the current display screen. That data was held in memory. Another control code would signal the terminal to switch the monitor display between the normal screen data and the data contained in the alternative screen almost instantaneously. By storing successive data pages in the alternative screen area, then displaying it, you could crudely simulate moving graphics. Terminals that emulate the VT100 series of terminals have the ability to support the alternative screen method.


Color
Even back in the black-and-white (or green) dumb terminal days, programmers were experimenting with different ways to present data. Most terminals supported special control codes to produce the following types of special text:

• Bold characters
• Underline characters
• Reverse video (black characters on white background)
• Blinking
• Combinations of all of the above features

Back in the old days, if you wanted to get someone’s attention, you used bold, blinking, reverse video text. Now there’s something that could hurt your eyes! As color terminals became available, programmers added special control codes to display text in various colors and shades. The ANSI character set includes control codes for specifying specific colors for both foreground text and the background color displayed on the monitor. Most terminal emulators support the ANSI color control codes.
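For example, most emulators will honor the standard ANSI color and attribute codes directly from the shell:

$ printf '\033[1;31mbold red text\033[0m\n'   # 1 = bold, 31 = red foreground, 0 = reset attributes
$ printf '\033[7mreverse video\033[0m\n'      # 7 = reverse video
$ printf '\033[33;44myellow on blue\033[0m\n' # 33 = yellow foreground, 44 = blue background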

Source of Information : Wiley Linux Command Line and Shell Scripting Bible

Wednesday, September 17, 2008

Tracking Linux CPU Performance Statistics

Each system-wide Linux performance tool provides different ways to extract similar statistics. Although no tool displays all the statistics, some of the tools display the same statistics.


Run Queue Statistics
In Linux, a process can be either runnable or blocked waiting for an event to complete. A blocked process may be waiting for data from an I/O device or the results of a system call. If a process is runnable, that means that it is competing for the CPU time with the other processes that are also runnable. A runnable process is not necessarily using the CPU, but when the Linux scheduler is deciding which process to run next, it picks from the list of runnable processes. When these processes are runnable, but waiting to use the processor, they form a line called the run queue. The longer the run queue, the more processes wait in line.

The performance tools commonly show the number of processes that are runnable and the number of processes that are blocked waiting for I/O. Another common system statistic is the load average. The load on a system is the total number of running and runnable processes. For example, if two processes were running and three were available to run, the system's load would be five. The load average is the average load over a given period of time. Typically, the load average is taken over 1 minute, 5 minutes, and 15 minutes. This enables you to see how the load changes over time.
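Two quick ways to look at the run queue and load average on a running system:

$ uptime            # the last three numbers are the 1-, 5-, and 15-minute load averages
$ cat /proc/loadavg # the same averages, plus the runnable/total process counts and the last PID used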


Context Switches
Most modern processors can run only one process or thread at a time. Although some processors, such as hyperthreaded processors, can actually run more than one process simultaneously, Linux treats them as multiple single-threaded processors. To create the illusion that a given single processor runs multiple tasks simultaneously, the Linux kernel constantly switches between different processes. The switch between different processes is called a context switch, because when it happens, the CPU saves all the context information from the old process and retrieves all the context information for the new process. The context contains a large amount of information that Linux tracks for each process, including, among others, which instruction the process is executing, which memory it has allocated, and which files the process has open. Switching these contexts can involve moving a large amount of information, and a context switch can be quite expensive. It is a good idea to minimize the number of context switches if possible.

To avoid context switches, it is important to know how they can happen. First, context switches can result from kernel scheduling. To guarantee that each process receives a fair share of processor time, the kernel periodically interrupts the running process and, if appropriate, the kernel scheduler decides to start another process rather than let the current process continue executing. It is possible that your system will context switch every time this periodic interrupt or timer occurs. The number of timer interrupts per second varies per architecture and kernel version. One easy way to check how often the interrupt fires is to use the /proc/interrupts file to determine the number of interrupts that have occurred over a known amount of time.
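For example, you might sample the timer line of /proc/interrupts twice and divide the difference by the elapsed time, or simply watch the context-switch rate with vmstat:

$ grep -i timer /proc/interrupts   # cumulative timer interrupts per CPU; sample it twice to compute a rate
$ vmstat 5                         # the in column is interrupts per second and cs is context switches per second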


Interrupts
In addition, the processor periodically receives interrupts from hardware devices. These interrupts are usually triggered by a device that has an event that needs to be handled by the kernel. For example, if a disk controller has just finished retrieving a block from the drive and is ready for the kernel to use it, the disk controller may trigger an interrupt. For each interrupt the kernel receives, an interrupt handler is run if it has been registered for that interrupt; otherwise, the interrupt is ignored. These interrupt handlers run at a very high priority in the system and typically execute very quickly. Sometimes, the interrupt handler has work that needs to be done, but does not require the high priority, so it launches a "bottom half," which is also known as a soft-interrupt handler. If there are a high number of interrupts, the kernel can spend a large amount of time servicing these interrupts. The file /proc/interrupts can be examined to show which interrupts are firing on which CPUs.


CPU Utilization
CPU utilization is a straightforward concept. At any given time, the CPU can be doing one of seven things. First, it can be idle, which means that the processor is not actually doing any work and is waiting for something to do. Second, the CPU can be running user code, which is specified as "user" time. Third, the CPU can be executing code in the Linux kernel on behalf of the application code. This is "system" time. Fourth, the CPU can be executing user code that has been "nice"ed or set to run at a lower priority than normal processes. Fifth, the CPU can be in iowait, which means the system is spending its time waiting for I/O (such as disk or network) to complete. Sixth, the CPU can be in irq state, which means it is in high-priority kernel code handling a hardware interrupt. Finally, the CPU can be in softirq mode, which means it is executing kernel code that was also triggered by an interrupt, but it is running at a lower priority (the bottom-half code). This can happen when a device interrupt occurs, but the kernel needs to do some work with it before it is ready to hand it over to user space (for example, with a network packet).

Most performance tools report these values as a percentage of the total CPU time. Each can range from 0 percent to 100 percent, but together they always total 100 percent. A system with a high "system" percentage is spending most of its time in the kernel. Tools such as oprofile can help determine where this time is being spent. A system that has a high "user" time spends most of its time running applications. If a system is spending most of its time in iowait when it should be doing work, it is most likely waiting for I/O from a device. The cause of the slowdown may be a disk, a network card, or something else.
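
Tools such as top, vmstat, and mpstat report this breakdown directly. As an illustrative (not definitive) example, a single batch-mode run of top prints a summary line in which the categories above appear as us, sy, ni, id, wa, hi, and si (newer versions may also show an st column for time stolen by a hypervisor):

$ top -b -n 1 | grep Cpu
Cpu(s):  7.2%us,  2.1%sy,  0.0%ni, 88.5%id,  1.9%wa,  0.1%hi,  0.2%si,  0.0%st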

Source of Information : Optimizing Linux Performance

Tuesday, September 16, 2008

SUSE Linux Family Products - Personal category

Novell divides its SUSE Linux products into Enterprise and Personal. This is essentially the distinction between the versions that are sold with a paid-for software maintenance system and those that are not. The Personal category now consists of just one product, SUSE Linux Professional. (In the past there was a cut-down version of SUSE Linux Professional known as SUSE Linux Personal; with the release of 9.3 this product was dropped. Do not confuse Novell’s customer category Personal with SUSE’s former product SUSE Linux Personal.)

SUSE Linux Professional
SUSE Linux Professional now contains versions for both the x86 (Intel-compatible PC 32-bit) and x86-64 (Athlon 64, Opteron, and Intel EM64T) platforms. It consists of five CDs and two double-layer DVDs. The five CDs form an installation set for x86 machines. One of the DVDs is an installation DVD for both x86 and x86-64; the other DVD provides the source packages. The Professional version contains a wide range of software, including desktop and server software and development tools. It actually contains considerably more packages than the Enterprise Server versions, but it should be regarded as essentially an unsupported version; limited installation support is, however, included in the price of the boxed set. A new version of SUSE Linux Professional appears twice a year.

A Live DVD version (it’s been a DVD since version 9.2; previously this was a Live CD) is released with each version. This is available by FTP and can be burned to disk. This version cannot be installed, but booting a PC from this DVD provides a live Linux system that can be used to evaluate SUSE Linux without installing it or, if you want, as a way of carrying a Linux system around with you (perhaps with a USB stick to hold your files).

Traditionally SUSE did not provide ISO images of the distribution for download. This changed in the summer of 2005 when the full ISO images for version 9.3 were provided in this way. The Professional version has always been made available in an FTP version that allows for network installation, either directly from the FTP site or using a local mirror.

Recently, SUSE has also begun to offer a DVD ISO image (by FTP) of a cut-down installable version of the Professional distribution. This should be thought of as an evaluation edition, or as a replacement for the old Personal edition. This version is made available rather later in the product cycle than the FTP version and is known as the FTP DVD version.

openSUSE
Although the software concerned was almost all open source and freely distributable, the development of SUSE Linux was traditionally a closed process. Beta testing was done internally by the company with the help of volunteers from partner companies and members of the public, who carried out the testing under non-disclosure agreements.

When the first beta version of 10.0 was ready in August 2005, the beta testing process and the development of SUSE were opened up with the start of the openSUSE project. This is intended to create a community around the development of SUSE Linux and to make the cutting-edge version of SUSE an entirely free one. In some ways the concept is similar to the Fedora project, which plays a similar role in the development of Red Hat; however, openSUSE aims to draw in wider, genuine participation by outside users and developers and has an interest in desktop usability and the needs of end users.

Future versions of SUSE Linux (at least in the short term) will be available both from openSUSE and as boxed versions that will include the traditional manuals and additional non-free software (such as Sun Java, Adobe Acrobat Reader, proprietary drivers of certain kinds, and so on).

Source of Information : SUSE Linux 10 Bible

Monday, September 15, 2008

Booting from a USB Drive in Ubuntu

USB drives can be used as bootable devices. If your computer supports booting from a USB drive, then this is a great option for creating a portable operating system, building an emergency recovery disk, or installing the OS on other computers.

Although most systems today support USB drives, the ability to boot from a USB thumb drive is not consistent. Even if you create a bootable USB drive, your BIOS may still prevent you from booting from it. It seems like every computer has a different way to change BIOS settings. Generally, you power on the computer and press a key before the operating system boots. The key may be F1, F2, F10, Del, Esc, or some other key or combination of keys. It all depends on your computer's BIOS. When you get into the BIOS, there is usually a set of menus, including one for the boot order. If you can boot from a USB device, this is where you will set it. However, every computer is different, and you may need to have the USB drive plugged in when you power on before seeing any options for booting from it.


Different USB Devices
Even if your computer supports booting from a USB device, it may not support all of the different USB configurations. In general, thumb drives can be configured one of three ways:

Small USB floppy drives - Thumb drives configured as USB floppy devices (that is, no partitions) with a capacity of 256 MB or less are widely supported. If your computer cannot boot this configuration, then the chance of your computer booting any configuration is very slim.

Large USB floppy drives - These are USB floppy devices with capacities greater than 256 MB. My own tests used two different 1 GB thumb drives and a 250 GB USB hard drive.

USB hard drives - In my experience, this is the least-supported bootable configuration. I only have one computer that was able to boot from a partitioned USB hard drive.


Changing between a USB hard drive and a USB floppy drive is as simple as formatting the base device or using fdisk and formatting a partition. However, converting a large USB floppy device into a small USB floppy device is less direct; the following steps show one way to do it.

1. Use dd to create a file that is as big as the drive you want to create. For example, to create a 32 MB USB drive, start with a 32 MB file:

dd if=/dev/zero of=usbfloppy.img bs=32M count=1

2. Treat this file as the base device. For example, you can format it and mount it.

mkfs usbfloppy.img
sudo mkdir /mnt/usb
sudo mount -o loop usbfloppy.img /mnt/usb


3. When you are all done configuring the USB floppy drive image, unmount it and copy it to the real USB device (for example, /dev/sda). This will make the real USB device appear as a smaller USB floppy device.

sudo umount /mnt/usb
sudo dd if=usbfloppy.img of=/dev/sda

Source of Information : Hacking Ubuntu Serious Hacks Mods and Customizations

Sunday, September 14, 2008

Setting Up Apache in Ubuntu

From a technical perspective, you could say that a web server is just a special kind of file server: all it does is offer files that are stored in a dedicated directory structure. The root of this structure is called the document root, and the format in which the files are offered is HTML, the Hypertext Markup Language. But a web server can provide more than just HTML files. In fact, the web server can serve just about anything, as long as it is specified in the HTML file. Therefore, a web server is a very good source for streaming audio and video, accessing databases, displaying animations, showing photos, and much more.

Apart from the web server where the content is stored, the client also has to use a specific protocol to access this content: HTTP (the Hypertext Transfer Protocol). Typically, a client uses a web browser to generate HTTP commands that retrieve content, in the form of HTML and other files, from a web server.

You’ll likely encounter two different versions of Apache web server. The most recent version is 2.x, and this is the one installed by default on Ubuntu Server. You may, however, encounter environments that still use the earlier 1.3. This often happens if, for instance, custom scripts have been developed for use with 1.3, and those scripts aren’t compatible with 2.x.


Apache Components
Apache is a modular web server, which means that the core server (whose role is essentially to serve up HTML documents) can be extended using a variety of optional modules. For example, the libapache2-mod-php5 module allows your Apache web server to work with scripts written in PHP 5. Likewise, many other modules are available for Apache. To give you an initial impression, I’ll list some of the most useful modules:

• libapache2-mod-auth-mysqld: This module tells Apache how to handle user authentication against a MySQL database.

• libapache2-mod-auth-pam: This module instructs Apache how to authenticate users, using the Linux PAM mechanism.

• libapache-mod-frontpage: This module instructs Apache how to handle web pages using Microsoft FrontPage extensions.

• libapache2-mod-mono: This module tells Apache how to interpret ASP.NET code.

This is only a short and incomplete list of the modules you can use with the Apache web server: http://modules.apache.org currently lists more than 450 modules. It’s important that you determine exactly which modules you need for your server so that you can extend its functionality accordingly. Now, let’s move on to the configuration of the Apache web server itself.
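
As a hedged example of how such a module typically ends up on an Ubuntu system, you install the package with apt and enable it with Apache's a2enmod helper. The PHP 5 module mentioned above is used here, but the exact package and module names can vary between releases:

$ sudo apt-get install libapache2-mod-php5
$ sudo a2enmod php5
$ sudo /etc/init.d/apache2 restart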


Starting, Stopping, and Testing the Apache Web Server
Like almost all other services you can use on Ubuntu Server, the Apache web server is not installed automatically. The two packages that are available to install Apache are the apache package and the apache2 package. At present, apache2 is the more common choice, and only in specific situations does it make sense to use the older apache package. To check whether Apache has already been installed, use dpkg -l | grep apache. If this command doesn’t show an Apache server, install it using apt-get install apache2.

The most important part of the Apache web server is the HTTP daemon (httpd) process. This process is started from the script /etc/init.d/apache2; to run it from the command line, use /etc/init.d/apache2 start. If this command finishes without any errors, your web server is up and running, which you can check with the ps aux | grep apache command. This command shows that different instances of the Apache web server are ready and waiting for incoming connections.
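
Collected in one place, the commands from the last two paragraphs look roughly like this on a default Ubuntu Server installation (administrative commands are run through sudo):

$ dpkg -l | grep apache
$ sudo apt-get install apache2
$ sudo /etc/init.d/apache2 start
$ ps aux | grep apache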

After starting the Apache web server, you can test its availability in several ways. The best way, however, is to just try to connect, because, after being installed, a default web server is listening for incoming requests. So wait no longer: launch a browser and connect to HTTP port 80 on your local host. It should show you a page; it doesn’t look very nice, but that’s only because you haven’t configured anything yet.
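
If you are working on a server console without a graphical browser, you can run the same test from the command line. This sketch assumes wget is available (curl or a text browser such as w3m would work just as well); it simply prints the beginning of the default page:

$ wget -q -O - http://localhost/ | head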

Source of Information : Apress Beginning Ubuntu Server Administration From Novice To Professional

The Problems with Windows

The world’s most popular operating system is Windows, which is made by the Microsoft Corporation. Linux has no links with Windows at all. Microsoft doesn’t contribute anything to Linux and, in fact, is rather hostile toward it, because it threatens Microsoft’s market dominance. This means that installing Linux can give you an entirely Microsoft-free PC. How enticing does that sound?

Windows is used on 91 percent of the world’s desktop computers. In other words, it must be doing a good job for it to be so popular, right?

Let’s face facts. On many levels, Windows is a great operating system, and since the release of Windows XP in particular, Microsoft has cleaned up its act. Windows XP does a much better job compared to previous versions of Windows (and Vista makes even more improvements). But the situation is far from perfect. Windows XP is notoriously insecure and virtually every day a new security hole is uncovered. The United States Computer Emergency Readiness Team (www.us-cert.gov) reported 812 security vulnerabilities for Microsoft Windows during 2005. That’s 15 vulnerabilities per week! In June 2005, the computer security company Sophos (www.sophos.com) advertised that its Windows antivirus program defended against over 103,000 viruses!

This has led to an entire industry that creates antivirus programs, which are additional pieces of software you have to install once your computer is up and running, just so that it can operate without the risk of data loss or data theft.

There have been a couple of viruses for Linux, but they’re no longer “in the wild” (that is, they are no longer infecting PCs). This is because the security holes they exploited were quickly patched, causing the viruses to die out. This happened because the majority of Linux users update their systems regularly, so any security holes that viruses might exploit are patched promptly. Compare that to Windows, where most users aren’t even aware they can update their systems, even when Microsoft gets around to issuing a patch (which has been known to take months).

So is Linux the solution to these problems? Most would agree that it’s a step in the right direction, at the very least. Most Linux users don’t install antivirus programs, because there are virtually no Linux-specific viruses. As with all software, security holes are occasionally discovered in Linux, but the way it is built means exploiting those holes is much more difficult.

There’s also the fact that Linux encourages you to take control of your computer, as opposed to treating it like a magical box. As soon as you install Linux, you become a power user. Every aspect of your PC is under your control, unlike with Windows. This means fixing problems is a lot easier, and optimizing your system becomes part and parcel of the user experience.

Source of Information : Apress Beginning Ubuntu Linux 3rd Edition

Wednesday, September 10, 2008

Changing Your User Information in Ubuntu Linux

Linux users are assigned a name, known as a username, by the root operator. One method of assigning usernames is to use one's first initial and last name in lowercase; for example, Bernice Hudson would have a username of bhudson. Each user must also have a password, which is used with the username either at a graphical or text-based login.

Older versions of Linux operating systems limited the length of usernames to 8 characters. The current version of Ubuntu limits usernames to 32 characters. Good passwords should be a minimum of 8 characters long and contain uppercase and lowercase letters, along with numbers. Random passwords for users can be generated using the mkpasswd command. Good passwords are not birthdays, anniversaries, your pet's name, the name of your significant other, or the model of your first car!

You cannot change your username, but you can change your user information, such as address, phone, and so on. You make these changes using the chfn (change finger information) command. This command modifies your entry in the system password file, /etc/passwd, which is used by the finger command to display information about a system's user. For example, type chfn at the command line and press Enter:

$ chfn
Password:
Changing the user information for andrew
Enter the new value, or press ENTER for the default
Full Name: Andrew Hudson
Room Number [None]: 17
Work Phone [01225445566]: 01225112233
Home Phone [01225112233]: 01225445566

You are led through a series of prompts to enter new or updated information. Note that the chfn command will not let you use any commas when entering information. You can verify this information in a couple of ways, for example, by looking at the contents of /etc/passwd:

$ grep andrew /etc/passwd
andrew:x:1000:1000:Andrew Hudson,17,01225112233,01225445566:\
/home/andrew:/bin/bash

You also can verify the updated user information by using the finger command:

$ finger andrew
Login: andrew Name: Andrew Hudson
Directory: /home/andrew Shell: /bin/bash
Office: 17, +0-122-511-2233 Home Phone: +0-122-544-5566
On since Tue May 30 20:54 (BST) on :0 (messages off)
On since Tue May 30 20:55 (BST) on pts/0 from :1.0
No mail.
No Plan.

Source of Information : Ubuntu Unleashed

Tuesday, September 9, 2008

What is So Great About Linux

Leveraging work done on UNIX and GNU projects helped to get Linux up and running quickly. The culture of sharing in the open source community and adoption of a wide array of tools for communicating on the Internet have helped Linux move quickly through infancy and adolescence to become a mature operating system.

The simple commitment to share code is probably the single most powerful contributor to the growth of the open source software movement in general, and Linux in particular. That commitment has also encouraged involvement from the kind of people who are willing to contribute back to that community in all kinds of ways. The willingness of Linus to incorporate code from others in the Linux kernel has also been critical to the success of Linux. The following sections characterize Linux and the communities that support it.

Features in Linux
If you have not used Linux before, you should expect a few things to be different from using other operating systems. Here is a brief list of some Linux features that you might find cool:

No constant rebooting—Uptime is valued as a matter of pride (remember, Linux and other UNIX systems are most often used as servers, which are expected to, and do, stay up 24/7/365). After the original installation, you can install or remove most software without having to reboot your computer.

Start/stop services without interrupting others—You can start and stop individual services (such as Web, file, and e-mail services) without rebooting or even interrupting the work of any other users or features of the computer. In other words, you should not have to reboot your computer every time someone sneezes. (Installing a new kernel is just about the only reason you need to reboot.)

Portable software—You can usually change to another Linux, UNIX, or BSD system and still use the exact same software! Most open source software projects were created to run on any UNIX-like system and many also run on Windows systems, if you need them to. If it won’t run where you want it to, chances are that you, or someone you hire, can port it to the computer you want. (Porting refers to modifying an application or driver so it works in a different computer architecture or operating system.)

Downloadable applications—If the applications you want are not delivered with your version of Linux, you can often download and install them with a single command, using tools such as apt, urpmi, and yum (see the short example after this list).

No settings hidden in code or registries—Once you learn your way around Linux, you’ll find that (given the right permissions on your computer) most configuration is done in plain text files that are easy to find and change. Because Linux is based on openness, nothing is hidden from you. Even the source code, for GPL-covered software, is available for your review.

Mature desktop—The X Window System (providing the framework for your Linux desktop) has been around longer than Microsoft Windows. The KDE and GNOME desktop environments provide graphical interfaces (windows, menus, icons, and so forth) that rival those on Microsoft systems. Ease-of-use problems with Linux systems are rapidly evaporating.

Freedom—Linux, in its most basic form, has no corporate agenda or bottom line to meet. You are free to choose the Linux distribution that suits you, look at the code that runs the system, add and remove any software you like, and make your computer do what you want it to do. Linux runs on everything from supercomputers to cell phones and everything in between. Many countries are rediscovering their freedom of choice and making the switch at government and educational levels. France, Germany, Korea, and India are just a few that have taken notice of Linux. The list continues to grow.
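
As a small, hedged illustration of the "Downloadable applications" point above, a single apt command on Ubuntu or Debian fetches a program and its dependencies in one go (the package name here is only an example):

$ sudo apt-get install inkscape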

There are some aspects of Linux that make it hard for some new users to get started. One is that Linux is typically set up to be secure by default, so you need to adjust to using an administrative login (root) to make most changes that affect the whole computer system. Although this can be a bit inconvenient, trust me, it makes your computer safer than just letting anyone do anything. This model was built around a true multiuser system. You can set up logins for everyone who uses your Linux computer, and you (and others) can customize your environment however you see fit without affecting anyone else’s settings.

For the same reason, many services are off by default, so you need to turn them on and do at least minimal configuration to get them going. For someone who is used to Windows, Linux can be difficult just because it is different from Windows.

Source of Information : Linux Bible 2008 Edition

Monday, September 8, 2008

Linux Performance Hunting Tips - Take Copious Notes (Save Everything)

Probably the most important thing that you can do when investigating a performance problem is to record every output that you see, every command that you execute, and every piece of information that you research. A well-organized set of notes allows you to test a theory about the cause of a performance problem by simply looking at your notes rather than rerunning tests. This saves a huge amount of time. Write it down to create a permanent record.

When starting a performance investigation, create a directory for the investigation, open a new "Notes" file in GNU emacs, and start to record information about the system. Then store performance results in this directory and interesting and related pieces of information in the Notes file. I suggest that you add the following to your performance investigation file and directory:

Record the hardware/software configuration— This involves recording information about the hardware configuration (amount of memory and type of CPU, network, and disk subsystem) as well as the software environment (the OS and software versions and the relevant configuration files). This information may seem easy to reproduce later, but when tracking down a problem, you may significantly change a system's configuration. Careful and meticulous notes can be used to figure out the system's configuration during a particular test.

Example: Save the output of cat /proc/pci, dmesg, and uname -a for each test (see the sketch after this list).

Save and organize performance results— It can be valuable to review performance results a long time after you run them. Record the results of a test with the configuration of the system. This allows you to compare how different configurations affect the performance results. It would be possible just to rerun the test if needed, but usually testing a configuration is a time-consuming process. It is more efficient just to keep your notes well organized and avoid repeating work.

Write down the command-line invocations— As you run performance tools, you will often create complicated and complex command lines that measure the exact areas of the system that interest you. If you want to rerun a test, or run the same test on a different application, reproducing these command lines can be annoying and hard to do right on the first try. It is better just to record exactly what you typed. You can then reproduce the exact command line for a future test, and when reviewing past results, you can also see exactly what you measured.

Record research information and URLs— As you investigate a performance problem, it is important to record relevant information you found on the Internet, through e-mail, or through personal interactions. If you find a Web site that seems relevant, cut and paste the text into your notes. (Web sites can disappear.) However, also save the URL, because you might need to review the page later or the page may point to information that becomes important later in an investigation.
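
As a minimal sketch of this kind of record keeping (the directory and file names are only examples), the start of an investigation might look like this:

$ mkdir ~/perf-investigation && cd ~/perf-investigation
$ uname -a > config-uname.txt
$ dmesg > config-dmesg.txt
$ cat /proc/pci > config-pci.txt
$ emacs Notes &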


As you collect and record all this information, you may wonder why it is worth the effort. Some information may seem useless or misleading now, but it might be useful in the future. (A good performance investigation is like a good detective show: Although the clues are confusing at first, everything becomes clear in the end.) Keep the following in mind as you investigate a problem:

The implications of results may be fuzzy— It is not always clear what a performance tool is telling you. Sometimes, you need more information to understand the implications of a particular result. At a later point, you might look back at seemingly useless test results in a new light. The old information may actually disprove or prove a particular theory about the nature of the performance problem.

All information is useful information (which is why you save it)— It might not be immediately clear why you save information about what tests you have run or the configuration of the system. It can prove immensely useful when you try to explain to a developer or manager why a system is performing poorly. By recording and organizing everything you have seen during your investigation, you have proof to support a particular theory and a large base of test results to prove or disprove other theories.

Periodically reviewing your notes can provide new insights— When you have a big pool of information about your performance problem, review it periodically. Taking a fresh look allows you to concentrate on the results, rather than the testing. When many test results are aggregated and reviewed at the same time, the cause of the problem may present itself. Looking back at the data you have collected allows you to test theories without actually running any tests.

Although it is inevitable that you will have to redo some work as you investigate a problem, the less time that you spend redoing old work, the more efficient you will be. If you take copious notes and have a method to record the information as you discover it, you can rely on the work that you have already done and avoid rerunning tests and redoing research. To save yourself time and frustration, keep reliable and consistent notes.

For example, if you investigate a performance problem and eventually determine the cause to be a piece of hardware (slow memory, slow CPU, and so on), you will probably want to test this theory by upgrading that slow hardware and rerunning the test. It often takes a while to get new hardware, and a large amount of time might pass before you can rerun your test. When you are finally able, you want to be able to run an identical test on the new and old hardware. If you have saved your old test invocations and your test results, you will know immediately how to configure the test for the new hardware, and will be able to compare the new results with the old results that you have stored.

Source of Information : Optimizing Linux® Performance

Saturday, September 6, 2008

Using Keyboard Shortcuts in Ubuntu

Your other good friends when using BASH are the Ctrl and Alt keys. These keys provide shortcuts to vital command-line shell functions. They also let you work more efficiently when typing by providing what most programs call keyboard shortcuts.


Shortcuts for Working in BASH
The listings below show the most common keyboard shortcuts in BASH (there are many more; see BASH’s man page for details). If you’ve explored the Emacs text editor, you might find these shortcuts familiar. Such keyboard shortcuts are largely the same across many of the software packages that originate from the GNU Project. Often, you’ll find an option within many Ubuntu software packages that lets you use Emacs-style navigation, in which case these keyboard shortcuts will most likely work equally well.


Navigation shortcuts (key : action)
Left/right cursor key : Move left/right in text
Ctrl+A : Move to beginning of line
Ctrl+E : Move to end of line
Ctrl+right arrow : Move forward one word
Ctrl+left arrow : Move left one word

Editing shortcuts (key : action)
Ctrl+U : Delete everything behind cursor to start of line
Ctrl+K : Delete from cursor to end of line
Ctrl+W : Delete from cursor to beginning of word
Alt+D : Delete from cursor to end of word
Ctrl+T : Transpose characters on left and right of cursor
Alt+T : Transpose words on left and right of cursor

Miscellaneous shortcuts (key : action)
Ctrl+L : Clear screen (everything above current line)
Ctrl+U : Undo everything since last command
Alt+R : Undo changes made to the line
Ctrl+Y : Undo deletion of word or line caused by using Ctrl+K, Ctrl+W, and so on
Alt+L : Lowercase current word (from the cursor to end of word)

Source of Information : Beginning Ubuntu Linux - From Novice To Professional

Friday, September 5, 2008

Ubuntu Linux Disk Quotas

On large systems with many users, you need to control the amount of disk space a user has access to. Disk quotas are designed specifically for this purpose. Quotas, managed per partition, can be set for both individual users and groups; the quota for a group need not be as large as the aggregate of the quotas for the individual users in that group.
When files are created, both a user and a group own them; ownership of the files is always part of the metadata about the files. This makes quotas based on both users and groups easy to manage.

To manage disk quotas, you must have the quota and quotatool packages installed on your system. Quota management with Ubuntu is not enabled by default and has traditionally been enabled and configured manually by system administrators. Sysadmins use the family of quota commands, such as quotacheck to initialize the quota database files, edquota to set and edit user quotas, setquota to configure disk quotas, and quotaon or quotaoff to control the service. (Other utilities include warnquota for automatically sending mail to users over their disk space usage limit.)
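
On Ubuntu, the two packages mentioned above can be pulled in with a single apt command; a quick sketch:

$ sudo apt-get install quota quotatool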


Implementing Quotas
To reiterate, quotas might not be enabled by default, even if the quota software package is installed on your system. When quotas are installed and enabled, you can see which partitions have either user quotas, group quotas, or both by looking at the fourth field in the /etc/fstab file. For example, one line in /etc/fstab shows that quotas are enabled for the /home partition:

/dev/hda5 /home ext3 defaults,usrquota,grpquota 1 1

The root of the partition with quotas enabled will have the file quota.user or quota.group in it (or both files, if both types of quotas are enabled), and the files will contain the actual quotas. The permissions of these files should be 600 so that users cannot read or write to them. (Otherwise, users would change them to allow ample space for their music files and Internet art collections.) To initialize disk quotas, the partitions must be remounted. This is easily accomplished with the following:

$ sudo mount -o ro,remount partition_to_be_remounted mount_point

The underlying console tools (complete with man pages) are

• quotaon, quotaoff—Toggles quotas on a partition.

• repquota—A summary status report on users and groups.

• quotacheck—Updates the status of quotas (compares new and old tables of disk usage); it is run after fsck.

• edquota—A basic quota management command.


Manually Configuring Quotas
Manual configuration of quotas involves changing entries in your system’s file system table, /etc/fstab, to add the usrquota mount option to the desired portion of your file system. As an example in a simple file system, quota management can be enabled like this:

LABEL=/ / ext3 defaults,usrquota 1 1

Group-level quotas can also be enabled by using the grpquota option. As the root operator, you must then create a file (using our example of creating user quotas) named quota.user in the designated portion of the file system, like so:

$ sudo touch /quota.user

You should then turn on the use of quotas using the quotaon command:

$ sudo quotaon -av

You can then edit user quotas with the edquota command to set hard and soft limits on file system use. The default system editor (vi unless you change your EDITOR environment variable) will be launched when editing a user’s quota. Any user can find out what their quotas are with

$ quota -v
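
As a hedged illustration, running edquota as root for a particular user opens the editor on an entry similar to the following (the username, device, and numbers are only examples); block counts are in 1 KB units, and a limit of 0 means no limit is enforced:

$ sudo edquota -u andrew
Disk quotas for user andrew (uid 1000):
  Filesystem                   blocks       soft       hard     inodes     soft     hard
  /dev/hda5                     24816      50000      55000        356        0        0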

Source of Information : Ubuntu 7.10 Linux Unleashed

Thursday, September 4, 2008

Managing Bash with Key Sequences in Ubuntu

Sometimes, you’ll enter a command from the Bash command line and either nothing happens at all or else something totally unexpected happens. In such an event, it’s good to know that some key sequences are available to perform basic Bash management tasks. Here are some of the most useful key sequences.

Ctrl+C: Use this key sequence to quit a command that is not responding (or simply takes too long to complete). This key sequence works in most scenarios where the command is operational and producing output to the screen. In general, Ctrl+C is also a good choice if you absolutely don’t have a clue as to what’s happening and you just want to terminate the command that’s running in your shell. Note that pressing Ctrl+C at an empty prompt does not close the shell; it simply cancels the current input line and gives you a fresh prompt.

Ctrl+D: This key sequence is used to send the “end of file” (EOF) character to a command. Use this when the command is waiting for more input, which is indicated by the secondary prompt (>). You can also use this key sequence to close a shell session.

Ctrl+R: This is the reversed search feature. It will open the “reverse-i-search” prompt, which helps you locate commands that you used previously. The Ctrl+R key sequence searches the Bash history, and the feature is especially useful when working with longer commands. Type a few characters of the command and you will see the most recent command from your history that contains those characters.

Ctrl+Z: Some people use Ctrl+Z to stop a command that is running interactively on the console (in the foreground). Although it does stop the command, it does not terminate it. A command that is stopped with Ctrl+Z is merely paused, so that you can easily start it in the background using the bg command or in the foreground again with the fg command. To start the command again, you need to refer to the job number that the program is using. You can see a list of these job numbers using the jobs command.
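
A short illustrative session shows how the pieces fit together; sleep is used here simply as a stand-in for any long-running command:

$ sleep 600
^Z
[1]+  Stopped                 sleep 600
$ jobs
[1]+  Stopped                 sleep 600
$ bg %1
[1]+ sleep 600 &
$ fg %1
sleep 600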

Source of Information : Apress Beginning Ubuntu Server Administration