Saturday, July 31, 2010

Installing Ubuntu with Only a Netbook

There are two other options that work well for installing Ubuntu on a netbook, or any other kind of Windows system. The approach is a little roundabout, but you can install Ubuntu using only a netbook and a USB drive, without burning anything to CD-ROM. Most netbooks ship with Microsoft Windows XP or Windows 7 installed. Using Windows, download the netbook installation disk onto the netbook. At this point, you have two options: create a bootable USB device from Windows or use the Windows Ubuntu Installer. To create a bootable USB drive, download UNetbootin for Windows from unetbootin.sourceforge.net. This program works like usb-creator, allowing you to select an ISO and install it on a USB device.

With Windows, you can also open up the ISO image. Sitting at the root of the disk image is a program called wubi.exe. This is the Windows Ubuntu Installer. Using Wubi, you can install Ubuntu as an application under Windows that runs a separate operating system. Wubi works by adding itself to the Windows boot menu. This effectively turns the Windows system into a dual-boot computer.

After installing Wubi, reboot the system. You will see Ubuntu listed in the boot menu. If you boot Ubuntu, it will use the Windows boot manager to run an Ubuntu environment. From Ubuntu, you can access the host Windows system through /host and /media. More importantly, you can download the usb-creator for Ubuntu and create a bootable USB device.

Regardless of the approach you take, you should now have a bootable USB drive (or SD Card or other type of removable memory). Tell your netbook to boot off the new media. For example, with an Asus 1005HA netbook, you can press Esc after pressing the power-on button and select the SD Card or USB drive as the boot device. At this point, you can install Ubuntu for the netbook.


HIDDEN DISK PARTITIONS
Many netbooks and laptops have a boot option to restore the operating system. This works by accessing a separate partition on the hard drive that contains a bootable operating system and will restore the system to factory defaults.

During the install process, use the advanced disk partitioning option. This will show you the name of the emergency recovery partition. (It is usually named something like ‘‘XP recovery.’’) There may also be a small Extensible Firmware Interface (EFI) partition used to improve boot times. If you don’t want to accidentally press a button and overwrite your Ubuntu netbook with Windows, then be sure to reformat the drive (or remove the XP recovery partition) during the installation. If you remove the emergency recovery partition, then the operating system will ignore requests for recovery. While removing the EFI partition (fdisk partition type 0xEF) will not harm anything, keeping the small (usually 8 MB) partition can dramatically improve boot times.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Friday, July 30, 2010

Creating the Netbook Installation Media

To create the netbook installation media, you will need to download the netbook ISO (for example, ubuntu-9.10-netbook-remix-i386.iso). If your netbook has a CD-ROM drive, then simply burn the ISO to a disk and boot from it.

However, most netbooks lack a CD-ROM drive. In this case, you will need a 1-GB USB thumb drive, SD Card, or other form of media that is supported by your netbook. You will also need a computer to write the installation image to the media.

The easiest way to make a bootable netbook installation image on a USB thumb drive or SD Card is to use usb-creator. This tool automates the process of putting a CD-ROM image onto other types of removable media.

• Intrepid Ibex (8.10) and later—Install usb-creator using sudo apt-get install usb-creator. The executable is called usb-creator-gtk.

• Hardy Heron (8.04 LTS)—There is an ugly hack for installing usb-creator on Hardy. Hardy and Intrepid are very similar and can run much of the same code.


1. Download Intrepid’s usb-creator package from

https://launchpad.net/ubuntu/intrepid/i386/usb-creator

The file will have a file name like usb-creator_0.1.10_all.deb.

2. Install the dependent packages:

sudo apt-get install syslinux mtools

3. Install Intrepid’s usb-creator package on Hardy:

sudo dpkg -i usb-creator_0.1.10_all.deb

4. The executable is called usb-creator.


The usb-creator program allows you to select the ISO image and destination device. When it finishes the installation, you can connect the USB device (or SD Card) to your netbook and boot from it. If usb-creator is not an option (for example, if you are running Dapper Drake 6.06 LTS), then follow the steps in the section ‘‘Installing a Full File System from USB’’ to copy the netbook installation image to an SD Card or USB device. Use the USB hard drive configuration.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Thursday, July 29, 2010

Using Ubuntu on a Netbook

Beginning with Jaunty Jackalope (9.04), a specialized version of Ubuntu has been created for netbook systems. Netbook computers are a class of low-end laptops. They are generally smaller, less powerful computers. Physically, they usually have smaller keyboards, smaller screens, and no CD or DVD drive.

While you would not want to run a multi-user or high-volume web server on a netbook, they are ideal for simple tasks when you are out of the office. For example, you can check e-mail, surf the web, do some basic word processing, and even occasionally develop software (like patching while on the road). Netbooks are also great for watching movies on airplanes.



Installing on a Netbook
Most netbooks include multiple USB connectors and usually have a slot for an SD Card or similar memory device. Since netbooks lack CD and DVD drives, all include the ability to boot from USB, SD Card, or other types of removable memory. Many also include the ability to boot from the network.

While there is a wide range of netbooks on the market, not all are supported by Ubuntu. Some work straight out of the box, others may require you to manually install drivers or patches, and a few have completely unsupported hardware. Usually the issues concern sound, video camera, or network support.


Even though the netbook release is relatively new and still undergoing major revision changes (the Jaunty desktop looks very different from the Karmic desktop), the installation is one of the most painless processes I have ever experienced.

Before trying to use Ubuntu on a netbook, consult the list of supported hardware at https://wiki.ubuntu.com/HardwareSupport/Machines/Netbooks.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Wednesday, July 28, 2010

Ubuntu Booting Variations and Troubleshooting

I used a variety of computers for testing the USB boot process. Each computer reacted differently to the different boot configurations.

• Every computer with Boot from USB support was able to boot the original boot.img file. They were all able to install over the network.

• Most computers were able to boot the Ubuntu Live Desktop operating system when my 1-GB thumb drive was formatted as a USB floppy drive. However, one computer gave a generic boot error message.

• Only my newer computer systems could boot the USB hard drive with the ext2 file system. It didn’t make any difference if I used a real USB hard drive or thumb drive. In addition, specifying the ZIP configuration was the only way to make the hard drive configuration work on one of the computers.

• My Asus netbook had no issues booting from any of these configurations, and it even worked from a 2-GB SD Card.

Depending on the configuration variation and hardware that you use, you may see some well-known errors.

• Blank screen—If all you see is a blank screen with a blinking cursor, then something definitely did not work. This happens when the boot loader fails. It could be the result of failing to install the boot loader properly, or it could be a BIOS problem. Try rebuilding the USB drive in case you missed a step. Also, try booting the USB drive on a different computer. If it works on one computer and not on another, then it is a BIOS problem. But if it fails everywhere, then it is probably the boot loader.

• ‘‘PCI: Cannot allocate resource region. . .’’—This indicates a BIOS problem. You may be able to boot using additional kernel parameters to bypass the PCI errors, for example:

live noapic nolapic pci=noacpi acpi=off

However, you may not be able to get past this. Check if there is a BIOS upgrade available for your computer.

• Root not found—There are a variety of errors to indicate that the root partition was not available during boot. This usually happens when the USB drive is still initializing or transferring data, and not ready for the root partition to be mounted. You can fix this by extracting the initrd file and editing the conf/initramfs.conf file. Add in a mounting delay of 15 seconds (the new line should say: WAIT=15). This delay gives the USB time to initialize, configure, and transfer data.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Tuesday, July 27, 2010

Using the Live CD from a USB Floppy Drive

Converting the Live CD to a bootable USB floppy drive requires at least a 1-GB thumb drive.

1. Start a shell with root privileges. This is done for convenience, since nearly every command must be run as root.

sudo bash


2. Unmount and blank the thumb drive.


3. Format the disk as one big FAT16 drive. The -I parameter to mkdosfs says to format the entire device. In this example, the USB drive is /dev/sdc.

mkdosfs -I -F 16 /dev/sdc
sync


4. Mount the Live CD and the USB drive:

mkdir /mnt/usb
mkdir /mnt/iso
mount -o loop ubuntu-8.04.3-desktop-i386.iso /mnt/iso/
mount /dev/sdc /mnt/usb


5. Copy over the files. This can take 20 minutes or longer. Go watch TV or have lunch. Also, ignore the errors about symbolic links, since FAT16 does not support them.

cp -rpx /mnt/iso/* /mnt/usb/
sync


6. Set up the files for a bootable disk. Since SYSLINUX does not support subdirectories for kernel files, you need to move these to the top directory on the USB drive.

# move the kernel files and memory tester
mv /mnt/usb/casper/vmlinuz /mnt/usb/vmlinuz
mv /mnt/usb/casper/initrd.gz /mnt/usb/initrd.gz
mv /mnt/usb/install/mt86plus /mnt/usb/mt86plus
# move boot files to top of the drive
mv /mnt/usb/isolinux/* /mnt/usb/
mv /mnt/usb/isolinux.cfg /mnt/usb/syslinux.cfg
rm /mnt/usb/isolinux.bin
# Optional: Delete Windows tools and ISO files to free space
rm -rf /mnt/usb/start.* /mnt/usb/autorun.inf
rm -rf /mnt/usb/bin /mnt/usb/programs
rm -rf /mnt/usb/isolinux
# All done
sync


7. Edit the /mnt/usb/syslinux.cfg file and correct the kernel paths. Remove the paths /casper/ and /install/ wherever you see them. This is because Step 6 moved the files to the root of the USB drive. There should be eight occurrences of /casper/ and one of /install/. After you write your changes, run sync.
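Rather than editing by hand, the path fix can be done with a single sed command. The sketch below demonstrates it on a scratch copy of a few typical syslinux.cfg lines (the sample contents are made up); on the real drive you would run the same sed against /mnt/usb/syslinux.cfg and then sync.

```shell
# Demonstrate the edit on a scratch file (these lines are a made-up sample).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
kernel /casper/vmlinuz
append initrd=/casper/initrd.gz
kernel /install/mt86plus
EOF

# Strip the /casper/ and /install/ prefixes, since Step 6 moved the files
# to the root of the USB drive; on the real drive:
#   sed -i 's|/casper/|/|g; s|/install/|/|g' /mnt/usb/syslinux.cfg
sed -i 's|/casper/|/|g; s|/install/|/|g' "$cfg"
cat "$cfg"
```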


8. Unmount the drive and make it bootable:

umount /mnt/usb
syslinux /dev/sdc
sync
eject /dev/sdc
exit # leave the root shell

The USB thumb drive should now be bootable! You can run the Ubuntu Live operating system or install the operating system from this USB thumb drive. For customization, you can change the boot menu by editing the /mnt/usb/syslinux.cfg file and modifying the kernels.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Monday, July 26, 2010

Using the Boot Image in Ubuntu

The boot.img.gz image is a self-contained file system and only uses 8 MB of disk space. If you have a bigger thumb drive (for example, 64 MB or 2 GB), then you can copy diagnostic tools or other stuff onto the drive.

In order to create a bootable USB drive, you will need a boot loader. The choices are GRUB or SYSLINUX. There are significant tradeoffs here. GRUB is the default boot loader used when Ubuntu is installed. However, using GRUB requires you to know the drive identifier, such as /dev/sda1.

Since you may plug in and remove USB devices, the identifier may change, breaking the boot loader’s configuration. SYSLINUX does not use a static drive identifier, but is limited to supporting FAT12 or FAT16 drives. Since USB devices are expected to be portable, use SYSLINUX:

sudo apt-get install syslinux mtools

The main steps require you to format the drive as FAT16 and use syslinux to make it bootable.

1. Start a shell with root privileges:

sudo bash

2. Unmount the USB drive, if it is already mounted.

3. Format the drive as a FAT16 USB floppy drive (in this example, /dev/sdc) and mount it:

mkdosfs -I -F 16 /dev/sdc
sync
mkdir /mnt/usb
mount /dev/sdc /mnt/usb

4. Mount the boot.img file. You will use this to provide the boot files.

zcat boot.img.gz > boot.img
mkdir /mnt/img
mount -o loop boot.img /mnt/img

5. Copy the files over to the USB drive. This can take a few minutes.

sudo bash # become root, run these commands as root
(cd /mnt/img; tar -cf - *) | (cd /mnt/usb; tar -xvf -)
sync

6. Set up the files for a bootable disk. This is done by renaming the SYSLINUX configuration file for an ISO image (isolinux.cfg) to the configuration file for a FAT16 system (syslinux.cfg):

mv /mnt/usb/isolinux.cfg /mnt/usb/syslinux.cfg
rm /mnt/usb/isolinux.bin
sync

7. Unmount the drive and make it bootable by installing the boot loader:

umount /mnt/usb
syslinux /dev/sdc
sync
eject /dev/sdc
exit # leave the root shell

Now you can boot from the USB drive in order to install the operating system.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Sunday, July 25, 2010

Starting the Ubuntu Network Install from a USB Drive

USB drives can be used to simplify system installations. For example, if the computer can boot from a USB drive, then you can use it to launch a network installation.

Configuring the thumb drive for use as a network installation system requires some simple steps:

1. Plug in the USB drive. If it mounts, unmount it.

2. Download the boot image. There is a different boot image for every platform. Be sure to retrieve the correct one for your Ubuntu release. For example, for Hardy Heron (8.04 LTS), use:

wget http://archive.ubuntu.com/ubuntu/dists/\
hardy/main/installer-i386/current/images/netboot/boot.img.gz

3. The boot image is preconfigured as a USB floppy drive. Copy the image onto the thumb drive. Be sure to specify the base device (for example, /dev/sda) and not any existing partitions (for example, /dev/sda1).

sudo sh -c 'zcat boot.img.gz > /dev/sda'

4. Use sync to ensure that all writes complete, and then eject the thumb drive:

sudo sync; sudo eject /dev/sda

Now you are ready to boot off the thumb drive, and the operating system will be installed over the network.

Every PC that I tested with Boot from USB support was able to run the default network installer: boot.img.gz. However, since USB support is not consistent, this may not necessarily work on your hardware. If you cannot get it to boot, then make sure your BIOS is configured to boot from the USB drive, that it boots from the USB before booting from other devices, and that the USB drive is connected to the system. If you have multiple USB devices connected, remove all but the bootable thumb drive.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Saturday, July 24, 2010

The 10-Step Ubuntu Boot Configuration

Creating a bootable USB thumb drive requires 10 basic steps:

1. Unmount the drive. When you plug a USB drive into the computer, Ubuntu immediately mounts it. You need to unmount it before you can partition or format it.

Use the mount command to list the current mount points and identify the USB thumb drive. Be aware that the device name will likely be different for you. In this example, the device is /dev/sda1 and the drive label is NEAL.

$ mount
/dev/hda1 on / type ext3 (rw,errors=remount-ro)
proc on /proc type proc (rw)
/sys on /sys type sysfs (rw)
varrun on /var/run type tmpfs (rw)
varlock on /var/lock type tmpfs (rw)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
devshm on /dev/shm type tmpfs (rw)
lrm on /lib/modules/2.6.15-26-686/volatile type tmpfs (rw)
/dev/sda1 on /media/NEAL type vfat (rw,nodev,quiet,umask=077)

Use the umount command to free the device:

sudo umount /dev/sda1



2. Initialize the USB device. This is needed because previous configurations could leave residues that will interfere with future configurations. The simplest way to zero a device is to use dd. Keep in mind that large drives (even 1-GB thumb drives) may take a long time to zero. Fortunately, you usually only need to zero the first few sectors.

dd if=/dev/zero of=/dev/sda            # zero all of /dev/sda
dd if=/dev/zero of=/dev/sda count=2048 # zero the first 2048 sectors

Use the sync command (sudo sync) to make sure that all data is written. After zeroing the device, unplug it and plug it back in. This will remove any stale device partitions. Ubuntu will not mount a blank device, but it will create a device handle for it.



3. If you are making a USB hard drive, then partition the device:

sudo fdisk /dev/sda



4. Format the partitions. If you are making a USB floppy drive, then format the base device (/dev/sda). For USB hard drives, format each of the partitions (/dev/sda1, /dev/sda2, etc.).



5. Mount the partition.



6. Copy files to the partition.



7. Place the kernel and boot files on the partition.



8. Configure the boot menus and options.



9. Use the sync command (sudo sync) to make sure that all data is written and then unmount the partition.



10. Install the boot manager.


Now the device should be bootable. The next few sections show different ways to do these 10 steps.
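Steps 2 and 4 can be rehearsed without risking a real disk by working on a scratch image file. This sketch stands in an image file for a partition such as /dev/sda1, and uses ext2 as one example file system choice (the file names and sizes are made up):

```shell
# A scratch image file stands in for a real partition such as /dev/sda1.
img=$(mktemp)
truncate -s 16M "$img"

# Step 2 equivalent: zero the start of the "device" to clear old residue.
dd if=/dev/zero of="$img" bs=512 count=2048 conv=notrunc 2>/dev/null

# Step 4 equivalent: format it (ext2 here; -F permits a non-block device).
mkfs.ext2 -q -F "$img"
```

On a real device you would drop the -F, point the commands at the actual partition, and finish with sync.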

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Friday, July 23, 2010

Booting Ubuntu from a USB Drive

Beyond file sharing, USB drives can be used as bootable devices. If your computer supports booting from a USB drive, then this is a great option for developing a portable operating system, creating an emergency recovery disk, or installing the OS on other computers.

Although most systems today support USB drives, the ability to boot from a USB thumb drive is inconsistent. Even if you create a bootable USB drive, your BIOS may still prevent you from booting from it. It seems like every computer has a different way to change BIOS settings. Generally, you power on the computer and press a key before the operating system boots. The key may be F1, F2, F10, Del, Esc, or some other key or combination of keys. It all depends on your computer’s BIOS. When you get into the BIOS, there is usually a set of menus, including one for the boot order. If you can boot from a USB device, this is where you will set it. However, every computer is different, and you may need to have the USB drive plugged in when you power on before seeing any options for booting from it.



Different USB Devices
Even if your computer supports booting from a USB device, it may not support all of the different USB configurations. In general, thumb drives can be configured in one of three ways:

Small USB floppy drives—Thumb drives configured as USB floppy devices (that is, no partitions) with a capacity of 256 MB or less are widely supported. If your computer cannot boot this configuration, then the chances of your computer booting any configuration are very slim.

Large USB floppy drives—These are USB floppy devices with capacities greater than 256 MB. My own tests used two different 1-GB thumb drives, a 2-GB SD Card, and a 250-GB USB hard drive.

USB hard drives—In my experience, this is the least-supported bootable configuration for older hardware. I only have one computer that was able to boot from a partitioned USB hard drive. However, every laptop I tested seems to support this configuration.

Changing between a USB hard drive and a USB floppy drive is as simple as formatting the base device or using fdisk and formatting a partition. However, converting a large USB floppy device into a small USB floppy device cannot be done directly.

1. Use dd to create a file that is as big as the drive you want to create. For example, to create a 32-MB USB drive, start with a 32-MB file:

dd if=/dev/zero of=usbfloppy.img bs=32M count=1

2. Treat this file as the base device. For example, you can format it and mount it.

mkfs usbfloppy.img
sudo mkdir /mnt/usb
sudo mount -o loop usbfloppy.img /mnt/usb

3. When you are all done configuring the USB floppy drive image, unmount it and copy it to the real USB device (for example, /dev/sda). This will make the real USB device appear to be a smaller USB floppy device.

sudo umount /mnt/usb
dd if=usbfloppy.img of=/dev/sda

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Thursday, July 22, 2010

Helping with Big Fonts

I have a couple of coworkers who wear reading glasses when sitting at the computer. More than once, they have come over to my work area and had to run back to get their glasses. To help them (and tease them), I created a Grandpa Mode macro that increases the screen font size, just for them. Permanently setting up the ability to use Grandpa Mode requires four commands. The first two define custom commands that change the dpi value. The second two bind the commands to key sequences.

gconftool-2 -t str --set /apps/metacity/keybinding_commands/command_7 \
'gconftool-2 -t float --set /desktop/gnome/font_rendering/dpi 200'
gconftool-2 -t str --set /apps/metacity/keybinding_commands/command_8 \
'gconftool-2 -t float --set /desktop/gnome/font_rendering/dpi 96'
gconftool-2 -t str --set /apps/metacity/global_keybindings/run_command_7 \
'<Control>F7'
gconftool-2 -t str --set /apps/metacity/global_keybindings/run_command_8 \
'<Control>F8'

Now pressing Ctrl+F7 changes the resolution to 200 dpi and makes the fonts appear large (and coworkers can read it without glasses). Ctrl+F8 returns the
screen to the system default of 96 dpi.

If you want to remove Grandpa Mode, you can use the gconftool-2 --unset option:

# reset default dpi
gconftool-2 -t float --set /desktop/gnome/font_rendering/dpi 96
# unset key mappings
gconftool-2 --unset /apps/metacity/keybinding_commands/command_7
gconftool-2 --unset /apps/metacity/keybinding_commands/command_8
gconftool-2 --unset /apps/metacity/global_keybindings/run_command_7
gconftool-2 --unset /apps/metacity/global_keybindings/run_command_8

Selecting the Ubuntu Version

Each Ubuntu release is designed to require only one CD-ROM for installing the system. This reduces the need for swapping disks during the installation. Unfortunately, one disk cannot hold everything needed for a complete environment. To resolve this issue, Ubuntu has many different types of initial install images that address different system needs.

Desktop—This image provides a Live Desktop. This can be used to test-drive the operating system or install a desktop or workstation system. The installation includes the Gnome graphical environment and user-oriented tools, including office applications, multimedia players, and games.

Alternate—Similar to the Desktop image, this image installs the desktop version of Ubuntu, but it does not use a graphical installer. This is a very desirable option when the graphics or mouse does not work correctly from the Desktop installer.

Server—This minimal install image has no graphical desktop. It is ideal for servers and headless (without monitor) systems. The image includes server software such as a Secure Shell server, web server, and mail server, but none is installed by default.

Netbook—Introduced with Jaunty Jackalope (9.04), the netbook edition (also called the Ubuntu Netbook Remix) is a version customized for portable netbook systems.

The names for the installation images do not exactly match the functionality. The names were chosen to avoid confusion with previous Ubuntu releases. (If they called the Desktop CD-ROM Install, people might not realize it also contains a Live Desktop.) Better names might be Live CD with Desktop Install, OEM with Text Desktops, and Server with Minimal System Configuration. But then again, these are pretty long names, so we'll stick with Desktop, Alternate, Server, and Netbook.

There are more installation options than these four CD-ROM images. For example, there is an Ubuntu DVD image. The DVD contains everything found on all of the CD-ROM images, including the live operating system. There are also unofficial ports to other platforms. For example, installation disks for the PowerPC, Sun UltraSPARC, IA-64, and other architectures are available from http://cdimage.ubuntu.com/ports/releases/. While these platforms may not receive immediate updates and first-tier support, they are community supported.

Each installation disk has the option for a basic install as well as a few other common options. For example, you can verify the installation media using the check for CD defects, test your hardware with the Memory Test, and access an installed system using the Rescue option. There are also options specific to certain installation disks.

From the Ubuntu web site (ubuntu.com), it can be difficult to find anything other than the Desktop and Server versions of the current and LTS releases for download. The web sites releases.ubuntu.com and cdimage.ubuntu.com provide easy access to all of the release images.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Wednesday, July 21, 2010

Understanding Ubuntu Names

Each Ubuntu release is associated with a number and name. The release number is the year and month of the release in an internationalized format. So ‘‘6.06’’ is June 2006 and ‘‘9.04’’ is April 2009 (and not September 2004). Each release is also associated with a common name. Releases are commonly referred to by their names. For example, 9.04 is commonly called Jaunty Jackalope or simply Jaunty.

Ubuntu Releases (supported until)
Warty Warthog - 4.10 - April 2006
Hoary Hedgehog - 5.04 - October 2006
Breezy Badger - 5.10 - April 2007
Dapper Drake - 6.06 LTS - July 2009 (desktop), June 2011 (server)
Edgy Eft - 6.10 - April 2008
Feisty Fawn - 7.04 - October 2008
Gutsy Gibbon - 7.10 - April 2009
Hardy Heron - 8.04 LTS - April 2011 (desktop), April 2013 (server)
Intrepid Ibex - 8.10 - April 2010
Jaunty Jackalope - 9.04 - October 2010
Karmic Koala - 9.10 - April 2011
Lucid Lynx - 10.04 LTS - April 2013 (desktop), April 2015 (server)

While most releases have 18 months of support, every other year a long-term support (LTS) version is released. The LTS releases provide three years of updates for the desktop, and five years for servers. The LTS is an excellent option for systems that cannot afford to be completely replaced every 18 months.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Tuesday, July 20, 2010

Selecting a Linux Distribution

Ubuntu is a Linux distribution based on Debian Linux. Different Linux distributions target different functional niches. The goal of Ubuntu is to bring Linux into the desktop workspace. To do this, it needs to provide a stable user interface, plenty of office tools, and drivers for a myriad of peripherals, while still being user-friendly. Although different groups manage nearly every open source project, Canonical Ltd. provides a central point for development and support. Canonical, along with the Ubuntu community, can answer most of your technical (and not so technical) questions.

Ubuntu is the basis for a variety of Linux distributions—most only differ in the user interface, although some do include specific software configurations. The basic Ubuntu distribution uses the Gnome desktop and is geared toward desktop or server systems. Other distributions based on Ubuntu include:

• Kubuntu—A variation of Ubuntu with the K Desktop Environment (KDE)

• Xubuntu—A variation of Ubuntu with the Xfce Desktop Environment

• Edubuntu—A modified version of Ubuntu that is loaded with educational applications

In each case, it is possible to switch from one installed version to another. For example, you can install Ubuntu, add in KDE, and remove Gnome, and you'll have an environment that looks like Kubuntu. To convert an Ubuntu installation to Kubuntu requires changing the desktop, office applications (OpenOffice to KOffice), and swapping other tools. Instead of modifying one distribution to look like another, you should just start with the right distribution.

WHICH DISTRIBUTION IS RIGHT FOR YOU?
Different Linux distributions fill specific needs. For example, although RedHat started life as a unifying distribution, it primarily supported English applications. SuSE was a popular internationalized distribution. Many distributions were maintained by modifying other distributions. For example, ASPLinux is a version of RedHat with multilingual support for Asian and Slavic languages, and the Beowulf clustered computing environment is based on RedHat. Although RedHat has seeded many different distributions, it is not alone. Debian Linux is another distribution with a significant following. As with RedHat, Debian has been used to spawn many different niche distributions. Although Ubuntu is based on Debian, it is also seeding other distributions.
Different distributions of the Linux operating system are sometimes called flavors. There are hundreds of different supported flavors of Linux, each with a different focus. You can see the listing of official distributions at www.linux.org.

Most people won’t install KDE and remove Gnome in order to change their desktop. Instead, they will add KDE to the system and keep both Gnome and KDE installed.


To give you an example of the complexity, here’s how to add KDE to an Ubuntu system that already uses the Gnome desktop:

1. Install KDE.

sudo apt-get install kubuntu-desktop

This requires about 700 MB of disk space. The installation will ask if you want Gnome (gdm) or KDE (kdm) as the default desktop.

2. Log out. This gets you out of the active Gnome desktop.

3. On the login page, select the user.

4. Select KDE from the Sessions menu.

5. Log in using KDE.

Source of Information : Wiley Ubuntu Powerful Hacks And Customizations

Monday, July 19, 2010

GRUB

GRUB is the boot loader commonly used in desktop systems, having supplanted LILO in the past few years. GRUB performs the same job as LILO: after the first-stage boot loader has run, GRUB finds a kernel, puts it into memory, and lets the system start. GRUB divides booting into three stages: 1, 1.5, and 2. The stage 1 boot loader fits into the MBR of the device; its job is to mount the devices necessary to run the stage 2 GRUB boot loader, which reads a configuration file and presents a user interface. The stage 1.5 boot loader is necessary when the code required to find the stage 2 boot loader doesn't fit into the 512 bytes of the MBR.

GRUB is controlled by the /boot/grub/menu.lst file stored on the boot partition configured when you install GRUB. This file is divided into one section (at the start of the file) with global options and a second section containing a list of kernels to boot. A typical menu.lst file for an embedded system looks like the following:

title Linux

root (hd0,1)

kernel /zImage root=/dev/hda2 ro

The root parameter indicates that / should be mapped to the device hd0's second partition (GRUB numbers partitions from 0). The next line tells the system to get the kernel at /zImage. Because there's only one entry, GRUB doesn't display a menu. This root parameter doesn't have an effect on the root file system that the kernel eventually mounts; it's the root file system for the boot loader itself. The root device format is different from Linux's, which can result in confusion. In GRUB, a device has the following format:

(device[bios number][,partition])

Device can be one of the following values: hd for fixed disks, fd for floppy disks, or nd for network drives. The number that follows is the identifier assigned by the computer’s BIOS. You can find this in the BIOS setup for the computer; the assigned numbers start at 0 and work upward. The partition is the logical division of the drive. To find the partitions on a drive, use the sfdisk command:

$ sudo /sbin/sfdisk -l

GRUB allows you to load the kernel from a TFTP server. To do this, you need to configure the IP parameters and use (nd) instead of (hd0,1) as the root device. For example:

ifconfig --address=10.0.0.1 --server=10.0.0.2
kernel (nd)/bzImage


This results in GRUB configuring the adapter with an IP address of 10.0.0.1 and the default netmask (255.0.0.0); GRUB then contacts 10.0.0.2 to download the kernel file bzImage via TFTP and boot the system.
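A menu appears only when menu.lst contains more than one entry. As a sketch (the second entry and its single-user flag are illustrative, not from the source), a two-entry file using the global options default and timeout might look like this:

```
# Hypothetical menu.lst with two entries; GRUB displays a menu and boots
# the default entry after the timeout expires.
default 0          # boot the first title unless the user picks another
timeout 5          # seconds to wait at the menu

title Linux
root (hd0,1)
kernel /zImage root=/dev/hda2 ro

title Linux (single user)
root (hd0,1)
kernel /zImage root=/dev/hda2 ro single
```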

Source of Information : Pro Linux Embedded Systems

Sunday, July 18, 2010

LILO

You’re probably familiar with LILO as a boot loader for desktops and x86 systems; in the early days of Linux, this was the only boot loader. LILO has been surpassed in popularity by GRUB, which has more features, but the minimalistic nature of LILO is what makes it ideal for embedded systems. Recall from the first part of the chapter that LILO is a second-stage boot loader for an x86 system. It’s loaded from the Master Boot Record (MBR) of the first bootable device the BIOS locates. LILO gets its marching orders from the lilo.conf file. The contents of this file are written to the device’s MBR by LILO as part of the configuration process. In this file, you can specify several different boot-up configurations, setting one as the default. You can also set parameters for all configurations; LILO calls these global parameters. The structure of lilo.conf is such that the global options precede
the image section, where you tell LILO what kernel to load. A typical lilo.conf file for an embedded system looks like the following:

boot=/dev/hda
root=/dev/hda1
read-only
default=theapp
# kernel image to boot
image=/boot/zImage
label=theapp

This tells the software to load the kernel located in the /boot directory for the root device (in this case, /dev/hda1). default= isn't necessary, because the file contains just one configuration; but being explicit is a good habit, because if this file is changed, LILO will prompt you for an image label, and that could be problematic if the device doesn't have traditional input like a mouse or keyboard.
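As a sketch (the fallback image name is invented for illustration), a lilo.conf with a second, known-good kernel might look like the following. Because the configuration takes effect only when LILO writes the boot sector, remember to rerun /sbin/lilo after each edit:

```
boot=/dev/hda
root=/dev/hda1
read-only
prompt             # show the boot: prompt
timeout=50         # tenths of a second before booting the default
default=theapp

image=/boot/zImage
label=theapp

image=/boot/zImage.old
label=fallback
```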

Source of Information : Pro Linux Embedded Systems

Saturday, July 17, 2010

Linux System Network

A Linux system attached to a network is probably communicating on an Ethernet, which may in turn be linked to other local area networks (LANs) and wide area networks (WANs). Communication between LANs and WANs requires the use of gateways and routers. Gateways translate the local data into a format suitable for the WAN, and routers make decisions about the optimal routing of the data along the way. The most widely used network, by far, is the Internet.

Basic networking tools allow Linux users to log in and run commands on remote systems (ssh, telnet) and copy files quickly from one system to another (scp, ftp/sftp). Many tools that were originally designed to support communication on a single host computer (for example, finger and talk) have since been extended to recognize network addresses, thus allowing users on different systems to interact with one another. Other features, such as the Network Filesystem (NFS), were created to extend the basic UNIX model and to simplify information sharing.

Concern is growing about our ability to protect the security and privacy of machines connected to networks and of data transmitted over networks. Toward this end, many new tools and protocols have been created: ssh, scp, HTTPS, IPv6, firewall hardware and software, VPN, and so on. Many of these tools take advantage of newer, stronger encryption techniques. In addition, some weaker concepts (such as that of trusted hosts) and some tools (such as finger and rwho) are being discarded in the name of security.

Computer networks offer two major advantages over other ways of connecting computers: They enable systems to communicate at high speeds and they require few physical interconnections (typically one per system, often on a shared cable). The Internet Protocol (IP), the universal language of the Internet, has made it possible for dissimilar computer systems around the world to readily communicate with one another. Technological advances continue to improve the performance of computer systems and the networks that link them.

One way to gather information on the Internet is via Usenet. Many Linux users routinely peruse Usenet news (netnews) to learn about the latest resources available for their systems. Usenet news is organized into newsgroups that cover a wide range of topics, computer-related and otherwise. To read Usenet news, you need to have access to a news server and the appropriate client software. Many modern email programs, such as Mozilla and Netscape, can display netnews.

The rapid increase of network communication speeds in recent years has encouraged the development of many new applications and services. The World Wide Web provides access to vast information stores on the Internet and makes extensive use of hypertext links to promote efficient searching through related documents. It adheres to the client/server model that is so pervasive in networking. Typically the WWW client is local to a site or is made available through an Internet service provider. WWW servers are responsible for providing the information requested by their many clients.

Mozilla/Firefox is a WWW client program that has enormous popular appeal. Firefox and other browsers use a GUI to give you access to text, picture, and audio information: Making extensive use of these hypermedia simplifies access to and enhances the presentation of information.

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Friday, July 16, 2010

RPC Network Services

Much of the client/server interaction over a network is implemented using the RPC (Remote Procedure Call) protocol, which is implemented as a set of library calls that make network access transparent to the client and server. RPC specifies and interprets messages but does not concern itself with transport protocols; it runs on top of TCP/IP and UDP/IP. Services that use RPC include NFS and NIS. RPC was developed by Sun as ONC RPC (Open Network Computing Remote Procedure Calls) and differs from Microsoft RPC.

In the client/server model, a client contacts a server on a specific port so that services, clients, and servers do not get mixed up. To avoid maintaining a long list of port numbers, and to enable new clients and servers to start up without registering a port number with a central registry, a server that uses RPC specifies, when it starts, the port it expects to be contacted on. RPC servers typically use port numbers that have been defined by Sun. If a server does not use a predefined port number, it picks an arbitrary number.

The server then registers this port with the RPC portmapper (the rpcbind [FEDORA] or portmap [RHEL] daemon) on the local system. The server tells the daemon which port number it is listening on and which RPC program numbers it serves. Through these exchanges, the portmapper learns the location of every registered port on the host and the programs that are available on each port. The rpcbind/portmap daemon, which always listens on port 111 for both TCP and UDP, must be running to make RPC calls.

The /etc/rpc file maps RPC services to RPC numbers. The /etc/services file lists system services.
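The /etc/rpc format is simple: the service name, the RPC program number, then any aliases. The following sketch (the function name and inline sample lines are illustrative; a real script would read /etc/rpc itself) looks up a program number by name or alias:

```shell
#!/bin/bash
# Look up an RPC program number by service name or alias. The sample lines
# below mirror /etc/rpc's layout (name, number, aliases); '#' starts a comment.
rpc_number() {
    awk -v svc="$1" '
        /^#/ { next }                        # skip comment lines
        {
            for (i = 1; i <= NF; i++) {
                if ($i ~ /^#/) break         # ignore trailing comments
                if (i != 2 && $i == svc) { print $2; exit }
            }
        }'
}

sample='portmapper 100000 portmap sunrpc rpcbind
nfs 100003 nfsprog
ypserv 100004 ypprog nis'

printf '%s\n' "$sample" | rpc_number nfs       # prints: 100003
printf '%s\n' "$sample" | rpc_number rpcbind   # prints: 100000
```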

The sequence of events for communication between an RPC client and server occurs as follows:

1. The client program on the client system makes an RPC call to obtain data from a (remote) server system. (The client issues a “read record from a file” request.)

2. If RPC has not yet established a connection with the server system for the client program, it contacts rpcbind/portmap on port 111 of the server and asks which port the desired RPC server is listening on (for example, rpc.nfsd).

3. The rpcbind/portmap daemon on the remote server looks in its tables and returns the UDP/TCP port number the desired server is listening on to the client (typically 2049 for nfs).

4. The RPC libraries on the server system receive the call from the client and pass the request to the appropriate server program. The origin of the request is transparent to the server program. (The filesystem receives the “read record from file” request.)

5. The server responds to the request. (The filesystem reads the record.)

6. The RPC libraries on the remote server return the result over the network to the client program. (The read record is returned to the calling program.)

Because standard RPC servers are normally started by the xinetd daemon, the portmap daemon must be started before the xinetd daemon is invoked. The init scripts make sure portmap starts before xinetd. You can confirm this sequence by looking at the numbers associated with /etc/rc.d/*/S*portmap and /etc/rc.d/*/S*xinetd. If the portmap daemon stops, you must restart all RPC servers on the local system.

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Thursday, July 15, 2010

Proxy Servers

A proxy is a network service that is authorized to act for a system while not being part of that system. A proxy server or proxy gateway provides proxy services; it is a transparent intermediary, relaying communications back and forth between an application (such as a browser) and a server, usually outside of a LAN and frequently on the Internet. When more than one process uses the proxy gateway/server, the proxy must keep track of which processes are connecting to which hosts/servers so that it can route the return messages to the proper process. The most commonly encountered proxies are email and Web proxies.

A proxy server/gateway insulates the local computer from all other computers or from specified domains by using at least two IP addresses: one to communicate with the local computer and one to communicate with a server. The proxy server/gateway examines and changes the header information on all packets it handles so that it can encode, route, and decode them properly. The difference between a proxy gateway and a proxy server is that the proxy server usually includes a cache to store frequently used Web pages so that the next request for such a page can be satisfied locally and quickly; a proxy gateway typically does not use a cache. The terms "proxy server" and "proxy gateway" are frequently used interchangeably.

Proxy servers/gateways are available for such common Internet services as HTTP, HTTPS, FTP, SMTP, and SNMP. When an HTTP proxy sends queries from local systems, it presents a single organization-wide IP address (the external IP address of the proxy server/gateway) to all servers. It funnels all user requests to the appropriate servers and keeps track of them. When the responses come back, the HTTP proxy fans them out to the appropriate applications using each machine's unique IP address, thereby protecting local addresses from remote/specified servers.

Proxy servers/gateways are generally just one part of an overall firewall strategy to prevent intruders from stealing information or damaging an internal network. Other functions, which can be either combined with or kept separate from the proxy server/gateway, include packet filtering, which blocks traffic based on origin and type, and user activity reporting, which helps management learn how the Internet is being used.

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Wednesday, July 14, 2010

Internet Services

Linux Internet services are provided by daemons that run continuously or by a daemon that is started automatically by the xinetd daemon when a service request comes in. The /etc/services file lists network services (for example, telnet, ftp, and ssh) and their associated numbers. Any service that uses TCP/IP or UDP/IP has an entry in this file. IANA (Internet Assigned Numbers Authority) maintains a database of all permanent, registered services. The /etc/services file usually lists a small, commonly used subset of services. Visit www.rfc.net/rfc1700.html for more information and a complete list of registered services.
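Entries in /etc/services have the form name, port/protocol, optional aliases. As a sketch (the function name and inline sample lines are illustrative; a real script would consult /etc/services or getent), here is how such lines can be resolved to a port number:

```shell
#!/bin/bash
# Resolve a service name and protocol to a port number. The sample lines
# mirror /etc/services' layout (name, port/protocol, aliases).
service_port() {
    awk -v n="$1" -v p="$2" '
        /^#/ { next }                # skip comment lines
        { split($2, a, "/"); if ($1 == n && a[2] == p) { print a[1]; exit } }'
}

sample='ssh 22/tcp
telnet 23/tcp
domain 53/udp'

printf '%s\n' "$sample" | service_port ssh tcp     # prints: 22
printf '%s\n' "$sample" | service_port domain udp  # prints: 53
```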

Most of the daemons (the executable files) are stored in /usr/sbin. By convention the names of many daemons end with the letter d to distinguish them from utilities (one common daemon whose name does not end in d is sendmail). The prefix in. or rpc. is often used for daemon names. Look at /usr/sbin/*d to see a list of many of the daemon programs on the local system.

To see how a daemon works, consider what happens when you run ssh. The local system contacts the ssh daemon (sshd) on the remote system to establish a connection. The two systems negotiate the connection according to a fixed protocol. Each system identifies itself to the other, and then they take turns asking each other specific questions and waiting for valid replies. Each network service follows its own protocol.

In addition to the daemons that support the utilities described up to this point, many other daemons support system-level network services that you will not typically interact with.

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Tuesday, July 13, 2010

Types of Networks

Broadcast Networks
On a broadcast network, such as Ethernet, any of the many systems attached to the network cable can send a message at any time; each system examines the address in each message and responds only to messages addressed to it. A problem occurs on a broadcast network when multiple systems send data at the same time, resulting in a collision of the messages on the cable. When messages collide, they can become garbled. The sending system notices the garbled message and resends it after waiting a short but random amount of time. Waiting a random amount of time helps prevent those same systems from resending the data at the same moment and experiencing yet another collision. The extra traffic that results from collisions can strain the network; if the collision rate gets too high, retransmissions may result in more collisions. Ultimately the network may become unusable.



Point-to-Point Networks
A point-to-point link does not seem like much of a network because only two endpoints are involved. However, most connections to WANs (wide area networks) go through point-to-point links, using wire cable, radio, or satellite links. The advantage of a point-to-point link is its simplicity: Because only two systems are involved, the traffic on the link is limited and well understood. A disadvantage is that each system can typically be equipped for only a small number of such links; it is impractical and costly to establish point-to-point links that connect each computer to all the rest. Point-to-point links often use serial lines and modems. The combination of a modem with a point-to-point link allows an isolated system to connect inexpensively to a larger network. The most common types of point-to-point links are the ones used to connect to the Internet. When you use DSL (digital subscriber line), you are using a point-to-point link to connect to the Internet. Serial lines, such as T-1, T-3, ATM links, and ISDN, are all point-to-point. Although it might seem like a point-to-point link, a cable modem is based on broadcast technology and in that way is similar to Ethernet.



Switched Networks
A switch is a device that establishes a virtual path between source and destination hosts in such a way that each path appears to be a point-to-point link, much like a railroad roundhouse. The switch creates and tears down virtual paths as hosts seek to communicate with each other. Each host thinks it has a direct point-to-point path to the host it is talking to. Contrast this approach with a broadcast network, where each host also sees traffic bound for other hosts. The advantage of a switched network over a pure point-to-point network is that each host requires only one connection: the connection to the switch. Using pure point-to-point connections, each host must have a connection to every other host. Scalability is provided by further linking switches.

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Monday, July 12, 2010

Networks specifications

Computers communicate over networks using unique addresses assigned by system software. A computer message, called a packet, frame, or datagram, includes the address of the destination computer and the sender's return address. The three most common types of networks are broadcast, point-to-point, and switched. Once-popular token-based networks (such as FDDI and token ring) are rarely seen anymore.

Speed is critical to the proper functioning of the Internet. Newer specifications (cat 6 and cat 7) are being standardized for 1000BaseT (1 gigabit per second, called gigabit Ethernet, or GIG-E) and faster networking. Some of the networks that form the backbone of the Internet run at speeds of almost 10 gigabits per second (OC192) to accommodate the ever-increasing demand for network services.


Network specifications
DS0
64 kilobits per second

ISDN
Two DS0 lines plus signaling (16 kilobits per second) or 128 kilobits per second

T-1
1.544 megabits per second (24 DS0 lines)

T-3
43.232 megabits per second (28 T-1s)

OC3
155 megabits per second (100 T-1s)

OC12
622 megabits per second (4 OC3s)

OC48
2.5 gigabits per second (4 OC12s)

OC192
9.6 gigabits per second (4 OC48s)
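The DS0 multiples in the list above can be sanity-checked with shell arithmetic; note that a T-1's quoted 1.544 megabits per second is the 24 x 64 kbps payload plus 8 kbps of framing overhead:

```shell
#!/bin/bash
# Sanity-check the DS0 multiples with shell arithmetic.
ds0=64                               # kilobits per second
t1_payload=$(( 24 * ds0 ))           # 1536 kbps of payload in a T-1
t1_line=$(( t1_payload + 8 ))        # plus 8 kbps framing = 1544 kbps (1.544 Mbps)
t3=$(( 28 * t1_line ))               # 43232 kbps, the 43.232 Mbps quoted for T-3
echo "T-1: $t1_line kbps  T-3: $t3 kbps"
```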

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Sunday, July 11, 2010

Networking and the Internet

The communications facilities linking computers are continually improving, allowing faster and more economical connections. The earliest computers were unconnected, stand-alone systems. To transfer information from one system to another, you had to store it in some form (usually magnetic tape, paper tape, or punch cards—called IBM or Hollerith cards), carry it to a compatible system, and read it back in. A notable advance occurred when computers began to exchange data over serial lines, although the transfer rate was slow (hundreds of bits per second). People quickly invented new ways to take advantage of this computing power, such as email, news retrieval, and bulletin board services. With the speed of today's networks, a piece of email can cross the country or even travel halfway around the world in a few seconds.

Today it would be difficult to find a computer facility that does not include a LAN to link its systems. Linux systems are typically attached to an Ethernet network. Wireless networks are also prevalent. Large computer facilities usually maintain several networks, often of different types, and almost certainly have connections to larger networks (companywide or campuswide and beyond).



Internet
The Internet is a loosely administered network of networks (an internetwork) that links computers on diverse LANs around the globe. An internet (small i) is a generic network of networks that may share some parts in common with the public Internet. It is the Internet that makes it possible to send an email message to a colleague thousands of miles away and receive a reply within minutes. A related term, intranet, refers to the networking infrastructure within a company or other institution. Intranets are usually private; access to them from external networks may be limited and carefully controlled, typically using firewalls.



Network services
Over the past decade many network services have emerged and become standardized. On Linux and UNIX systems, special processes called daemons support such services by exchanging specialized messages with other systems over the network. Several software systems have been created to allow computers to share filesystems with one another, making it appear as though remote files are stored on local disks. Sharing remote filesystems allows users to share information without knowing where the files physically reside, without making unnecessary copies, and without learning a new set of utilities to manipulate them. Because the files appear to be stored locally, you can use standard utilities (such as cat, vim, lpr, mv, or their graphical counterparts) to work with them.

Developers have created new tools and extended existing ones to take advantage of higher network speeds and to work within more crowded networks. The rlogin, rsh, and telnet utilities, which were designed long ago, have largely been supplanted by ssh (secure shell, page 621) in recent years. The ssh utility allows a user to log in on or execute commands securely on a remote computer. Users rely on such utilities as scp and ftp to transfer files from one system to another across the network. Communication utilities, including email utilities and chat programs (e.g., talk, Internet Relay Chat [IRC], ICQ, and instant messenger [IM] programs, such as AOL’s AIM and gaim) have become so prevalent that many people with very little computer expertise use them on a daily basis to keep in touch with friends, family, and colleagues.



Intranet
An intranet is a network that connects computing resources at a school, company, or other organization but, unlike the Internet, typically restricts access to internal users. An intranet is very similar to a LAN (local area network) but is based on Internet technology. An intranet can provide database, email, and Web page access to a limited group of people, regardless of their geographic location.

The ability of an intranet to connect dissimilar machines is one of its strengths. Think of all the machines you can find on the Internet: Macintosh systems, PCs running different versions of Windows, machines running UNIX and Linux, and so on. Each of these machines can communicate via IP, a common protocol. So it is with an intranet: Dissimilar machines can all talk to one another.

Another key difference between the Internet and an intranet is that the Internet transmits only one protocol suite: IP. In contrast, an intranet can be set up to use a number of protocols, such as IP, IPX, AppleTalk, DECnet, XNS, or other protocols developed by vendors over the years. Although these protocols cannot be transmitted directly over the Internet, you can set up special gateway boxes at remote sites that tunnel or encapsulate these protocols into IP packets and then use the Internet to pass them.

You can use an extranet (also called a partner net) or a virtual private network (VPN) to improve security. These terms describe ways to connect remote sites securely to a local site, typically by using the public Internet as a carrier and employing encryption as a means of protecting data in transit.

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Saturday, July 10, 2010

Shell

The shell is both a command interpreter and a programming language. As a command interpreter, the shell executes commands you enter in response to its prompt. As a programming language, the shell executes commands from files called shell scripts. When you start a shell, it typically runs one or more startup files.



Running a shell script
Assuming that the file holding a shell script is in the working directory, there are three basic ways to execute the shell script from the command line.
1. Type the simple filename of the file that holds the script.
2. Type a relative pathname, including the simple filename preceded by ./.
3. Type bash followed by the name of the file.

Technique 1 requires that the working directory be in the PATH variable. Techniques 1 and 2 require that you have execute and read permission for the file holding the script. Technique 3 requires that you have read permission for the file holding the script.
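The techniques can be sketched as follows (the script name and scratch directory are illustrative); technique 1 is omitted because it would require adding the scratch directory to PATH:

```shell
#!/bin/bash
# Create a throwaway script and run it with techniques 2 and 3.
dir=$(mktemp -d)                     # scratch directory
cat > "$dir/hello" <<'EOF'
#!/bin/bash
echo "hello from a script"
EOF

bash "$dir/hello"                    # technique 3: read permission suffices

chmod u+rx "$dir/hello"              # technique 2 needs read and execute
( cd "$dir" && ./hello )             # relative pathname with a leading ./

rm -rf "$dir"                        # clean up
```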



Job control
A job is one or more commands connected by pipes. You can bring a job running in the background into the foreground by using the fg builtin. You can put a foreground job into the background by using the bg builtin, provided that you first suspend the job by pressing the suspend key (typically CONTROL-Z). Use the jobs builtin to see which jobs are running or suspended.



Variables
The shell allows you to define variables. You can declare and initialize a variable by assigning a value to it; you can remove a variable declaration by using unset. Variables are local to a process unless they are exported using the export builtin to make them available to child processes. Variables you declare are called user-created variables. The shell also defines keyword variables. Within a shell script you can work with the command line (positional) parameters the script was called with.
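A brief sketch of variable scope (the variable name is illustrative):

```shell
#!/bin/bash
# Variables are local to the process that declares them unless exported.
PET=dog                                     # user-created, not yet exported
bash -c 'echo "child sees: ${PET-unset}"'   # prints: child sees: unset

export PET                                  # make it available to children
bash -c 'echo "child sees: ${PET-unset}"'   # prints: child sees: dog

unset PET                                   # remove the variable declaration
```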



Process
Each process has a unique identification (PID) number and is the execution of a single Linux command. When you give it a command, the shell forks a new (child) process to execute the command, unless the command is built into the shell. While the child process is running, the shell is in a state called sleep. By ending a command line with an ampersand (&), you can run a child process in the background and bypass the sleep state so that the shell prompt returns immediately after you press RETURN. Each command in a shell script forks a separate process, each of which may in turn fork other processes. When a process terminates, it returns its exit status to its parent process. An exit status of zero signifies success and nonzero signifies failure.
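A short sketch of exit status and background execution:

```shell
#!/bin/bash
# The shell forks a child for each command; the child's exit status is
# returned to the parent and is available in $? (zero means success).
true  && echo "true exited with $?"      # prints: true exited with 0
false || echo "false exited with $?"     # prints: false exited with 1

sleep 1 &                            # & runs the child in the background
echo "background child PID is $!"    # the shell does not sleep waiting
wait                                 # collect background children at the end
```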



History
The history mechanism, a feature adapted from the C Shell, maintains a list of recently issued command lines, also called events, that provides a way to reexecute previous commands quickly. There are several ways to work with the history list; one of the easiest is to use a command-line editor.



Command-line editors
When using an interactive Bourne Again Shell, you can edit your command line and commands from the history file, using either of the Bourne Again Shell's command-line editors (vi[m] or emacs). When you use the vi(m) command-line editor, you start in Input mode, unlike the way you normally enter vi(m). You can switch between Command and Input modes. The emacs editor is modeless and distinguishes commands from editor input by recognizing control characters as commands.



Aliases
An alias is a name that the shell translates into another name or (complex) command.
Aliases allow you to define new commands by substituting a string for the first token of a simple command.
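A minimal sketch (the alias name and text are illustrative); note that alias expansion is off by default in non-interactive shells, so a script must enable it with shopt, whereas interactively this is automatic:

```shell
#!/bin/bash
# Define an alias; the shell substitutes the replacement string for the
# first token of a simple command.
shopt -s expand_aliases              # scripts must turn alias expansion on
alias greet='echo Hello,'            # text after the alias becomes arguments

greet world                          # expands to: echo Hello, world
```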



Functions
A shell function is a series of commands that, unlike a shell script, are parsed prior to being stored in memory so that they run faster than shell scripts. Shell scripts are parsed at runtime and are stored on disk. A function can be defined on the command line or within a shell script. If you want the function definition to remain in effect across login sessions, you can define it in a startup file. Like the functions of a programming language, a shell function is called by giving its name followed by any arguments.
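A minimal sketch of defining and calling a function (the name is illustrative); its arguments become the positional parameters inside the body:

```shell
#!/bin/bash
# Define a function and call it like a command.
repeat() {
    local i
    for (( i = 0; i < $1; i++ )); do
        printf '%s' "$2"             # $1 = count, $2 = string to repeat
    done
    printf '\n'
}

repeat 3 ab                          # prints: ababab
```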



Shell features
There are several ways to customize the shell’s behavior. You can use options on the command line when you call bash and you can use the bash set and shopt builtins to turn features on and off.
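A short sketch of both builtins, using set -o noclobber and shopt's nullglob option:

```shell
#!/bin/bash
# Turn shell features on and off with the set and shopt builtins.
dir=$(mktemp -d) && cd "$dir"        # empty scratch directory

set -- *.txt                         # by default an unmatched glob stays literal
echo "$# word(s): $1"                # prints: 1 word(s): *.txt

shopt -s nullglob                    # shopt -s turns a feature on
set -- *.txt
echo "$# word(s)"                    # prints: 0 word(s)
shopt -u nullglob                    # shopt -u turns it off again

set -o noclobber                     # set -o: refuse to overwrite files with >
echo one > file
echo two > file || echo "noclobber blocked the overwrite"
set +o noclobber                     # set +o turns it back off
```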



Command-line expansion
When it processes a command line, the Bourne Again Shell may replace some words with expanded text. Most types of command-line expansion are invoked by the appearance of a special character within a word (for example, a leading dollar sign denotes a variable). See Table 9-6 on page 313 for a list of special characters. The expansions take place in a specific order. Following the history and alias expansions, the common expansions are parameter and variable expansion, command substitution, and pathname expansion. Surrounding a word with double quotation marks suppresses all types of expansion except parameter and variable expansion. Single quotation marks suppress all types of expansion, as does quoting (escaping) a special character by preceding it with a backslash.
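A quick sketch of how quoting controls expansion:

```shell
#!/bin/bash
# Double quotation marks permit parameter and variable expansion; single
# quotation marks and a backslash before a special character suppress it.
name=World
echo "Hello, $name"                  # prints: Hello, World
echo 'Hello, $name'                  # prints: Hello, $name
echo Hello, \$name                   # prints: Hello, $name
echo "Now: $(date)"                  # command substitution still occurs here
```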

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Friday, July 9, 2010

The Bourne Again Shell

The Bourne Again Shell is based on the Bourne Shell, which was written by Steve Bourne of AT&T's Bell Laboratories. Over the years the original Bourne Shell has been expanded, but it remains the basic shell provided with many commercial versions of UNIX.



sh Shell
Because of its long and successful history, the original Bourne Shell has been used to write many of the shell scripts that help manage UNIX systems. Some of these scripts appear in Linux as Bourne Again Shell scripts. Although the Bourne Again Shell includes many extensions and features not found in the original Bourne Shell, bash maintains compatibility with the original Bourne Shell so you can run Bourne Shell scripts under bash. On UNIX systems the original Bourne Shell is named sh. On Linux systems sh is a symbolic link to bash, ensuring that scripts that require the presence of the Bourne Shell still run. When called as sh, bash does its best to emulate the original Bourne Shell.



Korn Shell
System V UNIX introduced the Korn Shell (ksh), written by David Korn. This shell extended many features of the original Bourne Shell and added many new features. Some features of the Bourne Again Shell, such as command aliases and command-line editing, are based on similar features from the Korn Shell.



POSIX standards
The POSIX (Portable Operating System Interface) family of related standards is being developed by PASC (the IEEE's Portable Application Standards Committee, www.pasc.org). A comprehensive FAQ on POSIX, including many links, appears at www.opengroup.org/austin/papers/posix_faq.html.

POSIX standard 1003.2 describes shell functionality. The Bourne Again Shell provides the features that match the requirements of this POSIX standard. Efforts are under way to make the Bourne Again Shell fully comply with the POSIX standard. In the meantime, if you invoke bash with the --posix option, the behavior of the Bourne Again Shell will more closely match the POSIX requirements.
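You can confirm the mode from the shell itself; shopt -o queries the options that set -o controls, and -q suppresses output, returning only a status:

```shell
#!/bin/bash
# Check whether bash is running in POSIX mode; invoking bash with --posix
# (or as sh) turns the posix option on.
bash --posix -c 'shopt -oq posix && echo "posix mode is on"'
bash -c 'shopt -oq posix || echo "posix mode is off"'
```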

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Thursday, July 8, 2010

The Nautilus Spatial View

Nautilus gives you two ways to work with files: the traditional File Browser view and the innovative Spatial view. By default, Fedora/RHEL displays the Spatial view.

The Nautilus Spatial (as in “having the nature of space”) view has many powerful features but may take some getting used to. It always provides one window per folder. By default, when you open a folder, Nautilus displays a new window.

To open a Spatial view of your home directory, double-click the Home icon on the desktop and experiment as you read this section. If you double-click the Desktop icon in the Spatial view, Nautilus opens a new window that displays the Desktop folder.

A Spatial view can display icons, a list of filenames, or a compact view. To select your preferred format, click View on the menubar and choose Icons, List, or Compact. To create files to experiment with, right-click in the window (not on an icon) to display the Nautilus context menu and select Create Folder or Create Document.



Use SHIFT to close the current window as you open another window
If you hold the SHIFT key down when you double-click to open a new window, Nautilus closes the current window as it opens the new one. This behavior may be more familiar and can help keep the desktop from becoming overly cluttered. If you do not want to use the keyboard, you can achieve the same result by double-clicking the middle mouse button.



Window memory
Move the window by dragging the titlebar. The Spatial view has window memory; that is, the next time you open that folder, Nautilus opens it at the same size and in the same location. Even the scrollbar will be in the same position.



Parent-folders button
The key to closing the current window and returning to the window of the parent directory is the Parent-folders button. Click this button to display the Parent-folders pop-up menu. Select the directory you want to open from this menu. Nautilus then displays in a Spatial view the directory you specified. From a Spatial view, you can open a folder in a traditional view by right-clicking the folder and selecting Browse Folder.

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Wednesday, July 7, 2010

GNUStep

The GNUStep project (www.gnustep.org), which began before both the KDE and GNOME projects, is creating an open-source implementation of the OPENSTEP API and desktop environment. The result is a very clean and fast user interface. The default look of WindowMaker, the GNUStep window manager, is somewhat dated, but it supports themes so you can customize its appearance. The user interface is widely regarded as one of the most intuitive found on a UNIX platform. Because GNUStep has less overhead than GNOME and KDE, it runs better on older hardware. If you are running Linux on hardware that struggles with GNOME and KDE or if you would prefer a user interface that does not attempt to mimic Windows, try GNUStep. WindowMaker is provided in the WindowMaker package.

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Tuesday, July 6, 2010

X Window System

History of X
The X Window System (www.x.org) was created in 1984 at the Massachusetts Institute of Technology (MIT) by researchers working on a distributed computing project and a campuswide distributed environment, called Project Athena. This system was not the first windowing software to run on a UNIX system, but it was the first to become widely available and accepted. In 1985, MIT released X (version 9) to the public, for use without a license. Three years later, a group of vendors formed the X Consortium to support the continued development of X, under the leadership of MIT. By 1998, the X Consortium had become part of the Open Group. In 2001, the Open Group released X version 11, release 6.6 (X11R6.6).

The X Window System was inspired by the ideas and features found in earlier proprietary window systems but is written to be portable and flexible. X is designed to run on a workstation, typically attached to a LAN. The designers built X with the network in mind. If you can communicate with a remote computer over a network, running an X application on that computer and sending the results to a local display is straightforward.

Although the X protocol has remained stable for a long time, additions to it in the form of extensions are quite common. One of the most interesting—albeit one that has not yet made its way into production—is the Media Application Server, which aims to provide the same level of network transparency for sound and video that X does for simple windowing applications.



XFree86 and X.org
Many distributions of Linux used the XFree86 X server, which inherited its license from the original MIT X server, through release 4.3. In early 2004, just before the release of XFree86 4.4, the XFree86 license was changed to one that is more restrictive and not compatible with the GPL (page 4). In the wake of this change, a number of distributions abandoned XFree86 and replaced it with an X.org X server that is based on a pre-release version of XFree86 4.4, which predates the change in the XFree86 license. Fedora/RHEL use the X.org X server, named Xorg; it is functionally equivalent to the one distributed by XFree86 because most of the code is the same. Thus modules designed to work with one server work with the other.



The X stack
The Linux GUI is built in layers (Figure 8-1). The bottom layer is the kernel, which provides the basic interfaces to the hardware. On top of the kernel is the X server, which is responsible for managing windows and drawing basic graphical primitives such as lines and bitmaps. Rather than directly generating X commands, most programs use Xlib, the next layer, which is a standard library for interfacing with an X server. Xlib is complicated and does not provide high-level abstractions, such as buttons and text boxes. Rather than using Xlib directly, most programs rely on a toolkit that provides high-level abstractions. Using a library not only makes programming easier, but also brings consistency to applications.

In recent years, the popularity of X has grown outside the UNIX community and extended beyond the workstation class of computers it was originally conceived for. Today X is available for Macintosh computers as well as for PCs running Windows.



Client/server environment
Computer networks are central to the design of X. It is possible to run an application on one computer and display the results on a screen attached to a different computer; the ease with which this can be done distinguishes X from other window systems available today. Thanks to this capability, a scientist can run and manipulate a program on a powerful supercomputer in another building or another country and view the results on a personal workstation or laptop computer.

When you start an X Window System session, you set up a client/server environment. One process, called the X server, displays a desktop and windows under X. Each application program and utility that makes a request of the X server is a client of that server. Examples of X clients include xterm, Compiz, gnome-calculator, and such general applications as word processing and spreadsheet programs. A typical request from a client is to display an image or open a window.
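The server a client contacts is named by the DISPLAY environment variable, which takes the form host:display.screen (an empty host means the local server). As a small illustrative sketch, the parts can be picked apart with ordinary shell parameter expansion; the hostname below is hypothetical:

```shell
# Split a DISPLAY value of the form host:display.screen into its parts.
display="remote.example.com:0.0"   # hypothetical remote X server
host=${display%%:*}                # everything before the first ':'
rest=${display#*:}                 # "display.screen"
echo "X server host:  $host"
echo "display number: ${rest%%.*}"
echo "screen number:  ${rest#*.}"
# Prints:
# X server host:  remote.example.com
# display number: 0
# screen number:  0
```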



Events
The server also monitors keyboard and mouse actions (events) and passes them to the appropriate clients. For example, when you click the border of a window, the server sends this event to the window manager (client). Characters you type into a terminal emulation window are sent to that terminal emulator (client). The client takes appropriate action when it receives an event—for example, making a window active or displaying the typed character on the server.

Separating the physical control of the display (the server) from the processes needing access to the display (the client) makes it possible to run the server on one computer and the client on another computer. Most of the time, this book discusses running the X server and client applications on a single system. “Remote Computing and Local Displays” describes using X in a distributed environment.



The roles of X client and server may be counterintuitive
The terms client and server, when referring to X, have the opposite meanings of how you might think of them intuitively: The server runs the mouse, keyboard, and display; the application program is the client. This disparity becomes even more apparent when you run an application program on a remote system. You might think of the system running the program as the server and the system providing the display as the client, but in fact it is the other way around. With X, the system providing the display is the server, and the system running the program is the client.



You can run xev (X event) by giving the command xev from a terminal emulator window and then watch the information flow from the client to the server and back again. This utility opens the Event Tester window, which has a box in it, and asks the X server to send it events each time anything happens, such as moving the mouse pointer, clicking a mouse button, moving the mouse pointer into the box, typing, or resizing the window. The xev utility displays information about each event in the window you opened it from. You can use xev as an educational tool: Start it and see how much information is processed each time you move the mouse. Close the Event Tester window to exit from xev.

Source of Information : Prentice Hall A Practical Guide to Fedora and Red Hat Enterprise Linux 5th Edition

Monday, July 5, 2010

Windows Server 2008 R2 and Windows 7 Group Policy

Advanced Audit Policy
Another security-related feature that you’ll find in Server 2008 R2 and Windows 7 is a much more granular auditing infrastructure. If you look under \Computer Configuration\Windows Settings\Security Settings\Advanced Audit Policy Configuration, you’ll see 10 different auditing categories that you can now tweak to control exactly which types of events generate security audits on Server 2008 R2 or Windows 7 systems. This new granularity, of course, is exposed only in these newest OS versions, but the fact that it’s manageable via Group Policy is a good thing.


Network List Policies
The last new security policy I’ll discuss gives you the ability to control network lists. By default, when Server 2008 R2, Windows 7, or Vista systems find new networks, whether public wireless networks or corporate LANs, a user is prompted to indicate the type of network it is (e.g., public, domain, home). But by using Network List Policies in Group Policy, you can now preconfigure how particular networks behave and which zone they should be shunted into when a user finds them. You can also control the icons and the names of the networks that appear to the user. The only downside to using this policy area for preconfiguring wireless access points is that you need to know the name of the WAP ahead of time to configure all the various options. But this policy area is still a welcome addition for controlling users who frequently roam between networks.


Name Resolution Policy
The last new policy area, although not strictly a security policy (it’s found under \Computer Configuration\Windows Settings\Name Resolution Policy in GPE), lets you control DNS Security Extensions (DNSSEC) and Microsoft DirectAccess DNS configurations on a per-DNS domain name basis. For example, you can configure which features of DNSSEC are used for a given client talking to its DNS server, or which DNS and proxy servers a client connecting to your network via DirectAccess will use. Although not used by all shops, this feature is handy to have in Group Policy if you’re rolling out DirectAccess to your mobile users.

Source of Information : Windows IT Pro June 2010

Sunday, July 4, 2010

Windows Server 2008 R2 and Windows 7 Group Policy - New Security Policies

The biggest new addition in the area of Group Policy–based security policy is the Application Control Policies, or AppLocker. These policies are found under \Computer Configuration\Windows Settings\Security Settings\Application Control Policies. Essentially, this is a significant upgrade to the old Software Restriction Policies (SRPs, which are still supported in Server 2008 R2 and Windows 7) that let you control which applications can execute on your Windows systems. Specifically, AppLocker lets you create application whitelists and blacklists to explicitly allow or deny a particular application or set of applications to execute based on a set of criteria you specify.

A major difference between what’s available in AppLocker and SRPs is that you now have more flexible rules for defining applications. For example, you can create rules by software publisher, application name, and version information held within the file.

You can also create rules for controlling script execution, which wasn’t explicitly supported in earlier Windows versions. Also, for each type of rule you create, you can enforce the rule or just work in audit mode. In audit mode, whenever a rule is hit by an application, the result is logged to the client rather than blocking or allowing that application. That way, you can run a rule in test mode before making it live, to ensure it doesn’t catch any unsuspecting applications. The only downside to AppLocker is that it works only on Server 2008 R2 and Windows 7 clients, so you can’t leverage it in earlier versions of Windows.

Source of Information : Windows IT Pro June 2010

Saturday, July 3, 2010

Windows Server 2008 R2 and Windows 7 Group Policy - New Policy–Enabled Features

The last of the changes I’ll cover are the new policies that have been added to support management of new features available in Server 2008 R2 and Windows 7. Most of the new policies relate to security settings, but a few minor updates have been made to Group Policy preferences as well. Let’s start with the new Group Policy preferences:

• Support for managing the new Power Plans for power management that were introduced in Vista. These are now available in addition to Power Options and Power Schemes. Power Plans require that the client receiving them is running at least Vista.

• Updated Scheduled Tasks preferences now support the newer Task Scheduler that shipped with Server 2008 and beyond, as well as Vista. This new Task Scheduler supports many more options than Windows 2003’s and XP’s Task Scheduler. In addition, Microsoft added Immediate Tasks for Vista and beyond, which lets you create a one-time scheduled task that runs as soon as the policy processes.

• Addition of Internet Explorer (IE) 8 in the Internet Settings preferences, which lets you now configure options specific to IE 8.

Source of Information : Windows IT Pro June 2010

Friday, July 2, 2010

Windows Server 2008 R2 and Windows 7 Group Policy- PowerShell Support

The major change in this release of Windows that I alluded to earlier is added support for PowerShell within the Group Policy universe. Microsoft added support for running PowerShell scripts within per-machine or per-user scripts policy and provided a set of 25 PowerShell cmdlets for PowerShell 2.0 that support many of the operations you can perform within Group Policy Management Console (GPMC). Let’s look first at the new scripts policy support.

When you create a new startup script or logon script in GPE, you’ll see a new tab. You can now add PowerShell scripts to your scripts policy and control whether the scripts run before or after non-PowerShell scripts. But note that only Server 2008 R2 and Windows 7 Group Policy clients will run these new PowerShell-based script policies. They won’t work on earlier versions of Windows.

Perhaps the more interesting of the PowerShell enhancements is a set of cmdlets within a new PowerShell 2.0 module for Group Policy. These cmdlets encapsulate many of the functions found within the GPMC sample scripts that used to ship with that tool. From the PowerShell cmdlets, you can perform Group Policy–related administrative tasks such as creating new GPOs or deleting existing ones, linking GPOs to OUs or domains, and repermissioning GPOs.

Note that to use the GroupPolicy module, you must be running PowerShell 2.0 on Server 2008 R2 or Windows 7. To provide this kind of GPMC PowerShell functionality on earlier versions of Windows, I’ve written a set of GPMC PowerShell 1.0 cmdlets that you can download for free at my website (www.sdmsoftware.com/freeware).

Let’s look at an example of the kind of power these new cmdlets provide. Suppose you want to create, permission, and link a GPO within a PowerShell script. The following one-line command does all that by leveraging three of the new cmdlets and the PowerShell pipeline:

New-GPO "Marketing IT GPO" |
    Set-GPPermissions -TargetName "Marketing Users" -TargetType Group -PermissionLevel GPOEdit |
    New-GPLink -Target "OU=Marketing,DC=cpandl,DC=com" -Order 1

Source of Information : Windows IT Pro June 2010

Thursday, July 1, 2010

Windows Server 2008 R2 and Windows 7 Group Policy

With the release of Windows Server 2008 came Group Policy preferences, a set of more than 20 Group Policy extensions that expanded the range of configurable settings within a Group Policy object (GPO). Following that game-changing release, you might expect new Group Policy features of a similar nature in Windows Server 2008 R2 and Windows 7. Unfortunately, most of what you’ll see, and what I discuss in this article, are incremental improvements rather than game changers. That being said, Microsoft did manage to incorporate one major change in Server 2008 R2 and Windows 7 by taking the first tentative steps toward automating Group Policy management using PowerShell. The rest of what you’ll find new in the latest Windows release is mostly updates to existing policy areas, some additional Windows components under Group Policy management, and some improvements to Group Policy preferences. Let’s look at the changes in depth.


Administrative Template Changes
The major news in Administrative Templates, or registry policy, occurred when Windows Vista shipped. With Vista, Microsoft introduced a new ADMX format and the Central Store. The ADMX format provided better multilanguage support; the Central Store took old ADM files out of the SYSVOL part of every GPO. With Server 2008 R2 and Windows 7, the greatest change in this area is the addition of yet more Administrative Template settings (more than 300). These settings cover a bevy of new Server 2008 R2 and Windows 7 features (e.g., policies to control new UI elements specific to each platform). You’ll find a full list of Administrative Template and Security policy settings in Excel format in Microsoft’s “Group Policy Settings Reference for Windows and Windows Server” (www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=18c90c80-8b0a-4906-a4f5-ff24cc2030fb).

One of the more subtle changes to Administrative Templates is a modified ADMX schema that now supports two new registry value types: REG_MULTI_SZ and REG_QWORD. Previously, you couldn’t use Administrative Templates to modify these two value types. Your choices were to deliver these kinds of values via registry scripts, or to use the Group Policy preferences’ registry extension to get these value types on client machines. Now these types are supported in the ADMX syntax, and you can create custom ADMX templates that support these new types.
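As a sketch of what this looks like in practice, a custom ADMX policy definition can now declare elements of these two types; in the ADMX syntax, multiText maps to REG_MULTI_SZ and longDecimal maps to REG_QWORD. All of the names in this fragment (the policy, key, value names, and category) are hypothetical, chosen only to illustrate the schema:

```xml
<!-- Hypothetical custom ADMX fragment: the element names multiText and
     longDecimal are the ADMX mappings for REG_MULTI_SZ and REG_QWORD. -->
<policy name="ExampleMultiValuePolicy" class="Machine"
        displayName="$(string.ExamplePolicy)"
        key="Software\Policies\Example"
        presentation="$(presentation.ExamplePolicy)">
  <parentCategory ref="ExampleCategory" />
  <supportedOn ref="windows:SUPPORTED_Windows7" />
  <elements>
    <!-- Written to the registry as REG_MULTI_SZ -->
    <multiText id="AllowedServers" valueName="AllowedServers" />
    <!-- Written to the registry as REG_QWORD -->
    <longDecimal id="MaxCacheBytes" valueName="MaxCacheBytes" />
  </elements>
</policy>
```

A template like this would only apply on clients whose GPE understands the updated ADMX schema, which is the limitation the paragraph above describes.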

Another subtle Administrative Templates change is a UI improvement. In Server 2008 and Vista, Microsoft introduced the concept of Comments to Administrative Template settings. If you chose to, you could add comments to each policy setting. These comments, and the improved Explain text that provided help for each setting, were displayed as three separate tabs within Group Policy Editor’s (GPE’s) UI. You had to flip between each tab to use them. In Server 2008 R2 and Windows 7, all three elements are presented on a single pane that you can easily see and edit.

Source of Information : Windows IT Pro June 2010