Friday, October 31, 2008

Linux Operating System

Linux is a fast, stable, open source operating system for PCs and workstations that features professional-level Internet services, extensive development tools, fully functional graphical user interfaces (GUIs), and a massive number of applications ranging from office suites to multimedia applications. Linux was developed in the early 1990s by Linus Torvalds, along with other programmers around the world. As an operating system, Linux performs many of the same functions as Unix, Macintosh, Windows, and Windows NT. However, Linux is distinguished by its power and flexibility, along with being freely available.

Most PC operating systems, such as Windows, began their development within the confines of small, restricted personal computers, which have only recently become more versatile machines. Such operating systems are constantly being upgraded to keep up with the ever-changing capabilities of PC hardware. Linux, on the other hand, was developed in a different context. Linux is a PC version of the Unix operating system that has been used for decades on mainframes and minicomputers and is currently the system of choice for network servers and workstations. Linux brings the speed, efficiency, scalability, and flexibility of Unix to your PC, taking advantage of all the capabilities that PCs can now provide.

Technically, Linux consists of the operating system program, referred to as the kernel, which is the part originally developed by Torvalds. But it has always been distributed with a large number of software applications, ranging from network servers and security programs to office applications and development tools. Linux has evolved as part of the open source software movement, in which independent programmers joined forces to provide free and quality software to any user. Linux has become the premier platform for open source software, much of it developed by the Free Software Foundation’s GNU project. Many of these applications are bundled as part of standard Linux distributions, and most of them are also incorporated into the Ubuntu repository, using packages that are Debian compliant.

Along with Linux’s operating system capabilities come powerful networking features, including support for Internet, intranets, and Windows networking. As a norm, Linux distributions include fast, efficient, and stable Internet servers, such as the Web, FTP, and DNS servers, along with proxy, news, and mail servers. In other words, Linux has everything you need to set up, support, and maintain a fully functional network.

With both GNOME and the K Desktop Environment (KDE), Linux also provides GUIs with a high level of flexibility and power. Linux enables you to choose the interface you want and then customize it, adding panels, applets, virtual desktops, and menus, all with full drag-and-drop capabilities and Internet-aware tools.

Linux does all this at the right price: It is free, including the network servers and GUI desktops. Unlike the official Unix operating system, Linux is distributed freely under a GNU General Public License (GPL) as specified by the Free Software Foundation, making it available to anyone who wants to use it. GNU (which stands for GNU’s Not Unix) is a project initiated and managed by the Free Software Foundation to provide free software to users, programmers, and developers. Linux is copyrighted, not public domain; however, a GNU public license has much the same effect as the software’s being in the public domain.

The GNU GPL is designed to ensure that Linux remains free and, at the same time, standardized. Linux is technically the operating system kernel—the core operations—and only one official Linux kernel exists. People sometimes have the mistaken impression that Linux is somehow less than a professional operating system because it is free. Linux is, in fact, a PC, workstation, and server version of Unix. Many actually consider it far more stable and much more powerful than Microsoft Windows. This power and stability have made Linux an operating system of choice as a network server.

Source of Information : McGraw Hill Ubuntu The Complete Reference

Thursday, October 30, 2008

Using Add/Remove Applications for Software Management in Ubuntu

The Ubuntu graphical package-management tool identified in the Applications menu as Add/Remove is actually an application named gnome-app-install. Add/Remove Applications enables you to select packages arranged in categories and install or remove them. When you launch Add/Remove Applications, you are not prompted to enter your password; this is only required when making changes.

Along the left side of the screen, you will see the broad list of categories into which the applications have been divided. At the top, selected by default, is the All category, which lists every package that can be installed. The right side of the screen is split into two, with the upper portion providing the application list and the lower portion describing the currently selected application. Just above the application list are options for searching and filtering, with the default filter set to Supported Ubuntu Applications. You can change that if you want to.

Installing some new software is as simple as finding it in the package list and checking its box. After you have selected all the applications you want, click Apply. You will be prompted to enter your password so that Ubuntu can install the software. Currently installed applications are already checked, and you can remove them by unchecking them and clicking Apply.

To search for a specific application in the list, type something into the Search box at the top. Note that this searches within the current category; so if you are in the Games category and search for “office,” you will get no results. The best place to search is within the All category, to make sure you search all areas.
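The same package catalog that Add/Remove Applications draws on can also be searched from the command line with APT's own tools. A sketch, assuming a Debian/Ubuntu system with APT installed (hence the guard); the package name abiword is only an example:

```shell
# These commands assume a Debian/Ubuntu system with APT installed,
# so they are guarded; the package name "abiword" is only an example.
if command -v apt-cache >/dev/null 2>&1; then
    # Keyword search across every package, not just one GUI category:
    apt-cache search office | head -n 5
    # Inspect one candidate before installing it:
    apt-cache show abiword | head -n 12 || true
else
    echo "APT not available on this system"
fi
```

Installing a result would then be `sudo apt-get install abiword`, which prompts for your password just as the Apply button does.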

Source of Information : Sams Ubuntu Unleashed 2008 Edition

Installing Ubuntu Hardware Requirements

The hardware required to run Ubuntu depends on what kind of system you want to set up. A very minimal system that runs a textual (command line) interface and has very few software packages installed requires very different hardware from a system that runs a GUI, has many installed packages, and supports visual effects. Use the Alternate CD if you are installing Ubuntu on a system with less than 320 megabytes of RAM. If you want to run visual effects on the system, see gentoowiki.com/HARDWARE_Video_Card_Support_Under_XGL for a list of supported graphics cards.

A network connection is invaluable for keeping Ubuntu up-to-date. A sound card is nice to have for multimedia applications. If you are installing Ubuntu on old or minimal hardware and want to run a GUI, consider installing Xubuntu (www.xubuntu.org), as it provides a lightweight desktop and uses system resources more efficiently than Ubuntu does.

RAM (memory). An extremely minimal textual (command line) system requires 32 megabytes of RAM. A standard desktop system requires 320 megabytes, although you may be able to use less if you install Xubuntu. Installing Ubuntu from a live session requires 320 megabytes. Use the textual installer if the system has less than 320 megabytes of RAM. Linux makes good use of extra memory: The more memory a system has, the faster it runs. Adding memory is one of the most cost-effective ways you can speed up a Linux system.
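As a quick check before installing, you can see how much RAM the kernel detects; this sketch uses /proc, which any running Linux live session provides:

```shell
# Total memory known to the kernel, in kilobytes:
grep MemTotal /proc/meminfo

# The same figure in megabytes, for comparison against the
# 320-megabyte guideline above:
awk '/MemTotal/ {printf "%d MB\n", $2 / 1024}' /proc/meminfo
```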

CPU. Ubuntu Linux requires a minimum of a 200-megahertz Pentium-class processor or the equivalent AMD or other processor for textual mode and at least a 400-megahertz Pentium II processor or the equivalent for graphical mode.

Hard disk space. The amount of hard disk space Ubuntu requires depends on which edition of Ubuntu Linux you install, which packages you install, how many languages you install, and how much space you need for user data (your files). The operating system typically requires 2–8 gigabytes, although a minimal system can make do with much less space. Installing Ubuntu from a live session requires 4 gigabytes of space on a hard disk.

BIOS setup. Modern computers can be set to boot from a CD/DVD or hard disk. The BIOS determines the order in which the system tries to boot from each device. You may need to change this order: Make sure the BIOS is set up to try booting from the CD/DVD before it tries to boot from the hard disk.

CMOS. CMOS is the persistent memory that stores hardware configuration information. To change the BIOS setup, you need to edit the information stored in CMOS. When the system boots, it displays a brief message about how to enter System Setup or CMOS Setup mode. Usually you need to press Del or F2 while the system is booting. Press the key that is called for and move the cursor to the screen and line that deal with booting the system. Generally there is a list of three or four devices that the system tries to boot from; if the first attempt fails, the system tries the second device, and so on. Manipulate the list so that the CD/DVD is the first choice, save the list, and reboot.

Source of Information : A Practical Guide to Ubuntu Linux

Tuesday, October 28, 2008

How Different Are Linux Distributions from One Another?

While different Linux systems will add different logos, choose some different software components to include, and have different ways of installing and configuring Linux, most people who become used to Linux can move pretty easily from one Linux to another. There are a few reasons for this:

Linux Standard Base—There is an effort called the Linux Standard Base (www.linuxbase.org) to which most major Linux systems subscribe. The Linux Standard Base Specification (available from this site) has as one of its primary goals ensuring that applications written for one Linux system will work on other systems. To that end, the LSB defines which libraries need to be available, how software packages should be formatted, which commands and utilities must be available, and, to some extent, how the file system should be arranged. In other words, you can rely on many components of Linux being in the same place on LSB-certified Linux systems.

Open source projects—Many Linux distributions include the same open source projects. So, for example, the most basic command and configuration files for an Apache Web server, Samba file/print server, and sendmail mail server will be the same whether you use Red Hat, Debian, or many other Linux systems. And although they can change backgrounds, colors, and other elements of your desktop, most of the ways of navigating a KDE or GNOME desktop stay the same, regardless of which Linux you use.

A shell is a shell—Although you can put different pretty faces on it, once you open a shell command-line interpreter (such as bash or sh) in Linux, most experienced Linux or UNIX users find it pretty easy to get around on most any Linux system. For that reason, I recommend that if you are serious about using Linux, you take some time to try the shell. Additionally, focus on command-line and configuration file interfaces for setting up servers, because learning those ways of configuring servers will make your skills most portable across different Linux systems.

Some of the ways that Linux distributions distinguish themselves, however, are with the installers they use, their package management tools, and system administration tools. Also, distributions such as those sponsored by Red Hat will include new features developed by its sponsors to meet its commercial needs. For example, Red Hat has done a lot of work that is useful for enterprise computing environments, such as virtualization, global file systems, and software distribution tools.

Source of Information : Linux Bible 2008 Edition

Monday, October 27, 2008

Ubuntu for Business

Linux has matured over the years, and features considered essential for use in enterprise-level environments, such as CPU architecture support, file systems, and memory handling, have been added and improved. The addition of virtual memory (the capability to swap portions of RAM to disk) was one of the first necessary ingredients, along with a copyright-free implementation of the TCP/IP stack (mainly due to BSD UNIX being tied up in legal entanglements at the time). Other features quickly followed, such as support for a variety of network protocols.

Ubuntu includes a Linux kernel that can use multiple processors, which allows you to use Ubuntu in more advanced computing environments with greater demands on CPU power. This kernel will support at least 16 CPUs; in reality, however, small business servers typically use only dual-CPU workstations or servers, although Ubuntu can also run on more powerful hardware.

Ubuntu will automatically support your multiple-CPU Intel-based motherboard, and you will be able to take advantage of the benefits of symmetric multiprocessing (SMP) for software development and other operations. The Linux kernels included with Ubuntu can use system RAM sizes up to 64GB, allow individual file sizes in excess of 2GB, and host the demands of—theoretically—billions of users.
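You can confirm how many processors the running kernel actually sees. A quick sketch using /proc (the "model name" field is specific to x86-style CPUs, hence the guard):

```shell
# Count the logical processors the kernel has brought online:
grep -c '^processor' /proc/cpuinfo

# More detail on the first CPU (field present on x86-style systems):
grep -m 1 'model name' /proc/cpuinfo || true
```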

Businesses that depend on high-availability, large-scale systems can also be served by Ubuntu, along with the specialist commercial support on offer from hundreds of support partners across the world.

However, Ubuntu can be used in many of these environments by customers with widely disparate computing needs. Some of the applications for Ubuntu include desktop support; small file, print, or mail servers; intranet web servers; and security firewalls deployed at strategic points inside and outside company LANs. Commercial customers will also benefit from Debian's alliances with several top-tier system builders, including Hewlett-Packard.

Debian itself is also available for multiple architectures, and until recently was developed in tandem on 11 different architectures, from x86 to the older Motorola 680x0 chips (as found in the Commodore Amiga), along with several other architectures.

Small business owners can earn great rewards by stepping off the software licensing and upgrade treadmill and adopting a Linux-based solution. Using Ubuntu not only avoids the need for licensing accounting and the threat of software audits, but also provides viable alternatives to many types of commercial productivity software.

Using Ubuntu in a small business setting makes a lot of sense for other reasons, too, such as not having to invest in cutting-edge hardware to set up a productive shop. Ubuntu easily supports older, or legacy, hardware, and savings are compounded over time by avoiding unnecessary hardware upgrades. Additional savings will be realized because software and upgrades are free. New versions of applications can be downloaded and installed at little or no cost, and office suite software is free.

Ubuntu is easy to install on a network and plays well with others, meaning it works well in a mixed-computing situation with other operating systems such as Windows, Mac OS X, and, of course, UNIX. A simple Ubuntu server can be put to work as an initial partial solution or made to mimic file, mail, or print servers of other operating systems. Clerical staff should quickly adapt to using familiar Internet and productivity tools, while your business gets the additional benefits of stability, security, and a virus-free computing platform.

By carefully allocating monies spent on server hardware, a productive and efficient multiuser system can be built for much less than the cost of comparable commercial software. Combine these benefits with support for laptops, PDAs, and remote access, and you will find that Ubuntu supports the creation and use of an inexpensive yet efficient work environment.

Source of Information : Sams Ubuntu Unleashed 2008 Edition

Sunday, October 26, 2008

Linux Provides a Secure Hierarchical Filesystem

A file is a collection of information, such as text for a memo or report, an accumulation of sales figures, an image, a song, or an executable program. Each file is stored under a unique identifier on a storage device, such as a hard disk. The Linux filesystem provides a structure whereby files are arranged under directories, which are like folders or boxes. Each directory has a name and can hold other files and directories. Directories, in turn, are arranged under other directories, and so forth, in a treelike organization. This structure helps users keep track of large numbers of files by grouping related files in directories. Each user has one primary directory and as many subdirectories as required.

Standards. With the idea of making life easier for system administrators and software developers, a group got together over the Internet and developed the Linux Filesystem Standard (FSSTND), which has since evolved into the Linux Filesystem Hierarchy Standard (FHS). Before this standard was adopted, key programs were located in different places in different Linux distributions. Today you can sit down at a Linux system and know where to expect to find any given standard program.
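The effect of the FHS is visible at the top of any compliant Linux filesystem; the standard directories sit in predictable places. A sketch (the exact listing varies slightly by distribution):

```shell
# A few standard FHS locations you can rely on finding:
ls -d /bin /etc /usr /var

# Log files, for example, are consistently kept under /var/log:
ls /var/log 2>/dev/null | head -n 5 || true
```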

Links. A link allows a given file to be accessed by means of two or more names. The alternative names can be located in the same directory as the original file or in another directory. Links can make the same file appear in several users’ directories, enabling those users to share the file easily. Windows uses the term shortcut in place of link to describe this capability. Macintosh users will be more familiar with the term alias. Under Linux, an alias is different from a link; it is a command macro feature provided by the shell.
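The two kinds of links are easy to see in action. A self-contained sketch run in a temporary directory (the filenames are arbitrary):

```shell
cd "$(mktemp -d)"
echo "shared report text" > report.txt

# A hard link: a second name for the same file data.
ln report.txt report-link.txt

# A symbolic link: a small file that points at the original by name.
ln -s report.txt report-symlink.txt

# Both names reach the same content:
cat report-link.txt
cat report-symlink.txt

# ls -l shows the hard-link count (2) and the symlink's target:
ls -l report.txt report-symlink.txt
```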

Security. Like most multiuser operating systems, Linux allows users to protect their data from access by other users. It also allows users to share selected data and programs with certain other users by means of a simple but effective protection scheme. This level of security is provided by file access permissions, which limit which users can read from, write to, or execute a file. More recently, Linux has implemented Access Control Lists (ACLs), which give users and administrators finer-grained control over file access permissions.
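File access permissions in practice, as a minimal sketch; the ACL commands assume the acl package is installed (it provides setfacl/getfacl on Ubuntu), so they are guarded:

```shell
cd "$(mktemp -d)"
echo "private notes" > notes.txt

# Owner may read and write; group may read; others get nothing:
chmod 640 notes.txt

# Confirm: -rw-r----- in ls -l, or octal 640 from stat:
ls -l notes.txt
stat -c '%a' notes.txt

# Finer-grained control with ACLs, if the tools are installed:
if command -v setfacl >/dev/null 2>&1; then
    setfacl -m u:nobody:r notes.txt 2>/dev/null || true  # grant one extra user read access
    getfacl notes.txt || true
fi
```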

Source of Information : A Practical Guide to Ubuntu Linux

Saturday, October 25, 2008

Why Linux Is Popular with Hardware Companies and Developers

Two trends in the computer industry set the stage for the growing popularity of UNIX and Linux. First, advances in hardware technology created the need for an operating system that could take advantage of available hardware power. In the mid-1970s, minicomputers began challenging the large mainframe computers because, in many applications, minicomputers could perform the same functions less expensively. More recently, powerful 64-bit processor chips, plentiful and inexpensive memory, and lower-priced hard disk storage have allowed hardware companies to install multiuser operating systems on desktop computers.

Proprietary operating systems
Second, with the cost of hardware continually dropping, hardware manufacturers could no longer afford to develop and support proprietary operating systems. A proprietary operating system is one that is written and owned by the manufacturer of the hardware (for example, DEC/Compaq owns VMS). Today’s manufacturers need a generic operating system that they can easily adapt to their machines.

Generic operating systems
A generic operating system is written outside of the company manufacturing the hardware and is sold (UNIX, Windows) or given (Linux) to the manufacturer. Linux is a generic operating system because it runs on different types of hardware produced by different manufacturers. Of course, manufacturers are much better off if they can pay only for development and avoid per-unit costs (such as the royalty they must pay Microsoft for each copy of Windows they sell). In turn, software developers need to keep the prices of their products down; they cannot afford to convert their products to run under many different proprietary operating systems. Like hardware manufacturers, software developers need a generic operating system. Although the UNIX system once met the needs of hardware companies and researchers for a generic operating system, over time it has become more proprietary as manufacturers added support for their own specialized features and introduced new software libraries and utilities. Linux emerged to serve both needs: It is a generic operating system that takes advantage of available hardware power.

Source of Information : A Practical Guide to Ubuntu.Linux

Friday, October 24, 2008

Troubleshooting Ubuntu Post-Installation Configuration Problems

A lot of work has gone into Ubuntu to make it as versatile as possible, but sometimes you may come across a piece of hardware that Ubuntu is not sure about. Knowing what to do in these situations is important, especially when you are working with Ubuntu for the first time. Because Ubuntu (and Linux in general) is built on a resilient UNIX foundation, it is much more stable than other operating systems. You might find this surprising if you are used to the Blue Screens of Death found on a certain operating system from Redmond, Washington. However, even though things might seem to be working fine, Ubuntu could have a problem that might not affect the appearance of the system. Perhaps kernel modules for devices will not load, for example, or services cannot start for some reason. In this section, you learn how to examine some of Ubuntu's built-in error logs to help you diagnose any unseen faults.

Ubuntu has a command that enables you to see detailed messages that are output directly by the operating system: the dmesg command, which is commonly used with the grep command to filter output. The dmesg command reads the kernel's ring buffer; much of the same information is also recorded in the /var/log/messages file, so you can choose to either run dmesg directly or enter less /var/log/messages instead. The output is fairly detailed, so be prepared for an initial shock when you see how much information is generated. You might find it easier to generate a file with the dmesg output by using the following command:

$ dmesg > dmesg.txt

This takes the output from the dmesg command and stores it in a new text file called dmesg.txt. You can then browse it at your leisure using your choice of text editor such as vi or emacs. You can even use the less command, like so:

$ less dmesg.txt

The messages are generated by the kernel, other software run by /etc/init.d, and Ubuntu’s runlevel scripts. You might find what appear to be errors at first glance, but some errors are not really problems (for example, if a piece of hardware is configured but not present on your system).
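Filtering makes that output far more manageable. A sketch (the search terms "usb", "error", and "fail" are only examples; substitute the driver or device you are investigating, and note that some hardened systems restrict dmesg to root, hence the fallback):

```shell
# Capture the kernel messages; fall back to an empty file if dmesg
# is restricted on this system:
dmesg > dmesg.txt 2>/dev/null || : > dmesg.txt

# Case-insensitive searches of the saved copy; grep exits nonzero
# when nothing matches, so guard each one:
grep -i usb dmesg.txt || true
grep -iE 'error|fail' dmesg.txt || true
```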

Thanks to Google, troubleshooting is no longer the slow process it used to be. You can simply copy and paste error messages into Google and click Search to bring up a whole selection of results similar to the problem you face. Remember, Google is your friend, especially http://www.google.com/linux, which provides a specialized search engine for Linux. You can also try http://marc.info, which browses newsgroup and mailing list archives. Either way, you are likely to come across people who have had the same problem as you.

It is important to work on a solution to only one problem at a time; otherwise, you may end up getting no work done whatsoever. You should also get into the habit of making backup copies of all files that you modify, just in case you make a bad situation worse. Use the copy command like this:

$ cp file file.backup

You should never use a .bak extension because this could get overwritten by another automatic process and will leave you frustrated when you try to restore the original file. If something breaks as a result of your changing the original file, you can always copy the original back into place using a command like this:

$ cp file.backup file

(Something as simple as this can really save your bacon, especially when you are under pressure because you've changed something you shouldn't have on a production system. That is, if you are daft enough to make sweeping changes on a production system!)
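A slightly safer variant of the backup habit described above is to stamp each copy with the date and time, so successive edits never overwrite an earlier backup. A self-contained sketch (the filename app.conf is arbitrary):

```shell
cd "$(mktemp -d)"
echo "original contents" > app.conf

# Timestamped backup: repeated runs never collide with each other.
cp app.conf "app.conf.backup.$(date +%Y%m%d-%H%M%S)"

# After a bad edit, restore from the most recent backup:
latest=$(ls -t app.conf.backup.* | head -n 1)
cp "$latest" app.conf
cat app.conf
```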

Source of Information : Sams Ubuntu Unleashed 2008 Edition

Thursday, October 23, 2008

What Is So Good About Linux?

In recent years Linux has emerged as a powerful and innovative UNIX work-alike. Its popularity is surpassing that of its UNIX predecessors. Although it mimics UNIX in many ways, the Linux operating system departs from UNIX in several significant ways: The Linux kernel is implemented independently of both BSD and System V, the continuing development of Linux is taking place through the combined efforts of many capable individuals throughout the world, and Linux puts the power of UNIX within easy reach of both business and personal computer users. Using the Internet, today’s skilled programmers submit additions and improvements to the operating system to Linus Torvalds, GNU, or one of the other authors of Linux.


Applications. A rich selection of applications is available for Linux—both free and commercial—as well as a wide variety of tools: graphical, word processing, networking, security, administration, Web server, and many others. Large software companies have recently seen the benefit in supporting Linux and now have on-staff programmers whose job it is to design and code the Linux kernel, GNU, KDE, or other software that runs on Linux. For example, IBM (www.ibm.com/linux) is a major Linux supporter. Linux conforms increasingly closely to POSIX standards, and some distributions and parts of others meet this standard. These developments indicate that Linux is becoming more mainstream and is respected as an attractive alternative to other popular operating systems.


Peripherals. Another aspect of Linux that appeals to users is the amazing range of peripherals that is supported and the speed with which support for new peripherals emerges. Linux often supports a peripheral or interface card before any company does. Unfortunately some types of peripherals—particularly proprietary graphics cards—lag in their support because the manufacturers do not release specifications or source code for drivers in a timely manner, if at all.


Software. Also important to users is the amount of software that is available—not just source code (which needs to be compiled) but also prebuilt binaries that are easy to install and ready to run. These include more than free software. Netscape, for example, has been available for Linux from the start and included Java support before it was available from many commercial vendors. Now its sibling Mozilla/Thunderbird/Firefox is also a viable browser, mail client, and newsreader, performing many other functions as well.


Platforms. Linux is not just for Intel-based platforms: It has been ported to and runs on the PowerPC—including Apple computers (ppclinux), Compaq's (née Digital Equipment Corporation) Alpha-based machines, MIPS-based machines, Motorola's 68K-based machines, various 64-bit systems, and IBM's S/390. Nor is Linux just for single-processor machines: As of version 2.0, it runs on multiple-processor machines (SMPs). It also includes an O(1) scheduler, which dramatically increases scalability on SMP systems.


Emulators. Linux supports programs, called emulators, that run code intended for other operating systems. By using emulators you can run some DOS, Windows, and Macintosh programs under Linux. For example, Wine (www.winehq.com) is an open-source implementation of the Windows API on top of the X Window System and UNIX/Linux; QEMU (fabrice.bellard.free.fr/qemu) is a CPU-only emulator that executes x86 Linux binaries on non-x86 Linux systems.


Xen. Xen, which was created at the University of Cambridge and is now being developed in the open-source community, is an open-source virtual machine monitor (VMM). A VMM enables several virtual machines (VMs), each running an instance of a separate operating system, to run on a single computer. Xen isolates the VMs so that if one crashes it does not affect the others. In addition, Xen introduces minimal performance overhead when compared with running each of the operating systems natively.

Using VMs, you can experiment with cutting-edge releases of operating systems and applications without concern for the base (stable) system, all on a single machine. You can also set up and test networks of systems on a single machine. Xen presents a sandbox, an area (system) that you can work in without regard for the results of your work or for the need to clean up.

The Gutsy release of Ubuntu supports Xen 3.1. This book does not cover the installation or use of Xen. See help.ubuntu.com/community/Xen for information on running Xen under Ubuntu. For more information on Xen, refer to the wiki at wiki.xensource.com/xenwiki and the Xen home page at www.cl.cam.ac.uk/research/srg/netos/xen.


KVM and VirtualBox. If you want to run a virtual instance of Windows, you may want to investigate KVM (Kernel Virtual Machine, help.ubuntu.com/community/KVM) and VirtualBox (www.virtualbox.org).

Source of Information : A Practical Guide to Ubuntu Linux

Saturday, October 18, 2008

Why Ubuntu Then?

With so many distros out there, you may wonder why you should opt for Ubuntu. Well, as they say, numbers don’t lie, and Ubuntu’s popularity is not without good cause. These traits are especially crowd pleasing:

Easy to install
It’s fair to say that most Linux distributions these days are pretty easy to install (and definitely easier and faster to install than Windows). Ubuntu is right in line with these improvements, and the fact that you can install it with only a few mouse clicks while running the live CD means it is pretty much ready to go whenever you are.

Easy to use
Ubuntu is easy to use in that it is very Windows-like in operation, and yet it’s more Linux-like than other Windows user–oriented distributions.

DEB based
Ubuntu is based on the Debian distribution, which means that it utilizes Debian’s very convenient DEB package system for application handling and installation. The two preconfigured, graphical package installers that come with Ubuntu make installing applications even easier. There are so many packages available for Debian systems like Ubuntu that you are likely to find more software out there than you’ll ever know what to do with.

Up to date
Some distros are updated at a snail’s pace, while others strive to be so cutting edge that they are often plagued with bugs. Ubuntu, with its reasonable six-month release cycle, tries to stay as up-to-date as possible, while at the same time making sure that things are not released before they are ready for prime time. In this way, you are ensured of having an up-to-date yet less buggy distro at your disposal.

Dependable and robust
I know these terms come across as mere hype, but after you smack Ubuntu around a bit, you come to understand what they mean. Knock things down and around, and they bounce right back—this is very important for beginners who often have a knack for screwing things up. Nothing turns a new user off more than a twitchy system that has to be velvet gloved all the time.

Desktop user–oriented
A lot of Linux distributions, although quite capable in the desktop arena, cater more to geeks and developers, taking up valuable disk space with a lot of junk you’ll probably never use. Ubuntu’s purpose is to grab desktop market share from the Redmond folks, so the needs of the common end user are always in mind. The result is that Ubuntu’s GNOME desktop environment is a very comfy place for the average desktop user to be.

Source of Information : Ubuntu for Non-Geeks (2nd Ed)

Friday, October 17, 2008

Embedded Systems Time constraints

There are two types of time constraints for embedded systems: stringent and mild. Stringent time constraints require that the system react in a predefined time frame; otherwise, catastrophic events happen. Take, for instance, a factory where workers have to handle materials being cut by large equipment. As a safety precaution, optical detectors are placed around the blades to detect the presence of the specially colored gloves used by the workers. When the system is alerted that a worker's hand is in danger, it must stop the blades immediately. It can't wait for some disk I/O operation involving reading data in from a Linux swap device (for example, swapping back in the memory storing safety management task code) or for some running task to relinquish the CPU. This system has stringent time requirements; it is a hard real-time system. If it doesn't respond, somebody might lose an arm. Device failure modes don't get much more painful than that.

Streaming audio systems and consumer devices such as MP3 players and cell phones would also qualify as having stringent requirements, because any transient lagging in audio is usually perceived as bothersome by the users, and failure to contact a cellular tower within a certain time will result in an active call being dropped. Yet, these latter systems would mostly qualify as having soft real-time requirements, because the failure of the application to perform in a timely fashion all the time isn’t catastrophic, as it would be for a hard real-time system. In other words, although infrequent failures will be tolerated—a call being dropped once in a while is an annoying frustration users already live with—the system should be designed to have stringent time requirements. Soft real-time requirements are often the target of embedded Linux vendors that don’t want the (potential) liability of guaranteeing hard real-time but are confident in the abilities of their product to provide, for example, reliable cell phone base-band GSM call management capabilities.

Mild time constraints vary a lot in requirements, but they generally apply to systems where timely responsiveness isn’t necessarily critical. If an automated teller takes 10 more seconds to complete a transaction, it’s generally not problematic (of course, at some point, the user is going to give up on the system and assume it’s never going to respond). The same is true for a PDA that takes a certain number of seconds to start an application. The extra time may make the system seem slow, but it won’t affect the end result. Nonetheless, it’s important that the system make the user aware that it is, in fact, doing something with this time and hasn’t gone out for lunch. Nothing is more frustrating than not knowing whether a system is still working or has crashed.
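In practice, both soft real-time and mild constraints come down to measuring elapsed time against a deadline. The sketch below is a hypothetical illustration only (the 500 ms deadline and the sleep workload are invented stand-ins): time the operation, then decide whether the deadline was met.

```shell
# Illustrative deadline check; deadline_ms and the workload are made up.
deadline_ms=500
start=$(date +%s%N)               # nanoseconds since the epoch (GNU date)
sleep 0.1                         # stand-in for the real workload
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
if [ "$elapsed_ms" -le "$deadline_ms" ]; then
    echo "deadline met (${elapsed_ms} ms)"
else
    echo "soft deadline missed (${elapsed_ms} ms)"   # tolerable, but worth logging
fi
```

In a hard real-time system this after-the-fact check would be useless, of course; the point there is guaranteeing the deadline in advance, which is exactly what a stock Linux kernel cannot do on its own.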

Source of Information : O'Reilly Building Embedded Linux Systems

Thursday, October 16, 2008

Debugging EasyUbuntu

Errors during EasyUbuntu installation are usually due to invalid entries in /etc/apt/sources.list, missing repositories, new package locations, or bad package dependencies. Unfortunately, EasyUbuntu will not tell you what failed or why. The /var/log/dpkg.log file will show you specific package failures, but not missing repositories. In other words, if the installation cannot find a required package, EasyUbuntu will not tell you about the failure.
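Since EasyUbuntu stays silent, it is worth scanning the dpkg log yourself for packages interrupted mid-install, which dpkg records in "half-installed" or "half-configured" states. A minimal sketch, using a fabricated two-line sample log in place of the real /var/log/dpkg.log:

```shell
# Fabricated sample log; on a real system, point grep at /var/log/dpkg.log.
log=$(mktemp)
cat > "$log" <<'EOF'
2008-10-16 10:00:01 status installed totem-gstreamer 1.4.3-0ubuntu1
2008-10-16 10:00:02 status half-configured totem-gstreamer-firefox-plugin 1.4.1-0ubuntu4
EOF
broken=$(grep -c 'status half-' "$log")   # count packages left in a broken state
echo "packages in a broken state: $broken"
rm -f "$log"
```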

If EasyUbuntu fails, try installing each checked item individually. This enables you to identify which packages fail. In the EasyUbuntu directory is the file packagelist-dapper.xml. This shows every package that is installed for each selection. For example, when I select Video from the Web tab, EasyUbuntu 3.022 installs the package totem-gstreamer-firefox-plugin. Unfortunately, on the PowerPC platform, this package depends on a specific version of totem-gstreamer, and the dependency was outdated. Although the failure was due to the repository, EasyUbuntu does not identify what caused the installation failure. Instead, I narrowed down this problem by installing each item, one at a time, until I found that Video would not install. Then I looked in packagelist-dapper.xml and saw that it included totem-gstreamer-firefox-plugin. Finally, I performed the installation by hand using apt-get and saw the cause of the failure.

$ sudo apt-get -s install totem-gstreamer-firefox-plugin
Reading package lists... Done
Building dependency tree... Done
The following packages have unmet dependencies:
totem-gstreamer-firefox-plugin: Depends: totem-gstreamer (= 1.4.1-0ubuntu4)
but 1.4.3-0ubuntu1 is to be installed
E: Broken packages

My solution to this problem is to download the source code to the plug-in (apt-get source totem-gstreamer-firefox-plugin) and compile it manually.

There is an alternative to EasyUbuntu: Automatix. This tool is similar to EasyUbuntu in many ways. Automatix automates software installation and is not found in any of the standard repositories. Visit the Automatix home page (http://www.getautomatix.com/) for installation instructions.

Automatix supports more packages than EasyUbuntu, but has other limitations. The biggest concern with Automatix is its automatic installation of potentially harmful packages. It is very possible to trash your system if you select incompatible packages. It also requires modification to /etc/apt/sources.list for installation, and it requires a high technical level to understand all of the options.

In my opinion, community recommendations carry weight. Most forums that compare Automatix with EasyUbuntu recommend EasyUbuntu because of the low technical requirements, safe installations, and simpler download instructions. Two evaluations of these tools are available at https://lists.ubuntu.com/archives/ubuntu-users/2006-March/071696.html and http://nalioth.hostdestroyer.com/comparison.html.

If you just want to click-and-run, then EasyUbuntu is the simpler option. If you're looking for more detailed configurations, consider the Easy Linux Ubuntu Guide, available at http://easylinux.info/wiki/Ubuntu_dapper and http://ubuntuguide.org/wiki/Dapper. Although this guide is not automated, it is very complete, covering more options than either EasyUbuntu or Automatix, and it has relatively easy-to-follow step-by-step instructions.

Source of Information : Hacking Ubuntu Serious Hacks Mods and Customizations

Tuesday, October 14, 2008

Using EasyUbuntu

EasyUbuntu is a script designed to download and automatically configure common packages. It only supports about 40 widely used packages (not every available package) and it does not install any source code packages. This software has one other limitation: EasyUbuntu is not found in any of the main, restricted, universe, or multiverse repositories. Instead, you will need to visit the EasyUbuntu homepage (http://easyubuntu.freecontrib.org/) and follow the installation instructions. The installation itself is straightforward:

1. Apply all updates. EasyUbuntu can have problems if the system is not up to date. Fortunately, these problems simply mean that components are not installed; they do not cause system instability.
sudo apt-get update
sudo apt-get upgrade

2. Go to the EasyUbuntu home page and follow the installation instructions. These are usually a few lines of code for installing and running EasyUbuntu. Be sure to choose the current release. For EasyUbuntu 3.022, the installation instructions are:
wget http://easyubuntu.freecontrib.org/files/easyubuntu-3.022.tar.gz
tar -zxf easyubuntu-3.022.tar.gz
cd easyubuntu
sudo python easyubuntu.in

3. Open a terminal window (Applications -> Accessories -> Terminal) and paste the installation instructions into the terminal.

4. The final sudo command will prompt you for your password. After that, the graphical interface will start up.

If you use these commands as is, you will use EasyUbuntu version 3.022. Newer versions are periodically released as packages are updated. You should use the current version (whatever version number that may be). Newer versions may also include different installation steps. See the EasyUbuntu homepage for the latest installation instructions.

The user interface for EasyUbuntu displays four tabs showing the different package categories, and each tab displays the available packages. Since some packages are not available on every platform, only the available software is displayed. For example, since the binary audio packages gstreamer0.10-pitfdll and w32codecs, and the NVIDIA video drivers do not exist for the PowerPC, EasyUbuntu on a PowerPC does not display the Binary Codecs option.

EasyUbuntu uses dpkg to check which software has already been installed. If a package already exists on the system, then the check box is grayed out. This prevents you from trying to reinstall software. After making your selections, you can click OK to perform the installation. Depending on your choices and network speed, this may take a while.

Software sometimes changes faster than EasyUbuntu; the EasyUbuntu developers are constantly playing catch-up with other package developers. If an installation fails, EasyUbuntu will not corrupt your system. Instead, it will create a variety of popup windows that display errors and warnings. These are similar to the messages given by APT. You will need to manually install any failed packages.

EasyUbuntu cannot be installed using apt-get because it does not exist in any of the repositories. The reason is mainly based on legal issues. EasyUbuntu can help you install drivers that may not be legal in your country. For example, the w32codecs package enables you to play proprietary Windows multimedia files, and libdvdcss2 enables you to play DVD movies by cracking the DVD encryption. In some countries, these packages are illegal.

EasyUbuntu also supports packages from the Penguin Liberation Front (PLF) repository. As with main, restricted, universe, and multiverse, PLF is a specialized repository. It can be added to /etc/apt/sources.list and used to install software. The line to add is:
deb http://medibuntu.sos-sts.com/repo/ dapper free non-free

However, PLF has an interesting history: it distributes software that cannot be included in the standard repositories due to copyright, patents, or other legal restrictions. Installing software from this repository may also be illegal depending on your country's laws.
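Hand-editing sources.list is easy to get wrong if you paste the same line twice. A small sketch of an idempotent append, using a temporary file in place of /etc/apt/sources.list so it is safe to experiment with:

```shell
# Append a repository line only if it is not already present.
# A temp file stands in for /etc/apt/sources.list for safe experimentation.
sources=$(mktemp)
line='deb http://medibuntu.sos-sts.com/repo/ dapper free non-free'
grep -qxF "$line" "$sources" || echo "$line" >> "$sources"
grep -qxF "$line" "$sources" || echo "$line" >> "$sources"   # second run is a no-op
count=$(grep -cxF "$line" "$sources")
echo "occurrences: $count"
rm -f "$sources"
```

The -x and -F flags make grep match the whole line literally, so a partial match against a similar deb line will not suppress the append.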

Source of Information : Hacking Ubuntu Serious Hacks Mods and Customizations

Sunday, October 12, 2008

Size of an embedded Linux system

The size of an embedded Linux system is determined by a number of different factors. First, there is physical size. Some systems can be fairly large, like the ones built out of clusters, whereas others are fairly small, like the Linux wristwatches that have been built in partnership with IBM. The physical size of an embedded system often constrains its hardware capabilities (the size of the physical components that fit inside the finished device), which leads to the second factor: the capacity of the components within the machine. These are very significant to embedded Linux developers and include the speed of the CPU, the size of the RAM, and the size of the permanent storage (which might be a hard disk, but is often a flash device, currently either NOR or NAND, according to use).

In terms of size, we will use three broad categories of systems: small, medium, and large. Small systems are characterized by a low-powered CPU with a minimum of 4 MB of ROM (normally NOR or even NAND Flash rather than a real ROM) and between 8 and 16 MB of RAM. This isn’t to say Linux won’t run in smaller memory spaces, but it will take you some effort to do so for very little gain, given the current memory market. If you come from an embedded systems background, you may find that you could do much more using something other than Linux in such a small system, especially if you’re looking at “deeply embedded” options. Remember to factor in the speed at which you could deploy Linux, though. You don’t need to reinvent the wheel, like you might well end up doing for a “deeply embedded” design running without any kind of real operating system underneath.

Medium-size systems are characterized by a medium-powered CPU with 32 MB or more of ROM (almost always NOR flash, or even NAND flash on systems able to execute code from block-addressable NAND flash memory devices) and 64–128 MB of RAM. Most consumer-oriented devices built with Linux belong to this category, including various PDAs (for example, the Nokia Internet Tablets), MP3 players, entertainment systems, and network appliances. Some of these devices may include secondary storage in the form of NAND flash (parts as large as 4 GB are available at the time of this writing; much larger arrays are possible by combining more than one part, and we have seen systems using over 32 GB of NAND), removable memory cards, or even conventional hard drives. These types of devices have sufficient horsepower and storage to handle a variety of small tasks, or they can serve a single purpose that requires a lot of resources.

Large systems are characterized by a powerful CPU or collection of CPUs combined with large amounts of RAM and permanent storage. Usually these systems are used in environments that require large amounts of calculations to carry out certain tasks. Large telecom switches and flight simulators are prime examples of such systems, as are government research systems, defense projects, and many other applications that you would be unlikely to read about. Typically, such systems are not bound by costs or resources. Their design requirements are primarily based on functionality, while cost, size, and complexity remain secondary issues.

In case you were wondering, Linux doesn’t run on any processor with a memory architecture below 32 bits (certainly there’s no 8-bit microcontroller support!). This rules out quite a number of processors traditionally used in embedded systems. Fortunately, though, with the passage of time, increasing numbers of embedded designs are able to take advantage of Linux as processors become much more powerful (and integrate increasing functionality), RAM and flash prices fall, and other costs diminish. These days, it often makes less economic sense to deploy a new 8051 microcontroller design when, for a small (but not insignificant) additional cost, one can have all the power of a full Linux system, especially when using uClinux-supported devices. The decreasing cost of System-on-Chip (SoC) parts combining CPU and peripheral functionality in a single device is rapidly changing the cost metrics for designers of new systems. Sure, you don’t need a 32-bit microprocessor in that microwave oven, but if it’s no more expensive to use one, why not have a built-in web server that can remotely update the oven with new features?

Source of Information : O'Reilly Building Embedded Linux Systems

Saturday, October 11, 2008

Who Uses Linux?

Who uses Linux? The myth from the old days is that it’s only for techies and power users. When you needed to put everything together by hand, this was clearly true. But modern distributions make Linux accessible to all. It’s no exaggeration to say that you could install Linux on a computer Luddite’s PC and have that person use it in preference to Windows. Up until quite recently, Linux was largely seen as a developer’s tool and a server operating system. It was geared toward programmers or was destined for a life running backroom computers, serving data, and making other computer resources available to users.

To this end, Linux continues to run a sizable proportion of the computers that make the Internet work, largely because it provides an ideal platform for the Apache web server, as well as various databases and web-based programming languages. This has led to the LAMP acronym, which stands for Linux, Apache (a web server), MySQL (a database), and PHP, Python, or Perl (three programming languages that can be used in an online environment).

Despite its technical origins, recent years have seen a strong push for Linux on desktop computers. Linux has stepped out of the dark backrooms, with the goal of pushing aside Microsoft Windows and Mac OS in order to dominate the corporate workstation and home user market.

Running Linux on the desktop has always been possible, but the level of knowledge required was often prohibitively high, putting Linux out of the reach of most ordinary users. It’s only comparatively recently that the companies behind the distributions of Linux have taken a long, hard look at Windows and attempted to mirror its user-friendly approach. In addition, the configuration software in distributions like Ubuntu has progressed in leaps and bounds. Now, it’s no longer necessary to know arcane commands in order to do something as simple as switch the screen resolution. The situation has also been helped by the development of extremely powerful office software, such as OpenOffice.org and KOffice.

Is Linux for you? There’s only one way of finding out, and that’s to give it a go. Linux doesn’t require much of you except an open mind and the will to learn new ways of doing things. You shouldn’t see learning to use Linux as a chore. Instead, you should see it as an adventure—a way of finally getting the most from your PC and not having to worry about things going wrong for reasons outside your control.

Linux puts you in charge. You’re the mechanic of the car as well as its driver, and you’ll be expected to get your hands dirty every now and then. Unlike Windows, Linux doesn’t hide any of its settings or stop you from doing things for your own protection; everything is available to tweak. Using Linux requires commitment and the realization that there are probably going to be problems, and they’re going to need to be overcome.

However, using Linux should be enjoyable. In his initial newsgroup posting announcing Linux back in 1991, Linus Torvalds said that he was creating Linux “just for fun.” This is what it should be for you.

Thursday, October 9, 2008

Embedded Linux System Kernel Considerations

The kernel is the most fundamental software component of all Linux systems. It is responsible for managing the bare hardware within your chosen target system and bringing order to what would otherwise be a chaotic struggle between each of the many various software components on a typical system.

In essence, this means the kernel is a resource broker. It takes care of scheduling use of (and mediating access to) the available hardware resources within a particular Linux system. Resources managed by the kernel include system processor time given to programs, use of available RAM, and indirect access to a multitude of hardware devices, including those custom to your chosen target. The kernel provides a variety of software abstractions through which application programs can request access to system resources, without communicating with the hardware directly.

The precise capabilities provided by any particular build of the Linux kernel are configurable when that kernel is built. Kernel configuration allows you to remove support for unnecessary or obscure capabilities that will never be used. For example, it is possible to remove support for the many different networked filesystems from an embedded device that has no networking support. Conversely, it is possible to add support for a particular peripheral device unique to a chosen target system. Depending on their function, many capabilities can also be built into optional, runtime-loadable, modular components. These can be loaded later when the particular capability is required.

Most desktop or enterprise Linux vendors ship prebuilt Linux kernels as part of their distributions. Such kernels include support for the wide range of generic hardware devices typically available within modern consumer-grade or enterprise-level computing systems. Many of these capabilities are built into runtime-loadable modules, which are demand loaded by a variety of automated tools as hardware devices are detected. This one-size-fits-all approach allows Linux vendors to support a wide range of target systems with a single prebuilt binary kernel package, at the cost of a certain amount of generalization and the occasional performance impact that goes alongside it.

Unlike their desktop, server, or enterprise counterparts, embedded Linux systems usually do not make use of such all-encompassing prebuilt, vendor-supplied kernels. The reasons for this are varied, but include an inability for generic kernels to handle certain embedded, target-specific customizations, as well as a general underlying desire to keep the kernel configuration as simple as possible. A simpler configuration is both easier to debug and typically requires a reduced resource footprint when compared with its more generic counterpart. Building an embedded system from scratch is tough enough already without worrying about the many kernel capabilities you will never use.
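Whether a given capability was built in, built as a module, or left out can be read straight from the kernel build's .config file. A sketch of that check, using a fabricated three-line fragment rather than a real build tree (in a real kernel source directory, the file sits at the top level after configuration):

```shell
# Fabricated .config fragment; =y means built in, =m means loadable module,
# and a "# ... is not set" comment means the capability was left out.
config=$(mktemp)
cat > "$config" <<'EOF'
CONFIG_NFS_FS=y
# CONFIG_USB_SUPPORT is not set
CONFIG_EXT3_FS=m
EOF
nfs=$(grep -c '^CONFIG_NFS_FS=y' "$config")
ext3_modular=$(grep -c '^CONFIG_EXT3_FS=m' "$config")
echo "NFS built in: $nfs, ext3 as module: $ext3_modular"
rm -f "$config"
```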

Source of Information : O'Reilly Building Embedded Linux Systems

Wednesday, October 8, 2008

Analyze Linux Process Performance Statistics

The tools to analyze the performance of applications are varied and have existed in one form or another since the early days of UNIX. It is critical to understand how an application is interacting with the operating system, CPU, and memory system to understand its performance. Most applications are not self-contained and make many calls to the Linux kernel and different libraries. These calls to the Linux kernel (or system calls) may be as simple as "what's my PID?" or as complex as "read 12 blocks of data from the disk." Different system calls will have different performance implications. Correspondingly, the library calls may be as simple as memory allocation or as complex as graphics window creation. These library calls may also have different performance characteristics.

Kernel Time Versus User Time
The most basic split of where an application may spend its time is between kernel and user time. Kernel time is the time spent in the Linux kernel, and user time is the amount of time spent in application or library code. Linux has tools such as time and ps that can indicate (appropriately enough) whether an application is spending its time in application or kernel code. It also has commands such as oprofile and strace that enable you to trace which kernel calls are made on behalf of the process, as well as how long each of those calls took to complete.
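A quick way to see this user/kernel split from the shell itself is the POSIX times builtin, which prints the user and system (kernel) CPU time consumed by the shell and by its children. A minimal sketch with an arbitrary workload:

```shell
# `times` is a POSIX shell builtin: it prints two lines, the first with the
# shell's own user and system CPU time, the second with its children's.
seq 1 200000 > /dev/null    # arbitrary child process doing mostly user-mode work
usage=$(times)
echo "$usage"
lines=$(echo "$usage" | wc -l)
```

For a single external command, time(1) gives the same breakdown (real, user, and sys) without involving the shell's own accounting.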

Library Time Versus Application Time
Any application with even a minor amount of complexity relies on system libraries to perform complex actions. These libraries may cause performance problems, so it is important to be able to see how much time an application spends in a particular library. Although it might not always be practical to modify the source code of the libraries directly to fix a problem, it may be possible to change the application code to call different or fewer library functions. The ltrace command and the oprofile suite provide a way to analyze the performance of libraries when they are used by applications. Tools built into the Linux loader, ld, help you determine whether the use of many libraries slows down an application's start time.

Subdividing Application Time
When the application is known to be the bottleneck, Linux provides tools that enable you to profile an application to figure out where time is spent within an application. Tools such as gprof and oprofile can generate profiles of an application that pin down exactly which source line is causing large amounts of time to be spent.

Source of Information : Optimizing Linux Performance

Monday, October 6, 2008

Installing RealPlayer in Ubuntu

Installing RealPlayer involves adding a new software repository and then using the Synaptic Package Manager to download and install RealPlayer. The software repository is hosted by Canonical, the company that’s the chief sponsor and director of Ubuntu, as a method of providing some useful proprietary extra software.

The following steps install RealPlayer in Ubuntu:

1. Click System -> Administration -> Software Sources.

2. When the Software Sources window appears, click the Third-Party Software tab.

3. Put a check alongside the first entry in the list, which should read http://archive.canonical.com/ubuntu hardy partner. Then click Close.

4. You will see a dialog box informing you that the information about available software is out-of-date. Click the Reload button. Once this has completed, the Software Sources dialog box will close automatically.

5. Open the Synaptic Package Manager (System -> Administration -> Synaptic Package Manager) and click the Search button. Search for realplay. In the list of results, click the check box alongside the package, and click Mark for Installation. Click the Apply button on the toolbar. Close Synaptic.

6. Click Applications -> Sound and Video -> RealPlayer 10 to start the RealPlayer setup program. Click the Forward button several times to move through the license agreement and information screens.

7. Eventually, you’ll be asked if you want to check for updates and configure Mozilla helpers. Ensure both boxes are checked and click OK.

8. RealPlayer will start. You can use it or close it.


Source of Information : Apress Beginning Ubuntu Linux 3rd Edition

Sunday, October 5, 2008

Ubuntu Multiple Terminals

It is both curious and sad that many Linux veterans have not heard of the screen command. Curious because they needlessly go to extra effort to replicate what screen takes in its stride and sad because they are missing a powerful tool that would benefit them greatly.

You connect to a server via SSH and are working at the remote shell. You need to open another shell window so you can have the two running side by side; perhaps you want the output from top in one window while typing in another. What do you do? Most people would open another SSH connection, but that is both wasteful and unnecessary. screen is a terminal multiplexer, which is a fancy term for a program that lets you run multiple terminals inside one terminal.

The best way to learn screen is to try it yourself, so open a console, type screen, and then press Enter. Your display will blank momentarily and then be replaced with a console; it will look like nothing has changed. Now, let's do something with that terminal. Run top and leave it running for the time being. Hold down the Ctrl key and press a (referred to as Ctrl+a from now on); then let go of them both and press c. Your prompt will clear again, leaving you able to type. Run the uptime command.

What happened to the old terminal running top? It is still running, of course. You can type Ctrl+a and then press 0 to return to it. Type Ctrl+a and then press 1 to go back to your uptime terminal. While you are viewing other terminals, the commands in the other terminals carry on running as normal so you can multitask.

Many of screen's commands are case sensitive, so the lettering used here is very specific: Ctrl+a means "press Ctrl and the a key," but Ctrl+A means "press Ctrl and Shift and the a key" so you get a capital A. Ctrl+a+A means "press Ctrl and the a key, let them go, and then press Shift and the a key."

You have seen how to create new displays and how to switch between them by number. However, you can also bring up a window list and select windows using your cursor with Ctrl+a+" (that is, press Ctrl and a together, let go, and press the double quotes key [usually Shift and the single quote key]). You will find that the screens you create have the name bash by default, which is not very descriptive. Select a window and press Ctrl+a+A. You are prompted to enter a name for the current window, and your name is used in the window list.

Once you get past window 9, it becomes impossible to switch to windows using Ctrl+a and the number keys 0 through 9; as soon as you type the 1 of 10, screen switches to display 1. The solution is to use either the window list or the quick change option, in which you press Ctrl+a+' (single quote), enter either the screen number or the name you gave it, then press Enter. You can also change back to the previous window by pressing Ctrl+a+Ctrl+a. If you only work within a small set of windows, you can use Ctrl+a+n and Ctrl+a+p to move to the next and previous windows, respectively. Of course, if you are changing to and from windows only to see whether something has changed, you are wasting time, because screen can monitor windows for you and report if anything changes. To enable (or disable) monitoring for a window, use Ctrl+a+M; when something happens, screen flashes a message. If you miss it (the messages disappear when you type something), use Ctrl+a+m to bring up the last message.

Windows close when you kill the main program inside them. For windows created with Ctrl+a+c, that program is Bash, so typing exit closes the window. Alternatively, you can use Ctrl+a+K to kill a window. When all your windows are closed, screen terminates and prints a screen is terminating message so you know you are out.

However, there are two alternatives to quitting: locking and disconnecting. The first, activated with Ctrl+a+x, locks access to your screen data until you enter your system password. The second is the most powerful feature of screen: You can exit it and do other things for a while and then reconnect later and screen will pick up where you left off. For example, you could be typing at your desk, disconnect from screen, then go home, reconnect, and carry on as if nothing had changed. What's more, all the programs you ran from screen carry on running even while screen is disconnected. It even automatically disconnects for you if someone closes your terminal window while it is in a locked state (with Ctrl+a+x).

To disconnect from screen, press Ctrl+a+d. You are returned to the prompt from which you launched screen and can carry on working, close the terminal you had opened, or even log out completely. When you want to reconnect, run the command screen -r. You can, in the meantime, just run screen and start a new session without resuming the previous one, but that is not wise if you value your sanity! You can disconnect and reconnect the same screen session as many times as you want, which potentially means you need never lose your session again.
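If you find yourself repeating the same setup every session, screen reads a ~/.screenrc file at startup. A minimal illustrative fragment (the window titles and commands here are just examples; adjust to taste):

```
# Sample ~/.screenrc
startup_message off         # skip the copyright splash screen
defscrollback 5000          # keep 5000 lines of scrollback per window
hardstatus alwayslastline   # reserve the bottom line for a status bar
hardstatus string "%w"      # %w expands to the numbered window list
screen -t top 0 top         # window 0: run top, titled "top"
screen -t shell 1           # window 1: a plain shell
```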

Source of Information : Ubuntu Unleashed

Saturday, October 4, 2008

Obtaining Ubuntu Installer

Ubuntu and its close cousins Kubuntu, Xubuntu, and Edubuntu are all designed with ease of use and familiarity in transition in mind. These distributions focus on keeping things simple and clean to help smooth out the learning curves when you are adapting to a new system.

The Ubuntu installer (Ubiquity) is a prelude to the simplicity of the Ubuntu system, breaking down the install process into about 10 clicks. Canonical Ltd., the support company behind Ubuntu, has even made it easy for people to obtain installation media by offering to mail CD-ROMs, free of charge if needed (https://shipit.ubuntu.com/login). If you have a fast Internet connection, however, you can download one of the many ISO images for free from one of the many mirrors around the world (www.ubuntu.com/getubuntu/downloadmirrors). The list of mirrors is huge to make sure there are plenty of servers available for people to download from. If one is not available, or unresponsive, try another.

The download mirrors can be a bit confusing, so there is also an enhanced download page (www.ubuntu.com/getubuntu/download) available to make things more clear. The current, stable offering at the time of this writing is Ubuntu 7.04 (Feisty Fawn). Ubuntu 7.10 is scheduled for October 2007, so you may have that option available instead. The enhanced download page currently offers the following Ubuntu install media in both Desktop and Server options. The Server option is geared towards people who do not need a full-blown Desktop system.

Ubuntu 7.04 — The current stable release of Ubuntu. This is the most commonly selected version.

Ubuntu 6.06 LTS — The Long Term Support offering of Ubuntu for people who purchase three-year Desktop support or five-year Server support options.

Other options you need to select on the enhanced download page are:

Standard personal computer — This option is the typical choice for most users. 64-bit desktop and laptop CPUs from both AMD and Intel will run this fine if you don’t have special RAM or application requirements.

64-bit AMD and Intel computers — This option is for computers that have a need for large amounts of RAM, or specifically, a 64-bit platform to run applications.

Sun UltraSPARC–based — This is Ubuntu for hardware based on the Sun Microsystems UltraSPARC RISC platform. A nice alternative to Sun Solaris (works fine on an Ultra 60).

The Alternate Desktop CD option has some extra functionality built in, such as Logical Volume Management (LVM) support (LVM is covered in Chapter 7). If you need LVM, at least at the time of this writing, you will need to check this box. Lastly, click the Download button.

After your download is complete, you may want to browse through the list of mirrors above and obtain the MD5SUM file for the version of Ubuntu you downloaded (https://help.ubuntu.com/community/HowToMD5SUM). This can help verify the integrity of the ISO image. Most open source software will have such a digital signature available, and we recommend that you verify this prior to installation, or before burning the ISO image to CD-ROM or DVD.
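As a sketch of that verification workflow, the commands below run md5sum -c against an MD5SUMS file. To keep the example self-contained, a small stand-in file is used in place of the real ISO (the filename here is illustrative); on a real download you would fetch the MD5SUMS file from the mirror and run the same check in the directory holding the ISO.

```shell
# Demonstrate checksum verification on a stand-in file; with a real download,
# the MD5SUMS file comes from the mirror rather than being generated locally.
cd "$(mktemp -d)"
printf 'stand-in for the ISO contents\n' > ubuntu-7.04-desktop-i386.iso

# The mirror's MD5SUMS file pairs a checksum with each filename:
md5sum ubuntu-7.04-desktop-i386.iso > MD5SUMS

# -c recomputes each checksum and compares; "OK" means the file is intact.
md5sum -c MD5SUMS
# prints: ubuntu-7.04-desktop-i386.iso: OK
```

If the check reports "FAILED" instead, the download was corrupted and should be repeated before you burn it to disc.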

If you desire more security for your downloads beyond the MD5 checksums, look at SecureApt. For more information on how APT uses digital authentication and encryption for software packages, visit the SecureApt section on the Ubuntu help web site (https://help.ubuntu.com/community/SecureApt).

Source of Information : Wiley Ubuntu Linux Toolbox 1000 plus Commands for Ubuntu and Debian Power Users

Friday, October 3, 2008

Sharing Files with a USB Drive in Ubuntu

The simplest and most common use for a USB drive is to share files between systems. Dapper supports most USB drives. Simply plugging the drive into the USB port will automatically mount the drive. From there, you can access it as you would access any mounted partition.
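If you are unsure which device node the automounted drive was given, the list of mounted filesystems reveals it. A quick check (the /media path and /dev/sda1 name are typical on Dapper but not guaranteed):

```shell
# List mounted filesystems; an automounted USB drive usually appears
# under /media with a device name such as /dev/sda1 or /dev/sdb1.
mount
df -h     # the same mounts, with sizes and free space
```

Knowing the correct device name matters because the formatting commands below destroy whatever is on the device you point them at.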

Linux, Windows, Mac OS, and most other systems support FAT file systems. To share files with users on other platforms, consider formatting the drive with mkdosfs. For example:

1. Install the dosfstools package if mkdosfs is not already installed.
sudo apt-get install dosfstools

2. Unmount the drive (for example, /dev/sda1) if it is currently mounted.
sudo umount /dev/sda1

3. Format the drive using either FAT16 or FAT32.
sudo mkdosfs -F 16 /dev/sda1 # format as FAT16
sudo mkdosfs -F 32 /dev/sda1 # format as FAT32

If you do not mind restricting file sharing to Linux-only systems, then you can format the drive with an ext2 or ext3 file system using any of the following commands:

sudo mkfs /dev/sda1 # default format is ext2
sudo mkfs -t ext2 /dev/sda1 # explicitly format as ext2
sudo mkfs -t ext3 /dev/sda1 # explicitly format as ext3
sudo mkfs.ext2 /dev/sda1 # call the ext2 formatter directly
sudo mkfs.ext3 /dev/sda1 # call the ext3 formatter directly
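Before running any of these against a real device, it can be reassuring to try them on an ordinary image file first; mkfs.ext2 accepts one when given -F (force), so the sketch below assumes no actual USB hardware. The 4 MB size is arbitrary.

```shell
# Practice run on a small image file instead of a real device; -F tells
# mkfs.ext2 not to complain that the target is not a block device.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=4 2>/dev/null

# Guarded so the sketch is harmless on systems without e2fsprogs installed:
if command -v mkfs.ext2 >/dev/null 2>&1; then
    mkfs.ext2 -F -q "$img"
fi
```

Once the commands behave as expected on the image, substitute the real device node (for example, /dev/sda1) and add sudo.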


Many thumb drives have a light to indicate that the drive is being accessed. Even if the drive is not mounted, do not unplug the drive until the light indicates all activity has stopped.

If you want to create a FAT-formatted USB floppy drive, then use the -I option. For example: sudo mkdosfs -I -F 32 /dev/sda.

Source of Information : Hacking Ubuntu Serious Hacks Mods and Customizations

Thursday, October 2, 2008

Ubuntu Ctrl+Alt+Delete (CAD) Key Sequence

Trapping Ctrl+Alt+Delete
Different versions of Linux either have the Ctrl+Alt+Delete (CAD) key sequence enabled or disabled. In Ubuntu Dapper Drake, this key sequence is enabled, allowing a quick shutdown and reboot. However, the Gnome desktop intercepts CAD. To reboot, you need to switch to a text window (Ctrl+Alt+F1) and then press CAD.

Since the Gnome desktop intercepts CAD, you can remap this key sequence to run a different command. For example, to bring up the Gnome System Monitor, you can use:

gconftool-2 -t str --set /apps/metacity/global_keybindings/run_command_10 \
'<Control><Alt>Delete'
gconftool-2 -t str --set /apps/metacity/keybinding_commands/command_10 \
"gnome-system-monitor"

The system monitor enables you to see the running processes and selectively kill applications. This is similar to using CAD under Microsoft Windows to bring up the Task Manager.

Unfortunately, the reboot command requires root privileges, so you cannot make CAD run /sbin/reboot directly. However, you can use gksudo (a graphical front end to sudo) to prompt for your password and then run reboot as root:

gconftool-2 -t str --set /apps/metacity/keybinding_commands/command_10 \
"gksudo reboot"



Disabling Ctrl+Alt+Delete
Sometimes you may want to prevent CAD from rebooting the system. For example, a critical server may have CAD disabled to prevent someone from playing with the keyboard and cycling the system.

1. Edit the /etc/inittab file.
sudo vi /etc/inittab

2. Find the line that says:
ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now
This line says, for all init levels (1, 2, … 5), run the shutdown command and reboot now.

3. To disable CAD, comment out the line by inserting # at the beginning of the line.
#ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now

4. To alter the CAD action, change the /sbin/shutdown command to run your own program. For example, you may want to send an alert to an administrator or play some Disco music to let the user know that CAD is outdated.
ca:12345:ctrlaltdel:/usr/bin/play /home/nealk/disco.mp3 > /dev/null
Only one application can use the audio driver at a time, so this will only play music if nothing else is playing at the same time.

5. After changing the inittab file, reload it using: sudo telinit q.
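The edit in steps 2 and 3 can also be scripted rather than done by hand in vi. The sketch below comments out the ctrlaltdel line with sed, shown on a sample copy of the line rather than the real file; on an actual system the target would be /etc/inittab, the sed command would need sudo, and you would still finish with sudo telinit q.

```shell
# Work on a sample copy of the ctrlaltdel line; on a real system the
# target is /etc/inittab and the sed command must run under sudo.
tmp=$(mktemp)
echo 'ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now' > "$tmp"

# Prefix the line with # to disable CAD:
sed -i 's/^ca:/#ca:/' "$tmp"
cat "$tmp"    # prints: #ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now
```

Scripting the change this way is handy when you manage several servers and want the same CAD policy on all of them.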


Unmapped keyboard signals can be lost. If you disable CAD, you may find that you cannot re-enable it without rebooting the system. If you only change its functionality (without disabling the command), however, you do not need to reboot. The same is true for power-level signals and Alt+UpArrow.

Source of Information : Hacking Ubuntu Serious Hacks Mods and Customizations