Monday, August 31, 2009

The Linux Kernel - Software Program Management

The Linux operating system calls a running program a process. A process can run in the foreground, displaying output on a display, or it can run in the background, behind the scenes. The kernel controls how the Linux system manages all the processes running on the system.

The kernel creates the first process, called the init process, to start all other processes on the system. When the kernel starts, it loads the init process into virtual memory. As the kernel starts each additional process, it allocates to it a unique area in virtual memory to store the data and code that the process uses.

Most Linux implementations contain a table (or tables) of processes that start automatically on boot-up. This table is often located in the special file /etc/inittab. The Ubuntu Linux system, however, uses a slightly different format, storing multiple table files in the /etc/event.d folder by default.
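
You can take a quick look at these startup tables yourself. Here's a minimal sketch, assuming an Ubuntu release of this era that uses Upstart (the rc-default job sets the default run level; on a distribution that still uses the classic SysV init you would look at /etc/inittab instead):

test@testbox:~$ ls /etc/event.d
test@testbox:~$ cat /etc/event.d/rc-default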

The Linux operating system uses an init system that utilizes run levels. A run level can be used to direct the init process to run only certain types of processes, as defined in the /etc/inittab file or the files in the /etc/event.d folder. There are seven init run levels in the Linux operating system. Level 0 is used when the system is halted, and level 6 is used when the system is rebooting. Levels 1 through 5 manage the Linux system while it's operating.

At run level 1, only the basic system processes are started, along with one console terminal process. This is called Single User mode. Single User mode is most often used for emergency filesystem maintenance when something is broken. Obviously, in this mode only one person (usually the administrator) can log into the system to manipulate data. The standard init run level is 3. At this run level most application software, such as network support software, is started. Another popular run level in Linux is 5. This is the run level where the system starts the graphical X Window software and allows you to log in using a graphical desktop window.
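
You can check and change the current run level from a terminal. Here's a minimal sketch using the standard SysV tools (switching run levels needs root privileges, so think before you type):

test@testbox:~$ runlevel          # prints the previous and current run level
test@testbox:~$ sudo telinit 1    # drop to Single User mode for maintenance
test@testbox:~$ sudo telinit 5    # switch to the run level that starts the graphical X Window software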

The Linux system can control the overall system functionality by controlling the init run level. By changing the run level from 3 to 5, the system can change from a console-based system to an advanced, graphical X Window system. Here are a few lines extracted from the output of the ps command:

test@testbox~$ ps ax
PID TTY STAT TIME COMMAND
1 ? Ss 0:01 /sbin/init
2 ? S< 0:00 [kthreadd]
3 ? S< 0:00 [migration/0]
4 ? S< 0:00 [ksoftirqd/0]
5 ? S< 0:00 [watchdog/0]
4708 ? S< 0:00 [krfcommd]
4759 ? Ss 0:00 /usr/sbin/gdm
4761 ? S 0:00 /usr/sbin/gdm
4814 ? Ss 0:00 /usr/sbin/atd
4832 ? Ss 0:00 /usr/sbin/cron
4920 tty1 Ss+ 0:00 /sbin/getty 38400 tty1
5417 ? Sl 0:01 gnome-settings-daemon
5425 ? S 0:00 /usr/bin/pulseaudio --log-target=syslog
5426 ? S 0:00 /usr/lib/pulseaudio/pulse/gconf-helper
5437 ? S 0:00 /usr/lib/gvfs/gvfsd
5451 ? S 0:05 gnome-panel --sm-client-id default1
5632 ? Sl 0:34 gnome-system-monitor
5638 ? S 0:00 /usr/lib/gnome-vfs-2.0/gnome-vfs-daemon
5642 ? S 0:09 gimp-2.4
6319 ? Sl 0:01 gnome-terminal
6321 ? S 0:00 gnome-pty-helper
6322 pts/0 Rs 0:00 bash
6343 ? S 0:01 gedit
6385 pts/0 R+ 0:00 ps ax
$

The first column in the output shows the process ID (or PID) of the process. Notice that the first process is our friend, the init process, which is assigned PID 1 by the Ubuntu system. All other processes that start after the init process are assigned PIDs in numerical order. No two processes can have the same PID.

The third column shows the current status of the process. The first letter represents the state the process is in (S for sleeping, R for running). The process name is shown in the last column. Names that appear in brackets are kernel threads, which the kernel starts itself rather than from a program on disk, so ps has no command line to display for them; you can see several of them near the top of the listing, alongside the ordinary user-space processes.
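
If you want just those columns, ps will happily print them on their own. A small sketch using standard procps options:

test@testbox:~$ ps -eo pid,stat,comm | head -5    # only the PID, state, and command-name columns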

Source of information : Wiley Ubuntu Linux Secrets

Sunday, August 30, 2009

The Linux Kernel - System Memory Management

One of the primary functions of the operating system kernel is memory management. Not only does the kernel manage the physical memory available on the server, it can also create and manage virtual memory, or memory that does not actually exist. It does this by using space on the hard disk, called the swap space. The kernel swaps the contents of virtual memory locations back and forth from the swap space to the actual physical memory. This process allows the system to think there is more memory available than what physically exists.

The memory locations are grouped into blocks called pages. The kernel locates each page of memory in either the physical memory or the swap space. It then maintains a table of the memory pages that indicates which pages are in physical memory and which pages are swapped out to disk.

The kernel keeps track of which memory pages are in use and automatically copies memory pages that have not been accessed for a period of time to the swap space area (called swapping out). When a program wants to access a memory page that has been swapped out, the kernel must make room for it in physical memory by swapping out a different memory page and swapping in the required page from the swap space. Obviously, this process takes time, and it can slow down a running process. Swapping continues for as long as the Linux system is running. You can see the current status of the memory on an Ubuntu system by using the System Monitor utility.
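
If you prefer the command line to the System Monitor, the same numbers are available from a terminal. A minimal sketch (both commands are part of a standard Ubuntu install):

test@testbox:~$ free -m      # physical and swap memory totals, in megabytes
test@testbox:~$ swapon -s    # which swap areas are active and how much of each is in use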

The Memory graph shows that this Linux system has 380.5 MB of physical memory, of which about 148.3 MB is currently being used. The next line shows that there is about 235.3 MB of swap space available on this system, with none in use at the time. By default, each process running on the Linux system has its own private memory pages; one process cannot access memory pages being used by another process. The kernel maintains its own memory areas, and for security purposes no process can access memory used by the kernel. Each individual user on the system also has a private memory area used for the applications the user starts. Often, however, related applications must communicate with each other. One way to do this is through data sharing, and to facilitate data sharing you can create shared memory pages.

A shared memory page allows multiple processes to read and write to the same shared memory area. The kernel maintains and administers the shared memory areas, controlling which processes are allowed access to the shared area. The ipcs command lets us view the current shared memory segments on the system. Here's the output from a sample ipcs command:

test@testbox:~$ ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 557056 test 600 393216 2 dest
0x00000000 589825 test 600 393216 2 dest
0x00000000 622594 test 600 393216 2 dest
0x00000000 655363 test 600 393216 2 dest
0x00000000 688132 test 600 393216 2 dest
0x00000000 720901 test 600 196608 2 dest
0x00000000 753670 test 600 393216 2 dest
0x00000000 1212423 test 600 393216 2 dest
0x00000000 819208 test 600 196608 2 dest
0x00000000 851977 test 600 393216 2 dest
0x00000000 1179658 test 600 393216 2 dest
0x00000000 1245195 test 600 196608 2 dest
0x00000000 1277964 test 600 16384 2 dest
0x00000000 1441805 test 600 393216 2 dest

test@testbox:~$

Each shared memory segment has an owner that created the segment. Each segment also has a standard Linux permissions setting that sets the availability of the segment for other users. The key value is used to allow other users to gain access to the shared memory segment.
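
You can experiment with shared memory segments directly from the shell. Here's a hedged sketch, assuming the ipcmk and ipcrm utilities from util-linux are available (the segment ID that ipcmk prints will differ on your system):

test@testbox:~$ ipcmk -M 65536     # create a 64KB shared memory segment owned by the current user
test@testbox:~$ ipcs -m            # the new segment now shows up in the listing
test@testbox:~$ ipcrm -m <shmid>   # remove it again, using the shmid reported by ipcs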

Source of Information : Apress Ubuntu On A Dime The Path To Low Cost Computing

Saturday, August 29, 2009

Pay-to-Use Software

With pay-to-use software, you visit your local computer store or a big name chain store, find a box containing the software you want to install on your computer, and your bank account balance is then lowered anywhere from $20.00 to $600.00. Another version of pay-to-use software is found when you visit a company’s web site, provide your credit card information, and then download the software immediately using your broadband connection. (Sometimes you have to wait for the company to actually send you the software on a disc—anxiously checking your mailbox daily for its arrival.) With the downloadable software, you get no CD/DVD disc, no printed manual, and no pretty box with color graphics on the front. Would you believe, however, that you’re probably charged the same amount of money as if you’d purchased the boxed version from a brick-and-mortar store? It’s happened to me! You’ll typically get no explanation for why you’re not getting a discount for saving them packing and shipping costs.

Finally, there’s another version of pay-to-use that’s even more devious: subscription software. You pay a one-time product initiation fee (typically large) to get the software and then you pay a weekly/monthly/yearly subscription fee (sometimes also referred to as a maintenance fee) to keep using the software.

Believe it or not, the current rumors buzzing around the technology world are that subscription software is the future of software: you won’t buy software anymore; you’ll “rent” it. Miss a payment and that software might very well uninstall itself from your computer, requiring a completely new purchase (with that large product initiation fee) before starting up with the subscription fees again.

Shocked? Angry? Confused? Yes to all three? Well, let me put your mind at ease and let you know that there are individuals, teams, and companies out there determined to fight subscription fees every step of the way. And they’re doing this by creating free applications that are competing with the big name applications (BNAs).

Source of Information : Apress Ubuntu On A Dime The Path To Low Cost Computing

Friday, August 28, 2009

Ubuntu need hardware

First, the good news: every piece of software you’ll learn about in this book, including the Ubuntu operating system, is 100 percent free—free to download, free to install, and free to use. I might as well go ahead and say that the software is also free to uninstall, free to love or hate, free to complain about, and, of course, free to rave about to your friends and family.

And now the bad news: unless a major breakthrough in direct-to-brain downloading has occurred as you read this, that 100 percent free operating system and software will need a home. And that means a computer—a whirring, beeping, plugged-in personal computer (PC) that contains a few basic components that are absolutely required for you to download, store, and use the previously mentioned software. But there’s more good news. It is no longer mandatory that you spend a bundle of money to be able to install an operating system and all the software you know you’ll want to use. Let me explain.

In early 2007, Microsoft introduced its latest operating system, Vista, to the world. It then promptly informed everyone that running the operating system properly would demand some hefty hardware: more hard drive space than any previous operating system, more memory, and a much faster processor. And those requirements were just the minimum needed to run Vista; other limits existed. For example, if you wanted all the fancy new graphics features, you had to invest in a faster (and more expensive) video card. It wasn't uncommon to find users spending $500 or more on hardware upgrades, and in many instances a completely new computer had to be purchased if the user wanted to run Vista; older computers simply didn't meet the requirements.

Have you had enough? Are you tired of spending dollar after dollar chasing the dream of the “perfect PC?” Are you looking for an inexpensive but scalable (upgradeable) computer that can provide you with basic services such as e-mail, word processing, and Internet browsing? And don’t forget other features, such as Internet messaging, VOIP (using your Internet connection to make phone calls), photo editing, and games. You shouldn’t have to skimp on any services or features. Does this sound like a computer you’d enjoy owning and using?

If so, today’s your lucky day. Because I’ll show you how easy it is to put together your own computer using inexpensive components. And because you won’t be spending any money on software, you’ll have the option to put some (or all) of those savings into your new computer. You might splurge and buy a bigger LCD panel (or a second LCD for multiple-monitor usage!) or add some more memory so you can run more applications at once. Or you can spend the bare minimum on hardware, keeping your expenditures low without skimping on software and services. (And if you want to get some more life out of your existing computer, I’ll explain how you can possibly give it a second life by installing Ubuntu to save even more money!)


Basic Components
You’ve probably heard the phrase, “Your mileage may vary,” and it’s no truer than when dealing with different computer hardware settings running Ubuntu (or any operating system). But when it comes to the Ubuntu operating system versus the Windows operating system, there is one large difference in hardware requirements: Ubuntu requires substantially less “oomph” when it comes to the basic components you need inside your computer. By this, I mean you don’t need the fastest processor, a huge amount of RAM memory, or even a large capacity hard drive.

The recommended Ubuntu hardware consists of the following (a quick way to check an existing machine against this list is sketched just after it):
• 700MHz x86 processor
• 384MB system memory (RAM)
• 8GB disk space
• Graphics card capable of 1024x768 resolution
• Sound card
• Network or Internet connection
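
If you're wondering whether a machine you already own clears this bar, a couple of terminal commands will tell you. A minimal sketch (all of these are standard tools on an Ubuntu live CD):

test@testbox:~$ grep "model name" /proc/cpuinfo    # processor type and clock speed
test@testbox:~$ free -m                            # total system memory (RAM) in megabytes
test@testbox:~$ df -h                              # disk sizes and free space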

Compare that with the Intel 3GHz (gigahertz) Core 2 Duo processor, 2GB (gigabytes) of RAM, and 500GB hard drive you'd want in order to run the Vista Ultimate operating system comfortably. You can install the latest version of Ubuntu on hardware found in a computer from around 2000. Back then, a good computer might come with a 60GB hard drive, 512MB or 1GB of RAM, and a Pentium 4 processor, as well as a built-in network card, video, and sound on the motherboard. Surprised?

This means that it’s possible for you to install Ubuntu on your current computer (or an older one you’ve packed away and hidden in a closet somewhere). Ubuntu doesn’t put a lot of demand on hardware, so you can use your current computer or build your own, but avoid the latest bleeding-edge technology (that also comes with a bleeding-edge price).

Not convinced? Okay, here’s where I put my money where my mouth is and show you just how easy it is to build a computer that will run Ubuntu and hundreds more applications for very little money. What’s even better is that I’m 99.9 percent certain that in five years this computer will most likely run the latest version of Ubuntu. Can you say that about your current computer and, say, Windows 2014 Home Edition?

In the introduction of this book, I mentioned that I wanted to build a basic computer that would satisfy a number of requirements:

• It must cost me less than $250.00.
• It must allow me to access the Internet.
• It must also let me access my e-mail, either via the Internet or stored on my hard drive.
• It must provide me with basic productivity features: word processor, spreadsheet, and slideshow-creation software.
• It must allow me to play music and create my own CDs or DVDs.

This list helps define the hardware I need to purchase to build my U-PC.

Source of Information : Apress Ubuntu On A Dime The Path To Low Cost Computing

Thursday, August 27, 2009

Moonlight|3D—3-D Image Modeling

www.moonlight3d.eu
This last project looks really cool and impressed me, but I’m afraid documentation is nonexistent, so hopefully some of you folks at home can help these guys out. According to the Freshmeat page:

Moonlight|3D is a modeling and animation tool for three-dimensional art. It currently supports mesh-based modeling. It’s a redesign of Moonlight Atelier, formed after Moonlight Atelier/Creator died in 1999/2000. Rendering is done through pluggable back ends. It currently supports Sunflow, with support for RenderMan and others in planning.

The Web site sheds further light on the project, which states one of its goals as: “In order to speed up the progress of our development efforts, we open up the project to the general public, and we hope to attract the support of many developers and users, bringing the project forward faster.”

Installation. In terms of requirements, the only thing I needed to install to get Moonlight running was Java, so thankfully, the dependencies are fairly minimal. As for choices of packages at the Web site, there's a nightly build available as a binary or the latest source code (I ran with the binary). Grab the latest, extract it to a local folder, and open a terminal in the new folder. Then, enter the command:

$ ./moonlight.sh

Provided you have everything installed, it now should start. Once you’re inside, I’m sorry, I really can’t be of much help. There are the usual windows in a 3-D editor for height, width, depth and a 3-D view, and on the left are quick selection panes for objects, such as boxes, cones, spheres and so on (actually, the pane on the left has access to just about everything you need—it’s pretty cool). Scouting about, a number of cool functions really jumped out at me, like multiple preview modes; changeable light, camera sources and positions; and most important, the ability to make your own animations. If only I could find a way to use them.

This project really does look pretty cool, and it seems to be a decent alternative to programs like Blender, but there honestly is no documentation. All links to documentation lead to a page saying the documentation doesn’t exist yet and provides a link to the on-line forums. The forums also happen to have very little that’s of use to someone without any prior knowledge of the interface, and I assume all those already on the forum are users of the original Moonlight Atelier. Nevertheless, the project does look interesting and seems to be quite stable. I look forward to seeing what happens with this project once some documentation is in place.

Source of Information : Linux Journal Issue 181 May 2009

Wednesday, August 26, 2009

gipfel—Mountain Viewer/Locater

www.ecademix.com/JohannesHofmann/gipfel.html

This is definitely one of the most original and niche projects I’ve come across— and those two qualities are almost bound to get projects included in this section! gipfel has a unique application for mountain images and plotting. According to the Web site:

gipfel helps to find the names of mountains or points of interest on a picture. It uses a database containing names and GPS data. With the given viewpoint (the point from which the picture was taken) and two known mountains on the picture, gipfel can compute all parameters needed to compute the positions of other mountains on the picture. gipfel can also generate (stitch) panorama images.

Installation. A source tarball is available on the Web site, and trawling around the Net, I found a package from the ancient wonderland of Debian. But, the package is just as old and beardy as its parent OS. Installing gipfel’s source is a pretty basic process, so I went with the tarball. Once the contents are extracted and you have a terminal open in the new directory, it needs only the usual:

$ ./configure
$ make

And, as sudo or root:

# make install

However, like most niche projects, it does have a number of slightly obscure requirements that probably aren’t installed on your system (the configure script will inform you). The Web site gives the following requirements:

• UNIX-like system (for example, Linux, *BSD)

• fltk-1.1

• gsl (GNU Scientific Library)

• libtiff

I found I needed to install fltk-1.1-dev and libgsl0-dev to get past ./configure (you probably need the -dev package for libtiff installed too, but I already had that installed from a previous project). Once compilation has finished and the install script has done its thing, you can start the program with:

$ gipfel

Usage Once you’re inside, the first thing you’ll need to do is load a picture of mountains (and a word of warning, it only accepts .jpg files, so convert whatever you have if it isn’t
already a .jpg). Once the image is loaded, you either can choose a viewpoint from a predefined set of locations, such as Everest Base Camp and so on, or enter the coordinates manually. However, I couldn’t wrap my head around the interface for manual entry, and as Johannes Hofmann says on his own page:

...gipfel also can be used to play around with the parameters manually. But be warned: it is pretty difficult to find the right parameters for a given picture manually. You can think of gipfel as a georeferencing software for arbitrary images (not only satellite images or maps).

As a result, Johannes recommends the Web site www.alpin-koordinaten.de as a great place for getting GPS locations, but bear in mind that the site is in German, and my German is not so good, so you may need to run a Web translator. If you're lucky enough to get a range of reference points appearing on your image, you can start to manipulate where they land on your picture according to perspective, as overwhelming chance dictates that the other mountain peaks won't line up immediately and, therefore, will require tweaking.

If you look at the controls, such as the compass bearing, focal length, tilt and so on, these will start to move the reference points around while still connecting them as a body of points. Provided you have the right coordinates for your point of view, the reference points should line up, along with information on all the other peaks with it (which is really what the project is for in the first place). gipfel also has an image stitching mode, which allows you to generate panoramic images from multiple images that have been referenced with gipfel. As my attempts with gipfel didn’t turn out so well, I include a shot of Johannes’ stunning results achieved from Lempersberg to Zugspitze in the Bavarian Alps, as well as one of the epic panoramic shots as shown on the Web site. Although this project is still a bit unwieldy, it is still in development, and you have to hand it to gipfel, it is certainly original.

Source of Information : Linux Journal Issue 181 May 2009

Tuesday, August 25, 2009

NON-LINUX FOSS - ReactOS Remote Desktop

If you’re a Linux fan, there’s a bit of a tendency to think that Linux and open source are two ways of saying the same thing. However, plenty of FOSS projects exist that don’t have anything to do with Linux, and plenty of projects originated on Linux that now are available on other systems. Because a fair share of our readers also use one of those other operating systems, willingly or unwillingly, we thought we’d highlight here in the coming months some of the FOSS projects that fall into the above categories. We probably all know about our BSD brethren: FreeBSD, OpenBSD, NetBSD and so on, but how many of us know about ReactOS? ReactOS is an open-source replacement for Windows XP/2003. Don’t confuse this with something like Wine, which allows you to run Windows programs on Linux. ReactOS is a full-up replacement for Windows XP/2003. Assuming you consider that good news (a FOSS replacement for Windows), the bad news is that it’s still only alpha software. However, the further good news is that it still is under active development; the most recent release at the time of this writing is 0.3.8, dated February 4, 2009. For more information, visit www.reactos.org.


ReactOS Remote Desktop (from www.reactos.org)

Widelands—Real-Time Strategy

xoops.widelands.org

I covered this game only briefly in the Projects at a Glance section in last month's issue, so I'm taking a closer look at it this month. Widelands is a real-time strategy (RTS) game built on the SDL libraries and is inspired by The Settlers games from the early and mid-1990s. The Settlers I and II were made at a time when the RTS genre was still in its relative infancy, so they had different gameplay ideals from their hyperspeed cousins: a single map could take up to 50 hours of gameplay.

Thankfully, Widelands has retained this ideal, where frantic “tank-rush” tactics do not apply. Widelands takes a much slower pace, with an emphasis not on combat, but on building your home base. And, although the interface is initially hard to penetrate, it does lend itself to more advanced elements of base building, with gameplay mechanics that seem to hinge on not necessarily what is constructed, but how it is constructed. For instance, the ground is often angled. So, when you build roads, you have to take into account where they head in order for builders to be able to transport their goods quickly and easily. Elements such as flow are just about everything in this game— you almost could call it feng shui.


Installation. If you head to the Web site's Downloads section, there's an i386 Linux binary available in a tarball that's around 100MB, which I'll be running with here. For masochists (or non-Intel machines), the game's source is available farther down the page.

Download the package and extract it to a new folder (which you'll need to make yourself). Open a terminal in the new folder, and enter the command:

$ ./widelands

If you’re very lucky, it’ll work right off the bat. Chances are, you’ll get an error like this:
./widelands: error while loading shared libraries: libSDL_ttf-2.0.so.0: cannot open shared object file: No such file or directory I installed libSDL_ttf-2.0-dev, which fixed that, but then I got several other errors before I could get it to start. I had to install libSDL_gfx.so.4 and libsdl-gfx1.2-4 before it worked, but Widelands relies heavily on SDL (as do many other games), so you might as well install all of the SDL libraries while you’re there.
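
On a Debian-based system, pulling in the common SDL runtime libraries in one go looks something like the following sketch (the package names are taken from the Ubuntu repositories of this era and may differ slightly on your distribution):

$ sudo apt-get install libsdl1.2debian libsdl-ttf2.0-0 libsdl-gfx1.2-4 \
      libsdl-image1.2 libsdl-mixer1.2 libsdl-net1.2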


Usage. Once you’re in the game, the first thing you should do is head to the Single Player mode, and choose Campaign to start, as there’s a good tutorial, which you will need. While the levels are loading, hints are given to you for when you get in the game, speeding up the learning process. Controls are with the mouse and keyboard. The mouse is used for choosing various actions on-screen, and the keyboard’s arrow keys let you move the camera around the world. Left-clicking on an insignificant piece of map brings up a menu for all of the basic in-game options. Right-clicking on something usually gets rid of it.

From here on, the game is far too complex to explain in this amount of space, but it's well worth checking out the documentation and help screens for further information. Once you've finished the intro campaign, check out the game's large collection of single- and multiplayer maps. You get a choice of multiple races, including Barbarians, Empire and Atlanteans, coupled with the ability to play against the computer or against other humans (or a close approximation). It also comes with a background story to the game, and if you spend your Saturday nights playing World of Warcraft instead of going to the pub, I'm sure you'll find it very interesting.

Delve into this game, and there’s much that lies beneath the surface. It has simple things that please, like how the in-game menus are very sophisticated and solid, with none of the bugginess you get in many amateur games. But, it’s the complete reversal of hyperspeed in its gameplay that I really love. I always want to get back to building my base when playing most RTS games, but I’m constantly drawn away by fire fights. This game lets you keep building, and places serious emphasis on how you do it.

The Web site also has add-ons, such as maps, music and other tribes, along with an editor, artwork and more, so check it out. Ultimately, Widelands is a breath of fresh air in an extremely stale genre, whose roots ironically stem from way back in the past in RTS history. Whether you’re chasing a fix of that original Settlers feel or just want a different direction in RTS, this game is well worth a look.

Source of Information : Linux Journal Issue 181 May 2009

Monday, August 24, 2009

The Future of AppArmor

AppArmor has been adopted as the default Mandatory Access Control solution for both the Ubuntu and Mandriva distributions. I've sung its praises before, and as evidenced by my now writing a third column about it, clearly I'm still a fan.

But, you should know that AppArmor’s future is uncertain. In late 2007, Novell laid off its full-time AppArmor developers, including project founder Crispin Cowan (who subsequently joined Microsoft).

Thus, Novell’s commitment to AppArmor is open to question. It doesn’t help that the AppArmor Development Roadmap on Novell’s Web site hasn’t been updated since 2006, or that Novell hasn’t released a new version of AppArmor since 2.3 Beta 1 in July 2008, nearly a year ago at the time of this writing.

But, AppArmor’s source code is GPL’d: with any luck, this apparent slack in AppArmor leadership soon will be taken up by some other concerned party—for example, Ubuntu and Mandriva developers. By incorporating AppArmor into their respective distributions, the Ubuntu and Mandriva teams have both committed to at least patching AppArmor against the inevitable bugs that come to light in any major software package.

Given this murky future, is it worth the trouble to use AppArmor? My answer is an emphatic yes, for a very simple reason: AppArmor is so easy to use—requiring no effort for packages already having distribution provided profiles and minimal effort to create new profiles—that there’s no reason not to take advantage of it for however long it remains an officially supported part of your SUSE, Ubuntu or Mandriva system.

Source of Information : Linux Journal 185 September 2009

Sunday, August 23, 2009

AppArmor on Ubuntu

In SUSE’s and Ubuntu’s AppArmor implementations, AppArmor comes with an assortment of pretested profiles for popular server and client applications and with simple tools for creating your own AppArmor profiles. On Ubuntu systems, most of the pretested profiles are enabled by default. There’s nothing you need to do to install or enable them. Other Ubuntu AppArmor profiles are installed, but set to run in complain mode, in which AppArmor only logs unexpected application behavior to /var/log/messages rather than both blocking and logging it. You either can leave them that way, if you’re satisfied with just using AppArmor as a watchdog for those applications (in which case, you should keep an eye on /var/log/messages), or you can switch them to enforce mode yourself, although, of course, you should test thoroughly first.
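
Switching a profile between the two modes is done with the utilities in the apparmor-utils package. A hedged sketch, using the MySQL profile purely as an example (substitute whichever profile you're working with):

$ sudo aa-status                                    # list loaded profiles and their current modes
$ sudo aa-complain /etc/apparmor.d/usr.sbin.mysqld  # log violations but don't block them
$ sudo aa-enforce /etc/apparmor.d/usr.sbin.mysqld   # block and log violations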

Still other profiles are provided by Ubuntu’s optional apparmor-profiles package. Whereas ideally a given AppArmor profile should be incorporated into its target application’s package, for now at least, apparmor-profiles is sort of a catchall for emerging and not-quite-stable profiles that, for whatever reason, aren’t appropriate to bundle with their corresponding packages. Active AppArmor profiles reside in /etc/apparmor.d. The files at the root of this directory are parsed and loaded at boot time automatically. The apparmor-profiles package installs some of its profiles there, but puts experimental profiles in /usr/share/doc/apparmor-profiles/extras.

The Ubuntu 9.04 packages put corresponding profiles into /etc/apparmor.d. If you install the package apparmor-profiles, you’ll additionally get default protection for the packages shown. The lists in Tables 1 and 2 are perhaps as notable for what they lack as for what they include. Although such high-profile server applications as BIND, MySQL, Samba, NTPD and CUPS are represented, very notably absent are Apache, Postfix, Sendmail, Squid and SSHD. And, what about important client-side network tools like Firefox, Skype, Evolution, Acrobat and Opera? Profiles for those applications and many more are provided by apparmor-profiles in /usr/share/doc/apparmor-profiles/extras, but because they reside there rather than /etc/apparmor.d, they’re effectively disabled. These profiles are disabled either because they haven’t yet been updated to work with the latest version of whatever package they protect or because they don’t yet provide enough protection relative to the Ubuntu AppArmor team’s concerns about their stability. Testing and tweaking such profiles is beyond the scope of this article, but suffice it to say, it involves the logprof command.
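
Pulling in the extra profiles and promoting one of the experimental ones looks roughly like this sketch (the usr.bin.example profile name is hypothetical; substitute a file you actually find under extras, and expect to iterate with aa-logprof before enforcing anything):

$ sudo apt-get install apparmor-profiles
$ ls /usr/share/doc/apparmor-profiles/extras                                          # see what's available
$ sudo cp /usr/share/doc/apparmor-profiles/extras/usr.bin.example /etc/apparmor.d/    # hypothetical profile name
$ sudo aa-complain /etc/apparmor.d/usr.bin.example                                    # start in complain mode
$ sudo aa-logprof                                                                     # review logged events and tune the profile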

Source of Information : Linux Journal 185 September 2009

Saturday, August 22, 2009

AppArmor Review

AppArmor is based on the Linux Security Modules (LSMs), as is SELinux. AppArmor, however, provides only a subset of the controls SELinux provides. Whereas SELinux has methods for Type Enforcement (TE), Role-Based Access Controls (RBACs) and Multi Level Security (MLS), AppArmor provides only a form of Type Enforcement. Type Enforcement involves confining a given application to a specific set of actions, such as writing to Internet network sockets, reading a specific file and so forth. RBAC involves restricting user activity based on the defined role, and MLS involves limiting access to a given resource based on its data classification (or label).

By focusing on Type Enforcement, AppArmor provides protection against, arguably, the most common Linux attack scenario—the possibility of an attacker exploiting vulnerabilities in a given application that allows the attacker to perform activities not intended by the application's developer or administrator. By creating a baseline of expected application behavior and blocking all activity that falls outside that baseline, AppArmor (potentially) can mitigate even zero-day (unpatched) software vulnerabilities.

What AppArmor cannot do, however, is prevent abuse of an application's intended functionality. For example, the Secure Shell dæmon, SSHD, is designed to grant shell access to remote users. If an attacker figures out how to break SSHD's authentication by, for example, entering just the right sort of gibberish in the user name field of an SSH login session, resulting in SSHD giving the attacker a remote shell as some authorized user, AppArmor may very well allow the attack to proceed, as the attack's outcome is perfectly consistent with what SSHD would be expected to do after successful login. If, on the other hand, an attacker figured out how to make the CUPS print services dæmon add a line to /etc/passwd that effectively creates a new user account, AppArmor could prevent that attack from succeeding, because CUPS has no reason to be able to write to the file /etc/passwd.

Source of Information : Linux Journal 185 September 2009

Friday, August 21, 2009

Why Buy a $350 Thin Client?

On August 10, 2009, I’ll be at a conference in Troy, Michigan, put on by the LTSP (Linux Terminal Server Project, www.ltsp.org) crew and their commercial company (www.disklessworkstations.com). The mini-conference is geared toward people considering thin-client computing for their network. My talk will be targeting education, as that’s where I have the most experience.

One of the issues network administrators need to sort out is whether a decent thin client, which costs around $350, is worth the money when full-blown desktops can be purchased for a similar investment. As with most good questions, there really isn't just one answer. Thankfully, LTSP is very flexible with the clients it supports, so whatever avenue is chosen, it usually works well. Some of the advantages of actual thin-client devices are:

1. Setup time is almost zero. The thin clients are designed to be unboxed and turned on.

2. Because modern thin clients have no moving parts, they very seldom break down and tend to use much less electricity compared to desktop machines.

3. Top-of-the-line thin clients have sufficient specs to support locally running applications, which takes load off the server without sacrificing ease of installation.

4. They look great.

There are some advantages to using full desktop machines as thin clients too, and it's possible they will be the better solution for a given install:

1. Older desktops often can be revitalized as thin clients. Although a 500MHz computer is too slow to be a decent workstation, it can make a very viable thin client.

2. Netbooks like the Eee PC can be used as thin clients and then used as notebook computers on the go. It makes for a slightly inconvenient desktop setup, but if mobility is important, it might be ideal for some situations.

3. It’s easy to get older computers for free. Even with the disadvantages that come with using old hardware, it’s hard to beat free.

Thankfully, with the flexibility of LTSP, any combination of thin clients can coexist in the same network. If you’re looking for a great way to manage lots of client computers, the Linux Terminal Server Project might be exactly what you need. I know I couldn’t do my job without it.

Thursday, August 20, 2009

NON-LINUX FOSS - Moonlight

Moonlight is an open-source implementation of Microsoft's Silverlight. In case you're not familiar with Silverlight, it's a Web browser plugin that runs rich Internet applications. It provides features such as animation, audio/video playback and vector graphics.

Moonlight programming is done with any of the languages compatible with the Mono runtime environment. Among many others, these languages include C#, VB.NET and Python. Mono, of course, is a multiplatform implementation of ECMA’s Common Language Infrastructure (CLI), aka the .NET environment. A technical collaboration deal between Novell and Microsoft has provided Moonlight with access to Silverlight test suites and gives Moonlight users access to licensed media codecs for video and audio. Moonlight currently supplies stable support for Silverlight 1.0 and Alpha support for Silverlight 2.0.


Silverlight Pad Running on Moonlight (from www.mono-project.com)

Source of Information : Linux Journal 185 September 2009

Wednesday, August 19, 2009

gWaei—Japanese-English Dictionary

gwaei.sourceforge.net
Students of Japanese have had a number of tools available for Linux for some time, but here's a project that updates the situation and brings several elements together from other projects to form one sleek application. In the words of the gWaei Web site:

gWaei is a Japanese-English dictionary program for the GNOME desktop. It is made to be a modern drop-in replacement for Gjiten with many of the same features. The dictionary files it uses are from Jim Breen’s WWWJDIC Project and are installed separately through the program.

It features the following:

• Easy dictionary installation with a click of a button.

• Support for searching using regular expressions.

• Streams results so the interface is never frozen.

• Click Kanji in the results pane to look at information on it.

• Simple interface that makes sense.

• Intelligent design; Tab switches dictionaries.

• Organizes relevant matches to the top of the results.


Installation. If you head to the Web site's download section, there are gWaei packages available in deb, RPM and source tarball format. For me, the deb installed with no problems, so I ran with that. I couldn't track down all of the dependencies for the source version myself, but the Web site says you need the following packages, along with their -dev counterparts: gtk+-2.0, gconf-2.0, libcurl, libgnome-2.0 and libsexy. The documentation also says that compiling the source is the standard fare of:

$ ./configure
$ make
$ sudo make install

After installation, I found gWaei in my menu under Applications -> Utilities -> gWaei Japanese-English Dictionary. If you can’t find gWaei in your menu, enter the command:

$ gwaei

Usage. Once gWaei starts, the first thing you see is a Settings window that's broken into three tabs: Status, Install Dictionaries and Advanced. Status tells you how things are currently set up, and to start off with, all you'll see is Disabled. Click the Install Dictionaries tab, and you'll see that there are buttons already set up to install new dictionaries, called Add, for English, Kanji, Names and Radicals. Once these are all installed, each of them will be changed to Enabled back in the Status tab. After these are installed, click Close, and you are in the program. The first place you should go is the search bar.

Enter something in English or in Romaji (Japanese with the Latin alphabet we use), and meanings and translations appear in the large field below with a probable mix of kanji and kana, and an English translation. You also can enter searches in kana and kanji, but my brother has my Japanese keyboard, so I couldn’t really try it out. For a really cool feature, click Insert -> Using Kanjipad, and a blank page comes up where you can draw kanji characters by hand with your mouse. Various kanji characters then appear on the right and update, depending on how many strokes you make and their shape. If you click Insert -> Using Radical Search Tool, you can search for radicals on basic kanji characters, which also can be restricted by the number of strokes. All in all, gWaei is a great program with elegant simplicity, and it has the features you need, whether you’re in Japan or the West (or anywhere else that’s not Japan for that matter). If you’re a Japanese student, this should be standard issue in your arsenal.


The coolest feature in gWaei is this kanji pad, where you can draw kanji with your mouse, and the computer dynamically alters the selection based on your strokes.


Source of Information : CPU Magazine 07 2009

Tuesday, August 18, 2009

NON-LINUX FOSS

In our second Upfront installment highlighting non-Linux FOSS projects, we present SharpDevelop. SharpDevelop (aka #Develop) is an IDE for developing .NET applications in C#, F#, VB.NET, Boo and IronPython. SharpDevelop includes all the stuff you’d expect in a modern IDE: syntax highlighting, refactoring, forms designer, debugger, unit testing, code coverage, Subversion support and so on. It runs on all modern versions of the Windows platform. SharpDevelop is a “real” FOSS project; it’s not controlled by any big sinister corporation (and we all know who I’m talking about). It has an active community and is actively upgraded. At the time of this writing, version 3.0 just recently has been released. Even if you use only Linux, you may be indirectly using SharpDevelop. If you use any Mono programs, they probably were developed using the MonoDevelop IDE. MonoDevelop was forked from SharpDevelop in 2003 and ported to GTK.


SharpDevelop Running on Vista (from www.icsharpcode.net)


Source of Information : Linux Journal Issue 182 June 2009

Monday, August 17, 2009

WHAT’S NEW IN KERNEL DEVELOPMENT

An effort to change the license on a piece of code hit a wall recently. Mathieu Desnoyers wanted to migrate from the GPL to the LGPL on some userspace RCU code. Read-Copy Update is a way for the kernel to define the elements of a data object, without other running code seeing the object in the process of formation. Mathieu’s userspace version provides the same service for user programs. Unfortunately, even aside from the usual issue of needing permission from all contributors to change the license of their contribution, it turns out that IBM owns the patent to some of the RCU code concepts, and it has licensed the patent for use only in GPLed software. So, without permission from IBM, Mathieu can get permission from all the contributors he wants and still be stuck with the GPL.

Loadlin is back in active development! The venerable tool boots Linux from a directory tree in a DOS partition, so all of us DOS users can experiment with this new-fangled Linux thing. To help us with that, Samuel Thibault has released Loadlin version 1.6d and has taken over from Hans Lermen as official maintainer of the code. The new version works with the latest Linux kernels and can load up to a 200MB bzImage. He's also migrated development into a mercurial repository. (Although not as popular as git with kernel developers, mercurial does seem to have a loyal following, and there's even a book available at hgbook.red-bean.com.) After seven years of sleep, here's hoping Loadlin has a glorious new youth, with lots of new features and fun. It loads Linux from DOS! How cool is that?

Hirofumi Ogawa has written a driver for Microsoft's exFAT filesystem, for use with large removable Flash drives. The driver is read-only, based on reverse-engineering the filesystem on disk. There don't seem to be immediate plans to add write support, but that could change in a twinkling, if a developer with one of those drives takes an interest in the project. Hirofumi has said he may not have time to continue work on the driver himself.

Meanwhile, Boaz Harrosh has updated the exofs filesystem. Exofs supports Object Storage Devices (OSDs), a type of drive that implements normal block device semantics, while at the same time providing access to data in the form of objects defined within other objects. This higher-level view of data makes it easier to implement fine-grained data management and security. Boaz's updates include some ext2 fixes that still apply to the exofs codebase, as exofs originally was an ext2 fork. He also abandoned the IBM API in favor of supporting the open-osd API instead.

Adrian McMenamin has posted a driver for the VMUFAT filesystem, the SEGA Dreamcast filesystem that runs on the Dreamcast visual memory unit. Using his driver, he was able to manage data directly on the Dreamcast. At the moment, the driver code does seem to have some bugs, and other problems were pointed out by various people. Adrian has been inspired to do a more intense rewrite of the code, which he intends to submit a bit later than he'd first anticipated.

A new source of controversy has emerged in Linux kernel development. With the advent of pocket devices that are intended to power down when not in use, or at least go into some kind of power-saving state, the whole idea of suspending to disk and suspending to RAM has become more complicated. It's not obvious whether the kernel or userspace should be concerned with analyzing the sleep-worthiness of the various parts of the system, or how much the responsibility should be shared between them. There seem to be many opinions, all of which rest on everyone's idea of what is appropriate as well as on what is feasible. The kernel is supposed to control all hardware, but the X Window System controls hardware and is not part of the kernel. So, clearly, exceptions exist to any general principles that might be involved. Ultimately, if no obvious delineation of responsibility emerges, it's possible folks may start working on competing ideas, like what happened initially with software suspend itself.

Source of Information : Linux Journal Issue 182 June 2009

Sunday, August 16, 2009

Content Management Systems

Apart from the ISO images of four FOSS distributions in this month’s DVD, we have also managed to pack in some of the best content management systems (CMS). We hope you deploy and test them all. Well, if you really do, let us know your feedback on them, or write a comparison article if you have the time :-)

Drupal is a FOSS modular framework and CMS written in PHP. It is used as a back-end system for many different types of websites, ranging from small personal blogs to large corporate and political sites. The standard release of Drupal, known as “Drupal core”, contains basic features common to most CMSs. These include the ability to register and maintain individual user accounts, administration menus, RSS-feeds, customizable layout, flexible account privileges, logging, a blogging system, an Internet forum, and options to create a classic brochure-ware website or an interactive community website.

Joomla CMS enables you to build websites and powerful online applications. Many aspects, including its ease-of-use and extensibility, have made Joomla the most popular website software available. Best of all, Joomla is an open source solution that is freely available to everyone.

WebGUI is a platform for managing all your Web-based content and applications. WebGUI is modular, powerful, secure, and user-friendly. Most users find themselves managing content within hours, and developers can easily plug in functionality to maximise a site's potential. It is an easy-to-use content management system that has the ability to create and install custom applications. With WebGUI, you can publish articles, participate in forums, create photo galleries and even create interactive event calendars.

WordPress is a state-of-the-art Web publishing platform with a focus on aesthetics, Web standards, and usability. It’s arguably the de-facto blogging platform.

TYPO3 is a free and open source content management system written in PHP. TYPO3 offers full flexibility and extendibility while featuring an accomplished set of readymade interfaces, functions and modules. The system is based on templates. People can choose an existing template and change features such as logo, colours, and fonts, or they can construct their own templates using a configuration language called TypoScript.

Mambo (formerly named Mambo Open Source or MOS) is a free software/open source content management system (CMS) for creating and managing websites through a simple Web interface. It has attracted many users due to its ease of use. Mambo includes advanced features such as page caching to improve performance on busy sites, advanced templating techniques, and a fairly robust API. It can provide RSS feeds and automate many tasks, including web indexing of static pages.

e107 is a content management system written in PHP and using the popular open source MySQL database system for content storage. It’s completely free, totally customizable and in constant development.

XOOPS is an extensible, object oriented, easy to use dynamic Web CMS written in PHP. XOOPS is an ideal tool for developing small to large dynamic community websites, intra company portals, corporate portals, blogs and much more.

Plone is a free and open source CMS built on top of the Zope application server. It is suited for an internal website or may be used as a server on the Internet, playing such roles as a document publishing system and groupware collaboration tool. Plone is designed to be extensible.

OpenCms is a professional, easy-to-use website CMS. The fully browser-based user interface features configurable editors for structured content with well-defined fields. Alternatively, content can be created using an integrated WYSIWYG editor similar to well known office applications. A sophisticated template engine enforces a site-wide corporate layout and W3C standard compliance for all content.

Moodle is a Learning Management System (LMS). It is a free Web application that educators can use to create effective online learning sites. Its open source licence and modular design mean that people can develop additional functionality.

Source of Information : Linux For You May 2009

Saturday, August 15, 2009

Balancing Traffic Across Data Centres Using LVS

The LVS (Linux Virtual Server) project was launched in 1998 and is meant to eliminate Single Points of Failure (SPOFs). According to the linuxvirtualserver.org website: “LVS is a highly scalable and available server built on a cluster of real servers, with the load balancer running on Linux. The architecture of the server cluster is fully transparent to the end user, and the users interact as if it were a single high-performance virtual server. The real servers and the load balancers may be interconnected by either a high speed LAN or by a geographically dispersed WAN.”

The load balancer is the single entry point into the cluster. The client connects to a single known IP address, and then inside the virtual server the load balancer redirects the incoming connections to the server(s) that actually do the work, according to the scheduling algorithm chosen. The nodes of the cluster (real servers) can be transparently added or removed, providing a high level of scalability. LVS detects node failures on-the-fly and reconfigures the system accordingly, automatically, thus providing high availability. Theoretically, the load balancer can run either the IPVS or the KTCPVS technique for load balancing, but owing to the very high stability of IPVS, it is used in almost all the implementations I have seen. See the sidebar titled “IPVS v/s KTCPVS” for a brief note on the differences between the two: in short, IPVS provides Layer 4 load balancing and KTCPVS provides Layer 7 load balancing.

There are three load balancing techniques used in IPVS:
LVS/NAT – Virtual Server via NAT
LVS/TUN – Virtual Server via Tunnelling
LVS/DR – Virtual Server via Direct Routing



IPVS v/s KTCPVS
IPVS or IP Virtual Server is an implementation of Layer 4 load balancing inside the Linux kernel. Layer 4 load balancing works on OSI Layer 4 (Transport Layer) and distributes requests to the servers at the transport layer without looking at the content of the packets.

KTCPVS or Kernel TCP Virtual Server is an implementation of Layer 7 load balancing in the Linux kernel. Layer 7 load balancing is also known as application-level load balancing. The load balancer parses requests in the application layer and distributes requests to servers based on the content. The scalability of Layer 7 load balancing is not high because of the overhead of parsing the content.



IPVS Load Balancing Techniques
LVS/NAT: This technique is one of the simplest to set up but can place an extra load on the load balancer, because the load balancer needs to rewrite both the request and the response packets. The load balancer also needs to act as the default gateway for all the real servers, which means the real servers cannot be in a geographically different network. The packet flow in this technique is as follows (a minimal ipvsadm sketch is shown after the list):

• The load balancer examines the destination address and port number on all incoming packets from the client(s) and verifies if they match any of the virtual services being served.

• A real server is selected from the available ones according to the scheduling algorithm, and the new connection is added to the hash table that records established connections.

• The destination address and port numbers on the packets are rewritten to match that of the real server and the packet is forwarded to the real server.

• After processing the request, the real server passes the packets back to the load balancer, which then rewrites the source address and port of the packets to match those of the virtual service and sends them back to the client.
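
Configured with the ipvsadm tool, a round-robin LVS/NAT virtual service looks roughly like the following sketch (the 192.168.1.100 virtual IP and the 10.0.0.x real-server addresses are made up for illustration; -m selects masquerading, that is, NAT):

# ipvsadm -A -t 192.168.1.100:80 -s rr               # define the virtual HTTP service with round-robin scheduling
# ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.1:80 -m   # add real server 1 via NAT
# ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.2:80 -m   # add real server 2 via NAT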

LVS/DR: DR stands for Direct Routing. This technique works by rewriting the destination MAC address (a form of MAC spoofing) and demands that the load balancer and the real servers each have a NIC on the same IP network segment as well as the same physical segment. In this technique, the virtual IP address is shared by the load balancer as well as all the real servers. Each real server has a loop-back alias interface configured with the virtual IP address. This loop-back alias interface must be NOARP so that it does not respond to any ARP requests for the virtual IP. The port number of incoming packets cannot be remapped, so if the virtual server is configured to listen on port 80, then the real servers also need to serve on port 80. The packet flow in this technique is as follows (a configuration sketch follows the list):

• The load balancer receives the packet from the client and changes the MAC address of the data frame to one of the selected real servers and retransmits it on the LAN.

• When the real server receives the packet, it realises that this packet is meant for the address on one of its loopback aliased interfaces.

• The real server processes the request and responds directly to the client.
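
A hedged sketch of the DR setup, again with made-up addresses (-g selects the direct-routing method; depending on the kernel you may also need the arp_ignore/arp_announce sysctls on the real servers to keep them from answering ARP for the virtual IP):

On the load balancer:
# ipvsadm -A -t 192.168.1.100:80 -s rr
# ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -g

On each real server:
# ifconfig lo:0 192.168.1.100 netmask 255.255.255.255 -arp up   # loop-back alias with ARP disabled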

LVS/TUN: This is the most scalable technique. It allows the real servers to be present in different LANs or WANs because the communication happens with the help of the IP tunnelling protocol. The IP tunnelling allows an IP datagram to be encapsulated inside another IP datagram. This allows IP datagrams destined for one IP address to be wrapped and redirected to a different IP address. Each real server must support the IP tunnelling protocol and have one of its tunnel devices configured with the virtual IP. If the real servers are in a different network than the load balancer, then the routers in their network need to be configured to accept outgoing packets with the source address as the virtual IP. This router reconfiguration needs to be done because the routers are typically configured to drop such packets as part of the anti-spoofing measures. Like the LVS/DR method, the port number of incoming packets cannot be remapped. The packet flow in this technique is as follows:

• The load balancer receives the packet from the client, encapsulates it within an IP datagram and forwards it to the real server selected by the scheduling algorithm.

• The real server receives the packet, decapsulates it and finds that the inner packet's destination IP matches the virtual IP configured on one of its tunnel devices.

• The real server processes the request and returns the result directly to the user.
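
A minimal sketch, again with placeholder addresses; the tunl0 device comes from the ipip module, and disabling reverse-path filtering on it is an assumption about a typical setup rather than something mandated by the article.

# on the load balancer
ipvsadm -A -t 192.168.1.100:80 -s wlc
ipvsadm -a -t 192.168.1.100:80 -r 203.0.113.10:80 -i     # -i selects IP-IP tunnelling
ipvsadm -a -t 192.168.1.100:80 -r 198.51.100.20:80 -i

# on each real server: bring up the IP-IP tunnel device and give it the VIP
modprobe ipip
ip addr add 192.168.1.100/32 dev tunl0
ip link set tunl0 up
sysctl -w net.ipv4.conf.tunl0.rp_filter=0    # keep reverse-path filtering from dropping decapsulated packets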

Source of Information : Linux For You May 2009

Friday, August 14, 2009

Containing Linux Instances with OpenVZ

Understanding the OpenVZ way of virtualisation and getting started with it.

Virtualisation is going mainstream, with many predicting that it will expand rapidly in the next few years. Virtualisation is a term that can refer to many different techniques. Most often, it is software that presents virtual hardware on which other software can run. Virtualisation is also done at the hardware level, as in IBM mainframes or in the latest CPUs that feature the VT or SVM technologies from Intel and AMD, respectively. Although a fully featured virtual machine can run unmodified operating systems, there are other techniques in use that provide more specialised virtual machines, which are nevertheless very useful.


Performance and virtualisation
The x86 architecture is notorious for its virtualisation-unfriendly nature. Explaining why this is the case would require a separate article. For a long time, the only way to virtualise x86 hardware was to emulate it at the instruction level or to use methods like binary translation and binary patching at runtime. Well-known software in this arena includes QEMU, VMware and the previously well-known Bochs. These programs emulate a full PC and can run unmodified operating systems.

The recent VT and SVM technologies provided by Intel and AMD, respectively, do away with the need to interpret/patch guest OS instruction streams. Since these recent CPUs provide hardware-level virtualisation, the virtualisation solution can trap into the host OS for any privileged operation that the guest is trying to execute. Although running unmodified operating systems definitely has its advantages, there are times when you just need to run multiple instances of Linux, for example. Then why emulate the whole PC? VT and SVM technologies virtualise the CPU very well, but the various buses and the devices sitting on them need to be emulated. This hits the performance of the virtual machines.

As an example, let us take the cases of QEMU, Xen, KVM and UML. This comparison is kind of funny, since the people who wrote these programs never wanted them to end up in a table like Table 1. It is like comparing apples to oranges, but all we want to understand from the table is whether each VMM can run an unmodified operating system, at what level it runs, and how its performance compares to running natively.


Introducing OpenVZ
Let us suppose you want to run only Linux, but want to make full use of a physical server. You might run multiple instances of Linux for hosting, education or testing environments, for example. But do you have to emulate a full PC to run these multiple virtual instances? Not really. A solution like User Mode Linux (UML) lets you run Linux on the Linux kernel, where each Linux is a separate, isolated instance. To get a simplified view of a Linux system, consider the three crucial components that make up a system: the kernel, the root filesystem, and the processes. The kernel is, of course, the core of the operating system; the root filesystem holds the programs and the various configuration files; and the processes are running instances of programs created from binaries on the root filesystem as the system boots up and runs.

In UML, there is a host system and then there are guests. The host system has a kernel, a root file system and its set of processes. Each guest likewise has its own kernel, root file system and set of processes.

Under OpenVZ, things are a bit different. There is a single kernel and there are multiple root file systems. The guests' root file systems are directory trees under the host file system. A guest under OpenVZ is called a Virtual Environment (VE) or Virtual Private Server (VPS). Each VPS is identified by a name or a number, where VPS 0 is the host itself. Processes created by these VEs remain isolated from one another: if VPS 101 creates five processes and VPS 102 creates seven, they can't 'see' each other. This may sound a lot like chroot jails, but you must note the differences as well. A chroot jail provides only filesystem isolation; the processes in a chroot jail still share the process, network and other namespaces with the host. For example, if you run ps -e from a chroot jail, you still see the list of system-wide processes. If you run a socket program from the chroot environment and listen on localhost, you can connect to it from outside the chroot jail. This simply means there is no isolation at the process or network level. You can also verify this by running netstat -a from the chroot jail: you will see the status of system-wide networking connections.

OpenVZ is rightly called a container technology. With OpenVZ, there is no real virtual machine. The OpenVZ kernel is a modified Linux kernel that isolates namespaces and separates the processes created by one VPS from those of another. By doing so, the overhead of running multiple kernels is avoided and maximum performance is obtained. In fact, the worst-case overhead compared to native performance in OpenVZ is said to rarely exceed 3 per cent. So, on a server with a few gigabytes of RAM, it is possible to run tens of VPSs and still get decent performance. Since there is only one kernel to deal with, memory consumption is also kept in check.


User bean counters
OpenVZ is not just about the isolation of processes. There are various resources on a computer system that processes compete for. These are resources like CPU, memory, disk space and at a finer level, file descriptors, sockets, locked memory pages and disk blocks, among others. At a VPS level, it is possible in OpenVZ to let the administrator set limits for each of these items so that resources can be guaranteed to VPSs and also to ensure that no VPS can misuse available resources. OpenVZ developers have chosen about 20 parameters that can be tuned for each of the VPSs.
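
As an illustration, per-VPS limits are set with the vzctl tool. The parameter names below are standard vzctl/UBC options, but the VPS ID 101 and the values are placeholders; current usage, limits and failure counts can be inspected in /proc/user_beancounters.

vzctl set 101 --numproc 200:220 --save           # barrier:limit on the number of processes
vzctl set 101 --privvmpages 65536:69632 --save   # private virtual memory pages
vzctl set 101 --diskspace 1048576:1153434 --save # disk space, in 1 KB blocks (soft:hard)
cat /proc/user_beancounters                      # inspect current counters and limits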


The OpenVZ fair scheduler
Just as various resources are guaranteed to VPSs, CPU time for a VPS can also be guaranteed. It is possible to specify the minimum CPU units a VPS will receive. To make sure this happens, OpenVZ employs a two-level scheduler. The first level fair scheduler makes sure that no VPS is starved of its minimum CPU guarantee time. It basically selects which VPS has to run on the CPU next. At the scheduler level, a VPS is just a set of processes. Then, this set is passed on to the regular Linux kernel scheduler and one from the set is scheduled to run. In a VPS Web hosting environment, the hosting provider can thus guarantee the customer some minimum CPU power.
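
For example, the CPU guarantee and an optional hard cap can be set per VPS with vzctl (the ID and the numbers are placeholders):

vzctl set 101 --cpuunits 1000 --save   # relative share of CPU time guaranteed to VPS 101
vzctl set 101 --cpulimit 25 --save     # optional hard cap of 25 per cent of CPU time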


Installing OpenVZ
To install OpenVZ and have it work, you need to download or build an OpenVZ kernel, and download pre-built OpenVZ tools or build them yourself. Installing the OpenVZ tools also installs the init scripts that take care of setting up OpenVZ: during system start-up and shutdown, VEs are automatically started and stopped along with the Hardware Node (HN). Once the tools are installed, you will see that a directory named 'vz' has been created in the root directory, containing further sub-directories. On a production server, you may want /vz on a separate partition. A short vzctl session is sketched below.
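
A minimal sketch of creating and starting a VE, assuming an OS template has already been placed in the template cache; the template name, VPS ID and addresses are hypothetical.

vzctl create 101 --ostemplate centos-5-x86 --config vps.basic   # template name is a placeholder
vzctl set 101 --ipadd 192.168.1.101 --save
vzctl set 101 --hostname vps101.example.com --save
vzctl start 101
vzctl exec 101 ps -e     # run a command inside the VE
vzctl enter 101          # get a shell inside the VE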

Source of Information : Linux For You May 2009

Thursday, August 13, 2009

PhotoRec - Recovering Deleted Files Easily

Here’s a step-by-step guide to an application called PhotoRec, which helps you recover deleted data.

While most of us do regular back-ups of important data, some just postpone the back-up date until that dreaded day arrives—unexpectedly, in the blink of an eye, you suddenly realise all that precious data you gathered over months has disappeared. Maybe it was just 'the wrong key' that got pressed. Well, it's time to start pulling your hair out and sweating profusely. May the Lord rescue you if your boss has a bad temper. Well, don't lose hope; there's PhotoRec to the rescue. It supports a huge list of about 140 different file types, including JPEG, MID, SQLite, Real Audio, MP3, .doc, Macromedia, .exe, .flv, VMware images, .chm, .bz2, AutoCAD, RAR and Adobe Photoshop images. This tutorial will help you recover your data without emptying your pockets—and hey, I hope it'll put the smile back on your face once you're out of your dire straits. However, you will have to spend some time searching for that important file.

PhotoRec is an open source multiplatform application distributed under the GPL. It is a companion program to TestDisk, an application for the recovery of lost partitions for a variety of file systems and to make disks bootable again. Apart from Linux, PhotoRec supports the following operating systems:

DOS/Win9x
Windows NT 4/2000/XP/2003/Vista
FreeBSD, NetBSD, OpenBSD
Sun Solaris
Mac OS X
UNIX

PhotoRec can recover lost files from the following file systems:
FAT/FAT32
NTFS
EXT2/EXT3
HFS+
ReiserFS (does not work very well with this file system)


Getting PhotoRec
While most distributions include TestDisk (which, in turn, includes PhotoRec) in their repositories, you can download the source file or the RPM for your distro from www.cgsecurity.org/wiki/TestDisk_Download. Alternatively, you can go for PartedMagic (about 90 MB in size), which contains TestDisk and a host of other utilities. This is available at downloads.sourceforge.net/partedmagic/pmagic-3.7.iso.zip.


The road to recovery
You can use PhotoRec to recover data or pictures that have been deleted from a pen drive. You can also recover data from one partition of a hard disk and save it to another partition on the same disk. The only condition is that the partition to which data will be saved should be equal to or larger than the partition from which data is being recovered (the commands below show one quick way to check). You will require a card reader for digital camera flash cards. External hard disks require a suitable USB enclosure; alternatively, you can connect hard disks to an internal slot.
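
For instance, a quick sanity check before starting (the device and mount-point names here are placeholders):

fdisk -l /dev/sdb   # list the partitions and sizes on the source disk
df -h /home         # confirm free space on the partition that will hold the recovered files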

Step 1 : Create a directory called photorec_files where PhotoRec will save the recovered files. Connect the flash/hard disk drive to your USB port/internal port (or just select your internal drive if you want to recover data from a partition) and fire up PhotoRec from the terminal as the superuser:

[root@localhost ~]# photorec

PhotoRec will display all your hard disks and USB drives. Choose the drive from which data needs to be recovered.

Step 2 : Up next is to choose the partition table type. PhotoRec supports a number of partition table types—Intel/PC, Apple, Sun Solaris, XBox, EFI GPT partition and ‘None’ partition types. Choose the Intel/PC type, which most of us use anyway. Even if you have a single partition, do not choose the ‘None’ option.

Step 3 : The next screen offers the option to recover data from the whole disk or to select a partition. Choose your option using the up/down arrow keys. In the case of a disk with multiple partitions, PhotoRec will display all the partitions, similar to what the fdisk -l command does. Select the partition that contains the deleted data using the up/down arrow keys.

Step 4 : We now come to the most important step of the recovery process. Select the 'File Opt' option using the left/right arrow keys and press Enter. PhotoRec puts forth a huge list of about 140 different file types that can be recovered. Use the up/down arrow keys to move between entries, and use the Space bar to select [x] or unselect [ ] the file types to be recovered. For instance, to recover only picture files, choose 'jpg', 'gif', 'dsc', etc. Select 'Quit' when done. This takes you back to the previous screen. Select the 'Search' option.

Step 5 : To recover lost/deleted files, PhotoRec needs to know the file system type your files were stored in. Options include ‘ext2/ext3’ and ‘Other’. Choose ‘Other’ for FAT, NTFS, ReiserFS.

Step 6 : The next screen gives you the option of recovering data from the whole partition/disk or only from the 'free space' of the partition. This 'free space' is the unallocated area that still holds the blocks of deleted files [see en.wikipedia.org/wiki/Inode]. Choose the 'free space' option using the up/down arrow keys.

Step 7 : PhotoRec now needs to know the destination folder to save the recovered files. In Step 1, we created a folder called photorec_files. Navigate to your path using the up/down arrow keys and press Enter. Mine was /hdc6/home/nelson/photorec_files. If no path is provided, the default directory is /root.

Your recovered files will be saved in the destination directory in a number of folders recup_dir.1, recup_dir.2, etc. In a single recup_dir, you will find zip files, doc files, jpg files, etc, if you have chosen to recover these files.

Searching for a file through those that are lost can be a real pain. You need to sort these files out. Here is what you can do to sort out zip files. Make a directory for zip files as follows:

mkdir /home/user/Zip

Now as the root user:

mv /home/user/recup_dir.1/*.zip /home/user/Zip

Alternatively, issue the following command:

mv /home/user/recup_dir.*/*.zip /home/user/Zip

You can similarly repeat the steps for other file types. Of particular interest is how to sort out picture files. Let us separate those little thumbnail pictures from your ‘real’ ones. Again, create a directory for small pictures with the code below:

mkdir ~/small_jpg

Now, as the root user, issue the following command:

find /home/user/recup_dir.1/ -name "*.jpg" \
-size -20k | xargs -I {} mv {} /home/user/small_jpg

This will find all JPEG files smaller than 20 KB and move them to /home/user/small_jpg.
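
If you would rather sort everything by extension in one go, a small shell loop along these lines works; the sorted/ destination directory is just an illustrative name.

cd /home/user
for f in recup_dir.*/*.*; do
    ext=${f##*.}              # file extension, e.g. jpg, zip, doc
    mkdir -p sorted/"$ext"
    mv "$f" sorted/"$ext"/
done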

The bottom line
I found almost all of my deleted photographs and various other files with a little sweat (manually opening each file till I got the one I was really looking for) using PhotoRec. Anyone looking at a launch pad for a career in forensics?

Source of Information : Linux For You May 2009

Wednesday, August 12, 2009

Run Linux on Windows - Ulteo Virtual Desktop

We looked at how to use a feature in Ubuntu that lets you install it just like a typical Windows application. Once installed, you can simply reboot your system into the world of Linux— there’s no partitioning and other techie fiddling around required. The downside there was the “reboot to use Linux”. Well, what if I tell you that there’s a way to run Linux from inside Windows without a reboot? No, I’m certainly not talking about virtual machines and virtualization here, but something as simple as a typical click-next application installation procedure. Allow me to introduce you to something called Ulteo Virtual Desktop [www.ulteo.com], an open source application that nicely integrates into your Windows operating system and allows you to work on a full Linux system. Its main benefit is that you can run Linux and Windows applications simultaneously within the same desktop environment without rebooting the system.

Ready, steady, go!
Before starting the installation, let’s look at the hardware and software requirements. According to the website [www.ulteo.com], “Ulteo Virtual Desktop requires a x86-based PC with a modern 32-bit CPU and at least 512 MB RAM. A minimum of 4 GB of free HD space is required.” This is certainly not asking too much, I guess. My test system has the following specs:
Pentium 4 with HT technology
An Intel 865 motherboard
1 GB RAM
You can download the Ulteo VD from its website. There are two types of Ulteo products—Ulteo Application System is an installable live CD that offers a Windows alternative, and Ulteo Virtual Desktop is a coLinux-based Virtualised Ulteo workstation, which you can install on Windows. Its setup file is about 510 MB in size. It supports a full range of Linux applications, like Firefox, OpenOffice.org, KPDF, Skype, Thunderbird, The GIMP, Inkscape and many others. After downloading it successfully, double click on the set-up (.exe) file. You will be greeted by a welcome screen.

Follow the on screen instructions, and if everything goes fine, within five minutes Ulteo Virtual Desktop will be installed on your Windows system. Could getting started with Linux have been any easier? It’s like installing any application on Windows, with an additional benefit: it doesn’t even require a Windows reboot to work properly. The virtual desktop contains a virtual filesystem of around 5 GB, as a disk file in your Windows directories where “everything will happen”.

Run the Ulteo virtual desktop and you will see a panel at the top of the screen. You can browse through all the Linux applications by using the drop-down menu. Of course, you can configure the panel according to your liking by using the Configure Panel option. I guess you've figured out by now that running Linux applications alongside Windows apps is a piece of cake.

The default user will be created with the user name ‘me’. The password for the user ‘me’ is, well, ‘me’ by default! The ‘root’ user password is ‘root’. Ulteo Virtual Desktop contains many useful commands to work on the Linux Command Line Interface (CLI). You can also start KDE by using the startkde command. I won’t carry on more about this and that feature, but request you to explore the world of Linux applications yourself. Once you get used to them, I’m sure you’ll take a step in the right direction.

Of course, whenever you plan to uninstall Ulteo VD from Windows because you want to install an independent Linux operating system on your PC, go to the Windows Control Panel; under the Add or Remove Programs section, you will find Ulteo VD. Uninstall it the way you uninstall any other Windows software. However, before you do this, you'd better have a separate Linux installation on a dedicated hard disk partition, or else...

Source of Information : Linux For You May 2009

Tuesday, August 11, 2009

PCLinuxOS 2009.1

PCLinuxOS 2009.1 has finally been released after a wait of almost two years. We take a look at how the new version of this former Distrowatch topper fares.

It's only fair to first say a few words about myself. I was a hardcore Windows user until about two years back. Since then, although I have been running Mandriva on my machine, I remained predominantly a Windows user because I am a professional gamer. As is typical with most OS migrators (especially from Windows), KDE was my very first choice to test drive, and I have stayed glued to it till date. Hence, it's no wonder that I was asked to review PCLinuxOS 2009.1 the moment our IT admin downloaded it (owing to its similarity with Mandriva, and because it runs KDE by default).

Background check
PCLinuxOS is a GNU/Linux distribution that was built on Mandriva and was launched as a set of RPM packages. Bill Reynolds (a.k.a. Texstar) had created PCLOS as a fulfillment of his wish to package source code without having to deal with the rest of the world. It later evolved into a complete desktop operating system with its own unique set of features. The distro’s stability and ease of use made it popular pretty fast and even topped the distro ranking at Distrowatch in 2007.

Configuration of the test machine
I ran it on a pretty low-end laptop, a Compaq Presario C300, with the following specs:
Motherboard: Intel 915
Processor: Celeron M 1.67 GHz
Memory: 512 MB DDR
Hard Disk: 60 GB SATA

The installation
After I booted the machine off the LiveCD, I was presented with the option of booting the LiveCD and a few safe modes. There was no option to install PCLOS 2009.1 right away. Anyway, before loading the operating system, I was asked for my keyboard layout and was then taken directly to the log-in screen. The default IDs are, as usual, 'guest' and 'root', with their respective passwords mentioned at the top left of the screen. I logged in as the root user right away because I wanted to install this baby on my machine, ASAP.

While you log in, you are presented with a loading screen that shows various icons symbolising each of PCLOS' major releases. Although I am not too confident it is a welcome change from KDE's traditional loading screen, I am sure it would be for all you KDE4 enthusiasts. PCLinuxOS 2009.1 does not ship the latest version of KDE; it runs KDE 3.5.10, since the creators still do not have enough faith in the latest version. However, they have promised to make it available in the repos very soon.

When I clicked on the installation icon on the LiveCD desktop, it was interesting to find a message window asking me to remove a few video drivers from the installation if I did not need them. For example, if you have an ATI card, NVIDIA drivers would be redundant and it is better not to install them in the first place. Removing unnecessary drivers also gives you a faster boot sequence. Of course, if you are unsure, you can always click on 'Cancel' and move on with the installation process.

PCLOS has a pretty simple drake-live installer that doesn't ask for many options and gets the job of installation done in about 10 minutes flat. However, the partitioning window is not too newbie-friendly, especially considering that PCLinuxOS is targeted at people who wish to migrate from other operating systems. Moreover, PCLOS gave me the option of creating a custom partition or installing it on available free space, but there was no option to install it on the complete disk. This just makes one wonder: are the guys at PCLOS trying to tell me that it is always good to have another distro installed on my computer as a back-up? Don't they have faith in their own offering?

My computer already had openSUSE installed on it and I wanted to use the same 'home' folder. It was great to notice that PCLOS did not offer to format 'home' by default, but only the 'root' partition. After partitioning, the operating system was installed onto my machine and I was presented with the GRUB bootloader settings. While you can always tweak these, for the benefit of newbies you can just keep clicking Next till the installer asks you to remove the disc and reboot the system. Oh, I almost forgot: there is also a PCLOS-GNOME version available for GNOME enthusiasts.

Initial impressions
One of the first setbacks I experienced was finding out that there is no 64-bit edition of PCLinuxOS 2009.1. What's more, booting into the operating system was a rude shock because of the really ugly artwork they have come up with. Although a customised PCLOS button is always a welcome change from the same old KDE menu button, I only wish they had used a bit more aesthetic sense while designing it. Moreover, the default theme used by PCLOS actually makes the windows look like those of Windows Vista. Nice try, I'll say. The very first thing I did after logging in was change the theme to 'Plastik' to give myself some visual respite. What also irritates me, at times, is why distros don't manage to gauge a machine's graphics capability and switch on 3D graphics automatically. For new users, there is a high probability of them using the same distro for years without ever being aware of its 3D capability! At the very least, a user should be presented with the option of enabling 3D graphics right after logging in for the very first time. Well, speaking of PCLOS, activating Compiz on my machine was a farce, since it degraded performance to a frustrating level. But, then again, I guess this time the fault is at my laptop's end; it's too old.

A deeper look into the distribution
The software manager in PCLOS is pretty confusing, at least for me. Allow me to present my case. Firstly, Synaptic as the package manager for an RPM distro hits you right in the middle of the head. What's more, one can't even install an RPM file by double-clicking on it. I had to download KPackage to be able to install RPMs without any hassle. (Editor's Note: Boy, and I thought Synaptic as the default package manager was a seller—at least for most people ;-)) The repositories have really old software. It beats me that PCLinuxOS still uses printerdrake, something that has been discontinued even by Mandriva, which maintained it. Unfortunately, I did not have a printer nearby to test its functionality. Fortunately, there's something good about the distro, too. The PCLOS Control Centre shows some good improvements, including changes to how networking, firewalls, proxies and shares are handled. Software load times are very good, at least compared to Mandriva and openSUSE. Moreover, since I use the Print Screen key a lot every day, finding it automatically bound to KSnapshot brought a smile to my face right away.

Bundled software
The desktop version of PCLinuxOS 2009.1 ships exactly the same set of software as the LiveCD. Moreover, it hits just the sweet spot for 'migrators', since the distro doesn't pack in as much software as Mint or Sabayon, but neither is it as bare as MiniMe. One feature that won my heart is the option to create a LiveCD or LiveUSB on the fly. While I created a LiveUSB of PCLOS itself, I have a feeling that the tool will let you create a LiveUSB of any distro, since it asked me for the image file of the operating system.

Compatibility issues
I personally didn't find a single compatibility issue with PCLOS. Every single piece of hardware on my laptop was detected seamlessly by the operating system. Besides, since I need to use ndiswrapper for the wireless adaptor, PCLOS actually gave me a choice of drivers, let me pick the right one, and went on to install it without a hitch! I just wish the DVD drive of my personal computer was in working condition, because I was very interested to check out how easy or difficult it would be to get my Web cam detected.

Multimedia capabilities
I must admit, I was blown away by the distro’s multimedia capabilities. It didn’t matter which audio or video file I threw at it, PCLOS managed to play it with absolute elan. All my MP4 and AVI files worked out of the box. It was also good to see KMplayer as the default video player, and not Kaffeine. My only complaint… I wish VLC and SMplayer were also bundled with the distribution.

PCLOS for an Internet PC
PCLinuxOS 2009.1 is the perfect GNU/Linux distribution to install on an Internet PC. With the latest craze for this genre of computers and the attempts to reduce their retail prices, PCLOS should be an automatic choice. It comes with the latest versions of Mozilla Firefox and Thunderbird, which take care of the browsing and e-mail needs. For IM, there's always Kopete to bail you out, though a desire for Pidgin does keep simmering. What's more, it also has Flash pre-installed, so one can start YouTubing right away, without the hassle of downloading the plug-in separately, as is the case with almost every other operating system! Also, apart from KTorrent, PCLOS has FrostWire to take care of your downloading needs. However, beware! Just as with proprietary software, unauthorised downloading of movies and audio files is still a criminal offence. That said, for a distro version that took two years to launch, the level of visible development is zilch, and I found PCLinuxOS 2009.1 a bit behind the times in a few areas.

Rounding off
Although PCLinuxOS 2009.1 is absolutely solid and recommendable to one and all, the latest artwork is an absolute turn off. I have been using Mandriva 2009 for quite some time now, with KDE 4.2 on it, and had wanted to shift to another distro for a few weeks. However, the very looks of PCLOS 2009 forced me to stay put with Mandy and bide my time for the Spring Edition. Otherwise, if you are looking out for just plain functionality and do not give a hoot about how your operating system looks, you should be one very happy PCLinuxOS user, for sure!



PCLinuxOS 2009.1 Rating

Pros: Painless Installation, preloaded ndiswrapper drivers, well-organised Control Center, Software bundle, Audio/Video Codecs Support, Rock Stable.

Cons: Pathetic Artwork, RPMs can't be installed with double-clicks, unoptimised Compiz, KDE 4.2 absent.

Platform: x86

Price: Free (as in beer)

Website: www.pclinuxos.com

Monday, August 10, 2009

Red Hat launches the Teiid data integration project

Red Hat has announced the official launch of the Teiid data virtualization system project in the JBoss.org community. Teiid is the first open source community project that aims to deliver Enterprise Information Integration (EII) with both relational and XML data virtualisation, according to the company. “When Red Hat acquired MetaMatrix in April 2007, we committed to releasing the data services technology in the open source community and Teiid is the result of that promise,” said Craig Muzilla, vice president of middleware business at Red Hat. “The demand for applications and services leveraging the data stores of a typical organisation never ends. But now, enterprises have a choice between expensive, proprietary data services and an enterprise-class open source platform at a fraction of the cost.”

Most open source data integration technologies centre on physically moving or copying data to locations. Teiid bucks this trend by focusing on data virtualisation, which enables real-time access to data across heterogeneous data sources without copying or moving the data from the systems of record. Its Java Database Connectivity (JDBC) and Web Services interfaces are designed to provide straightforward integration with both custom and COTS applications.

Source of Information : Linux For You May 2009

Sunday, August 9, 2009

Ingres joins Open Source Channel Alliance

Ingres Corporation, an open source database management company, has joined the Open Source Channel Alliance, a group founded by Red Hat and SYNNEX Corporation, to extend the value of open source to a broad set of customers through the 15,000 resellers of SYNNEX. Ingres joins as a charter member and has signed a distribution agreement with SYNNEX.

The alliance enables SYNNEX to distribute the Ingres product portfolio as well as other open source applications, as part of an end-to-end open source solution approach that will deliver a high return-on-investment (ROI) as IT budgets continue to shrink. Specifically, Ingres Database, an open source database that helps organizations develop and manage business critical applications at an affordable cost, will be available through SYNNEX’ distribution channel.

Source of Information : Linux For You May 2009

Saturday, August 8, 2009

The Android 1.5 early-look SDK is out!

To give developers a head start, an early-look SDK for the upcoming Android 1.5 release has now been made available. The Android 1.5 platform will include many improvements and new features for users and developers. Additionally, the SDK itself introduces several new capabilities that enable you to develop applications more efficiently for multiple platform versions and locales. The new SDK has a different component structure compared to earlier SDK releases, which means it does not work with older Eclipse plug-ins (ADT 0.8), and the old SDKs (1.1 and earlier) do not work with the new Eclipse plug-in (ADT 0.9). Note that since this is an early-look SDK, the tools and documentation are not complete. Additionally, the API reference documentation is provided only in the downloadable SDK package—see the SDK's docs/reference/ directory. Visit developer.android.com/sdk/preview/features.html to get a grip on the feature set of the Android 1.5 SDK.

Source of Information : Linux For You May 2009

Friday, August 7, 2009

Better ASP.NET with MonoDevelop 2.0, Mono 2.4

MonoDevelop 2.0 has been released with the aim of enabling developers to write desktop and ASP.NET Web applications on Linux, port .NET applications created with Microsoft Visual Studio to Linux and Mac OS X, and maintain a single code base for all three platforms. MonoDevelop provides tools to simplify and streamline .NET application development on Linux, including improved ASP.NET and C# 3.0 support and a built-in debugger. It now uses MSBuild-style project files to increase interoperability with Visual Studio. Web projects are now also compatible with Visual Studio 2008 and Visual Web Developer 2008 SP1, providing more options for developers who want to build and deploy their Web applications on both Windows and Linux.

New features available in Mono 2.4 include a new code generation engine that improves the performance of executing .NET applications on the Mono runtime, while managed Single Instruction, Multiple Data (SIMD) extensions enable developers to take advantage of hardware acceleration without having to program in lower-level languages. Additional runtime innovations, such as full ahead-of-time (AOT) compilation, bring Mono-based applications to new platforms, including the Apple iPhone. MonoDevelop 2.0 and Mono 2.4 are available now and can be downloaded at www.mono-project.com/downloads.

Source of Information : Linux For You May 2009

Thursday, August 6, 2009

VirtualBox 2.2 supports OVF and virtual appliances

Sun Microsystems has announced the availability of Sun VirtualBox 2.2, which introduces support for the new Open Virtualization Format (OVF) standard, along with significant performance enhancements and updates. VirtualBox 2.2 software enables users to build virtual machines or appliances and effortlessly export them from a development environment, and import them into a production environment. Support for OVF also helps to ensure VirtualBox 2.2 software is interoperable with other technologies that follow the standard. Additional features of VirtualBox 2.2 software include Hypervisor optimisations, 3D graphics acceleration for Linux and Solaris applications using OpenGL, and a new host-interface networking mode, which makes it easier to run server applications in virtual machines. To download the freely available Sun VirtualBox software, visit: http://www.sun.com/software/products/virtualbox/get.jsp.

Source of Information : Linux For You May 2009

Wednesday, August 5, 2009

The Post-Monopoly Game

Walking out of cable TV’s walled garden. DOC SEARLS

Far as I know (that is, as far as some Motorola engineers have told me), my Dish Network and Verizon FiOS set-top boxes are Linux machines. So is my Sony flat-screen TV, which came complete with a four-page document explaining the GPL.

Linux in each case is embedded. That is, it is enslaved to a single purpose or to a narrow set of purposes. This isn't a big deal. Linux has become the default embedded operating system for all kinds of stuff. I just think TV would have a lot bigger future if we liberated the whole category from enslavement to Hollywood and its captive distributors. Until we do, the one-way-ness of TV remains a highway to hell.

I'm getting a good look at that hell right now, sitting in an airport lounge in Boston. It's still winter as I'm writing this. There are lots of canceled flights and, therefore, lots of relevant news on the lounge television, tuned, as always, to CNN. If this were two years ago, there would have been people gathered around that TV to see what's up with the weather and the rest of the news. But, not today. Nobody is watching. The TV is just noise in the background. Of the 18 passengers waiting here, all but two are using laptops. I just did a quick walk-around and talked to a few of the laptoppers. All of them are using their laptops to keep up with weather and flight conditions. TV can't compete with that. There are too many good sources of information on the Web now. More important, they're all interactive. TV isn't—not yet, anyway.

As it happens, our family withdrew cold-turkey from TV this morning. We called Verizon and canceled our FiOS TV service. The set-top box, Linux innards and all, is now sitting in the hall, waiting to be ferried back to a Verizon office.
The reason was choice. Even at its best, TV didn’t give us much—not compared to the endless millions of choices on YouTube, Hulu and everybody else with video to share on the Web.

The free stuff—old-fashioned over-the-air (OTA) TV—is a mess. By the time you read this, most or all of the TV stations in the US will be transmitting digital audio and video, via ATSC. Old-fashioned analog NTSC, which has been with us since the 1940s, will be gone by the June 12 deadline. I'm not sure how much people will bother watching. All you get are a couple dozen channels, tops.

On cable or satellite, you can get much more. I don't think you can get a bigger selection than what Verizon FiOS offers. Where we live, FiOS carries 596 channels, including 108 HD channels and 136 premium channels, most of which are also HD. By the time I canceled the service on the phone this morning, the FiOS agent had reduced the price of the Extreme HD plan to $47.99/month, including free DVR set-top box rental (normally $12.99/month). That plan has 358 channels, including all 108 HD channels. It's a helluva deal, if you like a lot of TV. Making FiOS even better is that it comes over a fiber-optic connection that provides uncompromised data quality.

But we still canceled it, because we'd rather not watch channels at all. We'd rather watch programs. Or movies. Or stuff that doesn't fit either category. And, we'd prefer a better way to select them than by struggling with any of the cable or satellite systems' "guides", which are all terrible. It's much easier to navigate file paths and to do it on a real computing device, including today's smart phones (which are really data devices that also do telephony). Because there's lots of video available on-line and from rental services like Netflix, we figured we'd take advantage of those. As it happens, Verizon makes it easy to get them in high-def, because we remain customers of FiOS high-speed Internet. There we get a solid 20Mbps both upstream and down, for $64.99. It's an excellent deal, because that's for the whole world, and not just for a few hundred "channels" behind the gate to a walled garden.

Now that we’ve walked out of cable TV’s walled garden, I can see how it traps the carriers even more than it traps the viewers. What they’re trapped in is a scarcity game. And, they’re losing. The producers and consumers are getting together without them. I can watch ACC sports on-line at the Raycom site. Nearly every channel on TV has a Web site that offers either live or archived content. True, all of them are pains in the butt to use (some requiring Flash plugins or worse), and many make half-hearted efforts to protect their cable and station distributors. But the writing is on the screen.

Now I’m thinking about what the abundance business would be like. What would you want out of the carriers if their Linux set-top boxes were open, or if you could provide your own? What game should they be playing once all they own is, say, the railroads and not the whole Monopoly board? Or hey, choose your own metaphor. Let’s help them out here. They’ll need it.

Source of Information : Linux Journal Issue 182 June.2009