Sunday, February 28, 2010

Anatomy of an Embedded Linux System - Boot Loader

Boot loaders can be laden with features, but their primary responsibility is to get the processor initialized and ready to run the operating system. Later in the book, I go through the boot-up process from beginning to end; but for practical purposes, this is the software that’s first run on the system.

In most modern embedded Linux systems, the kernel is stored in a partition in flash memory. The boot loader copies that flash partition into a certain location in RAM, sets the instruction pointer to that memory location, and tells the processor to start executing at the instruction pointer’s current location. After that, the program that’s running unceremoniously writes over the boot loader. The important thing to note is that the boot loader is agnostic with respect to what is being loaded and run. It can be a Linux kernel or another operating system or a program written to run without an operating system. The boot loader doesn’t care; it performs the same basic actions in all these use scenarios.
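At a boot loader prompt, this load-and-jump sequence is only a couple of commands. The following is a sketch of what it might look like under U-Boot; the RAM address and flash offsets are illustrative, not taken from any particular board:

```
=> nand read 0x80000000 0x100000 0x400000   (copy the kernel image from flash into RAM)
=> bootm 0x80000000                         (jump to that address and start executing)
```

Whether the image at 0x80000000 is a Linux kernel or some other program, the boot loader performs the same copy-and-jump.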

As boot loaders have matured, they’ve become more like operating systems with network, video, and increasing support for flash storage devices. Later in this book, I look at the popular boot loaders you may encounter when working with Linux.

One more important note: boot loaders are now ubiquitous. Rarely as an embedded Linux developer do you need to port a boot loader to your board. You may want to recompile the boot loader (I’ll cover that, too) to remove functionality, both to conserve space and to reduce boot time, but the low-level engineering is done by the board vendor. Users of Intel-based systems that use the Phoenix BIOS boot loader have no opportunity to change this code, because it’s baked into the board design.


If you’re familiar with Linux systems running on PC-type hardware, you’re no doubt familiar with GRUB and LILO. If you’re not, hit the reset button and wait: you see one or the other as the computer starts. Technically, these aren’t boot loaders but loaders for Linux. Old-school Linux users remember running a similar program, loadlin, from the DOS prompt in order to begin running Linux after the PC first booted DOS or Windows; in this case, MS-DOS acted as the boot loader for Linux. The boot loader contained in the BIOS of the machine reads these programs from a certain sector of a disk and begins running them.

Source of Information : Pro Linux Embedded Systems (December 2009)

Saturday, February 27, 2010

Anatomy of an Embedded Linux System

At runtime, an embedded Linux system contains the following software components:

• Boot loader: The software that gets the operating system loaded and running on the board.

• Kernel: The software that manages the hardware and the processes.

• Root file system: Everything under the / directory, containing the programs run by the kernel. Every Linux system has a root file system. Embedded systems have a great amount of flexibility in this respect: the root file system can reside in flash, can be bundled with the kernel, or can reside on another computer on the network.

• Application: The program that runs on the board. The application can be a single file or a collection of hundreds of executables.

All these components are interrelated and thus depend on each other to create a running system. Working on an embedded Linux system requires interaction with all of these, even if your focus is only on the application.

If you’re new to Linux but have used other commercial embedded solutions, the notion of a distinct kernel and root file system can be disorienting. With a traditional embedded solution, the application code is linked into a binary image with the rest of the embedded OS. After initialization, the operating system calls a function that is the entry point into your code and starts running.


Source of Information : Pro Linux Embedded Systems (December 2009)

Friday, February 26, 2010

10,000-Foot Embedded Linux Development Flyover

This section contains a quick and dirty explanation of the embedded Linux development process. Embedded Linux is a topic with many interdependencies; this section lays out the big points and purposely lacks detail so you can see the big picture without getting distracted by the fine details. The heft of this book should indicate that more details are forthcoming.



Target Hardware
Nearly every project involves selecting the processor to be used. A processor is just a chip and not much more until it’s soldered on a board with some peripherals and connectors. Processor vendors frequently create development boards containing their chip and a collection of peripherals and connectors. Some companies have optimized this process to the point that a board with connectors and peripherals is connected to a small daughter board containing the processor itself, allowing the base board to be shared across several different processor daughter boards.

Development boards are large, bulky, and designed to be easily handled. Every connector is supported by the processor because the vendor wants to create only one board to ship, inventory, and support. The development kit for a cell phone occupies as much space as a large laptop computer. In a majority of products, the development board isn’t used in the final product. An electrical engineer lays out a new board that fits in the product’s case and contains only the leads for the peripherals used in the final application, and he probably sneaks in a place to connect a serial or JTAG port for debugging.



Obtaining Linux
Linux is nearly always included with a development board and has support for the peripherals supported by the chip or the development board. Chances are, the board was tested with Linux to ensure that the processor and connectors work as expected. Early in the history of embedded Linux, there were porting efforts to get Linux running on a board; today, this would be an anomaly. If the board is an Intel IA-32 (frequently called x86) architecture, you can boot it (under most circumstances) with any desktop distribution of Linux. In order to differentiate their IA-32 boards, vendors frequently include a Linux distribution suitable for an embedded project.

Just as the development board has every known connector, the Linux included with the board is suited for development and not for final deployment. Part of using Linux is customizing the kernel and the distribution so they’re correct for the application.



Booting Linux
Because most board vendors supply a Linux distribution with a board, getting Linux booted is about configuring the software services Linux needs to boot and ensuring the cabling is correct and attached. At this point, you probably need a null modem serial cable, a crossover Ethernet cable (or a few straight cables and an Ethernet hub or switch), and maybe a USB cable. Unlike a desktop system with a monitor, the user interface for an embedded target may be just a few lights or a one-line LCD display. In order for these boards to be useful in development, you connect to the board and start a session in a terminal emulator to get access to a command prompt similar to a console window on a desktop Linux system.

Some (enlightened) vendors put Linux on a flash partition so the board runs Linux at power-up. In other cases, the board requires you to attach it to a Linux host that has a terminal emulator, file-transfer software, and a way to make a portion of your Linux system’s hard drive remotely accessible. In the rare cases where the board doesn’t include Linux (or the board in question hails from before you welcomed the embedded Linux overlords), the process requires you to locate a kernel and create a minimal root file system.



Development Environment
Much of the activity around embedded development occurs on a desktop. Although embedded processors have become vastly more powerful, they still pale in comparison to the dual core multigigabyte machine found on your desk. You run the editor, tools, and compiler on a desktop system and produce binaries for execution on the target. When the binary is ready, you place it on the target board and run it. This activity is called cross-compilation because the output produced by the compiler isn’t suitable for execution on your machine.

You use the same set of software tools and configuration to boot the board and to put the newly compiled programs on the board. When the development environment is complete, work on the application proper can begin.



System Design
The Linux distribution used to boot the board isn’t the one shipped in the final product. The requirements for the device and application largely dictate what happens in this area. Your application may need a web server or drivers for a USB device. If the project doesn’t have a serial port, network connection, or screen, those drivers are removed. On the other hand, if marketing says a touch-screen UI is a must-have, then a suitable UI library must be located. In order for the distribution to fit in the amount of memory specified, other changes are also necessary.

Even though this is the last step, most engineers dig in here first after getting Linux to boot. When you’re working with limited resources, this can seem like a reasonable approach; but it suffers from the fact that you don’t have complete information about requirements and the person doing the experimentation isn’t aware of what can be done to meet the requirements.

Source of Information : Pro Linux Embedded Systems (December 2009)

Thursday, February 25, 2010

Commercial Reasons to Use Embedded Linux

In addition to the outstanding technical aspects of Linux that make it advantageous to use for an embedded device, there are also compelling commercial reasons to choose Linux over other commercial offerings. Some of these reasons, such as lower costs, will appeal to the bean-counters in your organization; but the key difference is that you’ll have greater control over a critical aspect of your development project.



Complete Software Ecosystem
The universe of software around Linux is vast and varied. If you’re new to Linux, you’ll soon find that Linux is more than its namesake kernel: it’s a collection of software that works together. The nice thing about Linux is that what’s available on your desktop can be used on an embedded system. Even better, the ability to run and test software on a regular desktop gives you the chance to see if the software offers the right feature set for the application.

The nature of open source gives you plenty of choices for nearly every piece of your configuration, and that can cause consternation if you’re trying to pick the right package. Oddly, the large amount of choice is posited by commercial vendors as a reason to use their closed source or highly structured Linux distribution. Don’t fall for this line of reasoning! You’re reading this book so you can take advantage of what open source has to offer.

Open source software is always available as source code. Most of the time, the authors have written both the package itself and the build instructions so that the project isn’t architecture dependent and can be built for the target system. Software that is part of the root file system is nearly always completely portable, because it’s written in a high-level language like C. Because of the nice job done in the Linux kernel to isolate architecture-dependent code, even a vast amount of kernel code is portable.

The key to using what’s available in open source is two-fold: having a cross-compiler and having the build scripts work when you’re cross-compiling. The vast majority of packages use the automake/autoconf projects for creating build scripts, and by default automake and autoconf produce scripts suitable for cross-compilation. Later in this book, I explain how to use them properly. I talk about the cross-compiler later in this chapter and tell you how to build your own from source. Although constraints like memory and storage space may make some choices impractical, if you really need certain functionality, the source is there for you to reduce the size of the package. Throughout the book, you'll find references to software projects typically used by embedded engineers.



No Royalties
Royalties, in the software sense, are the per-unit software costs paid for every unit shipped, which compensate an owner of intellectual property for a license granted for limited use of that property. Royalties increase the Bill of Materials (BOM) cost of every unit shipped that contains the licensed intellectual property. A licensee must make regular reports and prompt payments and must make itself available for audit so that the holder of the licensed property can ensure the monies paid accurately reflect what was shipped.

In this model, forms must be filled out, checked, and signed. Paper must be shuffled, and competitive wages and benefits need to be paid to those doing the shuffling. With a contract to sign, expect a bill from your attorney as well. The entire cost of the royalty is greater than what appears on the BOM.

Royalties impose another cost: lack of flexibility. Want to experiment with a newer processor? Want to create a new revision of the product or add features? All these activities likely require permission from the vendor and, when you ship a new product, a new royalty payment not only to make but also to administer properly.

When presenting an embedded operating system that has royalties, the salescritter will explain that the payments represent a small concession compared to what the software they’re selling brings to the table. What they leave out is the loss of liberty with respect to how your company structures its development operations and the additional administrative and legal costs that never make it into the calculations showing your “savings.” Caveat emptor.



Control
This is an often-missed reason to use Linux: you have the sources to the project and have complete control over every bit of software included in the device. No software is perfect, but with Linux you aren’t at the mercy of a company that may not be interested in helping you when a defect becomes apparent. When you have trouble understanding how the software works, the source code can serve as the definitive reference.

Unfortunately, this amount of control is frequently used to scare people away from Linux. “You may have the source code, but you’ll never figure anything out … it’s so complex,” the fear mongers say. The Linux code base is well written and documented. If you’re capable of writing a commercial embedded Linux application, you no doubt have the ability to understand the Linux project—or any other open source project, for that matter.

Source of Information : Pro Linux Embedded Systems (December 2009)

Wednesday, February 24, 2010

Embedded Linux - Security

Security means controlling access to data and resources on the machine as well as maintaining the confidentiality of data handled by the computer. The openness of Linux is the key to its security. The source code is available for anyone and everyone to review; therefore, security loopholes are there for all to see, understand, and fix.

Security has a few different dimensions, all of which may be necessary for an embedded, or any other, system. One is ensuring that users and programs have the minimal level of rights to resources in order to be able to execute; another is keeping information hidden until a user with the correct credentials requests to see or change it. The advantage of Linux is that all of these tools are freely available to you, so you can select the right ones to meet your project requirements.



SELinux
A few years ago, a governmental agency with an interest in security—the National Security Agency (NSA)—together with several private companies with similar interests, took it upon itself to examine the Linux kernel and introduce concepts such as data protection, program isolation, and security policies, following a Mandatory Access Control (MAC) model. This project is called SELinux (where SE stands for Security Enhanced), and the changes and concepts of the project were made part of the 2.6.0 release of the Linux kernel.

The MAC concepts in SELinux specify controls whereby programs must be assigned the rights to perform certain activities, like opening a socket or file, as part of their security policy. The assignment must come from an administrator; a regular user of the system can’t make changes. SELinux systems operate under the principle of least privilege, meaning that a process has only the rights granted and no more. The least-privilege concept makes errant or compromised programs less dangerous in a properly configured environment, because the administrator has already granted a program the minimal set of rights in order to function. As you may guess, creating security policies can be a project itself. I’ll spend some time talking about how to go about doing this on an embedded system.
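To give a taste of what a policy looks like, here is a single type-enforcement rule in SELinux policy syntax. The allow-rule form is what you find in real policies, but the type names here are hypothetical, invented for illustration:

```
# Let processes labeled myapp_t open and use serial character devices.
allow myapp_t serial_device_t:chr_file { open read write };
```

A complete policy is built from thousands of such rules; anything not explicitly allowed is denied, which is the least-privilege principle in action.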



PAM
Pluggable Authentication Modules (PAM) are a way to create a uniform interface to the process of authenticating users. Traditionally, user authentication on a Linux system occurs by looking up the user name in the /etc/passwd file and checking the password encrypted therein (or using the shadow password file). The PAM framework also provides session management: performing certain actions after a user is authenticated and before they log out of the system.

The open design of the PAM system is important for embedded projects that are attached to a network in a corporate environment. For example, if the device serves as a shared drive, some of your target market may use LDAP to decide who has access to the device, whereas others may use accounts in an NT domain. PAM works equally well with both of these technologies, and you can switch between the two with simple configuration changes.
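A PAM configuration is a per-service stack of modules. This fragment is simplified and illustrative (the stack on a real system is longer), but pam_unix.so and pam_ldap.so are real modules, and it shows how switching authentication backends is a matter of editing one line:

```
# /etc/pam.d/login -- simplified, illustrative stack
auth     required   pam_unix.so
# To authenticate against LDAP instead, swap in:
# auth   required   pam_ldap.so
session  required   pam_unix.so
```

The program doing the authenticating never changes; only the configuration does.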



IPsec
IPsec is a system for authenticating and transmitting data between two trusted hosts over an IP network. IPsec operates at layer 3, the network layer of the OSI stack, and isn’t a single piece of software but rather a collection of tools working together to provide secure communication. By operating at this layer, IPsec can provide secure communication between hosts with no participation by the protocols running further up the stack.

A classic use for IPsec is encrypting virtual private network traffic. It can also be used in cases where you want to use a simple protocol for sending data, like HTTP or even plain text, but you want this data to be kept secure. One of the nice things about embedded Linux is that you can perform all the configuration work to use IPsec on a pair of desktop machines and transport those configuration files to the embedded target. This is possible because when you create an embedded Linux system, you can use the same software that is running on the target on the desktop used for development, making it an ideal platform for emulating your target hardware.


Source of Information : Pro Linux Embedded Systems (December 2009)

Tuesday, February 23, 2010

Embedded Linux - Process Isolation and Control

The Linux kernel, at its most basic level, offers these services as a way of providing a common API for accessing system resources:
• Manage tasks, isolating them from the kernel and each other
• Provide a uniform interface for the system’s hardware resources
• Serve as an arbiter to resources when contention exists

These are very important features that result in a more stable environment versus an environment where access to the hardware and resources isn’t closely managed. For example, in the absence of an operating system, every program running has equal access to all available RAM. This means an overrun bug in one program can write into memory used by another program, which will then fail for what appear to be mysterious, unexplainable reasons until all the code on the system is examined. The notion of resource contention is more complex than just making sure two processes don’t attempt to write data to the serial port simultaneously—the scarcest resource is time, and the operating system can decide what tasks run when in order to maximize the amount of work performed. The following sections look at each item in more detail.



Manage and Isolate Tasks
Linux is a multitasking operating system. In Linux, the word process describes a task that the kernel tracks for execution. The notion of multitasking means the kernel must keep some data about what is running, the current state of the task, and the resources it’s using, such as open files and memory. For each process, Linux creates an entry in a process table and assigns the process a separate memory space, file descriptors, register values, stack space, and other process specific information. After it’s created, a process can’t access the memory space of another process unless both have negotiated a shared memory pool; but even access to that memory pool doesn’t give access to an arbitrary address in another process.

Processes in Linux can contain multiple execution threads. A thread shares the process space and resources of the process that started it, but it has its own instruction pointer. Threads, unlike processes, can access each other’s memory space. For some applications, this sharing of resources is both desired and convenient; however, managing several threads’ contention for resources is a study unto itself. The important thing is that with Linux, you have the design freedom to use these process-control constructs.

Processes are isolated not only from each other but from the kernel as well. A process also can’t access arbitrary memory from the kernel. Access to kernel functionality happens under controlled circumstances, such as syscalls or file handles. A syscall, short for system call, is a generic concept in operating system design that allows a program to perform a call into the kernel to execute code. In the case of Linux, the function used to execute a system call is conveniently named syscall().
When you’re working with a syscall, as explained later in this chapter, the operation works much like a regular function call for an API. Using a file handle, you can open what appears to be a file to read and write data. The implementation of a file still reduces to a series of syscalls, but the file semantics make them easier to work with under certain circumstances.

The complete separation of processes and the kernel means you no longer have to debug problems related to processes stepping on each other’s memory or race conditions related to trying to access shared resources, such as a serial port or network device. In addition, the operating system’s internal data structures are off limits to user programs, so there’s no chance of an errant program halting execution of the entire system. This degree of survivability alone is why some engineers choose Linux over other lighter-weight solutions.



Memory Management and Linux
Linux uses a virtual memory-management system. The concept of virtual memory has been around since the early 1960s and is simple: the process sees its memory as a vector of bytes; and when the program reads or writes to memory, the processor, in conjunction with the operating system, translates the address into a physical address.

The bit of the processor that performs this translation is the memory management unit (MMU). When a process requests memory, the CPU looks up the address in a table populated by the kernel to translate the requested address into a physical address. If the CPU can’t translate the address, it raises an interrupt and passes control to the operating system to resolve the address.

The level of indirection supplied by the memory management means that if a process requests memory outside its bounds, the operating system gets a notification that it can handle or pass along to the offending process. In an environment without proper memory management, a process can read and write any physical address; this means memory-access errors may go unnoticed until some other part of the program fails because its memory has been corrupted by another process.

Programs running in Linux do so in a virtual memory space. That is, when a program runs, it has a certain address space that is a subset of the total system’s memory. That subset appears to start at 0. In reality, the operating system allocates a portion of memory and configures the processor so that the running program thinks address 0 is the start of memory, but the address is actually some arbitrary point in RAM. For embedded systems that use paging, this fiction continues: the kernel swaps some of the available RAM out to disk when not in use, a feature commonly called virtual memory. Many embedded systems don’t use virtual memory because no disk exists on the system; but for those that do, this feature sets Linux apart from other embedded operating systems.



Uniform Interface to Resources
This sounds ambiguous because there are so many different forms of resources. Consider the most common resource: the system’s memory. In all Linux systems, from an application perspective, memory from the heap is allocated using the malloc() function. For example, this bit of code allocates 100 bytes, storing the address to the first byte in from_the_heap:

char* from_the_heap;
from_the_heap = (char*) malloc(100);

No matter what sort of underlying processor is running the code or how the processor accesses the memory, this code works (or fails in a predictable manner) on all Linux systems. If paged virtual memory is enabled (that is, some memory is stored on a physical device, like a hard drive), the operating system ensures that the requested addresses are in physical RAM when the process requests them. Memory management requires interplay between the operating system and the processor to work properly. Linux has been designed so that you can access memory in the same way on all supported processors.

The same is true for accessing files: all you need to do is open a file descriptor and begin reading or writing. The kernel handles fetching or writing the bytes, and that operation is the same no matter what physical device is handling the bits:

FILE* file_handle;
file_handle = fopen("/proc/cpuinfo", "r");

Because Linux is based on the Unix operating system philosophy that “everything is a file,” the most common interface to system resources is through a file handle. The interface to that file handle is identical no matter how the underlying hardware implements this functionality. Even TCP connections can be represented with file semantics.

The uniformity of access to resources lets you simulate a target environment on your development system, a process that once required special (and sometimes costly) software. For example, if the target device uses the USB subsystem, it has the same interface on the target as it does on the development machine. If you’re working on a device that shuffles data across the USB bus, that code can be developed, debugged, and tested on the development host, a process that’s much easier and faster than debugging code on a remote target.



System Calls
In addition to file semantics, the kernel also uses the idea of syscalls to expose functionality. Syscalls are a simple concept: when you’re working on the kernel and want to expose some functionality, you create an entry in a vector that points to the entry point for the routine. The data from the application’s memory space is copied into the kernel’s memory space. All system calls for all processes are funneled through the same interface.

When the kernel is finished with the syscall, it transfers the results back into the caller, returning the result into the application’s memory space. Using this interface, there’s no way for a program in user space to have access to data structures in the kernel. The kernel can also keep strict control over its data, eliminating any chance of data corruption caused by an errant caller.


Source of Information : Pro Linux Embedded Systems (December 2009)

Monday, February 22, 2010

Embedded Linux - Standards Based

The Linux operating system and accompanying open source projects adhere to industry standards; in most cases, the implementation available in open source is the canonical, or reference, implementation of a standard. A reference implementation embodies the interpretation of the specification and is the basis for conformance testing. In short, the reference implementation is the standard by which others are measured.

If you’re new to the notion of a reference implementation, it may be a little confusing. Take for example the Portable Operating System Interface for Unix (POSIX) standard for handling threads and interprocess communication, commonly called pthreads. The POSIX group, part of the Institute of Electrical and Electronics Engineers (IEEE), is a committee that designs APIs for accomplishing the tasks of interacting with a thread but leaves the implementation of that standard to another group. In practice, when work begins on a standard, one or more of the participants on the committee volunteer to create the code to bring the standard to life, creating the reference implementation. The reference implementation includes a test suite; other implementations consider the passage of the test suite as evidence that the code works as per the specification.

Using standards-based software is not only about quality but also about independence. Basing a project on software that adheres to standards reduces the chances of lock-in due to vendor-specific features. A vendor may be well meaning, but the benefits of those extra features are frequently outweighed by the lack of interoperability and freedom that silently become part of the transaction and frequently don’t receive the serious consideration they merit.

Standards are increasingly important in a world where many embedded devices are connected, often to arbitrary systems rather than just to each other. Ethernet is one such connection method, but others abound, like Zigbee, CANbus, and SCSI, to name a few.

Source of Information : Pro Linux Embedded Systems (December 2009)

Sunday, February 21, 2010

Why Use Embedded Linux?

Embedded Linux is just like the Linux distributions running on millions of desktops and servers worldwide, but it’s adapted to a specific use case. On desktop and server machines, memory, processor cycles, power consumption, and storage space are limited resources—they just aren’t as limiting as they are for embedded devices. A few extra MB or GB of storage can be nothing but rounding errors when you’re configuring a desktop or server. In the embedded field, resources matter because they drive the unit cost of a device that may be produced in the millions; or the extra memory may require additional batteries, which add weight. A processor with a high clock speed produces heat; some environments have very tight heat budgets, so only so much cooling is available. As such, most of the efforts in embedded programming, whether you’re using Linux or some other operating system, focus on making the most of limited resources.

Compared to other embedded operating systems, such as VxWorks, Integrity, and Symbian, Linux isn’t the most svelte option. Some embedded applications use frameworks such as ThreadX for application support; the framework runs directly on the hardware, eschewing an operating system altogether. Other options involve skipping the framework and instead writing code that runs directly on the device’s processor. The biggest difference between using a traditional embedded operating system and Linux is the separation between the kernel and the applications. Under Linux, applications run in an execution context completely separate from the kernel. There’s no way for the application to access memory or resources other than what the kernel allocates. This level of process protection means that a defective program is isolated from the kernel and other programs, resulting in a more secure and survivable system. All of this protection comes at a cost.

Despite its increased resource overhead compared to other options, the adoption of Linux continues to increase. That means engineers working on projects consider the increased overhead of Linux worth the additional cost. Granted, in recent years, the cost and power demands of system-on-chip (SoC) processors have decreased to the point that they cost no more than a low-power 8-bit microcontroller of the past, so using a more sophisticated processor is an option when it might not have been before. Many design solutions use off-the-shelf SoC processors and simply don’t run the leads from the chip for Ethernet, video, or other unused components.

Linux has flourished because it provides capabilities and features that can’t be made available with other embedded solutions. Those capabilities are essential to implementing the ever more sophisticated designs used to differentiate devices in today’s market. The open source nature of Linux means embedded engineers can take advantage of the continual development happening in the open source environment, which happens at a pace that no single software vendor can match.



Technical Reasons to Use Embedded Linux
The technical qualities of Linux drive its adoption. Linux is more than the Linux kernel project; the surrounding open source software is also at the forefront of technical development, making Linux the right choice for solving today’s technical problems as well as those of the foreseeable future. For example, an embedded Linux system includes software such as the following:

• SSL/SSH: The OpenSSH project is the most commonly used encryption and security mechanism today. The open nature of the project means that thousands of security experts are constantly evaluating it; when problems are found, updates typically appear within hours of disclosure.

• Apache and other web servers: The Apache web server finds its way into embedded devices that need a full-featured web server. For devices with less demanding requirements, users can pick from smaller web servers such as Boa, lighttpd, and (a personal favorite) micro_httpd.

• The C Library: The Linux environment has a wealth of options in this area, from the fully featured GNU C Library to the minimalist dietlibc. If you’re new to embedded Linux development, having a choice in this area reinforces the open nature of open source.

• Berkeley sockets (IP): Many projects move to Linux from another operating system because of the complete, high-performance network stack included in the operating system. A networked device is becoming the rule and not the exception.

Source of Information : Pro Linux Embedded Systems (December 2009)

Saturday, February 20, 2010

KINGSTON V-SERIES

40GB SSD KIT Gives a real speed boost, and includes all the parts and software you will need

Solid State Disks, or SSDs, are designed to replace hard disks in computers. Unlike hard disks, which have moving parts, they use the same type of memory as USB memory keys. At £75, Kingston’s 40GB SSD kit is the cheapest we have seen. The kit includes the disk itself, the cables and bracket needed to fit it into a desktop PC, and a program that can copy the contents of your hard disk onto the new SSD. The benefit of an SSD is speed. We copied a new Windows 7 installation from hard disk to the SSD. With the hard disk the computer took 52 seconds to start, but with the SSD it took only 35 seconds. The speed for copying files increased by around 50 per cent. It’s silent, and with no moving parts it should be more reliable than a hard disk. 40GB is not much space, so this SSD would best be used to store Windows and programs, with files stored on a separate hard disk.

DETAILS
Contact: Kingston 01932 738888
Info: www.kingston.com
Retail price: £75
Buy: www.computeractive.co.uk/bestprices

Source of Information : Computer Active Issue 310 January 7 2010

Friday, February 19, 2010

MICROSOFT WIRELESS MOBILE MOUSE 4000

A small but perfectly formed mouse, ideal for laptops or small hands.

This small but perfectly formed mouse from Microsoft is designed to be used with a laptop computer. For that reason it’s smaller than the average desktop model, and its tiny USB receiver fits neatly into a slot on the bottom when not in use. However, it works just as well with any desktop PC. As well as the usual two buttons it has a third, smaller button to one side and a clickable wheel. One AA battery is required, which is included in the box. The Mobile Mouse 4000 uses Microsoft’s new BlueTrack technology, and although we are not sure this is any better than the laser systems found in other mice, it worked well enough. The mouse we tested came in a rather gaudy shade of yellowish green, but it’s also available in black and other shades. The retail price of £35 is high, but it can be found online for less than £20. At that price, it’s a good buy.


DETAILS
Contact: Microsoft 0870 60 10 100
Info: www.microsoft.com/uk
Retail price: £35
Buy: www.computeractive.co.uk/bestprices


Source of Information : Computer Active Issue 310 January 7 2010

Thursday, February 18, 2010

Windows 7 on the go

Tiny ‘netbook’ laptops are flying off the shelves – find out how Windows 7 can help you get the best performance from portable PCs of all types


Even just a few years ago, the idea of taking a computer out and about would have struck some people as rather pointless. Many consumers chose laptops simply because they didn’t take up much space in the home. Those laptops lightweight enough to carry from home to work or school were much more expensive and aimed at those who had a real need to work on documents while out and about.

Today’s laptops are lighter and cheaper than ever, but there’s a new generation of portable computers that use low-powered processors and solid-state storage to produce smaller, leaner and even less expensive computers called ‘netbooks’. Early netbooks were sold with Linux, while more recently Windows XP has been the supplied operating system. Vista was much rarer, largely because it required more memory and processing power to run than XP. Windows 7 was designed from the outset with portability in mind, and despite it being more advanced than Vista, most netbooks can run it with little impact on performance. In this article we will show you how to get the best performance from a netbook running Windows 7.

One of the quickest ways to boost netbook performance is to turn off the advanced graphics features that make Windows 7 look great, but which use a significant proportion of processing power. The main culprit is Aero, which produces the transparent effect on the edges of windows and includes handy tools such as Aero Peek and Jump Lists. These are useful tools, but if you need the extra performance for other tasks, it is easy to limit Aero’s processor use. Right-click the desktop, select Personalize, scroll down the page until you see ‘Basic and high-contrast themes’ and choose Windows 7 Basic. You’ll still get a basic level of Peek, and Jump Lists continue to work.



Power down
Battery life is an important concern if you want to use a computer away from home. Netbook processors use less power, but their displays and other components still draw lots of energy – just like laptops. The good news is that you can change the settings in Windows 7 to preserve power. Click the Start menu icon and select Control Panel followed by Power Options. A selection of power plans is displayed here – a plan is a group of settings that tell Windows how much power to use for various components and when to dim the display and cut power to some components altogether. The default setting in Windows 7 Home Premium is Balanced, which ensures the processor has enough power to give good performance when in active use by applications while reducing energy usage if the computer is not being used after a set time. The Power Saver plan is ideal for portable PCs when away from a mains connection. While Balanced dims the screen after 10 minutes of inactivity, Power Saver does this after five minutes. Similarly, the amount of time Power Saver waits to put the PC into Sleep mode is cut by half to 15 minutes. Sleep mode cuts power to everything but memory, so work in open documents is not lost. To change the power plan your computer uses, click the button next to its name so the circle has a blue dot in it and close the Power Options window.
You can also customise the power settings to save even more battery life. Each plan has a link next to it labelled ‘Change plan settings’, which you can click to change the time Windows 7 waits before switching off the display or entering Sleep mode. Windows 7 also reactivates the display and sleeping components much more quickly than Vista.



Customise settings
Windows 7 also enables you to create your own power plan. Back at the main Power Options window, click ‘Create a power plan’ to devise your own, which is handy if you want to have settings for use at home and at a second specific location. The settings are the same whichever method you choose, so let’s create our own plan for using the netbook while on a train. First choose the settings you want to change, Power Saver in our case, and give it a name before clicking Next. Then set how long to wait before dimming the display and entering Sleep mode before clicking Create. Our plan now appears in the ‘Select a power plan’ window, where we can click ‘Change plan settings’ and then ‘Change advanced power settings’.



Wonderful wireless
Wireless networking also eats into battery life because the network adapter in the computer is continually using energy. Many netbooks have a switch or keyboard button to deactivate wireless when not needed – consult your computer’s manual to find out about this. You can also reduce the power consumption of the wireless adapter in your power plan (although this might affect your web access if the signal strength from the wireless network is low). Open Wireless Adapter Settings, then Power Saving Mode, and left-click ‘Setting’ to reveal a drop-down menu where you can choose your preference. You can always undo changes by clicking ‘Restore plan defaults’. Our final tip concerns the netbook screen. The relatively small size of some netbook displays can make it difficult to read documents and websites. You should check that your display is set to its native resolution; open the Start menu, click Control Panel and then double-click Display followed by Screen Resolution. The resolution of the display is normally included in the netbook’s manual, although Windows 7 can detect this. Click the resolution drop-down menu and ensure the figure selected has ‘(recommended)’ next to it. Another way to reclaim a valuable proportion of display space is to set the Taskbar to appear only when it is needed. Right-click the Taskbar and select Properties. In the dialogue box that appears, click the box labelled ‘Auto-hide the taskbar’, followed by OK. The Taskbar now appears only when the mouse pointer is moved towards it. There’s no doubt that Windows 7 performs well on most netbook computers. There are more options to conserve battery life, networking is easier and, with more touchscreen netbooks due to launch later this year, using a computer while travelling looks set to become even easier.


Source of Information : Computer Active Issue 310 January 7 2010

Wednesday, February 17, 2010

Work wonders with the web

Browser add-ons can make your web surfing easier, faster and more fun – we list the very best for Internet Explorer and Firefox

Your internet browser probably works beautifully: both Firefox and Internet Explorer, the two most popular browsers, are fast, stable and simple ways of accessing the internet. You could, if you wanted, install them and add only the update files that are made available periodically to ensure you always had the most up-to-date version of your browser of choice. If you did, though, you’d be missing out. Both Internet Explorer and Firefox allow you to download and install tools, called add-ons, that can make surfing the web easier, quicker and more enjoyable. In this issue we’ll list the must-have browser add-ons for both Firefox and Internet Explorer, and explain how to install them.


Useful functions
The concept of a browser add-on is simple. It’s a tiny program that doesn’t run on its own but instead adds a new function to an existing web browser – normally a function that would otherwise require you to install yet another program on your computer and run that when required. As they connect to a web browser, add-ons are sometimes known as ‘plug-ins’. Although some add-ons are created by companies, most are created by other users to add a feature that they themselves want. The Firefox web browser has a particularly strong community of add-on authors, thanks partly to the fact that the program is open source, so anyone can examine the code that makes it tick.

Although there are many add-ons for Internet Explorer, and we’ll list some in this article, we’d recommend using Firefox if you want the best choice. So what can you do with these small extras for your browser? The sky is nearly the limit. You can download videos from Youtube, keep tabs on your email account without needing to log in, or block distracting, time-wasting adverts from view. Alternatively, you can download add-ons that enable you to keep up to date with social-networking sites while browsing elsewhere. Installing add-ons is generally simple. Both Firefox and Internet Explorer have special websites that gather all the add-ons together, allowing you to search by keyword or category, and then sort your results by the number of downloads they’ve had or recommendations they’ve received from other users. For Firefox, go to https://addons.mozilla.org, or visit www.ieaddons.com for Internet Explorer.


Easy does it
Once you’ve found the add-on you want, installing it is generally easy. In Firefox, select your chosen add-on and click the green Add to Firefox button. You’ll be given a moment to reflect on your choice, and, once the add-on is installed, you may be asked to restart the browser. This can be postponed until later if you’re in the middle of doing something important. Internet Explorer is slightly more complex, as it divides its add-ons into groups such as toolbars, Accelerators (we’ll explain what this means shortly) and search providers, with a fourth, unmentioned category for everything that works like a Firefox add-on. Installation is a bit more intimidating as well – when you click on most links you’ll be prompted to download an installer that runs as if you were installing a separate application. It’s also not unusual for Windows to display some stark warnings about your PC’s security, even though you’re downloading from an official Microsoft site. But there is a strong range of add-ons available for the world’s most popular browser, particularly when you consider the role of Internet Explorer’s Accelerators. An Accelerator is an add-on for Internet Explorer 8 that allows you to highlight text and perform a task based on its content. For instance, if you download the Bing Maps Accelerator you can highlight a postcode, then click the small blue icon that appears and find the postcode on a map without needing to enter it manually.


Make surfing faster
Adobe’s Flash technology is one of the most important aspects of the internet today. Without it we wouldn’t be able to watch videos on sites such as Youtube or our very own Computeractive TV. Sadly, though, Flash can also be used to create particularly annoying adverts on websites. These often take time to load, get in the way and generally frustrate you, while some even include sound effects to make the experience even more infuriating.
Luckily, though, both Firefox and Internet Explorer offer add-ons that prevent Flash adverts from running. Firefox’s is the simplest – visit www.snipca.com/x522 and click the green button to install Adblock Plus. Firefox needs to restart once the add-on is installed, but from there you should find visiting heavily advertised websites a much more restful experience.

Each time you see a distracting advert, click on the small ‘Block’ button above it, and in future it – or others coming from the same company’s server – won’t appear. For those using Internet Explorer, the best tool for blocking adverts is IE7 Pro (www.ie7pro.com). Despite the name, which suggests compatibility only with an older version of Internet Explorer, IE7 Pro also works well with Internet Explorer 8. It blocks all manner of adverts, including pop-ups and Flash adverts, and also adds a number of extra features to Internet Explorer. These include mouse gestures, which allow you to instruct Internet Explorer to perform certain tasks, such as going back a page when you draw a shape with the mouse, and the ability to download videos from websites such as Youtube.


Browsing engines
The name might sound daunting, but these add-ons can be very handy. There are many different web browsers available, and sadly not every website works properly with every browser. So if you use Firefox, for example, you may occasionally come across sites that simply refuse to work because you are not using Internet Explorer. An add-on called IE Tab by PCMan, however, can fix this. It adds an option to the menu that appears when you right-click on a web page. If you find a page that doesn’t display, simply right-click the page and choose ‘View Page in IE Tab’. A new tab will open in Firefox, but this will use the technology from inside Internet Explorer to display the page properly. It’s faster and simpler than opening another browser just for that one page. IE Tab can be found at www.snipca.com/x525. Internet Explorer doesn’t have the same problem when it comes to compatibility; as long as Internet Explorer remains the most popular web browser, just about every website will be designed to work with it. It can, however, feel slow to load pages in comparison with other browsers. In particular, Google’s slimmed-down Chrome often reveals pages far more quickly. Google Chrome Frame does much the same thing for Internet Explorer as IE Tab does for Firefox, except it brings Chrome’s speed advantages to Microsoft’s browser. Microsoft is unsurprisingly rather sniffy about the prospect of people using someone else’s technology in its browser, so you have to go to Google’s website at www.snipca.com/x529 to get it. It’s easy to download and install, however, and runs very well.


Multimedia opportunities
The number of ways to share your digital photos on the internet is mind-boggling. Sites such as www.flickr.com cater for keen amateurs, while social-networking sites such as Facebook are perfect for sharing family snaps with friends and family. But while getting your photos on the internet is easy, actually viewing them can be harder, and laboriously clicking through an online album of dozens of pictures is dull. Again, however, free add-ons can help. Available for both Firefox and Internet Explorer, Cooliris is both a standard program you can run from the Start menu and a browser add-on. It can be downloaded from www.snipca.com/x530 for Firefox and www.snipca.com/x532 for Internet Explorer. Once you’ve installed it and restarted your browser, load a web page with lots of images on it, and then hover the mouse over one of them. A small icon appears which, when clicked, loads a spectacular 3D wall of images from the page. You can drag the wall around and zoom in on any image you want to see more closely, or start an automatic slide show, regardless of whether the site the pictures come from offers one itself.


Social networking
The great thing about sites such as Facebook (www.facebook.com) and Twitter (www.twitter.com) is that they let you see what your friends are doing, but constantly going back to the sites to check them, if you’re waiting for a message, for instance, is tedious. There are various standalone programs that allow you to keep tabs on things, but a simpler solution is to install an add-on that allows you to check your favourite sites from within your browser. A highly automated solution is Yoono, which claims to “socialise your browser”. It’s available for both Internet Explorer (www.snipca.com/x534) and Firefox (www.snipca.com/x533), and if you have accounts at more than one social-networking site it can be a timesaver. It connects to Twitter and Facebook, plus Myspace, Flickr and Friendfeed, as well as instant-messaging services such as Live Messenger, AOL Instant Messenger and Google Talk. As long as your browser is running it keeps you up to date with what your contacts are doing, which is entertaining, if highly distracting. You can also share things you’ve found online on social-networking sites without needing to open the site in question.


Bookmark synchronisation
Bookmarks, known as Favorites in Internet Explorer, are a handy way of keeping track of websites that you use regularly or may want to visit again. Although both Firefox and Internet Explorer keep track of recently visited sites in a History tool, it’s far easier to bookmark a page of handy information than to fish around for it a few weeks later when you can only remember half the title. On the other hand, bookmarks can be rather limited if you use more than one computer, as neither browser gives you a simple way of synchronising a list of bookmarks between two or more computers. Fortunately, there’s a great add-on that can help. Xmarks is a simple way to keep bookmarks synchronised between several computers. First you install the add-on, then create a free account, and Xmarks sends details of your bookmarks to its own storage space on the internet. You can then install the add-on on another computer, log into your account and the bookmarks will be downloaded. The service can even keep bookmarks synchronised if you’re using Internet Explorer on one computer and Firefox on the other. It’s free to download from www.xmarks.com.


Breath of life
Add-ons for your browser can transform your experience of using the internet. They can make it faster, or less stressful and distracting by getting rid of adverts. Some add-ons are so useful you’ll wonder how you ever got by without them. Firefox has a definite edge – its add-ons website is better organised than Internet Explorer’s, and the huge number of users means the popularity and ratings of each add-on are genuinely useful. Installing them is also slightly easier than it is with Internet Explorer. However, IE8 has some undeniably neat touches, such as Accelerators: our advice is to try both and see which works best for you.

Tuesday, February 16, 2010

2009 The year in review

We cast an eye back over the biggest stories reported by Computeractive over the past year



January
The beginning of the year was dominated by concerns over BT’s plans to use the controversial advertising technology Phorm, which monitors the websites visited by internet users in order to show them more relevant adverts. After the end of a test, the company said it expected “to move towards deployment”. Meanwhile, Microsoft released an early testing version of its Windows 7 operating system for the public to try, while Nintendo’s hugely popular Wii games console caused trouble. Molly Elvig of Colorado sued the gaming giant for $5m (£3m) after a Wii controller flew from her son’s hand and smashed a television.



February
February saw computer security experts warning of rising infections spreading via USB memory keys. The Conficker worm, still active almost a year later, was exploiting part of Windows called Autorun to spread via USB devices. Meanwhile, Computeractive readers alerted us to a website that was run by Gary Cooper, a then 16-year-old boy from Essex. GC’s PCs sold mobile phones and other gadgets, but some customers complained of unauthorized charges – Terence Warmbier was charged an extra £800 after buying four phones.



March
In March children’s charities called on the Government to take action to limit access to websites showing images of child abuse. The NSPCC claimed that around 700,000 households were connected to the internet via internet service providers (ISPs) that did not subscribe to the Internet Watch Foundation block list. Youtube users found themselves up against a different block list as a spat between the video clip website and the Performing Right Society (PRS) came to a head. Thousands of music videos were made unavailable in the UK.



April
Fake security software that fools users into paying for unnecessary ‘virus removal’ was a major problem in 2009, and it hit the headlines in April after security company Finjan claimed scammers could earn $10,800 (£7,452) per day. According to another report, some of those tricked by dodgy software might not even notice they have lost money. Security firm CPP reported that British cardholders were unable to account for over £10.8bn of transactions in the preceding year, with more than a third of those surveyed unable to account for a fifth of their monthly transactions.



May
Phorm leapt back into the limelight in May as the European Commission launched legal action against the Government for failing to ensure the privacy of internet users. The online retail giant Amazon also stepped in, announcing it would not allow its sites to be monitored by the system. Phorm fought back with a website in which it accused critics of orchestrating a “smear campaign”. The rather strange website, which mocked the company’s critics as “privacy pirates”, disappeared a few months later.



June
This summer saw a 50p tax on landline telephones proposed to pay for next-generation broadband. Lord Carter’s Digital Britain report said the levy was the “fairest” way to ensure that everyone in the UK could benefit from fibre-optic high-speed broadband services. Meanwhile three people were arrested and released on police bail in relation to GC’s PCs (see February). Detective Simon Dovaston told Computeractive that “enquiries are still required to resolve the matter”, but two months later we were told that the investigation had been concluded with no charges brought.



July
In July we reported that Microsoft was planning to slash the cost of Windows 7 in the UK. At the time, the company said it would not be selling upgrade licences in the UK, so full copies of Windows 7 Home Premium would cost just £80 until the end of 2009. The controversial advertising technology Phorm made the news again, as both BT and Talktalk announced they had dropped plans to use the technology. Of the UK’s major ISPs this left only Virgin Media in talks with the company.



August
In August we reported an Ofcom study claiming that only one person in nine was getting the speed advertised for their broadband internet connection. This did not come as a huge surprise as Computeractive had been campaigning for clearer broadband advertising since 2007. Microsoft announced that upgrade editions of Windows would be available to UK customers, but raised the cost of the full versions of Windows 7 Home Premium to £150.



September
The possibility of a ‘three-strikes’ rule that could see illegal file sharers disconnected from the internet had been looming for months, but after comments that were made by Communications Minister Stephen Timms in August the debate flared up again. Privacy International, the Open Rights Group and other organizations warned that it could break both European and human rights legislation. September saw good news for would-be broadband customers, as BT announced a test of a new technology that could see broadband reach homes up to 12km from phone exchanges.



October
A nationwide retune of the Freeview television system had sounded the death knell for some older set-top boxes. Other users found channels such as BBC1 and ITV moved to new channel numbers, while almost half a million people lost access to ITV3 and ITV4.
In the same month, Shadow Culture Secretary Jeremy Hunt said a Conservative government would scrap the 50p levy on landline telephones proposed by the Government to pay for next-generation broadband.



November
The ISP Talktalk opened a new front in the fight against a law that could see illegal file sharers disconnected from the internet, warning the Government that it would refuse to disconnect its customers unless concrete proof of their guilt was provided. It also launched a campaign website, asking visitors to petition against the proposals. Our next issue brought news that the Digital Economy Bill, which is planned to introduce a ‘three-strikes’ rule, had been included in the Queen’s speech.



December
Just like January, controversy surrounded an ISP planning to monitor its users. This time the culprit was Virgin Media, which planned to test a technology called CView on 40 per cent of its network. The trial was designed to look for illegally shared music files, but Virgin did not plan to report users to the copyright owners. Computeractive also got its first look at Google’s new operating system, Chrome OS.


Source of Information : Computer Active Issue 310 January 7 2010

Monday, February 15, 2010

Scandinavia and Slough not so slow

SCANDINAVIANS CAN sign up for the world’s first commercial super-fast mobile broadband service. Dubbed 4G (fourth generation) networks, download speeds can reach 100Mbits/sec – about 10 times that of 3G services. Operator TeliaSonera, which launched the service in Stockholm and Oslo last month, said the service will open up new possibilities, including online gaming and web conferencing. UK operator O2 also plans to launch a limited trial to broadband customers in Slough. The networks use Long Term Evolution (LTE) technology, which is considered the next major standard in mobile broadband technology and is designed to work alongside existing 3G technology.

The only drawback is that no mobile handsets have been developed, so people in Norway, Sweden and Slough will initially have to access the service using a computer. But Samsung and LG are reportedly launching handsets capable of using the new networks from mid to late 2010. The equipment for the Stockholm 4G city network has been supplied by Ericsson, while Chinese firm Huawei has supplied the adapters needed for Slough and Oslo. LTE, however, is on a roll and a further 17 networks are expected to go online this year in the US, Canada, Japan, Norway, South Korea, South Africa, Sweden and Armenia.

Source of Information : Computer Active Issue 310 January 7 2010

Sunday, February 14, 2010

Australia to filter the web

THE AUSTRALIAN government plans to amend its current legislation to force internet service providers (ISPs) to block obscene and illegal websites. The decision follows a test of internet filtering, started in 2008 as part of the A$128m Plan for Cyber Safety. In a statement, the Australian government said it plans to “introduce legislative amendments to the Broadcasting Services Act to require all ISPs to block RC (Refused Classification)-rated material hosted on overseas servers. RC-rated material includes child sex abuse content, sexual violence and the detailed instruction of drug use.” Senator Stephen Conroy said the scheme “balances safety for families and the benefits of the digital revolution”. Australian ISPs will use a blacklist of restricted sites operated by ACMA, the Australian Communications and Media Authority. Critics of the plans have raised concerns over how this list will be compiled and how easily it can be circumvented. They point out that last year an early version of the list, obtained by the website Wikileaks, was found to contain perfectly innocuous sites, including that of a dentist from Queensland.
Senator Scott Ludlam, communications spokesperson for the Australian Greens political party, described the policy as “pointless and simply misguided”. Last October the UK Government dropped plans to introduce legislation that would force ISPs to block access to sites hosting images of child abuse and other illegal content. It said the plans were not needed as self regulation works.

Source of Information : Computer Active Issue 310 January 7 2010

Saturday, February 13, 2010

Criminals target social sites

CYBER CRIMINALS WILL make more use of social-networking sites such as Facebook and Bebo this year to launch attacks and spread malicious software, security companies have warned. As 2009 ended, they predicted that hackers will release more malicious software using rootkits and other covert methods to avoid detection by security software. Security firm Kaspersky said file-sharing services could become increasingly popular for launching attacks, after they were used to spread several infections last year. Joseph Souren, vice president of internet security at security company CA, said: “It is a cat and mouse game. Cybercriminals are evolving and are constantly looking for new vulnerabilities to exploit.” However, one of last year’s most popular scams, deceiving people into downloading fake anti-virus tools, could become less effective as more people become aware of it. Such software presented users with fake scan results and bogus warning messages in an attempt to steal bank details, collect payment for a fake security product or install other malicious software. “The fake antivirus market has been saturated and profits for cybercriminals have fallen,” Kaspersky said. But Symantec said that criminals have not completely finished with anti-virus-related fraud. It has noted more sites selling legitimate, but free, anti-virus software such as AVG. “Consumers are still being ripped off paying for software they can get free,” the company said.

Source of Information : Computer Active Issue 310 January 7 2010

Friday, February 12, 2010

WARNING TO ADOBE USERS

Security experts are warning Adobe customers to be extra vigilant following the discovery of an attack that attempts to exploit a vulnerability in Adobe’s Reader and Acrobat products. Security researchers for Symantec said that the attack comes as a Trojan hidden in a PDF file attachment in junk emails. The attack attempts to lure email recipients into opening the attachment. When the file is opened, a malicious file disables the Windows firewall and downloads software. Adobe has since confirmed it is investigating the “reports of vulnerability in Adobe Reader and Acrobat 9.2 and earlier versions” and will issue a fix as soon as it has more information.


Source of Information : Computer Active Issue 310 January 7 2010

Thursday, February 11, 2010

Harmful advert fears addressed

Advertisements in on-demand programmes now rated and restricted

CONCERNS ABOUT inappropriate adverts appearing on video-on-demand (VoD) services have been addressed by a new law. Providers of these services, such as Channel 4’s 4OD and the ITV Player, must now comply with the new AudioVisual Media Services (AVMS) Directive that came into force on 19 December 2009. AVMS is the successor to the Television without Frontiers Directive. It imposes the British Code of Advertising, Sales Promotion and Direct Marketing (CAP Code) on VoD services. This means that in future the Advertising Standards Authority (ASA) can act if viewers complain about an advert. Although VoD adverts don’t have to be cleared in the same way as those appearing on traditional TV services (called linear TV), VoD services must abide by certain rules, including banning product placement in all children’s programmes. To ensure that the standards are adhered to, VoD providers including Virgin, Sky, ITV, Channel 4 and Five began checking adverts on their services before the AVMS became law. Clearcast, which monitors the adverts before they have been aired for these companies, said that because of the nature of VoD, timing restrictions currently assigned to linear adverts cannot be carried across. The company said it would assign levels for providers that will indicate whether there is violence, nudity, or potential harm or offence in an advert. Level one will be equivalent to adverts which must not be shown around children’s programmes, level two for those that can’t be shown before 7.30pm, level three adverts can be shown after 9pm, and levels four and five can be shown after 10pm and 11pm respectively. The inclusion of online and on-demand video in the AVMS Directive was controversial, as some feared that the European Commission was attempting to extend media regulation to the whole internet; for example user-generated videos, such as those posted on YouTube. Clearcast pointed out that the directive is only applicable to mass market TV-like services.
Kristoffer Hammer for Clearcast said: “Any display advertisements or audio-visual ads the viewer will see before selecting a VoD programme will not be covered by the directive.”


Source of Information : Computer Active Issue 310 January 7 2010

Wednesday, February 10, 2010

iPhone users are happier to micropay

Apple may have found the Holy Grail of computing – persuading web users to make micropayments to consume content – if a new survey’s findings are correct. The bad news for newspapers is that it could be harder to persuade customers to pay for news. According to the Olswang Convergence Survey, published by UK law firm Olswang, iPhone users are ‘amongst the heaviest users of digital content’ and are ‘also more willing than any other consumer to pay for a wide range of types of content’. The survey was conducted by Olswang and YouGov, which carried out an online poll of 1013 UK adults and 536 13-17-year-olds. If the survey’s findings are correct, Apple has found a way to make money from online digital content, rather than relying solely on online advertising, which few companies apart from Google have mastered. The survey found that iPhone users were ‘heavy users of services such as on-demand TV’: 19% of iPhone users watch it on their phones compared with 3% of the survey base. Interestingly, iPhone users are also heavier users of on-demand TV on their televisions at home: 37% of iPhone users compared with 26% of the survey base. Furthermore, Olswang found that 37% of iPhone users were interested in accessing on-demand TV on their phones in the future, compared with 11% of the overall survey base. Even more significantly, Olswang found iPhone users ‘demonstrated greater willingness to use micropayments and subscriptions to pay for access to a broad range of content’.


Source of Information : MacUser.January 2010

Tuesday, February 9, 2010

Schiller defends App Store approval policy

‘Schiller claimed 90% of rejections were for technical reasons such as bugs or functions that didn’t work as intended’

Faced with the exodus of some high-profile developers of iPhone apps, Apple senior vice-president for worldwide product marketing Phil Schiller granted a rare interview to explain the company’s App Store approval process, which has been variously condemned as confusing, arbitrary and controlling. Schiller spoke to BusinessWeek days after Joe Hewitt, who created the Facebook iPhone app, announced he would no longer develop for the iPhone. Hewitt, who also helped develop the Firefox browser, said his decision ‘had everything to do with Apple’s policies’, which he alleged were ‘setting a horrible precedent for other software platforms’. Another Mac and iPhone developer, Rogue Amoeba, also announced it wouldn’t develop any more iPhone apps after its Airfoil Speakers Touch app was blocked by Apple over alleged trademark infringement.
While Schiller promised Apple would be more flexible, he dedicated most of his interview to defending the company’s approach, and pointed out that Apple approves the vast majority of apps submitted to it by developers. Schiller claimed 90% of rejections were for technical reasons such as bugs or functions that didn’t work as intended. He said that when these problems were fixed Apple approved the apps. Schiller said the remaining 10% of submissions were rejected as ‘inappropriate’. ‘There have been applications submitted for approval that will steal personal data, or which are intended to help the user break the law, or which contain inappropriate content,’ said Schiller. ‘We’ve built a store for the most part that people can trust. You and your family and friends can download applications from the store, and for the most part they do what you’d expect, and they get onto your phone, and you get billed appropriately, and it all just works.’ Schiller pointed out that developers send Apple around

Source of Information : MacUser.January 2010

Monday, February 8, 2010

Opera 10

www.opera.com/browser

Opera has long been the most innovative of the big web browsers. The latest release introduces yet more new features and the software's rendering engine has been optimized to make it much faster at loading JavaScript-heavy sites such as Google Mail and Facebook. It's also had a stylish makeover from British designer Jon Hicks (creator of the Firefox logo). There are two versions of Opera available: the standard one and a Labs release which contains Unite, an add-on designed to transform Opera into a web server. This experimental edition is available to download from http://unite.opera.com but for now we'll concentrate on the main browser.



BROWSE THE WEB WITH OPERA
Often overlooked, Opera is actually one of the top browsers for speed and features. Here are some highlights

It's now a standard feature in most browsers but Speed Dial made its debut in Opera. It displays your most frequently accessed sites as thumbnails on any new tab. Click a blank square to add a site. The Configure Speed Dial link lets you add a background and change the number of sites on display.

One of the most noticeable changes in Opera 10 is the addition of a resizable tab bar. Click and drag the handle downwards and thumbnails of the open sites will appear above their respective tabs. Hovering your mouse over a tab will display a larger thumbnail of the site.

Opera supports widgets. To add some, go to Widgets, Add Widgets and browse the selection. When you find one you like, click Launch. You'll be asked if you want to keep it or not. Widgets float above all windows, not just your browser, and can be toggled on and off (individually) from the Taskbar.

Opera Mail is a combined email client/newsreader. To use it, go to Tools, 'Mail and Chat Accounts'. Choose the type of account you want and follow the set-up instructions. When you've finished, a Mail menu and Mail panel will appear. Opera 10 also offers separate integrated support for webmail.

You can download BitTorrent files directly in the browser using the program of your choice - Opera is the default. Click the preferences button to adjust the upload/download speeds and change the listen port. Use the search box to search for BitTorrent files. Click the link to download a file.

Opera lets you subscribe to feeds using any feed reader. Click the RSS button in the address bar and the feed will be laid out across a page. Select a feed reader from the drop down menu. The default is Opera Mail but other options include Bloglines and Google Reader. Click the button to subscribe.

Opera is very standards-compliant but, if a website won't display properly, you can pretend you're using IE or Firefox. Go to Tools, Quick Preferences, Edit Site Preferences. Click the Network tab. In the identification box, choose a browser to identify or masquerade as. Click OK and then reload the page.

The Opera Turbo feature uses compression technology to speed up page loading on a slow connection. To activate it, click the Turbo button at the bottom of the screen. However, it's not designed to run on a speedy broadband connection and may reduce the quality of web pages noticeably.

Source of Information : Ultimate PC and Web Workshops Winter 2009

Sunday, February 7, 2010

Three Simple Annoyance Busters for Your Windows PC

Install updates without rebooting, constrain Windows Media Center’s drive space to a limit you prefer, and force apps to run full-screen.

THIS MAY COME as a shock to you, but Windows doesn’t always behave as it should. Fortunately, I know a few tricks that can rehabilitate your PC. This month, I’ll outline how to avoid automatic reboots after Windows Update runs. I’ll also describe how to limit the amount of disk space Windows Media Center can use. And I’ll share a trick for automatically opening apps in full-screen mode.



Stop Reboots After Automatic Updates
You step away from your computer for a little while, and when you come back, your windows and your work are gone. Why? Because Windows downloaded some updates and then took it upon itself to reboot without asking you for permission to do so. Gah! This very thing happened to me not long ago, and I lost some in-progress work as a result. More precisely, I had instructed the Windows Update pop-up to postpone rebooting for 4 hours—and I just happened to be away from the PC when that timer ran out. Unlucky me. A ridiculously easy fix for this exists, and I’m kicking myself for not applying it sooner. If you’ve been plagued by the same problem, here’s what you need to do:

1. In Vista, click Start, type Windows Update, and press Enter. In XP, open the Control Panel and select Automatic Updates from the menu of options.

2. In Vista, click the Change Settings option at left. In XP, you can simply skip to step 3.

3. Change the setting to Download updates but let me choose whether to install them (in Vista) or Download updates for me, but let me choose when to install them (in XP).

4. Click OK.
That’s it. Windows may still nag you about installing updates, but at least it won’t reboot without your permission.
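For readers comfortable editing the registry, the same no-reboot behavior can also be enforced with a policy value. The NoAutoRebootWithLoggedOnUsers setting below is a documented Windows Update policy, but treat this fragment as a sketch and back up your registry before importing it:

```reg
Windows Registry Editor Version 5.00

; Stop Windows Update from automatically rebooting while a user is logged on
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU]
"NoAutoRebootWithLoggedOnUsers"=dword:00000001
```

Save it as a .reg file, double-click it to import, and Windows will wait for you to restart at your convenience.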



Prevent Windows Media Center From UsingYour Entire Hard Drive
I’m a big fan of the Windows Media Center software that comes baked into most versions of Vista and Windows 7. Specifically, I use it in conjunction with a TV tuner (four of them, in fact) to transform my PC into a DVR that rivals TiVo, in my humble opinion. Just one problem: If you use Windows Media Center to record TV shows, it can consume almost your entire hard drive. For example, suppose that you configure it to record 30 Rock, The Office, Mad Men, or whatever your favorite shows may be. By default, WMC records an unlimited number of episodes of each TV series you specify; but if a few weeks go by before you have a chance to sit down and watch anything (that’s what a DVR is for, right?), the accumulating shows may fill your hard drive to the brim—leaving you little or no room for anything else. The solution to this problem is to limit the amount of space WMC can claim for TV recording. Here’s how to proceed:

1. Start Windows Media Center.

2. Scroll down to Tasks, and then over to Settings, and click that option.

3. Choose Recorder, and then Recorder Storage. (These options will appear only if you have a TV tuner installed and configured.)

4. Use the minus arrow located next to the redundantly named ‘Maximum TV limit’ to decrease the storage maximum (in 25GB increments) available for Windows Media Center’s use.

5. Click Save to finish the operation.
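When choosing a limit, it helps to translate gigabytes into hours of TV. The sketch below uses an assumed figure of roughly 3GB per hour for standard-definition recordings (HD can easily be double that or more); only the 25GB step size comes from the dialog described above.

```python
# Rough planner for picking a Media Center recording-storage limit.
# Assumption (not from the article): SD recordings run about 3GB/hour.

GB_PER_HOUR_SD = 3.0
INCREMENT_GB = 25  # WMC adjusts the limit in 25GB steps


def hours_of_tv(limit_gb, gb_per_hour=GB_PER_HOUR_SD):
    """Approximate hours of recordings a given limit can hold."""
    return limit_gb / gb_per_hour


def nearest_limit(desired_gb):
    """Snap a desired limit to the 25GB increments WMC offers."""
    return max(INCREMENT_GB, round(desired_gb / INCREMENT_GB) * INCREMENT_GB)


print(nearest_limit(110))           # 100
print(round(hours_of_tv(100), 1))   # 33.3
```

So a 100GB cap leaves room for roughly 33 hours of SD shows, plenty for a few weeks of unwatched episodes without swallowing the whole drive.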



Force Programs to Run at Full-Screen Size
Reader Bill has a problem with Internet Explorer 8, which he runs in Windows XP: Every time he starts the browser, it opens in a reduced-size window rather than at full-screen size. Then he has to maximize it manually every time. What a hassle! I encountered the same annoyance with Excel 2007. Fortunately, it’s easy to force any program to run maximized (that is, at full-screen size) when you start it. Here’s how:

1. Right-click the program’s shortcut, and click Properties.

2. The Properties window will open with the Shortcut tab selected. Click the pull-down menu next to Run, and choose Maximized.

3. Click OK, and you’re done.

Henceforth, whenever you start that program using that shortcut, it should automatically give you a full-screen window.


Source of Information : PC World December 2009

Saturday, February 6, 2010

How do I transfer my old Outlook Express inbox to a new PC?

HOW YOU MOVE your inbox depends on the version of Windows it’s moving to. I’ll focus on moving from XP to Vista; for details on how to go from one XP system to another or from XP to Windows 7, see find.pcworld.com/63937. First, you must copy your old PC messages. In Outlook Express, select Tools > Options. Click the Maintenance tab, and then the Store Folder button. The resulting Store Location dialog box has a field containing a folder path (probably starting with ‘C:\Documents and Settings...’). Select this entire path by clicking inside it, pressing Home, and then pressing Shift-End. With the entire path highlighted, press Ctrl-C to copy it. Click Cancel twice to leave both dialog boxes. Be sure to close Outlook Express. Then select Start > Run, press Ctrl-V to paste that path into the Run box, and press Enter.

A Windows Explorer window will open, showing the contents of your store folder—the one holding your mail. Click the Up Folder icon to move to that folder. Copy the folder (probably called ‘Outlook Express’) to an external drive, a shared folder, or other media. In Vista, launch Windows Mail, and select File > Import > Messages. Select Microsoft Outlook Express 6 as the program in the resulting Windows Mail Import wizard. Click the Browse button, find and select the Outlook Express folder that you copied from your old PC, and click Select Folder. Complete the wizard’s remaining steps. To move the contents of your old inbox into your new one, click the Inbox folder inside the Imported Folder folder, press Ctrl-A to select all the messages, and drag them to the real Inbox folder.

Source of Information : PC World December 2009

Friday, February 5, 2010

Make Documents and Media Open in the Right App

MY WIFE’S PC came with a trial of Microsoft Office 2007, but I installed IBM Lotus Symphony on the system instead—in part because it’s free, and in part because I think it’s easier to use. But when the missus attempts to open certain file types (such as .docx or .rtf), up pops Office 2007, its trial period having long since expired. Why don’t these files open in Symphony?

For whatever reason, certain file types remain associated with Office, so Windows doesn’t know that it’s supposed to direct them to Symphony. Fortunately, the problem is easy to fix. In Vista and Windows 7, click Start, type Default, and press Enter to load the Default Programs menu in Windows. Then click Associate a file type or protocol with a program, choose the file type in question, click Change Program, and go from there. That’s a lengthy process. I prefer to right-click any file that’s incorrectly associated (such as one of the aforementioned .rtf files), mouse over Open With, and click Choose Default Program. If the program you want appears under Recommended Programs (and it should), click it, and make sure the checkbox for Always use the selected program to open this kind of file is checked. Click OK and you’re done.
Henceforth, any attempt to open that file type (not just that file) will cause Windows to load the selected program. If the program doesn’t appear, click Browse to locate its executable on your hard drive. That’s not the easiest task in the world, but you’ll need to do it if you want to re-associate that file type. The most common file association hassle you’re likely to encounter involves media files—MP3s, videos, and the like—that refuse to open where you want them to. This solution works with those kinds of files as well as with document files.

Source of Information : PC World December 2009

Thursday, February 4, 2010

Which Windows Update patches should I download and install?

UPDATES ARE CONFUSING because Microsoft throws a lot of stuff at you. Some items you need; some you might like; and some Microsoft wants you to have for its own purposes. The fact that most of the updates’ names are meaningless certainly doesn’t help. For any Windows Vista update, double-clicking the update will summon a pop-up window with a description. In XP, click the + next to the update name to expand the list and show details.

Vista updates come in three levels:

• Important: Most of these updates are essential security fixes. Unfortunately, Microsoft occasionally throws something into this group that it wants you to have for its own benefit—not yours— such as Windows Genuine Advantage.

• Recommended: Nothing horrible will happen if you skip these items, but you might miss something that will make your PC work better. Read the descriptions and decide for yourself.

• Optional: You might occasionally discover a useful driver update here, but more likely you’ll find marketing hype.

XP has just two levels:

• High Priority: As with Vista’s Important category, most of the content here is crucial. For example, if you still use Internet Explorer 6, the upgrade to IE 8 is high-priority. IE 8 is significantly more secure, but it’s a big change and some people hate it.

• Optional: Divided into separate Software and Hardware sublevels, this group combines useful but nonvital updates, drivers (though not many), and useless hype. Use your judgment.

Most individual Windows updates—even Important and High Priority ones—aren’t cumulative. If they were, you’d need to update your PC with only the most recent of them. The big service packs, however, are always cumulative. In fact, Microsoft just recently replaced a long list of Vista updates with one: Service Pack 2.

Source of Information : PC World December 2009

Wednesday, February 3, 2010

Host Free Conference Calls for Your Business

CONFERENCE call service, like most other forms of telecommunications, has become a commodity. You can save money by using one of many free options available. Though its name and URL may sound dubious, FreeConference.com delivers on its promise of free basic service. Up to 150 people can join the call, which is ample headroom for accommodating nearly any situation a small or medium-size business is likely to encounter. You can schedule the call in advance or set up an access number for an impromptu meeting whenever needed. Planned calls let you input a few extra controls; for example, the organizer can mute the entire group of callers. Either way, participants enter a (usually) long-distance number, supply an access code, and join the conversation. Participants can talk for 4 hours on a scheduled call or 3 hours on an unplanned phone meeting. The only charges involved are the relevant long-distance fees from your own phone company.

FreeConference.com sells upgrades to customers who need more features. For $9 per month you can get call recording or PC desktop sharing tools. (Getting both costs $18 monthly.) Or you can add either service to a single call for $6.50 per month each. If you want participants to be able to dial in to an 800 number, FreeConference.com can set it up for you, but the host must pay 10 cents per minute per participant (the charges will appear on the host’s credit card). You might consider springing for those extra features in certain situations. But depending on your needs, you may find that you can get by with the free service for most—or all—calls.
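The toll-free option is the one place where costs can sneak up on you, because the 10-cents-per-minute fee scales with both headcount and call length. A quick sketch of the arithmetic:

```python
# FreeConference.com bills the host 10 cents per minute per participant
# for 800-number access, per the pricing quoted above.

TOLL_FREE_RATE = 0.10  # dollars per participant-minute


def toll_free_cost(participants, minutes):
    """Host's charge for one toll-free call."""
    return participants * minutes * TOLL_FREE_RATE


# A ten-person, one-hour meeting:
print(f"${toll_free_cost(10, 60):.2f}")  # $60.00
```

At that rate, a full 150-person, 4-hour scheduled call would cost the host thousands of dollars, so the free long-distance numbers are usually the sensible default.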

Source of Information : PC World December 2009

Tuesday, February 2, 2010

Upgrade to Gigabit Networking for Faster Transfers

Get speedier file transfers, smoother video streaming, and better network gaming with the right PC networking tools.


ON MOST HOME networks, the transfer rate of a fast ethernet connection (about 12.5 megabytes per second) is the speed limit—and that’s painfully slow for some tasks. The solution? Upgrade to a gigabit network. Switching over to gigabit (1000-mbps) speeds increases potential throughput tenfold, minimizing transfer times and greatly enhancing your ability to stream high-bandwidth files to connected devices without interference. Gigabit networking is now a common feature of networking devices and shouldn’t carry a big cost premium. Most modern motherboards have gigabit functionality built in. This guide does not apply to wireless networks. The factors that constrain speeds on wireless networks are entirely unlike those that limit speeds on wired networks. Here we’ll look at how to determine whether your equipment can handle gigabit networking, and (if not) how to build a gigabit network from scratch.


Identify Your Network
Do you already have a gigabit network? The Windows desktop doesn’t indicate whether you’ve acquired this superspeedy networking feature. And since many factors influence network transfer speeds, your gigabit network might crawl at a data transfer rate of less than 10MBps for various reasons. One requirement of gigabit networking is that all connected devices be connected via a gigabit port. In addition, they must be connected to one another with network cables that can handle the bandwidth. For devices such as your router, a gaming console, or an external storage device, the easiest way to discover whether they support fast Ethernet (10/100 mbps) or gigabit ethernet (10/100/1000 mbps) is to check the devices’ specifications in their online descriptions or accompanying manuals. Look for a mention of either “gigabit networking” or “1000 Mbps.” Your PC’s motherboard is a critical component of the gigabit network. If your system came to you prebuilt or if you don’t remember relevant details about your rig’s motherboard, don’t worry. In Windows, click Start and select Run (for more-modern versions of the OS, move your cursor to the search box and left-click). Type ncpa.cpl and press Enter. The Network Connections window should pop up. Right-click the network connection that’s listed as your Local Area Connection (LAN), and left-click Properties.
Click the big Configure button located to the right of the listing for your network controller. In the new window that appears, open the Advanced tab and scroll down until you find a property labeled ‘Connection Type’ or ‘Speed’. Left-click it and click the Value field to the right. Scroll up and down through this list of options, looking for anything that starts with a value of ‘1000’ or anything that refers to network speeds in ‘Gbps’. If all you see are ‘100’ values and speeds designated in ‘Mbps’, your motherboard’s built-in Ethernet controller tops out at fast-ethernet speeds. But you can still upgrade your PC to gigabit networking by installing a third-party gigabit ethernet card. If all of the devices on your network do support gigabit functionality, great! If you add a slower, fast-ethernet device to a gigabit-ready hub, transfer speeds will crawl only when you access that particular device—a slow device connected to a router won’t poison the rest. Obviously, if you directly connect a gigabit-ready PC to a fast-ethernet device such as a network-attached storage (NAS) box, you’ll get only fast-ethernet speeds. Also, consider your cables. A typical category 5 (Cat 5) cable supports gigabit ethernet, but it’s worthwhile to invest in Cat 5e cables if you are building a gigabit network from scratch. Plain old Cat 5 cabling is now considered obsolete, and Cat 5e cabling meets more-rigorous specifications, allowing it to do a better job than Cat 5 cabling can of minimizing electromagnetic interference. On the other hand, bumping up your cabling to a classification higher than Cat 5e may not benefit your network speeds; for example, Cat 6 cabling doesn’t dramatically improve speed. To see what kind of cable you have, check the cable’s side: The spec should be printed somewhere along the length of the cord.


Test Your Network
If your parts are in order and the cables are connected, you’ll want to fire up your gigabit network so that you can check its performance. But first you need to confirm that the drivers and firmware related to your network-oriented devices (motherboard, router, NAS box, and so on) are up-to-date. Suppose that you are planning to connect your PC to a gigabit NAS box via a single router. At this point you need to make sure that you are running the latest firmware for your NAS box and your router, and either the latest firmware and drivers for your motherboard or the most recent drivers for your discrete gigabit network card, depending on how you’ve set up your system. All too often, a device may not work as intended out of the box. Head over to the manufacturer’s Web site to grab the latest drivers and firmware updates; then run the accompanying driver setup program or follow the related instructions for flashing your device. The process isn’t difficult (see find.pcworld.com/63936). Fire up your network devices and use the helpful LAN Speed Test utility (www.totusoft.com/Products) to gauge the speeds that your gigabit network is attaining. After launching the tool, click the Start Test button and browse to a folder on a connected network device. Enter a size for your test file (1GB should do the trick), and the program will begin to track the read and write speeds of transfers between your system and the target device. Of course, you won’t get the maximum 125MBps connection that a gigabit network theoretically supports. Ultimately, the speed of the storage devices doing the reading or writing—be they magnetic hard drives or flash-based storage—will limit your network’s performance. For a hard drive, relevant factors include the physical speed of the drive itself and the location where the drive writes the data on its physical platters. For a solid-state drive (SSD), the performance you get depends on whether the drive uses faster single-level cell flash memory or slower multilevel cell flash memory, and on whether you’re reading or writing to the drive. Unless it uses a RAM drive, or an array of hard drives or SSDs, your network won’t reach the 125MBps limit for gigabit networking. Nevertheless, you can realistically expect to achieve speeds of at least 40 to 50MBps, which is four times as fast as the real-world speed of a typical fast-ethernet connection. Though gigabit networking might not be the Star Trek transporter of LAN-based file transfers, the performance improvement that it offers over a typical fast-ethernet connection amply compensates for the time this setup process requires.
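One persistent source of confusion in network speeds is that gear is rated in megabits per second (mbps) while file sizes are quoted in megabytes (MB), an eightfold difference. A small illustrative sketch of the arithmetic behind the figures above:

```python
# Gigabit ethernet is rated at 1000 megabits per second; dividing by
# 8 bits per byte gives the 125MBps theoretical ceiling cited above.


def mbps_to_MBps(mbps):
    """Convert a megabits-per-second rating to megabytes per second."""
    return mbps / 8


def transfer_seconds(file_mb, link_mbps, efficiency=1.0):
    """Time to move a file of file_mb megabytes over a link, optionally
    derated for real-world overhead (0 < efficiency <= 1)."""
    return file_mb / (mbps_to_MBps(link_mbps) * efficiency)


print(mbps_to_MBps(1000))            # 125.0 (theoretical gigabit ceiling)
# A 1GB (1000MB) test file: fast ethernet vs. gigabit, ideal conditions
print(transfer_seconds(1000, 100))   # 80.0 seconds
print(transfer_seconds(1000, 1000))  # 8.0 seconds
# At the ~40MBps real-world gigabit figure mentioned above:
print(round(transfer_seconds(1000, 1000, 40 / 125), 1))  # 25.0 seconds
```

Even derated for real-world overhead, gigabit cuts a 1GB copy from well over a minute to under half a minute.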

Source of Information : PC World December 2009

Monday, February 1, 2010

Protect Your Privacy on Facebook and Twitter

WEB SURFING IS no longer a solo activity. Facebook, Twitter, and other social networks have quickly become an integral part of the online culture, and with them comes an array of serious threats to your privacy. In this article, I’ll identify some of the key dangers of social networking and offer a few easy steps that you can take to stay safe online. Social networking is built on the idea of sharing information openly and fostering a sense of community. Unfortunately, an online network of individuals who actively share their experiences and seek connections with other like-minded people can be easy prey for hackers engaged in social-engineering and phishing attacks. It’s important to be aware of the threats and to use discretion in all of your online interactions.



Take Care Before You Share Online
For starters, even in an open community of sharing, you should observe commonsense boundaries. As President Obama warned students in his September address to schools, “be careful what you post on Facebook. Whatever you do, it will be pulled up again later somewhere in your life.” The core truth of that statement can be applied to any social networking site, and possibly to the Internet as a whole. As a general rule, refrain from posting things online that you will regret later. The odds are good that someone, someday, will stumble across it, and it may come back to haunt you— especially if you are planning to run for public office. If you think that abstaining from posting embarrassing or inflammatory comments online ruins the fun, you’re playing a dangerous game. Remember who your friends are, and know that a friend of a friend can be an enemy.



Don’t Lose Sight of Who Your Friends Are
When you write a Twitter tweet or post a Facebook status update, you have to keep your audience in mind. More and more these days, we hear stories about people who forgot that their boss was part of their network and then said things online that resulted in their being reprimanded or even fired. The adverse consequences of posting inappropriate online comments have become so commonplace—at least anecdotally—that they have earned an entry in the Urban Dictionary: Facebook fired. Even announcing something as seemingly innocuous as “I’m bored” in a status update during work hours can have dire consequences if the wrong people see it. With services like Twitter, and with the recent changes to Facebook that permit any interested party to view and search your updates, you really have no way to hide.



Recognize the Visibility of Your Posts
You’ve thought it through, and you want to shout to the world how you feel about having to work overtime and during a weekend that you had earmarked for recreational activities. You have checked and double-checked, and you’ve determined that your boss is not in your network, so you let loose on the keyboard and speak your mind. Unfortunately, you’re not home free (figuratively speaking) just yet. Because your boss is outside your network, he or she can’t see your post directly; but if a Facebook friend who is connected with your boss comments on your status update—even just to say “I sympathize”—your boss may be able to click on the link through the friend and see your post. Go ahead, be social. Share your trials and tribulations with your growing network of adoring followers. But for your own safety, keep one essential rule in mind: Never post anything online that you wouldn’t be comfortable having everyone you know see—because eventually they probably will see it.



Define the Parameters of Your Privacy
Marrying privacy and social networking may seem terribly unintuitive. How can you be social and open, and yet protect your privacy? Well, just because you are choosing to share some information with a select group of people does not necessarily mean that you want to share everything with everyone, or that you are indifferent about whether the information you share is visible to all. Facebook, in particular, has drawn unwanted attention in connection with various privacy concerns. If you have used Facebook for a while, you may have noticed advertisements that incorporate your friends’ names or photos associated with them. Facebook does provide privacy controls for you to customize the types of information available to third-party applications. If you look at the Facebook Ads tab of the privacy controls, though, you’ll notice that it doesn’t give you any way to opt out of the internal Facebook Ads. Instead, it states (alarmingly) that “Facebook strives to create relevant and interesting advertisements to you and your friends.”



Approach Tattletale Quizzes With Caution
For many users, one of the primary attractions of Facebook is the virtually endless selection of games and quizzes. And part of their allure is their social aspect. In the advergames, you compete against your friends; through the quizzes, you learn more about them while being briefly entertained. The ACLU exposed problems with how much information these quizzes and games share, however. Typically, when a Facebook user initiates a game or quiz, a notice pops up to declare that interacting with the application requires opening access to information; the notice also provides the user the opportunity to opt out and cancel, or to allow the access to continue. The permission page clearly informs the user up front that allowing “access will let [the application] pull your profile information, photos, your friends’ info, and other content that it requires to work.” Under the circumstances, you may wonder (as the ACLU has) why a game or quiz application would “require” access to your friends’ information in order to work.



Facebook Policy Concerns in Canada
Facebook’s privacy policies have run afoul of the Canadian government, too. Canada’s Privacy Commissioner has determined that those policies and practices violate Canadian privacy regulations, and has recommended various changes Facebook should make to comply with them. One of the commissioner’s biggest concerns involves the permanence of accounts and account data. Facebook offers users a way to disable or deactivate an account, but it doesn’t seem to provide a method for completely deleting an account. Photos and status updates might be available long after a user has shut down a Facebook profile. And like the ACLU, the Canadian government is unhappy about the amount of user information that Facebook shares with third-party application providers.



Exercise the Privacy Controls You Have
Although the concerns of the ACLU and the Canadian government run a little deeper, Facebook does offer privacy controls for restricting or denying access to information. Since Facebook is a social networking site designed for sharing information, many of the settings are open by default. It is up to you to access the Privacy Settings and configure the options as you see fit. For each available setting, you can choose to share information with Everyone, with My Networks and Friends, with Friends of Friends, or with Only Friends; if you prefer, you can customize the settings to fine-tune access further.



Beware of Hijacking and Phishing Scams
By its very nature, social networking is all about socializing, which means that users are more than usually disposed to let their guard down and share information. They come to the network to expand their professional connections, reestablish contact with old friends, and communicate in real time with pals and peers. And for predatory bad guys, launching social-engineering and phishing attacks in this convivial environment is like shooting fish in a barrel. Most people know not to respond to e-mail requests from exiled Nigerian royalty promising millions of dollars in return for help smuggling the money out of the country. (Anyone who doesn’t know better probably shouldn’t be on the Internet; such people are a danger to themselves and to others.) But what if a good friend from high school whom you haven’t seen in 18 years sends you a message on Facebook explaining how her wallet was stolen and her car broke down, and asks you to wire money to help her get home? You might be less suspicious than you should be. Attackers have figured out that family and friends are easy prey for sob stories of this type. Using other attacks or methods, they gain access to a Facebook account and hijack it. They change the password so that the legitimate owner can’t get back in, and then they proceed to reach out to the friends of the hijacked account and attempt to extort money. If you receive such a Facebook message or e-mail plea, pick up the phone and call the person directly to confirm its legitimacy.



Don’t Let a Tiny URL Fool You
Another threat that has emerged recently as a result of social networking is the tiny-URL attack. Some URLs are very long and don’t work well in e-mail or in blog posts, creating a need for URL-shortening services. In particular, Twitter, with its 140-character limit, has made the use of URL-shortening services such as Bit.ly a virtual necessity. Unfortunately, attackers can exploit a shortened URL to lure users into accessing malicious Web sites. Since the shortened URL consists of a random collection of characters that are unrelated to the actual URL, users cannot easily determine whether it is legitimate or phony. TweetDeck, a very popular application for sending messages in Twitter, provides a ‘Show preview information for short URLs’ option, which offers some protection.
The preview window supplies details about the shortened URL, including the actual long URL that the link leads to. If you aren’t using TweetDeck for Twitter, or if you need to deal with shortened URLs on other sites and services, maintain a healthy dose of skepticism about what might lie behind the obfuscated address that a message points to.
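TweetDeck’s preview does this for you, but the underlying check is simple enough to script yourself. The sketch below is a minimal illustration, not tied to any particular shortening service’s API: it sends a HEAD request to the shortening service and reads the Location header that the redirect would send you to, without ever loading the destination page.

```python
import http.client
from urllib.parse import urlparse

# HTTP status codes that indicate the server is redirecting the client.
REDIRECT_CODES = {301, 302, 303, 307, 308}

def is_redirect(status: int) -> bool:
    """True if the status code means the response is a redirect."""
    return status in REDIRECT_CODES

def expand_short_url(short_url: str) -> str:
    """Ask the shortening service where a short URL points, using a HEAD
    request, without following the redirect or fetching the target page.
    Returns the original URL unchanged if no redirect is found."""
    parts = urlparse(short_url)
    conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                else http.client.HTTPConnection)
    conn = conn_cls(parts.netloc, timeout=10)
    try:
        conn.request("HEAD", parts.path or "/")
        resp = conn.getresponse()
        if is_redirect(resp.status):
            return resp.getheader("Location") or short_url
        return short_url
    finally:
        conn.close()
```

Calling `expand_short_url("http://bit.ly/example")` (a hypothetical link) would return the full destination URL, which you can inspect before deciding whether the link is safe to click.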


Source of Information : PC World December 2009