Saturday, December 18, 2010

Given that they are extremely complex to implement, will MPP architectures be confined to scientific and technical markets?

To take advantage of a massively parallel architecture, one must have applications written for it. The cost of developing such applications, or of adapting existing applications for MPP use, restricts the number available. For applications requiring intense numerical computation, companies computed the ROI (Return On Investment) and concluded that the investment was worthwhile.

MPP architectures force applications to be developed using the message-passing paradigm. For a long time, the major barrier was the lack of interface standards and development environments. Candidate standards have since appeared (such as PVM, MPI, or OpenMP), and the number of applications built on them is steadily increasing.
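To make the message-passing paradigm concrete, here is a minimal sketch (not taken from the original text) of an MPI program in C, in which one process sends a single integer to another. MPI_Send and MPI_Recv are part of the standard MPI interface; the program itself is purely illustrative.

/* Minimal message-passing sketch: rank 0 sends an integer to rank 1.
 * Compile with an MPI wrapper (e.g. mpicc) and run with at least 2 ranks:
 *   mpirun -np 2 ./send_recv
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Data moves only via explicit messages, never shared memory. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}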

We should note here that the efficiency of such applications usually depends on the performance (in particular, the latency) of message passing, including all relevant hardware and software overheads. Not unreasonably, the first MPP applications to appear were those that could straightforwardly be decomposed into more or less independent subtasks, needing only a small number of synchronizations to work together effectively. Thus, scientific and technical applications (in fields such as hydrodynamics, thermodynamics, and seismology) were the first to appear on MPP systems.
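As an illustration of why latency matters, the sketch below (an assumption of this rewrite, not part of the original text) uses the classic ping-pong pattern to estimate the one-way message latency between two MPI ranks, which captures both hardware and software contributions.

/* Ping-pong latency sketch: rank 0 sends a 1-byte message to rank 1,
 * which echoes it back; the average round trip over many iterations
 * approximates twice the one-way latency. Run with 2 ranks.
 */
#include <mpi.h>
#include <stdio.h>

#define ITERATIONS 10000

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;
    double start, elapsed;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    start = MPI_Wtime();

    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    elapsed = MPI_Wtime() - start;
    if (rank == 0)
        printf("approx. one-way latency: %g us\n",
               elapsed / (2.0 * ITERATIONS) * 1e6);

    MPI_Finalize();
    return 0;
}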

There is another brake on the flourishing of this class of machine: the lack of low-level standards providing access to the interconnect. Because of this, adapting higher-level libraries to the various hardware (and interconnect) platforms is substantially difficult; such a port requires deep technical knowledge and skills, so moving from one manufacturer to another is hard. Industry initiatives such as VIA or InfiniBand could bring enough standardization for MPP to spread.

MPP products are challenged from yet another direction: the appearance of peer-to-peer systems, grid computing, and clusters of PCs. A key attraction of these solutions is low cost, which drives the development of applications that can quickly become industry standards. The rise of these approaches could condemn the MPP proper to an early grave.

To finish this discussion, recall that in 1984 Teradata brought to market what were probably the first MPP systems. These systems, running Teradata's own proprietary DBMS, were used for decision support, which requires processing very large volumes of data (several terabytes). With the advent of large clusters and MPP systems from other manufacturers, the major database vendors subsequently produced versions of their software to run on this class of machine.


Source of Information: Elsevier, Server Architectures, 2005
