Data and Block Distribution

Data Distribution

Data distribution directives tell the compiler how the program data is to be distributed amongst the memory areas associated with a set of processors. The logic used for data distribution is that if a set of data has independent sub-blocks, then computation on them can be carried out in parallel. The directives do not let the programmer state directly which processor will perform a particular computation. But it is expected that if the operands of a particular sub-computation are all found on the same processor, the compiler will allocate that part of the computation to the processor holding the operands, whereupon no remote memory accesses will be involved.

Having seen how to describe one or more target processor arrangements, we need to introduce mechanisms for distributing the data arrays over those arrangements. The DISTRIBUTE directive is used to map a data object onto an abstract processor array.

The syntax of a DISTRIBUTE directive is:

!HPF$ DISTRIBUTE array_list [ONTO arrayp]

where array_list is the list of arrays to be distributed and arrayp is an abstract processor array. The ONTO specifier can be used to perform the distribution across a particular processor array. If no processor array is specified, one is chosen by the compiler.
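For example, a minimal sketch of distributing two arrays directly (the names P, A and B are illustrative only, and BLOCK is the distribution format described in (a) below):

! Sketch only: distributing arrays directly over an abstract processor arrangement.
!HPF$ PROCESSORS P(4)
      REAL A(100), B(100)
! Distributed over the named arrangement P:
!HPF$ DISTRIBUTE A(BLOCK) ONTO P
! No ONTO clause, so the compiler chooses the processor arrangement:
!HPF$ DISTRIBUTE B(BLOCK)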

HPF allows arrays to be distributed over the processors directly, but it is often more convenient to go through the intermediary of an explicit template. A template can be declared in much the same way as a processor arrangement.

!HPF$ TEMPLATE T(50, 50, 50)

declares a 50 by 50 by 50 three-dimensional template called T. Having declared it, we can establish a relation between a template and some processor arrangement by using the DISTRIBUTE directive. There are three ways in which a template may be distributed over processors: block, cyclic and *.
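As a sketch of how a template is typically put to use, arrays are first associated with the template (for instance with HPF's ALIGN directive, which is not discussed further in this section), so that distributing the template then determines the mapping of every array aligned to it. The names below are illustrative only.

! Sketch only: an array aligned to a template, which is then distributed.
!HPF$ TEMPLATE T(50, 50, 50)
!HPF$ PROCESSORS P(5, 5, 2)
      REAL A(50, 50, 50)
! Associate A with the template element by element:
!HPF$ ALIGN A(I, J, K) WITH T(I, J, K)
! Distributing T now also determines the mapping of A:
!HPF$ DISTRIBUTE T(BLOCK, BLOCK, BLOCK) ONTO P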

(a) Block Distribution

Simple block distribution is specified by

!HPF$ DISTRIBUTE T1(BLOCK) ONTO P1

where T1 is some template and P1 is some processor arrangement.

In this case, every processor gets a contiguous block of template elements. All processors get blocks of the same size, except possibly the last processor, which may get a smaller block.

Example 3:

!HPF$ PROCESSORS P1(4)

!HPF$ TEMPLATE T1(18)

!HPF$ DISTRIBUTE T1(BLOCK) ONTO P1

As a result of these directives, the data will be distributed as shown in the figure below.

[Figure: Block Distribution of Data]

In a variant of the block distribution, the number of template elements allocated to every processor can be explicitly specified, as in

!HPF$ DISTRIBUTE T1 (BLOCK (6)) ONTO P1

The distribution of data will then be as shown in the figure below.

[Figure: Variation of Block Distribution]

This means that if all the template elements are allocated before the processors are exhausted, the remaining processors are left empty (here, processor 4 receives no elements).
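The corresponding arithmetic for an explicit block size can be sketched in plain Fortran (the function name is illustrative):

! Sketch: owner of element I under BLOCK(M) - blocks of M consecutive
! elements are dealt out to processors in order until the elements run out.
INTEGER FUNCTION BLOCK_M_OWNER(I, M)
  INTEGER, INTENT(IN) :: I, M
  BLOCK_M_OWNER = (I - 1) / M + 1
END FUNCTION BLOCK_M_OWNER

With M = 6 and 18 elements, processor 1 holds elements 1-6, processor 2 holds 7-12, processor 3 holds 13-18, and processor 4 holds nothing.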

(b) Cyclic Distribution

Simple cyclic distribution is specified by

!HPF$ DISTRIBUTE T1(CYCLIC) ONTO P1

The first processor gets the first template element, the second gets the second, and so on. When the set of processors is exhausted, allocation wraps around to the first processor and continues from there.

Example 4

!HPF$ PROCESSORS P1(4)

!HPF$ TEMPLATE T1(18)

!HPF$ DISTRIBUTE T1(CYCLIC) ONTO P1

The result of these directives is shown in the figure below.

[Figure: Cyclic Distribution]

In the analogous variant of the cyclic distribution,

!HPF$ DISTRIBUTE T1 (CYCLIC (3)) ONTO P1

blocks of 3 consecutive template elements are dealt out to the processors in round-robin order, as shown in the figure below.

[Figure: Variation of Cyclic Distribution]

That covers the case where both the processor arrangement and the template are one-dimensional. When the template and the processor arrangement have the same, higher, rank, each dimension can be distributed independently, mixing any of the distribution formats described above. The correspondence between the template dimensions and the processor dimensions is the obvious one. In

!HPF$ PROCESSORS P2 (4, 3)

!HPF$ TEMPLATE T2 (17, 20)

!HPF$ DISTRIBUTE T2 (CYCLIC, BLOCK) ONTO P2

the first dimension of T2 is distributed cyclically over the first dimension of P2; the second dimension is distributed blockwise over the second dimension of P2.
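Combining the one-dimensional rules gives the owner of an element T2(I, J) in this example. The sketch below (plain Fortran, illustrative names) hard-codes the extents from the directives above.

! Sketch: processor coordinates (PI, PJ) in P2(4, 3) owning T2(I, J)
! under DISTRIBUTE T2(CYCLIC, BLOCK) ONTO P2 with T2(17, 20).
SUBROUTINE OWNER_2D(I, J, PI, PJ)
  INTEGER, INTENT(IN)  :: I, J
  INTEGER, INTENT(OUT) :: PI, PJ
  PI = MOD(I - 1, 4) + 1      ! first dimension: CYCLIC over 4 processors
  PJ = (J - 1) / 7 + 1        ! second dimension: BLOCK, size ceiling(20/3) = 7
END SUBROUTINE OWNER_2D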

(c) * Distribution

Some dimensions of a template may have a "collapsed distribution", allowing a template to be distributed onto a processor arrangement with fewer dimensions than the template.

Example 5

!HPF$ PROCESSORS P1(4)

!HPF$ TEMPLATE T2 (17, 20)

!HPF$ DISTRIBUTE T2 (BLOCK, *) ONTO P1

means that the first dimension of T2 will be distributed over P1 blockwise, but for a fixed value of the first index of T2, all values of the second subscript are mapped to the same processor.
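A plain-Fortran sketch of the resulting mapping (the function name is illustrative):

! Sketch: under DISTRIBUTE T2(BLOCK, *) ONTO P1 with P1(4) and T2(17, 20),
! only the first subscript determines the owner; the second is collapsed,
! so the whole row T2(I, 1:20) lives on one processor.
INTEGER FUNCTION OWNER_COLLAPSED(I)
  INTEGER, INTENT(IN) :: I
  OWNER_COLLAPSED = (I - 1) / 5 + 1   ! block size = ceiling(17 / 4) = 5
END FUNCTION OWNER_COLLAPSED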
