Analysis of parallel algorithms

An algorithm is generally analyzed on the basis of two parameters: its time complexity (running time) and its space complexity (amount of memory required). Usually, time complexity is given much more weight than space complexity. The subsequent sections discuss how the complexity of parallel algorithms is analyzed. The fundamental parameters needed for the analysis of parallel algorithms are as follows:

  • Time Complexity
  • The Total Number of Processors Required
  • The Cost Involved.

Time Complexity

Most people who implement algorithms want to know how much of a particular resource (such as storage or time) a given algorithm requires. Parallel architectures have been designed to improve the computational power available to a variety of algorithms. Therefore, the major concern in evaluating an algorithm is determining the amount of time required to execute it. Generally, the time complexity is calculated as the total number of steps executed to produce the desired output.

Parallel algorithms usually split a problem into symmetrical or asymmetrical sub-problems, distribute them to several processors, and combine the results at the end. The resources consumed by a parallel algorithm therefore include both the processor cycles on every processor and the communication overhead among the processors.

In the computation step, each local processor performs arithmetic and logic operations. After that, the processors communicate with each other to exchange data and/or messages. The time complexity can therefore be calculated from both the computation cost and the communication cost involved.
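As a rough illustration of the two cost components, the total running time can be modelled as computation cost plus communication cost per step. The function name and the cost figures below are assumptions for this sketch, not values taken from the text:

```python
# Sketch (assumed model): total parallel time as the sum of per-round
# computation cost and inter-processor communication cost.

def parallel_time(steps, t_comp, t_comm):
    """Total time for `steps` rounds, where each round performs one
    local computation (t_comp) followed by one message exchange (t_comm)."""
    return steps * (t_comp + t_comm)

# Example: 10 rounds, 4 time units of arithmetic and 1 unit of
# communication per round -> 50 time units in total.
total = parallel_time(10, 4, 1)
```

In practice the two terms often grow at different rates with the number of processors, which is why both are tracked separately in the analysis.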

The time complexity of an algorithm varies depending upon the instance of the input for a given problem. For example, an already sorted list (10, 17, 19, 21, 22, 33) will consume less time than the same list in reverse order (33, 22, 21, 19, 17, 10). The time complexity of an algorithm is classified into three forms:

i)       Best Case Complexity;

ii)      Average Case Complexity;

iii)     Worst Case Complexity.

The best case complexity is the smallest amount of time required by the algorithm for a given input size. The average case complexity is the average running time required by the algorithm over all inputs of a given size. Likewise, the worst case complexity is the maximum amount of time needed by the algorithm for a given input size.
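To make the best and worst cases concrete, here is a short sketch that counts the comparisons an algorithm performs on the sorted and reverse-sorted lists mentioned above. Insertion sort is used purely as an illustrative example; the text does not name a specific algorithm:

```python
def insertion_sort_steps(a):
    """Insertion sort that also counts the comparisons performed."""
    a = list(a)
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            steps += 1              # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]     # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, steps

# Best case: already sorted input needs only one comparison per element.
_, best_steps = insertion_sort_steps([10, 17, 19, 21, 22, 33])   # 5 comparisons
# Worst case: reverse-sorted input compares every pair, 1+2+...+5.
_, worst_steps = insertion_sort_steps([33, 22, 21, 19, 17, 10])  # 15 comparisons
```

The same input size thus produces very different step counts, which is exactly why the three complexity cases are distinguished.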

Thus, the main factors involved in analyzing the time complexity are the algorithm itself, the parallel computer model, and the specific set of inputs. Mostly, the time complexity of an algorithm is a function of the size of the input. The generic notation for describing the time complexity of an algorithm is discussed in the subsequent sections.

Asymptotic Notations

These notations are used for analyzing the growth of functions. Assume we have two functions f(n) and g(n) defined on the positive real numbers:

i)  Theta (Θ) Notation: The set Θ(g(n)) consists of all functions f(n) for which there exist positive constants c1 and c2 such that f(n) is sandwiched between c1*g(n) and c2*g(n) for sufficiently large values of n. In other words,

                           Θ(g(n)) = { f(n) : 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }

ii) Big O Notation: The set O(g(n)) consists of all functions f(n) for which there exists a positive constant c such that for sufficiently large values of n, we have 0 <= f(n) <= c*g(n). In other words,

                                 O(g(n)) = { f(n) : 0 <= f(n) <= c*g(n) for all n >= n0 }

iii) Omega (Ω) Notation: The function f(n) belongs to the set Ω(g(n)) if there exists a positive constant c such that for sufficiently large values of n, we have 0 <= c*g(n) <= f(n). In other words,

                          Ω(g(n)) = { f(n) : 0 <= c*g(n) <= f(n) for all n >= n0 }.

Assume we have a function f(n) = 4n^2 + n; then the order of the function is O(n^2). The asymptotic notations give information about the lower and upper bounds on the complexity of an algorithm with the help of the Ω and O notations. For example, for comparison-based sorting the lower bound is Ω(n log n) and the upper bound is O(n log n). However, for a problem like matrix multiplication the known complexities range from O(n^3) down to O(n^2.38). Algorithms whose upper and lower bounds match are called optimal algorithms. Thus, some sorting algorithms are optimal, while the matrix multiplication algorithms are not.
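The claim that f(n) = 4n^2 + n is Θ(n^2) can be checked numerically. The constants c1 = 4, c2 = 5 and n0 = 1 below are one valid choice (an assumption made for this sketch), not the only one:

```python
# Check empirically that f(n) = 4n^2 + n is Theta(n^2):
# with c1 = 4, c2 = 5 the sandwich c1*g(n) <= f(n) <= c2*g(n)
# holds for all n >= n0 = 1, since n <= n^2 once n >= 1.

def f(n):
    return 4 * n * n + n

def g(n):
    return n * n

c1, c2, n0 = 4, 5, 1
bounded = all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 1000))
```

Any pair of constants with c1 <= 4 and c2 >= 5 (adjusting n0 as needed) witnesses the same bound; the definition only requires that some such constants exist.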

Another technique for determining the performance of a parallel algorithm is to calculate a parameter called "speedup". Speedup is defined as the ratio of the worst case running time of the fastest known sequential algorithm to the worst case running time of the parallel algorithm. In other words, speedup measures the performance improvement of the parallel algorithm in comparison with the sequential algorithm.

 Speedup = Worst case running time of Sequential Algorithm / Worst case running time of Parallel Algorithm
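A minimal sketch of the speedup calculation; the step counts in the example are invented for illustration and do not come from the text:

```python
def speedup(t_seq, t_par):
    """Speedup = worst-case sequential running time / worst-case
    parallel running time (both in the same time units)."""
    return t_seq / t_par

# Example (illustrative figures): a sequential algorithm taking 664
# steps versus a parallel algorithm finishing in 100 steps.
s = speedup(664, 100)   # speedup of 6.64
```

A speedup close to the number of processors used indicates that the parallel algorithm scales well; a speedup near 1 means parallelism brought little benefit.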

 Number of Processors

One of the other features that assists in the analysis of parallel algorithms is the total number of processors required to deliver a solution to a given problem. For a given input of size n, the number of processors required by the parallel algorithm is a function of n, usually denoted by P(n).

Overall Cost

Lastly, the total cost of the algorithm is the product of the total number of processors required for the computation and the time complexity of the parallel algorithm.

Cost = Time Complexity * Total Number of Processors

Equivalently, the cost can be defined as the total number of steps executed collectively by the n processors, i.e., the summation of the steps. Another term associated with the analysis of parallel algorithms is the efficiency of the algorithm. It is defined as the ratio of the worst case running time of the best sequential algorithm to the cost of the parallel algorithm. The efficiency is at most 1: if it were greater than 1, the parallel algorithm simulated step by step on a single processor would run faster than the best sequential algorithm, which is a contradiction.

Efficiency = Worst case running time of Sequential Algorithm / Cost of Parallel Algorithm
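The cost and efficiency definitions can be sketched the same way; the timing figures below are again illustrative assumptions:

```python
def cost(t_par, processors):
    """Cost = parallel time complexity * total number of processors."""
    return t_par * processors

def efficiency(t_seq, t_par, processors):
    """Efficiency = sequential worst-case time / cost of the parallel
    algorithm; at most 1 for a well-posed comparison."""
    return t_seq / cost(t_par, processors)

# Example (illustrative figures): sequential time 1000, parallel
# time 150 on 8 processors -> cost 1200, efficiency about 0.83.
e = efficiency(1000, 150, 8)
```

An efficiency well below 1 signals that processor time is being wasted, typically on communication or idle waiting; an efficiency of 1 would mean the parallel algorithm is cost-optimal.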

Posted Date: 3/2/2013 6:25:21 AM | Location : United States






