Analysis of Parallel Algorithms

A generic algorithm is usually analyzed on the basis of two parameters: time complexity (execution time) and space complexity (the amount of memory required). Time complexity is generally given more weight than space complexity. This section explains how the complexity of parallel algorithms is analyzed. The fundamental parameters needed for the analysis of parallel algorithms are as follows:

  • Time Complexity
  • The Total Number of Processors Required
  • The Cost Involved.

Time Complexity

Most people who implement algorithms want to know how much of a particular resource (such as time or storage) a given algorithm requires. Parallel architectures have been designed to improve the computation power available to a wide variety of algorithms. Therefore, the major concern in evaluating a parallel algorithm is determining the amount of time it takes to execute. In general, time complexity is calculated on the basis of the total number of steps executed to produce the desired output.

Parallel algorithms usually split the problem into symmetrical or asymmetrical sub-problems, distribute them to several processors, and finally combine the partial results at one end. The resources consumed by a parallel algorithm are therefore both the processor cycles spent on every processor and the communication overhead among the processors.

In the computation step, each processor performs local arithmetic and logic operations. In the communication step, the processors exchange data and/or messages with one another. Therefore, the time complexity can be calculated on the basis of both the computation cost and the communication cost involved.
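
As a rough illustration of this split, the sketch below models the running time of one parallel step as the sum of a computation term and a communication term. The function name and the cost model (a fixed message latency plus a per-word transfer time) are illustrative assumptions, not part of the original text.

    # Minimal sketch of a computation + communication cost model (assumed, for illustration).
    def parallel_step_time(local_ops, words_sent, t_op=1e-9, latency=1e-6, t_word=1e-8):
        """Estimated time of one parallel step.

        local_ops  -- arithmetic/logic operations executed on the local processor
        words_sent -- data words exchanged with other processors
        t_op, latency, t_word -- assumed machine constants
        """
        computation_cost = local_ops * t_op
        communication_cost = latency + words_sent * t_word
        return computation_cost + communication_cost

    # Example: 10,000 local operations followed by sending 256 words.
    print(parallel_step_time(10_000, 256))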

The time complexity of an algorithm differs depending upon the input instance of a given problem. For example, a list that is already sorted (10, 17, 19, 21, 22, 33) will take less time to sort than the same list in reverse order (33, 22, 21, 19, 17, 10). The time complexity of an algorithm is classified into three forms:

i)       Best Case Complexity;

ii)      Average Case Complexity;

iii)     Worst Case Complexity.

The best case complexity is the minimum amount of time required by the algorithm for an input of a given size. The average case complexity is the average running time required by the algorithm over inputs of a given size. Likewise, the worst case complexity is the maximum amount of time required by the algorithm for an input of a given size.
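
To make the distinction concrete, the sketch below counts the steps taken by a simple insertion sort on the sorted and reverse-sorted lists mentioned above; the step-counting instrumentation is an illustrative assumption, not part of the original text.

    # Insertion sort instrumented to count comparison steps (illustrative sketch).
    def insertion_sort_steps(items):
        data = list(items)
        steps = 0
        for i in range(1, len(data)):
            key = data[i]
            j = i - 1
            while j >= 0:
                steps += 1                     # one comparison counts as one step
                if data[j] > key:
                    data[j + 1] = data[j]      # shift the larger element to the right
                    j -= 1
                else:
                    break
            data[j + 1] = key
        return steps

    print(insertion_sort_steps([10, 17, 19, 21, 22, 33]))  # best case: 5 steps
    print(insertion_sort_steps([33, 22, 21, 19, 17, 10]))  # worst case: 15 steps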

Thus, the main factors involved in analyzing the time complexity are the algorithm itself, the parallel computer model, and the specific set of inputs. Usually, the time complexity of an algorithm is expressed as a function of the size of the input. The generic notation used to describe the time complexity of any algorithm is discussed in the following sections.

Asymptotic Notations

These notations are used for analyzing the growth of functions. Assume we have two functions f(n) and g(n) defined on the real numbers.

i)  Theta Θ Notation: The set Θ(g(n)) consists of all functions f(n) for which there exist positive constants c1 and c2 such that f(n) is sandwiched between c1*g(n) and c2*g(n) for all sufficiently large values of n. In other words,

                           Θ(g(n)) = { f(n) : 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0 }

ii) Big O Notation: The set O(g(n)) consists of all functions f(n) for which there exists a positive constant c such that for all sufficiently large values of n we have 0 <= f(n) <= c*g(n). In other words,

                                 O(g(n)) = { f(n) : 0 <= f(n) <= c*g(n) for all n >= n0 }

iii)  Omega Ω Notation: The function f(n) belongs to the set Ω(g(n)) if there exists a positive constant c such that for all sufficiently large values of n we have 0 <= c*g(n) <= f(n). In other words,

                          Ω(g(n)) = { f(n) : 0 <= c*g(n) <= f(n) for all n >= n0 }.

Assume we have a function f(n) = 4n^2 + n; then the order of the function is O(n^2). The asymptotic notations give information about the lower and upper bounds on the complexity of an algorithm with the help of the Ω and O notations. For example, for comparison-based sorting the lower bound is Ω(n log n) and the upper bound is O(n log n). However, problems like matrix multiplication have complexities ranging from O(n^3) down to about O(n^2.38). Algorithms whose upper and lower bounds match are called optimal algorithms. Thus, some sorting algorithms are optimal, while matrix multiplication based algorithms are not.
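
The following short check illustrates the Big O claim above: with the (assumed) witness constants c = 5 and n0 = 1, the inequality 4n^2 + n <= c*n^2 holds for every tested n, so f(n) = 4n^2 + n is in O(n^2).

    # Numeric sanity check that f(n) = 4n^2 + n <= 5 * n^2 for all n >= 1 (illustrative constants).
    def f(n):
        return 4 * n**2 + n

    c, n0 = 5, 1          # assumed witness constants for the Big O definition
    assert all(f(n) <= c * n**2 for n in range(n0, 10_000))
    print("f(n) = 4n^2 + n is O(n^2) with c =", c, "and n0 =", n0)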

Another technique for determining the performance of a parallel algorithm is to calculate a parameter called "speedup". Speedup is defined as the ratio of the worst case running time of the fastest known sequential algorithm to the worst case running time of the parallel algorithm. Essentially, speedup measures the performance improvement of the parallel algorithm in comparison to the sequential algorithm.

 Speedup = Worst case running time of Sequential Algorithm / Worst case running time of Parallel Algorithm
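
A minimal sketch of computing speedup from measured running times follows; the example timings are made-up numbers for illustration only.

    # Speedup = sequential worst case time / parallel worst case time (illustrative values).
    def speedup(sequential_time, parallel_time):
        return sequential_time / parallel_time

    t_seq = 12.0   # assumed worst case time of the fastest known sequential algorithm (seconds)
    t_par = 2.0    # assumed worst case time of the parallel algorithm (seconds)
    print("Speedup =", speedup(t_seq, t_par))   # -> 6.0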

Number of Processors

Another feature that assists in the analysis of parallel algorithms is the total number of processors required to deliver a solution to a given problem. For a given input of size n, the number of processors required by the parallel algorithm is a function of n, usually denoted by P(n).
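
For example, a common parallel summation scheme, a binary tree reduction (used here only as an assumed illustration), needs about n/2 processors and about log2(n) steps for an input of size n:

    import math

    # Assumed example: binary tree reduction (parallel sum) over n input elements.
    def processors_required(n):
        return n // 2                        # P(n): one processor per pair of elements at the first level

    def parallel_steps(n):
        return math.ceil(math.log2(n))       # number of reduction levels

    n = 1024
    print("P(n) =", processors_required(n))   # -> 512
    print("steps =", parallel_steps(n))       # -> 10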

Overall Cost

Finally, the total cost of the algorithm is the product of the total number of processors required for the computation and the time complexity of the parallel algorithm.

Cost = Time Complexity * Total Number of Processors

Another way of defining the cost is as the total number of steps executed collectively by all the processors, i.e., the summation of the steps performed by each processor. One more term associated with the analysis of parallel algorithms is the efficiency of the algorithm. It is defined as the ratio of the worst case running time of the best sequential algorithm to the cost of the parallel algorithm. The efficiency should normally be less than or equal to 1; an efficiency greater than 1 would mean that the parallel algorithm performs less total work than the best sequential algorithm, which is not possible.

Efficiency = Worst case running time of Sequential Algorithm/Cost of Parallel Algorithm
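
Putting cost and efficiency together, the sketch below computes both quantities from an assumed processor count and assumed running times; the numbers are illustrative only.

    # Cost = parallel time * number of processors; Efficiency = sequential time / cost (illustrative).
    def cost(parallel_time, num_processors):
        return parallel_time * num_processors

    def efficiency(sequential_time, parallel_time, num_processors):
        return sequential_time / cost(parallel_time, num_processors)

    t_seq = 12.0    # assumed worst case time of the best sequential algorithm
    t_par = 2.0     # assumed worst case time of the parallel algorithm
    p = 8           # assumed number of processors

    print("Cost =", cost(t_par, p))                        # -> 16.0
    print("Efficiency =", efficiency(t_seq, t_par, p))     # -> 0.75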

