Introduction To Parallel Programming

After achieving enormous breakthroughs in serial programming and running into its limitations, academics and computer professionals are now focusing on parallel programming. Parallel programming is a popular choice today for solving complex problems on multi-processor architectures. If developments in the last decade are any indication, the future belongs to parallel computing. Parallel programming is intended to take advantage of non-local resources in order to save cost and time and to overcome memory constraints.

In this section, we introduce parallel programming and its classifications. We discuss some of the high-level languages used for parallel programming, as well as certain compiler-directive based packages that can be used alongside them, and we look at these in detail.
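As a flavour of the directive-based approach, here is a minimal C sketch (an illustrative assumption, using OpenMP as one such widely available compiler-directive package). The pragma is only a hint to the compiler; built without OpenMP support, the same code simply runs serially, which is the main appeal of this style.

```c
/* Sketch of a compiler-directive based package (OpenMP assumed) used
 * with a high-level language (C).  Compile with: gcc -fopenmp hello_omp.c */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* The directive asks the compiler to run the block on a team of
     * threads; if the compiler ignores it, the block runs serially.  */
    #pragma omp parallel
    {
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}
```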

Traditionally, software has been written for serial computation: programs are written for computers with a single Central Processing Unit (CPU), and a problem is solved by a series of instructions executed one after another, one at a time, by that CPU. However, modelling many complex, interrelated events that happen simultaneously, such as galactic orbital and planetary motion, weather and ocean patterns, and tectonic plate drift, would require extremely complex serial software. To solve such large problems and reduce computation time, a new programming paradigm called parallel programming was introduced.
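To make the serial model concrete, here is a small, hypothetical C fragment: the work is expressed as a single stream of instructions that one CPU executes one at a time. The same computation is decomposed across tasks in the sketch that follows the next paragraph.

```c
/* Serial baseline (hypothetical example): one CPU, one instruction stream. */
#include <stdio.h>

#define N 1000000

int main(void)
{
    double sum = 0.0;

    /* Each iteration is executed one after the other by a single CPU. */
    for (long i = 0; i < N; i++)
        sum += (double)i * (double)i;   /* some per-element work */

    printf("serial sum = %.0f\n", sum);
    return 0;
}
```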

To develop a parallel program, we must first decide whether the problem has any part that can be parallelized. Some problems, such as generating the Fibonacci sequence (where each term depends on the previous two), offer little scope for parallelization. Once it has been determined that some segment of the problem can be parallelized, we break the problem into discrete chunks of work that can be distributed to multiple tasks. This partitioning may be data-centric or function-centric: in the former case, the tasks perform similar work on different subsets of the data, while in the latter each task performs a distinct portion of the overall work. Depending on the partitioning approach, the tasks need to communicate with one another, so we must also design mechanisms for synchronization and choose a mode of communication.
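As a sketch of data-centric partitioning (an illustrative example, not part of the original text), the following C program splits the summation loop from the serial fragment above across POSIX threads. Each task works on its own chunk of the index range, and pthread_join provides the synchronization point at which the partial results are combined.

```c
/* Data decomposition sketch (assumed: POSIX threads available).
 * Compile with:  gcc data_decomp.c -o data_decomp -lpthread */
#include <stdio.h>
#include <pthread.h>

#define N        1000000
#define NTASKS   4

struct chunk {
    long   start, end;   /* half-open index range assigned to this task */
    double partial;      /* partial result produced by this task        */
};

/* Each task performs the same work on a different subset of the data. */
static void *worker(void *arg)
{
    struct chunk *c = arg;
    c->partial = 0.0;
    for (long i = c->start; i < c->end; i++)
        c->partial += (double)i * (double)i;
    return NULL;
}

int main(void)
{
    pthread_t    tid[NTASKS];
    struct chunk chunks[NTASKS];
    long         per_task = N / NTASKS;
    double       sum = 0.0;

    /* Partition the problem into discrete chunks of work. */
    for (int t = 0; t < NTASKS; t++) {
        chunks[t].start = t * per_task;
        chunks[t].end   = (t == NTASKS - 1) ? N : (t + 1) * per_task;
        pthread_create(&tid[t], NULL, worker, &chunks[t]);
    }

    /* Synchronization: wait for every task, then combine the results. */
    for (int t = 0; t < NTASKS; t++) {
        pthread_join(tid[t], NULL);
        sum += chunks[t].partial;
    }

    printf("parallel sum = %.0f\n", sum);
    return 0;
}
```

With a function-centric decomposition the same skeleton would apply, but each worker would run a different function rather than the same loop over a different index range.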
