Present your own fully documented and tested programming example illustrating the problem of unbalanced loads. Describe the use of OpenMP's scheduler as a means of mitigating this problem.
The example below shows a number of tasks that all update a global counter. Since the threads share the same memory space, they all see and update the same memory location. The code will often still print the correct result, but only by accident: updating the variable is much quicker than creating a thread, so the updates rarely overlap. On a multicore processor the chance of errors increases greatly, and if we artificially lengthen the update the wrong result becomes almost certain: every thread reads the value of sum, waits a while (presumably computing something) and only then writes back the incremented value, so most of the increments are lost.
#include <stdio.h>
#include <unistd.h>
#include "pthread.h"

int sum=0;

/* Each thread reads the shared counter, waits, then writes back the
   incremented value. The read-wait-write sequence is not atomic, so
   concurrent threads can overwrite each other's updates. */
void *adder(void *arg) {
  int t = sum; sleep(1); sum = t+1;
  return NULL;
}

#define NTHREADS 50

int main() {
  int i;
  pthread_t threads[NTHREADS];
  printf("forking\n");
  for (i=0; i<NTHREADS; i++)
    if (pthread_create(threads+i,NULL,&adder,NULL)!=0) return i+1;
  printf("joining\n");
  for (i=0; i<NTHREADS; i++)
    if (pthread_join(threads[i],NULL)!=0) return NTHREADS+i+1;
  printf("Sum computed: %d\n",sum);
  return 0;
}
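The lost updates can be prevented by serializing the read-modify-write sequence. The sketch below is my own illustration using the standard pthread mutex API, not part of the assignment text: it guards the update with pthread_mutex_lock/pthread_mutex_unlock so that only one thread at a time touches sum.

#include <stdio.h>
#include <unistd.h>
#include "pthread.h"

int sum=0;
pthread_mutex_t sum_lock = PTHREAD_MUTEX_INITIALIZER;

/* Only one thread at a time may execute the read-wait-write sequence,
   so no increment is lost and the program always prints 50. */
void *adder(void *arg) {
  pthread_mutex_lock(&sum_lock);
  int t = sum; sleep(1); sum = t+1;
  pthread_mutex_unlock(&sum_lock);
  return NULL;
}

#define NTHREADS 50

int main() {
  int i;
  pthread_t threads[NTHREADS];
  for (i=0; i<NTHREADS; i++)
    if (pthread_create(threads+i,NULL,&adder,NULL)!=0) return i+1;
  for (i=0; i<NTHREADS; i++)
    if (pthread_join(threads[i],NULL)!=0) return NTHREADS+i+1;
  printf("Sum computed: %d\n",sum);
  return 0;
}

Note that the mutex also serializes the sleep, so correctness here is bought at the price of running the updates one after another.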
What OpenMP adds is the parallel loop: all iterations can be executed independently and in any order, and the #pragma compiler directive conveys this fact to the compiler. Sequential code can be parallelized easily this way; the same counter update written as an OpenMP parallel loop looks as follows.
#include <stdio.h>
#include <unistd.h>
#include <omp.h>

#define NTASKS 50
int sum=0;

int main() {
  int i;
  printf("forking\n");
  /* The pragma tells the compiler the iterations are independent;
     OpenMP creates the threads and divides the iterations among them. */
#pragma omp parallel for
  for (i=0; i<NTASKS; i++) {
    int t = sum; sleep(1); sum = t+1;   /* still an unprotected shared update */
  }
  printf("Sum computed: %d\n",sum);
  return 0;
}
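The question also asks how OpenMP's scheduler mitigates unbalanced loads. When iterations take very different amounts of time, the default static division of iterations over threads leaves some threads idle while others are still working; the schedule clause lets the runtime hand out iterations dynamically instead. The sketch below uses an invented workload (my own illustration, not from the assignment text) in which iteration i sleeps roughly i milliseconds, and it also adds a reduction clause so the shared counter is updated correctly.

#include <stdio.h>
#include <unistd.h>
#include <omp.h>

#define NTASKS 50

int main() {
  int i, sum = 0;
  /* schedule(dynamic,1): each thread grabs one iteration at a time, so a
     thread that drew a long iteration does not hold up the others.
     reduction(+:sum): every thread accumulates into a private copy of sum,
     and the copies are added together at the end of the loop, so there is
     no race on the counter. */
#pragma omp parallel for schedule(dynamic,1) reduction(+:sum)
  for (i=0; i<NTASKS; i++) {
    usleep(1000*(i+1));   /* artificial, unequal amount of work per iteration */
    sum = sum+1;
  }
  printf("Sum computed: %d\n",sum);
  return 0;
}

With schedule(static) the thread that receives the last chunk of iterations (the slowest ones in this example) finishes long after the others; schedule(dynamic) or schedule(guided) keeps all threads busy until the work runs out, which is exactly the load-balancing role of OpenMP's scheduler.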