Present your own fully documented and tested programming example illustrating the problem of unbalanced loads. Describe the use of OpenMP's scheduler as a means of mitigating this problem.
The example below shows a number of tasks that all update a global counter. Since threads share the same memory space, they all see and update the same memory location. With a plain increment the code usually happens to return the correct result, simply because updating the variable is much quicker than creating a thread; on a multicore processor, however, the chance of errors increases greatly. If we artificially increase the time taken by the update, we no longer get the right result: every thread reads out the value of sum, waits a while (presumably computing something), and only then writes the updated value back.
#include <stdio.h>
#include <unistd.h>
#include "pthread.h"

int sum = 0;                              /* global counter shared by all threads */

void *adder(void *arg) {
    /* read the shared counter, wait a while (simulating work), then write back */
    int t = sum;
    sleep(1);
    sum = t + 1;
    return NULL;
}

#define NTHREADS 50

int main() {
    int i;
    pthread_t threads[NTHREADS];

    printf("forking\n");
    for (i = 0; i < NTHREADS; i++)
        if (pthread_create(threads + i, NULL, &adder, NULL) != 0)
            return i + 1;

    printf("joining\n");
    for (i = 0; i < NTHREADS; i++)
        if (pthread_join(threads[i], NULL) != 0)
            return NTHREADS + i + 1;

    printf("Sum computed: %d\n", sum);
    return 0;
}
The OpenMP counterpart is the parallel loop: all iterations can be executed independently and in any order, and the pragma (a CPP directive) conveys this fact to the compiler. Sequential code can be parallelized this way with very little change, as in the variant below.
#include <omp.h>       /* in addition to the headers above */

/* The globals, adder() and the joining loop are identical to the previous
   listing; only the thread-creation loop changes. Returning from inside an
   OpenMP loop body is not allowed, so errors are only reported here. */

    printf("forking\n");
#pragma omp parallel for
    for (i = 0; i < NTHREADS; i++) {
        if (pthread_create(threads + i, NULL, &adder, NULL) != 0)
            fprintf(stderr, "could not create thread %d\n", i);
    }
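The question also asks about OpenMP's scheduler as a remedy for unbalanced loads, which the answer above does not illustrate. The following is only a minimal sketch: the loop body's cost grows with i (an artificial, assumed workload chosen here for illustration), and the schedule(dynamic,1) clause lets an idle thread grab the next iteration on demand instead of receiving a fixed contiguous block of iterations up front.

#include <stdio.h>
#include <omp.h>

int main(void) {
    long total = 0;
    int i;
    /* dynamic scheduling, chunk size 1: each thread takes the next iteration
       as soon as it finishes its current one, so slow and fast iterations
       get balanced across threads automatically */
#pragma omp parallel for schedule(dynamic,1) reduction(+:total)
    for (i = 0; i < 100; i++) {
        long work = 0, j;
        long n = (long)i * i * 1000;   /* later iterations do far more work */
        for (j = 0; j < n; j++)
            work += j % 7;
        total += work;
    }
    printf("total = %ld\n", total);
    return 0;
}

With the default schedule(static), each thread gets a contiguous block of iterations, so the thread holding the last block ends up doing most of the work; schedule(dynamic) or schedule(guided) redistributes iterations at run time at the cost of a little scheduling overhead. Compile with, for example, gcc -fopenmp.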