Present your own fully documented and tested programming example illustrating the problem of unbalanced loads. Describe the use of OpenMP's scheduler as a means of mitigating this problem.
The example below shows a number of threads that all update a global counter. Since threads share the same memory space, they see and update the same memory location. In its plain form the code usually returns the right result, because updating the variable is much quicker than creating a thread, so the updates happen one after another; on a multicore processor, however, the chance of errors greatly increases. If we artificially increase the time for the update, as the sleep call below does, we no longer get the right result: all threads read out the value of sum, wait a while (presumably calculating something), and only then write back the incremented value, so most increments are lost.
#include <stdio.h>
#include <unistd.h>
#include "pthread.h"

int sum=0;                       /* global counter shared by all threads */

void *adder(void *arg) {
  (void)arg;                     /* unused */
  /* non-atomic read-modify-write; the sleep makes the race visible */
  int t = sum; sleep(1); sum = t+1;
  return NULL;
}

#define NTHREADS 50

int main() {
  int i;
  pthread_t threads[NTHREADS];
  printf("forking\n");
  for (i=0; i<NTHREADS; i++)
    if (pthread_create(threads+i,NULL,adder,NULL)!=0) return i+1;
  printf("joining\n");
  for (i=0; i<NTHREADS; i++)
    if (pthread_join(threads[i],NULL)!=0) return NTHREADS+i+1;
  printf("Sum computed: %d\n",sum);  /* almost always far less than NTHREADS */
  return 0;
}
The key OpenMP construct here is the parallel loop: all iterations can be executed independently and in any order, and the pragma preprocessor directive conveys this fact to the compiler. Sequential code can easily be parallelized this way, without managing threads by hand as the pthread version must. The equivalent of the program above becomes:
#include <stdio.h>
#include <unistd.h>

int sum=0;

#define NTHREADS 50

int main() {
  int i;
  printf("forking\n");
  /* OpenMP creates the threads and divides the iterations among them;
     the loop variable i is automatically made private to each thread */
  #pragma omp parallel for
  for (i=0; i<NTHREADS; i++) {
    int t = sum; sleep(1); sum = t+1;   /* the same racy update as before */
  }
  printf("Sum computed: %d\n",sum);
  return 0;
}
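The second half of the question concerns OpenMP's scheduler as a remedy for unbalanced loads. By default a parallel loop is divided statically: each thread receives one contiguous block of iterations. If iterations take unequal time, the thread holding the expensive block finishes long after the others, which sit idle. The schedule clause changes how iterations are handed out: schedule(dynamic,chunk) lets each thread grab the next chunk only when it becomes free, and schedule(guided) starts with large chunks that shrink over time. Below is a minimal sketch comparing the two; the work function with its triangular cost pattern, the value N, and the chunk size of 16 are illustrative choices, not part of the original example:

#include <stdio.h>
#include <omp.h>

#define N 1000

/* iteration i does work proportional to i: a deliberately unbalanced load */
double work(int i) {
  double s = 0.0;
  for (int j = 0; j < i*1000; j++) s += 1.0/(j+1);
  return s;
}

int main() {
  double total = 0.0, t0;

  t0 = omp_get_wtime();
  /* static: the last thread gets all the expensive high iterations
     and finishes long after the others */
  #pragma omp parallel for schedule(static) reduction(+:total)
  for (int i = 0; i < N; i++) total += work(i);
  printf("static : %.3f s\n", omp_get_wtime()-t0);

  t0 = omp_get_wtime();
  /* dynamic: idle threads grab the next chunk of 16 iterations,
     so the expensive iterations are spread over all threads */
  #pragma omp parallel for schedule(dynamic,16) reduction(+:total)
  for (int i = 0; i < N; i++) total += work(i);
  printf("dynamic: %.3f s\n", omp_get_wtime()-t0);

  printf("checksum %.1f\n", total);  /* keeps the work from being optimized away */
  return 0;
}

Compiled with OpenMP enabled (for example gcc -fopenmp), the dynamic schedule typically finishes the loop noticeably faster on a multicore machine, because no single thread is stuck with all the expensive iterations.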