
Advanced OpenMP Features

Transcript
Advanced OpenMP Features
Christian Terboven, Dirk Schmidl
IT Center, RWTH Aachen University, Member of the HPC Group

Sudoku

Sudoku for Lazy Computer Scientists
Let's solve Sudoku puzzles with brute multi-core force:
(1) Find an empty field
(2) Insert a number
(3) Check the Sudoku
(4a) If invalid: delete the number, insert the next number
(4b) If valid: go to the next field

Parallel Brute-force Sudoku (1/3)
This parallel algorithm finds all valid solutions. The first call is contained in a #pragma omp parallel / #pragma omp single region, so that one task starts the execution of the algorithm.
(1) Search an empty field
(2) Insert a number: each #pragma omp task needs to work on a new copy of the Sudoku board
(3) Check the Sudoku
(4a) If invalid: delete the number, insert the next number
(4b) If valid: go to the next field
Wait for completion: #pragma omp taskwait waits for all child tasks.

Parallel Brute-force Sudoku (2/3)
The OpenMP parallel region creates a team of threads:

    #pragma omp parallel
    {
        #pragma omp single
        solve_parallel(0, 0, sudoku2, false);
    } // end omp parallel

Single construct: one thread enters the execution of solve_parallel; the other threads wait at the end of the single and are ready to pick up tasks from the work queue.

Syntactic sugar (either you like it or you don't):

    #pragma omp parallel sections
    {
        solve_parallel(0, 0, sudoku2, false);
    } // end omp parallel sections

Parallel Brute-force Sudoku (3/3)
The actual implementation:

    for (int i = 1; i <= sudoku->getfieldsize(); i++) {
        if (!sudoku->check(x, y, i)) {
            #pragma omp task firstprivate(i,x,y,sudoku)
            {
                // create from copy constructor:
                // the task needs to work on a new copy of the Sudoku board
                CSudokuBoard new_sudoku(*sudoku);
                new_sudoku.set(y, x, i);
                if (solve_parallel(x + 1, y, &new_sudoku)) {
                    new_sudoku.printboard();
                }
            } // end omp task
        }
    }
    #pragma omp taskwait // wait for all child tasks

Performance Evaluation
[Figure: runtime in seconds and speedup for a 16x16 Sudoku on a 2x Intel Xeon system over the number of threads; series: Intel C++ compiler, scatter binding.] Is this the best we can do?

Performance Analysis
Event-based profiling gives a good overview; tracing gives more details: every thread is executing ~1.3m tasks in ~5.7 seconds, i.e., the average duration of a task is ~4.4 μs. Tasks get much smaller down the call stack: at call level 6 a task still runs for about 0.16 sec, at call level 82 for only about 2.2 μs.

Performance and Scalability Tuning
Idea: if you have created sufficiently many tasks to make your cores busy, stop creating more tasks!
- if-clause
- final-clause, mergeable-clause
- natively in your program code
Example: stop the recursion at a certain depth (a sketch follows below).

Performance Evaluation (with cutoff)
[Figure: the same runtime and speedup measurement, with additional series for the version with cutoff (Intel C++ compiler, scatter binding, cutoff).]
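A minimal, self-contained sketch of this cutoff idea, using the classic Fibonacci task recursion rather than the Sudoku code (hypothetical example, not from the slides; the names CUTOFF and depth are assumptions). Below the cutoff depth, the if-clause makes every new task undeferred, so the encountering thread executes it immediately and no further deferred tasks are created:

    #include <stdio.h>

    #define CUTOFF 10   /* assumption: deep enough to keep all cores busy */

    long fib(long n, int depth)
    {
        if (n < 2) return n;
        long x, y;
        #pragma omp task shared(x) if(depth < CUTOFF)
        x = fib(n - 1, depth + 1);
        #pragma omp task shared(y) if(depth < CUTOFF)
        y = fib(n - 2, depth + 1);
        #pragma omp taskwait   /* wait for the two child tasks */
        return x + y;
    }

    int main(void)
    {
        long result;
        #pragma omp parallel
        #pragma omp single
        result = fib(35, 0);   /* one thread starts the recursion */
        printf("fib(35) = %ld\n", result);
        return 0;
    }

The final(expr) clause mentioned above goes one step further: in a final task, all descendant tasks are also executed immediately, so the check is no longer needed at every recursion level.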
Scheduling and Dependencies

Tasks in OpenMP: Scheduling
Default: tasks are tied to the thread that first executes them, which is not necessarily the creator. Scheduling constraints:
- Only the thread a task is tied to can execute it.
- A task can only be suspended at task scheduling points: task creation, task finish, taskwait, barrier, taskyield.
- If a task is not suspended in a barrier, the executing thread can only switch to a direct descendant of all tasks tied to the thread.
Tasks created with the untied clause are never tied:
- They resume at task scheduling points, possibly executed by a different thread.
- There are no scheduling restrictions, e.g., they can be suspended at any point.
- But: more freedom for the implementation, e.g., for load balancing.

Unsafe Use of untied Tasks
Problem: because untied tasks may migrate between threads at any point, thread-centric constructs can yield unexpected results. Remember when using untied tasks:
- Avoid threadprivate variables.
- Avoid any use of thread ids (i.e., omp_get_thread_num()).
- Be careful with critical regions and locks.
Simple solution: create a tied task region with #pragma omp task if(0).

The taskyield Directive
The taskyield directive specifies that the current task can be suspended in favor of the execution of a different task. It is a hint to the runtime for optimization and/or deadlock prevention.

    C/C++:   #pragma omp taskyield
    Fortran: !$omp taskyield

taskyield Example

    #include <omp.h>

    void something_useful();
    void something_critical();

    void foo(omp_lock_t *lock, int n)
    {
        for (int i = 0; i < n; i++)
            #pragma omp task
            {
                something_useful();
                while (!omp_test_lock(lock)) {
                    #pragma omp taskyield
                }
                something_critical();
                omp_unset_lock(lock);
            }
    }

The waiting task may be suspended at the taskyield and allow the executing thread to perform other work; this may also avoid deadlock situations.

The taskgroup Construct

    C/C++:   #pragma omp taskgroup
                 ... structured block ...
    Fortran: !$omp taskgroup
                 ... structured block ...
             !$omp end taskgroup

Specifies a wait on the completion of child tasks and their descendant tasks: deeper synchronization than taskwait, but with the option to restrict it to a subset of all tasks, as opposed to a barrier (a sketch follows below).
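A minimal sketch of taskgroup in use (hypothetical example, not from the slides; background_work and produce are assumed helpers). It shows the "subset" property: the task created before the taskgroup region is not waited for at its end:

    #include <stdio.h>

    void background_work(void) { /* long-running, independent work */ }
    void produce(int i)         { /* may itself create child tasks */ }

    void example(void)
    {
        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task          /* created before the taskgroup: */
            background_work();        /* not part of the wait below    */

            #pragma omp taskgroup
            {
                for (int i = 0; i < 8; i++) {
                    #pragma omp task firstprivate(i)
                    produce(i);
                }
            }  /* waits for the 8 tasks and all of their descendants */

            printf("taskgroup done; background task may still run\n");
        }
    }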
The depend Clause

    C/C++: #pragma omp task depend(dependency-type: list)
               ... structured block ...

The task dependence is fulfilled when the predecessor task has completed.
- in dependency-type: the generated task will be a dependent task of all previously generated sibling tasks that reference at least one of the list items in an out or inout clause.
- out and inout dependency-types: the generated task will be a dependent task of all previously generated sibling tasks that reference at least one of the list items in an in, out, or inout clause.
The list items in a depend clause may include array sections (a sketch follows after the Cholesky example below).

Concurrent Execution with Dependencies (1/3)
Note: the variables in the depend clause do not necessarily have to indicate the data flow. T1 has to be completed before T2 and T3 can be executed; T2 and T3 can be executed in parallel. Degree of parallelism exploitable in this concrete example: T2 and T3 (2 tasks); T1 of the next iteration has to wait for them.

    void process_in_parallel() {
        #pragma omp parallel
        #pragma omp single
        {
            int x = 1;
            ...
            for (int i = 0; i < T; ++i) {
                #pragma omp task shared(x, ...) depend(out: x)   // T1
                preprocess_some_data(...);
                #pragma omp task shared(x, ...) depend(in: x)    // T2
                do_something_with_data(...);
                #pragma omp task shared(x, ...) depend(in: x)    // T3
                do_something_independent_with_data(...);
            }
        } // end omp single, omp parallel
    }

Concurrent Execution with Dependencies (2/3)
The following code allows for more parallelism, as there is one i per thread; thus, two tasks may be active per thread.

    void process_in_parallel() {
        #pragma omp parallel
        {
            #pragma omp for
            for (int i = 0; i < T; ++i) {
                #pragma omp task depend(out: i)
                preprocess_some_data(...);
                #pragma omp task depend(in: i)
                do_something_with_data(...);
                #pragma omp task depend(in: i)
                do_something_independent_with_data(...);
            }
        } // end omp parallel
    }

Concurrent Execution with Dependencies (3/3)
The following allows for even more parallelism, as there can now be two tasks active per thread per i-th iteration.

    void process_in_parallel() {
        #pragma omp parallel
        #pragma omp single
        {
            for (int i = 0; i < T; ++i) {
                #pragma omp task firstprivate(i)
                {
                    #pragma omp task depend(out: i)
                    preprocess_some_data(...);
                    #pragma omp task depend(in: i)
                    do_something_with_data(...);
                    #pragma omp task depend(in: i)
                    do_something_independent_with_data(...);
                } // end omp task
            }
        } // end omp single, end omp parallel
    }

Real Task Dependencies
Blocked Cholesky factorization with task dependences (* image from BSC):

    void blocked_cholesky(int NB, float A[NB][NB]) {
        int i, j, k;
        for (k = 0; k < NB; k++) {
            #pragma omp task depend(inout: A[k][k])
            spotrf(A[k][k]);
            for (i = k + 1; i < NB; i++)
                #pragma omp task depend(in: A[k][k]) depend(inout: A[k][i])
                strsm(A[k][k], A[k][i]);
            // update trailing submatrix
            for (i = k + 1; i < NB; i++) {
                for (j = k + 1; j < i; j++)
                    #pragma omp task depend(in: A[k][i], A[k][j]) \
                                     depend(inout: A[j][i])
                    sgemm(A[k][i], A[k][j], A[j][i]);
                #pragma omp task depend(in: A[k][i]) depend(inout: A[i][i])
                ssyrk(A[k][i], A[i][i]);
            }
        }
    }
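The slide does not show how the factorization is started. A sketch of a plausible driver (hypothetical, with the matrix setup elided): task dependences only order sibling tasks, so all tasks must be generated by a single thread, exactly as in the Sudoku example at the beginning:

    #define N 8   /* number of blocks per dimension (assumption) */

    int main(void)
    {
        float A[N][N];
        /* ... fill A with a symmetric positive definite matrix ... */
        #pragma omp parallel
        #pragma omp single
        blocked_cholesky(N, A);   /* the implicit barrier at the end of the
                                     single region waits for all tasks */
        return 0;
    }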
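The depend items above are single array elements; as noted on the depend-clause slide, the list items may also be array sections. A minimal sketch (hypothetical example, not from the slides), with two producer tasks writing disjoint halves and one consumer depending on both:

    #include <stdio.h>

    #define N 100
    static double a[N];

    void fill(double *block, int n)
    {
        for (int i = 0; i < n; i++)
            block[i] = 1.0;
    }

    void array_section_example(void)
    {
        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task depend(out: a[0:N/2])     /* first half  */
            fill(&a[0], N/2);

            #pragma omp task depend(out: a[N/2:N/2])   /* second half */
            fill(&a[N/2], N/2);

            /* runs only after both producers have completed */
            #pragma omp task depend(in: a[0:N/2], a[N/2:N/2])
            printf("%f %f\n", a[0], a[N/2]);
        }
    }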
Real Task Dependencies (cont.)
Jack Dongarra on OpenMP task dependencies (shown alongside the same blocked Cholesky code):

"[...] The appearance of DAG scheduling constructs in the OpenMP 4.0 standard offers a particularly important example of this point. Until now, libraries like PLASMA had to rely on custom built task schedulers; [...] However, the inclusion of DAG scheduling constructs in the OpenMP standard, along with the rapid implementation of support for them (with excellent multithreading performance) in the GNU compiler suite, throws open the doors to widespread adoption of this model in academic and commercial applications for shared memory. We view OpenMP as the natural path forward for the PLASMA library and expect that others will see the same advantages to choosing this alternative."

Full article here: algorithms-and-libraries-at-exascale/

taskloop Construct

Traditional Worksharing
Worksharing constructs do not compose well. Pathological example: parallel dgemm in MKL:

    void example() {
        #pragma omp parallel
        {
            compute_in_parallel(A);
            compute_in_parallel_too(B);
            // dgemm is either parallel or sequential,
            // but has no orphaned worksharing
            cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                        m, n, k, alpha, A, k, B, n, beta, C, n);
        }
    }

Writing such code either oversubscribes the system, yields bad performance due to OpenMP overheads, or needs a lot of glue code to use the sequential dgemm only for submatrices.

Ragged Fork/Join
Traditional worksharing can lead to ragged fork/join patterns:

    void example() {
        compute_in_parallel(A);
        compute_in_parallel_too(B);
        cblas_dgemm(..., A, B, ...);
    }

Example: Sparse CG

    for (iter = 0; iter < sc->maxiter; iter++) {
        precon(A, r, z);
        vectordot(r, z, n, &rho);
        beta = rho / rho_old;
        xpay(z, beta, n, p);
        matvec(A, p, q);
        vectordot(p, q, n, &dot_pq);
        alpha = rho / dot_pq;
        axpy(alpha, p, n, x);
        axpy(-alpha, q, n, r);
        sc->residual = sqrt(rho) * bnrm2;
        if (sc->residual <= sc->tolerance)
            break;
        rho_old = rho;
    }

    void matvec(matrix *A, double *x, double *y) {
        // ...
        #pragma omp parallel for \
                private(i,j,is,ie,j0,y0) \
                schedule(static)
        for (i = 0; i < A->n; i++) {
            y0 = 0;
            is = A->ptr[i];
            ie = A->ptr[i + 1];
            for (j = is; j < ie; j++) {
                j0 = index[j];
                y0 += value[j] * x[j0];
            }
            y[i] = y0;
        }
        // ...
    }

The taskloop Construct
Parallelize a loop using OpenMP tasks: cut the loop into chunks and create a task for each loop chunk.

    Syntax (C/C++):
        #pragma omp taskloop [simd] [clause[[,] clause], ...]
        for-loops

    Syntax (Fortran):
        !$omp taskloop [simd] [clause[[,] clause], ...]
        do-loops
        [!$omp end taskloop [simd]]

Clauses for the taskloop Construct
The taskloop construct inherits clauses both from worksharing constructs and from the task construct:
- shared, private
- firstprivate, lastprivate
- default
- collapse
- final, untied, mergeable
- grainsize(grain-size): chunks have at least grain-size and at most 2*grain-size loop iterations
- num_tasks(num-tasks): create num-tasks tasks for the iterations of the loop

Example: Sparse CG with taskloop

    #pragma omp parallel
    #pragma omp single
    for (iter = 0; iter < sc->maxiter; iter++) {
        precon(A, r, z);
        vectordot(r, z, n, &rho);
        beta = rho / rho_old;
        xpay(z, beta, n, p);
        matvec(A, p, q);
        vectordot(p, q, n, &dot_pq);
        alpha = rho / dot_pq;
        axpy(alpha, p, n, x);
        axpy(-alpha, q, n, r);
        sc->residual = sqrt(rho) * bnrm2;
        if (sc->residual <= sc->tolerance)
            break;
        rho_old = rho;
    }

    void matvec(matrix *A, double *x, double *y) {
        // ...
        #pragma omp taskloop private(j,is,ie,j0,y0) \
                grainsize(500)
        for (i = 0; i < A->n; i++) {
            y0 = 0;
            is = A->ptr[i];
            ie = A->ptr[i + 1];
            for (j = is; j < ie; j++) {
                j0 = index[j];
                y0 += value[j] * x[j0];
            }
            y[i] = y0;
        }
        // ...
    }

More Tasking Stuff

priority Clause
The priority is a hint to the runtime system for the task execution order: among all tasks ready to be executed, higher-priority tasks are recommended to execute before lower-priority ones.

    C/C++: #pragma omp task priority(priority-value)
               ... structured block ...

The priority value is a non-negative numerical scalar (default: 0) and is capped at the max-task-priority ICV (environment variable OMP_MAX_TASK_PRIORITY). It is not allowed to rely on the task execution order being determined by this clause! (A sketch follows below.)
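A minimal sketch of the clause (hypothetical example, not from the slides; process and cost are assumed names): jobs with a known cost hint to the runtime that the expensive ones should start first. For the hint to have any effect, OMP_MAX_TASK_PRIORITY must be set to a nonzero value, since priorities above that ICV are clamped:

    void process(int job) { /* do the work for this job */ }

    void run_jobs(int n, const int cost[])   /* cost[i] >= 0 */
    {
        #pragma omp parallel
        #pragma omp single
        for (int i = 0; i < n; i++) {
            /* higher value: recommended to start earlier; it is only a
               hint, correctness must never rely on the resulting order */
            #pragma omp task firstprivate(i) priority(cost[i])
            process(i);
        }
    }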
final Clause
For recursive problems that perform task decomposition, stopping task creation at a certain depth exposes enough parallelism but reduces the overhead.

    C/C++:   #pragma omp task final(expr)
    Fortran: !$omp task final(expr)

Merging the data environment may have side effects:

    void foo(bool arg)
    {
        int i = 3;
        #pragma omp task final(arg) firstprivate(i)
        i++;
        printf("%d\n", i);  // will print 3 or 4 depending on expr
    }

mergeable Clause
If the mergeable clause is present, the implementation might merge the task's data environment if the generated task is undeferred or included:
- undeferred: an if clause is present and evaluates to false
- included: a final clause is present and evaluates to true

    C/C++:   #pragma omp task mergeable
    Fortran: !$omp task mergeable

Personal note: as of today, no compiler or runtime exploits final and/or mergeable so that real-world applications would profit from using them.

Questions?