Parallel programming of shared-memory systems
The counterpart is MPI for distributed-memory systems
OpenMP uses compiler directives embedded in comments
Implemented via thread systems
Language-independent: FORTRAN, C, C++, Java
Fork-join parallelism plus atomic regions,
(practically) no condition variables (cond-wait)
Parallel for, parallel sections, reductions
JOMP: an OpenMP implementation for Java
Edinburgh: Bull, Westhead, Kambites, Obdrzalek
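The fork-join model with a parallel loop and a `+` reduction can be sketched in plain Java threads. This is a hypothetical illustration of the concept, not the JOMP API; the class name `ForkJoinSketch` and method `parallelSum` are made up for this example:

```java
// Sketch: fork-join parallel-for with a '+' reduction in plain Java.
// Hypothetical illustration of the OpenMP model; not the JOMP runtime API.
public class ForkJoinSketch {

    // Sum 0 + 1 + ... + (n-1) using numThreads worker threads.
    public static double parallelSum(int n, int numThreads) {
        final double[] partial = new double[numThreads]; // per-thread partial sums
        Thread[] threads = new Thread[numThreads];
        for (int t = 0; t < numThreads; t++) {
            final int me = t;
            threads[t] = new Thread(() -> {                  // fork
                for (int i = me; i < n; i += numThreads) {   // interleaved loop partition
                    partial[me] += i;                        // no sharing, no locking
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) {                          // join
            try {
                th.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        double sum = 0.0;
        for (double p : partial) sum += p;                   // reduction: combine partials
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(parallelSum(1000, 4)); // prints 499500.0
    }
}
```

Each thread accumulates into its own slot of `partial`, so no synchronization is needed inside the loop; the sequential combine step at the end is exactly what a `reduction(+:sum)` clause automates.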
Format of directives
//omp <directive> <clauses> { Java code block }
Directives
for
sections
und section
single
und master
critical
und barrier
parallel reduction(operation: vars)
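The `critical` and `barrier` directives map naturally onto standard Java primitives: a `synchronized` block and a `java.util.concurrent.CyclicBarrier`. This mapping is a hypothetical sketch for illustration (class and method names are made up; JOMP emits calls into its own runtime instead):

```java
import java.util.concurrent.CyclicBarrier;

// Sketch: what 'critical' and 'barrier' correspond to in plain Java.
// Hypothetical illustration; not generated JOMP code.
public class CriticalBarrierSketch {
    static final Object lock = new Object();
    static int counter = 0;

    // Each of numThreads threads increments the shared counter once.
    public static int run(int numThreads) {
        counter = 0;
        CyclicBarrier barrier = new CyclicBarrier(numThreads);
        Thread[] threads = new Thread[numThreads];
        for (int t = 0; t < numThreads; t++) {
            threads[t] = new Thread(() -> {
                synchronized (lock) {       // 'critical': one thread at a time
                    counter++;
                }
                try {
                    barrier.await();        // 'barrier': wait until all threads arrive
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
                // past the barrier, all increments are visible to every thread
            });
            threads[t].start();
        }
        for (Thread th : threads) {
            try {
                th.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println("counter = " + run(4)); // prints "counter = 4"
    }
}
```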
Pi/4 = arctan(1)
     = integral(0,1) [ dx / ( 1 + x*x ) ]
     ≈ h * sum(i=0,n-1) [ 1 / ( 1 + ( (i+0.5)*h )^2 ) ],   with h = 1/n
/**
 * Pi computation using a sequential algorithm.
 */
public class PiSeq {
    static final long num_steps = 1000000;
    static double step = 0.0;

    public static void main(String[] args) {
        double x = 0.0;
        double sum = 0.0;
        double pi = 0.0;
        step = 1.0 / ((double) num_steps);
        int i;
        for (i = 0; i < num_steps; i++) { // midpoint rule: i = 0 .. n-1
            x = (i + 0.5) * step;
            sum += 4.0 / (1.0 + x * x);
        }
        pi = sum * step;
        System.out.println("Pi = " + pi);
        System.out.println("Pi = " + Math.PI + " taken from Math.PI");
    }
}
Now with OpenMP
import jomp.runtime.*;

/**
 * Pi computation using an OpenMP reduction.
 */
public class PiRed {
    static final long num_steps = 1000000;
    static double step = 0.0;

    public static void main(String[] args) {
        double x = 0.0;
        double sum = 0.0;
        double pi = 0.0;
        step = 1.0 / ((double) num_steps);
        int i;
        //omp parallel for reduction(+:sum) private(x)
        for (i = 0; i < num_steps; i++) { // midpoint rule: i = 0 .. n-1
            x = (i + 0.5) * step;
            sum += 4.0 / (1.0 + x * x);
        }
        pi = sum * step;
        System.out.println("Pi = " + pi);
        System.out.println("Pi = " + Math.PI + " taken from Math.PI");
    }
}
make PiRed.java
/usr/lib/jdk1.3/bin/java -cp /home/kredel/java/lib/jomp1.0b.jar jomp.compiler.Jomp PiRed
Jomp Version 1.0.beta.
Compiling class PiRed....
Parallel For Directive Encountered
make PiRed.class
/usr/lib/jdk1.3/bin/javac -classpath /home/kredel/java/lib/jomp1.0b.jar PiRed.java
make np=4 PiRed
/usr/lib/jdk1.3/bin/java -classpath /home/kredel/java/lib/jomp1.0b.jar:. -Djomp.threads=4 PiRed
Pi = 3.141588653589875
Pi = 3.141592653589793 taken from Math.PI
import jomp.runtime.*;

/**
 * Hello World with OpenMP for Java.
 */
public class Hello {
    public static void main(String[] args) {
        int myid = 1;
        //omp parallel private(myid)
        {
            myid = OMP.getThreadNum();
            System.out.println("Hallo Welt von " + myid + "!");
        }
    }
}
Implementation via threads
import jomp.runtime.*;

/**
 * Hello World with OpenMP for Java (JOMP compiler output).
 */
public class Hello {
    public static void main(String[] args) {
        int myid = 1;
        // OMP PARALLEL BLOCK BEGINS
        {
            __omp_Class0 __omp_Object0 = new __omp_Class0();
            // shared variables
            __omp_Object0.args = args;
            // firstprivate variables
            try {
                jomp.runtime.OMP.doParallel(__omp_Object0);
            } catch (Throwable __omp_exception) {
                System.err.println("OMP Warning: Illegal thread exception ignored!");
                System.err.println(__omp_exception);
            }
            // reduction variables
            // shared variables
            args = __omp_Object0.args;
        }
        // OMP PARALLEL BLOCK ENDS
    }

    // OMP PARALLEL REGION INNER CLASS DEFINITION BEGINS
    private static class __omp_Class0 extends jomp.runtime.BusyTask {
        // shared variables
        String[] args;
        // firstprivate variables
        // variables to hold results of reduction

        public void go(int __omp_me) throws Throwable {
            // firstprivate variables + init
            // private variables
            int myid;
            // reduction variables, init to default
            // OMP USER CODE BEGINS
            {
                myid = OMP.getThreadNum();
                System.out.println("Hallo Welt von " + myid + "!");
            }
            // OMP USER CODE ENDS
            // call reducer
            // output to _rd_ copy
            if (jomp.runtime.OMP.getThreadNum(__omp_me) == 0) {
            }
        }
    }
    // OMP PARALLEL REGION INNER CLASS DEFINITION ENDS
}
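The generated code above follows a simple pattern: the body of the parallel region moves into a task object whose go(me) method every worker thread executes with its own thread id. A minimal hand-written version of that pattern is sketched below; the names `ParallelTask`, `MiniRuntime`, and `doParallel` are hypothetical stand-ins, and the real `jomp.runtime` classes differ:

```java
// Sketch: the task-object pattern behind JOMP's generated code.
// Hypothetical minimal runtime; the real jomp.runtime.BusyTask differs.
interface ParallelTask {
    void go(int me) throws Throwable; // body of the parallel region
}

public class MiniRuntime {

    // Run 'task' on 'numThreads' threads, analogous to OMP.doParallel(...).
    public static void doParallel(ParallelTask task, int numThreads) {
        Thread[] threads = new Thread[numThreads];
        for (int t = 0; t < numThreads; t++) {
            final int me = t;
            threads[t] = new Thread(() -> {
                try {
                    task.go(me);  // each thread runs the region with its own id
                } catch (Throwable e) {
                    System.err.println("Warning: thread exception ignored: " + e);
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) {   // implicit barrier at the region's end
            try {
                th.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) {
        // prints one greeting per thread, in nondeterministic order
        doParallel(me -> System.out.println("Hallo Welt von " + me + "!"), 4);
    }
}
```

The join loop at the end is why code after a parallel block can safely read results written by the workers, mirroring the implicit barrier at the end of an OpenMP parallel region.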
Output
make np=4 Hello
/usr/lib/jdk1.3/bin/java -classpath /home/kredel/java/lib/jomp1.0b.jar:. -Djomp.threads=4 Hello
Hallo Welt von 0!
Hallo Welt von 3!
Hallo Welt von 1!
Hallo Welt von 2!
© Universität Mannheim, Rechenzentrum, 2000-2002. Last modified: Wed Jul 10 23:31:46 MEST 2002