No longer does Moore’s Law result in higher frequencies and improved scalar application performance; instead, higher transistor counts lead to increased parallelism, both through more cores and through wider SIMD registers. To get the full performance benefit from these improvements in processor technology, applications need to take advantage of both forms of parallelism. This article will focus on SIMD parallelism; multi-core parallelism is addressed elsewhere, with both OpenMP* and MPI being widely used in Fortran applications.
If suitable optimized library functions are available, such as the ones in the Intel® Math Kernel Library, these may be a simple and effective way for Fortran developers to take advantage of SIMD parallelism on Intel Architecture. Otherwise, the main way has been through auto-vectorizing compilers. The SIMD intrinsic functions that are available to C and C++ developers do not have a corresponding Fortran interface, and in any case require a great deal of programming effort and introduce an undesirable architecture dependence.
Auto-vectorization has its limitations. The compiler must be conservative, making no optimization that could lead to results different from those of unoptimized code, even for unlikely-seeming input values. It must also estimate whether a code transformation is likely to yield faster or slower code. These decisions are typically based on incomplete information, since language standards have not provided a way to convey to the compiler all the information it needs to make more effective decisions, or to convey programmer intent. Requirements for auto-vectorization are discussed in a separate article.
The auto-vectorizer can be more effective if the compiler is given hints through compiler directives, but whether or not a loop is vectorized still depends on the compiler’s internal analysis and the information available to it. Explicit Vector Programming is an attempt to remove that uncertainty: the programmer, using knowledge of the application, can instruct the compiler to vectorize a loop. The method is analogous to OpenMP, where a directive may require the compiler to thread a loop. Just as in OpenMP, the programmer is responsible for telling the compiler about reduction variables and private variables, to avoid race conditions.
Corresponding methods for C and C++ applications were introduced as components of Intel® Cilk™ Plus (see the references below and the Intel Compiler documentation). Closely similar features, for C, C++ and Fortran, are now part of the OpenMP 4.0 standard; see http://www.openmp.org/.
The main features of explicit vector programming in Intel® Cilk™ Plus are array notation, SIMD-enabled functions and the SIMD pragma. OpenMP 4.0 contains similar functionality, except for array notation. Intel Fortran supports similar features, but with important differences. For clarity in the examples below, lower case variable names are used for C and upper case for Fortran, although Fortran is of course case insensitive.
Array Notation
In Intel® Cilk™ Plus for C and C++, array assignments are written in the form
a[0:n] = a[m:n] + b[0:n] * c[n:n]
A comparable Fortran array assignment, with a lower bound of 1 instead of 0, would be:
A(1:n) = A(1+m:n+m) + B(1:n) * C(n+1:n+n)
However, there are two important differences.
- In C, the quantity following the colon specifies the number of data elements in the assignment. Naturally, this must be the same for each array or pointer in the assignment. In Fortran, the quantity following the colon specifies the upper bound of the array section being assigned. Thus x[j:k] in C describes the same array section as X(j:j+k-1) in Fortran.
- In C, array syntax asserts to the compiler that there is no backward dependence between data elements on the right hand side and data elements on the left hand side that would make vectorization unsafe. In the example above, this implies that m >= 0 and that A(1:n) does not overlap B(1:n) or C(n+1:n+n) in such a way that, in a scalar loop, an element of A would be written before the overlapping element of B or C was read. If the assertion is incorrect, it is considered a user error and the results are unpredictable. In Fortran, there is no such assertion. The compiler may need to store a temporary copy of the right hand side (RHS) for all elements before starting assignment to the left hand side (LHS). This store to a temporary array may introduce a significant performance overhead, but it may permit vectorization of loops that could not otherwise be vectorized. Note that the semantics of Fortran array notation are different from those of a Fortran 77 style DO loop. For example, the loop
DO i=1,n
   A(i) = A(i+m) + B(i) * C(n+i)
ENDDO
yields different results from the array assignment above for the case m = -1 (assuming that A(0) and B(0) are legal addresses). In the DO loop, for i=2, the value of A(1) used on the RHS has already been modified by the previous iteration. In the array notation version, the original value of A(1) is used instead. The DO loop cannot be safely vectorized, whereas the array assignment can.
It is possible to apply compiler directives such as IVDEP to an array assignment, just as to a DO loop. An IVDEP directive allows the compiler to assume there are no potential (data-dependent) dependencies that might make vectorization unsafe; it does not, however, change the array assignment semantics that may require a temporary copy. The IVDEP directive will not override a proven dependency.
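For example, applying IVDEP to the earlier assignment asserts that the value of m never creates a backward dependence at run-time (a minimal sketch; the directive is only safe if that assertion actually holds):

!DIR$ IVDEP
A(1:n) = A(1+m:n+m) + B(1:n) * C(n+1:n+n)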
Use the compiler option -vec-report2 (/Qvec-report2) to get a report explaining which loops were vectorized, which were not, and why. Report levels 3 and 6 (instead of 2) give additional detail.
Consecutive array assignments with similar array bounds may sometimes be fused into a single loop, to reduce loop overhead and/or memory traffic for intermediate results. When this happens, a message appears in the high level optimizer report, obtained with -opt-report-phase hlo (/Qopt-report-phase hlo).
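As a sketch, two consecutive assignments such as the following are candidates for fusion into a single loop (TMP, C and D are hypothetical arrays with matching bounds):

TMP(1:n) = A(1:n) * B(1:n)
D(1:n)   = TMP(1:n) + C(1:n)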
Loops containing procedure calls
Loops containing procedure calls (subroutine or function calls) cannot in general be vectorized unless the called procedure can be inlined or a vector version of it is available. For a small number of calls to small procedures, inlining may be the preferred solution, since it requires little or no source code change and also eliminates the overhead of a procedure call. Inlining within the same source file is enabled by default at optimization levels -O2 and higher; inlining from one source file to another is enabled by interprocedural optimization with -ipo on Linux*, /Qipo on Windows*. By default, the compiler decides whether or not to inline a procedure call based on size and other heuristics. To ensure inlining in order to enable vectorization, the !DIR$ FORCEINLINE directive may be used, either immediately before a statement containing one or more procedure calls, or before a DO loop to inline all procedure calls within the loop. See the compiler user guide for examples. The compiler option -opt-report-phase ipo_inl (/Qopt-report-phase:ipo_inl) may be used to generate a report showing which functions were inlined.
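A minimal sketch of the loop form, where small_update is a hypothetical procedure visible to the compiler (in the same source file, or in another file compiled with -ipo):

!DIR$ FORCEINLINE
do i = 1, n
   call small_update( a(i), b(i) )   ! requests inlining of all calls in this loop
enddo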
SIMD-enabled procedures (functions and subroutines)
For some loops, such as those containing many procedure calls, nested procedure calls or calls to large procedures, inlining may not be practicable, due to complexity, code size or compilation time. Making such procedures SIMD-enabled still allows the containing loop to be vectorized, and gives the programmer more control over how vectorization is carried out. For a SIMD-enabled procedure, the compiler creates both a scalar version and one or more vector implementations. The SIMD attribute of the procedure must be declared in both the caller and the callee; the best way to do this is with an explicit interface. For example:
subroutine get_proj (rad, theta, proj)
!dir$ attributes vector :: get_proj      ! or !$omp declare simd(get_proj) in OpenMP 4.0
   real, intent(in)  :: rad, theta
   real, intent(out) :: proj
   proj = rad*cos(theta)
end subroutine get_proj


real, dimension(N) :: p,r,t
interface
   subroutine get_proj (rad, theta, proj)
   !dir$ attributes vector :: get_proj   ! or !$omp declare simd(get_proj) in OpenMP 4.0
      real, intent(in)  :: rad, theta
      real, intent(out) :: proj
   end subroutine get_proj
end interface
…
do i=1,N                                 ! or instead of DO loop, use array notation to write
   call get_proj( r(i), t(i), p(i) )     !    call get_proj( r(:), t(:), p(:) )
enddo
With Intel Fortran Compiler 14.0 and earlier, the interface to a SIMD-enabled procedure cannot yet be accessed by USE association from a module containing the SIMD-enabled procedure. This issue will be fixed in an upcoming release.
The directive tells the compiler to create, in addition to a normal scalar version, a SIMD version of subroutine get_proj that takes vector input arguments rad and theta and returns a vector output argument proj. The length of the vector is chosen by the compiler and will depend on the targeted microarchitecture, but it can also be influenced by the programmer using an additional clause. The calling procedure contains an interface block, so that the compiler knows that a SIMD version of the subroutine is available. The compiler is then able to vectorize the loop containing the subroutine call. This is very similar to the way the compiler vectorizes a loop containing a direct call to a math function, making use of the SIMD-enabled functions in the Short Vector Math Library, libsvml. In the present example, such a simple procedure could easily be inlined; the value of SIMD-enabled procedures is in more complicated examples with calls to multiple procedures in different source files.
By default, the compiler expects that any argument of a SIMD-enabled procedure may be a vector. If the programmer knows that one or more arguments will always be scalar, they should be declared using the UNIFORM clause. This allows the compiler to avoid generating vector code for the “uniform” argument, broadcasting the scalar value to each vector lane instead. For example,
subroutine get_proj (rad, theta, proj)
!dir$ attributes vector : uniform(rad) :: get_proj
! ( or !$omp declare simd(get_proj) uniform(rad) in OpenMP 4.0 )
   real, intent(in)  :: rad, theta
   real, intent(out) :: proj
   proj = rad*cos(theta)
end subroutine get_proj


interface
   subroutine get_proj (rad, theta, proj)
   !dir$ attributes vector : uniform(rad) :: get_proj
   ! ( or !$omp declare simd(get_proj) uniform(rad) in OpenMP 4.0 )
      real, intent(in)  :: rad, theta
      real, intent(out) :: proj
   end subroutine get_proj
end interface
real, dimension(N) :: p,t
real :: r
…
do i=1,N                              ! or instead of DO loop, use array notation to write
   call get_proj( r, t(i), p(i) )     !    call get_proj( r, t(:), p(:) )
enddo
The source code of the subroutine itself is unchanged; only the subroutine attributes are different.
Efficiency considerations for SIMD-enabled procedures
In C, scalar arguments are by default passed by value, so in SIMD-enabled procedures they are passed as a short vector of values. The Fortran default calling convention passes scalar arguments by reference; the simple extension to SIMD-enabled procedures is to pass a short vector of addresses instead of a single address. The compiler then “gathers” data from the vector of addresses to create a short vector of values for use in subsequent vector arithmetic. Because of this gather overhead, a simple Fortran SIMD-enabled procedure needs to contain more work than an analogous SIMD-enabled function in C in order to show a performance benefit. The overhead can be mitigated for input arguments by choosing to pass them by value, for example:
subroutine get_proj (rad, theta, proj) bind(C)
!dir$ attributes vector : uniform(rad) :: get_proj
! or !$omp declare simd(get_proj) uniform(rad) in OpenMP 4.0
   real, intent(in), value :: rad, theta
   real, intent(out)       :: proj
   proj = rad*cos(theta)
end subroutine get_proj


interface
   subroutine get_proj (rad, theta, proj) bind(C)
   !dir$ attributes vector : uniform(rad) :: get_proj
   ! or !$omp declare simd(get_proj) uniform(rad) in OpenMP 4.0
      real, intent(in), value :: rad, theta
      real, intent(out)       :: proj
   end subroutine get_proj
end interface
real, dimension(N) :: p, t
real :: r
…
do i=1,N                              ! or instead of DO loop, use array notation to write
   call get_proj( r, t(i), p(i) )     !    call get_proj( r, t(:), p(:) )
enddo
The VALUE keyword alone is not sufficient; it must be combined with BIND(C). Instead of these two keywords, it is possible to use the directive !DIR$ ATTRIBUTES VALUE :: argname. The keywords are preferred, since they are part of the Fortran language standard.
This method cannot be applied to output arguments, since the Fortran language standard requires INTENT(OUT) or INTENT(INOUT) arguments to be passed by reference. However, a SIMD-enabled subroutine containing a single vector output argument may be converted to a SIMD-enabled function with a vector result, which is passed back to the calling procedure by value, avoiding the overhead of storing results through a vector of addresses. For example:
real function proj (rad, theta) bind(C)
!dir$ attributes vector : uniform(rad) :: proj
! or !$omp declare simd(proj) uniform(rad) in OpenMP 4.0
   real, intent(in), value :: rad, theta
   proj = rad*cos(theta)
end function proj


interface
   real function proj (rad, theta) bind(C)
   !dir$ attributes vector : uniform(rad) :: proj
   ! or !$omp declare simd(proj) uniform(rad) in OpenMP 4.0
      real, intent(in), value :: rad, theta
   end function proj
end interface
real, dimension(N) :: p, t
real :: r
…
do i=1,N                     ! or instead of DO loop, use array notation to write
   p(i) = proj( r, t(i) )    !    p(:) = proj( r, t(:) )
enddo
Any additional vector arguments with intent(out) or intent(inout) must be passed by reference in the usual way.
The SIMD directive
The Intel compiler directive !DIR$ SIMD and its OpenMP 4.0 equivalent !$OMP SIMD instruct the compiler to generate vectorized code for the loop that follows. Unlike other compiler directives, such as IVDEP or VECTOR ALWAYS, the SIMD directive is not a hint to the compiler; it is a command. The compiler does not analyze the loop for dependencies or estimate whether vectorization is likely to give a speedup; it forges ahead and vectorizes. It is the responsibility of the programmer to ensure there are no dependencies that could make vectorization unsafe, and to judge whether vectorization may improve performance. This behavior is analogous to the !$OMP DO directive, which instructs the compiler to generate threaded code for the following DO loop, leaving the programmer responsible for thread safety and for providing sufficient parallel work to make threading worthwhile.
The compiler may go to great lengths, including emulation of vector functions by scalar calls, to ensure vectorization. However, as for OpenMP, SIMD loops must still obey a few basic rules, such as the iteration count being known at entry to the loop. See the article Requirements for Vectorizing Loops with #pragma SIMD for more detail.
Here is an example of how vectorization of a loop can be enforced using the SIMD directive:
module mymod
   real, pointer, dimension(:) :: a, b, c, d, e
end module mymod

subroutine add4
   use mymod
   implicit none
   integer :: i
!!dir$ simd                  ! or !$omp simd with -openmp-simd
   do i=1,1000
      e(i) = a(i) + b(i) + c(i) + d(i)
   enddo
end subroutine add4
> ifort add4.f90 -c -vec-report2 -S
add4.f90(12): (col. 34) remark: loop was not vectorized: existence of vector dependence
add4.f90(12): (col. 34) remark: loop skipped: multiversioned
Unaided, the compiler does not auto-vectorize, because it does not know whether part of e might be aliased with one of the pointers a, b, c or d (i.e., point to an overlapping memory location). There are too many possibilities for the compiler to test them all at run-time. The IVDEP compiler directive could be used to assert that the potential dependencies are not realized, i.e., that e is independent of a, b, c and d, so that it is safe to vectorize the loop. If we recompile with !DIR$ IVDEP, we get two messages: “LOOP WAS VECTORIZED” and “vectorization possible but seems inefficient”. The original message “loop skipped: multiversioned” gives a clue. If we add -opt-report-phase hlo to get more optimization detail, we see “Loop at 12 -- selected for multiversion - Assume shape array stride tests”. The compiler does not know whether the pointers point to contiguous chunks of memory or to array sections with a non-unit stride. It generates separate code for each case, and judges vectorization to be worthwhile in the case of contiguous memory (unit stride), but not otherwise. We could invite the compiler to vectorize each case, irrespective of whether it expects a speed-up, by adding the compiler directive !DIR$ VECTOR ALWAYS. With this, both loop versions are vectorized.
A simpler way to ensure vectorization is to use the single directive !DIR$ SIMD. This instructs (not invites) the compiler to vectorize the loop, regardless of any considerations of dependencies or performance. Again, both loop versions are vectorized:
> ifort add4.f90 -c -vec-report2
add4.f90(13): (col. 34) remark: SIMD LOOP WAS VECTORIZED
add4.f90(13): (col. 34) remark: SIMD LOOP WAS VECTORIZED
The !$OMP SIMD directive from the OpenMP 4.0 standard behaves in the same way, but requires either the -openmp or the -openmp-simd command line option. If the application uses only SIMD constructs and no OpenMP threading APIs, -openmp-simd should be preferred.
The SIMD directive does not tell the compiler whether the target of the pointer has unit stride, so there are still two loop versions. If you know that the target always has unit stride, then add the “CONTIGUOUS” attribute to the pointer declaration to inform the compiler, which will then generate only the loop version for stride 1.
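For the example above, that would mean declaring the pointers in mymod as follows (a sketch; the CONTIGUOUS attribute requires Fortran 2008 support):

module mymod
   real, pointer, contiguous, dimension(:) :: a, b, c, d, e
end module mymod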
The SIMD directive may disable all dependency checking by the compiler. Using it when real dependencies are present may yield unexpected results or failures.
The SIMD directive includes the functionality of both IVDEP and VECTOR ALWAYS directives, as illustrated by the above example, but it is more powerful. In the following example, using IVDEP and VECTOR ALWAYS is not sufficient to get the loop to vectorize.
real function vec_sum(x,nx)
   implicit none
   integer, intent(in) :: nx
   real, intent(in), dimension(nx) :: x
   integer :: i
   interface
      real function func(x)
      !dir$ ATTRIBUTES VECTOR :: func     ! or !$OMP DECLARE SIMD (func)
         real, intent(in) :: x
      end function func
   end interface
   vec_sum = 0.
!!dir$ simd reduction(+:vec_sum)          ! or !$OMP SIMD reduction(+:vec_sum)
   do i = 1,nx
      vec_sum = vec_sum + func(x(i))
   enddo
end function vec_sum
As coded, the auto-vectorizer is unable to recognize and vectorize the combination of a reduction loop with a vector function call, although it could auto-vectorize either construct separately:
> ifort -c -vec-report2 vec_sum.f90
vec_sum.f90(17): (col. 3) remark: loop was not vectorized: existence of vector dependence.
Even an IVDEP directive will not enable the compiler to auto-vectorize. However, uncomment the SIMD directive and the compiler behaves as directed:
> ifort -c -vec-report2 vec_sum.f90
vec_sum.f90(17): (col. 3) remark: SIMD LOOP WAS VECTORIZED.
The behavior is the same if the OpenMP 4.0 SIMD directive is used and the example compiled with -openmp or -openmp-simd.
This example also illustrates the use of a REDUCTION clause on the SIMD directive. Just like in an OpenMP worksharing construct such as !$OMP DO, its use is mandatory to avoid conflicts (race conditions) between different iterations that write to the same reduction variable, vec_sum in the above example.
The PRIVATE clause also functions the same way as in OpenMP, to avoid race conditions. For example, if the above loop is modified slightly to become an integral of func over x, a PRIVATE clause is needed:
!dir$ simd reduction(+:vec_sum) private(xi)
do i = 1,nx
   xi = x0 + bin_width*i
   vec_sum = vec_sum + func(xi)
enddo
Additional clauses supported by the SIMD directive include LINEAR, to tell the compiler that the loop contains one or more secondary induction variables, and ALIGNED, to tell the compiler that one or more arrays have a specified alignment in memory. ALIGNED(X:64) would tell the compiler that the start of the array X is always aligned to a 64 byte boundary in memory, and that the compiler can safely generate aligned load instructions without risking a fault, for example on Intel® Xeon Phi™ Coprocessors. It is the programmer’s responsibility to align the array; in many cases, this can be done with a command line switch such as -align array64byte. For more details about these and other clauses supported by the !DIR$ SIMD and !$OMP SIMD directives, see the Intel Fortran Compiler User and Reference Guide under SIMD Loop Directive and SIMD Directive (OpenMP* API).
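As a sketch of both clauses (the names j, y, s and n are hypothetical; j advances by 2 each iteration and y is assumed to be 64-byte aligned, e.g. via -align array64byte):

j = 0
!DIR$ SIMD LINEAR(j:2) ALIGNED(y:64)
do i = 1, n
   j = j + 2               ! secondary induction variable, declared LINEAR with step 2
   y(i) = s*real(j)        ! y accessed with unit stride from an aligned base
enddo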
If the compiler is unable to vectorize a loop preceded by a !$OMP SIMD directive, for example, if it does not conform to the requirements referenced above, the compiler emits a fatal error. If it is unable to vectorize a loop preceded with !DIR$ SIMD, by default it emits a warning diagnostic: loop was not vectorized with “simd”. This may be converted to a fatal error if the ASSERT clause is added to the directive. This can be a useful way to alert the programmer if future changes unintentionally prevent the loop from vectorizing.
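A minimal sketch of the ASSERT clause (x, y, a and n are hypothetical):

!DIR$ SIMD ASSERT
do i = 1, n
   y(i) = y(i) + a*x(i)    ! compilation now fails with an error if this loop is not vectorized
enddo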
If an application is compiled with the option -no-simd, any !DIR$ SIMD directives are ignored. This can be useful for comparison testing. Any !$OMP SIMD directives are ignored by default, unless the application is built with -openmp or -openmp-simd.
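For example (myprog.f90 is a placeholder):

> ifort -c -no-simd myprog.f90          ! !DIR$ SIMD directives ignored
> ifort -c -openmp-simd myprog.f90      ! !$OMP SIMD honored, without OpenMP threading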
References relating to explicit vector programming in C/C++ using Intel® Cilk™ Plus
- Webinar: Introduction to Vectorization using Intel® Cilk™ Plus Extensions
- Article: Getting Started with Intel® Cilk™ Plus SIMD Vectorization
- Article: Getting Started with Intel® Cilk™ Plus Array Notations
- Article: Writing data-parallel code in C/C++ using SIMD vector functions
- Article: Requirements for Vectorizing Loops with #pragma SIMD