GLAS (Generic Linear Algebra Subprograms)
The GLAS are generic routines that provide standard building blocks for performing vector and matrix operations. The Level 1 GLAS perform scalar, vector and vector-vector operations, the Level 2 GLAS perform matrix-vector operations, and the Level 3 GLAS perform matrix-matrix operations.
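As a rough illustration of what the three levels mean, the sketch below spells out one representative routine per level as naive loops over row-major arrays: axpy (vector-vector), gemv (matrix-vector), and gemm (matrix-matrix). This is plain D reference code, not the GLAS or ndslice API; the function names and the storage layout are assumptions made only for this example.

```d
// Naive reference versions of one routine per BLAS/GLAS level.
// For illustration only; dense, row-major double[] storage is assumed.
import std.stdio : writeln;

// Level 1 (vector-vector): y := alpha * x + y
void axpyRef(double alpha, const double[] x, double[] y)
{
    foreach (i; 0 .. y.length)
        y[i] += alpha * x[i];
}

// Level 2 (matrix-vector): y := alpha * A * x + beta * y, A is m x n
void gemvRef(double alpha, const double[] a, size_t m, size_t n,
             const double[] x, double beta, double[] y)
{
    foreach (i; 0 .. m)
    {
        double acc = 0;
        foreach (j; 0 .. n)
            acc += a[i * n + j] * x[j];
        y[i] = alpha * acc + beta * y[i];
    }
}

// Level 3 (matrix-matrix): C := alpha * A * B + beta * C, A is m x k, B is k x n
void gemmRef(double alpha, const double[] a, const double[] b,
             double beta, double[] c, size_t m, size_t n, size_t k)
{
    foreach (i; 0 .. m)
        foreach (j; 0 .. n)
        {
            double acc = 0;
            foreach (l; 0 .. k)
                acc += a[i * k + l] * b[l * n + j];
            c[i * n + j] = alpha * acc + beta * c[i * n + j];
        }
}

void main()
{
    auto x = [1.0, 2.0, 3.0];
    auto y = [1.0, 1.0, 1.0];
    axpyRef(2.0, x, y);
    writeln(y); // [3, 5, 7]
}
```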
Implemented Routines
The list of already implemented features.
GLAS is a generalization of BLAS (Basic Linear Algebra Subprograms). Because the BLAS are efficient, portable, and widely available, they are commonly used in the development of high-quality linear algebra and related software, such as LAPACK, NumPy, and the Julia language.
An efficient Level 3 BLAS implementation requires cache-friendly matrix blocking. In addition, SIMD instructions should be used at all levels on modern architectures.
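A minimal sketch of the blocking idea is shown below, assuming square row-major matrices and an arbitrary placeholder tile size; GLAS's actual kernels also pack matrix panels and use hand-tuned SIMD micro-kernels, none of which is reproduced here. The inner loop is kept unit-stride so that a vectorizing compiler such as LDC can apply SIMD instructions to it.

```d
// Simplified cache-blocked matrix multiplication (C += A * B) illustrating
// the blocking idea only; packing and explicit SIMD kernels are omitted.
import std.algorithm.comparison : min;

enum size_t blockSize = 64; // placeholder tile size; real values are tuned per cache level

// C += A * B for square n x n row-major matrices, processed tile by tile so
// that each tile stays resident in cache while it is reused.
void gemmBlocked(const double[] a, const double[] b, double[] c, size_t n)
{
    for (size_t ii = 0; ii < n; ii += blockSize)
        for (size_t kk = 0; kk < n; kk += blockSize)
            for (size_t jj = 0; jj < n; jj += blockSize)
                foreach (i; ii .. min(ii + blockSize, n))
                    foreach (k; kk .. min(kk + blockSize, n))
                    {
                        const aik = a[i * n + k];
                        // Unit-stride inner loop over rows of b and c;
                        // a vectorizing compiler can turn this into SIMD code.
                        foreach (j; jj .. min(jj + blockSize, n))
                            c[i * n + j] += aik * b[k * n + j];
                    }
}

void main()
{
    size_t n = 3;
    auto a = [1.0, 0, 0,  0, 1, 0,  0, 0, 1]; // identity matrix
    auto b = [1.0, 2, 3,  4, 5, 6,  7, 8, 9];
    auto c = new double[](n * n);
    c[] = 0;
    gemmBlocked(a, b, c, n);
    assert(c == b); // I * B == B
}
```

The point of the tiling is simply that each blockSize x blockSize tile of a, b, and c is reused many times while it still fits in cache, instead of being streamed repeatedly from main memory.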
Why GLAS
GLAS is ...
- fast to execute.
- fast to compile.
- fast to extend using ndslices.
- fast to add new instruction set targets.
Optimization notes
GLAS requires a recent LDC compiler (>= 1.1.0-beta2).