NA Software Libraries on the Net

Libraries are collections of source code and of source-code packages. Much of the code is in Fortran. If you prefer C++ or C, see C++ Resources and Fortran, C, and f2c. The main library by far is Netlib.
For statistical software, the best resource is Statlib .
See also the sections on:
Netlib, including LAPACK

Netlib is probably the world's largest repository of numerical-methods software. It is hosted at Oak Ridge National Laboratory, Oak Ridge, Tennessee, and at AT&T Bell Laboratories, Murray Hill, NJ.
Some gems of Netlib: LAPACK provides a wide variety of linear algebra routines.
LAPACK++ is a C++ version of, unfortunately, only a subset of LAPACK. It is a work in progress, and one hopes the full functionality of LAPACK will be supported soon.
ScaLAPACK is for distributed memory machines.
f2c: Most of the programs in Netlib are in Fortran. However, Netlib contains an excellent Fortran-to-C conversion utility, f2c. While f2c produces working C code, that code is visually complex and ugly. Using f2c on a large package like LAPACK can require a good deal of time to get all the options correct. Fortunately, LAPACK has already been converted to C: see CLAPACK.
The utility f2c can also be invoked by email: send mail to firstname.lastname@example.org with the subject "execute f2c" and a body containing the (non-confidential) Fortran program to be converted. The email option is useful only for very small, simple programs, however, since a resulting C program of any size must be linked against the f2c libraries, and building those means downloading the package anyway. Generally it is easier to download the f2c package, build the libraries and the f2c converter, and do the conversion locally.
CAUTION: Programs created by f2c conversion use parameter-passing conventions different from most C or C++ programs. Their callers must create the appropriate parameters before using them. See the file f2c.ps in the f2c distribution. A good description of this issue may also be found in the "readme" file for CLAPACK in Netlib.
Statlib via ftp.
Quail: Quantitative analysis in Lisp
Visual Numerics' JNL, a Numerical Library for Java, is a set of classes providing the most important numerical functions missing from Java. The library comprises one numerical type class, Complex, and three categories of numerical function classes: the special functions class, the linear algebra classes, and the statistics class. All classes use double-precision floating point as the underlying floating-point type.
The f2j (JLAPACK) project provides the LAPACK and BLAS numerical subroutines translated from their Fortran 77 source into class files, executable by the Java Virtual Machine (JVM) and suitable for use by Java programmers. This makes it possible for a Java application or applet distributed on the web to use established legacy numerical code that was originally written in Fortran. The translation was accomplished using a special-purpose Fortran-to-Java (source-to-source) compiler.
JAMA is a basic linear algebra package for Java. It provides user-level classes for constructing and manipulating real, dense matrices. Five fundamental matrix decompositions are provided: Cholesky, LU, QR, eigenvalue, and singular value.
JAMA is by no means a complete linear algebra environment. For example, there are no provisions for matrices with particular structure (e.g., banded, sparse) or for more specialized decompositions (e.g., Schur, generalized eigenvalue). Complex matrices are not included. It is not our intention to ignore these important problems. We expect that some of these (e.g., complex) will be addressed in future versions. It is our intent that the design of JAMA not preclude extension to some of these additional areas.
The SciMark Java numerical benchmark consists of various kernels (FFT, Monte Carlo, sparse matrix computation, finite-difference stencils, and LU factorization) and is meant to provide an indication of how well Java environments perform on numeric and scientific applications. SciMark can be run interactively within your browser, or downloaded to run in other Java environments. The web site includes bar-graph comparisons between various computer/Java platforms, as well as an archive of previous results.
PLAPACK: Parallel Linear Algebra

PLAPACK is an MPI-based Parallel Linear Algebra Package designed to provide a user-friendly infrastructure for building parallel dense linear algebra libraries. The Users' Guide, "Using PLAPACK: Parallel Linear Algebra Package," is available from The MIT Press. WHAT IS DIFFERENT: PLAPACK provides three features not currently found in other publicly available parallel dense linear algebra libraries:
WGS: NICONET Control Theory Libraries

The NICONET library objectives are to bring together the existing numerical software for control and systems theory in a widely available library, called NICONET, and to extend this library to cover, as far as possible, the area of industrial applications.
GSL: GNU Scientific Library

The GNU Scientific Library (GSL) is a collection of routines for numerical computing in C. The routines have been written from scratch over a five-year period by the GSL team using modern coding conventions. The subject areas covered by the library include:
ATLAS: Automatically Tuned Linear Algebra Software

ATLAS (Automatically Tuned Linear Algebra Software) provides a complete, high-performance implementation of the BLAS library and a small subset of the LAPACK library. In addition, the associated developer release (ATLAS 3.3) supports Intel's SSE2, allowing for maximal DGEMM performance of around 2 Gflop/s on a 1.5 GHz Pentium 4 (SSE1 provides roughly 4 Gflop/s peak SGEMM on the same machine). Prebuilt archives are available for many architectures, including well-tested versions of the developer release. In particular, SSE2-enabled Pentium 4 libraries are available for both Linux and Windows.
ATLAS is a software package that automatically generates highly optimized numerical kernels for today's commodity processors. As the underlying computing hardware doubles its speed every eighteen months, it often takes more than a year for software to be optimized or "tuned" for performance on a newly released CPU. Users tend to see only a fraction of the power available from any new processor until it is well on the way to obsolescence.