
Overview of PIPS

PIPS is an automatic parallelizer for scientific programs. It takes Fortran or C code as input and emphasizes interprocedural techniques for program analysis. PIPS-specific interprocedural analyses include precondition and array region computations. PIPS is based on linear algebra techniques both for analyses, e.g. dependence testing, and for code generation, e.g. loop interchange or tiling.
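
To give a concrete idea of the loop restructurings driven by these linear algebra techniques, here is a before/after sketch of loop interchange and tiling on a simple C loop nest; the code is purely illustrative input and output, not actual PIPS output.

    /* Original loop nest: the array is traversed column by column,
     * which has poor spatial locality for a row-major C array. */
    void copy(int n, double a[n][n], double b[n][n])
    {
        for (int j = 0; j < n; j++)
            for (int i = 0; i < n; i++)
                a[i][j] = b[i][j];
    }

    /* After loop interchange: the loops are permuted, restoring a
     * row-by-row traversal. */
    void copy_interchanged(int n, double a[n][n], double b[n][n])
    {
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = b[i][j];
    }

    /* After tiling with an illustrative tile size of 32: each loop is
     * strip-mined and the tile loops are hoisted outward. */
    void copy_tiled(int n, double a[n][n], double b[n][n])
    {
        for (int ii = 0; ii < n; ii += 32)
            for (int jj = 0; jj < n; jj += 32)
                for (int i = ii; i < ii + 32 && i < n; i++)
                    for (int j = jj; j < jj + 32 && j < n; j++)
                        a[i][j] = b[i][j];
    }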

Analyses and transformations are driven by a make system (pipsmake), which enforces consistency across analyses and modules; the results are stored in a database by a resource manager (pipsdbm) for later interprocedural use. The compiler is made of phases that are called on demand to perform the analyses or transformations requested by the user.
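
The demand-driven principle can be sketched as follows. This is a minimal caching scheme written only to illustrate the idea; the names and types are hypothetical and do not reflect the actual pipsmake or pipsdbm interfaces.

    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical resource record: a named analysis result that is
     * recomputed only when requested and not already up to date. */
    typedef struct {
        const char *name;        /* e.g. the preconditions of one module */
        bool        up_to_date;  /* reset whenever the module changes    */
        void       *value;       /* result kept by the resource manager  */
    } resource;

    static void *compute(resource *r)
    {
        printf("computing %s\n", r->name);  /* the producing phase runs here */
        r->up_to_date = true;
        return r->value;
    }

    /* Request a resource: reuse the stored result when possible,
     * otherwise (re)compute it.  The real pipsmake also checks the
     * resources the producing phase itself depends on, omitted here. */
    void *request(resource *r)
    {
        return r->up_to_date ? r->value : compute(r);
    }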

[Figure: PIPS Overall Structure]

Several user interfaces are available: a shell interface (Pips), a command-line interface (Tpips) and an X-window interface (Wpips), which is the best suited for users. Emacs can also be used to display source code, transformed or not (Epips). Unfortunately, only the tpips interface is still maintained, because the graphical interfaces rely on obsolete graphical libraries. iPyPS, an IPython-based interface, is under development.

Many interprocedural program analyses are implemented in PIPS, and their results can be displayed in various forms (an annotated example follows this list). For instance:

  • Use-def chains, dependence graph
  • Transformers, preconditions...
  • Symbolic complexity.
  • Effects (of instructions on data), array regions...
  • Call graph, interprocedural control flow graph...
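
To give a flavour of these results, the small function below is annotated, in comments, with the kind of information preconditions and array regions convey; the comment format is purely illustrative and does not reproduce PIPS's actual output.

    /* Illustrative annotations only, not PIPS output syntax. */
    void scale(int n, double x[n])
    {
        /* Summary write region of the loop (illustrative): x[i] with 0 <= i <= n-1 */
        for (int i = 0; i < n; i++) {
            /* Loop-body precondition (illustrative): 0 <= i and i <= n-1 */
            x[i] *= 2.0;
        }
    }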

Many program transformations are also available, such as:

  • Interprocedural parallelization (shared-memory oriented)
  • Scalar and array privatization (see the sketch after this list)
  • Loop unrolling, interchange, normalization, distribution, strip mining...
  • Dead-code elimination, partial evaluation, atomizer...
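
As an illustration of the first two items, here is a before/after sketch of scalar privatization enabling the parallelization of a simple loop; the OpenMP rendering of the result is only one possible presentation, chosen for readability, and the code is not actual PIPS output.

    /* Before: the temporary t is shared between iterations, creating a
     * spurious dependence that prevents parallel execution as written. */
    void axpy(int n, double a, const double x[n], double y[n])
    {
        double t;
        for (int i = 0; i < n; i++) {
            t = a * x[i];
            y[i] = y[i] + t;
        }
    }

    /* After scalar privatization: each iteration gets its own copy of t,
     * the dependence disappears and the loop can be parallelized. */
    void axpy_parallel(int n, double a, const double x[n], double y[n])
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            double t = a * x[i];
            y[i] = y[i] + t;
        }
    }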

PIPS Compilers:

  • Parallelizer/vectorizer for Cray machines
  • Polyhedral method (Prof. P. Feautrier)
    • Array Data Flow Graph computation.
    • Scheduling, mapping, and associated code generation.
  • PUMA/WP65: Shared memory emulation
  • HPFC: an HPF compiler prototype.

PIPS is built on top of two tools. The first one is Newgen, which manages data structures à la IDL: it provides basic manipulation functions for data structures described in a declaration file, and supports persistent data and type hierarchies.
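
To make this concrete, here is a rough sketch of a Newgen-style domain declaration and of the kind of manipulation functions generated from it; the declaration syntax is simplified and the prototypes are hypothetical, shown only to illustrate the idea.

    /* A domain declaration roughly in the spirit of a Newgen description
     * (simplified, not exact Newgen syntax):
     *
     *     loop = index:entity x lower:expression x upper:expression x body:statement ;
     */

    /* Hypothetical opaque types standing for the declared domains. */
    typedef struct entity_s     *entity;
    typedef struct expression_s *expression;
    typedef struct statement_s  *statement;
    typedef struct loop_s       *loop;

    /* Prototypes of the kind of functions a Newgen-style generator
     * produces from the declaration above (hypothetical). */
    loop      make_loop(entity index, expression lower, expression upper, statement body);
    entity    loop_index(loop l);   /* accessor for the "index" field */
    statement loop_body(loop l);    /* accessor for the "body" field  */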

The second tool is the Linear C3 library, which handles vectors, matrices, affine constraints, generating systems and polyhedra, and provides basic linear programming facilities. The algorithms used are designed for integer and/or rational coefficients. This library is extensively used for analyses, such as the dependence test and the precondition and region computations, and for transformations, such as tiling. The Linear C3 library is a joint project with the IRISA and PRISM laboratories, partially funded by CNRS. IRISA contributed an implementation of Chernikova's algorithm and PRISM a C implementation of PIP (Parametric Integer Programming).
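
As a small worked example of what this integer linear algebra buys, consider the loop below: deciding whether its two array assignments can ever touch the same element reduces to an integer feasibility question, sketched in the comments. The reasoning is illustrative and is not a trace of PIPS's actual dependence test.

    /* Can a[2*i] and a[2*i + 1] refer to the same element for two
     * iterations i and i' in [0, n)?  That would require an integer
     * solution of
     *
     *     2*i = 2*i' + 1,   0 <= i < n,   0 <= i' < n
     *
     * The left-hand side is even and the right-hand side is odd, so no
     * integer solution exists: the two statements are independent and
     * the loop can safely be parallelized. */
    void split(int n, double a[], const double b[], const double c[])
    {
        for (int i = 0; i < n; i++) {
            a[2*i]     = b[i];
            a[2*i + 1] = c[i];
        }
    }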

Seven years after its inception, PIPS as a workbench is still alive and well. It provides a robust infrastructure for new experiments in compilation, program analysis, optimization, transformation and parallelization. New code can be developed externally by reading the complex data structures stored in the PIPS interprocedural database.

PIPS can also be used as a reverse engineering tool. Region analyses provide useful summaries of procedure effects, while precondition-based partial evaluation and dead code elimination reduce code size.
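
This code-size reduction can be illustrated as follows: if interprocedural preconditions establish that a routine is only ever called with debug equal to zero, precondition-based partial evaluation folds the test and dead code elimination removes the unreachable branch. The example is a hedged sketch; the function and the calling context are made up for illustration.

    #include <stdio.h>

    /* Before: generic code with a debugging path. */
    double norm2(int n, const double x[n], int debug)
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += x[i] * x[i];
        if (debug != 0)
            printf("norm^2 = %f\n", s);
        return s;
    }

    /* After, assuming the interprocedural precondition debug == 0 holds
     * at every call site: the test is folded and the dead branch removed. */
    double norm2_specialized(int n, const double x[n])
    {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += x[i] * x[i];
        return s;
    }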

PIPS is mainly written in C99, is developed under Linux, and is freely downloadable.
