# Aeneas test suite

This folder contains the test suite for Aeneas. It is organized as follows:
- `test_runner/`: an OCaml script that orchestrates the test suite;
- `src/`: Rust files and crates used as test inputs;
- `llbc/`: a temporary directory (ignored by git) that stores intermediate files generated during tests;
- `coq/`, `fstar/`, `hol4/`, `lean/`: one folder per backend. Each folder contains:
  - files generated by Aeneas from our test inputs;
  - handwritten proof files.

## Running the test suite

Running the test suite has two parts:
- We generate files from Rust for each backend. This is done by running `make test` in the
  project root, which calls the test runner on each file or crate in `tests/src`; the runner in
  turn calls `charon` and `aeneas` to generate the backend-specific outputs.
- We check that the generated output is valid code by parsing and type-checking it with the
  relevant verifier (Lean, Coq, F*, etc.), and by replaying the handwritten proofs that use it as
  a dependency. This is done by running `make verify` in the project root.

To generate the output of a single test, run `make test-<file_name>`, e.g. `make test-loops.rs` or
`make test-betree`. To verify the output for a single backend, run `make verify-<backend>` inside the
`tests/` folder, e.g. `make verify-coq`.

CI does both of these things; it also checks that all of the generated files have been committed.

Important note: at the moment we do not regenerate the HOL4 outputs, but we do still verify them
with HOL4. This is tracked in https://github.com/AeneasVerif/aeneas/issues/62.

## Adding a new test

Adding a new test is easy: just add a `foo.rs` file or a `foo_crate` folder in `tests/src`; `make
test` will find the new file and call the test runner on it.

To specify options for how your test should be run, see the next section.

Ideally, any non-trivial change to Aeneas should come with a test that exercises the new code. The
goal of this test suite is to cover a large portion of what Aeneas can do, so that we can work on
it confidently.

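For instance, a single-file test can be as small as one function. The file name and contents below are purely illustrative, not an actual test from the suite:

```rust
// tests/src/foo.rs -- hypothetical example of a new test input.
// Any function defined here gets extracted by charon and translated
// by aeneas to each backend.
pub fn add_one(x: u32) -> u32 {
    x + 1
}
```

After adding the file, `make test-foo.rs` generates the corresponding backend outputs.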
## Passing options to `aeneas` and `charon`

The test runner supports setting several options for each test:
- for single files, it reads comments at the top of the file that start with `//@`;
- for crates, it reads the `crate_dir/aeneas-test-options` file.

In both cases it supports the same options. Typical options are:
- `charon-args=--polonius --opaque=betree_utils`: pass these arguments to `charon`;
- `aeneas-args=-test-trans-units`: pass these arguments to `aeneas` for all backends;
- `[fstar] aeneas-args=-decreases-clauses -template-clauses`: pass these arguments to `aeneas` for
  the `fstar` backend only;
- `[coq,lean] skip`: skip the test for the `coq` and `lean` backends;
- `[fstar] known-failure`: see the next section;
- `[coq,fstar] subdir=misc`: store the output in the `misc` subdirectory for these two backends.

For an up-to-date list of options, see the comments in `tests/test_runner/Input.ml`.

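Putting this together, the top of a hypothetical single-file test using some of the options above might look as follows (the option lines are taken from the list above; the function itself is just a placeholder):

```rust
//@ charon-args=--polonius
//@ [fstar] aeneas-args=-decreases-clauses -template-clauses
//@ [coq,lean] skip
// Option comments go at the top of the file; the test input follows.
pub fn max_u32(a: u32, b: u32) -> u32 {
    if a > b { a } else { b }
}
```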
## `known-failure` tests

There is a special kind of test that doesn't generate backend code: `//@ known-failure` tests. These
are meant to record:
- cases that currently produce an error, so that we don't forget them and can fix them later;
- cases that _should_ error, to ensure that we correctly raise an error and that the error output is
  readable.

We record the output of the `aeneas` call in `test-file.out` alongside the original `test-file.rs`.
This file must be committed like the rest of the test outputs. (Note: sometimes the output differs
between your local machine and CI, e.g. because a release build changed a stacktrace. In that case
we don't record the output, which is done with `//@ no-check-output`.)

It is useful to add a test here whenever you find an interesting failure case.
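As a sketch, a `known-failure` test is an ordinary test input marked with the corresponding option comment. Whether a given piece of code actually makes `aeneas` fail depends on the current state of the translation, so the body below is only a placeholder:

```rust
//@ known-failure
// Hypothetical input that aeneas is expected to reject; the recorded
// error message would live in a matching `.out` file next to this one.
pub fn placeholder(x: i32) -> i32 {
    x
}
```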

Note however that this is only for failures of Aeneas itself. Tests that cause Charon errors should
be submitted to the Charon project: a `known-failure` test will not pass if the error occurs within
Charon. Note also that there is currently no option for tests that translate successfully but fail
to verify in the backend.