
Codebase Architecture Walkthrough

  • pcal: Converts the PlusCal language into TLA+ that can be run by TLC2.
  • tla2tex: “Pretty-prints” TLA+ code into LaTeX. This LaTeX can then be used for publishing.
  • tla2sany: Contains the AST and parsing logic for the TLA+ language. The AST classes generated from parsing are used by TLC2.
  • tlc2: The model checker to validate the AST generated by tla2sany. This is the largest and most complex part of the codebase by far.

This is a generalized, simplified description of how this code is normally used and the components involved. More detail is given on some of these components in the following sections.

  1. (Optional) PCal code is translated into TLA+ code. pcal.trans::main
  2. The TLC2 model checker is run from the standard entrypoint tlc2.TLC::main on that code and its associated .cfg file
  3. TLC2 class created, arguments parsed, and processing started. tlc2.TLC::TLC, TLC::handleParameters, TLC::process
  4. The spec is parsed and stored in Tool. This occurs in the Tool constructor.
    1. A SpecObj is created with file information about the spec
    2. A SpecProcessor is created with that SpecObj
    3. At the start of processing the spec, SpecProcessor calls SANY::frontEndMain that parses the spec into an AST
    4. The SpecProcessor performs a number of extraction and analysis operations on the AST. Note: These are called in a precise order in the Tool constructor. Modifying this order is highly error-prone, though the tests will fail reliably if there is an issue.
    5. The Tool takes the results of all these operations and assigns them to final variables. The Tool is now the only source of truth for specification information, and is immutable. Note: This is a critical separation of concerns, separating the complex job of parsing and resolving the spec from the job of analyzing it. The Tool is the only object that spans both.
  5. The following are selected and initialized
    • The model checker or simulator. This class performs a certain type of analysis on the spec.
      • ModelChecker: Standard Breadth-First checking of entire state space
        • FPSet: Contains the fingerprints of all explored states.
        • StateQueue: Contains new states to explore.
      • DFIDModelChecker: Depth first checking of entire state space, to a pre-specified level of depth.
      • Simulator: Randomized exploration and checking of the state space.
  6. The analysis is performed
    1. Workers are run per configuration to allow multi-threaded analysis
    2. Each checker performs analysis differently, but generally the analysis consists of generating new states from existing states, and checking those new states
  7. The result is returned to the user

ModelChecker

  1. The StateQueue is seeded with the initial states
  2. Workers take states off the StateQueue. From there they generate potential next states. They check that the fingerprint of each new state is not already in the FPSet (so that analysis is not duplicated). They then check that state for safety properties, and add it to both the FPSet and the StateQueue.
  3. Periodically the analysis pauses. If configured to create a checkpoint, a checkpoint is created. If configured to check temporal properties, the Liveness Checking workflow (described below) is performed by calling LiveCheck::check. Once done the analysis continues.
  4. When the StateQueue is empty or an error is detected, the analysis is over.
Performance Note: The vast majority of CPU cycles are spent either calculating fingerprints or storing fingerprints in the FPSet.
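
The loop above can be sketched in miniature. This is a hypothetical, self-contained sketch, not TLC's real code: the toy State record and its fingerprint, successors, and isSafe methods stand in for TLCState, FP64 fingerprinting, next-state generation, and invariant checking, while a HashSet and ArrayDeque stand in for the FPSet and StateQueue.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;

// Hypothetical, simplified sketch of the breadth-first checking loop.
public class BfsSketch {
    // A toy state: a single integer counter.
    record State(int value) {
        long fingerprint() { return value; }       // stands in for the FP64 hash
        List<State> successors() {                  // stands in for the next-state relation
            return List.of(new State(value + 1), new State(value * 2));
        }
        boolean isSafe() { return value <= 100; }   // a toy safety property
    }

    // Returns true if every reachable state satisfies the safety property.
    public static boolean check(State init, int bound) {
        Set<Long> fpSet = new HashSet<>();          // plays the role of the FPSet
        Queue<State> queue = new ArrayDeque<>();    // plays the role of the StateQueue
        fpSet.add(init.fingerprint());
        queue.add(init);                            // step 1: seed with initial states
        while (!queue.isEmpty()) {
            State s = queue.poll();
            if (!s.isSafe()) return false;          // error detected: analysis is over
            for (State next : s.successors()) {
                if (next.value() > bound) continue; // keep the toy state space finite
                if (fpSet.add(next.fingerprint())) {
                    queue.add(next);                // unseen fingerprint: explore later
                }
            }
        }
        return true;                                // queue empty: analysis is over
    }

    public static void main(String[] args) {
        System.out.println(check(new State(0), 50));
    }
}
```

Deduplicating on the 64-bit fingerprint (fpSet.add) rather than on the full state is what keeps the explored-set small enough to be practical, at the cost of the small collision risk discussed later.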

DFIDModelChecker

  1. Data structure initialized with InitStates
  2. The analysis occurs by setting a temporary max depth that increases to the configured maximum depth
  3. Worker retrieves unexplored initial state, removing it from the set of init states
  4. Worker explores the state space depth first up to the temporary max depth, storing all seen states in the FPIntSet
  5. Periodically the analysis pauses. If configured to create a checkpoint, a checkpoint is created. If configured to check temporal properties, the Liveness Checking workflow (described below) is performed by calling LiveCheck::check. Once done the analysis continues.
  6. Once there are no more levels to explore, or an error is detected, the analysis is over
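
The iterative-deepening strategy above can be sketched as follows. This is a hypothetical toy, not TLC's real code: the map from fingerprint to shallowest depth seen plays the role of the FPIntSet, and all names are illustrative.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of depth-first iterative deepening (DFID): repeat a
// depth-bounded DFS, raising a temporary maximum depth until the configured
// maximum is reached.
public class DfidSketch {
    record State(int value) {
        long fingerprint() { return value; }
        List<State> successors() { return List.of(new State(value + 1), new State(value + 2)); }
        boolean isSafe() { return value != 7; }     // a toy safety property
    }

    public static boolean check(State init, int maxDepth) {
        // The temporary max depth increases to the configured maximum.
        for (int tempMax = 1; tempMax <= maxDepth; tempMax++) {
            Map<Long, Integer> seen = new HashMap<>();  // plays the role of the FPIntSet
            if (!dfs(init, 0, tempMax, seen)) return false;  // error detected
        }
        return true;                                   // no more levels to explore
    }

    private static boolean dfs(State s, int depth, int maxDepth, Map<Long, Integer> seen) {
        if (!s.isSafe()) return false;
        Integer prev = seen.get(s.fingerprint());
        if (prev != null && prev <= depth) return true; // already explored at least this shallow
        seen.put(s.fingerprint(), depth);
        if (depth == maxDepth) return true;             // hit the temporary max depth
        for (State next : s.successors()) {
            if (!dfs(next, depth + 1, maxDepth, seen)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(check(new State(0), 3));
    }
}
```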

Simulator

  1. Initial states calculated and checked for validity
  2. Each SimulationWorker retrieves a random initial state
  3. Next states are randomly generated and checked, up to a specified depth. Each one of these runs is a trace.
  4. If liveness checking is enabled, check the trace with LiveCheck::checkTrace
  5. Once the specified number of traces have been run, or an error is detected, the analysis is over
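
The random-trace strategy can be sketched like this. Again a hypothetical toy, not TLC's real API: the State record and simulate method are illustrative stand-ins for TLCState and SimulationWorker.

```java
import java.util.List;
import java.util.Random;

// Hypothetical sketch of the Simulator strategy: repeatedly generate random
// traces up to a fixed depth, checking each state along the way.
public class SimulatorSketch {
    record State(int value) {
        List<State> successors() { return List.of(new State(value + 1), new State(value + 3)); }
        boolean isSafe() { return value != 50; }    // a toy safety property
    }

    // Runs numTraces random traces of length depth; returns true if no safety
    // violation was found. Unlike BFS, a true result is not a proof: only a
    // random sample of the state space was checked.
    public static boolean simulate(State init, int numTraces, int depth, long seed) {
        Random rng = new Random(seed);
        for (int t = 0; t < numTraces; t++) {        // each run is one trace
            State s = init;
            for (int d = 0; d < depth; d++) {
                List<State> next = s.successors();
                s = next.get(rng.nextInt(next.size()));  // random next state
                if (!s.isSafe()) return false;       // error detected: analysis is over
            }
        }
        return true;                                 // specified number of traces run
    }

    public static void main(String[] args) {
        System.out.println(simulate(new State(0), 100, 10, 42L));
    }
}
```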

Liveness Checking

Liveness checking is what allows checking of temporal properties.

Liveness checking is initiated by calling: LiveCheck::check or checkTrace.

Every time this occurs it creates new LiveWorker(s) to perform the analysis.

Note: Unlike standard model checking, Liveness Checking by default is not guaranteed to produce the shortest trace that violates the properties. There is an AddAndCheckLiveCheck class that extends LiveCheck and attempts to do this, at the cost of significant performance. It is selected in the constructor of AbstractChecker.

States and Fingerprints

States are a fundamental part of TLC. They represent a set of variable values that entirely determines the state of the system. States are generated and checked as part of the TLC2 analysis process. The base class for this is TLCState.

Fingerprints are hashes taken of these states using the FP64 hash. A fingerprint is a 64-bit hash representing the state. This is significantly smaller than storing the state itself, and yet collisions are very unlikely (though if one did occur, part of the state space would not be explored). The fingerprinting process is initiated from TLCState::generateFingerPrint.
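
To illustrate the idea (this is not the real FP64, which is a 64-bit polynomial hash; FNV-1a is used here purely as a stand-in), the sketch below serializes a state's variable values in a fixed order and hashes the bytes to a single long:

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch of fingerprinting a state: serialize the variable
// values in a fixed order, hash the bytes, and use the 64-bit result in
// place of the full state.
public class FingerprintSketch {
    // FNV-1a, a stand-in for TLC's FP64 polynomial hash.
    public static long hash64(byte[] data) {
        long h = 0xcbf29ce484222325L;
        for (byte b : data) {
            h ^= (b & 0xffL);
            h *= 0x100000001b3L;
        }
        return h;
    }

    // A "state" as a map from variable name to value; the TreeMap fixes the
    // iteration order, because the hash is order dependent.
    public static long fingerprint(Map<String, String> state) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : new TreeMap<>(state).entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append(';');
        }
        return hash64(sb.toString().getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        long fp1 = fingerprint(Map.of("x", "1", "y", "2"));
        long fp2 = fingerprint(Map.of("y", "2", "x", "1"));
        System.out.println(fp1 == fp2);  // same state, same fingerprint
    }
}
```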

There are two primary implementations of state that are very similar:

  • TLCStateMut: Higher performance, records less information
  • TLCStateMutExt: Lower performance, records more information

Effectively, functions that are no-ops in TLCStateMut, persistently store that information in TLCStateMutExt. This information can include the name of the generating action, and is often needed for testing / debugging purposes.

The implementation to use is selected in the constructor of Tool, by setting a reference Empty state for the analysis.
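
The no-op-versus-persist split can be sketched as below. The class and method names here are hypothetical, chosen only to show the pattern; they are not TLC's real signatures.

```java
// Hypothetical sketch of the TLCStateMut / TLCStateMutExt split: the base
// implementation treats the recording call as a no-op, while the "extended"
// implementation persistently stores the information.
public class StateImplSketch {
    static class FastState {                        // plays the role of TLCStateMut
        void setActionName(String name) { /* no-op: not recorded, for speed */ }
        String actionName() { return null; }        // the information was never kept
    }

    static class RecordingState extends FastState { // plays the role of TLCStateMutExt
        private String actionName;
        @Override void setActionName(String name) { this.actionName = name; }
        @Override String actionName() { return actionName; }
    }

    public static void main(String[] args) {
        FastState fast = new FastState();
        FastState recording = new RecordingState();
        fast.setActionName("Next");       // silently dropped
        recording.setActionName("Next");  // persisted for testing / debugging
        System.out.println(fast.actionName() + " " + recording.actionName());
    }
}
```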

ModelChecker Data Structures

The ModelChecker performs a breadth-first search. In a standard BFS algorithm, there are two main data structures: a queue of items to explore next, and a set of the nodes already explored. Because of the sheer size of the state space for the ModelChecker, specialized versions of these data structures are used.

Note: These data structures are a large focus of the profiling / testing work, because both performance and correctness are key for the ModelChecker to explore the full state space.

Fingerprint Sets (FPSets)

FPSets are sets of fingerprints with two main operations:

  • put a fingerprint into the set
  • determine whether a fingerprint is in the set

Located in tlc2.tool.fp. All but FPIntSet extend FPSet. FPIntSet is used specifically for depth-first-search and is not discussed here.

In practice the performance and size of the FPSet determine the extent of analysis possible. The OffHeapDiskFPSet is in practice the most powerful of the FPSets, efficiently spanning the operations across off-heap memory and the disk. There are also memory only FPSets that may be preferable for smaller state spaces.
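
The interface can be sketched with a plain HashSet standing in for the real implementations (which spill to off-heap memory and disk). This is a hypothetical toy; in particular, the exact return-value convention of TLC's real FPSet.put is not asserted here, only the "seen before?" flavor of the operation.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of an FPSet: a set of 64-bit fingerprints with two
// main operations, put and contains.
public class FpSetSketch {
    private final Set<Long> fps = new HashSet<>();

    // Returns true if the fingerprint was already in the set, i.e. the
    // corresponding state has been explored and can be skipped.
    public boolean put(long fp) { return !fps.add(fp); }

    public boolean contains(long fp) { return fps.contains(fp); }

    public static void main(String[] args) {
        FpSetSketch set = new FpSetSketch();
        System.out.println(set.put(42L));      // false: first time seen
        System.out.println(set.put(42L));      // true: duplicate, skip this state
        System.out.println(set.contains(42L)); // true
    }
}
```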

StateQueue

Located in tlc2.tool.queue, with all implementations extending StateQueue.

The default implementation is DiskStateQueue. Because of the size of the queue, storing it in memory may not be an option.

Symmetry Sets

A discussion of Symmetry Sets can be found here.

They are implemented in the TLCState::generateFingerPrint function by calculating all permutations of a particular state (for all symmetry set model values) and returning the smallest fingerprint. Therefore all permutations have the same fingerprint, and so only the first example generated is explored.

Note: The order-dependent nature of the fingerprint means that when using symmetry sets, the majority of the CPU cycles are spent “normalizing” the data structures so that the order in which the fingerprint is assembled is consistent. An order-independent fingerprint hash would remove this performance hit, but would significantly increase the likelihood of collisions unless the hash algorithm itself were upgraded.
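
The "smallest fingerprint over all permutations" idea can be sketched for the simplest possible case: two symmetric model values, where the only non-identity permutation swaps them. Everything here (the two-process state, the FNV-1a stand-in hash) is hypothetical and illustrative only.

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of symmetry reduction during fingerprinting: hash every
// permutation of the symmetric model values and keep the smallest result, so
// all symmetric states collapse to one fingerprint.
public class SymmetrySketch {
    // FNV-1a, a stand-in for the real FP64 hash.
    static long hash64(String s) {
        long h = 0xcbf29ce484222325L;
        for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
            h ^= (b & 0xffL);
            h *= 0x100000001b3L;
        }
        return h;
    }

    // Toy state: which of two symmetric processes ("p1", "p2") holds a lock.
    public static long symmetricFingerprint(String holder) {
        long fpIdentity = hash64("holder=" + holder);
        String swapped = holder.equals("p1") ? "p2" : "p1";  // the swap permutation
        long fpSwapped = hash64("holder=" + swapped);
        return Math.min(fpIdentity, fpSwapped);              // smallest fingerprint wins
    }

    public static void main(String[] args) {
        // Both symmetric states map to the same fingerprint, so only the
        // first one generated is explored.
        System.out.println(symmetricFingerprint("p1") == symmetricFingerprint("p2"));
    }
}
```

With N symmetric values the real cost grows with the number of permutations considered, which is why symmetry sets shift so many CPU cycles into this normalization work.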

Debugger

The AttachingDebugger is the main debugger. It works in conjunction with the DebugTool to allow a user to step through a TLC execution.

Note: Using the debugger will keep the process alive indefinitely due to the listener. The Eclipse network listener is not interruptible, so thread interruption behavior will not work; it relies on process exit.

Coverage

The coverage functionality determines which statements in the TLA+ spec have been used. It is located in tlc2.tool.coverage.

UniqueString

String comparison can be a relatively expensive operation compared to primitive comparison. The code requires many string comparisons, but over a relatively limited set of strings. UniqueString maps each string to a monotonically increasing integer, reducing comparisons to much cheaper integer comparisons. This is used throughout the codebase.
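
The interning idea can be sketched in a few lines. This is a hypothetical stand-in, not UniqueString's real API: intern each string once, assign it the next integer id, and compare ids from then on.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the UniqueString idea: map each distinct string to
// a monotonically increasing integer id, so that equality checks become O(1)
// integer comparisons instead of character-by-character comparisons.
public class UniqueStringSketch {
    private static final Map<String, Integer> TABLE = new HashMap<>();
    private static int nextId = 0;

    // Returns the one canonical id for this string.
    public static synchronized int intern(String s) {
        return TABLE.computeIfAbsent(s, k -> nextId++);
    }

    public static void main(String[] args) {
        int a = intern("Init");
        int b = intern("Next");
        int c = intern("Init");
        System.out.println(a == c);  // same string, same id
        System.out.println(a == b);  // different strings, different ids
    }
}
```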

  • codebase/architecture.txt
  • Last modified: 2024/09/28 15:58
  • by 127.0.0.1