RTIC Scope
NOTE: RTIC Scope and this document are works in progress.
1 About
RTIC Scope is a zero-overhead framework for recording and analyzing execution traces from RTIC applications on ARMv7-M targets. The lack of overhead is achieved by exploiting the ITM/DWT subsystem as defined by the ARMv7-M Architecture Reference Manual, Appendix D4.
1.1 Features
The framework is split into three main components: the canonical backend, the frontend(s), and the target-side tracing crate, `cortex-m-rtic-trace`.
- The backend is a host-side (i.e., the system that programs the target device) application which exposes two operations:
  - `trace`, where the target is flashed with the wanted firmware and the execution trace is captured and saved to file;
  - `replay`, where previously recorded traces are replayed for postmortem and offline analysis.
- Frontend(s): While tracing and replaying, a trace can be forwarded to a set of frontends via Unix domain sockets, where virtually endless analytics can be applied. Consider, for example, a graphical frontend akin to an oscilloscope or a logic analyzer, but where RTIC tasks and their execution statuses (running, scheduled, preempted) are plotted instead of signals. A dummy frontend (used for debug and reference purposes) is available out of the box; the dummy frontend prints received events, their absolute timestamps, and the time since the last chunk of events. For example:
```
dummy: @1625485615052692868 ns (+124999937 ns): []  # the local timestamp clock overflowed, but nothing else happened
dummy: @1625485615052769556 ns (+76688 ns): [Task { name: "app::toggle", action: Entered }]
dummy: @1625485615052790806 ns (+21250 ns): [Task { name: "app::toggle", action: Exited }]
```
At present, information only flows from the backend to the specified frontend, via `--frontend`. In the future, bidirectional communication will be possible, enabling a frontend to implement a complex hardware-in-the-loop testing suite, for example. Please refer to the project roadmap for future prospects.
- The target-side tracing crate, `cortex-m-rtic-trace`, is a small auxiliary crate applied to the target application under trace. It only exposes the `#[trace]` macro, which is used to trace RTIC software tasks; a minimal usage sketch follows below. NOTE: While hardware tasks are traced "free of charge" (see 5), software tasks are traced by writing to a `u32` variable twice.
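For illustration, here is a minimal sketch of how `#[trace]` is applied. It assumes RTIC 0.6-style module syntax, the `stm32f4` PAC, a hypothetical `EXTI0` dispatcher, and that the macro is imported from `cortex-m-rtic-trace`; the ITM/DWT/TPIU configuration required before any trace packets are emitted is omitted here and shown in the examples repository instead.

```rust
#[rtic::app(device = stm32f4::stm32f401, dispatchers = [EXTI0])]
mod app {
    use cortex_m_rtic_trace::trace;

    #[shared]
    struct Shared {}

    #[local]
    struct Local {}

    #[init]
    fn init(_: init::Context) -> (Shared, Local, init::Monotonics) {
        // ITM/DWT/TPIU setup would go here; see the examples repository.
        (Shared {}, Local {}, init::Monotonics())
    }

    // Hardware task: traced "free of charge" via DWT exception trace packets.
    #[task(binds = SysTick)]
    fn toggle(_: toggle::Context) {}

    // Software task: #[trace] adds one write to the watch variable on entry
    // and one on exit.
    #[task]
    #[trace]
    fn compute(_: compute::Context) {}
}
```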
1.2 Project repositories/crates
The framework is managed under the RTIC Scope organization on GitHub. Below is a list of the main repositories that constitute the RTIC Scope project. Any other crates listed under the organization but not here are branches of other repositories pending upstream merge.
- cargo-rtic-scope: The RTIC Scope backend which builds the target application, recovers trace information, traces the target, replays traces, etc.
- cortex-m-rtic-trace: A `no_std` crate used to configure the target device for tracing purposes.
- examples: A set of example target applications where the RTIC Scope framework is applied. These are also documented below.
- api: The common API used between the RTIC Scope backend and all frontends.
- frontend-dummy: A "noop" frontend implementation that writes received `api::EventChunk` structs to stderr with nanosecond timestamp information.
- itm-decode: A host-side library that decodes the binary trace stream received from the target into workable Rust structures.
- rfcs: A catch-all meta-repository for discussions and feature suggestions that encompass more than a single repository.
- rtic-scope.github.io: The source code for this web page.
2 Requirements
2.1 Hardware
- A target device with an ARMv7-M MCU.
- A hardware probe supported by `probe-rs`, or some serial hardware that exposes a serial device which can then be used to read the trace stream from the target device.
2.2 Software
- A Linux-based operating system with a recent Rust toolchain. Minimum supported Rust version TBA.
3 Getting started
Install the backend and reference frontend via
```
$ cargo install --git https://github.com/rtic-scope/cargo-rtic-scope.git
$ cargo install rtic-scope-frontend-dummy
```
3.1 Examples
3.1.1 blinky
Assuming you have a STM32F401 Nucleo-64 at hand, let us get a trace from a simple blinking LED application:
```
$ git clone https://github.com/rtic-scope/examples.git && cd examples
$ # Note content of package/workspace metadata table
$ cargo metadata --format-version 1 | jq .metadata
{
  "rtic-scope": {
    "interrupt_path": "stm32f4::stm32f401::Interrupt",
    "pac": "stm32f4",
    "pac_features": [
      "stm32f401"
    ]
  }
}
$ cargo metadata --format-version 1 | jq '.packages[] | select(.name == "trace-examples") | .metadata'
{
  "rtic-scope": {
    "interrupt_path": "stm32f4::stm32f401::Interrupt",
    "pac": "stm32f4",
    "pac_features": [
      "stm32f401"
    ]
  }
}
$ cargo rtic-scope trace --bin blinky-noconf --chip stm32f401re --clear-traces --tpiu-freq 16000000
   Compiling trace-examples v0.1.0 (/home/tmplt/exjobb/trace-examples)
    Finished dev [unoptimized + debuginfo] target(s) in 1.88s
Flashing /home/tmplt/exjobb/trace-examples/target/thumbv7em-none-eabihf/debug/blinky-noconf...
Flashed.
Resetting target...
Reset.
exceptions:
    SysTick -> ["app", "toggle"]
interrupts:
software tasks:
reset timestamp: 2021-07-05 13:46:53.931431868 +02:00
trace clock frequency: 16000000 Hz
Buffer size of source could not be found. Buffer may overflow and corrupt trace stream without warning.
Failed to resolve chunk from TimestampedTracePackets { timestamp: Timestamp { base: None, delta: Some(1940184), data_relation: Some(Sync), diverged: false }, packets: [ExceptionTrace { exception: ThreadMode, action: Entered }] }. Reason: Don't know what to do with ThreadMode. Ignoring...
Don't know how to convert Sync. Skipping...
Don't know how to convert Sync. Skipping...
Don't know how to convert Sync. Skipping...
Don't know how to convert Sync. Skipping...
^Cdummy: @1625485614177693306 ns (+1625485614177693306 ns): []
dummy: @1625485614302693243 ns (+124999937 ns): []
dummy: @1625485614427693181 ns (+124999938 ns): []
dummy: @1625485614552693118 ns (+124999937 ns): []
dummy: @1625485614677693056 ns (+124999938 ns): []
dummy: @1625485614802692993 ns (+124999937 ns): []
dummy: @1625485614927692931 ns (+124999938 ns): []
dummy: @1625485615052692868 ns (+124999937 ns): []
dummy: @1625485615052769556 ns (+76688 ns): [Task { name: "app::toggle", action: Entered }]
dummy: @1625485615052790806 ns (+21250 ns): [Task { name: "app::toggle", action: Exited }]
dummy: @1625485615177790743 ns (+124999937 ns): []
dummy: @1625485615302790681 ns (+124999938 ns): []
dummy: @1625485615427790618 ns (+124999937 ns): []
dummy: @1625485615552790556 ns (+124999938 ns): []
dummy: @1625485615677790493 ns (+124999937 ns): []
dummy: @1625485615802790431 ns (+124999938 ns): []
dummy: @1625485615927790368 ns (+124999937 ns): []
dummy: @1625485616052768868 ns (+124978500 ns): [Task { name: "app::toggle", action: Entered }]
dummy: @1625485616052790181 ns (+21313 ns): [Task { name: "app::toggle", action: Exited }]
dummy: @1625485616177790118 ns (+124999937 ns): []
dummy: @1625485616302790056 ns (+124999938 ns): []
dummy: @1625485616427789993 ns (+124999937 ns): []
dummy: @1625485616552789931 ns (+124999938 ns): []
dummy: @1625485616677789868 ns (+124999937 ns): []
dummy: @1625485616802789806 ns (+124999938 ns): []
dummy: @1625485616927789743 ns (+124999937 ns): []
dummy: @1625485617052768368 ns (+124978625 ns): [Task { name: "app::toggle", action: Entered }]
dummy: @1625485617052789618 ns (+21250 ns): [Task { name: "app::toggle", action: Exited }]
dummy: @1625485617177789556 ns (+124999938 ns): []
dummy: @1625485617302789493 ns (+124999937 ns): []
dummy: @1625485617427789431 ns (+124999938 ns): []
dummy: @1625485617552789368 ns (+124999937 ns): []
```
Now, let us list and replay the trace we just recorded:
```
$ cargo rtic-scope replay --bin blinky-noconf --list
0       /home/tmplt/exjobb/trace-examples/target/rtic-traces/blinky-noconf-ge9d44c3-2021-07-05T13:46:53.trace
$ cargo rtic-scope replay 0 --bin blinky-noconf
Replaying /home/tmplt/exjobb/trace-examples/target/rtic-traces/blinky-noconf-ge9d44c3-2021-07-05T13:46:53.trace
exceptions:
    SysTick -> ["app", "toggle"]
interrupts:
software tasks:
reset timestamp: 2021-07-05 13:46:53.931431868 +02:00
trace clock frequency: 16000000 Hz
Failed to resolve chunk from TimestampedTracePackets { timestamp: Timestamp { base: None, delta: Some(1940184), data_relation: Some(Sync), diverged: false }, packets: [ExceptionTrace { exception: ThreadMode, action: Entered }] }. Reason: Don't know what to do with ThreadMode. Ignoring...
Don't know how to convert Sync. Skipping...
Don't know how to convert Sync. Skipping...
Don't know how to convert Sync. Skipping...
Don't know how to convert Sync. Skipping...
dummy: @1625485614177693306 ns (+1625485614177693306 ns): []
dummy: @1625485614302693243 ns (+124999937 ns): []
dummy: @1625485614427693181 ns (+124999938 ns): []
dummy: @1625485614552693118 ns (+124999937 ns): []
dummy: @1625485614677693056 ns (+124999938 ns): []
dummy: @1625485614802692993 ns (+124999937 ns): []
dummy: @1625485614927692931 ns (+124999938 ns): []
dummy: @1625485615052692868 ns (+124999937 ns): []
dummy: @1625485615052769556 ns (+76688 ns): [Task { name: "app::toggle", action: Entered }]
dummy: @1625485615052790806 ns (+21250 ns): [Task { name: "app::toggle", action: Exited }]
dummy: @1625485615177790743 ns (+124999937 ns): []
dummy: @1625485615302790681 ns (+124999938 ns): []
dummy: @1625485615427790618 ns (+124999937 ns): []
dummy: @1625485615552790556 ns (+124999938 ns): []
dummy: @1625485615677790493 ns (+124999937 ns): []
dummy: @1625485615802790431 ns (+124999938 ns): []
dummy: @1625485615927790368 ns (+124999937 ns): []
dummy: @1625485616052768868 ns (+124978500 ns): [Task { name: "app::toggle", action: Entered }]
dummy: @1625485616052790181 ns (+21313 ns): [Task { name: "app::toggle", action: Exited }]
dummy: @1625485616177790118 ns (+124999937 ns): []
dummy: @1625485616302790056 ns (+124999938 ns): []
dummy: @1625485616427789993 ns (+124999937 ns): []
dummy: @1625485616552789931 ns (+124999938 ns): []
dummy: @1625485616677789868 ns (+124999937 ns): []
dummy: @1625485616802789806 ns (+124999938 ns): []
dummy: @1625485616927789743 ns (+124999937 ns): []
dummy: @1625485617052768368 ns (+124978625 ns): [Task { name: "app::toggle", action: Entered }]
dummy: @1625485617052789618 ns (+21250 ns): [Task { name: "app::toggle", action: Exited }]
dummy: @1625485617177789556 ns (+124999938 ns): []
dummy: @1625485617302789493 ns (+124999937 ns): []
dummy: @1625485617427789431 ns (+124999938 ns): []
dummy: @1625485617552789368 ns (+124999937 ns): []
```
We can read from the `dummy` frontend that toggling an LED takes about 21 µs in debug mode.
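That figure follows directly from the dummy output above: the `Exited` event at 1625485615052790806 ns trails the matching `Entered` event at 1625485615052769556 ns by 21250 ns ≈ 21 µs, and the later toggles show similar deltas (+21250 ns, +21313 ns).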
4 Concepts
- Source: a (trace) source is any implementation of `trait Source` from which decoded trace packets can be pulled via `Iterator::next`. A source can be a live target via `DAPSource` (e.g. an STLink, hs-probe, etc.) or `TTYSource` (i.e. a `/dev/tty*` device), or a file on disk via `FileSource`.
- Sink: a (trace) sink is any implementation of `trait Sink` to which decoded trace packets can be `Sink::drain`ed (alt. "forwarded"). A sink can be a file on disk via `FileSink` or a frontend via `FrontendSink`.
`cargo-rtic-scope` abstracts its operation by utilizing a single source and a set of sinks. After these have been constructed along with the trace metadata, packets are continuously read from the source and forwarded to all sinks. If a sink breaks (i.e. `Sink::drain` yields `Err`) the user is warned. If all sinks break, cargo exits with a non-zero exit code. If at least one sink is available, `cargo-rtic-scope` continues to trace/replay until `Source::next` yields `None` or `Some(Err)`, or until a SIGINT signal is received.
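As an illustration only, here is a simplified sketch of that loop under assumed trait shapes; the real `Source`/`Sink` definitions live in `cargo-rtic-scope`, and `Packets`/`Error` merely stand in for the actual decoded-packet and error types.

```rust
// Stand-in types for this sketch.
type Packets = Vec<String>;
type Error = String;

trait Source: Iterator<Item = Result<Packets, Error>> {}

trait Sink {
    fn drain(&mut self, packets: &Packets) -> Result<(), Error>;
}

fn run(mut source: impl Source, mut sinks: Vec<Box<dyn Sink>>) -> Result<(), Error> {
    // Continuously pull packets from the source...
    while let Some(packets) = source.next() {
        // ...stopping if Source::next yields Some(Err)...
        let packets = packets?;
        // ...and forward them to every sink, warning about (and dropping) any
        // sink that breaks.
        sinks.retain_mut(|sink| match sink.drain(&packets) {
            Ok(()) => true,
            Err(e) => {
                eprintln!("sink broke: {e}; continuing with remaining sinks");
                false
            }
        });
        if sinks.is_empty() {
            return Err("all sinks broke".into());
        }
    }
    // Source::next yielded None (SIGINT handling is not shown here).
    Ok(())
}
```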
5 How it works
5.1 The ITM/DWT subsystem
A stream of back-to-back ITM packets is read from a properly configured target or a file. Each packet contains a header and a number of payload bytes. Of special interest are exception trace packets:
The DWT unit can generate an Exception trace packet whenever the processor enters, exits, or returns to an exception. — Appendix D4.3.2
This packet then contains one of the exception numbers listed in the table below. RTIC tasks are bound to these numbers.
| Exception number | Exception name/label |
|---|---|
| 1 | Reset |
| 2 | NMI |
| 3 | HardFault |
| 4 | MemManage |
| 5 | BusFault |
| 6 | UsageFault |
| 7-10 | Reserved |
| 11 | SVCall |
| 12 | DebugMonitor |
| 13 | Reserved |
| 14 | PendSV |
| 15 | SysTick |
| 16 | External interrupt 0 |
| ... | ... |
| 16 + N | External interrupt N |
Henceforth, this document will refer to these exceptions/interrupt numbers as interrupt request (IRQ) numbers.
Software tasks are similarly traced, but at the cost of one write to a `u32` variable when entering the task and one when exiting it, i.e. two writes in total. This variable is registered as a watch address in the DWT subsystem. Any writes to this address are asynchronously intercepted in hardware, and the new value is encapsulated in an ITM packet along with the ID of the DWT comparator.
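A rough sketch of that mechanism follows, with hypothetical names throughout; this is not the actual `#[trace]` expansion, and how task entry and exit are distinguished on the host side is not shown.

```rust
use core::ptr;

// A static variable registered with a DWT comparator as a watch address.
#[no_mangle]
static mut WATCH_VARIABLE: u32 = 0;

const TASK_ID: u32 = 1; // hypothetical ID assigned to this software task

fn traced_software_task() {
    // Entry: the DWT comparator intercepts this write and the new value is
    // packaged into an ITM packet together with the comparator ID.
    unsafe { ptr::write_volatile(ptr::addr_of_mut!(WATCH_VARIABLE), TASK_ID) };

    // ... task body ...

    // Exit: a second write, letting the host compute the task's runtime.
    unsafe { ptr::write_volatile(ptr::addr_of_mut!(WATCH_VARIABLE), TASK_ID) };
}
```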
5.2 Host-side information recovery
The IRQ numbers received in a packet must be associated back to the correct RTIC tasks. This is done in a preparatory step before the target is flashed and traced. For example, when executing `cargo rtic-scope trace --bin blinky [options...]`:
1. Device-specific information is read from the crate's manifest metadata table. This table is expected to contain three fields which are otherwise non-trivial to resolve from `blinky`'s source file alone. For example:

   ```toml
   [package.metadata.rtic-scope]
   # The name of the used peripheral access crate (PAC)
   pac = "stm32f4"
   # Required features of the above PAC, if any
   pac_features = ["stm32f401"]
   # The full path to the enum structure in the PAC containing the exceptions and interrupts used in the crate
   interrupt_path = "stm32f4::stm32f401::Interrupt"
   ```

   `workspace.metadata.rtic-scope` is used as a fallback if package-level metadata is not specified. It is up to the end-user to verify this metadata. In the worst case, where the metadata is incorrect but still builds and maps (steps 3-4), incorrect data will be forwarded to the specified sinks.

2. `blinky` is built via a regular `cargo build --bin blinky` and its target directory is reused for intermediate artifacts.

3. The RTIC application declaration, `#[app(...)] mod app {...}`, is parsed from `blinky`'s source code. From this declaration, IRQ labels are extracted from each `#[task(binds = ...)]` macro occurrence, and software tasks are parsed and mapped from each `#[trace]`. For example, `binds = SysTick` and `binds = EXTI1` might be extracted. Here, each IRQ label is associated with the RTIC task it is bound to. This parsing step places some restrictions on how the source code for an RTIC application can be written. Refer to 6.2.
4. A shared object file is then built which translates IRQ labels to IRQ numbers with the help of the metadata acquired in step 1. The source code for this object crate may look like the following:

   ```toml
   # generated Cargo.toml
   [package]
   name = "adhoc"
   version = "0.1.0"
   edition = "2018"

   [lib]
   crate_type = ["cdylib"]

   [workspace]
   # empty workspace, so that cargo does not think this crate belongs to the
   # firmware whose target/ we are in.

   [dependencies]
   cortex-m = "0.7"
   stm32f4 = { version = "", features = ["stm32f401"] }
   ```

   ```rust
   // generated lib.rs
   use stm32f4::stm32f401::Interrupt;

   // Only external interrupts need be written here.
   // Exception-bound tasks are resolved using the table above.
   #[no_mangle]
   pub extern fn rtic_scope_func_EXTI1() -> u8 {
       Interrupt::EXTI1.nr()
   }
   ```
After loading the resultant shared library and calling all functions, an `IRQ number -> IRQ label -> RTIC task` map ("task map") is yielded.
This task map is then used to decorate ITM packets with RTIC-specific data.
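As a sketch of what that loading step could look like on the host, here is one way to resolve an IRQ label into its interrupt number using the `libloading` crate; the backend's actual implementation may differ, and the function name and paths below are illustrative only.

```rust
use libloading::{Library, Symbol};

fn resolve_irq(lib_path: &str, irq_label: &str) -> Result<u8, libloading::Error> {
    // Each external interrupt label maps to a generated rtic_scope_func_* symbol.
    let symbol = format!("rtic_scope_func_{}", irq_label);
    unsafe {
        let lib = Library::new(lib_path)?;
        let func: Symbol<unsafe extern "C" fn() -> u8> = lib.get(symbol.as_bytes())?;
        // Per the table in 5.1, external interrupt N corresponds to exception
        // number 16 + N.
        Ok(func())
    }
}
```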
An absolute timestamp is also calculated for each set of trace packets received. This is done by sampling the time just before the target is reset and applying an offset based upon the trace clock frequency and the local/global timestamps received over ITM. This frequency must be set via `--tpiu-freq`. This `(task map, reset timestamp, trace clock frequency)` tuple constitutes the metadata of a trace, and is saved as a header to all trace files.
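As a rough illustration of the offset arithmetic (ignoring any local-timestamp prescaling): with the 16 MHz trace clock from the blinky example above, a cumulative local-timestamp count of 1940184 cycles corresponds to an offset of 1940184 / 16000000 ≈ 121.3 ms past the sampled reset timestamp.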
6 Limitations
6.1 Dropped ITM packets
If the input buffer of the source (i.e. that of a serial device, or the internal buffer of a probe) fills up, packets will be lost or corrupted. A warning is printed once before this buffer overflows, or up front if the buffer size cannot be determined.
6.2 RTIC application constraints
During source-code parsing, an `#[app(...)] mod app { ... }` block is searched for: the file is tokenized, and tokens are skipped until `#[app(...)]` is encountered. This `#[app(...)]` macro `TokenTree::Group` is then passed to `rtic_syntax::parse2`. This limits how an RTIC application can be written to some degree. An example RTIC Scope-compliant application is then:
```rust
// other imports...
use rtic::app;

#[app(...)]
mod app {
    // whole application declared here
}

// any other code...
```
See the examples repository for more RTIC Scope-compliant applications.
6.3 Target-side overhead
When tracing software tasks:
- a DWT comparator is effectively consumed. Additionally, the ID of the comparator must be communicated to the backend by writing the value to a watch address.
- When entering/exiting a software task marked for tracing, a `u8` (at minimum) must be written to a watch address; a `u32` in the worst case (depending on the number of traced software tasks[1]).
7 Frequently asked questions
- Where are all build artifacts stored?
  All intermediate RTIC Scope artifacts are stored under `$(cargo metadata | jq .target_directory)/cargo-rtic-trace-libadhoc/`. This is under the same target directory as the built application.
- Where are all traces saved to?
  By default, recorded traces are serialized to JSON and saved to `$(cargo metadata | jq .target_directory)/rtic-traces/`. It is recommended to override this location with the `--trace-dir` option. The same option is used to replay traces located in a non-default location. NOTE: any traces saved to the target directory will be lost on a `cargo clean`.
8 Roadmap
A minimum viable product (MVP), v0.1.0, has been reached. This MVP only traces hardware tasks; software task tracing is not yet fully implemented.
Another milestone has been defined for the next release (v0.2.0).
9 Known issues (of note)
- Usage of an STLink probe source is not stable. (#18)
10 Publications
TBA
11 License
For non-commercial purposes, RTIC Scope is licensed under both the MIT License and the Apache License (Version 2.0). For commercial support and alternative licensing, inquire via <contact@grepit.se>.
RTIC Scope is maintained in cooperation with Grepit AB and Luleå Technical University, Sweden.
12 Contact, bug reports and contributions
Bug reports and contributions are welcome. Please file them under the relevant repository.
The project maintainer can be reached via email at <v@tmplt.dev>.
Footnotes:
[1] The overhead will be a `u8` unless your application has more than 256 software tasks.