Testing Guide

This page covers the test infrastructure, the four test-complexity levels, and how to write a new test (including how to lock down a specific bug so it can never silently reappear).


Does a test already cover my scenario?

Before writing a new test, search the existing ones:

grep -r "BrightnessModifier" tests/

Example: does changing brightness actually affect the raw pixel bytes?

Yes: tests/test_brightness_mod.cpp contains a test case that checks exactly this:

TEST_CASE("BrightnessModifierModule - loop() output scales with brightness from KvStore") {
    KvStore& kv = KvStore::instance();
    kv.clear();

    EffectsLayer layer(4, 4);
    layer.setup();
    BrightnessModifierModule m(&layer, 0.0f);  // frequency=0 → sin(0)=0
    m.setup();

    // brightness=0 → every pixel byte must be 0
    kv.setFloat("brightness", 0.0f);
    m.loop();
    layer.publish();
    Channel* ch = layer.readyChannel();
    bool allZero = true;
    for (uint32_t i = 0; i < 4u * 4u; ++i)
        if (ch->pixels[i].r != 0 || ch->pixels[i].g != 0 || ch->pixels[i].b != 0)
            allZero = false;
    CHECK(allZero);

    // brightness=1.0 → pixels should be ~127 (mid-scale)
    kv.setFloat("brightness", 1.0f);
    m.loop();
    layer.publish();
    ch = layer.readyChannel();
    bool allMid = true;
    for (uint32_t i = 0; i < 4u * 4u; ++i) {
        uint8_t r = ch->pixels[i].r;
        if (r < 120 || r > 135) { allMid = false; break; }
    }
    CHECK(allMid);
    m.teardown(); layer.teardown(); kv.clear();
}

This is a behavioral test: it drives the module through two states and checks actual output byte values, not just that the code compiled.
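The expected byte values in that test follow from the scaling the module is assumed to perform: with frequency=0 the sine term is 0, so the effect's base value sits at mid-scale (~127), and the modifier multiplies each byte by the shared brightness float. The internals of BrightnessModifierModule are not shown on this page, so the sketch below is an illustration of that arithmetic, not the module's actual code; base_value and apply_brightness are hypothetical names.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hypothetical sketch of the arithmetic the test above assumes.
// Maps sin() output from [-1, 1] onto the byte range [0, 255]:
// at frequency=0, sin(0)=0 lands at mid-scale (~127).
inline uint8_t base_value(float frequency, float t) {
    float s = std::sin(frequency * t);
    return static_cast<uint8_t>((s + 1.0f) * 127.5f);
}

// The modifier then scales each pixel byte by the brightness float:
// brightness=0 forces every byte to 0, brightness=1 leaves ~127.
inline uint8_t apply_brightness(uint8_t base, float brightness) {
    return static_cast<uint8_t>(base * brightness);
}
```

This is why the test accepts a small window (120-135) at brightness=1.0 rather than demanding exactly 127: rounding inside the real module may shift the result by a few counts.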


Test infrastructure

Framework: doctest, a single-header C++17 library that compiles and runs on PC, Raspberry Pi, and (with the ESP-IDF target) ESP32.

Location: tests/test_*.cpp, one file per feature area.

Run all unit tests:

python3 deploy/build.py -target pc    # build first (required)
python3 deploy/unittest.py            # run tests; writes test-results.json
# or run the binary directly after a build:
deploy/build/pc/tests/tests --no-header -m fail

Register a new test file in tests/CMakeLists.txt:

target_sources(tests PRIVATE
    ...
    test_my_feature.cpp
)

And add a title entry in deploy/unittest.py:

FILE_TITLES = {
    ...
    "test_my_feature.cpp": "My Feature",
}

Test complexity levels

Every test case carries an implicit complexity level. The level is assigned in deploy/unittest.py by keyword matching on the test name. Aim to have most tests at behavioral or integration.

smoke: Code does not crash. Lifecycle (setup/loop/teardown) without an assertion on output.
    Example: ArtNetOutModule - lifecycle without crash

format: String or schema shape: field names present, healthReport format, schema keys. No output values checked.
    Example: BrightnessModifierModule - setup registers frequency control

behavioral: Actual output values, state transitions, field round-trips, boundary conditions.
    Example: BrightnessModifierModule - loop() output scales with brightness from KvStore

integration: Multiple modules wired together, cross-module paths, protocol correctness, HTTP/WS end-to-end.
    Example: SineEffectModule publishes brightness, BrightnessMod reads it

A healthy test suite has few smoke and format tests relative to behavioral and integration. If most tests are smoke, the suite will pass even when output is silently wrong.


Writing a smoke test

Check that setup() / loop() / teardown() complete without crashing. No assertions on output.

TEST_CASE("MyModule - lifecycle") {
    MyModule m;
    m.setup();
    m.loop();
    m.teardown();
    // No CHECK — pass = no crash.
}

Writing a format test

Check schema keys and healthReport structure without verifying values.

TEST_CASE("MyModule - getSchema has expected controls") {
    MyModule m;
    m.setup();

    JsonDocument doc;
    m.getSchema(doc.to<JsonObject>());
    JsonArray ctrls = doc["controls"].as<JsonArray>();

    bool foundSpeed = false;
    for (JsonObject c : ctrls)
        if (strcmp(c["key"] | "", "speed") == 0) foundSpeed = true;
    CHECK(foundSpeed);
    m.teardown();
}

Writing a behavioral test

Check that output changes correctly when inputs change. This is the most valuable level: use it for every non-trivial state transition and every bug fix.

Pattern:

  1. Wire the module with a controlled input (fixed layer, known KvStore value, known props).
  2. Call loop() / publish().
  3. Read Channel::pixels and assert on the actual byte values.

TEST_CASE("MyEffect - output is non-zero when enabled") {
    EffectsLayer layer(4, 4);
    layer.setup();

    MyEffect m;
    m.setInput("layer", &layer);
    m.setup();
    m.loop();
    layer.publish();

    Channel* ch = layer.readyChannel();
    REQUIRE(ch != nullptr);
    bool anyNonZero = false;
    for (uint32_t i = 0; i < 4u * 4u; ++i)
        if (ch->pixels[i].r || ch->pixels[i].g || ch->pixels[i].b) anyNonZero = true;
    CHECK(anyNonZero);

    m.teardown();
    layer.teardown();
}

Using KvStore as input (for modifiers that read a shared float):

KvStore& kv = KvStore::instance();
kv.clear();
kv.setFloat("brightness", 0.5f);
// ... call loop(), check pixels ...
kv.clear();  // always restore global state

Writing an integration test

Wire multiple modules together and verify the end-to-end path. Use ModuleManager for full pipeline tests; wire manually for fast, focused ones.

Manual pipeline (fast, no file I/O):

TEST_CASE("SineEffect -> BrightnessModifier pipeline") {
    KvStore& kv = KvStore::instance(); kv.clear();

    EffectsLayer p1(4, 4), p2(4, 4);
    p1.setup(); p2.setup();

    SineEffectModule sine;
    sine.setInput("layer", &p1);
    sine.setup();
    sine.setControl("amplitude", 0.0f);  // brightness=0 published to KvStore

    BrightnessModifierModule bmod;
    bmod.setInput("layer", &p2);
    bmod.setup();

    sine.loop(); p1.publish();   // publishes brightness=0 to KvStore
    bmod.loop(); p2.publish();   // reads brightness=0, output should be black

    Channel* ch = p2.readyChannel();
    for (uint32_t i = 0; i < 4u * 4u; ++i)
        CHECK(ch->pixels[i].r == 0);

    sine.teardown(); bmod.teardown();
    p1.teardown(); p2.teardown();
    kv.clear();
}

Via ModuleManager: slower, but it exercises the full lifecycle including state persistence. Use this route for HTTP/WS and config-load tests (see test_http_server.cpp).


Locking down a bug with a regression test

When a bug is reported:

  1. Write a test that reproduces it (it should fail on the current code).
  2. Fix the bug.
  3. Verify the test now passes.
  4. The test stays in the suite forever, catching any future regression.

Example: user reports "setting brightness to 0 in the UI still shows dim pixels".

TEST_CASE("BrightnessModifier - brightness=0 produces all-black output (regression)") {
    KvStore& kv = KvStore::instance(); kv.clear();
    EffectsLayer layer(4, 4); layer.setup();
    BrightnessModifierModule m; m.setInput("layer", &layer); m.setup();

    kv.setFloat("brightness", 0.0f);
    m.loop(); layer.publish();

    Channel* ch = layer.readyChannel();
    for (uint32_t i = 0; i < 4u * 4u; ++i) {
        CHECK(ch->pixels[i].r == 0);
        CHECK(ch->pixels[i].g == 0);
        CHECK(ch->pixels[i].b == 0);
    }
    m.teardown(); layer.teardown(); kv.clear();
}

Name the test clearly so the intent is obvious in test-results.md. The word "regression" in the name is optional but useful as a marker.


Live tests (hardware-in-the-loop)

Unit tests run on PC. Live tests run against real devices over HTTP and verify end-to-end behavior without touching USB.

Run:

python3 deploy/run.py              # all devices in devicelist.json
python3 deploy/run.py -target pc   # PC only

Live tests are registered in deploy/livetest.py:

runner.register("test6_brightness", test6_brightness_pipeline, level="behavioral")

A live test function makes HTTP calls and asserts on JSON responses:

import requests

def test6_brightness_pipeline(device):
    # Set brightness to 0 via REST
    r = requests.post(f"{device.url}/api/control",
                      json={"id": "bmod1", "key": "brightness", "value": 0})
    assert r.status_code == 200
    # Read health report and verify output is dark
    r = requests.get(f"{device.url}/api/test")
    assert r.json()["bmod1"]["checksum"] == 0  # all-black

Live tests run on every push as part of deploy/all.py if hardware is reachable. CI skips them (no hardware present).


Quick reference

Build unit tests:           python3 deploy/build.py -target pc
Run all unit tests:         python3 deploy/unittest.py
Run one test file:          deploy/build/pc/tests/tests -tc="BrightnessModifier*"
Add a test file:            add to tests/CMakeLists.txt + deploy/unittest.py FILE_TITLES
Check test classification:  see docs/status/test-results.md after running unittest.py
Run live tests:             python3 deploy/run.py