Developer FAQ¶
Questions raised by contributors and early reviewers about design decisions and known gaps.
PAL threading: the PC path is all no-ops. Does that not make the PC build pointless?¶
Fair challenge. Currently pal::task_create_pinned, pal::yield, and the semaphore functions are no-ops on PC. That means the dual-core effects/driver pipeline does not run as real concurrent tasks; everything executes in the main thread in sequence.
The PC build still earns its keep for: fast iteration without flashing hardware, running the full automated test suite, and UI development. Real concurrent threading on PC (via POSIX pthread or C++ std::thread) is on the roadmap. The void* opaque handle that PAL already uses for semaphores and task handles was chosen specifically to accommodate that: the same API can wrap either a SemaphoreHandle_t or a sem_t under the hood.
OS handle abstraction: xTaskCreate returns a TaskHandle_t, pthread returns a pthread_t. How are those reconciled?¶
Right now they are both stored as void* (the opaque-handle pattern). On ESP32 the pointer is cast to SemaphoreHandle_t or TaskHandle_t inside the PAL implementation. On PC the pointer is currently unused (no-op path).
When real pthread threading lands, the PC path will allocate a small heap struct holding the pthread_t or sem_t and return its address as the void*. The caller never inspects the pointer, so the swap is invisible at the call site. The same approach will apply to any future Windows HANDLE-based implementation.
Types that will need this treatment: task handles, binary semaphores, mutexes, and (if added) queue handles.
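A minimal sketch of that PC path, under the assumption of hypothetical PAL names (semaphore_create and friends; the real signatures may differ): a heap-allocated struct whose address is returned as the opaque void*. It is built on std::mutex/std::condition_variable rather than sem_t so it also works on macOS, where unnamed POSIX semaphores are unsupported:

```cpp
// Hypothetical PC-side implementation of the opaque-handle pattern.
// The caller only ever sees void*; the struct layout is private to the PAL.
#include <condition_variable>
#include <mutex>

namespace pal {

// Heap struct hidden behind the void* handle.
struct BinarySemaphore {
    std::mutex m;
    std::condition_variable cv;
    bool signalled = false;
};

void* semaphore_create() {
    return new BinarySemaphore{};  // address returned as the opaque handle
}

void semaphore_give(void* handle) {
    auto* s = static_cast<BinarySemaphore*>(handle);
    {
        std::lock_guard<std::mutex> lock(s->m);
        s->signalled = true;
    }
    s->cv.notify_one();  // wake one blocked taker
}

void semaphore_take(void* handle) {
    auto* s = static_cast<BinarySemaphore*>(handle);
    std::unique_lock<std::mutex> lock(s->m);
    s->cv.wait(lock, [s] { return s->signalled; });
    s->signalled = false;  // binary semantics: consume the signal
}

void semaphore_destroy(void* handle) {
    delete static_cast<BinarySemaphore*>(handle);
}

} // namespace pal
```

On ESP32 the same four functions would cast the void* to SemaphoreHandle_t instead; call sites are identical on both platforms.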
Why is pal::yield() named yield()? That term belongs to cooperative multitasking, not a preemptive RTOS.¶
Good point. On FreeRTOS pal::yield() calls vTaskDelay(1), which suspends the calling task for one tick and lets the scheduler run other tasks. That is a preemptive context-switch, not a cooperative yield in the classic sense.
The name was chosen because it is short and familiar to Arduino developers, but it is admittedly imprecise. A more accurate name would be delay_ticks(1) or sleep_ms(1). Renaming is tracked in the backlog; any rename must be a single mechanical search-replace across PAL and all callers.
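For reference, a sketch of how the two branches could look once the PC path gains a real sleep (today it is a no-op; the ESP32 guard macro and the PAL internals shown here are assumptions, not the current implementation):

```cpp
// Sketch only: one name, two platform bodies.
#ifdef ESP32
#include <freertos/FreeRTOS.h>
#include <freertos/task.h>
#else
#include <chrono>
#include <thread>
#endif

namespace pal {

void yield() {
#ifdef ESP32
    vTaskDelay(1);  // suspend this task for one tick; the scheduler runs others
#else
    // Closest PC analogue: sleep roughly one tick so other threads get CPU time.
    std::this_thread::sleep_for(std::chrono::milliseconds(1));
#endif
}

} // namespace pal
```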
Semaphores are not just for inter-core sync. What about mutexes and critical sections?¶
Correct. The current PAL only exposes a binary semaphore, used specifically to signal from the effects task (Core 0) to the driver task (Core 1). That covers the one data handoff in the current pipeline.
A more complete synchronisation toolkit would include:
| Primitive | Use case |
|---|---|
| Binary semaphore (current) | One-shot signal between tasks |
| Counting semaphore | Rate-limiting, resource pools |
| Mutex | Protect shared state accessed from multiple tasks |
| Recursive mutex | Protect state where the same task may re-enter the lock |
| Critical section (disable interrupts) | Very short shared-state updates in ISR context; not needed on PC |
These will be added to PAL as the number of modules grows and shared mutable state becomes more common. The POSIX equivalents (sem_t, pthread_mutex_t) map cleanly onto the FreeRTOS counterparts, so the cost of the abstraction layer is low.
What about vTaskDelayUntil() for cyclic background tasks? And queues for async work?¶
Both are natural next additions:
Cyclic timing (vTaskDelayUntil equivalent): useful for tasks that must fire at a fixed period regardless of how long the work takes. The PAL signature would be something like pal::delay_until(uint32_t& lastWakeMs, uint32_t periodMs). On PC the POSIX equivalent is clock_nanosleep with TIMER_ABSTIME.
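Assuming the PAL keeps millisecond tick counts in uint32_t, a Linux-side sketch of that signature built on clock_nanosleep (macOS lacks clock_nanosleep, so a fully portable build would use std::this_thread::sleep_until instead; pal::now_ms is a helper invented for this sketch):

```cpp
#include <cstdint>
#include <ctime>

namespace pal {

// Helper invented for this sketch: monotonic milliseconds.
// Wraps after ~49 days, matching FreeRTOS tick-count behaviour.
uint32_t now_ms() {
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return static_cast<uint32_t>(ts.tv_sec * 1000L + ts.tv_nsec / 1000000L);
}

// Fixed-period wait in the spirit of vTaskDelayUntil: sleeps until an
// absolute deadline, so time spent working inside the loop does not
// accumulate as drift.
void delay_until(uint32_t& lastWakeMs, uint32_t periodMs) {
    lastWakeMs += periodMs;  // next absolute deadline
    timespec deadline;
    deadline.tv_sec  = lastWakeMs / 1000;
    deadline.tv_nsec = static_cast<long>(lastWakeMs % 1000) * 1000000L;
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, nullptr);
}

} // namespace pal
```

A 50 Hz loop would then read: uint32_t wake = pal::now_ms(); for (;;) { render(); pal::delay_until(wake, 20); }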
Queues: FreeRTOS queues (xQueueCreate / xQueueSend / xQueueReceive) are the standard mechanism for passing data between tasks without busy-polling. AsyncWebServer, AsyncUDP, and Art-Net all benefit from a queue between the network ISR/callback and the module processing loop. On PC the equivalent is a pthread condition variable plus a std::queue; std::async and std::future fit one-shot jobs rather than a continuous stream, so they are a weaker match here. These operations will be wrapped behind a pal::queue_* API when the first async driver needs them.
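For the PC side, a minimal sketch of what such a pal::queue_* wrapper could look like (the names and the pointer-based payload are assumptions; note that FreeRTOS queues copy items by value, which this sketch does not replicate):

```cpp
// Hypothetical PC-side queue behind the usual opaque void* handle.
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>

namespace pal {

struct Queue {
    std::mutex m;
    std::condition_variable cv;
    std::queue<uint8_t*> items;  // e.g. pointers to received Art-Net packets
};

void* queue_create() { return new Queue{}; }

// Producer side (network callback): enqueue and wake one waiting consumer.
void queue_send(void* handle, uint8_t* item) {
    auto* q = static_cast<Queue*>(handle);
    {
        std::lock_guard<std::mutex> lock(q->m);
        q->items.push(item);
    }
    q->cv.notify_one();
}

// Consumer side (module loop): block until an item arrives,
// like xQueueReceive with an infinite timeout.
uint8_t* queue_receive(void* handle) {
    auto* q = static_cast<Queue*>(handle);
    std::unique_lock<std::mutex> lock(q->m);
    q->cv.wait(lock, [q] { return !q->items.empty(); });
    uint8_t* item = q->items.front();
    q->items.pop();
    return item;
}

void queue_destroy(void* handle) { delete static_cast<Queue*>(handle); }

} // namespace pal
```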
How does cross-platform library handling work in practice?¶
The rule is: module code above PAL never includes a hardware-specific header directly. Concrete split:
PlatformIO (lib_deps in platformio.ini): Arduino-ecosystem libraries live here. Examples: ESPAsyncWebServer, FastLED, ArduinoJson, rotary-encoder drivers, INA219 I2C helpers. These are only compiled when targeting ESP32.
CMake (CMakeLists.txt): cross-platform libraries are fetched with FetchContent or found with find_package. Examples: ArduinoJson (header-only, works on PC), LittleFS (ported), doctest (tests). Hardware-specific libraries are not included.
PAL boundary: if a library call differs per platform (e.g., opening a UDP socket, reading a GPIO), it becomes a PAL function. If the library is purely algorithmic (JSON parsing and serialisation with ArduinoJson), it is used directly in module code.
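A toy example of the rule, with pal::monotonic_ms() invented purely for illustration: the platform-specific call lives inside the PAL function, and module code above it never sees the #ifdef:

```cpp
#include <cstdint>

#ifdef ESP32
#include <Arduino.h>  // for millis()
#else
#include <chrono>
#endif

namespace pal {

// Hypothetical PAL function: one signature everywhere, two bodies.
uint32_t monotonic_ms() {
#ifdef ESP32
    return millis();  // Arduino API, only available on the ESP32 build
#else
    using namespace std::chrono;
    // Truncated to 32 bits to match the embedded side's wrap-around.
    return static_cast<uint32_t>(
        duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count());
#endif
}

} // namespace pal
```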
What about Windows? MinGW, Cygwin, WSL, or Docker?¶
The CMake build currently targets macOS and Linux (GCC/Clang, C++17). Windows is untested but the code has no Linux-specific syscalls, so the most direct paths are:
- WSL 2 (recommended): the full Linux toolchain runs unmodified; USB flashing requires a USB-IP bridge.
- MinGW-w64 / MSYS2: provides a native GCC toolchain on Windows; the main risk is POSIX threading and socket headers, which MinGW-w64 covers reasonably well (winpthreads supplies the pthread API).
- Cygwin: similar to MinGW but heavier; generally a last resort.
- Docker: unnecessary for the PC server binary itself (see the user FAQ). Useful only as a reproducible CI/CD build environment if Windows-native toolchains prove unstable.
Native Win32 threading (CreateThread, CRITICAL_SECTION) would require a separate PAL branch; that is not planned unless a Windows-native contributor picks it up.