Deterministic scan-cycle simulator
A tree-walking interpreter that executes the IR scan by scan with simulated time. Same inputs produce same outputs, every run. CI gets a runtime, not a mock, not a static analyzer.
Most industrial codebases have no automated tests, not because controls engineers don't want them, but because the tools never made it possible. Koyl makes it possible.
A deterministic scan-cycle simulator walks the IR. The source is Python, so pytest, coverage, and your CI of choice work out of the box. The same loop a controls engineer uses to verify a change, your test suite uses to gate a merge.
The shape
Import the FB, instantiate it, drive its inputs, advance the simulator one or more scan cycles, and assert on outputs. No vendor-specific testing harness. No proprietary simulator runtime. No XML.
import pytest
from valve_ctrl import ValveCtrl
from plx.simulate import scan


def test_open_command_drives_output_high():
    fb = ValveCtrl()
    fb.cmd_open = True
    fb.fault = False
    scan(fb, cycles=1)
    assert fb.output_open is True
    assert fb.output_close is False


def test_fault_overrides_open_command():
    fb = ValveCtrl()
    fb.cmd_open = True
    fb.fault = True
    scan(fb, cycles=1)
    assert fb.output_open is False
    assert fb.output_close is False


def test_open_then_fault_drops_output_within_one_cycle():
    fb = ValveCtrl()
    fb.cmd_open = True
    scan(fb, cycles=1)
    assert fb.output_open is True
    fb.fault = True
    scan(fb, cycles=1)
    assert fb.output_open is False

Run it the way you run the rest of your tests:

$ pytest tests/ -v
The full toolchain
The simulator itself: a tree-walking interpreter that executes the IR scan by scan with simulated time, so the same inputs produce the same outputs on every run. CI gets a real runtime, not a mock and not a static analyzer.
Pin existing behavior of an inherited routine before you touch it. Drive inputs through the simulator, capture outputs, freeze them as assertions. Refactor with confidence; CI tells you the day a change drifts.
Because the source is Python, every Python testing tool already works. pytest discovers and runs tests. coverage.py reports what scan paths are exercised. GitHub Actions, GitLab CI, Jenkins: same as any other Python codebase.
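As one illustration, a GitHub Actions workflow for a Koyl project looks like any other Python project's; the job name, Python version, and coverage flags below are assumptions, not a prescribed setup:

```yaml
name: plc-tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install pytest pytest-cov
      - run: pytest tests/ -v --cov --cov-report=term-missing
```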
Ask the chat agent to write tests for an existing FB. It reads the IR, drives the simulator across a representative set of inputs, captures the outputs, and writes pytest assertions. You review the tests; the model does the typing.
The IR is a programmable surface. Walking it for unreachable rungs, unused tags, scope violations, and type mismatches is a Python visitor pattern, not a vendor-specific compiler hack. Layered on top of the runtime tests, not in place of them.
Why this matters
Tribal knowledge (what a routine does, why it does it, what regressions are easy to cause) leaves the building when the engineer who wrote it does. The codebase becomes load-bearing folklore.
Characterization tests turn that folklore into something CI can hold onto. Refactor with confidence; let new engineers contribute without fear of breaking something they don't yet understand. Every refactor the AI agent suggests can be replayed in the simulator and asserted on before it ever touches a controller.
Testing is not the marketing wedge. It is the foundation that makes every other claim defensible: refactor without fear, AI-suggested edits, reliable modernization.
See it on your project
Design partners get the simulator, the chat agent that writes characterization tests, and direct support from the team building it.