Product · AI dev loop
Describe the change in plain English. The agent reads your existing project, applies the edit across files, runs the simulator, and shows you the diff before anything goes near hardware.
The chat works on production industrial code because it operates on the same IR the compiler produces. There is no separate AI-friendly representation. The agent, the simulator, and the exporter all read the same structure.
What it does today
Open a forty-rung legacy routine and ask what it does. The agent reads the IR, walks the call graph, and gives you a plain-English explanation grounded in actual scan-cycle behavior, not in what variable names suggest.
Turn an FB, routine, or whole project into structured documentation on demand: overviews, inline comments, README-style summaries. The agent reads the IR for structure and the call graph for context, so the docs reflect what the code actually does, not what its names suggest.
Ask for a refactor (split an FB, rename a tag, extract a state machine) and the agent applies the change across files, runs the scan-cycle simulator before and after, and shows you a diff with sim parity confirmed.
Pin existing behavior with tests the agent writes by observing simulation traces. Future you can refactor the routine and know in CI whether the change broke the contract.
Drop in the IO list from the electrical drawings. The agent maps tags into the project, generates the global variable list, and links them to the right modules. No copy-paste, no transcription errors.
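To make the shape concrete, a hedged sketch: one row of the drawings' CSV and the kind of declaration it might turn into. The CSV columns, the @global_vars decorator, and the at= address linkage are illustrative assumptions for this sketch, not Koyl's confirmed import API.

# io_list.csv (exported from the electrical drawings) -- illustrative columns
#   tag,         type, address, module
#   S_JamSensor, BOOL, %IX0.3,  Slot2_DI16
#   Y_StopArm,   BOOL, %QX1.0,  Slot4_DO8

# Hypothetical sketch of the generated global variable list; the
# decorator name and at= linkage are assumptions, not the real API.
@global_vars
class IO:
    s_jam_sensor: bool = input_var(at="%IX0.3")   # Slot2_DI16
    y_stop_arm: bool = output_var(at="%QX1.0")    # Slot4_DO8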
Ask the agent to run a scenario: open the inlet valve, wait for level high, confirm pump start. It walks scan cycles, watches the variables you care about, and reports back. The same feedback loop the simulator gives a human engineer, the agent runs on your behalf and hands back as a result.
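A minimal sketch of that loop driven by hand, using the same scan call the tests below use; FB_TankFill and its tag names are hypothetical stand-ins for your project's own FB.

from plx.simulate import scan
from controllers import FB_TankFill   # hypothetical FB; substitute your own

fb = FB_TankFill()

fb.open_inlet_cmd = True              # step 1: command the inlet valve open
scan(fb, cycles=10, dt_ms=10)         # walk 100 ms of scan cycles
assert fb.inlet_valve is True

fb.level_high = True                  # step 2: the level switch trips
scan(fb, cycles=1, dt_ms=10)
assert fb.pump_start is True          # step 3: confirm the pump starts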
What the agent produced
@fb
class FB_PalletStop:
    pallet_present: bool = input_var()
    reset_request: bool = input_var()
+   s_jam: bool = input_var()
    fault: bool = input_var()

    stop_arm: bool = output_var()
    station_busy: bool = output_var()
+   jam_detected: bool = output_var()

+   _jam_debounce: TON = static_var()
    _reset_edge: R_TRIG = static_var()

    def logic(self):
        if self.fault:
            self.stop_arm = False
            self.station_busy = False
+           self.jam_detected = False
            return

+       self.jam_detected = sustained(
+           self.s_jam, self._jam_debounce, T(ms=500)
+       )

        if rising(self.pallet_present):
            self.stop_arm = True
            self.station_busy = True
        else:
            self.stop_arm = False

        if rising(self.reset_request, self._reset_edge):
            self.station_busy = False
from plx.simulate import scan
from controllers import FB_PalletStop

def test_pallet_rising_pulses_stop_arm_one_cycle():
    fb = FB_PalletStop()
    fb.pallet_present = True
    scan(fb, cycles=1)
    assert fb.stop_arm is True
    scan(fb, cycles=1)
    assert fb.stop_arm is False  # pulse, not level

def test_fault_zeros_station_busy():
    fb = FB_PalletStop()
    fb.station_busy = True
    fb.fault = True
    scan(fb, cycles=1)
    assert fb.station_busy is False

def test_reset_is_one_shot():
    fb = FB_PalletStop()
    fb.station_busy = True
    fb.reset_request = True
    scan(fb, cycles=1)
    assert fb.station_busy is False
    fb.station_busy = True
    scan(fb, cycles=1)
    assert fb.station_busy is True  # held reset doesn't re-fire

def test_jam_debounces_500ms():
    fb = FB_PalletStop()
    fb.s_jam = True
    scan(fb, cycles=49, dt_ms=10)
    assert fb.jam_detected is False
    scan(fb, cycles=2, dt_ms=10)
    assert fb.jam_detected is True
$ pytest tests/test_pallet_stop.py -v
test_pallet_rising_pulses_stop_arm_one_cycle PASSED
test_fault_zeros_station_busy PASSED
test_reset_is_one_shot PASSED
test_jam_debounces_500ms PASSED
==================== 4 passed in 0.18s ====================
scan trace · jam scenario · cycle-by-cycle outputs
─────────────────────────────────────────────────────
cycle   s_jam   _jam_debounce.t   jam_detected
    1   true    0.01 s            false
   10   true    0.10 s            false
   25   true    0.25 s            false
   49   true    0.49 s            false
   50   true    0.50 s            true   ◄ 500 ms reached
   75   true    0.75 s            true
─────────────────────────────────────────────────────
invariants held:
● stop_arm pulse (not level)
● fault zeros station_busy
● reset is one-shot via R_TRIG
Why on Koyl, not in a generic chat
Every PLC vendor will eventually bolt an AI chat on top of their IDE. The question is whether the underlying code is something AI is fluent in. For Koyl, the answer is yes by construction.
Frontier models have been trained on more Python than on any other source code. Your PLC logic, expressed in Python, lands in the part of the training distribution the model knows best: every refactor pattern, every test idiom, every naming convention. Generic models working on Structured Text or ladder logic perform worse, and always will, because the training distribution is what it is.
The agent operates on the live IR of your project, not a sandbox. It reads the existing structure, respects the tag conventions you already use, and produces code that fits in. Nothing about the chat experience requires you to start fresh or simplify.
Because the framework compiles to a deterministic IR and the simulator walks it scan by scan, every AI-suggested change can be replayed and asserted on. There is no black-box trust step. The model proposes; the simulator verifies; you approve.
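A minimal sketch of that replay-and-assert step, reusing the scan API from the tests above; FB_PalletStop_Refactored and its import path are hypothetical stand-ins for the agent's proposed change, and the fixed stimulus list is illustrative.

from plx.simulate import scan
from controllers import FB_PalletStop
from proposals import FB_PalletStop_Refactored   # hypothetical agent output

def replay(fb_cls, stimulus):
    # Drive one FB through a fixed input sequence; collect the output trace.
    fb = fb_cls()
    trace = []
    for inputs in stimulus:
        for tag, value in inputs.items():
            setattr(fb, tag, value)
        scan(fb, cycles=1, dt_ms=10)
        trace.append((fb.stop_arm, fb.station_busy, fb.jam_detected))
    return trace

stimulus = [{"pallet_present": True}] + [{}] * 5 + [{"reset_request": True}]

# sim parity: the proposed refactor must reproduce the original trace exactly
assert replay(FB_PalletStop, stimulus) == replay(FB_PalletStop_Refactored, stimulus)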
Honest about where we are
Today the agent is used for explaining inherited code, generating characterization tests, applying multi-file refactors, ingesting CSV IO maps, and observing simulation results. L1 (compile feedback) ships today; L2 (deploy and observe) and L3 (autonomous iteration), the simulation feedback loop, are still in active development (see ai_simulation_loop). The differentiator we are building toward: no competitor closes the loop with runtime simulation; everyone else stops at static verification.
Get hands-on
We are onboarding ten controls teams and system integrators willing to put production projects in front of the agent and tell us what breaks.