Product · AI dev loop

Ships today

Chat that reads, edits, and tests your project.

Describe the change in plain English. The agent reads your existing project, applies the edit across files, runs the simulator, and shows you the diff before anything goes near hardware.

The chat works on production industrial code because it operates on the same IR the compiler produces. There is no separate AI-friendly representation. The agent, the simulator, and the exporter all read the same structure.

What it does today

Six things you can ask the agent right now.

Explain inherited code

Open a forty-rung legacy routine and ask what it does. The agent reads the IR, walks the call graph, and gives you a plain-English explanation grounded in actual scan-cycle behavior, not in what variable names suggest.

// What does this FB do, and what calls it?
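To make "walks the call graph" concrete, here is a rough sketch of that traversal in plain Python, assuming the IR exposes the call graph as a mapping from block name to callees. `CALL_GRAPH` and both helpers are illustrative stand-ins, not Koyl's actual IR or API:

```python
# Hypothetical shape: the real IR is richer; this only illustrates the walk.
CALL_GRAPH = {
    "MainRoutine":   ["FB_PalletStop", "FB_Conveyor"],
    "FB_PalletStop": ["FB_Debounce"],
    "FB_Conveyor":   [],
    "FB_Debounce":   [],
}

def callees(root, graph):
    """Collect everything reachable from `root`, depth-first."""
    seen, stack = [], [root]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.append(node)
        stack.extend(graph.get(node, []))
    return seen

def callers(target, graph):
    """Everything that calls `target`, directly or transitively."""
    return [n for n in graph if n != target and target in callees(n, graph)]
```

Answering "what calls this FB?" is then a lookup, not a grep: `callers("FB_Debounce", CALL_GRAPH)` turns up both the routine that calls it directly and the one that reaches it through `FB_PalletStop`.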

Generate documentation

Turn a FB, routine, or whole project into structured documentation on demand: overviews, inline comments, README-style summaries. The agent reads the IR for the structure and the call graph for the context, so the docs reflect what the code actually does, not what its names suggest.

// Document FB_PalletStop. Add inline comments where the logic is non-obvious.

Multi-file refactors with simulator-verified diffs

Ask for a refactor (split an FB, rename a tag, extract a state machine) and the agent applies the change across files, runs the scan-cycle simulator before and after, and shows you a diff with sim parity confirmed.

// Extract the homing logic into its own FB, keep behavior identical.
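A minimal illustration of what "sim parity confirmed" means, using pure functions and plain dicts for IO. The real check replays the scan-cycle simulator over the compiled IR; `homing_v1` and `homing_v2` here are made-up stand-ins for the before and after versions of the logic:

```python
def homing_v1(inp):
    # "Before": homing condition as it sat inline in a larger routine.
    return {"at_home": inp["home_sensor"] and not inp["motion_active"]}

def homing_v2(inp):
    # "After": same contract, logic extracted into its own block.
    home, moving = inp["home_sensor"], inp["motion_active"]
    return {"at_home": home and not moving}

def sim_parity(before, after, trace):
    """Replay the same input trace through both versions; any divergence fails."""
    return all(before(step) == after(step) for step in trace)

# Exhaustive trace over both inputs: small here, scan-by-scan in the real sim.
trace = [
    {"home_sensor": h, "motion_active": m}
    for h in (False, True) for m in (False, True)
]
assert sim_parity(homing_v1, homing_v2, trace)
```

The point of the check is that a refactor which changes behavior fails loudly before the diff ever reaches you.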

Generate characterization tests for legacy logic

Pin existing behavior with tests the agent writes by observing simulation traces. Future you can refactor the routine and know in CI whether the change broke the contract.

// Write characterization tests for FB_PalletStop covering cycle, fault, and reset.
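The shape of such a pinned test, sketched against a stripped-down pure-Python model of the block. The real agent asserts against simulation traces of the compiled IR; `pallet_stop` below is an illustrative stand-in, not generated output:

```python
def pallet_stop(inp, mem):
    """Simplified single-scan model: fault overrides, pallet raises the arm."""
    if inp["fault"]:
        return {"stop_arm": False, "station_busy": False}
    rose = inp["pallet_present"] and not mem.get("pallet", False)
    mem["pallet"] = inp["pallet_present"]
    busy = mem.get("busy", False) or rose
    if inp["reset"]:
        busy = False
    mem["busy"] = busy
    return {"stop_arm": inp["pallet_present"], "station_busy": busy}

def test_fault_drops_outputs():
    # Characterization: pin what the block does today, not what a spec says.
    out = pallet_stop({"fault": True, "pallet_present": True, "reset": False}, {})
    assert out == {"stop_arm": False, "station_busy": False}

def test_cycle_then_reset():
    mem = {}
    pallet_stop({"fault": False, "pallet_present": True, "reset": False}, mem)
    out = pallet_stop({"fault": False, "pallet_present": True, "reset": True}, mem)
    assert out["station_busy"] is False
```

Because the tests pin observed behavior rather than intended behavior, they are safe to generate for code nobody fully understands yet, which is exactly the legacy case.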

Ingest CSV IO maps and tag tables

Drop in the IO list from the electrical drawings. The agent maps tags into the project, generates the global variable list, and links them to the right modules. No copy-paste, no transcription errors.

// Import io_map.csv. Wire %I0.0 onward into station_inputs.
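The transcription step itself is mechanical, which is why it is worth automating. A sketch with Python's `csv` module, assuming columns named `tag`, `address`, and `type`; in practice the agent infers the column mapping from your file, and `io_var()` is an illustrative placeholder for however the framework binds an address:

```python
import csv, io

IO_MAP = """tag,address,type
pallet_present,%I0.0,bool
s_jam,%I0.1,bool
stop_arm,%Q0.0,bool
"""

def ingest(text):
    """Turn each CSV row into a global-variable declaration line."""
    rows = csv.DictReader(io.StringIO(text))
    return [
        # io_var() is a made-up stand-in for the framework's address binding.
        f'{r["tag"]}: {r["type"]} = io_var("{r["address"]}")'
        for r in rows
    ]

for line in ingest(IO_MAP):
    print(line)
```

Every declaration traces back to a row in the electrical drawings, so a wrong address is a diff against the source file, not a typo buried in a tag database.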

Observe simulation results in chat

Ask the agent to run a scenario: open the inlet valve, wait for level high, confirm pump start. It walks scan cycles, watches the variables you care about, and reports back. The same feedback loop the simulator offers a human engineer, the agent runs on your behalf and returns as a result.

// Simulate startup with tank_empty=true. Confirm pump starts within 5 cycles.
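The assertion behind that prompt can be sketched like this, with a toy scan function standing in for the compiled project. `plant_step` and its variable names are made up for illustration; the real agent walks the scan-cycle simulator over your actual IR:

```python
def plant_step(state):
    """One scan of a toy tank model: valve fills, level-high starts the pump."""
    s = dict(state)
    if s["inlet_valve"]:
        s["level"] = min(s["level"] + 1, 3)
    s["level_high"] = s["level"] >= 3
    if s["level_high"]:
        s["pump_run"] = True
    return s

state = {"inlet_valve": True, "level": 0, "level_high": False, "pump_run": False}
for cycle in range(1, 6):            # "within 5 cycles" becomes a hard bound
    state = plant_step(state)
    if state["pump_run"]:
        break
assert state["pump_run"], "pump never started within 5 cycles"
print(f"pump started on cycle {cycle}")
```

The report the agent hands back is this loop's outcome: which cycle the condition went true, or a failed assertion with the trace that led there.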

What the agent produced

The diff, the tests, the trace.

@fb
class FB_PalletStop:
    pallet_present: bool = input_var()
    reset_request:  bool = input_var()
+   s_jam:          bool = input_var()
    fault:          bool = input_var()
    stop_arm:       bool = output_var()
    station_busy:   bool = output_var()
+   jam_detected:   bool = output_var()
+   _jam_debounce:  TON   = static_var()
    _reset_edge:    R_TRIG = static_var()

    def logic(self):
        if self.fault:
            self.stop_arm = False
            self.station_busy = False
+           self.jam_detected = False
            return

+       self.jam_detected = sustained(
+           self.s_jam, self._jam_debounce, T(ms=500)
+       )

        if rising(self.pallet_present):
            self.stop_arm = True
            self.station_busy = True
        else:
            self.stop_arm = False

        if rising(self.reset_request, self._reset_edge):
            self.station_busy = False

Why on Koyl, not in a generic chat

The wedge no other PLC tool can claim.

Every PLC vendor will eventually bolt an AI chat on top of their IDE. The question is whether the underlying code is something AI is fluent in. For Koyl, the answer is yes by construction.

Python is what AI is fluent in

Frontier models have been trained on more Python than any other source code. Your PLC logic, expressed in Python, lands in the part of the training distribution where the model is strongest: it has seen every refactor pattern, every test idiom, every naming convention. Generic models working on Structured Text or ladder logic perform worse, and always will, because the training distribution is what it is.

It runs on your project

The agent operates on the live IR of your project, not a sandbox. It can read the existing structure, respect the tag conventions you already have, and produce code that fits in. Nothing about the chat experience requires you to start fresh or simplify.

Every change is verifiable

Because the framework compiles to a deterministic IR and the simulator walks it scan by scan, every AI-suggested change can be replayed and asserted on. There is no black-box trust step. The model proposes; the simulator verifies; you approve.

Honest about where we are

The chat ships. The autonomous simulation loop is still being built out.

Ships today

AI chat: an agent that reads, edits, and simulates your project

Used for explaining inherited code, generating characterization tests, applying multi-file refactors, ingesting CSV IO maps, and observing simulation results. The L2/L3 simulation feedback loop (deploy + observe + autonomous iteration) is still in development; see ai_simulation_loop.

L1 ships · L2/L3 in development

AI simulation feedback loop: generate → compile → simulate → observe → fix

L1 (compile feedback) ships. L2 (deploy + observe) and L3 (autonomous iteration) in active development. Differentiator: no competitor closes the loop with runtime simulation; all stop at static verification.

Get hands-on

The chat ships today. Design partners get it first.

We are onboarding ten controls teams and system integrators willing to put production projects in front of the agent and tell us what breaks.