feat: Replace simulavr implementation with Linux Host Implementation #67
mspi92 wants to merge 1 commit into mainsail-crew:master from
Conversation
Add printer hardware simulation components and server

- Implemented Axis, Heatbed, InputPin, OutputPin, and Extruder classes for simulating printer hardware.
- Created PinHandler abstract base class to standardize pin handling across components.
- Developed FakeHardwareServer to manage communication between hardware components and external requests.
- Added state management functionality to persist printer state in JSON format.
- Introduced thermal simulation for heatbed and extruder using ThermalMass class.
- Integrated rendering capabilities for visualizing printer state using matplotlib.
- Updated project structure with necessary configuration files and dependencies.
- Removed obsolete scripts and adjusted service control for new components.

Signed-off-by: Marcel Spitz <git@marcel-spitz.de>
📝 Walkthrough

This pull request introduces a comprehensive Linux-based hardware simulator for a 3D printer, replacing the AVR/SimulAVR simulation. It includes a C preloader hook intercepting system calls to fake GPIO and ADC devices, a multi-component Python simulation framework (axes, heatbed, extruder, thermal mass), Docker refactoring to support the new simulator, and updated configuration files.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Klipper as Klipper Process
    participant Hook as hardware_bridge_hook.c<br/>(LD_PRELOAD)
    participant FakeServer as FakeHardwareServer<br/>(printer_simulator.py)
    participant Hardware as Hardware Components<br/>(Axis, Heatbed, Extruder)
    Klipper->>Hook: open("/dev/gpiochipN", ...)
    Hook->>Hook: Map to fake FD, track type
    Hook-->>Klipper: Return fake FD
    Klipper->>Hook: ioctl(fd, GPIO_GET_LINEHANDLE, ...)
    Hook->>FakeServer: Socket: GET chipN_gpioM
    FakeServer->>Hardware: get(chipN_gpioM)
    Hardware-->>FakeServer: Return value
    FakeServer-->>Hook: Socket response
    Hook-->>Klipper: Return line handle
    Klipper->>Hook: ioctl(fd, GPIOHANDLE_SET_LINE_VALUES, ...)
    Hook->>FakeServer: Socket: SET chipN_gpioM <value>
    FakeServer->>Hardware: set(chipN_gpioM, value)
    Hardware->>Hardware: Update internal state<br/>(position, heater, etc.)
    FakeServer-->>Hook: Socket ACK
    Hook-->>Klipper: Return success
    Klipper->>Hook: pread(adc_fd, buf, ...)
    Hook->>FakeServer: Socket: GET analogX
    FakeServer->>Hardware: get(analogX)
    Hardware-->>FakeServer: Return ADC value
    FakeServer-->>Hook: Socket response
    Hook-->>Klipper: Return ADC data
```

```mermaid
sequenceDiagram
    participant Webcam as webcam_renderer.py
    participant StateFile as State File<br/>(/home/printer/printer_data/)
    participant Simulator as FakeHardwareServer
    participant Renderer as PrinterRenderer
    loop Every 1.0s
        Webcam->>StateFile: read_state()
        StateFile-->>Webcam: Axes positions (x,y,z)
        Webcam->>Webcam: Check for position changes
        alt Position Changed
            Webcam->>Webcam: Update internal axis positions
            Webcam->>Renderer: render()
            Renderer->>Renderer: Draw 3D frame &<br/>nozzle sphere
            Renderer->>StateFile: Save image
        end
    end
```
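The GET/SET exchanges in the diagrams above can be sketched as a small dispatcher over a pin-state dictionary. This is an editorial sketch only; the command verbs, error strings, and pin names are assumptions, not the PR's exact wire protocol:

```python
def handle_command(state: dict, line: str) -> str:
    """Parse one 'GET <pin>' / 'SET <pin> <value>' command against a pin-state dict."""
    parts = line.strip().split()
    if not parts:
        return "ERR Empty command"
    verb = parts[0].upper()
    if verb == "GET" and len(parts) == 2:
        # Unknown pins default to 0, mimicking an unset GPIO line
        return str(state.get(parts[1], 0))
    if verb == "SET" and len(parts) == 3:
        try:
            state[parts[1]] = int(parts[2])
        except ValueError:
            return "ERR Invalid value"
        return "OK"
    return "ERR Unsupported command"
```

In the real server this dispatch would sit behind the Unix socket that the LD_PRELOAD hook connects to; keeping the parser a pure function makes the malformed-input cases (flagged later in this review) easy to test.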
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 13
🧹 Nitpick comments (11)
printer_simulator/Makefile (2)
10-10: Consider adding a `test` target placeholder. Static analysis flagged the missing `test` phony target. While tests may be run via a different mechanism, having a placeholder target (even if it just prints a message or runs Python tests) maintains Makefile convention consistency.

🔧 Suggested placeholder

```diff
-.PHONY: all clean lint format typecheck
+.PHONY: all clean lint format typecheck test
+
+test:
+	@echo "Run tests via pytest or other test runner"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/Makefile` at line 10, Add a placeholder Makefile target named test and include it in the .PHONY list so the file follows Makefile conventions; update the existing .PHONY line (which currently lists all, clean, lint, format, typecheck) to include test and add a test: target that either echoes a short message or invokes the test runner (e.g., pytest) so CI/static analysis won't flag the missing phony target; reference the .PHONY declaration and the new test target when making the change.
4-6: Consider adding compiler warning flags for better code quality. Adding standard warning flags helps catch potential issues during development.

🔧 Suggested improvement

```diff
 CC = gcc
-CFLAGS = -shared -fPIC
+CFLAGS = -shared -fPIC -Wall -Wextra -Werror
 LDFLAGS = -ldl
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/Makefile` around lines 4 - 6, The Makefile currently sets CC, CFLAGS, and LDFLAGS but lacks compiler warning flags; update the CFLAGS variable (used when invoking gcc) to include standard warning flags such as -Wall -Wextra and optionally -Werror or -Wpedantic to surface potential issues during build, e.g., augment the existing CFLAGS definition so all compiled objects get these warnings enabled while keeping -shared -fPIC and preserving LDFLAGS.

config/linux.config (1)
50-50: Consider adding a trailing newline. The file is missing a newline at the end. While not functionally impactful, POSIX-compliant text files should end with a newline, and some tools may warn about this.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/linux.config` at line 50, Add a POSIX trailing newline to this config file by ensuring the last line "CONFIG_HAVE_GPIO_HARD_PWM=y" is terminated with a newline character (i.e., add an empty line break after CONFIG_HAVE_GPIO_HARD_PWM=y) so the file ends with a newline.

scripts/service_control.sh (1)
79-86: Service ordering may matter for start/stop operations. When starting, `printer_simulator` should ideally start before `klipper_mcu` (which depends on the simulator's socket). When stopping, the reverse order is safer. Supervisorctl processes services in the order listed, so the current order (`printer_simulator` first) is correct for `start`. However, for `stop`, you might want to reverse the order to ensure `klipper_mcu` stops before its dependencies.

💡 Consider explicit ordering for stop command

```diff
 if [ "$UNIT" = "klipper" ]; then
-    sudo /usr/bin/supervisorctl "$COMMAND" printer_simulator webcam_renderer klipper_mcu klipper_klippy
+    if [ "$COMMAND" = "stop" ]; then
+        sudo /usr/bin/supervisorctl "$COMMAND" klipper_klippy klipper_mcu webcam_renderer printer_simulator
+    else
+        sudo /usr/bin/supervisorctl "$COMMAND" printer_simulator webcam_renderer klipper_mcu klipper_klippy
+    fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/service_control.sh` around lines 79 - 86, The current loop always calls supervisorctl for services in the same order using UNIT/UNITS and COMMAND; change it so when COMMAND is "stop" you iterate UNITS in reverse order (or explicitly reverse the list for that branch) so klipper_mcu is stopped before printer_simulator, while preserving the existing forward order for "start" and "restart"; modify the block that handles start|stop|restart to detect "$COMMAND" = "stop" and reverse the iteration before calling /usr/bin/supervisorctl "$COMMAND" "$UNIT" (and keep the special-case when UNIT = "klipper" to call the multi-service supervisorctl invocation).config/supervisord.conf (1)
38-46: Hardcoded sleep for service synchronization may be fragile. The `sleep 10` delay assumes `printer_simulator` will be ready within 10 seconds. If the simulator takes longer (e.g., under heavy load) or fails to start, `klipper_mcu` may encounter connection issues. Consider a more robust synchronization mechanism, such as waiting for the socket file to exist.

💡 Alternative approach using socket readiness check

```diff
-command=bash -c 'sleep 10 && /home/printer/klipper/out/klipper.elf'
+command=bash -c 'while [ ! -S /tmp/printer_hook.sock ]; do sleep 1; done && /home/printer/klipper/out/klipper.elf'
```

This waits for the actual socket to exist rather than relying on a fixed delay. You may want to add a timeout to prevent infinite waiting if the simulator fails entirely.
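The timeout the note suggests can be sketched in Python (for instance, as a small wrapper script that gates the klipper.elf launch). This is illustrative only; the socket path and timeout values are assumptions:

```python
import os
import time


def wait_for_path(path: str, timeout_s: float = 30.0, poll_s: float = 0.2) -> bool:
    """Poll until `path` exists or the timeout elapses; return True if it appeared."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_s)
    # One final check so a path appearing right at the deadline is not missed
    return os.path.exists(path)
```

A supervisord command could then be `python3 wait_and_exec.py` style, failing fast (non-zero exit) when the simulator never creates its socket, which lets supervisord's autorestart handle retries.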
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/supervisord.conf` around lines 38 - 46, Replace the fragile fixed delay in the program:klipper_mcu startup command with a readiness loop that waits for the FAKE_HARDWARE_SOCKET to exist before launching /home/printer/klipper/out/klipper.elf; specifically, modify the command (the current bash -c 'sleep 10 && /home/printer/klipper/out/klipper.elf') so it checks for the socket path defined by FAKE_HARDWARE_SOCKET or waits with a short polling loop and a sensible timeout, then starts klipper.elf as the printer user with the same environment (LD_PRELOAD) and preserve autostart/autorestart/redirect_stderr settings.

README.md (1)
46-47: Minor formatting: Add blank line before horizontal rule. The alternative command runs directly into the horizontal rule, which may cause rendering issues in some Markdown parsers and reduces readability.

📝 Suggested fix

```diff
 Alternatively you can execute `docker-compose -f docker-compose.build.yml -f docker-compose.yml up -d --build`
+
 ---
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 46 - 47, Add a blank line between the alternative command line ("docker-compose -f docker-compose.build.yml -f docker-compose.yml up -d --build") and the horizontal rule marker (---) so the command isn't directly adjacent to the HR; update the README content around the command and the '---' separator to insert one empty line before the '---'.

printer_simulator/requirements-dev.txt (1)
1-4: Consider pinning dev dependency versions for reproducibility. The dev dependencies (`mypy`, `pre-commit`, `ruff`) are unpinned, which could lead to inconsistent behavior across environments or CI runs as new versions are released. For production dependencies this would be critical; for dev tools it's a lower priority but still worth considering.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/requirements-dev.txt` around lines 1 - 4, The dev dependencies in requirements-dev.txt (entries "mypy", "pre-commit", "ruff") are unpinned; update that file to pin each tool to a specific version or tight constraint (e.g., exact versions or ~= constraints) to ensure reproducible installs—choose current stable versions for "mypy", "pre-commit", and "ruff", replace the bare names with those pinned strings, and consider adding a comment indicating the chosen versions and date or using a generated constraints file if you prefer automated updates.

printer_simulator/printer_hardware/renderer.py (2)
23-23: Remove or implement `_stop_render_thread` to avoid dead state. This field is currently unused in this class.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/printer_hardware/renderer.py` at line 23, The field _stop_render_thread is declared but never used; either remove it or wire it into the renderer's thread control. If you remove it, delete the self._stop_render_thread assignment and any unused references; otherwise implement cooperative shutdown by checking self._stop_render_thread inside the rendering loop (e.g., in _render_loop or similar), set the flag to True in a new stop_render_thread/stop method and join the thread in that method to avoid dead state, and ensure start_render_thread (or the thread starter) clears the flag before launching.
1-2: Pin matplotlib to the `Agg` backend for headless rendering reliability. The code only uses `plt.savefig()` for file-based output in a headless container environment. Explicitly setting the `Agg` backend before importing pyplot prevents environment-dependent fallbacks and potential warnings.

Proposed patch

```diff
+import matplotlib
+matplotlib.use("Agg")
 import matplotlib.pyplot as plt
 import numpy as np
```

As a minor note, the `_stop_render_thread` field (line 23) is set but never used; consider removing it if threading is not planned.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/printer_hardware/renderer.py` around lines 1 - 2, The renderer currently imports matplotlib.pyplot directly which can pick an interactive backend in headless environments; before importing pyplot in printer_simulator/printer_hardware/renderer.py set the matplotlib backend to "Agg" by importing matplotlib, calling matplotlib.use("Agg"), then import matplotlib.pyplot as plt so savefig works reliably in containers; also remove the unused _stop_render_thread attribute (or use it) in the Renderer class to avoid dead code.

Dockerfile (1)
11-13: Drop leftover simulavr-only packages from builder image. `rst2pdf`, `help2man`, and `texinfo` look unused in the new Linux-host flow and add build time/image size.

🧹 Minimal cleanup

```diff
-    ### simulavr \
-    g++ make rst2pdf help2man texinfo \
+    ### native builds \
+    g++ make \
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Dockerfile` around lines 11 - 13, Remove the leftover simulavr-only packages from the builder image by deleting the entries "rst2pdf", "help2man", and "texinfo" (the simulavr block/commented install list) from the Dockerfile package install section so the builder no longer installs these unused tools; update the package list where the simulavr block appears (the commented lines containing "simulavr \", "g++ make rst2pdf help2man texinfo \", and the closing "### \") to omit those three package names or remove the simulavr-only block entirely.

printer_simulator/printer_hardware/axis.py (1)
77-78: Normalize direction input to binary state. Restricting `direction` to 0/1 avoids accidental non-binary values affecting motion semantics.

🔧 Small hardening

```diff
 elif key == self.dir_pin:
-    self.direction = value
+    self.direction = 1 if value else 0
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@printer_simulator/printer_hardware/axis.py` around lines 77 - 78, The branch that assigns direction when key == self.dir_pin should coerce the incoming value to a binary state to prevent non-binary values from affecting motion; change the assignment in the Axis handling to normalize value into 0 or 1 (e.g., convert via bool/int or explicit comparison) so that self.direction is always 0 or 1, using the existing symbols self.dir_pin and self.direction to locate the assignment to update.
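The direction normalization and the constructor validation requested for axis.py can be illustrated together. This is an editorial sketch, not the PR's actual Axis class; the names, units, and parameters are assumed:

```python
class FakeAxis:
    """Minimal step/dir axis model (illustrative; not the PR's Axis class)."""

    def __init__(self, full_steps_per_rotation: int, distance_mm_per_rotation: float):
        # Validate up front so step_size can never divide by zero
        if full_steps_per_rotation <= 0 or distance_mm_per_rotation <= 0:
            raise ValueError("rotation parameters must be positive")
        self.step_size = distance_mm_per_rotation / full_steps_per_rotation
        self.direction = 0  # 0 = negative travel, 1 = positive travel
        self.position = 0.0

    def set_dir(self, value: int) -> None:
        self.direction = 1 if value else 0  # coerce any input to binary

    def step(self) -> None:
        # One step edge moves the carriage by one step_size in the current direction
        self.position += self.step_size if self.direction else -self.step_size
```

With 200 full steps and 40 mm per rotation, each step moves 0.2 mm, and a non-binary dir-pin write (say, 5) still yields a well-defined positive move.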
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@Dockerfile`:
- Around line 26-27: The Dockerfile uses an unpinned git clone (RUN git clone
--depth 1 ${KLIPPER_REPO} /home/printer/klipper) which makes images
non-reproducible; update the clone to accept and use an immutable ref (e.g., ARG
KLIPPER_REF or KLIPPER_COMMIT) and either clone with --branch ${KLIPPER_REF}
--depth 1 or clone then run git -C /home/printer/klipper checkout
${KLIPPER_COMMIT} && git -C /home/printer/klipper reset --hard
${KLIPPER_COMMIT}; apply the same change to the other clone occurrences (lines
referenced in the comment) so builds are deterministic and reproducible.
In `@example-configs/addons/dual_extruder_stepper.cfg`:
- Around line 18-19: Configs use inverted pin notation ("!") which breaks
simulator pin routing because axis.py and single_extruder.py perform exact
string matches (e.g., if key == self.enable_pin) and do not normalize prefixes;
remove the leading "!" from any dir_pin and enable_pin entries in the affected
example config files so the pin strings match exactly (e.g., change dir_pin:
!gpiochip4/gpio2 to dir_pin: gpiochip4/gpio2 and enable_pin: gpiochip4/gpio3 to
enable_pin: gpiochip4/gpio3), ensuring all occurrences listed in the review are
updated.
In `@example-configs/addons/temp_sensors.cfg`:
- Line 16: The config references sensor_pin "analog3" but there is no simulator
component registering that pin, causing startup failure; either implement a
TemperatureFan simulator class that registers "analog3" as a readable analog pin
and models chamber heater behavior (use existing Heatbed and Extruder simulator
classes as patterns for methods, registration, and thermal update logic), or
update the addon documentation to state that the temp_sensors addon requires a
manual simulator extension and remove or replace sensor_pin: analog3 with a
supported/simulated pin; ensure the new TemperatureFan class name and the
sensor_pin "analog3" are consistently used where pins are registered and read.
In `@printer_simulator/hardware_bridge_hook.c`:
- Around line 308-309: The handler that currently ignores unknown ioctls on fake
fds by returning 0 (see the comment "Unknown ioctls on fake fds are simply
ignored" and the trailing return 0 in hardware_bridge_hook.c) should instead set
errno = ENOTTY and return -1 to follow POSIX semantics; update the
ioctl-handling function to include <errno.h> (or ensure errno is available),
replace the return 0 with code that sets errno = ENOTTY and returns -1, and keep
the descriptive comment so callers receive an appropriate error for unsupported
ioctls.
- Around line 65-89: The fake_fds array access must be synchronized: add a
pthread mutex (e.g., pthread_mutex_t fake_fds_lock) that is initialized before
use and use it to guard all accesses and mutations of fake_fds; wrap the body of
find_fake_fd, alloc_fake_fd (including the memset and field assignments), and
free_fake_fd in pthread_mutex_lock/unlock (or equivalent) and ensure any other
hooks that read/write fake_fds (open, ioctl, pread, close) also lock around
their fake_fds interactions to prevent races and corruption.
- Around line 158-194: The varargs handling in open() and open64() is unsafe:
open() always consumes a mode_t from va_list even when flags don't include
O_CREAT or O_TMPFILE, and open64() drops varargs entirely. Change open() to only
va_start/va_arg and obtain mode when (flags & (O_CREAT|O_TMPFILE)) is true (use
a default mode_t like 0 otherwise) and pass that mode to real_open(path, flags,
mode); update all early-return branches to avoid touching varargs when not
needed. Reimplement open64() to accept varargs like open64(const char *path, int
flags, ...) and forward the mode to open() (or call real_open) only when the
flags indicate a mode argument, using the same conditional va_list logic as
open(). Ensure va_start/va_end are paired correctly and reference the functions
open, open64, real_open, flags, O_CREAT, O_TMPFILE, and mode_t when making
changes.
- Around line 267-281: In the GPIO_GET_LINEHANDLE_IOCTL handling, check for
failures from make_dummy_fd() and alloc_fake_fd() and return a failure code
instead of proceeding: if make_dummy_fd() returns a negative value (e.g.,
line_fd < 0) or alloc_fake_fd() returns NULL, do not set req->fd or use the fake
fd, free any partial state if needed, and return a negative errno (e.g., -ENOMEM
for allocation failure / -ENFILE or a suitable negative error for fd creation
failure) so the caller sees the ioctl error; update the block around
make_dummy_fd(), alloc_fake_fd(), lf, and req->fd accordingly.
In `@printer_simulator/printer_hardware/axis.py`:
- Around line 26-53: The constructor __init__ currently accepts
full_steps_per_rotation (and distance_mm_per_rotation) without validation which
can cause a division-by-zero when computing step_size later; update __init__ in
class Axis to validate that full_steps_per_rotation is an int > 0 (and
distance_mm_per_rotation > 0) and raise a ValueError with a clear message if
not, and ensure any code that computes step_size (the step_size calculation
referenced around lines 74-75) uses the validated attributes so the
division-by-zero cannot occur at runtime.
In `@printer_simulator/printer_hardware/server.py`:
- Around line 75-85: The handler currently assumes data has tokens and silently
ignores malformed input; update the connection handling (the block that calls
conn.recv, splits into parts and calls self.get/self.set) to detect
empty/whitespace data and immediately send a clear error like "ERR Empty
command\n", validate parts before indexing and respond "ERR Invalid command\n"
when parts length or command name is unsupported, and wrap the int conversion
for SET (where the code uses int(parts[2])) in a try/except to send "ERR Invalid
value\n" on ValueError; ensure unsupported verbs return "ERR Unsupported
command\n" so clients always receive an explicit error instead of silent
failure.
In `@printer_simulator/printer_hardware/thermals.py`:
- Around line 39-53: The constructor for the thermals class does not validate
inputs, so downstream calculation of heat_capacity (derived from mass_kg) can be
zero or invalid; update __init__ to validate and raise ValueError for invalid
parameters: ensure mass_kg > 0, cooling_coefficient > 0, and (optionally)
heating_power_watts >= 0 and that ambient_temp/initial_temp are finite numbers;
reference the __init__ method and the instance attributes _mass_kg,
_cooling_coefficient, _heating_power_watts, _ambient_temp, and _temperature when
adding these guards so heat_capacity calculation later cannot divide by zero or
operate on invalid values.
- Around line 70-73: The heater state is being changed before integrating
temperature, causing the elapsed-time update to use the new state; in set_heater
you should first call __update_temperature() to advance temperature using the
current _is_active and elapsed dt, then assign self._is_active = on to change
the heater; ensure __update_temperature() consumes the elapsed time since the
last update (using its existing time-tracking) so the state transition
integration is correct.
- Around line 21-24: The calc_adc_value() implementation can produce values
outside 0..4095; modify it so after computing adc_value (using adc_max, voltage,
etc.) you clamp the result to the 12-bit range by ensuring adc_value is at least
0 and at most adc_max (then return that clamped int). Keep the existing
conversion logic but apply the clamp before returning to guarantee bounds safety
for temperatures <0°C or >500°C.
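The thermals fixes above (input validation, update-before-state-change, ADC clamping) all revolve around a lumped thermal-mass model. A rough sketch with assumed constants (specific heat, heater power, cooling coefficient), not the PR's ThermalMass code:

```python
def thermal_step(temp_c: float, dt_s: float, heater_on: bool, mass_kg: float,
                 specific_heat_j_per_kg_k: float = 450.0,
                 heating_power_w: float = 40.0,
                 cooling_coefficient_w_per_k: float = 0.5,
                 ambient_c: float = 25.0) -> float:
    """One explicit-Euler step of a lumped mass: C * dT/dt = P_in - k * (T - T_amb)."""
    if mass_kg <= 0 or cooling_coefficient_w_per_k <= 0:
        raise ValueError("mass and cooling coefficient must be positive")
    heat_capacity = mass_kg * specific_heat_j_per_kg_k  # J/K, nonzero by validation
    power_in = heating_power_w if heater_on else 0.0
    dT = (power_in - cooling_coefficient_w_per_k * (temp_c - ambient_c)) * dt_s / heat_capacity
    return temp_c + dT


def clamp_adc(raw: int, adc_max: int = 4095) -> int:
    """Bound a computed ADC reading to the 12-bit range before returning it."""
    return max(0, min(adc_max, raw))
```

The set_heater ordering issue corresponds to calling `thermal_step` with the *old* `heater_on` value for the elapsed interval, and only then flipping the state.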
In `@printer_simulator/webcam_renderer.py`:
- Around line 26-35: load_axes assumes each axis dict has position, min_pos and
max_pos which can raise if missing; update load_axes to validate each axis
payload before calling create_axis_state (or make create_axis_state tolerant) by
checking that axes["x"], axes["y"], axes["z"] are mappings and contain the keys
"position", "min_pos", "max_pos" (or supply safe defaults) and return None or
skip rendering if any axis is malformed; reference the load_axes function and
create_axis_state so you either add explicit key checks/try/except around
create_axis_state calls or add defensive defaults/validation inside
create_axis_state to prevent exceptions from missing fields.
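The defensive parsing asked for in the webcam_renderer comment could look roughly like this; the function name and payload shape are assumptions inferred from the comment, not the PR's code:

```python
REQUIRED_KEYS = ("position", "min_pos", "max_pos")


def load_axis_state(axes: dict, name: str):
    """Return (position, min_pos, max_pos) for one axis, or None if malformed."""
    axis = axes.get(name)
    if not isinstance(axis, dict):
        return None
    if any(k not in axis for k in REQUIRED_KEYS):
        return None
    return tuple(float(axis[k]) for k in REQUIRED_KEYS)
```

The caller would then skip a render cycle when any of `load_axis_state(axes, "x"/"y"/"z")` returns None, instead of raising on a half-written state file.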
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (41)
.dockerignore, .gitignore, .pre-commit-config.yaml, Dockerfile, PINS.md, README.md, config/linux.config, config/simulavr.config, config/supervisord.conf, docker-compose.build.yml, docker-compose.yml, example-configs/addons/basic_cartesian_kinematics.cfg, example-configs/addons/dual_extruder.cfg, example-configs/addons/dual_extruder_stepper.cfg, example-configs/addons/heater_bed.cfg, example-configs/addons/led_neopixel.cfg, example-configs/addons/miscellaneous.cfg, example-configs/addons/single_extruder.cfg, example-configs/addons/temp_sensors.cfg, example-configs/printer.cfg, printer_simulator/.gitignore, printer_simulator/Makefile, printer_simulator/hardware_bridge_hook.c, printer_simulator/printer_hardware/__init__.py, printer_simulator/printer_hardware/axis.py, printer_simulator/printer_hardware/heatbed.py, printer_simulator/printer_hardware/input_pin.py, printer_simulator/printer_hardware/output_pin.py, printer_simulator/printer_hardware/pin_handler.py, printer_simulator/printer_hardware/renderer.py, printer_simulator/printer_hardware/server.py, printer_simulator/printer_hardware/single_extruder.py, printer_simulator/printer_hardware/state_io.py, printer_simulator/printer_hardware/thermals.py, printer_simulator/printer_simulator.py, printer_simulator/pyproject.toml, printer_simulator/requirements-dev.txt, printer_simulator/requirements.txt, printer_simulator/webcam_renderer.py, scripts/fix_venvs.sh, scripts/service_control.sh
💤 Files with no reviewable changes (2)
- scripts/fix_venvs.sh
- config/simulavr.config
```dockerfile
RUN git clone --depth 1 ${KLIPPER_REPO} /home/printer/klipper \
    && virtualenv -p /usr/local/bin/python3 /home/printer/python-env \
```
Pin cloned repos to immutable refs to keep images reproducible.
`git clone --depth 1` without a pinned tag/commit makes rebuilds drift over time and can break CI/runtime unexpectedly.
🔧 Suggested hardening

```diff
 ARG KLIPPER_REPO=https://github.com/Klipper3d/klipper.git
+ARG KLIPPER_REF=v0.12.0
 ENV KLIPPER_REPO=${KLIPPER_REPO}
-RUN git clone --depth 1 ${KLIPPER_REPO} /home/printer/klipper \
+RUN git clone --depth 1 --branch ${KLIPPER_REF} ${KLIPPER_REPO} /home/printer/klipper \
     && virtualenv -p /usr/local/bin/python3 /home/printer/python-env \
     && /home/printer/python-env/bin/pip install --no-cache-dir -r /home/printer/klipper/scripts/klippy-requirements.txt
@@
-RUN git clone --depth 1 https://github.com/Arksine/moonraker \
+ARG MOONRAKER_REPO=https://github.com/Arksine/moonraker.git
+ARG MOONRAKER_REF=v0.9.0
+RUN git clone --depth 1 --branch ${MOONRAKER_REF} ${MOONRAKER_REPO} /home/printer/moonraker \
     && /home/printer/python-env/bin/pip install --no-cache-dir -r /home/printer/moonraker/scripts/moonraker-requirements.txt
```

Also applies to: 42-43
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Dockerfile` around lines 26 - 27, The Dockerfile uses an unpinned git clone
(RUN git clone --depth 1 ${KLIPPER_REPO} /home/printer/klipper) which makes
images non-reproducible; update the clone to accept and use an immutable ref
(e.g., ARG KLIPPER_REF or KLIPPER_COMMIT) and either clone with --branch
${KLIPPER_REF} --depth 1 or clone then run git -C /home/printer/klipper checkout
${KLIPPER_COMMIT} && git -C /home/printer/klipper reset --hard
${KLIPPER_COMMIT}; apply the same change to the other clone occurrences (lines
referenced in the comment) so builds are deterministic and reproducible.
```
dir_pin: !gpiochip4/gpio2
enable_pin: !gpiochip4/gpio3
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "A) Inverted gpio pins present in configs:"
rg -nP ':\s*!gpiochip' example-configs || true
echo
echo "B) Inversion normalization in simulator code:"
rg -nP 'startswith\("!"\)|lstrip\("!"\)|strip\("!"\)|invert|inversion' printer_simulator || true
echo
echo "C) Direct key==pin comparisons (sensitive to exact string match):"
rg -nP 'if key == self\.[a-z_]*pin' printer_simulator/printer_hardware || true
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 1981
Remove inverted pin notation from configs; simulator explicitly prohibits inversions.
Simulator docstrings in axis.py and single_extruder.py state "Don't use any invertions for pins." Pin routing uses exact string matching (if key == self.enable_pin), and no normalization code strips the ! prefix. Inverted notation on these pins will break string comparisons and cause routing failures.
Remove `!` from:

- example-configs/addons/dual_extruder_stepper.cfg, lines 18–19 and 27
- example-configs/addons/dual_extruder.cfg, lines 5–6 and 25
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@example-configs/addons/dual_extruder_stepper.cfg` around lines 18 - 19,
Configs use inverted pin notation ("!") which breaks simulator pin routing
because axis.py and single_extruder.py perform exact string matches (e.g., if
key == self.enable_pin) and do not normalize prefixes; remove the leading "!"
from any dir_pin and enable_pin entries in the affected example config files so
the pin strings match exactly (e.g., change dir_pin: !gpiochip4/gpio2 to
dir_pin: gpiochip4/gpio2 and enable_pin: gpiochip4/gpio3 to enable_pin:
gpiochip4/gpio3), ensuring all occurrences listed in the review are updated.
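As an alternative to editing the configs, the simulator could normalize pin modifiers before comparing strings. A minimal sketch of such a helper — hypothetical, not part of this PR; the `^`/`~` pull-up/pull-down modifiers are included on the assumption that Klipper-style pin names may carry them:

```python
def normalize_pin(pin: str) -> tuple[str, bool]:
    """Split a Klipper-style pin name into (bare_name, inverted).

    Hypothetical helper: the simulator in this PR does exact string
    matching and has no such normalization step.
    """
    inverted = pin.startswith("!")
    # Strip inversion and pull-up/pull-down modifiers from the front.
    return pin.lstrip("!^~"), inverted
```

With this in place, routing could compare the bare name and apply the inversion flag separately instead of requiring an exact match.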
```diff
 kick_start_time: 0
 sensor_type: ATC Semitec 104GT-2
-sensor_pin: PA6
+sensor_pin: analog3
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
# First, let's explore the repository structure
echo "=== Repository Structure ==="
find . -type f -name "*.cfg" | head -20
echo
echo "=== Looking for temp_sensors.cfg ==="
find . -type f -name "temp_sensors.cfg" -exec cat -n {} +
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 1406

🏁 Script executed:

```shell
#!/bin/bash
# Run the verification script provided in the review comment
set -euo pipefail
echo "== Direct usages of new identifiers =="
rg -n -C2 'analog3|gpiochip0/gpio5|gpiochip0/gpio6|gpiochip0/gpio7' || true
echo
echo "== Potential pin/adc parser or mapping logic =="
rg -n -C3 'gpiochip|analog[0-9]+|sensor_pin|tachometer_pin|adc|PinHandler|InputPin|OutputPin|FakeHardwareServer' || true
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 40028

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "=== Looking for temperature_fan or chamber_heater implementation ==="
find . -type f \( -name "*.py" -o -name "*.cfg" \) | xargs rg -l "temperature_fan|chamber_heater" | head -20
echo
echo "=== Searching for any component that registers analog3 ==="
rg -n "analog3" --type py
echo
echo "=== Checking FakeHardwareServer handling of unmapped pins ==="
grep -A 20 "read_mapping\|write_mapping" printer_simulator/printer_hardware/server.py | head -50
echo
echo "=== Checking if any component has read_pins returning analog3 ==="
rg -n "def read_pins" printer_simulator/printer_hardware/ -A 3
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 251

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "=== Looking for temperature_fan class implementation ==="
find . -type f -name "*.py" -exec grep -l "class.*[Tt]emperature.*[Ff]an\|TemperatureFan" {} \;
echo
echo "=== Checking what components are actually instantiated in main simulator ==="
cat -n printer_simulator/printer_simulator.py | tail -40
echo
echo "=== Looking for any reference to chamber_heater in Python code ==="
rg -n "chamber_heater" --type py
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 1735
Implement a simulator component for the temperature_fan chamber heater or explicitly document that it's unsupported.
analog3 is referenced in the config but no Python component registers it as a readable pin. When Klipper loads this config addon, it will fail at startup when attempting to read the chamber heater sensor value. Either create a TemperatureFan component class to simulate the chamber heater's thermal behavior (similar to Heatbed and Extruder), or add a note to the config indicating this addon requires manual simulator extensions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@example-configs/addons/temp_sensors.cfg` at line 16, The config references
sensor_pin "analog3" but there is no simulator component registering that pin,
causing startup failure; either implement a TemperatureFan simulator class that
registers "analog3" as a readable analog pin and models chamber heater behavior
(use existing Heatbed and Extruder simulator classes as patterns for methods,
registration, and thermal update logic), or update the addon documentation to
state that the temp_sensors addon requires a manual simulator extension and
remove or replace sensor_pin: analog3 with a supported/simulated pin; ensure the
new TemperatureFan class name and the sensor_pin "analog3" are consistently used
where pins are registered and read.
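If the simulator-component route is chosen, the new class mainly needs to expose the pin and answer reads. A minimal sketch following the PinHandler pattern described above — the `read_pins`/`get` method names and the default ADC value (~40 °C under the 0–500 °C linear mapping) are assumptions, not the project's actual base-class API:

```python
class ChamberSensor:
    """Hypothetical component that would make `analog3` readable.

    The real implementation would model thermal behavior like Heatbed
    and Extruder; this sketch only serves a constant reading.
    """

    def __init__(self, pin: str = "analog3", adc_value: int = 327):
        self._pin = pin
        # 327 is roughly 40 degC assuming the 0-500 degC -> 0-4095 mapping.
        self._adc_value = adc_value

    def read_pins(self) -> list[str]:
        # Pins this component answers GET requests for.
        return [self._pin]

    def get(self, key: str) -> int:
        if key == self._pin:
            return self._adc_value
        raise KeyError(key)
```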
```c
static FakeFd fake_fds[MAX_FAKE_FDS];

static FakeFd *find_fake_fd(int fd) {
    for (int i = 0; i < MAX_FAKE_FDS; i++)
        if (fake_fds[i].type != FD_NONE && fake_fds[i].fd == fd)
            return &fake_fds[i];
    return NULL;
}

static FakeFd *alloc_fake_fd(int fd, FdType type) {
    for (int i = 0; i < MAX_FAKE_FDS; i++) {
        if (fake_fds[i].type == FD_NONE) {
            memset(&fake_fds[i], 0, sizeof(FakeFd));
            fake_fds[i].fd = fd;
            fake_fds[i].type = type;
            return &fake_fds[i];
        }
    }
    return NULL;
}

static void free_fake_fd(int fd) {
    FakeFd *f = find_fake_fd(fd);
    if (f) f->type = FD_NONE;
}
```
🧩 Analysis chain

🏁 Script executed:

```shell
wc -l printer_simulator/hardware_bridge_hook.c
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 125

🏁 Script executed:

```shell
cat -n printer_simulator/hardware_bridge_hook.c | sed -n '1,100p'
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 4306

🏁 Script executed:

```shell
cat -n printer_simulator/hardware_bridge_hook.c | sed -n '200,250p'
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 2305

🏁 Script executed:

```shell
cat -n printer_simulator/hardware_bridge_hook.c | sed -n '310,330p'
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 388

🏁 Script executed:

```shell
rg -i "pthread|mutex|lock|atomic|thread|concurrent" printer_simulator/hardware_bridge_hook.c
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 158

🏁 Script executed:

```shell
cat printer_simulator/hardware_bridge_hook.c
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 11303

🏁 Script executed:

```shell
grep -n "pthread\|mutex\|lock\|thread\|Thread\|THREAD\|volatile\|atomic" printer_simulator/hardware_bridge_hook.c
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 162

🏁 Script executed:

```shell
rg -i "klipper|multi.*thread|concurrent|race" printer_simulator/ --type c --type h
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 934
Guard fake FD bookkeeping with synchronization.
fake_fds is mutated and read from multiple hooks (open, ioctl, pread, close) with no lock. Concurrent access can race, corrupting the FD mapping. For example, alloc_fake_fd may find a free slot and write to it while another thread is iterating through the array in find_fake_fd, or free_fake_fd may clear an entry while another thread is reading from it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/hardware_bridge_hook.c` around lines 65 - 89, The fake_fds
array access must be synchronized: add a pthread mutex (e.g., pthread_mutex_t
fake_fds_lock) that is initialized before use and use it to guard all accesses
and mutations of fake_fds; wrap the body of find_fake_fd, alloc_fake_fd
(including the memset and field assignments), and free_fake_fd in
pthread_mutex_lock/unlock (or equivalent) and ensure any other hooks that
read/write fake_fds (open, ioctl, pread, close) also lock around their fake_fds
interactions to prevent races and corruption.
```c
int open(const char *path, int flags, ...) {
    init_real_funcs();
    va_list args;
    va_start(args, flags);
    mode_t mode = va_arg(args, mode_t);
    va_end(args);

    // ADC: /sys/bus/iio/devices/iio:device0/in_voltageN_raw
    if (strstr(path, "in_voltage") && strstr(path, "_raw")) {
        int channel = 0;
        const char *p = strstr(path, "in_voltage");
        if (p) channel = atoi(p + strlen("in_voltage"));

        int fd = make_dummy_fd();
        FakeFd *f = alloc_fake_fd(fd, FD_ADC);
        if (f) f->channel = channel;
        fprintf(stderr, "[fake_hw] open ADC channel %d -> fd %d\n", channel, fd);
        return fd;
    }

    // GPIO: /dev/gpiochipN
    if (strstr(path, "/dev/gpiochip")) {
        int chip = 0;
        sscanf(path, "/dev/gpiochip%d", &chip);
        int fd = make_dummy_fd();
        FakeFd *f = alloc_fake_fd(fd, FD_GPIO_CHIP);
        if (f) f->chip = chip;
        fprintf(stderr, "[fake_hw] open gpiochip%d -> fd %d\n", chip, fd);
        return fd;
    }

    return real_open(path, flags, mode);
}

int open64(const char *path, int flags, ...) {
    return open(path, flags);
}
```
🧩 Analysis chain

🏁 Script executed:

```shell
find . -name "hardware_bridge_hook.c" -type f
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 123

🏁 Script executed:

```shell
cat -n printer_simulator/hardware_bridge_hook.c | sed -n '150,200p'
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 1931

🏁 Script executed:

```shell
head -n 10 printer_simulator/hardware_bridge_hook.c
```

Repository: mainsail-crew/virtual-klipper-printer — Length of output: 693
Fix open/open64 varargs handling; current implementation has undefined behavior.
The open() function unconditionally reads the mode argument from varargs at line 162 and passes it to real_open() at line 189, even when neither O_CREAT nor O_TMPFILE flags are set. Per POSIX, the mode argument is only present when these flags are used. Additionally, open64() at line 193 completely drops varargs when forwarding to open(), losing the mode argument when it should be passed.
🐛 Correct variadic forwarding

```diff
 int open(const char *path, int flags, ...) {
     init_real_funcs();
-    va_list args;
-    va_start(args, flags);
-    mode_t mode = va_arg(args, mode_t);
-    va_end(args);
+    mode_t mode = 0;
+    if ((flags & O_CREAT) || (flags & O_TMPFILE)) {
+        va_list args;
+        va_start(args, flags);
+        mode = va_arg(args, mode_t);
+        va_end(args);
+    }
@@
-    return real_open(path, flags, mode);
+    if ((flags & O_CREAT) || (flags & O_TMPFILE))
+        return real_open(path, flags, mode);
+    return real_open(path, flags);
 }

 int open64(const char *path, int flags, ...) {
-    return open(path, flags);
+    if ((flags & O_CREAT) || (flags & O_TMPFILE)) {
+        va_list args;
+        va_start(args, flags);
+        mode_t mode = va_arg(args, mode_t);
+        va_end(args);
+        return open(path, flags, mode);
+    }
+    return open(path, flags);
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```c
int open(const char *path, int flags, ...) {
    init_real_funcs();
    mode_t mode = 0;
    if ((flags & O_CREAT) || (flags & O_TMPFILE)) {
        va_list args;
        va_start(args, flags);
        mode = va_arg(args, mode_t);
        va_end(args);
    }

    // ADC: /sys/bus/iio/devices/iio:device0/in_voltageN_raw
    if (strstr(path, "in_voltage") && strstr(path, "_raw")) {
        int channel = 0;
        const char *p = strstr(path, "in_voltage");
        if (p) channel = atoi(p + strlen("in_voltage"));
        int fd = make_dummy_fd();
        FakeFd *f = alloc_fake_fd(fd, FD_ADC);
        if (f) f->channel = channel;
        fprintf(stderr, "[fake_hw] open ADC channel %d -> fd %d\n", channel, fd);
        return fd;
    }

    // GPIO: /dev/gpiochipN
    if (strstr(path, "/dev/gpiochip")) {
        int chip = 0;
        sscanf(path, "/dev/gpiochip%d", &chip);
        int fd = make_dummy_fd();
        FakeFd *f = alloc_fake_fd(fd, FD_GPIO_CHIP);
        if (f) f->chip = chip;
        fprintf(stderr, "[fake_hw] open gpiochip%d -> fd %d\n", chip, fd);
        return fd;
    }

    if ((flags & O_CREAT) || (flags & O_TMPFILE))
        return real_open(path, flags, mode);
    return real_open(path, flags);
}

int open64(const char *path, int flags, ...) {
    if ((flags & O_CREAT) || (flags & O_TMPFILE)) {
        va_list args;
        va_start(args, flags);
        mode_t mode = va_arg(args, mode_t);
        va_end(args);
        return open(path, flags, mode);
    }
    return open(path, flags);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/hardware_bridge_hook.c` around lines 158 - 194, The varargs
handling in open() and open64() is unsafe: open() always consumes a mode_t from
va_list even when flags don't include O_CREAT or O_TMPFILE, and open64() drops
varargs entirely. Change open() to only va_start/va_arg and obtain mode when
(flags & (O_CREAT|O_TMPFILE)) is true (use a default mode_t like 0 otherwise)
and pass that mode to real_open(path, flags, mode); update all early-return
branches to avoid touching varargs when not needed. Reimplement open64() to
accept varargs like open64(const char *path, int flags, ...) and forward the
mode to open() (or call real_open) only when the flags indicate a mode argument,
using the same conditional va_list logic as open(). Ensure va_start/va_end are
paired correctly and reference the functions open, open64, real_open, flags,
O_CREAT, O_TMPFILE, and mode_t when making changes.
```python
data = conn.recv(64).decode().strip()
parts = data.split()

if parts[0] == "GET" and len(parts) == 2:
    value = self.get(parts[1])
    conn.send(f"{value}\n".encode())

elif parts[0] == "SET" and len(parts) == 3:
    key, value = parts[1], int(parts[2])
    self.set(key, value)
    conn.send(b"OK\n")
```
Return explicit errors for empty/invalid commands.
The handler assumes parts[0] exists and silently drops unsupported commands; this makes client behavior brittle on malformed input.
🛠️ Proposed fix

```diff
 def _handle_client(self, conn):
     try:
         data = conn.recv(64).decode().strip()
         parts = data.split()
+        if not parts:
+            conn.send(b"ERR empty command\n")
+            return
         if parts[0] == "GET" and len(parts) == 2:
             value = self.get(parts[1])
             conn.send(f"{value}\n".encode())
         elif parts[0] == "SET" and len(parts) == 3:
-            key, value = parts[1], int(parts[2])
+            try:
+                key, value = parts[1], int(parts[2])
+            except ValueError:
+                conn.send(b"ERR invalid value\n")
+                return
             self.set(key, value)
             conn.send(b"OK\n")
+        else:
+            conn.send(b"ERR invalid command\n")
     except Exception as e:
         print(f"[server] error: {e}")
     finally:
         conn.close()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/printer_hardware/server.py` around lines 75 - 85, The
handler currently assumes data has tokens and silently ignores malformed input;
update the connection handling (the block that calls conn.recv, splits into
parts and calls self.get/self.set) to detect empty/whitespace data and
immediately send a clear error like "ERR Empty command\n", validate parts before
indexing and respond "ERR Invalid command\n" when parts length or command name
is unsupported, and wrap the int conversion for SET (where the code uses
int(parts[2])) in a try/except to send "ERR Invalid value\n" on ValueError;
ensure unsupported verbs return "ERR Unsupported command\n" so clients always
receive an explicit error instead of silent failure.
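The same explicit-error protocol can be factored into a pure function, which also makes it unit-testable without a socket. A sketch under the assumption that the backing store behaves like a dict; the names here are illustrative, not the server's actual API:

```python
def handle_command(data: str, store: dict) -> str:
    """Parse one GET/SET request line and return the reply text.

    Every input, including malformed ones, yields an explicit reply,
    so clients never hang waiting on a silently dropped command.
    """
    parts = data.strip().split()
    if not parts:
        return "ERR empty command"
    if parts[0] == "GET" and len(parts) == 2:
        return str(store.get(parts[1], 0))
    if parts[0] == "SET" and len(parts) == 3:
        try:
            store[parts[1]] = int(parts[2])
        except ValueError:
            return "ERR invalid value"
        return "OK"
    return "ERR invalid command"
```

The socket handler then only does I/O: receive a line, call `handle_command`, send the reply.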
```python
adc_max = 4095
voltage = (temperature_c / 500) * 3.3
adc_value = int((voltage / 3.3) * adc_max)
return adc_value
```
Clamp ADC output to 12-bit bounds.
calc_adc_value() can currently return values outside 0..4095 for temperatures <0°C or >500°C.
🔒 Bound-safe conversion

```diff
 def calc_adc_value(temperature_c: float) -> int:
     adc_max = 4095
-    voltage = (temperature_c / 500) * 3.3
-    adc_value = int((voltage / 3.3) * adc_max)
-    return adc_value
+    adc_value = int(round((temperature_c / 500.0) * adc_max))
+    return max(0, min(adc_max, adc_value))
```

🤖 Prompt for AI Agents
+ return max(0, min(adc_max, adc_value))🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/printer_hardware/thermals.py` around lines 21 - 24, The
calc_adc_value() implementation can produce values outside 0..4095; modify it so
after computing adc_value (using adc_max, voltage, etc.) you clamp the result to
the 12-bit range by ensuring adc_value is at least 0 and at most adc_max (then
return that clamped int). Keep the existing conversion logic but apply the clamp
before returning to guarantee bounds safety for temperatures <0°C or >500°C.
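Since the 3.3 V factor cancels out of the conversion, the mapping reduces to a single scaled-and-clamped expression. A standalone sketch of the bounded variant, assuming the linear 0–500 °C range from the review:

```python
def calc_adc_value(temperature_c: float) -> int:
    """Map 0-500 degC linearly onto a 12-bit ADC reading, clamped to 0..4095."""
    adc_max = 4095
    # The intermediate voltage term divides out: (T/500 * 3.3) / 3.3 == T/500.
    adc_value = int(round((temperature_c / 500.0) * adc_max))
    return max(0, min(adc_max, adc_value))
```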
```python
def __init__(
    self,
    mass_kg: float,
    heating_power_watts: float,
    ambient_temp: float = 20.0,
    cooling_coefficient: float = 0.01,  # k, higher = faster cooling
    initial_temp: float = 20.0,
):
    self._mass_kg = mass_kg
    self._ambient_temp = ambient_temp
    self._cooling_coefficient = cooling_coefficient
    self._temperature = initial_temp
    self._heating_power_watts = heating_power_watts
    self._is_active = False
    self._time = time.monotonic()
```
Validate thermal parameters to prevent runtime faults.
At Line 59, heat_capacity can become zero/invalid if mass_kg is non-positive.
🛡️ Constructor guards

```diff
 def __init__(
     self,
     mass_kg: float,
     heating_power_watts: float,
@@
 ):
+    if mass_kg <= 0:
+        raise ValueError("mass_kg must be > 0")
+    if cooling_coefficient < 0:
+        raise ValueError("cooling_coefficient must be >= 0")
     self._mass_kg = mass_kg
```

Also applies to: 59-60
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/printer_hardware/thermals.py` around lines 39 - 53, The
constructor for the thermals class does not validate inputs, so downstream
calculation of heat_capacity (derived from mass_kg) can be zero or invalid;
update __init__ to validate and raise ValueError for invalid parameters: ensure
mass_kg > 0, cooling_coefficient > 0, and (optionally) heating_power_watts >= 0
and that ambient_temp/initial_temp are finite numbers; reference the __init__
method and the instance attributes _mass_kg, _cooling_coefficient,
_heating_power_watts, _ambient_temp, and _temperature when adding these guards
so heat_capacity calculation later cannot divide by zero or operate on invalid
values.
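The guards can be exercised in isolation. A sketch with only the fields relevant to validation — the real ThermalMass has more parameters (ambient and initial temperature, time tracking), which are omitted here:

```python
class ThermalMassSketch:
    """Sketch of the validated constructor proposed in the review."""

    def __init__(
        self,
        mass_kg: float,
        heating_power_watts: float,
        cooling_coefficient: float = 0.01,
    ):
        # Reject parameters that would later make heat-capacity math
        # divide by zero or run the simulation backwards.
        if mass_kg <= 0:
            raise ValueError("mass_kg must be > 0")
        if cooling_coefficient < 0:
            raise ValueError("cooling_coefficient must be >= 0")
        if heating_power_watts < 0:
            raise ValueError("heating_power_watts must be >= 0")
        self._mass_kg = mass_kg
        self._heating_power_watts = heating_power_watts
        self._cooling_coefficient = cooling_coefficient
```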
```python
def set_heater(self, on: bool) -> None:
    self._is_active = on
    self.__update_temperature()
```
Apply elapsed-time update before changing heater state.
At Line 71, toggling _is_active before __update_temperature() attributes past dt to the new state, which skews temperature evolution.
✅ Fix state-transition integration

```diff
 def set_heater(self, on: bool) -> None:
-    self._is_active = on
-    self.__update_temperature()
+    self.__update_temperature()
+    self._is_active = on
```
+ self._is_active = on📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def set_heater(self, on: bool) -> None:
    self.__update_temperature()
    self._is_active = on
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/printer_hardware/thermals.py` around lines 70 - 73, The
heater state is being changed before integrating temperature, causing the
elapsed-time update to use the new state; in set_heater you should first call
__update_temperature() to advance temperature using the current _is_active and
elapsed dt, then assign self._is_active = on to change the heater; ensure
__update_temperature() consumes the elapsed time since the last update (using
its existing time-tracking) so the state transition integration is correct.
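The ordering matters because the elapsed interval belongs to the previous heater state. A toy sketch with an explicit `dt` to make the effect testable — the real class derives `dt` from `time.monotonic()`, and the linear heat rate here is a stand-in for the full thermal model:

```python
class HeaterSketch:
    """Sketch of the fixed ordering: advance temperature using the OLD
    heater state, then record the new one."""

    def __init__(self, temp: float = 20.0, heat_rate: float = 2.0):
        self.temp = temp
        self.heat_rate = heat_rate  # degC per second while heating (toy value)
        self.is_active = False

    def update(self, dt: float) -> None:
        # Integrate the past interval under the state that was active during it.
        if self.is_active:
            self.temp += self.heat_rate * dt

    def set_heater(self, on: bool, dt: float) -> None:
        self.update(dt)      # elapsed time belongs to the previous state
        self.is_active = on  # only now switch state
```

With the buggy ordering (toggle first), switching the heater on would retroactively heat the whole previous interval.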
```python
def load_axes(state_payload: dict) -> tuple[Axis, Axis, Axis] | None:
    axes = state_payload.get("axes", {})
    required_keys = ("x", "y", "z")
    if any(key not in axes for key in required_keys):
        return None
    return (
        create_axis_state("x", axes["x"]),
        create_axis_state("y", axes["y"]),
        create_axis_state("z", axes["z"]),
    )
```
Handle malformed axis payloads defensively.
load_axes() currently assumes each axis object has position, min_pos, and max_pos; missing fields can raise and stop rendering.
🛠️ Proposed fix

```diff
 def load_axes(state_payload: dict) -> tuple[Axis, Axis, Axis] | None:
     axes = state_payload.get("axes", {})
     required_keys = ("x", "y", "z")
     if any(key not in axes for key in required_keys):
         return None
-    return (
-        create_axis_state("x", axes["x"]),
-        create_axis_state("y", axes["y"]),
-        create_axis_state("z", axes["z"]),
-    )
+    try:
+        return (
+            create_axis_state("x", axes["x"]),
+            create_axis_state("y", axes["y"]),
+            create_axis_state("z", axes["z"]),
+        )
+    except (KeyError, TypeError):
+        return None
```
Verify each finding against the current code and only fix it if needed.
In `@printer_simulator/webcam_renderer.py` around lines 26 - 35, load_axes assumes
each axis dict has position, min_pos and max_pos which can raise if missing;
update load_axes to validate each axis payload before calling create_axis_state
(or make create_axis_state tolerant) by checking that axes["x"], axes["y"],
axes["z"] are mappings and contain the keys "position", "min_pos", "max_pos" (or
supply safe defaults) and return None or skip rendering if any axis is
malformed; reference the load_axes function and create_axis_state so you either
add explicit key checks/try/except around create_axis_state calls or add
defensive defaults/validation inside create_axis_state to prevent exceptions
from missing fields.
Disclaimer upfront:
This MR is big and completely changes the way the virtual-klipper-printer works.
I completely understand if these changes are not accepted, but I see a real benefit in this alternative implementation and want to share it with you.
Implementation reasoning
The current implementation utilizes `simulavr`, emulating the AVR version of Klipper. simulavr doesn't simulate in realtime; observations show slowdowns of at least factor 6 (target speed is 20 MHz, real execution speed is 3 MHz on a top-spec'd MacBook M4).
This leads to timing issues like #43.
Additionally, executing simulavr keeps one CPU core at 100% load.
In an attempt at fixing the "Timer too close" issue I stumbled over the possibility of compiling Klipper with Linux as a build target (I guess for Raspberry Pi or other SoCs). The compiled output can be executed directly inside the Docker container. Because of missing GPIO/analog devices, this doesn't work out of the box, though. On Linux, GPIO pins and analog inputs are mounted into the directory tree and accessed via syscall functions, i.e. "open", "read", "write". Example: https://github.com/Klipper3d/klipper/blob/master/src/linux/analog.c#L30
With a shared object written in C and loaded via LD_PRELOAD, the execution of these syscalls can be intercepted; accesses to an analog channel or GPIO device are redirected to a printer simulator implemented in Python.
The printer simulator simulates the movement of the stepper motors/endstops, making e.g. a realistic axis homing possible. Additionally, heatbeds and extruders simulate heating/cooling. Information extracted from the printer simulator is used to generate a webcam image.
The complete system necessitates practically no CPU, as the extra emulation step isn't necessary.
See this mainsail screenshot showing the virtual klipper printer in action:

Alternative solutions considered/evaluated:

- Isn't fully implemented, and communication couldn't be established between klippy and klipper
- Complexity is significantly higher; Python is more intuitive to program in
- Introduces a maintenance dependency when the klipper source code changes
- Necessitates privileged docker container access, opening a potential security risk
Coding agent generated commit message
Add printer hardware simulation components and server
Open todos:

- `:latest` isn't best practice. If a user is not on latest main but pulls a newly built docker image after this MR gets merged, this will break the setup.