Previously, #defines which require a value, but were configured as
booleans, broke the GUI code. While Configtool itself should never
write such broken #defines, they can appear after manual config file
edits. IMHO it's fine to do such repair attempts as long as they
don't hobble other functionality. Whatever was broken at read time
will end up disabled at write time, unless the user changes that
value in the GUI.
This is done by parsing values from the generic config before
parsing those in the user config. Values present in the user
config overwrite those from the generic config; values not
present there keep the value from the generic config.
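The precedence rule can be sketched like this (a minimal model with hypothetical names; Configtool's actual parser works on #define lines, not key/value tables):

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical, minimal model of the two-pass read: the generic
   config provides defaults, the user config overrides them. */
struct setting { const char *name; const char *value; };

static const char *lookup(const struct setting *table, size_t n,
                          const char *name) {
  for (size_t i = 0; i < n; i++)
    if (strcmp(table[i].name, name) == 0)
      return table[i].value;
  return NULL;
}

/* The user config wins; anything it doesn't mention keeps the
   value from the generic config. */
static const char *effective(const struct setting *generic, size_t ng,
                             const struct setting *user, size_t nu,
                             const char *name) {
  const char *v = lookup(user, nu, name);
  return v ? v : lookup(generic, ng, name);
}
```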
Previously, only boolean #defines were handled properly (and
by code somewhere else). Missing #defines with a value were
written as boolean #defines, making the file unparseable on
the next read.
We can't magically find out what the right pin is, but we can at
least make sure it gets written with valid syntax next time. This
also ensures pins can be handled in the GUI, avoiding failures
like the one reported by inline comments here:
b9fe0a5dd0
Also coalesce multiple pin-steps occurring at the same clock-time into
a single step. This allows us to fold X+Y movements happening at the
same moment into a single step on CoreXY.
We show pin output on the console when --verbose is 2. But this gets in
the way of other verbose output we may want to monitor. Move the pinouts
option to an explicit switch instead of relying on the --verbose flag.
When the simulator finishes processing the last gcode, it exits. This
causes us to exit without completing the last commanded move.
Rework how we handle end-of-file parsing and wait for the dda queue to
empty before exiting at the end of file processing.
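The reworked end-of-file handling boils down to this (a sketch with hypothetical stand-in names, not Teacup's actual functions):

```c
/* Instead of exiting as soon as the last G-code line is parsed,
   keep ticking until the DDA queue has drained. */
static int moves_queued = 3;      /* pretend three moves are pending */

static int queue_empty(void) { return moves_queued == 0; }

static void clock_tick(void) {    /* one simulated firmware tick */
  if (moves_queued)
    moves_queued--;               /* a move completes */
}

static void drain_queue_at_eof(void) {
  while (!queue_empty())
    clock_tick();                 /* finish the last commanded move */
}
```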
Add a function axes_um_to_steps to convert from um to steps on all axes
respecting current kinematics setting.
Extend code_axes_to_stepper_axes to convert all axes,
including the E axis, for consistency.
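A sketch of what axes_um_to_steps does, with illustrative constants and a simplified signature (Teacup's real code takes these values from the config and uses its own fixed-point helpers):

```c
#include <stdint.h>

/* Hypothetical per-axis steps-per-meter values. */
#define STEPS_PER_M_X 80000
#define STEPS_PER_M_Y 80000
#define STEPS_PER_M_Z 400000
#define STEPS_PER_M_E 96000

enum axis_e { X = 0, Y, Z, E, AXIS_COUNT };

/* steps = um * steps_per_m / 1e6, rounded (for non-negative input). */
static int32_t um_to_steps(int32_t um, int32_t steps_per_m) {
  return (int32_t)(((int64_t)um * steps_per_m + 500000) / 1000000);
}

/* Convert a position in micrometers into motor steps on all axes,
   applying the CoreXY transform when that kinematics is selected. */
static void axes_um_to_steps(const int32_t um[AXIS_COUNT],
                             int32_t steps[AXIS_COUNT], int corexy) {
  static const int32_t spm[AXIS_COUNT] =
    { STEPS_PER_M_X, STEPS_PER_M_Y, STEPS_PER_M_Z, STEPS_PER_M_E };
  for (int a = X; a < AXIS_COUNT; a++)
    steps[a] = um_to_steps(um[a], spm[a]);
  if (corexy) {
    int32_t x = steps[X], y = steps[Y];
    steps[X] = x + y;   /* motor A: dA = dX + dY */
    steps[Y] = x - y;   /* motor B: dB = dX - dY */
  }
}
```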
It seems like axes_um_to_steps could be simplified to something like
"apply_kinematics_axes()" which would just do the transformation math
in-place on some axes[] to move from 'Cartesian' to 'target-kinematics'.
Then the original um_to_steps and delta_um code could remain untouched
since 2014. But I'm not sure how this will work with scara or delta
configurations. I'm fairly certain they only work from absolute positions
anyway.
Fixes #216.
(Ab)use the old Gen7 v1.3 configuration for this, as it's
rarely in use and also happens to be the board the tested
code was developed on.
All in one chunk because the infrastructure is already there.
This also implements the parallel 4-bit bus used by quite a few
displays.
For now you have to add quite a number of #defines to your
config.h. First, all the required pins, with pin names changed
to match your actual board/display wiring, of course:
#define DISPLAY_RS_PIN PC1
#define DISPLAY_RW_PIN PC0
#define DISPLAY_E_PIN PD2
#define DISPLAY_D4_PIN PD3
#define DISPLAY_D5_PIN PD4
#define DISPLAY_D6_PIN PD5
#define DISPLAY_D7_PIN PD6
And then the #defines telling the firmware the display actually exists:
#define DISPLAY_BUS_4BIT
#define DISPLAY_TYPE_HD44780
Support for doing all this in Configtool is forthcoming, of course.
CoreXY turns the X and Y motors to render a target position differently
than a straight Cartesian printer does. From the theory page on corexy.com,
where the motors are called A and B instead of X and Y:
dX = (dA + dB) / 2, dY = (dA - dB) / 2
dA = dX + dY
dB = dX - dY
Accordingly, each step of a single motor results in half of a step in the
X or Y axis. To simplify this and not lose steps, make the pos[] array
hold 2*steps instead of single steps. Adjust back to single steps with
/2 where needed. Store 2*steps whenever writing to pos[] variables
which are not coreXY driven.
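The double-resolution bookkeeping can be sketched like this (hypothetical helpers; the real code does this inside the DDA):

```c
#include <stdint.h>

/* pos[] holds 2*steps, so the half-step-per-motor-step CoreXY
   geometry never loses resolution. */
static int32_t pos2[2];          /* X and Y, in units of half steps */

/* One step of motor A or B moves both X and Y by half a step each. */
static void corexy_motor_step(int motor_b, int dir) {
  pos2[0] += dir;                  /* X: one half step either way   */
  pos2[1] += motor_b ? -dir : dir; /* Y: sign depends on the motor  */
}

/* Adjust back to whole steps with /2 where needed. */
static int32_t pos_steps(int axis) { return pos2[axis] / 2; }
```

Two steps of motor A give dX = dY = 1 whole step; two steps of motor B give dX = 1, dY = -1, matching the equations above.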
Since each step of X or Y (A or B) affects both the X and Y positions,
send updates to record_pin for all axes instead of only the "affected"
axis. The function record_pin ignores reports for pins which did not
change since the previous call. This also keeps us from reporting
duplicate positions for half-steps in CoreXY mode.
Provide a simulated, simplified representation of the action of
mechanical endstop switches which turn on and off at slightly
different amounts of pressure. When any axis moves past -10,
simulate endstop "on" condition. When the axis moves to 0,
simulate the endstop "off" condition.
This support allows the simulation of G28 (home) commands.
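The hysteresis model is roughly this (a self-contained sketch; the thresholds match the description above, the names are hypothetical):

```c
#include <stdint.h>

/* Simplified model of a mechanical endstop switch: it trips when the
   axis moves past -10 and releases only once the axis is back at 0.
   Between the two thresholds the previous state is kept. */
static int endstop_hit = 0;

static void endstop_update(int32_t position) {
  if (position < -10)
    endstop_hit = 1;        /* pressed */
  else if (position >= 0)
    endstop_hit = 0;        /* released */
  /* else: hysteresis, keep the previous state */
}
```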
The simulator code is compiled with different definitions than the
rest of the code even when compiling the simulator. This was done
originally to satisfy the compiler, but it was the wrong way to go.
The result is that the main Teacup code may decide to do things one
way (X_INVERT_DIR, for example) but the simulator code will do
things a different way (no X_INVERT_DIR).
Fix this by including the board and printer definitions also in the
simulator code, and use a simple enum trick to give consistent
definitions to the needed PIN definitions, safely ignoring the ones
the config does not use.
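The enum trick looks roughly like this (illustrative names, not Teacup's actual pin list):

```c
/* Give every pin name a unique enumerator, so any config's PIN
   #defines resolve to consistent values in both the firmware build
   and the simulator build, and the simulator can safely ignore the
   pins a given config doesn't use. */
enum sim_pin {
  SIM_PIN_NONE = 0,
  DIO0, DIO1, DIO2, DIO3,        /* one enumerator per possible pin */
  AIO0, AIO1,
  SIM_PIN_COUNT
};

/* A printer config can now say e.g. the following, and both builds
   see the same value: */
#define X_STEP_PIN DIO2
```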
This requires that we include simulator.h after 'config.h' in all cases.
Manage that by moving simulator.h from its previous home in arduino.h
into config_wrapper.h.
After this change we will be able to reliably communicate the expected
state of the endstop pins from the simulator.
dda_clock() might be interrupted by dda_step(), and dda_step might
use or modify variables also being used in dda_clock(). It is
possible for dda to be modified when a new dda becomes live during
our dda_clock(). Check the dda->id to ensure it has not changed on
us before we actually write new calculated values into the dda.
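The guard can be sketched like this (field names from the commit message; the surrounding structure and signature are hypothetical):

```c
#include <stdint.h>

struct dda { uint8_t id; uint32_t c; int32_t n; };

/* Recompute speed for the live dda, but throw the result away if a
   new dda went live while we were calculating. */
static int dda_clock_update(struct dda *dda,
                            uint32_t new_c, int32_t new_n) {
  uint8_t id = dda->id;     /* snapshot before the slow math        */
  /* ... expensive acceleration calculation happens here ...        */
  if (dda->id != id)        /* a step interrupt swapped in a new dda */
    return 0;               /* discard, try again next tick         */
  dda->c = new_c;           /* in the real code this pair is written */
  dda->n = new_n;           /* inside an atomic section             */
  return 1;
}
```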
Note by Traumflug: copied some of the explanation in the commit
message directly into the code.
dda_clock() might be interrupted by dda_step(), and dda_step might
use or modify variables also being used in dda_clock. In particular
dda->c is modified in both functions but it is done atomically in
dda_clock() to prevent dda_step() from interrupting during the
write. But dda->n is also modified in both places and it is not
protected in dda_clock().
Move updates to dda->n to the atomic section along with dda->c.
Note by Traumflug: good catch! It even makes the binary 14 bytes
smaller, so likely faster.
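The protected write amounts to this (a host-runnable sketch; the atomic macros are stand-ins for the AVR interrupt guards, which save SREG and execute cli() on real hardware):

```c
#include <stdint.h>

#define ATOMIC_START ((void)0)  /* real AVR code: save SREG, cli() */
#define ATOMIC_END   ((void)0)  /* real AVR code: restore SREG     */

struct dda { uint32_t c; int32_t n; };

/* Update c and n inside one atomic section, so dda_step() can never
   observe a half-updated pair. */
static void dda_set_speed(struct dda *dda,
                          uint32_t new_c, int32_t new_n) {
  ATOMIC_START;
  dda->c = new_c;
  dda->n = new_n;
  ATOMIC_END;
}
```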
Yes, these strategies feel a lot like heading into uncharted
territory, because I can't find notable textbook examples on how
to select between various "classes" at compile time. Nevertheless,
it works fine, binaries are small and fast and as such it can't
be _that_ wrong.
Some target devices need extra avrdude command line switches to
get them to upload successfully. There are dozens of options which
may be useful to different people. Instead of breaking all the possible
options out into separate fields, provide a generic "Program Flags" text
field which the user can fill in similar to the CFLAGS and LDFLAGS
settings.
The Arduino Mega2560 bootloader was changed[1] to report an error when
asked to erase flash because it has never actually implemented erasing
flash. To program this bootloader with avrdude requires the -D switch
to avoid flash erase. But it seems that every Arduino will work fine
with -D, as evidenced by the fact that the Arduino IDE always [2]
includes -D in the avrdude command line. Presumably the flash is erased
during/before programming anyway and the separate erase step is unneeded.
Perhaps -D should always be added to the avrdude command line in
Configtool and in Makefile-AVR. But I haven't tested any other boards
yet, so I'm being more cautious even though the Arduino IDE does
otherwise.
[1] arduino/Arduino#543
[2] d8e5997328/app/src/processing/app/debug/AvrdudeUploader.java (L168)
A few days before being done with this, the display hardware decided
to say goodbye. Accordingly, I can't continue writing the related
code. Writing down what already works and what's still missing is
probably a good idea, to make sure the next fellow doesn't have
to investigate from scratch.
Currently not implemented because this costs additional binary
size and, well, with I2C being reliable now, it's difficult to
test it. And also because I'm lazy :-)
- Flag I2C_MODE_FREE was misleading, because one couldn't test
for it the same way as for I2C_MODE_BUSY. At an error
condition, 'i2c_mode & I2C_MODE_FREE' would still evaluate
to true.
- On error, drop not only the buffer, but the entire
transmission.
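The pitfall can be illustrated with hypothetical bit values (not Teacup's actual layout): after an error, 'i2c_mode & I2C_MODE_FREE' still evaluates to true, so the flag cannot be tested the way I2C_MODE_BUSY can.

```c
#include <stdint.h>

/* Illustrative bit layout only: an error state can leave the FREE
   bit set, so testing for FREE does not detect errors. */
#define I2C_MODE_FREE  0x01
#define I2C_MODE_BUSY  0x02
#define I2C_MODE_ERROR 0x04

static uint8_t i2c_mode = I2C_MODE_FREE | I2C_MODE_ERROR;
```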
All of a sudden, the display works reliably, even at the
previously shaky speed of 100'000 bits/s!
TBH, probably I didn't understand some parts of Ruslan's code
earlier, but tweaked it anyway. Shame on me!
If all error conditions are handled the same, there's not much
point in using distinct code for each of them.
Also, handle collisions like the other error conditions.
This saves a nice 52 bytes of program memory.
Program: 24404 bytes
Data: 1543 bytes
EEPROM: 32 bytes
Now we shouldn't experience wait cycles in i2c_write() during
typical display writes any longer. It should also distribute the
CPU load of display writes a lot better.
Previously writing a line of text to the display would take
almost as long as it took to actually send it to the display,
because the I2C queue could hold only one transmission, which
effectively meant only one character. This could hold the main
loop for several milliseconds.
Now we queue characters, send them one by one, and return to the
main loop in between.
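A minimal sketch of such a per-character queue (size and names are hypothetical; in the firmware the consumer side runs from the I2C interrupt):

```c
#include <stdint.h>

#define I2C_QUEUE_SIZE 16   /* power of two keeps the masking cheap */

static uint8_t queue[I2C_QUEUE_SIZE];
static uint8_t q_head, q_tail;   /* head = write index, tail = read */

/* Main loop side: enqueue one character, non-blocking. */
static int i2c_queue_put(uint8_t byte) {
  uint8_t next = (q_head + 1) & (I2C_QUEUE_SIZE - 1);
  if (next == q_tail)
    return 0;                    /* full: caller must wait or drop  */
  queue[q_head] = byte;
  q_head = next;
  return 1;
}

/* ISR side: dequeue the next character to send on the bus. */
static int i2c_queue_get(uint8_t *byte) {
  if (q_tail == q_head)
    return 0;                    /* empty */
  *byte = queue[q_tail];
  q_tail = (q_tail + 1) & (I2C_QUEUE_SIZE - 1);
  return 1;
}
```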
This costs 160 bytes of program memory, but only 18 bytes of RAM,
because the I2C queue size was reduced accordingly. Now:
Program: 24456 bytes
Data: 1543 bytes
EEPROM: 32 bytes
This is
- clearing 'i2c_should_end', so i2c_write() doesn't hang and
- draining the buffer on errors.
This way we lose the remaining transmission, which is typically
half a character, but we no longer stall the entire firmware main
loop.
Actually, such error conditions are surprisingly frequent, at
least on the test hardware. Now they result in some flickering
of the displayed numbers.
This isn't pretty at all, but it shows the principle.
Unfortunately it also exploits a bug in the I2C sending mechanism:
I2C sending hangs a few seconds after reset.