In some cases, when move_step_no is equal to rampup_steps, this
algorithm thinks it is cruising. When cruising, it sets dda->c
to dda->c_min, which is wrong in that case.
So we recalculate dda->c now. The axis will become a little
bit faster for non-cruising movements. When it hits cruising,
it will be capped to dda->c_min anyway. So the "TODO: check is obsolete"
is no longer obsolete.
Compilers are pretty smart these days: sqrt() is precalculated for constant
values. You need to #include <math.h>, but there is no need to link libm.
cos(), sin() and other functions should work the same way.
In the endstop_trigger case, we check whether we are cruising.
-> Yes: take rampdown_steps for the calculation.
-> No: we are still accelerating, so we want to decelerate over the same number of steps.
We don't need to save step_no; we can easily calculate it when needed.
Also some whitespace work. The only change in dda.h is the deletion of 'uint32_t step_no;'.
Saves up to 16 clock cycles in dda_step():
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 209 clock cycles.
LED on time maximum: 504 clock cycles.
LED on time average: 241.441 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 22589.
LED on time minimum: 209 clock cycles.
LED on time maximum: 521 clock cycles.
LED on time average: 276.729 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 209 clock cycles.
LED on time maximum: 504 clock cycles.
LED on time average: 262.923 clock cycles.
@phord abstracts this to: this happens only when !recalc_speed,
meaning we are cruising, not accelerating or decelerating. So it
pegs dda->c at c_min even if it never made it as far as c_min.
This commit fixes https://github.com/Traumflug/Teacup_Firmware/issues/69
delta_um can become very small while maximum_feedrate_P stays constant.
Moving this division out of the loop can make the result wrong.
dda->total_steps also becomes very small along with delta_um, so this fits perfectly.
This reverts commit cd66feb8d1.
So let's bring this part back.
We save 35 clock cycles at 'LED on time maximum'
ATmega sizes '168 '328(P) '644(P) '1280
Program: 18038 bytes 126% 59% 29% 14%
Data: 1936 bytes 190% 95% 48% 24%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 217 clock cycles.
LED on time maximum: 520 clock cycles.
LED on time average: 249.626 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 22589.
LED on time minimum: 217 clock cycles.
LED on time maximum: 537 clock cycles.
LED on time average: 284.747 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 217 clock cycles.
LED on time maximum: 520 clock cycles.
LED on time average: 270.933 clock cycles.
ATmega sizes '168 '328(P) '644(P) '1280
Program: 18266 bytes 128% 60% 29% 15%
Data: 1936 bytes 190% 95% 48% 24%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 243 clock cycles.
LED on time maximum: 555 clock cycles.
LED on time average: 250.375 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 22589.
LED on time minimum: 243 clock cycles.
LED on time maximum: 572 clock cycles.
LED on time average: 292.139 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 243 clock cycles.
LED on time maximum: 555 clock cycles.
LED on time average: 275.699 clock cycles.
In `ACCELERATION_RAMPING` code we use the dda->id field even when we do
not enable `LOOKAHEAD`. Expose the variable and its related `idcnt`
when `ACCELERATION_RAMPING` is used.
Add a regression test to catch this in the future.
These values were queued up just to find out individual axis
speeds in dda_find_crossing_speed(). Let's do this calculation
with other available movement properties and save 16 bytes of RAM
per movement queue entry.
The first version of this commit forgot to take care of the feedrate
sign (prevF, currF); the lack of that was found by @Wurstnase. The idea
of tweaking the calculation of 'dv' to achieve this is also by @Wurstnase.
Setting the sign immediately after calculating the absolute values was
tried, too, but that resulted in larger ( = slower) code.
Binary size down 132 bytes, among that two loops. RAM usage down
256 bytes for the standard test case:
ATmega sizes '168 '328(P) '644(P) '1280
Program: 17944 bytes 126% 59% 29% 14%
Data: 1920 bytes 188% 94% 47% 24%
EEPROM: 32 bytes 4% 2% 2% 1%
Neither of them brought a performance improvement, so we revert
both. The commits as well as the revert are kept to preserve the
knowledge gained.
This reverts commits
"DDA, dda_start(): use mb_tail_dda directly." and
"DDA, dda_start(): don't pass mb_tail_dda as parameter."
Performance and binary size are back to what we had before:
ATmega sizes '168 '328(P) '644(P) '1280
Program: 19270 bytes 135% 63% 31% 15%
Data: 2179 bytes 213% 107% 54% 27%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 218 clock cycles.
LED on time maximum: 395 clock cycles.
LED on time average: 249.051 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 23648.
LED on time minimum: 237 clock cycles.
LED on time maximum: 438 clock cycles.
LED on time average: 272.216 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 237 clock cycles.
LED on time maximum: 395 clock cycles.
LED on time average: 262.572 clock cycles.
Just avoiding passing mb_tail_dda as a parameter didn't work out,
so how about using it directly? That's what this commit does.
Result: binary size is another 32 bytes bigger and the slowest step
another 16 clock cycles slower. No dice.
ATmega sizes '168 '328(P) '644(P) '1280
Program: 19306 bytes 135% 63% 31% 15%
Data: 2179 bytes 213% 107% 54% 27%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 218 clock cycles.
LED on time maximum: 414 clock cycles.
LED on time average: 249.436 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 23648.
LED on time minimum: 237 clock cycles.
LED on time maximum: 457 clock cycles.
LED on time average: 272.256 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 237 clock cycles.
LED on time maximum: 414 clock cycles.
LED on time average: 262.595 clock cycles.
Instead, read the global variable directly.
The idea is that reading the global variable directly avoids
the effort of building up a parameter stack, making things faster.
Actually, binary size increases by 4 bytes and the slowest step
takes 3 clock cycles longer. D'oh.
ATmega sizes '168 '328(P) '644(P) '1280
Program: 19274 bytes 135% 63% 31% 15%
Data: 2179 bytes 213% 107% 54% 27%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 218 clock cycles.
LED on time maximum: 398 clock cycles.
LED on time average: 249.111 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 23648.
LED on time minimum: 237 clock cycles.
LED on time maximum: 441 clock cycles.
LED on time average: 272.222 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 237 clock cycles.
LED on time maximum: 398 clock cycles.
LED on time average: 262.576 clock cycles.
As we have mb_tail_dda now, that's no longer necessary. Using
something like movebuffer[mb_tail] is more expensive than
dereferencing mb_tail_dda directly.
This is the first time we see a stepping performance improvement
since introducing mb_tail_dda: 13 clock cycles faster on the
slowest step, which is 9 cycles faster than before that
introduction.
Binary size is also down a nice 94 bytes.
ATmega sizes '168 '328(P) '644(P) '1280
Program: 19270 bytes 135% 63% 31% 15%
Data: 2179 bytes 213% 107% 54% 27%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 218 clock cycles.
LED on time maximum: 395 clock cycles.
LED on time average: 249.051 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 23648.
LED on time minimum: 237 clock cycles.
LED on time maximum: 438 clock cycles.
LED on time average: 272.216 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 237 clock cycles.
LED on time maximum: 395 clock cycles.
LED on time average: 262.572 clock cycles.
Not queuing up waits for the heaters in the movement queue removes
some code from performance-critical paths. What luck that we just
implemented an alternative M116 functionality in the previous
commit :-)
The slowest step gets a nice 29 clock cycles faster and binary
size decreases by a whopping 472 bytes. That's still 210 bytes
less than before implementing the alternative heater wait.
Best of all, average step time is down some 21 clock cycles, too,
so general stepping performance increased by no less than 5%.
ATmega sizes '168 '328(P) '644(P) '1280
Program: 19436 bytes 136% 64% 31% 16%
Data: 2177 bytes 213% 107% 54% 27%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 259 clock cycles.
LED on time maximum: 429 clock cycles.
LED on time average: 263.491 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 23648.
LED on time minimum: 251 clock cycles.
LED on time maximum: 472 clock cycles.
LED on time average: 286.259 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 251 clock cycles.
LED on time maximum: 429 clock cycles.
LED on time average: 276.616 clock cycles.
Nullmoves are movements which don't actually move a stepper, for
example because they're a velocity change only or the movement is
shorter than a single motor step.
Not queueing them up removes the need to check for them,
which reduces code in critical areas. It also removes the
need to run dda_start() twice to get past a nullmove.
Best of all, it also makes lookahead perform better. Before,
a nullmove just changing speed interrupted the lookahead chain;
now it no longer does. See straight-speeds.gcode and
...-Fsep.gcode, which produced different timings before; now the
results are identical.
Also update the function description for dda_create().
The performance increase is impressive: another 75 clock cycles off
the slowest step, at only 36 bytes of binary size increase:
ATmega sizes '168 '328(P) '644(P) '1280
Program: 19652 bytes 138% 64% 31% 16%
Data: 2175 bytes 213% 107% 54% 27%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 280 clock cycles.
LED on time maximum: 458 clock cycles.
LED on time average: 284.653 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 23648.
LED on time minimum: 272 clock cycles.
LED on time maximum: 501 clock cycles.
LED on time average: 307.275 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 272 clock cycles.
LED on time maximum: 458 clock cycles.
LED on time average: 297.625 clock cycles.
Performance of straight-speeds{-Fsep}.gcode before:
straight-speeds.gcode statistics:
LED on occurences: 32000.
LED on time minimum: 272 clock cycles.
LED on time maximum: 586 clock cycles.
LED on time average: 298.75 clock cycles.
straight-speeds-Fsep.gcode statistics:
LED on occurences: 32000.
LED on time minimum: 272 clock cycles.
LED on time maximum: 672 clock cycles.
LED on time average: 298.79 clock cycles.
Now:
straight-speeds.gcode statistics:
LED on occurences: 32000.
LED on time minimum: 272 clock cycles.
LED on time maximum: 501 clock cycles.
LED on time average: 298.703 clock cycles.
straight-speeds-Fsep.gcode statistics:
LED on occurences: 32000.
LED on time minimum: 272 clock cycles.
LED on time maximum: 501 clock cycles.
LED on time average: 298.703 clock cycles.
There we even save 171 clock cycles :-)
While this was an improvement of 9 clocks on AVRs, it had more
than the opposite effect on ARMs: 25 clocks slower on the slowest
step. Apparently ARMs aren't as efficient at reading and writing
single bits.
https://github.com/Traumflug/Teacup_Firmware/issues/189#issuecomment-262837660
Performance on AVR is back to what we had before:
ATmega sizes '168 '328(P) '644(P) '1280
Program: 19610 bytes 137% 64% 31% 16%
Data: 2175 bytes 213% 107% 54% 27%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 280 clock cycles.
LED on time maximum: 549 clock cycles.
LED on time average: 286.273 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 23648.
LED on time minimum: 272 clock cycles.
LED on time maximum: 580 clock cycles.
LED on time average: 307.439 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 272 clock cycles.
LED on time maximum: 539 clock cycles.
LED on time average: 297.732 clock cycles.
In dda_step(), instead of checking our 32-bit-wide delta[n] value,
just check a single bit in an 8-bit field. Should be a tad faster.
It does make the code larger, but also about 10% faster, I think.
Performance:
ATmega sizes '168 '328(P) '644(P) '1280
Program: 19696 bytes 138% 65% 32% 16%
Data: 2191 bytes 214% 107% 54% 27%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 263 clock cycles.
LED on time maximum: 532 clock cycles.
LED on time average: 269.273 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 23648.
LED on time minimum: 255 clock cycles.
LED on time maximum: 571 clock cycles.
LED on time average: 297.792 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 255 clock cycles.
LED on time maximum: 522 clock cycles.
LED on time average: 283.861 clock cycles.
This time we don't test for remaining steps, but whether the axis
moves at all. A much cheaper test, because this variable has to
be loaded into registers anyway.
Performance is now even better than without this test. The slowest
step is down from 604 to 580 clock cycles.
ATmega sizes '168 '328(P) '644(P) '1280
Program: 19610 bytes 137% 64% 31% 16%
Data: 2175 bytes 213% 107% 54% 27%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 280 clock cycles.
LED on time maximum: 549 clock cycles.
LED on time average: 286.273 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 23648.
LED on time minimum: 272 clock cycles.
LED on time maximum: 580 clock cycles.
LED on time average: 307.439 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 272 clock cycles.
LED on time maximum: 539 clock cycles.
LED on time average: 297.732 clock cycles.
Apparently gcc doesn't manage to sort out nested calculations. Putting
all the muldiv()s into one line gives this error:
dda.c: In function ‘update_current_position’:
dda.c:969:1: error: unable to find a register to spill in class ‘POINTER_REGS’
}
^
dda.c:969:1: error: this is the insn:
(insn 81 80 259 4 (set (reg:SI 82 [ D.3267 ])
(mem:SI (post_inc:HI (reg:HI 2 r2 [orig:121 ivtmp.106 ] [121])) [4 MEM[base: _97, offset: 0B]+0 S4 A8])) dda.c:952 95 {*movsi}
(expr_list:REG_INC (reg:HI 2 r2 [orig:121 ivtmp.106 ] [121])
(nil)))
dda.c:969: confused by earlier errors, bailing out
This problem was solved by doing the calculation step by step,
using intermediate variables. Glad I could help you, gcc :-)
Moving performance unchanged, M114 accuracy should have improved,
binary size 18 bytes bigger:
ATmega sizes '168 '328(P) '644(P) '1280
Program: 19582 bytes 137% 64% 31% 16%
Data: 2175 bytes 213% 107% 54% 27%
EEPROM: 32 bytes 4% 2% 2% 1%
Using the Bresenham algorithm it's safe to assume that if the axis
with the most steps is done, all other axes are done, too.
This way we save a lot of variable loading in dda_step(). We also
save this very expensive comparison of all axis counters against
zero. Minor drawback: update_current_position() is now even slower.
About performance: the slowest step decreased from 719 to 604
clocks, which is quite an improvement. Average step time increased
by 16 clocks for single-axis movements and decreased for multi-
axis movements. On the bottom line this should improve real-world
performance quite a bit, because a printer's movement speed isn't
limited by average timings, but by the time needed for the slowest
step.
Along the way, binary size dropped by a nice 244 bytes and RAM
usage by an equally nice 16 bytes.
ATmega sizes '168 '328(P) '644(P) '1280
Program: 19564 bytes 137% 64% 31% 16%
Data: 2175 bytes 213% 107% 54% 27%
EEPROM: 32 bytes 4% 2% 2% 1%
short-moves.gcode statistics:
LED on occurences: 888.
LED on time minimum: 326 clock cycles.
LED on time maximum: 595 clock cycles.
LED on time average: 333.62 clock cycles.
smooth-curves.gcode statistics:
LED on occurences: 23648.
LED on time minimum: 318 clock cycles.
LED on time maximum: 604 clock cycles.
LED on time average: 333.311 clock cycles.
triangle-odd.gcode statistics:
LED on occurences: 1636.
LED on time minimum: 318 clock cycles.
LED on time maximum: 585 clock cycles.
LED on time average: 335.233 clock cycles.
We need the fastest axis instead of its steps.
This also eliminates an overflow when ACCELERATION > 596.
We save 118 bytes of program memory and 2 bytes of data.
Reviewer Traumflug's note: I see 100 bytes program and 32 bytes
RAM saving on ATmegas here, 16 and 32 on the LPC 1114. Either way:
great stuff!
Similar to M221 which sets a variable flow rate percentage, add
support for M220 which sets a percentage modifier for the
feedrate, F.
It seems a little disturbing that the rate change modifies the next
G1 command and does not touch the buffered commands, but this
seems like the only reasonable thing to do, since the M221 setting
could be embedded in the source gcode for some use cases. Perhaps
an "immediate" setting using P1 could be considered later if
needed.
`target` is an input to dda_create, but we don't modify it. We
copy it into dda->endpoint and modify that instead, if needed.
Make `target` const so this treatment is explicit.
Rely on dda->endpoint to hold our "target" data so any decisions
we make leading up to using it will be correctly reflected in our
math.
The flow rate is given as a percentage, kept internally as
100 = 100%. But this means we must divide by 100 for every
movement, which can be expensive. Convert the value to
256 = 100% so the compiler can optimize the division into a
byte shift.
Also, avoid the math altogether in the normal case where the
flow rate is 100% and no change is required.
Note: this also requires increasing the size of e_multiplier
to 16 bits so values >= 100% can be stored. Previously, only flow
rates up to 255% (2.5x) were supported, which may have surprised
some users. Now the flow rate can be as high as
10000% (100x), at least internally.
Now it is possible to control the extruder's flow:
M221 S100 = 100% of the extruder's steps
M221 S90 = 90% of the extruder's steps
M221 is also used for this in other firmwares. A lot of
hosts, like OctoPrint and Pronterface, use this M-code for
this behaviour, too.
REPRAP-style acceleration broke quite a while ago, but no one noticed.
Maybe it's not being used, and therefore also not tested. But it should
at least compile while it remains an option.
The compiler complains that dda->n is not defined and that current_id is
never used. The first bug goes back to f0b9daeea0 in late 2013.
In the interest of supporting exploratory accelerations, fix this so it
builds when ACCELERATION_REPRAP is chosen.