diff --git a/README.md b/README.md
index d509571..004abf6 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,49 @@
-# klippain-IS-module
-Automated and improved Klipper input shaper workflow and calibration tools
+# Klippain Input Shaper standalone module
+
+This standalone Klippain module is the best way to automate and calibrate the input shaper system on your machine with a streamlined workflow. It's a two-part system that works as follows:
+ 1. Using specially tailored Klipper macros, it first runs resonance tests on either the belts or the printer's X/Y axes. The goal is to measure the behavior of the machine axes using an ADXL345 hooked to your machine's toolhead.
+ 2. Then, it automatically calls some Python scripts that automate a few things:
+    1. it generates custom-made graphs that should give you all the insights into what is happening in your machine. The goal is to get the best set of parameters for the Klipper `[input_shaper]` system (including the best shaper choice, resonant frequency and damping ratio), or to diagnose and fix mechanical problems like belt tension, etc.
+    2. it then moves the graphs and associated CSV files to your Klipper config folder so that you can find them directly using Mainsail/Fluidd (no more SSH is needed to calibrate your input shaper!)
+    3. it manages the folder, deleting the older files and keeping only a set (three by default) of the most recent results.
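As an illustration, the cleanup in step 3 could be sketched like this (a hypothetical simplification, not the actual script's code):

```python
# Hypothetical sketch of the results-folder cleanup (step 3 above):
# keep only the N most recent result files and delete the rest.
from pathlib import Path

def keep_newest_results(results_dir: str, keep: int = 3) -> list[str]:
    """Delete all but the `keep` newest files in results_dir; return deleted names."""
    files = sorted(
        Path(results_dir).iterdir(),
        key=lambda p: p.stat().st_mtime,  # newest first after reverse sort
        reverse=True,
    )
    deleted = []
    for old in files[keep:]:
        old.unlink()
        deleted.append(old.name)
    return deleted
```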
+
+ > **Note**
+ >
+ > This input shaper workflow module is part of the [Klippain](https://github.com/Frix-x/klippain) ecosystem. If you're already using a full Klippain installation on your machine, this module is already included and nothing more needs to be installed to use it!
+
+When you have your graphs, you can get some hints on how to read and interpret them in [my IS graphs documentation](./docs/input_shaper.md).
+
+| Belts resonances example | X resonances example | Y resonances example | Vibrations measurement example |
+|:------------------------:|:--------------------:|:--------------------:|:------------------------------:|
+|  |  |  |  |
+
+
+## Installation
+
+If you're not using the full Klippain, here are the steps to install this module in your own config:
+ 1. Add the [is_workflow folder](./../../scripts/is_workflow/) at the root of your own config (i.e. in your `~/printer_data/config/` directory).
+ 2. Make sure the `gcode_shell_command.py` Klipper extension is installed. The easiest way to install it is to use the advanced section of KIAUH.
+ 3. Make the scripts executable over SSH. From inside the folder (`cd ~/printer_data/config/is_workflow/scripts`), run:
+
+ ```bash
+ chmod +x ./is_workflow.py
+ chmod +x ./graph_belts.py
+ chmod +x ./graph_shaper.py
+ chmod +x ./graph_vibrations.py
+ ```
+
+ 4. Add the following at the end of your `printer.cfg` file:
+ ```
+ [include is_workflow/*.cfg]
+ ```
+
+
+## Usage
+
+Be sure your machine is homed and then call one of the following macros:
+ - `BELTS_SHAPER_CALIBRATION` to get the belt resonance graphs. This is useful to verify the belt tension, but also to check that the belt paths are OK.
+ - `AXES_SHAPER_CALIBRATION` to get the input shaper graphs and suppress the ringing/ghosting artifacts in your prints by correctly configuring Klipper's `[input_shaper]` system.
+ - `VIBRATIONS_CALIBRATION` to get the machine vibration graphs and select the best speeds for your slicer profile.
+ - `EXCITATE_AXIS_AT_FREQ` to maintain a specific excitation frequency, in order to let you inspect the machine and find out what is resonating.
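For example, from the console (all parameters shown are optional and fall back to the macro defaults):

```
AXES_SHAPER_CALIBRATION AXIS=x
BELTS_SHAPER_CALIBRATION FREQ_START=5 FREQ_END=133.33
EXCITATE_AXIS_AT_FREQ FREQUENCY=45 TIME=10 AXIS=y
```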
+
+Then, look for the results in the results folder. You can find them directly in your config folder using the Mainsail/Fluidd web UI. You can get some hints on the results in my documentation about how to [read and interpret the IS graphs](./docs/input_shaper.md).
diff --git a/docs/images/IS_docs/belt_graphs/belts_problem.png b/docs/images/IS_docs/belt_graphs/belts_problem.png
new file mode 100644
index 0000000..309da1a
Binary files /dev/null and b/docs/images/IS_docs/belt_graphs/belts_problem.png differ
diff --git a/docs/images/IS_docs/belt_graphs/belts_problem2.png b/docs/images/IS_docs/belt_graphs/belts_problem2.png
new file mode 100644
index 0000000..e03fe88
Binary files /dev/null and b/docs/images/IS_docs/belt_graphs/belts_problem2.png differ
diff --git a/docs/images/IS_docs/belt_graphs/different_tensions.png b/docs/images/IS_docs/belt_graphs/different_tensions.png
new file mode 100644
index 0000000..f851cea
Binary files /dev/null and b/docs/images/IS_docs/belt_graphs/different_tensions.png differ
diff --git a/docs/images/IS_docs/belt_graphs/different_tensions2.png b/docs/images/IS_docs/belt_graphs/different_tensions2.png
new file mode 100644
index 0000000..2cb7c96
Binary files /dev/null and b/docs/images/IS_docs/belt_graphs/different_tensions2.png differ
diff --git a/docs/images/IS_docs/belt_graphs/perfect graph.png b/docs/images/IS_docs/belt_graphs/perfect graph.png
new file mode 100644
index 0000000..90c1583
Binary files /dev/null and b/docs/images/IS_docs/belt_graphs/perfect graph.png differ
diff --git a/docs/images/IS_docs/ghosting.png b/docs/images/IS_docs/ghosting.png
new file mode 100644
index 0000000..7db79d5
Binary files /dev/null and b/docs/images/IS_docs/ghosting.png differ
diff --git a/docs/images/IS_docs/harmonic_oscillator.png b/docs/images/IS_docs/harmonic_oscillator.png
new file mode 100644
index 0000000..3b0e33b
Binary files /dev/null and b/docs/images/IS_docs/harmonic_oscillator.png differ
diff --git a/docs/images/IS_docs/how_IS_works.png b/docs/images/IS_docs/how_IS_works.png
new file mode 100644
index 0000000..d7b670c
Binary files /dev/null and b/docs/images/IS_docs/how_IS_works.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/TAP_125hz.png b/docs/images/IS_docs/shaper_graphs/TAP_125hz.png
new file mode 100644
index 0000000..ca4b657
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/TAP_125hz.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/TAP_125hz_2.png b/docs/images/IS_docs/shaper_graphs/TAP_125hz_2.png
new file mode 100644
index 0000000..894053a
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/TAP_125hz_2.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/fan-off.png b/docs/images/IS_docs/shaper_graphs/fan-off.png
new file mode 100644
index 0000000..238fc7b
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/fan-off.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/fan-on.png b/docs/images/IS_docs/shaper_graphs/fan-on.png
new file mode 100644
index 0000000..34c8fff
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/fan-on.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/insane_accels.png b/docs/images/IS_docs/shaper_graphs/insane_accels.png
new file mode 100644
index 0000000..4592dd4
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/insane_accels.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/insane_accels2.png b/docs/images/IS_docs/shaper_graphs/insane_accels2.png
new file mode 100644
index 0000000..e1675c0
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/insane_accels2.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/low_canbus.png b/docs/images/IS_docs/shaper_graphs/low_canbus.png
new file mode 100644
index 0000000..c2b4d49
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/low_canbus.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/low_canbus_solved.png b/docs/images/IS_docs/shaper_graphs/low_canbus_solved.png
new file mode 100644
index 0000000..d3e2630
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/low_canbus_solved.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/low_freq_bad.png b/docs/images/IS_docs/shaper_graphs/low_freq_bad.png
new file mode 100644
index 0000000..0e5c57f
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/low_freq_bad.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/low_freq_bad2.png b/docs/images/IS_docs/shaper_graphs/low_freq_bad2.png
new file mode 100644
index 0000000..964490d
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/low_freq_bad2.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/reso_good_x.png b/docs/images/IS_docs/shaper_graphs/reso_good_x.png
new file mode 100644
index 0000000..7838cb1
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/reso_good_x.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/reso_good_y.png b/docs/images/IS_docs/shaper_graphs/reso_good_y.png
new file mode 100644
index 0000000..b3d07cb
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/reso_good_y.png differ
diff --git a/docs/images/IS_docs/shaper_graphs/shaper_reco.png b/docs/images/IS_docs/shaper_graphs/shaper_reco.png
new file mode 100644
index 0000000..2d27fe6
Binary files /dev/null and b/docs/images/IS_docs/shaper_graphs/shaper_reco.png differ
diff --git a/docs/images/resonances_belts_example.png b/docs/images/resonances_belts_example.png
new file mode 100644
index 0000000..38de9f2
Binary files /dev/null and b/docs/images/resonances_belts_example.png differ
diff --git a/docs/images/resonances_x_example.png b/docs/images/resonances_x_example.png
new file mode 100644
index 0000000..6d6c538
Binary files /dev/null and b/docs/images/resonances_x_example.png differ
diff --git a/docs/images/resonances_y_example.png b/docs/images/resonances_y_example.png
new file mode 100644
index 0000000..86fe46f
Binary files /dev/null and b/docs/images/resonances_y_example.png differ
diff --git a/docs/images/vibrations_example.png b/docs/images/vibrations_example.png
new file mode 100644
index 0000000..3fe6b3a
Binary files /dev/null and b/docs/images/vibrations_example.png differ
diff --git a/docs/input_shaper.md b/docs/input_shaper.md
new file mode 100644
index 0000000..cfb59c7
--- /dev/null
+++ b/docs/input_shaper.md
@@ -0,0 +1,163 @@
+# Tuning Klipper's Input Shaper system
+
+As more and more people use my macros, questions about interpreting the resonance testing results arise. This document aims to provide some guidance on how to interpret them. Keep in mind that there is no universal method: different people may interpret the results differently or could have other opinions. It's important to experiment and find what works best for your own 3D printer.
+
+
+## Understanding ringing
+When a 3D printer moves, the motors apply some force to move the toolhead along a precise path. This force is transmitted from the motor shaft to the toolhead through the entire printer motion system. When the toolhead reaches a sharp corner and needs to change direction, its inertia makes it want to continue the movement in a straight line. The motors force the toolhead to turn, but the belts act like springs, allowing the toolhead to oscillate in the perpendicular direction. These oscillations produce visible artifacts on the printed parts, known as ringing or ghosting.
+
+
+
+
+## Reading the graphs
+
+When tuning Input Shaper, keep the following in mind:
+ 1. **Focus on the shape of the graphs, not the exact numbers**. There can be differences between ADXL boards or even printers, so there is no specific "target" value. This means that you shouldn't expect to get the same graphs across different printers, even if they are similar in terms of brand, parts, size and assembly.
+ 1. Small differences between consecutive test runs are normal, as ADXL quality and sensitivity are quite variable between boards.
+ 1. Perform the tests when the machine is heat-soaked and close to printing conditions, as belt tension can change with temperature.
+ 1. Avoid running the toolhead fans during the tests, as they introduce unnecessary noise to the graphs, making them harder to interpret. This means that even though you should heat-soak the printer, you should refrain from activating the hotend heater during the test, as it would also trigger the hotend fan. However, as a bad fan can introduce vibrations of its own, feel free to use the test to diagnose an unbalanced fan, as seen in the [Examples of Input Shaper graphs](#examples-of-input-shaper-graphs) section.
+ 1. Ensure the accuracy of your ADXL measurements by running a `MEASURE_AXES_NOISE` test and checking that the result is below 100 for all axes. If it's not, check your ADXL and wiring before continuing.
+ 1. The graphs can only show symptoms of possible problems and in different ways. Those symptoms can sometimes suggest causes, but they rarely pinpoint issues.
+ 1. Remember why you're running these tests (clean prints) and don't become too obsessive over perfect graphs.
+
+
+## Belt graphs
+
+**Before starting, ensure that the belts are properly tensioned**. For example, you can follow the [Voron belt tensioning documentation](https://docs.vorondesign.com/tuning/secondary_printer_tuning.html#belt-tension). This is crucial!
+
+Next, generate the belt graphs using the `BELTS_SHAPER_CALIBRATION` macro. Refer to the [IS workflow documentation](./features/is_workflow.md) for more information.
+
+#### Read the graphs
+
+On these graphs, you want both curves to look similar and overlap to form a single curve. Try to make them fit as closely as possible. It's acceptable to have "noise" around the main peak, but it should be present on both curves with a comparable amplitude. Keep in mind that when you tighten a belt, its main peak should move diagonally toward the upper right corner, changing significantly in amplitude and slightly in frequency. Additionally, the order of magnitude of the main peaks *should typically* range from ~100k to ~1M on most machines.
+
+The resonant frequency/amplitude of the curves depends primarily on three parameters (and the actual tension):
+ - the *mass of the toolhead*, which is identical for both belts and has no effect here
+ - the *belt "elasticity"*, which changes over time as the belt wears. Ensure that you use the **same belt brand and type** for both A and B belts and that they were **installed at the same time**
+ - the *belt path length*, which is why they must have the **exact same number of teeth** so that one belt path is not longer than the other when tightened at the same tension
+
+**If these three conditions are met, the curves should not differ**; if they do, you can be sure that there is an underlying problem in at least one of the belt paths. Also, if the belt graphs have low amplitude curves (no distinct peaks) and a lot of noise, you will probably also have poor input shaper graphs. So before you continue, ensure that you have good belt graphs or fix your belt paths first. Start by checking the belt tension, bearings, gantry screws, alignment of the belts on the idlers, and so on.
+
+#### Examples of belt graphs
+
+| Comment | Belt graphs examples 1 | Belt graphs examples 2 |
+| --- | --- | --- |
+| **Both of these graphs are considered good**. As you can see, the main peak doesn't have to be perfect, as long as you can get both curves to overlap |  |  |
+| **These two graphs show incorrect belt tension**: in each case, one of the belts has insufficient tension (first is B belt, second is A belt). Begin by tightening it half a turn and measuring again |  |  |
+| **These two graphs indicate a belt path problem**: the belt tension could be adequate, but something else is happening in the belt paths. Start by checking the bearings and belt wear, or belt alignment |  |  |
+
+
+
+
+
+## Input Shaper graphs
+
+**Before starting, ensure that the belts are properly tensioned** and that you already have good and clear belt graphs (see the previous section).
+
+Next, generate the Input Shaper graphs using the `AXES_SHAPER_CALIBRATION` macro. Refer to the [IS workflow documentation](./features/is_workflow.md) for more information.
+
+#### Read the graphs
+
+To effectively analyze input shaper graphs, there is no one-size-fits-all approach due to the variety of factors that can impact the 3D printer's performance or input shaper measurements. However, here are some hints on reading the graphs:
+ - A graph with a **single and thin peak** well detached from the background noise is ideal, as it can be easily filtered by input shaping. But depending on the machine and its mechanical configuration, it's not always possible to obtain this shape. The key to getting better graphs is a clean mechanical assembly with a special focus on the rigidity and stiffness of everything, from the table through the frame of the printer to the toolhead.
+ - As for the belt graphs, **focus on the shape of the graphs, not the exact frequency and energy value**. Indeed, the energy value doesn't provide much useful information. Use it only to compare two of your own graphs and to measure the impact of your mechanical changes between two consecutive tests, but never use it to compare against graphs from other people or other machines.
+
+When you are satisfied with your graphs, you will need to use the auto-computed values at the top to set the Input Shaping filters in your Klipper configuration.
+
+
+
+Here is some info to help you understand them:
+ - These values are computed automatically by a specific Klipper algorithm. This algorithm works pretty well if the graphs are clean enough. But **if your graphs are junk, it can't do magic and will give you pretty bad recommendations**: they will do nothing or even make the ringing worse, so do not use the values and fix your printer first!
+ - The recommended acceleration values (`accel<=...`) are not meant to be read alone. You need to also look at the `vibr` and `sm` values. They will give you the percentage of remaining vibrations and the smoothing after Input Shaping, if you use the recommended acceleration.
+ - Nothing will prevent you from using higher acceleration values; they are not a limit. However, if you do so, expect more vibrations and smoothing. Also, Input Shaping may find its limits and not be able to suppress all the ringing on your parts.
+ - The remaining vibrations `vibr` value is highly linked to ringing. So try to choose a filter with a very low value or even 0% if possible.
+ - High acceleration values are not useful at all if there is still a high level of remaining vibrations. You should address any mechanical issues before continuing.
+ - Each line represents the name of a different filtering algorithm. Each of them has its pros and cons:
+ * `ZV` is a pretty light filter and usually has some remaining vibrations. My recommendation would be to use it only if you want to do speed benchies and get the highest acceleration values while maintaining a low amount of smoothing on your parts. If you have "perfect" graphs and do not care that much about some remaining ringing, you can try it.
+ * `MZV` is most of the time the best filter on a well-tuned machine. It's a good compromise for low remaining vibrations while still allowing pretty good acceleration values. Keep in mind, `MZV` is only recommended by the algorithm on good graphs.
+ * `EI` works "ok" if you are not able to get better graphs. But first, try to fix your mechanical issues as best as you can before using it: almost every printer should be able to run `MZV` instead.
+ * `2HUMP_EI` and `3HUMP_EI` are not recommended and should be used only as a last resort. Usually, they lead to a high level of smoothing in order to suppress the ringing while also using relatively low acceleration values. If you get these algorithms recommended, you can almost be sure that you have mechanical problems under the hood (that lead to pretty bad or "wide" graphs).
+
+Then, just add to your configuration:
+```
+[input_shaper]
+shaper_freq_x: ... # center frequency for the X axis filter
+shaper_type_x: ... # filter type for the X axis
+shaper_freq_y: ... # center frequency for the Y axis filter
+shaper_type_y: ... # filter type for the Y axis
+```
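As a purely illustrative example (these numbers are made up; use the frequencies and shaper types recommended at the top of your own graphs):

```
[input_shaper]
shaper_freq_x: 53.8   # made-up example value
shaper_type_x: mzv
shaper_freq_y: 38.4   # made-up example value
shaper_type_y: mzv
```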
+
+#### Useful facts and myths debunking
+
+Sometimes people advise limiting the data to 100 Hz by manually editing the resulting .csv file because excitation does not go that high and these values should be ignored and considered wrong. This is a misconception and a bad idea because the excitation frequency is very different from the response frequency of the system, and they are not correlated at all. Indeed, it's plausible to get higher vibration frequencies, and editing the file manually will just "ignore" them and make them invisible even if they are still there on your printer. While higher frequency vibrations may not have a substantial effect on print quality, they can still indicate other issues within the system, likely noise and wear to the mechanical parts. Instead, focus on addressing the mechanical issues causing these problems.
+
+Another point is that I do not recommend using an extra-light X-beam (aluminum or carbon) on your machine, as it can negatively impact the printer's performance and Input Shaping results. Indeed, there is more than just mass at play (see the [theory behind it](#theory-behind-it)): lower mass also means more flexibility, making the beam more prone to wobble. This will negatively impact the Y axis graphs, as the X-beam flexes under high accelerations.
+
+Finally, keep in mind that each axis has its own properties, such as mass and geometry, which will lead to different behaviors for each of them and will require different filters. Using the same input shaping settings for both axes is only valid if both axes are similar mechanically: this may be true for some machines, mainly Cross gantry configurations such as [CroXY](https://github.com/CroXY3D/CroXY) or [Annex-Engineering](https://github.com/Annex-Engineering) printers, but not for others.
+
+#### Examples of Input Shaper graphs
+
+In the following examples, the graphs are random graphs found online or sent to me for analysis. They are not necessarily to be read in pairs: the two graph columns are here to illustrate the comment with more than one example.
+
+| Comment | Example 1 | Example 2 |
+| --- | --- | --- |
+| **These two graphs are considered good**. As you can see, there is only one thin peak, well separated from the background noise |  |  |
+| **These two graphs are really bad**: there is a lot of noise all over the spectrum. Something is really wrong and you should check all moving parts and screws. You should also check the belt tension and proper geometry of the gantry (racking) |  |  |
+| These two graphs have some **low frequency energy**. This usually means that there is some binding or grinding in the kinematics: something isn't moving freely. Check the belt alignment on the idlers, bearings, etc... |  |  |
+| These two graphs show **the TAP wobble problem**: check that the TAP MGN rail has the correct preload for stiffness and that the magnets are correct N52. Also pay attention to the assembly to make sure that everything is properly tightened |  |  |
+| Here you can see **the effect of an unbalanced fan**: even though the fans should be off during the final IS tuning, you can use this test to validate their correct behavior: an unbalanced fan usually adds a very thin peak around 100-150 Hz that disappears when the fan is off during the measurement |  |  |
+| The graph on the left shows **a CANbus problem** (problem solved on the right): although the general shape looks good, the graph is not smooth but spiky. There is also usually some low frequency energy. This happens when the bus speed is too low: set it to 1M to solve the problem |  |  |
+
+
+
+
+
+## Klippain vibrations graphs
+
+More details to be added later in this section...
+
+
+
+
+
+## Special note on the accelerometer (ADXL) mounting point
+
+Input Shaping algorithms work by suppressing a single resonant frequency (or a range around a single resonant frequency). When setting the filter, **the primary goal is to target the resonant frequency of the toolhead and belts system** (see the [theory behind it](#theory-behind-it)), as this system has the most significant impact on print quality and is the root cause of ringing.
+
+When setting up Input Shaper, it is important to consider the accelerometer mounting point. There are mainly two possibilities, each with its pros and cons:
+ 1. **Directly at the nozzle tip**: This method provides a more accurate and comprehensive measurement of everything in your machine. It captures the main resonant frequency along with other vibrations and movements, such as toolhead wobbling and printer frame movements. This approach is excellent for diagnosing your machine's kinematics and troubleshooting problems. However, it also leads to noisier graphs, making it harder for the algorithm to select the correct filter for input shaping. Graphs may appear worse, but this is due to the different "point of view" of the printer's behavior.
+ 1. **At the toolhead's center of gravity**: I personally recommend mounting the accelerometer in this way, as it provides a clear view of the main resonant frequency you want to target, allowing for accurate input shaper filter settings. This approach results in cleaner graphs with less visible noise from other subsystem vibrations, making interpretation easier for both automatic algorithms and users. However, this method provides less detail in the graphs and may be slightly less effective for troubleshooting printer problems.
+
+A suggested workflow is to first use the nozzle mount to diagnose mechanical issues, such as loose screws or a bad X carriage. Once the mechanics are in good condition, switch to a mounting point closer to the toolhead's center of gravity to set the input shaper filters, using cleaner graphs that highlight the most impactful frequency.
+
+
+
+
+## Theory behind it
+
+### Modeling the motion system
+The motion system of a 3D printer can be described as a spring and mass system, best modeled as a [harmonic oscillator](https://en.wikipedia.org/wiki/Harmonic_oscillator). This type of system has two key parameters:
+
+| Schematics | Undamped resonant frequency (natural frequency) | Damping ratio ζ |
+| --- | --- | --- |
+|  | $$f = \frac{1}{2\pi}\sqrt{\frac{k}{m}}$$ | $$\zeta = \frac{c}{2}\sqrt{\frac{1}{km}}$$ |
+| See [here for examples](https://beltoforion.de/en/harmonic_oscillator/) | `k` [N/m]: spring constant<br>`m` [kg]: moving mass | `c` [N·s/m]: viscous damping coefficient<br>`k` [N/m]: spring constant<br>`m` [kg]: moving mass |
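For illustration, these two formulas can be computed directly; the `k`, `m` and `c` values below are invented for the example, not measured from any real printer:

```python
import math

def natural_frequency(k: float, m: float) -> float:
    """Undamped natural frequency in Hz: f = (1/2pi) * sqrt(k/m)."""
    return math.sqrt(k / m) / (2 * math.pi)

def damping_ratio(c: float, k: float, m: float) -> float:
    """Dimensionless damping ratio: zeta = c / (2 * sqrt(k*m))."""
    return c / (2 * math.sqrt(k * m))

# Made-up example: k = 40 kN/m effective stiffness, m = 0.35 kg toolhead,
# c = 30 N.s/m viscous damping
f = natural_frequency(40_000, 0.35)     # ~53.8 Hz
zeta = damping_ratio(30, 40_000, 0.35)  # ~0.127
```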
+
+When an oscillating input force is applied at a resonant frequency (or a Fourier component of it) to a dynamic system, the system oscillates at a higher amplitude than when the same force is applied at other, non-resonant frequencies. This is called resonance; it can be dangerous for some systems, but on our printers it mainly leads to vibrations and oscillations of the toolhead.
+
+On the other hand, the damping ratio (ζ) is a dimensionless measure describing how oscillations in a system decay after a perturbation. It can vary from underdamped (ζ < 1), through critically damped (ζ = 1) to overdamped (ζ > 1).
+
+In 3D printers, it's quite challenging to measure the spring constant `k` and even more challenging to measure the viscous damping coefficient `c`, as they are affected by various factors such as belts, plastic parts, frame rigidity, rails, friction, grease, and motor control. Furthermore, a 3D printer is made up of many subsystems, each with its own behavior. Some subsystems, such as the toolhead/belts system, have a bigger impact on ringing than others, such as the motor shaft resonance for example.
+
+### How Input Shaping helps
+The rapid movement of machines is a challenging control problem because it often results in high levels of vibration. As a result, machines are typically moved relatively slowly. Input shaping is an open-loop control method that allows for higher speeds of motion by limiting vibration induced by the reference command. It can also improve the reliability of the stealthChop mode of Trinamic stepper drivers.
+
+It works by creating a command signal that cancels its own vibration, achieved by [convoluting](https://en.wikipedia.org/wiki/Convolution) specifically crafted impulse signals (A2) with the original system control signal (A1). The resulting shaped signal is then used to drive the system (Total Response). To craft these impulses, the system's undamped resonant frequency and damping ratio are used.
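As a sketch of how such impulses are crafted, here is the classic two-impulse ZV shaper construction from the input shaping literature (a textbook formulation, not Klipper's exact implementation):

```python
import math

def zv_shaper(freq_hz: float, zeta: float = 0.1):
    """Return the two (time, amplitude) impulses of a textbook ZV input
    shaper for a resonance at freq_hz with damping ratio zeta."""
    # Vibration attenuation factor of the damped oscillator
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
    # Damped period; the impulses are spaced half a damped period apart
    t_d = 1.0 / (freq_hz * math.sqrt(1.0 - zeta**2))
    A1 = 1.0 / (1.0 + K)   # first impulse amplitude
    A2 = K / (1.0 + K)     # second impulse amplitude (A1 + A2 = 1)
    return [(0.0, A1), (0.5 * t_d, A2)]
```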
+
+
+
+Klipper measures these parameters by exciting the printer with a series of input commands and recording the response behavior using an accelerometer. Resonances can be identified on the resulting graphs by large spikes indicating their frequency and energy. Additionally, the damping ratio is usually unknown and hard to estimate without special equipment, so Klipper uses a default value of 0.1, which is a good all-round value that works well for most 3D printers.
diff --git a/is_workflow/IS_shaper_calibrate.cfg b/is_workflow/IS_shaper_calibrate.cfg
new file mode 100644
index 0000000..04aeaf4
--- /dev/null
+++ b/is_workflow/IS_shaper_calibrate.cfg
@@ -0,0 +1,97 @@
+################################################
+###### STANDARD INPUT_SHAPER CALIBRATIONS ######
+################################################
+# Written by Frix_x#0161 #
+# @version: 1.4
+
+# CHANGELOG:
+# v1.4: added possibility to only run one axis at a time for the axes shaper calibration
+# v1.3: added possibility to override the default parameters
+# v1.2: added EXCITATE_AXIS_AT_FREQ to hold a specific excitation frequency on an axis and diagnose mechanical problems
+# v1.1: added M400 to validate that the files are correctly saved to disk
+# v1.0: first version of the automatic input shaper workflow
+
+
+### What is it ? ###
+# This macro helps you configure the input shaper algorithm of Klipper by running the tests sequentially and calling an automatic script
+# that generates the graphs, manages the files, and so on. It's basically a fully automatic input shaper calibration workflow.
+# Results can be found in your config folder using the Fluidd/Mainsail file manager.
+
+# The goal is to make it easy to set, share and use it.
+
+# Usage:
+# 1. Call the AXES_SHAPER_CALIBRATION macro, wait for it to end and compute the graphs. Then look for the results in the results folder.
+# 2. Call the BELTS_SHAPER_CALIBRATION macro, wait for it to end and compute the graphs. Then look for the results in the results folder.
+# 3. If you find out some strange noise, you can use the EXCITATE_AXIS_AT_FREQ macro to diagnose the origin
+
+
+[gcode_macro AXES_SHAPER_CALIBRATION]
+description: Run standard input shaper test for all axes
+gcode:
+ {% set verbose = params.VERBOSE|default(true) %}
+ {% set min_freq = params.FREQ_START|default(5)|float %}
+ {% set max_freq = params.FREQ_END|default(133.3)|float %}
+ {% set hz_per_sec = params.HZ_PER_SEC|default(1)|float %}
+ {% set axis = params.AXIS|default("all")|string|lower %}
+
+ {% set X, Y = False, False %}
+
+ {% if axis == "all" %}
+ {% set X, Y = True, True %}
+ {% elif axis == "x" %}
+ {% set X = True %}
+ {% elif axis == "y" %}
+ {% set Y = True %}
+ {% else %}
+ { action_raise_error("AXIS selection invalid. Should be either all, x or y!") }
+ {% endif %}
+
+ {% if X %}
+ TEST_RESONANCES AXIS=X OUTPUT=raw_data NAME=x FREQ_START={min_freq} FREQ_END={max_freq} HZ_PER_SEC={hz_per_sec}
+ M400
+
+ {% if verbose %}
+ RESPOND MSG="X axis shaper graphs generation..."
+ {% endif %}
+ RUN_SHELL_COMMAND CMD=plot_graph PARAMS=SHAPER
+ {% endif %}
+
+ {% if Y %}
+ TEST_RESONANCES AXIS=Y OUTPUT=raw_data NAME=y FREQ_START={min_freq} FREQ_END={max_freq} HZ_PER_SEC={hz_per_sec}
+ M400
+
+ {% if verbose %}
+ RESPOND MSG="Y axis shaper graphs generation..."
+ {% endif %}
+ RUN_SHELL_COMMAND CMD=plot_graph PARAMS=SHAPER
+ {% endif %}
+
+
+[gcode_macro BELTS_SHAPER_CALIBRATION]
+description: Run custom diagonal axis tests to analyze the belts on CoreXY printers
+gcode:
+ {% set verbose = params.VERBOSE|default(true) %}
+ {% set min_freq = params.FREQ_START|default(5)|float %}
+ {% set max_freq = params.FREQ_END|default(133.33)|float %}
+ {% set hz_per_sec = params.HZ_PER_SEC|default(1)|float %}
+
+ TEST_RESONANCES AXIS=1,1 OUTPUT=raw_data NAME=b FREQ_START={min_freq} FREQ_END={max_freq} HZ_PER_SEC={hz_per_sec}
+ M400
+ TEST_RESONANCES AXIS=1,-1 OUTPUT=raw_data NAME=a FREQ_START={min_freq} FREQ_END={max_freq} HZ_PER_SEC={hz_per_sec}
+ M400
+
+ {% if verbose %}
+ RESPOND MSG="Belts graphs generation..."
+ {% endif %}
+ RUN_SHELL_COMMAND CMD=plot_graph PARAMS=BELTS
+
+
+[gcode_macro EXCITATE_AXIS_AT_FREQ]
+description: Maintain a specified excitation frequency for some time to diagnose vibrations
+gcode:
+ {% set FREQUENCY = params.FREQUENCY|default(25)|int %}
+ {% set TIME = params.TIME|default(10)|int %}
+ {% set AXIS = params.AXIS|default("x")|string|lower %}
+
+ TEST_RESONANCES OUTPUT=raw_data AXIS={AXIS} FREQ_START={FREQUENCY-1} FREQ_END={FREQUENCY+1} HZ_PER_SEC={1/(TIME/3)}
+ M400
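For reference, the HZ_PER_SEC formula above ties TIME to the length of the sweep. A quick standalone Python check (assuming Klipper sweeps linearly from FREQ_START to FREQ_END at HZ_PER_SEC):

```python
def sweep_duration(time_s: float, span_hz: float = 2.0) -> float:
    # The macro sweeps FREQUENCY-1 .. FREQUENCY+1 (a 2 Hz span)
    # at HZ_PER_SEC = 1 / (TIME / 3), i.e. 3 / TIME Hz per second
    hz_per_sec = 1 / (time_s / 3)
    return span_hz / hz_per_sec

print(sweep_duration(10))  # with the default TIME=10, roughly 6.67 s of excitation
```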
diff --git a/is_workflow/IS_vibrations_measurements.cfg b/is_workflow/IS_vibrations_measurements.cfg
new file mode 100644
index 0000000..40af770
--- /dev/null
+++ b/is_workflow/IS_vibrations_measurements.cfg
@@ -0,0 +1,191 @@
+################################################
+###### VIBRATIONS AND SPEED OPTIMIZATIONS ######
+################################################
+# Written by Frix_x#0161 #
+# @version: 2.1
+
+# CHANGELOG:
+# v2.1: allowed decimal entries for the speed and increment, and added the E axis as an option to be measured
+# v2.0: added the possibility to measure multiple axes
+# v1.0: first speed and vibrations optimization macro
+
+
+### What is it ? ###
+# This macro helps you identify the speed settings that exacerbate the vibrations of the machine (i.e. where the frame resonates badly).
+# It also helps to find the clean speed ranges where the machine is silent.
+# I had some strong vibrations at very specific speeds on my machine (52mm/s for example) and I wanted to find all these problematic speeds
+# to avoid them in my slicer profile and finally get the silent machine I was dreaming of!
+
+# It works by moving the toolhead at different speed settings while recording the vibrations using the ADXL chip. Then the macro calls a custom script
+# to compute and find the best speed settings. The results can be found in your config folder using the Fluidd/Mainsail file manager.
+
+# The goal is to make it easy to set, share and use it.
+
+# This macro is parametric and most of the values can be adjusted with their respective input parameters.
+# It can be called without any parameters - in which case the default values would be used - or with any combination of parameters as desired.
+
+# Usage:
+# 1. DO YOUR INPUT SHAPER CALIBRATION FIRST !!! This macro should not be used before that, as it would be useless and the results invalid.
+# 2. Call the VIBRATIONS_CALIBRATION macro with the speed range you want to measure (default 20 to 200mm/s with a 2mm/s increment).
+#    Be careful with the Z_HEIGHT parameter, which defaults to 20mm -> if your ADXL is under the nozzle, increase it to avoid crashing the ADXL into the bed of the machine.
+# 3. Wait for it to finish all the measurements and compute the graph, then look at it in the results folder.
+
+
+[gcode_macro VIBRATIONS_CALIBRATION]
+gcode:
+ #
+ # PARAMETERS
+ #
+ {% set size = params.SIZE|default(60)|int %} # size of the area where the movements are done
+ {% set direction = params.DIRECTION|default('XY') %} # can be set to either XY, AB, ABXY, A, B, X, Y, Z
+ {% set z_height = params.Z_HEIGHT|default(20)|int %} # z height to put the toolhead before starting the movements
+ {% set verbose = params.VERBOSE|default(true) %} # Whether to log the current speed in the console
+
+ {% set min_speed = params.MIN_SPEED|default(20)|float * 60 %} # minimum feedrate for the movements
+ {% set max_speed = params.MAX_SPEED|default(200)|float * 60 %} # maximum feedrate for the movements
+ {% set speed_increment = params.SPEED_INCREMENT|default(2)|float * 60 %} # feedrate increment between each move
+ {% set feedrate_travel = params.TRAVEL_SPEED|default(200)|int * 60 %} # travel feedrate between moves
+
+ {% set accel_chip = params.ACCEL_CHIP|default("adxl345") %} # ADXL chip name in the config
+
+ #
+ # COMPUTED VALUES
+ #
+ {% set mid_x = printer.toolhead.axis_maximum.x|float / 2 %}
+ {% set mid_y = printer.toolhead.axis_maximum.y|float / 2 %}
+ {% set nb_samples = ((max_speed - min_speed) / speed_increment + 1) | int %}
+
+ {% set direction_factor = {
+ 'XY' : {
+ 'start' : {'x': -0.5, 'y': -0.5 },
+ 'move_factors' : {
+ '0' : {'x': 0.5, 'y': -0.5, 'z': 0.0 },
+ '1' : {'x': 0.5, 'y': 0.5, 'z': 0.0 },
+ '2' : {'x': -0.5, 'y': 0.5, 'z': 0.0 },
+ '3' : {'x': -0.5, 'y': -0.5, 'z': 0.0 }
+ }
+ },
+ 'AB' : {
+ 'start' : {'x': 0.0, 'y': 0.0 },
+ 'move_factors' : {
+ '0' : {'x': 0.5, 'y': -0.5, 'z': 0.0 },
+ '1' : {'x': -0.5, 'y': 0.5, 'z': 0.0 },
+ '2' : {'x': 0.0, 'y': 0.0, 'z': 0.0 },
+ '3' : {'x': 0.5, 'y': 0.5, 'z': 0.0 },
+ '4' : {'x': -0.5, 'y': -0.5, 'z': 0.0 },
+ '5' : {'x': 0.0, 'y': 0.0, 'z': 0.0 }
+ }
+ },
+ 'ABXY' : {
+ 'start' : {'x': -0.5, 'y': 0.5 },
+ 'move_factors' : {
+ '0' : {'x': -0.5, 'y': -0.5, 'z': 0.0 },
+ '1' : {'x': 0.5, 'y': -0.5, 'z': 0.0 },
+ '2' : {'x': -0.5, 'y': 0.5, 'z': 0.0 },
+ '3' : {'x': 0.5, 'y': 0.5, 'z': 0.0 },
+ '4' : {'x': -0.5, 'y': -0.5, 'z': 0.0 },
+ '5' : {'x': -0.5, 'y': 0.5, 'z': 0.0 }
+ }
+ },
+ 'B' : {
+ 'start' : {'x': 0.5, 'y': 0.5 },
+ 'move_factors' : {
+ '0' : {'x': -0.5, 'y': -0.5, 'z': 0.0 },
+ '1' : {'x': 0.5, 'y': 0.5, 'z': 0.0 }
+ }
+ },
+ 'A' : {
+ 'start' : {'x': -0.5, 'y': 0.5 },
+ 'move_factors' : {
+ '0' : {'x': 0.5, 'y': -0.5, 'z': 0.0 },
+ '1' : {'x': -0.5, 'y': 0.5, 'z': 0.0 }
+ }
+ },
+ 'X' : {
+ 'start' : {'x': -0.5, 'y': 0.0 },
+ 'move_factors' : {
+ '0' : {'x': 0.5, 'y': 0.0, 'z': 0.0 },
+ '1' : {'x': -0.5, 'y': 0.0, 'z': 0.0 }
+ }
+ },
+ 'Y' : {
+ 'start' : {'x': 0.0, 'y': 0.5 },
+ 'move_factors' : {
+ '0' : {'x': 0.0, 'y': -0.5, 'z': 0.0 },
+ '1' : {'x': 0.0, 'y': 0.5, 'z': 0.0 }
+ }
+ },
+ 'Z' : {
+ 'start' : {'x': 0.0, 'y': 0.0 },
+ 'move_factors' : {
+ '0' : {'x': 0.0, 'y': 0.0, 'z': 1.0 },
+ '1' : {'x': 0.0, 'y': 0.0, 'z': 0.0 }
+ }
+ },
+ 'E' : {
+ 'start' : {'x': 0.0, 'y': 0.0 },
+ 'move_factor' : 0.05
+ }
+ }
+ %}
+
+ #
+ # STARTING...
+ #
+ {% if not 'xyz' in printer.toolhead.homed_axes %}
+ { action_raise_error("Must Home printer first!") }
+ {% endif %}
+
+ {% if params.SPEED_INCREMENT|default(2)|float * 100 != (params.SPEED_INCREMENT|default(2)|float * 100)|int %}
+ { action_raise_error("Only 2 decimal digits are allowed for SPEED_INCREMENT") }
+ {% endif %}
+
+ {% if (size / (max_speed / 60)) < 0.25 and direction != 'E' %}
+ { action_raise_error("SIZE is too small for this MAX_SPEED. Increase SIZE or decrease MAX_SPEED!") }
+ {% endif %}
+
+ {% if not (direction in direction_factor) %}
+ { action_raise_error("DIRECTION is not valid. Only XY, AB, ABXY, A, B, X, Y, Z or E are allowed!") }
+ {% endif %}
+
+ {action_respond_info("")}
+ {action_respond_info("Starting speed and vibration calibration")}
+ {action_respond_info("This operation cannot be interrupted by normal means. Hit the \"emergency stop\" button to stop it if needed")}
+ {action_respond_info("")}
+
+ SAVE_GCODE_STATE NAME=STATE_VIBRATIONS_CALIBRATION
+
+ M83
+ G90
+
+ # Going to the start position
+ G1 Z{z_height}
+ G1 X{mid_x + (size * direction_factor[direction].start.x) } Y{mid_y + (size * direction_factor[direction].start.y)} F{feedrate_travel}
+
+ # vibration pattern for each frequency
+ {% for curr_sample in range(0, nb_samples) %}
+ {% set curr_speed = min_speed + curr_sample * speed_increment %}
+ {% if verbose %}
+ RESPOND MSG="{"Current speed: %.2f mm/s" % (curr_speed / 60)|float}"
+ {% endif %}
+
+ ACCELEROMETER_MEASURE CHIP={accel_chip}
+ {% if direction == 'E' %}
+ G0 E{curr_speed*direction_factor[direction].move_factor} F{curr_speed}
+ {% else %}
+ {% for key, factor in direction_factor[direction].move_factors|dictsort %}
+ G1 X{mid_x + (size * factor.x) } Y{mid_y + (size * factor.y)} Z{z_height + (size * factor.z)} F{curr_speed}
+ {% endfor %}
+ {% endif %}
+ ACCELEROMETER_MEASURE CHIP={accel_chip} NAME=sp{("%.2f" % (curr_speed / 60)|float)|replace('.','_')}n1
+
+ G4 P300
+ M400
+ {% endfor %}
+
+ {% if verbose %}
+ RESPOND MSG="Graphs generation... Please wait a minute or two and look in the configured folder."
+ {% endif %}
+ RUN_SHELL_COMMAND CMD=plot_graph PARAMS="VIBRATIONS {direction}"
+
+ RESTORE_GCODE_STATE NAME=STATE_VIBRATIONS_CALIBRATION
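The direction_factor table above scales the test area around the bed center. A minimal Python sketch (a hypothetical helper mirroring the Jinja arithmetic of the G1 moves) for the XY pattern:

```python
def segment_targets(mid_x, mid_y, size, factors):
    # Mirrors G1 X{mid_x + size * factor.x} Y{mid_y + size * factor.y}
    return [(mid_x + size * f['x'], mid_y + size * f['y']) for f in factors]

xy_factors = [{'x': 0.5, 'y': -0.5}, {'x': 0.5, 'y': 0.5},
              {'x': -0.5, 'y': 0.5}, {'x': -0.5, 'y': -0.5}]

# On a 300 mm bed (center 150,150) with SIZE=60, the XY pattern visits:
print(segment_targets(150.0, 150.0, 60.0, xy_factors))
# → [(180.0, 120.0), (180.0, 180.0), (120.0, 180.0), (120.0, 120.0)]
```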
diff --git a/is_workflow/IS_workflow_cmd.cfg b/is_workflow/IS_workflow_cmd.cfg
new file mode 100644
index 0000000..2b0008c
--- /dev/null
+++ b/is_workflow/IS_workflow_cmd.cfg
@@ -0,0 +1,4 @@
+[gcode_shell_command plot_graph]
+command: ~/printer_data/config/is_workflow/scripts/is_workflow.py
+timeout: 600.0
+verbose: True
diff --git a/is_workflow/scripts/graph_belts.py b/is_workflow/scripts/graph_belts.py
new file mode 100644
index 0000000..924db73
--- /dev/null
+++ b/is_workflow/scripts/graph_belts.py
@@ -0,0 +1,548 @@
+#!/usr/bin/env python3
+
+#################################################
+######## CoreXY BELTS CALIBRATION SCRIPT ########
+#################################################
+# Written by Frix_x#0161 #
+# @version: 1.0
+
+# CHANGELOG:
+# v1.0: first version of this tool for enhanced visualization of belt graphs
+
+
+# Be sure to make this script executable using SSH: type 'chmod +x ./graph_belts.py' when in the folder!
+
+#####################################################################
+################ !!! DO NOT EDIT BELOW THIS LINE !!! ################
+#####################################################################
+
+import optparse, matplotlib, sys, importlib, os
+from textwrap import wrap
+from collections import namedtuple
+import numpy as np
+import matplotlib.pyplot, matplotlib.dates, matplotlib.font_manager
+import matplotlib.ticker, matplotlib.gridspec, matplotlib.colors
+import matplotlib.patches
+
+matplotlib.use('Agg')
+
+MAX_TITLE_LENGTH = 65
+ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" # For paired peaks names
+
+PEAKS_DETECTION_THRESHOLD = 0.20
+CURVE_SIMILARITY_SIGMOID_K = 0.6
+DC_GRAIN_OF_SALT_FACTOR = 0.75
+DC_THRESHOLD_METRIC = 1.5e9
+DC_MAX_UNPAIRED_PEAKS_ALLOWED = 4
+
+# Define the SignalData namedtuple
+SignalData = namedtuple('CalibrationData', ['freqs', 'psd', 'peaks', 'paired_peaks', 'unpaired_peaks'])
+
+
+######################################################################
+# Computation of the PSD graph
+######################################################################
+
+# Calculate estimated "power spectral density" using existing Klipper tools
+def calc_freq_response(data):
+ helper = shaper_calibrate.ShaperCalibrate(printer=None)
+ return helper.process_accelerometer_data(data)
+
+
+# Calculate or estimate a "similarity" factor between two PSD curves and scale it to a percentage. This is
+# used here to quantify how close the two belt paths' behaviors and responses are to each other.
+def compute_curve_similarity_factor(signal1, signal2):
+ freqs1 = signal1.freqs
+ psd1 = signal1.psd
+ freqs2 = signal2.freqs
+ psd2 = signal2.psd
+
+ # Interpolate PSDs to match the same frequency bins and do a cross-correlation
+ psd2_interp = np.interp(freqs1, freqs2, psd2)
+ cross_corr = np.correlate(psd1, psd2_interp, mode='full')
+
+ # Find the peak of the cross-correlation and compute a similarity normalized by the energy of the signals
+ peak_value = np.max(cross_corr)
+ similarity = peak_value / (np.sqrt(np.sum(psd1**2) * np.sum(psd2_interp**2)))
+
+ # Apply sigmoid scaling to get better numbers and get a final percentage value
+ scaled_similarity = sigmoid_scale(-np.log(1 - similarity), CURVE_SIMILARITY_SIGMOID_K)
+
+ return scaled_similarity
+
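The normalization used above (peak cross-correlation divided by the geometric mean of the signal energies) makes the factor amplitude-independent; a standalone sketch of just that step:

```python
import numpy as np

def normalized_xcorr_peak(psd1, psd2):
    # Peak of the cross-correlation normalized by the signals' energy:
    # identical curve shapes give 1.0 regardless of amplitude
    cross_corr = np.correlate(psd1, psd2, mode='full')
    return np.max(cross_corr) / np.sqrt(np.sum(psd1**2) * np.sum(psd2**2))

a = np.array([0., 1., 2., 1., 0.])
print(normalized_xcorr_peak(a, a))      # → 1.0
print(normalized_xcorr_peak(a, 2 * a))  # amplitude scaling: still 1.0
```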
+
+# This finds all the peaks in a curve by looking for where the derivative goes from positive to negative.
+# Then only the peaks found above a threshold are kept, to avoid capturing peaks in the low amplitude noise of a signal.
+def detect_peaks(psd, freqs, window_size=5, vicinity=3):
+ # Smooth the curve using a moving average to avoid catching peaks everywhere in noisy signals
+ kernel = np.ones(window_size) / window_size
+ smoothed_psd = np.convolve(psd, kernel, mode='same')
+
+ # Find peaks on the smoothed curve
+ smoothed_peaks = np.where((smoothed_psd[:-2] < smoothed_psd[1:-1]) & (smoothed_psd[1:-1] > smoothed_psd[2:]))[0] + 1
+ detection_threshold = PEAKS_DETECTION_THRESHOLD * psd.max()
+ smoothed_peaks = smoothed_peaks[smoothed_psd[smoothed_peaks] > detection_threshold]
+
+ # Refine peak positions on the original curve
+ refined_peaks = []
+ for peak in smoothed_peaks:
+ local_max = peak + np.argmax(psd[max(0, peak-vicinity):min(len(psd), peak+vicinity+1)]) - vicinity
+ refined_peaks.append(local_max)
+
+ return np.array(refined_peaks), freqs[refined_peaks]
+
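The derivative sign-change rule can be illustrated on a tiny synthetic curve (standalone sketch, independent of the smoothing and thresholding done above):

```python
import numpy as np

def naive_peaks(y):
    # A sample is a peak when the curve rises into it and falls after it,
    # i.e. the discrete derivative flips from positive to negative
    y = np.asarray(y, dtype=float)
    return np.where((y[:-2] < y[1:-1]) & (y[1:-1] > y[2:]))[0] + 1

print(naive_peaks([0, 1, 3, 2, 4, 1]))  # → [2 4]
```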
+
+# This function creates pairs of peaks that are close in frequency on two curves (these are known
+# to be resonance points and must be similar on both belts of a CoreXY kinematics)
+def pair_peaks(peaks1, freqs1, psd1, peaks2, freqs2, psd2):
+ # Compute a dynamic detection threshold to filter and pair peaks efficiently
+ # even if the signal is very noisy (this gets clipped to a maximum of 10Hz diff)
+ distances = []
+ for p1 in peaks1:
+ for p2 in peaks2:
+ distances.append(abs(freqs1[p1] - freqs2[p2]))
+ distances = np.array(distances)
+
+ median_distance = np.median(distances)
+ iqr = np.percentile(distances, 75) - np.percentile(distances, 25)
+
+ threshold = median_distance + 1.5 * iqr
+ threshold = min(threshold, 10)
+
+ # Pair the peaks using the dynamic threshold
+ paired_peaks = []
+ unpaired_peaks1 = list(peaks1)
+ unpaired_peaks2 = list(peaks2)
+
+ while unpaired_peaks1 and unpaired_peaks2:
+ min_distance = threshold + 1
+ pair = None
+
+ for p1 in unpaired_peaks1:
+ for p2 in unpaired_peaks2:
+ distance = abs(freqs1[p1] - freqs2[p2])
+ if distance < min_distance:
+ min_distance = distance
+ pair = (p1, p2)
+
+ if pair is None: # No more pairs below the threshold
+ break
+
+ p1, p2 = pair
+ paired_peaks.append(((p1, freqs1[p1], psd1[p1]), (p2, freqs2[p2], psd2[p2])))
+ unpaired_peaks1.remove(p1)
+ unpaired_peaks2.remove(p2)
+
+ return paired_peaks, unpaired_peaks1, unpaired_peaks2
+
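The dynamic threshold above is a classic median + 1.5x IQR outlier bound, clipped at 10 Hz; isolated as a sketch:

```python
import numpy as np

def pairing_threshold(distances, cap_hz=10.0):
    # Median + 1.5*IQR adapts the pairing tolerance to how noisy the
    # peak distances are, while cap_hz keeps distant peaks unpaired
    d = np.asarray(distances, dtype=float)
    iqr = np.percentile(d, 75) - np.percentile(d, 25)
    return min(np.median(d) + 1.5 * iqr, cap_hz)

print(pairing_threshold([1.0, 1.2, 0.8, 1.1]))  # tight distances: small threshold
print(pairing_threshold([2.0, 50.0, 80.0]))     # noisy distances: clipped to 10.0
```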
+
+######################################################################
+# Computation of a basic signal spectrogram
+######################################################################
+
+def calc_specgram(data):
+ N = data.shape[0]
+ Fs = N / (data[-1,0] - data[0,0])
+ # Round up to a power of 2 for faster FFT
+ M = 1 << int(.5 * Fs - 1).bit_length()
+ window = np.kaiser(M, 6.)
+ def _specgram(x):
+ return matplotlib.mlab.specgram(
+ x, Fs=Fs, NFFT=M, noverlap=M//2, window=window,
+ mode='psd', detrend='mean', scale_by_freq=False)
+
+ d = {'x': data[:,1], 'y': data[:,2], 'z': data[:,3]}
+ pdata, bins, t = _specgram(d['x'])
+ for ax in 'yz':
+ pdata += _specgram(d[ax])[0]
+ return pdata, bins, t
+
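The window length M above is half the sample rate rounded up to a power of two (powers of two keep the FFT fast); that rounding step in isolation:

```python
def fft_window_length(fs):
    # Round 0.5 * Fs up to the next power of two, mirroring
    # M = 1 << int(.5 * Fs - 1).bit_length() in calc_specgram()
    return 1 << int(0.5 * fs - 1).bit_length()

print(fft_window_length(1000))  # → 512
print(fft_window_length(3200))  # → 2048
```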
+
+######################################################################
+# Computation of the differential spectrogram
+######################################################################
+
+# Performs a standard bilinear interpolation for a given x, y point based on surrounding input grid values. This function
+# is part of the logic to re-align both belt spectrograms in order to combine them in the differential spectrogram.
+def bilinear_interpolate(x, y, points, values):
+ x1, x2 = points[0]
+ y1, y2 = points[1]
+
+ f11, f12 = values[0]
+ f21, f22 = values[1]
+
+ interpolated_value = (
+ (f11 * (x2 - x) * (y2 - y) +
+ f21 * (x - x1) * (y2 - y) +
+ f12 * (x2 - x) * (y - y1) +
+ f22 * (x - x1) * (y - y1)) / ((x2 - x1) * (y2 - y1))
+ )
+
+ return interpolated_value
+
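One property worth noting: bilinear interpolation is exact for any function of the form a + bx + cy + dxy. A standalone check (restating the formula so it runs on its own) on f(x, y) = x + y:

```python
def bilinear(x, y, points, values):
    # Same corner weighting as bilinear_interpolate() above
    (x1, x2), (y1, y2) = points
    (f11, f12), (f21, f22) = values
    return (f11 * (x2 - x) * (y2 - y) + f21 * (x - x1) * (y2 - y)
            + f12 * (x2 - x) * (y - y1) + f22 * (x - x1) * (y - y1)) / ((x2 - x1) * (y2 - y1))

# Corners of the unit square carry f(x, y) = x + y:
# f11 = f(0,0), f12 = f(0,1), f21 = f(1,0), f22 = f(1,1)
print(bilinear(0.25, 0.75, ((0.0, 1.0), (0.0, 1.0)), ((0.0, 1.0), (1.0, 2.0))))  # → 1.0
```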
+
+# Interpolate source_data (2D) to match target_x and target_y in order to
+# get similar time and frequency dimensions for the differential spectrogram
+def interpolate_2d(target_x, target_y, source_x, source_y, source_data):
+ interpolated_data = np.zeros((len(target_y), len(target_x)))
+
+ for i, y in enumerate(target_y):
+ for j, x in enumerate(target_x):
+ # Find indices of surrounding points in source data
+ # and ensure we don't exceed array bounds
+ x_indices = np.searchsorted(source_x, x) - 1
+ y_indices = np.searchsorted(source_y, y) - 1
+ x_indices = max(0, min(len(source_x) - 2, x_indices))
+ y_indices = max(0, min(len(source_y) - 2, y_indices))
+
+ x1, x2 = source_x[x_indices], source_x[x_indices + 1]
+ y1, y2 = source_y[y_indices], source_y[y_indices + 1]
+
+ f11 = source_data[y_indices, x_indices]
+ f12 = source_data[y_indices, x_indices + 1]
+ f21 = source_data[y_indices + 1, x_indices]
+ f22 = source_data[y_indices + 1, x_indices + 1]
+
+ interpolated_data[i, j] = bilinear_interpolate(x, y, ((x1, x2), (y1, y2)), ((f11, f12), (f21, f22)))
+
+ return interpolated_data
+
+
+# This function identifies a "ridge" of high gradient magnitude in a spectrogram (pdata) - i.e. a diagonal resonance line. Starting from
+# the maximum value in the first column, it iteratively follows the direction of the highest gradient in the vicinity (window configured using
+# the n_average parameter). The result is a sequence of indices that traces the resonance line across the original spectrogram.
+def detect_ridge(pdata, n_average=3):
+ grad_y, grad_x = np.gradient(pdata)
+ magnitude = np.sqrt(grad_x**2 + grad_y**2)
+
+ # Start at the maximum value in the first column
+ start_idx = np.argmax(pdata[:, 0])
+ path = [start_idx]
+
+ # Walk through the spectrogram following the path of the ridge
+ for j in range(1, pdata.shape[1]):
+ # Look in the vicinity of the previous point
+ vicinity = magnitude[max(0, path[-1]-n_average):min(pdata.shape[0], path[-1]+n_average+1), j]
+ # Take an average of top few points
+ sorted_indices = np.argsort(vicinity)
+ top_indices = sorted_indices[-n_average:]
+ next_idx = int(np.mean(top_indices) + max(0, path[-1]-n_average))
+ path.append(next_idx)
+
+ return np.array(path)
+
+
+# This function calculates the time offset between two resonance lines (ridge1 and ridge2) using cross-correlation in
+# the frequency domain (using FFT). The result provides the lag (or offset) at which the two sequences are most similar.
+# This is used to re-align both belt spectrograms on their resonance lines in order to create the combined spectrogram.
+def compute_cross_correlation_offset(ridge1, ridge2):
+ # Ensure that the two arrays have the same shape
+ if len(ridge1) < len(ridge2):
+ ridge1 = np.pad(ridge1, (0, len(ridge2) - len(ridge1)))
+ elif len(ridge1) > len(ridge2):
+ ridge2 = np.pad(ridge2, (0, len(ridge1) - len(ridge2)))
+
+ cross_corr = np.fft.fftshift(np.fft.ifft(np.fft.fft(ridge1) * np.conj(np.fft.fft(ridge2))))
+ return np.argmax(np.abs(cross_corr)) - len(ridge1) // 2
+
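The FFT-based lag estimate can be verified on a synthetic ridge (standalone sketch restating the computation, with the caveat that the correlation is circular):

```python
import numpy as np

def lag_via_fft(ridge1, ridge2):
    # Circular cross-correlation via FFT; fftshift centers zero lag,
    # so argmax minus the midpoint yields the offset of ridge1 vs ridge2
    cross_corr = np.fft.fftshift(np.fft.ifft(np.fft.fft(ridge1) * np.conj(np.fft.fft(ridge2))))
    return int(np.argmax(np.abs(cross_corr))) - len(ridge1) // 2

base = np.array([0., 1., 3., 1., 0., 0., 0., 0.])
print(lag_via_fft(np.roll(base, 2), base))  # → 2 (the rolled copy lags by 2 samples)
```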
+
+# This function shifts data along its second dimension - i.e. time here - by a specified shift_amount
+def shift_data_in_time(data, shift_amount):
+ if shift_amount > 0:
+ return np.pad(data, ((0, 0), (shift_amount, 0)), mode='constant')[:, :-shift_amount]
+ elif shift_amount < 0:
+ return np.pad(data, ((0, 0), (0, -shift_amount)), mode='constant')[:, -shift_amount:]
+ else:
+ return data
+
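The pad-then-crop shifting above keeps the array length constant while zero-filling the vacated columns; a small standalone restatement:

```python
import numpy as np

def shift_cols(data, shift_amount):
    # Positive shifts push columns right (zero-padding on the left),
    # negative shifts pull them left; the shape is preserved
    if shift_amount > 0:
        return np.pad(data, ((0, 0), (shift_amount, 0)), mode='constant')[:, :-shift_amount]
    if shift_amount < 0:
        return np.pad(data, ((0, 0), (0, -shift_amount)), mode='constant')[:, -shift_amount:]
    return data

print(shift_cols(np.array([[1, 2, 3]]), 1))   # → [[0 1 2]]
print(shift_cols(np.array([[1, 2, 3]]), -1))  # → [[2 3 0]]
```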
+
+# Main logic function to combine two similar spectrograms - i.e. from both belt paths - by detecting similarities (ridges), computing
+# the time lag and realigning them. Finally, this function combines the aligned spectrograms (by subtracting the signals) into a new one.
+# This results in a mostly zeroed-out spectrogram with some colored zones highlighting the differences in the belt paths.
+def combined_spectrogram(data1, data2):
+ pdata1, bins1, t1 = calc_specgram(data1)
+ pdata2, _, _ = calc_specgram(data2)
+
+ # Detect ridges
+ ridge1 = detect_ridge(pdata1)
+ ridge2 = detect_ridge(pdata2)
+
+ # Compute the offset using cross-correlation, then shift/align and interpolate the spectrograms
+ offset = compute_cross_correlation_offset(ridge1, ridge2)
+ pdata2_aligned = shift_data_in_time(pdata2, offset)
+ pdata2_interpolated = interpolate_2d(t1, bins1, t1, bins1, pdata2_aligned)
+
+ # Combine the spectrograms
+ combined_data = np.abs(pdata1 - pdata2_interpolated)
+ return combined_data, bins1, t1
+
+
+# Compute a composite and highly subjective value indicating the "chance of mechanical issues on the printer (0 to 100%)".
+# It is based on the sum of the gradient of the differential spectrogram, salted with a bit of the estimated similarity cross-correlation
+# from compute_curve_similarity_factor() and with a bit of the number of unpaired peaks.
+# This results in a percentage value quantifying the machine behavior around the main resonances, hinting whether adjusting belt tension alone
+# will give good graphs or whether there is a chance of mechanical issues in the background (above 50% should be considered probably problematic)
+def compute_comi(combined_data, similarity_coefficient, num_unpaired_peaks):
+ filtered_data = combined_data[combined_data > 100]
+
+ # First compute a "total variability metric" based on the sum of the gradient magnitudes. This emphasizes regions of the
+ # spectrogram where there are rapid changes in magnitude (like the edges of resonance peaks).
+ total_variability_metric = np.sum(np.abs(np.gradient(filtered_data)))
+ # Scale the metric to a percentage using the threshold (found empirically from a large amount of user data shared with me)
+ base_percentage = (np.log1p(total_variability_metric) / np.log1p(DC_THRESHOLD_METRIC)) * 100
+
+ # Adjust the percentage based on the similarity_coefficient to add a grain of salt
+ adjusted_percentage = base_percentage * (1 - DC_GRAIN_OF_SALT_FACTOR * (similarity_coefficient / 100))
+
+ # Adjust the percentage again based on the number of unpaired peaks to add a second grain of salt
+ peak_confidence = num_unpaired_peaks / DC_MAX_UNPAIRED_PEAKS_ALLOWED
+ final_percentage = (1 - peak_confidence) * adjusted_percentage + peak_confidence * 100
+
+ # Ensure the result lies between 0 and 100 by clipping the computed value
+ final_percentage = np.clip(final_percentage, 0, 100)
+
+ return final_percentage
+
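The log compression step maps the (potentially huge) gradient sum onto a 0-100 % scale, saturating exactly at the empirical DC_THRESHOLD_METRIC; isolated as a sketch:

```python
import numpy as np

def variability_to_pct(metric, threshold=1.5e9):
    # log1p compresses the huge dynamic range of the gradient sum;
    # metric == threshold maps exactly to 100 %
    return np.log1p(metric) / np.log1p(threshold) * 100

print(variability_to_pct(0.0))    # → 0.0
print(variability_to_pct(1.5e9))  # → 100.0
```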
+
+######################################################################
+# Graphing
+######################################################################
+
+def plot_compare_frequency(ax, lognames, signal1, signal2, max_freq):
+ # Plot the two belts PSD signals
+ ax.plot(signal1.freqs, signal1.psd, label="\n".join(wrap(lognames[0], 60)), alpha=0.6)
+ ax.plot(signal2.freqs, signal2.psd, label="\n".join(wrap(lognames[1], 60)), alpha=0.6)
+
+ # Trace the "relax region" (also used as a threshold to filter and detect the peaks)
+ psd_lowest_max = min(signal1.psd.max(), signal2.psd.max())
+ peaks_warning_threshold = PEAKS_DETECTION_THRESHOLD * psd_lowest_max
+ ax.axhline(y=peaks_warning_threshold, color='black', linestyle='--', linewidth=0.5)
+ ax.fill_between(signal1.freqs, 0, peaks_warning_threshold, color='green', alpha=0.15, label='Relax Region')
+
+ # Trace and annotate the peaks on the graph
+ paired_peak_count = 0
+ unpaired_peak_count = 0
+ offsets_table_data = []
+
+ for _, (peak1, peak2) in enumerate(signal1.paired_peaks):
+ label = ALPHABET[paired_peak_count]
+ amplitude_offset = abs(((signal2.psd[peak2[0]] - signal1.psd[peak1[0]]) / max(signal1.psd[peak1[0]], signal2.psd[peak2[0]])) * 100)
+ frequency_offset = abs(signal2.freqs[peak2[0]] - signal1.freqs[peak1[0]])
+ offsets_table_data.append([f"Peaks {label}", f"{frequency_offset:.2f} Hz", f"{amplitude_offset:.2f} %"])
+
+ ax.plot(signal1.freqs[peak1[0]], signal1.psd[peak1[0]], "x", color='black')
+ ax.plot(signal2.freqs[peak2[0]], signal2.psd[peak2[0]], "x", color='black')
+ ax.plot([signal1.freqs[peak1[0]], signal2.freqs[peak2[0]]], [signal1.psd[peak1[0]], signal2.psd[peak2[0]]], ":", color='gray')
+
+ ax.annotate(label + "1", (signal1.freqs[peak1[0]], signal1.psd[peak1[0]]),
+ textcoords="offset points", xytext=(8, 5),
+ ha='left', fontsize=14, color='black')
+ ax.annotate(label + "2", (signal2.freqs[peak2[0]], signal2.psd[peak2[0]]),
+ textcoords="offset points", xytext=(8, 5),
+ ha='left', fontsize=14, color='black')
+ paired_peak_count += 1
+
+ for peak in signal1.unpaired_peaks:
+ ax.plot(signal1.freqs[peak], signal1.psd[peak], "x", color='black')
+ ax.annotate(str(unpaired_peak_count + 1), (signal1.freqs[peak], signal1.psd[peak]),
+ textcoords="offset points", xytext=(8, 5),
+ ha='left', fontsize=14, color='red', weight='bold')
+ unpaired_peak_count += 1
+
+ for peak in signal2.unpaired_peaks:
+ ax.plot(signal2.freqs[peak], signal2.psd[peak], "x", color='black')
+ ax.annotate(str(unpaired_peak_count + 1), (signal2.freqs[peak], signal2.psd[peak]),
+ textcoords="offset points", xytext=(8, 5),
+ ha='left', fontsize=14, color='red', weight='bold')
+ unpaired_peak_count += 1
+
+ # Compute the similarity (using cross-correlation of the PSD signals)
+ similarity_factor = compute_curve_similarity_factor(signal1, signal2)
+ ax.plot([], [], ' ', label=f'Estimated similarity: {similarity_factor:.1f}% ({unpaired_peak_count} unpaired peaks)')
+ print(f"Belts estimated similarity: {similarity_factor:.1f}%")
+
+ # Setting axis parameters, grid and graph title
+ ax.set_xlabel('Frequency (Hz)')
+ ax.set_xlim([0, max_freq])
+ ax.set_ylabel('Power spectral density')
+ psd_highest_max = max(signal1.psd.max(), signal2.psd.max())
+ ax.set_ylim([0, psd_highest_max + psd_highest_max * 0.05])
+
+ ax.xaxis.set_minor_locator(matplotlib.ticker.AutoMinorLocator())
+ ax.yaxis.set_minor_locator(matplotlib.ticker.AutoMinorLocator())
+ ax.ticklabel_format(axis='y', style='scientific', scilimits=(0,0))
+ ax.grid(which='major', color='grey')
+ ax.grid(which='minor', color='lightgrey')
+ fontP = matplotlib.font_manager.FontProperties()
+ fontP.set_size('small')
+ ax.set_title('Belts Frequency Profiles (estimated similarity: {:.1f}%)'.format(similarity_factor), fontsize=14)
+ ax.legend(loc='best', prop=fontP)
+
+ # Print the table of offsets on top of the graph, below the original legend (upper right)
+ if len(offsets_table_data) > 0:
+ columns = ["", "Frequency delta", "Amplitude delta", ]
+ offset_table = ax.table(cellText=offsets_table_data, colLabels=columns, bbox=[0.67, 0.60, 0.3, 0.2], loc='upper right', cellLoc='center')
+ offset_table.auto_set_font_size(False)
+ offset_table.set_fontsize(8)
+ offset_table.auto_set_column_width([0, 1, 2])
+ offset_table.set_zorder(100)
+ cells = [key for key in offset_table.get_celld().keys()]
+ for cell in cells:
+ offset_table[cell].set_facecolor('white')
+ offset_table[cell].set_alpha(0.6)
+
+ return similarity_factor, unpaired_peak_count
+
+
+def plot_difference_spectrogram(ax, data1, data2, signal1, signal2, similarity_factor, max_freq):
+ combined_data, bins, t = combined_spectrogram(data1, data2)
+
+ # Compute the COMI value from the differential spectrogram sum of gradient, salted with
+ # the similarity factor and the number or unpaired peaks from the belts frequency profile
+ # Be careful, this value is highly opinionated and is pretty experimental!
+ comi = compute_comi(combined_data, similarity_factor, len(signal1.unpaired_peaks) + len(signal2.unpaired_peaks))
+ print(f"Chances of mechanical issues: {comi:.1f}%")
+ ax.set_title(f"Differential Spectrogram (COMI: {comi:.1f}%)", fontsize=14)
+ ax.plot([], [], ' ', label=f'Chances of mechanical issues (COMI): {comi:.1f}%')
+
+ # Draw the differential spectrogram with a specific norm to get light grey zero values and red for max values (vmin to vcenter is not used)
+ norm = matplotlib.colors.TwoSlopeNorm(vcenter=np.min(combined_data), vmax=np.max(combined_data))
+ ax.pcolormesh(bins, t, combined_data.T, cmap='RdBu_r', norm=norm, shading='gouraud')
+ ax.set_xlabel('Frequency (Hz)')
+ ax.set_xlim([0., max_freq])
+ ax.set_ylabel('Time (s)')
+ ax.set_ylim([0, t[-1]])
+
+ fontP = matplotlib.font_manager.FontProperties()
+ fontP.set_size('medium')
+ ax.legend(loc='best', prop=fontP)
+
+ # Plot vertical lines for unpaired peaks
+ unpaired_peak_count = 0
+ for _, peak in enumerate(signal1.unpaired_peaks):
+ ax.axvline(signal1.freqs[peak], color='red', linestyle='dotted', linewidth=1.5)
+ ax.annotate(f"Peak {unpaired_peak_count + 1}", (signal1.freqs[peak], t[-1]*0.05),
+ textcoords="data", color='red', rotation=90, fontsize=10,
+ verticalalignment='bottom', horizontalalignment='right')
+ unpaired_peak_count +=1
+
+ for _, peak in enumerate(signal2.unpaired_peaks):
+ ax.axvline(signal2.freqs[peak], color='red', linestyle='dotted', linewidth=1.5)
+ ax.annotate(f"Peak {unpaired_peak_count + 1}", (signal2.freqs[peak], t[-1]*0.05),
+ textcoords="data", color='red', rotation=90, fontsize=10,
+ verticalalignment='bottom', horizontalalignment='right')
+ unpaired_peak_count +=1
+
+ # Plot vertical lines and zones for paired peaks
+ for idx, (peak1, peak2) in enumerate(signal1.paired_peaks):
+ label = ALPHABET[idx]
+ x_min = min(peak1[1], peak2[1])
+ x_max = max(peak1[1], peak2[1])
+ ax.axvline(x_min, color='blue', linestyle='dotted', linewidth=1.5)
+ ax.axvline(x_max, color='blue', linestyle='dotted', linewidth=1.5)
+ ax.fill_between([x_min, x_max], 0, np.max(combined_data), color='blue', alpha=0.3)
+ ax.annotate(f"Peaks {label}", (x_min, t[-1]*0.05),
+ textcoords="data", color='blue', rotation=90, fontsize=10,
+ verticalalignment='bottom', horizontalalignment='right')
+
+ return
+
+
+######################################################################
+# Custom tools
+######################################################################
+
+# Simple helper to compute a sigmoid scaling (from 0 to 100%)
+def sigmoid_scale(x, k=1):
+ return 1 / (1 + np.exp(-k * x)) * 100
+
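As used by compute_curve_similarity_factor(), the sigmoid maps any real input to a percentage, with 0 landing on 50 %; a tiny check:

```python
import numpy as np

def sigmoid_pct(x, k=1):
    # Logistic function stretched to 0..100 %; k controls the steepness
    return 1 / (1 + np.exp(-k * x)) * 100

print(sigmoid_pct(0))           # → 50.0
print(sigmoid_pct(2, k=0.6))    # larger inputs saturate toward 100
```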
+# Original Klipper function to get the PSD data of a raw accelerometer signal
+def compute_signal_data(data, max_freq):
+ calibration_data = calc_freq_response(data)
+ freqs = calibration_data.freq_bins[calibration_data.freq_bins <= max_freq]
+ psd = calibration_data.get_psd('all')[calibration_data.freq_bins <= max_freq]
+ peaks, _ = detect_peaks(psd, freqs)
+ return SignalData(freqs=freqs, psd=psd, peaks=peaks, paired_peaks=None, unpaired_peaks=None)
+
+
+######################################################################
+# Startup and main routines
+######################################################################
+
+def parse_log(logname):
+ with open(logname) as f:
+ for header in f:
+ if not header.startswith('#'):
+ break
+ if not header.startswith('freq,psd_x,psd_y,psd_z,psd_xyz'):
+ # Raw accelerometer data
+ return np.loadtxt(logname, comments='#', delimiter=',')
+ # Power spectral density data or shaper calibration data
+ raise ValueError("File %s does not contain raw accelerometer data and therefore "
+ "is not supported by this script. Please use the official Klipper "
+ "graph_accelerometer.py script to process it instead." % (logname,))
+
+
+def setup_klipper_import(kdir):
+ global shaper_calibrate
+ kdir = os.path.expanduser(kdir)
+ sys.path.append(os.path.join(kdir, 'klippy'))
+ shaper_calibrate = importlib.import_module('.shaper_calibrate', 'extras')
+
+
+def belts_calibration(lognames, klipperdir="~/klipper", max_freq=200.):
+ setup_klipper_import(klipperdir)
+
+ # Parse data
+ datas = [parse_log(fn) for fn in lognames]
+ if len(datas) != 2:
+ raise ValueError("Incorrect number of .csv files used (this function needs exactly two files to compare them)")
+
+ # Compute calibration data for the two datasets with automatic peaks detection
+ signal1 = compute_signal_data(datas[0], max_freq)
+ signal2 = compute_signal_data(datas[1], max_freq)
+
+ # Pair the peaks across the two datasets
+ paired_peaks, unpaired_peaks1, unpaired_peaks2 = pair_peaks(signal1.peaks, signal1.freqs, signal1.psd,
+ signal2.peaks, signal2.freqs, signal2.psd)
+ signal1 = signal1._replace(paired_peaks=paired_peaks, unpaired_peaks=unpaired_peaks1)
+ signal2 = signal2._replace(paired_peaks=paired_peaks, unpaired_peaks=unpaired_peaks2)
+
+ fig = matplotlib.pyplot.figure()
+ gs = matplotlib.gridspec.GridSpec(2, 1, height_ratios=[4, 3])
+ ax1 = fig.add_subplot(gs[0])
+ ax2 = fig.add_subplot(gs[1])
+ fig.suptitle("CoreXY relative belt calibration tool", fontsize=16)
+
+ similarity_factor, _ = plot_compare_frequency(ax1, lognames, signal1, signal2, max_freq)
+ plot_difference_spectrogram(ax2, datas[0], datas[1], signal1, signal2, similarity_factor, max_freq)
+
+ fig.set_size_inches(10, 12)
+ fig.tight_layout()
+ fig.subplots_adjust(top=0.93)
+
+ return fig
+
+
+def main():
+ # Parse command-line arguments
+ usage = "%prog [options] "
+ opts = optparse.OptionParser(usage)
+ opts.add_option("-o", "--output", type="string", dest="output",
+ default=None, help="filename of output graph")
+ opts.add_option("-f", "--max_freq", type="float", default=200.,
+ help="maximum frequency to graph")
+ opts.add_option("-k", "--klipper_dir", type="string", dest="klipperdir",
+ default="~/klipper", help="main klipper directory")
+ options, args = opts.parse_args()
+ if len(args) < 1:
+ opts.error("Incorrect number of arguments")
+ if options.output is None:
+ opts.error("You must specify an output file.png to use the script (option -o)")
+
+ fig = belts_calibration(args, options.klipperdir, options.max_freq)
+ fig.savefig(options.output)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/is_workflow/scripts/graph_shaper.py b/is_workflow/scripts/graph_shaper.py
new file mode 100644
index 0000000..c6e14b5
--- /dev/null
+++ b/is_workflow/scripts/graph_shaper.py
@@ -0,0 +1,335 @@
+#!/usr/bin/env python3
+
+#################################################
+######## INPUT SHAPER CALIBRATION SCRIPT ########
+#################################################
+# Derived from the calibrate_shaper.py official Klipper script
+# Copyright (C) 2020 Dmitry Butyugin
+# Copyright (C) 2020 Kevin O'Connor
+#
+# Written by Frix_x#0161 #
+# @version: 1.1
+
+# CHANGELOG:
+# v1.1: - improved the damping ratio computation with linear approximation for more precision
+# - reworked the top graph to add more information to it with colored zones,
+# automated peak detection, etc...
+# - added a full spectrogram of the signal on the bottom to allow deeper analysis
+# v1.0: first version of this script inspired from the official Klipper
+# shaper calibration script to add an automatic damping ratio estimation to it
+
+
+# Be sure to make this script executable using SSH: type 'chmod +x ./graph_shaper.py' when in the folder!
+
+#####################################################################
+################ !!! DO NOT EDIT BELOW THIS LINE !!! ################
+#####################################################################
+
+import optparse, matplotlib, sys, importlib, os, math
+from textwrap import wrap
+import numpy as np
+import matplotlib.pyplot, matplotlib.dates, matplotlib.font_manager
+import matplotlib.ticker, matplotlib.gridspec, matplotlib.mlab, matplotlib.colors
+
+matplotlib.use('Agg')
+
+MAX_TITLE_LENGTH=65
+PEAKS_DETECTION_THRESHOLD=0.05
+PEAKS_EFFECT_THRESHOLD=0.12
+SPECTROGRAM_LOW_PERCENTILE_FILTER=5
+
+
+######################################################################
+# Computation
+######################################################################
+
+# Find the best shaper parameters using Klipper's official algorithm selection
+def calibrate_shaper_with_damping(datas, max_smoothing):
+ helper = shaper_calibrate.ShaperCalibrate(printer=None)
+
+ calibration_data = helper.process_accelerometer_data(datas[0])
+ for data in datas[1:]:
+ calibration_data.add_data(helper.process_accelerometer_data(data))
+
+ calibration_data.normalize_to_frequencies()
+
+ shaper, all_shapers = helper.find_best_shaper(calibration_data, max_smoothing, print)
+
+ freqs = calibration_data.freq_bins
+ psd = calibration_data.psd_sum
+ fr, zeta = compute_damping_ratio(psd, freqs)
+
+ print("Recommended shaper is %s @ %.1f Hz" % (shaper.name, shaper.freq))
+ print("Axis has a resonant frequency ω0=%.1fHz with an estimated damping ratio ζ=%.3f" % (fr, zeta))
+
+ return shaper.name, all_shapers, calibration_data, fr, zeta
+
+
+# Compute damping ratio by using the half power bandwidth method with interpolated frequencies
+def compute_damping_ratio(psd, freqs):
+ max_power_index = np.argmax(psd)
+ fr = freqs[max_power_index]
+ max_power = psd[max_power_index]
+
+ half_power = max_power / math.sqrt(2)
+ idx_below = np.where(psd[:max_power_index] <= half_power)[0][-1]
+ idx_above = np.where(psd[max_power_index:] <= half_power)[0][0] + max_power_index
+ freq_below_half_power = freqs[idx_below] + (half_power - psd[idx_below]) * (freqs[idx_below + 1] - freqs[idx_below]) / (psd[idx_below + 1] - psd[idx_below])
+ freq_above_half_power = freqs[idx_above - 1] + (half_power - psd[idx_above - 1]) * (freqs[idx_above] - freqs[idx_above - 1]) / (psd[idx_above] - psd[idx_above - 1])
+
+ bandwidth = freq_above_half_power - freq_below_half_power
+ zeta = bandwidth / (2 * fr)
+
+ return fr, zeta
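+
+# As a quick illustration of the half power bandwidth method above (hypothetical
+# numbers): for a resonance peaking at fr = 50 Hz whose PSD crosses max/sqrt(2)
+# at about 48 Hz and 52 Hz, the bandwidth is 4 Hz and zeta = 4 / (2 * 50) = 0.04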
+
+
+def compute_spectrogram(data):
+ N = data.shape[0]
+ Fs = N / (data[-1,0] - data[0,0])
+ # Round up to a power of 2 for faster FFT
+ M = 1 << int(.5 * Fs - 1).bit_length()
+ window = np.kaiser(M, 6.)
+ def _specgram(x):
+ return matplotlib.mlab.specgram(
+ x, Fs=Fs, NFFT=M, noverlap=M//2, window=window,
+ mode='psd', detrend='mean', scale_by_freq=False)
+
+ d = {'x': data[:,1], 'y': data[:,2], 'z': data[:,3]}
+ pdata, bins, t = _specgram(d['x'])
+ for ax in 'yz':
+ pdata += _specgram(d[ax])[0]
+ return pdata, bins, t
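+
+# For reference, with the ADXL345 typical sampling rate of ~3200 Hz, the window
+# size above resolves to M = 1 << int(0.5 * 3200 - 1).bit_length() = 2048 samples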
+
+
+# This finds all the peaks in a curve by looking at where the derivative goes from positive to negative
+# Only the peaks found above a threshold are kept, to avoid capturing peaks in the low amplitude noise of the signal
+# An additional "virtual" threshold is used to quantify, in an opinionated way, the peaks that could have an effect
+# on the printer behavior and are likely to produce or contribute to ringing/ghosting in printed parts
+def detect_peaks(psd, freqs, window_size=5, vicinity=3):
+ # Smooth the curve using a moving average to avoid catching peaks everywhere in noisy signals
+ kernel = np.ones(window_size) / window_size
+ smoothed_psd = np.convolve(psd, kernel, mode='same')
+
+ # Find peaks on the smoothed curve
+ smoothed_peaks = np.where((smoothed_psd[:-2] < smoothed_psd[1:-1]) & (smoothed_psd[1:-1] > smoothed_psd[2:]))[0] + 1
+ detection_threshold = PEAKS_DETECTION_THRESHOLD * psd.max()
+ effect_threshold = PEAKS_EFFECT_THRESHOLD * psd.max()
+ smoothed_peaks = smoothed_peaks[smoothed_psd[smoothed_peaks] > detection_threshold]
+
+    # Refine the peak positions on the original (unsmoothed) curve
+    refined_peaks = []
+    for peak in smoothed_peaks:
+        start = max(0, peak - vicinity)
+        end = min(len(psd), peak + vicinity + 1)
+        refined_peaks.append(start + np.argmax(psd[start:end]))
+
+ peak_freqs = ["{:.1f}".format(f) for f in freqs[refined_peaks]]
+
+ num_peaks = len(refined_peaks)
+ num_peaks_above_effect_threshold = np.sum(psd[refined_peaks] > effect_threshold)
+
+ print("Peaks detected on the graph: %d @ %s Hz (%d above effect threshold)" % (num_peaks, ", ".join(map(str, peak_freqs)), num_peaks_above_effect_threshold))
+
+ return np.array(refined_peaks), num_peaks, num_peaks_above_effect_threshold
+
+
+######################################################################
+# Graphing
+######################################################################
+
+def plot_freq_response_with_damping(ax, calibration_data, shapers, selected_shaper, fr, zeta, max_freq):
+ freqs = calibration_data.freq_bins
+ psd = calibration_data.psd_sum[freqs <= max_freq]
+ px = calibration_data.psd_x[freqs <= max_freq]
+ py = calibration_data.psd_y[freqs <= max_freq]
+ pz = calibration_data.psd_z[freqs <= max_freq]
+ freqs = freqs[freqs <= max_freq]
+
+ fontP = matplotlib.font_manager.FontProperties()
+ fontP.set_size('x-small')
+
+ ax.set_xlabel('Frequency (Hz)')
+ ax.set_xlim([0, max_freq])
+ ax.set_ylabel('Power spectral density')
+ ax.set_ylim([0, psd.max() + psd.max() * 0.05])
+
+ ax.plot(freqs, psd, label='X+Y+Z', color='purple')
+ ax.plot(freqs, px, label='X', color='red')
+ ax.plot(freqs, py, label='Y', color='green')
+ ax.plot(freqs, pz, label='Z', color='blue')
+
+ ax.xaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(5))
+ ax.yaxis.set_minor_locator(matplotlib.ticker.AutoMinorLocator())
+ ax.ticklabel_format(axis='y', style='scientific', scilimits=(0,0))
+ ax.grid(which='major', color='grey')
+ ax.grid(which='minor', color='lightgrey')
+
+ ax2 = ax.twinx()
+ ax2.set_ylabel('Shaper vibration reduction (ratio)')
+
+ best_shaper_vals = None
+ no_vibr_shaper = None
+ no_vibr_shaper_freq = None
+ no_vibr_shaper_accel = 0
+
+    # Draw the shaper curves and add their specific parameters to the legend
+    # This also finds the best shaper with 0% vibration (to be printed in the legend later)
+ for shaper in shapers:
+ shaper_max_accel = round(shaper.max_accel / 100.) * 100.
+ label = "%s (%.1f Hz, vibr=%.1f%%, sm~=%.2f, accel<=%.f)" % (
+ shaper.name.upper(), shaper.freq,
+ shaper.vibrs * 100., shaper.smoothing,
+ shaper_max_accel)
+ linestyle = 'dotted'
+ if shaper.name == selected_shaper:
+ linestyle = 'dashdot'
+ selected_shaper_freq = shaper.freq
+ best_shaper_vals = shaper.vals
+ if (shaper.vibrs * 100 == 0.) and (shaper_max_accel > no_vibr_shaper_accel):
+ no_vibr_shaper_accel = shaper_max_accel
+ no_vibr_shaper = shaper.name
+ no_vibr_shaper_freq = shaper.freq
+ ax2.plot(freqs, shaper.vals, label=label, linestyle=linestyle)
+ ax.plot(freqs, psd * best_shaper_vals, label='With %s applied' % (selected_shaper.upper()), color='cyan')
+
+    # Draw the detected peaks and label them
+    # This also draws the detection threshold and the warning threshold (aka "effect zone")
+ peaks, _, _ = detect_peaks(psd, freqs)
+ peaks_warning_threshold = PEAKS_DETECTION_THRESHOLD * psd.max()
+ peaks_effect_threshold = PEAKS_EFFECT_THRESHOLD * psd.max()
+
+ ax.plot(freqs[peaks], psd[peaks], "x", color='black', label='Detected peaks', markersize=8)
+ for idx, peak in enumerate(peaks):
+ if psd[peak] > peaks_effect_threshold:
+ fontcolor = 'red'
+ fontweight = 'bold'
+ else:
+ fontcolor = 'black'
+ fontweight = 'normal'
+ ax.annotate(f"{idx+1}", (freqs[peak], psd[peak]),
+ textcoords="offset points", xytext=(8, 5),
+ ha='left', fontsize=14, color=fontcolor, weight=fontweight)
+ ax.axhline(y=peaks_warning_threshold, color='black', linestyle='--', linewidth=0.5)
+ ax.axhline(y=peaks_effect_threshold, color='black', linestyle='--', linewidth=0.5)
+ ax.fill_between(freqs, 0, peaks_warning_threshold, color='green', alpha=0.15, label='Relax Region')
+ ax.fill_between(freqs, peaks_warning_threshold, peaks_effect_threshold, color='orange', alpha=0.2, label='Warning Region')
+
+    # Add the final user recommendations to the legend: an alternative 0% vibration shaper
+    # and the estimated damping ratio (which stock Klipper's algorithms do not compute)
+ ax2.plot([], [], ' ', label="Recommended shaper: %s @ %.1f Hz" % (selected_shaper.upper(), selected_shaper_freq))
+ ax2.plot([], [], ' ', label="Recommended low vibrations shaper: %s @ %.1f Hz" % (no_vibr_shaper.upper(), no_vibr_shaper_freq))
+ ax2.plot([], [], ' ', label="Estimated damping ratio (ζ): %.3f" % (zeta))
+
+ # Add the main resonant frequency and damping ratio of the axis to the graph title
+ ax.set_title("Axis Frequency Profile (ω0=%.1fHz, ζ=%.3f)" % (fr, zeta), fontsize=14)
+ ax.legend(loc='upper left', prop=fontP)
+ ax2.legend(loc='upper right', prop=fontP)
+
+ return freqs[peaks]
+
+
+# Plot a time-frequency spectrogram to show how the system responds over time during the
+# resonance test. This can highlight spots hidden in the standard PSD graph, such as other harmonics
+def plot_spectrogram(ax, data, peaks, max_freq):
+ pdata, bins, t = compute_spectrogram(data)
+
+    # We need to normalize the data to get a proper signal on the spectrogram.
+    # However, "LogNorm" alone lets too much background noise through, while
+    # "Normalize" shows only the resonance and hides the interesting elements.
+    # So we filter out the lower part of the data (ie. find a proper vmin for LogNorm)
+ vmin_value = np.percentile(pdata, SPECTROGRAM_LOW_PERCENTILE_FILTER)
+
+ ax.set_title("Time-Frequency Spectrogram", fontsize=14)
+ ax.pcolormesh(bins, t, pdata.T, norm=matplotlib.colors.LogNorm(vmin=vmin_value),
+ cmap='inferno', shading='gouraud')
+
+ # Add peaks lines in the spectrogram to get hint from peaks found in the first graph
+ if peaks is not None:
+ for idx, peak in enumerate(peaks):
+ ax.axvline(peak, color='cyan', linestyle='dotted', linewidth=0.75)
+ ax.annotate(f"Peak {idx+1}", (peak, t[-1]*0.9),
+ textcoords="data", color='cyan', rotation=90, fontsize=12,
+ verticalalignment='top', horizontalalignment='right')
+
+ ax.set_xlim([0., max_freq])
+ ax.set_ylabel('Time (s)')
+ ax.set_xlabel('Frequency (Hz)')
+
+ return
+
+
+######################################################################
+# Startup and main routines
+######################################################################
+
+def parse_log(logname):
+ with open(logname) as f:
+ for header in f:
+ if not header.startswith('#'):
+ break
+ if not header.startswith('freq,psd_x,psd_y,psd_z,psd_xyz'):
+ # Raw accelerometer data
+ return np.loadtxt(logname, comments='#', delimiter=',')
+ # Power spectral density data or shaper calibration data
+ raise ValueError("File %s does not contain raw accelerometer data and therefore "
+ "is not supported by this script. Please use the official Klipper "
+ "calibrate_shaper.py script to process it instead." % (logname,))
+
+
+def setup_klipper_import(kdir):
+ global shaper_calibrate
+ kdir = os.path.expanduser(kdir)
+ sys.path.append(os.path.join(kdir, 'klippy'))
+ shaper_calibrate = importlib.import_module('.shaper_calibrate', 'extras')
+
+
+def shaper_calibration(lognames, klipperdir="~/klipper", max_smoothing=None, max_freq=200.):
+ setup_klipper_import(klipperdir)
+
+ # Parse data
+ datas = [parse_log(fn) for fn in lognames]
+
+ # Calibrate shaper and generate outputs
+ selected_shaper, shapers, calibration_data, fr, zeta = calibrate_shaper_with_damping(datas, max_smoothing)
+
+ fig = matplotlib.pyplot.figure()
+ gs = matplotlib.gridspec.GridSpec(2, 1, height_ratios=[4, 3])
+ ax1 = fig.add_subplot(gs[0])
+ ax2 = fig.add_subplot(gs[1])
+ fig.suptitle("\n".join(wrap(
+ "Input Shaper calibration (%s)" % (', '.join(lognames)), MAX_TITLE_LENGTH)), fontsize=16)
+
+ peaks = plot_freq_response_with_damping(ax1, calibration_data, shapers, selected_shaper, fr, zeta, max_freq)
+ plot_spectrogram(ax2, datas[0], peaks, max_freq)
+
+ fig.set_size_inches(10, 12)
+ fig.tight_layout()
+ fig.subplots_adjust(top=0.93)
+
+ return fig
+
+
+def main():
+ # Parse command-line arguments
+ usage = "%prog [options] "
+ opts = optparse.OptionParser(usage)
+ opts.add_option("-o", "--output", type="string", dest="output",
+ default=None, help="filename of output graph")
+ opts.add_option("-f", "--max_freq", type="float", default=200.,
+ help="maximum frequency to graph")
+ opts.add_option("-s", "--max_smoothing", type="float", default=None,
+ help="maximum shaper smoothing to allow")
+ opts.add_option("-k", "--klipper_dir", type="string", dest="klipperdir",
+ default="~/klipper", help="main klipper directory")
+ options, args = opts.parse_args()
+ if len(args) < 1:
+ opts.error("Incorrect number of arguments")
+ if options.output is None:
+ opts.error("You must specify an output file.png to use the script (option -o)")
+ if options.max_smoothing is not None and options.max_smoothing < 0.05:
+ opts.error("Too small max_smoothing specified (must be at least 0.05)")
+
+ fig = shaper_calibration(args, options.klipperdir, options.max_smoothing, options.max_freq)
+ fig.savefig(options.output)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/is_workflow/scripts/graph_vibrations.py b/is_workflow/scripts/graph_vibrations.py
new file mode 100644
index 0000000..2d8be3a
--- /dev/null
+++ b/is_workflow/scripts/graph_vibrations.py
@@ -0,0 +1,262 @@
+#!/usr/bin/env python3
+
+##################################################
+###### SPEED AND VIBRATIONS PLOTTING SCRIPT ######
+##################################################
+# Written by Frix_x#0161 #
+# @version: 1.2
+
+# CHANGELOG:
+# v1.2: fixed a bug that could happen when username is not "pi" (thanks @spikeygg)
+# v1.1: better graph formatting
+# v1.0: first version of the script
+
+
+# Be sure to make this script executable using SSH: type 'chmod +x ./graph_vibrations.py' when in the folder!
+
+#####################################################################
+################ !!! DO NOT EDIT BELOW THIS LINE !!! ################
+#####################################################################
+
+import optparse, matplotlib, re, sys, importlib, os, operator
+from collections import OrderedDict
+import numpy as np
+import matplotlib.pyplot, matplotlib.dates, matplotlib.font_manager
+import matplotlib.ticker, matplotlib.colors
+
+matplotlib.use('Agg')
+
+
+######################################################################
+# Computation
+######################################################################
+
+def calc_freq_response(data):
+ # Use Klipper standard input shaper objects to do the computation
+ helper = shaper_calibrate.ShaperCalibrate(printer=None)
+ return helper.process_accelerometer_data(data)
+
+
+def calc_psd(datas, group, max_freq):
+ psd_list = []
+ first_freqs = None
+ signal_axes = ['x', 'y', 'z', 'all']
+
+ for i in range(0, len(datas), group):
+
+ # Round up to the nearest power of 2 for faster FFT
+ N = datas[i].shape[0]
+ T = datas[i][-1,0] - datas[i][0,0]
+ M = 1 << int((N/T) * 0.5 - 1).bit_length()
+ if N <= M:
+            # If there are not enough samples in the array to reach the next power of 2,
+            # pad zeros at the end to avoid a blocking state in Klipper's shaper_calibrate.py
+ datas[i] = np.pad(datas[i], [(0, (M-N)+1), (0, 0)], mode='constant', constant_values=0)
+
+ freqrsp = calc_freq_response(datas[i])
+ for n in range(group - 1):
+ data = datas[i + n + 1]
+
+ # Round up to the nearest power of 2 for faster FFT
+ N = data.shape[0]
+ T = data[-1,0] - data[0,0]
+ M = 1 << int((N/T) * 0.5 - 1).bit_length()
+ if N <= M:
+                # If there are not enough samples in the array to reach the next power of 2,
+                # pad zeros at the end to avoid a blocking state in Klipper's shaper_calibrate.py
+ data = np.pad(data, [(0, (M-N)+1), (0, 0)], mode='constant', constant_values=0)
+
+ freqrsp.add_data(calc_freq_response(data))
+
+ if not psd_list:
+ # First group, just put it in the result list
+ first_freqs = freqrsp.freq_bins
+ psd = freqrsp.psd_sum[first_freqs <= max_freq]
+ px = freqrsp.psd_x[first_freqs <= max_freq]
+ py = freqrsp.psd_y[first_freqs <= max_freq]
+ pz = freqrsp.psd_z[first_freqs <= max_freq]
+ psd_list.append([psd, px, py, pz])
+ else:
+            # Not the first group: interpolate each new signal onto the first group's
+            # frequency bins so that all groups share the same frequency axis
+ signal_normalized = dict()
+ freqs = freqrsp.freq_bins
+ for axe in signal_axes:
+ signal = freqrsp.get_psd(axe)
+ signal_normalized[axe] = np.interp(first_freqs, freqs, signal)
+
+ # Remove data above max_freq on all axes and add to the result list
+ psd = signal_normalized['all'][first_freqs <= max_freq]
+ px = signal_normalized['x'][first_freqs <= max_freq]
+ py = signal_normalized['y'][first_freqs <= max_freq]
+ pz = signal_normalized['z'][first_freqs <= max_freq]
+ psd_list.append([psd, px, py, pz])
+
+ return first_freqs[first_freqs <= max_freq], psd_list
+
+
+def calc_powertot(psd_list, freqs):
+ pwrtot_sum = []
+ pwrtot_x = []
+ pwrtot_y = []
+ pwrtot_z = []
+
+ for psd in psd_list:
+ pwrtot_sum.append(np.trapz(psd[0], freqs))
+ pwrtot_x.append(np.trapz(psd[1], freqs))
+ pwrtot_y.append(np.trapz(psd[2], freqs))
+ pwrtot_z.append(np.trapz(psd[3], freqs))
+
+ return [pwrtot_sum, pwrtot_x, pwrtot_y, pwrtot_z]
+
+
+######################################################################
+# Graphing
+######################################################################
+
+def plot_total_power(ax, speeds, power_total):
+ ax.set_title('Vibrations decomposition')
+ ax.set_xlabel('Speed (mm/s)')
+ ax.set_ylabel('Energy')
+
+ ax.plot(speeds, power_total[0], label="X+Y+Z", alpha=0.6)
+ ax.plot(speeds, power_total[1], label="X", alpha=0.6)
+ ax.plot(speeds, power_total[2], label="Y", alpha=0.6)
+ ax.plot(speeds, power_total[3], label="Z", alpha=0.6)
+
+ ax.xaxis.set_minor_locator(matplotlib.ticker.AutoMinorLocator())
+ ax.yaxis.set_minor_locator(matplotlib.ticker.AutoMinorLocator())
+ ax.grid(which='major', color='grey')
+ ax.grid(which='minor', color='lightgrey')
+ fontP = matplotlib.font_manager.FontProperties()
+ fontP.set_size('medium')
+ ax.legend(loc='best', prop=fontP)
+
+ return
+
+
+def plot_spectrogram(ax, speeds, freqs, power_spectral_densities, max_freq):
+ spectrum = np.empty([len(freqs), len(speeds)])
+
+ for i in range(len(speeds)):
+ for j in range(len(freqs)):
+ spectrum[j, i] = power_spectral_densities[i][0][j]
+
+ ax.set_title("Summed vibrations spectrogram")
+ ax.pcolormesh(speeds, freqs, spectrum, norm=matplotlib.colors.LogNorm(),
+ cmap='inferno', shading='gouraud')
+ ax.set_ylim([0., max_freq])
+    ax.set_ylabel('Frequency (Hz)')
+ ax.set_xlabel('Speed (mm/s)')
+
+ return
+
+
+######################################################################
+# Startup and main routines
+######################################################################
+
+def parse_log(logname):
+ with open(logname) as f:
+ for header in f:
+ if not header.startswith('#'):
+ break
+ if not header.startswith('freq,psd_x,psd_y,psd_z,psd_xyz'):
+ # Raw accelerometer data
+ return np.loadtxt(logname, comments='#', delimiter=',')
+ # Power spectral density data or shaper calibration data
+    raise ValueError("File %s does not contain raw accelerometer data and therefore "
+                     "is not supported by the graph_vibrations.py script. Please use the "
+                     "official Klipper calibrate_shaper.py script to process it instead." % (logname,))
+
+
+def extract_speed(logname):
+ try:
+ speed = re.search('sp(.+?)n', os.path.basename(logname)).group(1).replace('_','.')
+ except AttributeError:
+        raise ValueError("File %s does not contain the speed in its name and therefore "
+                         "is not supported by the graph_vibrations.py script." % (logname,))
+ return float(speed)
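+
+# For example, a file with a (hypothetical) name containing 'sp102_5n' would yield
+# group(1) = '102_5', which is converted to the speed 102.5 mm/s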
+
+
+def sort_and_slice(raw_speeds, raw_datas, remove):
+    # Sort so that the speeds and their data stay aligned, in ascending speed order
+ raw_speeds, raw_datas = zip(*sorted(zip(raw_speeds, raw_datas), key=operator.itemgetter(0)))
+
+    # Trim the beginning and end of the data from each file to keep only the
+    # constant-speed portion and remove the start/stop phases of the movements
+ datas = []
+ for data in raw_datas:
+ sliced = round((len(data) * remove / 100) / 2)
+ datas.append(data[sliced:len(data)-sliced])
+
+ return raw_speeds, datas
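+
+# For example, with remove=20 and a 1000-row file: sliced = round((1000 * 20 / 100) / 2) = 100,
+# so only rows 100 to 899 are kept (ie. the middle 80% of the recording)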
+
+
+def setup_klipper_import(kdir):
+ global shaper_calibrate
+ kdir = os.path.expanduser(kdir)
+ sys.path.append(os.path.join(kdir, 'klippy'))
+ shaper_calibrate = importlib.import_module('.shaper_calibrate', 'extras')
+
+
+def vibrations_calibration(lognames, klipperdir="~/klipper", axisname=None, max_freq=200., remove=0):
+ setup_klipper_import(klipperdir)
+
+ # Parse the raw data and get them ready for analysis
+ raw_datas = [parse_log(filename) for filename in lognames]
+ raw_speeds = [extract_speed(filename) for filename in lognames]
+ speeds, datas = sort_and_slice(raw_speeds, raw_datas, remove)
+
+    # As we assume the same number of files for each speed, we can group the PSD
+    # results by this number (to combine vibrations at a given speed across all movements)
+ group_by = speeds.count(speeds[0])
+ # Compute psd and total power of the signal
+ freqs, power_spectral_densities = calc_psd(datas, group_by, max_freq)
+ power_total = calc_powertot(power_spectral_densities, freqs)
+
+ fig, (ax1, ax2) = matplotlib.pyplot.subplots(2, 1, sharex=True)
+ fig.suptitle("Machine vibrations - " + axisname + " moves", fontsize=16)
+
+    # Remove duplicate speeds and graph the processed data
+ speeds = list(OrderedDict((x, True) for x in speeds).keys())
+ plot_total_power(ax1, speeds, power_total)
+ plot_spectrogram(ax2, speeds, freqs, power_spectral_densities, max_freq)
+
+ fig.set_size_inches(10, 10)
+ fig.tight_layout()
+ fig.subplots_adjust(top=0.92)
+
+ return fig
+
+
+def main():
+ # Parse command-line arguments
+ usage = "%prog [options] "
+ opts = optparse.OptionParser(usage)
+ opts.add_option("-o", "--output", type="string", dest="output",
+ default=None, help="filename of output graph")
+ opts.add_option("-a", "--axis", type="string", dest="axisname",
+ default=None, help="axis name to be shown on the side of the graph")
+ opts.add_option("-f", "--max_freq", type="float", default=1000.,
+ help="maximum frequency to graph")
+ opts.add_option("-r", "--remove", type="int", default=0,
+ help="percentage of data removed at start/end of each files")
+ opts.add_option("-k", "--klipper_dir", type="string", dest="klipperdir",
+ default="~/klipper", help="main klipper directory")
+ options, args = opts.parse_args()
+ if len(args) < 1:
+ opts.error("No CSV file(s) to analyse")
+ if options.output is None:
+ opts.error("You must specify an output file.png to use the script (option -o)")
+ if options.remove > 50 or options.remove < 0:
+ opts.error("You must specify a correct percentage (option -r) in the 0-50 range")
+
+ fig = vibrations_calibration(args, options.klipperdir, options.axisname, options.max_freq, options.remove)
+ fig.savefig(options.output)
+
+
+if __name__ == '__main__':
+ main()
diff --git a/is_workflow/scripts/is_workflow.py b/is_workflow/scripts/is_workflow.py
new file mode 100644
index 0000000..13521e8
--- /dev/null
+++ b/is_workflow/scripts/is_workflow.py
@@ -0,0 +1,215 @@
+#!/usr/bin/env python3
+############################################
+###### INPUT SHAPER KLIPPAIN WORKFLOW ######
+############################################
+# Written by Frix_x#0161 #
+# @version: 2.0
+
+# CHANGELOG:
+# v2.0: rewrote this as a Python script (replacing the old bash script) and integrated the newer and improved shaper plotting scripts
+# v1.7: updated the handling of shaper files to account for the new analysis scripts as we are now using raw data directly
+# v1.6: - updated the handling of shaper graph files to be able to optionally account for added positions in the filenames and remove them
+# - fixed a bug in the belt graph on slow SD card or Pi clones (Klipper was still writing in the file while we were already reading it)
+# v1.5: fixed an unexpected Klipper failure at the end of the execution, even when graphs were correctly generated (unicode decode error fixed)
+# v1.4: added the ~/klipper dir parameter to the call of graph_vibrations.py for a better user handling (in case user is not "pi")
+# v1.3: some documentation improvement regarding the line endings that needs to be LF for this file
+# v1.2: added the movement name to be transferred to the Python script in vibration calibration (to print it on the result graphs)
+# v1.1: multiple fixes and tweaks (mainly to avoid having empty files read by the python scripts after the mv command)
+# v1.0: first version of the script based on a Zellneralex script
+
+# Usage:
+# This script was designed to be used with gcode_shell_commands directly from Klipper
+# Available parameters:
+#    BELTS - to generate the belts diagram after calling the Klipper TEST_RESONANCES AXIS=1,(-)1 OUTPUT=raw_data
+#    SHAPER - to generate the input shaper diagrams after calling the Klipper TEST_RESONANCES AXIS=X/Y OUTPUT=raw_data
+#    VIBRATIONS - to generate the vibrations diagram after calling the custom (Frix_x#0161) VIBRATIONS_CALIBRATION macro
+
+
+
+import os
+import time
+import glob
+import sys
+import shutil
+import tarfile
+from datetime import datetime
+
+#################################################################################################################
+RESULTS_FOLDER = os.path.expanduser('~/printer_data/config/adxl_results')
+SCRIPTS_FOLDER = os.path.expanduser('~/printer_data/config/scripts')
+KLIPPER_FOLDER = os.path.expanduser('~/klipper')
+STORE_RESULTS = 3
+#################################################################################################################
+
+from graph_belts import belts_calibration
+from graph_shaper import shaper_calibration
+from graph_vibrations import vibrations_calibration
+
+RESULTS_SUBFOLDERS = ['belts', 'inputshaper', 'vibrations']
+
+
+def is_file_open(filepath):
+ for proc in os.listdir('/proc'):
+ if proc.isdigit():
+ for fd in glob.glob(f'/proc/{proc}/fd/*'):
+ try:
+ if os.path.samefile(fd, filepath):
+ return True
+ except FileNotFoundError:
+ pass
+ return False
+
+
+def get_belts_graph():
+ current_date = datetime.now().strftime('%Y%m%d_%H%M%S')
+ lognames = []
+
+ for filename in glob.glob('/tmp/raw_data_axis*.csv'):
+ # Wait for the file handler to be released by Klipper
+ while is_file_open(filename):
+ time.sleep(3)
+
+ # Extract the tested belt from the filename and rename/move the CSV file to the result folder
+ belt = os.path.basename(filename).split('_')[3].split('.')[0].upper()
+ new_file = os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[0], f'belt_{current_date}_{belt}.csv')
+ shutil.move(filename, new_file)
+
+ # Save the file path for later
+ lognames.append(new_file)
+
+ # Generate the belts graph and its name
+ fig = belts_calibration(lognames, KLIPPER_FOLDER)
+ png_filename = os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[0], f'belts_{current_date}.png')
+
+ return fig, png_filename
+
+
+def get_shaper_graph():
+ current_date = datetime.now().strftime('%Y%m%d_%H%M%S')
+
+ globbed_files = glob.glob('/tmp/raw_data*.csv')
+    if not globbed_files:
+        print("No raw_data CSV file found in the /tmp folder. Unable to plot the shaper graph!")
+        sys.exit(1)
+    if len(globbed_files) > 1:
+        print("More than one raw_data CSV file was found in the /tmp folder. Unable to plot the shaper graph!")
+        print("Please clean up the files in the /tmp folder and start again.")
+        sys.exit(1)
+
+ filename = globbed_files[0]
+
+ # Wait for the file handler to be released by Klipper
+ while is_file_open(filename):
+ time.sleep(3)
+
+ # Extract the tested axis from the filename and rename/move the CSV file to the result folder
+ axis = os.path.basename(filename).split('_')[3].split('.')[0].upper()
+ new_file = os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[1], f'resonances_{current_date}_{axis}.csv')
+ shutil.move(filename, new_file)
+
+ # Generate the shaper graph and its name
+ fig = shaper_calibration([new_file], KLIPPER_FOLDER)
+ png_filename = os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[1], f'resonances_{current_date}_{axis}.png')
+
+ return fig, png_filename
+
+
+def get_vibrations_graph(axis_name):
+ current_date = datetime.now().strftime('%Y%m%d_%H%M%S')
+ lognames = []
+
+ for filename in glob.glob('/tmp/adxl345-*.csv'):
+ # Wait for the file handler to be released by Klipper
+ while is_file_open(filename):
+ time.sleep(3)
+
+        # Clean up the filename and move the file to the results folder
+ cleanfilename = os.path.basename(filename).replace('adxl345', f'vibr_{current_date}')
+ new_file = os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[2], cleanfilename)
+ shutil.move(filename, new_file)
+
+ # Save the file path for later
+ lognames.append(new_file)
+
+    # Sync the filesystem to avoid problems since many files were just moved
+ os.sync()
+
+ # Generate the vibration graph and its name
+ fig = vibrations_calibration(lognames, KLIPPER_FOLDER, axis_name)
+ png_filename = os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[2], f'vibrations_{current_date}_{axis_name}.png')
+
+ # Archive all the csv files in a tarball and remove them to clean up the results folder
+ with tarfile.open(os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[2], f'vibrations_{current_date}_{axis_name}.tar.gz'), 'w:gz') as tar:
+ for csv_file in glob.glob(os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[2], f'vibr_{current_date}*.csv')):
+ tar.add(csv_file, recursive=False)
+ os.remove(csv_file)
+
+ return fig, png_filename
+
+
+# Utility function to get old files based on their modification time
+def get_old_files(folder, extension, limit):
+ files = [os.path.join(folder, f) for f in os.listdir(folder) if f.endswith(extension)]
+ files.sort(key=lambda x: os.path.getmtime(x), reverse=True)
+ return files[limit:]
+
+
+def clean_files():
+ # Compute how many PNG files to keep in each folder: one extra beyond STORE_RESULTS,
+ # and twice as many for the shaper folder since each calibration produces two graphs (X and Y)
+ keep1 = STORE_RESULTS + 1
+ keep2 = 2 * STORE_RESULTS + 1
+
+ # Find old files in each directory
+ old_belts_files = get_old_files(os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[0]), '.png', keep1)
+ old_inputshaper_files = get_old_files(os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[1]), '.png', keep2)
+ old_vibrations_files = get_old_files(os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[2]), '.png', keep1)
+
+ # Remove the old belt files
+ for old_file in old_belts_files:
+ file_date = "_".join(os.path.splitext(os.path.basename(old_file))[0].split('_')[1:3])
+ for suffix in ['A', 'B']:
+ csv_file = os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[0], f'belt_{file_date}_{suffix}.csv')
+ if os.path.exists(csv_file):
+ os.remove(csv_file)
+ os.remove(old_file)
+
+ # Remove the old shaper files
+ for old_file in old_inputshaper_files:
+ csv_file = os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[1], os.path.splitext(os.path.basename(old_file))[0] + ".csv")
+ if os.path.exists(csv_file):
+ os.remove(csv_file)
+ os.remove(old_file)
+
+ # Remove the old vibrations files
+ for old_file in old_vibrations_files:
+ os.remove(old_file)
+ tar_file = os.path.join(RESULTS_FOLDER, RESULTS_SUBFOLDERS[2], os.path.splitext(os.path.basename(old_file))[0] + ".tar.gz")
+ if os.path.exists(tar_file):
+ os.remove(tar_file)
+
+
+def main():
+ # Create the results folders if they don't already exist
+ for result_subfolder in RESULTS_SUBFOLDERS:
+ os.makedirs(os.path.join(RESULTS_FOLDER, result_subfolder), exist_ok=True)
+
+ if len(sys.argv) < 2:
+ print("Usage: plot_graphs.py [SHAPER|BELTS|VIBRATIONS] [axis_name]")
+ sys.exit(1)
+
+ graph_type = sys.argv[1].lower()
+ if graph_type == 'belts':
+ fig, png_filename = get_belts_graph()
+ elif graph_type == 'shaper':
+ fig, png_filename = get_shaper_graph()
+ elif graph_type == 'vibrations':
+ if len(sys.argv) < 3:
+ print("Error: the VIBRATIONS mode requires an axis name as a second argument")
+ sys.exit(1)
+ fig, png_filename = get_vibrations_graph(axis_name=sys.argv[2])
+ else:
+ print("Usage: plot_graphs.py [SHAPER|BELTS|VIBRATIONS] [axis_name]")
+ sys.exit(1)
+
+ fig.savefig(png_filename)
+
+ clean_files()
+ print(f"Graphs created. You will find the results in {RESULTS_FOLDER}")
+
+
+if __name__ == '__main__':
+ main()