Using the PATH WB and Adaptive processing not releasing memory. Repeating the PostProcessing into a GRBL file increases
-
- Posts: 52
- Joined: Sat Jul 25, 2020 2:36 pm
Re: Using the PATH WB and Adaptive processing not releasing memory. Repeating the PostProcessing into a GRBL file increases
Hi user1234. I did indeed try this before, with no difference.
Just now I tried again, but no luck.
Both with the tick box OFF and the number at 0.
But thanks for bringing up the idea.
Re: Using the PATH WB and Adaptive processing not releasing memory. Repeating the PostProcessing into a GRBL file increases
Hi,
to my mind that's a workaround, but it can't be accepted as a solution...
Regards, Herbert
Re: Using the PATH WB and Adaptive processing not releasing memory. Repeating the PostProcessing into a GRBL file increases
It isn't even a workaround, as it doesn't help anyway.
Can this come from Python? I don't think so. Or is it from an underlying library? I hope not!
A Sketcher Lecture with in-depth information is available in English, auf Deutsch, en français, en español.
Re: Using the PATH WB and Adaptive processing not releasing memory. Repeating the PostProcessing into a GRBL file increases
I think I'm a bit further down the line.
In grbl_post.py you can find:
# Parse the op
gcode += parse(obj)
Commenting this out seems to fix the memory issue.
Of course no GRBL is then written out to the file, but the problem seems somehow related to this step.
parse is a function in the same grbl_post.py.
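One way to narrow this down is to measure the allocation peak around the suspect call with Python's built-in tracemalloc module. This is only a generic sketch: build_gcode below is a stand-in I made up for the postprocessor's parse(), reproducing the "grow one big string in a loop" pattern, not the actual FreeCAD code.

```python
import tracemalloc

def build_gcode(n):
    # Stand-in for the postprocessor's parse(): builds the output by
    # repeated string concatenation on one ever-growing string.
    gcode = ""
    for i in range(n):
        gcode += "G1 X%d Y%d\n" % (i, i)
    return gcode

tracemalloc.start()
out = build_gcode(10_000)
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print("output size :", len(out), "bytes")
print("peak traced :", peak, "bytes")
```

Wrapping each stage of the postprocessor like this (export, parse, file write) would show which stage the peak belongs to.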
Re: Using the PATH WB and Adaptive processing not releasing memory. Repeating the PostProcessing into a GRBL file increases
I don't think that shows anything important. Clearly if you do not process the g-code it should not cause memory usage.
There is nothing evil about the term "parse"; it is just the name of the function that creates the g-code. It could be called anything.
I tried the same experiment with linuxcnc_post.py, and the results were similar. (Parse is also used in linuxcnc_post.py.)
Gene
Re: Using the PATH WB and Adaptive processing not releasing memory. Repeating the PostProcessing into a GRBL file increases
I'm far from being a Python expert, but I do have some scripting background.
It shows that up to this point no memory is taken by the code executed earlier.
The approach I took here is to see when the memory starts creeping up, and in which module/script it is happening.
I understand that the complete g-code (whatever its size is going to be) is generated in memory.
After it has been generated, the output is saved to a file.
One would expect that after saving the file the memory could be freed.
It is also weird that during the generation of a g-code file of several MBytes, it takes GBytes in memory.
I assume the different g-code post processors (grbl, linuxcnc, etc.) are based on a template, so they behave the same.
I'll dig further...
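An MBytes-on-disk vs GBytes-in-RAM gap like that is consistent with building the whole program as one repeatedly-copied Python string. Two common alternatives, shown as a generic sketch (emit_line is a hypothetical stand-in for whatever produces one g-code line, not a FreeCAD function): collect the lines in a list and join once, or stream each line straight to the output file.

```python
import os
import tempfile

def emit_line(i):
    # Hypothetical stand-in for producing one line of g-code.
    return "G1 X%d Y%d F1000" % (i, i)

def gcode_via_join(n):
    # Collect all lines and join once: one final allocation instead
    # of O(n) ever-growing intermediate strings.
    return "\n".join(emit_line(i) for i in range(n)) + "\n"

def gcode_via_stream(n, path):
    # Stream each line straight to disk: the complete program is
    # never held in memory at all.
    with open(path, "w") as f:
        for i in range(n):
            f.write(emit_line(i) + "\n")

text = gcode_via_join(1000)
fd, path = tempfile.mkstemp(suffix=".nc")
os.close(fd)
gcode_via_stream(1000, path)
with open(path) as f:
    streamed = f.read()
os.remove(path)
print("identical output:", streamed == text)
```

The streaming variant is the one that would cap peak memory regardless of how large the g-code file gets.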
Re: Using the PATH WB and Adaptive processing not releasing memory. Repeating the PostProcessing into a GRBL file increases
That should reveal where the culprit is. Hopefully it can be fixed and is not inside of an external library.
Re: Using the PATH WB and Adaptive processing not releasing memory. Repeating the PostProcessing into a GRBL file increases
I think I suggested about a year ago that someone should be doing leakage testing on FreeCAD (my usage is mostly Part Design + Path WB).
My mobo only supports 4GB of RAM, and after a while my whole system gets very slow to change windows/desktops and generally gets sluggish. It looks a lot like disk thrashing on swap files as RAM dries up.
Re: Using the PATH WB and Adaptive processing not releasing memory. Repeating the PostProcessing into a GRBL file increases
I've made some progress.
The attached code is a modified version of the standard grbl_post.py post processor. This is a very dirty quick fix; no cleanup done.
This was a trial & error approach, as I'm not a Python expert.
Can someone please test this and confirm my findings. Use it at your own risk.
In my case it results in a memory saving of approx. 2GB - 0.2GB = 1.8GB per run of the post processor:
The existing grbl_post.py uses up to approx. 2GB on my model when PostProcessing to a GRBL file.
The modified grbljcl_post.py uses up to approx. 200MB on the same model when PostProcessing to a GRBL file.
Still 200MB to go, but no clue yet.
No difference was found between the two output files using unix's diff
(except for the timestamp and the postprocessor name, which are in the output file).
Looking forward to responses from others with the same findings.
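Part of the remaining footprint after a run might simply be objects that are still referenced when the postprocessor returns. A generic sketch of the idea (export_gcode is a hypothetical example I wrote, not the attached code): drop the references to the big buffers before returning, and trigger a garbage collection so cyclic garbage is reclaimed immediately.

```python
import gc
import os
import tempfile

def export_gcode(path, n=50_000):
    # Hypothetical export step: build the program, write it out, then
    # drop every reference to the large objects before returning, so
    # the interpreter can actually reclaim the memory.
    lines = ["G1 X%d" % i for i in range(n)]
    gcode = "\n".join(lines) + "\n"
    with open(path, "w", newline="") as f:
        f.write(gcode)
    size = len(gcode)
    del lines, gcode       # no lingering references after the export
    gc.collect()           # also collect anything stuck in a cycle
    return size

fd, path = tempfile.mkstemp(suffix=".nc")
os.close(fd)
written = export_gcode(path)
assert written == os.path.getsize(path)
os.remove(path)
print("wrote", written, "bytes")
```

Note that even then, CPython may hold freed memory in its own allocator pools rather than return it to the OS, so the process size reported by the system can stay higher than what Python is actually using.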
Attachments:
grbljcl_post.py (22.78 KiB)