PrzemoF wrote: I'm going to post here things that probably should be changed in the source code, but I might not be able to do them myself soon. It may also serve as a hint for new developers about what they could work on [...]
An idea for pure Python programmers who want to create a dead-serious workbench, to be used by end users and developers alike to stress and prioritize important FreeCAD features.
Will your skills match the challenge? Have a look at what technology is involved.
Here is the feature: similar to the macro recorder, a "regression test recorder workbench".
It could offer three GUI buttons ("start", "stop", "cancel") and a text field ("test name"). It records user interactions on the GUI in the same way the macro recorder does. When the user presses "stop", it loops over all effective items in the document tree and saves their transformations. Everything is saved as a Python unittest file which, when executed, replays the recorded interaction and compares the recorded transformations against the transformations actually produced by the replay.
So instead of a macro with no comparison against any expected result, your tool builds a regression test. We can then motivate other users to record the model creations that are important to them. Experts know how to model things that push FreeCAD to the limits of what it can do, and those recordings will be great for automated regression testing: we can execute them in continuous integration after every single build. Builds and tests are triggered by every commit any developer pushes to the GitHub repository, and they get executed on all operating systems.
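To make the idea concrete, a file produced by such a recorder might look roughly like the following. This is only a sketch, not real recorder output: the stored placement values, the object names, and the replay() helper are all hypothetical, and FreeCAD's API is stubbed out so the shape of the comparison is visible in plain Python.

```python
import unittest

# Hypothetical recorder output: placements captured when the user
# pressed "stop", keyed by each object's name in the document tree.
RECORDED_PLACEMENTS = {
    'Box': (0.0, 0.0, 0.0),        # position (x, y, z) in mm
    'Cylinder': (10.0, 5.0, 0.0),
}

def replay():
    """Stand-in for replaying the recorded GUI commands.

    In the real workbench this would re-execute the recorded
    commands inside FreeCAD and return the placements found in the
    rebuilt document. Here it returns fixed values so the sketch
    is self-contained.
    """
    return {
        'Box': (0.0, 0.0, 0.0),
        'Cylinder': (10.0, 5.0, 0.0),
    }

class TestRecordedModel(unittest.TestCase):
    def test_placements_match_recording(self):
        actual = replay()
        for name, recorded in RECORDED_PLACEMENTS.items():
            # the object must exist again after the replay
            self.assertIn(name, actual)
            # transformations are floats, so compare with a tolerance
            for r, a in zip(recorded, actual[name]):
                self.assertAlmostEqual(r, a, places=6)
```

Saved under a name like test_recorded_model.py (the filename is an assumption), such a file could be executed with `python -m unittest test_recorded_model`.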
You do not write tests! You create your own GUI workbench with its own menu that offers record/replay functionality similar to the macro recorder. End users who want to make sure certain functionality is still found in future FreeCAD releases will use it "to manifest their interests". Developers who must modify or maintain code will use the test results as daily feedback.
Here is an official example of what Python unittests look like. With assertions, test bots compare expected against actual values, that is, recorded transformations against the transformations actually found during replay.
import unittest

class TestStringMethods(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

if __name__ == '__main__':
    unittest.main()
(https://docs.python.org/3.5/library/uni ... ic-example)