CI improvements: General improvements #3984
hdiethelm wants to merge 25 commits into LinuxCNC:master
Conversation
|
Remove eatmydata tested: |
Force-pushed from f59978a to 1921e21.
|
Did you see this message at the "Complete job" stage: |
Yes, commit is already there, I just wait for the last CI to pass before the next push. |
|
@BsAtHome Feel free to cancel any pipeline that has failed jobs. I think I don't have the access rights to do so. |
|
Now the bigger change will be to run everything in prepared containers or runners. It probably won't reduce the runtime a lot, estimated 3-5 min. However, it is just nice for Debian not to have their package repo hammered for nothing. But let's see how well that works. @BsAtHome Debian is the main target, right? So running rip-* under a Debian container is also fine? At the moment, they run on an ubuntu-24.04 runner.
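Moving a rip-* job into a Debian container is a small change in the workflow file. A minimal, hypothetical sketch (job name and image tag are assumptions, not taken from this PR):

```yaml
# Hedged sketch: run the job's steps inside a Debian container
# instead of directly on the ubuntu-24.04 runner.
jobs:
  rip-debian:
    runs-on: ubuntu-24.04
    container: debian:bookworm   # assumption: bookworm as base image
    steps:
      - uses: actions/checkout@v4
      - run: apt-get update && apt-get -y install build-essential
```

The runner still pulls the image, but all apt traffic inside a prepared image can be baked in ahead of time.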
|
Debian is the primary target, yes. Nice to see improvement and it is primarily in the independent packages building the documentation. Those were the slowest all the time. The tests are not going to be significantly faster because they run in sequence. We've been discussing parallel execution, but that requires #2722 to be fully implemented. There are a significant number of issues not addressed in that PR (some noted, others implied, still others to be discovered). |
|
#3983 is in. |
Force-pushed from c617459 to 0c9aaf1.
Thanks, rebased on top of master.
It depends what the target is. For local usage, #2722 is the most comfortable way for users. But it can also be done another way:
Since the test runner will only shorten the CI if package-indep gets below 12 min, I see this as low priority for the CI. @andypugh Objections if I push docker images to the LinuxCNC GitHub? They will appear somewhere here: https://github.com/LinuxCNC/linuxcnc/packages I can probably do that in CI without any additional rights. However, I might need you if I mess something up and packages have to be deleted. I will try it in my account first with the free credits, but to do something meaningful, I have to do it in the LinuxCNC CI.
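Pushing images from CI to the repo's packages page can indeed work with the built-in GITHUB_TOKEN and no extra rights. A hedged sketch; the trigger, job name, and image name are assumptions, not the actual workflow from this PR:

```yaml
# Hedged sketch: build and push a CI base image to ghcr.io.
name: build-ci-image
on:
  workflow_dispatch:
jobs:
  image:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # lets GITHUB_TOKEN push to ghcr.io
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}/ci-base:latest
```

Deleting a pushed package version does require admin/maintainer rights on the package, which matches the "I might need you if I mess something up" concern.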
|
Maybe the curl progress meter can be shut off at download. @NTULINUX, when do these kernels move to linuxcnc.org? Is there a procedure? |
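Shutting off the progress meter is just curl's -s flag, with -S kept so real errors still print. A sketch using a local file:// URL as a stand-in for the actual kernel download (path and payload are made up):

```shell
# -s silences curl's progress meter; -S still shows error messages.
# A local file:// URL stands in for the real download here.
src=$(mktemp)
echo "fake kernel payload" > "$src"
curl -sS -o /tmp/kernel.img "file://$src"
cat /tmp/kernel.img
# prints: fake kernel payload
```

Recent curl also has --no-progress-meter, which silences only the meter without implying the rest of -s.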
|
Opened #3992 as a small stopgap for the firefox snap flake (issue #3991). It adds one |
|
@grandixximo This is fine for me. The main target here is to not have to do all these installs at all. But this will take some time. I could extract the --cpu improvements (25% faster package build) into a separate PR if desired, so that part can already be merged while I am busy here.
|
So, I have a container-based prototype running in my GitLab: Do you think such an approach is viable? If yes, I will try out how to automatically update the docker images from time to time and then port all other targets. Advantage:
Disadvantage:
I will not yet merge this into this PR. As soon as I do, packages will most probably appear in the original LinuxCNC repo.
Let's continue the discussion here. LinuxCNC is a bit special because it needs a ton of dependencies to build. The container needed is about 3.4 GB in size.
If you find something, just tell me. I had some inspiration from https://github.com/open-webui/open-webui, because I know they build containers from CI, plus many different manuals / articles.
|
The size of the container should be acceptable and manageable if it is in a local repo for quick download. If there need to be many different versions, that could be a problem. However, I have no clue what deals have been made with github. @andypugh, do you know of any deals with github? BTW, you need to rebase after I merged the snapped firefox killer. |
It fails anyway; let's see if there is any difference in time. The error is: ERROR: ld.so: object 'libeatmydata.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
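The warning itself is harmless (ld.so just skips the missing library), but it can be silenced by only preloading when the library actually exists. A sketch; the library path is an assumption for amd64 Debian/Ubuntu:

```shell
# Preload libeatmydata only if it is installed; otherwise every spawned
# process prints "ERROR: ld.so: ... cannot be preloaded ... ignored".
LIB=/usr/lib/x86_64-linux-gnu/libeatmydata.so
if [ -e "$LIB" ]; then
    export LD_PRELOAD="$LIB${LD_PRELOAD:+:$LD_PRELOAD}"
fi
```

Alternatively, the eatmydata wrapper binary does this existence check itself, so `eatmydata <command>` avoids the warning without touching LD_PRELOAD directly.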
Scripts are easier to maintain and can be tested locally. apt upgrade should not be needed: the runner / docker image should already be up to date.
There is no reason to fetch the full history for test builds. There are no submodules, so submodule handling is deactivated.
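The fetch and submodule settings map directly onto `actions/checkout` options; a sketch showing them explicitly (these happen to be the v4 defaults, spelled out here for clarity):

```yaml
# Hedged sketch: shallow clone without submodules for test builds.
- uses: actions/checkout@v4
  with:
    fetch-depth: 1      # shallow clone; full history is not needed for a test build
    submodules: false   # the repo uses no submodules
```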
The old ones are outdated and will be removed, as mentioned in the warning in the Actions tab.
Done, it was not so bad; only one man page was missing, so I was able to enforce it. But I had to add an exclude option so I can exclude the auto-updated VERSION / debian/changelog files. The regexp is not perfect, it would also exclude VERSION123, but better than nothing. Any good idea how to do this? The doc step still fails with changed .po files. Should I ignore these?
|
For the regex false-positives (VERSION123 etc.), git pathspec excludes give exact-path matching with no regex anchoring needed:

    git status -u --porcelain -- ':(exclude)VERSION' ':(exclude)debian/changelog'

That replaces the regexp-based exclude.
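The pathspec variant is easy to verify locally; a self-contained demo in a throwaway repo:

```shell
# git pathspec excludes do exact-path matching: VERSION123 is still
# reported, while VERSION and debian/changelog are skipped.
repo=$(mktemp -d)
cd "$repo"
git init -q
touch VERSION VERSION123
mkdir -p debian && touch debian/changelog
git status -u --porcelain -- ':(exclude)VERSION' ':(exclude)debian/changelog'
# prints: ?? VERSION123
```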
Thanks for the hint with pathspec. I tend towards this solution. Regexp is the thing that many people don't really understand and get it wrong (including me). |
|
BTW: With the docker images from my repo, you can easily test parts of the CI locally with this command, run from the local source directory, and then copy-paste snippets from ci.yml: I prefer podman over docker because podman runs without setuid, so the created files are not owned by root.
Advantage: Reusable / Can be run locally
The docs build, integration and translations are fragile. Failing on translations/.po can lead to some self-inflicted wounds. That said, the most common problem is newline consistency, where translations are missing or adding trailing newlines. Most of that gets caught now because msgmerge fails, and that is good. This is not normally merged into the main tree; it is just the weblate branch failing CI. That fail is a Good Thing™. The default man-pages should all function and there should not be any non-.gitignored remains in docs/man/*. That bug has bitten many times. I think, in general, there should be no non-.gitignored remains anywhere after the build and test. That also goes for some of the junk left over in config/* when, f.ex., running RIP qtdragon from a sim config.
If I understand correctly, the new CI enforces this already
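A sketch of what such an enforcement step could look like as a plain script (the actual check in this PR's ci.yml may differ in details):

```shell
# Fail when the build/test left any non-.gitignored files behind.
# Returns 0 outside a git repo so the step can be reused anywhere.
check_leftovers() {
    leftovers=$(git status -u --porcelain) || return 0
    if [ -n "$leftovers" ]; then
        echo "Leftover files after build/test:" >&2
        echo "$leftovers" >&2
        return 1
    fi
}
```

Run once after build and once after test; `git status -u --porcelain` is empty exactly when the tree has no untracked or modified files that .gitignore doesn't cover.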
There are no tests for the UIs; since they are not run, no leftovers can be caught. We'd have to make tests that run the UIs and then check for leftovers. @hdiethelm I could work on a separate PR for UI tests, would that be ok?
Great!
As I understand it, there has been talk about test-running some/all/any of the sims in xvfb and doing some basic functional testing. That would be a very good test target because the UIs also break. Some sim UIs haven't seen a test in ages (one ini-file has a very old version tag). See #3756.
|
@hdiethelm planning to start on #3756 (xvfb-based UI smoke tests) once this lands or in parallel. Quick coordination question on the dep side before I start: my tests will need at minimum I think this is a different situation from your rtapi cleanup V2 (#3919) where I rebased: that one shares files in Two ways to handle the deps:
Either works for me. Which way do you lean? I'll branch from master so neither PR blocks the other. Whoever lands second adjusts the dep list (one-line edit).
Probably you need 2., so the CI still runs on your branch. I have to rebase onto master anyway before merge to check that all is still fine. Just do your thing; as long as there are only a few more packages, I will see them, no issue. Or if mine gets in first, this should also be no issue for you to rebase before merge.
|
So, I finally managed to get all packages inside the docker container: I would not call it elegant, but it does the job. Docker build goes up from 7 to 12 min, but since this is only needed from time to time to stay up to date, I don't see any issue. Or does anyone have an idea how to install these packages without building / installing LinuxCNC? The linuxcnc pipeline stays at ~21 min duration / 3 h usage. The advantage is that the Debian archives are not loaded for nothing, and that if they are down, the build still works.
Actually, I check after build. Should I move the check to after test? It's easy to do; I can even check after build and after test, it takes no time at all. @BsAtHome What to do with the untracked po files? At the moment, I don't ignore them, but I made this build step non-failing, similar to cppcheck / shellcheck:
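A non-failing step can be sketched with GitHub Actions' `continue-on-error`, the same mechanism the advisory cppcheck / shellcheck steps rely on (step name and pathspec are illustrative, not copied from this PR):

```yaml
# Hedged sketch: report the problem in the log without failing the job.
- name: Check for untracked .po files (advisory only)
  continue-on-error: true
  run: test -z "$(git status -u --porcelain -- '*.po')"
```

The step shows up red in the job log but the job as a whole still passes, which makes it easy to tighten later by just deleting the `continue-on-error` line.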
Default is RTAI. No issue so far, but when RTAI is selected, this is built
The po-files should just be ignored. There is no point in failing on them. The whole i18n and l10n stuff needs to be cleaned up anyway. But that is subordinate to fixing the docs build in general (docs build is quite fragile). And all of that is subordinate to reorganizing the docs structure. Currently, cppcheck and shellcheck must be ignored because the build is still not clean. FWIW, the ability to add -Werror took many, many months before all the kinks were ironed out. |
Done:
I know that; introducing these kinds of checks takes a lot of patience. Best in my opinion is to warn only for certain files, but make the test fail for everything that is already clean. That way at least no new issues are introduced, and then go from there step by step.
|
Any clue why this fails? Maybe a broken package went into sid just now? I did not make any relevant changes...
|
@BsAtHome Thinking about these checks, of course I found some issues I created myself. One option would be to do something like this once and commit the file:
|
This is the follow-up to #3983.
These will be more in-depth changes, so riskier, and they will take some time.
There will be a few pushes, including some that should fail on purpose to test whether everything works as desired. Just tell me if I abuse the CI too much and I will find a different solution.
It is experimental for now, but I need a PR so the CI runs.