
Develop some type of CI/QA for the packages #69

Open
rgerhards opened this issue Nov 10, 2017 · 38 comments

Comments

@rgerhards
Member

rgerhards commented Nov 10, 2017

Ideally, we should have some system to automatically test the generated packages routinely (once a day would be nice for the daily ones). Exactly how this can be done still needs to be worked out.

A rough idea (but nothing more) is along these lines - for a minimal system:

  • use Travis CI scheduled builds (gives us Ubuntu 14.04 testing only, but at least that)
  • add ppa
  • try to install components from it, check that apt succeeds
  • we could then possibly run an adapted rsyslog testbench (more work to do)

Even without the testbench it would be better than what we have today. Let's reach for the low-hanging fruit, then build on that. It would be great if we could find someone willing to help set such a thing up.
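The minimal check described above could be sketched as a short shell script. The PPA name matches the Adiscon stable PPA mentioned later in this thread; the filename is illustrative. The script is written to a file and only syntax-checked here, since the real commands need root and network access:

```shell
# Sketch of the minimal daily check: register the PPA, refresh the
# package index, and fail loudly if the core package cannot be installed.
cat > ppa-smoke-test.sh <<'EOF'
#!/bin/sh
set -e                                   # abort on the first failing command
sudo add-apt-repository -y ppa:adiscon/v8-stable
sudo apt-get update
sudo apt-get install -y rsyslog          # must resolve from the PPA
rsyslogd -v                              # prove the binary at least starts
EOF
sh -n ppa-smoke-test.sh && echo "smoke-test script parses"
```

A scheduled CI job would then simply run this script and report its exit code.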

@rgerhards rgerhards changed the title Develop some time of CI/QA for the packages Develop some type of CI/QA for the packages Nov 11, 2017
rgerhards added a commit that referenced this issue Nov 13, 2017
This is to be run daily by Travis to ensure at least bare-basic
usability of the PPA.

see also #69
@deoren

deoren commented Feb 24, 2018

@rgerhards What do you think of using Docker containers for this? The steps you've already noted would probably be enough to catch issues such as #73.

Probably a set of containers would do:

  • Ubuntu 14.04
  • Ubuntu 16.04
  • Ubuntu 17.10

and then in a few months a container for Ubuntu 18.04 could be added.
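The container idea above could be sketched as a loop that generates one throwaway-container check per release. The generated docker commands are illustrative and only syntax-checked here, since running them needs Docker and network access:

```shell
# Generate one smoke-test invocation per Ubuntu release listed above;
# each would run the PPA install check inside a disposable container.
for release in 14.04 16.04 17.10; do
  printf 'docker run --rm ubuntu:%s sh -ec "apt-get update -qq && apt-get install -y software-properties-common && add-apt-repository -y ppa:adiscon/v8-stable && apt-get update -qq && apt-get install -y rsyslog"\n' "$release"
done > container-checks.sh
sh -n container-checks.sh && echo "container checks generated"
```

Adding 18.04 later would be a one-word change to the release list.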

@atc0005
Contributor

atc0005 commented Jun 7, 2020

Ideally, we should have some system to automatically test the generated packages routinely (once a day would be nice for the daily ones). Exactly how this can be done still needs to be worked out.

@rgerhards GitHub Actions supports scheduled events:

https://help.github.com/en/actions/reference/events-that-trigger-workflows#scheduled-events-schedule

Any interest in using this? I've got some experience setting up Action Workflows for some of my projects and could take a stab at this if you'd be willing to use it. I recall reading somewhere that you prefer to be hosting provider agnostic when possible, but with the reference to Travis I assume that you don't have a strong preference here.
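A scheduled trigger of the kind the linked docs describe is a small YAML fragment. The cron value below is an illustrative once-a-day choice, written to a hypothetical file name so the fragment can be checked offline:

```shell
# Sketch of a GitHub Actions scheduled trigger (cron expressions run in UTC).
cat > daily-package-check.yml <<'EOF'
on:
  schedule:
    # once a day at 06:00 UTC -- matches the "once a day" goal above
    - cron: "0 6 * * *"
EOF
grep -q "cron" daily-package-check.yml && echo "schedule block written"
```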

@rgerhards
Member Author

Any interest in using this?

definitely interested in it! Any samples or cooperation would be appreciated.

@atc0005
Contributor

atc0005 commented Jun 8, 2020

Any interest in using this?

definitely interested in it! Any samples or cooperation would be appreciated.

Great, I'll try taking a stab at this soon and will report back. Once the PPA GHAW is working properly, a similar job could probably be set up for OBS.

atc0005 added a commit to atc0005/rsyslog-pkg-ubuntu that referenced this issue Jun 11, 2020
This GHAW performs the following actions on a set schedule:

- Install stock rsyslog, emit rsyslog and pkg cache info
- Enable PPA, install PPA-provided rsyslog, emit pkg and cache info
- Install additional PPA-provided rsyslog packages

This process is performed for both the "daily stable" and "scheduled
stable" PPAs.

References:

- rsyslog#69
- #1
@atc0005
Contributor

atc0005 commented Jun 11, 2020

@rgerhards Still working on this, but so far, so good:

https://github.com/atc0005/rsyslog-pkg-ubuntu/actions/runs/132537854

Going to replace use of sudo apt install with sudo apt-get install (I let a few slip by) and also capture the output from systemctl status rsyslog as an additional item after installing and restarting rsyslog.

I'll squash the commits and submit a PR probably later today or tomorrow.

The file can be found here (for now):

atc0005/rsyslog-pkg-ubuntu@98d7b0f

@atc0005
Contributor

atc0005 commented Jun 11, 2020

@rgerhards I made those changes and some additional ones. Switched the timing to hourly and included Ubuntu 20.04 in the mix. By the time you read this the job should have run a number of times more and you'll be able to get a sense for how the output collects.

I initially had daily or even every 4 hours as a schedule target, but the whole set completes very quickly; the last run completed in 2 minutes, 7 seconds. Even so, I left the other options commented out so that I could easily switch the timing to whatever you prefer. It may be worth leaving an alternate "test" value staged in comments so that you can switch out the schedule if/when you need to troubleshoot the process.

@atc0005
Contributor

atc0005 commented Jun 11, 2020

Already it looks like including Ubuntu 20.04 surfaced at least one unexpected item:

E: Unable to locate package rsyslog-mmjsonparse
##[error]Process completed with exit code 100.

Could be a false positive, though, related to an earlier failure.

This was with installing all of the packages in one "block" vs separate installation commands (one per package). I'm going to go ahead and modify the workflow to use the one-command-per-package approach for both jobs.

refs: https://github.com/atc0005/rsyslog-pkg-ubuntu/actions/runs/132678652
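The one-command-per-package approach mentioned above could be generated along these lines. The package names are examples drawn from elsewhere in this thread; the commands are written to a file and syntax-checked rather than run:

```shell
# One apt-get invocation per package, so a single missing package (like
# rsyslog-mmjsonparse above) shows up as exactly one failing step.
for pkg in rsyslog rsyslog-mmjsonparse rsyslog-mmkubernetes; do
  printf 'sudo apt-get install -y %s\n' "$pkg"
done > per-package-install.sh
sh -n per-package-install.sh && echo "per-package steps generated"
```

In a GitHub Actions workflow the same idea becomes one named step per package, which is what makes the failing package obvious in the job log.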

@rgerhards
Member Author

@friedl can you pls have a look at the issue

@rgerhards
Member Author

@atc0005 thx - this looks very good. I admit I had only a short glimpse and do not yet fully understand what's going on, but it looks very useful. I guess we need to discuss quite a few things :-)

If I understand correctly, we could also easily use OBS as a test target. I assume we can also test CentOS and Fedora? Note that I myself am currently very focused on OBS and have only very occasionally worked on the old PPA-based system (which I am far from fully understanding).

@rgerhards
Member Author

Reading actions/runner-images#45 I wonder if we could work something along the lines of rsyslog's buildbot CI environment. But granted, testing with docker is not fully comprehensive (no real system startup or systemd involved) and adding full VM on-demand creation would possibly cause quite some work on the buildbot front. Any good compromise? I mean testing on Ubuntu is better than nothing, but it's really a small subset, especially from the enterprise PoV...

@atc0005
Contributor

atc0005 commented Jun 12, 2020

Already it looks like including Ubuntu 20.04 surfaced at least one unexpected item:

E: Unable to locate package rsyslog-mmjsonparse
##[error]Process completed with exit code 100.

I eventually worked out the syntax and the Ubuntu 20.04 jobs are running entirely separate from the Ubuntu 16.04 and 18.04 jobs allowing those to run to completion. Thankfully the chosen settings don't "mask" the results from the Ubuntu 20.04 job runs allowing us to see that they failed and to some degree why they failed.

Latest example as of this writing (changed the timing in the last commit):

https://github.com/atc0005/rsyslog-pkg-ubuntu/runs/765022335?check_suite_focus=true

The complaint this time was regarding another package not being found.

I booted up a local LXD Ubuntu 20.04 container, added the ppa:adiscon/v8-devel repo and then proceeded to try and install the packages that the GitHub Actions Workflow attempts to handle, but didn't make it far. Though it appears that the PPA is "registered", apt-cache policy rsyslog seems to illustrate that the PPA is not being consulted. When I repeat the process, but this time adding ppa:adiscon/v8-stable packages from that PPA are installed.

Seems that something is up with the adiscon/v8-devel PPA specific to Ubuntu 20.04.
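The apt-cache policy diagnosis works because the policy output lists every repository a candidate version can come from; a correctly registered PPA shows up as a source. A sketch against canned output (the version string and file name are invented for the example; the URL follows the usual Launchpad PPA pattern):

```shell
# Canned `apt-cache policy rsyslog` output, for illustration only.
cat > policy-output.txt <<'EOF'
rsyslog:
  Installed: (none)
  Candidate: 8.2006.0-0adiscon1focal1
  Version table:
     8.2006.0-0adiscon1focal1 500
        500 http://ppa.launchpad.net/adiscon/v8-stable/ubuntu focal/main amd64 Packages
EOF
# The check: if the PPA never appears as a source, apt is not consulting it.
if grep -q "ppa.launchpad.net/adiscon" policy-output.txt; then
  echo "PPA is consulted for rsyslog"
else
  echo "PPA missing from policy output - repo not registered"
fi
```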

@atc0005
Contributor

atc0005 commented Jun 12, 2020

Reading actions/virtual-environments#45 I wonder if we could work something along the lines of rsyslog's buildbot CI environment. But granted, testing with docker is not fully comprehensive (no real system startup or systemd involved) and adding full VM on-demand creation would possibly cause quite some work on the buildbot front. Any good compromise? I mean testing on Ubuntu is better than nothing, but it's really a small subset, especially from the enterprise PoV...

You are right, GitHub Actions limits the virtual Linux environments to Ubuntu. I imagine it works for a lot of use cases (including the scope of this one repo), but not for testing packages intended for other distros.

I've been using LXD containers for quick, local testing and really like how lightweight they are and how they attempt to emulate a full VM environment. It doesn't sound like they would be an option within the environments provided by GitHub Actions, but maybe for buildbot use they would be.

Maybe start with GitHub Actions for this repo to test package installation from PPA and OBS, sort out the kinks there and then work on buildbot for other distros.

Once the buildbot setup is stable, potentially either retire the GitHub Actions setup here or leave it running in parallel.

@atc0005
Contributor

atc0005 commented Jun 12, 2020

I booted up a local LXD Ubuntu 20.04 container, added the ppa:adiscon/v8-devel repo and then proceeded to try and install the packages that the GitHub Actions Workflow attempts to handle, but didn't make it far. Though it appears that the PPA is "registered", apt-cache policy rsyslog seems to illustrate that the PPA is not being consulted. When I repeat the process, but this time adding ppa:adiscon/v8-stable packages from that PPA are installed.

Seems that something is up with the adiscon/v8-devel PPA specific to Ubuntu 20.04.

I forgot to add: I'll leave the Workflow running with its current schedule of every 15 minutes in case your team wishes to test resolution of the PPA "not registering" (for lack of a more appropriate description) properly with Ubuntu 20.04. Once that is sorted it should be picked up in the next scheduled Workflow run and provide the results under the Actions tab of my fork. I can also go ahead and clean up the Workflow file and submit as a PR here if you'd like to get it merged in at the current state, or @friedl can copy the existing file and test in another fork. Whatever works for you guys.

Also, I went ahead and added a Workflow for installing from OBS. It hasn't been tested yet, but should run shortly. I'll check in on it later to see if it had issues and, if they're minor, will work to correct them today.

@rgerhards
Member Author

@atc0005 I guess the key point is that we get to some script which cleverly uses docker containers to do as many checks as possible. That way we could integrate into Travis and/or buildbot. We already do some parts of that with e.g. the clang static analyzer. I guess we can borrow ideas from the github actions to generate these scripts.

The bottom line is that we cannot do a full startup and functional test of rsyslog/the package via docker (we can get systemd running, so we may get to "half the real thing", but we would really need on-demand VMs to do this). Better go for the lower-hanging fruit, at least for now.

@atc0005
Contributor

atc0005 commented Jun 12, 2020

Quick update: The OBS Workflow is working now. Ubuntu 16.04 didn't like the official setup directions, so I modified the process somewhat to pipe the signing key into apt-key add instead of dropping the file into the trusted keys location (Ubuntu 18.04+ was fine with this, but not 16.04). Now all three supported LTS editions trust the OBS repo and are passing.
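The workaround described above could be sketched as follows; $OBS_REPO_URL is a placeholder for the actual OBS repository URL (OBS download repositories publish their signing key as Release.key), and the script is only syntax-checked here:

```shell
# Pipe the repo signing key into `apt-key add -` (works on Ubuntu 16.04)
# instead of dropping the key file into /etc/apt/trusted.gpg.d/ (18.04+).
cat > trust-obs-key.sh <<'EOF'
#!/bin/sh
set -e
curl -fsSL "$OBS_REPO_URL/Release.key" | sudo apt-key add -
sudo apt-get update
EOF
sh -n trust-obs-key.sh && echo "key-trust script parses"
```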

@friedl
Contributor

friedl commented Jun 12, 2020

@friedl can you pls have a look at the issue

I think the reason is that the daily stable for 20.04 is still not complete because of the missing librelp. In the log for the 20.04 build of the daily stable, you can see that it tries to install subpackages from rsyslog-8.2001.0, which never existed for 20.04.

The 16.04 install, on the other hand, works fine, because that one completed for the 8.2006.0~ build.

@atc0005
Contributor

atc0005 commented Jun 16, 2020

If you feel there is value, I can go ahead and submit the work done thus far in a cleaned-up PR to this repo (likely tomorrow). While it won't provide the necessary support for other related repos (e.g., the CentOS/RHEL package repo), hopefully it will be useful for monitoring the packages generated from this repo?

If you do see some value, at what frequency would you like to run the jobs? Once daily, 4 hours?

@rgerhards
Member Author

@atc0005 I agree it helps in any case, so it makes sense to go forward. If I understand correctly, this job runs on the repo, so it catches both the daily and scheduled stable builds. Then I would think once a day is sufficient.

Question: how do we get error notification? Does it require polling the GitHub project (TBH, this will not work very well) or is there any way to obtain push notifications?

@atc0005
Contributor

atc0005 commented Jun 17, 2020

@rgerhards: I agree it helps in any case, so it makes sense to go forward.

Great, I'll prepare a Pull Request then. I had hoped to get it done today, but that hasn't worked out. I'll try to get this in tomorrow.

If I understand correctly, this job runs on the repo

It can run anywhere you like, but it may make sense to run it here in this repo just so you can reference the results. GitHub offers "badges" that display the results of the last job (or in this case "jobs"), so right on the main README you could display whether the OBS and PPA package jobs are failing or passing. The badges could link directly to the latest results for each.

so it catches both the daily and scheduled stable builds. Then I would think once a day is sufficient.

If we use the current GitHub Actions Workflows I drafted then both of those builds will be tested. I'll update them to use a daily schedule before submitting the PR.

Question: how do we get error notification? Does it require polling the GitHub project (TBH, this will not work very well) or is there any way to obtain push notifications?

There are multiple ways to be notified:

Screenshot of the settings:

(screenshot of the notification settings omitted)

This is from my personal account settings. I don't recall if there is a GitHub Organization-wide setting.

I thought that there was Webhook support for GitHub Actions based on prior reading, but I may have overlooked it when I checked just now.

@atc0005
Contributor

atc0005 commented Jun 19, 2020

@friedl: I think the reason is that the daily stable for 20.04 is still not complete because of the missing librelp. In the log for the 20.04 build of the daily stable, you can see that it tries to install subpackages from rsyslog-8.2001.0, which never existed for 20.04.

Is this only for the short term or is this expected to persist for a while? I ask because I'm trying to determine how the PPA-based workflow should handle that scenario.

Right now the daily stable PPA workflow is configured to allow the Ubuntu 16.04 and 18.04 jobs to continue when the 20.04 job fails. Should that behavior continue (marking the 20.04 job as "experimental"), or should the 16.04 and 18.04 jobs be halted when the 20.04 fails due to a subpackage installation issue?
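The "experimental" treatment could be expressed with continue-on-error scoped to the 20.04 matrix entry, roughly like this fragment (illustrative, not taken from the actual workflow; written to a file so it can be inspected offline):

```shell
# fail-fast: false lets sibling matrix jobs finish; continue-on-error on
# the 20.04 entry records its failure without failing the whole run.
cat > matrix-fragment.yml <<'EOF'
strategy:
  fail-fast: false
  matrix:
    os: [ubuntu-16.04, ubuntu-18.04, ubuntu-20.04]
runs-on: ${{ matrix.os }}
continue-on-error: ${{ matrix.os == 'ubuntu-20.04' }}
EOF
grep -q "continue-on-error" matrix-fragment.yml && echo "fragment written"
```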

@atc0005
Contributor

atc0005 commented Jul 2, 2020

@friedl: I think the reason is that the daily stable for 20.04 is still not complete because of the missing librelp. In the log for the 20.04 build of the daily stable, you can see that it tries to install subpackages from rsyslog-8.2001.0, which never existed for 20.04.

Is this only for the short term or is this expected to persist for a while? I ask because I'm trying to determine how the PPA-based workflow should handle that scenario.

Right now the daily stable PPA workflow is configured to allow the Ubuntu 16.04 and 18.04 jobs to continue when the 20.04 job fails. Should that behavior continue (marking the 20.04 job as "experimental"), or should the 16.04 and 18.04 jobs be halted when the 20.04 fails due to a subpackage installation issue?

Hi all,

Just looping back to see if you saw my last response and if you have any further feedback. Worst case I can just squash the commits for what I have and submit as-is with further tweaks via follow-up PRs.

atc0005 added a commit to atc0005/rsyslog-pkg-ubuntu that referenced this issue Jul 3, 2020
CHANGES

The Workflows provided by this Pull Request perform the following
actions on a daily schedule:

- Install stock Ubuntu rsyslog

- Enable official "upstream" repo, install upstream rsyslog

- Install additional upstream rsyslog packages as separate steps to
  help isolate potential installation issues

Other output captured:

- rsyslog packages and cache info
- systemctl status output
- dpkg -l output filtered to rsyslog, adiscon
- 'apt-cache policy rsyslog' results before and after adding PPA

These tasks are performed on these Ubuntu releases:

- Ubuntu 16.04
- Ubuntu 18.04
- Ubuntu 20.04

This process is performed for:

- "daily stable" PPA
- "scheduled stable" PPA
- Open Build Service (OBS) repo

The main README has been updated to provide status badges to quickly
display the status of the latest workflow job executions.

REFERENCES

- rsyslog#69
- #1
@atc0005
Contributor

atc0005 commented Jul 3, 2020

@friedl: I think the reason is that the daily stable for 20.04 is still not complete because of the missing librelp. In the log for the 20.04 build of the daily stable, you can see that it tries to install subpackages from rsyslog-8.2001.0, which never existed for 20.04.

Is this only for the short term or is this expected to persist for a while? I ask because I'm trying to determine how the PPA-based workflow should handle that scenario.
Right now the daily stable PPA workflow is configured to allow the Ubuntu 16.04 and 18.04 jobs to continue when the 20.04 job fails. Should that behavior continue (marking the 20.04 job as "experimental"), or should the 16.04 and 18.04 jobs be halted when the 20.04 fails due to a subpackage installation issue?

Hi all,

Just looping back to see if you saw my last response and if you have any further feedback. Worst case I can just squash the commits for what I have and submit as-is with further tweaks via follow-up PRs.

@rgerhards This is the path I ended up taking. I cleaned up the work on the branch, tested it in my fork, and have submitted #104 for review/consideration.

As previously discussed, this doesn't cover anything aside from this repo, but perhaps having the workflows execute here, with updated status badges on the README, will make the changes worth including.

@rgerhards
Member Author

@atc0005 sorry for the delay; as you possibly have noticed, I was busy on a related effort: getting the repo to a PR-based workflow. Now working on the integration of your work. I guess we can also look into how to merge this all together into a PR-based "check-before-merge" style of pipeline.

@atc0005
Contributor

atc0005 commented Jul 6, 2020

@rgerhards: sorry for the delay

I understand. Only so much time in the day!

I guess we can also look into how to merge this all together into a PR based "check-before-merge" style of pipeline.

GitHub Actions supports that as well. From what I remember, you can set multiple on triggers (probably not official description) where you want a Workflow to run. PR #104 triggers on a schedule, but you can also trigger on pull requests.

For some of my personal projects I have build/linting tasks run when PRs are opened and when the linked branch is updated (force-pushed or fast-forward).

Relevant YAML block:

# Run builds for Pull Requests (new, updated)
# `synchronized` seems to equate to pushing new commits to a linked branch
# (whether force-pushed or not)
on:
  pull_request:
    types: [opened, synchronize]

I believe that you can combine the two if desired to have the same Workflow run on a schedule and on specific events.

Once you are happy with the Workflows, you can set them as required within the GitHub repo configuration and merges will be blocked unless the Workflow runs pass (unless you allow administrators to override the status checks and merge anyway).
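Combining the two trigger styles is a matter of listing both under on:. A minimal illustrative fragment (the cron value is an example, written to a hypothetical file name):

```shell
# One workflow, two triggers: a daily schedule plus pull_request events.
cat > combined-triggers.yml <<'EOF'
on:
  schedule:
    - cron: "0 6 * * *"
  pull_request:
    types: [opened, synchronize]
EOF
grep -q "pull_request" combined-triggers.yml && echo "combined triggers written"
```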

@rgerhards
Member Author

@atc0005 just want to point you to the buildbot-based CI I am setting up, in case you haven't yet seen it. This is a tester (the title is no longer correct; the bug is under investigation, it's an OBS problem):

https://build.rsyslog.com/#/builders/239/builds/36

relevant part of buildbot config is here:

https://github.com/rsyslog/buildbot-config/blob/master/master/master_include_rsyslogpkg.py#L101

@rgerhards
Member Author

@atc0005

I believe that you can combine the two if desired to have the same Workflow run on a schedule and on specific events.

I think the workflow obviously needs to be adapted so that it takes newly built packages from the PR. This currently looks like a bit of a problem to me (maybe we can set up a pure testing-only repo for this case, but at least it sounds somewhat complicated...). Not keen on the idea of pulling them in via local files (for reproduction).

Currently this looks like a major step / stumbling block to me. I guess it's irrelevant whether it is buildbot or github actions in this regard (with buildbot we may be able to do some more magic on the builder machine, though).

@atc0005
Contributor

atc0005 commented Jul 6, 2020

@rgerhards: maybe we can set up a pure testing-only repo for this case

I see what you mean. You'd have to build the code, build the packages, have a test repo in place to host them, and then test installing from that repo. Sounds pretty brittle.

You can, however, install packages locally after building them via apt-get install ./path/to/filename, so while it might not check all of the boxes, just building the package and installing it would be a step forward?

It might almost be worth thinking of the goal as a series of milestones and shoot for the easiest first, then iterate off of the results (which is likely how you're already looking at this).

Package building is outside of my current skillset, however, so I am likely overlooking a lot of blockers.

Tangent: Is there a primary/consolidated list of available packages somewhere? I'm thinking in terms of how to keep the new GitHub Action Workflow files updated with newer packages as they become available (or are phased out).

I'm seeing a scenario like with the official project's changelog: merging a PR there requires a fairly quick follow-up update to the changelog so that the PR changes are reflected.

@rgerhards
Member Author

You can however install packages locally after building them though via apt-get install ./path/to/filename, so while it might not check all of the boxes, just building the package and installing it would be a step forward?

I think we can create a local repository and install from there. This should be very close to the "real thing", especially when it comes to dependency resolution. Looking at this.
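A local repository along those lines could be sketched with dpkg-scanpackages (from dpkg-dev) and a file: apt source. Paths are illustrative, and the script is only syntax-checked here since it needs root and a set of built .deb files:

```shell
# Index freshly built .debs, point apt at the directory, then install
# through apt so real dependency resolution is exercised.
cat > local-repo-install.sh <<'EOF'
#!/bin/sh
set -e
cd /tmp/pkgs                              # directory holding the built .debs
dpkg-scanpackages --multiversion . > Packages
echo "deb [trusted=yes] file:/tmp/pkgs ./" | \
  sudo tee /etc/apt/sources.list.d/local-rsyslog.list
sudo apt-get update
sudo apt-get install -y rsyslog           # resolved from the local repo
EOF
sh -n local-repo-install.sh && echo "local-repo script parses"
```

The [trusted=yes] flag skips signature checking, which is acceptable for a throwaway CI-local repo but not for anything published.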

It might almost be worth thinking of the goal as a series of milestones and shoot for the easiest first, then iterate off of the results (which is likely how you're already looking at this).

yup, that's what I am aiming at

Package building is outside of my current skillset, however, so I am likely overlooking a lot of blockers.

Please also have a look at #117 - comments and suggestions are appreciated. Fighting with caching ATM. This resembles what I already do on buildbot. But actions has greater potential.

Tangent: Is there a primary/consolidated list of available packages somewhere? I'm thinking in terms of how to keep the new GitHub Action Workflow files updated with newer packages as they become available (or are phased out).

Sounds useful, but is not yet there. I have begun to work on the packaging projects and there is a lot of movement ATM. I guess it makes sense to compile a list once it is stable. Help is always appreciated :-)

@atc0005
Contributor

atc0005 commented Jul 8, 2020

Sounds useful, but is not yet there. I have begun to work on the packaging projects and there is a lot of movement ATM. I guess it makes sense to compile a list once it is stable. Help is always appreciated :-)

The only thought I had regarding automating the package list is to set up a GitHub Action that clones the repo and, treating one of the package list files (I forget what they're called) as authoritative, checks that list against the current GitHub Actions Workflows. This would help ensure that the daily installation of available packages stays in sync with the actual available packages.
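Such a sync check could be as small as a fixed-string comparison of the two lists; the file names and package contents below are hypothetical stand-ins:

```shell
# Authoritative package list vs the packages currently named in the
# workflow files (both invented for this sketch).
printf 'rsyslog\nrsyslog-mmjsonparse\nrsyslog-mmkubernetes\n' > pkg-list.txt
printf 'rsyslog\nrsyslog-mmjsonparse\n' > workflow-pkgs.txt

# grep -vxF -f: print authoritative entries with no exact match in the
# workflow list, i.e. packages the daily check would silently skip.
missing=$(grep -vxF -f workflow-pkgs.txt pkg-list.txt || true)
if [ -n "$missing" ]; then
  echo "workflow is missing: $missing"
fi
```

Exiting non-zero when $missing is non-empty would turn this into a CI gate.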

Please also have a look at #117 - comments and suggestions are appreciated. Fighting with caching ATM.

Take a look at these links:

You can specify the runner type for each job in a workflow. Each job in a workflow executes in a fresh instance of the virtual machine. All steps in the job execute in the same instance of the virtual machine, allowing the actions in that job to share information using the filesystem.

In short, I think you'll need to setup different jobs and not one job with many steps. That (based on those docs) should provide a fresh environment for each build.

@rgerhards
Member Author

In short, I think you'll need to setup different jobs and not one job with many steps. That (based on those docs) should provide a fresh environment for each build.

as there is a lot of common work to do, I think it is useful to have it in one job. But multiple jobs may get us better concurrency :) maybe it's best to combine both and share the common artifact - will give it a try. Thx!

@rgerhards
Member Author

Update: switched to matrix build - it turned out that the overhead is less than 1 minute per builder VM, so it looks acceptable. On the other hand, persisting an artifact in this case is pretty complex. Now working on local installs...

@rgerhards
Member Author

@atc0005 thx for introducing me to github actions! A very powerful tool :-)

I just also tried to use it on the rsyslog project, but for some reason the PR actions is not activated. Maybe you have an idea what the issue is? https://github.com/rsyslog/rsyslog/pull/4349/files

@atc0005
Contributor

atc0005 commented Jul 8, 2020

@atc0005 thx for introducing me to github actions! A very powerful tool :-)

You're welcome, and I agree, there is a ton of functionality there. I barely feel I've managed to scratch the surface!

I just also tried to use it on the rsyslog project, but for some reason the PR actions is not activated. Maybe you have an idea what the issue is? https://github.com/rsyslog/rsyslog/pull/4349/files

Not sure, but I noticed that there is at least one existing CI failure. I'm not sure if that would block GitHub Actions from running. I'm also not completely sure whether the workflows would be active while they still reside within your branch and not within a branch on the main repo.

Also, double-check to ensure that Actions are enabled for the main repo. Perhaps they default to off if 3rd-party CI is already enabled?

https://github.com/rsyslog/rsyslog/settings/actions

@rgerhards
Member Author

@atc0005 just FYI: I have also integrated Github Actions experimentally into the main rsyslog project. Looks promising!

@rgerhards
Member Author

@atc0005 sry, forgot: any further suggestions for testing inside the packaging project are appreciated. The compile, package build and package "publish" (via local repository) steps are now done, so I guess we could now integrate some of the cron job checks. Just need to move back to some other work I had stopped to get to the current stage :-)

@atc0005
Contributor

atc0005 commented Jul 13, 2020

@rgerhards: sry, forgot: any further suggestions for testing inside the packaging project are appreciated. The compile, package build and package "publish" (via local repository) steps are now done, so I guess we could now integrate some of the cron job checks. Just need to move back to some other work I had stopped to get to the current stage :-)

Wow, that workflow is pretty comprehensive. I glanced over it, but will need to dig in further until I can fully understand all of the steps. I've wanted to learn how to set up an APT repo for a while now; perhaps I'll pick up the essentials from reading over what you've done thus far.

Regarding the cron job checks, are you ready to have those Workflows attempt to install recent packages?

I updated the jobs which run in my fork, and installation of the packages you've only recently created is failing (not found). I'm guessing that they're not yet available in the PPA or OBS repos? Is it worth updating the Workflows in this repo with those packages (knowing it will cause the jobs to fail until fixed), or should we wait until you have the packages available for installation from the repos?

@rgerhards
Member Author

@atc0005 thx for the comments, I appreciate them. Without your comments and samples, we would never have arrived here!

The current workflow does not yet publish to any repository. I use local file install for the current checks (that's a deb repository inside the local file system) and use the artifacts created in the initial steps to do so. So it is not the real thing. Also, I use OBS to build the packages, which simplifies things (and makes sense assuming we want to get rid of the PPAs at some point in time).

The next step is possibly the storing of secrets, so that we can do a real publish. With that, we have the issue that repository population is inherently async (both OBS and PPA), so we cannot do a real check. Maybe this means we need to run the deb build tools ourselves. Then we can do a sync publish and full check.

That's another big effort. Maybe I'll do a similar CI workflow for the RPMs first - there we know exactly how to build the rpms. So if nothing else, I could actually implement the sync "publish & full check" method. As you see, lots of exciting work to do! :-)

@atc0005
Contributor

atc0005 commented Jul 13, 2020

@rgerhards: thx for the comments, I appreciate them. Without your comments and samples, we would never have arrived here!

You're welcome, I'm glad I could help in some small way.

I believe I understand the rest of your follow-up. Building and publishing (as a single unit of work) is currently not automated, so as soon as you finish packaging a new module (e.g., #102), it requires manual effort to make it available via the repos (PPA, OBS).

When you build and publish newer modules, what would be your trigger or "hook" point for updating the two earlier workflows:

  • Install rsyslog packages from PPA
  • Install rsyslog packages from OBS

Assuming that those workflows still have value (as a dashboard status on the main project README if nothing else), I guess what I'm asking is when you'd like to have them updated so that they attempt to install newer packages (e.g., #102).

Example changes to the workflow files:

diff --git a/.github/workflows/install-rsyslog-packages-from-obs.yml b/.github/workflows/install-rsyslog-packages-from-obs.yml
index 0a49ae5..49e9cac 100644
--- a/.github/workflows/install-rsyslog-packages-from-obs.yml
+++ b/.github/workflows/install-rsyslog-packages-from-obs.yml
@@ -196,5 +196,8 @@ jobs:
       - name: Install rsyslog-mmkubernetes
         run: sudo apt-get install rsyslog-mmkubernetes

+      - name: Install rsyslog-impcap
+        run: sudo apt-get install rsyslog-impcap
+
       - name: Display installed rsyslog packages
         run: dpkg -l | grep -E 'rsyslog|adiscon'
diff --git a/.github/workflows/install-rsyslog-packages-from-ppa.yml b/.github/workflows/install-rsyslog-packages-from-ppa.yml
index 2c123b4..6f5a024 100644
--- a/.github/workflows/install-rsyslog-packages-from-ppa.yml
+++ b/.github/workflows/install-rsyslog-packages-from-ppa.yml
@@ -198,6 +198,9 @@ jobs:
       - name: Install rsyslog-mmkubernetes
         run: sudo apt-get install rsyslog-mmkubernetes

+      - name: Install rsyslog-impcap
+        run: sudo apt-get install rsyslog-impcap
+
       - name: Display installed rsyslog packages
         run: dpkg -l | grep -E 'rsyslog|adiscon'

@@ -351,5 +354,8 @@ jobs:
       - name: Install rsyslog-mmkubernetes
         run: sudo apt-get install rsyslog-mmkubernetes

+      - name: Install rsyslog-impcap
+        run: sudo apt-get install rsyslog-impcap
+
       - name: Display installed rsyslog packages
         run: dpkg -l | grep -E 'rsyslog|adiscon'

4 participants