[Dnsmasq-discuss] unittests

Dominik Derigs dl6er at dl6er.de
Tue Oct 12 03:22:46 UTC 2021


Hey Petr,

On Tue, 2021-10-12 at 04:40 +0200, Petr Menšík wrote:
> Hi Dominik,
> 
> those tests look great. Something like that is exactly what I had on
> mind for dnsmasq itself. Would you mind if I borrow few things and try
> to make some dnsmasq-only parts, not dependent on pihole?

Go ahead and take what is useful! Just pointing out that you'll
want to look at our development branch (not yet merged into
master), because the tests were recently improved to run against
a locally running PowerDNS authoritative server + recursor. This
is set up here:

https://github.com/pi-hole/FTL/blob/development/test/pdns/setup.sh

DNS records are tested to resolve as expected, e.g., here:

https://github.com/pi-hole/FTL/blob/24af7889588f2567705ba100a4cfd9bee62b6ef2/test/test_suite.bats#L268-L357
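
If it helps: such a check is essentially a small bats test wrapping
dig against the local resolver, roughly like this (the record and
address below are made up, the real tests query records served by
the local PowerDNS setup):

    @test "A record resolves as expected" {
      # hypothetical record, only to illustrate the pattern
      run dig A ftl.example.com @127.0.0.1 +short
      [ "$status" -eq 0 ]
      [[ "${lines[0]}" == "192.168.2.1" ]]
    }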

On Tue, 2021-10-12 at 04:40 +0200, Petr Menšík wrote:
> Just curious. Why do you support linking only to static libraries? Is
> pihole project opposed to be eventually packages as a normal
> distribution package?

Yes, we do not want to get packaged into distro packages because
this could pin users to extremely old versions and would
drastically increase the workload on our side when we'd have to
backport security fixes instead of just shipping the most recent
version to users all the time. All libraries are compiled in
statically, so you can just take the binary and run it on every
compatible processor, be it Raspbian, Armbian or whatever else on
an SoC. We have even seen users running this on embedded Linux
smart plugs (yes, WiFi plugs). Furthermore, the same x86_64
binary will work on all distributions, so we don't have to
provide/compile multiple versions for Fedora, Debian/Ubuntu and
all their releases with different library versions. With our
approach, only the processor architecture needs to match; then
one and the same binary will run everywhere.
The x86_64-musl binary statically bundles musl-libc instead of
gnu-libc (at the expense of being even larger) to work even when
glibc is not available (like on Alpine systems often used inside
Docker).
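
Conceptually the link step boils down to something like this (a
rough sketch, not our actual build flags):

    # rough sketch, not the actual FTL build flags: a fully static
    # link means the binary carries its own copies of all libraries
    gcc -static -o pihole-FTL *.o -lsqlite3 -lpthread -lm
    ldd pihole-FTL        # prints "not a dynamic executable"
    # the x86_64-musl variant is the same idea, just linked against
    # musl instead of glibc, e.g. via the musl-gcc wrapper
    musl-gcc -static -o pihole-FTL *.o -lsqlite3 -lpthread -lm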

This allows us to do things like quick fixes on a dedicated
branch and then just have users run

pihole checkout ftl some_branch

to get the matching binary generated by the CI. No local compiling
is necessary (compiling SQLite3 alone from source on a Raspberry
Pi would take ages, at least an hour I'd say).

On Tue, 2021-10-12 at 04:40 +0200, Petr Menšík wrote:
> I had to modify CMakeLists.txt, when I had all
> devel libraries needed. It would not even compile.

That's interesting. When following
https://docs.pi-hole.net/ftldns/compile/, it should work. But
I'll admit that I stopped using Fedora some time ago because it
was repeatedly showing annoying bugs on my hardware, so the steps
over there may be outdated.
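
For reference, the build should just be the usual CMake routine,
roughly along these lines (the docs page above has the
authoritative steps and the list of required devel packages):

    git clone https://github.com/pi-hole/FTL.git
    cd FTL
    cmake -B build        # configure out-of-tree
    cmake --build build   # compile pihole-FTL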

On Tue, 2021-10-12 at 04:40 +0200, Petr Menšík wrote:
> Though it seems
> amazing to find my own commits in a project I never contributed
> directly.

We are preserving authorship on the commits imported for the
embedded dnsmasq. 
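
There is no magic involved, it is just how git behaves: applying
patches or cherry-picking keeps the original Author: field intact,
only the committer changes. Roughly (a sketch, not necessarily our
exact import workflow):

    # in a dnsmasq checkout: export an upstream commit as a patch
    git format-patch -1 <commit>
    # in the FTL tree: apply it; original author and date are kept
    git am 0001-some-upstream-fix.patch
    # cherry-picking does the same for a single commit
    git cherry-pick <commit>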

On Tue, 2021-10-12 at 04:40 +0200, Petr Menšík wrote:
> Do you use some kind of container or dedicated VM to run these
> tests?

Yes, they run in a Docker container providing everything that is
needed (incl. precompiled static binaries to speed up the
process). These containers also contain all the auxiliary stuff
needed to run the tests (such as bats or the aforementioned
PowerDNS).

The docker container scripts for the various architectures are
here:

https://github.com/pi-hole/docker-base-images/tree/master/ftl-build
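
Locally, running the suite inside one of these images boils down
to something like this (a sketch, not our exact CI invocation; the
image name/tag is a placeholder and the binary has to be built
before the bats suite can test it):

    docker run --rm -v "$(pwd)":/workspace -w /workspace \
        pihole/ftl-build:example \
        bash -c "test/pdns/setup.sh && bats test/test_suite.bats"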

On Tue, 2021-10-12 at 04:40 +0200, Petr Menšík wrote:
> Do you rely just on github services to run those tests?

No, running them on GitHub Actions is actually a rather recent
addition (master doesn't have it yet). We also run all the tests
on CircleCI. We started with Travis CI, but their free plan was
cut back so much that we'd be queued forever, as we have
independent jobs for the architectures x86_64-musl, x86_64,
x86_32, armv8a, armv7hf, armv6hf, armv5te, armv4t and aarch64. If
they run in parallel, compiling + testing the generated binaries
takes about 2 minutes. When they run sequentially, this can take
20 minutes and longer. CircleCI just started doing the same (only
one concurrent job on the free plan), hence we decided to test
GitHub Actions as well now. Eventually, we will switch to using
GitHub Actions only, as it turns out to be very powerful yet free.
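
To give an idea of the scale: each CI job conceptually does the
same thing for one target, so a sequential local equivalent would
be roughly the loop below (image tags and the build/test commands
are placeholders), which is exactly the 20+ minutes we want to
avoid:

    for arch in x86_64-musl x86_64 x86_32 armv8a armv7hf armv6hf \
                armv5te armv4t aarch64; do
        docker run --rm -v "$(pwd)":/src -w /src \
            "pihole/ftl-build:${arch}" \
            bash -c "./build.sh && bats test/test_suite.bats"
    done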

On Tue, 2021-10-12 at 04:40 +0200, Petr Menšík wrote:
> I admit those checks run on every PR looks quite neat, I would love to
> run something similar also for dnsmasq. Once we had some tests to run,
> it might be possible to run them on all new commits just similar way.

The first question is what infrastructure would run the tests. I'm
not sure how this would fit together with the self-hosted Gitweb
on Simon's server. But the tests could also be meant to run only
locally *before* committing, instead of running a second time on
some CI that also spits out ready-to-use binaries.
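
If the local-only route is taken, the simplest form would be a git
pre-commit hook, something like this sketch (the "make test"
target is hypothetical, dnsmasq has no test suite yet):

    #!/bin/sh
    # .git/hooks/pre-commit sketch; "make test" is a hypothetical target
    make test || {
        echo "tests failed, aborting commit" >&2
        exit 1
    }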

Don't hesitate to ask if any questions come up.

Best,
Dominik



