
Test Strategy

The test strategy for the embedded software of a device defines:

  • What shall be verified
  • How many resources shall be engaged
  • How the verifications shall be organized
    • When (eg: before delivery, during operation, etc.)
    • In which environment (eg: testing environment, with simulators,...)

What shall be verified (generalities)

Verifications depend on whether the scope is an end product or a platform (such as Welma).

Device manufacturers / operators will want to verify:

  • that the device fulfills the service requirements, ie: that it does what it was built for. This verification is done through functional testing, which indirectly verifies the software parts that actually support the functions of the device.

  • that the device is not vulnerable to accidental misuse or deliberate attacks.

Platform providers will want to verify:

  • drivers (ie: that embedded software correctly controls the hardware)
  • platform features (eg: software update, secure boot)
  • software parts (eg: kernel and packages self-tests: LTP, ptest)
  • API changes on software upgrades (kernel, glibc,...)
  • that the product is not vulnerable to accidental misuse or deliberate attacks.

The following sections focus on the platform provider case.

Application level

The following should be verified:

  • Features of the software system

  • Security:

    • Passwords
    • Port scanning
    • Penetration tests
  • Application needs:

    • Users and groups
    • Permissions on files and peripherals
  • Others:

    • Resilience to file system corruption
    • Namespaces, cgroups
    • Partitions (sizes, names)
    • Versions of specific software (openssl, libc, ...); see the sketch after this list
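
As an illustration, the partition and software-version items above could be checked on the target with a small script. This is a minimal sketch only: the partition names, minimum sizes and expected OpenSSL version are assumptions to adapt to the actual product.

    #!/usr/bin/env python3
    """Minimal sketch of application-level checks run on the target device."""
    import subprocess

    # Hypothetical expectations: device node name -> minimum size in KiB
    EXPECTED_PARTITIONS = {"mmcblk0p1": 64 * 1024, "mmcblk0p2": 512 * 1024}

    def check_partitions():
        sizes = {}
        with open("/proc/partitions") as f:
            for line in f:
                fields = line.split()
                if len(fields) == 4 and fields[0].isdigit():
                    # columns: major minor #blocks(KiB) name
                    sizes[fields[3]] = int(fields[2])
        for name, min_kib in EXPECTED_PARTITIONS.items():
            assert name in sizes, f"partition {name} not found"
            assert sizes[name] >= min_kib, f"{name} smaller than expected: {sizes[name]} KiB"

    def check_openssl_version():
        # e.g. "OpenSSL 3.0.13 30 Jan 2024" (the expected major version is an assumption)
        out = subprocess.run(["openssl", "version"], capture_output=True, text=True, check=True)
        assert out.stdout.startswith("OpenSSL 3."), f"unexpected version: {out.stdout.strip()}"

    if __name__ == "__main__":
        check_partitions()
        check_openssl_version()
        print("application-level checks passed")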

Drivers

Examples of things to be verified:

  • RAM: Check that the whole memory is visible (see the sketch after this list)
  • GPU: Check that acceleration is actually used
  • Real time clock (RTC): Check clock drift
  • Wifi: Check that the device can connect to a Wifi hotspot
  • Bluetooth, Ethernet, USB, etc.
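
As an illustration of the RAM item, a minimal sketch; the RAM size fitted on the board and the tolerance for kernel/firmware carve-outs are assumptions.

    #!/usr/bin/env python3
    """Minimal sketch of a driver-level check: is the whole RAM visible to Linux?"""

    EXPECTED_RAM_MIB = 1024      # assumption: physical RAM fitted on the board
    RESERVED_TOLERANCE_MIB = 96  # assumption: accepted kernel/firmware carve-outs

    def mem_total_mib():
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) // 1024  # value is given in kB
        raise RuntimeError("MemTotal not found")

    if __name__ == "__main__":
        visible = mem_total_mib()
        assert visible >= EXPECTED_RAM_MIB - RESERVED_TOLERANCE_MIB, \
            f"only {visible} MiB visible, expected about {EXPECTED_RAM_MIB} MiB"
        print(f"RAM check OK ({visible} MiB visible)")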

Detect API changes on upgrades of base packages (kernel, busybox, ...)

Changes in base packages can break an application, and the breakage remains undetected unless the application is fully tested.

Real life examples:

  • Upgrading busybox (1.24 -> 1.35): arguments or results have changed:
    • timeout: timeout -t SECS PROG ARGS changed to timeout SECS PROG ARGS (probed in the sketch after this list)
    • mount, pgrep: messages printed on standard output slightly changed
    • [[ 1 -a 2 ]]: raises an error in v1.35 (but not in 1.24)
    • init: behavior changed on reboot request during startup
  • Upgrading Linux kernel (4.4 -> 5.10):
    • The socket API changed (AF_ALG sockets for crc32, ...)
    • The device tree changed (.dtsi files)
  • Upgrading glibc:
    • The behavior of sem_timedwait changed when the system date goes backward during the call.
  • Upgrading gcc
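
A cheap way to catch such changes early is to encode the behavior the application relies on in small tests run against every new image. Below is a minimal sketch for the busybox timeout example: it only asserts that the newer "timeout SECS PROG ARGS" syntax is accepted on the image under test.

    #!/usr/bin/env python3
    """Minimal sketch: guard against the busybox 'timeout' argument change."""
    import subprocess

    def timeout_accepts_new_syntax():
        # Newer busybox expects "timeout SECS PROG ARGS"; exit code 0 means
        # that "true" was actually run within the 1 second budget.
        return subprocess.run(["timeout", "1", "true"],
                              stdout=subprocess.DEVNULL,
                              stderr=subprocess.DEVNULL).returncode == 0

    if __name__ == "__main__":
        assert timeout_accepts_new_syntax(), \
            "'timeout SECS PROG ARGS' rejected: the busybox interface may have changed"
        print("timeout syntax OK")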

Package self-tests (LTP and ptest)

These tests are integrated into the Pluma Linux Advanced Test Suite.

  • PCI enumeration test
  • I2C
  • SSH
  • systemd services (see the sketch after this list)
  • ping
  • RAM regions (/proc/iomem)
  • RNG
  • ptests
  • boot & reboot time
  • Resilience to power off
  • Wifi: connection to AP, scan, ping delay and loss, iperf, authentication
  • LTP (Linux Test Project):
    • commands (ln, cp, du, ...)
    • containers (namespaces, cgroups)
    • fs
    • hotplug
    • io
    • kernel
    • math
    • memory
    • net
    • security
    • syscalls
    • thread, process
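
As an illustration of the systemd services item, a minimal sketch that could run on the device under test; it assumes systemctl is present on the image.

    #!/usr/bin/env python3
    """Minimal sketch: check that systemd reached the 'running' state."""
    import subprocess

    if __name__ == "__main__":
        out = subprocess.run(["systemctl", "is-system-running"],
                             capture_output=True, text=True)
        state = out.stdout.strip()
        # "degraded" means at least one unit is in the failed state
        assert state == "running", f"system state is '{state}'"
        print("systemd state: running")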

What we verify in Welma

  • Software update
  • Secure boot
  • Secure storage
  • Watchdog
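
As an illustration of the watchdog item, a minimal sketch that only inspects sysfs (it deliberately does not open /dev/watchdog, since opening it arms the watchdog). It assumes the kernel exposes the watchdog attributes in sysfs (CONFIG_WATCHDOG_SYSFS).

    #!/usr/bin/env python3
    """Minimal sketch: check that a hardware watchdog is registered and report its timeout."""
    import os

    SYSFS = "/sys/class/watchdog"

    if __name__ == "__main__":
        devices = os.listdir(SYSFS) if os.path.isdir(SYSFS) else []
        assert devices, "no watchdog device registered"
        for dev in devices:
            timeout_path = os.path.join(SYSFS, dev, "timeout")
            if os.path.exists(timeout_path):
                with open(timeout_path) as f:
                    print(f"{dev}: timeout {f.read().strip()} s")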

How many resources shall be engaged

This depends on the size of the project and on the severity of a product failure (eg: impact on the natural environment, human health, financial cost, individual comfort, ...).

Some hints to consider:

  • Automating tests is costly but brings real benefits on big projects.

  • Some automations are more costly than others (for instance, checking the color displayed on a screen might require a camera) and may be replaced by manual testing, with a human doing the verification.

  • Some tests take a very long time (eg: several days) and may be executed on dedicated machines.

How the verifications shall be organized

Verifications are done at every step of the development stage:

  • Developers test their code on their workbench.
  • Developers push their code to a continuous integration (CI) platform.
  • Testers test official releases of the product, in a testing environment.

More verifications are then done:

  • Manufacturers verify the assembly of the products on the production line.
  • Operators verify the commissioning of the devices, in the operating environment.
  • Operators monitor their fleet, in the operating environment.

The test strategy shall define how verifications are divided among these steps. Most of them should be done during the development stage, but it is generally not possible to use the operating environment at this stage.

Technical types of tests

We distinguish the following technical types of tests:

  • testing the behavior of software running on a real device
  • testing contents, layout, permissions of files in the image of the Linux distribution (without installing it on a device), as sketched after this list
  • testing the behavior of software running in a simulated environment (eg: unit tests on a workstation)
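
As an example of the second type, here is a minimal sketch that inspects a rootfs tarball produced by the build without flashing a device; the tarball path and the expected file modes are hypothetical.

    #!/usr/bin/env python3
    """Minimal sketch: check file permissions inside a rootfs tarball."""
    import stat
    import tarfile

    ROOTFS = "deploy/images/rootfs.tar.gz"   # hypothetical build artifact
    EXPECTED_MODES = {                       # hypothetical expectations
        "etc/shadow": 0o600,
        "usr/bin/passwd": 0o4755,            # setuid root expected
    }

    def check_modes(path):
        with tarfile.open(path) as tar:
            # normalize "./etc/shadow" and "etc/shadow" to the same key
            members = {m.name.lstrip("./"): m for m in tar.getmembers()}
        for name, expected in EXPECTED_MODES.items():
            assert name in members, f"{name} missing from image"
            actual = stat.S_IMODE(members[name].mode)  # includes suid/sgid bits
            assert actual == expected, f"{name}: {oct(actual)}, expected {oct(expected)}"

    if __name__ == "__main__":
        check_modes(ROOTFS)
        print("image permission checks passed")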