Automate non-functional tests with Puppeteer and Lighthouse

Non-functional requirements, such as page load speed, accessibility and search engine optimisation, are often tested towards the end of a project. We know this to be an outdated approach, but the reality is that running a full suite of these types of tests on a daily basis would be far too time-consuming.

Traditionally, an accessibility test would involve a tester manually executing an audit and interpreting the results, sometimes with little in the way of acceptance criteria. These audits, useful as they can be, are often reactive in nature, and the outcome usually consists of firefighting a range of uncovered issues. Until now, it has been difficult to prevent non-functional issues from being introduced into a codebase by way of automation. This may be about to change though.

Google have released a couple of impressive tools recently: Puppeteer and Lighthouse. These have the potential to change the way web applications are tested for the better.

Puppeteer is an automation tool for driving tests headlessly through Chrome. Tests are driven through the DevTools Protocol, which provides access to the inner workings of the browser and all network requests. The benefit of this is a level of speed and stability rarely seen in UI automation. We’ve trialled Puppeteer at Red Badger and have liked what we’ve seen so far.

Lighthouse is an open source auditing tool that provides feedback on a page’s performance, accessibility, SEO and usability, amongst other things. Lighthouse runs the test under conditions similar to those a user would experience in the real world, by throttling the network connection and emulating mobile device usage. In isolation, this tool is extremely powerful and a big step forward in the testing space, providing useful feedback via an easy-to-read report. The real power, however, comes from integrating it with Puppeteer, along with the assertion library Jest.

By integrating these tools, a development team can run a suite of non-functional tests against their site as regularly as they would any other type of test, even in the CI pipeline. A tester can set the boundaries that determine whether a test passes or fails and act upon an issue the minute it arises. For example, if you value a high level of accessibility on your site, you can set a high benchmark score to reflect this. It’s a huge timesaver to know that you are maintaining a predetermined level in these areas and reassuring to know that any deviations from these levels will be flagged as a failing test.
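Because Lighthouse reports category scores between 0 and 1 while teams tend to talk about benchmarks out of 100, the pass/fail decision reduces to a small scaling check. A sketch, with an illustrative helper name:

```javascript
// Turn a Lighthouse category score (0-1) into a pass/fail check against a
// team-agreed benchmark out of 100. `meetsThreshold` is a hypothetical helper.
function meetsThreshold(lighthouseScore, thresholdOutOf100) {
  // Scale the 0-1 score to 0-100, rounding to avoid floating-point noise.
  const scaled = Math.round(lighthouseScore * 100);
  return scaled >= thresholdOutOf100;
}

// Example: an accessibility score of 0.92 against a benchmark of 90.
console.log(meetsThreshold(0.92, 90)); // true
console.log(meetsThreshold(0.71, 75)); // false
```

In a Jest suite, the return value would feed straight into an `expect(...).toBe(true)` style assertion, so a drop below the benchmark surfaces as an ordinary failing test.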

An example test to verify that the page load speed of the site meets the acceptable level:

    it('passes the set threshold for page load speed', async () => {
        const pageSpeedScore = await commonMethods.getLighthouseResult(lhr, 'pageSpeed');
        expect(pageSpeedScore).toBeGreaterThanOrEqual(75);
    });
An extract of test output in the console with a failing test

Debugging a failing Lighthouse test. In this case the page speed falls short of the 75/100 threshold

If you like the sound of this and are keen to give it a try, we have created an open source repository to get you started. This basic suite of tests will verify a range of properties of a site, from its page load speed to whether it can be deemed a progressive web app. For the most part, we have focussed on validating the accessibility of the chosen site, measuring the colour contrast and presence of alt-text throughout the page, amongst other properties. These are more granular measurements within the area of accessibility, but the same could be achieved within any area covered by Lighthouse.

It’s important to note that the pass/fail thresholds set in this suite are placeholders only. Each of these values should be updated to a score that reflects the team’s agreed targets. The value of these tests will only be realised when the benchmark figures are agreed upon as a team. More information on the scoring system used in Lighthouse can be found here.

We’re keen to see how you build upon this suite, so please feel free to fork and share your code and ideas with us @OnionTerror85 or @redbadgerteam. Feedback and comments are always welcome too.

We're always on the lookout for prospective Badgers, so if you like the look of us, we'd love to hear from you. All our current vacancies can be found here.