On Testing – Part 3

Browser-based Testing

In part 2 of this series, I talked about the lower levels of the test pyramid: unit & integration tests. In this post, I’ll be focusing on the highest level of the pyramid: browser-based tests.

Browser-based tests are slow. Really slow. They run an order of magnitude slower than other tests. Large browser test suites can quickly bog down your CI pipeline. Make sure there’s a good reason to write a test case as a browser-based test and that it can’t be accomplished at a lower level of the pyramid.

The sweet spot for browser tests

If you have an existing suite of browser tests, there’s a good chance you have too many tests. It’s all too easy to get carried away. Keep your browser tests limited to what they are good at:

Critical functionality – Tests for core functionality in your system where you can’t take any chances. Failure of these tests would constitute a severe degradation of the system for your users.
Basic Page Load Success – Very basic smoke tests which verify a successful page load with no errors. Have one for each important page of your site.
Workflow tests – Tests which span multiple pages, or interactions across components on a page, that together perform one logical action. Workflow tests should focus on validating the workflow, not corner cases. That means the tests should be limited to the following (both are sketched just after this list):

  • A single happy path test that completely exercises every step of the workflow.
  • A single sad path test to verify proper error handling and reporting to the user. The failure case should be canonical and shouldn’t rely on complicated validation rules [1].
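
To make this concrete, here’s what a basic page load test and a single happy path workflow test might look like. This is a minimal sketch using Playwright’s TypeScript API purely for illustration; the post doesn’t prescribe a tool, and the URLs and test IDs are hypothetical:

    import { test, expect } from '@playwright/test';

    // Basic page load smoke test: the page responds and its main content renders.
    test('pricing page loads', async ({ page }) => {
      const response = await page.goto('/pricing');
      expect(response?.ok()).toBe(true);
      await expect(page.getByTestId('pricing-table')).toBeVisible();
    });

    // Single happy path workflow test: exercises every step of the workflow once.
    test('user can complete checkout', async ({ page }) => {
      await page.goto('/cart');
      await page.getByTestId('checkout-button').click();
      await page.getByTestId('shipping-address').fill('221B Baker Street');
      await page.getByTestId('place-order-button').click();
      await expect(page.getByTestId('order-confirmation')).toBeVisible();
    });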

Here are some general rules for writing browser tests I’ve found key to a reliable and robust suite.

Rule #1: don’t be creative

Creativity is great, but not when it comes to browser tests. If you can’t find a straightforward way to interact with elements on the page, don’t hack around the problem. Construct a sound test based on reliable, stable, and explicit UI selectors, going back to the code if necessary and instrumenting it so that a reliable test can be written.

Use the PageObject Pattern

The PageObject pattern is a well-known pattern in the testing community. It’s a solid pattern. In a nutshell, it’s applying object-oriented design principles to the domain of web page testing.
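
As a minimal sketch, a login page might be modelled like this. I’m using Playwright’s TypeScript API for illustration; the class, URL, and test IDs are hypothetical:

    import { Page, expect } from '@playwright/test';

    // A PageObject: the single test-facing API for one page. It hides
    // selectors and navigation details behind intention-revealing methods.
    export class LoginPage {
      constructor(private readonly page: Page) {}

      async open() {
        await this.page.goto('/login');
      }

      async logIn(email: string, password: string) {
        await this.page.getByTestId('email-input').fill(email);
        await this.page.getByTestId('password-input').fill(password);
        await this.page.getByTestId('login-button').click();
      }

      async expectError(message: string) {
        await expect(this.page.getByTestId('login-error')).toHaveText(message);
      }
    }

Tests then read as intent (“open the login page, log in, expect an error”) rather than as a series of raw selector lookups.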

Test IDs, test IDs, test IDs.

Opinions differ on whether to use test IDs on your pages. I’m firmly in the “always use them” camp. Use them exclusively. Here’s why:

Test IDs make the contract between code and tests explicit rather than implicit.

Prefer test IDs to other selectors: they decouple tests from the UI and keep the test API focused on the basic abstraction of page structure. Other selectors are usually too fragile to rely on in the long run; they make the tests brittle in the face of UI changes.

Test IDs are an implementation detail of the underlying PageObjects in the system, and shouldn’t be referenced in tests directly. Because test IDs form an explicit contract between the code and the tests, they ensure a reliable connection to the code. If a developer changing markup on a page sees a test ID, they are much more likely to move it to the correct logical location in a refactoring [2]. If a test relies only on an element ID or class that is part of the code, a developer has no way of knowing it’s used by a test without searching through all the tests.
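
To contrast the two approaches, here’s a hypothetical sketch (Playwright TypeScript again; the selectors and test ID are made up):

    import { Page } from '@playwright/test';

    function orderButton(page: Page) {
      // Fragile: coupled to layout and styling classes, which change freely.
      const fragile = page.locator('form > div.actions button.btn-primary');

      // Robust: coupled only to the explicit contract in the markup,
      // e.g. <button data-testid="submit-order">.
      const robust = page.getByTestId('submit-order');

      return { fragile, robust };
    }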

The Page Component pattern

Another important pattern I haven’t seen explicitly mentioned anywhere is what I like to call the page component pattern. It’s really just a logical extension of the PageObject pattern.

The idea is simple: create abstractions for common UI components that are used across the pages of your site. Some examples of page components might be dropdowns, modals, or date pickers. They become really powerful when combined with test IDs.

Consider a site where a particular domain object is rendered in various kinds of lists. For example, you might render a list of the objects as search results, or allow a user to add objects to a favourites list. The two pages may have completely different templates or layouts.

By creating a page component that abstracts the notion of a list of objects, you can completely decouple the tests from the underlying UI code and page structure: each page template implements the same test ID contract, so the same page component can be used for both cases, and potentially many more.

You can encapsulate some common operations in the component, such as:

  • Is a specific object present in the list?
  • Get a specific object’s container element from the list.
  • Get the list of object IDs present in the list.

Once you have a handle to an element’s container in the list, you can wrap it in a type-specific component representing that element’s internal structure, interact with it through that type’s API, and so on.
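
Here’s a sketch of such a list component in Playwright TypeScript. The test ID contract (a container tagged “{name}-list” and items carrying a data-object-id attribute) is a hypothetical convention:

    import { Locator, Page } from '@playwright/test';

    // A reusable page component for any list of domain objects. Any page
    // template that honours the test ID contract can be driven by it.
    export class ObjectList {
      private readonly container: Locator;

      constructor(page: Page, name: string) {
        this.container = page.getByTestId(`${name}-list`);
      }

      // Get a specific object's container element from the list.
      item(objectId: string): Locator {
        return this.container.locator(`[data-object-id="${objectId}"]`);
      }

      // Is a specific object present in the list?
      async contains(objectId: string): Promise<boolean> {
        return (await this.item(objectId).count()) > 0;
      }

      // Get the list of object IDs present in the list.
      async objectIds(): Promise<string[]> {
        const items = await this.container.locator('[data-object-id]').all();
        return Promise.all(
          items.map(async (el) => (await el.getAttribute('data-object-id')) ?? ''),
        );
      }
    }

The search results page and the favourites page can then share the same component, e.g. new ObjectList(page, 'search-results') and new ObjectList(page, 'favourites').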

Page components are also great for isolating messy implementation details in UI code you don’t control. For example, if you are using a UI library for your dropdowns, you can abstract the details of the library’s markup behind a page component. This allows you to easily swap out the dropdown implementation in the future with minimal impact to the tests.
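
For example, a dropdown wrapper might look like the following sketch; the assumption that the library renders its options with an 'option' role is made up for illustration:

    import { Locator } from '@playwright/test';

    // Wraps whatever the UI library renders behind a stable test API.
    // Only this class knows the library's markup; tests never do.
    export class Dropdown {
      constructor(private readonly root: Locator) {}

      async select(option: string) {
        await this.root.click(); // open the menu
        // Library-specific detail, isolated here.
        await this.root.page().getByRole('option', { name: option }).click();
      }

      async selectedText(): Promise<string | null> {
        return this.root.textContent();
      }
    }

Swap out the dropdown library later and only Dropdown changes; every test that selects an option keeps working.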

Always wait on conditions before referencing elements

Before accessing web elements, always wait on the condition you need. Never assume an element will be available the moment you access it, even when you know no network call is involved. Any kind of client-side processing can introduce latency, which in the absence of explicit waits creates flaky test scenarios. Waiting on a condition which is already true introduces no significant overhead, and it makes tests behave reliably across environments with different performance characteristics.
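
In Playwright terms (Selenium’s explicit waits are the equivalent), this means preferring polling assertions over immediate reads. A sketch with hypothetical test IDs:

    import { test, expect } from '@playwright/test';

    test('totals render after client-side recalculation', async ({ page }) => {
      await page.goto('/cart');

      // Bad: reads immediately, and flakes if rendering hasn't finished.
      // const total = await page.getByTestId('cart-total').textContent();

      // Good: polls (with a timeout) until the condition holds. If it's
      // already true, it returns immediately with no meaningful overhead.
      await expect(page.getByTestId('cart-total')).toHaveText('$42.00');
    });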

Flaky tests: The Quarantine

When it comes to browser-based tests, flakiness can be a real issue. How much of an issue depends a lot on test quality, but it inevitably comes up at some point.

The idea behind the quarantine is simple: have a separate target in your CI system to isolate flaky tests and run them regularly. Assuming the tests provide sufficient value, file tickets for the changes required to address the flakiness. This gives you an automated way to regain confidence in a previously flaky test before reintroducing it into the main CI pipeline. The bar for leaving the quarantine should be mercilessly high: test in multiple browsers, and make sure the test passes many times in a row before you consider returning it to the main pipeline. If the failures are more byzantine, consider whether the behaviour can practically be tested at all.
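
One cheap way to wire this up is with test tags, sketched here using Playwright’s --grep filtering; the tag name and retry counts are arbitrary choices:

    import { test, expect } from '@playwright/test';

    // Tag flaky tests in their titles. The main CI target excludes them,
    // while a separate quarantine target runs only them, repeatedly:
    //   main pipeline:  npx playwright test --grep-invert @quarantine
    //   quarantine job: npx playwright test --grep @quarantine --repeat-each=5

    test('date picker keeps selection across months @quarantine', async ({ page }) => {
      await page.goto('/reports');
      await page.getByTestId('date-picker').click();
      await expect(page.getByTestId('calendar')).toBeVisible();
      // ...the intermittently failing steps under investigation go here...
    });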

Pyramid Feedback Loop for Failures

Even with the best efforts and intentions, tests are sometimes written at the wrong level of the pyramid, and this is more common at the higher levels. It’s a good idea to have a feedback loop on failures to check whether they could be captured somewhere lower in the pyramid. Test failures at the browser level should be relatively rare; when they do occur, it should be for browser-specific reasons or because of bugs in workflows. If you find a test whose failures could be caught earlier in the pipeline, try to move it down. This is another reason to keep a fast feedback cycle as a key motivator of your testing strategy.

Wrapping up

There are two key ingredients to a great test suite: reliability and fast feedback. How fast is fast enough is relative and depends on many factors, including ones specific to your organization. But if your test suite isn’t reliable and fast, your team will be far less likely to depend on it, and will subconsciously look for ways around it, which defeats the whole purpose of automated testing.

I hope you found this series helpful. Please leave a comment! I’d love to hear your feedback. Happy testing!


[1] If different error cases can cause the workflow to branch into other workflows, those branches need to be tested as well.

[2] Assuming the abstract structure of the page isn’t changing; if it is, the contract has to be changed.
