13 Testing basics
Testing is a vital part of package development: it ensures that your code does what you want. Testing, however, adds an additional step to your workflow. To make this task easier and more effective, this chapter will show you how to do formal automated testing using the testthat package.
The first stage of your testing journey is to become convinced that testing has enough benefits to justify the work. For some of us, this is easy to accept. Others must learn the hard way.
Once you’ve decided to embrace automated testing, it’s time to learn some mechanics and figure out where testing fits into your development workflow.
As you and your R packages evolve, you’ll start to encounter testing situations where it’s fruitful to use techniques that are somewhat specific to testing and differ from what we do below R/.
13.1 Why is formal testing worth the trouble?
Up until now, your workflow probably looks like this:
- Write a function.
- Load it with devtools::load_all(), maybe via Ctrl/Cmd + Shift + L.
- Experiment with it in the console to see if it works.
- Rinse and repeat.
While you are testing your code in this workflow, you’re only doing it informally. The problem with this approach is that when you come back to this code in 3 months’ time to add a new feature, you’ve probably forgotten some of the informal tests you ran the first time around. This makes it very easy to break code that used to work.
Many of us embrace automated testing when we realize we’re re-fixing a bug for the second or fifth time. While writing code or fixing bugs, we might perform some interactive tests to make sure the code we’re working on does what we want. But it’s easy to forget all the different use cases you need to check, if you don’t have a system for storing and re-running the tests. This is a common practice among R programmers. The problem is not that you don’t test your code, it’s that you don’t automate your tests.
In this chapter you’ll learn how to transition from informal ad hoc testing, done interactively in the console, to automated testing (also known as unit testing). While turning casual interactive tests into formal tests requires a little more work up front, it pays off in four ways:
- Fewer bugs. Because you’re explicit about how your code should behave, you will have fewer bugs. The reason why is a bit like the reason double entry book-keeping works: because you describe the behaviour of your code in two places, both in your code and in your tests, you are able to check one against the other.
With informal testing, it’s tempting to just explore typical and authentic usage, similar to writing examples. However, when writing formal tests, it’s natural to adopt a more adversarial mindset and to anticipate how unexpected inputs could break your code.
If you always introduce new tests when you add a new feature or function, you’ll prevent many bugs from being created in the first place, because you will proactively address pesky edge cases. Tests also keep you from (re-)breaking one feature, when you’re tinkering with another.
- Better code structure. Code that is well designed tends to be easy to test and you can turn this to your advantage. If you are struggling to write tests, consider if the problem is actually the design of your function(s). The process of writing tests is a great way to get free, private, and personalized feedback on how well-factored your code is. If you integrate testing into your development workflow (versus planning to slap tests on “later”), you’ll subject yourself to constant pressure to break complicated operations into separate functions that work in isolation. Functions that are easier to test are usually easier to understand and re-combine in new ways.
- Call to action. When we start to fix a bug, we first like to convert it into a (failing) test. This is wonderfully effective at making your goal very concrete: make this test pass. This is basically a special case of a general methodology known as test driven development; see the sketch after this list.
- Robust code. If you know that all the major functionality of your package is well covered by the tests, you can confidently make big changes without worrying about accidentally breaking something. This provides a great reality check when you think you’ve discovered some brilliant new way to simplify your package. Sometimes such “simplifications” fail to account for some important use case and your tests will save you from yourself.
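As a concrete illustration of the “convert a bug into a failing test” habit, here is a minimal sketch. The function foofy() and the zero-length edge case are hypothetical; any reported bug works the same way:
# tests/testthat/test-foofy.R
test_that("foofy() handles zero-length input", {
  # This expectation fails today, reproducing the reported bug.
  # Once foofy() is fixed, the test passes and the bug can't silently return.
  expect_equal(foofy(character()), character())
})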
13.2 Introducing testthat
This chapter describes how to test your R package using the testthat package: https://testthat.r-lib.org
If you’re familiar with frameworks for unit testing in other languages, you should note that there are some fundamental differences with testthat. This is because R is, at heart, more a functional programming language than an object-oriented programming language. For instance, because R’s main object-oriented systems (S3 and S4) are based on generic functions (i.e., a method implements a generic function for a specific class), testing approaches built around objects and methods don’t make much sense.
testthat 3.0.0 (released 2020-10-31) introduced the idea of an edition of testthat, specifically the third edition of testthat, which we refer to as testthat 3e. An edition is a bundle of behaviors that you have to explicitly choose to use, allowing us to make otherwise backward incompatible changes. This is particularly important for testthat since it has a very large number of packages that use it (almost 5,000 at last count). To use testthat 3e, you must have a version of testthat >= 3.0.0 and explicitly opt in to the third edition behaviors. This allows testthat to continue to evolve and improve without breaking historical packages that are in a rather passive maintenance phase. You can learn more in the testthat 3e article and the blog post Upgrading to testthat edition 3.
We recommend testthat 3e for all new packages and we recommend updating existing, actively maintained packages to use testthat 3e. Unless we say otherwise, this chapter describes testthat 3e.
13.3 Test mechanics and workflow
13.3.1 Initial setup
To set up your package to use testthat, run:
usethis::use_testthat(3)
This will:
- Create a tests/testthat/ directory.
- Add testthat to the Suggests field in the DESCRIPTION and specify testthat 3e in the Config/testthat/edition field. The affected DESCRIPTION fields might look like:
  Suggests: 
      testthat (>= 3.0.0)
  Config/testthat/edition: 3
- Create a file tests/testthat.R that runs all your tests when R CMD check runs (Section 4.5). For a package named “pkg”, the contents of this file will be something like:
  library(testthat)
  library(pkg)
  test_check("pkg")
This initial setup is usually something you do once per package. However, even in a package that already uses testthat, it is safe to run use_testthat(3) when you’re ready to opt in to testthat 3e.
Do not edit tests/testthat.R! It is run during R CMD check (and, therefore, devtools::check()), but is not used in most other test-running scenarios (such as devtools::test() or devtools::test_active_file()). If you want to do something that affects all of your tests, there is almost always a better way than modifying the boilerplate tests/testthat.R script. This chapter details many different ways to make objects and logic available during testing.
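One such better way, as a brief sketch: testthat sources any file in tests/testthat/ whose name starts with helper (e.g. helper.R) at the start of a test run, and devtools::load_all() sources it too, so shared test objects can live there. The fixture below is made up purely for illustration:
# tests/testthat/helper.R
# Sourced by load_all() and before the tests run; define shared test objects here.
new_test_df <- function(n = 3) {
  data.frame(x = seq_len(n), y = letters[seq_len(n)])
}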
13.3.2 Create a test
As you define functions in your package, in the files below R/, you add the corresponding tests to .R files in tests/testthat/. We strongly recommend that the organisation of test files match the organisation of R/ files, discussed in Section 6.1: The foofy() function (and its friends and helpers) should be defined in R/foofy.R and their tests should live in tests/testthat/test-foofy.R.
R                                    tests/testthat
└── foofy.R                          └── test-foofy.R

foofy <- function(...) {...}         test_that("foofy does this", {...})
                                     test_that("foofy does that", {...})
Even if you have different conventions for file organisation and naming, note that testthat tests must live in files below tests/testthat/ and these file names must begin with test. The test file name is displayed in testthat output, which provides helpful context.1
usethis offers a helpful pair of functions, use_r() and use_test(), for creating or toggling between files:
Either one can be called with a file (base) name, in order to create a file de novo and open it for editing:
use_r("foofy") # creates and opens R/foofy.R
use_test("blarg") # creates and opens tests/testthat/test-blarg.R
The use_r() / use_test() duo has some convenience features that make them “just work” in many common situations:
- When determining the target file, they can deal with the presence or absence of the .R extension and the test- prefix.
  - Equivalent: use_r("foofy.R"), use_r("foofy")
  - Equivalent: use_test("test-blarg.R"), use_test("blarg.R"), use_test("blarg")
- If the target file already exists, it is opened for editing. Otherwise, the target is created and then opened for editing.
If R/foofy.R is the active file in your source editor, you can even call use_test() with no arguments! The target test file can be inferred: if you’re editing R/foofy.R, you probably want to work on the companion test file, tests/testthat/test-foofy.R. If it doesn’t exist yet, it is created and, either way, the test file is opened for editing. This all works the other way around also. If you’re editing tests/testthat/test-foofy.R, a call to use_r() (optionally, creates and) opens R/foofy.R.
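In other words, a short sketch of the toggling behaviour just described:
# while R/foofy.R is the active file in RStudio:
use_test()   # creates (if needed) and opens tests/testthat/test-foofy.R
# while tests/testthat/test-foofy.R is the active file:
use_r()      # creates (if needed) and opens R/foofy.R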
Bottom line: use_r() / use_test() are handy for initially creating these file pairs and, later, for shifting your attention from one to the other.
When use_test() creates a new test file, it inserts an example test:
test_that("multiplication works", {
expect_equal(2 * 2, 4)
})
You will replace this with your own description and logic, but it’s a nice reminder of the basic form:
- A test file holds one or more test_that() tests.
- Each test describes what it’s testing: e.g. “multiplication works”.
- Each test has one or more expectations: e.g. expect_equal(2 * 2, 4).
Below we go into much more detail about how to test your own functions.
13.3.3 Run tests
Depending on where you are in the development cycle, you’ll run your tests at various scales. When you are rapidly iterating on a function, you might work at the level of individual tests. As the code settles down, you’ll run entire test files and eventually the entire test suite.
Micro-iteration: This is the interactive phase where you initiate and refine a function and its tests in tandem. Here you will run devtools::load_all() often, and then execute individual expectations or whole tests interactively in the console. Note that load_all() attaches testthat, so it puts you in the perfect position to test drive your functions and to execute individual tests and expectations.
# tweak the foofy() function and re-load it
devtools::load_all()
# interactively explore and refine expectations and tests
expect_equal(foofy(...), EXPECTED_FOOFY_OUTPUT)
test_that("foofy does good things", {...})
Mezzo-iteration: As one file’s-worth of functions and their associated tests start to shape up, you will want to execute the entire file of associated tests, perhaps with testthat::test_file():
testthat::test_file("tests/testthat/test-foofy.R")
In RStudio, you have a couple of shortcuts for running a single test file.
If the target test file is the active file, you can use the “Run Tests” button in the upper right corner of the source editor.
There is also a useful function, devtools::test_active_file(). It infers the target test file from the active file and, similar to how use_r() and use_test() work, it works regardless of whether the active file is a test file or a companion R/*.R file. You can invoke this via “Run a test file” in the Addins menu. However, for heavy users (like us!), we recommend binding this to a keyboard shortcut; we use Ctrl/Cmd + T.
Macro-iteration: As you near the completion of a new feature or bug fix, you will want to run the entire test suite.
Most frequently, you’ll do this with devtools::test():
devtools::test()
Then eventually, as part of R CMD check with devtools::check():
devtools::check()
devtools::test() is mapped to Ctrl/Cmd + Shift + T. devtools::check() is mapped to Ctrl/Cmd + Shift + E.
The output of devtools::test() looks like this:
devtools::test()
ℹ Loading usethis
ℹ Testing usethis
✓ | F W S OK | Context
✓ | 1 | addin [0.1s]
✓ | 6 | badge [0.5s]
...
✓ | 27 | github-actions [4.9s]
...
✓ | 44 | write [0.6s]
══ Results ═════════════════════════════════════════════════════════════════
Duration: 31.3 s
── Skipped tests ──────────────────────────────────────────────────────────
• Not on GitHub Actions, Travis, or Appveyor (3)
[ FAIL 1 | WARN 0 | SKIP 3 | PASS 728 ]
Test failure is reported like this:
Failure (test-release.R:108:3): get_release_data() works if no file found
res$Version (`actual`) not equal to "0.0.0.9000" (`expected`).
`actual`: "0.0.0.1234"
`expected`: "0.0.0.9000"
Each failure gives a description of the test (e.g., “get_release_data() works if no file found”), its location (e.g., “test-release.R:108:3”), and the reason for the failure (e.g., “res$Version (`actual`) not equal to "0.0.0.9000" (`expected`)”).
The idea is that you’ll modify your code (either the functions defined below R/ or the tests in tests/testthat/) until all tests are passing.
13.4 Test organisation
A test file lives in tests/testthat/. Its name must start with test. We will inspect and execute a test file from the stringr package.
But first, for the purposes of rendering this book, we must attach stringr and testthat. Note that in real-life test-running situations, this is taken care of by your package development tooling:
- During interactive development, devtools::load_all() makes testthat and the package-under-development available (both its exported and unexported functions).
- During arms-length test execution, this is taken care of by devtools::test_active_file(), devtools::test(), and tests/testthat.R.
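For this chapter, the attach step mentioned above amounts to:
library(testthat)
library(stringr)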
Here are the contents of tests/testthat/test-dup.r from stringr:
test_that("basic duplication works", {
expect_equal(str_dup("a", 3), "aaa")
expect_equal(str_dup("abc", 2), "abcabc")
expect_equal(str_dup(c("a", "b"), 2), c("aa", "bb"))
expect_equal(str_dup(c("a", "b"), c(2, 3)), c("aa", "bbb"))
})
#> Test passed 🥳
test_that("0 duplicates equals empty string", {
expect_equal(str_dup("a", 0), "")
expect_equal(str_dup(c("a", "b"), 0), rep("", 2))
})
#> Test passed 🌈
test_that("uses tidyverse recycling rules", {
expect_error(str_dup(1:2, 1:3), class = "vctrs_error_incompatible_size")
})
#> Test passed 🌈
This file shows a typical mix of tests:
- “basic duplication works” tests typical usage of str_dup().
- “0 duplicates equals empty string” probes a specific edge case.
- “uses tidyverse recycling rules” checks that malformed input results in a specific kind of error.
Tests are organised hierarchically: expectations are grouped into tests which are organised in files:
- A file holds multiple related tests. In this example, the file tests/testthat/test-dup.r has all of the tests for the code in R/dup.r.
- A test groups together multiple expectations to test the output from a simple function, a range of possibilities for a single parameter from a more complicated function, or tightly related functionality from across multiple functions. This is why they are sometimes called unit tests. Each test should cover a single unit of functionality. A test is created with test_that(desc, code).
  It’s common to write the description (desc) to create something that reads naturally, e.g. test_that("basic duplication works", { ... }). A test failure report includes this description, which is why you want a concise statement of the test’s purpose, e.g. a specific behaviour.
- An expectation is the atom of testing. It describes the expected result of a computation: Does it have the right value and right class? Does it produce an error when it should? An expectation automates visual checking of results in the console. Expectations are functions that start with expect_.
You want to arrange things such that, when a test fails, you’ll know what’s wrong and where in your code to look for the problem. This motivates all our recommendations regarding file organisation, file naming, and the test description. Finally, try to avoid putting too many expectations in one test - it’s better to have more smaller tests than fewer larger tests.
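For example, reusing the str_dup() expectations shown above, compare one sprawling test to the focused tests in test-dup.r (a sketch of the tradeoff, not a new recommendation about str_dup() itself):
# Harder to diagnose: one vague heading covers unrelated behaviours.
test_that("str_dup() works", {
  expect_equal(str_dup("a", 3), "aaa")
  expect_equal(str_dup("a", 0), "")
  expect_error(str_dup(1:2, 1:3), class = "vctrs_error_incompatible_size")
})
# The three smaller tests shown earlier point straight at the failing behaviour.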
13.5 Expectations
An expectation is the finest level of testing. It makes a binary assertion about whether or not an object has the properties you expect. This object is usually the return value from a function in your package.
All expectations have a similar structure:
- They start with expect_.
- They have two main arguments: the first is the actual result, the second is what you expect.
- If the actual and expected results don’t agree, testthat throws an error.
- Some expectations have additional arguments that control the finer points of comparing an actual and expected result.
While you’ll normally put expectations inside tests inside files, you can also run them directly. This makes it easy to explore expectations interactively. There are more than 40 expectations in the testthat package, which can be explored in testthat’s reference index. We’re only going to cover the most important expectations here.
13.5.1 Testing for equality
expect_equal() checks for equality, with some reasonable amount of numeric tolerance:
expect_equal(10, 10)
expect_equal(10, 10L)
expect_equal(10, 10 + 1e-7)
expect_equal(10, 11)
#> Error: 10 (`actual`) not equal to 11 (`expected`).
#>
#> `actual`: 10.0
#> `expected`: 11.0
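That tolerance can also be set explicitly via the tolerance argument (a quick sketch; the numbers are arbitrary):
expect_equal(10, 10 + 1e-4, tolerance = 1e-3)  # passes: difference is within the tolerance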
If you want to test for exact equivalence, use expect_identical().
expect_equal(10, 10 + 1e-7)
expect_identical(10, 10 + 1e-7)
#> Error: 10 (`actual`) not identical to 10 + 1e-07 (`expected`).
#>
#> `actual`: 10.00000000
#> `expected`: 10.00000010
expect_equal(2, 2L)
expect_identical(2, 2L)
#> Error: 2 (`actual`) not identical to 2L (`expected`).
#>
#> `actual` is a double vector (2)
#> `expected` is an integer vector (2)
13.5.2 Testing errors
Use expect_error() to check whether an expression throws an error. It’s the most important expectation in a trio that also includes expect_warning() and expect_message(). We’re going to emphasize errors here, but most of this also applies to warnings and messages.
Usually you care about two things when testing an error:
- Does the code fail? Specifically, does it fail for the right reason?
- Does the accompanying message make sense to the human who needs to deal with the error?
The entry-level solution is to expect a specific type of condition:
1 / "a"
#> Error in 1/"a": non-numeric argument to binary operator
expect_error(1 / "a")
log(-1)
#> Warning in log(-1): NaNs produced
#> [1] NaN
expect_warning(log(-1))
This is a bit dangerous, though, especially when testing an error. There are lots of ways for code to fail! Consider the following test:
expect_error(str_duq(1:2, 1:3))
This expectation is intended to test the recycling behaviour of str_dup(). But, due to a typo, it tests behaviour of a non-existent function, str_duq(). The code throws an error and, therefore, the test above passes, but for the wrong reason. Due to the typo, the actual error thrown is about not being able to find the str_duq() function:
str_duq(1:2, 1:3)
#> Error in str_duq(1:2, 1:3): could not find function "str_duq"
Historically, the best defense against this was to assert that the condition message matches a certain regular expression, via the second argument, regexp.
expect_error(1 / "a", "non-numeric argument")
expect_warning(log(-1), "NaNs produced")
This does, in fact, force our typo problem to the surface:
expect_error(str_duq(1:2, 1:3), "recycle")
#> Error in str_duq(1:2, 1:3): could not find function "str_duq"
Recent developments in both base R and rlang make it increasingly likely that conditions are signaled with a class, which provides a better basis for creating precise expectations. That is exactly what you’ve already seen in this stringr example. This is what the class argument is for:
# fails, error has wrong class
expect_error(str_duq(1:2, 1:3), class = "vctrs_error_incompatible_size")
#> Error in str_duq(1:2, 1:3): could not find function "str_duq"
# passes, error has expected class
expect_error(str_dup(1:2, 1:3), class = "vctrs_error_incompatible_size")
If you have the choice, express your expectation in terms of the condition’s class, instead of its message. Often this is under your control, i.e. if your package signals the condition. If the condition originates from base R or another package, proceed with caution. This is often a good reminder to re-consider the wisdom of testing a condition that is not fully under your control in the first place.
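For instance, if your package signals its own classed error (here via rlang::abort(); check_positive() and the class name are hypothetical examples), the test can target that class directly:
# hypothetical package function that signals a classed error
check_positive <- function(x) {
  if (x <= 0) {
    rlang::abort("`x` must be positive.", class = "mypkg_error_not_positive")
  }
  invisible(x)
}

test_that("check_positive() errors with its own class for bad input", {
  expect_error(check_positive(-1), class = "mypkg_error_not_positive")
})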
To check for the absence of an error, warning, or message, use expect_no_error():
expect_no_error(1 / 2)
Of course, this is functionally equivalent to simply executing 1 / 2 inside a test, but some developers find the explicit expectation expressive.
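In recent versions of testthat, the same pattern is available for the other condition types, e.g.:
expect_no_warning(log(1))
expect_no_message(sqrt(4))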
If you genuinely care about the condition’s message, testthat 3e’s snapshot tests are the best approach, which we describe next.
13.5.3 Snapshot tests
Sometimes it’s difficult or awkward to describe an expected result with code. Snapshot tests are a great solution to this problem and this is one of the main innovations in testthat 3e. The basic idea is that you record the expected result in a separate, human-readable file. Going forward, testthat alerts you when a newly computed result differs from the previously recorded snapshot. Snapshot tests are particularly suited to monitoring your package’s user interface, such as its informational messages and errors. Other use cases include testing images or other complicated objects.
We’ll illustrate snapshot tests using the waldo package. Under the hood, testthat 3e uses waldo to do the heavy lifting of “actual vs. expected” comparisons, so it’s good for you to know a bit about waldo anyway. One of waldo’s main design goals is to present differences in a clear and actionable manner, as opposed to a frustrating declaration that “this differs from that and I know exactly how, but I won’t tell you”. Therefore, the formatting of output from waldo::compare() is very intentional and is well-suited to a snapshot test. The binary outcome of TRUE (actual == expected) vs. FALSE (actual != expected) is fairly easy to check and could get its own test. Here we’re concerned with writing a test to ensure that differences are reported to the user in the intended way.
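As an aside, that simpler “binary” check might be as small as this sketch, which relies on waldo::compare() returning a zero-length character vector of differences when the inputs agree:
expect_length(waldo::compare(1:3, 1:3), 0)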
waldo uses a few different layouts for showing diffs, depending on various conditions. Here we deliberately constrain the width, in order to trigger a side-by-side layout.2 (We’ll talk more about the withr package below.)
withr::with_options(
list(width = 20),
waldo::compare(c("X", letters), c(letters, "X"))
)
#> old | new
#> [1] "X" -
#> [2] "a" | "a" [1]
#> [3] "b" | "b" [2]
#> [4] "c" | "c" [3]
#>
#> old | new
#> [25] "x" | "x" [24]
#> [26] "y" | "y" [25]
#> [27] "z" | "z" [26]
#> - "X" [27]
The two primary inputs differ at two locations: once at the start and once at the end. This layout presents both of these, with some surrounding context, which helps the reader orient themselves.
Here’s how this would look as a snapshot test:
test_that("side-by-side diffs work", {
withr::local_options(width = 20)
expect_snapshot(
waldo::compare(c("X", letters), c(letters, "X"))
)
})
If you execute expect_snapshot() or a test containing expect_snapshot() interactively, you’ll see this:
Can't compare snapshot to reference when testing interactively
ℹ Run `devtools::test()` or `testthat::test_file()` to see changes
followed by a preview of the snapshot output.
This reminds you that snapshot tests only function when executed non-interactively, i.e. while running an entire test file or the entire test suite. This applies both to recording snapshots and to checking them.
The first time this test is executed via devtools::test() or similar, you’ll see something like this (assume the test is in tests/testthat/test-diff.R):
── Warning (test-diff.R:63:3): side-by-side diffs work ─────────────────────
Adding new snapshot:
Code
waldo::compare(c(
"X", letters), c(
letters, "X"))
Output
old | new
[1] "X" -
[2] "a" | "a" [1]
[3] "b" | "b" [2]
[4] "c" | "c" [3]
old | new
[25] "x" | "x" [24]
[26] "y" | "y" [25]
[27] "z" | "z" [26]
- "X" [27]
There is always a warning upon initial snapshot creation. The snapshot is added to tests/testthat/_snaps/diff.md, under the heading “side-by-side diffs work”, which comes from the test’s description. The snapshot looks exactly like what a user sees interactively in the console, which is the experience we want to check for. The snapshot file is also very readable, which is pleasant for the package developer. This readability extends to snapshot changes, i.e. when examining Git diffs and reviewing pull requests on GitHub, which helps you keep tabs on your user interface. Going forward, as long as your package continues to recapitulate the expected snapshot, this test will pass.
If you’ve written a lot of conventional unit tests, you can appreciate how well-suited snapshot tests are for this use case. If we were forced to inline the expected output in the test file, there would be a great deal of quoting, escaping, and newline management. Ironically, with conventional expectations, the output you expect your user to see tends to get obscured by a heavy layer of syntactical noise.
What about when a snapshot test fails? Let’s imagine a hypothetical internal change where the default labels switch from “old” and “new” to “OLD” and “NEW”. Here’s how this snapshot test would react:
── Failure (test-diff.R:63:3): side-by-side diffs work──────────────────────────
Snapshot of code has changed:
old[3:15] vs new[3:15]
" \"X\", letters), c("
" letters, \"X\"))"
"Output"
- " old | new "
+ " OLD | NEW "
" [1] \"X\" - "
" [2] \"a\" | \"a\" [1]"
" [3] \"b\" | \"b\" [2]"
" [4] \"c\" | \"c\" [3]"
" "
- " old | new "
+ " OLD | NEW "
and 3 more ...
* Run `snapshot_accept('diff')` to accept the change
* Run `snapshot_review('diff')` to interactively review the change
This diff is presented more effectively in most real-world usage, e.g. in the console, by a Git client, or via a Shiny app (see below). But even this plain text version highlights the changes quite clearly. Each of the two loci of change is indicated with a pair of lines marked with - and +, showing how the snapshot has changed.
You can call testthat::snapshot_review('diff') to review changes locally in a Shiny app, which lets you skip or accept individual snapshots. Or, if all changes are intentional and expected, you can go straight to testthat::snapshot_accept('diff'). Once you’ve re-synchronized your actual output and the snapshots on file, your tests will pass once again. In real life, snapshot tests are a great way to stay informed about changes to your package’s user interface, due to your own internal changes or due to changes in your dependencies or even R itself.
expect_snapshot() has a few arguments worth knowing about:
- cran = FALSE: By default, snapshot tests are skipped if it looks like the tests are running on CRAN’s servers. This reflects the typical intent of snapshot tests, which is to proactively monitor user interface, but not to check for correctness, which presumably is the job of other unit tests which are not skipped. In typical usage, a snapshot change is something the developer will want to know about, but it does not signal an actual defect.
- error = FALSE: By default, snapshot code is not allowed to throw an error. See expect_error(), described above, for one approach to testing errors. But sometimes you want to assess “Does this error message make sense to a human?” and having it laid out in context in a snapshot is a great way to see it with fresh eyes. Specify error = TRUE in this case:
  expect_snapshot(error = TRUE,
    str_dup(1:2, 1:3)
  )
- transform: Sometimes a snapshot contains volatile, insignificant elements, such as a temporary filepath or a timestamp. The transform argument accepts a function, presumably written by you, to remove or replace such changeable text. Another use of transform is to scrub sensitive information from the snapshot. (A brief sketch follows this list.)
- variant: Sometimes snapshots reflect the ambient conditions, such as the operating system or the version of R or one of your dependencies, and you need a different snapshot for each variant. This is an experimental and somewhat advanced feature, so if you can arrange things to use a single snapshot, you probably should.
In typical usage, testthat will take care of managing the snapshot files below tests/testthat/_snaps/. This happens in the normal course of you running your tests and, perhaps, calling testthat::snapshot_accept().
13.5.4 Shortcuts for other common patterns
We conclude this section with a few more expectations that come up frequently. But remember that testthat has many more pre-built expectations than we can demonstrate here.
Several expectations can be described as “shortcuts”, i.e. they streamline a pattern that comes up often enough to deserve its own wrapper.
- expect_match(object, regexp, ...) is a shortcut that wraps grepl(pattern = regexp, x = object, ...). It matches a character vector input against a regular expression regexp. The optional all argument controls whether all elements or just one element needs to match. Read the expect_match() documentation to see how additional arguments, like ignore.case = FALSE or fixed = TRUE, can be passed down to grepl().
  string <- "Testing is fun!"

  expect_match(string, "Testing")

  # Fails, match is case-sensitive
  expect_match(string, "testing")
  #> Error: `string` does not match "testing".
  #> Actual value: "Testing is fun!"

  # Passes because additional arguments are passed to grepl():
  expect_match(string, "testing", ignore.case = TRUE)
- expect_length(object, n) is a shortcut for expect_equal(length(object), n).
- expect_setequal(x, y) tests that every element of x occurs in y, and that every element of y occurs in x. But it won’t fail if x and y happen to have their elements in a different order.
- expect_s3_class() and expect_s4_class() check that an object inherit()s from a specified class. expect_type() checks the typeof() an object.
  model <- lm(mpg ~ wt, data = mtcars)
  expect_s3_class(model, "lm")

  expect_s3_class(model, "glm")
  #> Error: `model` inherits from 'lm' not 'glm'.
expect_true() and expect_false() are useful catchalls if none of the other expectations does what you need.
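For example (a small sketch; the checked properties are arbitrary), a plain logical check still reads clearly when nothing more specific fits:
expect_true(is.data.frame(mtcars))
expect_false(anyNA(mtcars))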
1. The legacy function testthat::context() is superseded now and its use in new or actively maintained code is discouraged. In testthat 3e, context() is formally deprecated; you should just remove it. Once you adopt an intentional, synchronized approach to the organisation of files below R/ and tests/testthat/, the necessary contextual information is right there in the file name, rendering the legacy context() superfluous.↩︎
2. The actual waldo test that inspires this example targets an unexported helper function that produces the desired layout. But this example uses an exported waldo function for simplicity.↩︎