
Automated Tests

Automated tests can be added to your Interactive Labs to verify they are working as expected. Each time the Lab is changed, the tests are re-run and the results displayed in the dashboard.

Cypress

Cypress.io is a complete end-to-end testing tool that can be used to verify that your interactive labs are working as expected.

You can learn more about Cypress at https://docs.cypress.io/guides/overview/why-cypress.html.

Cypress version 13.3.0 is currently supported.

Instruction-Guided Tests

Many Labs can be tested with a Cypress command that inspects the lab instructions for actions a learner should take and then performs them. We call these tests “instruction-guided” because they use your lab instructions to infer how to complete the lab, without you having to write detailed test code.

it("performs all lab actions", () => {
  cy.performAllLabActions();
});

This works best when the lab instructions explicitly specify every step the learner must take to complete the lab.
(Challenges are not yet supported for this reason.)

The performAllLabActions Cypress command makes some assumptions by default:

  • A code block contains a shell command unless a language is specified on the code block
  • A successful code block exits with a zero status, and a failing code block exits with a non-zero status
  • Any links to web applications within the lab environment (on environments.katacoda.com) respond with HTTP 200 when they are working as expected and some other status if they are not
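For example, a plain code block with no language, written in the usual execute-macro style, is assumed to contain a shell command that should exit with status zero:

```
apt-get install -y curl
```{{execute}}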

However, if your lab doesn’t conform to these expectations, you may still be able to annotate it so that the performAllLabActions Cypress command can successfully test it.

Annotating Lab Instructions

To help the performAllLabActions Cypress command navigate your lab, you may annotate the lab in the following ways.

Long-running Commands

Some shell commands never terminate on their own; they run for the duration of the lab or until the user interrupts them. Examples include top, watch, or development servers running in the foreground.

By default, the performAllLabActions Cypress command will wait for these commands to exit, and it will fail the test if they do not exit. To tell it not to wait, add the test-no-wait annotation to the execute macro:

```
top
```{{execute test-no-wait}}

The interrupt modifier may be used in the next code block to interrupt the long-running shell command, or the long-running command may be started in a second terminal by adding the T2 modifier and left running.
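For example (a sketch; the exact placement of the modifiers is assumed from the names above), a follow-up block can interrupt the long-running command before running its own:

```
watch date
```{{execute test-no-wait}}

```
echo "watch interrupted"
```{{execute interrupt}}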

If you would also like the test to validate that the shell command started successfully, surround the code block with a div tag, and add a data-test-output that says what output to look for:

<div data-test-output="* Listening on http://0.0.0.0:3000">

  ```
  rails server
  ```{{execute test-no-wait}}

</div>

To render Markdown inside of HTML block tags, you must separate the fenced code from the div with blank lines, as shown in this example.

After starting the rails server command, the test will wait for the output * Listening on http://0.0.0.0:3000 and fail if it does not appear. (Currently it looks for the output anywhere in the terminal, not just since the command started, but that may change in the future.)

The test-no-wait and data-test-output must currently be used together; any code block that uses one must also use the other, but that may change in the future.

Commands in a REPL

By default, the performAllLabActions Cypress command will assume that code blocks contain shell commands. But if a language is specified on the code block, the performAllLabActions command will interpret the output of the code block’s execution differently.

These languages are currently supported:

| Language | Prompt | Errors |
|----------|--------|--------|
| Bash | See below | |
| Ruby 3 (irb) | `3.2.2 :001 > ` (`/^3.\d+.\d+ :\d{3,} > $/`) | `[/error/i, /^Traceback (most recent call last)/]` |
| Python | `>>> ` (`/^>>> $/`) | `[/Traceback (most recent call last)/]` |
| SQL | `mysql> ` or `postgres=> ` (`/^[_=a-z0-9]*> $/i`) | `[/^ERROR \d+ \(/, /^ERROR: /, /^psql: error: /]` |
| ↳ PostgreSQL (psql) | `some_database=> ` (`/^[_a-z0-9]+=> $/i`) | `[/^ERROR: /, /^psql: error: /]` |
| ↳ MySQL | `mysql> ` (`/^mysql> $/i`) | `[/^ERROR \d+ \(/]` |

The prompt and error patterns are used to determine when a code block finishes executing and whether it was successful.
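As a sketch of how such patterns might be applied (this is illustrative only, not the command's actual implementation):

```javascript
// Illustrative sketch: decide that a REPL code block has finished (the
// prompt is back) and whether it failed (an error pattern appeared).
// These patterns mirror the Python row of the table above.
const pythonPrompt = /^>>> $/;
const pythonErrors = [/Traceback \(most recent call last\)/];

function classify(terminalLines) {
  const lastLine = terminalLines[terminalLines.length - 1];
  return {
    done: pythonPrompt.test(lastLine),
    failed: terminalLines.some(line =>
      pythonErrors.some(pattern => pattern.test(line))
    ),
  };
}

console.log(classify(["Traceback (most recent call last):", ">>> "]));
// → { done: true, failed: true }
```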

Note that using the sql language may enable better highlighting, but using mysql or psql will use more specific prompt and error patterns. Start with sql and use the more specific patterns only if necessary.
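For example, a code block annotated with the generic sql language (following the annotation style shown earlier; the query itself is illustrative) will be matched against the generic SQL prompt and error patterns:

```sql
SELECT 1;
```{{execute}}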

Bash shell is not on the list because it is too difficult to detect errors from output alone. For code blocks with no language, a shell-specific algorithm is used instead: it appends a command that prints the exit status of the last command in each code block.
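The effect is roughly like the following sketch (the command the test actually appends may differ):

```shell
# Sketch: after the learner's command (here, a grep that finds nothing and
# so exits with status 1), print the exit status so the test can read it.
ret=0
grep -q needle /dev/null || ret=$?
echo "exit status=$ret"
# prints "exit status=1"
```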

Links to Web Applications

By default, the performAllLabActions Cypress command will assume that links to the lab environment should respond with an HTTP 200 status. To instead validate them by checking for a string in the HTTP response body, add a data-test-contains attribute to the link:

<a href="https://[[HOST_SUBDOMAIN]]-3000-[[KATACODA_HOST]].environments.katacoda.com"
  data-test-contains="Ruby on Rails 7.1">check for Ruby on Rails</a>

If you are currently using Markdown syntax for the link, you’ll need to convert it to an HTML-style link, for example:

[text](url)

would become:

<a href="url" data-test-contains="string">text</a>

Writing your own tests

If you are not able to test your lab successfully using instruction-guided tests (above), even after adding annotations, you may still be able to test your lab with some high-level Cypress commands that are designed specifically for interacting with labs.

The performAllLabActions Cypress command works by internally calling many of these same commands.

Cypress Commands for Labs

The custom Cypress commands available for labs are listed in the table in the Low-level Cypress Commands section below.

Example detailed test

Here’s an example detailed test using some of these high-level Cypress commands:

describe("My Test", () => {
  it("Starts development server", () => {
    cy.startScenario();
    cy.watchForVsCodeLoaded(); // Don't await this

    cy.clickStepActions("Install Dependencies");

    cy.clickCodeBlockContaining("npm start");
    cy.get("a")
      .contains("the link text")
      .then($link => cy.followLink($link));
  });
});

(Such a test should only be necessary if an instruction-guided test cannot test your lab.)

Writing your own tests vs. instruction-guided tests

Compared with instruction-guided tests, tests that you write yourself will need to be updated more frequently as you change the lab instructions. For example, if you add a code block to the instructions, your test will not run that code block until you update the test to do so.

To reduce the maintenance effort required by your tests, we encourage you to refer to code blocks by a unique substring that will not need to be updated every time you refine your instructions. For example, if you ask the user to run apt-get install -y curl in the lab instructions, you may run that code block with the following test code:

cy.clickCodeBlockContaining("apt-get install");

Omitting the -y option and curl package allows you to adjust the option and package list in the instructions without having to also change your test in lock step.

Low-level Cypress Commands

Cypress provides a number of built-in commands that can interact with Cypress’s headless browser.

In addition, the following custom Cypress commands allow you to interact with your lab during the test. This may be useful if your instructions rely on the user to perform actions in a web application in another tab, such as in Google Cloud Console or a Kubernetes Dashboard. You will then need your test to replicate the effect of those actions, for example by running equivalent commands within the lab terminal.

The more detail you encode into your test, the more careful you will need to be to keep it up to date with your instructions. For example, if you remove a step from your instructions that you mistakenly thought was unnecessary, but forget to remove it from your test, your tests will continue to pass despite a problem with the instructions. To avoid that problem, consider verifying the prose instructions you are replicating, so that, if they change, a test failure will alert you to update your test code. For example:

cy.contains("Enter this value into the cloud console for instance XYZ");
cy.terminalType(`cloud instance add --name XYZ --value ${thisValue}`);

| Function | Details | Example |
|----------|---------|---------|
| startScenario | Start and visit the Lab being tested in the browser | `cy.startScenario();` |
| terminalType | Type and execute commands within the Terminal | `cy.terminalType("uname");` |
| terminalShouldContain | Wait for the terminal to contain the given text | `cy.terminalShouldContain('Linux');` |
| terminalShouldNotContain | Wait for the given text to be removed from the terminal. Replaces terminalNotShouldContain. | `cy.terminalShouldNotContain('some text');` |
| terminalDoesNotContain | Assert that the given text is currently absent from the terminal, without waiting as terminalShouldNotContain does | `cy.terminalDoesNotContain('some text');` |
| contains | Wait for the Lab page to contain the given text | `cy.contains('Start Scenario');` |
| terminalShouldHavePath | Wait for the path to be accessible from the terminal | `cy.terminalShouldHavePath("/tmp/build-complete.stamp");` |
| terminalValueAfter | Return the first space-delimited string after the last occurrence of the given text | `cy.terminalValueAfter("exit status=");` |
| stepHasText | Wait for the Lab instructions to contain the given text | `cy.stepHasText("Step 1: ...");` |
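The extraction performed by terminalValueAfter can be pictured as follows (an illustrative sketch, not the command's actual implementation):

```javascript
// Sketch: return the first space-delimited string after the last
// occurrence of the marker text, as terminalValueAfter is described to do.
function valueAfter(terminalText, marker) {
  const idx = terminalText.lastIndexOf(marker);
  if (idx === -1) return null;
  return terminalText.slice(idx + marker.length).trim().split(/\s+/)[0];
}

console.log(valueAfter("build ok\nexit status=0\n$ ", "exit status="));
// → "0"
```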

These custom Cypress commands can be combined with the built-in Cypress commands, documented on docs.cypress.io.

Creating The First Test

  1. Within the lab you want to test, add a folder called .cypress.
  2. Create a file with the suffix _spec.js (such as test1_spec.js). This file will contain the Cypress tests you want to run. The underlying Katacoda platform supports splitting your tests over multiple files, but most often you will want to test your entire lab from start to finish in one test case in one file. Subdirectories are not supported.
  3. Create your Cypress test using the helpers above. Ensure that cy.startScenario() is called to load your lab.
  4. Commit the test to Git and push to the remote Git Repository to trigger Katacoda re-processing.
  5. Visit the Dashboard to view the results.
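Putting the steps together, a lab's layout might look like this (the markdown and index file names are illustrative; only the .cypress folder and the _spec.js suffix are required by the steps above):

```
my-lab/
├── index.json
├── step1.md
└── .cypress/
    └── test1_spec.js
```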

A complete example can be found at https://github.com/katacoda/scenario-examples/tree/main/automatedtest

Automatically-Generated Tests

In order to reduce the effort required to get initial test coverage for your labs, we are working towards drafting tests that you may then customize to your needs. They will be delivered to you as a GitLab Merge Request so that you can adjust them before merging if needed.

These tests are generated based on the commands embedded in your lab. They tend to work best for labs that can be completed solely by running the commands embedded in them. If your lab requires the learner to perform a manual step, such as editing a file, you will need to adjust the generated test.

They currently support running commands in a Bash shell or in a Python REPL.

Example

The following is an example of a test to verify the interactive lab:

describe('My First Test', () => {
  before(() => {
    // This would start and visit the Lab being tested. Without
    // this the test will fail.
    cy.startScenario();
  });

  it('finds the content "Start Scenario"', () => {
    cy.contains('Start Scenario');
  });

  it('runs commands', () => {
    cy.terminalType("uname");

    cy.terminalShouldContain('Linux');
  })
})

A complete example can be found at https://github.com/katacoda/scenario-examples/tree/main/automatedtest.

More examples of using Cypress can be found at https://docs.cypress.io/examples/examples/recipes.html#Fundamentals.

View Results

The results of the test execution can be viewed in the Dashboard. You can find test runs for your organization at https://dashboard.katacoda.com/content/testruns. If you have tests in place, a test run will be triggered automatically whenever changes are pushed to your labs’ repository. You can also re-run all of your tests at once by clicking the “Force Retest” button.

Katacoda Testruns Dashboard

If your test fails, it may be because of an issue with the lab instructions or an issue with the lab itself. Look for screenshots of the test failure on the test result page. Also refer to the logs from the test run (those logs are not shown for passing test runs).

Katacoda Testruns Logs

Under the Covers

Want to see what’s happening under the covers? Here’s a video of a test execution:

This happens on every change to your repo to ensure your labs and environments are ready for your learners.