
Solver Utility

Writing and delivering challenges has many similarities to developing and producing an application. High-quality applications typically offer ways to test both manually and automatically. Testing is important during development, at production release, and in pipelines that automate tests against published challenges.

As an author, up to this point you have created the markdown files for the tasks, the verification code for each task, and some hints composed of scripts and text. You also need a way to rapidly test the scenario, and to give other testers and producers the ability to complete the scenario without being subject matter experts.

There is a Solver utility that helps you organize the verifications and hints and provides a mechanism for solution scripts. Solver is not required, but it can shorten your time to production and improve quality by promoting testing.

Solver is a command-line tool that you can install. Some of its commands help you at authoring time, but most are used while the scenario is running.

The CLI tool offers a variety of commands. One of the handiest is solver next, which completes the current step for the tester. Solver tracks the current task number and executes the commands necessary to complete the task. While you could produce a cheat sheet for your testers and producers to read and interpret, it's better to code the solutions. That way solver can solve each sequential task to exercise the validation logic. Alongside the verification logic, you provide the corresponding solution logic to complete each task. Learners never run this support utility; it's just for you and your testers to verify the quality of your challenge. Also, when you come back to the challenge in six months, it may be hard to remember all the tasks needed to complete the challenge. Manually reading through a readme cheat sheet does not scale well when you have multiple challenges to create and maintain.

Katacoda also provides the Cypress framework for automated testing. Your Challenge's Cypress scripts can call the solver commands directly. Leveraging the Solver tool is highly recommended.

With Solver, the source of truth to complete each task is coded in solution script functions.

Quickly Create a Challenge Using Solver

The best way to get started with Solver is to try it. Follow these instructions to create your first skeleton of a working Challenge that uses solver.

  1. Create an alias for the solver tool. Assign the latest release version number to the <semver> in this statement: alias solver="docker run --rm<semver>
  2. Navigate to the base directory of all your O’Reilly scenarios and challenges.
  3. Create a new challenge: solver create --archetype=linux --destination=. --force
  4. Rename the new challenge directory from challenge-linux-solver to something that makes sense for your new challenge. You can also change the title and description in the index.json in that project.
  5. Commit and push your updated set of solutions and challenges.
  6. Wait a few minutes for the solution to publish, then look for the new tile among the scenarios and challenges in your ORM profile.

Quickly Test a Challenge Using Solver

When a challenge starts, a solutions script is present that can answer each task sequentially. Normally this script lies dormant within the produced challenge as an encrypted file in /opt/ These solutions are never available to learners. Since you, as the author or tester, have access to the source code for the challenge, you can find and copy the key to decrypt the /opt/ to /usr/local/bin/

  1. Copy the decryption key found in the source code to your clipboard.
  2. In the Challenge, type solver solutions --decrypt=<paste key>
  3. Run solver next to solve the current task, or solver all to complete all the tasks.

Solver will sequentially solve each task to the end of the challenge. If the end is not reached, there is a defect in the verification or solution scripts.

Solver Commands

This is the current command list for solver as of version 0.4.1:

$ solver --help

Usage: solver [-hV] [COMMAND]
An authoring tool and utility for the O'Reilly Challenges framework. Verify tasks, 
provide hints, and solve tasks in a Challenge. Works with the provided,, and as the supporting sources.

  -h, --help      Show this help message and exit.
  -V, --version   Print version information and exit.
  solutions, sol  Install solutions for testing. Requires authoring passcode.
  next            Solve current task and on success advance current task number.
  all             Solve all remaining tasks.
  until           Solve all tasks from current task until reaching given task number.
  verify          Verify task number is complete.
  hint            Get hint give a task number and hint number.
  view            Reveal the verifications, hints, and solutions for a task.
  reset           Clear task tracker so next task is returned back to 1.
  status          Get the next task to solve.
  create          Create a Challenge project from the given archetype when in
                    authoring context.
  check           Determine required artifacts for challenge are present and
                    correct in either authoring or challenge contexts.

Once solutions have been decrypted, commands such as next, all, and until will solve the challenge. Before publication, the 'all' command must solve all tasks without error.

Install Solver For Authoring

Solver is an open-source tool shared on GitHub.

The best way to get started with Solver is to download the tool to your local development environment and start using it to create a new challenge. Currently, the tool is offered as a Debian Linux command-line tool or as a container image. Additional native options for OSX and Windows are on the roadmap. Choose one of these two options to install the Solver tool to your development environment.

Install Solver to Linux

The tool can be downloaded from the release page:

wget -q -O solver$SOLVER_VERSION/solver-$SOLVER_VERSION-linux
chmod +x solver && mv solver /usr/local/bin/

Install Solver using a Container Image (OS agnostic)

Create an alias for the solver command. Assign the latest release version number to the <semver> in this statement:

alias solver="docker run --rm$SOLVER_VERSION"

Using Solver for Authoring

Once installed, standard commands such as solver --version and solver --help will show that it's working. Most of the commands are used by the challenge framework or by you while running and testing a challenge. However, two commands can help you on the development side.

  1. The create command will place a small, yet fully functioning challenge source project on your local drive.
  2. The solutions --encrypt command will encrypt the solutions script to prevent learners from seeing or accessing the solutions used for testing.

Create a Challenge

A fast way to get started is to use the create command. The create command needs to know the directory where to place the project and the type of project:

solver create --destination=tmp --archetype=linux

Only the linux archetype is currently available; others are on the roadmap. This command creates a small, functional, and canonical challenge project complete with all the files that solver will need. The challenge will use the same version of the solver utility you are using locally, as instructed in

Once created, explore the layout and files. Each task in the index.json references two small scripts that bridge the challenge framework to the solver utility. On inspecting them you'll see that the current task number is obtained, the task is verified, and an advance to the next task is requested once verification passes.


The verification for each task is provided by you in the assets/ script. Inspect the script and you'll see each task has a verification function named verify_task_N(). When you provide these named functions with corresponding task numbers, solver will call the appropriate verification function for each task.

These functions typically have multiple verification steps. Each verification returns a number. When a verification fails, that number is returned to solver. This number is the hint identifier. Given the task number and the hint number, solver returns the appropriate hint to the challenge framework for display to the learner when hints are enabled.

Notice the call to solver verify -q. This runs the verification code with the current task number. Solver expects all the verification logic to be found in the file /usr/local/bin/ Solver then expects one shell script function to be defined for each step. Solver finds the function if it is named verify_task_n, where n is the number of the task. Here is an example verification function for step 7 of a challenge:

function verify_task_7() {

  # Is the image created?
  docker images "$REGISTRY/$image_name" | grep -q "$REGISTRY/$image_name"
  if [[ $? -ne 0 ]]; then
    return 1
  fi

  # Is the image created with the correct version?
  docker images "$REGISTRY/$image_name" | grep -q "$version"
  if [[ $? -ne 0 ]]; then
    return 2
  fi
}

This verification example checks two states. A verification can contain one or more checks of individual states. Each failed check returns a hint/error number greater than zero; returning zero (or not returning) indicates success. Each failure number maps directly to a hint. These return codes are essentially error codes mapped to hints, and the hints are error messages rewritten in a style that guides the learner toward the solution. The hint text does not live in the verification function because it is markdown and would clutter the verification source code; hint text authoring is detailed below. Two or more checks can share the same return code, but in most contexts you want to associate each check with a unique hint.
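To make the return-code-to-hint mapping concrete, here is a small self-contained sketch you can run in a plain bash shell. The file name my-nginx.yaml and the checks are illustrative assumptions, not Solver's internals:

```shell
#!/usr/bin/env bash
# Illustrative sketch (not Solver internals): a verify_task_N function whose
# non-zero return code selects the hint to display.

function verify_task_1() {
  # Hint 1: the manifest file must exist
  [[ -f my-nginx.yaml ]] || return 1
  # Hint 2: all {{...}} tokens must be substituted
  grep -q "{{\|}}" my-nginx.yaml && return 2
  return 0
}

# Simulate a learner who created the file but left a token in place.
echo "port: {{container port}}" > my-nginx.yaml

verify_task_1 || rc=$?
echo "hint id: ${rc:-0}"   # a non-zero id picks the matching hint
```

Running this prints hint id 2, because the file exists (check 1 passes) but still contains an unsubstituted token (check 2 fails).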

To install the verifications, add the script to the challenge assets directory and add this assets instruction to index.json:

"assets": {
  "host01": [
    {"file": "", "target": "/usr/local/bin/", "chmod": "+x"},
    {"file": "", "target": "/opt"},
    {"file": "", "target": "/opt"},


You provide all of the hints in assets/ as simple markdown text. On inspecting assets/ you'll see the hints sequentially grouped by task.

The verification functions return validation error codes. Each code within a task is mapped to a hint. Since hints are written in human language, all of the hints are placed in one file, which lets other producers easily copy edit the content to improve the quality of the hints. Given a task and a verification error number, Solver extracts the specific hint from the file. Solver expects each hint to be marked with a markdown header (##) such as this:

## Task 1, Hint 1

Click on the _IDE_ tab to easily edit the files included in the instructions. 
Changes are automatically saved.

## Task 1, Hint 2

In the `my-nginx.yaml` file change all the  tokens to values to match 
the instructions.

The markdown syntax follows the same Challenge markdown form. The tag must be exactly the string ## Task x, Hint n, where x is the task number starting at 1 and n is the error number returned when a specific verification fails. For instance, for ## Task 1, Hint 2 above, the correlating verification function may look like this:

function verify_task_1() {
  # Have all {{...}} been substituted?
  cat $manifest | grep -q "{{\|}}"
  if [[ $? -eq 0 ]]; then
    return 2
  fi
}

There are normally a few verification blocks in the same function, each returning a different verification number. If your verifications are fine-grained enough to be contextually aware of where the learner most likely is on the path to solving each task, the corresponding hints can be very effective when learners get stuck. Time invested in quality verifications and hints widens the inclusion of learners with different skills. Remember, your goal is to make your learners successful in completing the challenge. The best challenges tap the learner's skills without being impossible or frustrating; learning happens when the challenges are met.
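To see how a hint could be looked up by its header, here is a runnable sketch, assuming a hints file in the ## Task x, Hint n format shown above. The file name hints.md and the awk parsing are illustrative; Solver's real extraction may differ:

```shell
#!/usr/bin/env bash
# Sketch: extract the hint body that follows a "## Task x, Hint n" header.

cat > hints.md <<'EOF'
## Task 1, Hint 1

Click on the _IDE_ tab to easily edit the files.

## Task 1, Hint 2

Replace the tokens.
EOF

task=1; hint=2
hint_text=$(awk -v h="## Task ${task}, Hint ${hint}" '
  $0 == h {found=1; next}   # start capturing after the matching header
  /^## /  {found=0}         # stop at the next header
  found   {print}
' hints.md | sed '/^$/d')

echo "$hint_text"
```

Here the lookup for Task 1, Hint 2 yields the text "Replace the tokens."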

To add the hints file to your challenge, add the following to your Katacoda scenario index.json file:

"assets": {
  "host01": [
    {"file": "", "target": "/usr/local/bin/", "chmod": "+x"},
    {"file": "", "target": "/opt"},
    {"file": "", "target": "/opt"},

Installing Solver Tool Into Challenges

When a Challenge starts, the Solver utility must be in the system path. It's too large to install as an asset, so it's best to install it in the background when the Challenge starts.

Place the installation instructions in the shell script associated with courseData. courseData is typically the background script defined with the introduction page when the scenario begins:

"intro": {
  "text": "",
  "courseData": "", // <- Add binary install to this script
  "code": ""

In the background script, add this installation:

wget -q -O solver$SOLVER_VERSION/solver-$SOLVER_VERSION-runner
chmod +x solver && mv solver /usr/local/bin/

Assign the latest release version number to SOLVER_VERSION in the statement above.

Connect A Challenge to Solver

To summarize, Solver relies on the presence of a few opinionated files.

| File | Purpose | Installation target |
|------|---------|---------------------|
| solver | This CLI utility. | /usr/local/bin |
| | Small script called from each step that queries solver to validate the current step. (See note below) | |
| | Small script called from each step that queries solver for the current hint. (See note below) | |
| | Collection of all the challenge hints, sequentially organized by step and hint number. | /opt |
| | Collection of shell script functions that verify each step. | /usr/local/bin |
| | The commands that provide the required instructions to complete each task. Solutions are organized into bash functions. This resides in the /assets/ folder but is never copied as an asset into the challenge; instead copy the encrypted file. | (never install via assets copy) |
| | Collection of shell script functions that solve each step, encrypted. Use solver solutions --encrypt to create the encrypted file. This resides in the /assets/ folder and must be copied as an asset to /opt. Run the solver solutions --decrypt <key> command to open the solutions for the all, next, and until commands; the decrypted file is copied to /usr/local/bin. | /opt |
| | Holds instructions and the secret key to decrypt the solutions while in the challenge. The decrypted solutions are only for authors, other testers, and automated tests; neither this markdown file nor the key should ever be copied to the scenario or given to learners. | (never install via assets copy) |

NOTE: These two files are in the challenge repo root next to the step/task markdown files. These files do not get loaded as assets to a target.

Solver can be called from the command line by people, but as you can see, the challenge framework invokes solver from the two bridging scripts. In the index.json declaration, you associate each step with these bridging scripts:

    "details": {
        "steps": [
                "title": "Bananas",
                "text": "",
                "verify": "",
                "hint": ""
                "title": "Apples",
                "text": "",
                "verify": "",
                "hint": ""

Notice that the markdown file for each step is still a sequentially numbered file. What's different is that the verify and hint entries call the same scripts for all steps. These two scripts query solver status -q for the current step, then call solver to verify the step or to obtain the appropriate hint.
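The bridging behavior can be sketched as follows. Since solver itself isn't available here, the task number that solver status -q would return is stubbed, and the verification function is illustrative:

```shell
#!/usr/bin/env bash
# Sketch of a bridging verify script: look up the current task number,
# then dispatch to that task's verification function by name.

current_task=2                # stub for: current_task=$(solver status -q)

function verify_task_2() {    # illustrative verification
  return 0
}

if verify_task_$current_task; then
  result="task $current_task verified"
else
  result="task $current_task failed"
fi
echo "$result"
```

Because the function name is built from the task number, one bridging script serves every step.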


For every presented task there must be a verification function, and for every verification there must be a corresponding solution: a function that solves the task. Think of it as matter and antimatter; the solution functions are essentially unit tests that cancel out the verification functions. The solution functions are committed in the assets directory but should never be published when the challenge goes live. Solutions are vital for both manual and automated testing of the challenges.
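The matter-and-antimatter pairing can be exercised with a tiny loop: for each task, running the solution should make the verification pass. A minimal runnable sketch, with illustrative state and function bodies:

```shell
#!/usr/bin/env bash
# Sketch: each solve_task_N should cancel out its verify_task_N, which is
# exactly what automated happy-path testing relies on.

function solve_task_1()  { echo "ready" > state.txt; }
function verify_task_1() { grep -q "ready" state.txt || return 1; }

status="BROKEN"
solve_task_1
if verify_task_1; then
  status="OK"
fi
echo "task 1 $status"
```

If any task prints BROKEN, either the verification or the solution has a defect, which is the same signal solver all gives you.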

Learners will never run Solver and should never be given access to the solutions. However, the solutions must be present to help producers, copy editors, and other testers easily run your challenge without being subject matter experts. Your solutions will also let you rapidly develop and test each challenge. When you revisit your challenge in six months, it's going to be hard to remember all the tasks, and manually reading through a readme cheat sheet does not scale well when you have multiple challenges to create and maintain. With the solution functions installed, the Cypress scripts can also automatically run each challenge through its happy path test.

For solutions to function, Solver expects an executable shell script in /usr/local/bin/ This file is optional and should not be present when a learner is running a Challenge instance. The next, all, and until commands will solve each step; they abort if the script is not present. After the Challenge has started, the tester (human or automated) must decrypt the solutions before next, all, and until can function.

Just like the verifications file, solver expects one shell script function to be defined for each step. Solver finds the function if it is named solve_task_n, where n is the number of the task. Here are example solution functions for steps 1 and 2 of a challenge:

function solve_task_1() {
  cp -n my-nginx.yaml{,.bak1}
  sed -i 's/{{container port}}/80/g;s/{{service port}}/80/g;s/{{selector}}/app: my-app/g' my-nginx.yaml
  kubectl apply -f my-nginx.yaml
  kubectl wait --for=condition=Available deployment/my-nginx
}

function solve_task_2() {
  kubectl port-forward service/my-nginx 8080:80 > /dev/null &
  echo "Forwarding..."
  sleep 4
  curl http://localhost:8080 > page-1.html
}

Each solution function solves its task in the most direct way, programmatically. A learner's hands-on solution may not be as clean or efficient as the solution function's. This is why each solution function solves only the single "happy path" to the challenge goal. Your verification functions, in contrast, should carry more context and hints to account for the various happy and unhappy paths learners may take while solving each task.

Solutions Encryption

At authoring time, each time the solutions file in assets/ is updated, it needs to be re-encrypted with a passcode. The solver solutions --encrypt command ensures a new passcode is created and used to encrypt the updated file. The passcode is written to assets/ and all of these solution-related files should be stored in version control.

This passcode is only for authors and other testers and should not be revealed to learners. Never copy the solutions file or the key as an asset to the Challenge. When in the Challenge, as an author or tester, refer to the key in the source code to install the solutions script with solver solutions --decrypt <key>. Once the decrypted script is present in /usr/local/bin, the solver testing commands like next, all, until, and solve will help solve each task.

Here is a table to help understand how the solutions files should be used:

| File | Created by | In VCS | Copy to | Purpose |
|------|------------|--------|---------|---------|
| | You | | | Contains your solution function for each step. |
| | Solver | | | The encrypted file based on the assigned passcode. |
| | Solver | | | Documents the assigned passcode to reference later when testing the live challenge. Do not share with learners. |

To add the encrypted solutions file to your challenge, add the following to the challenge index.json file:

"assets": {
  "host01": [
    {"file": "", "target": "/usr/local/bin/", "chmod": "+x"},
    {"file": "", "target": "/opt"},
    {"file": "", "target": "/opt"},

Manual decryption can also be done with openssl enc -aes-128-ecb -d -in /opt/ -out /usr/local/bin/ -K $(echo -n <key> | hexdump -ve '1/1 "%.2x"'). It's mentioned only to show the curious how solver encrypts and decrypts this file.
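A sandboxed round trip of the same openssl mechanics is easy to try. The file names and the 16-character passcode below are hypothetical stand-ins, not Solver's actual values:

```shell
#!/usr/bin/env bash
# Round-trip demo of AES-128-ECB with a raw hex key, mirroring the manual
# command above. File names and passcode are illustrative.

echo 'echo "solution"' > solutions.sh        # stand-in solutions script
key="my-passcode-16ch"                       # 16 chars = 128-bit key
hexkey=$(echo -n "$key" | hexdump -ve '1/1 "%.2x"')

openssl enc -aes-128-ecb -in solutions.sh -out solutions.sh.enc -K "$hexkey"
openssl enc -aes-128-ecb -d -in solutions.sh.enc -out solutions.dec -K "$hexkey"

cmp -s solutions.sh solutions.dec && echo "round trip OK"
```

The -K flag takes the key as raw hex, which is why the passcode is run through hexdump first.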


Each verification function typically verifies several states of a single task.

As you author and update the solutions file, be sure to encrypt the new changes with solver sol -e. It's important to do this before committing the source to version control.

Make sure each verification state failure returns a different number so your hints can be contextually detailed. This contextual detailing with hints makes these challenges a very effective learning medium for learners. The better you make your verification and hints, the more the learners will appreciate your teaching guidance.

Two verifications for a step may share the same return error code, but in most contexts, you want to associate each verification with a unique hint. The goal is to make hints as unique and context-aware as possible to best guide the learner.

The order of verification should follow logically from general validation to final details. For instance, if you have asked the learner to create a source code file, first check for the file's presence, then check that the file compiles, then check for the specific content that fulfills the instructions. The validation order should reflect the natural steps a learner would follow, progressing from general checks to final details; performing validations in reverse or mixed order would not make sense.
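A runnable sketch of that general-to-specific ordering, using a shell script as the artifact being checked (the file name deploy.sh and the checks are illustrative assumptions):

```shell
#!/usr/bin/env bash
# Sketch: ordered checks, from general (exists) to specific (content),
# each failure returning a distinct hint number.

function verify_task_3() {
  [[ -f deploy.sh ]]          || return 1   # present at all?
  bash -n deploy.sh           || return 2   # valid shell syntax?
  grep -q "kubectl" deploy.sh || return 3   # fulfills the instructions?
  return 0
}

# Simulate a learner who completed the task correctly.
printf 'kubectl apply -f app.yaml\n' > deploy.sh
verify_task_3 || rc=$?
echo "hint id: ${rc:-0}"
```

With all three checks passing the hint id is 0; a missing, broken, or incomplete file would surface hint 1, 2, or 3 respectively.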

While the verifications and solutions are written in Bash, these functions can call out to other programs and scripting languages of your choice; you are not limited to shell scripts. Solver only expects to find the discoverable shell functions verify_task_n() and solve_task_n().

If your verification functions have too many micro verifications, then it could be an indication that the step instructions are asking the learner to solve too many things at once. If this is the case, consider breaking the step into two tasks, or more. A verification with one check may indicate the step is too simplistic. There are no hard guidelines for what makes a step too easy or hard, but this is one indicator to help you measure your scope for each step.

Try not to get too clever, hacky, or ask the learners to perform steps that are outside the scope of the whole challenge goal. Keep each step focused on getting to the finish line of the challenge goal.

Scenarios and challenges are atomic and modular, so don't expect the learner to have tried other scenarios or challenges in a particular sequence.

Validate your scripts with

Correct the wording in your hints, instructions, and any other text content through Grammarly.

When editing markdown and other domain-specific languages (DSLs) in your favorite editor, ensure you have installed a linter for consistent, higher quality, and maintainable sources. Linting is also recommended when publishing examples to learners. For VSCode, these are a few recommended extensions:


Here is the Linux Challenge: Example Using Solver for a complete and canonical live challenge scenario that utilizes solver.

A live O’Reilly Challenge that uses Solver: Kubernetes Challenge: Ingress to Canary Deployment

A live O’Reilly Challenge that uses Solver: Kubernetes Challenge: Scaling and Updating an Application

Here is the solver project.