Challenges Authoring Guide

Challenges, despite the name, shouldn’t necessarily be “challenging” in the sense of being difficult. Someone who has already learned the necessary skills should be able to easily complete the corresponding challenge.

The authoring responsibilities for a challenge are somewhat more extensive than those for a lab. For a lab, you set up the index.json file and create a markdown file for each step. For a challenge, you configure your index.json file to let the Katacoda platform know that it’s a challenge, then compose the following for each task:

  • A markdown task or instruction file
  • An .sh file containing some verification(s) for that task
  • A markdown file containing a hint for the learner to help them proceed*

As a reminder, once you understand the basic authoring workflow, you can use the Solver utility to assist in creating your challenges.

* Note that in challenge versions below 1.0, a task’s hints are provided as *.sh files. If possible, please update to challenge@1.0 when updating your challenges.

Writing the Challenge index.json File

While the basic format of a challenge’s index.json file is similar to that of a lab’s (including the intro, finish, and credits fields), challenges require a few extra elements.

Letting the Platform Know it’s a Challenge

In order to let the learning platform (and the underlying Katacoda platform) know that the challenge directory should be rendered as a challenge, you need to add "type": "challenge@1.0" to the top of the index.json file, as shown here:

{
    "type": "challenge@1.0",
    "title": "Challenge Example",
    "description": "Basic template for a challenge",
    

This tells Katacoda that the session will be a challenge, as well as which version of the challenge API to use.

NOTE: The current recommended type for all new challenges is "challenge@1.0". The "challenge@0.9" version is still available for existing challenges; however, it may be deprecated in the future. Your editor or technical instructional designer will notify you if and when you need to make any updates to your challenges.

Specifying the Title, Text, Verification Script, and Hint for Each Task

While regular scenarios have “steps,” the challenges API interprets steps as “tasks.” Your index.json should continue to specify an array of steps (tasks) in the same way that you would denote them for a lab, but you will need to include a verify script and hint file for each step as well. For example:

{
    "type": "challenge@1.0",
    "title": "Challenge Template",
    "description": "Example template for a challenge",
    "difficulty": "Beginner",
    "time": "5 minutes",
    "details": {
        "steps": [
            {
                "title": "Create a New .config File",
                "text": "1_task.md",
                "verify": "1_verify.sh",  // <-- New
                "hint": "1_hint.md"       // <-- New
            },
            {
                "title": "Increase Widget Capacity",
                "text": "2_task.md",
                "verify": "2_verify.sh",  // <-- New
                "hint": "2_hint.md"       // <-- New
            }
        ],

The title for each task should briefly summarize the goal of the task. Lead with a verb indicating the action to be taken. Do not put the task number in the title field.

The text element points to a markdown file that will provide a more detailed prompt for the task. The verify element points to a verification shell script, and the hint element likewise points to a markdown file containing a hint for that task.

We recommend numbering all task-related files together so that they’re easy to find in your repo. For example:

  • 1_task.md
  • 1_hint.md
  • 1_verify.sh
  • 2_task.md
  • 2_hint.md
  • 2_verify.sh
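
If it helps, you can scaffold these files from the terminal before filling them in. A minimal sketch, assuming a two-task challenge (adjust the sequence to match your task count):

for i in 1 2; do
  touch "${i}_task.md" "${i}_hint.md" "${i}_verify.sh"
done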

Choosing a Layout for Your Challenge

We encourage authors to use the common, clean layout of just "uilayout": "terminal"; if the learner will be asked to manipulate or edit files, also embed the VS Code IDE with "showide": true. Here is a typical environment block:

  "environment": {
    "showide": true,
    "hidesidebar": true,
    "uilayout": "terminal"
  },

Writing Task Instructions

Task instructions should be written as short, concise requests to the learner, including any details necessary for the verification(s) to pass. Tasks assume the learner has the fundamental skills to carry out the request. The trick is to provide enough instruction to meet the goal without telling the learner how to implement the solution. Hints can provide further details, but should also avoid fully revealing the solution. The task should ask only for the “what,” not show the “how.”

In a challenge, you assume that the learner understands all the mechanics, so a brief, direct approach to writing tasks is preferred. For example, a 1_task.md file might read:

You want the widget `bar` to take on a 3D configuration. Create a declaration called _foo.json_ and apply it to widget `bar`.

The syntax for the markdown for challenge tasks and hints is mostly the same as the markdown syntax for labs, except challenges don’t support the following markdown extensions in task and hint files:

  • {{execute}}
  • {{interrupt}}
  • {{copy}}
  • {{open}}

NOTE: Images in the intro and finish pages are referenced the same way as in labs, with a single relative dot (.) pointing to the assets directory, such as ./assets/cat.png. However, if you need to include images in challenge tasks and/or hints, use a double dot (..), as in ../assets/dog.png, so that the challenge can correctly pull the image.

For more information on authoring challenge tasks and challenges more generally, see Challenge Authoring Tips.

Writing Task Verification

As previously mentioned, the verify element points to a Bash shell script (.sh file). This script is evaluated continuously in the background until it returns an exit code of 0 (success), at which point the task is flagged as completed and the challenge proceeds to display the next task. No parameters are passed to the verification script, and the script is expected to return the standard zero exit code for success or a non-zero code for failure (i.e., an incomplete task). Note that there is a few seconds of delay built into the verification loop.

For example, if bananas.txt doesn’t exist, an exit code of 1 (failure) is returned:

$ test -f ./bananas.txt
$ echo $?
1

As soon as we create bananas.txt, the next time the verification test is run, it will pass, returning an exit code of 0 (success):

$ touch bananas.txt
$ test -f ./bananas.txt; echo $?
0

IMPORTANT: Your verification scripts must be non-blocking. If you call a command or function that blocks, make sure there is a timeout around the call to ensure it does not block indefinitely.
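
For example, if a verification needs to poll a service that could hang, you can wrap the call in the timeout command. A minimal sketch, assuming a hypothetical service with a /health endpoint on localhost port 8080:

#!/bin/bash

# Give up after 5 seconds so the verification loop is never blocked;
# --fail makes curl return a non-zero exit code on HTTP errors.
timeout 5 curl --silent --fail http://localhost:8080/health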

As a reminder, Bash scripts are expected to begin with the #!/bin/bash shebang line. Make sure to follow that convention when saving each verification script into its own .sh file, as in:

#!/bin/bash

test -f /root/bananas.txt
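
Verification scripts can also check file contents, not just existence. For instance, a sketch for the widget task shown earlier might look like the following (the path and key are hypothetical; match them to whatever your task actually asks for):

#!/bin/bash

# Pass only if the declaration file exists and enables the 3D configuration.
test -f /root/foo.json && grep -q '"3d": true' /root/foo.json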

While testing, it can be convenient to manually run any verification scripts in the foreground to ensure you get the expected exit code. As a reminder, you can check the exit code of the most recently run command by typing echo $?.
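
For example, to run the first task’s verification script in the foreground (assuming you have completed the task, the exit code should be 0):

$ bash 1_verify.sh
$ echo $?
0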

For more on verification scripts, see Task Verification. For additional verification tooling, check out the Solver utility.

Writing Hints

The hint element in index.json points to a .md file that provides a single hint for each task.

Hints are useful for providing learners with an indication of how to unblock themselves if they appear to be taking a long time or have missed something in their solution that’s stopping them from proceeding.
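
Continuing the widget example from earlier, a hypothetical 1_hint.md might read:

Double-check the name of your declaration file (_foo.json_), and remember that the declaration has no effect until you apply it to the widget `bar`.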

Writing Solutions

Each challenge you write should be accompanied by a SOLUTIONS.md file that details the code or command(s) that will pass verification for each task. Users will not have access to the solutions file. Your solution file is for O’Reilly internal QA purposes only. The following best practices should be used when writing your solutions file:

  • Write your solutions in a clear, linear fashion for easy testing.
  • Clearly denote what command(s) or code should be pasted into the environment and where in order for the task to succeed.

For example:

Task 1

select title, length from film limit 5;

or

Task 2

Create /root/my_app/src/People.js and make it look like this:

export function People() {
  return (
    <>
      <h1>All the people</h1>
    </>
  )
}
  • Assume your QA tester has no coding or command-line experience. Avoid jargon, lingo, or assumptions that would make testing the challenge difficult for a non-technical user.
  • Test your challenge against your SOLUTIONS.md instructions to ensure they are accurate.