Task Verification

Your test commands can be simple, like checking for the presence of a file, or more complex, like executing an elaborate .sh script that, in turn, runs other commands (such as regular expression matching or test suites) as needed.

You are free to implement the verification logic in any language as long as it is called by the script, e.g., calling out to Python, Go, Node.js, or any other language or tool you'd like to use. For most purposes, native Bash commands written directly into the verification script are sufficient. This verification test logic is similar to the "Verified Steps" feature.
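
For instance, the verification script can be a thin wrapper that delegates the real work elsewhere. A minimal sketch, in which check_task.py is a hypothetical helper whose exit code is what the engine ultimately sees:

```bash
#!/bin/bash
# Sketch of a verification script that delegates to Python.
# check_task.py is a hypothetical helper; its exit code (0 for success,
# nonzero for failure) propagates to the challenge engine.
python3 /opt/challenge/check_task.py
```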

Writing Detailed Verifications

A key to successful challenges is detailed verifications. While a learner works on a task, the challenge engine continuously calls a verification script and watches for a return code of 0 (i.e., no error).

Some tasks can be simple, such as:

Create a file called foo.json in the current directory.

The solution for that instruction would be a simple touch command, and the verification would be something like test -f foo.json.
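
The whole verification script for that task could therefore be a one-liner. A minimal sketch:

```bash
#!/bin/bash
# Succeed (exit 0) only when foo.json exists in the working directory.
test -f foo.json
```

However, the reality is that many of your tasks will involve a series of smaller micro-validations. For example, you might have a task such as: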

Create a JSON file in the current directory with the name foo. Ensure the JSON follows the correct schema for creating the widget name my-widget with a mode of enhanced.

This is a distinct task in the challenge, but it involves creating a file in a valid form. As an author, if your verification script only checked for the presence of the file, it would miss many other things the learner could still get wrong. In this example, you would validate (as in the sketch following this list):

  • The presence of a file with the correct name in the current directory
  • Valid JSON syntax
  • Valid schema for a widget
  • Valid value settings for the widget parameters
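
Here is how such a layered check might look. This is a sketch only: it assumes jq is available in the environment, and the schema checks (top-level name and mode keys) are hypothetical stand-ins for whatever the real widget schema requires.

```bash
#!/bin/bash
# Layered verification for the foo.json task (illustrative sketch).
# Each failure exits with a unique code so it can be mapped to a hint.
# Assumes jq is installed; the schema checks are hypothetical.

# 1. The file exists in the current directory.
[ -f foo.json ] || exit 10

# 2. The file contains syntactically valid JSON.
jq empty foo.json 2>/dev/null || exit 11

# 3. The JSON matches the (assumed) widget schema: name and mode keys.
jq -e 'has("name") and has("mode")' foo.json >/dev/null 2>&1 || exit 12

# 4. The parameter values match the task requirements.
jq -e '.name == "my-widget" and .mode == "enhanced"' foo.json >/dev/null 2>&1 || exit 13

exit 0
```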

Some verifications can involve a dozen checks. If you find there are too many verifications, consider breaking the task into two tasks with a clearer delineation between them. If you find the verification is too simple, then perhaps what you're asking the learner to do is not challenging enough.

Keep in mind that the results of the verifications correlate directly with the hints you provide. The better your verifications, the better your hints. If each verification failure returns a unique error code, you can map each error code to a hint.
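
As an illustration, a wrapper like the one below shows how the unique exit codes from the earlier sketch could translate into targeted hints (verify_foo.sh, the codes, and the messages are all hypothetical):

```bash
#!/bin/bash
# Illustrative only: mapping unique exit codes from the verification
# sketch above to learner-facing hints. verify_foo.sh is hypothetical.
./verify_foo.sh
case $? in
  0)  echo "Task complete." ;;
  10) echo "Hint: foo.json does not exist in the current directory yet." ;;
  11) echo "Hint: foo.json exists but is not valid JSON." ;;
  12) echo "Hint: the JSON is missing the name or mode key." ;;
  13) echo "Hint: check the values you set for name and mode." ;;
  *)  echo "Hint: an unexpected error occurred during verification." ;;
esac
```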

Designing for Continuous Verification

Your test command will be executed about once every second. For quick-to-execute tests, this works well! Something slower, like compiling an entire application, will introduce a delay in the UI.

For example, if the test command recompiles the learner’s program and that process takes ~30 seconds to execute, then there will be a delay of at least 30 seconds before the learner is told that they have completed the task. For this reason, it’s best to avoid verification functions that are blocking or lack timeout options.
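
One common mitigation is to keep the polled script cheap and to put a hard cap on anything that could block. A sketch, assuming the GNU coreutils timeout command is available (the paths, port, and commands are illustrative):

```bash
#!/bin/bash
# Keep the polled verification fast: check existing artifacts rather than
# rebuilding, and cap anything that could hang. All paths are illustrative.

# Probe a service with a short timeout instead of letting curl block.
timeout 2s curl -fsS http://localhost:8080/health >/dev/null 2>&1 || exit 30

# Check that the compiled binary exists and is newer than its source,
# rather than recompiling the program on every polling cycle.
[ ./bin/app -nt ./main.go ] || exit 31

exit 0
```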

We realize this is not ideal. For now, faster-to-execute tests are better. In the future, we may introduce a "click to verify" option, so that instead of running the test command repeatedly, we run it only when the learner indicates they think they are done with the task (just as with our "Verified Steps" feature).