seth
is a "find-and-fix" vulnerability scoring engine for creating cybersecurity training exercises,
with a focus on resistance to reverse engineering. It runs on Linux systems, and is designed for use in
competitions similar to CyberPatriot and eCitadel.
This engine is useful when competition organizers want an enforceable way to stop competitors from finding answers
through reverse engineering or dynamic analysis.
- On a separate machine from the competition VM, clone this repository and install the prerequisite `pycryptodome` Python package.
- Create your checks in the `config.yaml` file on this separate machine. The `config_example.yaml` file lists all supported checks, with comments.
- Run `make` to build the scoring engine. The compiled executable will be placed in the `engine` file. The configuration is baked into the scoring engine, so make sure that you recompile with `make` every time the configuration file is changed.
- Clone this repository onto the competition VM. Run `install.sh` to put the necessary files in place.
- Move the `engine` file onto the competition VM and place it at `/opt/scoring/engine`.
- Run `systemctl enable ScoringEngine` and `systemctl start ScoringEngine` to set the engine up.
- If desired, place shortcuts to the scoring report (found at `/opt/scoring/ScoringReport.html`, or at a configurable location) on the main user's desktop.
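Put together, a typical run-through of those steps might look roughly like the following sketch. The repository URL and binary transfer path are placeholders, copying `config_example.yaml` as a starting point is only a suggestion, and whether `sudo` is needed depends on your setup:

```sh
# --- On the build machine (NOT the competition VM) ---
git clone <this-repository> seth && cd seth    # placeholder URL
pip install pycryptodome                       # prerequisite Python package
cp config_example.yaml config.yaml             # start from the example, then edit in your checks
make                                           # bakes config.yaml into the compiled ./engine

# --- On the competition VM ---
git clone <this-repository> seth && cd seth    # placeholder URL
sudo ./install.sh                              # puts the supporting files in place
# transfer the compiled binary from the build machine (scp, USB, etc.), then:
sudo cp /path/to/engine /opt/scoring/engine
sudo systemctl enable ScoringEngine
sudo systemctl start ScoringEngine
```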
The idea behind this engine is that it holds no more information about the scored vulnerabilities than the competitor does. This means that reverse engineering the binary, or the configuration baked into it, cannot recover the checks that are scored; doing so is cryptographically infeasible. In addition, the engine is resistant to attacks that monitor system calls or file accesses, because it makes no distinction between files that are scored and files that are not.
This does come at a cost: many types of checks may not be supported. Most prominently, this engine does not support any check that involves a negation (think "user does not exist" or "file/program has been deleted"). If you want a more general-purpose engine, please use a fully-featured one such as aeacus. However, if you have found a cryptographically secure way to support such checks within this paradigm, feel free to submit an issue or PR and I might implement it!
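As a rough illustration of this paradigm (and only that; this is not seth's actual storage format or algorithm), a check can be reduced to a one-way digest of the desired state, so the engine can recognize the correct state without being able to say what it is. The paths and expected line below are hypothetical, and both "stages" are combined into one script for brevity:

```sh
#!/bin/sh
# Hypothetical sketch of the idea only -- not seth's actual format or algorithm.

# "Configuration time" (build machine): reduce a check such as
#   /etc/ssh/sshd_config contains the line "PermitRootLogin no"
# to a one-way digest. Only the digest would be baked into the engine.
stored=$(printf '%s' "/etc/ssh/sshd_config:PermitRootLogin no" | sha256sum | cut -d' ' -f1)

# "Scoring time" (competition VM): digest every (file, line) pair under a watched
# directory in exactly the same way, and score a point on a match. Every file is
# treated identically, so watching syscalls or file accesses reveals nothing.
find /etc/ssh -type f 2>/dev/null | while IFS= read -r f; do
  while IFS= read -r line; do
    digest=$(printf '%s' "$f:$line" | sha256sum | cut -d' ' -f1)
    [ "$digest" = "$stored" ] && echo "check satisfied: award points"
  done < "$f"
done

# A negative check ("user does not exist", "file was deleted") has no single
# satisfying state to digest, which is why such checks are unsupported.
```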
For performance reasons, scoring only checks files located in certain directories. While these cover a large portion of the filesystem, you may wish to include other parts of the filesystem in your scoring checks. To do so, add a line in the `main` function for that directory (or, for more security, a directory a few levels above it); see the code comments for details.
Also, in some cases the shared libraries on a competition VM are drastically different from those on the machine used to build the engine. Even then, I would recommend against building the engine statically, because static builds can pull in instruction set extensions that are not compatible with older computers. Instead, take the `config.h` file generated by the `config/config.py` parser and build the engine from a fresh clone of this repository on the competition VM. Never put the YAML configuration onto the competition VM, as disk recovery techniques may be used to recover the plaintext configuration. By copying over only `config.h`, you ensure that no more information than what is already baked into the engine ever touches the competition VM.
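That rebuild might look something like the sketch below. The repository URL, the location where `config.h` is generated, and the assumption that `make` will reuse an existing `config.h` instead of regenerating it are all placeholders or guesses; adjust to match the actual build:

```sh
# On the build machine, after make has generated config.h via config/config.py:
# copy ONLY the generated header to the competition VM -- never config.yaml.
scp config.h user@competition-vm:/tmp/config.h   # location of config.h is an assumption

# On the competition VM: build from a fresh clone against the VM's own libraries.
git clone <this-repository> seth && cd seth       # placeholder URL
cp /tmp/config.h .                                # put the header where the build expects it
make                                              # assumes make picks up the existing config.h
sudo cp engine /opt/scoring/engine
rm /tmp/config.h                                  # clean up the temporary copy
```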
I am no longer a CyberPatriot competitor, so this project may not be frequently updated. In addition to this, the code as it is may not be top quality, as it was initially designed to be a small team project.
This project is in no way affiliated with or endorsed by the Air Force Association, University of Texas San Antonio, or the CyberPatriot program.
Thanks to shiversoftdev for his Magistrate project, which was a huge inspiration behind this scoring engine.
Also, thanks to Astro for the project name idea, and for designing the scoring report CSS.