Thank you for your time and for picking our paper for artifact evaluation.
This documentation contains the steps necessary to reproduce the artifacts for our paper titled "Remote attestation of confidential VMs using ephemeral vTPMs".
We use a Dell PowerEdge R6525 machine on the CloudLab infrastructure to evaluate all the experiments.
The artifact contains the source code of the vTPM implementation running inside the Secure VM Service Module (SVSM). The entire software stack, consisting of the Linux kernel, QEMU, OVMF, and Keylime, is also available as submodules.
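If you check out any of the artifact repositories by hand rather than through the CloudLab profile described below, the pinned submodules can be fetched with the standard git commands; this is a generic sketch, not a step the evaluation itself requires:

```sh
# Generic sketch: fetch the pinned submodules (Linux kernel, QEMU, OVMF, Keylime)
# after cloning an artifact repository manually. The bootstrapping script used by
# the CloudLab profile normally takes care of this for you.
git submodule update --init --recursive
```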
- To evaluate the experiments, you need to configure the BIOS to enable the SEV-SNP related settings as described in the official repository.
- If you encounter any issues while preparing the host, please refer to the existing GitHub issues.
- Once you have successfully set up the node, skip to Manual setup to continue with the evaluation.
- Create an account on CloudLab and log in.
- The easiest way to set up our experiment is to use a "Repository based profile".
- Create an experiment profile by selecting `Experiments > Create Experiment Profile`.
- Select `Git Repo` and use this repository: https://github.com/svsm-vtpm/cloudlab-profiles. The profile comes pre-installed with the source code needed to evaluate this artifact.
- Populate the name field and click `Create`.
- If successful, instantiate the created profile by clicking the `Instantiate` button on the left pane.
- NOTE: You can select different branches of the git repository. Please select the `svsm-vtpm-ae` branch.
- For a more detailed explanation and the inner details, consult the CloudLab documentation on [repo based profiles](https://docs.cloudlab.us/creating-profiles.html#(part._repo-based-profiles)).
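For reference, the profile repository and branch mentioned above can also be inspected locally before instantiating the profile:

```sh
# Optional: look at the profile repository and its bootstrapping script locally.
# Repository URL and branch name are the ones given above.
git clone -b svsm-vtpm-ae https://github.com/svsm-vtpm/cloudlab-profiles.git
ls cloudlab-profiles
```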
- The profile git repository contains a bootstrapping script which automatically clones and builds the required repositories upon a successful boot of the node.
- Use the helper script `prepare.sh` to build all the necessary components. At a high level, the `prepare.sh` script does the following:
  - installs prerequisites
  - builds all the software: the host and guest Linux kernels, OVMF, QEMU, and `svsm.bin`
  - generates an ssh key
  - downloads an Ubuntu cloud image and prepares the user-data

  `./prepare.sh init`
  `./prepare.sh install`
- Now reboot the host machine to boot with the SEV-SNP enabled kernel. Make sure you pick the appropriate kernel from the GRUB menu or change the default in `/etc/default/grub`.
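After the reboot, it is worth sanity-checking that the SEV-SNP capable kernel is actually running. The exact messages and sysfs paths vary between kernel versions, so treat the following as a rough check rather than part of the official flow:

```sh
# Rough sanity check after rebooting into the SEV-SNP enabled host kernel.
uname -r                             # confirm the expected kernel was picked by GRUB
sudo dmesg | grep -iE 'sev|snp'      # look for SEV-SNP initialization messages
# Recent host kernels expose an SNP parameter on the kvm_amd module (path may vary):
cat /sys/module/kvm_amd/parameters/sev_snp 2>/dev/null
```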
- To make the artifact evaluation process more accessible, we have prepared a CloudLab node with the appropriate BIOS options and a host kernel that enables SEV-SNP.
- To play around with the guest image and interact with the SVSM-vTPM, launch the guest. Once everything is built (it already is on the preconfigured node), you can launch the guest directly using the script; `prepare.sh` installs the guest kernel automatically.

  `./prepare.sh prep_guest`
- Log in to the guest via ssh. `prepare.sh` should append a guest configuration to the ssh config file `~/.ssh/config`.

  `ssh guest`
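If `ssh guest` does not resolve, confirm that the configuration entry was actually appended. The field values in the comment below are assumptions about what a typical entry looks like, not taken from the script:

```sh
# Confirm prepare.sh appended a "guest" entry to the ssh config.
# A typical entry (values are assumptions; the script may differ) looks like:
#   Host guest
#       HostName localhost
#       Port <forwarded port>
#       User <guest user>
#       IdentityFile <key generated by prepare.sh>
grep -A 5 '^Host guest' ~/.ssh/config
```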
- Inside the guest, you can now access the vTPM:

  `ls /dev/tpm*`
  `sudo tpm2_pcrread`
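Beyond reading the PCRs, standard tpm2-tools commands should work against the SVSM-vTPM in the same way; a few hedged examples, assuming tpm2-tools is installed in the guest image as the `tpm2_pcrread` step above implies:

```sh
# Exercise the vTPM from inside the guest with ordinary tpm2-tools commands.
sudo tpm2_getrandom --hex 16                                   # fetch 16 random bytes
sudo tpm2_pcrextend 16:sha256=$(echo -n test | sha256sum | cut -d' ' -f1)
sudo tpm2_pcrread sha256:16                                    # PCR 16 should now differ
```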
- The script automatically invokes the appropriate guest image (QEMU-based vTPM or SVSM-based vTPM), runs the TPM benchmark, and collects the log inside the `linux` directory, which is shared with the guest.
- Run the TPM benchmark in the SVSM-vTPM configuration. This communicates with the vTPM running inside the SVSM.

  `./prepare.sh run_svtpm`
- Run the TPM benchmark with a regular QEMU vTPM (based on `swtpm`):

  `./prepare.sh run_qvtpm`
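Both runs collect their logs in the shared `linux` directory mentioned above. The exact file names are produced by the benchmark, so a simple listing is the safest way to confirm that both configurations wrote output:

```sh
# Confirm the benchmark logs were collected (file names depend on the benchmark).
ls -lt linux/ | head
```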
- To generate the TPM overhead plot, invoke the `fig4.sh` script, which extracts the data from the logs generated above and plots the figure:

  `cd tpm_overhead`
  `./fig4.sh`
- The figure should be written to `tpm_overhead.pdf`.
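If you want to view the figure locally, it can be copied off the CloudLab node with scp; the user, host name, and path below are placeholders for you to fill in:

```sh
# Copy the generated figure to your local machine (all placeholders are hypothetical).
scp <cloudlab-user>@<node-hostname>:<path-to-artifact>/tpm_overhead/tpm_overhead.pdf .
```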