Apply TBI to virtual addresses on aarch64. #310

Merged: 1 commit merged into osandov:main on Jan 16, 2024

Conversation

pcc (Contributor) commented on Jun 27, 2023

In tag-based KASAN modes, TCR_EL1.TBI1 is enabled, which causes the top 8 bits of virtual addresses to be ignored for address translation purposes. Do the same in the page table iterator. There is no harm in doing so unconditionally, as the architecture does not support >56 bit VA sizes.
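
For illustration, a minimal sketch of the masking this describes (the helper name is hypothetical, not drgn's actual symbol). With TCR_EL1.TBI1 set, the MMU ignores bits 63:56 during translation, so a page table walker can clear them before walking:

```c
#include <stdint.h>

/* Hypothetical helper: drop the top byte that TBI tells the MMU to
 * ignore. Clearing it unconditionally is safe for translation, since
 * aarch64 never uses more than 56 bits of virtual address. */
static inline uint64_t aarch64_clear_top_byte(uint64_t va)
{
	return va & ~(UINT64_C(0xff) << 56);
}
```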

osandov (Owner) commented on Jun 30, 2023

> There is no harm in doing so unconditionally

On non-KASAN configurations, this could lead us to read from bogus addresses with garbage in the most significant bits without realizing this, right? If there's an easy way to avoid this, it'd be nice to, but otherwise it's not a huge deal.

I also wonder whether it makes sense to apply this mask earlier than in the page table iterator. /proc/kcore at least exports the kernel memory ranges so that the page table iterator isn't normally needed, so applying the mask earlier in the memory reader code could avoid calling the page table iterator at all. It is still used for vmalloc and module memory for vmcores. Whether this would help you depends on how your remote target exports memory ranges, though.

pcc (Contributor, Author) commented on Jun 30, 2023

> On non-KASAN configurations, this could lead us to read from bogus addresses with garbage in the most significant bits without realizing this, right?

In most cases with garbage addresses it seems unlikely that bits 56..63 will contain garbage without bits VA_BITS..55 also containing garbage, so I don't think we lose much by masking out these bits.

> If there's an easy way to avoid this, it'd be nice to, but otherwise it's not a huge deal.

Yes, the KASAN state isn't exposed very easily. We could do something like checking for the presence of KASAN-specific symbols in the kernel symbol table, but with HW tags KASAN there's also a runtime component to whether KASAN is enabled (it can be disabled with kasan=off or if the hardware doesn't support MTE). It's also worth noting that TBI0 is enabled in all userspace processes, so we'd need to mask the bits when reading a userspace page table even if KASAN is disabled. So I reckon it's probably not worth it.

> I also wonder whether it makes sense to apply this mask earlier than in the page table iterator. /proc/kcore at least exports the kernel memory ranges so that the page table iterator isn't normally needed, so applying the mask earlier in the memory reader code could avoid calling the page table iterator at all.

I think it makes sense. We could have something like an arch-specific hook for fixing up tagged addresses (similar to untagged_addr in the kernel).
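
As a rough sketch of what such a hook could look like (the name here is hypothetical, not necessarily what the patch uses): the kernel's untagged_addr() sign-extends from bit 55, which restores the canonical all-ones top byte for TTBR1 (kernel) addresses and clears it for TTBR0 (user) addresses:

```c
#include <stdint.h>

/* Hypothetical arch hook in the spirit of the kernel's untagged_addr().
 * Sign-extending from bit 55 copies it into bits 63:56, so a tagged
 * kernel address gets its 0xff top byte back and a tagged user address
 * gets zeros. */
static inline uint64_t arm64_untagged_addr(uint64_t addr)
{
	return (uint64_t)((int64_t)(addr << 8) >> 8);
}
```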

> Whether this would help you depends on how your remote target exports memory ranges, though.

For my remote target support, I just have one direct mapping in virtual space of size 4096 for the swapper page table and everything else uses the page table reader. I hadn't looked at the /proc/kcore support very closely so I hadn't realized that most things weren't going through the page table reader in that case.

pcc (Contributor, Author) commented on Jul 1, 2023

> I think it makes sense. We could have something like an arch-specific hook for fixing up tagged addresses (similar to untagged_addr in the kernel).

Done in the new patch.

The commit message for the new patch:

In tag-based KASAN modes, TCR_EL1.TBI1 is enabled, which causes the
top 8 bits of virtual addresses to be ignored for address translation
purposes. Do the same when reading from memory. There is no harm in doing
so unconditionally, as the architecture does not support >56 bit VA sizes.

Signed-off-by: Peter Collingbourne <[email protected]>

pcc (Contributor, Author) commented on Dec 5, 2023

Rebased; ping. I needed to patch this in so that I could test #376 with tag-based KASAN.

osandov merged commit d22f434 into osandov:main on Jan 16, 2024. 38 checks passed.

osandov (Owner) commented on Jan 16, 2024

Thanks!
