
KEM libraries (non-ROM and ROM) #672

Open · wants to merge 4 commits into base: main

Conversation

MM45 (Contributor) commented Dec 16, 2024

Initial attempt at libraries for KEM (both in standard model and ROM).
There are several points of interest/discussion listed below which I think should be agreed upon before merging.
The resulting conclusions for these discussion points will directly translate to several other libraries I have coming up, so settling them now will make those merges smoother.

Points of Interest/Discussion (standard model):

  • Currently, the module type for (CCA) oracles is a functor type, taking a scheme module as a parameter. This is not necessarily ideal: one might want an oracle that does not use the scheme directly (but, e.g., only parts of its code), and with the current functor type such an oracle would still have to take a scheme it never uses, which seems odd. However, we still want to provide sensible defaults for the oracles in the library (and these defaults will probably be fine a lot of the time), and those defaults need a scheme module parameter, since the library of course has no concrete schemes to refer to. Alternatively, we could define a non-functor oracle type and still define the concrete defaults as functors taking a scheme parameter. However, we also want to prove generic relations (for a generic scheme and generic oracles; see theory Relations). If the oracle type is a non-functor type, these generic lemmas would require memory separation between the (generic) scheme and the (generic) oracles. That prevents us from instantiating any of these lemmas with the provided oracle defaults (for a generic scheme), since the defaults take the scheme as a parameter, so their memory is not separated from the scheme's memory by definition. In summary, I am a bit torn between two approaches: (1) a functor oracle type, which forces all oracles to take a scheme (which you might not always want or need) but allows proving completely generic lemmas and instantiating them with the defaults; and (2) a non-functor oracle type, which allows using oracles without a scheme parameter but does not allow instantiating completely generic lemmas with the default oracles.
  • For the derandomized scheme (see SchemeDerand), I went with the most basic and direct approach, which seems the most intuitive (which is nice), but doesn't include much hedging against misuse (which is not nice). Thoughts and feelings? (One thing to note: I currently use an operator sk2pk to denote the mapping from secret keys to public keys that key generation is supposed to conform to. Since this operator is fixed when instantiating the theory, everything is only usable for schemes that adhere to that specific sk2pk instantiation. One might therefore argue that there is no flexibility gained from having encaps in the module type as opposed to having an encaps operator. However, I would say that a single sk2pk is somewhat likely to cover multiple schemes (think of schemes that simply include the whole public key in the secret key), while encaps is almost always unique to a single scheme.)
  • I model the fact that the shared-key space can depend on the public key with a mapping that takes a public key and returns a distribution over shared keys. As far as I am aware, EC currently does not support dependent types (is that the right term here?), i.e., types that depend on the value of a term of some other type. Am I correct in thinking this and, if so, is this distribution map a decent alternative?
  • I put a Relations theory below the definitions, meant to contain relations involving only KEMs (i.e., no other cryptographic primitives). Currently, it mainly contains relatively simple hierarchical relations. Do we want this in this file? If so, I currently crudely import all the necessary abstract theories (from the definitions) in a row at the top of this theory. Is that satisfactory, or do we want more code encapsulation here?
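On the distribution-map point above, the encoding could look roughly like this (a sketch with hypothetical type and operator names, not taken from the actual diff):

```
require import AllCore Distr.

(* hypothetical abstract types for the KEM *)
type pk_t.
type sk_t.
type key_t.

(* shared-key "space" as a pk-dependent distribution: rather than a
   type that depends on the public key, each pk determines a
   distribution over one fixed shared-key type *)
op dkey : pk_t -> key_t distr.

(* a per-public-key losslessness axiom one would likely want *)
axiom dkey_ll (pk : pk_t) : is_lossless (dkey pk).
```

Games and correctness statements would then sample `k <$ dkey pk` wherever a "key drawn from the pk-dependent key space" is needed.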

Points of Interest/Discussion (ROM):

  • Currently, I sort of enforce a dependency on PROM by having an incomplete instantiation of that theory (instantiating only the input and output types) and using the resulting random-oracle types in the KEM-ROM definitions. I did this because I feel these kinds of Primitive-ROM libraries should always stay consistent with our main/default ROM theory, and we should get an automatic check when changes in the default ROM theory would cause inconsistency with a Primitive-ROM theory. Thoughts and feelings?
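For reference, the partial instantiation described here might look something like the following (theory and parameter names as I recall them from the standard library's PROM.ec; the exact names, and which parameters remain abstract, may differ):

```
require PROM.

(* incomplete clone: fix only the query input/output types, leaving
   the sampling distribution (and everything else) abstract *)
clone import PROM.FullRO as KEM_RO with
  type in_t  <- rom_in_t,   (* hypothetical RO input type  *)
  type out_t <- key_t.      (* hypothetical RO output type *)

(* the KEM-ROM definitions can then be stated against KEM_RO.RO, so any
   change to the default PROM theory is checked here automatically *)
```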

Miscellaneous Points of Interest:

  • If we choose to model dependent types using a distribution map (see above) and decide to keep the dependency on PROM (see above), would it be possible to adapt/extend the PROM theory so that the distribution considered in each query can depend on some public information (provided during initialization)?

@MM45 MM45 assigned MM45 and fdupress and unassigned MM45 and fdupress Dec 16, 2024
@MM45 MM45 requested a review from fdupress December 16, 2024 11:28
@MM45 MM45 self-assigned this Dec 16, 2024
fdupress (Member) commented

This is too large for me to do a proper review before the holidays. (In fact, it is so large that the non-ROM file's diff does not get displayed by default.)

I've left a couple of single comments and will keep throwing some at it, but feel free to make updates to the branch before we break off—and I'll look at them after January 2.

MM45 (Contributor, Author) commented Dec 17, 2024

Of course, that is all good. I wasn't expecting you to review this in its entirety anytime soon, and I think it is fine if this takes quite a while and some discussion before being ready for merging. I was mostly interested in discussing the points of interest before anything else. However, do whatever you can or feel is best.

fdupress (Member) commented

So I got back to this a lot later than I expected, but here are some thoughts.

On functors for oracle types: being interface-oriented helps in coming up with generic definitions (to some extent). I would define the oracle type without dependency on the scheme, then define the default oracle wrappers as functors. (And you can also define a type for oracle wrappers at almost no cost once you have those.)
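If I'm reading the suggestion right, it would look roughly like this (a sketch with hypothetical names and signatures, not the library's actual interfaces):

```
(* plain oracle interface: no scheme parameter *)
module type CCA_Oracle = {
  proc decaps(c : ctxt_t) : key_t option
}.

(* default oracle, defined as a functor over the scheme *)
module O_CCA_Default (S : Scheme) : CCA_Oracle = {
  var sk : sk_t

  proc decaps(c : ctxt_t) : key_t option = {
    var k : key_t option;

    k <@ S.decaps(sk, c);
    return k;
  }
}.

(* "type of oracle wrappers": functors from schemes to oracles,
   which defaults like the one above inhabit essentially for free *)
module type CCA_Oracle_Wrapper (S : Scheme) = {
  proc decaps(c : ctxt_t) : key_t option
}.
```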

On distribution mappings: I think this is something we do all the time. For example, the PROM theory allows the distribution we sample the "random oracle" output from to depend on the input to the query. (This lets us use the same theory to also capture "probabilistic computation oracles.") If we go deep into complexity theory, there is a potential issue with manipulating arbitrary mappings like this in definitions that are trusted components of a security claim. This strengthens the case for having a layered library: a bunch of weird, overcomplicated theories that capture super-generic intuition and abstract away some of the gory details of the proof, but that make no computational sense (for the initiated: Igor would be one of those), and then a collection of well-behaved top-level definitions that are clean, make concrete sense, and can be formally related to their abstract brethren. How to do that nicely is an open question.

As a concrete example of this that also addresses your request to extend the PROM theory with public parameters: what you're asking for can be done as a thin layer around the PROM theory. (Outer init takes and stores the parameters pm, then calls inner init; outer get takes in input X, then calls inner get with pm and X.) Having a clean top-level definition that is not defined by wrapping would be good. (But it adds maintenance burden.) One big question to answer here is whether you wish to allow reinitialisation, and whether you wish to allow reinitialisation to change the public parameters.
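Concretely, the thin layer described here might be sketched as follows (hypothetical names; the inner oracle is assumed to be an instantiation whose input type is param_t * in_t, and RO stands for the random-oracle module type of the PROM theory):

```
module RO_Param (G : RO) = {
  var pm : param_t

  (* outer init stores the public parameters, then calls inner init *)
  proc init(p : param_t) : unit = {
    pm <- p;
    G.init();
  }

  (* outer get pairs the stored parameters with the query input *)
  proc get(x : in_t) : out_t = {
    var y : out_t;

    y <@ G.get((pm, x));
    return y;
  }
}.
```

Under this wrapping, reinitialisation with different parameters amounts to calling init again, which is exactly the design question raised above.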

More to come, but I'm typing this on my phone and my hands are cold.

MM45 (Contributor, Author) commented Dec 18, 2024

> On functors for oracle types: being interface-oriented helps in coming up with generic definitions (to some extent). I would define the oracle type without dependency on the scheme, then define the default oracle wrappers as functors. (And you can also define a type for oracle wrappers at almost no cost once you have those.)

I know there is still more of your reply to come, so I'll hold off on commenting on most of it, but I wanted to respond to the oracle-type part already. Maybe it is because I am not fully following the "wrapper type" part, but having this "non-functor type but functor default" means that any generic lemma of the form lemma foo (S <: Scheme) (O{-S} <: Oracle) ... (see theory Relations at the bottom of the non-ROM KEM theory for a plethora of such lemmas) will not apply to the default oracles, at least for generic schemes. That is, such a lemma cannot be instantiated with O_Default(S) for generic S, since O_Default(S) and S cannot be shown to have separated memories (this would only work if S is concrete and does not read or write its own memory in the procedures called by O_Default). My question thus is: do we simply accept that we cannot prove such generic lemmas and have them instantiated with the provided default oracles, or do you have some workaround in mind (perhaps something with this "wrapper type" that I am not fully getting)?
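Schematically, the failure mode is this (a sketch, not checked code):

```
(* generic relation: the oracle must be memory-separated from the scheme *)
(* lemma foo (S <: Scheme) (O{-S} <: Oracle) : ...                       *)

(* attempted instantiation with the default oracle:                      *)
(*   foo S (O_Default(S))                                                *)
(* the side condition requires the globals of O_Default(S) and of S to   *)
(* be disjoint, which cannot hold by construction: O_Default(S) calls S, *)
(* so it touches S's memory whenever abstract S uses its own globals.    *)
```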

fdupress (Member) commented

> > On functors for oracle types: being interface-oriented helps in coming up with generic definitions (to some extent). I would define the oracle type without dependency on the scheme, then define the default oracle wrappers as functors. (And you can also define a type for oracle wrappers at almost no cost once you have those.)
>
> I know there is still more of your reply to come, so I'll wait with commenting on most of it, but I wanted to already respond to the oracle type part. So, maybe it is because I am not fully following on the "wrapper type" part, but having this "non-functor type but functor default" means that any generic lemma of the form lemma foo (S <: Scheme) (O{-S} <: Oracle) ... will not apply to the default oracles, at least for generic schemes. [...] My question thus would be: Do we simply accept the fact that we cannot prove such generic lemmas and have them be instantiated by the provided default oracles in this way, or do you have some workaround in mind (perhaps something with this "wrapper type" that I am not fully getting)?

If you have lemmas that need to talk about both the scheme and the way in which the oracles wrap it, then you lose (some) genericity anyway, and those lemmas only work when you know the oracles are defined as wrappers around the scheme. So you fall back to the less generic mode of definition just for those lemmas. (Importantly, though, the experiments themselves still remain expressed over oracles.) I don't think any of the relations you have expressed right now require this, though:

  • Eqv_OWCCA1_OWCCA2 is just interpreting a CCA1 adversary as a CCA2 adversary that ignores one of its oracles;
  • all the others do a bit more work and involve a reduction which is not the identity, but it is the nature of all those reductions that they operate over the oracles themselves without needing direct access to the scheme.

My feeling is that you could express all of those lemmas without ever quantifying over the scheme S, quantifying only over the assumption oracles and then giving a concrete reduction. This is a hard statement to test without a whole lot of legwork, and even then we would only be able to test it locally, on the reductions you've already expressed. We might gain in generality; we might lose in usability/understandability. In the end, it comes back to first answering: who are we writing this library for?
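For instance, a purely oracle-quantified version of one such relation could take roughly this shape (schematic, with hypothetical experiment and reduction names):

```
(* concrete reduction: interpret a CCA1 adversary as a CCA2 adversary
   that simply never uses the extra (post-challenge) oracle *)
module R_CCA1_CCA2 (A : Adv_CCA1) : Adv_CCA2 = {
  (* ... forward A's queries, ignoring the second oracle ... *)
}.

(* quantify over oracles and adversary only; no scheme appears *)
lemma Eqv_OWCCA1_OWCCA2 (O <: CCA_Oracle) (A <: Adv_CCA1 {-O}) &m :
  Pr[Exp_OWCCA1(O, A).main() @ &m : res]
  = Pr[Exp_OWCCA2(O, R_CCA1_CCA2(A)).main() @ &m : res].
```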
