Decide how project results should be consumed & communicate this #15
I did not say that it would not aim to provide a common API. As I mentioned in yesterday's OQS call, I think of PQ Code Package as being somewhat analogous to PQClean. There is a common structure and API to each implementation, packaged (as source code) in a way that allows it to be consumed by other projects.
OK, then I misunderstood this:
Thanks for the clarification.
I'm still not sure I understand: Are you saying that PQCP will have a common API for consumers to rely on (as PQClean does)? Or is the statement simply that each separate implementation will have its own API? Or is the "probably" in your original statement the operative term, i.e., is it simply unclear what PQCP will provide (except source code)?
Given that we haven't started pulling together any code yet, I don't know exactly how it will happen. I think it is recognized that there is value in having similar structure and APIs where possible, because that is exactly what enables easier integration for downstream consumers. How strict that requirement is will be a question for the PQCP contributors and TSC as they start seeing each implementation and its goals, but I hope that it will move towards being as similar as possible. One inherent restriction to that commonality is that there will be implementations in different languages -- C, Rust, maybe Go and more -- which have not only their own syntax but also their own conventions. But if there are multiple implementations within one language (e.g., C / assembly), then it seems quite prudent to aim for those to have the same API, like PQClean does.
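To make the "same API, like PQClean does" point concrete, here is a minimal sketch of what a PQClean-style per-scheme C header could look like. The naming pattern (scheme and implementation variant encoded in a prefix) mirrors PQClean's real API (e.g. `PQCLEAN_KYBER768_CLEAN_crypto_kem_keypair`); the `PQCP_MLKEM768_C_*` names and the header itself are purely hypothetical placeholders, not an agreed PQCP interface.

```c
/*
 * Hypothetical sketch of a PQClean-style common KEM API.
 * Function and macro names are placeholders, not an agreed PQCP interface;
 * the naming pattern mirrors what PQClean uses today
 * (e.g. PQCLEAN_KYBER768_CLEAN_crypto_kem_keypair).
 */
#include <stdint.h>

#define PQCP_MLKEM768_C_CRYPTO_PUBLICKEYBYTES  1184
#define PQCP_MLKEM768_C_CRYPTO_SECRETKEYBYTES  2400
#define PQCP_MLKEM768_C_CRYPTO_CIPHERTEXTBYTES 1088
#define PQCP_MLKEM768_C_CRYPTO_BYTES           32   /* shared secret length */

/* Generate a keypair; returns 0 on success. */
int PQCP_MLKEM768_C_crypto_kem_keypair(uint8_t *pk, uint8_t *sk);

/* Encapsulate: produce ciphertext ct and shared secret ss under pk. */
int PQCP_MLKEM768_C_crypto_kem_enc(uint8_t *ct, uint8_t *ss, const uint8_t *pk);

/* Decapsulate: recover shared secret ss from ct using sk. */
int PQCP_MLKEM768_C_crypto_kem_dec(uint8_t *ss, const uint8_t *ct, const uint8_t *sk);
```

With a convention like this, a downstream consumer only needs the prefix and the size macros to wire up any implementation variant (generic C, AVX2, assembly) behind the same three calls.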
At a very high level, the goal is: if someone needs a source code implementation of ML-KEM, PQCP is a good / the best place to get it. They will be assured that the implementation of ML-KEM in PQCP is high quality, developed in part by members of the Kyber team who know the algorithm intimately, has high levels of assurance, and will be updated if/when bugs are found. They won't need to hunt around on 5 different GitHub projects with 5 different maintainers, with uncertainty about whether the code will be maintained, who reviewed it, whether the maintainers understand the algorithm, etc.
ACK. So I understand it's open and undecided; feel free to keep this issue around to finalize and communicate a decision. Personally, I'd urge the project to aim for at least a common C API, so as to allow re-use of PQCP within OQS, i.e., to mimic what PQClean did for OQS. What I also take from this discussion is that OQS cannot rely on PQCP for anything until further notice, precisely because it is still open what exactly PQCP will provide, let alone by when.
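For context on "what PQClean did for OQS": liboqs wraps the imported per-scheme functions behind its own generic `OQS_KEM` API, so downstream consumers never see the PQClean-style names. Below is a minimal sketch using the existing liboqs API; the algorithm identifier string differs between liboqs releases (e.g. "Kyber768" vs. "ML-KEM-768"), so check your version's headers.

```c
/*
 * Minimal sketch of how a downstream consumer uses liboqs today: the
 * imported (e.g. PQClean-derived) code sits behind OQS's generic KEM API,
 * so the caller never touches per-implementation function names.
 * Error handling is trimmed for brevity.
 */
#include <stdlib.h>
#include <oqs/oqs.h>

int demo_kem_roundtrip(void) {
    OQS_KEM *kem = OQS_KEM_new("Kyber768");  /* name may be "ML-KEM-768" in newer releases */
    if (kem == NULL) return -1;

    uint8_t *pk   = malloc(kem->length_public_key);
    uint8_t *sk   = malloc(kem->length_secret_key);
    uint8_t *ct   = malloc(kem->length_ciphertext);
    uint8_t *ss_e = malloc(kem->length_shared_secret);
    uint8_t *ss_d = malloc(kem->length_shared_secret);

    int ok = (OQS_KEM_keypair(kem, pk, sk) == OQS_SUCCESS) &&
             (OQS_KEM_encaps(kem, ct, ss_e, pk) == OQS_SUCCESS) &&
             (OQS_KEM_decaps(kem, ss_d, ct, sk) == OQS_SUCCESS);

    free(pk); free(sk); free(ct); free(ss_e); free(ss_d);
    OQS_KEM_free(kem);
    return ok ? 0 : -1;
}
```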
I agree that some kind of standard to make the code easier to consume would be helpful.
Yes!
@bhess are copy_from_upstream.py and copy_from_xkcp the import mechanisms currently used to get some implementations into liboqs?
Yes - copy_from_upstream is for importing the KEM/signature algorithms. copy_from_xkcp is just for importing the XKCP (Keccak/SHA-3) common code.
Presuming we're working to the premise that the reference ML-KEM implementation for liboqs will ultimately become the generic implementation in this repo, one of the first validations for pq-code-package would be to import that generic implementation into liboqs through the existing copy_from_upstream mechanism.
Does this make sense? I'm thinking it starts to pragmatically address the relationship between the projects, and starts to define what the deliverable of pqcp is and a pattern by which it can be consumed. In itself it doesn't add a lot of value to liboqs (as a first pass it's very mechanical), since you already have this process, but hopefully it will pay off in future as we get more algorithms, and we need to start somewhere. I've taken a look at the code, and I know both of you know it in detail and have a lot of experience, so if this approach makes sense I'd be happy to try to help move it forward.
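To illustrate what "a pattern by which it can be consumed" could look like in practice, here is a hypothetical fragment of the kind of entry the proposed validation might add to liboqs's copy_from_upstream configuration. The field names, repository URL, and branch are assumptions for illustration only; the actual schema of scripts/copy_from_upstream/copy_from_upstream.yml in liboqs should be checked before relying on any of this.

```yaml
# Hypothetical sketch only: verify the real copy_from_upstream.yml schema in
# liboqs before using any of these field names. The pq-code-package repository
# URL, branch, and commit below are placeholders, not actual locations.
upstreams:
  - name: pqcp
    git_url: https://github.com/pq-code-package/mlkem-c-generic.git   # placeholder URL
    git_branch: main
    git_commit: <pinned-commit>

kems:
  - name: ml_kem
    upstream_location: pqcp
    schemes:
      - scheme: "768"
        pretty_name_full: ML-KEM-768
```

The point is simply that, once a pqcp upstream is pinned to a commit like any other upstream, the existing import flow could pull the generic ML-KEM code without inventing a new mechanism.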
What about the baseline premise of keeping the relationship as loose as possible, very much along the lines of PQClean and OQS? That way both projects can proceed pretty much independently and friction-free. Also, I'm personally not yet convinced of the value of importing something designed to be production quality (PQCP code) into something that still needs to be retrofitted for this (OQS)... Tagging @beldmit for thoughts as a representative of the only entity that has so far voiced interest in "productive use" of OQS. What'd be your take: would you rather import production-quality code straight from PQCP or via another layer (OQS)? Or asked differently: what value does OQS bring you that justifies the effort outlined above?
According to some statements it seems this project does not aim to provide a library/common algorithm APIs. If this is the case, the results of this project need significant additional, external investment to be consumed: per-implementation integration efforts, separate certification paths for each usage of each separate PQCP code package, etc. This may be desired, but doesn't it raise the question: what does PQCP provide beyond what GitHub already provides, namely a place to find code and jointly work on it? A clearly communicated "external API" (what you can gain by using PQCP) would be helpful to inform anyone possibly interested in using the results of this project, including separate discussions regarding OQS' mode of operation going forward.