
2024 06 21 Hybrid Meeting Notes


2024-06-21 (Hybrid Meeting)

Organizer: Tim Cappalli

Location: Mountain View, CA

Agenda

  • Intros
  • Administrivia, agenda bashing, charter updates
  • Threat Modeling for Decentralized Identities (#115)
  • Testing Framework (Marcos)
  • Request types, credential types, registries, etc
    • Should we have a common and interoperable definition of request types and their privacy properties? (#117, #85)
    • Trustmarks, consortia, trust frameworks, etc (also discussed in #59)
  • Break for lunch
  • Higher Level Discussions
    • ZKP solutions for age verification
    • Forward looking: multi credential presentations
    • Forward looking: issuance
  • Spec Items
    • Iframe and cross-origin usage (#78)
    • Consume transient activation (#91)
    • Error codes
      • Both transport and web layer
    • clientData payload (#95)
    • Data type for the response (#119)

Attendees

  • Marcos Cáceres (Apple)
  • Tim Cappalli (Okta)
  • Heather Flanagan (Spherical Cow Consulting)
  • Ted Thibodeau (he/him) (OpenLink Software)
  • Alan Buxey (MyUNiDAYS Ltd.)
  • elf Pavlik (independent, Solid CG)
  • Torsten Lodderstedt (SPRIND)
  • Brian Campbell (Ping "not the W3C group" Identity)
  • David Waite (Ping "not the golf equipment" Identity)
  • Simone Onofri (W3C)
  • Chris Fredrickson (Google Chrome)
  • Wendy Seltzer (Tucows)
  • Nick Doty (CDT)
  • Benjamin VanderSloot (Mozilla)
  • John Bradley (Yubico)
  • Philippe Le Hegaret (W3C)
  • Judith Bush (OCLC)
  • Manu Sporny (Digital Bazaar)
  • Michael Knowles (Google Chrome)
  • Joseph Heenan (Authlete/OIDF)
  • Sam Goto (Google Chrome)
  • Lee Campbell (Google / Android)
  • Helen Qin (Google / Android)
  • Bill Chen (Google / Payments)
  • Rick Byers (Google Chrome)
  • Pamela Dingle (Microsoft)
  • Gareth Oliver (Google)
  • Tim Shamilov (Block)
  • Aaron Parecki (Okta)

Notes

Administrivia, agenda bashing, charter updates

Scribe: Heather Flanagan

  • (Tim) Any agenda bashing?

Nope

Charter Update

Scribe: Heather Flanagan

  • (Simone) Last PR https://github.com/w3c/charter-drafts/pull/540/files
  • (Simone) Last Issue: https://github.com/w3c/charter-drafts/issues/541
  • (Simone) Expect the recharter to go out for AC vote next week. If there are no formal objections, then this will complete in 28 days. If there are formal objections, it will take 3 months to resolve.
  • (Tim) The WICG will dissolve if the updated charter is approved as the work would move into the FedID WG.
  • (Tim) At TPAC, the WICG work item will meet regardless, separate from the FedID WG.
  • (Marcos) We won’t have time to have a proper meeting there about this. Should plan accordingly.
  • (Marcos) Where are we with the stages?
  • (Simone) We have a PR from Nick Doty that will be accepted before it goes out for AC vote.
  • (Wendy) If you have any questions or issues with the charter, please comment on the issue. We want to resolve any potential issues before the possibility of a formal objection.
  • (Marcos) It’s in a state that it could be sent to lawyers?
  • (Wendy) Yes.
  • (Tim) If there is anything unclear about our scope/guidelines, let us know.
  • (Tim) [see diagram] We are focused on only one small piece of the layer diagram (see green arrows). We are working on the blue line in FIDO as a new transport called Hybrid. The PR for Hybrid is about 85% done; it will go through a 90-day review, during which it will be visible in the public record.
  • (Nick) could we do a version of this diagram with the user in it? It’s unclear where the user is making a decision, what info is being requested. Will it happen at the wallet, somewhere else, both?
  • (Tim) Agreed. This diagram was only intended to show the plumbing.
  • (Tim S) For 4 and 7 in the diagram, NFC was mentioned. Will some of the native APIs be an alternative to OIDC?
  • (Tim) This is only online presentation so it’s not considering NFC or other transports.
  • (Lee) We only consider these lines as the Hybrid transport. This isn’t the same as the ISO data flow.
  • (Tim) This has been an interesting conversation about in-person vs remote. There is nothing preventing some of the in-person scenarios being used in a more remote presentation. It depends on the use case.
  • (Lee) You could see a world where the RP side is an in-person terminal at a store or airport, sort of an online presentation.
  • (Marcos) Is there a link to the FIDO proposal?
  • (Tim) Yes, but it’s not public yet.
  • (Rick) In response to Nick, how much of the user journey do we consider in scope for discussion here? It makes sense to have some requirements on the user journey. On Chrome/Android, we expect the verifier to ask the user, “do you want to use a credential?”, but we can’t enforce that. We’ll have the platform get the user’s permission to speak to a wallet. We do that again at remote presentation (5), but there it’s the wallet we expect to gather consent. That’s almost three different parties that want user input, which is heavyweight but probably what we want regardless.
  • (Lee) You could take this diagram and replace user interaction with user consent. It happens at almost every level.
  • (Rick) We don’t currently have user consent at the browser level because we trust it at the OS level.
  • (Tim) In WebAuthn, sometimes it’s the browser, sometimes it’s the browser and authenticator. It’s a similar pattern.
  • (Lee) it’s almost identical in the technical implementation and UI flows.
  • (David) The in-person use of hybrid requires both the holder and verifier to have Internet connectivity.
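
For orientation, the piece of the diagram this group owns is a single web-layer call. A minimal sketch of what that looks like, assuming the request shape in the current draft (a digital member carrying protocol/request pairs); the member names, the response data type (#119), and the clientData payload (#95) are all still open spec items, and openid4vpRequestJson is a placeholder:

    // A sketch, not the settled API: one web-layer call hands an opaque,
    // protocol-specific request (here OpenID4VP) to the platform, which
    // mediates wallet selection locally or over the Hybrid transport.
    const credential = await navigator.credentials.get({
      digital: {
        providers: [{ protocol: "openid4vp", request: openid4vpRequestJson }]
      }
    });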

Threat Modeling for Decentralized Identities (#115)

Scribe: Heather Flanagan

How to Threat Model (from yesterday) https://docs.google.com/presentation/d/1ZqN-qAjeCp13WmTyyfXQJTMS-9JVidy82UAVzyrzqDg/edit#slide=id.p

Issue: https://github.com/WICG/digital-credentials/issues/115

  • (Simone) [sharing the draft threat model (issue 115) for Digital Credentials]
  • (Marcos) This is great, thank you for putting it together. It is extensive and long, and we don’t want it to go to waste. Should we plan some kind of working session to dive into the details and see if there are actual issues that come out of this and what we should do with them?
  • (Tim) Could we meet before TPAC and at TPAC about this? I will set something up.
  • (Manu) +1. This is a good, comprehensive doc. Wondering how this guides the work this group is doing? It’s one thing to say “here are all the ways this can go wrong,” but if it doesn’t have an impact on the work we’re doing, that’s a concern. The doc serves two purposes: highlight no-go areas that the group has come to consensus on (e.g., global allow lists are not a good approach) and point organizations/governments to that. Is this document going to have teeth, and if so, how?
  • (Simone) Yes, I was discussing this with individuals at a recent meeting (Paolo, Davide). This will inform their work on the architecture reference framework. We also have to decide if this is a single doc or multiple docs that will also cover human rights.
  • (Lee) Agree with Manu — we have to make use of this. It has to be a framework we can use to reason through our solutions. We should be able to judge our API and our design decisions against different properties of the threat model. There are things we’ve implicitly said we’ll do, but we don’t have anything to point back to to say why. If we’re going to have this doc, it should be used to justify those decisions (e.g., response encryption). That’s what gives it teeth.
  • (Simone) I agree.
  • (Nick) I especially think the LINDDUN issue is really helpful. I don’t think this doc includes all the threats, because there are some beyond privacy and security. We’ve been talking in PING [the W3C Privacy Interest Group] about risks of exclusion: people not being allowed to enter websites because they don’t have the right credential(s), or they don’t have access to credentials. There are also issues of consolidation/centralization that lead to power imbalances. The models are useful there for generating the risks and letting us be systematic, but we want to combine this with the work being done in PING. Then we can get normative guidance on the mitigations.
  • (Simone) Agreed.
  • (John) Thanks for putting this together. Agree that Paolo is a good person to review this with. There are some things in the latest ARF that haven’t been taken into account. Also agree with Nick; I had a conversation with reps from the gov’t of Bhutan a few weeks ago, and we need to design something that works in places where not everyone has a cell phone (or their own cell phone). We’re in danger of designing this under the assumption that people will be using a credential in a native app on a phone they control. People using these things on shared library computers or a shared phone must also be considered.
  • (Simone) Delegation is a risk we should manage. There are additional use cases we need to consider.
  • (Torsten) Thanks for putting this together. I did work on the OAuth security threat model and tried to get that into something that could be executed by developers. The group should use this doc to inform design decisions and the API. A protocol developer needs to know what security assumptions were built into the protocol. We need recommendations that both protocol designers and application developers can take into account. This whole thing is super complex. Everything is being used in entire ecosystems, which will need their own risks and threats defined and recommendations provided. If we can turn this doc into an executable thing for each layer, that would be great.
  • (Simone) This should be the input to create a security, privacy, and human rights considerations section. This is just the start; there will need to be other things included. There will need to be a schema for the wallet.
  • (Lee) +1 to what Torsten said. There are different layers of the stack, and we need recommendations for each level. Issuers will have their own rules as set by laws, and that will be outside the scope of the API. We need to think about what we can/should do.
  • (Marcos) In the W3C, we have well-established ways of doing this. Security people do not like to have a section on its own; they want it embedded in the doc. We also have the mDL work that can serve as guidance for developers. We have templates that we can use that will help us reach out to the appropriate people. Example: see the HTML spec and its authoring guidelines.
  • (Tim) +1 to Torsten. What’s unclear to me is where pieces of this should happen. Do we need to get OWF more engaged for wallet development? They aren’t a standards org but they are providing implementation guidance.
  • (Torsten) I’m also involved in the OWF. The developers of the API and the people doing the threat analysis are the first to provide guidance on how the API should be used. We should also have a group in the OWF, but without knowledge of the security assumptions of the API, it’s impossible to provide guidance.
  • (Rick) Thinking about how to scale this given so many groups. This group should think about the modularity of the system. Maybe our API should include something about risk signals we consume and produce? The end-to-end system will only work well if each component contributes what it can to risk signals and mitigations. The Chromium implementation is working on a risk engine, which we were thinking of as an implementation detail, but maybe some aspect of that can go into the spec.
  • (Tim) This is the interesting thing to me. Do we want the security/privacy to be separate in the API? We should have the meta discussion in the context of the larger ecosystem. The wallets will most likely never interact with the web platform API. How do we expand this conversation to all the layers?
  • (Lee) We need a FIDO for this, something that covers the entire stack so we can give guidance from the whole-ecosystem point of view. But we don’t have a FIDO for this space. When we talk to governments, we need full-stack guidance.
  • (Tim) Maybe the value of OWF is that it is not a standards org and can just focus on guidance?
  • (Marcos) I just sent a PR where we can review things as they come in wrt privacy and security so they won’t be missed. Someone should step up to own review of these. https://github.com/WICG/digital-credentials/pull/128/files
  • (Nick) We’re setting ourselves a broad, challenging task. This group cannot solve all of these problems, but we don’t want to just pass them off to another layer. It’s useful to identify shared responsibilities and how we’re going to address the problems even if they can’t be solved in the API. I tried to frame this in the proposal for the charter by stating we should work together on a doc describing the threats and mitigations, so this group takes responsibility for making sure that doc is defined well enough to apply to our work. We should commit to doing enough that it addresses the implications for the API, or any other deliverable, before we get to the call for implementations.
  • (Rick) Don’t disagree about getting everyone in the same room to agree on the doc. Want to caution against thinking that is the solution. Torsten’s point that different constituents have different contexts is a good one. We need to think about where the extensibility points are. Maybe we can come up with a framework for the terminology and the dials, and different issuers can use different dials. The answer won’t be consensus; it will be something that supports customization.
  • (Tim) any suggestions on how to slice and dice this doc? It’s crossing so many groups.
  • (Rick) There is work already in PING. Unless we have a better suggestion, suggest we take this doc and shift it to PING’s credential repo so it’s all in one place.
  • (Simone) Something cross-organization that would require peer review is a good idea.
  • (Torsten) Like the comprehensive approach to security, but want to point out that we should also find a way to focus on the items that impact the DC API and that we can move forward in a reasonable time frame. If moving to PING means this takes 3 years, it’s too long. Considering an incremental approach may be more advisable. Let’s prioritize the work.
  • (Manu) On keeping everyone apprised: there is a useful tool in the W3C GitHub, the notify mailing list. As liaisons for the VC WG, we send a notification to our other groups saying “here’s a summary of all the PRs and issues from last week, if you want to take a look.” That allows us to say “we’ve been in touch with all these groups on a weekly basis.” This works across the organization. Let’s lean on that rather than depending on people.
  • (Tim) We should do that.
  • (Nick) PING or the Privacy WG is a good enough home, and we are willing to do some of the work. As Torsten said, there is a much broader set of topics, and we should consider how this can be modular. The WG would commit to finishing the part of the doc that’s relevant to the DC API. PING could make sure it’s all published according to W3C process.
  • (Lee) When we talk about the larger ecosystem problems, there are tons of other groups (e.g., ISO) doing this. OIDF does this, too. If we’re yet another group doing this, we should do our best to align and deconflict with those other groups. We don’t want to contradict the guidance that’s out there. A lot of the time, it’s the same people working across all these groups. Let’s not flood regulators with conflicting views.
  • (Nick) Sound advice, but not too concerned about regulators hearing conflicting advice.
  • (Tim) We should put together a list of who is working in what other groups to identify the gaps.

AI Generated Summary (Google Gemini)

The discussion centered around a draft threat model for Digital Credentials created by Simone. There was consensus that the document was valuable but needed further work to be actionable. Key points include:

  • Making the threat model actionable: The group discussed how to use the threat model to inform design decisions and API development. It should provide clear recommendations for different layers of the digital credential ecosystem (issuers, wallets, etc.).
  • Collaboration across groups: There was a focus on collaborating with relevant W3C groups (PING, OWF) and potentially moving the document to a central repository like PING's credential repo.
  • Prioritization and Modularity: The group acknowledged the broad scope of the threat model and discussed the need to prioritize the most critical issues and potentially break the document into modular parts.
  • Alignment with existing efforts: There was a concern about duplicating efforts from other groups like ISO and OIDF. The group should strive to align with existing threat models and guidance.
  • Next Steps: The group agreed to take several next steps including:
    • Scheduling working sessions to discuss the threat model in detail.
    • Assigning ownership for ongoing review of security and privacy considerations.
    • Identifying who is working on threat models in other groups.

Testing Framework

Scribe: Heather Flanagan

https://github.com/web-platform-tests/wpt/pull/46642

  • (Marcos) We’ve put together an initial testing framework for the DC API. There were things in the testing framework that aren’t in the spec (yet), like requiring user activation. We took the foundation that Google laid down and scripted what we are working towards agreement on. The ask here: we now have slots where we can test things across origins and test the request structure, and we can slot things into the testing framework. Would like to have someone write tests through WPT that we review together. People came up with a fake test request format to help get back the object we want to work with. Eventually we may end up with a more sophisticated testing framework using WebDriver. Let’s start thinking about how we’ll get to that.
  • (Tim) Say more about web platform tests.
  • (Marcos) Normally, when writing software internally, there are unit tests that run against it. Web platform tests aren’t much different. They are a testing framework that has been put together by multiple browser vendors, with shared APIs. What drives these tests is WebDriver. It’s not browser specific. If you’re not a browser vendor, you can use this to test error conditions. Web platform tests give us interoperability; the results of these tests become implementation reports for the community.
  • (Rick) In Chromium, we have a policy that anything exposed to the web must have web platform tests. As we’re developing an API, we’ll be creating tentative tests as we work on implementation.
  • (Marcos) when we have more browser vendors working on these in the WG, then maybe we can stop calling them tentative. The group should decide what “tentative” means for us.
  • (Sam) What do you want us to do? Write web platform tests in a separate directory while they aren’t in the spec?
  • (Marcos) Yes, initially, if it’s Chrome-specific, keep it in Chrome.
  • (Sam) With the user activation requirement, do you want us to turn that into a Chrome-specific request?
  • (Marcos) We just need to agree if we bring them back in.
  • (Sam) So you want the tests to be consistent with the spec?
  • (Marcos) Yes.
  • (Philippe) Web platform tests are not controlled by W3C or W3C working groups. Glad to hear they are being used. Also, when it comes down to tests, there are always some that will be manual. There is an API called WebDriver that is meant to expose some features of web browsers for the purpose of running the test suite against them. If we are missing ways to activate functions of web browsers, we could propose extensions to WebDriver to simulate conditions that would be impossible to simulate in other ways.
  • (Sam) That’s a question I have: setting up WebDriver infrastructure is non-trivial, but it is the right thing to do. Is there an interim where we use fakes, or do you want to jump straight into WebDriver tests?
  • (Philippe) My intuition is to write the tests manually first with the goal of making them automated in WebDriver later.
  • (Marcos) Worried about doing the manual testing; we did that with Web Payments and never ran the tests again. So would like to start with the fake thing first and then think about what we need from WebDriver. It is work, but it’s not that much: the user cancelled, bad data, etc.
  • (Rick) What we do in Chromium is exactly that. We found manual tests to be worse than useless because they offer false confidence. FedCM has gone well — that spec has WebDriver extension points.
  • (Sam) We did this recently for FedCM and it was really good. It helped us have regression tests and interop tests, but it came at a price in terms of effort to set up. We also did them late in development. So the question is: how far can the fakes take us? Will they work for the next few months as we get WebDriver tests set up?
  • (Ben) Having something that kind of works for web platform tests can be useful as a stopgap.
  • (Tim) It’s also valuable for developers. There is a virtual authenticator for Chrome. It’s a great way to do testing and it does use WebDriver.
  • (Rick) That's exactly what we need for DC API.
  • (Tim) If we did something that let this consume just the OpenID4VP parameters, that would be enough. Nena’s code for this is on GitHub, if anyone wants to start with it and adapt it.
  • (Sam) To summarize what I heard: web platform tests should be consistent with the specs as written, and anything not part of the spec should go into implementation-specific tests.
  • (Marcos) We want to keep tests in sync as much as possible.
  • (Rick) What if we create a Chromium directory to keep the history?
  • (Marcos) This is a WebKit infrastructure problem. When we import stuff, we have to manually turn things off. It’s helpful when it’s in a different directory.
  • (Ben) If you want to say the things that we have agreement on re: what belongs in the API, we can mark those as non-tentative and put everything else in a “tentative” folder.
  • (Marcos) If we have consensus about what’s in the text, happy to remove “tentative”.
  • (Tim) Let’s follow up on this on the next virtual call.
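
To make the WPT discussion concrete, here is a minimal sketch of the kind of tentative test discussed above, written against testharness.js. It assumes the transient-activation requirement (#91) lands in the spec; the file name and request shape are illustrative only:

    // dc-api-transient-activation.tentative.https.window.js (hypothetical)
    // Sketch: without transient activation, the DC API call should reject.
    promise_test(async (t) => {
      const request = {
        digital: { providers: [{ protocol: "openid4vp", request: "{}" }] }
      };
      // No user gesture has occurred in this test, so expect a rejection.
      await promise_rejects_dom(t, "NotAllowedError",
        navigator.credentials.get(request));
    }, "Digital credential requests require transient activation");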

AI Generated Summary (Google Gemini)

The discussion is about creating automated tests for the Digital Credentials API (DC API).

  • Web platform tests are preferred: These tests are part of a shared framework used by different browsers and are ideal for interoperability testing. WebDriver is a tool used to automate these tests.
  • Start with manual or fake tests: While WebDriver based tests are ideal long-term, creating a basic testing suite manually or using temporary fakes is a good first step.
  • Agreement on API functionality is needed: Tests should reflect what the DC API will ultimately become. Functionality not yet finalized can be put in separate folders.
    • Next steps: The group will discuss how to proceed with testing on the next call.

Request types, credential types, registries, etc

Scribe: Heather Flanagan

  • (Tim) There have been several discussions and issues around where a user agent might want to raise a warning to a user or ask for consent. Chrome has some ideas about risk scores. This doc doesn’t specify the requests and responses that come back. Do we need a concept of request type? Every time we talk about a global allow list, there’s a strong push against it, so what’s the solution: a registry, trust marks, trust frameworks?
  • (Rick) Every time we talk about trust properties, it’s confusing that we’re talking about such wildly different use cases. Would like to propose a way to call out lower-risk scenarios. Maybe we can do it with current APIs, or maybe we need to augment a registry. Does this group want to have a registry of lower-risk requests? And maybe something in the metadata to communicate back to the wallet with that info? We wouldn’t exclude handling things not on that list.
  • (Manu) +1 to Rick. I want to speak to technical capabilities. The VC specs have a globally unambiguous way to identify types. Same is true for mDL. So the capability is there to build this. The problem is what happens when we decide something is low risk and send it through the API; it feels like a potential abuse vector. Not sure how we address that part of the technical issue unless browsers are going to do deep introspection on the response format. Expect lots of people will want their credential in that category. Not sure that we have a mechanism to confirm that the data being transmitted is really low-risk data.
  • (Rick) It depends on the threat model. Our threat model has to include trust in the wallet, so if we have a signal from the wallet along the lines of “this is an unlinkable response”, I think we can trust that.
  • (Tim) Don’t you need that before it goes to the wallet?
  • (Rick) Ideally you’d have this at request time. But you could do it the other way. You could have something in the request that says “wallets can only respond if the response is unlinkable.”
  • (Manu) To a degree. The concern with the unlinkable bits is that there are regions that will make different decisions as to whether something like age verification should be unlinkable. There’s room for tuning this, so not saying no. We just need to discuss it in more depth.
  • (Rick) We promise our users transparency and control, so we’d want a consistent UI for unlinkable age verification and have that differ from the UI for linkable age verification.
  • (Ben) Thinking about the registry idea, the trust framework has some lower hanging fruit. Think about what the highest risk operation looks like and figure out the uses of that that don’t have ecosystem/human rights risk and treat everything else as potentially malicious (i.e., induced demand problem). That approach might be easier to do.
  • (Nick) Rather than a particular registry, what we want to define are the properties of a response registration (e.g., inherently linkable, government-issued, temporary/same-origin-only credential). Don’t agree there is a scalar risk model we can assign. We could define at a spec level those properties and what a wallet should do with them.
  • (Lee) Thinking about a request from a native API perspective. When you make the call, the first UI is the credential selection screen, and we want to have everything in there that will let a user make an informed choice. That’s where the friction/warnings should be included. We are sending all this to the wallet and telling it to inform the browser of what it would reply with. We could extend the API between the wallet and the platform to allow for this. Second thing: what are these bits? Is there anything more than linkable/unlinkable? We always refer to age, but is there more? We need to make this more concrete. There are lots of linkable presentations that are very low risk (e.g., presenting your driver’s license to the DMV that issued it). Need to think about this further.
  • (Joseph) Meta point first: sometimes people refer to type as credential type, other times as request type. Need to be more clear. It’s hard to define these things accurately. I don’t think we can divide all requests into linkable/unlinkable. There are degrees. There is a similar problem with age requests and whether it’s just an age proof or whether it would leak other information.
  • (Pavlik) When we talk about a registry of types, we talk about machine-readable, but we also need human-readable labels for the users. We need to talk about what’s being shown to the user. What is the source of the labels being presented to the user?
  • (Ben) We’re talking about unlinkability and conflating two things: cryptographic unlinkability, and what the data holds that makes the information inherently linkable.
  • (Lee) We should call out the two types of unlinkability: presentation of linkable attributes (e.g., SSN) versus linkable attributes inherent in a protocol.
  • (Manu) Are we framing this incorrectly? Is there any benefit to separating these things, or should we start this with “this is the info you’re sharing with the RP, some of which may be linkable”? The inherent risk of this sharing depends on context that we can’t include in the API. If we look at the goal of this, we’re trying to remove friction and not show a screen, so we may be better off not trying to make the optimization now at all and punting it to the future.
  • (Lee) Agree it’s hard. Would be good to think through what the UI should look like. In an mdoc presentation for “over 18,” you’re sharing three things: the device key, the issuer (who asserts you’re over 18), and the over-18 info. How do you show all this to the user? It would be good if we could do work as a group on how to convey the implicit properties of the protocols to the user.
  • Manu: "You are sharing your device's unique identifier" <-- :)
  • (John) Lee covered some of the points I wanted to make. There are several things we disclose implicitly, and we don’t have a good way around that. I don’t know that any of this is safe. The wallets providing info back to the platform is a good idea, but maybe we need to think about whether RPs should even be allowed to ask for an mdoc, because it’s inherently correlatable. We have to be willing to consider saying no to people with respect to privacy. The unintentional attributes that end up being shared are a significant concern.
  • (David) Another thing to keep in mind: this isn’t just technical controls, it’s also trust framework and regulatory controls. For example, an indicator that “intent to retain” was false when presenting a license, giving some additional trust.
  • (Ben) Agree with Lee. Getting to that third aspect and explaining it to the user might be an unreasonable lift. At a certain point, you are putting too much work on the user and they’ll just click through.
  • (Manu) We don’t warn people today when they enter linkable information on websites. It’s just quietly entered, even though the browser often knows what info is being submitted. Is there a class of info being submitted here that we know would be included in an autofill anyway, so we don’t have to specify further info to the user? Sharing a movie ticket number within a despotic regime is a dangerous thing to do. Sharing it in a home town in a country that respects your rights is OK. Yes, we should document this stuff, but I’m still not convinced that we are going to be able to craft the correct language for an individual so they make the right decision in a more dangerous scenario. Maybe the best we can do is say “this is the info you’re sharing with the site.”
  • (Simone) Agree with Nick, we don’t know whether a verifier is malicious. The baseline is to let the user know that some things can be linkable.
  • (Wendy) Thinking about what we can do for user privacy given the affordances of new tools. User research shows it’s complex to present meaningful information to users and have them make meaningful decisions. Sometimes we manage that by delegating to agents that make decisions on the users’ behalf.
  • (Tim) I don’t think there’s precedent for this in the W3C, but could we get companies around the table to collaborate on UX research in this space?
  • (Lee) We are doing UX research. Need to check internally on how much of that we can share.
  • (Tim) If we can start having those conversations now, we’ll get to a better conclusion on our spec.
  • (Lee) Doesn’t seem too controversial to share something more broadly (we did something similar for passkeys).
  • (Rick) I want to caution on the line between what makes sense to standardize and what’s a platform implementation detail. I don’t think we want consensus; we want browsers to compete on this point. One good analogy we can look at is Safe Browsing. We have to expect this system will need to be tuned over time, which is a good argument not to standardize all the details. We’re constantly adapting and fighting attackers, which will require us to change.
  • (Tim) It’s input to the discussion, not input to the standard.
  • (Lee) There is a bunch of info that is linkable, but we would autofill it without a second thought. Users understand the implications of doing that. What worries me are the unintended consequences. If it’s your address signed by the DMV plus some other stuff, it’s that other stuff that comes along that is most concerning. We have technology to be specific, but it’s not universally applicable across every presentation protocol. Let’s make sure we’re not breaking user expectations and sharing more info than when we just autofill.
  • (Rick) This whole debate has to be context specific. Is there anyone that can talk to the EUDI context rather than the CA DMV context?
  • (John) A one-size-fits-all approach is not going to work. From the German perspective: to be able to ask for anything from the German wallet, you’ll need to go to the German government to ask for permission and prove you have a legal reason to ask for that information. Every country will have a registry for the RPs that have that legal reason to request data from their wallet. This is almost the exact opposite of what seems to be happening in the US.
  • (Rick) Is there something we can do to help implementations understand the context (e.g., the differences between the EUDI and US wallets)?
  • (Lee) For Android, the wallet tells us what they did when they do the matching. We don’t know how to set these signals yet.
  • (John) One of the differences, and it’s hard in the US: RPs are going to be expected to have some kind of contract or legal framework about what information they can ask for and how they will retain it. Does the RP adhere to a legal framework that the issuer/wallet trusts and understands? Or is it a scenario of “we have no idea who is asking or why”? In the German case, if you don’t have a registered, legal need, you don’t even get a claim. It depends on the regulatory framework.
  • (Lee) We did think about, when you do the matching, what did you do to make that match. We did this so you could say “this RP is conformant to this level of compliance.” But it would be up to the wallet to give us that info; it would not happen in the platform.
  • (John) In the US there are third-party certifications that could inform the wallet. How does the wallet selector get the full context of the wallet? How does the metadata of the RP or the trustmark get into the selector? We need to provide the information on who trusts the verifier to the wallet selector.
  • (David) Some of this depends on the model. Unsure if the overall system model is consistent in this group. I see the wallet as a secondary user agent for sharing attributes. These APIs are enforcing that those are behaving correctly and that there isn’t a malicious wallet in the system. But a lot of the purpose is to get out of the way.
  • (Nick) Think that responsibility question is exactly right. Had been thinking this wasn’t just on the wallet. In the EUDI case, there will be a government authority that vets that the request is appropriate/legal, and I would hope that in the US case there will be private parties offering attestation or verification, but I’d think that’s something that needs to be presented to the user from the origin. Let’s not give all that responsibility to the wallet, but have a way to determine whether the verifier is trusted.
  • (Manu) With my CA DMV hat on, the goal is to not get in the way of an individual who needs to make a decision based on their current needs. Example: CA has wildfires, and people’s houses burn down and they lose all their identity documents. They run out of the house with a mobile phone but nothing else. First responders need to be able to identify people without asking for permission first. We’re just looking at signaling mechanisms, not dissimilar to how we do TLS certs. The browser knows the verifier is part of a trust network; they get a gold star. But we don’t want to get between an individual and the first responder and how they might need to interact.
  • (Lee) At the platform level, there’s a difference between outright blocking and UX friction. There is also advice to issuers about what scales and what doesn’t.
  • (John) One of the ideas behind federation metadata is that it can hold trust mark details so the verifier can be identified.
  • (Tim) So maybe we should brainstorm on how to get that data upstream.
  • (John) I am biased towards Connect Federation, but that’s leaping towards an answer, and we should think about it more generically. The wallet/browser needs to dereference the trust information about the verifier, and there will be many trust marks they might want to retrieve from a well-known location.
  • (Lee) It's worth thinking about what Nick said. Could come up with a standardized way to represent data retention formats and who has signed it to verify the statement. In the EU, that will be the governments, and it will sit in a well-known URL.
  • (Tim) There is precedent with FedCM — they make calls on behalf of the parties without disclosing the parties.
  • (Lee) We have to think about how it’s going to work, to prevent caching too much info.
  • (Wendy) There is a lot of interesting labeling and communication here, enabling users (or an entity on their behalf) to manage the data; things have changed since P3P. We need to be conscious that things will look different in different jurisdictions.
  • (Lee) We also need to be careful about avoiding creating a new root CA model.
  • (Rick) It depends. If it’s just about a UI hint, then maybe it’s OK to have an opinion that’s just heuristics. But what if we get it wrong? In Chromium, we feel we should be free to have a bunch of tweakable heuristics that allow us to determine UI friction, and it’s OK to be wrong.
  • (Lee) It’s friction vs making authoritative claims.
  • (Sam) Something Rick said was interesting — the browser (to an extent) trusts the wallet. I don’t know what that extent is, but the wallet is one of the entities best placed to ask the user the question. In the construction we have today, the user selects the wallet through a credential chooser. In that gesture, the user has vouched for that wallet in some way (maybe at installation time) and then later when they say “we want a credential from this wallet.” The browser interstitials will only be applicable when the browser thinks the wallet is being untrustworthy.
  • (Rick) It’s not that binary. To give users transparency and control, we need to provide a consistent UI. There’s an argument that the browser should get out of the way, but we need to be a backstop when something goes wrong.
  • (Lee) We basically trust the wallets to have the data and act on the user’s behalf. We are responsible for the credential selection screen and for making sure the user understands what’s happening when they select one credential over another. That choice needs to be made before getting into a wallet. It’s not that we are putting up friction because we don’t trust the wallet.
  • (Sam) Is passkeys a good analogy? They have password managers that manage the passkeys. They could mess up, but we don’t have a browser UI that says “the password manager didn’t do a good job in managing your credentials.” The browser trusts the password managers.
  • (Lee) We are not modeling for broken or malicious wallets.
  • (Tim) In passkeys, the most important part is issuance, not selection. In DC, it’s both.
  • (Rick) It’s the larger ecosystem risks.
  • (Sam) The ecosystem risks are also dealt with at issuance time. Some wallets are part of a known, trusted ecosystem. The issuers would police whether the wallets are doing good things.
  • (Rick) It’s a difference of opinion.
  • (Tim) Great discussion. We’ll figure out next steps between now and our next call.
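
One way to read Nick’s “properties, not a scalar risk score” suggestion above is as a small declared vocabulary the wallet hands back to the platform. A hypothetical sketch — none of these names exist in any spec:

    // Hypothetical property declaration a wallet might surface, so the
    // platform can key consistent UI (friction, warnings) off declared
    // properties instead of introspecting the response payload.
    const responseProperties = {
      linkability: "unlinkable",  // vs. "issuer-linkable", "verifier-linkable"
      issuerClass: "government",  // vs. "private", "self-asserted"
      retention: "none",          // mirrors the mdoc "intent to retain" flag
      scope: "same-origin",       // e.g., a temporary, same-origin-only credential
    };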

AI Generated Summary (Google Gemini)

This discussion is about how to handle different types of credential requests and how much information to show users about those requests. There are a few key points:

  • Balancing Risk and Friction: Finding the right balance between informing users about potential privacy risks and making the credential selection process smooth is a challenge.
  • Standardization vs Flexibility: There is debate about how much to standardize how credential requests are presented to users. Some argue for a common approach, while others believe browsers should be able to innovate in their UIs.
  • Context Matters: The appropriate way to handle credential requests depends on the specific context, such as the regulatory environment and the type of credential being requested.
  • Role of Wallets: Wallets are expected to play a role in protecting user privacy by potentially filtering requests or providing additional information to the user.
  • Next Steps: The group will continue this discussion to figure out how to move forward.

Higher Level Discussions: Forward looking: multi credential presentations

Scribe: Heather Flanagan

  • (Tim) We’ve said these are out of scope, but we wanted to have a high-level discussion.
  • (Lee) Fundamentally it won’t change our API at all, but I think we need to have an opinion on it. Today, we’ve been saying that replying with multiple documents is out of scope for our API because it’s difficult to organize our response when it is split across multiple credentials. E.g., if you want car insurance and driver’s license details, it’s very difficult for the platform to construct a UI across multiple wallets and get appropriate consent. It’s hard for us to orchestrate the flow; better for the website to do the orchestration — ask for one credential, then the other. That’s been our position for a while. But we’re getting a lot of requests, and the protocols above us, like OpenID4VP, support this. So do we ban these or not? For our API, it might not actually make any difference in the spec, but it would be a change in our messaging. Obviously it’s easy to do when all the documents are coming from a single wallet; we could support that now. It’s problematic to make two calls, especially when there’s a notion of linkage between the two documents (like the car insurance and driver’s license belonging to the same person); then you might need to do value matching for the 2nd selector, and that would add a lot of requirements to the platform around value matching.
  • (John) An example happening in the EU — the EWC large-scale pilot that Visa and other financial players are part of has an architecture that’s similar to SPC on the WebAuthn side, but using VCs. The plan is that the person will use their EID to identity-proof themselves to their bank, and their bank will issue them a new VC which is potentially crypto-linked to the EID — something the EU is calling related keys. Merchants who are selling age-restricted things would, in a single call, ask for the person’s age>21 and their Visa card, and the EUDI wallet would provide a combined response. That’s a real use case being worked on in the EU, but whether that’s a good idea is another question. That would be for a PSD2 bank transfer or 3DS credit card flow; they intend to use that. Others are looking at adding transaction confirmation details to the request, so the bank VC can present a dialog and sign over the challenge.
  • (Lee) Yes I know about that, and it’s one of the primary motivating ones. It’s a clear multi-document presentation to take credit card and age, clear requirement from EWC. OID has an answer, so I think we need some answer.
  • (Joseph) I think it’s important to break the single-wallet and multi-wallet cases apart. As Lee said, the single-wallet case probably just works today. I think we should try to make that work. It’s a nice flow in VP today and has nice UX benefits. The multi-wallet case doesn’t really work in VP today, even when using custom schemes. Maybe we should experiment, but I’m not highly hopeful that we can find something that works, except maybe via a second step.
  • (DW) One of the issues I raised on the OID side from the query format discussions was that the ecosystems we’re talking about are diverse enough that you won’t have a guarantee that these will always be in the same wallet. What could work for German users might not work for US users and vice-versa. So a robust website is probably going to have to do this themselves, just based on the fact that the national IDs are going to be different — places, policies for release, etc. Different people are going to have a different number of wallets. Unless we have a strategy of encouraging consolidation of wallets, it won’t be a good experience.
  • (Lee) So far our stance has been to make multiple calls. But in the in-person case, it doesn’t make sense to tap multiple times. If I have to solve for in-person, maybe I should just solve for online too?
  • (Marcos) I like Lee’s analogy here. I’m worried that we have a solution looking for a problem and that we’re looking to use this to solve the SPC case. Let’s not do that, let’s start with the simple case first.
  • (Nick) I understand we’re talking about future potential things; maybe someone will come up with a great UI that users will understand. I think we should say that, in general, this isn’t suitable on the web, and that you should ask for one thing at a time. Asking for lots of things at once is a good way to get users to say “ok fine, I’m tired of this.” For the clarity of explanation (which is why we have this general guidance for permissions), we should give this advice. This might be good feedback to take back to OID: to not allow multiple document requests at this point.
  • (Joseph) I expected Nick to take the opposite approach, because it means the verifier ends up with information it can’t use. If the verifier can’t proceed without both, it’s more confusing to the user to get just half the information.
  • (Nick) I understand that, but there are also ways to answer a lot of these requests other than just digital credentials. We risk encouraging websites to not offer a fallback for users who don’t have digital credentials.
  • (Marcos) Agreed
  • (Joseph) Just because it’s possible to ask for multiple things doesn’t mean that the wallet has to satisfy them. We can have optionality. It could really harm acceptance if we force poor user experiences.
  • (Manu) I agree it could be a very poor experience in some cases. E.g., in reality they might want to see age verification and a loyalty card, and asking for multiple at the same time is already standard practice. When talking to the __ organization, if I tell them that this is a limit of the DC API, then they’ll route around it. I’m not suggesting we have a solution for multi-wallet, but for single-wallet, there are solutions.
  • (Marcos) The use case isn’t in question, the question is whether this API is the right solution or whether there are others like payment requests. Let’s not get caught up on whether this is the right API.
  • (Pam) The other thing we need to talk about is how this happens over time. E.g., a retailer might only ask for both one time, and the next time they issue the loyalty card, the age is included there. So maybe we need to look at how often this will happen to set priority.
  • (Marcos) Going back to the real-world case that you do have to present two documents, it’s not that different.
  • (John) There are real use cases. If we’re not going to support it, then we need to come up with an explanation of why it’s a bad idea. The unintended consequence of allowing it is that we run the risk of banks only issuing credit card credentials into national IDs because they want the two things linked. We have to think through this and the implications of giving governments access to payment credentials because the only wallet they can put them into is their government ID.
  • (Lee) Prioritization-wise, we have to give the EU an answer. For now I’m saying we can’t support these cases.
  • (Tim) There’s nothing preventing multi-credential, single-wallet right now. Do we explicitly say it’s in scope and cover it in security/privacy considerations?
  • (Marcos) We might say payments are covered over there in web payments.
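
The “make multiple calls” stance Lee describes above leaves orchestration to the website: two single-credential requests, each with its own chooser and consent flow. A sketch under the draft API shape, with licenseRequest and insuranceRequest as placeholders:

    // Site-orchestrated sequential presentations. Any linkage between the
    // two documents (e.g., same person) is the verifier's problem here —
    // exactly the value-matching burden the platform avoids by not
    // combining the requests.
    const license = await navigator.credentials.get({
      digital: { providers: [{ protocol: "openid4vp", request: licenseRequest }] }
    });
    const insurance = await navigator.credentials.get({
      digital: { providers: [{ protocol: "openid4vp", request: insuranceRequest }] }
    });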

AI Generated Summary (Google Gemini)

  • There are use cases for this, such as requesting proof of age and a credit card simultaneously.
  • Currently, the DC API only supports requesting one credential at a time.
  • Supporting multi-credential presentations would be complex, especially for wallets that span multiple issuers.
  • It could also be confusing for users and discourage websites from offering alternative credential verification methods.

The group is leaning towards not supporting multi-credential presentations in the DC API for now. They will explain this decision to relevant organizations (like OID) and focus on improving the single-credential presentation experience.

Higher Level Discussions: Issuance

Scribe: Heather Flanagan

  • (Tim) We talked about this earlier. The way I see it, it’s about initiating the flow, not the whole end-to-end flow like presentation. It’s not like passkeys, where the response needs to go back through the API. Maybe only an ACK needs to go back. So from an API standpoint, we have some considerations like client_data; if we do add that, like for WebAuthn, we’d put in some extra details. Maybe we don’t need to worry about too much more right now.
  • (Lee) I was one of the people who said we should just punt on issuance right now because it’s less pressing, but we’re getting lots of questions about it now. I think we’re getting to the point where we need to think about it and come up with a timeline. E.g., for the LSPs: are you going to support OpenID4VCI over the API instead of doing that over custom schemes? I don’t really want to give the answer “just use custom schemes.” I do think we should consider now bringing it in scope in some fashion. Probably means taking a create() that takes a protocol and a request, where the request is something like an OpenID4VCI blob.
  • (Kristina) As a baseline, the priority should be getting presentation right. Having said that, I’d argue supporting issuance might even be higher priority than multiple credentials for presentation, because of cross-device security. We have two flows — one where you re-use the server; many implementations opt in to the pre-authorized code flow, where you authenticate the user up front. When you have a pre-authorized code across multiple devices, this is where the security issue comes up. We have cases where people want to use the pre-authorized code flow, because it’s how they want to authenticate the user. But we know it’s not the most secure thing to do, to just pass over QR codes. So what do we do?
  • (Manu) +1 to Kristina. The challenge is that we’re going to end up using different technologies to engage individuals on this. We’re going to use one technology for presentation, then everyone will do different things for issuance, because there’s no easy way and there are significant security and UX problems. I agree this is a higher priority than multiple credentials. +1 to Lee’s suggestion of how we could accomplish it. It doesn’t seem difficult.
  • (Nick) Question: does issuance need to be defined through an API like this in order to enable multiple wallets or web wallets? Or is there a feasible way to implement and adapt issuance into any wallet if we didn’t have an API for it?
  • (Lee) The flow is, say, on the DMV website a button says “save to my wallet.” If there are multiple wallets supported, there needs to be a way to mediate this. In the ideal case this would be mediated by an API similar to presentation. For cross-device, we could do it over Hybrid and get all the phishing protection. Without that, you’re doing some sort of custom scheme.
  • (Tim) Issuance is much closer to passkeys — 1:1, the user always has to decide where to put it first.
  • (John) While it’s similar to WebAuthn, it’s more complicated — partly because WebAuthn has punted on a lot of the requirements of the RPs. Does the wallet support whatever trust framework I trust; does it have this credential I use to authenticate? We’re likely going to have a similar complexity where wallets are going to have to provide some trusted code, where the wallet can say whether they support it. It’ll probably look similar to presentment once we grapple with all the issues. I agree having an API similar to presentment is a good idea, but I wouldn’t underestimate the complexity.
  • (Sam) It’s useful and surprising information that folks on the call feel issuance should come before multiple credentials. It seems like an easier problem to tackle with a bigger bang-for-the-buck. I wonder if issuance pre-supposes cross-device? I assume it’s not as compelling if it’s in the same device? Maybe we do presentation same-device, then presentation cross-device, then issuance?
  • (Tim) Cross-device for issuance would come for free.
  • (Sam) I’m wondering if it would be useful at all to do issuance in the absence of cross-device infrastructure.
  • (Lee) Just practically we’re going to have CTAP support ready beforehand anyway.
  • (Tim) The CTAP layer wouldn’t know the difference between issuance and presentment.
  • (Sam) I’m thinking from an impl perspective.
  • (Lee) I wouldn’t worry. If we build issuance for same-device, we’d get cross-device virtually the next day. It’s not too hard implementation-wise for us to put this under the hybrid tunnel.
  • (Tim) Does anyone feel that tackling issuance before multi-wallet is a bad idea?
  • (Lee) Any objection to putting it into our charter?
  • (Tim) Did we say presentation only in our WG charter? Or did we keep it generic?
  • (Heather) I think we said presentation.
  • (Tim) We should check.
  • (Marcos) If we’re going to add it, that’s a big deal
  • (Wendy) Issuance is in the scope of the proposed charter — “these features are intended to support different interaction flows, e.g., … requesting credentials or issuance”
  • (Tim) OK, we’ll take this as an agenda item for the next call.
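
Lee’s suggestion above — issuance as a create() call mirroring the presentation get() — might look like the following. Purely hypothetical: issuance is not yet specified, and the member and protocol names are placeholders patterned on the presentation API:

    // Hypothetical issuance entry point: create() takes a protocol
    // identifier plus an opaque payload (e.g., an OpenID4VCI credential
    // offer). Per Tim's framing, perhaps only an ACK comes back through it.
    const result = await navigator.credentials.create({
      digital: {
        providers: [{ protocol: "openid4vci", request: credentialOfferJson }]
      }
    });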

AI Generated Summary (Google Gemini)

  • There are pros and cons to including issuance:

    • Pros:
      • More secure than current methods (e.g., QR codes) for cross-device credential storage.
      • Better user experience compared to custom issuance schemes.
    • Cons:
      • More complex to implement than presentation.
      • Requires wallets to provide trusted code to verify compatibility.
  • The group is leaning towards including issuance in the DC API for the following reasons:

    • It is a higher priority than supporting multiple credentials for presentation.
    • It can be implemented for same-device credentials and automatically extend to cross-device with minimal extra effort.
    • There is already interest from organizations like the LSPs.
  • The final decision will be made at the next call after confirming the current wording of the WG charter.

Higher Level Discussions: ZKP solutions for age verification

Scribe: Heather Flanagan

  • (Tim) This topic is from Mozilla’s standards position — are there better solutions out there that wouldn’t require disclosing linkable data?
  • (Manu) We work with the retail industry — the National Association of Convenience Stores, 150k retail locations. Their standards body, CONNEXUS, wrote a standard around age verification for in-store and online. One of the first to deploy VCs at scale. Verifone is one of the PoS companies that implemented it, with 60% of the US market. The mechanism they chose is unlinkable >18 and >21 age proofs. This stuff exists today, and there’s a desire in CONNEXUS and the retail sector to use the DC API to convey age. This is a global standard-setting organization, and they work with the UK and EU; approaches are different in the UK, EU, and India. Any hope we have that this is easy — it turns out that’s not necessarily the case. It is important. There are standardized ZKP mechanisms that are deployed. The API probably shouldn’t treat age as special; it’s just another type of VC. If we try to over-optimize around age, it’s going to get weird. And we don’t need to.
  • (Tim) Have they been following this work?
  • (Manu) Yes. I’m in those groups and giving them updates on a regular basis.
  • (John) Yes, there are ways of putting >18 in as values, but that’s not really ZK; it’s just another claim. One of the reasons for moving to blinded BBS+(?), which is on the EU roadmap — the technical advantage is in how many credentials the issuers need to issue to prevent correlation. SD-JWT provides selective disclosure of claims, but the problem is that you have to make lots and lots of instances. JWP using ZKP is really more of a technical solution for those issues than it is an improvement for privacy. Getting a privacy benefit (i.e., you can do a calculation over the age) would require adding Bulletproofs or something like that to JWP. That’s possible, but we’re a long way from being able to practically do that. There is no hardware storage for those proof keys; the algorithms aren’t widely deployed. For anything beyond SD-JWT and its traditional cryptography, we’re still 3-5 years away. When will Android have StrongBox support for blinded BBS+ proofs and secure storage of those elements, and when would that roll out? There are lots of practical things, and we need to keep working on them. One of the scary things about ZKP is that the thing least well understood is post-quantum ZKP. Do we deploy ZKP before the post-quantum apocalypse or not? I don’t know that we need to do anything about our APIs.
  • (Lee) We do have a ZKP that works with mdocs, standard devices, and normal elliptic curves, and it is post-quantum safe. It executes in single-second times. It’s viable to be deployed in these situations, and you can make statements over the mdocs. You can say that the mdoc has the age field set to true, and it doesn’t reveal the device key or require batches of mdocs to be issued. It works on all existing phones. It also lets you make statements about the issuer: instead of saying it’s signed by the state of CA, you can say it’s signed by one of N. It’s a much bigger group. We have this technology and we can demonstrate it if people are interested. The downside is that it has no standing in any regulation. Everything else exists in NIST / EU, but this stuff doesn’t. If you’re, e.g., TikTok and need age verification, maybe it’s fine. But if you’re a government body and want to use this crypto in a regulated space, that’s gonna hold us back.
  • (John) Can you send some pointers to that?
  • (Lee) Yes I’ll send you an e-mail. I can’t send the paper just yet. Zeuthen did a presentation in ISO last week and there’s a write up. The paper will be available sometime soon.
  • (Manu) We probably want to separate age verification from ZKPs; they’re two orthogonal things. The thing you need with age verification is unlinkability, and you can do that without ZKP. One of the approaches is to issue lots of VCs and ensure they're each used only once. That’s what TruAge has done, and they do 50M age checks a day, and that’s fairly easy to do. I don’t want us to think that the only way to do scalable age verification is if we do ZKPs. When it comes to ZKPs, that’s kind of like a credential format thing, and it’ll change over time, and we’ll get different properties from different types of crypto. Maybe we can separate the ZKP stuff, what is used to secure the credential, out from what is u…
  • (Kristina) Question to Lee — wasn’t clear to me, did you identify any changes to the protocol (18013-5 or OID4VP) in order to support it?
  • (Lee) No, no changes to the credential format itself, just making a claim over that data. I don’t think it would change OID4VP, but the response wouldn’t be the mdoc device response anymore; it would be a whole new response, a ZKP transcript of the claim.
  • (Gareth) 18013-5 and 18013-7 would have to change to say how to request it.
  • (Lee) if you want to request in person, yes, you’d have to change their request format.
  • (Tim) Should we go a little deeper on the next call?
  • (Lee) we could invite our crypto folks to present like we did in ISO
  • (DW) The interesting part is that when you start doing this, you’re modifying the document to start omitting pieces, until you get to zero-knowledge that just says, “yes, I have a document that meets all this”. None of the original document remains at all, such that nobody else could use it. At the API level, we could handle this; at the OID level, presentations could handle this, but there’s a lot to specify to keep moving along this path to the point of just releasing “I’m compliant with what you asked for”.
  • (Lee) Yes, proof just gives you T/F and proves that you would accept it.
  • (Tim) I’ll work with Joseph on the OID side and I’ll get this on the agenda.
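
To make the “age is just another claim” point concrete, here is a minimal request sketch. It assumes the navigator.identity.get({digital: …}) entry point from the current WICG proposal; the OpenID4VP presentation_definition body and the age_over_21 mdoc element name are illustrative, not agreed spec text.

```ts
// Hypothetical sketch: requesting an unlinkable age-over-21 proof through the
// DC API. Per the discussion above, the API itself treats the age attestation
// as an ordinary credential claim rather than a special case.
async function requestAgeOver21(): Promise<unknown> {
  const credential = await (navigator as any).identity.get({
    digital: {
      providers: [{
        protocol: "openid4vp", // protocol identifier is illustrative
        request: JSON.stringify({
          presentation_definition: {
            id: "age-check",
            input_descriptors: [{
              id: "org.iso.18013.5.1.mDL",
              constraints: {
                // Ask only for the boolean age_over_21 element of the mDL,
                // not the birthdate or any other linkable data.
                fields: [{ path: ["$['org.iso.18013.5.1']['age_over_21']"] }],
              },
            }],
          },
        }),
      }],
    },
  });
  return credential; // Opaque to the page; the RP verifies it server-side.
}
```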

AI Generated Summary (Google Gemini)

  • Current age verification methods:
    • There are standardized mechanisms for age verification that don't require disclosing a user's exact birthdate. These methods use cryptographic techniques to prove that a user is above a certain age (e.g., 18 or 21).
    • One approach is to issue multiple credentials, each indicating a different age range (e.g., one for users over 18 and another for users over 21). This approach is secure but requires issuing a large number of credentials.
  • Zero-knowledge proofs (ZKPs):
    • ZKPs are a cryptographic technique that allows a user to prove they have a certain credential without revealing the credential itself. This could be used to prove a user's age without revealing their birthdate.
    • There are challenges with using ZKPs for age verification:
      • The specific type of ZKP needed is not yet recognized by existing regulatory and standards frameworks (e.g., NIST, EU).
      • Widespread adoption of ZKPs requires additional development and hardware support.
  • Future considerations:
    • The group will discuss ZKPs in more detail with their cryptography experts.
    • They will also consider how to separate the age verification method from the underlying credential format, as these may evolve independently.
    • They will investigate how to modify existing protocols (18013-5 and OID4VP) to support ZKP-based age verification.

Spec Items: Iframe and cross-origin usage (#78)

Scribe: Rick Byers

  • [Marcos] you can embed an iframe serving as the RP — need a mechanism, the web provides “permission policy” — will we allow that at all?
  • [Tim] Webauthn uses permission policy
  • [Marcos] question was whether there is interest, because there are privacy implications
  • [Marcos] Rick proposed “digital-credential-get”, which would let us do fine-grained control (separate from create) — sketched after these notes
  • … If that’s not controversial I will put together a pull request
  • [Tim] how important to know what the origin TLD is — increases need for client data JSON
  • [John] Security people will freak out without origin TLD knowledge
  • [Nick] the top level document should be the one the user is informed of.
  • [tim] top origin and calling origin are listed as a SHOULD (in Webauthn)
  • [Nick] you wouldn’t want an embedded iframe of an ad to make it look like the top level is asking for a driver’s license
  • [Rick] have had debates on what the user experience is; my viewpoint is iframes are an implementation detail. If the outer frame is coordinating with the iframe, the only origin the user needs to care about is the top frame; the responsibility is delegated. My thinking is that showing both origins could confuse the user; it could be an implementation detail of my website, but we haven’t been consistent. In FedCM, there are cases where you want both top and calling, so maybe it should be a matter of choice
  • [Nick] important that it shouldn't be accidental
  • [Rick] You have to put allow explicitly for http
  • [Sam] I wonder what should be the origin to send to the wallet — the wallet expects an origin, which one?
  • [Tim] client data would have both
  • [Sam] Does the wallet need to verify?
  • [Rick] threat model is what is the user reasoning about
  • [Tim] if caller is the origin then it is a departure from WebAuthn
  • [Marcos] issue in payment request: embedders were hiding the iframe. The iframe did not have a user activation, didn’t have the permission policy. Something to keep in mind on how these things are embedded: you are looking at a website, request a doc and it tries to get it from an iframe, and you’re screwed
  • [Lee] today we pass an origin which is the calling origin; don’t get client data, only a hash of it. They don’t know what else is in the client data. Maybe we should change that calling origin field to not have client data. If the wallet gets client data instead of calling origin, would that be good?
  • [John] I agree with Lee that if we had structured client data — if it has to go to the wallet, it needs to know what the trust is for the verifier, what the keys are — 3D Secure, the origin/trust relationship is different from wallet to merchant. 3D Secure is passing the information to the bank that needs to be validated — important to know it was Walmart; the main security relationship is with the actual verifier in the iframe; if we start hiding that, we will regret it. I’m in favor of deciding what should go in that client data with the appropriate confidence for wallet
  • [Tim] if they decide to send, the only way to verify is the calling origin as the destination of the response
  • [Marcos] need to have discussion with payments WG, need to solve it together with that group
  • [John] - there are 2 pilots with regulatory obligations that need to sort out how to do this.
  • [Nick] - seems like we need the top level origin (the wallet needs it for both consent and look-up of privacy relevant info). Needs to be done for the top level, the org asking. Otherwise one vendor gets approved and is embedded everywhere. Are you also needing the other data for encryption etc?
  • [Tim] Even if the top level has a relationship, there is no guarantee that the wallet isn’t sending it elsewhere. If it isn’t visible, that shouldn’t be allowed; scary if the levels could collude there
  • [Lee] maybe we should decide whether we fully specify client data, what fields are, change platform API to not be a hash and be richer
  • [Sam] two conversations — 1) bundling, 2) privacy threat model.
  • [Lee] our UI shows the calling origin. WebAuthn uses RPID.
  • [Tim] in theory, origin would be the same site. Weird there is no precedent that no service shows
  • [John] WebAuthn passes RPID containing metadata
  • [Sam] but does the RPID refer to the iframe?
  • [Tim] has no context at all. Doesn’t have the same web context
  • [DW] openID model is you can make a request on behalf of anyone because the result is the audience to whatever you asked for.
  • [Sam] in practice, how are things deployed? does redirect match iframe or top level domain?
  • [Brian] You can’t invoke openid4vp from an iframe because you need the URL that is invoked to be somewhere that the platform can see it
  • [Rick] an iframe could trigger the top level
  • [John] the redirect URI has no direct relationship with the page that invoked the wallet — different security assumptions in Connect than what happens in the API. Without the API, the wallet is directly calling the appropriate verifier, and the identity presented to the user is that of the redirect URI. It gets more complicated when the info comes back through the browser. The origin of the iframe is the processor that will have the legal interactions
  • [Lee] in WebAuthn, it is the client id; the response is encrypted to the client
  • [John] yes, but a leakage problem, even if only the data is encrypted and goes back. If anyone can invoke a flow on behalf of VISA, the person could figure out if there was a VISA cred because a response came back
  • [Lee] Is it the merchant or Bank of America getting the data?
  • [Tim] The difference is that a passkey is only useful to the person who uses it. This cred has data useful to everyone
  • [Lee] the actual entity/clientid. The wallet is not necessarily validating any origin at all
  • [Tim] the origin is only for the utility of the RP in WebAuthn but here the wallet NEEDS it. That is the fundamental difference — don’t have a strong opinion on putting it into client data but this shouldn’t be the top-level domain
  • [Kristina] the way we designed OpenID4VP, when there is a trust framework and a signed request when origin matching is sufficient, it doesn’t matter, because an assertion could be inserted. Right now we are allowing unsigned and signed requests, so, when it is signed, there is an X.509 chain, for example, but somehow the wallet knows the root, so it can trust the client id at the time it is sending the data; plus, on top we added expected origin, which allows more info to the wallet. This also requires an allow list of expected origins. Expected origin comes in the signed request and from the browser, and therefore the wallet can compare. Even if the wallet doesn’t have the allow list, it can still compare the request to the browser. For unsigned requests, if the RP decides to put expected origin there, the browser will also pass the expected origin; the wallet still gets 2 values, but an attacker could in theory inject an unsigned request that could easily match the expected origin with what the browser was already going to return.
  • [Tim] originally it was to bind a bunch of origins
  • [Lee] primary job of origin is to validate it wasn’t MITMd. Can’t do the replay thing. For this API, even though that response is encrypted, bad.com would get signed into the response; even if it was encrypted, the attacker would learn something. Alternative is the attacker doesn’t get anything, but it only works in the signed request case. Still get the benefit of the server validating. Last thing is the debate about the UI, 3 things: 1) top level, 2) calling frame, 3) client identifier
  • [John] even if the request isn’t signed, in the verifier’s metadata you could still have the expected origins, and the wallet could discover the MITM after the fact and refuse to respond. Expected origin should be in either a signed request or metadata; either is possible.
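
A minimal sketch of the permission-policy delegation discussed above, assuming the “digital-credential-get” token Rick proposed; the token name and the feature-detection comment are assumptions, not settled spec text.

```ts
// Sketch: an embedder explicitly delegating DC API access to a cross-origin
// iframe via Permissions Policy. The "digital-credential-get" token follows
// Rick's proposal in these notes and may not be the final name.
const frame = document.createElement("iframe");
frame.src = "https://processor.example/verify"; // hypothetical embedded verifier
frame.allow = "digital-credential-get";         // explicit opt-in; never accidental
document.body.appendChild(frame);

// Inside the iframe, a defensive check before calling the API (the
// featurePolicy introspection surface is Chromium-only and unofficial):
// if (!(document as any).featurePolicy?.allowsFeature("digital-credential-get")) {
//   /* surface a clear error instead of a confusing permission failure */
// }
```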

AI Generated Summary (Google Gemini)

  • Security concerns:
    • If an iframe from a malicious website triggers the DC API request, the user might be tricked into giving away their credentials.
    • Even if the response is encrypted, the attacker might be able to learn something from the fact that a response was received (e.g., that the user has a certain credential).
  • Current behavior:
    • The DC API currently only sends the origin of the iframe that triggered the request (calling origin).
    • WebAuthn (a related API) sends both the top-level origin (the entire website) and the calling origin.
  • Proposed solutions:
    • Include more information in the client data sent to the wallet, such as the top-level origin and expected origins (origins allowed to request the credential). This would allow the wallet to make a more informed decision about whether to release the credential.
    • Only send the calling origin and require signed requests. This would prevent attackers from spoofing the origin but would also make it more difficult for wallets to implement.
  • Open questions:
    • How important is it for the wallet to know the top-level origin?
    • Should the UI show both the calling origin and the top-level origin to the user?

Spec Items: Consume transient activation (#91)

Scribe: Rick Byers

  • [Marcos] By way of introduction, the web security model has a thing — a time-based activation model, where a user may click on something and then has a small bit of time to take an action. Consuming the activation prevents other things from happening. E.g., I call into our API, and then I try to full-screen the document. The logical thing is to allow that. Prevents silly actions — should require some ki…
  • In the simple case, we can consume the user activation when the API is invoked. If that isn’t controversial, we can add it to the spec. An unfocused window should not be able to call the API, even if it has activation; it should fail with a DOMException (a gating sketch follows these notes)
  • [Rick] agree the unfocused window thing should fail.
  • [Marcos] WebAuthn defined their own thing, which isn’t great
  • [Rick] there are scenarios where activation isn’t preserved, e.g., in an identity flow, if you rely on an identity partner, click a button to auth, but now there is another page with a button saying the same thing. In some cases, the activation was removed there; in other cases, it is made a special case. Need to try it but listen for special cases.
  • [Tim] not explicitly disallowed for a GET, but has an explicit mode
  • [Marcos] you may have created a credential but can fetch without mediation. To Rick’s point, in payments, there was a case where navigation was needed, but you needed to hold transient activation. Need to push to the right place in the platform
  • [Tim] everything in WebAuthn requires user presence so is more of a nuisance
  • [Marcos] if you do WebAuthn and call a share API, that could cause issue
  • [Sam] in favor of requiring activation; makes privacy easier, backwards compatible, but want to say it’s likely insufficient long term, because, more likely than not, users won’t have gov-issued IDs in their wallet; this means verifiers need to add explicit buttons, and 99% of the time, if you click on the button you see nothing
  • [Marcos] This came up at IIW. Talking with OpenID folks about verifying the request. If it fails, you may need to call again with a different request, and needing a new activation is a problem. You may need a way to validate the request with a different API.
  • [Tim] This is a slightly different topic, but how do you gracefully fall back to custom schemes, at least in the short run? The autofill for Passkeys is intended to be a short-term solution.
  • [Sam] If we impose user activation then the page wouldn’t learn until the user closes the prompt. If the wallet didn’t have any credentials, then invoking a custom scheme wouldn’t help anyway.
  • [Tim] Unless you had a wallet that didn’t update to use the digital credential API.
  • [Lee] I think it comes down to what you reveal silently. We have a principle that nothing is conveyed silently. We want to make sure “no credential” is indistinguishable from the user canceling. Any kind of pre-flight or silent failure so you can fall back would not be OK, because it would reveal the presence of a credential.
  • [Tim] we need to carve out time for how to realistically deploy this in practice.
  • [Lee] My view is you should never fall back to custom schemes. If the browser supports the DC API, you should just use it, and there’s no custom scheme support.
  • [Tim] The problem is the wallet on the device has been updated. But platform and wallet aren’t always the same.
  • [Lee] I might say that wallets on Android must support the API.
  • [Tim] what if FF doesn’t support the API — not possible?
  • [Lee] would like to say so. If you’re on Android, you are making the assumption the browser supports it. The API would be mandatory; the custom schemes would be additional support
  • [Tim] you can silently query in WebAuthn now, maybe there is something similar
  • [Lee] That doesn’t preflight with providers, just shows what the browser knows
  • [John] we have yet to figure out how web wallets can work with this API, need that. Doing it from this API would be great
  • [Tim] question is, does this happen via browser
  • [Lee] ultimately web browsers would need to register with the platform
  • [Sam] we are figuring out quirks - can we assume we are consuming user interactions now?
  • [Tim] Easier to remove the requirement later than to add it. Even WebKit for WebAuthn used to require it
  • [Marcos] there are other models to hook into; transient activation can happen and you are authorized once; next time it gets easier
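
A minimal sketch of the gating behavior proposed above. navigator.userActivation is the existing web-platform surface for observing activation; the actual consumption step happens inside the user agent, and the error names here are assumptions.

```ts
// Sketch: the DC API entry point rejecting calls that lack transient user
// activation or come from an unfocused window, per the proposal above.
async function getDigitalCredentialGated(request: unknown): Promise<unknown> {
  if (!navigator.userActivation.isActive) {
    // No recent user gesture: fail rather than show wallet UI out of the blue.
    throw new DOMException("A user gesture is required.", "NotAllowedError");
  }
  if (!document.hasFocus()) {
    // An unfocused window should fail even if it somehow holds activation.
    throw new DOMException("The document is not focused.", "NotAllowedError");
  }
  // The user agent would consume the activation here, so a second call (or a
  // follow-on fullscreen request) needs a fresh gesture.
  return (navigator as any).identity.get(request);
}
```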

AI Generated Summary (Google Gemini)

The discussion is about a web security concept called transient user activation. This concept ensures that a user intentionally triggers an action and not something else (e.g., a script).

  • Currently, transient user activation works for the DC API when it's called directly by a webpage.
  • The proposal is to extend this to also cover situations where the DC API is called from an iframe or a different window. An unfocused window should not be able to call the API.
  • There are concerns that requiring user activation for everything might not be ideal in the long term, especially for cases where users won't have government-issued IDs in their wallets.
  • An alternative solution is to never fall back to custom schemes (where the user interacts with the credential outside the DC API). This would require all browsers and platforms to support the DC API.

The group decided to keep discussing this topic considering the need for practical deployment scenarios.

Spec Items: clientData payload (#95)

Scribe: Rick Byers, Pamela Dingle

  • [Tim] this is directly inspired by WebAuthn. Seems like people want it. Question is what goes in it, and should the client hash it (no, based on the previous topic). 2nd question: the one big difference is WebAuthn involves a challenge and there is a full circle. In this case, there isn’t a nonce, so the contextual binding is lost. One option is to serialize it, hash it, pass it to the wallet, and the wallet signs it. For this API we said there wouldn’t be a top-level nonce. (An illustrative shape follows these notes.)
  • [John] it’s the wallet that is doing the signing of the presentation. What do you mean by client?
  • [Tim] it is the browser in this case. Browser collects all the data, but there is no challenge
  • [John] individual protocols all have challenges; not sure what the client hashing adds
  • [Tim] you are binding it to all the other data in the challenge
  • [John] Passing the info to the wallet so the individual parts can be signed over. You could hash, but the client could just ignore it and include all the bits anywhere
  • [Tim] hashing it makes it smaller
  • [John] just pass the client data
  • [Tim] we can, but right now, nothing is bound to the specific request
  • [John] individual protocols will still dig in
  • [Tim] but the client will get what it needs. easier than duplicating nonce or having client dig into the request
  • [John] having an untrustworthy browser add a hash wouldn’t add anything
  • [Tim] the hash would be incorrect in that case
  • [Lee] today, you have implicit trust, because the origin is in there; it isn’t bound to anything else. If you passed 2 origins in a more structured way, there would be no need
  • [John] the reason why the challenge, etc., doesn’t go over CTAP is size
  • [Tim] - Not talking about hashing data, just including another parameter that is hashed.
  • [Lee] - who would validate? Only person is the RP, but doesn’t the RP have to implicitly trust it anyway?
  • [John] - is the data going outside the response?
  • [Tim] - yes, the profile is being finalized and includes 3 things
  • [Lee] - you do need to sign over the origin somewhere
  • [John] - the appropriate things are signed over so…
  • [Lee] - if you don’t have response encryption, I don’t know where it gets signed today
  • [John] - the client id is the audience of the response… this is an OpenID issue; what goes back should be up to original formats
  • [Lee] - OpenID needs to specify where the Origin gets signed. Maybe just a signed blob in the response
  • [Kristina] - why does the origin need to go in the response
  • [John] - because there is no concept of an RPID, but because the parties are defined differently in OpenID4VP, might not be a bad idea to sign over it. It isn’t required
  • [Kristina] - is it wallet passing origin? The client ID gets passed in request and becomes an audience
  • [Lee] - If John is a legit server, I can ping him, get all the validated stuff, and then replay it. If I sign the origin into the request, for the signed request you have a solution. In the case of an unsigned request, signing the origin into the response helps detect a MITM attack.
  • [Kristina] - if you are putting it inside the presentation (whatever format), you also add the expected origin, that would be a credential-specific extension. In the case where it isn’t signed, you can still dereference the expected origins and RPID. Each protocol will have different ways to do this; may not want to require every data format to include this extension
  • [Tim] does it hurt to include it?
  • [Kristina] this is a behavior in the wallet and a change in format; not clear on when it is optional and when it is mandatory — majority of requests are signed, mandating always would be something to think of
  • [Tim] this would be passed back and forth; whether the wallet does something with it is a choice. Would rather provide it now if possible
  • [Marcos] it doesn’t force anything; it is just context included by the browser
  • [Tim] Is there any requirement to send TLS context?
  • [John] I would say that’s horrible, others would love it
  • [Lee] Doesn’t make sense in the context of a app caller
  • [John] what is the current state of the TLS cert passed for reader auth for online presentment, Kristina?
  • [Kristina] not sure
  • [John] I believe this is unresolved — some people believe TLS certs need to be qualified cert passed along to wallet; others believe that cert should be either passed in request or in the verifier metadata so the wallet can examine. May have 4,5,10 certs, each from a different country. Seems hopeless
  • [Tim] tying to origin?
  • [John] yes, but depends on which wallet replies
  • [Lee] - put those certs in the metadata. Making assumptions on TLS presumes a web browser, not mobile
  • [Tim] Is there anyone who wouldn’t support this API if we didn’t include this?
  • [Lee] No
  • [John] we don’t include a TLS certificate in WebAuthn when talking to an authenticator even though it is in the spec. May have any number of TLS connections for a given origin; knowing which of the 10 certs for that RP goes in the client data is not clear; it is a significant layering problem. Did suggest this was a mitigation for TLS MITM but that as well as token binding never made it through
  • [Tim] will do a PR for client data without TLS context
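
To anchor the discussion, here is an illustrative clientData shape modeled on WebAuthn’s clientDataJSON. Every field name is an assumption; per the notes, both origins are carried, TLS session context is omitted, and a wallet holding expected origins (from a signed request or verifier metadata, as Kristina and John describe) can compare them against the browser-asserted value.

```ts
// Illustrative only: a clientData payload the browser might assemble. Modeled
// on WebAuthn's clientDataJSON; field names are not spec text.
interface DigitalCredentialClientData {
  type: "digital-credential.get";
  origin: string;      // calling (possibly iframe) origin, asserted by the browser
  topOrigin?: string;  // top-level origin, when the call is cross-origin
}

// Hashing would mirror how WebAuthn binds clientDataJSON into the signed response:
async function hashClientData(data: DigitalCredentialClientData): Promise<ArrayBuffer> {
  const bytes = new TextEncoder().encode(JSON.stringify(data));
  return crypto.subtle.digest("SHA-256", bytes);
}

// Wallet-side MITM check sketched from Kristina's description: compare the
// browser-asserted origin against expected origins from a signed request or
// the verifier's metadata, and refuse to respond on mismatch.
function originMatches(clientData: DigitalCredentialClientData, expectedOrigins: string[]): boolean {
  return expectedOrigins.includes(clientData.origin);
}
```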

AI Generated Summary (Google Gemini)

The discussion is about what data should be included in a client data object that is passed to a digital wallet. This data is used by the wallet to verify a request from a website.

  • Current behavior:
    • The API currently only includes the origin of the website making the request.
    • WebAuthn (a related API) includes a challenge generated by the website, which helps prevent replay attacks. However, the DC API does not use challenges.
  • Proposal:
    • Include more information in the client data, such as a hash of the request object.
  • Concerns:
    • Adding a hash might not be necessary if the individual protocols using the DC API already have ways to verify the data.
    • It might be difficult for wallets to implement if they have to validate the hash.
    • Including the origin in the response might not be necessary if the response is encrypted.

The group decided to move forward with a proposal that includes the origin in the client data but does not include TLS session context.

Spec Items: Data type for the response (#119)

Scribe: Pamela Dingle

  • Tim - didn’t get sense that array buffer is needed
  • Tim - in OpenID4VP response, what is passed back?
  • Lee - JSON
  • Tim - if we wrote WebAuthn today it would all be Base64 encoded JSON
  • Marcos - there were examples of using URL search params. We can still do that, but what kind of data are we going to get back?
  • Sam - the fork is whether we expect responses to be byte streams vs text. Text includes JSON/form encoding. Do we ever expect a response that’s binary?
  • Kristina - No it shouldn’t be binary
  • Sam - can we always assume text/JSON?
  • Kristina - yes
  • Marcos - now we can get back an object, we don’t even need text
  • Lee - as long as it is JSON serializable, it doesn’t matter what it is
  • Rick - question on the issue about size — what are current ZKP proofs?
  • Lee - low numbers of 10’s of k
  • Rick - even in those cases, can we still assume it will be string
  • Lee - if it’s an object with a byte array in it, as long as it can be converted, that’s OK, because we have to take this to hybrid; you have to return the entire object, not the object inside the object. The thing we need to convert to JSON is the larger object; as long as everything embedded can be converted, it is OK
  • Rick - we can require in the spec
  • Marcos - we can guarantee it, but the challenge is what we get back from the wallet, is it parseable (the platform API should dictate that)
  • Sam - you would have to guarantee that the response is JSON parseable
  • Lee - we would use the JSON object that is the digital credential object. Same as the request. It is bigger than the request parameter; it is the whole digital section of the request parameter
  • Marcos - In Payment Request, we do the same thing — take in the object and assure that it is JSON-serializable. Try to convert, and reject if it dies. Need assurances that it isn’t garbage coming back from the wallet. (Sketched after these notes.)
  • Lee - this is the same as WebAuthn. We give the JSON, accept an object back, dies otherwise
  • Lee - you are going to have the response and decode it; you would have a JSON string inside JSON. It isn’t top-level; it is a string inside.
  • Sam - are we converging to object for the return data?
  • Marcos - looks like it. we need to take it to the other communities
  • (thumbs up from members of other communities)
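
A minimal sketch of the serializability guarantee Marcos describes from Payment Request: round-trip the wallet’s response through JSON and reject the request if that fails. The error name is an assumption.

```ts
// Sketch: the platform accepts an object from the wallet only if it survives a
// JSON round-trip, so the page never receives unserializable garbage.
function ensureJsonSerializable<T>(walletResponse: T): T {
  try {
    // JSON.stringify throws on cyclic structures and BigInt values and drops
    // functions; the round-trip leaves only plain, JSON-safe data. Per the
    // discussion, nested byte data is fine as long as it converts (e.g., base64).
    return JSON.parse(JSON.stringify(walletResponse));
  } catch {
    throw new DOMException("Wallet response is not JSON-serializable.", "DataError");
  }
}
```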

AI Generated Summary (Google Gemini)

  • Current understanding:
    • The response data is expected to be in JSON format.
    • This is similar to the WebAuthn API, where the data is also JSON-encoded.
  • Alternatives considered and rejected:
    • Array buffer: This was not seen as necessary.
    • Binary data: This was not considered likely for the use case.
  • Open questions:
    • While the top-level response is expected to be JSON, there might be nested data within the response that could be binary. The group decided that as long as this nested data can be converted to JSON, it is acceptable.
    • The size of Zero-Knowledge Proofs (ZKPs) was brought up as a potential concern. The group acknowledged that ZKPs might be larger than other data types, but as long as they can be represented in JSON format, it should still work.

Spec Items: Error codes

Scribe: Rick Byers

Both transport and web layer

  • [Tim] Web platform error codes, and what we should send between clients. There’s a current WebAuthn PR about getting more verbose errors, because that’s the #1 request from devs.
  • [Lee] We’ve had the principle that you shouldn’t be able to learn the existence of a credential from an error; you can’t distinguish it from user cancel. Say a user picks a credential, you get to the wallet, but it can’t respond. What goes back? RP validation failed? That reveals, to an RP who didn’t have permission, that the user had the credential. Think we’re getting to the point that once a user proceeds, you can give back an error code.
  • [Tim] Wallet can give error to platform, that doesn’t necessarily go back to the verifier
  • [Lee] Some errors don’t go back to the app or browser, but go back to the credential selector. 1. Do we allow wallets to send errors back at all? Or is the only thing they ever send “user canceled”? We need to make that decision
  • [Marcos] A thing we need to consider on the web platform. Until recently, we only had exceptions as defined in WebIDL; a narrow set served well. We have the opportunity (not that I advise it) to make an extension. Don’t rely on messages to devs, as they’ll try to parse them. Given the set we have in WebIDL, we can choose what fits, e.g., data error. Goes back to my earlier thinking on validate-before-send. Architectural principle: don’t create new things unless we have to.
  • [Tim] Options are already there to map to conditions
  • [Lee] As soon as you map to anything other than user canceled, you reveal existence. Do we want to say we’ll do that? Seems reasonable
  • [Marcos] WebAuthn doesn’t define new types
  • [Tim] WebAuthn PR isn’t defining new types, just adding context. https://github.com/w3c/webauthn/pull/2047
  • [Tim] Consensus on once the user picks. Do we need a PR?
  • [Pam] Think that works for WebAuthn, doesn’t necessarily work for data-laden credentials. E.g., multiple credentials. A bunch of finer-grained things that need events
  • [Marcos] PaymentRequest experience. A series of events, rather than errors. We could look at something like that. You request with an event target, enable error recovery
  • [Lee] Also protocol errors
  • [Tim] Error in hybrid chain. We probably need to map by layer and see what we want to allow by layer
  • [Lee] OpenID4VP errors, one is invalid scope. E.g., an attack to see if you live in CA. Would they learn the existence of a credential without getting wallet consent?
  • [Sam] What errors does the web platform produce? User canceled or response?
  • [Lee] User canceled. Object invalid (e.g., not JSON).
  • [Marcos] Security error, you tried to send an object
  • [Sam] So we don't have to change anything
  • [Lee] Note that protocol error reveals existence of credential
  • [Pam] Would love a picture. Also, throttling — is a given provider able to make 1000 requests?
  • [Rick] All about deployment and in-the-field observability. Rich wallet errors and very few of them
  • [Sam] We can have ways in which the wallet can tell the browser not to have a privacy leak, just return as if the user canceled. E.g., OpenID4VP access error. (Sketched after these notes.)
  • [Lee] Same in WebAuthn. We have a user canceled error from passkey providers; it just goes back to the selector dialog. If you cancel the selector, you get a user canceled error. If you go back from the credential, you get back to the selector (in Android).
  • [Tim] Who wants to start an issue?
  • [Rick] I’ll file one
  • [Tim] Anyone want to draw a picture of the layers? I can try.
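
A small sketch of the privacy collapse Lee and Sam describe: wallet-side failures that would reveal whether a credential exists are surfaced to the page exactly like a user cancel. The wallet error names are hypothetical.

```ts
// Hypothetical wallet-layer error codes; only the mapping logic is the point.
type WalletError =
  | "user_canceled"
  | "no_matching_credential"   // would reveal credential existence if surfaced
  | "rp_not_authorized"        // would reveal credential existence if surfaced
  | "request_malformed";       // the page's own fault; safe to surface

function toPageError(walletError: WalletError): DOMException {
  switch (walletError) {
    case "request_malformed":
      // Reveals nothing about the user's credentials.
      return new DOMException("Invalid request.", "SyntaxError");
    default:
      // Everything else is made indistinguishable from a plain user cancel,
      // so the RP learns nothing either way.
      return new DOMException("The request was aborted.", "AbortError");
  }
}
```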

AI Generated Summary (Google Gemini)

  • Concerns:
    • Revealing credential existence through errors could be a privacy leak.
    • There are different error scenarios depending on the layer (e.g., Web platform, protocol).
  • Possible solutions:
    • Define new error codes for the DC API that balance information and privacy.
    • Use events (like payment requests) instead of errors for some scenarios.
    • Allow the wallet to signal the browser to return a generic "user cancelled" error even for other errors, to protect privacy.
  • Next steps:
    • Rick will file an issue to track this discussion.
    • Someone from the group might create a diagram illustrating the different error layers.