Preventing silent timing attacks #447
Something I am confused about is that we still don't have a guaranteed way of knowing that the user successfully logged into the SP/RP with the desired IDP. From a browser perspective we only know that the user clicked on an IDP, not even that they meant to.
Yes, FedCM indeed has no way to know whether the user logged into the RP successfully after the FedCM flow is completed. However, I'm not sure how this is related to the proposal, which is the IDP Sign-In Status API. The API would allow the user agent to not fetch the IDP accounts when it knows that the user is not signed in to the IDP. Perhaps you can elaborate on the question? Also happy to schedule a chat for sometime next week if that would be easier.
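For concreteness, here is a minimal sketch of how an IDP could signal its sign-in status to the browser. It uses the Login Status API surface that later shipped (`navigator.login.setStatus` and the `Set-Login` header); the exact names in the original proposal may differ.

```js
// Runs on the IDP's own origin after the user signs in or out.
// Sketch only; assumes the Login Status API shape (navigator.login.setStatus).
navigator.login.setStatus("logged-in");   // call with "logged-out" on sign-out

// Equivalently, the IDP can send an HTTP response header on a top-level
// navigation to its origin (shown as a comment since it is not script):
//   Set-Login: logged-in
```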
Can you explain how this addresses the leak? I can't see how.
It addresses the leak (or, as we like to call it, the timing attack) by making it visible to the user, which disincentivizes a tracker from using FedCM to attempt timing attacks, since they would be very clearly visible to users.
Yeah, I don't think that is adequate. Accountability through visibility is a fine fallback for those cases where we can't use other mechanisms to prevent an attack, but here we already know that we can do better.
Can you expand on what you mean by this? In what way do you think we can do better here in comparison to this proposal?
If the user needs to actively select the IdP before the IdP is contacted, we win. If the information from the IdP is cached, we can save having to ask the user, but still show them account information, and we win. That proposal depends on trusting the IdP, which is not OK. |
Before diving into the reply, we just wanted to make sure that we agree on the following statement: “this proposal solves the SILENT timing attack problem” (i.e., it is not possible to perform a timing attack silently, without any user-visible indication). Do we all agree with that statement?

If we do, then there’s the question of how to go about the loud (not silent, i.e., showing UI) timing attack problem. We believe that the browser should be allowed to pick the UX that it thinks is in the best interest of its users, even if that means that loud timing attacks are not solved. This proposal should be specified in such a way that the browser can make such decisions.

Do you think that we can agree on this proposal with the understanding that it is up to the browser how to deal with loud timing attacks? That is, Chrome may fetch accounts right away when the sign-in status is not “signed-out”, but Firefox may require some interaction from the user before doing this. We believe that there’s room for browsers to innovate here, since the API is flexible enough that introducing or reducing user friction should not break sites.

It may also be helpful to point out that current browser behaviors already allow websites to make connections with third parties by opening a new popup window or navigating to a new origin, and this does not require the user to be prompted first, since that would be significant friction to the user experience.
Yes, but we believe that this is a choice that the user agent can make. There is a tradeoff here between 1) the strength of the privacy protection and 2) the amount of user friction. If we need to ask the user to choose an IDP before we fetch the accounts when they first visit a site, this introduces significant user friction (one extra click, plus latency) to the API. Taking this to the extreme, a user agent that cares only about the first would also want to introduce friction for cross-origin popups or navigations. Given the uncertainty here, we think the user agent should be free to pick the behavior it thinks works best for a given user. This could involve heuristics based on an assessment of privacy risks and user sensitivity.
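To make this tradeoff concrete, here is a purely illustrative sketch of the kind of gating decision a user agent could make before the accounts fetch; it is not part of the proposal, and every name in it is invented for this sketch.

```js
// Illustrative only: hypothetical browser-internal logic, not FedCM spec text.
const loginStatusStore = new Map(); // idpOrigin -> "logged-in" | "logged-out"

async function maybeFetchAccounts(idpOrigin, policy, fetchAccountsList, askUserToPickIdp) {
  const status = loginStatusStore.get(idpOrigin) ?? "unknown";

  // Core of the proposal: never contact the IDP when the browser already
  // knows the user is signed out of it.
  if (status === "logged-out") return null;

  // Browser-specific handling of "loud" timing attacks: one browser may
  // fetch immediately, another may first require the user to pick the IDP.
  if (policy.requireUserGestureBeforeFetch) {
    const picked = await askUserToPickIdp(idpOrigin);
    if (!picked) return null;
  }
  return fetchAccountsList(idpOrigin); // credentialed accounts endpoint request
}
```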
We heard consistently from the IdPs we are engaging with that caching the account information is unacceptable. The reason is that showing stale information can be traumatizing or dangerous to certain subsets of users, for instance those who have gone through a gender transition. Of course, the duration of the caching matters here, but if we only cache for some time and FedCM is used infrequently enough, then this does not solve much of the problem (we still need to fetch if the cache has expired), and it introduces complexity and risks. Another problem is that this caching would need to be performed at the user agent level, and we’ve heard that some IDPs do not like the idea of the user agent storing user account information. For these reasons, we believe that caching is not an appropriate solution here (cf. slide 17 here).
We do not share the opinion that this proposal depends on trusting the IDP. From Chrome’s analysis, a tracker trying to disguise itself as an IDP would not be able to use FedCM to track users. The reason is simple: if it tries to do so, this will become evident to users and RPs, since users see the tracker’s domain due to the IDP Sign-In Status API. An RP would respond by pulling the tracker out of their site, since their website was visibly corrupted by the tracker. Besides that, the tracker’s abilities would be very limited: it would have to make timing correlations, which have unknown precision rates (that get lower as the number of users increases) and high technical complexity. So far, we believe that a tracker would not want to incur this risk (high visibility) for the weak benefit (timing correlations).

A real IDP could in theory use FedCM along with timing to attempt to track its users. However, we think this is unlikely to happen for a few reasons. First, the IDP would only be able to attempt tracking on sites that support logging in via FedCM. Second, the tracking would be very tricky to perform accurately, since it would be timing-based. Lastly, tracking via FedCM might become readily apparent from the IDP’s SDK, which needs to be public as it is embedded in RPs. A real IDP would not do this, since it needs to be perceived as ‘safe’ to remain a viable IDP for its users.

Now, let’s assume the browser (say, Firefox, or Chrome in the future) decides that it wants to use a UI where the accounts fetch is initially gated on the user picking an IDP. This is compatible with this proposal, and the proposal helps improve the user flow on subsequent visits: the browser would no longer need to fetch accounts if it knows that the user is not logged in to a given IDP, even if the user has previously used that IDP to log in to a site via FedCM.
One thing that's not clear to me about the proposal (sorry, I'm just catching up on the specific details now) is how it addresses privacy principle 1.1 if we go in the direction of solving only the silent timing attack and allowing the UA to decide how to solve the loud timing attack. In this case, I'm not sure this spec would meet the necessary level of consent defined by the privacy principles document, since it specifically calls out that having identity shared across contexts requires a high burden of proof. For this spec specifically, I don't believe that only solving the silent timing attack is good enough.

As far as I can tell (I may have missed some aspect of the proposal, so please correct me if I've made a mistake), by leaving this up to the UA we would not give the user any way to opt out of the loud timing attack, since it's controlled by the browser UI flow. The one method I could come up with for preventing the loud timing attack is to tell the user "don't log in with that IDP", but there are many cases where this is not an option, because the RP gets to dictate which IDPs are available. So by leaving the loud timing attack scenario up to the UA, we are also inadvertently accepting that some RPs may only support IDPs which do track the user across origins, in such a way that the user has no way to opt out of loud timing attacks with that specific RP in certain browsers. This collusion between RPs and IDPs isn't being accounted for, from what I can tell, in the current proposal and should be addressed.

However, if we block this with the browser UI, we completely eliminate this class of attacks and also reduce the privacy labor expected of the user. For this reason I'd be a +1 to making sure that we're not only addressing the silent timing attack, but also the loud timing attack, in a way that preferably doesn't require the user to do anything.
Given that this topic has been discussed in multiple threads (#230 (comment), #231, ...) for about a year now, which has already led to some changes (I believe the link decoration topics were addressed via https://fedidcg.github.io/FedCM/#idp-api-well-known) and to this proposal for the sign-in status API, it would be worthwhile to re-align on the scenario being discussed here. As far as I understand, the scenario requires a rogue IDP that
Receiving a FedCM API call (mainly the Accounts List call) would work once on one site visit. Assuming the IDP is able to record the API calls (in the scenario described above), the correlation would be based solely on the timestamps and the origin (available in the Client Metadata request).
This discussion (to my understanding) is not about sharing identity across contexts (which is always gated by the browser UX affordance, as that only happens when calling the identity assertion endpoint), but about whether the API offers enough of a surface, let's say, for an IDP to probabilistically correlate origin and identity purely from the logged timestamps of the prior API calls (in a scenario where the user would not choose that IDP to log in to that specific RP anyway).
If this were addressed with additional UX affordances, it would extend the privacy labor expected from the user AND would not really fully prevent a timing attack (it just offloads the burden to the user).
These types of gates are also concerning, as they specifically favour the usual economies-of-scale mechanics: additional friction usually hurts those IDPs that are supported on fewer RPs. A user would not become aware of being able to sign in with an IDP they are actively using (and signed into) because there is a platform-default login option to gravitate to, and the effect gets stronger the more gates are introduced. Giving the user a choice about which IDPs to generally use with FedCM (which would be far more understandable for users) could be done at different places, without that massive level of friction at a per-RP level (the sign-in API already requires the IDP to signal this intent to the browser).

The only real way to fully prevent any timing attack would be to change the mechanics of when the Accounts List endpoint is called to populate the account chooser (such that the timestamp of this call and the user's visit to the site could not be correlated, even in the faintest of cases/probabilities). This would incur user friction later in the process, though, as the user might have logged out prior to a get call, account details might have changed, and so on (it seems unworkable without sign-in extensions etc.).
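To illustrate the kind of probabilistic, timestamp-only correlation being described, here is a hypothetical sketch (log shapes and the time window are invented): a rogue IDP could try to pair credentialed accounts-endpoint hits, which identify the user, with client-metadata hits, which carry the RP origin, whenever their timestamps are close.

```js
// Hypothetical illustration of the timing correlation described above.
// The log record shapes and the 2-second window are invented for this sketch.
function correlate(accountsLog, clientMetadataLog, windowMs = 2000) {
  const guesses = [];
  for (const a of accountsLog) {            // { timestampMs, userId }  (credentialed call)
    for (const m of clientMetadataLog) {    // { timestampMs, rpOrigin } (carries the RP origin)
      if (Math.abs(a.timestampMs - m.timestampMs) < windowMs) {
        guesses.push({ userId: a.userId, rpOrigin: m.rpOrigin });
      }
    }
  }
  // The guesses are only probabilistic: precision drops as traffic grows,
  // which is the "unknown precision rates" point made earlier in the thread.
  return guesses;
}
```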
I'm assuming here that the gating UX would not be shown on each site visit; otherwise I'd guess RPs would totally ignore this API because of the user friction.
Thank you for re-stating the specific scenarios that cause the attack. I've been trying to track the movement of this work, but definitely haven't been close enough to realize that the third point was also a requirement; I'd only picked up on the first two when I researched it yesterday. From the sounds of it, the reset by the browser would prevent continued coordinated fingerprinting abuse here, which was my largest concern.
Sorry, to clarify: what led me to that interpretation was more specifically the question of how to solve the loud timing attack, and I was arguing that the proposal should address both the loud and the silent timing attacks. I agree that the proposal currently defines a way to prevent repeated instances of the silent timing attack, although it sounds like the first instance can still be executed with user mistakes along the way (visiting a malicious IDP). The reason I was arguing for this is the following comment from @npm1:
Does that help clarify what I was arguing for previously? The primary reason I want to see this addressed universally is that I'd prefer to see Brave able to reuse the upstream UI rather than having to prevent the loud timing attack independently of other Chromium implementations.
Ok, so this gets to the heart of the tradeoff and is extremely helpful for understanding how the Chrome team is approaching the problem. I definitely haven't thought about the problem as deeply as you all have at this point, so I don't think I can offer a perfectly balanced solution. Am I right in assuming, based on this response, that the goal is to only ever need a single UI display? It seems like if we were willing to run the account chooser first, before retrieving the client_metadata, as a two-phase UI during registration (the ToS/privacy policy doesn't seem to need to be re-agreed to on re-login unless it has changed), then we could prevent both the silent and the loud timing attack by only issuing the client_metadata call after the account has been chosen. Is this a path that could be explored further, or am I missing something obvious here?
Avoiding the client metadata request gains you very little, because the RP (or the IDP's SDK embedded on the RP) can always create an `<img>` or similar cross-origin request to the IDP with the same timing.
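As a minimal illustration of this point (the endpoint URL is made up), an RP page or an embedded IDP SDK can already issue its own cross-origin request to the IDP at the same moment as the FedCM call, so suppressing the client metadata fetch alone removes little timing signal:

```js
// Illustrative only: the RP (or IDP SDK) can generate its own timing signal
// to the IDP regardless of FedCM's fetch ordering. The URL is hypothetical.
const img = new Image();
img.src = "https://idp.example/ping?ts=" + Date.now();
// Issued at the same moment as the FedCM request, this lets the IDP log an
// RP-side hit whether or not the browser performs the client metadata fetch.
```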
Right, I think that was the main reason to introduce the IDP sign-in status API. It does not impose friction for legitimate use, but prevents coordinated abuse.
Since we have added login status to the spec, I think this can be closed.
This is Chrome's proposal to solve issue #231. It is described here:
https://github.com/fedidcg/FedCM/blob/main/proposals/idp-sign-in-status-api.md
I am splitting this proposal out from that issue because depending on browser UI choices, it does not necessarily fully prevent sending user info to attackers; it only prevents doing so silently.
#436 is the pending pull request to integrate the proposal into the spec. Feel free to discuss here or in the PR.
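For reference, a minimal sketch of the RP-side FedCM call that the proposal affects (configURL and clientId are placeholders); under the proposal, the browser consults its recorded sign-in status for the IDP before issuing the accounts fetch:

```js
// Standard FedCM request from an RP page; configURL and clientId are placeholders.
async function signIn() {
  const credential = await navigator.credentials.get({
    identity: {
      providers: [{
        configURL: "https://idp.example/fedcm.json",
        clientId: "rp-client-id",
      }],
    },
  });
  return credential; // an IdentityCredential containing the token on success
}
// With the sign-in status proposal, the browser checks its stored status for
// idp.example first and skips the accounts endpoint fetch if the user is
// known to be signed out.
```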