Fix calculation of channel bindings hash in managed NTLM implementation #95898
Conversation
Tagging subscribers to this area: @dotnet/ncl, @bartonjs, @vcsjones

Issue Details: Fixes #95725
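For context on the area being fixed: per MS-NLMP, the channel binding from the transport is not sent verbatim; the MsvAvChannelBindings AV pair in the NTLMv2 response carries an MD5 hash of a gss_channel_bindings_struct (RFC 4121 section 4.1.1.2) whose only populated field for TLS is the application data, e.g. "tls-server-end-point:" followed by the server certificate hash. A minimal sketch of that computation (illustrative only, not the exact code changed by this PR) could look like:

```csharp
using System;
using System.Buffers.Binary;
using System.Security.Cryptography;

internal static class ChannelBindingHashSketch
{
    // applicationData is assumed to be the raw channel binding token,
    // e.g. "tls-server-end-point:" followed by the server certificate hash.
    public static byte[] ComputeChannelBindingHash(ReadOnlySpan<byte> applicationData)
    {
        // Serialized gss_channel_bindings_struct with empty initiator/acceptor
        // addresses (GSS_C_AF_UNSPEC): five little-endian 32-bit fields followed
        // by the application data itself.
        byte[] buffer = new byte[5 * sizeof(uint) + applicationData.Length];
        BinaryPrimitives.WriteUInt32LittleEndian(buffer.AsSpan(4 * sizeof(uint)), (uint)applicationData.Length);
        applicationData.CopyTo(buffer.AsSpan(5 * sizeof(uint)));
        return MD5.HashData(buffer); // 16-byte value stored in the MsvAvChannelBindings AV pair
    }
}
```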
LGTM
/azp run runtime-libraries-coreclr outerloop
Azure Pipelines successfully started running 1 pipeline(s).
/azp list
/azp run runtime-libraries enterprise-linux
Azure Pipelines successfully started running 1 pipeline(s).
src/libraries/System.Net.Security/src/System/Net/NegotiateAuthenticationPal.ManagedNtlm.cs
...ibraries/System.Net.Security/src/System/Net/Security/Pal.Managed/SafeChannelBindingHandle.cs
/azp run runtime-libraries-coreclr outerloop
/azp run runtime-libraries enterprise-linux
Azure Pipelines successfully started running 1 pipeline(s).
Azure Pipelines successfully started running 1 pipeline(s).
LGTM.
Thanks @filipnavara for digging into this.
I assume we don't have a good way to write a test, right?
We can create a subclass of […]. We cannot test against a real NTLM server on a local Windows machine either, because the managed NTLM implementation is not compiled in that configuration, and we would still need to establish a TLS connection with a reasonable certificate. There may also be some global system settings affecting it.
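For reference, ChannelBinding is an abstract SafeHandle, so a unit test could in principle wrap an arbitrary buffer in a subclass and feed it to the code path that hashes channel bindings, without a real TLS connection. A hypothetical helper (not part of this PR) might look like:

```csharp
using System.Runtime.InteropServices;
using System.Security.Authentication.ExtendedProtection;

// Hypothetical test helper: exposes an arbitrary byte buffer as a ChannelBinding.
internal sealed class FakeChannelBinding : ChannelBinding
{
    private readonly int _size;

    public FakeChannelBinding(byte[] data)
    {
        _size = data.Length;
        // Copy the test data into unmanaged memory owned by this SafeHandle.
        SetHandle(Marshal.AllocHGlobal(data.Length));
        Marshal.Copy(data, 0, handle, data.Length);
    }

    public override int Size => _size;

    protected override bool ReleaseHandle()
    {
        Marshal.FreeHGlobal(handle);
        return true;
    }
}
```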
@dotnet/dnceng The outerloop pipeline seems to be reporting lots of failures. For example: https://dev.azure.com/dnceng-public/public/_build/results?buildId=498370&view=logs&j=631ed29b-f0dd-5b15-38a0-c6c555556dea&t=b9941189-b5a8-5df3-6b3a-0964fabb3acd
Individual work items don't indicate a test failure (e.g. https://helixre107v0xdcypoyl9e7f.blob.core.windows.net/dotnet-runtime-refs-pull-95898-merge-dcd3063db7254525ba/System.IO.FileSystem.DriveInfo.Tests/1/console.61bb75be.log?helixlogtype=result), but the log seems to always end with:
Can you please have a look?
@rzikm We haven't changed anything in a while that should impact this aspect of testing. Is […]?
I am not aware of any change on our side; maybe @carlossanlop would know. Looking at recent runs, it probably started at the beginning of December, and I have seen it so far only on […]
I made changes in those failure reporting tools, as you can see from the logs you shared. But the problem was not in gen-debug-dump-docs.py; it was in XUnitLogChecker:
I think there might have been a fix around this area after I introduced support for this tool, because I did validate against outerloop. It seems that in this queue the tool is not getting added to the set of payloads that get extracted on the Helix machines. I opened an issue to investigate it: #96035. cc @ivdiazsa @mangod9 @JulieLeeMSFT @ericstj
Thanks for reporting it, @rzikm.
Fixes #95725
Tested on linux-x64 with Managed NTLM against Windows Server 2022 21H2 with Extended Protection set to Required.
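For context, a hedged sketch of how the client side of such a scenario could wire the TLS endpoint channel binding into the NTLM handshake; the helper name, sslStream, and spn are assumptions for illustration, not code from this PR:

```csharp
using System.Net.Security;
using System.Security.Authentication.ExtendedProtection;

internal static class NtlmOverTlsSketch
{
    // sslStream is assumed to be an already-authenticated SslStream to the target
    // server; spn is the service principal name, e.g. "HTTP/server.contoso.com".
    public static NegotiateAuthentication CreateAuthContext(SslStream sslStream, string spn)
    {
        // Endpoint channel binding (RFC 5929 "tls-server-end-point") of the TLS connection.
        ChannelBinding? binding = sslStream.TransportContext.GetChannelBinding(ChannelBindingKind.Endpoint);

        return new NegotiateAuthentication(new NegotiateAuthenticationClientOptions
        {
            Package = "NTLM",   // request NTLM explicitly
            TargetName = spn,
            Binding = binding,  // feeds the channel bindings hash in the NTLMv2 response
        });
    }
}
```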