Pester 5.2.0 is finally here! 🥳🥳🥳
First off, thanks to all the contributors, especially @fflaten, who responded to a ton of issues, made multiple PRs, helped me diagnose problems, and checked my fixes. Thank you!
Code coverage
Coverage report is back, on screen and on the result object
The theme of this release was Code Coverage. I finally fixed the coverage report, which is now printed to the screen when the Detailed (or Diagnostic) output level is specified, and it is always attached to the result object when CodeCoverage is enabled. The CodeCoverage data are attached to the result object as well, in case you want to do further processing on them.
Performance is better
I focused on performance as well: all breakpoints are now set in one place, making it 50% faster in my tests, but I would love to see your numbers. There is also a new option, CodeCoverage.SingleHitBreakpoints, which removes each breakpoint as soon as it is hit, lowering the overhead in PowerShell. This option is enabled by default and makes the execution a bit faster as well.
But not as great as it can be
I did some new research and have a proof of concept that uses my new Profiler to do code coverage almost as fast as running without it. This will become a new experimental option soon and should work on all versions of PowerShell that Pester supports. I will announce more details later.
I also implemented CodeCoverage using the still-unreleased PowerShell profiler, which I started about half a year ago and which @iSazonov has been working on (PowerShell/PowerShell#13673). Once (or if) it is merged and released, Pester is ready to start using it (#1884).
You can specify your desired code coverage percent
Using the option CodeCoverage.CoveragePercentTarget you can now specify the desired code coverage. The default is 75%. This has only a visual effect for now: the summary shows as a green message in the output when the target is met, or as a red one when it is not.
(See it in the gif below)
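A minimal sketch of setting the option (the value 90 is just an example):

```powershell
# Raise the coverage target from the default 75% to 90%.
$config = New-PesterConfiguration
$config.CodeCoverage.Enabled = $true
$config.CodeCoverage.CoveragePercentTarget = 90
Invoke-Pester -Configuration $config
```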
VSCode + Pester CodeCoverage are now friends
Using CodeCoverage in VSCode is very painful, and that's a shame. I added a new format based on JaCoCo that is catered especially to the Coverage Gutters extension in VSCode. This, plus some boilerplate code, enables you to easily see code coverage while developing in VSCode. The format is also compatible with the Azure DevOps coverage view.
Full DEMO of the feature is here: https://youtu.be/qeiy8fRMHf8?t=5697
And the code is here: https://gist.github.com/nohwnd/efc339339dc328d93e0fe000249aea25
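A minimal sketch of enabling it, assuming the new format is selected through CodeCoverage.OutputFormat, and that coverage.xml is one of the file names Coverage Gutters watches by default:

```powershell
# Write coverage in the Coverage Gutters friendly format.
$config = New-PesterConfiguration
$config.CodeCoverage.Enabled = $true
$config.CodeCoverage.OutputFormat = 'CoverageGutters'
$config.CodeCoverage.OutputPath = 'coverage.xml'
Invoke-Pester -Configuration $config
```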
❗ -CI no longer enables CodeCoverage
Lastly, the -CI switch no longer enables CodeCoverage. At the moment there is no stable way to make CodeCoverage fast on all versions of PowerShell, so it is not a good default for beginners, or for build pipelines you want to set up quickly. If you don't mind the overhead, use this configuration to get the old functionality of the -CI switch:
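A minimal sketch of such a configuration, assuming the -CI behavior of a non-zero exit code, an NUnit test-result file, and code coverage:

```powershell
# Equivalent of the old -CI switch, with CodeCoverage explicitly re-enabled.
$config = New-PesterConfiguration
$config.Run.Exit = $true
$config.TestResult.Enabled = $true
$config.CodeCoverage.Enabled = $true
Invoke-Pester -Configuration $config
```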
There will also be a new option coming to take advantage of the Profiler-based code coverage. Please help me test it when it arrives, so we can make it the new default!
Related changes:
CI switch does not enable CodeCoverage by default #1911
Configuration
New-PesterConfiguration
A New-PesterConfiguration cmdlet is added, which returns [PesterConfiguration]::Default. It is recommended over the direct .NET call because it will auto-load Pester if it is not loaded already.
The help for this cmdlet is generated from the configuration object, so it includes all options that are present.
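Typical usage, with ./tests as a hypothetical test folder:

```powershell
# Get the default configuration object and adjust a few options.
$config = New-PesterConfiguration
$config.Run.Path = './tests'
$config.Output.Verbosity = 'Detailed'
Invoke-Pester -Configuration $config

# The generated help lists every available option:
Get-Help New-PesterConfiguration -Full
```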
Throw on failed run
A new option, Run.Throw, is added, which is similar to Run.Exit (-EnableExit). When enabled, Pester will throw when there are any failed tests. When both Run.Exit and Run.Throw are enabled, throwing an exception is preferred, because it is more informative and because it works better with VSCode, where the exit code is ignored.
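A minimal sketch, failing a build step via an exception instead of an exit code:

```powershell
$config = New-PesterConfiguration
$config.Run.Throw = $true
Invoke-Pester -Configuration $config   # throws here when any test failed
```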
Pester.dll version is checked on import
The DLL holding the configuration and other types is now versioned to match the version of Pester it is released with. There is also a minimum-version check that ensures you get an error on module load when you already have an older version of Pester loaded in the current session. This unfortunately cannot be avoided, and throwing a sensible error is better than failing at runtime because some property on an object was renamed.
Should
Should -Be for string
The useful arrow on Should -Be when comparing strings is back. I updated the implementation to show as much as it can based on how wide your window is, without wrapping the lines for big outputs. Notice that the difference in the last example is at index 985 of a string that is over 4000 characters long.
Mocking
Mocking was not the focus of the current sprint, but I made a lot of fixes there as well.
Cleanup
When you cancel a test run using Ctrl+C, Pester mock functions and aliases may stay in the session. On the next Invoke-Pester call they need to be cleaned up, to ensure stale mocks are not making your tests fail. Pester now looks for those stale mocks not just in Pester scope, but also in all currently loaded modules and in the user scope.
$PesterBoundParameters variable
In -MockWith and -ParameterFilter you can now use the $PesterBoundParameters variable, which holds all the bound parameters for that function call. This variable is the stand-in for $PSBoundParameters, which is not correctly populated inside mocks, and cannot be without breaking Mock debugging.
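A minimal sketch (the mocked command and message are arbitrary):

```powershell
Describe '$PesterBoundParameters' {
    It 'exposes the parameters the mock was called with' {
        # The mock body reads the bound parameters of this particular call.
        Mock Start-Sleep -MockWith { "asked to sleep $($PesterBoundParameters['Seconds'])s" }
        Start-Sleep -Seconds 5 | Should -Be 'asked to sleep 5s'
    }
}
```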
Logging
The diagnostic output when searching for mock behaviors is much improved. Calling a mock will show all the behaviors that are present for that command, and the reason why each is or is not used. There is also a list of all the behaviors whose parameter filters will be executed, and of the default behaviors. The log also shows more clearly the target module in which a mock was defined (in the log below it is module m). When a mock is in the script scope, $none is used to denote it in the log. The mock hook function has a clearer name, showing which command the mock is for.
In the code example below, mocks are defined in two different scopes (module m and the script scope):
```powershell
Invoke-Pester -Container (
    New-PesterContainer -ScriptBlock {
        Describe 'Mock logging' {
            It 'is more detailed' {
                Get-Module m | Remove-Module
                New-Module m -ScriptBlock {
                    function Invoke-Job ($ScriptBlock) {
                        Start-Job -ScriptBlock $ScriptBlock
                    }
                } | Import-Module

                Mock Start-Job -ModuleName m -MockWith { "default mock in module m" }
                Mock Start-Job -ModuleName m -MockWith {
                    "parametrized mock in module m"
                } -ParameterFilter { $Name -eq "hello-job" }
                Mock Start-Job -MockWith { "default mock in script" }

                # call Start-Job in script
                # this will call the Mock defined in script
                Start-Job -ScriptBlock { Write-Host "hello" }

                # call mock of Start-Job via Invoke-Job inside of module m
                # this will call the Mock defined in module m
                Invoke-Job -ScriptBlock { Write-Host "hello" }
            }
        }
    }
) -Output Diagnostic
```
This is the improved log:

```
Mock: Setting up default mock for m - Start-Job.
Mock: Resolving command Start-Job.
Mock: ModuleName was specified searching for the command in module m.
Mock: Found module m version 0.0.
Mock: Found the command Start-Job in a different module.
Mock: Mock does not have a hook yet, creating a new one.
Mock: Defined new hook with bootstrap function PesterMock_m_Start-Job_a611abe3-203b-42c7-b81f-668945eb29eb and aliases Start-Job, Microsoft.PowerShell.Core\Start-Job.
Mock: Adding a new default behavior to m - Start-Job.
Mock: Setting up parametrized mock for m - Start-Job.
[...]
Mock: Found the command Start-Job in a different module and it resolved to PesterMock_m_Start-Job_a611abe3-203b-42c7-b81f-668945eb29eb.
Mock: Mock resolves to an existing hook, will only define mock behavior.
Mock: Adding a new parametrized behavior to m - Start-Job.
[...]
Mock: Setting up default mock for Start-Job.
Mock: We are in a test. Returning mock table from test scope.
Mock: Resolving command Start-Job.
Mock: Searching for command Start-Job in the script scope.
Mock: Found the command Start-Job in the script scope.
Mock: Mock does not have a hook yet, creating a new one.
Mock: Defined new hook with bootstrap function PesterMock_script_Start-Job_7745c21c-4173-4c11-be80-7274dd7b93ec and aliases Start-Job, Microsoft.PowerShell.Core\Start-Job.
[...]
Mock: Found 0 behaviors in all parent blocks, and 3 behaviors in test.
Mock: Filtering behaviors for command Start-Job, for target module $null (Showing all behaviors for this command, actual filtered list is further in the log, look for 'Filtered parametrized behaviors:' and 'Filtered default behaviors:'):
Mock: Behavior is a default behavior from script scope, saving it:
Target module: $null
Body: { "default mock in script" }
Filter: $null
Default: $true
Verifiable: $false
Mock: Behavior is not from the target module $null, skipping it:
Target module: m
Body: { "parametrized mock in module m" }
Filter: { $Name -eq "hello-job" }
Default: $false
Verifiable: $false
Mock: Behavior is not from the target module $null, skipping it:
Target module: m
Body: { "default mock in module m" }
Filter: $null
Default: $true
Verifiable: $false
Mock: We are in a test. Returning mock table from test scope.
[...]
Mock: Filtered parametrized behaviors:
$null
Mock: Filtered default behavior:
Target module: $null
Body: { "default mock in script" }
Filter: $null
Default: $true
Verifiable: $false
Mock: Finding behavior to use, one that passes filter or a default:
Mock: { "default mock in script" } is a default behavior and will be used for the mock call.
Mock: Executing mock behavior for mock Start-Job.
[...] Another mock invocation
Mock: Behavior for Start-Job was executed.
Mock: Mock for Start-Job was invoked from block Process, resolving call history and behaviors.
[...]
Mock: Filtered parametrized behaviors:
Target module: m
Body: { "parametrized mock in module m" }
Filter: { $Name -eq "hello-job" }
Default: $false
Verifiable: $false
Mock: Filtered default behavior:
Target module: m
Body: { "default mock in module m" }
Filter: $null
Default: $true
Verifiable: $false
Mock: Finding behavior to use, one that passes filter or a default:
Mock: Running mock filter { $Name -eq "hello-job" } with context: Command = Write-Host "hello" , ScriptBlock = Write-Host "hello" .
Mock: Mock filter returned value 'False', which is falsy. Filter did not pass.
Mock: { "default mock in module m" } is a default behavior and will be used for the mock call.
Mock: Executing mock behavior for mock m - Start-Job.
Mock: Behavior for m - Start-Job was executed.
```
Mock behaviors and fallbacks
When you define multiple mocks for the same command, it can be very confusing which one will be used, especially when -ModuleName is involved. The current release simplifies the rules around -ModuleName.
To understand this better, a little recap:
Defining a mock means that an internal function called PesterMock_* is defined and an alias is created pointing to that function, e.g. the alias Start-Job pointing to the mock function for Start-Job; we call this setup a mock hook.
When you define multiple mocks for Start-Job, we only define one mock hook for Start-Job (per module), and add all the -MockWith scriptblocks to a table. When only one -ModuleName is involved, the rules for which one will be used are very simple (see the sketch after the list):
1. Order the behaviors starting from the one that was defined most recently.
2. Check if the behavior has a parameter filter, and run it.
3. Use the first behavior whose parameter filter matches.
4. Otherwise, check if there is a default behavior, and use it.
5. Otherwise, run the real command.
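A minimal sketch of these rules in action (Get-Date and the returned strings are arbitrary):

```powershell
Describe 'behavior ordering' {
    It 'prefers a matching parameter filter over the default' {
        Mock Get-Date -MockWith { 'default' }
        Mock Get-Date -MockWith { 'parametrized' } -ParameterFilter { $Format -eq 'yyyy' }

        Get-Date -Format 'yyyy' | Should -Be 'parametrized'  # filter matches, rule 3
        Get-Date | Should -Be 'default'                      # no filter matches, rule 4
    }
}
```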
But when you define mocks for Start-Job in multiple modules, e.g. one with -ModuleName m and one without it, Pester used to get really confusing.
This is because there is still just one table holding the -MockWith scriptblocks, and the rules for which one to choose were broken.
So here are the new rules (which should hopefully not break you):
All -MockWith scriptblocks are marked with a -ModuleName based on what the user specified or, if not present, based on the current module. The script scope is not an exception to this rule; it is considered to be a module with a $null name, and follows all the same rules.
This means that Mock Start-Job -ModuleName m -MockWith { "sb" } is the same as InModuleScope m { Mock Start-Job -MockWith { "sb" } }. Both -MockWith scriptblocks will be marked to belong to module m.
In previous versions it would seemingly work the same, but the -MockWith scriptblock would either 1) be marked as belonging to module m, if Start-Job was a function defined in module m, or 2) be marked as an any-scope behavior if Start-Job was defined elsewhere.
With that in place:
Parametrized behaviors (Mock with -MockWith and -ParameterFilter) are always attached to a module using the rule above, and are never invoked outside of it. Mock hooks from a different module or from the script scope will never choose such a behavior.
Default behaviors (Mock with -MockWith but without -ParameterFilter) are also always attached to a module using the rule above. A default behavior defined for module m will never be used for a mock hook defined in another module, or in the script scope.
With one exception: a default behavior from the script scope will be used by a mock hook when all the parametrized behaviors for that module failed the filter and there is no default behavior for that module. This is deliberate, to avoid introducing a breaking change for guard mocks such as Mock Remove-Item { throw "Don't call this" }. Without this exception, Mock Remove-Item -ParameterFilter { $false } -ModuleName m would fall back to calling the real Remove-Item in the new Pester version. This exception applies only to default script-scoped behaviors: parametrized behaviors from the script scope will never be considered for hooks in other modules, and default behaviors from other modules will never be used for the fallback.
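A minimal sketch of that fallback; module m and its Invoke-Cleanup function are hypothetical names:

```powershell
Describe 'guard mock fallback' {
    BeforeAll {
        Get-Module m | Remove-Module
        New-Module m -ScriptBlock {
            function Invoke-Cleanup { Remove-Item -Path '/tmp/does-not-matter' }
        } | Import-Module
    }
    It 'falls back to the script-scoped guard mock' {
        # Script-scope default behavior acts as the guard.
        Mock Remove-Item -MockWith { throw "Don't call this" }
        # Module m has only a parametrized behavior, and its filter never passes.
        Mock Remove-Item -ModuleName m -MockWith { 'never used' } -ParameterFilter { $false }

        # Inside m the filter fails, m has no default, so the script guard is used.
        { Invoke-Cleanup } | Should -Throw "Don't call this"
    }
}
```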
Related changes:
Fix searching for mock table when deep in tests #1856
In should Invoke, log output of parameter filter #1881
Fix mock behavior for Mock and Should -Invoke #1915
Execution
Script isolation
Each script now gets its own script scope during the run. This isolates test runs better and avoids leaking $script:-scoped variables among test scripts. The same isolation is used for ScriptBlock containers, during both the Discovery and Run phases.
Class metadata
Using custom attributes in functions that are defined in a PowerShell class would fail when you mocked the function, because of missing metadata. The metadata are now resolved in the correct scope, and mocking no longer fails.
Discovery
Discovery failures don't kill Pester
Discovery is now more integrated with Pester, and failures in files during discovery no longer have catastrophic effects. Instead, any failure goes into the result object and is reported as a container failure. This enables you to run the test files that you migrated successfully, and to gradually progress to the ones you did not migrate yet.
Discovery only
You can now run only discovery. There is a new option, Run.SkipRun. Use it together with Run.PassThru to get the result object with all the tests in your test files. All tests will be marked as NotRun, unless there is a discovery failure of course.
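A minimal sketch, with ./tests as a hypothetical path:

```powershell
# Discover tests without running them, then inspect the result object.
$config = New-PesterConfiguration
$config.Run.Path = './tests'
$config.Run.SkipRun = $true
$config.Run.PassThru = $true

$result = Invoke-Pester -Configuration $config
"$($result.TotalCount) tests discovered, $($result.NotRunCount) marked as NotRun"
```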
Pester in Pester
Framework authors may wish to run Pester in Pester. This is now possible again.
Related changes:
Report failures in discovery into result object #1898
Initialize state in Invoke-Pester and inherit it to children #1869
Development
The build is now done locally without merging all the code into a single module. This means you can edit the function files directly, instead of having to debug in Pester.psm1 and then change the source files.