
Temporal shenanigans #286
Merged: 17 commits merged into main from temporal-shenanigans on Aug 2, 2024
Conversation

jmafoster1
Contributor

Added in temporal causal estimation with treatment sequences.


github-actions bot commented Jul 25, 2024

🦙 MegaLinter status: ⚠️ WARNING

Descriptor Linter Files Fixed Errors Elapsed time
⚠️ PYTHON black 30 1 1.03s
✅ PYTHON pylint 30 0 4.71s

See detailed report in MegaLinter reports



codecov bot commented Jul 26, 2024

Codecov Report

Attention: Patch coverage is 93.79310% with 9 lines in your changes missing coverage. Please review.

Project coverage is 95.55%. Comparing base (5af464f) to head (e6284c5).
Report is 2 commits behind head on main.

Files Patch % Lines
causal_testing/testing/causal_test_adequacy.py 64.00% 9 Missing ⚠️
Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##             main     #286      +/-   ##
==========================================
- Coverage   95.62%   95.55%   -0.07%     
==========================================
  Files          22       23       +1     
  Lines        1530     1666     +136     
==========================================
+ Hits         1463     1592     +129     
- Misses         67       74       +7     
Files Coverage Δ
causal_testing/json_front/json_class.py 97.98% <100.00%> (ø)
causal_testing/specification/capabilities.py 100.00% <100.00%> (ø)
causal_testing/testing/causal_test_case.py 100.00% <ø> (+4.54%) ⬆️
causal_testing/testing/causal_test_outcome.py 96.92% <100.00%> (+0.09%) ⬆️
causal_testing/testing/estimators.py 92.28% <100.00%> (+3.01%) ⬆️
causal_testing/testing/causal_test_adequacy.py 87.17% <64.00%> (-11.13%) ⬇️

Continue to review full report in Codecov by Sentry.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b6ce637...e6284c5. Read the comment docs.

@jmafoster1 jmafoster1 requested a review from f-allian July 29, 2024 09:20
@jmafoster1 jmafoster1 marked this pull request as ready for review July 29, 2024 09:20
f-allian (Contributor) left a comment

@jmafoster1 Looks good so far. Seems like a really seamless integration. I'm currently not on my laptop so I'll do further tests later this evening / tomorrow.

Also, it might be worth requesting a review from @christopher-wild if he has some time.

Comment on lines +71 to +75
def total_time(self):
    """
    Calculate the total duration of the treatment strategy.
    """
    return (len(self.capabilities) + 1) * self.timesteps_per_intervention
Contributor

@jmafoster1 Out of curiosity, why is the total time defined as (n+1)*dt instead of n*dt? Do you assume the treatment strategy starts at t=dt instead of t=0?

Contributor (Author)

@f-allian, yes exactly. This is slightly geared towards the water case study, where there is a grace period of dt before the first intervention takes place, and another period of dt after the last intervention before we can observe the final outcome. A slightly more general solution might be to have an extra grace_period parameter and then have the total time be len(capabilities) * timesteps_per_intervention + grace_period, but I'll keep it as it is for now.
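The more general solution described above could be sketched roughly as follows. This is a minimal sketch only: the `TreatmentSequence` name, its fields, and the `grace_period` parameter are assumptions for illustration, not the actual implementation in `causal_testing/specification/capabilities.py`.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class TreatmentSequence:
    """Hypothetical treatment strategy with an explicit grace period."""

    capabilities: list
    timesteps_per_intervention: int
    grace_period: Optional[int] = None  # defaults to one intervention interval

    def total_time(self):
        # With the default grace period of one dt, this reduces to the
        # original (len(capabilities) + 1) * dt behaviour.
        grace = (
            self.timesteps_per_intervention
            if self.grace_period is None
            else self.grace_period
        )
        return len(self.capabilities) * self.timesteps_per_intervention + grace
```

With `grace_period` unset, three capabilities at dt = 5 give (3 + 1) * 5 = 20, matching the current code; an explicit `grace_period=2` would give 3 * 5 + 2 = 17.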


def test_repr(self):
    cap = Capability("v", 1, 0, 1)
    self.assertEqual(str(cap), "(v, 1, 0-1)")
Contributor

@jmafoster1 Should this be "(v, 1, 0, 1)" instead of "(v, 1, 0-1)"?

Contributor (Author)

I thought the 0-1 notation for stringing capabilities would be easier to read, then you know that the last element represents the duration of the capability. Otherwise, I'd be constantly forgetting what the last three elements represented!
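For context, a `__repr__` along these lines would produce the `0-1` notation being discussed. This is a hedged sketch only; the real `Capability` class in `causal_testing/specification/capabilities.py` may define its fields and string rendering differently.

```python
class Capability:
    """Hypothetical sketch: a variable held at a value over a time window."""

    def __init__(self, variable, value, start, end):
        self.variable = variable
        self.value = value
        self.start = start
        self.end = end

    def __repr__(self):
        # Render the window as "start-end" so the last element clearly
        # reads as the duration of the capability.
        return f"({self.variable}, {self.value}, {self.start}-{self.end})"
```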

@@ -77,7 +79,7 @@ class TestLogisticRegressionEstimator(unittest.TestCase):

    @classmethod
    def setUpClass(cls) -> None:
-       cls.scarf_df = pd.read_csv("tests/data/scarf_data.csv")
+       cls.scarf_df = pd.read_csv("tests/resources/data/scarf_data.csv")
Contributor

@jmafoster1 Why did this change to tests/resources/data? Do we have other subfolders in tests/resources?

Contributor (Author)

There was a folder called tests/data and one called tests/resources/data, both containing test data, so I thought I'd condense everything into one folder.

jmafoster1 (Contributor, Author)

> @jmafoster1 Looks good so far. Seems like a really seamless integration. I'm currently not on my laptop so I'll do further tests later this evening / tomorrow.
>
> Also, it might be worth requesting a review from @christopher-wild if he has some time.

Thanks @f-allian. @christopher-wild, I've requested a review, but please don't feel obliged if you are too busy

causal_testing/specification/capabilities.py (outdated review thread, resolved)
causal_testing/specification/capabilities.py (outdated review thread, resolved)
try:
    results.append(self.test_case.execute_test(estimator, None))
except LinAlgError:
    continue
Contributor

Would logging any warnings here be helpful? Is it okay if it fails to append results?

Contributor (Author)

I've added in the warnings, but it's very OK to continue. What's happening here is that we're resampling the data and recalculating the estimate. If you don't have a lot of data, some of those samples may fail to converge, or fail in other ways to produce an estimate. This indicates that you don't have enough data, so I've now added "number of successes" as a third dimension to causal test adequacy.
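The resample-and-skip pattern described above, with the warning added, might look roughly like this. A minimal sketch under stated assumptions: the `bootstrap_estimates` helper name and its signature are hypothetical, and only `execute_test` raising `LinAlgError` on a degenerate sample is taken from the diff.

```python
import logging

from numpy.linalg import LinAlgError

logger = logging.getLogger(__name__)


def bootstrap_estimates(test_case, estimators):
    """Re-estimate the causal effect on each resampled estimator,
    logging (rather than silently skipping) samples that fail."""
    results = []
    for estimator in estimators:
        try:
            results.append(test_case.execute_test(estimator, None))
        except LinAlgError:
            # A failure here usually means this resample did not have
            # enough data for the model to converge.
            logger.warning("Estimate failed to converge for a resample; skipping.")
            continue
    # The number of successes feeds into the adequacy metric as a
    # third dimension alongside the estimates themselves.
    return results, len(results)
```

If many resamples fail, the low success count itself signals that the underlying dataset is too small for a reliable adequacy judgement.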

causal_testing/testing/estimators.py (outdated review thread, resolved)
tests/specification_tests/test_capabilities.py (outdated review thread, resolved)
jmafoster1 (Contributor, Author)

If everyone's happy with this, please could one of you approve it so I can get it merged in?

christopher-wild (Contributor)

> If everyone's happy with this, please could one of you approve it so I can get it merged in?

Ah, sorry, I missed this! You can click "re-request review" next to our names in the top right to send a notification if we take too long.

@christopher-wild christopher-wild self-requested a review August 2, 2024 13:48
jmafoster1 (Contributor, Author)

Thanks @christopher-wild. Doing that feels very passive aggressive, though! 🙃

@jmafoster1 jmafoster1 merged commit 2b7042d into main Aug 2, 2024
14 of 16 checks passed
@jmafoster1 jmafoster1 deleted the temporal-shenanigans branch August 2, 2024 13:51
christopher-wild (Contributor)

Haha, I get what you mean, but that's what it's there for 😁!
