feat(rust, python): let "ambiguous" take "null" value #11606
Conversation
b6cd881 to c24cf27
Some(ambiguous) => datetime.0.try_apply(|timestamp| {
    let ndt = timestamp_to_datetime(timestamp);
    Ok(datetime_to_timestamp(convert_to_naive_local(
        &from_tz, &to_tz, ndt, ambiguous,
    )?))
}),

Some(ambiguous) => {
    let iter = datetime.0.downcast_iter().map(|arr| {
        let element_iter = arr.iter().map(|timestamp_opt| match timestamp_opt {
            Some(timestamp) => {
                let ndt = timestamp_to_datetime(*timestamp);
                let res = convert_to_naive_local(&from_tz, &to_tz, ndt, ambiguous)?;
                Ok::<_, PolarsError>(res.map(datetime_to_timestamp))
            },
            None => Ok(None),
        });
        element_iter.try_collect_arr()
    });
    ChunkedArray::try_from_chunk_iter(datetime.0.name(), iter)
},
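The element-wise arm above maps each timestamp to a naive datetime, converts it, and propagates both existing nulls and ambiguity results as options. A minimal Python sketch of the same idea, using only the stdlib (the helper names `is_ambiguous`, `convert_to_utc_or_null`, and `convert_column` are my own, not polars API; nonexistent/skipped times are out of scope here):

```python
# Element-wise local-to-UTC conversion where ambiguous wall-clock
# times become None (null) instead of raising, mirroring the Rust loop.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def is_ambiguous(ndt: datetime, tz: ZoneInfo) -> bool:
    # A wall-clock time is ambiguous if the two fold values give
    # different UTC offsets (it occurs twice around a DST fall-back).
    return (ndt.replace(tzinfo=tz, fold=0).utcoffset()
            != ndt.replace(tzinfo=tz, fold=1).utcoffset())

def convert_to_utc_or_null(ndt: datetime, tz: ZoneInfo):
    """Return the UTC instant, or None when the local time is ambiguous."""
    if is_ambiguous(ndt, tz):
        return None
    return ndt.replace(tzinfo=tz).astimezone(timezone.utc)

def convert_column(values, tz):
    # Propagate existing nulls, like the `None => Ok(None)` match arm.
    return [None if v is None else convert_to_utc_or_null(v, tz) for v in values]
```

For example, 01:30 on 2023-10-29 occurs twice in Europe/London (clocks fall back from 02:00 BST to 01:00 GMT), so it converts to None, while an unambiguous summer timestamp converts normally.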
Hmm, I'm seeing a slight perf regression here. It's small, but it looks real?
https://www.kaggle.com/code/marcogorelli/polars-timing?scriptVersionId=145820321
here:
- min: 34.58047724000062
- max: 34.82023904700054
- avg: 34.7417660610001 +/- 0.02811013567654737
main:
- min: 33.56916854799965
- max: 34.206052164999164
- avg: 33.85647739228547 +/- 0.08204153267322174
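Summary figures like the ones above can be produced from a list of repeated wall-clock timings; a small sketch (the sample values are made up, and the `+/-` here is the sample standard deviation, which may differ from the notebook's exact convention):

```python
# Summarize repeated benchmark timings as min/max/avg +/- spread.
from statistics import mean, stdev

def summarize(timings):
    return {
        "min": min(timings),
        "max": max(timings),
        "avg": mean(timings),
        "spread": stdev(timings),  # the "+/-" figure
    }

print(summarize([34.58, 34.70, 34.75, 34.82]))
```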
I don't know, is there a way to do this that might preserve performance?
Closing, as I don't think the extra complexity is warranted here; we can revisit if there's demand.
Suggested by @stinodego here, but also:
ambiguous='NaT'
from pandas. Demo:
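For context: pandas' `tz_localize(ambiguous='NaT')` sets ambiguous times to missing, while `'earliest'`/`'latest'` pick one of the two candidate instants. A stdlib-only sketch of what those two candidates are (the `fold` attribute selects between them; this is not pandas or polars API, just an illustration):

```python
# The two candidate UTC instants for an ambiguous wall-clock time:
# 'earliest'/'latest' choose one of them; 'NaT'/'null' would decline to choose.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/London")
wall = datetime(2023, 10, 29, 1, 30)  # occurs twice: once in BST, once in GMT

earliest = wall.replace(tzinfo=tz, fold=0).astimezone(timezone.utc)  # BST reading
latest = wall.replace(tzinfo=tz, fold=1).astimezone(timezone.utc)   # GMT reading
print(earliest)  # 2023-10-29 00:30:00+00:00
print(latest)    # 2023-10-29 01:30:00+00:00
```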