
Step -1.3: vis_cpu validation #37

Open
steven-murray opened this issue Oct 21, 2019 · 4 comments
Assignees
Labels
formal-test A formal Validation Test simcmp:fg:gleam Simulation Component: GLEAM simulator:pyuvsim Uses the pyuvsim simulator simulator:viscpu Uses the vis_cpu simulator status:accepted A formal test that has been accepted as valid, but not yet actively worked on
Milestone

Comments

@steven-murray
Contributor

This test should be a formal validation of vis_cpu against pyuvsim. Note that parts of this test will be done as unit-tests in hera_sim.

  • Simulation Component: point source, gaussian blob, GLEAM
  • Simulators: vis_cpu, pyuvsim
  • Pipeline Components: None
  • Depends on: Ideally, this depends on a semi-automatic notebook infrastructure for all -1 tests, being developed by @piyanatk. It also depends on Visibility simulators hera_sim#33 being merged.

Why this test is required

vis_cpu is a unique simulator, in that it is antenna-based instead of baseline-based. It could become useful for quicker simulations than other methods in the future. As such, it should be formally validated.

Summary

A brief step-by-step description of the proposed test follows:

  • Construct visibilities from increasing-complexity sky models, both with vis_cpu and pyuvsim.
  • Compare visibilities explicitly, and report on the level of absolute and relative error between simulators, especially as a function of frequency.
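As a rough sketch of the comparison step, assuming the two simulators' outputs have been reduced to per-frequency complex visibilities for a single baseline and LST (the `error_report` helper and the toy data below are hypothetical, not part of either simulator):

```python
import cmath
import math

def error_report(vis_a, vis_b):
    """Compare two per-frequency visibility lists and report error levels.

    vis_a, vis_b: lists of complex visibilities, one entry per frequency
    channel.  In the real test these would come from vis_cpu and pyuvsim
    outputs; here they are toy data.
    Returns (abs_err, rel_err) lists, one entry per channel.
    """
    abs_err = [abs(a - b) for a, b in zip(vis_a, vis_b)]
    rel_err = [abs(a - b) / abs(b) if b != 0 else math.inf
               for a, b in zip(vis_a, vis_b)]
    return abs_err, rel_err

# Toy data: two nearly-identical spectra differing by a small phase shift.
freqs = [100e6 + 1e6 * i for i in range(5)]            # Hz
vis_ref = [cmath.exp(2j * math.pi * f / 1e9) for f in freqs]
vis_test = [v * cmath.exp(1e-3j) for v in vis_ref]     # 1 mrad phase offset

abs_err, rel_err = error_report(vis_test, vis_ref)
for f, a, r in zip(freqs, abs_err, rel_err):
    print(f"{f / 1e6:.1f} MHz: abs={a:.2e}, rel={r:.2e}")
```

Reporting both absolute and relative error per channel makes frequency-dependent divergence between the simulators visible directly.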

Simulation Details

  • Freq. range:
  • Channel width:
  • Baseline/antenna configuration:
  • Total integration time:
  • Number of realisations:

Criteria for Success

  • Visibilities match to 1% between simulators across all frequencies, baselines and LSTs.
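A minimal sketch of how the 1% criterion might be evaluated, assuming matched lists of complex visibilities from the two simulators (`meets_criterion` and the toy data are hypothetical; the real test would loop over all frequencies, baselines and LSTs):

```python
TOLERANCE = 0.01  # the 1% criterion stated above

def meets_criterion(vis_a, vis_b, tol=TOLERANCE):
    """True if every visibility pair agrees to within `tol` relative error."""
    return all(abs(a - b) <= tol * abs(b) for a, b in zip(vis_a, vis_b))

good = [1 + 0j, 0.5 - 0.5j]
close = [v * 1.005 for v in good]   # 0.5% amplitude error: should pass
far = [v * 1.05 for v in good]      # 5% amplitude error: should fail

print(meets_criterion(close, good))
print(meets_criterion(far, good))
```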
@steven-murray steven-murray added the formal-test A formal Validation Test label Oct 21, 2019
@steven-murray steven-murray added simcmp:fg:gleam Simulation Component: GLEAM simulator:pyuvsim Uses the pyuvsim simulator simulator:viscpu Uses the vis_cpu simulator labels Oct 21, 2019
@steven-murray steven-murray added this to the H1C IDR2 milestone Oct 21, 2019
@steven-murray steven-murray added the status:accepted A formal test that has been accepted as valid, but not yet actively worked on label Oct 28, 2019
@Jackmastr

I've been using this PRISim vs pyuvsim memo as a starting point for visualizing the differences between the outputs of vis_cpu and pyuvsim. For obsparam_ref_1.1.yaml I'm seeing very similar results to the ones in the memo, where the "uvw" positions are close to the reference except for a small phase shift:

[images omitted]

However, the visibilities themselves appear to be off by an appreciable amount:

[image omitted]

Worse, with obsparam_ref_1.2_uniform.yaml the uvw positions don't seem to match at all and I'm not sure why:

[image omitted]

The uvw positions in vis_cpu aren't changing as they are supposed to for Ntimes > 1.

@steven-murray
Contributor Author

This is great work!

I think it makes sense that vis_cpu should give different visibilities than pyuvsim in some regimes: it is, after all, an approximation. We just need to identify the regimes in which it performs well, and how well.

I might be wrong, but I recall that vis_cpu works differently: its uvws don't change with time; instead, the sky rotates. That would explain the last plot, so you'll only be able to compare visibilities directly in that reference simulation.

@aelanman
Contributor

aelanman commented Apr 2, 2020

Internally, pyuvsim considers the sky to be moving and the baselines to be fixed. Any change in uvws over time can only be due to phasing. Depending on what file format you're using, that could explain the difference in UVWs. The uvfits format requires data to be phased, so the UVWs will change with time.

If this is the issue, you can try reading both the vis_cpu and pyuvsim data with UVData, then applying unphase_to_drift to both.
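To illustrate why phased UVWs change with time while the physical baseline stays fixed, here is a sketch of the standard synthesis-imaging rotation from a fixed equatorial baseline vector to (u, v, w) at two hour angles (`uvw_from_xyz` is an illustrative helper, not part of pyuvdata or either simulator):

```python
import math

def uvw_from_xyz(xyz, ha, dec):
    """Project a fixed baseline vector (equatorial XYZ, metres) onto
    (u, v, w) toward a phase centre at hour angle `ha` and declination
    `dec` (both in radians).  Standard rotation, sketched for illustration.
    """
    x, y, z = xyz
    u = math.sin(ha) * x + math.cos(ha) * y
    v = (-math.sin(dec) * math.cos(ha) * x
         + math.sin(dec) * math.sin(ha) * y
         + math.cos(dec) * z)
    w = (math.cos(dec) * math.cos(ha) * x
         - math.cos(dec) * math.sin(ha) * y
         + math.sin(dec) * z)
    return u, v, w

baseline = (100.0, 0.0, 0.0)        # one fixed 100 m baseline
dec = math.radians(-30.0)

# For phased data, the UVWs change as the hour angle advances with time;
# for unphased (drift-scan) data the baseline vector itself is the constant.
uvw_t0 = uvw_from_xyz(baseline, 0.0, dec)
uvw_t1 = uvw_from_xyz(baseline, math.radians(15.0), dec)  # one hour later
print(uvw_t0)
print(uvw_t1)
```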

@piyanatk

piyanatk commented Oct 14, 2021

To add to the list of things to test:

The new implementation in vis_cpu (see HERA Memo #098) applies a correction to the source RA/Dec, given a reference time, to improve the source position accuracy in the Alt-Az frame.

We should check how tolerant this correction is to the duration of the observation. That is, given a single reference time t_ref, compare the accuracy of the output visibilities from vis_cpu simulations with observation durations of 5 minutes, 1 hour, ..., and 24 hours against the equivalent pyuvsim simulations.

Phil Bull suggested that this test could be constructed by modifying one of the hera_sim unit tests.

The outcome of this test may be important for deciding how to split the validation simulations in time.
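A toy sketch of such a duration-tolerance scan, assuming (purely for illustration) that the residual source-position error after the single-reference-time correction grows linearly with time away from t_ref; the drift rate, baseline length and wavelength below are made-up numbers, not measured values:

```python
import math

# Assumed, illustrative numbers only:
DRIFT_RATE = math.radians(1.0 / 3600.0) / 3600.0  # ~1 arcsec/hour of residual drift
BASELINE_M = 100.0
WAVELENGTH_M = 2.0                                # ~150 MHz

def worst_phase_error(duration_s):
    """Rough worst-case visibility phase error (radians) for an
    observation of `duration_s` centred on t_ref, under the linear-drift
    assumption above."""
    dtheta = DRIFT_RATE * duration_s / 2.0  # largest offset from t_ref
    return 2.0 * math.pi * (BASELINE_M / WAVELENGTH_M) * dtheta

for label, dur in [("5 min", 300), ("1 hour", 3600), ("24 hours", 86400)]:
    print(f"{label}: ~{worst_phase_error(dur):.2e} rad")
```

In the real test, the y-axis would be the measured vis_cpu-vs-pyuvsim visibility error at each duration rather than this assumed linear model.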
