Quality Control Checks #17

Open
TomDonoghue opened this issue Sep 7, 2022 · 0 comments

We want to check our results (and the paired task results) in terms of quality, for example (rough sketches of some of these checks are included after the list):

  • are significant cells representative in terms of firing rate and general quality
    • e.g.: it's not just very low (or very high) firing cells that are significant
  • are significant cells well-powered and robust across trials
    • e.g.: some sessions have fewer trials - do these contribute a disproportionate number of cells? Should they be dropped?
    • double-checking consistency (not outlier trials driving things)
  • are significant cells found across sessions & subjects
    • e.g.: it's not just one unusual / specific session or subject contributing all the cells
    • Note: this also relates to comparing to behaviour (sessions with poor behaviour may not be representative / may have no cells - these could be dropped)
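
A minimal sketch of what the firing-rate and session/subject checks could look like, assuming the per-cell results are collected into a pandas DataFrame - the column names (`firing_rate`, `significant`, `session`, `subject`) are placeholders, not anything defined in this repo:

```python
# Rough sketch only: `cells` is assumed to be a pandas DataFrame with one row
# per cell and (hypothetical) columns 'firing_rate', 'significant' (bool),
# 'session', and 'subject'. None of these names come from the repo itself.
import pandas as pd
from scipy.stats import mannwhitneyu

def check_firing_rate_representativeness(cells: pd.DataFrame) -> None:
    """Compare firing rates of significant vs non-significant cells."""
    sig = cells.loc[cells['significant'], 'firing_rate']
    non = cells.loc[~cells['significant'], 'firing_rate']
    stat, pval = mannwhitneyu(sig, non)
    print(f"Significant cells:     median FR = {sig.median():.2f} Hz (n={len(sig)})")
    print(f"Non-significant cells: median FR = {non.median():.2f} Hz (n={len(non)})")
    print(f"Mann-Whitney U: U = {stat:.1f}, p = {pval:.3f}")

def check_session_subject_spread(cells: pd.DataFrame) -> None:
    """Check that significant cells are not concentrated in a few sessions / subjects."""
    for group in ('session', 'subject'):
        counts = cells[cells['significant']].groupby(group).size()
        print(f"\nSignificant cells per {group}:")
        print(counts.sort_values(ascending=False).to_string())
```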

Potential ToDos:

  • drop cells from our analyses
    • note: we currently only drop based on 1-back quality labels
  • drop sessions from our analyses
    • e.g.: sessions with too few trials / poor behaviour (see the sketch below)
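
A possible sketch for the session-level drop, again assuming pandas, with placeholder column names (`session_id`, `n_trials`, `accuracy`) and thresholds that would still need to be decided:

```python
# Rough sketch only: `sessions` is assumed to be a per-session summary table
# with (hypothetical) columns 'session_id', 'n_trials', and 'accuracy';
# the thresholds below are placeholders, not values decided in this issue.
import pandas as pd

MIN_TRIALS = 30      # placeholder minimum trial count
MIN_ACCURACY = 0.6   # placeholder minimum behavioural accuracy

def drop_low_quality_sessions(sessions: pd.DataFrame, cells: pd.DataFrame) -> pd.DataFrame:
    """Drop cells belonging to sessions with too few trials or poor behaviour."""
    keep = sessions.loc[
        (sessions['n_trials'] >= MIN_TRIALS) & (sessions['accuracy'] >= MIN_ACCURACY),
        'session_id',
    ]
    return cells[cells['session'].isin(keep)]
```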