Draft: Refactor catom to reduce size of unobserved atoms #240

Closed
frmdstryr wants to merge 3 commits into main from catom-poolmgr

Conversation

frmdstryr (Contributor) commented on Feb 1, 2025

@MatthieuDartiailh's discussion on "un-observable atoms" got me thinking. Currently there are 4 bytes of wasted space in the CAtom struct due to padding (on 64-bit systems), and even when an atom has no observers, the ObserverPool* field still takes 8 bytes.

This PR proposes a potential new structure that converts the ObserverPool* observers field into a uint32_t pool_index and packs it next to the existing bitfield. The new CAtom struct has no padding and is now 32 bytes (down from 40).
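For reference, a minimal sketch of the layout change on a 64-bit build (the PyObject header is expanded by hand, and the slots field name is illustrative rather than the exact catom.h declaration):

#include <cstdint>

// Before (40 bytes): the observer pointer plus a 4-byte bitfield
// leaves 4 bytes of tail padding.
struct CAtomOld {
    intptr_t ob_refcnt;    // PyObject_HEAD expands to a refcount...
    void*    ob_type;      // ...and a type pointer (16 bytes total)
    void*    slots;        // illustrative 8-byte member-storage field
    void*    observers;    // ObserverPool*, 8 bytes even when unused
    uint32_t bitfield;     // flags; followed by 4 bytes of padding
};

// After (32 bytes): the pointer becomes a 4-byte index that packs
// next to the bitfield, so the padding disappears as well.
struct CAtomNew {
    intptr_t ob_refcnt;
    void*    ob_type;
    void*    slots;
    uint32_t bitfield;     // now includes a has_observers flag
    uint32_t pool_index;   // slot in the ObserverPoolManager
};

static_assert( sizeof(CAtomOld) == 40, "assumes a typical 64-bit ABI" );
static_assert( sizeof(CAtomNew) == 32, "assumes a typical 64-bit ABI" );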

Since the ObserverPool* is gone, a new ObserverPoolManager, together with a new "has_observers" bitfield flag, manages the mapping from each atom's pool_index to its ObserverPool.
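A rough sketch of how such a manager can map pool_index to a pool while caching released pools for reuse (names and details are illustrative; the PR's actual implementation may differ):

#include <cstdint>
#include <vector>

struct ObserverPool { /* observer storage elided */ };

// Owns every pool; atoms refer to a pool by index instead of by pointer.
class ObserverPoolManager {
public:
    // Hand out a pool index, preferring a previously released slot so the
    // pool object is reused instead of delete'd and new'd.
    uint32_t acquire_pool() {
        if ( !m_free_slots.empty() ) {
            uint32_t index = m_free_slots.back();
            m_free_slots.pop_back();
            return index;  // m_pools[index] is still allocated
        }
        m_pools.push_back( new ObserverPool() );
        return static_cast<uint32_t>( m_pools.size() - 1 );
    }

    ObserverPool* get_pool( uint32_t index ) { return m_pools[index]; }

    // The atom clears its has_observers flag; the pool stays cached here.
    void release_pool( uint32_t index ) { m_free_slots.push_back( index ); }

private:
    std::vector<ObserverPool*> m_pools;      // the moved 8 bytes per observed atom
    std::vector<uint32_t>      m_free_slots; // the extra 4 bytes per free pool
};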

Since it reuses ObserverPools (instead of delete & new), it is quite a bit faster at adding/removing observers; however, the observer pool manager currently never shrinks.

So atoms with no observers save 8 bytes, but atoms with observers cost 4 more bytes, if my math is right (the pointers just move into the ObserverPoolManager, and the extra free-slot entry takes 4 bytes per pool). A better pool manager could eliminate the free-slots vector and make this zero-cost, if anyone has ideas? One possibility is sketched below.
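One well-known way to eliminate the separate free-slots vector (a slot-map-style sketch, not anything in this PR): thread the free list through the vacated slots themselves, so a freed entry's own storage doubles as the link to the next free entry. The trade-off in this naive version is that pools are deleted on release instead of cached for reuse:

#include <cstdint>
#include <vector>

struct ObserverPool { /* observer storage elided */ };

class SlotMapPoolManager {
    static constexpr uint32_t npos = 0xFFFFFFFFu;

    // A slot holds either a live pool pointer or, while free, the index
    // of the next free slot: the free list costs no extra storage.
    union Slot {
        ObserverPool* pool;      // valid while the slot is in use
        uint32_t      next_free; // valid while the slot is on the free list
    };

    std::vector<Slot> m_slots;
    uint32_t m_free_head = npos;

public:
    uint32_t acquire_pool() {
        if ( m_free_head != npos ) {
            uint32_t index = m_free_head;
            m_free_head = m_slots[index].next_free;  // pop the free list
            m_slots[index].pool = new ObserverPool();
            return index;
        }
        m_slots.push_back( Slot{ new ObserverPool() } );
        return static_cast<uint32_t>( m_slots.size() - 1 );
    }

    void release_pool( uint32_t index ) {
        delete m_slots[index].pool;              // no cached reuse here
        m_slots[index].next_free = m_free_head;  // push onto the free list
        m_free_head = index;
    }

    ObserverPool* get_pool( uint32_t index ) { return m_slots[index].pool; }
};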

Just throwing this out there for discussion; there are probably better ways to do it.

frmdstryr force-pushed the catom-poolmgr branch 6 times, most recently from aa31398 to ddedea6 on February 1, 2025 at 14:14
codecov bot commented on Feb 1, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.67%. Comparing base (0750392) to head (b118b23).
Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #240   +/-   ##
=======================================
  Coverage   97.67%   97.67%           
=======================================
  Files          24       24           
  Lines        1074     1074           
  Branches      162      162           
=======================================
  Hits         1049     1049           
  Misses         12       12           
  Partials       13       13           

frmdstryr (Contributor, Author) commented

Well, idk why it says sizeof(ObserverPool*) is 24 bytes; it's 8 on my computer...

MatthieuDartiailh (Member) commented

On an unrelated note, we will have to move away from global statics (subinterpreters and free-threaded Python).
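For context, the usual fix is to move module-level globals into per-module state, so each (sub)interpreter importing the extension gets its own copy; a minimal sketch using the stock CPython API (the manager field and function names are illustrative):

#include <Python.h>

struct ObserverPoolManager;  // stands in for whatever is a global today

// Per-module state replaces a file-scope static.
struct ModuleState {
    ObserverPoolManager* manager;
};

static PyModuleDef catom_module = {
    PyModuleDef_HEAD_INIT,
    "catom",               // m_name
    nullptr,               // m_doc
    sizeof( ModuleState ), // m_size: CPython allocates state per module object
    nullptr,               // m_methods
};

// A module-level function receives the module as `self`, so the state is
// always reachable without touching a global:
static PyObject* example_method( PyObject* module, PyObject* /*args*/ )
{
    auto* state = static_cast<ModuleState*>( PyModule_GetState( module ) );
    (void)state;  // ...use state->manager instead of a global static...
    Py_RETURN_NONE;
}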

frmdstryr (Contributor, Author) commented

Interesting... will have to do some more reading. From a quick scan I can't see how subinterpreters improve anything (given their examples show pickling objects between them).

frmdstryr (Contributor, Author) commented on Feb 2, 2025

I realize this PR is likely not going anywhere, but I tried changing the ObserverPoolManager to store the pools by value:

std::vector<ObserverPool> m_pools;

which should save another 8 bytes per observed atom... but doing so causes segfaults when running the example from #239 (comment) for reasons I can't explain. Yet it passes all the atom tests and enaml tests...

Edit: Figured it out from the C++ docs: when emplace_back causes a reallocation, all iterators, pointers, and references into the vector are invalidated...
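A minimal repro of that invalidation, independent of atom (nothing here is from the PR; it just demonstrates the vector behavior):

#include <cstdio>
#include <vector>

struct ObserverPool { int data = 42; };

int main()
{
    std::vector<ObserverPool> pools;
    pools.reserve( 1 );               // capacity == 1
    pools.emplace_back();
    ObserverPool* first = &pools[0];  // pointer into the vector's buffer
    pools.emplace_back();             // exceeds capacity: reallocation
    // `first` now dangles; an ObserverPool* or iterator cached anywhere
    // (e.g. by an atom mid-notification) is invalid, and dereferencing it
    // is undefined behavior, which can surface as an intermittent segfault.
    std::printf( "before: %p  after: %p\n",
                 static_cast<void*>( first ), static_cast<void*>( &pools[0] ) );
    return 0;
}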

frmdstryr (Contributor, Author) commented

This will just cause more issues going forward due to the upcoming changes mentioned by Matthieu.

frmdstryr closed this on Feb 11, 2025