
buffering updates to ygm::container types #117

Open
bwpriest opened this issue Dec 22, 2022 · 1 comment

Comments

@bwpriest
Member

I am working with a use case where I create a ygm container, e.g. ygm::container::map<std::size_t, std::vector<float>>, from a series of updates to the container. Many updates may affect the same key-value pair; each update creates an std::vector<float> and adds it element-wise to the vector stored under that key. I first implemented this workflow naïvely, sending a separate message for each update, but this scales very poorly as the number of keys gets large. I found that buffering the updates locally in an std::unordered_map<std::size_t, std::vector<float>> and sending only one off-rank message per off-rank key resulted in much better scaling.

Would it be worthwhile for us to consider adding an async_buffered_visit method to our key-value store containers, such as ygm::container::map and ygm::container::array? Such a method could be patterned on async_visit while accepting an additional lambda that "combines" two visits to the same key. Doing this in a fully general way might be challenging, however.

@steiltre
Collaborator

Introduced a low-level API for this issue in the form of a ygm::container::reducing_adapter in PR 129. Future PRs will address the functionality and APIs expected to be used by most users.
