In a perfect world we'd probably want to run these against every commit, but I imagine they may take a while to run and I don't want to affect the velocity of getting things into main. Maybe a compromise is that we spin up an environment once a day and run the tests against all new commits?
Apart from actually writing the benchmarks, we need to decide on a few things:
- How often do we run them? I suggest once a day, as mentioned above.
- Where do we run them? I think we may want to spin up a dedicated environment in EC2 so that the results are consistent.
- Where do we store results? Since this is an open source project, the results should ideally be public. Perhaps we can upload them to a wiki / docs area in this repo?
I think what you suggest is a good start. We want the benchmarks for a couple of reasons:
- Guard against performance regressions.
- Have benchmarks available as part of the public documentation for the repository.
I suggest running the benchmarks as a separate workflow that is automatically run on changes to main and that can also be invoked manually on branches.
A consistent environment, in terms of hardware and probably also software (maybe run the benchmarks in a container), is a must too.
Results could be uploaded to object storage and pulled from there into our docs.
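For concreteness, a minimal sketch of such a workflow might look like the following. The file name, job layout, container image, `make bench` target, and S3 bucket are all assumptions for illustration, not existing pgroll tooling:

```yaml
# .github/workflows/benchmark.yml (hypothetical file name)
name: Benchmarks

on:
  push:
    branches: [main]    # run automatically on changes to main
  workflow_dispatch:    # allow manual runs on branches
  # Alternatively (or additionally), a daily cron as suggested above:
  # schedule:
  #   - cron: '0 4 * * *'

jobs:
  benchmark:
    runs-on: ubuntu-latest
    # Running inside a fixed container image pins the software environment;
    # hardware consistency would still need a dedicated runner (e.g. EC2).
    container: golang:1.22
    steps:
      - uses: actions/checkout@v4
      # Placeholder: the benchmark suite and this make target don't exist yet.
      - run: make bench
      # Publish results to object storage so the docs can pull them in.
      # The bucket name is an assumption; this step also assumes the AWS CLI
      # is available in the image.
      - run: aws s3 cp bench-results.json "s3://pgroll-benchmarks/${GITHUB_SHA}.json"
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1
```

Keying the uploaded file on the commit SHA would make it straightforward for the docs to plot results per commit over time.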
Gather benchmark data for the following parts of `pgroll`:

- `UPDATE`-heavy tables?
- The `read_schema` query, run on every DDL statement to capture 'inferred' migrations.

Having these benchmarks in place would allow us to measure performance improvements over time and avoid regressions.
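Since pgroll is written in Go, these could use the standard `testing.B` machinery. A minimal sketch of the schema-read benchmark is below; the package name, connection string, and placeholder query are illustrative, and a real benchmark would call pgroll's own schema-read code against a populated database:

```go
package benchmarks

import (
	"context"
	"database/sql"
	"testing"

	_ "github.com/lib/pq"
)

// BenchmarkReadSchema measures the cost of the schema-introspection
// query that runs on every DDL statement.
func BenchmarkReadSchema(b *testing.B) {
	// Hypothetical local benchmark database.
	db, err := sql.Open("postgres", "postgres://postgres:postgres@localhost:5432/bench?sslmode=disable")
	if err != nil {
		b.Fatal(err)
	}
	defer db.Close()

	ctx := context.Background()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		// Stand-in for the real read_schema query.
		if _, err := db.ExecContext(ctx, "SELECT 1"); err != nil {
			b.Fatal(err)
		}
	}
}
```

Run with something like `go test -bench=ReadSchema -benchtime=10s ./...`; the `-benchtime` flag trades run duration for more stable numbers, which matters if we compare results across commits.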