Render templates in parallel. #3277

Open · fqueze wants to merge 1 commit into main
Conversation

@fqueze (Contributor) commented May 5, 2024

For Eleventy sites with templates using shortcodes that do a lot of computation, rendering the templates is the longest part of the build, and it is currently done sequentially.

This PR makes rendering happen in parallel, limiting concurrency to the number of CPU cores.
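A rough sketch of the idea (not the actual code in this PR): render template jobs through a simple promise pool with at most one in-flight render per CPU core. The renderAll and renderOne names are placeholders for illustration.

const os = require("node:os");

// Render all templates with at most `limit` renders in flight at once.
async function renderAll(templates, renderOne) {
  const limit = os.availableParallelism?.() ?? os.cpus().length;
  const results = [];
  let next = 0;

  async function pump() {
    while (next < templates.length) {
      const index = next++; // claimed synchronously, so no two pumps share an index
      results[index] = await renderOne(templates[index]);
    }
  }

  // Start `limit` concurrent loops and wait for all of them to drain the queue.
  await Promise.all(Array.from({ length: limit }, pump));
  return results;
}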

Example profiles, using the 11ty-website site for benchmarking; all profiles are zoomed to the time range in which templates are rendered.

The savings are larger for my own website, where rendering my templates goes from 10 s to 3.4 s.

Rendering in parallel mostly helps when templates trigger async code that can perform some of its work off the main thread, which is the case when resizing images with eleventy-img. It would also make it possible to use workers to move heavy JS computations in shortcodes off the main thread (without this PR, moving work to a worker is pointless because the main thread is blocked waiting for the result).

@shivjm (Contributor) commented May 20, 2024

This would be extremely helpful!

@zachleat (Member) commented:

Hmm. I’m not opposed to this one! I do think the new transformOnRequest feature with eleventy-img may have rendered the performance gains here irrelevant though—but it could perhaps be useful for other use cases!

https://www.11ty.dev/docs/plugins/image/#optimize-images-on-request
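For reference, a minimal configuration sketch of the on-request transform described on that page, enabled only while the dev server is running (option names follow the linked docs; double-check the exact shape there):

const { eleventyImageTransformPlugin } = require("@11ty/eleventy-img");

module.exports = function (eleventyConfig) {
  eleventyConfig.addPlugin(eleventyImageTransformPlugin, {
    // Defer image optimization to request time only during `eleventy --serve`.
    transformOnRequest: process.env.ELEVENTY_RUN_MODE === "serve",
  });
};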

Anecdotally, I didn’t see any build performance benefits to this approach in 11ty.dev or zachleat.com. And https://github.com/11ty/eleventy-benchmark seems about 4% slower in one small test:

---------------------------------------------------------
Eleventy Benchmark (Node v20.12.2, 10000 templates each)
---------------------------------------------------------
Eleventy 3.0.0-alpha.10                                        
---------------------------------------------------------
.md: ... 3 runs:              
* Median: 8.07 seconds
* Median per template: 807 µs

---------------------------------------------------------
Eleventy 3.0.0-alpha.11                                        
---------------------------------------------------------
.md: ... 3 runs:              
* Median: 8.41 seconds (4%)
* Median per template: 841 µs (4%)

Can anyone else test this one?

@zachleat added the needs-discussion label on Jun 10, 2024.
@fqueze (Contributor, Author) commented Jun 10, 2024

> Hmm. I’m not opposed to this one! I do think the new transformOnRequest feature with eleventy-img may have rendered the performance gains here irrelevant though—but it could perhaps be useful for other use cases!
>
> https://www.11ty.dev/docs/plugins/image/#optimize-images-on-request

Nice! But if I understand correctly, this would only help for the local development server. I was also interested in improving performance when deploying to GitHub Pages.

eleventy-img is an obvious use case for building in parallel because I assume it's used on many Eleventy websites. The other (maybe more interesting?) use case I have in mind is to allow website authors to use workers in shortcodes. My website generates SVG charts by reading data files, and it would be nice to be able to move the computation of the charts to other threads.
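For illustration, a minimal sketch of the kind of shortcode this would enable, assuming a hypothetical chart-worker.js that builds the SVG from the passed data (this is not part of the PR):

const { Worker } = require("node:worker_threads");

// Run the heavy chart computation in a worker thread and resolve with its result.
function renderChartInWorker(data) {
  return new Promise((resolve, reject) => {
    const worker = new Worker("./chart-worker.js", { workerData: data });
    worker.once("message", resolve);
    worker.once("error", reject);
  });
}

module.exports = function (eleventyConfig) {
  eleventyConfig.addAsyncShortcode("chart", async function (data) {
    // With parallel rendering, other templates keep rendering while we await the worker.
    return renderChartInWorker(data);
  });
};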

> Anecdotally, I didn’t see any build performance benefits to this approach in 11ty.dev or zachleat.com. And https://github.com/11ty/eleventy-benchmark seems about 4% slower in one small test: […]


> Can anyone else test this one?

I'm not completely sure how I should run the benchmark to reproduce. I've tried doing this:

diff --git a/bench.sh b/bench.sh
index 450ba5d..93a3425 100755
--- a/bench.sh
+++ b/bench.sh
@@ -6,10 +6,10 @@ RUNS=3
 # "@11ty/[email protected]"
 # "@11ty/[email protected]"
 # "file:../eleventy"
-VERSIONS=("@11ty/[email protected]" "@11ty/[email protected]" "file:../eleventy")
+VERSIONS=("@11ty/[email protected]" "@11ty/[email protected]" "file:../eleventy")
 
 ALL_LANGS=("liquid" "njk" "md" "11ty.js")
-LANGS=("liquid" "njk" "11ty.js" "md")
+LANGS=("md")
 
 LINESEP="---------------------------------------------------------"
 nodeVersion=`node --version`

And then the result I see is:

eleventy-benchmark % ./bench.sh
---------------------------------------------------------
Eleventy Benchmark (Node v18.19.1, 5000 templates each)
---------------------------------------------------------
Eleventy 3.0.0-alpha.10                                        
---------------------------------------------------------
.md: ... 3 runs:              
* Median: 3.79 seconds
* Median per template: 758 µs

---------------------------------------------------------
Eleventy 3.0.0-alpha.10                                        
---------------------------------------------------------
.md: ... 3 runs:              
* Median: 3.72 seconds (-2%)
* Median per template: 744 µs (-2%)

---------------------------------------------------------
Eleventy 3.0.0-alpha.10                                        
---------------------------------------------------------
.md: ... 3 runs:              
* Median: 3.71 seconds (-3%)
* Median per template: 742 µs (-3%)

I'm not sure I'm doing this right.

@fqueze (Contributor, Author) commented Jun 10, 2024

Running the benchmark a second time gives me:

eleventy-benchmark % ./bench.sh
---------------------------------------------------------
Eleventy Benchmark (Node v18.19.1, 5000 templates each)
---------------------------------------------------------
Eleventy 3.0.0-alpha.10                                        
---------------------------------------------------------
.md: ... 3 runs:              
* Median: 3.68 seconds
* Median per template: 736 µs

---------------------------------------------------------
Eleventy 3.0.0-alpha.11                                        
---------------------------------------------------------
.md: ... 3 runs:              
* Median: 3.92 seconds (6%)
* Median per template: 784 µs (6%)

---------------------------------------------------------
Eleventy 3.0.0-alpha.10                                        
---------------------------------------------------------
.md: ... 3 runs:              
* Median: 3.7 seconds (0%)
* Median per template: 740 µs (0%)

which makes me wonder if the difference is within the noise level.

As an aside, make-md-files.sh seems very slow. Changing it so that it doesn't call cat and touch repeatedly drops the time to prepare the benchmark on my machine from 17 s to 4 s.

The change I made to make-md-files.sh is:

diff --git a/make-md-files.sh b/make-md-files.sh
index 9b128eb..4a23c32 100755
--- a/make-md-files.sh
+++ b/make-md-files.sh
@@ -1,9 +1,6 @@
 mkdir -p md/page/
+content=`cat src/content.md`
 for ((i=1; i<=$1; i++)); do
-	page="md/page/$i.md"
-
-	touch $page
-	content=`cat src/content.md`
   echo "---
 name: Zach $i
 index: $i
@@ -12,7 +9,7 @@ tags: name
 ---
 # {{ name }}
 ## $i
-$content" > $page
+$content" > "md/page/$i.md"
 done
 
-cp src/page.11tydata.json md/page/
\ No newline at end of file
+cp src/page.11tydata.json md/page/

@zachleat (Member) commented Jun 10, 2024

Happy to merge improvements to the script!

For the benchmark, I modify this https://github.com/11ty/eleventy-benchmark/blob/08d5a4942e6b65f145e374ec72f991d0eec3a231/bench.sh#L9 and file:../eleventy points to the local version of the 11ty/eleventy on the file system.

If "@11ty/[email protected]" "file:../eleventy" were not valid paths, the benchmark may just be bad in that it doesn’t report installation errors very well. It really is just a quick script.

@fqueze (Contributor, Author) commented Jun 10, 2024

> Happy to merge improvements to the script!

I guess if I wanted to make improvements, I would make the script output profiles using https://www.npmjs.com/package/11ty-fx-profiler to make it possible to visualize what took more or less time between different runs.

> For the benchmark, I modify this https://github.com/11ty/eleventy-benchmark/blob/08d5a4942e6b65f145e374ec72f991d0eec3a231/bench.sh#L9 and file:../eleventy points to the local version of the 11ty/eleventy on the file system.

Seems similar to what I've done. ../eleventy was my local fork with this PR applied.

Labels: needs-discussion (Please leave your opinion! This request is open for feedback from devs.)
3 participants