If you are using static HTML files for your docs, such as with Sphinx or many other doc generators, here is a chunk of code that will speed up loading of pages after the first one. If you’re using some other docs generator, the instructions will probably work with minimal adaptation.
- Create a `custom.js` file inside your `_static` directory, with the following contents:

```js
var script = document.createElement('script');
script.src = "https://unpkg.com/htmx.org@1.9.5";
script.integrity = "sha384-xcuj3WpfgjlKF+FXhSQFQ0ZNr39ln+hwjN3npfM9VBnUskLolQAcN80McRIVOPuO";
script.crossOrigin = 'anonymous';
script.onload = function() {
    var body = document.querySelector("body");
    body.setAttribute('hx-boost', "true");
    htmx.process(body);
};
document.head.appendChild(script);
```
- Add an item to your `html_js_files` setting in your Sphinx `conf.py`:
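Something like this should do it (a minimal sketch; it assumes the `custom.js` above lives in `_static/` and that `_static` is listed in `html_static_path`, which is the usual Sphinx layout):

```python
# conf.py
html_static_path = ["_static"]   # usually already present
html_js_files = ["custom.js"]    # paths here are relative to html_static_path
```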
Rebuild and you’re done.
What this script does is:
- Load the htmx library.
- If it loads successfully, add the `hx-boost` attribute to the `body` element.
- Initialise htmx on the page.
This means that htmx will intercept all internal links on the page: instead of letting the browser load them the normal way, it sends an AJAX request and swaps in the content of the new page. The browser never has to do a full page reload, saving precious milliseconds.
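To make that concrete, the end result is roughly equivalent to having written the attribute into the markup yourself (a minimal illustration; the link target is made up, not anything your docs generator actually emits):

```html
<!-- hx-boost on the <body> tells htmx to take over every same-origin link
     (and form) inside it. -->
<body hx-boost="true">
  <!-- Clicking this now issues an AJAX request; htmx swaps the fetched
       page's content into the current one and updates the address bar,
       instead of doing a full page load. -->
  <a href="quickstart.html">Quickstart</a>
</body>
```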
Actually, please don’t
I will provide reasons why you really shouldn’t use the code above, although it works almost perfectly. But first, a rant.
This post was inspired by Mux’s blog post on migrating 50,000 lines of React Server Components. It contains a nice overview of the history of web site architecture, including this quote:
Then, we started wondering: What if we wanted faster responses and more interactivity? Every time a user takes an action, do we really want to send cookies back to the server and make the server generate a whole new page? What if we made the client do that work instead? We can just send all the rendering code to the client as JavaScript!
This was called client-side rendering (CSR) or single-page applications (SPA) and was widely considered a bad move.
However, instead of then suggesting that perhaps we should retrace our steps, the article just plunges on and on, deeper and deeper into the jungle.
Now, this might all make sense if we are talking about a site with the highest possible needs in terms of user interactivity. But then I realised the article was about just their documentation site, not the main application.
Now, some docs sites are really fancy and do very clever interactive things. Mux’s, however, is not like that. The only interactive things I could find were:
- tabs – like you can get with something like sphinx-code-tabs, powered by a tiny bit of Javascript.
- their changelog page – which is more complicated, but whose essential functionality could again be implemented by a really small amount of Javascript added to a static page (sketched below). I should also note that their page is really pretty sluggish when you change the filters, much slower than an approach that just selectively hides parts of the page using DOM manipulation.
- search – search is definitely important, but I can’t see why it means the whole site needs to be implemented in React.
- a “Was this helpful” component – this could have been a small web component or something similar.
- a few fancy transitions in the sidebar.
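To give a sense of scale for that changelog example, here is roughly the amount of Javascript I have in mind. This is only a sketch under invented assumptions: each entry carries a hypothetical data-category attribute and the filters are plain checkboxes; Mux’s real markup will obviously differ.

```js
// Sketch only: assumes each changelog entry looks like
//   <article class="entry" data-category="video">...</article>
// and each filter control looks like
//   <input type="checkbox" class="filter" value="video" checked>
// These class/attribute names are invented for illustration.
function applyFilters() {
  var checked = Array.from(document.querySelectorAll(".filter:checked"))
    .map(function (box) { return box.value; });
  document.querySelectorAll(".entry").forEach(function (entry) {
    // Hide entries whose category isn't among the checked filters.
    entry.hidden = checked.indexOf(entry.dataset.category) === -1;
  });
}

document.querySelectorAll(".filter").forEach(function (box) {
  box.addEventListener("change", applyFilters);
});
```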
These are not the highly stateful pages that React was designed for. Maybe there are a few other things I didn’t find, but 95% of it could be handled using entirely static HTML, built by any number of simple docs generators, with tiny amounts of Javascript.
The only other thing I noticed is that page transitions generally had that instant feel an SPA can give you, and were noticeably faster than you would get with the static HTML solution I’m suggesting.
So, not to be beaten, I came up with the htmx solution above so I could match that speed.
Now, here’s why you shouldn’t use it:
- A typical docs page with Sphinx loads in a few hundred milliseconds, which is fine. Do you really need to shave that down to less than 50 so it feels “instant”? Do your users care?
- While it is truly a tiny fraction of the complexity of the React docs site Mux described in their post, you are still adding some significant complexity. Is it worth it?
- Are you sure it’s not going to interact badly with some Javascript on some page, maybe some future Javascript you will add?
- Have you considered all use cases – like the person who downloads your whole docs site using `wget --recursive` so they can browse offline? Answer: if they have no internet connection when they view the docs, it will actually work fine, because the htmx library won’t load at all. But if they are online, the htmx library will load, and then every internal link will break due to CORS errors. You just broke offline viewing. You could fix this very easily with an extra conditional in the script above (sketched below), but I’m making a point. Is there anything else that’s broken?

No prizes for guessing that while Sphinx-generated sites normally work perfectly with `wget --recursive` for offline viewing, docs.mux.com does not work well, to put it mildly. I also wasted hundreds of MB finding out, due to the vast amount of boilerplate every single HTML file has. Don’t be like them.
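For what it’s worth, the “extra conditional” I mentioned is about this much extra code. A sketch only; the point of this post is that you shouldn’t need it at all:

```js
// Sketch of the fix: only attempt the htmx "boost" when the docs are being
// served over HTTP(S). When someone browses a local copy (file://), such as
// one downloaded with `wget --recursive`, this skips htmx entirely and the
// plain static links keep working.
if (window.location.protocol !== "file:") {
    var script = document.createElement('script');
    script.src = "https://unpkg.com/htmx.org@1.9.5";
    script.integrity = "sha384-xcuj3WpfgjlKF+FXhSQFQ0ZNr39ln+hwjN3npfM9VBnUskLolQAcN80McRIVOPuO";
    script.crossOrigin = 'anonymous';
    script.onload = function() {
        var body = document.querySelector("body");
        body.setAttribute('hx-boost', "true");
        htmx.process(body);
    };
    document.head.appendChild(script);
}
```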
This is what you should actually do:
- recognise that you know exactly how to make your documentation pages load instantly, like an SPA, and could absolutely do it if you wanted to, still with a tiny fraction of the complexity of an actual SPA architecture, and with fixes for the issues I’ve mentioned, in about 15 minutes, then,
- don’t.
As protection against the FOMO and fashion that drives so much of web development, this attitude needs a catchy slogan, which is the kind of thing I’m not very good at. But as a first attempt, how about: SNOB driven development. SNOB means “Smugly kNOwing Better”. Or maybe that could be “Smugly NO-ing Better”.
Join me. Be an arrogant SNOB and just say No.