When we use a cache, our pages are served faster. But if a page is not yet cached, it must first be generated and stored in the cache before it can be displayed, so the first visit to an uncached page is usually slower.
As we know, there are several kinds of cache: object caching, PHP OPcache, page caching... Caching is a complicated topic, and in reality there are even more options than these.
But if we accept the idea that the first load of a website after clearing the cache will be slower, what we want is for that first visit not to come from a customer, or from a robot that will index our website and penalize it for the slow response time.
As I have mentioned on several occasions, I usually use Redis for object caching and Nginx for page caching, with excellent performance. I also usually put Cloudflare in front of the server as a CDN and firewall.
For that reason it is very easy to clear the cache directly from WordPress, but I don’t have an option to preload the cache, as WP Rocket has, for example. It could easily be done with a custom plugin, but if I can automate it without the server having to intervene, I always prefer that.
For the installation I create an ocp directory in my home directory, then download and unpack the executable:
$ mkdir ocp
$ cd ocp
$ wget https://cdn.pmylund.com/files/tools/ocp2/linux/ocp-2.7-amd64.tar.gz
$ tar -xf ocp-2.7-amd64.tar.gz
And now I can run ocp (with its full path, since I did not install it in /usr/local/bin). It needs the URL of an XML sitemap listing the pages to visit, which are therefore the pages that will be loaded into the cache.
Here we could point it at the sitemap of our SEO plugin (Yoast, Rank Math, etc.), but then we would be loading every page meant for indexing, and sometimes we do not want to cache that many pages, only the most important ones.
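Such a reduced sitemap can simply be maintained by hand. A minimal sketch of what the file could look like (the URLs below are placeholders, not my real pages):

```shell
# Hypothetical minimal sitemap listing only the pages we want pre-cached.
# Any tool that reads the sitemap protocol (like ocp) can consume it.
cat > sitemap-cache.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/blog/</loc></url>
  <url><loc>https://example.com/contact/</loc></url>
</urlset>
EOF
```
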
In the case of this website, I have an XML sitemap with the 10 pages that I want to load into the cache each time the complete cache is purged; its URL is https://tabernawp.com/sitemap-cache.xml.
To tell ocp to load these pages, we run the following command:
$ /home/carlos/ocp/ocp -c 3 -v=true https://tabernawp.com/sitemap-cache.xml
Here the -c parameter sets the concurrency (load 3 pages at a time) and -v=true displays the result on screen (verbose):
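Once the command works, scheduling it is straightforward; for example, warming the cache every night with cron. This is just an illustrative crontab entry (the schedule and log path are my own choices, adjust as needed):

```shell
# Illustrative crontab entry: warm the cache every night at 03:30,
# appending ocp's verbose output to a log file.
30 3 * * * /home/carlos/ocp/ocp -c 3 -v=true https://tabernawp.com/sitemap-cache.xml >> /home/carlos/ocp/ocp.log 2>&1
```
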
There are more options, such as checking on disk whether the cache file generated by some plugins already exists (if we run ocp on the server itself instead of remotely), or sending a custom User Agent header with the -ua parameter (to distinguish our requests or get past a firewall).
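For example, to make the warming requests easy to filter out of logs and analytics, we could pass a recognizable User Agent with -ua (the UA string here is arbitrary; it requires the ocp binary, so this is just a usage fragment):

```shell
# Send a recognizable User Agent so warming requests can be filtered
# out of server logs (the string "ocp-cache-warmer/2.7" is made up).
/home/carlos/ocp/ocp -c 3 -v=true -ua "ocp-cache-warmer/2.7" https://tabernawp.com/sitemap-cache.xml
```
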
If we have several client (or our own) sites where we have to perform this operation fairly frequently, it is very useful to create aliases, for example:
alias cache-taberna='/home/carlos/ocp/ocp -c 3 -v=true https://tabernawp.com/sitemap-cache.xml'
Since I create all these aliases with the cache- prefix, typing cache- in the terminal and pressing the Tab key shows me all the available options; and if we also use Oh My Zsh, it is as easy as cycling through the options to select the one we want.
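With several sites, the alias file ends up looking something like this (the client site names and URLs below are placeholders; only cache-taberna is real):

```shell
# Aliases sharing the "cache-" prefix so tab completion groups them.
# The two "client" entries are hypothetical examples.
alias cache-taberna='/home/carlos/ocp/ocp -c 3 -v=true https://tabernawp.com/sitemap-cache.xml'
alias cache-clienta='/home/carlos/ocp/ocp -c 3 -v=true https://client-a.example/sitemap-cache.xml'
alias cache-clientb='/home/carlos/ocp/ocp -c 3 -v=true https://client-b.example/sitemap-cache.xml'
```
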
Extra Trick 2
In my case I have a couple of Linux installations on WSL2 (Ubuntu and Kali Linux) in addition to the ones on the laptop, so keeping all the aliases up to date is a bit of a problem. What I do is keep a .ssh_aliases file in cloud storage (I use Dropbox for this) and, in my shell configuration, check whether it exists and include it if it does:
# include .ssh_aliases if it exists
if [ -f /mnt/d/Dropbox/Personal/ssh/.ssh_aliases ]; then
    . /mnt/d/Dropbox/Personal/ssh/.ssh_aliases
fi
This way all the aliases live in the same file (SSH connections, ocp and others), and running cache-taberna in any of my terminals gives the same result, whether in the WSL2 installations or on the laptop.
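If the Dropbox mount point differs between machines (as it does between WSL2 and a native Linux install), the include snippet generalizes to trying a few candidate locations; a sketch for the shell configuration file, with illustrative paths:

```shell
# Source the shared alias file from the first candidate location that
# exists (WSL2 Windows mount first, then a native Linux home path).
for dir in /mnt/d/Dropbox/Personal/ssh "$HOME/Dropbox/Personal/ssh"; do
    if [ -f "$dir/.ssh_aliases" ]; then
        . "$dir/.ssh_aliases"
        break
    fi
done
```
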
Added 11/11/2021 17:50: Thanks to a tweet from Fernando Puente, a real authority on caching, I read a bit more about the concept of cache warming and arrived at the article What is Cache Warming?. In it we can see some possible problems with “warming the cache”: in this case, with Cloudflare, we would only be creating the cached copy in the nearest CF data center, although it would still generate the Redis copy on my server.