Last updated December 21, 2023
A "content delivery network" or CDN is a set of "edge" servers physically located around the globe, connected with a high-speed network (usually a private, faster-than-general-internet network). Web traffic connects to this network of servers instead of connecting to the "origin" servers directly (our UBCMS servers). The CDN helps accelerate, filter, and secure web traffic. We will be using CDN services from Akamai, a leading provider.
A CDN helps us to:
The CDN is fully enabled for the entire UBCMS. No additional action is needed.
Several methods improve page performance: pages and assets are served from Akamai edge servers physically close to each visitor, cacheable content is delivered without a round trip to our origin servers, and fingerprinted static assets can be cached for long periods.
Pages may be cached for up to 10 seconds in the Akamai network, so a published change may take up to 10 seconds beyond the usual 30-60 seconds to replicate. In practice this adds no meaningful delay.
New pages are not affected by this delay at all, because no older version exists in any cache.
Images and other static assets (CSS, JS) are also unaffected by this delay, because they use a URL fingerprinting technique that effectively creates a new URL whenever the content changes.
No. In theory, more servers that could hold different versions of content are involved in serving pages (six publishers, two dispatchers, and now thousands of Akamai edge servers). However, Akamai makes only one request per URL to our servers and then handles synchronizing that content within its network.
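For illustration, here is a minimal sketch of the fingerprinting idea in TypeScript (not UBCMS's actual implementation; the helper name and URL pattern are hypothetical). A short hash of the file's content is embedded in its filename, so any change to the content produces a brand-new URL that no cache has ever seen.

    import { createHash } from "node:crypto";
    import { readFileSync } from "node:fs";

    // Hypothetical helper: embed a short content hash in an asset's filename,
    // e.g. "styles/site.css" -> "styles/site.3f2a1c9b.css".
    function fingerprint(path: string): string {
      const digest = createHash("sha256")
        .update(readFileSync(path))
        .digest("hex")
        .slice(0, 8);
      return path.replace(/(\.[^.\/]+)$/, `.${digest}$1`);
    }

Because the URL itself changes when the content changes, stale cached copies are simply never requested again.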
When a user connects to your site, they are routed to the closest of Akamai's thousands of global edge servers. If the Akamai network already holds a cached, unexpired copy of the page (or asset), that copy is served to the user without contacting our servers. If not, the Akamai network requests the page or asset from our servers.
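As a rough sketch of that decision (illustrative only; Akamai's real edge logic is far more sophisticated, and the 10-second page lifetime is taken from the cache rules discussed below):

    interface CacheEntry { body: string; expires: number; }
    const edgeCache = new Map<string, CacheEntry>();

    // Serve from the edge cache when possible; otherwise fetch from the origin.
    async function serve(url: string): Promise<string> {
      const hit = edgeCache.get(url);
      if (hit && hit.expires > Date.now()) {
        return hit.body; // cache hit: the origin is never contacted
      }
      const res = await fetch(url); // cache miss: one request back to the origin
      const body = await res.text();
      edgeCache.set(url, { body, expires: Date.now() + 10_000 });
      return body;
    }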
The Akamai network acts as a caching reverse proxy. UBCMS has always had a caching reverse proxy (the "dispatcher" servers), so adding another layer like this should work very smoothly. In particular, the rules about what can be cached, and for how long, are well established in the UBCMS dispatchers and will be extended to Akamai.
Here is a recap of the cache rules: HTML pages may be cached for up to 10 seconds, while fingerprinted static assets (images, CSS, JS) may be cached much longer, since their URLs change whenever their content does.
Note: caching HTML, even for only a very short time, adds a further benefit: if our servers are down, the Akamai network can still serve the expired copies (more than 10 seconds old) until our servers are reachable again.
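In standard HTTP terms, rules like these are usually expressed with Cache-Control response headers. The sketch below shows one plausible way an origin could state them; the header values are assumptions for illustration, and the actual UBCMS dispatcher and Akamai configuration may differ.

    import http from "node:http";

    http.createServer((req, res) => {
      if (/\.(css|js|png|jpe?g|gif|svg)$/.test(req.url ?? "")) {
        // Fingerprinted static assets: cache for a long time, since any
        // content change produces a new URL anyway. (Assumed values.)
        res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
      } else {
        // HTML pages: cache briefly, but let caches serve a stale copy
        // if the origin is unreachable (RFC 5861 stale-if-error).
        res.setHeader("Cache-Control", "public, max-age=10, stale-if-error=86400");
      }
      res.end("...");
    }).listen(8080);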
We do not expect any problems, but the kinds of potential issues we will be on highest alert for are:
Yes. Because Google Tag Manager, Google Analytics, and similar services use JavaScript code in the browser to actively send telemetry, they will work unchanged.
Analytics that depend on server logs, however, will continue to reflect only the demand on the server, and that demand will no longer be directly correlated with user activity on our web pages. By design, many requests to UBCMS URLs will no longer reach on-premises UB servers and thus will not appear in on-premises log files.
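For illustration, the browser-side pattern looks roughly like this (the endpoint and payload are hypothetical, not Google's actual API). The point is that the beacon travels directly from the visitor's browser to the analytics service, so neither Akamai caching nor our origin servers are involved.

    // Hypothetical telemetry beacon sent directly from the visitor's browser.
    const payload = JSON.stringify({
      page: location.pathname,
      referrer: document.referrer,
      t: Date.now(),
    });
    navigator.sendBeacon("https://analytics.example.com/collect", payload);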
To get the most out of the Akamai CDN, we recommend the following best practices.
Cacheable pages benefit from greater acceleration, and only cacheable pages will remain available if service in our UB datacenter is interrupted. We very strongly recommend that your home page and your most frequently used pages be cacheable. Full details on cache rules are available in this document.
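One simple way to spot-check a page is to fetch it and inspect its caching headers. The URL below is a placeholder; the header names are standard HTTP, but the values your pages return depend on the dispatcher and Akamai configuration.

    // Spot-check: does this page advertise itself as cacheable?
    const res = await fetch("https://www.buffalo.edu/some-site/some-page.html"); // placeholder URL
    console.log("Cache-Control:", res.headers.get("cache-control"));
    console.log("Age:", res.headers.get("age")); // a value > 0 suggests a cache served it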
Other techniques to make pages cacheable include:
If custom code on your site depends on external resources, make sure those resources are also loaded from high-speed, high-availability, high-security sources. For example, if you load scripts, CSS, images, or iframe content from your own server or another server at UB, consider moving these files into UBCMS if possible (static files can be managed in the DAM via the web interface or WebDAV). If you link to popular third-party JS libraries outside of UBCMS, look for CDN-backed sources (googleapis.com, cdnjs, jsDelivr, etc.).
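As a small sketch of that last suggestion (the library version and the local fallback path are hypothetical):

    // Load a popular library from a CDN-backed source, with a local fallback.
    function loadScript(src: string): Promise<void> {
      return new Promise((resolve, reject) => {
        const s = document.createElement("script");
        s.src = src;
        s.onload = () => resolve();
        s.onerror = () => reject(new Error(`Failed to load ${src}`));
        document.head.appendChild(s);
      });
    }

    loadScript("https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js")
      .catch(() => loadScript("/content/dam/mysite/js/jquery.min.js")); // hypothetical local path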