Last updated September 3, 2020
A "content delivery network" or CDN is a set of "edge" servers physically located around the globe, connected with a high-speed network (usually a private, faster-than-general-internet network). Web traffic connects to this network of servers instead of connecting to the "origin" servers directly (our UBCMS servers). The CDN helps accelerate, filter, and secure web traffic. We will be using CDN services from Akamai, a leading provider.
A CDN helps us to:
We are enabling the CDN on a domain-by-domain basis (a domain here means a hostname such as X.buffalo.edu). Switching the Akamai CDN on for a domain involves changing the DNS entry for that domain so browsers are directed to the Akamai servers instead of ours. We have already switched ubcms.buffalo.edu (the UBCMS help/support site) and will be switching more soon.
We also have a seamless, tested rollback procedure to switch off the CDN if problems are encountered.
We do not need you to do any preparation or testing for this change, and do not expect any problems. We will coordinate a time to switch each domain that will minimize disruption in case the unexpected occurs.
If you want to preview the change, you can simulate the DNS change just for your own computer by editing its "hosts" file: make your UBCMS domain resolve to the same address as ubcms.buffalo.edu.edgekey.net. For performance and redundancy, that name may resolve to many different IP addresses at different times, but any of them will work for testing.
So, specifically, you can test this out by adding entries like this to your hosts file:
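For example, first look up a current address for the edgekey name (e.g. with `nslookup ubcms.buffalo.edu.edgekey.net`), then map your own domain to it. The address and domain below are hypothetical placeholders; substitute the real values you looked up:

```
# hosts file entry (hypothetical values -- use the IP returned by
# nslookup ubcms.buffalo.edu.edgekey.net and your own UBCMS domain)
203.0.113.10  www.yourunit.buffalo.edu
```

Remember to remove the entry after testing, or your computer will keep using that fixed address even after Akamai's addresses rotate.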
Several methods improve page performance:
Pages may be cached for up to 10 seconds in the Akamai network, so a page update may take up to 10 seconds beyond the usual 30-60 seconds to replicate. In practice, this adds no meaningful delay.
New pages are not affected by this delay at all because there is no old version that will be cached.
Images and other static assets (CSS, JS) are also not affected by this delay because they use a URL fingerprinting technique that essentially makes a new URL any time the content changes.
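As an illustration, the general fingerprinting technique works by embedding a hash of the file's content in its URL, so any edit produces a brand-new URL and stale cached copies are simply never referenced again. This is a minimal Python sketch of the idea, not the exact scheme UBCMS uses:

```python
import hashlib

def fingerprint_url(path: str, content: bytes) -> str:
    """Embed a short hash of the file's content in its URL.

    The URL changes whenever the content changes, so caches can
    hold the old URL indefinitely without ever serving stale data.
    """
    digest = hashlib.sha256(content).hexdigest()[:12]
    stem, dot, ext = path.rpartition(".")
    # Insert the digest before the extension: site.css -> site.<hash>.css
    return f"{stem}.{digest}.{ext}" if dot else f"{path}.{digest}"
```

Identical content always yields the same URL, while any change to the bytes yields a different one, which is why cached assets never need to expire.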
No. In theory, more servers are now involved in serving pages (6 publishers, 2 dispatchers, and now thousands of Akamai edge servers), each of which could hold a different version of content. However, Akamai will only make one request for each URL to our servers and then handle synchronizing that content within its own network.
When a user connects to your site, they will be routed to the closest of Akamai's thousands of global edge servers. If the Akamai network has already seen a version of the page (or asset) that can be cached and is not expired, it will be served to the user without connecting them to our servers. If not, the Akamai network will request the page/asset from our servers.
The Akamai network acts as a caching reverse-proxy server. UBCMS has always had a caching reverse-proxy layer (the "dispatcher" servers), so adding another layer like this should work very smoothly. In particular, rules about what can be cached and for how long are well-established in the UBCMS dispatchers and will be extended to Akamai.
Here is a recap of the cache rules:
Note: the scheme of caching HTML, but only for a very short time, adds the benefit that if our servers are down, the Akamai network can still serve the expired copies (more than 10 seconds old) until our servers are reachable again.
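The short-TTL-plus-stale-fallback behavior described above can be sketched as a toy model. This is illustrative Python under simplified assumptions (a single cache, a callable origin), not Akamai's actual implementation; the 10-second TTL matches the cache rules above:

```python
import time

class EdgeCache:
    """Toy caching reverse proxy: serves fresh copies from cache,
    refetches expired ones, and falls back to a stale copy if the
    origin is unreachable."""

    def __init__(self, origin, ttl=10):
        self.origin = origin   # callable: url -> body (raises ConnectionError if down)
        self.ttl = ttl         # seconds a cached copy is considered fresh
        self.store = {}        # url -> (body, fetched_at)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        cached = self.store.get(url)
        if cached and now - cached[1] < self.ttl:
            return cached[0]            # fresh hit: origin is never contacted
        try:
            body = self.origin(url)     # miss or expired: go back to origin
            self.store[url] = (body, now)
            return body
        except ConnectionError:
            if cached:
                return cached[0]        # origin down: serve the expired copy
            raise
```

The key design point is the last branch: an expired copy is normally refreshed, but if the origin cannot be reached it is served anyway, which is why cacheable pages stay available during an outage.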
We do not expect any problems, but the kind of potential issues we will be on highest alert for are:
Also, analytics that depend on server logs will continue to reflect the demand on the server, but this demand will no longer be directly correlated with user activity on our web pages. By design, many requests to UBCMS URLs will no longer involve on-premise UB servers and will thus not be logged in on-premise log files.
To get the most out of the Akamai CDN, we recommend the following best practices.
Cacheable pages will benefit from greater acceleration, and only cacheable pages will remain available if service in our UB datacenter is interrupted. It is very strongly recommended that your home page and your most frequently used pages are cacheable. Full details on cache rules are available in this document.
Other techniques to make pages cacheable include:
If custom code on your site depends on external resources, make sure these resources are also loaded from high-speed, high-availability, high-security sources. For example, if you load scripts, CSS, images, or iframe content from your own server or another server at UB, you may want to move these files into UBCMS if possible (static files can be managed in the DAM via the web interface or WebDAV). If you link to popular third-party JS libraries outside of UBCMS, try finding CDN-backed sources (googleapis.com, cdnjs, jsdelivr, etc.).
Test for HTTPS compatibility (remove mixed content) to prepare for when UBCMS sites can be made HTTPS-only (more features and training coming soon). In addition to security benefits, only HTTPS connections can take advantage of the efficiency improvements in the HTTP/2 protocol.
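One rough way to spot mixed content is to scan a page's HTML for hard-coded http:// resource URLs. This is a simplified illustrative check (a regular expression over src/href attributes); your browser's developer-tools console gives the authoritative mixed-content report:

```python
import re

def find_mixed_content(html: str) -> list:
    """Return http:// URLs referenced in src/href attributes.

    On an HTTPS page these trigger mixed-content warnings or are
    blocked outright, so each hit should be switched to https://
    or to a protocol-relative/HTTPS-capable source.
    """
    pattern = r'(?:src|href)\s*=\s*["\'](http://[^"\']+)'
    return re.findall(pattern, html, re.IGNORECASE)
```

Run it over saved page source; an empty result for your key pages is a good sign they are ready for HTTPS-only serving.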