0x02 S3 Pretty URLs
Update 09/05/2021: Navigating the site works fine with S3, as TurboLinks reloads within the context of the original page. However, navigating directly to a blog post (e.g. from a link) caused S3 to serve the file as a download. So this post no longer applies (having 0x02.html chopped down to 0x02); Jekyll permalinks are used instead. Setting permalink: pretty and giving each post's slug a trailing / has rectified the issue. Now every post lives in its own directory (e.g. /posts/0x02/ with an index.html file inside) and can be linked correctly.
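Roughly, that setup looks like the sketch below. The title and path are just this post's example values; the important parts are permalink: pretty in _config.yml and the trailing slash on the post's permalink.

    # _config.yml
    permalink: pretty

    # Front matter of an individual post (illustrative values)
    ---
    title: "0x02 S3 Pretty URLs"
    permalink: /posts/0x02/   # trailing slash => written out as /posts/0x02/index.html
    ---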
This site is statically generated using Jekyll, and a GitHub Action automatically pushes the built site to an AWS S3 bucket. I've moved on from hosting my own server with NGINX to just using Cloudflare + S3, which is much simpler and much less work to maintain.
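The deploy step itself is nothing special. A workflow along these lines (the bucket name, region and action versions here are illustrative, not my exact file) builds the site and syncs it to the bucket:

    # .github/workflows/deploy.yml (illustrative sketch)
    name: Deploy to S3
    on:
      push:
        branches: [main]
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - uses: ruby/setup-ruby@v1
            with:
              bundler-cache: true   # install gems from Gemfile.lock and cache them
          - name: Build site
            run: JEKYLL_ENV=production bundle exec jekyll build
          - name: Sync to S3
            run: aws s3 sync _site/ s3://example-bucket --delete   # bucket name is illustrative
            env:
              AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
              AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
              AWS_DEFAULT_REGION: eu-west-2   # illustrative region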
But I had to redo how the pretty URLs work, since the old fix lived in the NGINX configuration file. With S3, I could not for the life of me work out the rewrite rules to do the same thing. My actual solution was writing a custom Jekyll plugin that runs after site compilation to nuke the ".html" from posts and pages.
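For context, the old NGINX fix was a rewrite of this general shape (a generic pretty-URL rule, not my exact old config):

    location / {
        try_files $uri $uri.html $uri/ =404;
    }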
The plugin code is below:
require 'fileutils'

# Remove the .html extension from rendered posts and pages so S3 can serve
# pretty URLs (e.g. /posts/0x02 instead of /posts/0x02.html).
# Production only.
def remove_html_ext(page)
  return unless Jekyll.env == 'production'

  ignore_files = ['index.html', '404.html']
  path = page.destination('/')

  # Skip files that must keep their extension, then drop ".html" from the rest.
  if path.include?('.html') && ignore_files.none? { |ign| path.include?(ign) }
    FileUtils.mv(path, path.sub(/\.html$/, ''))
  end
end

# Run after posts and pages have been written to disk.
Jekyll::Hooks.register :posts, :post_write do |post|
  remove_html_ext(post)
end

Jekyll::Hooks.register :pages, :post_write do |page|
  remove_html_ext(page)
end
The Jekyll::Hooks run after both posts and pages are written to disk. Put this file in the _plugins folder of your root directory and Jekyll will automatically pick it up and run it.
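One thing to note: the hook only renames files when Jekyll.env is 'production', so a local jekyll serve leaves the .html extensions alone. To see the renaming locally, build with JEKYLL_ENV=production bundle exec jekyll build.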
With this, I am able to maintain my pretty URLs on S3.