Host a Static Website on Google

And how to do it so your team can easily collaborate on new content!

In today’s world of fast-changing content on the web, you need an easy-to-use way to host and update your website.

This is the first part of a series in which we go from building a static website, hosting it on multiple Google buckets, and making it available, to adding SSL, cleanup, and all the other bits that come up while we build this.

Why Change?

Our old iteration of this website ran on WordPress. It works…

But between managing all the bits and bobs, keeping the server up to date, and staying on top of the many, many security risks, I can tell you: in this day and age, I hate worrying about individual servers… Especially for a website that might as well be boiled down to just HTML+CSS+JS; there is no need for a LAMP stack anymore.

For our customers we build beautiful immutable architecture, where the state of everything is created by the automation around it and everything is easily changeable.

Time to do the same for our website!

Our Goals

  1. Rebuild the website in Hugo. Hugo is a great tool to generate websites with, and our old website had a lot of bloated HTML containing way too much overhead for what it was serving.
  2. Host on a Google Cloud Storage bucket. Stop using ‘servers’, start using ‘services’.
  3. Create automation so it is easy to collaborate. One way or another, people with little to no IT knowledge (I found out Markdown is considered IT knowledge) have to be able to collaborate on changes to the website.
  4. Get rid of the old server. Needs no further explanation, I reckon.

Rebuilding the Website in Hugo

We started out hopeful; we’re nerds, we build automation for a living, so we should be able to build a simple website, right? True, but it took a ton more time than we originally anticipated, and we wanted to keep our current theme, just not the HTML under it.

Building websites is a separate skill set, and we chose to outsource it to a party that had already written quite a few open source Hugo themes. The people over at Themefisher helped us by rewriting the current website to decent HTML and preparing the Hugo setup so that we could run it locally.

If you are looking for a decent party to help Hugo-ify your old website, I highly suggest reaching out to them.

I’m starting to become a big fan of static website generators like Hugo. If you can bring your website back to just simple HTML+CSS+JS, hosting it becomes a lot easier; you just have to forget about ‘servers’ and start thinking about ‘services’.

Host on a Google Storage Bucket

Okay, so now we have a way to manually build our website, and it runs fine locally and on an example Netlify site. Code proven. Now we have to get in control of hosting it.

Most of the time when I read about these kinds of approaches, everyone uses the file-serving functionality that Google storage buckets provide out of the box.

That functionality, however, does not provide HTTPS, and it comes with the hidden feature of adding cache headers. It also requires naming the bucket exactly after your website’s FQDN.
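For reference, a minimal sketch of that out-of-the-box setup, assuming you have already verified ownership of example.com:

# Out-of-the-box hosting: the bucket must be named exactly after the
# site's FQDN, and DNS needs a CNAME pointing www.example.com at
# c.storage.googleapis.com (HTTP only).
gsutil mb -l eu gs://www.example.com
gsutil iam ch allUsers:objectViewer gs://www.example.com
gsutil web set -m index.html -e 404.html gs://www.example.com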

If you want HTTPS, though, you can put a Google Cloud Load Balancer in front of your bucket(s), which also means your bucket no longer needs to be named after the FQDN of your website.

The original assumption

(Figure: the initial Google Cloud hosting idea.)

Seems simple enough, right?! Let’s get to coding, because we want this managed via automation. No manual clicking for us here.

This is when I found out that Google load balancers actually consist of multiple components that work together, instead of being just a ‘load balancer’.

It’s actually more like this:

(Figure: the actual components of a Google Cloud load balancer.)

Let’s go over what each part does and how they work together.

Buckets: These are your actual Google storage buckets, to which you can easily upload your website with tools like gsutil.

backend-bucket: These ‘entities’ register a regular bucket as a backend bucket. A backend-bucket does not share the lifecycle of your actual bucket, but it is what couples buckets to the rest of the load balancer.

url-map: These are the routing tables of our load balancer, mapping hosts and paths to specific backends; in our case, backend buckets. Creating one requires a default catch-all bucket, which is where we will host our main website. You can then add path- and host-based routing by creating pathMatchers. Creating a pathMatcher will also create a host rule for you, which makes our life a little easier.

target-http-proxies: These entities map a forwarding rule to a url-map, for HTTP traffic.

target-https-proxies: These map a forwarding rule to a url-map for HTTPS traffic, and additionally require you to present a certificate. Fun fact: at the time of writing, Google has a beta feature that provisions a managed certificate for you automatically, but that is for a different time. If you want to use it, create it via the GUI or the gcloud CLI beta features (see the sketch after these definitions).

forwarding-rules: Here you bind an external IP address and a port to a target-http(s)-proxies entity. Good to know: you can create multiple of these with the same IP address, which makes it easy to add one for HTTP and one for HTTPS.
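Since our scripts below only set up the HTTP side, here is a hedged sketch of the one-time HTTPS pieces, assuming a url-map named examplesite, a reserved IP in ${SITEIP}, and hypothetical resource names; the managed-certificate command was still beta at the time of writing:

# Google-managed certificate (beta at the time of writing).
gcloud beta compute ssl-certificates create examplesite-cert \
  --domains=www.example.com

# HTTPS proxy: couples the certificate and the url-map.
gcloud compute target-https-proxies create examplesite-https \
  --url-map examplesite --ssl-certificates examplesite-cert

# Second forwarding rule on the same IP, this time for port 443.
gcloud compute forwarding-rules create examplesite-https --global \
  --address "${SITEIP}" --ports=443 --target-https-proxy examplesite-https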

The requirements

We are a team of around 15 people who want to work on this website. Hugo has a great way of serving content locally, but it’s always good if you can point some of the less tech-savvy people at a URL and ask them to review your changes.

So we need to be able to host the website on multiple URLs, each showing work in progress and allowing people to review; a branch called new-design, for example, becomes available at new-design.www.example.com.

The automation

If we know we need to create entities for each review (a branch, in our case), we can write some simple scripts to call from our pipeline.

Some things we need to create once, and some we need to manage per branch.

Once:

  • External Address
  • SSL Certificate
  • Forwarding rules for http and https
  • target-http(s)-proxies
  • url-map
  • default backend-bucket and accompanying storage bucket

Repeating:

  • pathMatcher
  • host rule
  • backend-bucket
  • google storage bucket

These scripts are the first iteration and there is still work to be done, mainly cleaning up functions and making the state checks a little more robust, but for now they get the job done.
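One example of that: the grep-based existence checks below can match on substrings of other resource names. A more robust variant (a sketch, not what we run yet) lets gcloud do the exact filtering:

# Sketch: let gcloud filter on the exact resource name instead of
# grepping the human-readable listing.
if [ -n "$(gcloud compute backend-buckets list \
      --filter="name=${1}" --format='value(name)')" ]; then
  echo "Backend bucket exists."
fi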

This script takes in a name for a bucket and makes sure it’s there and ready to serve websites:

#!/bin/bash

# If the bucket does not exist, create it and set the correct index and 404 pages.
if gsutil ls | grep -q -w "gs://${1}"; then
    echo "Bucket exists, also assuming web and 404 are okay."
else
    echo "Bucket does not exist!"
    # Multi-regional bucket in the EU with uniform bucket-level access.
    gsutil mb -c multi_regional -l eu -b on "gs://${1}"
    # Make the contents publicly readable.
    gsutil iam ch allUsers:objectViewer "gs://${1}"
    gsutil web set -m index.html -e 404.html "gs://${1}"
fi
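For reference, a hypothetical invocation; the pipeline below calls it with a branch-prefixed bucket name:

./bin/ensure_bucket.sh "my-branch-examplesite"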

And this one ties it to our url-map and makes sure the route inward is there:

#!/bin/bash

# If the bucket is not yet registered as a backend bucket, create it.
if gcloud compute backend-buckets list | grep -q -w "${1}"; then
  echo "Backend bucket exists."
else
  echo "Backend bucket does not exist."
  gcloud compute backend-buckets create "${1}" --gcs-bucket-name="${1}"
fi

# We create the initial bucket manually for now, so in this case we assume we can just create the url-map.
if gcloud compute url-maps list | grep -q -w examplesite; then
  echo "url-map exists"
else
  echo "url-map does not exist yet, creating"
  gcloud compute url-maps create examplesite --default-backend-bucket=fullstaq-website
fi

# If the pathMatcher is there, don't care; otherwise add it.
if gcloud compute url-maps describe examplesite --format json | jq .pathMatchers | grep -q -w "${2}"; then
  echo "Looks like pathMatcher is there..."
else
  echo "Looks like pathMatcher is missing, creating"
  gcloud compute url-maps add-path-matcher examplesite \
    --path-matcher-name "${2}" \
    --default-backend-bucket="${1}" \
    --new-hosts="${2}.www.example.com"
fi

# Create the static IP if it is missing, then look it up.
if gcloud compute addresses list | grep -q -w examplesite; then
  echo "IP address already assigned."
else
  echo "No external IP yet, creating."
  gcloud compute addresses create examplesite --global
fi
export SITEIP=$(gcloud --project=production-managed-services compute addresses list | grep examplesite | awk '{print $2}')
echo "Using ${SITEIP}"

if gcloud compute target-http-proxies list | grep -q -w examplesite-http; then
  echo "http proxy exists"
else
  echo "creating http proxy"
  gcloud compute target-http-proxies create examplesite-http --url-map examplesite
fi

if gcloud compute forwarding-rules list | grep -q -w examplesite-http; then
  echo "http forwarding rule exists"
else
  echo "creating http forwarding rule"
  gcloud compute forwarding-rules create examplesite-http --global \
    --address "${SITEIP}" --ports=80 --target-http-proxy examplesite-http
fi
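Again a hypothetical invocation: the bucket name goes first, the branch slug (used for the pathMatcher and hostname) second:

./bin/ensure_loadbalancer.sh "my-branch-examplesite" "my-branch"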
 

Now that we have some basic scripts, all it takes is some manual prep to set up the one-time things via the GUI or gcloud, and to create a nice GitLab pipeline!


stages:
- build
- prep_bucket
- deploy

variables:
  BUCKET: examplesite

build:
  image: alpine:latest
  stage: build
  script:
  - apk add hugo
  - hugo --minify
  - rm public/easteregg.txt
  artifacts:
    paths:
    - public
    expire_in: 1 hour

prep_bucket:
  image: google/cloud-sdk:alpine
  stage: prep_bucket
  script:
  - apk add jq
  - ./bin/ensure_bucket.sh "${CI_COMMIT_REF_SLUG}-${BUCKET}"
  - ./bin/ensure_loadbalancer.sh "${CI_COMMIT_REF_SLUG}-${BUCKET}" "${CI_COMMIT_REF_SLUG}"
  except:
  - master

deploy:
  image: google/cloud-sdk:alpine
  stage: deploy
  script:
  - gsutil -m rsync -d -r public/ gs://${CI_COMMIT_REF_SLUG}-${BUCKET}/
  environment:
    name: "${CI_COMMIT_REF_SLUG}"
    url: http://${CI_COMMIT_REF_SLUG}.www.example.com
  except:
  - master

deploy_master:
  image: google/cloud-sdk:alpine
  stage: deploy
  script:
  - gsutil -m rsync -d -r public/ gs://${BUCKET}/
  environment:
    name: live
    url: https://www.example.com
  only:
  - master

Finalizing

Point your DNS records for *.www.example.com and www.example.com to the IP you got from Google and you’re done!
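If you happen to manage your zone in Cloud DNS, you can script this step too; a sketch with a hypothetical zone name (recent gcloud versions; older ones use the record-sets transaction workflow):

# Point the site and the per-branch wildcard at the load balancer IP.
gcloud dns record-sets create "www.example.com." --zone=example-zone \
  --type=A --ttl=300 --rrdatas="${SITEIP}"
gcloud dns record-sets create "*.www.example.com." --zone=example-zone \
  --type=A --ttl=300 --rrdatas="${SITEIP}"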

You now get new websites created for each branch, and deploy the master branch to the live website.

Fabian is a Cloud-Native Architect and Open Source Enthusiast. As one of the founders of Fullstaq, he is the technical heart and conscience of the company, helping customers and engineers with guidance and being a nice sparring partner.

To encompass Fabian's roles, we like to call him the TechFluencer. There is often a negative association with influencing or preaching some gospel and forcing opinions on others; instead, Fabian works closely with people to find out what those people and companies really want and need and makes sure the right resources from Fullstaq are made available to achieve those goals. Also, writing in the third person is something Fabian finds odd to do...

Please talk nerdy to Fabian; it is what he likes best!