RapidFort POV GCP Deployment
Deploy the RapidFort platform in GCP.
RapidFort runs on Kubernetes and can be deployed via the RapidFort Helm Chart. It's important that GCP is properly configured with the correct permissions, network requirements, and underlying tooling before deploying RapidFort via the Helm Chart.
This is described in more detail below, but the main steps before deploying RapidFort are:

1. Minimum requirements: machine type n2-standard-8
   - 8 vCPU
   - 32 GB Memory
   - 2 TB Storage
2. Operating system options
   - RapidFort has validated the following operating systems:
     - Ubuntu 20.04.3 (focal)
     - Debian 11 (bullseye)
   - Please contact RapidFort Support before using a different operating system
   - The RapidFort POV deployment uses local storage, so no additional storage is required for this deployment.
3. Networking and security policies allow HTTPS ingress to the RapidFort platform VM over port 443 from:
   - end-user desktop browsers
   - environments in which container images are deployed and tested
   - IMPORTANT: the RapidFort CLI must be able to reach the RapidFort platform while containers are instrumented and hardened
4. Networking and security policies allow SSH access for the RapidFort administrator.
The RapidFort VM must have HTTPS egress to the RapidFort Vulnerability Database https://api.rapidfort.com and so must the Kubernetes pods deployed within the RapidFort deployment.
It may be company policy to block public DNS (e.g. 8.8.8.8, 8.8.4.4) and use a private GCP DNS instance instead. Determine this ahead of time so the RapidFort deployment can be configured accordingly.

- Check whether public DNS is accessible:

```shell
host google.com 8.8.8.8
```

- Check /etc/resolv.conf for the configured DNS server:

```shell
sudo cat /etc/resolv.conf
```
Create a GCP Compute VM instance according to the following specifications:

1. n2-standard-8 (8 vCPU, 32 GB Memory)
2. 2 TB storage (2048 GB boot disk size)
3. Ubuntu 20.04.3 or Debian 11
   - Please contact RapidFort Support before using a different operating system
4. VPC considerations
   - For convenience, the region and zone can match the VPC where container images are deployed and tested
   - If that is not possible, there must be a route to the RapidFort platform from all environments in which container images are deployed and tested
5. Add the public SSH keys of the RapidFort administrator
6. API and identity management → add the service account created above
7. Allow HTTPS traffic
   - Make sure HTTPS and SSH are allowed from any customer-specific network tags
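The VM above can also be created from the command line. A minimal sketch, assuming `gcloud` is already authenticated; the instance name, project, zone, service account, and network tag below are placeholders to replace with your own values:

```shell
# Hypothetical example -- replace the <...> placeholders with your values.
gcloud compute instances create rapidfort-pov \
  --project=<PROJECT_ID> \
  --zone=<ZONE> \
  --machine-type=n2-standard-8 \
  --image-family=ubuntu-2004-lts \
  --image-project=ubuntu-os-cloud \
  --boot-disk-size=2048GB \
  --service-account=<SERVICE_ACCOUNT_EMAIL> \
  --tags=https-server
```

The `https-server` tag corresponds to GCP's default "Allow HTTPS traffic" firewall rule; substitute any customer-specific network tags as needed.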
Here, the required packages are installed to set up MicroK8s. First, install Docker:

```shell
curl -fsSL https://get.docker.com | sh -
```
Verify that snap is installed on the RapidFort VM. If snap is already installed, proceed to Step 3.

```shell
sudo apt install snapd -y
sudo snap install core
export PATH=$PATH:/snap/bin
```

Then install kubectl, Helm, and MicroK8s via snap:

```shell
sudo snap install kubectl --classic
sudo snap install helm --classic
sudo snap install microk8s --classic
```
Add the current user to the microk8s group, start MicroK8s, and enable the required add-ons:

```shell
sudo usermod -a -G microk8s $USER
mkdir -p ~/.kube
sudo chown -f -R $USER ~/.kube
newgrp microk8s
microk8s start
microk8s status --wait-ready
microk8s kubectl get nodes   # check node status
microk8s enable dns hostpath-storage ingress
microk8s kubectl config view --raw >| ~/.kube/config
```
This step is required if public DNS is blocked. It reconfigures CoreDNS in MicroK8s to use the private DNS server instead of the default 8.8.8.8. If this is not done, the RapidFort CLI is likely to fail.

```shell
resolvectl status   # get the DNS server in use
microk8s disable dns
microk8s enable dns:<COMPANY_DNS_IP>
```
IMPORTANT: it should be possible to curl from both the VM host and the RapidFort runner pod:

```shell
curl https://www.google.com
```
Create the RapidFort data directory:

```shell
sudo mkdir -p /opt/rapidfort/data
sudo chmod -R 777 /opt/rapidfort/
```

Base64-encode the service account JSON key file; the output is used for RF_GS_CREDS below:

```shell
base64 -w 0 <JSON_KEY_FILE>   # -w 0 disables line wrapping so the value stays on one line
```
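A malformed credential value is a common cause of storage failures later, so it can be worth confirming that the encoded key decodes back to the original file. A small sketch; `b64_roundtrip_ok` is a hypothetical helper name, not part of the RapidFort tooling:

```shell
# Sketch: verify that a base64-encoded file round-trips to the original bytes.
b64_roundtrip_ok() {
  enc=$(base64 -w 0 "$1")               # encode without line wrapping
  printf '%s' "$enc" | base64 -d | cmp -s - "$1"   # decode and byte-compare
}
```

Usage: `b64_roundtrip_ok <JSON_KEY_FILE> && echo "round-trip OK"`.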
Create the file /opt/rapidfort/.user_data and populate it as follows:

```shell
# This IP can be a private or public IP, as long as it is reachable
# from wherever the stub image will be deployed.
RF_APP_HOST=<ip addr>
# Admin account
RF_APP_ADMIN=<RapidFort Super Admin Email>
RF_APP_ADMIN_PASSWD=<RapidFort Super Admin Password>
# GCP bucket created for RapidFort
RF_S3_BUCKET=<Google Cloud Storage Bucket name>
RF_STORAGE_TYPE=gs
# Output from step 2
RF_GS_CREDS=<BASE64_JSON_KEY_FILE>
```

This file will be sourced in step 5.
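A missing variable in `.user_data` surfaces only later as a confusing Helm failure, so a quick pre-flight check can save time. A minimal sketch, assuming the file path above; `check_user_data` is a hypothetical helper, not part of the RapidFort tooling:

```shell
# Sketch: confirm every required RapidFort variable is set in the env file.
RF_USER_DATA="${RF_USER_DATA:-/opt/rapidfort/.user_data}"

check_user_data() {
  file="$1"
  [ -f "$file" ] || { echo "missing file: $file"; return 1; }
  # Source inside a subshell so the caller's environment is not polluted.
  (
    . "$file"
    for var in RF_APP_HOST RF_APP_ADMIN RF_APP_ADMIN_PASSWD RF_S3_BUCKET RF_STORAGE_TYPE RF_GS_CREDS; do
      eval "val=\${$var:-}"
      if [ -z "$val" ]; then
        echo "unset: $var"
        exit 1
      fi
    done
    echo "user_data OK"
  )
}
```

Run `check_user_data "$RF_USER_DATA"` after creating the file; any line printed as `unset: ...` must be fixed before deploying.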
Download and unpack the RapidFort Helm chart:

```shell
RF_HELM_CHART_VERSION="1.1.21-ib"
mkdir -p /opt/rapidfort
pushd /opt/rapidfort
curl -L https://github.com/rapidfort/rapidfort/archive/refs/tags/${RF_HELM_CHART_VERSION}.tar.gz --output rapidfort.tar.gz
tar -xvf rapidfort.tar.gz && rm -rf rapidfort.tar.gz
pushd rapidfort-${RF_HELM_CHART_VERSION}/chart
echo -e "export RF_HELM_DIR=`pwd`\n" > /opt/rapidfort/.rf_env
popd
popd
```
Deploy RapidFort via Helm:

```shell
source /opt/rapidfort/.rf_env
source /opt/rapidfort/.user_data
pushd ${RF_HELM_DIR} > /dev/null
helm_values="--set secret.rf_app_admin=${RF_APP_ADMIN} \
--set secret.rf_app_admin_passwd=${RF_APP_ADMIN_PASSWD} \
--set secret.s3_bucket=${RF_S3_BUCKET} \
--set secret.storage_type=${RF_STORAGE_TYPE} \
--set secret.rf_app_host=${RF_APP_HOST} \
--set global.container_engine=docker \
--set secret.gs_cred=${RF_GS_CREDS} \
--set global.ingressClassName=nginx"
# helm_values is intentionally unquoted so each --set flag splits into its own argument.
helm upgrade --install rapidfort ./ ${helm_values}
popd
```
Check pod status:

```shell
kubectl get pods
```
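Rather than polling `kubectl get pods` by hand, the deployment can be blocked on until every pod reports Ready; the 600-second timeout below is an arbitrary choice, not a RapidFort requirement:

```shell
# Wait for all pods in the current namespace to become Ready (up to 10 minutes).
kubectl wait --for=condition=Ready pods --all --timeout=600s
```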
- Sign in to the RapidFort platform (the VM IP address, using the admin email and password above)
- Update the admin password via Profile Settings
- Generate the RapidFort license
- Ensure rfstub and rfharden work from the target container environment
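The last checklist item can be exercised with a small smoke test from a container environment. A hedged sketch only: the image name is arbitrary, and the `-rfstub` tag suffix is an assumption about the stub's default naming, which may differ in your setup:

```shell
rflogin              # authenticate the CLI against the RapidFort platform
rfstub alpine:3.18   # instrument a small test image
# ...run the stubbed image through its normal workload, then:
rfharden alpine:3.18-rfstub   # harden based on observed runtime behavior
```

If either command cannot reach the platform, revisit the HTTPS ingress and DNS steps above.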