24 Security Recommendations for Google Cloud Platform (GCP): Part 2
Welcome to part two of our GCP security series. Today, we’ll be covering VPC, Compute Engine, and GKE.
In case you did not manage to read the first part of this series, you can catch up on it now. With that out of the way, let's dive into the details of Virtual Private Cloud (VPC).
Virtual Private Cloud
Virtual Private Cloud (VPC) offers a global, scalable, and flexible networking solution for your cloud-based resources and services. It extends networking capabilities to App Engine, Compute Engine, or Google Kubernetes Engine (GKE). Ensuring the security of these components is crucial.
Here's a key GCP security practice to follow:
9. Activate VPC Flow Logs for VPC Subnets
By default, the VPC Flow Logs feature is turned off when you create a new VPC network subnet. Once activated, VPC Flow Logs begin gathering data on network traffic to and from your VPC subnets. This data is valuable for optimizing network usage, conducting network forensics, and performing real-time security analysis.
For enhanced visibility and security of your Google Cloud VPC network, it is highly advisable to enable Flow Logs for every business-critical or production VPC subnet.
<pre class="codeWrap"><code>gcloud compute networks subnets update SUBNET_NAME --region=REGION --enable-flow-logs</code></pre>
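If you want to audit which subnets already have Flow Logs enabled, one quick option (a sketch; it relies on the subnet resource's enableFlowLogs field) is a custom-formatted listing:
<pre class="codeWrap"><code># lists every subnet with its region and Flow Logs status
gcloud compute networks subnets list --format="table(name,region,enableFlowLogs)"</code></pre>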
Compute Engine
Compute Engine offers a secure and customizable computing service, allowing you to create and operate virtual machines on Google's infrastructure.
Here are several GCP security best practices to implement promptly:
10. Activate "Block Project-wide SSH keys" for VM Instances
A project-wide SSH key grants access to all Google Cloud VM instances within your GCP project. While project-wide SSH keys simplify key management, exposing one puts every VM instance in the project at risk. To mitigate this, it is highly recommended to use instance-specific SSH keys, which limit the blast radius if a key is compromised.
By default, the security feature "Block Project-Wide SSH Keys" is not turned on for your Google Compute Engine instances.
To enable the blocking of project-wide SSH keys, set the metadata value to TRUE:
<pre class="codeWrap"><code>gcloud compute instances add-metadata INSTANCE_NAME --metadata block-project-ssh-keys=true</code></pre>
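To double-check that the metadata took effect, you can inspect the instance's metadata block (a simple sanity check; adjust the placeholders to your environment):
<pre class="codeWrap"><code># block-project-ssh-keys should appear with value "true"
gcloud compute instances describe INSTANCE_NAME --zone=ZONE --format="yaml(metadata)"</code></pre>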
Here's a Cloud Custodian sample rule to verify instances without this block:
<pre class="codeWrap"><code>- name: instances-without-project-wide-ssh-keys-block
  description: |
    It is recommended to use Instance specific SSH key(s) instead
    of using common/shared project-wide SSH key(s) to access Instances.
  resource: gcp.instance
  filters:
    - not:
        - type: value
          key: name
          op: regex
          value: '(gke).+'
    - type: metadata
      key: '"block-project-ssh-keys"'
      value: "false"
</code></pre>
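Assuming you save the rule above in a policy file, say a hypothetical custodian.yml with the rule nested under a top-level policies: key, you could run it with the Cloud Custodian CLI like this:
<pre class="codeWrap"><code># -s sets the output directory for execution logs and matched resources
custodian run -s out custodian.yml</code></pre>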
11. Disable 'Enable connecting to serial ports' for VM Instances
A Google Cloud virtual machine (VM) instance comes with four virtual serial ports. Interaction with a serial port resembles using a terminal window, operating entirely in text mode without a graphical interface or mouse support. The instance's operating system, BIOS, and other system-level entities can output to the serial port and accept text-based input, such as commands and responses to prompts.
The first serial port, known as the interactive serial console (Port 1), is typically utilized by these system-level entities.
The interactive serial console lacks IP-based access restrictions, such as IP whitelists. Enabling it allows clients to attempt connections from any IP address. This means that anyone with the correct SSH key, username, project ID, zone, and instance name could connect to the instance. To align with Google Cloud Platform security best practices, it is advisable to disable support for the interactive serial console.
<pre class="codeWrap"><code>gcloud compute instances add-metadata INSTANCE_NAME --zone=ZONE --metadata serial-port-enable=false</code></pre>
Additionally, you can enforce the prevention of interactive serial port access for VMs using the "Disable VM serial port access" organization policy.
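As a sketch, assuming you have the required Organization Policy permissions and your numeric organization ID at hand, the constraint can be enforced organization-wide like this:
<pre class="codeWrap"><code># ORGANIZATION_ID is a placeholder for your numeric organization ID
gcloud resource-manager org-policies enable-enforce compute.disableSerialPortAccess --organization=ORGANIZATION_ID</code></pre>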
12. Make Sure VM Disks for Critical VMs Use Customer-Supplied Encryption Keys (CSEK)
The Compute Engine service automatically encrypts all data at rest by default, managed seamlessly by cloud services without user or application intervention. However, for those who desire complete control over instance disk encryption, the option to supply their encryption keys is available.
These personalized keys, known as Customer-Supplied Encryption Keys (CSEKs), serve to safeguard the Google-generated keys employed for instance data encryption and decryption. CSEKs are not stored on the server by the Compute Engine service, and access to protected data requires the specific key to be provided.
For critical business VMs, it is strongly recommended to encrypt VM disks using CSEK. By default, VM disks utilize Google-managed keys for encryption, not Customer-Supplied Encryption Keys. Currently, there's no method to update the encryption of an existing disk, so it's advisable to create a new disk with Encryption set to Customer supplied. A note of caution is warranted:
Note: If you lose your encryption key, you will not be able to restore the data.
To encrypt a disk using the gcloud compute tool during instance creation, utilize the --csek-key-file flag. If employing an RSA-wrapped key, consider using the gcloud beta component.
<pre class="codeWrap"><code>gcloud beta compute instances create INSTANCE_NAME --csek-key-file=key-file.json</code></pre>
For encrypting a standalone persistent disk, employ the following command:
<pre class="codeWrap"><code>gcloud beta compute disks create DISK_NAME --csek-key-file=key-file.json</code></pre>
It is your responsibility to generate and manage a 256-bit key encoded in RFC 4648 standard base64. A sample key-file.json should resemble this:
<pre class="codeWrap"><code>[
  {
    "uri": "https://www.googleapis.com/compute/v1/projects/myproject/zones/us-central1-a/disks/example-disk",
    "key": "acXTX3rxrKAFTF0tYVLvydU1riRZTvUNC4g5I11NY-c=",
    "key-type": "raw"
  },
  {
    "uri": "https://www.googleapis.com/compute/v1/projects/myproject/global/snapshots/my-private-snapshot",
    "key": "ieCx/NcW06PcT7Ep1X6LUTc/hLvUDYyzSZPPVCVPTVEohpeHASqC8uw5TzyO9U+Fka9JFHz0mBibXUInrC/jEk014kCK/NPjYgEMOyssZ4ZINPKxlUh2zn1bV+MCaTICrdmuSBTWlUUiFoDD6PYznLwh8ZNdaheCeZ8ewEXgFQ8V+sDroLaN3Xs3MDTXQEMMoNUXMCZEIpg9Vtp9x2oeQ5lAbtt7bYAAHf5l+gJWw3sUfs0/Glw5fpdjT8Uggrr+RMZezGrltJEF293rvTIjWOEB3z5OHyHwQkvdrPDFcTqsLfh+8Hr8g+mf+7zVPEC8nEbqpdl3GPv3A7AwpFp7MA==",
    "key-type": "rsa-encrypted"
  }
]
</code></pre>
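If you need to produce such a key, one common option (a sketch using OpenSSL; any source of 256 bits of cryptographically random, base64-encoded data works) is:
<pre class="codeWrap"><code># generates 32 random bytes (256 bits) and base64-encodes them
openssl rand -base64 32</code></pre>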
Other GCP security best practices for Compute Engine include:
- Verify that instances do not use the default service account.
- Verify that instances are not configured to use the default service account with full access to all Cloud APIs.
- Ensure OS Login is enabled for the project (see the example after this list).
- Ensure that IP forwarding is disabled on instances.
- Guarantee Compute instances are launched with Shielded VM enabled.
- Confirm that Compute instances do not have public IP addresses.
- Ensure App Engine applications enforce HTTPS connections.
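To illustrate the OS Login item above, a minimal sketch is to set the enable-oslogin key in project-wide metadata:
<pre class="codeWrap"><code># enables OS Login for all instances in the current project
gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE</code></pre>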
Google Kubernetes Engine Service (GKE)
GKE furnishes a managed environment for deploying, managing, and scaling containerized applications within the Google infrastructure. A GKE environment comprises multiple machines, specifically Compute Engine instances, organized into clusters. Let's proceed with GCP security best practices for GKE.
13. Activate Application-Layer Secrets Encryption for GKE Clusters
Application-layer secret encryption adds an extra security layer for sensitive data, such as Kubernetes secrets stored on etcd. This feature allows the use of Cloud KMS managed encryption keys to encrypt data at the application layer, safeguarding it from potential attackers who might access offline copies of etcd. Enabling application-layer secret encryption in a GKE cluster is considered a security best practice, especially for applications that store sensitive information.
Create a key ring to store the Customer-Managed Key (CMK):
<pre class="codeWrap"><code>gcloud kms keyrings create KEY_RING_NAME --location=REGION --project=PROJECT_NAME --format="table(name)"</code></pre>
Generate a new Cloud KMS Customer-Managed Key (CMK) within the previously created key ring:
<pre class="codeWrap"><code>gcloud kms keys create KEY_NAME --location=REGION --keyring=KEY_RING_NAME --purpose=encryption --protection-level=software --rotation-period=90d --format="table(name)"</code></pre>
Assign the "CryptoKey Encrypter/Decrypter" role to the relevant service account:
<pre class="codeWrap"><code>gcloud projects add-iam-policy-binding PROJECT_ID --member=serviceAccount:service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com --role=roles/cloudkms.cryptoKeyEncrypterDecrypter</code></pre>
Finally, enable application-layer secrets encryption for the chosen cluster, using the Cloud KMS Customer-Managed Key (CMK) established in the earlier steps.
<pre class="codeWrap"><code>gcloud container clusters update CLUSTER --region=REGION --project=PROJECT_NAME --database-encryption-key=projects/PROJECT_NAME/locations/REGION/keyRings/KEY_RING_NAME/cryptoKeys/KEY_NAME</code></pre>
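To verify that the update took effect, you can read back the cluster's databaseEncryption configuration (the state should report ENCRYPTED along with your key name):
<pre class="codeWrap"><code>gcloud container clusters describe CLUSTER --region=REGION --format="value(databaseEncryption)"</code></pre>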
14. Activate GKE Cluster Node Encryption Using User-Managed Keys
To provide increased oversight of the encryption/decryption process in Google Kubernetes Engine (GKE), ensure that your GKE cluster node is encrypted using a user-managed key (UMK). Utilize the Cloud Key Management Service (Cloud KMS) for the creation and management of your custom user-managed keys (UMKs). Cloud KMS delivers secure and efficient cryptographic key management, along with controlled key rotation and revocation mechanisms.
By this point, you should already have a key ring where you store both the user-managed keys (UMKs) and customer-managed keys. These keys will be used in the following steps.
To activate GKE cluster node encryption, you must recreate the node pool. Utilize the name of the desired cluster node pool as an identifier parameter and apply custom output filtering to detail the configuration information for the chosen node pool:
<pre class="codeWrap"><code>gcloud container node-pools describe NODE_POOL --cluster=CLUSTER_NAME --region=REGION --format=json</code></pre>
Now, with the information obtained in the previous step, generate a new Google Cloud GKE cluster node pool encrypted with your user-managed key (UMK):
<pre class="codeWrap"><code>gcloud beta container node-pools create NODE_POOL --cluster=CLUSTER_NAME --region=REGION --disk-type=pd-standard --disk-size=150 --boot-disk-kms-key=projects/PROJECT/locations/REGION/keyRings/KEY_RING_NAME/cryptoKeys/KEY_NAME</code></pre>
Once the new cluster node pool is functioning correctly, you can delete the original node pool to halt additional charges on your Google Cloud account.
Note: Be careful to remove the previous pool and not the current one!
<pre class="codeWrap"><code>gcloud container node-pools delete NODE_POOL --cluster=CLUSTER_NAME --region=REGION</code></pre>
15. Control Network Access to GKE Clusters
To minimize exposure to the Internet, ensure that your Google Kubernetes Engine (GKE) cluster is set up with a master authorized network. Master authorized networks enable you to whitelist specific IP addresses and/or IP address ranges, permitting access to cluster master endpoints through HTTPS.
Incorporating a master authorized network can enhance network-level protection and provide additional security advantages to your GKE cluster. Authorized networks grant access to a defined set of trusted IP addresses, offering protection for GKE cluster access in the event of vulnerabilities in the cluster's authentication or authorization mechanisms.
Include authorized networks in the chosen GKE cluster to allow access to the cluster master from trusted IP addresses or IP ranges that you specify:
<pre class="codeWrap"><code>gcloud container clusters update CLUSTER_NAME --region=REGION --enable-master-authorized-networks --master-authorized-networks=CIDR_1,CIDR_2,...</code></pre>
In the above command, you can specify multiple CIDRs (up to 50) separated by commas.
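To review which networks are currently authorized, a quick check is to read the cluster's masterAuthorizedNetworksConfig field:
<pre class="codeWrap"><code>gcloud container clusters describe CLUSTER_NAME --region=REGION --format="value(masterAuthorizedNetworksConfig)"</code></pre>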
These practices are crucial for GKE, as non-compliance poses significant risks. However, there are additional security best practices worth considering:
- Activate auto-repair for GKE cluster nodes (see the example after this list).
- Enable auto-upgrade for GKE cluster nodes.
- Implement integrity monitoring for GKE cluster nodes.
- Enable secure boot for GKE cluster nodes.
- Use shielded GKE cluster nodes.
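As an illustration of the first item (a sketch; adjust the placeholders to your cluster), auto-repair can be enabled on an existing node pool like this:
<pre class="codeWrap"><code># automatically repairs unhealthy nodes in the specified node pool
gcloud container node-pools update NODE_POOL --cluster=CLUSTER_NAME --region=REGION --enable-autorepair</code></pre>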
Facing Challenges in Cloud, DevOps, or Security?
Let’s tackle them together!