Born to be cloud
Creating robust digital systems that flourish in an evolving landscape. Our services, spanning Cloud, Applications, Data, and AI, are trusted by 150+ customers. Collaborating with our global partners, we transform possibilities into tangible outcomes.
Experience our services.
We can help you make the move: design, build, and migrate to the cloud.
Cloud Migration
Maximise your investment in the cloud and achieve cost-effectiveness, on-demand scalability, unlimited computing, and enhanced security.
Artificial Intelligence / Machine Learning
Infuse AI & ML into your business to solve complex problems, drive top-line growth, and innovate mission-critical applications.
Data & Analytics
Discover the hidden gems in your data with cloud-native analytics. Our comprehensive solutions cover data processing, analysis, and visualization.
Generative Artificial Intelligence (GenAI)
Drive measurable business success with GenAI, where creative solutions lead to tangible outcomes, including improved operational efficiency, enhanced customer satisfaction, and accelerated time-to-market.
Ankercloud: Partners with AWS, GCP, and Azure
We excel through partnerships with industry giants like AWS, GCP, and Azure, offering innovative solutions backed by leading cloud technologies.



Our Specializations & Expertise

Countless Happy Clients and Counting!
Check out our blog
Google Cloud - Security Alerts Automation
In this blog, we will guide you through automating alerting for critical activities and securing your projects against accidental deletion using custom scripts. Setting up log-based metrics and alerts manually is a time-consuming task, typically taking around an hour, and is prone to manual errors. To optimize this process and enhance efficiency, we have automated it using a combination of Shell and YAML scripts.
By implementing this solution, you can configure notification channels to receive alerts whenever changes are detected in your cloud environment, ensuring prompt action on potential issues. Our approach leverages YAML files along with the Deployment Manager to create and manage Log Metrics and Alerting Policies. Once these components are successfully deployed, the deployment itself is deleted, since removing it does not interfere with any ongoing services or resources in your cloud environment.
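For a sense of what the scripts automate, a single log-based metric can also be created by hand; the example below is illustrative only (the metric name and filter are assumptions, not the exact ones shipped in the repository):
# Count service account key creation events
gcloud logging metrics create sa-key-created \
--description="Service account key creation events" \
--log-filter='protoPayload.methodName="google.iam.admin.v1.CreateServiceAccountKey"'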
The following steps will provide you with a detailed, step-by-step guide to implementing this automation effectively, allowing you to maintain better security and operational efficiency.
Step-by-step guide to implementing this automation effectively
1. Clone the Repository
Prerequisites:
Connect to your Google Cloud Shell and ensure you have the necessary permissions to implement the script.
git clone https://github.com/nvrvenkat/Securitylogalerts.git
This command clones the Securitylogalerts repository from GitHub to your local system.
2. Navigate to the Repository Directory
cd Securitylogalerts/
This command changes the directory to the Securitylogalerts folder, where all project files are located.
3. Description of Metrics and Alerts
4. Replace the Email Address in logmetric_notification.sh
Update the email address in the shell script “logmetric_notification.sh” with the specific email address where alerts need to be sent.
This email address will be used to configure the notification channel.

5. Execute the Notification Channel Script
./logmetric_notification.sh
Runs the script to create a notification channel with the updated email address.
It will create a notification channel with the updated email address and generate log-based metrics as specified in the "Metrics and Alerts" section.
Note: If a notification channel already exists, execute the logmetric.sh file to generate only the log-based metrics.
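If a channel needs to be created manually instead, a gcloud equivalent looks roughly like this (the display name and email address are placeholders):
gcloud beta monitoring channels create \
--display-name="Security Alerts" \
--type=email \
--channel-labels=email_address=you@example.com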

6. Navigate to the Log Alert Directory and Execute Scripts
a) cd /Securitylogalerts/Logalert
./scripts.sh
The scripts.sh script triggers:
replace_notification_channel.sh: Replaces the notification channel with ACT-MS-alerts in the YAML files used for creating log metric alerts. The output is saved to output.txt.
logalert.sh: Creates alerting policies based on the updated notification channel in the YAML files.

Alerting Policies Update:
- Once scripts.sh is executed, the notification channel in the YAML files will be replaced, and the alerting policies will be created.
- The alerting policies should be fully deployed within approximately 10 minutes.

The resources will be created using the Deployment Manager in the Google Cloud Console. Once the resources are created, the deployment will be deleted while retaining the resources.

b) Add Multiple Notification Channels (optional):
cd /Securitylogalerts/Logalertmultiple
./scripts.sh
This command adds multiple notification channels to the alerting policies. Ensure you update the respective notification channel names in the "replace_notification_channel.sh" file before executing the script. It updates the YAML files for log alert metrics with the additional notification channels.
7. Test Alerting Policies
The "alerttest.sh" script tests the alerting policies by:
- Creating and deleting resources (e.g., instances, disks, service accounts, service account keys, and firewall rules).
- Sending alerts to the configured notification channel to verify that the policies are functioning correctly.
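For example, the service account key alert can also be exercised by hand by creating and then deleting a key (the service account email and KEY_ID are placeholders):
# Create a key to trigger the creation alert
gcloud iam service-accounts keys create /tmp/test-key.json \
--iam-account=test-sa@PROJECT_ID.iam.gserviceaccount.com
# Delete it again to trigger the deletion alert
gcloud iam service-accounts keys delete KEY_ID \
--iam-account=test-sa@PROJECT_ID.iam.gserviceaccount.com --quiet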
8. Resource Creation and Deletion Activity
After executing “alerttest.sh”, resources are automatically created and deleted as per the alerting policy configurations.
Alerts are triggered and sent to the configured notification channel.
For example: Alerts for service account key creation and deletion.
Similar alerts for other resources will be triggered based on resource creation.


9. Enable Project Liens
cd /Securitylogalerts/
./Liens.sh
Executes the “Liens.sh” script, which fetches the project ID automatically and enables liens on the project to prevent accidental deletion.
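Liens.sh wraps the Resource Manager liens API; the manual equivalent is roughly the following (the reason text is an assumption):
gcloud alpha resource-manager liens create \
--restrictions=resourcemanager.projects.delete \
--reason="Prevent accidental project deletion"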
By following these steps, you'll be able to automate your cloud environment’s monitoring and security processes, ensuring that you stay ahead of any potential data and revenue losses and minimize the risk of accidental deletions.
How to Secure Your Google Cloud Buckets with IP Filtering
In today's cloud-driven world, sensitive data should be kept as secure as possible. IP filtering lets you control who accesses your storage: by enabling it on your Google Cloud buckets, only trusted networks are allowed access. This guide will walk you through the step-by-step process of setting up IP filtering.
What is IP Filtering?
IP filtering limits access to a bucket by allowing access only from particular IP ranges or networks. It grants access to your data while blocking traffic requests from unknown or malicious sources.
Key Use Cases for Google Cloud Bucket IP Filtering
1. Compliance Requirements
- Description: Make sure only authorized users can access the bucket to meet legal or industry rules for protecting data.
- Key: Regulatory Adherence (Following Data Protection Rules)
2. Protect Public Buckets
- Description: Enhanced security prevents unauthorized access to publicly accessible buckets by limiting traffic to trusted IPs. This protects sensitive public resources from malicious activity.
- Key: Access Control for Public Data
3. VPC Integration
- Description: Private networking limits bucket access to specific Virtual Private Cloud (VPC) networks. This ensures secure interactions within a well-defined network boundary, enhancing data protection.
- Key: Network-Specific Access
4. Controlled Testing
- Description: Access restriction during testing phases ensures that bucket access is limited to only select IPs or systems. This maintains control over the testing environment and reduces unintended data exposure.
- Key: Testing Environment Control
5. Enhanced Monitoring
- Description: Restricting access to known and trusted IPs reduces the number of unknown or suspicious interactions. This makes it easier to track who accessed the bucket and when, simplifying audits and improving transparency.
- Key: Simplified Audit Trails
Supported locations
Bucket IP filtering is available in the following locations:
- asia-south1
- asia-south2
- asia-southeast1
- asia-southeast2
- asia-east1
- asia-east2
- europe-west1
- europe-west2
- us-central1
- us-east1
- us-east4
- us-west1
Limitations
Bucket IP filtering has the following limitations:
- Maximum number of IP CIDR blocks: You can specify a maximum of 200 IP CIDR blocks across public and VPC networks in the IP filter rule for a bucket.
- Maximum number of VPC networks: You can specify a maximum of 25 VPC networks in the IP filter rules for a bucket.
- Dual-region support: IP filtering is not supported for dual-regional buckets.
- Blocked Google Cloud services: Enabling IP filtering on Cloud Storage buckets restricts access for some Google Cloud services, regardless of whether they use a service agent to interact with Cloud Storage.
How to Enable Bucket IP Filtering
Step 1: Install the Google Cloud CLI (Command Line Interface) on the server
SSH into your instance and install the Google Cloud CLI using the following command:
sudo snap install google-cloud-cli --classic
Authenticate with Google Cloud:
gcloud auth login

You will be prompted to grant access to your Google account.

Set the desired project ID:
gcloud config set project [PROJECT_ID]
Step 2: Verify Your Bucket
- List all the buckets in your project: gcloud storage buckets list
- Locate the bucket name you want to configure.

Step 3: Prepare the JSON Configuration
Create a JSON file to define your IP filtering rules:
- Open a text editor to create the file:
nano ip-filter-config.json or vim ip-filter-config.json
- Add the following configuration and save the file:
{
  "mode": "Enabled",
  "publicNetworkSource": {
    "allowedIpCidrRanges": ["RANGE_CIDR"]
  },
  "vpcNetworkSources": [
    {
      "network": "projects/PROJECT_ID/global/networks/NETWORK_NAME",
      "allowedIpCidrRanges": ["RANGE_CIDR"]
    }
  ]
}
Replace the IP ranges and VPC network details with your specific requirements.
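For instance, a configuration that allows a hypothetical office range 203.0.113.0/24 plus an internal range on a VPC named my-vpc in project my-project might look like this:
{
  "mode": "Enabled",
  "publicNetworkSource": {
    "allowedIpCidrRanges": ["203.0.113.0/24"]
  },
  "vpcNetworkSources": [
    {
      "network": "projects/my-project/global/networks/my-vpc",
      "allowedIpCidrRanges": ["10.0.0.0/16"]
    }
  ]
}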
Step 4: Apply the Configuration
Run the following command to update your bucket with the IP filtering configuration:
gcloud alpha storage buckets update [BUCKET_NAME(gsutil URI)] --ip-filter-file=ip-filter-config.json

Step 5: Verify the Configuration
After applying the rules, describe the bucket to confirm the changes:
gcloud storage buckets describe [BUCKET_NAME(gsutil URI)]

You should now see the IP filter configuration in the bucket's details.
Step 6: Test Access
- Ensure requests from allowed IPs can access the bucket.

- Verify that non-whitelisted IPs are denied access.
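A quick way to check both cases is to list the bucket from each network (the bucket name is a placeholder):
# From an allowed IP, this should list the bucket's contents:
gcloud storage ls gs://my-bucket
# From a non-whitelisted IP, the same command should fail with a permission error.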

How to Disable or Remove IP Filtering
Disabling IP Filtering
- To disable IP filtering, change "mode" in the JSON file from "Enabled" to "Disabled", then update the bucket to apply the modified configuration.
{
  "mode": "Disabled",
  "publicNetworkSource": {
    "allowedIpCidrRanges": ["RANGE_CIDR"]
  },
  "vpcNetworkSources": [
    {
      "network": "projects/PROJECT_ID/global/networks/NETWORK_NAME",
      "allowedIpCidrRanges": ["RANGE_CIDR"]
    }
  ]
}
Update the bucket with the modified configuration:
gcloud alpha storage buckets update [BUCKET_NAME(gsutil URI)] --ip-filter-file=ip-filter-config.json
Removing IP Filtering Configuration
- To remove any existing IP filtering configuration from the bucket:
gcloud alpha storage buckets update [BUCKET_NAME(gsutil URI)] --clear-ip-filter
By enabling IP filtering, you can protect your Google Cloud buckets from unauthorized access and ensure compliance with organizational security policies. Whether you are securing sensitive data or limiting access during testing, these steps provide a robust framework for managing bucket security effectively.
Bypass bucket IP filtering rules:
Bypassing bucket IP filtering rules exempts users or service accounts from IP filtering restrictions for creating, deleting, or configuring buckets, while still enforcing rules for others. For more information about bucket IP filtering, see Bucket IP filtering(https://cloud.google.com/storage/docs/ip-filtering-overview).
It's crucial to have a way to regain access to your bucket if you inadvertently block your own IP address. This can happen due to the following reasons:
- Bucket lockout: When you accidentally add a rule that blocks your own IP address or the IP range of your entire network.
- Unexpected IP change: In some cases, your IP address might change unexpectedly due to network changes, and you might find yourself locked out.
To enable specific users or service accounts to bypass IP filtering restrictions on a bucket, grant them the storage.buckets.exemptFromIpFilter permission using a custom role. This permission exempts the user or service account from IP filtering rules for bucket-level operations such as creating, deleting, or configuring buckets. To do so, complete the following steps:
- Identify the user or service account that needs to bypass the IP filtering restrictions on specific buckets.
- Create a custom role (https://cloud.google.com/iam/docs/creating-custom-roles).
- Add the storage.buckets.exemptFromIpFilter permission to the role.
- Grant the custom role to the identified user or service account at the project level.
For information about granting roles, see Grant a single role (https://cloud.google.com/iam/docs/manage-access-service-accounts#grant-single-role)
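As a sketch of these steps in gcloud (the role ID, project, and service account are assumptions):
# Create a custom role carrying the exemption permission
gcloud iam roles create ipFilterExempt \
--project=my-project \
--title="IP Filter Exempt" \
--permissions=storage.buckets.exemptFromIpFilter
# Grant it to the account that needs to bypass IP filtering
gcloud projects add-iam-policy-binding my-project \
--member=serviceAccount:ops-sa@my-project.iam.gserviceaccount.com \
--role=projects/my-project/roles/ipFilterExempt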
Deploying Your Project on Google Cloud: From Manual Setup to Automated CD Pipeline with Secure Git Integration
Imagine you've just put the finishing touches on your latest application. Now comes the challenging part: deploying it on Google Cloud's Compute Engine instances. The initial setup is like assembling a complex jigsaw puzzle: creating a Managed Instance Group (MIG) with auto-scaling and auto-healing, pushing your dockerized application to Artifact Registry, configuring URL maps, forwarding rules, backends, and finally, a load balancer to distribute traffic.
In the current landscape of software development, while automating code improvement remains a challenge, optimizing infrastructure management for code updates is achievable. With the involvement of iterative development methodologies, it's crucial to minimize the time and effort required for code deployment.
This blog introduces and executes an efficient Continuous Deployment (CD) pipeline leveraging Git triggers on Cloud Build within Google Cloud Platform (GCP). My approach integrates SSH key authentication, enhancing both security and automation. We'll explore how to set up a Git trigger that activates whenever changes are pushed to your repository, and how to configure Cloud Build to work with these triggers, incorporating crucial security elements: Secret Manager for handling sensitive credentials, GitHub SSH keys for secure repository access, and Compute Engine SSH keys to maintain secure access to your deployment on GCP infrastructure, leading to seamless integration.
These steps eliminate the need for repetitive infrastructure setup with each code iteration, significantly reducing deployment overhead and enabling rapid, secure updates to your production environment.
Before jumping into the implementation, let's understand the concept of Continuous Deployment (CD) in the context of Git-based version control. CD is a DevOps practice where code changes are automatically built, tested, and deployed to production environments. In the Google Cloud Platform (GCP) ecosystem, Cloud Build serves as a robust CI/CD tool that can ingest source code from diverse Version Control Systems (VCS) or cloud storage solutions, execute builds according to user-defined specifications, and generate deployment artifacts such as Docker images or Java ARchives (JARs).
Let's dive into setting up the whole CD pipeline now.
To implement GitHub triggers effectively, the initial step involves properly structuring and updating your repository with the latest codebase. It's absolutely necessary that the person configuring the Cloud Build trigger possesses the requisite permissions on the target repository. Specifically, they should have collaborator status or equivalent access rights to enable seamless integration between GitHub events and the deployment pipeline. This ensures that the CD system can respond to repository updates and initiate the deployment process. The flowchart below is an overview of the steps we will be learning and practicing today.
If you are already familiar with all the services, go ahead and complete your deployment using this diagram.

Before proceeding with the setup, ensure that the Cloud Build API and the Secret Manager API are enabled in your Google Cloud environment. These can be activated via the Google Cloud Console's API Marketplace.
Establishing GitHub SSH keys for secure repository connection
For this, open up your cloud shell on your console and wait for it to connect. Now type in the following commands:
mkdir workingdir && cd workingdir
To generate your GitHub key, run the following command, replacing github-email with the email address you used to create your repository on GitHub:
ssh-keygen -t rsa -b 4096 -N '' -f id_github -C github-email
This generates a 4096-bit RSA key pair without a passphrase, which is crucial, as Cloud Build doesn't support passphrase-protected keys.
Secure Private Key Storage on Secret Manager
Now, after the above steps, you have a private and a public GitHub key. The private key (id_github) must be securely stored in Secret Manager to prevent unauthorized access. To do so, follow these steps:
a. Navigate to the Secret Manager in Google Cloud Console.
b. Select 'Create Secret'.
c. Assign a descriptive name to the secret.
d. For the secret value, upload the 'id_github' file from your workingdir.
e. Maintain default region settings unless specific requirements dictate otherwise.
f. Finalize by clicking 'Create secret'
Once these steps are done you can be assured that your private key is protected and isn’t accessible to everyone.
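Alternatively, the same secret can be created from Cloud Shell in one command (the secret name github-ssh-key is an assumption; choose your own):
gcloud secrets create github-ssh-key --data-file=id_github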
Connecting to your Github repository
Now that you have your Git keys, you need to add the public key on GitHub to connect it to your infrastructure on GCP. Log into your GitHub account, move to your repository page, and follow these steps:
a. Move to the Settings tab of your repository.
b. In the sidebar, select 'Deploy Keys' and click 'Add deploy key'.
c. Provide a descriptive title and paste the contents of 'workingdir/id_github.pub'. This is your public key.
d. Enable 'Allow write access'.
e. Confirm by clicking 'Add key'.
Once you have added the Git key to Secret Manager and to your GitHub repository's deploy key section, you can remove the local copies. This adds another level of security and ensures nobody else can access your GitHub key. To do so, run this in your Cloud Shell:
rm id_github*
Configuring Cloud Build Service Account Permissions
Now that you have the above set up, you need to make sure that the service account you are using has access to Secret Manager.
a. Navigate to the Cloud Build Settings page in Google Cloud Console.
b. Select the service account for your build operations.
c. Enable the 'Secret Manager Secret Accessor' role for this account.
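If you prefer the command line, granting the role looks roughly like this (the project ID and service account are placeholders; the address shown is the default Cloud Build service account):
gcloud projects add-iam-policy-binding PROJECT_ID \
--member=serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com \
--role=roles/secretmanager.secretAccessor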
Preparing Known Hosts for GitHub
The 'known_hosts' file is a critical component of SSH security, playing a vital role in preventing man-in-the-middle (MITM) attacks. Therefore, the final step is to set up your known hosts file.
We save GitHub's public host key for SSH verification in the known_hosts file. Go ahead and use this command to create a known_hosts file in the working directory:
ssh-keyscan -t rsa github.com > known_hosts.github
Make sure to place the 'known_hosts.github' file in the appropriate location in the build environment, in this case your GitHub repository.
With the GitHub SSH keys properly configured and securely stored, the next critical step is to create your cloudbuild.yaml configuration file. This YAML file defines the series of steps Cloud Build will execute during the deployment process.
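A minimal sketch of such a cloudbuild.yaml, assuming the private key is stored in a secret named github-ssh-key and following Google's documented pattern for SSH access to GitHub (the repository URL and any deploy steps are placeholders you would adapt):
steps:
# Load the SSH key from Secret Manager and register GitHub's host key
- name: 'gcr.io/cloud-builders/git'
  secretEnv: ['SSH_KEY']
  entrypoint: 'bash'
  args:
  - '-c'
  - |
    echo "$$SSH_KEY" >> /root/.ssh/id_rsa
    chmod 400 /root/.ssh/id_rsa
    cp known_hosts.github /root/.ssh/known_hosts
  volumes:
  - name: 'ssh'
    path: /root/.ssh
# Clone the repository over SSH; later steps would build and deploy
- name: 'gcr.io/cloud-builders/git'
  args: ['clone', 'git@github.com:OWNER/REPO.git']
  volumes:
  - name: 'ssh'
    path: /root/.ssh
availableSecrets:
  secretManager:
  - versionName: projects/PROJECT_ID/secrets/github-ssh-key/versions/latest
    env: 'SSH_KEY'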
For deploying applications to Compute Engine instances via SSH, it's imperative to set up authentication keys with the appropriate access permissions. These keys will enable Cloud Build to securely push code and execute commands on your Compute Engine Managed Instance Groups (MIGs).
In the next section, we'll delve into the details of setting up these SSH keys for Compute Engine. This final piece will complete our Continuous Deployment (CD) pipeline, enabling automated deployments to Compute Engine MIGs via SSH.
Configuring SSH keys for secure access to Compute Engine instances
This step is crucial for ensuring that our Cloud Build processes can securely interact with our deployment targets. Let's walk through this process, addressing common pitfalls and best practices along the way.
1. Generating SSH Keys
Create a folder named ssh_keys on your Cloud Editor. Inside that, create a blank text file called id_rsa.txt. This is where your SSH keys will be stored: both public and private.
Let's start by generating the SSH keys. Replace the placeholder values in the command below and run it in your Cloud Shell.
ssh-keygen -t rsa -f ~/enter_path_to_id_rsa.txt -C your_username -b 2048
The -b 2048 flag generates a 2048-bit RSA key pair, which offers a good balance of security and performance.
2. Connecting to Your Instance
SSH into your instance with the following command. Any changes you make and directories you create will be saved on the instance's disk, so make sure you allotted enough storage during instance or MIG template creation.
gcloud compute ssh username@instance_name --zone instance_zone
3. Adding SSH Keys to Compute Engine Metadata
Once you have your key pair, you need to add the public key to your Compute Engine instance's metadata. This enables SSH access to that particular instance. This can be done with the following gcloud command; paste it into your shell:
gcloud compute instances add-metadata instance_name \
--metadata ssh-keys="username:$(cat location_of_public_key/id_rsa.pub)" \
--zone instance_zone \
--project project_id
Replace instance_name with your Compute Engine instance name, followed by your username, the location of the public key, the zone in which the instance was created, and finally your project ID.
4. Configuring the Instance for SSH Access
Now that you have added the public key to your instance metadata, you also need to place that public key in the authorized_keys file in the instance's ~/.ssh folder. When a client connects with the matching private key, it is verified against this public key to grant SSH access and allow further processing.
On your Compute Engine instance paste the following commands to set up the authorized_keys file:
mkdir -p ~/.ssh
nano ~/.ssh/authorized_keys
The nano command opens an editor. Paste your public key into this file and then save it.
Next, let's set the correct permissions for the keys; paste these commands into the shell:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
These permissions are crucial for security - SSH will refuse to work if the permissions are too open.
5. Testing Your SSH Connection
Once you have executed all the above steps your SSH connection should be set. You can test your SSH connection using the verbose flag to diagnose any issues:
ssh -v -i ~/ssh_keys/id_rsa username@external_ip_of_instance
These steps complete your CD setup, and you can seamlessly integrate your code on GitHub into your production environment.
Before you are fully ready, make sure that Docker is properly installed and running on your instance. A common error faced while handling Docker is authentication issues.
COMMON ISSUE
If you encounter errors like 'Unauthenticated request' when pulling Docker images, you may need to add the appropriate IAM roles. Run this command to do so:
gcloud projects add-iam-policy-binding project_id \
--member=serviceAccount:service_account_name \
--role=roles/artifactregistry.reader
Also, configure Docker to authenticate with GCR:
gcloud auth configure-docker gcr.io --quiet
With these steps, your deployment pipeline with continuous deployment is ready, and you can seamlessly integrate updates directly from your GitHub repository to your production environment on Google Cloud Platform.
There might be cases where the production code you just deployed fails even after multiple checks, and you want to return to the previous version. This is handled by a rollback, which returns your deployed code to its previous version. You can do this through the console by choosing to roll back, but if you want to stay in the shell, use the following command, replacing the variables with the correct values:
gcloud deploy targets rollback TARGET_NAME \
--delivery-pipeline=PIPELINE_NAME \
--release=RELEASE_NAME \
--rollout-id=ROLLOUT_ID
Bear in mind that deploying buggy code to production can lead to serious application downtime as well as data loss, so make sure your code is fully tested and runs smoothly under traffic before pushing it to production.
Conclusion
The art of deploying applications on Google Cloud with a secure and automated CD pipeline is more than just a technical achievement—it's a step towards streamlined, efficient development. By meticulously configuring SSH keys and leveraging Git triggers, you ensure not only the integrity of your deployment process but also the speed and reliability of your updates. This approach eliminates manual errors, reduces operational overhead, and accelerates the delivery of new features to production.
As you continue refining your cloud infrastructure, the lessons from setting up this pipeline—such as securing credentials with Secret Manager and optimizing your GitHub integration—will serve as a strong foundation. With this setup, you're not just keeping up with the fast-paced world of DevOps; you're leading the charge towards a more secure, automated future.
FAQs
Some benefits of using cloud computing services include cost savings, scalability, flexibility, reliability, and increased collaboration.
Ankercloud takes data privacy and compliance seriously and adheres to industry best practices and standards to protect customer data. This includes implementing strong encryption, access controls, regular security audits, and compliance certifications such as ISO 27001, GDPR, and HIPAA, depending on the specific requirements of the customer. Learn More
The main types of cloud computing models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Each offers different levels of control and management for users.
Public clouds are owned and operated by third-party providers, private clouds are dedicated to a single organization, and hybrid clouds combine elements of both public and private clouds. The choice depends on factors like security requirements, scalability needs, and budget constraints.
Cloud computing services typically offer pay-as-you-go or subscription-based pricing models, where users only pay for the resources they consume. Prices may vary based on factors like usage, storage, data transfer, and additional features.
The process of migrating applications to the cloud depends on various factors, including the complexity of the application, the chosen cloud provider, and the desired deployment model. It typically involves assessing your current environment, selecting the appropriate cloud services, planning the migration strategy, testing and validating the migration, and finally, executing the migration with minimal downtime.
Ankercloud provides various levels of support to its customers, including technical support, account management, training, and documentation. Customers can access support through various channels such as email, phone, chat, and a self-service knowledge base.
The Ankercloud Team loves to listen

