Empowering Diverse Industries with Cloud Innovation

From project-specific support to managed services, we help you accelerate time to market, maximize cost savings, and realize your growth ambitions

AWS
HPC
Cloud
Bio Tech
Machine Learning

High Performance Computing using Parallel Cluster, Infrastructure Set-up

AWS
Cloud Migration

gocomo Migrates Social Data Platform to AWS for Performance & Scalability with Ankercloud

Google Cloud
SaaS
Cost Optimization
Cloud

Migrating a SaaS platform from On-Prem to GCP

AWS
HPC

Benchmarking AWS performance to run environmental simulations over Belgium

Countless Happy Clients and Counting!


“Ankercloud has been very helpful and understanding. All interactions have been smooth and enjoyable.”

Torbjörn Svensson
Head of Development

Awards and Recognition

Rising Star Partner of the Year
Google Cloud Partner
Google Cloud Infrastructure Specialization
Technology Fast 500
AWS Partner

Our Latest Achievement

Public Sector
Solution Provider
SaaS Services Competency
DevOps Services Competency
AWS WAF Delivery
AWS Glue Delivery
AWS Lambda Delivery
Amazon CloudFront Delivery
Migration Services Competency
Public Sector Solution Provider
AWS CloudFormation Delivery
Amazon OpenSearch Service Delivery
Well-Architected Partner Program
Cloud Operations Services Competency

Ankercloud: Partners with AWS, GCP, and Azure

We excel through partnerships with industry giants like AWS, GCP, and Azure, offering innovative solutions backed by leading cloud technologies.


Check out our blog

Blog

Automating S3 to GCS Migration Using Bash Scripts

Introduction

Cloud storage plays a crucial role in modern infrastructure, providing scalable and reliable storage solutions. Many businesses migrate from AWS S3 to Google Cloud Storage (GCS) to leverage cost benefits, integration with Google Cloud services, or optimize their cloud strategies. However, when dealing with hundreds of S3 buckets, manual migration is inefficient and time-consuming.

To streamline the process, I automated the migration using Bash scripts and Google Cloud’s Storage Transfer Service. In this blog, I’ll walk you through the steps of automating S3 to GCS migration efficiently.

Why Automate S3 to GCS Migration?

Handling 200+ S3 buckets manually would involve:

  • Repetitive tasks – Creating GCS buckets, setting permissions, and transferring data for each bucket.
  • Human errors – Misconfiguration, incorrect bucket names, or missing files.
  • Time-consuming process – Manual intervention would take days to complete.

By automating this process, we can:

  • Save time – Script execution takes a few minutes instead of hours or days.
  • Eliminate errors – Ensures all S3 buckets are correctly transferred.
  • Enable monitoring & scheduling – Automate recurring data transfers with Google’s Storage Transfer Service.

Prerequisites

Before running the scripts, ensure you have:

  • A Google Cloud project with billing enabled.
  • An AWS IAM user with s3:ListBucket and s3:GetObject permissions.
  • The Google Cloud SDK (gcloud CLI) installed on your local machine.
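Before running anything, it is worth a quick check that the gcloud CLI is installed, authenticated, and pointed at the right project (the project ID below is the one used throughout this post; substitute your own):

# Confirm the SDK is installed and which account is active
gcloud --version
gcloud auth list

# Point gcloud at the target project
gcloud config set project ccd-poc-project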

Step 1: Creating Google Cloud Storage Buckets

Each S3 bucket requires a corresponding GCS bucket. The script below reads a list of bucket names from a file and creates them in GCP.

create_gcs_bucket.sh

#!/bin/bash

# Variables
PROJECT_ID="ccd-poc-project"          # Replace with your GCP project ID
BUCKET_LIST_FILE="bucket_names.txt"   # File containing bucket names
OUTPUT_FILE="created_buckets.txt"
REGION="us-central1"                  # Change if needed

# Check if the bucket list file exists
if [ ! -f "$BUCKET_LIST_FILE" ]; then
    echo "Error: Bucket names file '$BUCKET_LIST_FILE' not found!"
    exit 1
fi

# Read bucket names and create GCS buckets
while IFS= read -r BUCKET_NAME || [[ -n "$BUCKET_NAME" ]]; do
    if [[ -z "$BUCKET_NAME" ]]; then
        continue  # Skip empty lines
    fi

    # Clean bucket name (strip carriage returns and whitespace)
    BUCKET_NAME=$(echo "$BUCKET_NAME" | tr -d '\r' | tr -d '[:space:]')

    echo "Creating bucket: $BUCKET_NAME"
    gcloud storage buckets create "gs://$BUCKET_NAME" --location="$REGION" --project="$PROJECT_ID"

    if [ $? -eq 0 ]; then
        echo "gs://$BUCKET_NAME" >> "$OUTPUT_FILE"
        echo "Bucket $BUCKET_NAME created successfully."
    else
        echo "Error: Failed to create bucket $BUCKET_NAME"
    fi
done < "$BUCKET_LIST_FILE"

  Explanation:

  • Reads bucket names from bucket_names.txt.
  • Cleans up any unnecessary whitespace.
  • Creates GCS buckets with the specified region.
  • Stores created bucket names in created_buckets.txt.
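For reference, bucket_names.txt is just a plain text list with one bucket name per line, for example (hypothetical names):

prod-app-logs
prod-user-uploads
analytics-raw-data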

Step 2: Automating Data Transfer from S3 to GCS

After creating the required GCS buckets, the next step is to automate data transfer using the gcloud transfer jobs command.

s3_to_gcs_transfer.sh

#!/bin/bash

# Variables
AWS_ACCESS_KEY="YOUR_AWS_ACCESS_KEY"
AWS_SECRET_KEY="YOUR_AWS_SECRET_KEY"
PROJECT_ID="ccd-poc-project"
CREDS_FILE="aws-creds.json"

# Create AWS credentials JSON file
cat <<EOF > "$CREDS_FILE"
{
  "awsAccessKeyId": "$AWS_ACCESS_KEY",
  "awsSecretAccessKey": "$AWS_SECRET_KEY"
}
EOF

# Read bucket names and create transfer jobs
while IFS= read -r BUCKET_NAME || [[ -n "$BUCKET_NAME" ]]; do
  # Clean bucket name and skip empty lines
  BUCKET_NAME=$(echo "$BUCKET_NAME" | tr -d '\r' | tr -d '[:space:]')
  [[ -z "$BUCKET_NAME" ]] && continue

  echo "Creating transfer job for S3 bucket: $BUCKET_NAME"
  JOB_NAME=$(gcloud transfer jobs create s3://"$BUCKET_NAME" gs://"$BUCKET_NAME" \
    --source-auth-method=AWS_SIGNATURE_V4 \
    --source-creds-file="$CREDS_FILE" \
    --schedule-repeats-every=1d \
    --project="$PROJECT_ID" \
    --format="value(name)")

  if [[ -n "$JOB_NAME" ]]; then
    echo "Transfer job created successfully: $JOB_NAME"
  else
    echo "Failed to create transfer job for $BUCKET_NAME"
  fi
done < bucket_names.txt

# Remove credentials file for security
rm "$CREDS_FILE"

echo "Transfer job creation complete."

Explanation:

  • Writes the AWS credentials to a temporary JSON file.
  • Reads each bucket name and creates a daily-recurring Storage Transfer Service job from the S3 bucket to the GCS bucket of the same name.
  • Deletes the credentials file after execution for security.
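Once the jobs are created, you can check on them from the CLI. Assuming the gcloud transfer commands are available in your SDK version, the following lists the jobs and the runs of a specific job (replace JOB_NAME with a name printed by the script):

gcloud transfer jobs list --project=ccd-poc-project
gcloud transfer operations list --job-names=JOB_NAME --project=ccd-poc-project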

Step 3: Running the Migration

To execute the scripts, follow these steps:

  1. Save the S3 bucket names in a file named bucket_names.txt.
  2. Run the GCS bucket creation script:

chmod +x create_gcs_bucket.sh

./create_gcs_bucket.sh

  3. Run the S3-to-GCS transfer script:

chmod +x s3_to_gcs_transfer.sh

./s3_to_gcs_transfer.sh

Conclusion

By automating S3 to GCS migration, we:

  • Eliminated manual effort for creating 200+ buckets.
  • Ensured accurate and efficient data transfers.
  • Scheduled daily syncs for incremental updates.

This solution scales easily and can be modified to include advanced features like logging, monitoring, and notifications.
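As one example of such an extension, the two scripts can be wrapped so their output is captured to a timestamped log file (a minimal sketch, not part of the original scripts):

# Capture all script output to a timestamped log file
LOG_FILE="migration_$(date +%Y%m%d_%H%M%S).log"
./create_gcs_bucket.sh 2>&1 | tee -a "$LOG_FILE"
./s3_to_gcs_transfer.sh 2>&1 | tee -a "$LOG_FILE"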

If you found this guide helpful, feel free to share your thoughts and experiences in the comments. Happy migrating!

Feb 20, 2025


Blog

Google Cloud - Security Alerts Automation

In this blog, we will guide you through automating alerting for critical activities and securing your projects against accidental deletion using custom scripts. Setting up log-based metrics and alerts manually is a time-consuming task, typically taking around an hour, and is prone to manual errors. To optimize this process and enhance efficiency, we have automated it using a combination of Shell and YAML scripts.

By implementing this solution, you can configure notification channels to receive alerts whenever changes are detected in your cloud environment, ensuring prompt action on potential issues. Our approach involves leveraging YAML files along with the Deployment Manager to create and manage Log Metrics and Alerting Policies. Once these components are successfully deployed, the deployment itself is deleted; removing it does not interfere with any ongoing services or resources in your cloud environment.
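To make the moving parts concrete, here is roughly what one metric-plus-alert pipeline looks like when created by hand with gcloud. This is only an illustrative sketch; the repository drives this through Deployment Manager YAML, and the filter shown is an example rather than the repo's exact definition:

# Create an email notification channel (replace the address with your own)
gcloud beta monitoring channels create \
  --display-name="Security Alerts" \
  --type=email \
  --channel-labels=email_address=alerts@example.com

# Create a log-based metric that counts VM instance deletions
gcloud logging metrics create instance-delete-metric \
  --description="Counts VM instance deletions" \
  --log-filter='resource.type="gce_instance" AND protoPayload.methodName:"compute.instances.delete"'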

The following steps will provide you with a detailed, step-by-step guide to implementing this automation effectively, allowing you to maintain better security and operational efficiency.

Step-by-step guide to implementing this automation effectively

1. Clone the Repository

Prerequisites:

Connect to your Google Cloud Shell and ensure you have the necessary permissions to implement the script.

git clone https://github.com/nvrvenkat/Securitylogalerts.git

This command clones the Securitylogalerts repository from GitHub to your local system.

2. Navigate to the Repository Directory

cd Securitylogalerts/

This command changes the directory to the Securitylogalerts folder, where all project files are located.

3. Description of Metrics and Alerts

  • Assign-resource-to-billing-account-metric: Generates an alert whenever a resource is assigned to a billing account.
  • Create-service-account-key-metric: Sends a notification whenever a service account key is created.
  • Deletion-protection-metric: Issues an alert whenever deletion protection for a resource is disabled.
  • Delete-service-account-key-metric: Logs a warning whenever a service account key is deleted.
  • Disk-deletion-metric: Detects and notifies whenever a disk is removed.
  • Firewall-update-metric: Alerts the team whenever a firewall configuration is modified.
  • Iam-action-metric: Flags an activity whenever an IAM-related action is executed.
  • Instance-delete-metric: Reports an event whenever a virtual machine instance is terminated.
  • Instance-insert-metric: Notifies the team whenever a new virtual machine instance is provisioned.
  • Label-modification-metric: Identifies and reports whenever an instance label is altered or a new one is added.
  • Service-account-creation-metric: Triggers a notification whenever a new service account is established.
  • Set-iam-metric: Raises an alert whenever a new IAM user is assigned a role or created.
4. Replace the Email Address in logmetric_notification.sh

    Update the email address in the shell script “logmetric_notification.sh” with the specific email address where alerts need to be sent.

    This email address will be used to configure the notification channel.

    5. Execute the Notification Channel Script

    ./logmetric_notification.sh 

Runs the script, which creates a notification channel with the updated email address and generates the log-based metrics specified in the "Metrics and Alerts" section.

    Note: If a notification channel already exists, execute the logmetric.sh file to generate only the log-based metrics.

    6. Navigate to the Log Alert Directory and Execute Scripts

    a) cd /Securitylogalerts/Logalert

    ./scripts.sh

    The scripts.sh script triggers:

    replace_notification_channel.sh: Replaces the notification channel with ACT-MS-alerts in the YAML files used for creating log metric alerts. The output is saved to output.txt.

    logalert.sh: Creates alerting policies based on the updated notification channel in the YAML files.
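In essence, replace_notification_channel.sh performs a text substitution over the alert YAML files, conceptually similar to the following (a simplified illustration with a hypothetical placeholder name, not the repository's exact script):

# Swap a placeholder channel name for the real one in every alert definition
for f in *.yaml; do
  sed -i 's/NOTIFICATION_CHANNEL_PLACEHOLDER/ACT-MS-alerts/g' "$f"
done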

     Alerting Policies Update:

    • Once scripts.sh is executed, the notification channel in the YAML files will be replaced, and the alerting policies will be created.
    • The alerting policies should be fully deployed within approximately 10 minutes.

    The resources will be created using the Deployment Manager in the Google Cloud Console. Once the resources are created, the deployment will be deleted while retaining the resources.

b) Add Multiple Notification Channels (optional):

    cd /Securitylogalerts/Logalertmultiple
    ./scripts.sh

This command adds multiple notification channels to the alerting policies. Ensure you update the respective notification channel names in the “replace_notification_channel.sh” file before executing the script. It updates the YAML files for the log alert metrics with the additional notification channels.

    7. Test Alerting Policies

The alerttest.sh script tests the alerting policies by:

    • Creating and deleting resources (e.g., instances, disks, service accounts, service account keys, and firewall rules).
    • Sending alerts to the configured notification channel to verify that the policies are functioning correctly.

    8. Resource Creation and Deletion Activity

    After executing “alerttest.sh”, resources are automatically created and deleted as per the alerting policy configurations.

    Alerts are triggered and sent to the configured notification channel.

    For example: Alerts for service account key creation and deletion.

    Similar alerts for other resources will be triggered based on resource creation.

9. Enable Project Liens

    cd /Securitylogalerts/

    ./Liens.sh

    Executes the “Liens.sh” script, which fetches the project ID automatically and enables liens on the project to prevent accidental deletion.
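For reference, a project lien can be created with a single gcloud command along these lines (a sketch of what Liens.sh presumably wraps; the actual script may use different flags):

    # Place a lien that blocks project deletion until the lien is removed
    gcloud alpha resource-manager liens create \
      --project="$(gcloud config get-value project)" \
      --restrictions=resourcemanager.projects.delete \
      --origin=security-automation \
      --reason="Protect project from accidental deletion"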

    By following these steps, you'll be able to automate your cloud environment’s monitoring and security processes, ensuring that you stay ahead of any potential data and revenue losses and minimize the risk of accidental deletions.

    Feb 5, 2025


    Blog

    How to Secure Your Google Cloud Buckets with IP Filtering

In today's cloud-driven world, sensitive data should be kept as secure as possible. IP filtering lets you control who can access your Google Cloud Storage buckets by allowing requests only from trusted networks. This guide walks you through the step-by-step process of setting up IP filtering.

    What is IP Filtering?

    IP filtering limits access to a bucket by allowing access only from particular IP ranges or networks. It grants access to your data while blocking traffic requests from unknown or malicious sources.

    Key Use Cases for Google Cloud Bucket IP Filtering

    1. Compliance Requirements

    • Description: Make sure only authorized users can access the bucket to meet legal or industry rules for protecting data.
    • Key: Regulatory Adherence (Following Data Protection Rules)

    2. Protect Public Buckets

    • Description: Enhanced security prevents unauthorized access to publicly accessible buckets by limiting traffic to trusted IPs. This protects sensitive public resources from malicious activity.
    • Key: Access Control for Public Data

    3. VPC Integration

    • Description: Private networking limits bucket access to specific Virtual Private Cloud (VPC) networks. This ensures secure interactions within a well-defined network boundary, enhancing data protection.
    • Key: Network-Specific Access

    4. Controlled Testing

    • Description: Access restriction during testing phases ensures that bucket access is limited to only select IPs or systems. This maintains control over the testing environment and reduces unintended data exposure.
    • Key: Testing Environment Control

    5. Enhanced Monitoring

    • Description: Simplifies audits by restricting access to known and trusted IPs. That is, by allowing access only from trusted IPs, you reduce the number of unknown or suspicious interactions. This makes it easier to track who accessed the bucket and when, simplifying audits and improving transparency.
    • Key: Simplified Audit Trails

    Supported locations

    Bucket IP filtering is available in the following locations:

    • asia-south1
    • asia-south2
    • asia-southeast1
    • asia-southeast2
    • asia-east1
    • asia-east2
    • europe-west1
    • europe-west2
    • us-central1
    • us-east1
    • us-east4
    • us-west1

    Limitations

    Bucket IP filtering has the following limitations:

    • Maximum number of IP CIDR blocks: You can specify a maximum of 200 IP CIDR blocks across public and VPC networks in the IP filter rule for a bucket.
    • Maximum number of VPC networks: You can specify a maximum of 25 VPC networks in the IP filter rules for a bucket.
    • Dual-region support: IP filtering is not supported for dual-regional buckets.
    • Blocked Google Cloud services: Enabling IP filtering on Cloud Storage buckets restricts access for some Google Cloud services, regardless of whether they use a service agent to interact with Cloud Storage.

    How to Enable Bucket IP Filtering

Step 1: Install the Google Cloud CLI (Command Line Interface) on the server

    SSH into your instance and install the Google Cloud CLI using the following command:

    sudo snap install google-cloud-cli --classic

    Authenticate with Google Cloud:


    gcloud auth login

    You will be prompted to grant access to your Google account.


    Set the desired project ID:

    gcloud config set project [PROJECT_ID]

    Step 2: Verify Your Bucket

    1. List all the buckets in your project: gcloud storage buckets list
    2. Locate the bucket name you want to configure.

    Step 3: Prepare the JSON Configuration

    Create a JSON file to define your IP filtering rules:

    • Open a text editor to create the file:
    nano ip-filter-config.json or vim ip-filter-config.json
    • Add the following configuration and save the file:

{
  "mode": "Enabled",
  "publicNetworkSource": {
    "allowedIpCidrRanges": ["RANGE_CIDR"]
  },
  "vpcNetworkSources": [
    {
      "network": "projects/PROJECT_ID/global/networks/NETWORK_NAME",
      "allowedIpCidrRanges": ["RANGE_CIDR"]
    }
  ]
}

    Replace the IP ranges and VPC network details with your specific requirements.

    Step 4: Apply the Configuration

    Run the following command to update your bucket with the IP filtering configuration:

gcloud alpha storage buckets update gs://[BUCKET_NAME] --ip-filter-file=ip-filter-config.json

    Step 5: Verify the Configuration

    After applying the rules, describe the bucket to confirm the changes:

gcloud storage buckets describe gs://[BUCKET_NAME]

You should now see the IP filter configuration in the bucket's details.

Step 6: Test Access

• Ensure requests from allowed IPs can access the bucket.
• Verify that non-whitelisted IPs are denied access; a quick way to check both is shown below.
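For instance, listing the bucket with gcloud from machines inside and outside the allowed ranges is a simple way to confirm the filter is working (bucket name is a placeholder):

    # From a host whose IP is in an allowed range – this should succeed
    gcloud storage ls gs://[BUCKET_NAME]

    # From a host outside the allowed ranges – this should fail with a permission error
    gcloud storage ls gs://[BUCKET_NAME]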

How to Disable or Remove IP Filtering

Disabling IP Filtering

• To disable IP filtering, change "mode" from “Enabled” to “Disabled” in the JSON file and update the bucket to apply the modified configuration:

{
  "mode": "Disabled",
  "publicNetworkSource": {
    "allowedIpCidrRanges": ["RANGE_CIDR"]
  },
  "vpcNetworkSources": [
    {
      "network": "projects/PROJECT_ID/global/networks/NETWORK_NAME",
      "allowedIpCidrRanges": ["RANGE_CIDR"]
    }
  ]
}

    Update the bucket with the modified configuration:

gcloud alpha storage buckets update gs://[BUCKET_NAME] --ip-filter-file=ip-filter-config.json

    Removing IP Filtering Configuration

• To remove any existing IP filtering configuration from the bucket:

gcloud alpha storage buckets update gs://[BUCKET_NAME] --clear-ip-filter

    By enabling IP filtering, you can protect your Google Cloud buckets from unauthorized access and ensure compliance with organizational security policies. Whether you are securing sensitive data or limiting access during testing, these steps provide a robust framework for managing bucket security effectively.

    Bypass bucket IP filtering rules:

    Bypassing bucket IP filtering rules exempts users or service accounts from IP filtering restrictions for creating, deleting, or configuring buckets, while still enforcing rules for others. For more information about bucket IP filtering, see Bucket IP filtering(https://cloud.google.com/storage/docs/ip-filtering-overview).

    It's crucial to have a way to regain access to your bucket if you inadvertently block your own IP address. This can happen due to the following reasons:

    • Bucket lockout: When you accidentally add a rule that blocks your own IP address or the IP range of your entire network.
    • Unexpected IP change: In some cases, your IP address might change unexpectedly due to network changes, and you might find yourself locked out.

    To enable specific users or service accounts to bypass IP filtering restrictions on a bucket, grant them the storage.buckets.exemptFromIpFilter permission using a custom role. This permission exempts the user or service account from IP filtering rules for bucket-level operations such as creating, deleting, or configuring buckets. To do so, complete the following steps:

    1. Identify the user or service account that needs to bypass the IP filtering restrictions on specific buckets.
2. Create a custom role (https://cloud.google.com/iam/docs/creating-custom-roles).
    3. Add the storage.buckets.exemptFromIpFilter permission to the role.
    4. Grant the custom role to the identified user or service account at the project level.

    For information about granting roles, see Grant a single role (https://cloud.google.com/iam/docs/manage-access-service-accounts#grant-single-role)
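As a rough sketch, steps 2 through 4 can be done with gcloud along these lines (the role ID and member shown are hypothetical examples):

    # Create a custom role containing only the exemption permission
    gcloud iam roles create ipFilterExempt \
      --project=PROJECT_ID \
      --title="IP Filter Exempt" \
      --permissions=storage.buckets.exemptFromIpFilter

    # Grant the custom role at the project level
    gcloud projects add-iam-policy-binding PROJECT_ID \
      --member="user:admin@example.com" \
      --role="projects/PROJECT_ID/roles/ipFilterExempt"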

    Feb 4, 2025


    The Ankercloud Team loves to listen