Resources

The latest industry news, interviews, technologies and resources.

AI, Agentic AI, AI Solutions, Business Automation

The Rise of the Solo AI: Understanding How Intelligent Agents Operate Independently

Jun 19, 2025

The world of Artificial Intelligence is evolving at breakneck speed, and if you thought Generative AI was a game-changer, prepare yourself for the next frontier: Agentic AI. This isn't just about AI creating content or making predictions; it's about AI taking initiative, making decisions, and autonomously acting to achieve defined goals, all without constant human oversight. Welcome to a future where your digital workforce is not just smart, but truly agentic.

What exactly is Agentic AI? The Future of Autonomous Action

Think of traditional AI as a highly intelligent assistant waiting for your commands. Generative AI then empowered this assistant to create original content based on your prompts. Now, with Agentic AI, this assistant becomes a proactive, self-managing colleague or robot.

Agentic AI systems are characterized by:

  • Autonomy: They can perform tasks independently, making decisions and executing actions without constant human intervention.
  • Adaptability: They learn from interactions, feedback, and new data, continuously refining their strategies and decisions.
  • Goal-Orientation: They are designed to achieve specific objectives, breaking down complex problems into manageable steps and seeing them through.
  • Tool Integration: They can seamlessly interact with various software tools, databases, and APIs to gather information and execute tasks, much like a human would.
  • Reasoning and Planning: Beyond simple rule-following, Agentic AI can reason about its environment, plan multi-step processes, and even recover from errors.

This evolution from reactive to proactive AI is not just a technological leap; it's a paradigm shift that promises to redefine how businesses operate. Gartner projects that by 2028, 33% of enterprise software applications will have integrated Agentic AI, a dramatic increase from less than 1% in 2024, highlighting its rapid adoption.

The Impact is Real: Why Agentic AI is a Trending Imperative

Businesses are no longer just experimenting with AI; they are investing heavily in it. A recent IBM study revealed that executives expect AI-enabled workflows to surge eightfold by the end of 2025, with Agentic AI at the core of this transformation. Why the urgency? Because the benefits are profound:

  • Boosted Productivity & Efficiency: Imagine repetitive, time-consuming tasks being handled entirely by AI agents, freeing up your human workforce to focus on strategic initiatives and creative problem-solving.
  • Enhanced Decision-Making: Agentic AI can analyze vast datasets in real-time, identify patterns, and provide actionable insights, leading to more informed and proactive business decisions.
  • Cost Reduction: Automating complex processes and optimizing resource allocation directly translates into significant cost savings.
  • Unlocking New Revenue Streams: By automating customer interactions, personalizing experiences, and optimizing operations, Agentic AI can directly contribute to increased sales and market expansion.
  • Improved Employee and Customer Experience: From streamlined internal workflows to hyper-personalized customer service, Agentic AI elevates interactions across the board.
  • Competitive Advantage: Early adopters of Agentic AI are already seeing a distinct edge in their respective markets, setting new standards for innovation and operational excellence.

Top Use Cases: Where Agentic AI Shines Brightest

The applications of Agentic AI are vast and growing across every industry. Here are some of the top use cases where it's already making a significant impact:

  • Smart Manufacturing
    • Predictive Maintenance & Quality Control: Agentic AI monitors equipment in real time, predicts failures, and schedules maintenance to prevent unplanned downtime while also using computer vision to detect product defects and reduce waste by up to 60%.
    • Autonomous Inventory & Supply Chain Optimization: AI agents track inventory levels, forecast demand, and optimize supply chain logistics to avoid stockouts or overstocking, dynamically adjusting to market changes and disruptions for cost efficiency and seamless operations.
  • Smart Robots
    • Dynamic Task Allocation & Autonomous Assembly: Agentic AI enables robots to adapt to new tasks and environments in real time, optimizing assembly processes and resource usage for faster, more flexible production with minimal human intervention.
    • Collaborative Robotics (Cobots) & Real-Time Monitoring: AI-powered robots work safely alongside humans, adjusting behaviors based on real-time conditions, and continuously monitor production lines to detect anomalies and ensure quality and safety.
  • Customer Service & Engagement:
    • Autonomous Support Agents: Beyond traditional chatbots, agentic AI can independently resolve complex customer inquiries, access and analyze live data, offer tailored solutions (e.g., refunds, expedited orders), and update records.
    • Personalized Customer Journeys: Anticipating customer needs and preferences, agentic AI can proactively offer relevant products, services, and support, enhancing satisfaction and loyalty.

  • Finance & Fraud Detection:
    • Automated Trading: Analyzing market data and executing trades autonomously to optimize investment decisions.
    • Enhanced Fraud Detection: Proactively identifying and flagging suspicious patterns in transactions and user behavior to mitigate financial risks.

  • Software Development & IT Operations (DevOps):
    • Automated Code Generation & Testing: AI agents can generate code segments, provide real-time suggestions, and automate software testing, accelerating development cycles.
    • Proactive System Monitoring & Maintenance: Continuously scanning for anomalies, triggering automated responses to contain threats, and scheduling predictive maintenance.

  • Human Resources (HR):
    • Automated Recruitment: From screening resumes and scheduling interviews to simulating interview experiences for candidates.
    • Personalized Onboarding: Tailoring onboarding sessions and providing relevant information to new hires.

Ankercloud's Agentic AI Solutions: Your Partner in the Autonomous Future

At Ankercloud, we don't just talk about Agentic AI; we build and deploy real-world solutions that deliver tangible business value. We combine cutting-edge technology with our deep industry expertise to help you navigate the complexities of this new frontier.

Our approach to Agentic AI is rooted in a fundamental understanding of your business needs. We work closely with you to:

  • Analyze Existing Workflows: We identify opportunities where Agentic AI can significantly enhance efficiency and outcomes.
  • Integrate Human-in-the-Loop Solutions: Our solutions are designed to augment, not replace, your human workforce, ensuring critical oversight and collaboration.
  • Seamless Integration: We design AI agents that integrate effortlessly with your existing systems (ERPs, CRMs, finance tools) to enhance workflows without disruption.
  • Custom GenAI Models: We develop bespoke Agentic AI models tailored to your specific business goals, leveraging the power of Generative AI for advanced reasoning and content generation.
  • Industry-Specific Expertise: Our experience spans diverse industries, allowing us to build solutions that address your unique challenges and opportunities.
  • Robust Governance and Security: We embed ethical guardrails, robust security protocols, and explainable AI capabilities from the outset, ensuring responsible and trustworthy autonomous operations.

The future of business is autonomous, adaptive, and intelligent. Agentic AI is no longer a concept; it's a tangible reality that is reshaping industries and creating new opportunities for growth.

Are you ready to unlock the full potential of Agentic AI for your business?

Contact Ankercloud today to explore how our Agentic AI solutions can transform your operations and propel you into the autonomous future.

Read Blog
Sovereign Cloud, Data Residency, Cloud Compliance, Digital Autonomy

The Cloud Promised Freedom. But What About Control? Drive Your Digital Innovation with Sovereign Cloud

Jun 17, 2025

Remember the dream of the cloud? Infinite scale, instant agility, unparalleled innovation. It’s a dream that has revolutionized businesses globally. But in the relentless race for digital supremacy, a new, critical question has emerged from the shadows: who truly controls your data?

In an era of shifting global alliances, escalating cyber threats, and a tidal wave of new data regulations sweeping across nations – like India’s pivotal Digital Personal Data Protection (DPDP) Act of 2023 and the recent EU Data Act – true cloud freedom isn't about limitless access; it’s about unwavering control. This isn't just a technical upgrade; it's a strategic awakening, and its name is Sovereign Cloud.

At Ankercloud, we’re witnessing this paradigm shift firsthand. Businesses are no longer just asking "Where is my data stored?" They're demanding, "Who can touch my data? What laws govern it? And how can I be absolutely sure of my digital autonomy?" As your trusted partner in cloud solutions and services, we're here to tell you: Sovereign Cloud is the definitive answer, and it’s fast becoming the bedrock of future-proof enterprises.

Digital Borders: Unpacking Sovereign Cloud, Data Residency, and Digital Autonomy

To truly grasp this new frontier, let’s demystify the terms that define it:

  • Data Residency: This is the foundational layer. It's the absolute guarantee that your data physically resides and is processed within the geographical boundaries of a specific country. For Indian enterprises, this means your sensitive customer records, intellectual property, and financial data stay firmly on Indian soil.
  • Data Sovereignty: This concept elevates residency into the legal realm. It means your data is not only physically located in a specific country but is also exclusively subject to the laws and governance structures of that nation. No backdoor access, no extraterritorial legal claims from foreign powers. Your data dances to your nation’s tune.
  • Digital Autonomy: This is the ultimate aspiration. It’s the profound ability for an organization – and by extension, a nation – to chart its own digital course, free from undue external influence. It’s about owning your technology stack, controlling operational workflows, safeguarding critical intellectual property, and ensuring that no foreign entity, however powerful, can dictate the terms of your digital existence.
  • Sovereign Cloud: This isn’t just a server in a specific country. It’s a meticulously engineered cloud ecosystem where every layer – infrastructure, operations, administrative access, and legal frameworks – is purpose-built to ensure your data, applications, and operations are unconditionally subject to the laws and jurisdiction of a specific nation. It's your fortress in the cloud.

The Unstoppable Momentum: Why Sovereign Cloud is a 2025 Imperative

The drive towards Sovereign Cloud isn't a fleeting trend; it's an economic and geopolitical force reshaping the global digital landscape.

  1. The Regulatory Hammer Falls: From Europe’s GDPR and upcoming AI Act to India’s landmark DPDP Act (2023) and the new EU Data Act, governments worldwide are legislating stringent data protection, cross-border transfer rules, and even data localization. The penalties for non-compliance are no longer just abstract; they're substantial and real.
  2. Geopolitical Chessboard: In an increasingly complex global arena, the specter of foreign government data access requests (like those under the US CLOUD Act) looms large. Businesses cannot afford to have their critical data exposed to such vulnerabilities, risking competitive advantage or even national security.
  3. Fortifying Critical Infrastructure: For vital sectors like energy, finance, defense, and healthcare, compromising data integrity or availability isn't an option. Sovereign Cloud offers the ironclad assurance needed to protect national assets.
  4. Supply Chain Due Diligence: Who builds your cloud? Who manages it? The origin and operational control of cloud infrastructure and personnel are under unprecedented scrutiny. Sovereign Cloud provides transparency and control over your digital supply chain.
  5. Earning and Keeping Trust: For many sectors, or those handling vast amounts of personal data, visibly committing to data sovereignty is a powerful statement of integrity. It builds and maintains invaluable public trust, a currency more precious than ever.

Where Trust Meets Technology: Top Sovereign Cloud Use Cases

Sovereign Cloud is becoming indispensable across a variety of sectors that simply cannot compromise on control:

  • Government & Public Sector: Mandated by law in many countries for highly sensitive citizen data, national security information, and critical government applications.
  • Financial Services: Banks, insurance companies, and fintech firms handling vast amounts of sensitive customer financial data and adhering to strict industry-specific regulations (e.g., RBI guidelines in India). A global financial services firm, for instance, partnered with Ankercloud to define the necessary architecture and implement robust security controls across multiple jurisdictions to meet stringent local regulatory requirements.
  • Healthcare: Protecting patient health records (PHR/EHR) and complying with stringent privacy regulations (e.g., HIPAA in the US, similar acts globally).
  • Defense & Aerospace: Critical for classified information, R&D, and operational data where national security is paramount. A government agency, as shared by Ankercloud's MD Judith Evers, needed to ensure citizen data remained within national borders and was subject solely to national laws, including strict control over administrative access to their cloud environment.
  • Telecommunications: Managing subscriber data and critical network infrastructure, often subject to national communication laws.
  • Manufacturing & Industrial IoT: Protecting intellectual property, operational technology (OT) data, and ensuring supply chain resilience, especially for data generated at the edge. Ankercloud assisted a European manufacturing company in securing highly sensitive IoT data from their smart factories, where data sovereignty was crucial for intellectual property protection and operational resilience against cyber threats, by focusing on securing the data pipeline from edge to cloud.
  • Research & Development: Safeguarding proprietary algorithms, research data, and intellectual property.

The Anatomy of Control: What Defines a True Sovereign Cloud

A truly sovereign cloud environment isn't just about putting a server in a specific country. It's a holistic commitment to control:

  1. Unbreakable Jurisdictional Control: Every byte, every process, every application lives and breathes under the legal authority of the designated nation.
  2. Operational Independence, Local Hands: The people managing, maintaining, and supporting your cloud environment must reside in the local jurisdiction, subject to its laws. No "follow-the-sun" support models that cross sensitive borders.
  3. Glass Box Transparency & Compliance: Clear, auditable proof of adherence to local laws and regulations. Robust processes for rejecting, challenging, or disclosing any external data access requests.
  4. Fort Knox Data Segregation & Encryption: Your data is not just stored; it's encrypted with state-of-the-art methods, and critically, the cryptographic keys are managed exclusively under local control.
  5. Scrutinized Supply Chain: Full visibility and control over the origin of hardware, software, and services. Knowing the nationality of every vendor and sub-processor.
  6. Resilience Within Borders: Disaster recovery and business continuity plans are designed to ensure data resilience and availability without compromising residency or sovereignty requirements.

Navigating the Sovereignty Labyrinth: Challenges We Help You Conquer

Embracing digital sovereignty is a powerful move, but it's not without its complexities. Ankercloud helps you navigate:

  • Cost vs. Control: While dedicated sovereign environments can seem pricier than global hyperscalers, we help you optimize costs by right-sizing solutions and focusing on critical workloads that genuinely require sovereignty.
  • Integration Puzzles: Seamlessly integrating a sovereign cloud with your existing hybrid or multi-cloud landscape demands expert architectural design to prevent data silos or operational friction.
  • Avoiding Vendor Lock-in: We prioritize solutions with open standards and strong data portability, ensuring you maintain flexibility even within a dedicated sovereign environment.
  • The Regulatory Tightrope: Data sovereignty laws are dynamic. Our compliance experts provide continuous monitoring and strategic guidance to ensure you always stay ahead of evolving regulations.
  • Talent Scarcity: Building and managing truly sovereign clouds requires niche expertise. Ankercloud brings that specialized talent to your doorstep, filling skill gaps and accelerating your journey. As Ankercloud's MD Judith Evers notes, "The real challenge lies in moving from strategy to execution," emphasizing the need for expertise in navigating implementation complexity and integrating with existing systems.

Ankercloud: Your Architects of Digital Sovereignty

At Ankercloud, we don't just provide cloud services; we architect your digital future with an unwavering commitment to your control and compliance. For businesses across India and around the world seeking to fortify their data defenses and secure their digital autonomy, we are your trusted partner.

As Ankercloud's Managing Director, Judith Evers, discussed in her interview with The Daily Pulse, the focus is not just on leveraging specific cloud providers but on building a strategic layer on top of hyperscalers that ensures true sovereignty through robust governance, security, compliance, identity management, and operational control. She emphasizes Ankercloud's role as a trusted advisor, bridging the gap between business needs and technical solutions. Read the full interview with Ankercloud's MD, Judith Evers, on The Daily Pulse to gain deeper insights into driving Sovereign Cloud adoption: Click Here

Here’s how Ankercloud empowers your journey to true digital sovereignty:

  • Strategic Blueprinting: We begin with a deep dive into your unique data landscape, regulatory obligations, and risk appetite. Our experts then craft a bespoke cloud strategy that perfectly balances sovereignty needs with your performance and budget goals.
  • Precision Data Localization: Leveraging our deep understanding of regulatory landscapes and partnerships with cloud providers offering local regions (like AWS regions in India), we engineer solutions that guarantee your data’s absolute residency, strictly compliant with local acts like the DPDP Act and the EU Data Act.
  • Ironclad Compliance & Security: We don't just promise compliance; we embed it.
    • Rigorous Security Assessments: Proactive evaluations covering everything from physical security to advanced threat modeling, penetration testing, and continuous vulnerability management.
    • Regulatory Acceleration: We simplify the daunting task of achieving certifications like ISO 27001, SOC 2, HIPAA, GDPR, and custom regional frameworks, providing a clear roadmap to auditable compliance.
    • Uncompromised Encryption: Implementing cutting-edge encryption for data at rest and in transit, with advanced key management solutions that keep the keys to your kingdom firmly in your hands.
  • Operational Autonomy & Transparency: We help you implement granular access controls, robust Identity and Access Management (IAM), and transparent operational procedures, ensuring your cloud environment is managed by authorized personnel within the required jurisdiction. Judith Evers highlights the importance of human-in-the-loop oversight for critical control and accountability within sovereign environments.
  • Seamless Hybrid & Multi-Cloud Harmony: For enterprises navigating complex IT landscapes, we design and implement integrated solutions that extend data sovereignty and compliance seamlessly across your hybrid and multi-cloud environments.
  • Resilience Engineered for Sovereignty: Our disaster recovery and business continuity plans are meticulously designed to ensure your data is always available and protected, without ever compromising its residency or sovereignty requirements.
  • Continuous Governance & Advisory: The digital landscape is always moving. Ankercloud offers ongoing monitoring, auditing, and expert advisory to ensure your sovereign cloud strategy remains robust, compliant, and ahead of the curve.

Ready to start your journey to Sovereign Cloud?

To help you gain clarity on your current cloud posture and readiness for enhanced data control, Ankercloud offers a comprehensive ESC Readiness Assessment. This assessment provides a detailed evaluation of your existing infrastructure and processes, identifying key areas for improvement and a clear roadmap towards achieving full sovereign cloud compliance and digital autonomy.

Learn more about and initiate your ESC Readiness Assessment on the AWS Marketplace: https://aws.amazon.com/marketplace/pp/prodview-yngepquunjfue

The cloud promised freedom, and with Sovereign Cloud, you can finally have it – true freedom that comes from absolute control. It's time to stop worrying about who might access your data and start focusing on what your data can do for you.

Don't just migrate to the cloud. Modernize with sovereignty. Partner with Ankercloud to build your secure, compliant, and truly autonomous digital future.

Contact us today to begin your journey to digital sovereignty.

Read Blog
AWS, WAF, DDoS Protection, IP Blocking, CloudFormation

Enhancing DDoS Protection with Extended IP Block Duration Using AWS WAF Rate-Based Rules

Mar 26, 2025

Problem

DDoS attackers reuse the same IPs to send many HTTP requests once the AWS WAF rate-limit rule removes the block. The default block lasts only for a fixed period, so attacks resume as soon as it expires. We need a solution that extends the block time for harmful IPs, keeping them blocked for as long as the attack persists.

Solution Workflow

  1. CloudFormation: Use the predefined CFT template to set a custom block time for harmful IPs, adjusting it according to the severity of the attack.
  2. EventBridge & Lambda: Let EventBridge call a Lambda function every minute. The function checks AWS WAF's rate rule for blocked IPs.
  3. Store in S3: Save blocked IPs in an S3 bucket with timestamps for records.
  4. Update WAF Custom IP Sets: Lambda revises the WAF custom IP sets, retaining IPs still within their block window and dropping IPs whose block period has expired (a CLI sketch of this update follows the list).
  5. Regular Updates: Run the process every minute to keep only harmful IPs blocked and avoid an outdated, heavy block list.
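
For illustration, the IP set maintenance in step 4 boils down to the following AWS CLI calls, which the Lambda performs with the equivalent SDK operations. This is a minimal sketch; the IP set name, ID, region, and addresses are placeholders to take from your own deployment:

# Fetch the current IP set and its lock token
aws wafv2 get-ip-set --name IPv4-IPset-example --scope REGIONAL --id <ip-set-id> --region us-east-1

# Re-apply only the addresses still inside their block window;
# expired addresses are simply left out of --addresses
aws wafv2 update-ip-set --name IPv4-IPset-example --scope REGIONAL --id <ip-set-id> \
  --addresses "203.0.113.10/32" "198.51.100.25/32" \
  --lock-token <lock-token-from-get-ip-set> --region us-east-1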

Deploying the Solution

  1. Download the CloudFormation Template:
    Download the customized AWS CloudFormation template (customized-block-period-template.yaml) from the solution’s GitHub repository.
  2. Create a Stack in CloudFormation Console:
    Open the AWS CloudFormation console, then create a new stack with the downloaded template. Check the CloudFormation User Guide for detailed instructions for stack creation.
  3. Specify Stack Details:
    On the Specify Stack Details page, type a unique stack name. Enter the required parameters, such as blocking duration and configuration settings listed in the prerequisites.
  4. Provisioning Resources:

The template provisions several AWS resources, including:

  • AWS WAF IP Sets, which store the blocked IPs.
  • An Amazon EventBridge Rule that triggers the Lambda function at regular intervals.
  • Amazon S3 Buckets to store the blocked IP addresses and their timestamps.
  • AWS IAM Roles with permissions to allow Lambda functions to query AWS WAF and access other required resources.
  • The AWS Lambda function itself, which performs the logic for tracking and updating the blocked IP addresses.
  5. Deploy and Apply the WAF Rule:
    Deployment takes under 15 minutes. When the stack shows CREATE_COMPLETE, build a custom AWS WAF rule that uses the custom IP sets to block the malicious IPs.

6. Reviewing IPs that are Blocked:

Go to the IP Sets section on the AWS WAF console. Choose the set named with the prefix "IPv4-IPset." You can check the list of IPs blocked by the rate limit rule in the set produced by the stack.

7. Whitelisting or Removing Specific IPs from the Blocked List

To remove an IP from the blocked list, merely deleting it from the IP set in the AWS WAF console does not work. This is because the IP set updates every minute with a JSON file stored in an S3 bucket (controlled by the CloudFormation template).

To remove an IP properly, delete it from the JSON file; then upload the revised file to the S3 bucket. You may use a Lambda script to automate this process. The script lets you choose the IP to remove; it completes each required step.
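
As a rough illustration of that manual edit, a short bash sketch follows; the bucket name, object key, and JSON layout (assumed here to be an array of objects with an "ip" field) are placeholders, so adapt them to the file your stack actually maintains:

#!/bin/bash
# Remove one IP from the blocked-IP JSON that feeds the WAF IP set (placeholder names throughout)
IP_TO_REMOVE="203.0.113.10/32"
BUCKET="<blocked-ip-bucket-from-the-stack>"
KEY="blocked_ips.json"

aws s3 cp "s3://${BUCKET}/${KEY}" ./blocked_ips.json

# Drop the entry for the IP being whitelisted (assumes an array of objects with an "ip" field)
jq --arg ip "$IP_TO_REMOVE" 'map(select(.ip != $ip))' blocked_ips.json > blocked_ips_updated.json

aws s3 cp ./blocked_ips_updated.json "s3://${BUCKET}/${KEY}"
# The next scheduled run of the Lambda rebuilds the IP set without this address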

You can find the environment variable details and the Python code for the script here:

 https://rentry.co/ew84t8tu

Blocking Requests Originating from Referrer URLs

Problem Statement: 

Third-party websites might copy images or content from your site and use them on their platforms. These requests come via referrer URLs.

Solution:

To block such requests, follow these steps:

  1. Identify the Referrer URL:
  • Open the site suspected of scraping your content in a browser.
  • Right-click on the page and select Inspect to open the developer tools.
  • Navigate to the Network tab and reload the page.
  • Look for requests made to your site. For example, if the site https://www.webpagetest.org/ is scraping your images, you might find requests to your domain in the list.
  • Identify the image being used (e.g., twitter.svg), and click on the request.
  2. Retrieve the Referrer URL:
  • In the request details on the right panel, locate the Headers section.
  • Scroll to find the Referer value. This will show the URL of the site making the request (e.g., https://www.webpagetest.org/).
  3. Block the Referrer in AWS WAF:
  • Open the AWS WAF console and create a new Custom Rule.
  • Set the Inspect field to Single Header.
  • Use Referer as the Header Field Name.
  • Set Match Type to Exactly matches string.
  • Enter the referrer URL (e.g., https://www.webpagetest.org/) in the String to Match field.
  • Set the Action to Block. You can optionally configure a custom response code for blocked requests.

Outcome

By enforcing this rule, you can block requests from specific referrer URLs, stopping site mirroring and web scraping by third-party sites.

Read Blog
AWS, Amplify, DevOps, Automation, CI CD, Shell Scripting

Automating AWS Amplify: Streamlining CI/CD with Shell & Expect Scripts

Feb 27, 2025

Introduction

Automating cloud infrastructure and deployments is a crucial aspect of DevOps. AWS Amplify provides a powerful framework for developing and deploying full-stack applications. However, initializing and managing an Amplify app manually can be time-consuming, especially when integrating it into a CI/CD pipeline like Jenkins.

This blog explores how we automated the Amplify app creation process in headless mode using shell scripting and Expect scripts, eliminating interactive prompts to streamline our pipeline.

Setting Up AWS and Amplify CLI

1. Configure AWS Credentials

Before initializing an Amplify app, configure AWS CLI with your Access Key and Secret Key:

aws configure

2. Install and Configure Amplify CLI

To install Amplify CLI and configure it:

npm install -g @aws-amplify/cli

amplify configure

This will prompt you to create an IAM user and set up authentication.

Automating Amplify App Creation

1. Initialize the Amplify App Using a Script

We created a shell script amplify-init.sh to automate the initialization process.

amplify-init.sh

#!/bin/bash
set -e
IFS='|'

# Project settings
AMPLIFY_NAME=amplifyapp
API_FOLDER_NAME=amplifyapp
BACKEND_ENV_NAME=staging
AWS_PROFILE=default
REGION=us-east-1

# Provider configuration passed to "amplify init" in headless mode
AWSCLOUDFORMATIONCONFIG="{\
\"configLevel\":\"project\",\
\"useProfile\":true,\
\"profileName\":\"${AWS_PROFILE}\",\
\"region\":\"${REGION}\"\
}"

AMPLIFY="{\
\"projectName\":\"${AMPLIFY_NAME}\",\
\"envName\":\"${BACKEND_ENV_NAME}\",\
\"defaultEditor\":\"Visual Studio Code\"\
}"

amplify init --amplify $AMPLIFY --providers $AWSCLOUDFORMATIONCONFIG --yes

Run the script:

./amplify-init.sh

2. Automating API and Storage Integration

Since Amplify prompts users for inputs, we used Expect scripts to automate API and storage creation.

add-api-response.exp

#!/usr/bin/expect

spawn ./add-api.sh
expect "? Please select from one of the below mentioned services:\r"
send -- "GraphQL\r"
expect eof

add-storage-response.exp

#!/usr/bin/expect

spawn ./add-storage.sh
expect "? Select from one of the below mentioned services:\r"
send -- "Content\r"
expect eof

These scripts eliminate manual input, making Amplify API and storage additions fully automated.
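
For completeness, the add-api.sh and add-storage.sh wrappers that these Expect scripts spawn are not shown above; at their simplest they just launch the corresponding interactive Amplify commands, roughly like this (a sketch based on the file names, not the original scripts):

add-api.sh

#!/bin/bash
# Starts the interactive "add api" flow; the Expect script above answers the service prompt (GraphQL)
amplify add api

add-storage.sh

#!/bin/bash
# Starts the interactive "add storage" flow; the Expect script above answers the service prompt (Content)
amplify add storage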

Automating Schema Updates

One of the biggest challenges was automating schema.graphql updates without manual intervention. The usual approach required engineers to manually upload the file, leading to potential errors.

To solve this, we automated the process with an Amplify Pull script.

amplify-pull.sh

#!/bin/bash
set -e
IFS='|'

# Existing Amplify app details
AMPLIFY_NAME=amp3
API_FOLDER_NAME=amp3
BACKEND_ENV_NAME=prod
AWS_PROFILE=default
REGION=us-east-1
APP_ID=dzvchzih477u2

AWSCLOUDFORMATIONCONFIG="{\
\"configLevel\":\"project\",\
\"useProfile\":true,\
\"profileName\":\"${AWS_PROFILE}\",\
\"region\":\"${REGION}\"\
}"

AMPLIFY="{\
\"projectName\":\"${AMPLIFY_NAME}\",\
\"appId\":\"${APP_ID}\",\
\"envName\":\"${BACKEND_ENV_NAME}\",\
\"defaultEditor\":\"code\"\
}"

amplify pull --amplify $AMPLIFY --providers $AWSCLOUDFORMATIONCONFIG --yes

This script ensures that the latest schema changes are pulled and updated in the pipeline automatically.

Integrating with Jenkins

Since this automation was integrated with a Jenkins pipeline, we enabled "This project is parameterized" to allow file uploads directly into the workspace.

  1. Upload the schema.graphql file via Jenkins UI.
  2. The script pulls the latest changes and updates Amplify automatically.

This method eliminates manual intervention, ensuring consistency in schema updates across multiple environments.
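
As a rough illustration, the shell steps the Jenkins job runs after the schema file is uploaded can look like the following; the workspace variable, API folder path, and the final amplify push are assumptions based on a standard Amplify project layout rather than the exact pipeline used here:

#!/bin/bash
set -e

API_NAME=amp3  # should match API_FOLDER_NAME used in amplify-pull.sh (placeholder)

# Copy the schema.graphql uploaded through the parameterized Jenkins build into the Amplify project
cp "$WORKSPACE/schema.graphql" "amplify/backend/api/${API_NAME}/schema.graphql"

# Pull the latest backend definition, then push the updated schema
./amplify-pull.sh
amplify push --yes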

Conclusion

By automating AWS Amplify workflows with shell scripting and Expect scripts, we achieved:

  • Fully automated Amplify app creation
  • Eliminated manual schema updates
  • Seamless integration with Jenkins pipelines
  • Faster deployments with reduced errors

This approach significantly minimized manual effort, ensuring that updates were streamlined and efficient. If you're using Amplify for your projects, automation like this can save countless hours and improve developer productivity.

Have questions or feedback? Drop a comment below! 

Read Blog
GKE Ingress, Kubernetes Networking, Google Cloud, Load Balancing, Cloud Security

Configuring GKE Ingress: Traffic Routing, Security, and Load Balancing

Feb 25, 2025

GKE Ingress acts as a bridge between external users and your Kubernetes services. It allows you to define rules for routing traffic based on hostnames and URL paths, enabling you to direct requests to different backend services seamlessly.

A single GKE Ingress controller routes traffic to multiple services by identifying the target backend based on hostname and URL paths. It supports multiple certificates for different domains.

FrontendConfig enables automatic redirection from HTTP to HTTPS, ensuring encrypted communication between the web browser and the Ingress.
BackendConfig allows you to configure advanced settings for backend services. It provides additional options beyond standard service configurations, enabling better control over traffic handling, security, and load balancing behavior.

Set up GKE Ingress with an Application Load Balancer

To specify an Ingress class, you must use the kubernetes.io/ingress.class annotation. The "gce" class deploys an external Application Load Balancer:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"

Configure FrontendConfiguration:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true

The FrontendConfig resource in GKE enables automatic redirection from HTTP to HTTPS, ensuring secure communication between clients and services.

Associating FrontendConfig with your Ingress

You can associate a FrontendConfig with an Ingress by adding the networking.gke.io/v1beta1.FrontendConfig annotation to the Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"

Configure Backend Configuration:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 40

Use a BackendConfig to set the backend service timeout in seconds; the manifest above specifies a timeout of 40 seconds.

Associate the backend configuration with the service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {"app": "my-backendconfig"}}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  ports:
  - name: app
    port: 80
    protocol: TCP
    targetPort: 50000

We can specify a custom BackendConfig for one or more ports using a key that matches the port’s name or number. The Ingress controller uses the specific BackendConfig when it creates a load balancer backend service for a referenced Service port.

Creating an Ingress with a Google-Managed SSL Certificate

To set up a Google-managed SSL certificate and link it to an Ingress, follow these steps:

  • Create a ManagedCertificate resource in the same namespace as the Ingress.
  • Associate the ManagedCertificate with the Ingress by adding the annotation networking.gke.io/managed-certificates to the Ingress resource.

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
  - hello.example.com
  - world.example.com

Associate the SSL certificate with the Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"

The networking.gke.io/managed-certificates annotation associates the Ingress with the managed certificate.

Assign Static IP to Ingress

When hosting a web server on a domain, the application’s external IP address should be static to ensure it remains unchanged.

By default, GKE assigns ephemeral external IP addresses for HTTP applications exposed via an Ingress. However, these addresses can change over time. If you intend to run your application long-term, it is essential to use a static external IP address for stability.

Create a global static IP from the GCP console (or the CLI) with a specific name, e.g. web-static-ip, and associate it with the Ingress by adding the kubernetes.io/ingress.global-static-ip-name annotation.
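
If you prefer the CLI to the console, reserving the global static IP looks like this:

gcloud compute addresses create web-static-ip --global
gcloud compute addresses describe web-static-ip --global --format="value(address)"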

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"

Google Cloud Armor Ingress security policy

Google Cloud Armor security policies safeguard your load-balanced applications against web-based attacks. Once configured, a security policy can be referenced in a BackendConfig to apply protection to specific backends.

To enable a security policy, add its name to the BackendConfig. The following example configures a security policy named security-policy:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  namespace: cloud-armor-how-to
  name: my-backendconfig
spec:
  securityPolicy:
    name: "security-policy"
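
The referenced security policy must already exist in the project. If you create it from the CLI, the commands look roughly like this (the rule expression and priority are placeholder examples, not part of the original setup):

gcloud compute security-policies create security-policy --description "Cloud Armor policy for GKE Ingress backends"

gcloud compute security-policies rules create 1000 \
  --security-policy security-policy \
  --expression "origin.region_code == 'CN'" \
  --action deny-403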

User-defined request/response headers

A BackendConfig can be used to define custom request headers that the load balancer appends to requests before forwarding them to the backend services.

These custom headers are only added to client requests and not to health check probes. If a backend requires a specific header for authorization and it is absent in the health check request, the health check may fail.

To configure user-defined request headers, specify them under the customRequestHeaders/customResponseHeaders property in the BackendConfig resource. Each header should be defined as a header-name:header-value string.

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  customRequestHeaders:
    headers:
    - "X-Client-Region:{client_region}"
    - "X-Client-City:{client_city}"
    - "X-Client-CityLatLong:{client_city_lat_long}"

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  customResponseHeaders:
    headers:
    - "Strict-Transport-Security: max-age=28800; includeSubDomains"

Read Blog
Kubernetes, ArgoCD, GitOps, DevOps, ContinuousDelivery

Automating Kubernetes Deployments with Argo CD

Feb 25, 2025

Argo CD is a declarative, GitOps-based continuous delivery tool designed for Kubernetes. It allows you to manage and automate application deployment using Git as the single source of truth. Argo CD continuously monitors your Git repository and ensures the Kubernetes environment matches the desired state described in your manifest.

Step 1: Create and Connect to a Kubernetes Cluster

Steps to Create and Connect

Create a Kubernetes Cluster
If you’re using Google Kubernetes Engine (GKE), you can create a cluster using the following command:

gcloud container clusters create <cluster name> --zone <zone of cluster>

Replace <cluster name> with your desired cluster name and <zone of cluster> with your preferred zone.

Connect to the Cluster
Once the cluster is created, configure kubectl (the Kubernetes CLI) to interact with it:

gcloud container clusters get-credentials argo-test --zone us-central1-c

Verify the connection by listing the nodes in the cluster:
kubectl get nodes

Step 2: Install Argo CD

Installing Argo CD means deploying its server, UI, and supporting components as Kubernetes resources in a namespace.

Steps to Install

Create a Namespace for Argo CD
A namespace in Kubernetes is a logical partition to organize resources:

kubectl create namespace argocd

Install Argo CD Components
Use the official installation manifest to deploy all Argo CD components:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

This deploys key components like the API server, repository server, application controller, and web UI.

Step 3: Expose Argo CD Publicly

By default, the argocd-server service is configured as a ClusterIP, making it accessible only within the cluster. You need to expose it for external access.

Options to Expose Argo CD

Option 1: LoadBalancer
Change the service type to LoadBalancer to get an external IP address:

kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'

Option 2: Ingress
For advanced routing and SSL support, create an Ingress resource. This approach is recommended if you want to add HTTPS to your setup.

Option 3: Port Forwarding
If you only need temporary access:

kubectl port-forward svc/argocd-server -n argocd 8080:80

Step 4: Access the Argo CD Dashboard

Retrieve the External IP
After exposing the service as a LoadBalancer, get the external IP address:

kubectl get svc argocd-server -n argocd

Login Credentials

Username: admin

Password: Retrieve it from the secret:

kubectl get secret argocd-initial-admin-secret -n argocd -o yaml

Decode the base64 password:

echo "<base64_encoded_password>" | base64 --decode

Access the dashboard by navigating to https://<external-ip> in your browser.

Step 5: Install the Argo CD CLI

The Argo CD CLI enables you to interact with the Argo CD server programmatically for managing clusters, applications, and configurations.

Steps to Install

Download the CLI

curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64

Install the CLI

sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd

rm argocd-linux-amd64

Verify Installation

argocd version

Step 6: Add a Kubernetes Cluster to Argo CD

Argo CD requires access to the Kubernetes cluster where it will deploy applications.

Steps to Add

Log in to Argo CD via CLI

argocd login <argocd-server-url>:<port> --username admin --password <password>

Get the Kubernetes Context

kubectl config get-contexts -o name

Add the Cluster

argocd cluster add <context-name>

This command creates a service account (argocd-manager) with cluster-wide permissions to deploy applications.

To verify the added cluster via the CLI, use the command below; alternatively, navigate to Settings -> Clusters in the UI dashboard.

argocd cluster list

Step 7: Add a Git Repository

The Git repository serves as the source of truth for application manifests.

Steps to Add

  1. Navigate to Repositories
    Log in to the Argo CD dashboard, go to Settings -> Repositories, and click Connect Repo.
  2. Enter Repository Details
  • Choose a connection method (e.g., HTTPS or SSH).
  • Provide the repository URL and credentials.
  • Assign a project to organize repositories.
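
The same repository connection can also be made from the CLI; a minimal example with placeholder URL and credentials:

argocd repo add https://github.com/<your-org>/<your-repo>.git --username <git-username> --password <git-token>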

Step 8: Create an Application in Argo CD

An Argo CD application represents the Kubernetes resources defined in a Git repository.

Steps to Create

Click New App
Enter the application details:

  • Application Name: e.g., hello-world
  • Project: Assign the application to a project.
  • Source: Select the Git repository and specify the manifest file path.
  • Destination: Select the cluster and namespace for deployment.
  1. Enable Auto-Sync policy
    Enable this option for automated synchronization between the Git repository and the Kubernetes cluster.
  2. Create the Application
    Click Create. Argo CD will deploy the application and monitor its state. (A CLI equivalent is sketched below.)
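
For scripted setups, the same application can be created with the Argo CD CLI; the repository URL, path, and namespace below are placeholders:

argocd app create hello-world \
  --repo https://github.com/<your-org>/<your-repo>.git \
  --path manifests \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default \
  --sync-policy automated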

Read Blog
S3 to GCS, Cloud Migration, Bash Automation, GCS Transfer

Automating S3 to GCS Migration Using Bash Scripts

Feb 20, 2025

Introduction

Cloud storage plays a crucial role in modern infrastructure, providing scalable and reliable storage solutions. Many businesses migrate from AWS S3 to Google Cloud Storage (GCS) to leverage cost benefits, integration with Google Cloud services, or optimize their cloud strategies. However, when dealing with hundreds of S3 buckets, manual migration is inefficient and time-consuming.

To streamline the process, I automated the migration using Bash scripts and Google Cloud’s Storage Transfer Service. In this blog, I’ll walk you through the steps of automating S3 to GCS migration efficiently.

Why Automate S3 to GCS Migration?

Handling 200+ S3 buckets manually would involve:

  • Repetitive tasks – Creating GCS buckets, setting permissions, and transferring data for each bucket.
  • Human errors – Misconfiguration, incorrect bucket names, or missing files.
  • Time-consuming process – Manual intervention would take days to complete.

By automating this process, we can:

  • Save time – Script execution takes a few minutes instead of hours/days.
  • Eliminate errors – Ensures all S3 buckets are correctly transferred.
  • Enable monitoring & scheduling – Automate recurring data transfers with Google's Storage Transfer Service.

Prerequisites

Before running the scripts, ensure you have:

  • A Google Cloud Project with Billing enabled.
  • AWS IAM User with s3:ListBucket and s3:GetObject permissions.
  • Installed Google Cloud SDK (gcloud CLI) on your local machine.

Step 1: Creating Google Cloud Storage Buckets

Each S3 bucket requires a corresponding GCS bucket. The script below reads a list of bucket names from a file and creates them in GCP.

create_gcs_bucket.sh

#!/bin/bash

# Variables
PROJECT_ID="ccd-poc-project"  # Replace with your GCP project ID
BUCKET_LIST_FILE="bucket_names.txt"  # File containing bucket names
OUTPUT_FILE="created_buckets.txt"
REGION="us-central1"  # Change if needed

# Check if the bucket list file exists
if [ ! -f "$BUCKET_LIST_FILE" ]; then
    echo "Error: Bucket names file '$BUCKET_LIST_FILE' not found!"
    exit 1
fi

# Read bucket names and create GCS buckets
while IFS= read -r BUCKET_NAME || [[ -n "$BUCKET_NAME" ]]; do
    if [[ -z "$BUCKET_NAME" ]]; then
        continue  # Skip empty lines
    fi

    # Clean bucket name
    BUCKET_NAME=$(echo "$BUCKET_NAME" | tr -d '\r' | tr -d '[:space:]')

    echo "Creating bucket: $BUCKET_NAME"
    gcloud storage buckets create "gs://$BUCKET_NAME" --location="$REGION" --project="$PROJECT_ID"

    if [ $? -eq 0 ]; then
        echo "gs://$BUCKET_NAME" >> "$OUTPUT_FILE"
        echo "Bucket $BUCKET_NAME created successfully."
    else
        echo "Error: Failed to create bucket $BUCKET_NAME"
    fi
done < "$BUCKET_LIST_FILE"

Explanation:

  • Reads bucket names from bucket_names.txt.
  • Cleans up any unnecessary whitespace.
  • Creates GCS buckets with the specified region.
  • Stores created bucket names in created_buckets.txt.

Step 2: Automating Data Transfer from S3 to GCS

After creating the required GCS buckets, the next step is to automate data transfer using the gcloud transfer jobs command.

s3_to_gcs_transfer.sh

#!/bin/bash

# Variables
AWS_ACCESS_KEY="YOUR_AWS_ACCESS_KEY"
AWS_SECRET_KEY="YOUR_AWS_SECRET_KEY"
PROJECT_ID="ccd-poc-project"
CREDS_FILE="aws-creds.json"

# Create AWS credentials JSON file
cat <<EOF > "$CREDS_FILE"
{
  "awsAccessKeyId": "$AWS_ACCESS_KEY",
  "awsSecretAccessKey": "$AWS_SECRET_KEY"
}
EOF

# Read bucket names and create transfer jobs
while IFS= read -r BUCKET_NAME; do
  echo "Creating transfer job for S3 bucket: $BUCKET_NAME"

  JOB_NAME=$(gcloud transfer jobs create s3://"$BUCKET_NAME" gs://"$BUCKET_NAME" \
    --source-auth-method=AWS_SIGNATURE_V4 \
    --source-creds-file="$CREDS_FILE" \
    --schedule-repeats-every=1d \
    --project="$PROJECT_ID" \
    --format="value(name)")

  if [[ -n "$JOB_NAME" ]]; then
    echo "Transfer job created successfully: $JOB_NAME"
  else
    echo "Failed to create transfer job for $BUCKET_NAME"
  fi
done < bucket_names.txt

# Remove credentials file for security
rm "$CREDS_FILE"

echo "All transfer jobs created successfully!"

Explanation:

  • Generates a secure AWS credentials file.
  • Reads bucket names and initiates a transfer job.
  • Schedules each transfer job to repeat daily for incremental updates.
  • Deletes the credentials file after execution for security.

Step 3: Running the Migration

To execute the scripts, follow these steps:

  1. Save the S3 bucket names in a file named bucket_names.txt.
  2. Run the GCS bucket creation script:

chmod +x create_gcs_bucket.sh

./create_gcs_bucket.sh

  3. Run the S3-to-GCS transfer script:

chmod +x s3_to_gcs_transfer.sh

./s3_to_gcs_transfer.sh
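
After the jobs have run, a quick sanity check is to compare object counts between each source and destination bucket. A small sketch (a rough count comparison, not a byte-level verification):

#!/bin/bash
# Compare object counts for each migrated bucket
while IFS= read -r BUCKET_NAME; do
  S3_COUNT=$(aws s3 ls "s3://$BUCKET_NAME" --recursive | wc -l)
  GCS_COUNT=$(gcloud storage ls "gs://$BUCKET_NAME/**" 2>/dev/null | wc -l)
  echo "$BUCKET_NAME: S3=$S3_COUNT GCS=$GCS_COUNT"
done < bucket_names.txt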

Conclusion

By automating S3 to GCS migration, we:

  • Eliminated manual effort for creating 200+ buckets.
  • Ensured accurate and efficient data transfers.
  • Scheduled daily syncs for incremental updates.

This solution scales easily and can be modified to include advanced features like logging, monitoring, and notifications.

If you found this guide helpful, feel free to share your thoughts and experiences in the comments. Happy migrating!

Read Blog
GCP, Cloud Security, Automation, Log-based Metrics

Google Cloud - Security Alerts Automation

Feb 5, 2025

In this blog, we will guide you through automating alerting for critical activities and securing your projects against accidental deletion using custom scripts. Setting up log-based metrics and alerts manually can be a time-consuming task, typically taking around an hour, and is prone to manual errors. To optimize this process and enhance efficiency, we have automated it using a combination of Shell and YAML scripts.

By implementing this solution, you can configure notification channels to receive alerts whenever changes are detected in your cloud environment, ensuring prompt action on potential issues. Our approach involves leveraging YAML files along with the Deployment Manager to create and manage Log Metrics and Alerting Policies. Once these components are successfully deployed, the deployment itself is deleted since it does not interfere with any ongoing services or resources in your cloud environment.

The following steps will provide you with a detailed, step-by-step guide to implementing this automation effectively, allowing you to maintain better security and operational efficiency.

Step-by-step guide to implementing this automation effectively

1. Clone the Repository

Prerequisites:

Connect to your Google Cloud Shell and ensure you have the necessary permissions to implement the script.

git clone https://github.com/nvrvenkat/Securitylogalerts.git

This command clones the Securitylogalerts repository from GitHub to your local system.

2. Navigate to the Repository Directory

cd Securitylogalerts/

This command changes the directory to the Securitylogalerts folder, where all project files are located.

3. Description of Metrics and Alerts

  • Assign-resource-to-billing-account-metric: Generates an alert whenever a resource is assigned to a billing account.
  • Create-service-account-key-metric: Sends a notification whenever a service account key is created.
  • Deletion-protection-metric: Issues an alert whenever deletion protection for a resource is disabled.
  • Delete-service-account-key-metric: Logs a warning whenever a service account key is deleted.
  • Disk-deletion-metric: Detects and notifies whenever a disk is removed.
  • Firewall-update-metric: Alerts the team whenever a firewall configuration is modified.
  • Iam-action-metric: Flags an activity whenever an IAM-related action is executed.
  • Instance-delete-metric: Reports an event whenever a virtual machine instance is terminated.
  • Instance-insert-metric: Notifies the team whenever a new virtual machine instance is provisioned.
  • Label-modification-metric: Identifies and reports whenever an instance label is altered or a new one is added.
  • Service-account-creation-metric: Triggers a notification whenever a new service account is established.
  • Set-iam-metric: Raises an alert whenever a new IAM user is assigned a role or created.
4. Replace the Email Address in logmetric_notification.sh

Update the email address in the shell script "logmetric_notification.sh" with the specific email address where alerts need to be sent. This email address will be used to configure the notification channel.

5. Execute the Notification Channel Script

./logmetric_notification.sh

Runs the script to create a notification channel with the updated email address and to generate the log-based metrics specified in the "Metrics and Alerts" section.

Note: If a notification channel already exists, execute the logmetric.sh file to generate only the log-based metrics.

6. Navigate to the Log Alert Directory and Execute Scripts

a) cd /Securitylogalerts/Logalert

./scripts.sh

The scripts.sh script triggers:

  • replace_notification_channel.sh: Replaces the notification channel with ACT-MS-alerts in the YAML files used for creating log metric alerts. The output is saved to output.txt.
  • logalert.sh: Creates alerting policies based on the updated notification channel in the YAML files.

Alerting Policies Update:

  • Once scripts.sh is executed, the notification channel in the YAML files will be replaced, and the alerting policies will be created.
  • The alerting policies should be fully deployed within approximately 10 minutes.

The resources will be created using the Deployment Manager in the Google Cloud Console. Once the resources are created, the deployment will be deleted while retaining the resources.
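
To confirm that the metrics and alerting policies were created, you can list them from Cloud Shell (the second command uses an alpha gcloud component, so availability may vary):

gcloud logging metrics list --format="value(name)"
gcloud alpha monitoring policies list --format="value(displayName)"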

b) Add Multiple Notification Channels (optional):

cd /Securitylogalerts/Logalertmultiple
./scripts.sh

This command adds multiple notification channels to the alerting policies. Ensure you update the respective notification channel names in the "replace_notification_channel.sh" file before executing the script. It updates the YAML files for log alert metrics with the additional notification channels.

7. Test Alerting Policies

The alerttest.sh script tests the alerting policies by:

  • Creating and deleting resources (e.g., instances, disks, service accounts, service account keys, and firewall rules).
  • Sending alerts to the configured notification channel to verify that the policies are functioning correctly.

8. Resource Creation and Deletion Activity

After executing "alerttest.sh", resources are automatically created and deleted as per the alerting policy configurations. Alerts are triggered and sent to the configured notification channel. For example: alerts for service account key creation and deletion. Similar alerts for other resources will be triggered based on resource creation.

9. Enable Project Liens

cd /Securitylogalerts/
./Liens.sh

Executes the "Liens.sh" script, which fetches the project ID automatically and enables liens on the project to prevent accidental deletion.

By following these steps, you'll be able to automate your cloud environment's monitoring and security processes, ensuring that you stay ahead of any potential data and revenue losses and minimize the risk of accidental deletions.

Read Blog
GCP, Security, DataProtection, IPFiltering, CloudStorage

How to Secure Your Google Cloud Buckets with IP Filtering

Feb 4, 2025

In today's cloud-driven world, sensitive data should be kept as secure as possible. IP filtering allows you to control who accesses your storage by enabling it on your Google Cloud buckets so that only trusted networks are allowed access. This guide will walk you through the step-by-step process for setting up IP filtering.

What is IP Filtering?

IP filtering limits access to a bucket by allowing access only from particular IP ranges or networks. It grants access to your data while blocking traffic requests from unknown or malicious sources.

Key Use Cases for Google Cloud Bucket IP Filtering

1. Compliance Requirements

  • Description: Make sure only authorized users can access the bucket to meet legal or industry rules for protecting data.
  • Key: Regulatory Adherence (Following Data Protection Rules)

2. Protect Public Buckets

  • Description: Enhanced security prevents unauthorized access to publicly accessible buckets by limiting traffic to trusted IPs. This protects sensitive public resources from malicious activity.
  • Key: Access Control for Public Data

3. VPC Integration

  • Description: Private networking limits bucket access to specific Virtual Private Cloud (VPC) networks. This ensures secure interactions within a well-defined network boundary, enhancing data protection.
  • Key: Network-Specific Access

4. Controlled Testing

  • Description: Access restriction during testing phases ensures that bucket access is limited to only select IPs or systems. This maintains control over the testing environment and reduces unintended data exposure.
  • Key: Testing Environment Control

5. Enhanced Monitoring

  • Description: Restricting access to known and trusted IPs reduces the number of unknown or suspicious interactions. This makes it easier to track who accessed the bucket and when, simplifying audits and improving transparency.
  • Key: Simplified Audit Trails

Supported locations

Bucket IP filtering is available in the following locations:

  • asia-south1
  • asia-south2
  • asia-southeast1
  • asia-southeast2
  • asia-east1
  • asia-east2
  • europe-west1
  • europe-west2
  • us-central1
  • us-east1
  • us-east4
  • us-west1

Limitations

Bucket IP filtering has the following limitations:

  • Maximum number of IP CIDR blocks: You can specify a maximum of 200 IP CIDR blocks across public and VPC networks in the IP filter rule for a bucket.
  • Maximum number of VPC networks: You can specify a maximum of 25 VPC networks in the IP filter rules for a bucket.
  • Dual-region support: IP filtering is not supported for dual-regional buckets.
  • Blocked Google Cloud services: Enabling IP filtering on Cloud Storage buckets restricts access for some Google Cloud services, regardless of whether they use a service agent to interact with Cloud Storage.

    How to Enable Bucket IP Filtering

    Step 1: Install the Google Cloud CLI (Command Line Interface) on the server

    SSH into your instance and install the Google Cloud CLI using the following command:

    sudo snap install google-cloud-cli --classic

    Authenticate with Google Cloud:


    gcloud auth login

    You will be prompted to grant access to your Google account (“Google Cloud SDK wants to access your Google Account”); approve it to continue.

    Set the desired project ID:

    gcloud config set project [PROJECT_ID]

    Step 2: Verify Your Bucket

    1. List all the buckets in your project: gcloud storage buckets list
    2. Locate the bucket name you want to configure.

    Step 3: Prepare the JSON Configuration

    Create a JSON file to define your IP filtering rules:

    • Open a text editor to create the file:
    nano ip-filter-config.json or vim ip-filter-config.json
    • Add the following configuration and save the file:

    {
      "mode": "Enabled",
      "publicNetworkSource": {
        "allowedIpCidrRanges": ["RANGE_CIDR"]
      },
      "vpcNetworkSources": [
        {
          "network": "projects/PROJECT_ID/global/networks/NETWORK_NAME",
          "allowedIpCidrRanges": ["RANGE_CIDR"]
        }
      ]
    }

    Replace the IP ranges and VPC network details with your specific requirements.
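
    For example, a filled-in configuration might look like the following; the IP ranges, project ID, and network name are hypothetical placeholders (203.0.113.0/24 is a documentation-only range):

    {
      "mode": "Enabled",
      "publicNetworkSource": {
        "allowedIpCidrRanges": ["203.0.113.0/24"]
      },
      "vpcNetworkSources": [
        {
          "network": "projects/my-project/global/networks/default",
          "allowedIpCidrRanges": ["10.0.0.0/8"]
        }
      ]
    }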

    Step 4: Apply the Configuration

    Run the following command to update your bucket with the IP filtering configuration:

    gcloud alpha storage buckets update gs://BUCKET_NAME --ip-filter-file=ip-filter-config.json

    Step 5: Verify the Configuration

    After applying the rules, describe the bucket to confirm the changes:

    gcloud storage buckets describe gs://BUCKET_NAME

    The IP filter configuration should appear in the bucket details shown in the command output.

    Step 6: Test Access

    • Ensure that requests from allowed IPs can access the bucket.
    • Verify that requests from non-allowed IPs are denied (see the sketch below).
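
    One simple way to test, assuming the gcloud CLI on each test machine is authenticated and the bucket name is a placeholder:

    # From a machine inside an allowed IP range or VPC network:
    # this listing should succeed.
    gcloud storage ls gs://BUCKET_NAME

    # From a machine outside the allowed ranges: the same command should fail
    # with a permission/forbidden error, confirming the filter is enforced.
    gcloud storage ls gs://BUCKET_NAME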

    How to Disable or Remove IP Filtering

    Disabling IP Filtering

    • To disable IP filtering, change "mode" from "Enabled" to "Disabled" in the JSON file and update the bucket to apply the modified configuration:

     "mode": "Disabled",

     "publicNetworkSource":

       {

       "allowedIpCidrRanges": ["RANGE_CIDR"]

       },

     "vpcNetworkSources":

         [

                 {

           "network": "projects/PROJECT_ID/global/networks/NETWORK_NAME",

                         "allowedIpCidrRanges": ["RANGE_CIDR"]

                 }

         ]

    }

    Update the bucket with the modified configuration:

    gcloud alpha storage buckets update gs://BUCKET_NAME --ip-filter-file=ip-filter-config.json

    Removing IP Filtering Configuration

    • To remove any existing IP filtering configuration from the bucket:
    gcloud alpha storage buckets update gs://BUCKET_NAME --clear-ip-filter

    By enabling IP filtering, you can protect your Google Cloud buckets from unauthorized access and ensure compliance with organizational security policies. Whether you are securing sensitive data or limiting access during testing, these steps provide a robust framework for managing bucket security effectively.

    Bypassing Bucket IP Filtering Rules

    Bypassing bucket IP filtering rules exempts users or service accounts from IP filtering restrictions for creating, deleting, or configuring buckets, while still enforcing rules for others. For more information about bucket IP filtering, see Bucket IP filtering(https://cloud.google.com/storage/docs/ip-filtering-overview).

    It's crucial to have a way to regain access to your bucket if you inadvertently block your own IP address. This can happen due to the following reasons:

    • Bucket lockout: When you accidentally add a rule that blocks your own IP address or the IP range of your entire network.
    • Unexpected IP change: In some cases, your IP address might change unexpectedly due to network changes, and you might find yourself locked out.

    To enable specific users or service accounts to bypass IP filtering restrictions on a bucket, grant them the storage.buckets.exemptFromIpFilter permission using a custom role. This permission exempts the user or service account from IP filtering rules for bucket-level operations such as creating, deleting, or configuring buckets. To do so, complete the following steps:

    1. Identify the user or service account that needs to bypass the IP filtering restrictions on specific buckets.
    2. Create a custom role (https://cloud.google.com/iam/docs/creating-custom-roles).
    3. Add the storage.buckets.exemptFromIpFilter permission to the role.
    4. Grant the custom role to the identified user or service account at the project level.

    For information about granting roles, see Grant a single role (https://cloud.google.com/iam/docs/manage-access-service-accounts#grant-single-role).
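
    A sketch of these steps with the gcloud CLI, using placeholder names for the role ID, project, and user:

    # Create a custom role that contains only the exemption permission
    # (role ID and title here are placeholders).
    gcloud iam roles create ipFilterExempt \
        --project=PROJECT_ID \
        --title="Bucket IP Filter Exemption" \
        --permissions=storage.buckets.exemptFromIpFilter \
        --stage=GA

    # Grant the custom role to the user (or service account) at the project level.
    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="user:alice@example.com" \
        --role="projects/PROJECT_ID/roles/ipFilterExempt"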

    Read Blog

    PM2 Process Monitoring and Alerting for Enhancing Service Availability

    AWS, PM2 Monitoring, Node.js, AWS Lambda, DevOps Automation
    Mar 26, 2025
    Read Case Study

    Saasification and Cloud Migration for vitagroup: a key player in the highly-regulated German Healthcare sector

    AWS, Migration, SaaS
    Feb 18, 2025
    Read Case Study

    Secure Data Analytics Pipeline Setup

    AWS, Data Analytics Pipeline, Cloud Security, Cost Optimization
    Jan 20, 2025
    Read Case Study

    Enhancing DDoS Protection with Extended IP Block Duration Using AWS WAF Rate-Based Rules

    AWS, AWS WAF, DDoS Protection, IP Blocking, Cloud Security
    Dec 23, 2024
    Read Case Study

    Smart Risk Assessment: Bitech’s AI-Driven Solution for Property Insurance

    AWS, AI Risk Assessment, Property Insurance, Predictive Analytics, Real-Time Forecasting
    Sep 26, 2024
    Read Case Study

    Building an AI-powered System for Reels Creation

    Google Cloud, QuickReel, Vertex AI, Custom ML Models, Video Editing Technology
    Jul 22, 2024
    Read Case Study

    Transforming Prescription Verification with Google Cloud AI

    Google Cloud, Vision AI, Document AI, Vertex AI
    Jul 22, 2024
    Read Case Study

    Streamlining CI/CD: A Seamless Journey from Bitbucket to Elastic Beanstalk with AWS CodePipeline

    AWS, CI/CD Pipeline, AWS S3
    Jul 22, 2024
    Read Case Study

    Cost-Effective Auto-Scaling for WordPress on AWS: S3 Data Sync Solution

    AWS
    Jul 3, 2024
    Read Case Study

    From Manual to Automated: Transforming Deployment and Enhancing Security

    AWS, Cloud Security, AWS WAF, CI/CD Pipelines
    Jul 2, 2024
    Read Case Study

    Streamlining MongoDB Analytics with AWS

    AWS, MongoDB, Cloud Security, Data Analytics
    Jul 2, 2024
    Read Case Study

    Transforming Interior Design with AI

    GenAI, AWS, AI/ML
    Jun 28, 2024
    Read Case Study

    Setting Up Google Cloud Account and Migrating Critical Applications for Rakuten India

    Google Cloud, Cloud Migration, IAM, Security
    Jun 27, 2024
    Read Case Study

    Automating Prescription Verification for Tata 1MG

    GCP, Cloud Technology, AI/ML
    Jun 27, 2024
    Read Case Study

    Streamlining FSSAI Compliance for Food Packaging

    GCP, AI/ML
    Jun 27, 2024
    Read Case Study

    Migration from AWS to GCP for an Ed Tech

    GCP, Cloud Migration, AI/ML
    Jun 27, 2024
    Read Case Study

    Dr.Karl-Remeis-Sternwarte Bamberg - Astronomisches Institut

    AWS, Cloud Migration
    May 10, 2024
    Read Case Study

    Autonomous Mobility MLOps with AWS Migration

    AWS, Cloud Migration, MLOps
    May 7, 2024
    Read Case Study

    Migration to Cloud and Setting Up of Analytics along With Managed Services

    AWS, Cloud Migration, Data Analytics
    Apr 30, 2024
    Read Case Study

    gocomo Migrates Social Data Platform to AWS for Performance and Scalability with Ankercloud

    AWS, Cloud Migration
    Apr 8, 2024
    Read Case Study

    Benchmarking AWS performance to run environmental simulations over Belgium

    AWS, HPC
    Apr 3, 2024
    Read Case Study

    SaaS based Cloud Native B2B Media Platform

    AWS, Cloud
    Aug 10, 2023
    Read Case Study

    SaaS Discovery Program

    AWS, SaaS Discovery, Online Workspace
    Aug 10, 2023
    Read Case Study

    Innovapptive's Cloud-Native Transformation with AWS

    AWS, Cloud
    Aug 10, 2023
    Read Case Study

    Developed Cloud Identity Security SaaS Platform

    SaaS, AWS, Cloud
    Aug 10, 2023
    Read Case Study

    Well-Architected Framework Review

    AWS, Travel Agency, WAFR
    Aug 10, 2023
    Read Case Study

    Model development for Image Object Classification and OCR analysis for mining industry

    AWS, Cloud
    Aug 10, 2023
    Read Case Study

    Modernization & SaaSification of B2B Platform

    AWS, Cloud
    Aug 10, 2023
    Read Case Study

    Mobile AI Claims solution for Insurers

    Cloud, AWS, Germany, Europe
    Aug 10, 2023
    Read Case Study

    High Performance Computing using Parallel Cluster, Infrastructure Set-up

    AWS, Cloud, HPC, Machine Learning, BioTech
    Aug 10, 2023
    Read Case Study

    Achieving Cost Optimization, Security, and Compliance: Ankercloud's AWS CloudOps Solutions for Federmeister

    AWS, DevOps
    Aug 10, 2023
    Read Case Study

    WAFR and Architecture validation

    AWS, HD Camera, Construction, WAFR
    Aug 10, 2023
    Read Case Study

    Bitech AG DevOps Migration from on-prem to AWS for German ISV

    AWS, DevOps, SaaS
    Aug 10, 2023
    Read Case Study

    AI & ML Solution for a Facade Building Company

    AWS, AI & ML, Construction, APAC
    Aug 10, 2023
    Read Case Study

    Migration of a SaaS Platform from On-Prem to GCP

    GCP, Cloud, SaaS
    Aug 10, 2023
    Read Case Study

    Data Lake Infrastructure Setup on AWS Cloud Platform

    AWS, Big data, India
    Aug 9, 2023
    Read Case Study

    Replication of On-premise Infrastructure into AWS Cloud on Docker Swarm platform

    AWS, Cloud Migration, Europe
    Aug 7, 2023
    Read Case Study

    Replication of On-premise Infrastructure into AWS Cloud on Docker Swarm platform

    AWS, Cloud Migration, Germany, Europe
    May 7, 2023
    Read Case Study

    Migration from On-prem to AWS of a Content Automation Platform

    AWS, Amazon OpenSearch, Cloud technology, Germany, Europe
    Jan 17, 2023
    Read Case Study

    Judith Evers Appointed Managing Director of Ankercloud

    May 26, 2025
    Read Announcement
