Empowering Diverse Industries with Cloud Innovation

From project-specific support to managed services, we help you accelerate time to market, maximize cost savings, and realize your growth ambitions.

AWS
HPC
Cloud
Bio Tech
Machine Learning

High Performance Computing using AWS ParallelCluster, Infrastructure Set-up

AWS
Cloud Migration

gocomo Migrates Social Data Platform to AWS for Performance & Scalability with Ankercloud

Google Cloud
SaaS
Cost Optimization
Cloud

Migrating a SaaS platform from On-Prem to GCP

AWS
HPC

Benchmarking AWS performance to run environmental simulations over Belgium

Countless Happy Clients and Counting!


“Ankercloud has been very helpful and understanding. All interactions have been smooth and enjoyable.”

Torbjörn Svensson
Head of Development

Awards and Recognition

Rising Star Partner of the Year
Google Cloud Partner
Google Cloud Infrastructure Specialization
Technology Fast 500
AWS Partner

Our Latest Achievement

Public Sector
Solution Provider
SaaS Services Competency
DevOps Services Competency
AWS WAF Delivery
AWS Glue Delivery
AWS Lambda Delivery
Amazon CloudFront Delivery
Migration Services Competency
Public Sector Solution Provider
AWS CloudFormation Delivery
Amazon OpenSearch Service Delivery
Well-Architected Partner Program
Cloud Operations Services Competency

Ankercloud: Partners with AWS, GCP, and Azure

We excel through partnerships with industry giants like AWS, GCP, and Azure, offering innovative solutions backed by leading cloud technologies.


Check out our blog

Blog

Pinpoint APM Implementation for a Node.js Application

Introduction

Application Performance Management (APM) is crucial for monitoring and managing the performance and availability of software applications. Pinpoint is an open-source APM tool that offers comprehensive insights into the performance and reliability of applications. It is designed to monitor large-scale distributed systems, providing real-time performance metrics, tracing, and detailed visualizations.

This guide provides a step-by-step approach to implementing Pinpoint APM for a Node.js application, including setting up the server, installing Docker, deploying Pinpoint, and integrating it with the Node.js application.

About Pinpoint

Pinpoint is a powerful APM tool that helps you understand your application's performance and track down issues. It supports a variety of technologies and provides functionalities such as:

  • Real-time application monitoring
  • Distributed tracing
  • Visualization of application topology
  • Alerts and notifications
  • Detailed transaction analysis

Set up a server

We have to launch a new server with at least 2 vCPUs and 4 GB of RAM.

Install Docker Engine

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io
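Note that the docker-ce packages above are served from Docker's own apt repository, which is not enabled by default. If it is not already configured, a minimal sketch for adding it (assuming an Ubuntu server, following the standard Docker installation steps) is:

sudo apt-get install -y ca-certificates curl gnupg

sudo install -m 0755 -d /etc/apt/keyrings

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

sudo apt-get update

Then re-run the install command above.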

Install Docker Compose

Download the Docker Compose binary into the /usr/local/bin directory:

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Apply executable permissions to the binary

sudo chmod +x /usr/local/bin/docker-compose

Verify the installation:

docker --version

docker-compose --version


Deploy Pinpoint Using Docker

Clone the git repository:

git clone https://github.com/pinpoint-apm/pinpoint-docker.git

cd pinpoint-docker

sudo docker-compose pull && docker-compose up -d
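The stack can take a few minutes to come up. As a quick sketch for confirming that all containers are running (run from the same pinpoint-docker directory):

sudo docker-compose ps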

NOTE:
If the docker-compose.yml bundled with the repository does not work, use the following YAML file to bring the stack up.

version: "3.6"

services:

  pinpoint-hbase:

    build:

      context: ./pinpoint-hbase/

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_HBASE_NAME}"

    image: "pinpointdocker/pinpoint-hbase:${PINPOINT_VERSION}"

    networks:

      - pinpoint

    environment:

      - AGENTINFO_TTL=${AGENTINFO_TTL}

      - AGENTSTATV2_TTL=${AGENTSTATV2_TTL}

      - APPSTATAGGRE_TTL=${APPSTATAGGRE_TTL}

      - APPINDEX_TTL=${APPINDEX_TTL}

      - AGENTLIFECYCLE_TTL=${AGENTLIFECYCLE_TTL}

      - AGENTEVENT_TTL=${AGENTEVENT_TTL}

      - STRINGMETADATA_TTL=${STRINGMETADATA_TTL}

      - APIMETADATA_TTL=${APIMETADATA_TTL}

      - SQLMETADATA_TTL=${SQLMETADATA_TTL}

      - TRACEV2_TTL=${TRACEV2_TTL}

      - APPTRACEINDEX_TTL=${APPTRACEINDEX_TTL}

      - APPMAPSTATCALLERV2_TTL=${APPMAPSTATCALLERV2_TTL}

      - APPMAPSTATCALLEV2_TTL=${APPMAPSTATCALLEV2_TTL}

      - APPMAPSTATSELFV2_TTL=${APPMAPSTATSELFV2_TTL}

      - HOSTAPPMAPV2_TTL=${HOSTAPPMAPV2_TTL}

    volumes:

      - hbase_data:/home/pinpoint/hbase

      - /home/pinpoint/zookeeper

    expose:

      # HBase Master API port

      - "60000"

      # HBase Master Web UI

      - "16010"

      # Regionserver API port

      - "60020"

      # HBase Regionserver web UI

      - "16030"

    ports:

      - "60000:60000"

      - "16010:16010"

      - "60020:60020"

      - "16030:16030"

    restart: always

    depends_on:

      - zoo1

  pinpoint-mysql:

    container_name: pinpoint-mysql

    image: mysql:8.0

    restart: "no"

    hostname: pinpoint-mysql

    entrypoint: > 

      sh -c "

      curl -SL "https://raw.githubusercontent.com/ga-ram/pinpoint/latest/web/src/main/resources/sql/CreateTableStatement-mysql.sql" -o /docker-entrypoint-initdb.d/CreateTableStatement-mysql.sql &&

      curl -SL "https://raw.githubusercontent.com/ga-ram/pinpoint/latest/web/src/main/resources/sql/SpringBatchJobRepositorySchema-mysql.sql" -o /docker-entrypoint-initdb.d/SpringBatchJobRepositorySchema-mysql.sql &&

      sed -i '/^--/d' /docker-entrypoint-initdb.d/CreateTableStatement-mysql.sql &&

      sed -i '/^--/d' /docker-entrypoint-initdb.d/SpringBatchJobRepositorySchema-mysql.sql &&

      docker-entrypoint.sh mysqld

      "

    ports:

      - "3306:3306"

    environment:

      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}

      - MYSQL_USER=${MYSQL_USER}

      - MYSQL_PASSWORD=${MYSQL_PASSWORD}

      - MYSQL_DATABASE=${MYSQL_DATABASE}

    volumes:

      - mysql_data:/var/lib/mysql

    networks:

      - pinpoint

  pinpoint-web:

    build:

      context: ./pinpoint-web/

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_WEB_NAME}"

    image: "pinpointdocker/pinpoint-web:${PINPOINT_VERSION}"

    depends_on:

      - pinpoint-hbase

      - pinpoint-mysql

      - zoo1

      - redis

    restart: always

    expose:

      - "9997"

    ports:

      - "9997:9997"

      - "${WEB_SERVER_PORT:-8080}:8080"

    environment:

      - WEB_SERVER_PORT=${WEB_SERVER_PORT}

      - SPRING_PROFILES_ACTIVE=${SPRING_PROFILES}

      - PINPOINT_ZOOKEEPER_ADDRESS=${PINPOINT_ZOOKEEPER_ADDRESS}

      - CLUSTER_ENABLE=${CLUSTER_ENABLE}

      - ADMIN_PASSWORD=${ADMIN_PASSWORD}

      - CONFIG_SENDUSAGE=${CONFIG_SENDUSAGE}

      - LOGGING_LEVEL_ROOT=${WEB_LOGGING_LEVEL_ROOT}

      - CONFIG_SHOW_APPLICATIONSTAT=${CONFIG_SHOW_APPLICATIONSTAT}

      - JDBC_DRIVERCLASSNAME=${JDBC_DRIVERCLASSNAME}

      - JDBC_URL=${SPRING_DATASOURCE_HIKARI_JDBCURL}

      - JDBC_USERNAME=${SPRING_DATASOURCE_HIKARI_USERNAME}

      - JDBC_PASSWORD=${SPRING_DATASOURCE_HIKARI_PASSWORD}

      - SPRING_DATASOURCE_HIKARI_JDBCURL=${SPRING_DATASOURCE_HIKARI_JDBCURL}

      - SPRING_DATASOURCE_HIKARI_USERNAME=${SPRING_DATASOURCE_HIKARI_USERNAME}

      - SPRING_DATASOURCE_HIKARI_PASSWORD=${SPRING_DATASOURCE_HIKARI_PASSWORD}

      - SPRING_METADATASOURCE_HIKARI_JDBCURL=${SPRING_METADATASOURCE_HIKARI_JDBCURL}

      - SPRING_METADATASOURCE_HIKARI_USERNAME=${SPRING_METADATASOURCE_HIKARI_USERNAME}

      - SPRING_METADATASOURCE_HIKARI_PASSWORD=${SPRING_METADATASOURCE_HIKARI_PASSWORD}

      - SPRING_DATA_REDIS_HOST=${SPRING_DATA_REDIS_HOST}

      - SPRING_DATA_REDIS_PORT=${SPRING_DATA_REDIS_PORT}

      - SPRING_DATA_REDIS_USERNAME=${SPRING_DATA_REDIS_USERNAME}

      - SPRING_DATA_REDIS_PASSWORD=${SPRING_DATA_REDIS_PASSWORD}

    links:

      - "pinpoint-mysql:pinpoint-mysql"

    networks:

      - pinpoint

  pinpoint-collector:

    build:

      context: ./pinpoint-collector/

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_COLLECTOR_NAME}"

    image: "pinpointdocker/pinpoint-collector:${PINPOINT_VERSION}"

    depends_on:

      - pinpoint-hbase

      - zoo1

      - redis

    restart: always

    expose:

      - "9991"

      - "9992"

      - "9993"

      - "9994"

      - "9995"

      - "9996"

    ports:

      - "${COLLECTOR_RECEIVER_GRPC_AGENT_PORT:-9991}:9991/tcp"

      - "${COLLECTOR_RECEIVER_GRPC_STAT_PORT:-9992}:9992/tcp"

      - "${COLLECTOR_RECEIVER_GRPC_SPAN_PORT:-9993}:9993/tcp"

      - "${COLLECTOR_RECEIVER_BASE_PORT:-9994}:9994"

      - "${COLLECTOR_RECEIVER_STAT_UDP_PORT:-9995}:9995/tcp"

      - "${COLLECTOR_RECEIVER_SPAN_UDP_PORT:-9996}:9996/tcp"

      - "${COLLECTOR_RECEIVER_STAT_UDP_PORT:-9995}:9995/udp"

      - "${COLLECTOR_RECEIVER_SPAN_UDP_PORT:-9996}:9996/udp"

    networks:

      pinpoint:

        ipv4_address: ${COLLECTOR_FIXED_IP}

    environment:

      - SPRING_PROFILES_ACTIVE=${SPRING_PROFILES}

      - PINPOINT_ZOOKEEPER_ADDRESS=${PINPOINT_ZOOKEEPER_ADDRESS}

      - CLUSTER_ENABLE=${CLUSTER_ENABLE}

      - LOGGING_LEVEL_ROOT=${COLLECTOR_LOGGING_LEVEL_ROOT}

      - FLINK_CLUSTER_ENABLE=${FLINK_CLUSTER_ENABLE}

      - FLINK_CLUSTER_ZOOKEEPER_ADDRESS=${FLINK_CLUSTER_ZOOKEEPER_ADDRESS}

      - SPRING_DATA_REDIS_HOST=${SPRING_DATA_REDIS_HOST}

      - SPRING_DATA_REDIS_PORT=${SPRING_DATA_REDIS_PORT}

      - SPRING_DATA_REDIS_USERNAME=${SPRING_DATA_REDIS_USERNAME}

      - SPRING_DATA_REDIS_PASSWORD=${SPRING_DATA_REDIS_PASSWORD}

  pinpoint-quickstart:

    build:

      context: ./pinpoint-quickstart/

      dockerfile: Dockerfile

    container_name: "pinpoint-quickstart"

    image: "pinpointdocker/pinpoint-quickstart"

    ports:

      - "${APP_PORT:-8085}:8080"

    volumes:

      - data-volume:/pinpoint-agent

    environment:

      JAVA_OPTS: "-javaagent:/pinpoint-agent/pinpoint-bootstrap.jar -Dpinpoint.agentId=${AGENT_ID} -Dpinpoint.applicationName=${APP_NAME} -Dpinpoint.profiler.profiles.active=${SPRING_PROFILES}"

    networks:

      - pinpoint

    depends_on:

      - pinpoint-agent

  pinpoint-batch:

    build:

      context: ./pinpoint-batch/

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_BATCH_NAME}"

    image: "pinpointdocker/pinpoint-batch:${PINPOINT_VERSION}"

    depends_on:

      - pinpoint-hbase

      - pinpoint-mysql

      - zoo1

    restart: always

    environment:

      - BATCH_SERVER_PORT=${BATCH_SERVER_PORT}

      - SPRING_PROFILES_ACTIVE=${SPRING_PROFILES}

      - PINPOINT_ZOOKEEPER_ADDRESS=${PINPOINT_ZOOKEEPER_ADDRESS}

      - CLUSTER_ENABLE=${CLUSTER_ENABLE}

      - ADMIN_PASSWORD=${ADMIN_PASSWORD}

      - CONFIG_SENDUSAGE=${CONFIG_SENDUSAGE}

      - LOGGING_LEVEL_ROOT=${BATCH_LOGGING_LEVEL_ROOT}

      - CONFIG_SHOW_APPLICATIONSTAT=${CONFIG_SHOW_APPLICATIONSTAT}

      - BATCH_FLINK_SERVER=${BATCH_FLINK_SERVER}

      - JDBC_DRIVERCLASSNAME=${JDBC_DRIVERCLASSNAME}

      - JDBC_URL=${SPRING_DATASOURCE_HIKARI_JDBCURL}

      - JDBC_USERNAME=${SPRING_DATASOURCE_HIKARI_USERNAME}

      - JDBC_PASSWORD=${SPRING_DATASOURCE_HIKARI_PASSWORD}

      - SPRING_DATASOURCE_HIKARI_JDBCURL=${SPRING_DATASOURCE_HIKARI_JDBCURL}

      - SPRING_DATASOURCE_HIKARI_USERNAME=${SPRING_DATASOURCE_HIKARI_USERNAME}

      - SPRING_DATASOURCE_HIKARI_PASSWORD=${SPRING_DATASOURCE_HIKARI_PASSWORD}

      - SPRING_METADATASOURCE_HIKARI_JDBCURL=${SPRING_METADATASOURCE_HIKARI_JDBCURL}

      - SPRING_METADATASOURCE_HIKARI_USERNAME=${SPRING_METADATASOURCE_HIKARI_USERNAME}

      - SPRING_METADATASOURCE_HIKARI_PASSWORD=${SPRING_METADATASOURCE_HIKARI_PASSWORD}

      - ALARM_MAIL_SERVER_URL=${ALARM_MAIL_SERVER_URL}

      - ALARM_MAIL_SERVER_PORT=${ALARM_MAIL_SERVER_PORT}

      - ALARM_MAIL_SERVER_USERNAME=${ALARM_MAIL_SERVER_USERNAME}

      - ALARM_MAIL_SERVER_PASSWORD=${ALARM_MAIL_SERVER_PASSWORD}

      - ALARM_MAIL_SENDER_ADDRESS=${ALARM_MAIL_SENDER_ADDRESS}

      - ALARM_MAIL_TRANSPORT_PROTOCOL=${ALARM_MAIL_TRANSPORT_PROTOCOL}

      - ALARM_MAIL_SMTP_PORT=${ALARM_MAIL_SMTP_PORT}

      - ALARM_MAIL_SMTP_AUTH=${ALARM_MAIL_SMTP_AUTH}

      - ALARM_MAIL_SMTP_STARTTLS_ENABLE=${ALARM_MAIL_SMTP_STARTTLS_ENABLE}

      - ALARM_MAIL_SMTP_STARTTLS_REQUIRED=${ALARM_MAIL_SMTP_STARTTLS_REQUIRED}

      - ALARM_MAIL_DEBUG=${ALARM_MAIL_DEBUG}

    links:

      - "pinpoint-mysql:pinpoint-mysql"

    networks:

      - pinpoint

  pinpoint-agent:

    build:

      context: ./pinpoint-agent/

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_AGENT_NAME}"

    image: "pinpointdocker/pinpoint-agent:${PINPOINT_VERSION}"

    restart: unless-stopped

    networks:

      - pinpoint

    volumes:

      - data-volume:/pinpoint-agent

    environment:

      - SPRING_PROFILES=${SPRING_PROFILES}

      - COLLECTOR_IP=${COLLECTOR_IP}

      - PROFILER_TRANSPORT_AGENT_COLLECTOR_PORT=${PROFILER_TRANSPORT_AGENT_COLLECTOR_PORT}

      - PROFILER_TRANSPORT_METADATA_COLLECTOR_PORT=${PROFILER_TRANSPORT_METADATA_COLLECTOR_PORT}

      - PROFILER_TRANSPORT_STAT_COLLECTOR_PORT=${PROFILER_TRANSPORT_STAT_COLLECTOR_PORT}

      - PROFILER_TRANSPORT_SPAN_COLLECTOR_PORT=${PROFILER_TRANSPORT_SPAN_COLLECTOR_PORT}

      - PROFILER_SAMPLING_TYPE=${PROFILER_SAMPLING_TYPE}

      - PROFILER_SAMPLING_COUNTING_SAMPLING_RATE=${PROFILER_SAMPLING_COUNTING_SAMPLING_RATE}

      - PROFILER_SAMPLING_PERCENT_SAMPLING_RATE=${PROFILER_SAMPLING_PERCENT_SAMPLING_RATE}

      - PROFILER_SAMPLING_NEW_THROUGHPUT=${PROFILER_SAMPLING_NEW_THROUGHPUT}

      - PROFILER_SAMPLING_CONTINUE_THROUGHPUT=${PROFILER_SAMPLING_CONTINUE_THROUGHPUT}

      - DEBUG_LEVEL=${AGENT_DEBUG_LEVEL}

      - PROFILER_TRANSPORT_MODULE=${PROFILER_TRANSPORT_MODULE}

    depends_on:

      - pinpoint-collector

  #zookeepers

  zoo1:

    image: zookeeper:3.4.13

    restart: always

    hostname: zoo1

    expose:

      - "2181"

      - "2888"

      - "3888"

    ports:

      - "2181"

    environment:

      ZOO_MY_ID: 1

      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888

    networks:

      - pinpoint

  zoo2:

    image: zookeeper:3.4.13

    restart: always

    hostname: zoo2

    expose:

      - "2181"

      - "2888"

      - "3888"

    ports:

      - "2181"

    environment:

      ZOO_MY_ID: 2

      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zoo3:2888:3888

    networks:

      - pinpoint

  zoo3:

    image: zookeeper:3.4.13

    restart: always

    hostname: zoo3

    expose:

      - "2181"

      - "2888"

      - "3888"

    ports:

      - "2181"

    environment:

      ZOO_MY_ID: 3

      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=0.0.0.0:2888:3888

    networks:

      - pinpoint

  ##flink

  jobmanager:

    build:

      context: pinpoint-flink

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_FLINK_NAME}-jobmanager"

    image: "pinpointdocker/pinpoint-flink:${PINPOINT_VERSION}"

    expose:

      - "6123"

    ports:

      - "${FLINK_WEB_PORT:-8081}:8081"

    command: standalone-job -p 1 pinpoint-flink-job.jar -spring.profiles.active release

    environment:

      - JOB_MANAGER_RPC_ADDRESS=jobmanager

      - PINPOINT_ZOOKEEPER_ADDRESS=${PINPOINT_ZOOKEEPER_ADDRESS}

    networks:

      - pinpoint

    depends_on:

      - zoo1

  taskmanager:

    build:

      context: pinpoint-flink

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_FLINK_NAME}-taskmanager"

    image: "pinpointdocker/pinpoint-flink:${PINPOINT_VERSION}"

    expose:

      - "6121"

      - "6122"

      - "19994"

    ports:

      - "6121:6121"

      - "6122:6122"

      - "19994:19994"

    depends_on:

      - zoo1

      - jobmanager

    command: taskmanager

    links:

      - "jobmanager:jobmanager"

    environment:

      - JOB_MANAGER_RPC_ADDRESS=jobmanager

    networks:

      - pinpoint

  redis:

    image: redis:7.0.14

    restart: always

    hostname: pinpoint-redis

    ports:

      - "6379:6379"

    networks:

      - pinpoint

volumes:

  data-volume:

  mysql_data:

  hbase_data:

networks:

  pinpoint:

    driver: bridge

    ipam:

      config:

        - subnet: ${PINPOINT_NETWORK_SUBNET}

Below is an explanation of the components used in the docker-compose YAML file.

1. Services

Services are the individual containers that make up the application. Each service runs in its own container but can interact with other services defined in the same docker-compose.yml file.

a. pinpoint-hbase

  • Purpose: Pinpoint uses HBase as its primary storage for storing tracing data.
  • Build: The service is built from a Dockerfile located in the ./pinpoint-hbase/ directory.
  • Environment Variables: These variables define various TTL (Time-to-Live) settings for different types of data stored in HBase.
  • Volumes: Persistent storage for HBase data is mounted on the host to ensure data persistence across container restarts.
  • Ports: The service exposes several ports for communication (60000, 16010, 60020, 16030).
  • Depends_on: This ensures that zoo1 (Zookeeper) service starts before pinpoint-hbase.

b. pinpoint-mysql

  • Purpose: MySQL is used to store application metadata and other relational data needed by Pinpoint.
  • Image: A MySQL 8.0 image from Docker Hub is used.
  • Environment Variables: These include MySQL credentials like root password, user, password, and database name.
  • Volumes: Persistent storage for MySQL data is mounted on the host.
  • Ports: The MySQL service is exposed on port 3306.

c. pinpoint-web

  • Purpose: This is the web UI for Pinpoint, allowing users to visualize and analyze the tracing data.
  • Build: The service is built from a Dockerfile located in the ./pinpoint-web/ directory.
  • Depends_on: This ensures that the pinpoint-hbase, pinpoint-mysql, zoo1, and redis services are running before starting the web service.
  • Environment Variables: These configure the web service, including database connections, logging levels, and other properties.
  • Ports: The service exposes port 9997 and serves the web UI on port 8080 (WEB_SERVER_PORT).

d. pinpoint-collector

  • Purpose: The collector service gathers trace data from applications and stores it in HBase.
  • Build: The service is built from a Dockerfile located in the ./pinpoint-collector/ directory.
  • Depends_on: This ensures that pinpoint-hbase, zoo1, and redis services are running before starting the collector.
  • Environment Variables: These configure the collector service, including its connection to HBase, Zookeeper, and logging levels.
  • Ports: The collector exposes several ports (9991-9996) for various types of communication (gRPC, UDP, etc.).
  • Networks: The collector service is part of the pinpoint network and uses a fixed IP address.

e. zoo1

  • Purpose: Zookeeper is used to manage and coordinate the distributed components of Pinpoint.
  • Image: A Zookeeper image (3.4.13) from Docker Hub is used.
  • Environment Variables: These configure the Zookeeper instance.
  • Ports: The service is exposed on port 2181 for Zookeeper communication.

f. redis

  • Purpose: Redis is used as a caching layer for Pinpoint, helping to improve performance.
  • Image: A Redis image (7.0.14) from Docker Hub is used.
  • Ports: The Redis service is exposed on port 6379.

2. Networks

Networks allow the services to communicate with each other. In this docker-compose.yml, a custom bridge network named pinpoint is defined.

  • pinpoint: This is a user-defined bridge network that allows all the services to communicate with each other on a private network. Each service can reach others using their service names.

3. Volumes

Volumes provide persistent storage that survives container restarts. They are used to store data generated by services (like databases).

  • hbase_data: A volume for storing HBase data.
  • mysql_data: A volume for storing MySQL data.

4. Environment Variables

Environment variables are used to configure the services at runtime. These can include database credentials, logging levels, ports, and other configuration details. Each service defines its own set of environment variables, tailored to its specific needs.

5. Ports

Ports are exposed to allow external access to the services. For example:

  • 3306:3306 for MySQL
  • 8080:8080 for the Pinpoint Web UI (WEB_SERVER_PORT)
  • 6379:6379 for Redis

6. Restart Policies

Restart policies (restart: always) ensure that the containers are automatically restarted if they stop or crash. This helps maintain the high availability of the services.

7. Links

Links allow containers to communicate with each other using hostnames. In this docker-compose.yml, the pinpoint-web and pinpoint-batch services are linked to the pinpoint-mysql service to facilitate database communication.

8. Expose vs. Ports

  • Expose: This allows containers to communicate with each other internally, without exposing the ports to the host machine.
  • Ports: These map the container ports to the host machine, allowing external access to the services.

Then we need to whitelist ports 8080, 80, and 443 in the security group.
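If the server runs on AWS, these rules can also be added from the CLI. A sketch with a placeholder security group ID and source CIDR (restrict the CIDR to your own IP, and repeat for ports 80 and 443):

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 203.0.113.10/32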

Once the stack is up, the Pinpoint dashboard can be opened in a browser on the web UI port.

Integrate Pinpoint into the Node.js application:

We have to add the Pinpoint agent to the Node.js application.
Commands to install the Pinpoint agent:

Install with npm:

npm install --save pinpoint-node-agent

Install with yarn:

yarn add pinpoint-node-agent

Adding the code:

To run Pinpoint agent for applications, we need to make sure the prerequisites are in place first.

CommonJS

require('pinpoint-node-agent')

If we are using pm2, use node-args (CLI) or node_args (ecosystem file), as in the ecosystem file below.

module.exports = {

  apps : [{

    name: "app",

    script: "./app.js",

    'node_args': ['-r', 'pinpoint-node-agent']

  }]

}
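The same can be done from the pm2 CLI without an ecosystem file; a sketch, assuming the entry point is app.js:

pm2 start app.js --node-args="-r pinpoint-node-agent"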

The ecosystem file above matches the configuration we used in our own setup.


Configure with environment variables and start the application


Based on the pinpoint-config-default.json file that ships with the agent, only the necessary settings are overridden as environment variables.

PINPOINT_AGENT_ID=${HOSTNAME} PINPOINT_APPLICATION_NAME=Test-Node-App PINPOINT_COLLECTOR_IP=<pinpoint server private-ip> PINPOINT_ENABLE=true pm2 start ~/application path/app.js

Once the application is running, check the site; the instrumented application should then appear in the Pinpoint web UI.

Conclusion

By following these steps, we have successfully set up Pinpoint APM to monitor our Node.js application. With Pinpoint, we can gain deep insights into our application's performance, identify bottlenecks, and optimize our code to ensure a smooth and efficient user experience. Pinpoint's real-time monitoring and comprehensive tracing capabilities make it an invaluable tool for managing the performance of our applications.

Reference


https://github.com/pinpoint-apm

https://github.com/pinpoint-apm/pinpoint

https://github.com/pinpoint-apm/pinpoint-node-agent

https://www.baeldung.com/ops/pinpoint-intro 

Sep 16, 2024


Blog

Migrating a VM Instance from GCP to AWS: A Step-by-Step Guide

Overview

Moving a virtual machine (VM) instance from Google Cloud Platform (GCP) to Amazon Web Services (AWS) can seem daunting, but with the right tools and a step-by-step process it is manageable. In this post we will walk you through the entire process and make the transition from GCP to AWS smooth. Here we are using AWS’s native tool, Application Migration Service, to move a VM instance from GCP to AWS.

Architecture Diagram

Server Migration Architecture GCP to AWS

Step-by-Step Guide

Step 1: Setup on GCP

Launch a Test Windows VM Instance

Go to your GCP console and create a test Windows VM. We created a 51 GB boot disk for this example. This will be our source VM.
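For reference, a comparable source VM can also be created from the gcloud CLI; a sketch with placeholder instance name, zone, and machine type:

gcloud compute instances create source-windows-vm --zone=us-central1-a --machine-type=e2-standard-2 --image-family=windows-2022 --image-project=windows-cloud --boot-disk-size=51GB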

RDP into the Windows Server

Next, RDP into your Windows server. Once connected, you need to install the AWS Application Migration Service (AMS) agent on this server.

Install the AMS Agent

To install the AMS agent, download the installer from the following URL:

https://aws-application-migration-service-us-east-1.s3.us-east-1.amazonaws.com/latest/windows/AwsReplicationWindowsInstaller.exe
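On a recent Windows Server, the download can be done from the command prompt with the bundled curl.exe (a sketch; the URL is the one above):

curl.exe -L -o AwsReplicationWindowsInstaller.exe https://aws-application-migration-service-us-east-1.s3.us-east-1.amazonaws.com/latest/windows/AwsReplicationWindowsInstaller.exe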

For more details, refer to the AWS documentation: https://docs.aws.amazon.com/mgn/latest/ug/windows-agent.html

Step 2: Install the AMS Agent

Navigate to the Downloads folder and open the AWS agent with administrator privileges using the Command prompt.

When installing, you will be asked to choose the AWS region to replicate to. For this guide we chose N. Virginia (us-east-1).


Step 3: Prepare the AWS Console

Create a User and Attach Permissions

In the AWS console create a new user and attach an AWS replication permission role to it. Generate access and secret keys for this user.

While creating the keys, choose the “third-party service” option for that key.
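The same user setup can be scripted with the AWS CLI. A sketch with a placeholder user name; the managed policy is the one referenced in the Application Migration Service agent installation documentation and should be verified for your account:

aws iam create-user --user-name mgn-agent-install

aws iam attach-user-policy --user-name mgn-agent-install --policy-arn arn:aws:iam::aws:policy/AWSApplicationMigrationAgentPolicy

aws iam create-access-key --user-name mgn-agent-install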

Enter the Keys into the GCP Windows Server

Enter the access key and secret key into the GCP Windows server. The AMS agent will ask which disks to replicate (e.g. C and D drives). For this example we just pressed enter to replicate all disks.

Once done the AMS agent will install and start replicating your data.

In our AWS account, one instance was created:

After installing the AMS agent on the source Windows server in GCP, a replication server was created in the AWS EC2 console. This instance was used to replicate all VM instance data from the GCP account to the AWS account.


Step 4: Monitor the Data Migration

Go to the Application Migration Service console in your AWS account. In the source servers list you should see your GCP VM instance.


The data migration will start and you can monitor it. Depending on the size of your boot disk and the amount of data this may take some time.


It took over half an hour to migrate the data from a 51 GB boot disk on a GCP VM instance to AWS. Once completed, it was ready for the testing stage.


Step 5: Create a Launch Template

After the data migration is done, create a launch template for your use case. This launch template should include instance type, key pair, VPC range, subnets, etc. The new EC2 instance will be launched from this template.


Step 6: Create a Replication Template

Similarly, create a replication template. This template will replicate your data to your new AWS environment.


Step 7: Launch an EC2 Test Instance

Once the templates are set up, launch an EC2 test instance from the boot disk of your source GCP VM instance. Take a snapshot of your instance to ensure data integrity. The test instance should launch successfully and match your original GCP VM. This is automated; no manual migration steps are required.
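The test launch can also be triggered from the AWS CLI; a sketch, assuming a placeholder MGN source server ID taken from the Application Migration Service console:

aws mgn start-test --source-server-ids s-1234567890abcdef0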


Once we trigger the test launch, everything happens automatically and the test EC2 instance is brought up without manual intervention.


Once the above is done, the data is migrated from GCP to AWS through the AWS Application Migration Service replication server, and the test EC2 instance appears in the AWS EC2 console.



Step 8: Final Cutover Stage

Once the cutover is complete and a new EC2 instance is launched, the test EC2 instance and the replication server are terminated, leaving the new EC2 instance with our custom configuration.
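Like the test launch, the final cutover can be started from the AWS CLI as well; a sketch with the same placeholder source server ID:

aws mgn start-cutover --source-server-ids s-1234567890abcdef0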

Step 9: Verify the EC2 Instance

Log in to the new EC2 instance using RDP and verify that all data is intact and accessible; check for any discrepancies.

Step 10: Test Your Application

After verifying the data, test your application to see if it works as expected in the new AWS environment. We tested our sample web application and it worked.


Conclusion

Migrating a VM instance from GCP to AWS is a multi-step process, but with proper planning and execution it can be done smoothly. Follow this guide and your data will be migrated securely and your applications will keep running reliably in the new environment.

Aug 13, 2024


Blog

ISO 27001:2022 Made Easy: How Ankercloud and Vanta Simplify Compliance

At Ankercloud, our commitment to information security is reflected in our ISO 27001:2022 certification. Leveraging our expertise and advanced tools, we help other organizations achieve the same certification efficiently. With Vanta, we ensure a streamlined, automated, and effective compliance journey, showcasing our dedication to the highest standards of information security.

What is ISO 27001:2022?

ISO 27001:2022 is a global standard for managing and protecting sensitive company information through an Information Security Management System (ISMS). It ensures the confidentiality, integrity, and availability of data by providing a structured approach to managing information security risks.

The ISO 27001:2022 Process (Traditional Approach)

Obtaining ISO 27001 certification requires the following crucial steps:

Preparation (1-3 months) 

Familiarize yourself with the standard, define the scope, and perform an initial gap analysis

Implementation (3-6 months) 

Develop an ISMS, conduct risk assessments, implement necessary controls, and document policies

Internal Audit (1-2 months) 

Evaluate compliance with the ISMS and identify improvements

Management Review (1 month) 

Review ISMS performance and align with organizational objectives

Certification Audit (1-2 months) 

Engage a certification body for stage 1 (document review) and stage 2 (on-site assessment) audits

Post-Certification (Ongoing) 

Continuously monitor, conduct internal audits, and perform management reviews

In total, the process can take about 6 to 12 months, depending on factors like the organization's size, complexity, and preparedness.

How Vanta Simplifies ISO 27001:2022 Compliance

Vanta, a compliance automation platform, transforms the compliance process by automating security monitoring and evidence collection, making ISO 27001:2022 compliance more manageable. Here's how:

  1. Automated Security Monitoring: Vanta continuously monitors your systems for security issues, ensuring you meet ISO 27001:2022 requirements without manual intervention.
  2. Evidence Collection: Vanta automates 90% of the evidence collection, such as access logs, security configurations, and compliance status reports.
  3. Compliance Management: A centralized dashboard helps manage and track compliance efforts, simplifying the process.
  4. Risk Assessment: Vanta identifies vulnerabilities and risks, providing effective recommendations.
  5. Automated Documentation: Generates and maintains required documentation for audits, reducing the manual workload.

With Vanta's automation approach, the ISO 27001:2022 certification process can be significantly expedited, allowing organizations to achieve certification in as little as 2 to 3 months. This accelerated timeline is made possible by Vanta's efficient, automated workflows and continuous monitoring, which streamline compliance tasks and reduce the time typically required for manual processes.

Benefits of Using Vanta Compliance Tools Compared to Traditional Methods

Vanta offers numerous advantages over traditional compliance methods:

  1. Simplified Management and Guidance: Reduces complexities and provides step-by-step guidance, lowering the administrative burden.
  2. Automated Detection and Proactive Assessment: Ensures timely identification and prioritization of security risks.
  3. Real-time Dashboards and Streamlined Audits: Provides immediate visibility into compliance status and simplifies audit preparation.
  4. Seamless Integration and User-Friendly Interface: Enhances workflow efficiency with seamless integration and an intuitive interface.
  5. Enhanced Data Protection and Trust Building: Strengthens data protection and demonstrates strong security practices to stakeholders.
  6. Time and Cost Savings with Continuous Monitoring: Automation reduces time and costs, while continuous monitoring ensures long-term security and compliance.

How Ankercloud Can Help Companies Achieve ISO 27001:2022 Certification Using Vanta

With ISO 27001:2022 certified lead auditors on staff, Ankercloud enhances organizations' information security practices, ensuring compliance with legal and regulatory requirements. We equip organizations with the skills to effectively manage risks, fostering a proactive approach to data protection. Implementing ISO 27001:2022 can streamline operations, improve efficiency, and build trust with customers and stakeholders.

  • Expert Guidance: Ankercloud's expertise guides companies through the ISO 27001:2022 process efficiently.
  • Platform Utilization: Vanta's automation and monitoring tools streamline compliance.
  • Customized Support: Tailored services meet specific company needs, ensuring comprehensive ISO 27001:2022 coverage.
  • Accelerated Timeline: Vanta's automated processes and Ankercloud's expertise enable faster ISO certification.
  • Continuous Improvement: Ankercloud helps maintain and improve ISMS post-certification, ensuring ongoing compliance and security.

Conclusion

Ankercloud's expertise, combined with Vanta's automation capabilities, offers a powerful solution for companies seeking ISO 27001:2022 certification. By streamlining the compliance process through automated security monitoring, evidence collection, and compliance management, Ankercloud helps companies achieve certification efficiently and effectively. Leveraging Vanta, Ankercloud ensures a smooth and cost-effective journey to certification, enhancing the overall security posture of your organization.

Jul 17, 2024


Pinpoint APM Implementation for Node Js Application

Sep 16, 2024
00

Introduction

Application Performance Management (APM) is crucial for monitoring and managing the performance and availability of software applications. Pinpoint is an open-source APM tool that offers comprehensive insights into the performance and reliability of applications. It is designed to monitor large-scale distributed systems, providing real-time performance metrics, tracing, and detailed visualizations.

This guide provides a step-by-step approach to implementing Pinpoint APM for a Node.js application, including setting up the server, installing Docker, deploying Pinpoint, and integrating it with the Node.js application.

About Pinpoint

Pinpoint is a powerful APM tool that helps understand the application's performance and track down issues. It supports a variety of technologies and provides functionalities like:

  • Real-time application monitoring
  • Distributed tracing
  • Visualization of application topology
  • Alerts and notifications
  • Detailed transaction analysis

Setup a server:

We have to launch a new server with a minimum of 2vCPU and 4GB RAM.

Install Docker Engine

sudo apt-get update

sudo apt-get install docker-ce docker-ce-cli containerd.io

Install Docker Compose

Download the Docker Compose binary into the /usr/local/bin directory:

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Apply executable permissions to the binary

sudo chmod +x /usr/local/bin/docker-compose

Verify the installation:

docker --version

docker-compose --version


Deploy Pinpoint Using Docker

Clone the git repository:

git clone https://github.com/pinpoint-apm/pinpoint-docker.git

cd pinpoint-docker

sudo docker-compose pull && docker-compose up -d

NOTE:
If the above docker-compose.yml won’t work please use the following yml file to up the docker.

version: "3.6"

services:

  pinpoint-hbase:

    build:

      context: ./pinpoint-hbase/

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_HBASE_NAME}"

    image: "pinpointdocker/pinpoint-hbase:${PINPOINT_VERSION}"

    networks:

      - pinpoint

    environment:

      - AGENTINFO_TTL=${AGENTINFO_TTL}

      - AGENTSTATV2_TTL=${AGENTSTATV2_TTL}

      - APPSTATAGGRE_TTL=${APPSTATAGGRE_TTL}

      - APPINDEX_TTL=${APPINDEX_TTL}

      - AGENTLIFECYCLE_TTL=${AGENTLIFECYCLE_TTL}

      - AGENTEVENT_TTL=${AGENTEVENT_TTL}

      - STRINGMETADATA_TTL=${STRINGMETADATA_TTL}

      - APIMETADATA_TTL=${APIMETADATA_TTL}

      - SQLMETADATA_TTL=${SQLMETADATA_TTL}

      - TRACEV2_TTL=${TRACEV2_TTL}

      - APPTRACEINDEX_TTL=${APPTRACEINDEX_TTL}

      - APPMAPSTATCALLERV2_TTL=${APPMAPSTATCALLERV2_TTL}

      - APPMAPSTATCALLEV2_TTL=${APPMAPSTATCALLEV2_TTL}

      - APPMAPSTATSELFV2_TTL=${APPMAPSTATSELFV2_TTL}

      - HOSTAPPMAPV2_TTL=${HOSTAPPMAPV2_TTL}

    volumes:

      - hbase_data:/home/pinpoint/hbase

      - /home/pinpoint/zookeeper

    expose:

      # HBase Master API port

      - "60000"

      # HBase Master Web UI

      - "16010"

      # Regionserver API port

      - "60020"

      # HBase Regionserver web UI

      - "16030"

    ports:

      - "60000:60000"

      - "16010:16010"

      - "60020:60020"

      - "16030:16030"

    restart: always

    depends_on:

      - zoo1

  pinpoint-mysql:

    container_name: pinpoint-mysql

    image: mysql:8.0

    restart: "no"

    hostname: pinpoint-mysql

    entrypoint: > 

      sh -c "

      curl -SL "https://raw.githubusercontent.com/ga-ram/pinpoint/latest/web/src/main/resources/sql/CreateTableStatement-mysql.sql" -o /docker-entrypoint-initdb.d/CreateTableStatement-mysql.sql &&

      curl -SL "https://raw.githubusercontent.com/ga-ram/pinpoint/latest/web/src/main/resources/sql/SpringBatchJobRepositorySchema-mysql.sql" -o /docker-entrypoint-initdb.d/SpringBatchJobRepositorySchema-mysql.sql &&

      sed -i '/^--/d' /docker-entrypoint-initdb.d/CreateTableStatement-mysql.sql &&

      sed -i '/^--/d' /docker-entrypoint-initdb.d/SpringBatchJobRepositorySchema-mysql.sql &&

      docker-entrypoint.sh mysqld

      "

    ports:

      - "3306:3306"

    environment:

      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}

      - MYSQL_USER=${MYSQL_USER}

      - MYSQL_PASSWORD=${MYSQL_PASSWORD}

      - MYSQL_DATABASE=${MYSQL_DATABASE}

    volumes:

      - mysql_data:/var/lib/mysql

    networks:

      - pinpoint

  pinpoint-web:

    build:

      context: ./pinpoint-web/

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_WEB_NAME}"

    image: "pinpointdocker/pinpoint-web:${PINPOINT_VERSION}"

    depends_on:

      - pinpoint-hbase

      - pinpoint-mysql

      - zoo1

      - redis

    restart: always

    expose:

      - "9997"

    ports:

      - "9997:9997"

      - "${WEB_SERVER_PORT:-8080}:8080"

    environment:

      - WEB_SERVER_PORT=${WEB_SERVER_PORT}

      - SPRING_PROFILES_ACTIVE=${SPRING_PROFILES}

      - PINPOINT_ZOOKEEPER_ADDRESS=${PINPOINT_ZOOKEEPER_ADDRESS}

      - CLUSTER_ENABLE=${CLUSTER_ENABLE}

      - ADMIN_PASSWORD=${ADMIN_PASSWORD}

      - CONFIG_SENDUSAGE=${CONFIG_SENDUSAGE}

      - LOGGING_LEVEL_ROOT=${WEB_LOGGING_LEVEL_ROOT}

      - CONFIG_SHOW_APPLICATIONSTAT=${CONFIG_SHOW_APPLICATIONSTAT}

      - JDBC_DRIVERCLASSNAME=${JDBC_DRIVERCLASSNAME}

      - JDBC_URL=${SPRING_DATASOURCE_HIKARI_JDBCURL}

      - JDBC_USERNAME=${SPRING_DATASOURCE_HIKARI_USERNAME}

      - JDBC_PASSWORD=${SPRING_DATASOURCE_HIKARI_PASSWORD}

      - SPRING_DATASOURCE_HIKARI_JDBCURL=${SPRING_DATASOURCE_HIKARI_JDBCURL}

      - SPRING_DATASOURCE_HIKARI_USERNAME=${SPRING_DATASOURCE_HIKARI_USERNAME}

      - SPRING_DATASOURCE_HIKARI_PASSWORD=${SPRING_DATASOURCE_HIKARI_PASSWORD}

      - SPRING_METADATASOURCE_HIKARI_JDBCURL=${SPRING_METADATASOURCE_HIKARI_JDBCURL}

      - SPRING_METADATASOURCE_HIKARI_USERNAME=${SPRING_METADATASOURCE_HIKARI_USERNAME}

      - SPRING_METADATASOURCE_HIKARI_PASSWORD=${SPRING_METADATASOURCE_HIKARI_PASSWORD}

      - SPRING_DATA_REDIS_HOST=${SPRING_DATA_REDIS_HOST}

      - SPRING_DATA_REDIS_PORT=${SPRING_DATA_REDIS_PORT}

      - SPRING_DATA_REDIS_USERNAME=${SPRING_DATA_REDIS_USERNAME}

      - SPRING_DATA_REDIS_PASSWORD=${SPRING_DATA_REDIS_PASSWORD}

    links:

      - "pinpoint-mysql:pinpoint-mysql"

    networks:

      - pinpoint

  pinpoint-collector:

    build:

      context: ./pinpoint-collector/

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_COLLECTOR_NAME}"

    image: "pinpointdocker/pinpoint-collector:${PINPOINT_VERSION}"

    depends_on:

      - pinpoint-hbase

      - zoo1

      - redis

    restart: always

    expose:

      - "9991"

      - "9992"

      - "9993"

      - "9994"

      - "9995"

      - "9996"

    ports:

      - "${COLLECTOR_RECEIVER_GRPC_AGENT_PORT:-9991}:9991/tcp"

      - "${COLLECTOR_RECEIVER_GRPC_STAT_PORT:-9992}:9992/tcp"

      - "${COLLECTOR_RECEIVER_GRPC_SPAN_PORT:-9993}:9993/tcp"

      - "${COLLECTOR_RECEIVER_BASE_PORT:-9994}:9994"

      - "${COLLECTOR_RECEIVER_STAT_UDP_PORT:-9995}:9995/tcp"

      - "${COLLECTOR_RECEIVER_SPAN_UDP_PORT:-9996}:9996/tcp"

      - "${COLLECTOR_RECEIVER_STAT_UDP_PORT:-9995}:9995/udp"

      - "${COLLECTOR_RECEIVER_SPAN_UDP_PORT:-9996}:9996/udp"

    networks:

      pinpoint:

        ipv4_address: ${COLLECTOR_FIXED_IP}

    environment:

      - SPRING_PROFILES_ACTIVE=${SPRING_PROFILES}

      - PINPOINT_ZOOKEEPER_ADDRESS=${PINPOINT_ZOOKEEPER_ADDRESS}

      - CLUSTER_ENABLE=${CLUSTER_ENABLE}

      - LOGGING_LEVEL_ROOT=${COLLECTOR_LOGGING_LEVEL_ROOT}

      - FLINK_CLUSTER_ENABLE=${FLINK_CLUSTER_ENABLE}

      - FLINK_CLUSTER_ZOOKEEPER_ADDRESS=${FLINK_CLUSTER_ZOOKEEPER_ADDRESS}

      - SPRING_DATA_REDIS_HOST=${SPRING_DATA_REDIS_HOST}

      - SPRING_DATA_REDIS_PORT=${SPRING_DATA_REDIS_PORT}

      - SPRING_DATA_REDIS_USERNAME=${SPRING_DATA_REDIS_USERNAME}

      - SPRING_DATA_REDIS_PASSWORD=${SPRING_DATA_REDIS_PASSWORD}

  pinpoint-quickstart:

    build:

      context: ./pinpoint-quickstart/

      dockerfile: Dockerfile

    container_name: "pinpoint-quickstart"

    image: "pinpointdocker/pinpoint-quickstart"

    ports:

      - "${APP_PORT:-8085}:8080"

    volumes:

      - data-volume:/pinpoint-agent

    environment:

      JAVA_OPTS: "-javaagent:/pinpoint-agent/pinpoint-bootstrap.jar -Dpinpoint.agentId=${AGENT_ID} -Dpinpoint.applicationName=${APP_NAME} -Dpinpoint.profiler.profiles.active=${SPRING_PROFILES}"

    networks:

      - pinpoint

    depends_on:

      - pinpoint-agent

  pinpoint-batch:

    build:

      context: ./pinpoint-batch/

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_BATCH_NAME}"

    image: "pinpointdocker/pinpoint-batch:${PINPOINT_VERSION}"

    depends_on:

      - pinpoint-hbase

      - pinpoint-mysql

      - zoo1

    restart: always

    environment:

      - BATCH_SERVER_PORT=${BATCH_SERVER_PORT}

      - SPRING_PROFILES_ACTIVE=${SPRING_PROFILES}

      - PINPOINT_ZOOKEEPER_ADDRESS=${PINPOINT_ZOOKEEPER_ADDRESS}

      - CLUSTER_ENABLE=${CLUSTER_ENABLE}

      - ADMIN_PASSWORD=${ADMIN_PASSWORD}

      - CONFIG_SENDUSAGE=${CONFIG_SENDUSAGE}

      - LOGGING_LEVEL_ROOT=${BATCH_LOGGING_LEVEL_ROOT}

      - CONFIG_SHOW_APPLICATIONSTAT=${CONFIG_SHOW_APPLICATIONSTAT}

      - BATCH_FLINK_SERVER=${BATCH_FLINK_SERVER}

      - JDBC_DRIVERCLASSNAME=${JDBC_DRIVERCLASSNAME}

      - JDBC_URL=${SPRING_DATASOURCE_HIKARI_JDBCURL}

      - JDBC_USERNAME=${SPRING_DATASOURCE_HIKARI_USERNAME}

      - JDBC_PASSWORD=${SPRING_DATASOURCE_HIKARI_PASSWORD}

      - SPRING_DATASOURCE_HIKARI_JDBCURL=${SPRING_DATASOURCE_HIKARI_JDBCURL}

      - SPRING_DATASOURCE_HIKARI_USERNAME=${SPRING_DATASOURCE_HIKARI_USERNAME}

      - SPRING_DATASOURCE_HIKARI_PASSWORD=${SPRING_DATASOURCE_HIKARI_PASSWORD}

      - SPRING_METADATASOURCE_HIKARI_JDBCURL=${SPRING_METADATASOURCE_HIKARI_JDBCURL}

      - SPRING_METADATASOURCE_HIKARI_USERNAME=${SPRING_METADATASOURCE_HIKARI_USERNAME}

      - SPRING_METADATASOURCE_HIKARI_PASSWORD=${SPRING_METADATASOURCE_HIKARI_PASSWORD}

      - ALARM_MAIL_SERVER_URL=${ALARM_MAIL_SERVER_URL}

      - ALARM_MAIL_SERVER_PORT=${ALARM_MAIL_SERVER_PORT}

      - ALARM_MAIL_SERVER_USERNAME=${ALARM_MAIL_SERVER_USERNAME}

      - ALARM_MAIL_SERVER_PASSWORD=${ALARM_MAIL_SERVER_PASSWORD}

      - ALARM_MAIL_SENDER_ADDRESS=${ALARM_MAIL_SENDER_ADDRESS}

      - ALARM_MAIL_TRANSPORT_PROTOCOL=${ALARM_MAIL_TRANSPORT_PROTOCOL}

      - ALARM_MAIL_SMTP_PORT=${ALARM_MAIL_SMTP_PORT}

      - ALARM_MAIL_SMTP_AUTH=${ALARM_MAIL_SMTP_AUTH}

      - ALARM_MAIL_SMTP_STARTTLS_ENABLE=${ALARM_MAIL_SMTP_STARTTLS_ENABLE}

      - ALARM_MAIL_SMTP_STARTTLS_REQUIRED=${ALARM_MAIL_SMTP_STARTTLS_REQUIRED}

      - ALARM_MAIL_DEBUG=${ALARM_MAIL_DEBUG}

    links:

      - "pinpoint-mysql:pinpoint-mysql"

    networks:

      - pinpoint

  pinpoint-agent:

    build:

      context: ./pinpoint-agent/

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_AGENT_NAME}"

    image: "pinpointdocker/pinpoint-agent:${PINPOINT_VERSION}"

    restart: unless-stopped

    networks:

      - pinpoint

    volumes:

      - data-volume:/pinpoint-agent

    environment:

      - SPRING_PROFILES=${SPRING_PROFILES}

      - COLLECTOR_IP=${COLLECTOR_IP}

      - PROFILER_TRANSPORT_AGENT_COLLECTOR_PORT=${PROFILER_TRANSPORT_AGENT_COLLECTOR_PORT}

      - PROFILER_TRANSPORT_METADATA_COLLECTOR_PORT=${PROFILER_TRANSPORT_METADATA_COLLECTOR_PORT}

      - PROFILER_TRANSPORT_STAT_COLLECTOR_PORT=${PROFILER_TRANSPORT_STAT_COLLECTOR_PORT}

      - PROFILER_TRANSPORT_SPAN_COLLECTOR_PORT=${PROFILER_TRANSPORT_SPAN_COLLECTOR_PORT}

      - PROFILER_SAMPLING_TYPE=${PROFILER_SAMPLING_TYPE}

      - PROFILER_SAMPLING_COUNTING_SAMPLING_RATE=${PROFILER_SAMPLING_COUNTING_SAMPLING_RATE}

      - PROFILER_SAMPLING_PERCENT_SAMPLING_RATE=${PROFILER_SAMPLING_PERCENT_SAMPLING_RATE}

      - PROFILER_SAMPLING_NEW_THROUGHPUT=${PROFILER_SAMPLING_NEW_THROUGHPUT}

      - PROFILER_SAMPLING_CONTINUE_THROUGHPUT=${PROFILER_SAMPLING_CONTINUE_THROUGHPUT}

      - DEBUG_LEVEL=${AGENT_DEBUG_LEVEL}

      - PROFILER_TRANSPORT_MODULE=${PROFILER_TRANSPORT_MODULE}

    depends_on:

      - pinpoint-collector

  #zookeepers

  zoo1:

    image: zookeeper:3.4.13

    restart: always

    hostname: zoo1

    expose:

      - "2181"

      - "2888"

      - "3888"

    ports:

      - "2181"

    environment:

      ZOO_MY_ID: 1

      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888

    networks:

      - pinpoint

  zoo2:

    image: zookeeper:3.4.13

    restart: always

    hostname: zoo2

    expose:

      - "2181"

      - "2888"

      - "3888"

    ports:

      - "2181"

    environment:

      ZOO_MY_ID: 2

      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zoo3:2888:3888

    networks:

      - pinpoint

  zoo3:

    image: zookeeper:3.4.13

    restart: always

    hostname: zoo3

    expose:

      - "2181"

      - "2888"

      - "3888"

    ports:

      - "2181"

    environment:

      ZOO_MY_ID: 3

      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=0.0.0.0:2888:3888

    networks:

      - pinpoint

  ##flink

  jobmanager:

    build:

      context: pinpoint-flink

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_FLINK_NAME}-jobmanager"

    image: "pinpointdocker/pinpoint-flink:${PINPOINT_VERSION}"

    expose:

      - "6123"

    ports:

      - "${FLINK_WEB_PORT:-8081}:8081"

    command: standalone-job -p 1 pinpoint-flink-job.jar -spring.profiles.active release

    environment:

      - JOB_MANAGER_RPC_ADDRESS=jobmanager

      - PINPOINT_ZOOKEEPER_ADDRESS=${PINPOINT_ZOOKEEPER_ADDRESS}

    networks:

      - pinpoint

    depends_on:

      - zoo1

  taskmanager:

    build:

      context: pinpoint-flink

      dockerfile: Dockerfile

      args:

        - PINPOINT_VERSION=${PINPOINT_VERSION}

    container_name: "${PINPOINT_FLINK_NAME}-taskmanager"

    image: "pinpointdocker/pinpoint-flink:${PINPOINT_VERSION}"

    expose:

      - "6121"

      - "6122"

      - "19994"

    ports:

      - "6121:6121"

      - "6122:6122"

      - "19994:19994"

    depends_on:

      - zoo1

      - jobmanager

    command: taskmanager

    links:

      - "jobmanager:jobmanager"

    environment:

      - JOB_MANAGER_RPC_ADDRESS=jobmanager

    networks:

      - pinpoint

  redis:

    image: redis:7.0.14

    restart: always

    hostname: pinpoint-redis

    ports:

      - "6379:6379"

    networks:

      - pinpoint

volumes:

  data-volume:

  mysql_data:

  hbase_data:

networks:

  pinpoint:

    driver: bridge

    ipam:

      config:

        - subnet: ${PINPOINT_NETWORK_SUBNET}

The explanation of the components that we have used in the docker-compose yaml files

1. Services

Services are the individual containers that make up the application. Each service runs in its container but can interact with other services defined in the same docker-compose.yml file.

a. pinpoint-hbase

  • Purpose: Pinpoint uses HBase as its primary storage for storing tracing data.
  • Build: The service is built from a Dockerfile located in the ./pinpoint-hbase/ directory.
  • Environment Variables: These variables define various TTL (Time-to-Live) settings for different types of data stored in HBase.
  • Volumes: Persistent storage for HBase data is mounted on the host to ensure data persistence across container restarts.
  • Ports: The service exposes several ports for communication (60000, 16010, 60020, 16030).
  • Depends_on: This ensures that zoo1 (Zookeeper) service starts before pinpoint-hbase.

b. pinpoint-mysql

  • Purpose: MySQL is used to store application metadata and other relational data needed by Pinpoint.
  • Image: A MySQL 8.0 image from Docker Hub is used.
  • Environment Variables: These include MySQL credentials like root password, user, password, and database name.
  • Volumes: Persistent storage for MySQL data is mounted on the host.
  • Ports: The MySQL service is exposed on port 3306.

c. pinpoint-web

  • Purpose: This is the web UI for Pinpoint, allowing users to visualize and analyze the tracing data.
  • Build: The service is built from a Dockerfile located in the ./pinpoint-web/ directory.
  • Depends_on: This ensures that the pinpoint-hbase, pinpoint-mysql, zoo1, and redis services are running before starting the web service.
  • Environment Variables: These configure the web service, including database connections, logging levels, and other properties.
  • Ports: The service exposes port 9997 for the web interface.

d. pinpoint-collector

  • Purpose: The collector service gathers trace data from applications and stores it in HBase.
  • Build: The service is built from a Dockerfile located in the ./pinpoint-collector/ directory.
  • Depends_on: This ensures that pinpoint-hbase, zoo1, and redis services are running before starting the collector.
  • Environment Variables: These configure the collector service, including its connection to HBase, Zookeeper, and logging levels.
  • Ports: The collector exposes several ports (9991-9996) for various types of communication (gRPC, UDP, etc.).
  • Networks: The collector service is part of the pinpoint network and uses a fixed IP address.

e. zoo1

  • Purpose: Zookeeper is used to manage and coordinate the distributed components of Pinpoint.
  • Image: A Zookeeper image (3.4.14) from Docker Hub is used.
  • Environment Variables: These configure the Zookeeper instance.
  • Ports: The service is exposed on port 2181 for Zookeeper communication.

f. redis

  • Purpose: Redis is used as a caching layer for Pinpoint, helping to improve performance.
  • Image: A Redis image (5.0.6) from Docker Hub is used.
  • Ports: The Redis service is exposed on port 6379.

2. Networks

Networks allow the services to communicate with each other. In this docker-compose.yml, a custom bridge network named pinpoint is defined.

  • pinpoint: This is a user-defined bridge network that allows all the services to communicate with each other on a private network. Each service can reach others using their service names.

3. Volumes

Volumes provide persistent storage that survives container restarts. They are used to store data generated by services (like databases).

  • hbase_data: A volume for storing HBase data.
  • mysql_data: A volume for storing MySQL data.

4. Environment Variables

Environment variables are used to configure the services at runtime. These can include database credentials, logging levels, ports, and other configuration details. Each service defines its own set of environment variables, tailored to its specific needs.

5. Ports

Ports are exposed to allow external access to the services. For example:

  • 3306:3306 for MySQL
  • 9997:9997 for the Pinpoint Web UI
  • 6379:6379 for Redis

6. Restart Policies

Restart policies (restart: always) ensure that the containers are automatically restarted if they stop or crash. This helps maintain the high availability of the services.

7. Links

Links allow containers to communicate with each other using hostnames. In this docker-compose.yml, the pinpoint-web and pinpoint-collector services are linked to the pinpoint-mysql service to facilitate database communication.

8. Expose vs. Ports

  • Expose: This allows containers to communicate with each other internally, without exposing the ports to the host machine.
  • Ports: These map the container ports to the host machine, allowing external access to the services.

Then need to whitelist the following ports 8080, 80, and 443  in the security groups.

We can see the dashboard below.

Integrate Pinpoint to the Node Js application:

We have to import the pinpoint agent in the Nodejs application.
Commands to be run after import pinpoint agent:

Install with npm:

npm install --save pinpoint-node-agent

Install with yarn:

yarn add pinpoint-node-agent

Adding a code:

To run Pinpoint agent for applications, we need to make sure the prerequisites are in place first.

CommonJS

require('pinpoint-node-agent')

If we are using pm2, use node-args(CLI) or node_args(Ecosystem File).

module.exports = {

  apps : [{

    name: "app",

    script: "./app.js",

    'node_args': ['-r', 'pinpoint-node-agent']

  }]

}

Below is the example of we have attached,


Configure with environment variables and start the application


Based on the pinpoint-config-default.json file in the server, only necessary parts are set as environment variables.

PINPOINT_AGENT_ID=${HOSTNAME} PINPOINT_APPLICATION_NAME=Test-Node-App PINPOINT_COLLECTOR_IP=<pinpoint server private-ip> PINPOINT_ENABLE=true pm2 start ~/application path/app.js

Once the application is running, check the site. The output is attached below.

Conclusion

By following these steps, we have successfully set up Pinpoint APM to monitor our Node.js application. With Pinpoint, we can gain deep insights into our application's performance, identify bottlenecks, and optimize our code to ensure a smooth and efficient user experience. Pinpoint's real-time monitoring and comprehensive tracing capabilities make it an invaluable tool for managing the performance of our applications.

Reference


https://github.com/pinpoint-apm

https://github.com/pinpoint-apm/pinpoint

https://github.com/pinpoint-apm/pinpoint-node-agent

https://www.baeldung.com/ops/pinpoint-intro 

AWS, Virtual Machine, GCP

Migrating a VM Instance from GCP to AWS A Step by Step Guide

Aug 13, 2024

Overview

Moving a virtual machine (VM) instance from Google Cloud Platform (GCP) to Amazon Web Services (AWS) can seem daunting, but with the right tools and a step-by-step process it can be done. In this post we walk through the entire process and make the transition from GCP to AWS smooth. Here we use AWS's native tool, Application Migration Service, to move a VM instance from GCP to AWS.

Architecture Diagram

Server Migration Architecture GCP to AWS

Step-by-Step Guide

Step 1: Setup on GCP

Launch a Test Windows VM Instance

Go to your GCP console and create a test Windows VM. We created a 51 GB boot disk for this example. This will be our source VM.

RDP into the Windows Server

Next, RDP into your Windows server. Once connected, you need to install the AWS Application Migration Service (AMS) agent on this server.

Install the AMS Agent

To install the AMS agent, download the installer from the following URL:

https://aws-application-migration-service-us-east-1.s3.us-east-1.amazonaws.com/latest/windows/AwsReplicationWindowsInstaller.exe

For more details, refer to the AWS documentation: https://docs.aws.amazon.com/mgn/latest/ug/windows-agent.html

Step 2: Install the AMS Agent

Navigate to the Downloads folder and run the AWS agent installer with administrator privileges from the Command Prompt.

During installation you will be asked to choose the AWS region to replicate to. For this guide we chose N. Virginia (us-east-1).


Step 3: Prepare the AWS Console

Create a User and Attach Permissions

In the AWS console, create a new user, attach the AWS replication permissions to it, and generate an access key and secret key for this user.

While creating the keys, choose the "third-party service" option for the key.

Enter the Keys into the GCP Windows Server

Enter the access key and secret key into the GCP Windows server when the AMS agent prompts for them. The agent will then ask which disks to replicate (e.g. the C and D drives); for this example we just pressed Enter to replicate all disks.

Once done, the AMS agent completes the installation and starts replicating your data.

In our AWS account, one instance was created:

After installing the AMS agent on the source Windows server in GCP, a replication server was created in the AWS EC2 console. This instance was used to replicate all VM instance data from the GCP account to the AWS account.


Step 4: Monitor the Data Migration

Go to the Application Migration Service in your AWS account. In the Source servers list you should see your GCP VM instance.


The data migration will start and you can monitor it. Depending on the size of your boot disk and the amount of data this may take some time.


It took over half an hour to migrate the data from a 51 GB boot disk on a GCP VM instance to AWS. Once completed, it was ready for the testing stage.


Step 5: Create a Launch Template

After the data migration is done, create a launch template for your use case. This launch template should include instance type, key pair, VPC range, subnets, etc. The new EC2 instance will be launched from this template.


Step 6: Create a Replication Template

Similarly, create a replication template. This template will replicate your data to your new AWS environment.


Step 7: Launch an EC2 Test Instance

Once the templates are set up, launch an EC2 test instance from the boot disk of your source GCP VM instance. Take a snapshot of your instance to ensure data integrity. The test instance should launch successfully and match your original GCP VM; this stage is automated, with no manual migration steps.


Once we trigger the test launch, everything happens automatically and the test EC2 instance is created. The automated launch process is shown in the screenshot below.


Once the above is done, the data has been migrated from GCP to AWS using the AWS Application Migration Service replication server. You can see the test EC2 instance in the AWS EC2 console, as shown below.


Test EC2 instance configuration for your reference:

Step 8: Final Cutover Stage

Once the cutover is complete and the new EC2 instance is launched, the test EC2 instance and the replication server are terminated, leaving us with the new EC2 instance running our custom configuration. See the screenshot below.

Step 9: Verify the EC2 Instance

Log in to the new EC2 instance using RDP and verify that all the data has been migrated, is intact and accessible, and check for any discrepancies. See our new EC2 instance below:

Step 10: Test Your Application

After verifying the data, test your application to see if it works as expected in the new AWS environment. We tested our sample web application and it worked.


Conclusion

Migrating a VM instance from GCP to AWS is a multi-step process, but with proper planning and execution it can be done smoothly. Follow this guide to migrate your data securely and keep your applications running smoothly in the new environment.

ISO 27001

ISO 27001:2022 Made Easy: How Ankercloud and Vanta Simplify Compliance

Jul 17, 2024

At Ankercloud, our commitment to information security is reflected in our ISO 27001:2022 certification. Leveraging our expertise and advanced tools, we help other organizations achieve the same certification efficiently. With Vanta, we ensure a streamlined, automated, and effective compliance journey, showcasing our dedication to the highest standards of information security.

What is ISO 27001:2022?

ISO 27001:2022 is a global standard for managing and protecting sensitive company information through an Information Security Management System (ISMS). It ensures the confidentiality, integrity, and availability of data by providing a structured approach to managing information security risks.

The ISO 27001:2022 Process (Traditional Approach)

Obtaining ISO 27001 certification requires the following crucial steps:

Preparation (1-3 months) 

Familiarize yourself with the standard, define the scope, and perform an initial gap analysis

Implementation (3-6 months) 

Develop an ISMS, conduct risk assessments, implement the necessary controls, and document policies

Internal Audit (1-2 months) 

Evaluate compliance with the ISMS and identify improvements

Management Review (1 month) 

Review ISMS performance and align with organizational objectives

Certification Audit (1-2 months) 

Engage a certification body for stage 1 (document review) and stage 2 (on-site assessment) audits

Post-Certification (Ongoing) 

Continuously monitor, conduct internal audits, and perform management reviews

In total, the process can take about 6 to 12 months, depending on factors like the organization's size, complexity, and preparedness.

How Vanta Simplifies ISO 27001:2022 Compliance

Vanta, a compliance automation platform, transforms the compliance process by automating security monitoring and evidence collection, making ISO 27001:2022 compliance more manageable. Here's how:

  1. Automated Security Monitoring: Vanta continuously monitors your systems for security issues, ensuring you meet ISO 27001:2022 requirements without manual intervention.
  2. Evidence Collection: Vanta automates 90% of the evidence collection, such as access logs, security configurations, and compliance status reports.
  3. Compliance Management: A centralized dashboard helps manage and track compliance efforts, simplifying the process.
  4. Risk Assessment: Vanta identifies vulnerabilities and risks, providing effective recommendations.
  5. Automated Documentation: Generates and maintains required documentation for audits, reducing the manual workload.

With Vanta's automation approach, the ISO 27001:2022 certification process can be significantly expedited, allowing organizations to achieve certification in as little as 2 to 3 months. This accelerated timeline is made possible by Vanta's efficient, automated workflows and continuous monitoring, which streamline compliance tasks and reduce the time typically required for manual processes.

Benefits of Using Vanta Compliance Tools Compared to Traditional Methods

Vanta offers numerous advantages over traditional compliance methods:

  1. Simplified Management and Guidance: Reduces complexities and provides step-by-step guidance, lowering the administrative burden.
  2. Automated Detection and Proactive Assessment: Ensures timely identification and prioritization of security risks.
  3. Real-time Dashboards and Streamlined Audits: Provides immediate visibility into compliance status and simplifies audit preparation.
  4. Seamless Integration and User-Friendly Interface: Enhances workflow efficiency with seamless integration and an intuitive interface.
  5. Enhanced Data Protection and Trust Building: Strengthens data protection and demonstrates strong security practices to stakeholders.
  6. Time and Cost Savings with Continuous Monitoring: Automation reduces time and costs, while continuous monitoring ensures long-term security and compliance.

How Ankercloud Can Help Companies Achieve ISO 27001:2022 Certification Using Vanta

As ISO 27001:2022 certified lead auditors, Ankercloud enhances organizations' information security practices, ensuring compliance with legal and regulatory requirements. We equip organizations with the skills to effectively manage risks, fostering a proactive approach to data protection. Implementing ISO 27001:2022 can streamline operations, improve efficiency, and build trust with customers and stakeholders.

  • Expert Guidance: Ankercloud's expertise guides companies through the ISO 27001:2022 process efficiently.
  • Platform Utilization: Vanta's automation and monitoring tools streamline compliance.
  • Customized Support: Tailored services meet specific company needs, ensuring comprehensive ISO 27001:2022 coverage.
  • Accelerated Timeline: Vanta's automated processes and Ankercloud's expertise enable faster ISO certification.
  • Continuous Improvement: Ankercloud helps maintain and improve ISMS post-certification, ensuring ongoing compliance and security.

Conclusion

Ankercloud's expertise, combined with Vanta's automation capabilities, offers a powerful solution for companies seeking ISO 27001:2022 certification. By streamlining the compliance process through automated security monitoring, evidence collection, and compliance management, Ankercloud helps companies achieve certification efficiently and effectively. Leveraging Vanta, Ankercloud ensures a smooth and cost-effective journey to certification, enhancing the overall security posture of your organization.


The Ankercloud Team loves to listen