Step-by-Step Guide to Implementing an End-to-End VPC Endpoint Service.

Introduction.

In modern cloud architectures, ensuring secure and efficient communication between services is critical. One of the most effective ways to do this within AWS is by using VPC (Virtual Private Cloud) Endpoint services. This guide will walk you through the process of setting up an end-to-end VPC endpoint service, enabling private and secure communication between your VPC and AWS services or other VPCs.

What is a VPC Endpoint Service?

A VPC Endpoint Service (powered by AWS PrivateLink) lets you expose an application running behind a Network Load Balancer in your VPC to consumers in other VPCs, without sending traffic over the public internet. Consumers reach the service through VPC endpoints, which connect resources to services privately and securely; no internet gateway, NAT device, or public IP address is required. There are two types of VPC endpoints (a CLI sketch for both follows the list):

  • Interface Endpoints (PrivateLink): an elastic network interface with a private IP address that provides a direct, private connection to services such as AWS Lambda, EC2 APIs, or your own endpoint services.
  • Gateway Endpoints: a route-table target used for Amazon S3 and DynamoDB.
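For reference, both endpoint types can also be created from the AWS CLI. The sketch below is illustrative only; the VPC, subnet, security group, and route table IDs are placeholders:

# Gateway endpoint for S3: added as a target in the route table
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0123456789abcdef0

# Interface endpoint (PrivateLink): an ENI with a private IP in your subnet
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.lambda \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0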

Why Use VPC Endpoint Services?

VPC endpoints improve security (traffic stays on private connections inside the AWS network), reduce data transfer costs, and deliver better performance because traffic is not exposed to the public internet.


STEP 1: Navigate to the VPC console and click on Create VPC.

  • Enter the VPC name.
  • Enter the IPv4 CIDR block.
  • Click on Create VPC.

STEP 2: Click on Internet gateways and create an internet gateway.

  • Enter the gateway name.
  • Click on Create internet gateway.

STEP 3: Click on your created internet gateway and attach it to your created VPC.


STEP 4: Next, click on Route tables and click on Create route table.


STEP 5: Next, create a subnet.

  • Enter the subnet name.
  • Select the Availability Zone.
  • Enter the IPv4 CIDR block.

STEP 6: Go to your created route table.

  • Select subnet associations.
  • Select your subnet.
  • Click on save associations.

STEP 7: Now, navigate to EC2 and click on Launch instance.

  • Enter the name and select the AMI and instance type.
  • Create a key pair.
  • Configure the security group with rules allowing SSH, HTTP, and HTTPS traffic from all IP addresses.
  • Configure an Apache HTTP server on the EC2 instance to serve a custom HTML page (see the user-data sketch after this list).
  • Click on Launch instance.
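A convenient way to configure Apache is to pass a user-data script when launching the instance. A minimal sketch, assuming Amazon Linux (the page content is just an example):

#!/bin/bash
# Install and start Apache, then serve a simple custom HTML page
yum update -y
yum install -y httpd
echo "<html><h1>Hello from the endpoint service provider</h1></html>" > /var/www/html/index.html
systemctl start httpd
systemctl enable httpd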

STEP 8: Now create a Network Load Balancer.

  • Enter the load balancer name.
  • Select the VPC, security group, and target group.
  • Add a TCP listener on port 80 and associate it with the EC2 instance. Ensure that the load balancer and the EC2 instance use the same security group for consistent security policies (a CLI sketch follows this list).
  • Click on Create.
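The same load balancer setup can be sketched with the AWS CLI; all names, IDs, and ARNs below are placeholders:

# Create the Network Load Balancer in the provider VPC's subnet
aws elbv2 create-load-balancer \
  --name endpoint-service-nlb \
  --type network \
  --subnets subnet-0123456789abcdef0

# Create a TCP target group and register the web server instance
aws elbv2 create-target-group \
  --name endpoint-service-tg \
  --protocol TCP --port 80 \
  --vpc-id vpc-0123456789abcdef0
aws elbv2 register-targets \
  --target-group-arn <TARGET_GROUP_ARN> \
  --targets Id=i-0123456789abcdef0

# Add the TCP:80 listener that forwards to the target group
aws elbv2 create-listener \
  --load-balancer-arn <NLB_ARN> \
  --protocol TCP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<TARGET_GROUP_ARN>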

STEP 9: Now, go and create the customer VPC.

  • Enter the name.
  • Enter the IPv4 CIDR block.
  • Click on Create VPC.

STEP 10: Create the customer internet gateway and attach it to your created customer VPC.


STEP 11: Create a subnet and enter the name.

  • Select the VPC and enter the IPv4 CIDR block.
  • Click on Create subnet.

STEP 12: Create route table.

  • Enter the name.
  • Select VPC.
  • Click on create route table.

STEP 13: Select your created route table.

  • Click on subnet association.
  • Select your created subnet.

STEP 14: Create instance.

  • Enter the instance name, AMI, instance type.
  • Select your keypair.
  • Configure the security group with rules allowing SSH, HTTP, and HTTPS traffic from all IP addresses. 
  • Click on create instance.

STEP 15: Create the VPC endpoint.

  • Enter the name.
  • Select the endpoint type.
  • Select the security group and VPC (see the CLI sketch after this list).
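Behind these console screens, the provider side publishes an endpoint service from the Network Load Balancer and the consumer side creates an interface endpoint against that service. A minimal CLI sketch with placeholder IDs and ARNs:

# Provider side: publish an endpoint service backed by the NLB
aws ec2 create-vpc-endpoint-service-configuration \
  --network-load-balancer-arns <NLB_ARN> \
  --acceptance-required

# Consumer side: create an interface endpoint in the customer VPC
aws ec2 create-vpc-endpoint \
  --vpc-id <CUSTOMER_VPC_ID> \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0 \
  --subnet-ids <CUSTOMER_SUBNET_ID> \
  --security-group-ids <CUSTOMER_SG_ID>

# Provider side: accept the pending connection request
aws ec2 accept-vpc-endpoint-connections \
  --service-id vpce-svc-0123456789abcdef0 \
  --vpc-endpoint-ids <ENDPOINT_ID>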

Test the End-to-End Connection.

After creating the VPC Endpoint and configuring routing and security, test the connection to ensure that traffic from the consumer VPC can reach the service through the VPC Endpoint:

  1. From the consumer side, use a tool like curl, wget, or any application to make a request to the private endpoint.
  2. Monitor logs from your services and load balancer to ensure the requests are being routed correctly.
  3. Verify DNS resolution if you enabled private DNS. Ensure the correct endpoint DNS resolves to the private IPs.
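For example, from an instance in the customer VPC (the endpoint DNS name below is a placeholder; use the one shown on your endpoint's details page):

# Confirm the endpoint name resolves to private IPs, then request the page through it
nslookup vpce-0123456789abcdef0-abcdefgh.vpce-svc-0123456789abcdef0.us-east-1.vpce.amazonaws.com
curl http://vpce-0123456789abcdef0-abcdefgh.vpce-svc-0123456789abcdef0.us-east-1.vpce.amazonaws.com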

Conclusion.

Implementing a VPC endpoint service in AWS is an essential step for optimizing security and performance when connecting your VPC to AWS resources or other VPCs. By following this step-by-step guide, you’ll be able to create a secure, private connection for your services with ease.

How to Create an Amazon Machine Image (AMI) from an EC2 Instance Using Terraform: A Step-by-Step Guide.

Introduction.

An AWS AMI (Amazon Machine Image) is a pre-configured template used to create virtual machines (or instances) in Amazon Web Services (AWS). It contains the operating system, software, and configuration settings required to launch and run a specific environment on AWS.

About AWS AMI:

  • Customizable: You can use a public AMI provided by AWS, a marketplace AMI (from third-party vendors), or create your own custom AMI.
  • Reusable: Once you create an AMI, you can launch multiple instances from it, making it easy to replicate environments and ensure consistency across deployments.
  • Operating System and Software: An AMI typically includes the OS (Linux, Windows, etc.) and additional software or configurations you need (like web servers, databases, etc.).
  • Snapshot-based: AMIs are created from snapshots of existing EC2 instances, which capture the instance’s state (disk, configurations, etc.) at the time the AMI is made.

Benefits:

  • Fast Deployment: AMIs allow you to quickly deploy fully configured environments, saving time when provisioning new EC2 instances.
  • Consistency: You can ensure the same software configuration for multiple instances, reducing configuration drift.
  • Scalability: Using AMIs, you can scale out your infrastructure with ease by launching multiple instances from the same image.

Types of AMIs:

  1. Public AMIs: Provided by AWS or third-party vendors. These are free or available for purchase.
  2. Private AMIs: Created by users to include their own custom configurations or applications.

STEP 1: Go to VS Code, select your folder, and create a variables.tf file.

variable "access_key" {
  description = "Access key to AWS console"
}

variable "secret_key" {
  description = "Secret key to AWS console"
}

variable "region" {
  description = "Region of AWS VPC"
}

STEP 2: Enter the script and save the file.

STEP 3: Create the file terraform.tfvars

region      = "us-east-1"
access_key  = "<YOUR AWS CONSOLE ACCESS ID>"
secret_key  = "<YOUR AWS CONSOLE SECRET KEY>"

STEP 4: Enter the terraform script and save the file.

STEP 5: Create a main.tf file, enter the following script, and save the file.

provider "aws" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_key
}

resource "aws_security_group" "ec2sg" {
  name = "EC2-SG"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "ec2instance" {
  ami              = "ami-0c101f26f147fa7fd"
  instance_type    = "t2.micro"
  security_groups  = ["${aws_security_group.ec2sg.name}"]
 
  user_data = <<-EOF
    #!/bin/bash
    # User data already runs as root, so no sudo/su is needed
    yum update -y
    yum install -y httpd
    echo "<html><h1> Welcome to Whizlabs </h1></html>" > /var/www/html/index.html
    systemctl start httpd
    systemctl enable httpd
  EOF

  tags = {
    Name = "MyEC2Server"
  }
}

resource "aws_ami_from_instance" "ec2ami" {
  name               = "MyEC2Image"
  source_instance_id = aws_instance.ec2instance.id
}

STEP 6: Create output.tf file.

output "ami_id" {
  value = aws_ami_from_instance.ec2ami.id
}

STEP 7: Go to the terminal and enter the command terraform init.


STEP 8: Enter the command terraform plan.


STEP 9: Enter terraform apply command.


STEP 10: Go to Security Groups and verify the created security group.


STEP 11: Next, Verify the created Instance.


STEP 12: Now, go to AMIs and verify the created image.
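Besides the console, the new image can be confirmed from the terminal. A small sketch, assuming you run it from the folder holding this Terraform state:

# Read the AMI ID exported by output.tf
terraform output ami_id

# List images owned by this account with the name used in main.tf
aws ec2 describe-images --owners self \
  --filters "Name=name,Values=MyEC2Image" \
  --query "Images[].{Id:ImageId,Name:Name,State:State}" --output table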


Conclusion.

In conclusion, an AWS AMI (Amazon Machine Image) is a powerful tool in Amazon Web Services that allows you to create and launch virtual machine instances quickly and consistently. It includes the operating system, software, and configurations necessary to set up an environment on AWS. AMIs are highly customizable and reusable, making them a key part of scalable, automated cloud infrastructure. They enable rapid deployment, ensure consistency across instances, and help in scaling your applications efficiently. If you’re working with AWS, understanding and utilizing AMIs can significantly streamline the management of your cloud infrastructure.

How to Set Up an Amazon EKS Cluster and Install kubectl Using Terraform: A Comprehensive Guide.

Introduction.

In today’s cloud-native world, Kubernetes has become the go-to platform for managing containerized applications. Amazon Elastic Kubernetes Service (EKS) provides a fully managed Kubernetes service, enabling you to easily deploy and scale containerized applications on AWS. Setting up an EKS cluster manually can be complex, but by using Infrastructure as Code (IaC) tools like Terraform, you can automate the process and ensure a repeatable, scalable solution.

In this comprehensive guide, we’ll walk you through the steps to set up an Amazon EKS cluster and install kubectl, the Kubernetes command-line tool, using Terraform. Whether you’re new to EKS or a seasoned DevOps engineer looking to streamline your infrastructure, this tutorial will give you the knowledge and tools to get up and running with Kubernetes on AWS efficiently. Let’s dive in and explore how Terraform can simplify the creation of your EKS environment, so you can focus on deploying and managing your applications with ease.


Now, create the EKS cluster and install kubectl with Terraform using VS Code.

STEP 1: Go to VS Code, open a new folder, and create a variables.tf file.

  • Enter the following terraform script.
variable "access_key" {
    description = "Access key to AWS console"
}
variable "secret_key" {
    description = "Secret key to AWS console"
}
variable "region" {
    description = "AWS region"
}

STEP 2: Next, create the terraform.tfvars file.

  • Enter the following script.
  • Enter your access key and secret key.
  • Next, save the file.
region = "us-east-1"
access_key = "<YOUR AWS CONSOLE ACCESS ID>"
secret_key = "<YOUR AWS CONSOLE SECRET KEY>"

STEP 3: Create the main.tf file.

provider "aws" {
    region     = "${var.region}"    
    access_key = "${var.access_key}"
    secret_key = "${var.secret_key}"
}

STEP 4: Copy your IAM role ARN and paste it into a notepad.

  • Next, copy your subnet IDs for us-east-1a and us-east-1b (a CLI sketch for looking these up follows).
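If you prefer the terminal to the console, the role ARN and subnet IDs can be looked up with the AWS CLI. The role name and VPC ID below are placeholders for your own values:

# ARN of the IAM role that the EKS cluster will assume
aws iam get-role --role-name <YOUR_EKS_CLUSTER_ROLE> --query "Role.Arn" --output text

# Subnet IDs in us-east-1a and us-east-1b for your VPC
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=<YOUR_VPC_ID>" "Name=availability-zone,Values=us-east-1a,us-east-1b" \
  --query "Subnets[].{Id:SubnetId,AZ:AvailabilityZone}" --output table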

STEP 5: Now enter the script for creating the cluster in the main.tf file.

################## Creating an EKS Cluster ##################
resource "aws_eks_cluster" "cluster" {
  name     = "whiz"
  role_arn = "<YOUR_IAM_ROLE_ARN>"

  vpc_config {
    subnet_ids = ["SUBNET-ID 1", "SUBNET-ID 2"]
  }
}
  • Enter the role ARN and subnet IDs.

STEP 6: Next, create an output.tf file.

  • Enter the following script and save the file.
output "cluster" {
  value = aws_eks_cluster.cluster.endpoint
}

STEP 7: Now, go to the terminal and enter the terraform init command.


STEP 8: Enter terraform plan command.


STEP 9: Enter terraform apply.


STEP 10: Go to the Amazon EKS console and verify the created cluster.


STEP 11: Click on the Cloud Shell icon on the top right AWS menu bar.

Enter the following command.

curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.18.9/2020-11-02/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
kubectl version --short --client
aws eks update-kubeconfig --region us-east-1 --name whiz
kubectl get svc
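Once kubectl is configured, the cluster can also be checked straight from the AWS CLI, using the cluster name and region from this guide:

# Should print ACTIVE once provisioning has finished
aws eks describe-cluster --name whiz --region us-east-1 --query "cluster.status" --output text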

Conclusion.

Setting up an Amazon EKS cluster and installing kubectl using Terraform is an efficient way to manage Kubernetes infrastructure on AWS. By leveraging Terraform’s Infrastructure as Code capabilities, you not only automate the setup process but also ensure a reproducible and scalable environment for your containerized applications. With the steps outlined in this guide, you’ve learned how to configure your AWS environment, create an EKS cluster, and set up kubectl for easy management of your Kubernetes workloads.

The combination of Amazon EKS and Terraform simplifies Kubernetes cluster management, reduces manual errors, and enhances operational efficiency. Whether you’re deploying microservices, managing applications, or experimenting with Kubernetes, this setup provides a solid foundation to build on. As you continue to explore and expand your Kubernetes ecosystem, using tools like Terraform will help streamline your processes and scale your infrastructure seamlessly. With the knowledge gained from this tutorial, you’re now equipped to take full advantage of EKS and kubectl in your cloud-native journey. Happy deploying!

How Fingerprints and Artifacts Enhance Jenkins Workflows.

Fingerprints and artifacts both play important roles in Jenkins workflows, making it easier to track, manage, and improve the software development and continuous integration (CI) process. Let’s break down how each of these elements enhances Jenkins workflows:

Fingerprints:

Fingerprints are unique identifiers that Jenkins assigns to various components, such as build outputs, source code, or any other items that might be part of a build process. The fingerprint typically comes from a hash of the content of a file or artifact. It ensures that every part of a build can be traced back to its origin and guarantees consistency.
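Concretely, the fingerprint is a content checksum (Jenkins uses an MD5 digest of the file), so identical bytes always produce the same fingerprint and any change produces a new one. A quick shell illustration; the .jar path is just a placeholder:

# Same content, same digest; append anything and the digest changes
md5sum build/app.jar
echo "change" >> build/app.jar
md5sum build/app.jar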


How Fingerprints Enhance Jenkins Workflows:

Traceability: Fingerprints ensure that you can always trace the exact version of the code and dependencies that were used in any given build. This is critical when debugging, performing audits, or identifying issues in a particular version of software.

Consistency: With fingerprints, you can ensure that the build artifacts (e.g., JARs, WARs, etc.) are consistently the same across different Jenkins jobs and environments. If an artifact’s fingerprint doesn’t match, you know it’s not the same artifact.

Improved Dependency Management: By tracking fingerprints, you can correlate between different stages of a build, and dependencies can be tracked more precisely. This can prevent issues with mismatched or outdated dependencies.

Efficient Cache Handling: Jenkins can optimize workflows by caching specific artifacts based on their fingerprints. If the fingerprint hasn’t changed, Jenkins can skip parts of the build process (like recompiling), improving build efficiency.

Artifacts:

Artifacts are files generated as part of the build process, such as compiled binaries, test reports, configuration files, logs, and other output files. These are often stored and archived for future reference.

How Artifacts Enhance Jenkins Workflows:

Storage for Future Use: Artifacts, once built, can be stored and reused in later stages or even in other Jenkins jobs. For instance, you might store a built .jar file as an artifact and use it in deployment stages or testing phases in a different job.

Archival for Rollback: Storing artifacts as part of your build helps in versioning. If you need to rollback to a previous version, you can retrieve the exact artifact from the build archive without rebuilding it.

Continuous Delivery/Deployment: Artifacts are essential in a CI/CD pipeline. They represent the build outputs that are deployed or tested in different environments, allowing you to automate the delivery process effectively.

Separation of Concerns: Artifacts allow you to separate the build from the deployment or testing process. This separation means that you can decouple your CI pipeline and focus on using the same artifacts for various tasks (such as deployment or performance testing), without having to rebuild them.

Combining Fingerprints and Artifacts:

Together, fingerprints and artifacts streamline Jenkins workflows by ensuring that every artifact can be traced and validated. For example: You can archive a build artifact and, using its fingerprint, trace the exact code that produced it, as well as the environment it was built in. You can ensure that only the correct version of an artifact (identified by its fingerprint) is used in subsequent stages like deployment or testing. This combination creates a reliable and efficient build system where every part of the process is traceable, reducing the chances of errors and making debugging easier.

Requirements.

A Jenkins server installed on an EC2 instance.

TASK 1: Create an Artifact for the output of Job in Jenkins—Method 1.

STEP 1: Go to the Jenkins dashboard and click on New Item.

  • Enter the item name.
  • Select freestyle project.
  • Click on ok button.

STEP 2: Configure the job.


STEP 3: Scroll down and click on Add build step under Build Steps.

  • Select Execute shell.

STEP 4: Enter the following command in execute shell.

touch testarchive
tar -czvf archive01.tar.gz testarchive

STEP 5: Click on the Advanced button and select Archive the artifacts.


STEP 6: Enter the file pattern to archive.

archive01.tar.gz 

STEP 7: Click on Advanced and select Fingerprint all archived artifacts.

  • Apply and save.

STEP 8: Click on Build Now to run the job.


STEP 9: Click on See Fingerprints.


STEP 10: Click on the fingerprint name.


TASK 2: Create an Artifact for the output of Job in Jenkins—Method 2.

STEP 1: Go to the job configuration and remove the Archive the artifacts step.

  • Instead, select Record fingerprints of files to track usage.

STEP 2: Select Post-build Actions.

  • Enter the files to fingerprint.
  • Apply and save.

STEP 3: Now click on Build Now and then click on See Fingerprints.


Conclusion.

In conclusion, fingerprints and artifacts are key elements that significantly enhance Jenkins workflows by ensuring traceability, consistency, and efficiency. Fingerprints provide a unique identifier for every component, enabling precise tracking and validation of builds, dependencies, and outputs. Artifacts, on the other hand, are the tangible build outputs that can be stored, reused, and deployed across different stages of the CI/CD pipeline. By combining both, Jenkins users can establish a robust, reliable, and streamlined process where every artifact is verifiable and can be traced back to its source. This not only improves debugging, auditing, and consistency but also optimizes build performance and supports continuous delivery and deployment. Ultimately, leveraging fingerprints and artifacts helps teams maintain high-quality software while enhancing collaboration and minimizing errors across the development lifecycle.

Step-by-Step Guide to Deploying a Two-Tier Architecture on AWS with Terraform.

AWS Two-Tier.

An AWS Two-Tier Architecture is a common cloud computing design pattern that separates the application into two layers, typically a web tier and a database tier. This architecture is commonly used for applications that need to scale easily while maintaining clear separation between the user-facing elements and the backend data storage.

  1. Web Tier (also called the presentation layer):
    • This is the front-end of the application, typically consisting of web servers or load balancers that handle incoming user traffic. In AWS, this layer is usually composed of services like Amazon EC2 instances, Elastic Load Balancer (ELB), or Amazon Lightsail.
    • This tier is responsible for processing user requests, managing user interactions, and routing requests to the appropriate backend services.
  2. Database Tier (also called the data layer):
    • This tier is responsible for data storage and management. In AWS, it often uses managed database services like Amazon RDS (for relational databases) or Amazon DynamoDB (for NoSQL databases).
    • The database tier is typically isolated from the web tier to ensure better security, reliability, and scalability.

Benefits of a Two-Tier Architecture in AWS:

  • Scalability: Each tier can be scaled independently based on demand. You can scale the web tier up or down to handle more traffic, while scaling the database tier as needed for storage and performance.
  • Security: The database tier is often placed behind a Virtual Private Cloud (VPC) and security groups, ensuring that only the web tier can access it, providing better control over access and reducing security risks.
  • Cost-Effective: With AWS’s flexible services, you only pay for what you use, allowing you to optimize costs as you scale each tier.
  • High Availability: AWS services offer built-in availability options like Multi-AZ for databases, ensuring uptime and fault tolerance.

STEP 1: Go to the AWS IAM console.

  • Create a user.
  • Attach the permissions required for Terraform to manage the resources created in this guide.

STEP 2: Create provider.tf file.

  • Enter the following lines and save the file.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~>4"
    }
  }
}

provider "aws" {
  region  = "us-east-1"
}

STEP 3: Create network_resources.tf file.

# VPC
resource "aws_vpc" "two-tier-vpc" {
  cidr_block = "10.0.0.0/16"
  tags = {
    Name = "two-tier-vpc"
  }
}

# Public Subnets 
resource "aws_subnet" "two-tier-pub-sub-1" {
  vpc_id            = aws_vpc.two-tier-vpc.id
  cidr_block        = "10.0.0.0/18"
  availability_zone = "us-east-1a"
  map_public_ip_on_launch = "true"

  tags = {
    Name = "two-tier-pub-sub-1"
  }
}

resource "aws_subnet" "two-tier-pub-sub-2" {
  vpc_id            = aws_vpc.two-tier-vpc.id
  cidr_block        = "10.0.64.0/18"
  availability_zone = "us-east-1b"
  map_public_ip_on_launch = "true"
  tags = {
    Name = "two-tier-pub-sub-2"
  }
}

# Private Subnets
resource "aws_subnet" "two-tier-pvt-sub-1" {
  vpc_id                  = aws_vpc.two-tier-vpc.id
  cidr_block              = "10.0.128.0/18"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = false
  tags = {
    Name = "two-tier-pvt-sub-1"
  }
}
resource "aws_subnet" "two-tier-pvt-sub-2" {
  vpc_id                  = aws_vpc.two-tier-vpc.id
  cidr_block              = "10.0.192.0/18"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = false
  tags = {
    Name = "two-tier-pvt-sub-2"
  }
}

# Internet Gateway
resource "aws_internet_gateway" "two-tier-igw" {
  tags = {
    Name = "two-tier-igw"
  }
  vpc_id = aws_vpc.two-tier-vpc.id
}

# Route Table
resource "aws_route_table" "two-tier-rt" {
  tags = {
    Name = "two-tier-rt"
  }
  vpc_id = aws_vpc.two-tier-vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.two-tier-igw.id
  }
}

# Route Table Association
resource "aws_route_table_association" "two-tier-rt-as-1" {
  subnet_id      = aws_subnet.two-tier-pub-sub-1.id
  route_table_id = aws_route_table.two-tier-rt.id
}

resource "aws_route_table_association" "two-tier-rt-as-2" {
  subnet_id      = aws_subnet.two-tier-pub-sub-2.id
  route_table_id = aws_route_table.two-tier-rt.id
}


# Create Load balancer
resource "aws_lb" "two-tier-lb" {
  name               = "two-tier-lb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.two-tier-alb-sg.id]
  subnets            = [aws_subnet.two-tier-pub-sub-1.id, aws_subnet.two-tier-pub-sub-2.id]

  tags = {
    Environment = "two-tier-lb"
  }
}

resource "aws_lb_target_group" "two-tier-lb-tg" {
  name     = "two-tier-lb-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.two-tier-vpc.id
}

# Create Load Balancer listener
resource "aws_lb_listener" "two-tier-lb-listner" {
  load_balancer_arn = aws_lb.two-tier-lb.arn
  port              = "80"
  protocol          = "HTTP"
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.two-tier-lb-tg.arn
  }
}

# Create Target group
resource "aws_lb_target_group" "two-tier-loadb_target" {
  name       = "target"
  depends_on = [aws_vpc.two-tier-vpc]
  port       = "80"
  protocol   = "HTTP"
  vpc_id     = aws_vpc.two-tier-vpc.id
  
}

resource "aws_lb_target_group_attachment" "two-tier-tg-attch-1" {
  target_group_arn = aws_lb_target_group.two-tier-loadb_target.arn
  target_id        = aws_instance.two-tier-web-server-1.id
  port             = 80
}
resource "aws_lb_target_group_attachment" "two-tier-tg-attch-2" {
  target_group_arn = aws_lb_target_group.two-tier-loadb_target.arn
  target_id        = aws_instance.two-tier-web-server-2.id
  port             = 80
}

# Subnet group database
resource "aws_db_subnet_group" "two-tier-db-sub" {
  name       = "two-tier-db-sub"
  subnet_ids = [aws_subnet.two-tier-pvt-sub-1.id, aws_subnet.two-tier-pvt-sub-2.id]
}

STEP 4: Create security_resources.tf file.

# Security Group
resource "aws_security_group" "two-tier-ec2-sg" {
  name        = "two-tier-ec2-sg"
  description = "Allow traffic from VPC"
  vpc_id      = aws_vpc.two-tier-vpc.id
  depends_on = [
    aws_vpc.two-tier-vpc
  ]

  ingress {
    # An ingress rule needs a source; allow all traffic from within the VPC
    from_port   = "0"
    to_port     = "0"
    protocol    = "-1"
    cidr_blocks = [aws_vpc.two-tier-vpc.cidr_block]
  }
  ingress {
    from_port   = "80"
    to_port     = "80"
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = "22"
    to_port     = "22"
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = "0"
    to_port     = "0"
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "two-tier-ec2-sg"
  }
}

# Load balancer security group
resource "aws_security_group" "two-tier-alb-sg" {
  name        = "two-tier-alb-sg"
  description = "load balancer security group"
  vpc_id      = aws_vpc.two-tier-vpc.id
  depends_on = [
    aws_vpc.two-tier-vpc
  ]


  ingress {
    from_port   = "0"
    to_port     = "0"
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
  egress {
    from_port   = "0"
    to_port     = "0"
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }


  tags = {
    Name = "two-tier-alb-sg"
  }
}

# Database tier security group
resource "aws_security_group" "two-tier-db-sg" {
  name        = "two-tier-db-sg"
  description = "allow traffic from the web tier"
  vpc_id      = aws_vpc.two-tier-vpc.id

  ingress {
    # MySQL access only from the web tier instances, not from the internet
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.two-tier-ec2-sg.id]
  }

  ingress {
    from_port       = 22
    to_port         = 22
    protocol        = "tcp"
    security_groups = [aws_security_group.two-tier-ec2-sg.id]
    cidr_blocks     = ["10.0.0.0/16"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

STEP 5: Create ec2-resources.tf file.

# Public subnet EC2 instance 1
resource "aws_instance" "two-tier-web-server-1" {
  ami             = "ami-0ca9fb66e076a6e32"
  instance_type   = "t2.micro"
  vpc_security_group_ids = [aws_security_group.two-tier-ec2-sg.id] # use SG IDs when launching into a VPC subnet
  subnet_id       = aws_subnet.two-tier-pub-sub-1.id
  key_name   = "two-tier-key"

  tags = {
    Name = "two-tier-web-server-1"
  }

  user_data = <<-EOF
#!/bin/bash
sudo yum update -y
sudo amazon-linux-extras install nginx1 -y 
sudo systemctl enable nginx
sudo systemctl start nginx
EOF
}

# Public subnet  EC2 instance 2
resource "aws_instance" "two-tier-web-server-2" {
  ami             = "ami-0ca9fb66e076a6e32"
  instance_type   = "t2.micro"
  vpc_security_group_ids = [aws_security_group.two-tier-ec2-sg.id] # use SG IDs when launching into a VPC subnet
  subnet_id       = aws_subnet.two-tier-pub-sub-2.id
  key_name   = "two-tier-key"

  tags = {
    Name = "two-tier-web-server-2"
  }

  user_data = <<-EOF
#!/bin/bash
sudo yum update -y
sudo amazon-linux-extras install nginx1 -y 
sudo systemctl enable nginx
sudo systemctl start nginx
EOF
}

#EIP

resource "aws_eip" "two-tier-web-server-1-eip" {

  instance                  = aws_instance.two-tier-web-server-1.id
  depends_on                = [aws_internet_gateway.two-tier-igw]
}

resource "aws_eip" "two-tier-web-server-2-eip" {

  instance                  = aws_instance.two-tier-web-server-2.id
  depends_on                = [aws_internet_gateway.two-tier-igw]
}

STEP 6: Create db-resources.tf file.

# RDS MYSQL database
resource "aws_db_instance" "two-tier-db-1" {
  allocated_storage           = 5
  storage_type                = "gp2"
  engine                      = "mysql"
  engine_version              = "5.7"
  instance_class              = "db.t2.micro"
  db_subnet_group_name        = "two-tier-db-sub"
  vpc_security_group_ids      = [aws_security_group.two-tier-db-sg.id]
  parameter_group_name        = "default.mysql5.7"
  db_name                     = "two_tier_db1"
  username                    = "admin"
  password                    = "password"
  allow_major_version_upgrade = true
  auto_minor_version_upgrade  = true
  backup_retention_period     = 35
  backup_window               = "22:00-23:00"
  maintenance_window          = "Sat:00:00-Sat:03:00"
  multi_az                    = false
  skip_final_snapshot         = true
}

STEP 7: Go to the terminal.

  • Enter terraform init command.

STEP 8: Next, Enter the terraform plan command.


STEP 9: Enter terraform apply command.


STEP 10: Now, go to AWS console.

  • And verify VPC and Network resources.
  • EC2 instances.
  • Load Balancer.
  • Target Group.
  • RDS MYSQL Database.
  • EC2 NGINX web server is also working.
  • Security Groups.
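The same checks can be scripted. A rough AWS CLI sketch, assuming the us-east-1 region used in provider.tf:

# DNS name of the application load balancer created by Terraform
aws elbv2 describe-load-balancers --names two-tier-lb \
  --query "LoadBalancers[0].DNSName" --output text

# Status of the RDS MySQL instance
aws rds describe-db-instances \
  --query "DBInstances[].{Id:DBInstanceIdentifier,Status:DBInstanceStatus}" --output table

# Request the page through the load balancer; an NGINX response confirms the web tier works
curl -I http://<ALB_DNS_NAME>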

Conclusion.

By following this guide, you’ve successfully deployed a scalable and secure two-tier architecture on AWS using Terraform. This setup enables efficient management of resources and ensures flexibility for future expansion. Leveraging Terraform’s infrastructure-as-code capabilities not only streamlines the deployment process but also provides the consistency needed for a robust cloud environment. With this foundation in place, you can now customize and scale your architecture based on your specific requirements. Whether you’re building a simple web application or a more complex solution, Terraform and AWS offer the tools you need to grow and adapt with ease. Don’t forget to explore further Terraform modules and AWS services to enhance your architecture’s capabilities and ensure it meets your application’s evolving needs.

Securing Your Jenkins Pipeline: Core Security Concepts You Need to Know.

Introduction.

Jenkins is a popular open-source automation server used to automate the process of building, testing, and deploying software. Since Jenkins is often used in continuous integration/continuous delivery (CI/CD) pipelines, ensuring its security is crucial to avoid potential risks like unauthorized access to sensitive information, codebase tampering, or pipeline manipulation. Here are some important security concepts in Jenkins:


Authentication

  • Jenkins User Authentication: Jenkins provides user authentication using several methods such as its internal user database, LDAP, Active Directory, or OAuth integration.
  • Matrix-based Security: This allows fine-grained control over who has access to different parts of Jenkins. You can specify which users or groups have permissions like “read,” “build,” “admin,” etc.
  • LDAP Authentication: Many organizations use LDAP to authenticate users so that Jenkins can integrate with their existing user management systems.

Authorization

  • Role-Based Access Control (RBAC): You can configure role-based access to define who has access to what within Jenkins. RBAC allows the segregation of duties, preventing unauthorized users from accessing or modifying critical resources.
  • Matrix Authorization Strategy: It offers a flexible way to control access by setting permissions for users and groups.

Security Realm

  • The Security Realm is a Jenkins concept used to manage the identity of users. Jenkins supports various security realms such as Jenkins’ internal user database, LDAP, or other external authentication systems (OAuth, SSO).

Authorization Strategy

  • Project-based Matrix Authorization: This restricts permissions for specific jobs, not just the entire Jenkins instance, which adds an additional layer of security for sensitive projects.
  • Crowd or SSO: Jenkins supports using external identity management systems like Atlassian Crowd or single sign-on (SSO) integration for centralized user authentication.

Pipeline Security

  • Script Approval: Jenkins allows administrators to approve or block scripts running in the pipeline, such as Groovy scripts, which could potentially contain malicious code. Users need to explicitly approve certain scripts before they are allowed to run.
  • Secure Agents: If you use distributed agents (slave nodes), make sure to properly configure the agent security, ensuring that the master has secure communication with agents. The communication should also be encrypted using TLS.

Auditing and Logging

  • Audit Logging: Jenkins provides a way to track all changes and actions performed in the system, such as job execution, user login, or configuration changes. It’s critical for identifying any suspicious activities or unauthorized actions.
  • Access Logs: Jenkins logs can be reviewed to track who accessed the system, when, and what actions were performed.

Encryption

  • TLS/SSL Encryption: Jenkins supports enabling HTTPS for secure communication between users and the Jenkins master or between master and agents. This prevents eavesdropping on sensitive information such as credentials or pipeline outputs.
  • Secrets Management: Jenkins has a “Credentials Plugin” that helps manage secrets, such as API tokens or passwords, safely. These credentials can be stored in the Jenkins Credentials Store and used securely within jobs and pipelines.

Plugins Security

  • Plugin Management: Jenkins allows the installation of plugins, which can sometimes introduce security vulnerabilities if not regularly updated. It’s important to maintain and update plugins to the latest secure versions.
  • Plugin Permission: Plugins should be audited for security and configured to ensure that they don’t introduce vulnerabilities. Certain plugins may grant escalated privileges, so care should be taken when installing them.

Security Hardening

  • Disable Unnecessary Features: To reduce the attack surface, Jenkins administrators should disable features they don’t need (e.g., the Jenkins CLI or remote access API) to prevent unwanted access or malicious activities.
  • CSRF Protection: Jenkins includes Cross-Site Request Forgery (CSRF) protection, which helps prevent attackers from tricking authenticated users into performing actions on Jenkins without their consent.
  • Enable Security Settings: Features like “Enable security” in Jenkins should always be turned on, enforcing users to authenticate before using Jenkins.

Backup and Recovery

  • Regular backups should be scheduled for Jenkins configurations, including jobs, plugins, and credentials, so that you can restore your Jenkins instance in case of an attack or failure.
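A minimal backup sketch, assuming the default Jenkins home of /var/lib/jenkins (adjust the path and the destination bucket to your setup):

# Archive the Jenkins home directory (jobs, plugins, credentials, configuration)
sudo tar -czf /tmp/jenkins-backup-$(date +%F).tar.gz -C /var/lib/jenkins .

# Copy the archive off the server, for example to an S3 bucket you own
aws s3 cp /tmp/jenkins-backup-$(date +%F).tar.gz s3://<YOUR_BACKUP_BUCKET>/jenkins/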

Requirement.

STEP 1: Create a new user on jenkins.

  • Go to Jenkins dashboard.
  • Select manage Jenkins and click on security.

STEP 2: Click on create a user.

  • Enter the username, password, full name, and email ID.

STEP 3: You will see the user you created.


STEP 4: Configure Security for the new user.

  • Go to Jenkins dashboard and Click on manage Jenkins.
  • Click on security.
  • Authorization: Matrix-based security.

STEP 5: Click on add user.


STEP 6: Enter user ID.


STEP 7: Add your created user.


STEP 8: Logout your Jenkins dashboard.

  • Log in your created user.

STEP 9: Now go back to the initial account and give permissions to user01.


STEP 10: Grant the Read permission to your created user.

  • Apply and save.

STEP 11: Now login to your created user.


Create Project–based Matrix Authorization Strategy

STEP 1: Go to Manage Jenkins and click on Security.

  • Select matrix based security.

STEP 2: Click on read option.

  • Apply and save.

STEP 3: Create a new job.

  • Select Freestyle project.
  • Click on ok button.

STEP 4: Enable project-based security.

  • Select your created user.
  • Tick on view and update option.
  • Click on apply and save.

STEP 5: Click on build now.


Conclusion.

You have successfully configured the Matrix-Based Authorization Strategy, granting specific permissions to different users and groups, thereby ensuring controlled access to Jenkins resources.

You have successfully implemented the Project-Based Matrix Authorization Strategy, enabling project-specific permissions for users and groups, ensuring secure and tailored access to Jenkins jobs and pipelines.

By configuring these security practices correctly, Jenkins can be used safely to automate CI/CD processes without exposing sensitive information or opening the system to unauthorized access.


A Complete Guide to Viewing All Running Amazon EC2 Instances Across All Regions.

Introduction.

AWS Global View refers to a feature provided by Amazon Web Services (AWS) that allows you to get a unified, global perspective of your AWS resources across all regions and accounts. This feature helps you manage and monitor the global infrastructure in a centralized way, providing visibility into the health, performance, and status of your AWS services worldwide.

Global Resource Visibility: It lets you view resources and services that are spread across multiple AWS regions and accounts, helping you understand how your resources are distributed and interconnected.

Global Dashboards: Provides an overview of your AWS accounts, services, and infrastructure from a single dashboard, which can be helpful in large-scale environments where resources are spread across different geographical locations.

Unified Management: AWS Global View allows you to manage your AWS resources from a high-level view, making it easier to track your AWS usage, monitor billing and costs, and identify potential issues across your global footprint.

Compliance and Governance: Helps in ensuring that your resources adhere to compliance standards and governance policies by providing insights into resource configurations and deployments worldwide.


Method : 1

STEP 1: Go to the EC2 console and click on EC2 Global View.


STEP 2: After clicking on Global View, you will see all running instances across the regions in which they are running.


STEP 3: Click on view all.


Method : 2

STEP 1: Go to the VPC dashboard, where you can see the ‘Running Instances’ option.


Method : 3

STEP 1: Navigate to AWS Resource Groups.

  • Click on Tag Editor.

STEP 2: Select your resource type.

  • Click on search resource.

STEP 3: After that, you can see all the running instances across all regions.


Method : 4

STEP 1: Now create a bash file where we will write a script to list instances running in all regions.

touch ec2.sh

STEP 2: Paste the following lines.

# install the AWS CLI first and configure it with credentials and a default region
# the script will iterate over all regions of AWS
for region in `aws ec2 describe-regions --output text | cut -f4`
do
     echo -e "\nListing Instances in region:'$region'..."
     aws ec2 describe-instances --query "Reservations[*].Instances[*].{IP:PublicIpAddress,ID:InstanceId,Type:InstanceType,State:State.Name,Name:Tags[0].Value}" --output=table --region $region
done

STEP 3: Now give the necessary permission to the file we created using chmod command and run the file.

chmod +x file_name.sh
./file_name.sh

STEP 4: After running the file, we can see the output in table format; the script loops through all regions and describes the instances running in each one.


Conclusion.

Amazon EC2 instances are deployed across multiple AWS regions. By default, you can only view instances within a specific region, so it’s crucial to consider all regions when attempting to get a global view of your EC2 infrastructure. While the AWS Management Console allows you to see EC2 instances within each region, it doesn’t provide a unified view across multiple regions simultaneously. You would need to manually switch regions to check each one.

In essence, a combination of AWS CLI, tagging, automation, and external tools can help you gain visibility and control over EC2 instances running across multiple regions, ensuring smooth operations and cost-effective resource management.

Querying Amazon S3 Data with Athena: A Step-by-Step Guide.

What is AWS Athena?

Amazon Athena is an interactive query service provided by Amazon Web Services (AWS) that allows users to analyze data directly in Amazon Simple Storage Service (S3) using standard SQL queries. Athena is serverless, meaning you don’t need to manage any infrastructure. You simply point Athena at your data stored in S3, define the schema, and run SQL queries.


What is AWS Glue?

AWS Glue is a fully managed, serverless data integration service provided by Amazon Web Services (AWS) that makes it easy to prepare and transform data for analytics. It automates much of the effort involved in data extraction, transformation, and loading (ETL) workflows, helping you to quickly move and process data for storage in data lakes, data warehouses, or other data stores.

In this blog, we will query and analyze data stored in an S3 bucket using SQL statements.

Task 1: Setup workgroup.

STEP 1: Navigate to AWS Athena.

  • Select "Analyze your data using PySpark and Spark SQL".
  • Click on Launch notebook editor.

STEP 2: Select Workgroups and click on Create workgroup.


STEP 3: Workgroup name: enter your choice of name.

  • Description: enter your choice of description.
  • Select Athena SQL and Automatic.

STEP 4: Select your S3 bucket.


STEP 5: You will see the created workgroup (myworkgroup).


Task 2: Create a database in Glue.

STEP 1: Navigate to AWS Glue and click on Databases.

  • Click on Add database.

STEP 2: Enter the database name.

  • Click on create database.

STEP 3: You will see a created Database.


Task 3: Create a table in Glue.

STEP 1: Select table and click on add table.


STEP 2: Enter the name and select the database.


STEP 3: Select standard AWS glue table.

  • Data store : S3.
  • Data location : my account.
  • Path : S3 bucket.
  • Data format : CSV.

STEP 4: Schema : Define or upload schema.

  • Click on add.

STEP 5: Enter the name and data type.


STEP 6: Enter the name and data type (column 1).


STEP 7: After entering the schema, click on Next.


STEP 8: Click on create.


STEP 9: Select database and table.


STEP 10: Click on preview table.


STEP 11: The query editor will automatically generate the SQL statement for previewing the first 10 rows.

  • The result of the query is displayed in the results pane.
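The same preview can also be run from the AWS CLI. A rough sketch, where the database, table, workgroup, and results bucket are placeholders for the ones created above:

# Submit the query to Athena and note the returned QueryExecutionId
aws athena start-query-execution \
  --query-string "SELECT * FROM mydatabase.mytable LIMIT 10;" \
  --work-group myworkgroup \
  --result-configuration OutputLocation=s3://<YOUR_RESULTS_BUCKET>/athena/

# Fetch the results once the execution has succeeded
aws athena get-query-results --query-execution-id <QUERY_EXECUTION_ID>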

Conclusion.

In this guide, we’ve walked through the process of querying data stored in Amazon S3 using Amazon Athena, which offers a powerful, serverless, and cost-effective solution for interacting with large datasets in S3. Amazon Athena allows you to run SQL queries directly on data stored in Amazon S3, without the need to load it into a database. This makes querying large datasets more accessible, especially for users who are familiar with SQL. Athena’s serverless nature means there’s no infrastructure to manage. You don’t need to worry about provisioning, configuring, or maintaining servers. Athena scales automatically based on the complexity of your queries.

How to Set Up an FSx File System for Windows: A Step-by-Step Guide.

Overview.

AWS FSx (Amazon FSx) is a fully managed service provided by Amazon Web Services (AWS) that offers highly scalable, performant, and cost-effective file storage options. It is designed to support workloads that require shared file storage for Windows and Linux-based applications.

There are two main types of Amazon FSx:

  1. Amazon FSx for Windows File Server: A managed file storage service that is built on Windows Server, providing features like SMB (Server Message Block) protocol support, Active Directory integration, and NTFS file system, making it ideal for applications that need Windows file system compatibility.
  2. Amazon FSx for Lustre: A managed file system optimized for high-performance computing (HPC), machine learning, and data-intensive workloads, using the Lustre file system. It’s designed for large-scale, high-speed data processing needs.

Now, let's create an Amazon FSx file system.

STEP 1: Go to the “Services” section & Search the “FSx”. Click on Create file system.


STEP 2: Choose "Amazon FSx for Windows File Server".

  • Click on Next.

STEP 3: In the “File system details”, we will choose the following options.

  • File system name — optional: FSx-For-Windows.
  • Deployment Type — Single-AZ–2.
  • Storage type — SSD.
  • SSD Storage Capacity — 32 GiB.
  • Provisioned SSD IOPS — Automatic (3 IOPS per GiB of SSD Storage).
  • Throughput capacity- 32 MB/s (recommended).
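For reference, an equivalent file system can be sketched with the AWS CLI once the directory exists; the subnet, security group, and directory IDs are placeholders mirroring the console choices above:

aws fsx create-file-system \
  --file-system-type WINDOWS \
  --storage-type SSD \
  --storage-capacity 32 \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --windows-configuration "ActiveDirectoryId=d-0123456789,DeploymentType=SINGLE_AZ_2,ThroughputCapacity=32"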

STEP 4: Select VPC, Security group and subnet.

  • Windows authentication: AWS Managed Microsoft Active Directory.

STEP 5: Click on the Create new directory.

  • Choose the AWS Managed Microsoft AD.
  • Click on next.

STEP 6: Edition: Standard Edition.


STEP 7: Directory DNS Name: fsx.ad.

  • Directory NetBIOS: FSX-Assignment.
  • Enter the Admin password, confirm it, and then click on Next.

STEP 8: Select VPC and click on next.


STEP 9: It will take about 30 minutes for the directory to go from the Creating to the Active status.


STEP 10: Choose your created directory (fsx.ad).


STEP 11: Click on Next.


STEP 12: Click on "Create file system".


STEP 13: The file system will initially show the status as Creating; it will take about 30 minutes for the status to change to Available.


Conclusion.

In conclusion, AWS FSx provides a powerful and flexible file storage solution for a variety of use cases. Whether you need a Windows-compatible file system with Amazon FSx for Windows File Server or require high-performance storage for data-intensive applications with Amazon FSx for Lustre, AWS ensures a fully managed experience with ease of scalability, reliability, and security. By leveraging AWS FSx, businesses can streamline their storage management, reduce administrative overhead, and focus on their core operations while benefiting from high-performance file systems tailored to their specific needs.




How to Create a User Pool in AWS Cognito Using Terraform.

Introduction.

What is Cognito?

AWS Cognito is a service provided by Amazon Web Services (AWS) that helps developers manage user authentication and access control in their applications. It simplifies the process of adding user sign-up, sign-in, and access management features to mobile and web apps. AWS Cognito is widely used to handle user identities securely and scale effortlessly while integrating with other AWS services.


What is Terraform?

Terraform is an open-source Infrastructure as Code (IaC) tool developed by HashiCorp that lets you build, change, and version cloud and on-prem resources safely and efficiently using human-readable configuration files that you can version, reuse, and share.

Now, let's create Cognito using a Terraform script in VS Code.

STEP 1: Create IAM user.

  • Enter the user name.
  • Select I want to create an IAM user.
  • Click on next button.

STEP 2: Click on Attach policies directly.

  • Select Cognito Access policies.
  • Click on next.

STEP 3: Click on create user.


STEP 4: You will see that a user has been created.

  • Select your user.

STEP 5: Select the create access key.


STEP 6: Select CLI.

  • Click on create access key.
  • You will get an access key and a secret key.
  • Download the .csv file.
  • Click on done.

STEP 7: Go to VS Code and open your folder.

  • Create variable.tf file.
  • Enter the following script and save the file.
variable "access_key" {
    description = "Access key to AWS console"
}
variable "secret_key" {
    description = "Secret key to AWS console"
}
variable "region" {
    description = "AWS region"
}

STEP 8: Create terraform.tfvars and enter your region, access key and secret key.

  • Save it.
region = "us-east-1"
access_key = "<Your access key id>"        
secret_key = "<Your secret key id>"

STEP 9: Create main.tf file.

  • Enter the following commands and save the file.
provider "aws" {
  region     = var.region
  access_key = var.access_key
  secret_key = var.secret_key
}

resource "aws_cognito_user_pool" "my_user_pool" {
  name = "my-user-pool"
  
  # Optional configuration settings
  username_attributes     = ["email"]  # Specify the attribute used as the username
  schema {
    attribute_data_type = "String"
    name                = "email"
    required            = true
  }
  
  # Define password policy
  password_policy {
    minimum_length    = 8
    require_lowercase = true
    require_numbers   = true
    require_symbols   = true
    require_uppercase = true
  }
  
  # Specify email verification settings
  verification_message_template {
    default_email_option = "CONFIRM_WITH_CODE"  # Options: CONFIRM_WITH_LINK or CONFIRM_WITH_CODE
  }
}

output "user_pool_id" {
  value = aws_cognito_user_pool.my_user_pool.id
}

STEP 10: Create output.tf file.

  • Enter the terraform script and save the file.
output "Cognito_user_pool" {
    value =  "cognito_user_pool"
    description = "Cognito user pool created successfully"
}

STEP 11: Go to terminal and enter the terraform init command.


STEP 12: Next, enter terraform plan.


STEP 13: Enter terraform apply.


STEP 14: Check the resources in the AWS Console.

  • Navigate to the Cognito page by clicking on the Services menu at the top. 
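The pool can also be confirmed from the terminal with the AWS CLI, using the region configured in terraform.tfvars:

# "my-user-pool" should appear in the list
aws cognito-idp list-user-pools --max-results 10 --region us-east-1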

STEP 15: To delete the resources, use the terraform destroy command.


Conclusion.

You have successfully created a User Pool in AWS Cognito. AWS Cognito is a powerful tool for managing user authentication and access in applications, providing a blend of security, scalability, and integration with AWS services. While it may require a learning curve for complex setups, its benefits often outweigh these challenges, making it a strong candidate for companies looking to enhance their user management processes and secure their applications. It is essential for businesses to assess their specific use cases to maximize the benefits of AWS Cognito effectively.