Posts

Python to convert JSON to YAML

import json
import yaml

# Convert the JSON response to a Python dictionary
response_dict = json.loads(json.dumps(response))

# Convert the Python dictionary to YAML (if desired)
response_yaml = yaml.dump(response_dict, default_flow_style=False)
print(response_yaml)

#####################################

import json

# Define JSON data as a Python dictionary
json_data = {
  "books": [
    {
      "title": "The Great Gatsby",
      "author": "F. Scott Fitzgerald",
      "publication_year": 1925,
      "genre": "Fiction",
      "isbn": "978-0743273565"
    },
    {
      "title": "To Kill a Mockingbird",
      "author": "Harper Lee",
      "publication_year": 1960,
      "genre": "Fiction",
      "isbn": "978-0061120084"
    }
  ]
}

# Access and print the author and genre for each book
for ...
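If you only need the conversion itself, the same round trip works as a shell one-liner. A minimal sketch, assuming python3 and PyYAML are installed; in.json is a hypothetical input file:

# Convert a JSON file to YAML from the shell (requires python3 + PyYAML)
python3 -c 'import sys, json, yaml; print(yaml.dump(json.load(sys.stdin), default_flow_style=False))' < in.json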

Check if an IP is valid

I was tasked with crafting a Bash program designed to determine the validity of a supplied IP address and assess whether each octet falls within the acceptable range.

#!/bin/bash

# Function to validate an IP address
is_valid_ip() {
    local ip="$1"
    local regex="^([0-9]{1,3}\.){3}[0-9]{1,3}$"

    if [[ $ip =~ $regex ]]; then
        # Check if each octet is in the valid range (0-255)
        IFS='.' read -r -a octets <<< "$ip"
        for octet in "${octets[@]}"; do
            if [[ "$octet" -lt 0 || "$octet" -gt 255 ]]; then
                return 1
            fi
        done
        return 0
    else
        return 1
    fi
}

# Read an IP address from the user
read -p "Enter an I...
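The function can also be exercised non-interactively. A minimal test harness, assuming is_valid_ip above is sourced; the sample addresses are hypothetical:

# Quick test harness for is_valid_ip (hypothetical sample inputs)
for ip in 192.168.1.1 10.0.0.256 abc.def; do
    if is_valid_ip "$ip"; then
        echo "$ip is valid"
    else
        echo "$ip is invalid"
    fi
done

Here 10.0.0.256 passes the regex but fails the per-octet range check, which is exactly why the script checks both.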

Terraform Variables

You only need to define `variables.tf` to declare the input variables in your Terraform configuration. `terraform.tfvars` is optional but often used to provide values for those variables. Here's a breakdown of their roles:

1. `variables.tf`:
   - Purpose: This file is used to declare input variables in your Terraform configuration. It specifies the names, types, and descriptions of the variables you want to use.
   - Contents: In `variables.tf`, you define variables like this:

     variable "aws_region" {
       description = "AWS region"
       type        = string
       default     = "us-east-1"  # Default value for aws_region
     }

   - Usage: You use these declared variables throughout your Terraform configuration files (e.g., `main.tf`) to parameterize your resources.

2. `terraform.tfvars`:
   - Pur...
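For reference, this is how values typically reach the declared variables from the command line. A quick sketch using the `aws_region` variable above; staging.tfvars is a hypothetical file name:

terraform plan                              # terraform.tfvars in the working directory is loaded automatically
terraform plan -var-file="staging.tfvars"   # an explicit var file
terraform plan -var="aws_region=us-west-2"  # an inline value overrides the default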

AWS: Auto Scaling using Terraform

Creating a comprehensive Terraform project with a complete VPC, subnets, security groups, an Application Load Balancer, and Auto Scaling groups is an extensive task. Below, I'll provide you with an example project structure and simplified Terraform configuration files. Please adapt these files to your specific needs and follow best practices.

Directory Structure:

terraform-project/
|-- main.tf
|-- variables.tf
|-- outputs.tf
|-- vpc.tf
|-- subnets.tf
|-- security_groups.tf
|-- load_balancer.tf
|-- autoscaling.tf
|-- providers.tf
|-- terraform.tfvars

Here's a brief overview of what each file should contain:

1. `main.tf`:
   The main configuration file where resources and dependencies are defined.

2. `variables.tf`:
   Input variable definitions that allow you to parameterize your configuration.

3. `outputs.tf`:
   Output definitions for displaying information after deployment.

4. `vpc.tf`:
   Configuration for the Virtual Privat...
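Once the files are in place, the workflow from the project root follows the standard Terraform commands; a sketch with no assumptions beyond the layout above:

cd terraform-project/
terraform init      # download providers and set up the backend
terraform validate  # check syntax and internal consistency
terraform plan      # preview the changes
terraform apply     # create the VPC, ALB, and Auto Scaling resources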

AWS: Auto Scaling Steps

Creating an Auto Scaling group in AWS involves several steps. Here's a step-by-step guide to setting up an Auto Scaling group:

1. Log in to the AWS Console: Open your web browser and navigate to the [AWS Management Console](https://aws.amazon.com/). Sign in to your AWS account if you haven't already.

2. Navigate to Auto Scaling: Once you're logged in, click on the "Services" menu at the top of the page, and under the "Compute" section, select "Auto Scaling." You can also search for "Auto Scaling" in the AWS services search bar.

3. Create an Auto Scaling Group: In the Auto Scaling dashboard, click on "Create Auto Scaling group."

4. Select Launch Template or Launch Configuration: You'll need to specify the launch template or launch configuration that defines the instance configuration for your Auto Scaling group. You can either select an existing launch configuration or create a new launch template. If you're crea...
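The same setup can be scripted with the AWS CLI instead of the console. A minimal sketch; the group name, launch template name, and subnet IDs below are hypothetical:

# Create an Auto Scaling group from an existing launch template (hypothetical names/IDs)
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-template 'LaunchTemplateName=my-launch-template,Version=$Latest' \
  --min-size 1 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"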

AWS: Auto Scaling

Amazon Web Services (AWS) Auto Scaling is a service that allows you to automatically adjust the number of Amazon EC2 instances (virtual servers) in your AWS environment to handle changes in workload, traffic, or resource utilization. It helps you ensure that you have the right number of instances running at any given time to maintain application availability and performance while optimizing costs. Here's a breakdown of key concepts and how AWS Auto Scaling works:

1. Auto Scaling Groups (ASGs): An Auto Scaling Group is a fundamental component of AWS Auto Scaling. It defines a collection of Amazon EC2 instances that share similar characteristics and are treated as a logical grouping for scaling purposes. Instances within an ASG are launched from the same Amazon Machine Image (AMI) and have the same configuration settings.

2. Scaling Policies: AWS Auto Scaling allows you to define scaling policies that specify when and how the Auto Scaling Group should scale. There are two prim...
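As a concrete illustration of a scaling policy, a target tracking policy can be attached from the CLI. A sketch only; the group name and the 50% CPU target are hypothetical:

# Keep average CPU around 50% for a hypothetical group "my-asg"
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-target-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
    "TargetValue": 50.0
  }'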

Jenkins Pipeline

Introduction

Jenkins, the popular open-source automation server, offers a powerful feature known as pipelines to streamline and automate your software development processes. Jenkins pipelines enable you to define, automate, and manage your entire software delivery workflow as code. In this blog post, we'll dive into the world of Jenkins pipelines, discussing what they are, their benefits, and how to create them effectively.

What are Jenkins Pipelines?

Jenkins pipelines are a set of instructions defined in code that specify how Jenkins should build, test, and deploy your software. They are designed to replace the traditional, point-and-click job configuration in Jenkins with a more structured and flexible approach. Pipelines can be defined using either Declarative or Scripted syntax, giving you the flexibility to choose the best fit for your project.

Benefits of Jenkins Pipelines

1. Code as Configuration: Jenkins pipelines are defined as code, allowing you to version control your bu...

DevOps

Introduction

In today's fast-paced world of software development and IT operations, DevOps has emerged as a transformative approach. It bridges the gap between traditionally siloed development and operations teams, fostering collaboration, automation, and continuous improvement. In this blog post, we'll explore the concept of DevOps, its core principles, and the benefits it brings to organizations.

What is DevOps?

DevOps, a portmanteau of "development" and "operations," represents a cultural and technical shift in how organizations build, deploy, and manage software systems. It emphasizes collaboration, automation, and the integration of development and operations teams throughout the entire software development life cycle.

Core Principles of DevOps

1. Collaboration: DevOps encourages developers, operations, and other stakeholders to work together seamlessly. This collaboration eliminates bottlenecks, improves communication, and aligns everyone towards a com...

Software Development Life Cycle

Introduction

The Software Development Life Cycle (SDLC) is a systematic and structured approach to developing software applications. It encompasses a series of phases and processes that guide software developers, project managers, and stakeholders from the initial concept to the final product. In this blog post, we will delve into the different stages of the SDLC, emphasizing their importance and how they contribute to successful software development.

1. Requirements Gathering and Analysis

The SDLC begins with a crucial phase: requirements gathering and analysis. During this stage, project stakeholders collaborate to define and document the software's functional and non-functional requirements. These requirements serve as the foundation for the entire development process, ensuring that the software aligns with business needs and user expectations.

2. Planning and Feasibility Assessment

Once the requirements are established, the project team creates a comprehensive project plan. Thi...

Understanding Jenkins: Servers and Agents

Introduction

Jenkins is a widely used open-source automation server that helps streamline software development processes through continuous integration and continuous delivery (CI/CD). Central to Jenkins' functionality are its server and agent components, which work together to automate build, test, and deployment tasks. In this blog post, we'll explore the roles of Jenkins servers and agents and how they collaborate to optimize software development pipelines.

Jenkins Server

The Jenkins server, often referred to simply as the Jenkins master, is the core component of the Jenkins infrastructure. It serves as the primary control center for managing and orchestrating CI/CD pipelines. Here are its key functions:

1. Job Management: The Jenkins server is responsible for creating, configuring, and scheduling jobs. Jobs define the steps required for building, testing, and deploying software applications. Jenkins jobs are created using a Jenkinsfile (declarative pipeline) or scripted pipel...

What needs to be Monitored in Real Time?

Monitoring plays a major role in production: it is what lets you debug and take the necessary actions to resolve issues. The choice of monitoring tool depends on the stats you want to check and on the licensing model you want to go with (paid or open source).

Monitoring Tools:
Prometheus (trending tool)
Nagios
CheckMK
Datadog
AppDynamics

Types of Monitoring

Infra Monitoring:
CPU utilization / load average
Memory / RAM utilization
Disk usage and disk I/O utilization
Ping
SSH status check
Network (bandwidth, incoming and outgoing traffic)

Service or Application Monitoring:
Service port or service status
Number of requests
Resources used by the application
Heap dump, in the case of a Java application
GC (garbage collector), in the case of a Java application
Performance / response time
Load testing vs response time
API health check URL
The health of the database
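Many of the infra and service checks above reduce to simple probes that any of the tools automate. A minimal ad-hoc Bash sketch; the host, port, and health endpoint are hypothetical, and it assumes ping, nc, and curl are available:

#!/bin/bash
# Minimal ad-hoc probes for a hypothetical host and service
host="app01.example.com"

# Ping check
ping -c 1 -W 2 "$host" >/dev/null && echo "ping OK" || echo "ping FAILED"

# Service port check (hypothetical port 8080)
nc -z -w 2 "$host" 8080 && echo "port 8080 open" || echo "port 8080 closed"

# API health check URL (hypothetical endpoint)
curl -fsS -m 5 "http://$host:8080/health" >/dev/null && echo "health OK" || echo "health FAILED"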

SSL key and crt validation

A key and a certificate belong together when their RSA moduli match, so hash the modulus of each and compare:

openssl rsa -modulus -noout -in server.key | openssl md5
(stdin)= c20d7fd18a97bf6cba1e4f974d805023

openssl rsa -check -noout -in server.key
RSA key ok

openssl x509 -modulus -noout -in server.crt | openssl md5
(stdin)= c20d7fd18a97bf6cba1e4f974d805023
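The comparison is easy to script; a small sketch using the same file names as above:

# Exit non-zero if server.key and server.crt do not belong together
key_md5=$(openssl rsa -modulus -noout -in server.key | openssl md5)
crt_md5=$(openssl x509 -modulus -noout -in server.crt | openssl md5)
if [ "$key_md5" = "$crt_md5" ]; then
    echo "key and certificate match"
else
    echo "MISMATCH: key and certificate do not belong together" >&2
    exit 1
fi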

EC2-Instances-awscli

Launch instance:

aws ec2 run-instances --image-id <value> --instance-type <value> --security-group-ids <value> --subnet-id <value> --key-name <value> --user-data <value>

Terminate instances:

aws ec2 terminate-instances --instance-ids <value> <value>
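For illustration, a filled-in launch call might look like this; every ID and name below is a hypothetical placeholder:

# Launch one t3.micro instance (all IDs/names are hypothetical placeholders)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --security-group-ids sg-0123456789abcdef0 \
  --subnet-id subnet-0123456789abcdef0 \
  --key-name my-keypair \
  --user-data file://bootstrap.sh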

CloudBees Flow Server - Debug

Error: Workspace file /workspace/job_name/step.log in workspace 'default' not found.

The error means the workspace is not available on the resource the job landed on. Start by checking the workspace attached to the resource/project/procedure/step, and make sure that workspace is accessible from the resource.

Python - My Cheat Sheet

Method to check the path from which a Python module was loaded:

>>> import os
>>> import inspect
>>> inspect.getfile(os)
'/usr/lib64/python2.7/os.pyc'
>>> inspect.getfile(inspect)
'/usr/lib64/python2.7/inspect.pyc'
>>> os.path.dirname(inspect.getfile(inspect))
'/usr/lib64/python2.7'

Windows file associations for .py files (assoc maps the extension to a file type; ftype maps the file type to an interpreter):

C:\WINDOWS\system32>assoc .py=Python.File
.py=Python.File

C:\>ftype Python.File="C:\Program Files\Python35" "%1" %*
Python.File="C:\Program Files\Python35" "%1" %*

C:\>ftype Python.File="C:\python279-64" "%1" %*
Python.File="C:\python279-64" "%1" %*

C:\>ftype | findstr -i python
Python.CompiledFile="C:\Python27\python.exe" "%1" %*
Python.File="C:\Python27\python.exe" "%1" %*
Python.NoConFile="C:\Python27\pythonw.exe" "%1" %*

C:\>ftype Python.CompiledFile="C:\CRMApps\Apps\Python262\python.exe" "%1" %*...
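On Linux/macOS the same module-path lookup works as a one-liner from the shell, without opening a REPL; a quick sketch:

# Print the file a module was loaded from
python -c 'import os, inspect; print(inspect.getfile(os))'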

PIP Errors and Fix

PIP upgrade fails

OS: Ubuntu 16.04.6 LTS
Python version: Python 2.7.12

PIP upgrade failure on Ubuntu 16.04:

# pip install --upgrade pip
Collecting pip
  Using cached https://files.pythonhosted.org/packages/88/d9/761f0b1e0551a3559afe4d34bd9bf68fc8de3292363b3775dda39b62ce84/pip-22.0.3.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-WjsvHJ/pip/setup.py", line 7
        def read(rel_path: str) -> str:
                         ^
    SyntaxError: invalid syntax

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-WjsvHJ/pip/
You are using pip version 8.1.1, however version 22.0.3 is available.
You should consider upgrad...
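The failure happens because pip 22.x only supports Python 3, so its setup.py (which uses type annotations) is a syntax error under Python 2.7. The fix section of the post is cut off above; one common remedy, offered here as a sketch rather than the post's exact fix, is to pin pip to the last release that still supports Python 2.7:

# Pin to the last pip release that still supports Python 2.7
pip install --upgrade "pip<21"

# Or bootstrap via the Python 2.7-specific get-pip script
curl -sS https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py
python get-pip.py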

BASH-Elements

Bash Elements

Below, $1 and $2 are positional arguments passed while running the script: the first argument passed is $1 (10 here), the second is $2 (20 here), and so on for $3, $4, ...

Server1> cat test.sh
#!/bin/bash
echo $1
echo $2
echo "Concatenate two elements: $1  $2"
echo "Sum of $1 and $2 is $(( $1 + $2 ))"
echo "Number of arguments passed $#"
echo "Print @ Notation $@"
echo "Print * Notation $*"
echo "Echo of 0 is $0"
echo "Execute a command: `pwd`"

Server1> chmod +x test.sh
Server1> ./test.sh 10 20
10
20
Concatenate two elements: 10  20
Sum of 10 and 20 is 30
Number of arguments passed 2
Print @ Notation 10 20
Print * Notation 10 20
Echo of 0 is ./test.sh
Execute a command: /usr/username
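$@ and $* print the same thing above, but they behave differently once quoted. A minimal sketch (hypothetical script name quotes.sh):

#!/bin/bash
# "$@" preserves each argument as its own word; "$*" joins them into one word
for arg in "$@"; do echo "@: $arg"; done
for arg in "$*"; do echo "*: $arg"; done

Running ./quotes.sh one two three prints three "@:" lines (one per argument) but a single "*: one two three" line, which is why "$@" is the right choice when forwarding arguments.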

BASH-Array

Working with a BASH Array

Declare an Array:
Server1> declare -a ARRAY

Add Elements to an Array:
Server1> ARRAY=(mango banana pear kiwi)

[@] and [*] print all elements in an ARRAY:
Server1> echo ${ARRAY[@]}
mango banana pear kiwi
Server1> echo ${ARRAY[*]}
mango banana pear kiwi

Print index values in an ARRAY:
Server1> echo ${!ARRAY[*]}
0 1 2 3

Size of an ARRAY:
Server1> echo ${#ARRAY[*]}
4

Print all elements in an ARRAY:
Server1> declare -p ARRAY
declare -a ARRAY='([0]="mango" [1]="banana" [2]="pear" [3]="kiwi")'

Update an Array:
Server1> ARRAY[0]=Apple
Server1> declare -p ARRAY
declare -a ARRAY='([0]="Apple" [1]="banana" [2]="pear" [3]="kiwi")'

Append to an ARRAY:
Server1> declare -p ARRAY
declare -a ARRAY='([0]="mango" [1]="banana" [2]="pear" [3]="kiwi")'
Server1> ARRAY=(${ARRAY[@]} Orange)
Server1> declare -p ARRAY
declar...
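Re-expanding the array to append, as above, works for simple values, but the unquoted ${ARRAY[@]} would split any element containing spaces. The += operator is the safer idiom; a brief sketch:

# Safer append: += adds elements without re-expanding the array
ARRAY+=(Orange "star fruit")
declare -p ARRAY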

SSL Verify

Verify SSL Certificates

Verify KEY (private key):
# openssl rsa -in hostname.domain.key -check

Decode CSR (certificate request):
# openssl req -in hostname.domain.csr -noout -text

Decode CRT (certificate):
# openssl x509 -in hostname.domain.crt -text -noout

NOTE: The .key, .csr and .crt files can have different names.

SSL new cert generation

Below are the steps for generating certificates in your organization.

Private key generation using OpenSSL:

# openssl genrsa -out hostname.domain.com.key 2048

CSR generation using OpenSSL:

# openssl req -new -key hostname.domain.com.key -out hostname.domain.com.csr -nodes -subj "/C=US/ST=Region/L=Location/O=Organization/OU=UNIT/CN=hostname.domain.com/emailAddress=support.help@domain.com" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:hostname,DNS:hostname.domain.com,DNS:api.hostname.domain.com,DNS:storage.hostname.domain.com,DNS:tasks.hostname.domain.com"))

Acquire the security certificate from a CA:

The security certificate needs to be provided by the cert admins. Work with the AD team admins or whoever supports certs in your organization.

NOTE: For most use cases the process ends here; if you need a .pem or .pfx file, follow the respective steps to generate them.

#####################################################################################

To generate .p...