Posts

Showing posts from February, 2022

EC2-Instances-awscli

Launch instance:
aws ec2 run-instances --image-id <value> --instance-type <value> --security-group-ids <value> --subnet-id <value> --key-name <value> --user-data <value>
Terminate instances:
aws ec2 terminate-instances --instance-ids <value> <value>
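For illustration only, launching a single t2.micro instance and later terminating it might look like this (the AMI, subnet, security-group, key and instance IDs are placeholders of mine, not real resources):
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --key-name my-keypair --security-group-ids sg-0123456789abcdef0 --subnet-id subnet-0123456789abcdef0 --count 1
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0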

Cloudbees FlowServer - Debug

Error: Workspace file /workspace/job_name/step.log in workspace 'default' not found. This error means the workspace is not available to the resource on which the job landed. Start by checking the workspace attached to the resource/project/procedure/step and make sure the workspace is accessible from that resource.

Python-mycheat-sheet

Method to check the path from which a Python module is loaded:
>>> import os
>>> import inspect
>>> inspect.getfile(os)
'/usr/lib64/python2.7/os.pyc'
>>> inspect.getfile(inspect)
'/usr/lib64/python2.7/inspect.pyc'
>>> os.path.dirname(inspect.getfile(inspect))
'/usr/lib64/python2.7'
On Windows, assoc and ftype show and set which interpreter .py files are associated with:
C:\WINDOWS\system32>assoc .py=Python.File
.py=Python.File
ftype Python.File="C:\Program Files\Python35" "%1" %*
ftype Python.File="C:\python279-64" "%1" %*
Python.File="C:\Program Files\Python35" "%1" %*
Python.File="C:\python279-64" "%1" %*
C:\>ftype | findstr -i python
Python.CompiledFile="C:\Python27\python.exe" "%1" %*
Python.File="C:\Python27\python.exe" "%1" %*
Python.NoConFile="C:\Python27\pythonw.exe" "%1" %*
ftype Python.CompiledFile="C:\CRMApps\Apps\Python262\python.exe" "%1" %*...

PIP Errors and Fix

PIP upgrade fails
OS: Ubuntu 16.04.6 LTS
Python version: Python 2.7.12
# pip install --upgrade pip
Collecting pip
  Using cached https://files.pythonhosted.org/packages/88/d9/761f0b1e0551a3559afe4d34bd9bf68fc8de3292363b3775dda39b62ce84/pip-22.0.3.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-WjsvHJ/pip/setup.py", line 7
        def read(rel_path: str) -> str:
                         ^
    SyntaxError: invalid syntax
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-WjsvHJ/pip/
You are using pip version 8.1.1, however version 22.0.3 is available.
You should consider upgrad...
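One common workaround on Python 2.7 (a suggestion of mine; the post's own fix is cut off above) is to pin pip below version 21, the last line that still supports Python 2:
# pip install --upgrade "pip<21.0"
With the pin in place, pip resolves to a 20.3.x release and the upgrade no longer hits the Python-3-only syntax in newer pip sources.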

BASH-Elements

Bash Elements
$1 and $2 below are arguments passed while running the script: the first argument is $1 (10), the second is $2 (20), then $3 and so on.
Server1> cat test.sh
#!/bin/bash
echo $1
echo $2
echo "Concatenate two elements: $1 $2"
echo "Sum of $1 and $2 is $(( $1+$2 ))"
echo "Number of arguments passed $#"
echo "Print @ Notation $@"
echo "Print * Notation $*"
echo "Echo of 0 is $0"
echo "Execute a command: `pwd`"
Server1> chmod +x test.sh
Server1> ./test.sh 10 20
10
20
Concatenate two elements: 10 20
Sum of 10 and 20 is 30
Number of arguments passed 2
Print @ Notation 10 20
Print * Notation 10 20
Echo of 0 is ./test.sh
Execute a command: /usr/username
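A small follow-on sketch of my own (not part of the original post) showing how quoted "$@" and "$*" differ once an argument itself contains a space:
Server1> cat args.sh
#!/bin/bash
printf '<%s> ' "$@"; echo    # "$@" keeps each argument as a separate word
printf '<%s> ' "$*"; echo    # "$*" joins all arguments into one word
Server1> chmod +x args.sh
Server1> ./args.sh "a b" c
<a b> <c>
<a b c>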

BASH-Array

Working with BASH Arrays
Declare an array:
Server1> declare -a ARRAY
Add elements to the array:
Server1> ARRAY=(mango banana pear kiwi)
[@] and [*] print all elements in the array:
Server1> echo ${ARRAY[@]}
mango banana pear kiwi
Server1> echo ${ARRAY[*]}
mango banana pear kiwi
Print the index values of the array:
Server1> echo ${!ARRAY[*]}
0 1 2 3
Size of the array:
Server1> echo ${#ARRAY[*]}
4
Print all elements with their indexes:
Server1> declare -p ARRAY
declare -a ARRAY='([0]="mango" [1]="banana" [2]="pear" [3]="kiwi")'
Update an element:
Server1> ARRAY[0]=Apple
Server1> declare -p ARRAY
declare -a ARRAY='([0]="Apple" [1]="banana" [2]="pear" [3]="kiwi")'
Append to the array:
Server1> declare -p ARRAY
declare -a ARRAY='([0]="mango" [1]="banana" [2]="pear" [3]="kiwi")'
Server1> ARRAY=(${ARRAY[@]} Orange)
Server1> declare -p ARRAY
declar...
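A short addition of my own: looping over the same array by element and by index:
Server1> for fruit in "${ARRAY[@]}"; do echo "$fruit"; done
mango
banana
pear
kiwi
Server1> for i in "${!ARRAY[@]}"; do echo "$i -> ${ARRAY[$i]}"; done
0 -> mango
1 -> banana
2 -> pear
3 -> kiwi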

SSL Verify

Verify SSL Certificates
Verify the private key:
# openssl rsa -in hostname.domain.key -check
Decode the CSR (certificate signing request):
# openssl req -in hostname.domain.csr -noout -text
Decode the certificate:
# openssl x509 -in hostname.domain.crt -text -noout
NOTE: the .key, .csr and .crt files can have different names.
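A related check worth adding (my own, using the same example file names): confirm that the key, CSR and certificate belong together by comparing their modulus hashes; all three outputs should match:
# openssl rsa -noout -modulus -in hostname.domain.key | openssl md5
# openssl req -noout -modulus -in hostname.domain.csr | openssl md5
# openssl x509 -noout -modulus -in hostname.domain.crt | openssl md5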

SSL new cert generation

Below are the steps for generating certificates in your organization.
Private key generation using OpenSSL:
# openssl genrsa -out hostname.domain.com.key 2048
CSR generation using OpenSSL:
# openssl req -new -key hostname.domain.com.key -out hostname.domain.com.csr -nodes -subj "/C=US/ST=Region/L=Location/O=Organization/OU=UNIT/CN=hostname.domain.com/emailAddress=support.help@domain.com" -reqexts SAN -config <(cat /etc/ssl/openssl.cnf <(printf "[SAN]\nsubjectAltName=DNS:hostname,DNS:hostname.domain.com,DNS:api.hostname.domain.com,DNS:storage.hostname.domain.com,DNS:tasks.hostname.domain.com"))
Acquire the signed certificate from the CA: the security certificate needs to be provided by the cert admins. Work with the AD team admins or whoever supports certificates in your organization.
NOTE: for most setups it ends here; if you need a pem or pfx file, follow the respective steps to generate the keys.
#####################################################################################
To generate .p...
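Before handing the CSR to the cert admins it is worth confirming that the SAN entries made it in (an extra verification step of my own, using the same file name as above):
# openssl req -in hostname.domain.com.csr -noout -text | grep -A1 "Subject Alternative Name"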

GIT

Git is a free and open-source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. Code is saved in a repository such as GitHub. Git is software for tracking changes in any set of files, usually used for coordinating work among programmers collaboratively developing source code during software development. Its goals include speed, data integrity, and support for distributed, non-linear workflows. A project can be either public or private, and git works over the HTTP and SSH protocols.
Git follows a branching strategy:
Master
Feature1/Bug1
Feature2/Bug2
Feature3/Bug3
Working with git:
1) Initialize a repository on the local machine (git init) or create a new project in the repository from the UI.
2) Add files and commit the changes.
3) Push the locally created project to the git repository.
Working on a bug fix or feature:
1) Clone the existing project.
2) Create a new branch with a versionin...
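A rough command sketch of the two workflows above (the repository URL and branch name are placeholders of mine, not from the post):
git init
git add .
git commit -m "Initial commit"
git remote add origin git@github.com:user/project.git
git push -u origin master
and for a bug fix or feature on an existing project:
git clone git@github.com:user/project.git
cd project
git checkout -b feature1/bug1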

Sysstat - collect system stats

The sysstat service is responsible for the regular collection of system performance information. Through the use of cron and sadc (the System Activity Data Collector), sysstat gathers SAR data at n-minute intervals daily. The service has little impact on overall server performance. The default sysstat configuration overwrites collected performance information every 7 days. Sysstat is provided as part of the sysstat package, which also provides useful system performance gathering utilities including mpstat, iostat and sar.
Service: /etc/init.d/sysstat start|stop
Configuration file: /etc/sysconfig/sysstat (RHEL) or /etc/default/sysstat (Debian/Ubuntu)
Log location: /var/log/sa/
server1# cat /etc/default/sysstat
#
# Default settings for /etc/init.d/sysstat, /etc/cron.d/sysstat
# and /etc/cron.daily/sysstat files
#
# Should sadc collect system activity information? Valid values
# are "true" and "false". Please do not put other values, they
# will be overwritten by debconf!
ENABLED="true"
Change History Size: ...
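Once sysstat is collecting, sar can report either live samples or the history stored under /var/log/sa/ (typical invocations of mine, not taken from the truncated post; sa15 is just an example file name):
server1# sar -u 2 5 (CPU utilization, 5 samples 2 seconds apart)
server1# sar -r (memory utilization for the current day)
server1# sar -u -f /var/log/sa/sa15 (CPU history from a previous day's file)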

LINUX - RAM Usage

RAM (Random Access Memory) is an essential component of a Linux system that has to be monitored closely. Under some conditions we may run out of memory, leaving the server with very slow response times or completely unresponsive. A Linux machine also has a swap file, which acts as extra RAM for the machine.
Check the RAM available on the machine:
# free -g
              total        used        free      shared  buff/cache   available
Mem:             31           1          25           0           4          29
Swap:            31           1          29
Commands to check RAM usage on Linux machines:
1) top command to get the high Memory...
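Beyond free, a quick way to see which processes hold the most memory (my own additions, not part of the truncated list above):
# free -h (same as free -g but with human-readable units)
# ps aux --sort=-%mem | head -n 10 (top ten processes by memory usage)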

LINUX - CPU Usage

Checking CPU and RAM usage is an important task. Although monitoring is enabled to track CPU and RAM stats, we ran into situations where we needed to check CPU and RAM utilization live during a debugging session.
CPU: To list the number of CPUs run lscpu; the same can be checked with cat /proc/cpuinfo
$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                12
On-line CPU(s) list:   0-11
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 158
Model name:   ...
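For live utilization rather than topology, mpstat (from the sysstat package) and top are the usual follow-ups (example invocations of mine):
$ mpstat -P ALL 2 3 (per-CPU utilization, 3 samples 2 seconds apart)
$ top -b -n 1 | head -n 15 (one batch-mode snapshot of the busiest processes)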

Ansible Playbook

Ansible Playbook
Playbooks are created in YAML format using the modules available in Ansible.
To list the available modules:
$ ansible-doc -l
To know more about a specific module:
$ ansible-doc <modulename>
The playbook below uses the apt module to install the nginx and tftp packages on Ubuntu machines.
$ cat install.yml
---
- name: Install Packages
  gather_facts: false
  become: yes
  hosts: NODE
  tasks:
    - name: Install packages
      apt: pkg={{ item }} state=present update_cache=true
      with_items:
        - nginx
        - tftp
$ cat list.ini
[NODE]
localhost
server1
server2
$ ansible-playbook -i list.ini install.yml --syntax-check (verify syntax)
playbook: install.yml
The playbook runs on all the hosts under the NODE group in the list.ini file.
$ ansible-playbook -i...
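After the syntax check, a dry run is a safe next step before the real run (examples of mine, not the truncated command above):
$ ansible-playbook -i list.ini install.yml --check (dry run, reports what would change without changing it)
$ ansible-playbook -i list.ini install.yml --list-hosts (show which hosts the play will target)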

Ansible

Ansible and Basic Commands
Ansible is an open-source infrastructure management tool. Ansible files are in YAML format. Ansible is agentless and works over the SSH protocol. Ansible is written in Python and has many modules to perform various operations without any programming knowledge.
Ansible configuration file: /etc/ansible/ansible.cfg
remote_tmp = /commlocation (change to a common location to avoid permission issues)
forks      = 50 (adjust to the number of hosts to connect to in parallel)
Default hosts file: /etc/ansible/hosts
[TEST1]
server1
server2
[TEST2]
server3
server4
Command usage (Ansible uses the default hosts file):
$ ansible all -m ping -o (run on all the hosts in the hosts file)
$ ansible TEST1 -m ping -o (run on the nodes under the TEST1 group)
To use a custom hosts file, create a file and pass it with the -i argument as below:
$ cat hostfile.ini
[NODE1]
testserver1
testserver2
[NODE2]
testserver3
testserver4...
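A couple of ad-hoc examples of mine against such a custom inventory file:
$ ansible NODE1 -i hostfile.ini -m ping -o
$ ansible NODE1 -i hostfile.ini -m shell -a "uptime" -o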

Electric Commander Startup and Important Files

Start Electric Commander Services
Start/Stop Apache Service: /etc/init.d/commanderApache start|stop
Start/Stop Agent Service: /etc/init.d/commanderAgent start|stop
Start/Stop Commander Flow Server: /etc/init.d/commanderServer start|stop
Important Files:
1) Commander Server:
PATH: /commanderserver/installfolder/conf
wrapper.conf
database.properties
commander.properties
Logs folder: /commanderserver/installfolder/logs
commander-service.log
commander-hostname.log
event.log
2) Commander Web Server:
PATH: /commanderserver/installfolder/apache/conf
httpd.conf
ssl/certificates.{cer, csr, key, pem}
PATH: /commanderserver/installfolder/apache/logs
access.log
error.log
3) Agent Logs:
PATH: /commanderserver/installfolder/apache/logs
jagent-service.log
jagent.log
agent.log

Install Electric Commander on Single node Cluster

Install Electric Commander on a Single node Cluster
Electric Commander on a single node cluster needs all the components (Flow Server, Apache, Agent, and Database server) installed on a single node:
./CloudBeesFlow-x64-version --mode silent --installDirectory /local/mnt/electriccloud/electriccommander --installWeb --installAgent --installDatabase --installServer --unixServerUser qctecmdr --unixServerGroup users

Electric Commander Upgrade

Electric Commander Upgrade
The command below reads the previous installation file and installs the same components with the same user and group taken from that file; make sure the package has execute permissions.
./CloudBeesFlow-x64-version --mode silent --dataDirectory /opt/electriccloud/

Install Electric Commander

Install Electric Commander Web and Flow Server
Install Commander Apache and Agent:
./CloudBeesFlow-x64-version --mode silent --installDirectory /local/mnt/electriccloud/electriccommander --installWeb --installAgent --unixAgentUser qctecmdr --unixAgentGroup users --unixServerUser qctecmdr --unixServerGroup users
Start/Stop Apache and Agent Services:
/etc/init.d/commanderApache start|stop
/etc/init.d/commanderAgent start|stop
Install Commander Flow Server:
./CloudBeesFlow-x64-version --mode silent --installDirectory /local/mnt/electriccloud/electriccommander --installServer --unixServerUser qctecmdr --unixServerGroup users
Start/Stop Commander Flow Server:
/etc/init.d/commanderServer start|stop

SSL cert check

Below are different ways to check an SSL cert and its validity remotely.
NMAP can be used to check the port, the services running on the remote machine and the certificate it serves:
$ nmap --script ssl-cert -p PORT URL
The OpenSSL program is a command-line tool for using the various cryptography functions of OpenSSL's crypto library from the shell. It can be used for:
Creation and management of private keys, public keys and parameters
Public key cryptographic operations
Creation of X.509 certificates, CSRs and CRLs
Calculation of message digests
Encryption and decryption with ciphers
SSL/TLS client and server tests
Handling of S/MIME signed or encrypted mail
Time stamp requests, generation and verification
$ openssl s_client -showcerts -connect URL:PORT
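To pull just the validity dates, subject and issuer out of the remote certificate (a common follow-up of mine, with URL:PORT as above):
$ echo | openssl s_client -connect URL:PORT 2>/dev/null | openssl x509 -noout -dates -subject -issuer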

kernel fallback to older version - Ubuntu

A kernel upgrade is a simple and smooth process. However, there are cases where you always need a fallback kernel available.
NOTE: Test changes first using method 1 before making permanent changes to the system.
1) Temporary: boot into the previous version of the kernel. Interrupting the boot process with the ESC key will bring up the available kernel versions on the system; select the kernel.
2) Permanent
Method 1:
sudo view /boot/grub/grub.cfg and copy the full name of your old kernel.
sudo vi /etc/default/grub and, at the top, change GRUB_DEFAULT=0 to instead read GRUB_DEFAULT=your_kernel_name_from_grub.cfg, and save the change (you may like to keep a copy of the original file for safety).
sudo update-grub
Method 2:
1. grep -A100 submenu /boot/grub/grub.cfg | grep menuentry
2. update GRUB_DEFAULT=gnulinux-advanced-version>gnukernl-kernel-version in /etc/default/grub
...

Ubuntu Kernel and Upgrade

The kernel is the heart of the system; it translates what is human-readable into what the machine understands.
Steps:
1) Identify the current version of the kernel
2) Update the repository
3) Install the kernel
Identify the current version of the kernel: uname is used to check the kernel of a machine
# uname -a
Linux hostname 4.4.0-186-generic #216-Ubuntu SMP Wed Jul 1 05:34:05 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
# uname -sr
Linux 4.4.0-186-generic
Update the repository:
apt-get update
apt-get remove
Install the kernel:
apt-get install --install-recommends linux-generic-{version} linux-headers-{version} linux-modules-{version}
Reboot the server:
# reboot -f
Verify the current kernel using:
# uname -sr
Once verified, we can remove the old kernel version. To list all the old kernels use:
dpkg -l | grep linux-image*
apt-get remove --purge linux-image-{older-kernel-version}
Note: It is always recommended to have at least one alternate kernel to fall back...

Increase LVM

Extending an Existing Logical Volume
Steps:
1) Identify partitions
2) Identify the volume group
3) Extend the volume group
4) Extend the logical volume
Identify partitions: identify and format the disk to LVM partition type 8e; once created, create a PV as below
# pvcreate /dev/sdd (10G)
Identify the volume group: list the volume groups available on the machine using vgs
# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  vgname     1   5   0 wz--n- 299.04g      0
Extend the volume group:
# vgextend vgname /dev/sdd
# vgs
  VG       #PV #LV #SN Attr   VSize   VFree
  vgname     2   5   0 wz--n- 309.04g 10.00g
Extend the logical volume:
# lvextend -l +100%FREE /dev/vgname/lvname (adds all available free space to the logical volume)
or
# lvextend -L +10G /dev/vgname/lvname (increase the logical volume by 10G)
# resize2fs /dev/vgname/lvname

Reduce LVM

Reduce an LVM: consider an LVM of size 15G that you want to reduce to 10G.
Unmount the filesystem:
# umount /mnt/pathtreduce
# e2fsck -ff /dev/vgname/lvname
# resize2fs /dev/vgname/lvname 10G (10G is the final size to which you want to reduce)
# lvreduce -L -5G /dev/vgname/lvname
# mount /dev/vgname/lvname /mnt/pathtreduce

LVM

Logical Volume Management: dynamic partitions (create/resize/delete). Simply put, if you have multiple disks, LVM lets you group them into a single volume or split them into multiple volumes.
Steps:
1) Identify the correct partition attached to the node and format it
2) Create a physical volume on the disk
3) Create a volume group
4) Create a logical volume
5) Create a filesystem on the logical volume
Identify the correct partition attached to the node: identify the correct partition attached to the node using fdisk, lsblk or any other disk management tool (/dev/sdb or /dev/sdc) and convert the disk to an LVM disk
# fdisk /dev/sdb
Create a partition with the key "n", assigning +size of capacity; change the partition type with key "t", assigning "8e" (LVM partition); print the table with key "p"; enter key "w" to save changes.
# partprobe /dev/sdb (to update the kernel)
Create a physical volume on the disk:
# pvcreate /d...
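The remaining steps are cut off above; a typical continuation (my own sketch, with placeholder partition, VG, LV and mount-point names) looks like:
# pvcreate /dev/sdb1 (physical volume on the new partition)
# vgcreate vgname /dev/sdb1 (volume group)
# lvcreate -L 10G -n lvname vgname (10G logical volume)
# mkfs.ext4 /dev/vgname/lvname (filesystem on the logical volume)
# mount /dev/vgname/lvname /mnt/data (mount it)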

SSH and its configuration files permissions

Every user has a .ssh folder under the user's home directory to manage ssh connections and store the keys used to connect. The .ssh folder permissions play an important role in setting up connections, so make sure the permissions are intact:
chmod 700 ~/.ssh (folder that stores the ssh config and ssh keys)
chmod 644 ~/.ssh/authorized_keys (stores public keys used to establish passwordless ssh connections)
chmod 644 ~/.ssh/known_hosts (stores key information for hosts this node has connected to)
chmod 644 ~/.ssh/config (customize and manage ssh connections)
chmod 600 ~/.ssh/id_rsa (RSA private key)
chmod 644 ~/.ssh/id_rsa.pub (RSA public key)
For more information use the command below:
$ man ssh_config

Install SSH and enable ssh for root login

SSH (Secure Shell) is used to connect to remote Linux servers. It uses port 22 for connections. Package names: openssh-server and openssh-client.
Install ssh on the server:
apt-get install openssh-server -y
Start and stop the service:
$ service ssh (status|start|stop)
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2021-11-15 22:27:10 PST; 2 months 28 days ago
 Main PID: 1195 (sshd)
   CGroup: /system.slice/ssh.service
           └─1195 /usr/sbin/sshd -D
NOTE: By default, ssh login for root is disabled. To enable root login over ssh, change PermitRootLogin no to PermitRootLogin yes in the /etc/ssh/sshd_config file.
root@server:~# grep PermitRootLogin /etc/ssh/sshd_config
# Ubuntu16 requires PermitRootLogin set to 'yes'.
PermitRootLogin yes
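The config change can also be scripted; a sketch of my own (back up sshd_config first, and note the service may be named sshd rather than ssh on other distributions):
# cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# service ssh restart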

SSH-PASSWORDLESS

SSH: SSH (Secure Shell) is an open-source and widely trusted protocol used to log in remotely to other Linux machines.
Scenario: if you would like to connect to another remote host from the command line, you can use ssh <hostname> and enter the username and password.
Command:
server1 $ ssh server2
password:
Once you have entered the password you will be logged in to server2 via the command line. In the IT industry we may need to work with many hosts, and entering a password every time we connect is time-consuming and repetitive. To overcome this there is a password-less authentication method using ssh keys.
Generating ssh keys:
$ ssh-keygen -t rsa (press Enter for every question asked after executing the command)
This generates the RSA key pair id_rsa and id_rsa.pub. You can now manually append the .pub key from the node where you created the keys (server1) to authorized_keys on the remote host (server2), or run the below...
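One standard way to do this (my own example; the post's exact command is cut off above) is ssh-copy-id, which appends the local public key to the remote authorized_keys in a single step:
server1 $ ssh-copy-id username@server2 (prompts for the password once)
server1 $ ssh server2 (subsequent logins need no password)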

Tools for the working Linux professional

There are many tools and packages that help make an IT admin's life easier. Below are a few that I have used:
ssh, nslookup, ping, fping, echo, strace, ps, vmstat, top, netstat, ss, awk, sed, dmesg, jq, xmlstarlet
Basic Tools:
OS installation: one should know the OS installation and boot process to better understand how the OS and applications work.
SSH: if you are a working Linux professional, learning ssh and configuring passwordless ssh is the first thing to start with.
Cluster-ssh: cluster ssh is a simple tool to manage multiple hosts; I used it way back before I started working with Ansible.
Cron job: a cron job is used to run scripts or tasks on a Linux machine on a schedule.
Ansible: this makes our job easy. If you haven't started Ansibling, you had better start today. Easy to learn and simple to use; you can start automating your day-t...