Sunday, November 14, 2021

Linux: Delete Multiple Lines in VIM and Search for String in VIM

https://linuxize.com/post/vim-delete-line/

  • Press the Esc key to go to normal mode.
  • Place the cursor on the first line you want to delete.
  • Type 5dd to delete five lines, starting from the current one (no Enter is needed; dd acts immediately in normal mode).
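For scripts, the same range deletion can be done non-interactively; a minimal sketch using sed (a separate tool, not Vim itself) that removes five lines starting at line 2:

```shell
# Delete lines 2 through 6 (five lines), printing the remainder to stdout.
printf 'a\nb\nc\nd\ne\nf\ng\n' | sed '2,6d'
# Output:
# a
# g
```

Inside Vim itself, the ex command :2,6d deletes the same range.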
----------------------------------------------------------------


The basic steps to perform a search in Vim are as follows:

  • Press /.
  • Type the search pattern.
  • Press Enter to perform the search.
  • Press n to find the next occurrence or N to find the previous occurrence.
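Outside Vim, grep -n gives a quick preview of where a pattern occurs, printing each matching line with the line number Vim would jump to:

```shell
# Print matching lines prefixed with their line numbers.
printf 'foo\nbar\nfoo baz\n' | grep -n 'foo'
# Output:
# 1:foo
# 3:foo baz
```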

Friday, November 12, 2021

Mount S3 Bucket onto Linux Folder : s3FS : FUSE - Filesystem in Userspace

https://levelup.gitconnected.com/how-to-mount-s3-bucket-on-an-ec2-linux-instance-df44c7885aae

https://medium.com/tensult/aws-how-to-mount-s3-bucket-using-iam-role-on-ec2-linux-instance-ad2afd4513ef

S3FS

An S3 bucket can be mounted on an AWS EC2 instance as a file system using s3fs. s3fs is a FUSE file system that allows you to mount an Amazon S3 bucket as a local file system.

Filesystem in Userspace (FUSE) is a simple interface for userspace programs to export a virtual file system to the Linux kernel.


  • Install s3fs
  • vim /etc/passwd-s3fs [enter the AWS Access Key and Secret Key, in ACCESS_KEY:SECRET_KEY format, of an IAM user with full S3 permissions; then chmod 600 the file, since s3fs rejects credential files readable by others]
  • Mount the bucket onto a Linux folder:
          mkdir /mys3bucket
          s3fs your_bucketname -o use_cache=/tmp -o allow_other -o uid=1000 -o mp_umask=002 -o multireq_max=5 /mys3bucket
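The steps above can be sketched as a script; the bucket name, mount point, and key values are placeholders to replace for your environment, and the install command assumes a Debian/Ubuntu host:

```shell
# 1. Install s3fs (Debian/Ubuntu; use yum/dnf on RHEL-family systems).
sudo apt-get install -y s3fs

# 2. Store credentials as ACCESS_KEY:SECRET_KEY (placeholder values shown),
#    readable only by the owner.
echo 'AKIA_EXAMPLE_KEY:example_secret_key' | sudo tee /etc/passwd-s3fs >/dev/null
sudo chmod 600 /etc/passwd-s3fs

# 3. Create the mount point and mount the bucket.
sudo mkdir -p /mys3bucket
sudo s3fs your_bucketname /mys3bucket -o use_cache=/tmp -o allow_other -o uid=1000 -o mp_umask=002 -o multireq_max=5
```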

Friday, October 22, 2021

EKS : Kubernetes : AWS : Install Kubernetes on Control Plane and Configure EKS with kubectl

visudo [add these entries to grant the users passwordless sudo]:
703250313 ALL=(ALL) NOPASSWD: ALL
eks                ALL=(ALL) NOPASSWD: ALL

export VISUAL=vim
export EDITOR="$VISUAL"


> Bootstrapping clusters with kubeadm
> Installing Kubernetes with kops
> Installing Kubernetes with Kubespray

Installing kubeadm, kubelet and kubectl
""""""""""You will install these packages on all of your machines:"""""""""
kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line util to talk to your cluster.

apt-get install bash-completion
source /usr/share/bash-completion/bash_completion
type _init_completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
kubectl completion bash
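The completion can also be wired to a short alias, as the kubectl docs suggest (k is an arbitrary alias choice here):

```shell
# Alias kubectl to k and register the same completion function for the alias.
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
```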


> sudo snap install kubectl --classic
> kubectl version --client

history | grep SEARCH_STRING


curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
aws-iam-authenticator help
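Once installed, aws-iam-authenticator is typically referenced from the user entry of the kubeconfig as an exec credential plugin. A sketch of that fragment; the user and cluster names are placeholders, and the exec apiVersion varies by client version:

```yaml
# kubeconfig fragment (sketch): user entry that calls aws-iam-authenticator
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "my-eks-cluster"   # placeholder cluster name
```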


find ./path/subpath -name searchFileName
Syntax :
$ find [where to start searching from] [-options] [what to find]
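A concrete run of that pattern, building a small tree and finding files by name (the demo paths are illustrative):

```shell
# Build a small tree, then find all .log files under it (regular files only).
mkdir -p demo/sub
touch demo/a.log demo/sub/b.log demo/sub/c.txt
find demo -type f -name '*.log'
# Output (order may vary):
# demo/a.log
# demo/sub/b.log
```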
 
 
 Instead of manually making one node the Kubernetes master and the rest workers with "kubeadm",
 we go for the EKS approach
 
 kubectl cluster-info
 kubectl 
 kubectl get pods
 
 
 eks@GRDLUSAWSJS01:~$ kubectl get deployment  -n fda
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
analytics            1/1     1            1           378d
business-rules       1/1     1            1           330d
case-management      1/1     1            1           330d
classifiy-rule       1/1     1            1           378d
cora-mail            1/1     1            1           378d
data-processor       1/1     1            1           330d
doc-conversion-api   1/1     1            1           323d
eaas-service         1/1     1            1           378d
email-segmentator    1/1     1            1           378d
flowable             1/1     1            1           330d
genex-runtime        1/1     1            1           378d
ief-classification   1/1     1            1           377d
ief-extraction       1/1     1            1           377d
ief-tensorflow       1/1     1            1           377d
ml-webapp            1/1     1            1           332d
modelserver          1/1     1            1           378d
nlu-service          1/1     1            1           378d
ocr-nuance           1/1     1            1           378d
output-generation    1/1     1            1           330d
platform             1/1     1            1           330d
slot-modelserver     1/1     1            1           368d
slot-serving         1/1     1            1           368d
trainer              1/1     1            1           378d
usaaddress           1/1     1            1           330d
vea-cc               1/1     1            1           378d
vea-nlp              1/1     1            1           378d

eks@GRDLUSAWSJS01:~$ kubectl rollout history deployment vea-cc -n fda
deployment.apps/vea-cc
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>
4         <none>


eks@GRDLUSAWSJS01:~$ kubectl get nodes
NAME                            STATUS   ROLES    AGE    VERSION
ip-10-102-25-101.ec2.internal   Ready    <none>   151d   v1.17.9-eks-4c6976
ip-10-102-25-105.ec2.internal   Ready    <none>   274d   v1.17.9-eks-4c6976
ip-10-102-25-142.ec2.internal   Ready    <none>   69d    v1.17.9-eks-4c6976
ip-10-102-25-143.ec2.internal   Ready    <none>   179d   v1.17.9-eks-4c6976
ip-10-102-25-149.ec2.internal   Ready    <none>   330d   v1.17.9-eks-4c6976
ip-10-102-25-186.ec2.internal   Ready    <none>   260d   v1.17.9-eks-4c6976
ip-10-102-25-247.ec2.internal   Ready    <none>   260d   v1.17.9-eks-4c6976
ip-10-102-25-29.ec2.internal    Ready    <none>   302d   v1.17.9-eks-4c6976
ip-10-102-25-31.ec2.internal    Ready    <none>   326d   v1.17.9-eks-4c6976
ip-10-102-25-40.ec2.internal    Ready    <none>   260d   v1.17.9-eks-4c6976
ip-10-102-26-106.ec2.internal   Ready    <none>   330d   v1.17.9-eks-4c6976
ip-10-102-26-111.ec2.internal   Ready    <none>   330d   v1.17.9-eks-4c6976
ip-10-102-26-55.ec2.internal    Ready    <none>   179d   v1.17.9-eks-4c6976
ip-10-102-26-58.ec2.internal    Ready    <none>   233d   v1.17.9-eks-4c6976
ip-10-102-26-74.ec2.internal    Ready    <none>   179d   v1.17.9-eks-4c6976
ip-10-102-26-88.ec2.internal    Ready    <none>   164d   v1.17.9-eks-4c6976


eks@GRDLUSAWSJS01:~$ kubectl cluster-info
Kubernetes master is running at https://23BB04FB3E3508D16899825B2B3F38FA.yl4.us-east-1.eks.amazonaws.com
CoreDNS is running at https://23BB04FB3E3508D16899825B2B3F38FA.yl4.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://23BB04FB3E3508D16899825B2B3F38FA.yl4.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
eks@GRDLUSAWSJS01:~$


Linux : PING - Internet Speed Test : 8.8.8.8 - Google DNS

https://wisetut.com/best-ping-test-ip-addresses-google-dns-8-8-8-8-cloudflare-dns-1-1-1-1/


The network connection to the 8.8.8.8 Google DNS service can be tested with the ping command, as shown below.

$ ping 8.8.8.8

In the output, the time value is the round-trip time (RTT); consistently low values indicate a healthy, low-latency connection.
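Beyond eyeballing the individual times, the RTT values can be averaged with a small awk pipeline; this is a sketch that assumes the time= field layout of Linux iputils ping and a working network connection:

```shell
# Average the time=X ms values from four pings (requires network access).
ping -c 4 8.8.8.8 | awk -F'time=' '/time=/ {sum += $2; n++} END {if (n) printf "avg %.1f ms\n", sum/n}'
```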

Linux : Ubuntu - APT vs SNAP

https://phoenixnap.com/kb/snap-vs-apt

Azure - Pipeline - Add Approver for Stage

https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass