Thursday, June 17, 2021

~/.aws  ~/.aws/config  ~/.aws/credentials  IAM Access Key / Secret Key

 aws configure

aws configure --profile karan

aws s3 ls 

aws s3 ls --profile fubar


vi ~/.aws/config

vi ~/.aws/credentials
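A minimal sketch of what those two files typically look like after running aws configure (the region/output values and access keys below are placeholders; the karan profile matches the --profile example above):

~/.aws/config
[default]
region = us-east-1
output = json
[profile karan]
region = us-east-1

~/.aws/credentials
[default]
aws_access_key_id = AKIAxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[karan]
aws_access_key_id = AKIAxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx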


aws iam list-users --profile default


[ec2-user@ip-172-31-16-83 .aws]$ aws iam list-users

An error occurred (AccessDenied) when calling the ListUsers operation: User: arn:aws:sts::061116847625:assumed-role/S3FullAccessFromEC2NoCredReq/i-0622cebe406df06cd is not authorized to perform: iam:ListUsers on resource: arn:aws:iam::061116847625:user/
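The error makes sense: the CLI is running under the instance-profile role S3FullAccessFromEC2NoCredReq, which only grants S3 access, so IAM calls are denied unless a profile with an IAM user's keys is used (as with --profile default above). To see which identity the CLI is actually using:

aws sts get-caller-identity
aws sts get-caller-identity --profile default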

Mount EBS to EC2

https://devopscube.com/mount-ebs-volume-ec2-instance/

sudo cp /etc/fstab /etc/fstab.bak

/dev/xvdf       /hdd2   ext4    defaults,nofail        0       0
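A rough sketch of the full mount flow behind that fstab entry (device /dev/xvdf and mount point /hdd2 as above; run mkfs only on a brand-new, empty volume, because it wipes existing data):

lsblk                                  # confirm the attached device name (xvdf here)
sudo file -s /dev/xvdf                 # output "data" means no filesystem yet
sudo mkfs -t ext4 /dev/xvdf            # format only if the volume is new and empty
sudo mkdir /hdd2
sudo mount /dev/xvdf /hdd2
df -h /hdd2                            # verify the mount
echo '/dev/xvdf  /hdd2  ext4  defaults,nofail  0  0' | sudo tee -a /etc/fstab
sudo mount -a                          # sanity-check the fstab entry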

* chown, chmod, chgrp, chattr, id - 2775 vs 775 (the leading 2 is the setgid bit: on a directory it makes new files inherit the directory's group)
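A quick sketch of the difference on a shared directory (the /shared path and devs group are made up for illustration):

sudo mkdir /shared
sudo chgrp devs /shared
sudo chmod 775 /shared         # rwxrwxr-x; new files keep the creator's primary group
sudo chmod 2775 /shared        # same permissions plus setgid; new files are group-owned by devs
ls -ld /shared                 # drwxrwsr-x - the 's' in the group slot is the setgid bit
id                             # shows the current user's uid, gid and group memberships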

 

EBS vs EFS - AWS - Elastic Block Store vs Elastic File System

https://aws.amazon.com/getting-started/tutorials/create-network-file-system/

https://www.missioncloud.com/blog/resource-amazon-ebs-vs-efs-vs-s3-picking-the-best-aws-storage-option-for-your-business


EBS  --------> 1 EC2 Instance is mapped to EBS - Block Level Storage


EFS ----------> Multiple EC2 Instances - Mounted to this File System - Common across multiple Apps


The main difference between EBS and EFS is that an EBS volume is attached to a single EC2 instance (and lives in a single Availability Zone), while EFS lets you mount the same file system concurrently from many instances across Availability Zones.


S3 -->  Object-level storage. S3 is not limited to EC2; it integrates with CloudFront, which is commonly used to serve media and other static content hosted in S3.
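As a rough illustration of the "mounted by multiple instances" point, mounting an EFS file system on an EC2 instance looks roughly like this (fs-12345678 is a placeholder file system ID; the same commands are repeated on every instance that needs the share):

sudo yum install -y amazon-efs-utils
sudo mkdir /mnt/efs
sudo mount -t efs fs-12345678:/ /mnt/efs
df -h /mnt/efs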


Monday, June 14, 2021

IPTABLES, Firewall, Actual Firewall + Security Group - AWS

https://www.tecmint.com/fix-no-route-to-host-ssh-error-in-linux/


https://www.cyberciti.biz/faq/how-to-list-all-iptables-rules-in-linux/                            


sudo nmap -p 6900,25,22,8080,21000,7856,9084 10.79.197.70

sudo iptables -S


sudo nmap -p 6900,25,22,8080,21000,7856,9084 10.79.197.70

https://www.e2enetworks.com/help/knowledge-base/how-to-open-ports-on-iptables-in-a-linux-server/#step-1-list-the-current-iptables-rules


sudo iptables -D IN_public_allow -p tcp -m tcp --dport 7856 -m conntrack --ctstate NEW,UNTRACKED -j ACCEPT

sudo iptables -A IN_public_allow -p tcp -m tcp --dport 7856 -m conntrack --ctstate NEW,UNTRACKED -j ACCEPT
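To verify the rule and keep it across restarts - the IN_public_allow chain suggests firewalld is managing these rules, in which case firewall-cmd is the persistent route; treat this as a sketch, not a definitive recipe:

sudo iptables -L IN_public_allow -n --line-numbers   # confirm the port 7856 rule is present
sudo iptables-save | grep 7856                       # view it in iptables-save format
sudo firewall-cmd --permanent --add-port=7856/tcp    # persistent equivalent under firewalld
sudo firewall-cmd --reload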


Saturday, June 5, 2021

Azure Pipelines, YAML Schema





https://faun.pub/reduce-your-build-time-using-caching-in-azure-pipelines-7a7bd0201cee

Resources
Trigger
Pipeline -> Stage(Implicit) -> Jobs -> Steps [Task, Script, Checkout]

jobs:
- job: Job_1
  displayName: Agent job 1   
  pool:
    vmImage: ubuntu-latest
  steps:
  - script: echo "Hello World"

pool:
  name: Default
  demands: SpecialSoftware # Check if SpecialSoftware capability exists
  
  
stages:
- stage: A
  jobs:
  - job: A1
  - job: A2
- stage: B
  jobs:
  - job: B1
  - job: B2  




stages:
- stage: A

# stage B runs if A fails
- stage: B
  condition: failed()

# stage C runs if B succeeds
- stage: C
  dependsOn:
  - A
  - B
  condition: succeeded('B')




If you choose to specify a pool at the stage level, then all jobs defined in that stage will use that pool unless otherwise specified at the job-level.
stages:
- stage: A
  pool: StageAPool
  jobs:
  - job: A1 # will run on "StageAPool" pool based on the pool defined on the stage
  - job: A2 # will run on "JobPool" pool
    pool: JobPool





jobs:
- job: Foo

  steps:
  - script: echo Hello!
    condition: always() # this step will always run, even if the pipeline is canceled

- job: Bar
  dependsOn: Foo
  condition: failed() # this job will only run if Foo fails





jobs:
- job: Debug
  steps:
  - script: echo hello from the Debug build
- job: Release
  dependsOn: Debug
  steps:
  - script: echo hello from the Release build


You can organize pipeline jobs into stages. Stages are the major divisions in a pipeline: "build this app", "run these tests", and "deploy to pre-production" are good examples of stages. They are logical boundaries in your pipeline where you can pause the pipeline and perform various checks.

Pipeline > Stages > Stage > Jobs > Job > Steps > Step

jobs:
- job: A
  steps:
  - bash: echo "A"

- job: B
  steps:
  - bash: echo "B"
  
  
If you organize your pipeline into multiple stages, you use the stages keyword.

stages:
- stage: A
  jobs:
  - job: A1
  - job: A2

- stage: B
  jobs:
  - job: B1
  - job: B2
  
When you define multiple stages in a pipeline, by default, they run sequentially in the order in which you define them in the YAML file. The exception to this is when you add dependencies. With dependencies, stages run in the order of the dependsOn requirements.

stages:
- stage: string
  dependsOn: string
  condition: string
  -----------------------------------------------------------
You can organize your pipeline into jobs. Every pipeline has at least one job. A job is a series of steps that run sequentially as a unit. In other words, a job is the smallest unit of work that can be scheduled to run.

In the simplest case, a pipeline has a single job. In that case, you do not have to explicitly use the job keyword.
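A minimal sketch of that single-job form, with no job keyword and just top-level steps (the vmImage value is only an example):

pool:
  vmImage: ubuntu-latest
steps:
- script: echo "single implicit job"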
  
jobs:
- job: myJob
  timeoutInMinutes: 10
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - bash: echo "Hello world"
  
  
  /usr/lib/jvm/adoptopenjdk-11-hotspot-amd64


# update-alternatives --config java
update-alternatives --list java
echo ls /etc/alternatives
ls /etc/alternatives
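If the intent of those lines is to point a pipeline at that JDK, one hedged option (assuming a Linux agent where /usr/lib/jvm/adoptopenjdk-11-hotspot-amd64 exists) is to set JAVA_HOME from a script step via a logging command:

steps:
- script: |
    ls /etc/alternatives
    echo "##vso[task.setvariable variable=JAVA_HOME]/usr/lib/jvm/adoptopenjdk-11-hotspot-amd64"
  displayName: Point JAVA_HOME at AdoptOpenJDK 11
- script: echo "JAVA_HOME is now $(JAVA_HOME)"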
  
  


Azure - Pipeline - Add Approver for Stage

https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass
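The approvals themselves are configured as checks on an environment (or other resource) in the Azure DevOps UI rather than in YAML; the pipeline picks them up by targeting that environment from a deployment job. A minimal sketch, with a hypothetical environment named production:

stages:
- stage: Deploy
  jobs:
  - deployment: DeployWeb
    environment: production   # approvals/checks configured on this environment gate the stage
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploying after approval"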