Sunday, July 18, 2021

Docker: Docker Commands : Exhaustive

docker image pull <image>
docker image ls

docker container run <image>
docker container run --name <containerName> <image>
docker container run -d <image>
docker container run -p HostPort:ContainerPort <image>
docker container run -it <image> command

docker container --help

docker container start <container>
docker container start -ai <container>

docker container exec -it  <container>  command

docker container top <container>

Would the following two commands create a port conflict error with each other?
No. Why not?
Each line below creates its own container, and each maps a different host port (80 vs 8080) to the container's port 80.

docker container run -p 80:80 -d nginx
docker container run -p 8080:80 -d nginx





Thursday, July 15, 2021

Maven : -DskipTests vs -Dmaven.test.skip=true

https://stackoverflow.com/questions/25639336/whats-the-difference-between-dskiptests-and-dmaven-test-skip-true 


Maven docs:

-DskipTests compiles the tests, but skips running them

-Dmaven.test.skip=true skips compiling the tests and does not run them

Also worth noting:

maven.test.skip is honored by Surefire, Failsafe and the Compiler Plugin


So the complete set of test options for Maven would be:

  • -DskipTests ==> compiles the tests, but skips running them
  • -Dmaven.test.skip.exec=true ==> the tests get compiled, but not executed.
  • -Dmaven.test.skip=true ==> doesn't compile or execute the tests.

 

PKIX path building failed: sun.security.provider.certpath

https://stackoverflow.com/questions/21076179/pkix-path-building-failed-and-unable-to-find-valid-certification-path-to-requ



  1. Go to the URL in your browser:
  • Firefox - click the lock icon next to the URL, then "More information" > "Security" > "View Certificate" > "Details" > "Export...". Pick a name and save the file as e.g. example.cer
  • Chrome - click the site icon left of the address bar, select "Certificate" -> "Details" -> "Export" and save in the format "DER-encoded binary, single certificate".
  2. Now you have a certificate file and have to add it to your JVM. Determine the location of the cacerts file, e.g. C:\Program Files (x86)\Java\jre1.6.0_22\lib\security\cacerts.

  3. Next, import the example.cer file into cacerts on the command line (may need an administrator command prompt):

keytool -import -alias example -keystore "C:\Program Files (x86)\Java\jre1.6.0_22\lib\security\cacerts" -file example.cer

You will be asked for a password; the default is changeit

Azure Notifications

 


Redhat Service : VSTS Agent Service :EAP Service : Azure Agent Service : Configure as a service

 https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/installation_guide/configuring_jboss_eap_to_run_as_a_service

sudo cp EAP_HOME/bin/init.d/jboss-eap.conf /etc/default

sudo cp EAP_HOME/bin/init.d/jboss-eap-rhel.sh /etc/init.d

sudo chmod +x /etc/init.d/jboss-eap-rhel.sh

sudo chkconfig --add jboss-eap-rhel.sh

sudo service jboss-eap-rhel start

------------------------------------------------------------------------------------------------------

https://www.geeksforgeeks.org/setuid-setgid-and-sticky-bits-in-linux-file-permissions/
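The linked article covers setuid, setgid, and sticky bits; a minimal sketch on scratch files (the paths are temporary, created only for the demo):

```shell
# scratch directory so nothing real is touched
DEMO=$(mktemp -d)
touch "$DEMO/prog"

chmod u+s "$DEMO/prog"    # setuid: executable runs with the file owner's UID
chmod g+s "$DEMO/prog"    # setgid: runs with the file's group (on dirs: new files inherit the group)
chmod +t  "$DEMO"         # sticky bit on a dir: only a file's owner may delete it (like /tmp)

ls -ld "$DEMO" "$DEMO/prog"   # look for 's' in the user/group exec slots and 't' on the dir
```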

------------------------------------------------------------------------------------------------------

https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-linux?view=azure-devops

sudo ./svc.sh install [username]

sudo ./svc.sh status

sudo ./svc.sh start

sudo ./svc.sh stop

du vs df

https://stackoverflow.com/questions/10103604/linux-command-line-du-how-to-make-it-show-only-total-for-each-directories/10103709#10103709

du -cksh *


df -h
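Side by side: du measures what files use, df measures what the filesystem has left. A scratch-directory sketch (the demo file and its size are made up):

```shell
DEMO=$(mktemp -d)
dd if=/dev/zero of="$DEMO/blob" bs=1024 count=512 2>/dev/null   # ~512 KB demo file

du -cksh "$DEMO"/*   # per-entry sizes plus a grand total (-c), human-readable
du -sh "$DEMO"       # one summarized total for the directory

df -h "$DEMO"        # usage of the filesystem that directory lives on
```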

What does 'total' mean in "ls" command output ?


https://lists.fedoraproject.org/pipermail/users/2006-November/317250.html

> ls -la

> total 8

> drwxr-xr-x 2 fajar users 4096 2006-11-06 11:12 .

> drwxr-xr-x 3 fajar users 4096 2006-11-06 11:12 ..

> What 'total 8' stands for?

> Thank you very much.


That is the total number of file system blocks, including indirect blocks, used by the listed files.
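To see the blocks being counted, compare ls -la with ls -s in a scratch directory (exact numbers depend on the file system's block size):

```shell
DEMO=$(mktemp -d)
printf 'hello' > "$DEMO/tiny"   # 5 bytes of content, but it still occupies at least one block

ls -la "$DEMO"   # first line "total N": blocks used by the listed entries
ls -s "$DEMO"    # -s shows the per-entry block usage that the total sums up
```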

How to see Sizes in MB/GB for Linux 'ls' command

https://net2.com/how-to-display-files-sizes-in-mb-in-linux-ubuntu/

Want to read sizes in MB/GB with ls? Use the human-readable flag:

ls -lh

Difference between "ls -1" and "ls -l" ?


ls -1     // digit one: one entry per line, names only
ls -l     // letter ell: long listing with permissions, owner, size, date
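Easy to confuse in print: the digit one versus the lowercase letter L. A quick check in a scratch directory with three empty files:

```shell
DEMO=$(mktemp -d)
touch "$DEMO/a" "$DEMO/b" "$DEMO/c"

ls -1 "$DEMO"            # digit one: names only, one per line (handy for scripting)
ls -l "$DEMO"            # letter ell: long listing with permissions, owner, size, date

ls -1 "$DEMO" | wc -l    # one line per entry, so this prints 3
```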

AWS : EC2 Key Pairs : How to connect if I lose my Private Key : Replace Your SSH Keys: Login : AWS Key Lost


https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html


A key pair, consisting of a public key and a private key, is a set of security credentials that you use to prove your identity when connecting to an EC2 instance. Amazon EC2 stores the public key on your instance, and you store the private key. 


Create a key pair using Amazon EC2

Create a key pair using a third-party tool and import the public key to Amazon EC2

Tag a public key

Retrieve the public key from the private key

Retrieve the public key through instance metadata

Locate the public key on an instance

Identify the key pair that was specified at launch

Verify your key pair's fingerprint

Add or replace a key pair for your instance

Delete your key pair

Delete a public key from an instance


--------------------------------------------------------------------------------------------------------

::Connect to your Linux instance if you lose your private key::

Lost private key --> detach root volume from original instance - attach it to a temp instance - modify authorized_keys - detach from temp - reattach to original


https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/replacing-lost-key-pair.html


Step 1: Create a new key pair

Step 2: Get information about the original instance and its root volume

Step 3: Stop the original instance

Step 4: Launch a temporary instance

Step 5: Detach the root volume from the original instance and attach it to the temporary instance

Step 6: Add the new public key to authorized_keys on the original volume mounted to the temporary instance

Step 7: Unmount and detach the original volume from the temporary instance, and reattach it to the original instance

Step 8: Connect to the original instance using the new key pair

Step 9: Clean up
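Step 6 boils down to appending the new public key to authorized_keys on the mounted volume. A local sketch, using a temporary directory to stand in for the mount point and a placeholder key string (both are assumptions, not the real values):

```shell
# stand-in for the original root volume mounted on the temporary instance
MOUNT=$(mktemp -d)
NEW_PUBKEY="ssh-rsa AAAAB3...placeholder new-key"   # contents of the new .pub file (placeholder)

# authorized_keys lives under the login user's home on the *mounted* volume
mkdir -p "$MOUNT/home/ec2-user/.ssh"
echo "$NEW_PUBKEY" >> "$MOUNT/home/ec2-user/.ssh/authorized_keys"

# sshd ignores keys with loose permissions
chmod 700 "$MOUNT/home/ec2-user/.ssh"
chmod 600 "$MOUNT/home/ec2-user/.ssh/authorized_keys"
```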

--------------------------------------------------------------------------------------------------------

To add or replace a key pair


Connect to the instance using the old mechanism => add the new public key to authorized_keys


https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#replacing-key-pair


Linux Variables: Shell Variables: Local Variables : Environment Variables

 https://www.tutorialspoint.com/unix/unix-using-variables.htm

https://www.serverlab.ca/tutorials/linux/administration-linux/how-to-set-environment-variables-in-linux/

export NAME=VALUE

export JAVA_HOME=/opt/openjdk11


unset VARIABLE_NAME

unset JAVA_HOME


::Listing All Set Variables::

set          // lists all shell variables and functions
env          // lists exported environment variables only


variable_name=variable_value

NAME="John Doe"

echo $NAME


readonly NAME


export NAME


Linux Variable Types

When a shell is running, three main types of variables are present:

Local Variables

Environment Variables 

Shell Variables 
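The local-vs-environment distinction is easiest to see with a child process: plain assignments stay in the current shell, exported variables are inherited (variable names here are made up for the demo):

```shell
LOCAL_VAR="shell only"        # local/shell variable: not inherited by children
export ENV_VAR="inherited"    # environment variable: copied into child processes

# a child shell sees ENV_VAR but not LOCAL_VAR
sh -c 'echo "ENV_VAR=${ENV_VAR:-<unset>} LOCAL_VAR=${LOCAL_VAR:-<unset>}"'
```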

AWS : Capacity Reservation: AWS Calculator : Savings Plan: Reserved Instances :Compute Savings Plans : EC2 Instance Savings Plans

Capacity Reservation

 When you create a Capacity Reservation, we reserve the specified capacity for your use. The reserved capacity is charged at the selected instance type’s On-Demand rate whether an instance is running in it or not. You can also use your regional reserved instances with your Capacity Reservations to benefit from billing discounts.


https://calculator.aws/#/

AWS Calculator


Reserved Instances

 Platform[Linux, Windows], Tenancy[Default/Dedicated], Offering class[Convertible,Standard] 

 Instance type[c2.medium], Term[1 year/3 Years], Payment option[All Upfront, Partial Upfront, No Upfront]


 

Savings Plan

Savings Plans also offer significant savings on your Amazon EC2 costs compared to On-Demand Instance pricing. With Savings Plans, you make a commitment to a consistent usage amount, measured in USD per hour. This gives you the flexibility to use the instance configurations that best meet your needs and continue to save money, instead of committing to a specific instance configuration.

---------------------------------------

SageMaker Savings Plans

Compute Savings Plans

EC2 Instance Savings Plans

---------------------------------------

Compute Savings Plans [Better than EC2 Instance Savings Plan]

Applies to EC2 instance usage, AWS Fargate, and AWS Lambda service usage, regardless of region, instance family, size, tenancy, and operating system.

Term, Payment option, Purchase commitment [Hourly commitment]

---------------------------------------

EC2 Instance Savings Plans

Applies to instance usage within the committed EC2 family and region, regardless of size, tenancy, and operating system.

Region,Instance family, Term, Payment option, Purchase commitment [Hourly commitment]

Tuesday, July 13, 2021

Docker Compose

  • docker-compose logs --follow elasticsearch
  • docker-compose -f docker-compose.elastic.yml up -d
  • docker-compose ps 
  • docker-compose down 
  • docker-compose up 
  • docker-compose -f filename up
  • docker-compose images
  • docker-compose down -v                                                [also removes named volumes declared in the Compose file and anonymous volumes]

Commands:
  build              Build or rebuild services
  bundle             Generate a Docker bundle from the Compose file
  config             Validate and view the Compose file
  create             Create services
  down               Stop and remove containers, networks, images, and volumes
  events             Receive real time events from containers
  exec               Execute a command in a running container
  help               Get help on a command
  images             List images
  kill               Kill containers
  logs               View output from containers
  pause              Pause services
  port               Print the public port for a port binding
  ps                 List containers
  pull               Pull service images
  push               Push service images
  restart            Restart services
  rm                 Remove stopped containers
  run                Run a one-off command
  scale              Set number of containers for a service
  start              Start services
  stop               Stop services
  top                Display the running processes
  unpause            Unpause services
  up                 Create and start containers
  version            Show the Docker-Compose version information


Monday, July 12, 2021

Docker Compose YAML Elasticsearch HTTPS

https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls-docker.html

---------------------------------
services:
  create_certs:
    container_name: create_certs
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.0
    # command: >
    #   bash -c '
    #     if [[ ! -f /certs/bundle.zip ]]; then
    #       bin/elasticsearch-certutil cert --silent --pem --in config/certificates/instances.yml -out /certs/bundle.zip;
    #       unzip /certs/bundle.zip -d /certs; 
    #     fi;
    #     chown -R 1000:0 /certs
    #   '
    # user: "0"
    # working_dir: /usr/share/elasticsearch
    # volumes: ['certs:/certs', '.:/usr/share/elasticsearch/config/certificates']

    command: >
      bash -c '
        if [[ ! -f ./config/certificates/elastic-certificates.p12 ]]; then
          bin/elasticsearch-certutil cert -out config/certificates/elastic-certificates.p12 -pass ""
        fi;
        chown -R 1000:0 /usr/share/elasticsearch/config/certificates
      '
    user"0"
    working_dir/usr/share/elasticsearch
    volumes: ['certs:/usr/share/elasticsearch/config/certificates']

  elasticsearch:
    container_name: elasticsearch
    depends_on: [create_certs]
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.0
    environment:
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - ELASTIC_PASSWORD=$ELASTIC_PASSWORD # password for default user: elastic 
      - xpack.security.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=$CERTS_DIR/elastic-certificates.p12
      - xpack.security.transport.ssl.truststore.path=$CERTS_DIR/elastic-certificates.p12
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.verification_mode=none
      - xpack.security.http.ssl.truststore.path=$CERTS_DIR/elastic-certificates.p12
      - xpack.security.http.ssl.keystore.path=$CERTS_DIR/elastic-certificates.p12

      # - xpack.license.self_generated.type=trial 
      # - xpack.security.enabled=true
      # - xpack.security.http.ssl.enabled=true
      # - xpack.security.http.ssl.key=$CERTS_DIR/es01/es01.key
      # - xpack.security.http.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      # - xpack.security.http.ssl.certificate=$CERTS_DIR/es01/es01.crt
      # - xpack.security.transport.ssl.enabled=true
      # - xpack.security.transport.ssl.verification_mode=certificate 
      # - xpack.security.transport.ssl.certificate_authorities=$CERTS_DIR/ca/ca.crt
      # - xpack.security.transport.ssl.certificate=$CERTS_DIR/es01/es01.crt
      # - xpack.security.transport.ssl.key=$CERTS_DIR/es01/es01.key

    volumes: ['esdata:/usr/share/elasticsearch/data', 'certs:$CERTS_DIR']
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9200:9200"

volumes: {"esdata""certs"}

Saturday, July 10, 2021

TLS SSL Docker Elasticsearch

https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup.html#generate-certificates

https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup-https.html#encrypt-http-communication

--------------------------------------------------------------------------------------------------------

https://stackoverflow.com/questions/50832249/enable-authentication-in-elasticsearch-with-docker-environment-variable

https://dev.to/thehoodsdev/securing-our-dockerized-elastic-stack-3o15

https://medium.com/@mandeep_m91/setting-up-elasticsearch-and-kibana-on-docker-with-x-pack-security-enabled-6875b63902e6


https://askubuntu.com/questions/772050/reset-the-password-in-ubuntu-linux-bash-in-windows

wsl --user root


Elasticsearch has its own certificate-creation tool (elasticsearch-certutil)

https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#tls-http

Set xpack.security.enabled to true


Elastic license tiers:

Open Source

Basic

Gold

Platinum


https://stackoverflow.com/questions/51445846/elasticsearch-max-virtual-memory-areas-vm-max-map-count-65530-is-too-low-inc/51447991#51447991

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

sudo sysctl -w vm.max_map_count=262144
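The current value can be checked without root; raising it needs sudo, and -w does not survive a reboot (persist it via /etc/sysctl.conf):

```shell
cat /proc/sys/vm/max_map_count   # current value, readable by any user

# raise it for the running kernel (root required):
#   sudo sysctl -w vm.max_map_count=262144
# persist it across reboots:
#   echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```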


wsl --user root



https://stackoverflow.com/questions/22049212/docker-copying-files-from-docker-container-to-host

docker cp <containerId>:/file/path/within/container /host/path/target

docker cp elasticsearch:/usr/share/elasticsearch  ~


docker run -it --rm --privileged --pid=host justincormack/nsenter1


PKCS#12 format

A PFX file indicates a certificate in PKCS#12 format

https://www.sslmarket.com/ssl/how-to-create-an-pfx-file#:~:text=A%20PFX%20file%20indicates%20a,need%20to%20deploy%20a%20certificate.


docker-compose down -v


https://www.bleepingcomputer.com/news/security/new-meow-attack-has-deleted-almost-4-000-unsecured-databases/


http://localhost:9200/_xpack

http://localhost:9200/


xpack.security.http.ssl.enabled

xpack.security.transport.ssl.enabled


 - xpack.security.transport.ssl.enabled=true

Friday, July 9, 2021

Docker Desktop + WSL2 + Ubuntu + WSL +docker

 docker run -it --rm --privileged --pid=host justincormack/nsenter1

\\wsl$ - special shared path on Windows that exposes the WSL distro filesystems
Docker Desktop + Ubuntu(WSL2) --- linked
Docker commands run from both
but images go into Docker Desktop only -  \\wsl$\docker-desktop-data\version-pack-data\community\docker
Volumes should be uploaded from Ubuntu/WSL2
Volume upload from Windows won't work
You can upload folders from Windows to the \\wsl$ shared path - specifically here - \\wsl$\Ubuntu-20.04\home\karankaw
and they will show up under ~ in WSL2/Ubuntu

Thursday, July 8, 2021

Docker Logs


 docker logs  <web_container_ID_Or_Name>


 docker logs --follow web

Windows: Docker : exec :Docker Desktop

https://www.bretfisher.com/getting-a-shell-in-the-docker-for-windows-vm/


Getting a Shell in the Docker for Windows Moby VM

Moby VM - the minimal Linux VM that Docker Desktop runs on Windows (under WSL)



docker run -it --rm --privileged --pid=host justincormack/nsenter1

Docker Volumes - File Locations for docker which Persist after container

https://www.freecodecamp.org/news/where-are-docker-images-stored-docker-container-paths-explained/

https://stackoverflow.com/questions/34809646/what-is-the-purpose-of-volume-in-dockerfile/34810191#34810191


https://www.docker.com/blog/how-to-use-the-official-nginx-docker-image/   [Example]

Docker Volumes

It is possible to add a persistent store to containers to keep data longer than the container exists or to share the volume with the host or with other containers. A container can be started with a volume by using the -v option:

$ docker run --name nginx_container -v /var/log nginx
We can get information about the connected volume location by:
$ docker inspect nginx_container 

Adding Custom HTML

By default, Nginx looks in the /usr/share/nginx/html directory inside of the container for files to serve. We need to get our HTML files into this directory. A fairly simple way to do this is to use a mounted volume. With mounted volumes, we are able to link a directory on our local machine and map that directory into our running container.

docker run -it --rm -d -p 8080:80 --name web -v ~/site-content:/usr/share/nginx/html nginx

docker    run   -v    /path/to/host/directory:/path/inside/the/container    image

When a Docker container is deleted,
its volume is not deleted along with it, at least not by default.

Clean up space used by Docker

It is recommended to use the Docker command to clean up unused objects. Containers, networks, images, and the build cache can be cleaned up by executing:

$ docker system prune -a

Additionally, you can also remove unused volumes by executing:

$ docker volume prune

Docker Images : Where are they stored


Docker Desktop + WSL(Ubuntu-20)

\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\
\\wsl$\docker-desktop-data\version-pack-data\community\docker\overlay2

Docker images

The heaviest contents are usually images. If you use the default storage driver overlay2, then your Docker images are stored in 

/var/lib/docker/overlay2  - layer contents of all images


/var/lib/docker/image/overlay2/imagedb/content/sha256 - metadata for each top-level image


docker run -it --rm --privileged --pid=host justincormack/nsenter1

Docker Command List - Full list

 Management Commands:

  app*        Docker App (Docker Inc., v0.9.1-beta3)

  builder     Manage builds

  buildx*     Build with BuildKit (Docker Inc., v0.5.1-docker)

  compose*    Docker Compose (Docker Inc., 2.0.0-beta.1)

  config      Manage Docker configs

  container   Manage containers

  context     Manage contexts

  image       Manage images

  manifest    Manage Docker image manifests and manifest lists

  network     Manage networks

  node        Manage Swarm nodes

  plugin      Manage plugins

  scan*       Docker Scan (Docker Inc., v0.8.0)

  secret      Manage Docker secrets

  service     Manage services

  stack       Manage Docker stacks

  swarm       Manage Swarm

  system      Manage Docker

  trust       Manage trust on Docker images

  volume      Manage volumes


Commands:

  attach      Attach local standard input, output, and error streams to a running container

  build       Build an image from a Dockerfile

  commit      Create a new image from a container's changes

  cp          Copy files/folders between a container and the local filesystem

  create      Create a new container

  diff        Inspect changes to files or directories on a container's filesystem

  events      Get real time events from the server

  exec        Run a command in a running container

  export      Export a container's filesystem as a tar archive

  history     Show the history of an image

  images      List images

  import      Import the contents from a tarball to create a filesystem image

  info        Display system-wide information

  inspect     Return low-level information on Docker objects

  kill        Kill one or more running containers

  load        Load an image from a tar archive or STDIN

  login       Log in to a Docker registry

  logout      Log out from a Docker registry

  logs        Fetch the logs of a container

  pause       Pause all processes within one or more containers

  port        List port mappings or a specific mapping for the container

  ps          List containers

  pull        Pull an image or a repository from a registry

  push        Push an image or a repository to a registry

  rename      Rename a container

  restart     Restart one or more containers

  rm          Remove one or more containers

  rmi         Remove one or more images

  run         Run a command in a new container

  save        Save one or more images to a tar archive (streamed to STDOUT by default)

  search      Search the Docker Hub for images

  start       Start one or more stopped containers

  stats       Display a live stream of container(s) resource usage statistics

  stop        Stop one or more running containers

  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

  top         Display the running processes of a container

  unpause     Unpause all processes within one or more containers

  update      Update configuration of one or more containers

  version     Show the Docker version information

  wait        Block until one or more containers stop, then print their exit codes

Docker-Learn3

 docker container run --detach --rm --publish 80:80 --name webserver nginx

docker container run --detach  --publish 80:80 --name webserver nginx

docker container rm <container_Name|container_Id_First3Digit_SHA>


Use long --flags; it's good practice in Docker.


docker image ls 


docker container logs webserver


docker container top ContainerID|containerName


docker run -d --name mongo mongo

-------------------------------------

$ docker run -it --rm --privileged --pid=host justincormack/nsenter1

/ #

-------------------------------------

ps aux | { head -1; grep -E 'mysql|mongo' ; }

ps -ef | { head -1; grep bash; }

ps aux | { head -1; grep 999 ; }

-------------------------------------


Docker container processes run on the host

docker top mongo 

//mongo is the name of the container; this command lists the processes running as part of the mongo container

ps aux | grep mongod


-------------------------------------

docs.docker.com

--help 

Our friends

-------------------------------------


docker container run -d -p 3306:3306 --name db --env MYSQL_RANDOM_ROOT_PASSWORD=yes mysql


-------------------------------------------------------------------------------------------------

Analyse Outside container

docker container top Container_Name

docker container inspect [OPTIONS] Container_Name

docker container stats [OPTIONS] [CONTAINER...]  //if no container_Name , it means all


-------------------------------------------------------------------------------------------------

Analyse Inside container - To know about whats happening in container

docker container start -ai Container_Name   //a means attach , i means interactive

docker container exec -it Container_Name   //t means tty , i means interactive

docker container run -it ImageName              //t means tty , i means interactive

-------------------------------------------------------------------------------------------------

apt-get update

apt-get install -y procps //Installs "ps" in the mysql container - it is Debian-based

-------------------------------------------------------------------------------------------------

docker container port nginx


the virtual network holds the container's port

the host port is on the host side

one host port can be mapped to only one container


A container can talk to other containers if they are on the same virtual network

-------------------------------------------------------------------------------------------------------

• Each Container is by default connected to - private virtual network "Bridge"

• Each PVN routes through NAT Firewall on host IP

• All containers on a virtual network can talk to each other without -p 

For example :

A network has 2 Containers :-> Mysql and httpd

httpd has 8080:80 

While Mysql has nothing

Mysql can talk to httpd

• Two different networks cannot talk to each other directly; traffic has to go via NAT

• 1 host level port is mapped to 1 container only


• Make new virtual networks

• Attach containers to more than 1 virtual network

• Use docker network Drivers.

----------------------------------------------------------------------------------------------

ifconfig en0 // macOS host machine (use ip addr or ifconfig on Linux)

ipconfig // Windows host machine

----------------------------------------------------------------------------------------------

docker container port ContainerID

docker container inspect ContainerID 

docker container inspect  --format  "{{ .NetworkSettings.IPAddress}}" nginx

----------------------------------------------------------------------------------------------

Why is it called a bridge network?

It's a type of "driver"

because this virtual network bridges our containers to the outside physical network through the NAT firewall

----------------------------------------------------------------------------------------------

:::::docker network commands :::::


• docker network ls      // Shows list of all private virtual networks with type of Drivers they possess

//bridge is called "bridge" or "docker0"



• docker network inspect bridge //shows containers attached to this network

Each container has its own IP address, even though all are attached to the same network (identified by its SHA id)


• The network has a subnet ("172.17.0.0/16") in its IPAM config

and many containers attached to it, each with its own IP address

 "IPv4Address": "172.17.0.6/16"

 "IPv4Address": "172.17.0.3/16"

"IPv4Address": "172.17.0.2/16"

----------------------------------------------------------------------------------------------

172.17.0.0/16 ---- default subnet of the bridge network (gateway 172.17.0.1)


----------------------------------------------------------------------------------------------

Another network is "host": it gives up Docker's network isolation

and attaches the container directly to the host interface

----------------------------------------------------------------------------------------------

docker network inspect bridge

docker network inspect host

docker network inspect none

----------------------------------------------------------------------------------------------

:::: Create a new network

docker network create my_app_net

docker network inspect my_app_net 

"Subnet": "172.18.0.0/16",

"Gateway": "172.18.0.1"

----------------------------------------------------------------------------------------------

docker network create my_app_net 

docker container run --name new_nginx --network my_app_net nginx:alpine //New Container

docker network inspect my_app_net // it has new_nginx attached to it

// --network network   flag on "run" command       Connect a container to a network

----------------------------------------------------------------------------------------------

//docker network 

// docker network connect [OPTIONS] NETWORK CONTAINER

docker network connect --help

docker network  connect  my_app_net nginx          //Attach network to container

docker container inspect nginx      //Inspect container --- It shows connection to 2 networks, now



 "Networks": {

                "bridge": {

                    "IPAMConfig": null,

                    "Gateway": "172.17.0.1",

                    "IPAddress": "172.17.0.6",

                },

                "my_app_net": {

                    "IPAMConfig": {},

                    "Gateway": "172.18.0.1",

                    "IPAddress": "172.18.0.3",


                }

            }

----------------------------------------------------------------------------------------------

docker network disconnect  my_app_net nginx    // Disconnect custom network from ContainerName

----------------------------------------------------------------------------------------------

If apps are on the same host, you should connect them to the same network.

Explicit -p is very safe because all other ports stay unexposed.

----------------------------------------------------------------------------------------------

docker network create --driver bridge my_app_net


----------------------------------------------------------------------------------------------

Containers should not rely on IP addresses for communication. DNS Should be used.


Custom Network have DNS Server built into them

default "bridge" network does not has DNS, use --link as workaround.


docker container exec -it my_nginx ping new_nginx


Container names can be used as DNS hostnames. So, if there are two containers on the same custom network,

they can ping each other using just their container names, which act as DNS names

Azure - Pipeline - Add Approver for Stage

https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass