https://www.educative.io/blog/java-lambda-expression-tutorial
Tuesday, July 27, 2021
Sunday, July 18, 2021
Docker: Docker Commands : Exhaustive
docker image ls
docker container run --name <containerName> <image>
docker container run -it <image> command
docker container start -ai <container>
docker container exec -it <container> command
docker container top <container>
Can two containers publish the same host port? No. Why? A host port can be mapped to only one container at a time.
Each of the lines below creates its own container, published on a different host port
docker container run -p 80:80 -d nginx
docker container run -p 8080:80 -d nginx
Thursday, July 15, 2021
Maven : -DskipTests vs -Dmaven.test.skip=true
-DskipTests
compiles the tests, but skips running them
-Dmaven.test.skip=true
skips compiling the tests and does not run them
Also this one might be important
maven.test.skip is honored by Surefire, Failsafe and the Compiler Plugin
So the complete set of test options for Maven would be:
- -DskipTests ==> compiles the tests, but skips running them
- -Dmaven.test.skip.exec=true ==> the tests get compiled, but not executed.
- -Dmaven.test.skip=true ==> doesn't compile or execute the tests.
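On the command line, the three options look like this (the `package` goal is just an example):

```shell
mvn package -DskipTests                  # compiles tests, skips running them
mvn package -Dmaven.test.skip.exec=true  # compiles tests, skips execution
mvn package -Dmaven.test.skip=true       # neither compiles nor runs tests
```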
PKIX path building failed: sun.security.provider.certpath
- Go to the URL in your browser:
- Firefox: click the lock icon right next to the URL, then "More info" > "Security" > "Show certificate" > "Details" > "Export...". Pick a name and choose file type example.cer
- Chrome: click the site icon to the left of the address bar, select "Certificate" -> "Details" -> "Export" and save in the format "DER-encoded binary, single certificate".
Now you have a certificate file that you have to add to your JVM's truststore. Determine the location of the cacerts file, e.g.
C:\Program Files (x86)\Java\jre1.6.0_22\lib\security\cacerts.
Next import the example.cer file into cacerts on the command line (may need an administrator command prompt):
keytool -import -alias example -keystore "C:\Program Files (x86)\Java\jre1.6.0_22\lib\security\cacerts" -file example.cer
You will be asked for a password; the default is changeit
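To confirm the import worked, the alias can be listed back out of cacerts (same keystore path and alias as above; you will be prompted for the same password):

```shell
keytool -list -alias example -keystore "C:\Program Files (x86)\Java\jre1.6.0_22\lib\security\cacerts"
```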
Redhat Service : VSTS Agent Service :EAP Service : Azure Agent Service : Configure as a service
https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/installation_guide/configuring_jboss_eap_to_run_as_a_service
sudo cp EAP_HOME/bin/init.d/jboss-eap.conf /etc/default
sudo cp EAP_HOME/bin/init.d/jboss-eap-rhel.sh /etc/init.d
sudo chmod +x /etc/init.d/jboss-eap-rhel.sh
sudo chkconfig --add jboss-eap-rhel.sh
sudo service jboss-eap-rhel start
------------------------------------------------------------------------------------------------------
https://www.geeksforgeeks.org/setuid-setgid-and-sticky-bits-in-linux-file-permissions/
------------------------------------------------------------------------------------------------------
https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/v2-linux?view=azure-devops
sudo ./svc.sh install [username]
sudo ./svc.sh status
sudo ./svc.sh start
sudo ./svc.sh stop
What does 'total' mean in "ls" command output ?
https://lists.fedoraproject.org/pipermail/users/2006-November/317250.html
> ls -la
> total 8
> drwxr-xr-x 2 fajar users 4096 2006-11-06 11:12 .
> drwxr-xr-x 3 fajar users 4096 2006-11-06 11:12 ..
>
> What 'total 8' stands for?
> Thank you very much.
That is the total number of file system blocks, including indirect
blocks, used by the listed files.
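A quick way to see where the total comes from is `ls -s`, which prints each entry's allocated blocks (1K units on GNU ls). A small sketch:

```shell
# Scratch directory with one small file
tmp=$(mktemp -d)
echo "hello" > "$tmp/a.txt"

# The first line, "total N", is the sum of filesystem blocks
# allocated to the listed entries (the exact N varies by filesystem)
ls -la "$tmp" | head -1

# Per-file allocated blocks, for comparison
ls -s "$tmp"

rm -rf "$tmp"
```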
AWS : EC2 Key Pairs : How to connect if I lose my Private Key : Replace Your SSH Keys: Login : AWS Key Lost
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
A key pair, consisting of a public key and a private key, is a set of security credentials that you use to prove your identity when connecting to an EC2 instance. Amazon EC2 stores the public key on your instance, and you store the private key.
Create a key pair using Amazon EC2
Create a key pair using a third-party tool and import the public key to Amazon EC2
Tag a public key
Retrieve the public key from the private key
Retrieve the public key through instance metadata
Locate the public key on an instance
Identify the key pair that was specified at launch
Verify your key pair's fingerprint
Add or replace a key pair for your instance
Delete your key pair
Delete a public key from an instance
--------------------------------------------------------------------------------------------------------
::Connect to your Linux instance if you lose your private key::
Lost Private key --> Detach from orig- Attach to Temp - Modify authorized_keys - Detach from Temp - Attach Again to orig
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/replacing-lost-key-pair.html
Step 1: Create a new key pair
Step 2: Get information about the original instance and its root volume
Step 3: Stop the original instance
Step 4: Launch a temporary instance
Step 5: Detach the root volume from the original instance and attach it to the temporary instance
Step 6: Add the new public key to authorized_keys on the original volume mounted to the temporary instance
Step 7: Unmount and detach the original volume from the temporary instance, and reattach it to the original instance
Step 8: Connect to the original instance using the new key pair
Step 9: Clean up
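The steps above can be sketched with the AWS CLI (instance IDs, volume ID, key name, and device names here are placeholders; step 6 still happens by hand on the temporary instance):

```shell
# Step 1: create a new key pair and save the private key
aws ec2 create-key-pair --key-name new-key \
  --query 'KeyMaterial' --output text > new-key.pem
chmod 400 new-key.pem

# Steps 2-3: find the root volume, then stop the original instance
aws ec2 describe-instances --instance-ids i-orig1234 \
  --query 'Reservations[].Instances[].BlockDeviceMappings'
aws ec2 stop-instances --instance-ids i-orig1234

# Step 5: detach the root volume and attach it to the temporary instance
aws ec2 detach-volume --volume-id vol-abcd1234
aws ec2 attach-volume --volume-id vol-abcd1234 \
  --instance-id i-temp5678 --device /dev/sdf

# Step 6: on the temporary instance, mount the volume and append the
# new public key to <mount>/home/ec2-user/.ssh/authorized_keys

# Step 7: detach from the temp instance, reattach as the root device
aws ec2 detach-volume --volume-id vol-abcd1234
aws ec2 attach-volume --volume-id vol-abcd1234 \
  --instance-id i-orig1234 --device /dev/xvda

# Step 8: start the original instance and connect with the new key
aws ec2 start-instances --instance-ids i-orig1234
```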
--------------------------------------------------------------------------------------------------------
To add or replace a key pair
Connect to Instance using old mechanism=> Public Keys => add new Key => authorized-keys
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html#replacing-key-pair
Linux Variables: Shell Variables: Local Variables : Environment Variables
https://www.tutorialspoint.com/unix/unix-using-variables.htm
https://www.serverlab.ca/tutorials/linux/administration-linux/how-to-set-environment-variables-in-linux/
export NAME=VALUE
export JAVA_HOME=/opt/openjdk11
unset VARIABLE_NAME
unset JAVA_HOME
::Listing All Set Environment Variables::
set
variable_name=variable_value
NAME="John Doe"
echo $NAME
readonly NAME
export NAME
Linux Variable Types
When a shell is running, three main types of variables are present:
Local Variables
Environment Variables
Shell Variables
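The difference between a shell (local) variable and an environment variable shows up when a child process is spawned; only exported variables are inherited. A minimal sketch:

```shell
MY_LOCAL="only in this shell"   # shell variable: not inherited by children
export MY_ENV="inherited"       # environment variable: passed to children

# A child process sees only the exported variable:
sh -c 'echo "MY_LOCAL=[$MY_LOCAL] MY_ENV=[$MY_ENV]"'
# prints: MY_LOCAL=[] MY_ENV=[inherited]

readonly MY_ENV                 # further assignments to MY_ENV now error
unset MY_LOCAL                  # removes the shell variable
```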
AWS : Capacity Reservation: AWS Calculator : Savings Plan: Reserved Instances :Compute Savings Plans : EC2 Instance Savings Plans
Capacity Reservation
When you create a Capacity Reservation, we reserve the specified capacity for your use. The reserved capacity is charged at the selected instance type’s On-Demand rate whether an instance is running in it or not. You can also use your regional reserved instances with your Capacity Reservations to benefit from billing discounts.
AWS Calculator
Reserved Instances
Platform[Linux, Windows], Tenancy[Default/Dedicated], Offering class[Convertible,Standard]
Instance type[c2.medium], Term[1 year/3 Years], Payment option[Partial, Upfront, None]
Savings Plan
Savings Plans also offer significant savings on your Amazon EC2 costs compared to On-Demand Instance pricing. With Savings Plans, you make a commitment to a consistent usage amount, measured in USD per hour. This provides you with the flexibility to use the instance configurations that best meet your needs and continue to save money, instead of making a commitment to a specific instance configuration
---------------------------------------
SageMaker Savings Plans
Compute Savings Plans
EC2 Instance Savings Plans
---------------------------------------
Compute Savings Plans [Better than EC2 Instance Savings Plan]
Applies to EC2 instance usage, AWS Fargate, and AWS Lambda service usage, regardless of region, instance family, size, tenancy, and operating system.
Term, Payment option, Purchase commitment [Hourly commitment]
---------------------------------------
EC2 Instance Savings Plans
Applies to instance usage within the committed EC2 family and region, regardless of size, tenancy, and operating system.
Region,Instance family, Term, Payment option, Purchase commitment [Hourly commitment]
Tuesday, July 13, 2021
Docker Compose
- docker-compose logs --follow elasticsearch
- docker-compose -f docker-compose.elastic.yml up -d
- docker-compose ps
- docker-compose down
- docker-compose up
- docker-compose -f filename up
- docker-compose images
- docker-compose down -v [also removes named volumes declared in the compose file and anonymous volumes attached to its containers]
Monday, July 12, 2021
Docker Compose YAML Elasticsearch HTTPS
https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls-docker.html
Saturday, July 10, 2021
TLS SSL Docker Elasticsearch
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup.html#generate-certificates
https://www.elastic.co/guide/en/elasticsearch/reference/current/security-basic-setup-https.html#encrypt-http-communication
--------------------------------------------------------------------------------------------------------
https://stackoverflow.com/questions/50832249/enable-authentication-in-elasticsearch-with-docker-environment-variable
https://dev.to/thehoodsdev/securing-our-dockerized-elastic-stack-3o15
https://medium.com/@mandeep_m91/setting-up-elasticsearch-and-kibana-on-docker-with-x-pack-security-enabled-6875b63902e6
https://askubuntu.com/questions/772050/reset-the-password-in-ubuntu-linux-bash-in-windows
wsl --user root
elasticsearch ships its own certificate-creation tool (bin/elasticsearch-certutil)
https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-tls.html#tls-http
Set xpack.security.enabled to true
Elastic license tiers:
Open Source
Basic
Gold
Platinum
https://stackoverflow.com/questions/51445846/elasticsearch-max-virtual-memory-areas-vm-max-map-count-65530-is-too-low-inc/51447991#51447991
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
sudo sysctl -w vm.max_map_count=262144
wsl --user root
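sysctl -w only lasts until reboot; to persist the setting (assuming a standard /etc/sysctl.conf setup):

```shell
# Apply immediately
sudo sysctl -w vm.max_map_count=262144

# Persist across reboots
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf

# Verify the current value
sysctl -n vm.max_map_count
```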
https://stackoverflow.com/questions/22049212/docker-copying-files-from-docker-container-to-host
docker cp <containerId>:/file/path/within/container /host/path/target
docker cp elasticsearch:/usr/share/elasticsearch ~
docker run -it --rm --privileged --pid=host justincormack/nsenter1
PKCS#12 format
A PFX file indicates a certificate in PKCS#12 format
https://www.sslmarket.com/ssl/how-to-create-an-pfx-file#:~:text=A%20PFX%20file%20indicates%20a,need%20to%20deploy%20a%20certificate.
docker-compose down -v
https://www.bleepingcomputer.com/news/security/new-meow-attack-has-deleted-almost-4-000-unsecured-databases/
http://localhost:9200/_xpack
http://localhost:9200/
xpack.security.http.ssl.enabled
xpack.security.transport.ssl.enabled
- xpack.security.transport.ssl.enabled=true
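Following the Stack Overflow link above, X-Pack security can be switched on straight from the docker command line via environment variables; a minimal single-node sketch (image tag and password are placeholders):

```shell
docker run -d --name elasticsearch -p 9200:9200 \
  -e discovery.type=single-node \
  -e xpack.security.enabled=true \
  -e ELASTIC_PASSWORD=changeme \
  docker.elastic.co/elasticsearch/elasticsearch:7.13.2

# Unauthenticated requests are now rejected; authenticate as "elastic":
curl -u elastic:changeme http://localhost:9200/
```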
Friday, July 9, 2021
Docker Desktop + WSL2 + Ubuntu + WSL +docker
docker run -it --rm --privileged --pid=host justincormack/nsenter1
\\wsl$ - special shared path on Windows which hides the complex underlying path
Docker Desktop + Ubuntu(WSL2) --- linked
Docker commands run from both
but Images go in Docker Desktop only - \\wsl$\docker-desktop-data\version-pack-data\community\docker
Volume should be uploaded from Ubuntu/WSL2
Volume upload from Windows won't work
You can upload folders from Windows to the \\wsl$ shared path - specifically here - \\wsl$\Ubuntu-20.04\home\karankaw
and it will show up as ~ in WSL2/Ubuntu
Thursday, July 8, 2021
Windows: Docker : exec :Docker Desktop
https://www.bretfisher.com/getting-a-shell-in-the-docker-for-windows-vm/
Getting a Shell in the Docker for Windows Moby VM
Moby VM - the minimal Linux VM that Docker Desktop runs containers in on Windows
docker run -it --rm --privileged --pid=host justincormack/nsenter1
Docker Volumes - File Locations for docker which Persist after container
https://www.freecodecamp.org/news/where-are-docker-images-stored-docker-container-paths-explained/
https://www.docker.com/blog/how-to-use-the-official-nginx-docker-image/ [Example]
Docker Volumes
It is possible to add a persistent store to containers to keep data longer than the container exists or to share the volume with the host or with other containers. A container can be started with a volume by using the -v option:
$ docker run --name nginx_container -v /var/log nginx
$ docker inspect nginx_container
Adding Custom HTML
By default, Nginx looks in the /usr/share/nginx/html directory inside the container for files to serve. We need to get our html files into this directory. A fairly simple way to do this is to use a mounted volume. With mounted volumes, we are able to link a directory on our local machine and map that directory into our running container.
docker run -it --rm -d -p 8080:80 --name web -v ~/site-content:/usr/share/nginx/html nginx
docker run -v /path/to/host/directory:/path/inside/the/container image
When a docker container is deleted, the volume is not deleted by itself, at least not by default.
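That persistence can be seen with a named volume: the data outlives the container (container image and volume name here are arbitrary):

```shell
# Create a container that writes into a named volume, then exits and is removed
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/greeting'

# The container is gone, but the volume (and file) remain
docker volume ls
docker run --rm -v mydata:/data alpine cat /data/greeting   # prints "hello"

# Named volumes must be removed explicitly
docker volume rm mydata
```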
Clean up space used by Docker
It is recommended to use the Docker command to clean up unused containers. Containers, networks, images, and the build cache can be cleaned up by executing:
$ docker system prune -a
Additionally, you can also remove unused volumes by executing:
$ docker volume prune
Docker Images : Where are they stored
Docker images
The heaviest contents are usually images. If you use the default storage driver overlay2, then your Docker images are stored in
/var/lib/docker/overlay2 - List of all images
/var/lib/docker/image/overlay2/imagedb/content/sha256 - metadata for the top-level images
Docker Command List - Full list
Management Commands:
app* Docker App (Docker Inc., v0.9.1-beta3)
builder Manage builds
buildx* Build with BuildKit (Docker Inc., v0.5.1-docker)
compose* Docker Compose (Docker Inc., 2.0.0-beta.1)
config Manage Docker configs
container Manage containers
context Manage contexts
image Manage images
manifest Manage Docker image manifests and manifest lists
network Manage networks
node Manage Swarm nodes
plugin Manage plugins
scan* Docker Scan (Docker Inc., v0.8.0)
secret Manage Docker secrets
service Manage services
stack Manage Docker stacks
swarm Manage Swarm
system Manage Docker
trust Manage trust on Docker images
volume Manage volumes
Commands:
attach Attach local standard input, output, and error streams to a running container
build Build an image from a Dockerfile
commit Create a new image from a container's changes
cp Copy files/folders between a container and the local filesystem
create Create a new container
diff Inspect changes to files or directories on a container's filesystem
events Get real time events from the server
exec Run a command in a running container
export Export a container's filesystem as a tar archive
history Show the history of an image
images List images
import Import the contents from a tarball to create a filesystem image
info Display system-wide information
inspect Return low-level information on Docker objects
kill Kill one or more running containers
load Load an image from a tar archive or STDIN
login Log in to a Docker registry
logout Log out from a Docker registry
logs Fetch the logs of a container
pause Pause all processes within one or more containers
port List port mappings or a specific mapping for the container
ps List containers
pull Pull an image or a repository from a registry
push Push an image or a repository to a registry
rename Rename a container
restart Restart one or more containers
rm Remove one or more containers
rmi Remove one or more images
run Run a command in a new container
save Save one or more images to a tar archive (streamed to STDOUT by default)
search Search the Docker Hub for images
start Start one or more stopped containers
stats Display a live stream of container(s) resource usage statistics
stop Stop one or more running containers
tag Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top Display the running processes of a container
unpause Unpause all processes within one or more containers
update Update configuration of one or more containers
version Show the Docker version information
wait Block until one or more containers stop, then print their exit codes
Docker-Learn3
docker container run --detach --rm --publish 80:80 --name webserver nginx
docker container run --detach --publish 80:80 --name webserver nginx
docker container rm <container_Name|container_Id_First3Digit_SHA>
Use long --flags; it's good practice in docker
docker image ls
docker container logs webserver
docker container top ContainerID|containerName
docker run -d --name mongo mongo
-------------------------------------
$ docker run -it --rm --privileged --pid=host justincormack/nsenter1
/ #
-------------------------------------
ps aux | { head -1; grep -E 'mysql|mongo' ; }
ps -ef | { head -1; grep bash; }
ps aux | { head -1; grep 999 ; }
-------------------------------------
Docker container processes actually run on the host
docker top mongo
//mongo is the name of the container; this command lists the processes running as part of the mongo container
ps aux | grep mongod
-------------------------------------
docs.docker.com
--help
Our friends
-------------------------------------
docker container run -d -p 3306:3306 --name db --env MYSQL_RANDOM_ROOT_PASSWORD=yes mysql
-------------------------------------------------------------------------------------------------
Analyse Outside container
docker container top Container_Name
docker container inspect [OPTIONS] Container_Name
docker container stats [OPTIONS] [CONTAINER...] //if no container_Name , it means all
-------------------------------------------------------------------------------------------------
Analyse inside the container - to know what's happening in the container
docker container start -ai Container_Name //a means attach , i means interactive
docker container exec -it Container_Name //t means tty , i means interactive
docker container run -it ImageName //t means tty , i means interactive
-------------------------------------------------------------------------------------------------
apt-get update
apt-get install -y procps //install "ps" in the mysql container - the image is Debian-based
-------------------------------------------------------------------------------------------------
docker container port nginx
virtual network ---- holds the container's port
host port ---- mapped onto it from the host
only 1 host port can be mapped to 1 container
A container can talk to other containers if they are on the same virtual network
-------------------------------------------------------------------------------------------------------
• Each Container is by default connected to - private virtual network "Bridge"
• Each PVN routes through NAT Firewall on host IP
• All containers on a virtual network can talk to each other without -p
For example :
A network has 2 Containers :-> Mysql and httpd
httpd has 8080:80
While Mysql has nothing
Mysql can talk to httpd
• 2 different networks cannot talk to each other directly; traffic has to go via NAT
• 1 host level port is mapped to 1 container only
• Make new virtual networks
• Attach containers to more than 1 virtual network
• Use docker network Drivers.
----------------------------------------------------------------------------------------------
ifconfig en0 // Linux based Actual Host machine
ipconfig // Windows based Actual Host machine
----------------------------------------------------------------------------------------------
docker container port ContainerID
docker container inspect ContainerID
docker container inspect --format "{{ .NetworkSettings.IPAddress}}" nginx
----------------------------------------------------------------------------------------------
Why is it called a bridge network ?
It's a type of "Driver"
because this virtual network bridges our containers to the outside physical network through the NAT firewall
----------------------------------------------------------------------------------------------
:::::docker network commands :::::
• docker network ls // Shows list of all private virtual networks with type of Drivers they possess
//bridge is called "bridge" or "docker0"
• docker network inspect bridge //shows containers attached to this network
Each container has its own IP address, although they are attached to the same network (identified by its SHA id)
• The network has a Subnet "172.17.0.0/16" in its IPAM Config
and many containers attached to it, each having its own IP address
"IPv4Address": "172.17.0.6/16"
"IPv4Address": "172.17.0.3/16"
"IPv4Address": "172.17.0.2/16"
----------------------------------------------------------------------------------------------
172.17.0.0/16 ---- default subnet of the bridge network
----------------------------------------------------------------------------------------------
another network driver is "host": it gives up Docker's network isolation
and attaches the container directly to the host's interface
----------------------------------------------------------------------------------------------
docker network inspect bridge
docker network inspect host
docker network inspect none
----------------------------------------------------------------------------------------------
:::: Create a new network
docker network create my_app_net
docker network inspect my_app_net
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
----------------------------------------------------------------------------------------------
docker network create my_app_net
docker container run --name new_nginx --network my_app_net nginx:alpine //New Container
docker network inspect my_app_net // it has new_nginx attached to it
// the --network flag on the "run" command connects the container to a network
----------------------------------------------------------------------------------------------
//docker network
// docker network connect [OPTIONS] NETWORK CONTAINER
docker network connect --help
docker network connect my_app_net nginx //Attach network to container
docker container inspect nginx //Inspect container --- It shows connection to 2 networks, now
"Networks": {
"bridge": {
"IPAMConfig": null,
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.6",
},
"my_app_net": {
"IPAMConfig": {},
"Gateway": "172.18.0.1",
"IPAddress": "172.18.0.3",
}
}
----------------------------------------------------------------------------------------------
docker network disconnect my_app_net nginx // Disconnect custom network from ContainerName
----------------------------------------------------------------------------------------------
if apps are on the same host, you should connect both apps to the same network
Explicit -p is safe, because all other ports stay blocked
----------------------------------------------------------------------------------------------
docker network create --driver bridge my_app_net
----------------------------------------------------------------------------------------------
Containers should not rely on IP addresses for communication. DNS Should be used.
Custom networks have a DNS server built into them
the default "bridge" network does not have DNS; use --link as a workaround.
docker container exec -it my_nginx ping new_nginx
Container names can be used as DNS hostnames, so if there are 2 containers on the same custom network
they can ping each other using just "container names", which are DNS names
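Putting the pieces above together (network and container names are arbitrary):

```shell
docker network create my_app_net
docker container run -d --name my_nginx --network my_app_net nginx:alpine
docker container run -d --name new_nginx --network my_app_net nginx:alpine

# Container names resolve as DNS hostnames on the custom network
docker container exec -it my_nginx ping -c 2 new_nginx
```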
Azure - Pipeline - Add Approver for Stage
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass
-
https://www.baeldung.com/spring-properties-file-outside-jar https://docs.spring.io/spring-boot/docs/current/reference/html/spring-boot-featu...
-
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass
-
The decision was made to block such external HTTP repositories by default https://stackoverflow.com/questions/66980047/maven-build-failure-d...