Tuesday, September 28, 2021

Ubuntu : Debian : update vs upgrade vs list --upgradable

https://linuxsecurity.com/features/how-to-install-security-updates-in-ubuntu-debian


The commands we shall cover pertaining to this topic are: 

  • apt update: This command only fetches information on the latest packages that can be upgraded. Note that it does not actually upgrade any packages on the system; it only refreshes the package index local to the system. This package information is obtained from the standard official sources and then stored locally. If you need to check which sources the package information is fetched from, see /etc/apt/sources.list on the system.
  • apt list --upgradable: This command then displays the packages that have updates available and can therefore be upgraded on the system. It is based on the information fetched previously by the update command.
  • apt upgrade: This is the command that actually upgrades the packages on the system. Once executed, the installed packages are upgraded to their latest available versions. Note that this command can install new packages if the dependencies require it, but it will never remove packages.
  • apt full-upgrade: This command does a little more than what the upgrade command does. In addition to upgrading existing packages and installing new packages as required, it also removes installed packages if it determines that they are no longer required as dependencies. Use this option with caution, as it can cause unexpected system behaviour if your application depends on a specific version of a package.
  • apt autoremove: This command removes unused packages that were installed as dependencies and are no longer needed. It can be executed after apt upgrade; a typical end-to-end sequence is sketched below.
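
A minimal sketch of the full workflow described above, assuming a Debian/Ubuntu system with sudo:

# refresh the local package index from the sources in /etc/apt/sources.list
sudo apt update

# show which installed packages have newer versions available
apt list --upgradable

# upgrade packages (may install new dependencies, never removes packages)
sudo apt upgrade

# upgrade and also allow removals where dependency changes require it
sudo apt full-upgrade

# remove dependency packages that are no longer needed
sudo apt autoremove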

Monday, September 27, 2021

Linux: SSHD Configure SSH Timeout on Server Side

https://www.tecmint.com/increase-ssh-connection-timeout/


/etc/ssh/sshd_config  

ClientAliveInterval  1200
ClientAliveCountMax 3

The timeout value is given by the product of the above parameters, i.e.

Timeout value = ClientAliveInterval * ClientAliveCountMax

For example, let's say you have defined your parameters as shown:

ClientAliveInterval  1200
ClientAliveCountMax 3

Timeout value = 1200 * 3 = 3600 seconds, i.e. an unresponsive client is disconnected after 1 hour.
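
A hedged sketch of applying this on the server side (the SSH service name varies by distro, e.g. ssh on Debian/Ubuntu and sshd on RHEL/CentOS):

# set the keep-alive parameters in the server config
sudo vi /etc/ssh/sshd_config
#   ClientAliveInterval 1200
#   ClientAliveCountMax 3

# check the config syntax, then restart the service so the change takes effect
sudo sshd -t
sudo systemctl restart sshd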

Friday, September 24, 2021

Linux: Disk Handling Tool: Linux Disk Commands

lsblk -i                 # list block devices (ASCII tree)
lsblk -f                 # show filesystem type, label and UUID per device
df -Th                   # mounted filesystems with type and human-readable sizes
du -sh * | sort -h       # per-directory usage in the current directory, sorted

mount /dev/sda /data     # [Manual] one-off mount, not persistent across reboots

mkfs.xfs /dev/sda        # format as XFS
mkfs.ext4 /dev/sdb       # format as ext4

Edit /etc/fstab for a persistent mount:
/dev/sda /data ext4 defaults,nofail 0 0

mount -a                 # mount everything listed in fstab

umount /dev/sdX          # unmount (replace sdX with the actual device)
--------------------------------------------
DON'T unmount if it's the root disk
sudo parted /dev/sda
print
resizepart
resize2fs /dev/sda1
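
A hedged expansion of the parted steps above for an ext4 partition (the partition number 1 and the 100% end value are illustrative; for an XFS filesystem use xfs_growfs on the mount point instead of resize2fs):

# grow partition 1 of /dev/sda to the end of the disk (interactive parted session)
sudo parted /dev/sda
(parted) print
(parted) resizepart 1 100%
(parted) quit

# grow the ext4 filesystem online to fill the enlarged partition
sudo resize2fs /dev/sda1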
-------------------------------------------

Azure: Disk Resize: Resize Disk Size in Azure



----------------------------------------------------------------------------------------------------------------------------------

Create a Snapshot
Make a Disk out of Snapshot (Preferably Premium)
Execute these steps after the RESIZE on the Azure portal
[Stop VM, Resize, Start VM, execute the steps below]

DON'T unmount if it's the root disk
sudo parted /dev/sda
print
resizepart
resize2fs /dev/sda1

----------------------------------------------------------------------------------------------------------------------------------

lsblk -i
lsblk -f
df -Th 
du -sh * |sort -h

Thursday, September 23, 2021

Architecture Comparisons : HTTP vs Event vs Message Broker


https://github.com/sonusathyadas/EventGrid-Samples
https://docs.microsoft.com/en-us/rest/api/servicebus/peek-lock-message-non-destructive-read

AZURE -204 (24 Sept 2021) - DAY5

EVENT HUB  = KAFKA

EventHub Client -C# 
----------------------------------------------

Service Bus

Azure: Resize VM : How to expand the OS drive of a virtual machine

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/expand-os-disk


https://medium.com/100-days-of-linux/how-to-resize-a-linux-root-file-system-af3e5096b4e4


https://docs.microsoft.com/en-us/azure/virtual-machines/linux/expand-disks



Linux: Partitioning Disk : MKFS EXT4 vs XFS - Database Formatting

https://computingforgeeks.com/ext4-vs-xfs-complete-comparison/

https://www.tecmint.com/create-new-ext4-file-system-partition-in-linux/

https://www.cyberciti.biz/faq/how-to-install-xfs-and-create-xfs-file-system-on-debianubuntu-linux/#Create_xfs_filesystem

mkfs.xfs /dev/device

mkfs.ext4 /dev/device


:::::: vi /etc/fstab  ::::::

/dev/sdb /data xfs defaults,nofail 0 1


lsblk -f

lsblk -a

lsblk -i


df -h 



Chrome Extensions

https://chrome.google.com/webstore/detail/json-formatter/bcjindcccaagfpapjjmafapmmgkkhgoa?hl=en

Wednesday, September 22, 2021

AZURE -204 (23 Sept 2021) - DAY4

  • Resource Manager Deployment Template 
  • Azure Container
  • Azure Container Registry
  • API Management 

======================================

Sonu Sathyadas

Azure CLI is installed


1) Create the ACR

2) Enable admin account

3) Login to ACR using

Login to Azure : az login

Login to ACR: az acr login -n name

4) Create the docker image - ' docker build -t imagename:version .'

5) Tag the image with the ACR login-server prefix - docker tag imagename:version acrname.azurecr.io/imagename:version

6) Push the image to ACR using 'docker push acrname.azurecr.io/imagename:version'


Instead of steps 4, 5, 6 we can run a single step: az acr build --registry <your registry name> --image sampleapp .
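
Putting steps 1-6 together as commands, a hedged sketch (the registry kkazcontainerregistry and the image name sampleapp:v1 are illustrative, matching the names used elsewhere in these notes):

# log in to Azure and to the registry
az login
az acr login -n kkazcontainerregistry

# build the image and tag it with the ACR login-server prefix
docker build -t sampleapp:v1 .
docker tag sampleapp:v1 kkazcontainerregistry.azurecr.io/sampleapp:v1

# push the tagged image to ACR
docker push kkazcontainerregistry.azurecr.io/sampleapp:v1

# or: build and push in a single step inside ACR
az acr build --registry kkazcontainerregistry --image sampleapp:v1 .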


https://sonustorageaccount.blob.core.windows.net/demofiles/MyDemoApp.zip


kkazcontainerregistry.azurecr.io

az acr login -n kkazcontainerregistry

 https://kkazcontainerregistry.azurecr.io/v2/



=============================================================================================================================


Clear Linux History

https://unix.stackexchange.com/questions/49214/how-to-remove-a-single-line-from-history

  1. To clear all your history, use

    history -c
    
  2. To delete a single line, use

    history -d linenumber

https://www.skillpipe.com/#/bookshelf/books


Tuesday, September 21, 2021

Azure Cosmos DB

https://www.youtube.com/watch?v=p5rRGlKxNtk

Data Modelling in Azure Cosmos DB

Share Screen Recording - screenapp.io

 https://screenapp.io/#/dashboard

AZURE -204 (22 Sept 2021) - DAY3

 AZURE Functions   ---> LAMBDA


Trigger

--------------------

  • HTTP Trigger
  • Timer Trigger
  • CosmosDb Trigger
  • Blob Trigger
  • Queue Trigger

-----------------------------------------------------------------------------------

Consumption Plan -> auto-scaling happens automatically -> effectively infinite scaling

App Service Plan -> cost advantage, but scaling is limited to the App Service Plan's auto-scaling

Premium Plan -> unlimited execution duration

-----------------------------------------------------------------------------------

A Function App is a collection of Functions
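
A hedged example of creating a Function App on the Consumption plan with the Azure CLI (all names are illustrative; an existing storage account is assumed):

# create a Function App on the Consumption (serverless) plan
az functionapp create --resource-group myResourceGroup --consumption-plan-location westeurope --runtime dotnet --functions-version 3 --name myfunctionapp$RANDOM --storage-account mystorageaccount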


------------------------------------------------------------------------------------

VM - > Azure 

New VM 



Monday, September 20, 2021

AZURE -204 (21 Sept 2021) - DAY2

LRS - Locally Redundant Storage (within 1 data centre)

====================================

ZRS - Zone Redundant Storage (Across Zones)

======================================

GRS - Geo Redundant Storage (Across Regions)


Paired Regions

=================================================================

Primary region: the one I create

Secondary region: not accessible by the user, but used by Azure itself (for failover)


=================================================================



LRS is by default Always ON
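
The redundancy level is picked when the storage account is created; a hedged CLI example (account and group names are illustrative):

# create a storage account with geo-redundant storage
# (other SKUs: Standard_LRS, Standard_ZRS, Standard_RAGRS, Premium_LRS)
az storage account create --name mystorageacct123 --resource-group myResourceGroup --location westeurope --sku Standard_GRS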

=================================================================

Replication vs Backup

Replication (within the primary) is synchronous:

it returns HTTP 200 only when the data has been saved on the secondary copies as well

Cross-region (geo) replication is asynchronous

=================================================================

Listing is allowed with Container-level public access, but not with Blob-level public access


https://docs.microsoft.com/en-us/rest/api/storageservices/list-blobs


https://myaccount.blob.core.windows.net/mycontainer?restype=container&comp=list

=================================================================


https://stackoverflow.com/questions/26206993/how-to-revoke-shared-access-signature-in-azure-sdk/53158363#53158363


SAS not based on SAP - can't be revoked:

If you are using ad hoc URIs, you have three options. You can issue SAS tokens with short expiration policies and wait for the SAS to expire. You can rename or delete the resource (assuming the token was scoped to a single object). You can change the storage account keys. This last option can have a significant impact, depending on how many services are using that storage account, and probably isn't something you want to do without some planning.

SAS based on SAP - can be revoked by revoking SAP:

If you are using a SAS derived from a Stored Access Policy, you can remove access by revoking the Stored Access Policy – you can just change it so it has already expired, or you can remove it altogether. This takes effect immediately, and invalidates every SAS created using that Stored Access Policy. Updating or removing the Stored Access Policy may impact people accessing that specific container, file share, table, or queue via SAS, but if the clients are written so they request a new SAS when the old one becomes invalid, this will work fine.

Best practice:

Because using a SAS derived from a Stored Access Policy gives you the ability to revoke that SAS immediately, it is the recommended best practice to always use Stored Access Policies when possible.
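
A hedged CLI sketch of the Stored Access Policy approach (account, container, blob and policy names are illustrative; account-key or logged-in auth is assumed):

# create a stored access policy (SAP) on the container
az storage container policy create --account-name eaasblobstorage --container-name demofiles --name read-policy --permissions r --expiry 2022-08-10T00:00Z

# generate a blob SAS from that policy (revocable by changing/removing the policy)
az storage blob generate-sas --account-name eaasblobstorage --container-name demofiles --name MyDemoApp.zip --policy-name read-policy

# revoke every SAS issued from the policy by deleting it
az storage container policy delete --account-name eaasblobstorage --container-name demofiles --name read-policy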

================================================================


Access Tier

HOT Tier ---> Frequently Accessed 

Cool Tier -->  Infrequently Accessed

Archive  --> Archival data (rarely accessed)

================================

Premium -> always frequently accessed --> SSD tier

================================

Access Tier Lifecycle 
Lifecycle Management

=======================================================
By default Files are in HOT Tier

========================================================

Hot tier storage is costlier, but R/W operations are cheap
Cool tier storage is cheaper, but R/W operations are costlier
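
A hedged example of moving a blob between tiers with the CLI (names are illustrative; reading from Archive first requires rehydration back to Hot or Cool):

# move a blob to the Cool tier (Hot and Archive work the same way)
az storage blob set-tier --account-name mystorageaccount --container-name mycontainer --name mybackup.tar --tier Cool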
========================================================

In 1 subscription, you can have up to 200 storage accounts

========================================================

https://docs.microsoft.com/en-us/azure/storage/blobs/soft-delete-blob-overview

========================================================

COSMOS DB  ???


https://volosoft.com/blog/Introduction-to-Azure-Cosmos-DB

Azure Cosmos DB supports 5 types of APIs.

  • SQL API (Json)
  • MongoDB API (Bson)
  • Gremlin API (Graph)
  • Table API (Key-Value)
  • Cassandra API (columnar)

========================================================

Consistency

Replication comes with a choice of consistency. So, when one instance of your app writes data to a write-region, Azure needs to replicate this data to other regions.

Azure Cosmos DB offers 5 types of consistency levels, which means you need to select how Azure should replicate your data between your Azure Cosmos DB regions. Let's see what those consistency levels are:

Strong

In this model, there are no dirty reads: when data is updated, everybody keeps reading the old value until the data has been replicated to all regions. This is the slowest option.

Bounded Staleness

In this option, you define a period of time or an update count as the maximum staleness of your data. For example: no stale reads older than 1 minute, or no reads lagging behind by more than 5 updates. When you set the time option to 0, it is exactly the same as the Strong consistency option.

Session

In this option, no dirty reads are possible for the writer, but dirty reads are possible for other readers. This is the default option. So if you are the one writing the data, you always read your own writes; others may read stale data for a while.

Consistent Prefix

In this option dirty reads are possible, but they are always in order. So if data is updated with the values 1, 2, 3 in order, readers always see the updates in that order; no one will see the value 3 before 2.

Eventual

In this option, dirty reads are possible and there is no guarantee of order. So if data is updated with the values 1, 2, 3 in order, a reader can see value 3 before seeing value 2. But this is the fastest option.
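
A hedged CLI example of choosing the default consistency level when the account is created (the account name is illustrative; Session is used if nothing is specified):

# create an account with Bounded Staleness: reads lag by at most 60 seconds or 100 versions
az cosmosdb create --name mycosmosaccount --resource-group myResourceGroup --default-consistency-level BoundedStaleness --max-interval 60 --max-staleness-prefix 100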

[Image: the commonly used chart of Azure Cosmos DB consistency options, from Strong to Eventual]

Types of Deployment Strategies

https://medium.com/geekculture/what-is-your-deployment-strategy-51811b4ed973

Sunday, September 19, 2021

AZURE -204 (21 Sept 2021) - DAY1

- .NET 3.1/5
- Visual Studio 2019 Community (Azure Development, Cross-Platform Development, Web Development)
- Visual Studio Code
- Azure CLI
- Azure PowerShell
- Docker Desktop
- Azure Storage Explorer
- Azure CosmosDB Emulator
- Node JS 16.x





========================================================================

10% IAAS
90% PAAS in Exam AZ-204


Skills Measured - AZ 204


========================================================================


"App Service Web Apps" -  PAAS -> Deploy Apps (NodeJS, JAVA) from IDE
No VM management needed
Scalable easily

Web Job

Logical Sandbox  --> Logical VM

App Service 
1 App Service Plan --> multiple Web Apps
App Service Plan -> defines the logical VM's features --> size, external domain, CPU

Free Plan
Standard Plan
Premium Plan

========================================================================

*.azurewebsites.net

If you want privacy/network isolation -> Isolated App Service Plan (App Service Environment)

Inbound IP -> Whitelisting

=========================================================================

S1 Plan -- App Service Plan

=========================================================================
Authentication/Authorization is an external service that can be added to my "App Service"
=========================================================================

Hybrid Connection ???

App Service Hybrid Connection
App Service --> DB on-premises, via an agent (Hybrid Connection Manager)
===============================================
2 Flavours of Azure Command Line

CLI 
PowerShell

==============================================

1 question from this:
Order the list of CLI commands to create an App Service Web App

==============================================

App Service --> Code
App Service --> Docker
App Service -> Configuration & Monitoring

==============================================
Docker Linux - App



































# generate a unique name and store as a shell variable
webappname=mywebapp$RANDOM

# create a resource group
az group create --location westeurope --name myResourceGroup

# create an App Service plan
az appservice plan create --name $webappname --resource-group myResourceGroup --sku FREE

# create a Web App
az webapp create --name $webappname --resource-group myResourceGroup --plan $webappname

# store a repository url as a shell variable
gitrepo=https://github.com/Azure-Samples/php-docs-hello-world

# deploy code from a Git repository

az webapp deployment source config --name $webappname --resource-group myResourceGroup --repo-url $gitrepo --branch master --manual-integration

=======================================================================

Staging Slot

Production 

=================================================================
A "SLOT" runs inside the same App Service Plan, not in another VM

You can divide traffic between 2 SLOTS by weightage (traffic percentage)
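
A hedged CLI sketch of slots and weighted traffic routing, reusing the $webappname variable from the script above (note: slots need a Standard or higher App Service Plan, not the FREE SKU used in that script):

# add a staging slot to the web app
az webapp deployment slot create --name $webappname --resource-group myResourceGroup --slot staging

# send 20% of traffic to the staging slot (weightage)
az webapp traffic-routing set --name $webappname --resource-group myResourceGroup --distribution staging=20

# swap staging into production once validated
az webapp deployment slot swap --name $webappname --resource-group myResourceGroup --slot staging --target-slot production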
=================================================================

Linux: Zip: How to Zip Only Files (Without Directory Paths) in Linux

https://superuser.com/questions/841642/zip-command-on-linux-how-to-force-to-zip-only-files-and-not-whole-directories-st


Use the -j (junk-paths) option.


@LAPTOP MINGW64 ~/Desktop/Devops/EmailCount

$ zip -j 12-18Sept2021.zip "12-18Sept 2021/*"

  adding: 12-9-2021.csv (164 bytes security) (deflated 45%)

  adding: 13-9-2021.csv (164 bytes security) (deflated 45%)

  adding: 14-9-2021.csv (164 bytes security) (deflated 68%)

  adding: 15-9-2021.csv (164 bytes security) (deflated 67%)

  adding: 16-9-2021.csv (164 bytes security) (deflated 66%)

  adding: 17-9-2021.csv (164 bytes security) (deflated 69%)

  adding: 18-9-2021.csv (164 bytes security) (deflated 66%)

  adding: 19-9-2021.csv (164 bytes security) (deflated 45%)

  adding: 20-9-2021.csv (164 bytes security) (deflated 45%)


Friday, September 17, 2021

Docker Image Load

karankaw@LAPTOP:~$ ll
total 208
drwxr-xr-x 12 karankaw karankaw   4096 Sep 17 17:18 ./
drwxr-xr-x  3 root     root       4096 May 24 13:37 ../
lrwxrwxrwx  1 karankaw karankaw     27 May 24 15:12 .aws -> /mnt/c/Users/703250313/.aws/
lrwxrwxrwx  1 karankaw karankaw     29 May 24 15:12 .azure -> /mnt/c/Users/703250313/.azure/
-rw-------  1 karankaw karankaw  24482 Sep 17 17:57 .bash_history
-rw-r--r--  1 karankaw karankaw    220 May 24 13:37 .bash_logout
-rw-r--r--  1 karankaw karankaw   3771 May 24 13:37 .bashrc
drwx------  3 karankaw karankaw   4096 Jun 21 10:56 .config/
drwxr-xr-x  5 karankaw karankaw   4096 Aug 14 22:44 .docker/
drwxr-xr-x  2 karankaw karankaw   4096 May 24 13:37 .landscape/
-rw-r--r--  1 karankaw karankaw      0 Sep 17 16:50 .motd_shown
-rw-r--r--  1 karankaw karankaw    807 May 24 13:37 .profile
-rw-r--r--  1 karankaw karankaw      0 May 24 13:38 .sudo_as_admin_successful
-rw-------  1 karankaw karankaw   8822 Aug 17 11:44 .viminfo
drwxr-xr-x  2 karankaw karankaw   4096 Sep 17 17:28 TarDockerImage/
-rw-r--r--  1 karankaw karankaw 109127 Jul 16 07:39 cacerts
drwxr-xr-x  2 karankaw karankaw   4096 Aug 17 12:42 contoso/
drwxr-xr-x  2 karankaw karankaw   4096 Aug 14 00:22 dockerFile/
drwxr-xr-x  3 karankaw karankaw   4096 Aug 14 14:32 dockerVolumes/
drwxr-xr-x  2 karankaw karankaw   4096 Jul  9 12:55 site-content/
drwxr-xr-x  2 karankaw karankaw   4096 Jul 18 21:05 udemy/
karankaw@LAPTOP:~$ docker image ls |wc -l
30
karankaw@LAPTOP:~$ docker image load
requested load from stdin, but stdin is empty
karankaw@LAPTOP:~$ docker image load < TarDockerImage/data-extraction.tar
4e006334a6fd: Loading layer [==================================================>]  119.3MB/119.3MB
e4d0e810d54a: Loading layer [==================================================>]  17.18MB/17.18MB
fe6a4fdbedc0: Loading layer [==================================================>]  17.87MB/17.87MB
7095af798ace: Loading layer [==================================================>]    150MB/150MB
cdc9dae211b4: Loading layer [==================================================>]  520.8MB/520.8MB
4b4c002ee6ca: Loading layer [==================================================>]  18.51MB/18.51MB
c8696448b1d7: Loading layer [==================================================>]  47.68MB/47.68MB
a686a12a5f5c: Loading layer [==================================================>]  4.608kB/4.608kB
8ee35f8cdac6: Loading layer [==================================================>]  8.869MB/8.869MB
9b12bdcee8ec: Loading layer [==================================================>]  17.64MB/17.64MB
10728d60d300: Loading layer [==================================================>]  308.2MB/308.2MB
8cba2fa01a3c: Loading layer [==================================================>]  346.6kB/346.6kB
6a30ba8c7293: Loading layer [==================================================>]  434.7MB/434.7MB
3ddb158e3552: Loading layer [==================================================>]  434.7MB/434.7MB
96aa7388242f: Loading layer [==================================================>]  434.6MB/434.6MB
72478e0b6a4e: Loading layer [==================================================>]  253.3MB/253.3MB
7abd99b59331: Loading layer [==================================================>]  10.98MB/10.98MB
Loaded image: data-extraction:latest
karankaw@LAPTOP:~$ docker image ls |wc -l
31
karankaw@LAPTOP:~$ docker image ls
REPOSITORY                TAG       IMAGE ID       CREATED        SIZE
data-data-extraction      latest    10d1a875578c   2 days ago     2.73GB
<none>                    <none>    901256969d1a   4 weeks ago    583MB
<none>                    <none>    a2dceb803533   4 weeks ago    583MB
contoso-gaming-platform   latest    66c6349ae7c0   4 weeks ago    583MB
<none>                    <none>    806ee7c4b656   4 weeks ago    583MB
<none>                    <none>    0caa37ddf1cf   4 weeks ago    566MB
<none>                    <none>    306555224bca   4 weeks ago    566MB
<none>                    <none>    ad74a10ad87a   4 weeks ago    583MB
<none>                    <none>    ec15359bfa9c   4 weeks ago    90.1MB
<none>                    <none>    5c409a7f13ee   4 weeks ago    243MB
<none>                    <none>    ce4917328858   4 weeks ago    243MB
<none>                    <none>    9502d5d08c9a   4 weeks ago    120MB
<none>                    <none>    f8503823f89d   4 weeks ago    181MB
<none>                    <none>    5e93f8eac9b1   4 weeks ago    181MB
<none>                    <none>    19e13a65cc40   4 weeks ago    181MB
<none>                    <none>    3801c0231636   4 weeks ago    181MB
copy                      latest    f5804b1ee781   4 weeks ago    72.8MB
workdir                   latest    d22df735581a   4 weeks ago    72.8MB
entru                     v1        dfd3f66ce376   4 weeks ago    72.8MB
cmd2                      latest    e86c0374e156   4 weeks ago    72.8MB
custom                    1         7c64bd78f858   4 weeks ago    72.8MB
jenkins/jenkins           latest    72b4a8d8d158   5 weeks ago    567MB
alpine                    latest    021b3423115f   5 weeks ago    5.6MB
redhat/ubi8-minimal       latest    cf2faf23cb46   6 weeks ago    103MB
redhat/ubi8               latest    ad42391b9b46   6 weeks ago    226MB
amazonlinux               latest    d85ab0980c91   6 weeks ago    163MB
ubuntu                    latest    1318b700e415   7 weeks ago    72.8MB
kawkaran/nginx            latest    08b152afcfae   8 weeks ago    133MB
nginx                     latest    08b152afcfae   8 weeks ago    133MB
hello-world               latest    d1165f221234   6 months ago   13.3kB
karankaw@LAPTOP:~$

Docker Load/Import : Difference between import and load in Docker?

https://pspdfkit.com/blog/2019/docker-import-export-vs-load-save/

https://stackoverflow.com/questions/36925261/what-is-the-difference-between-import-and-load-in-docker

docker save will indeed produce a tarball, but with all parent layers, and all tags + versions.

docker export does also produce a tarball, but without any layer/history.

However, once those tarballs are produced, load/import are there to:

  • docker import creates one image from one tarball which is not even an image (just a filesystem you want to import as an image)

Create an empty filesystem image and import the contents of the tarball

  • docker load creates potentially multiple images from a tarred repository (since docker save can save multiple images in a tarball).


To summarize what we’ve learned, we now know the following:

  • save works with Docker images. It saves everything needed to build a container from scratch. Use this command if you want to share an image with others.

  • load works with Docker images. Use this command if you want to run an image exported with save. Unlike pull, which requires connecting to a Docker registry, load can import from anywhere (e.g. a file system, URLs).

  • export works with Docker containers, and it exports a snapshot of the container’s file system. Use this command if you want to share or back up the result of building an image.

  • import works with the file system of an exported container, and it imports it as a Docker image. Use this command if you have an exported file system you want to explore or use as a layer for a new image.
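
A hedged sketch of the two round-trips summarized above (the image name reuses data-extraction from the transcript; the container name is illustrative):

# image round-trip: keeps all layers, history and tags
docker save -o data-extraction.tar data-extraction:latest
docker load -i data-extraction.tar

# container round-trip: flattens the container's filesystem into a single layer
docker export -o mycontainer.tar mycontainer
docker import mycontainer.tar data-extraction:imported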

Azure CLI DISKs

 Settings   ->  Disks

Settings   ->  Properties  -> Agent status : Not Ready  or Ready


TTY


Sys REQ 

Grub Loader

Serial Console

sysctl -a |grep -i sysrq


https://www.youtube.com/watch?v=KevOc3d_SG4&t=147s

https://www.youtube.com/watch?v=HnvUxnNzbe4

https://docs.microsoft.com/en-us/troubleshoot/azure/virtual-machines/serial-console-grub-proactive-configuration 


https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html


waagent 

https://github.com/Azure/WALinuxAgent

The Microsoft Azure Linux Agent (waagent) manages Linux provisioning and VM interaction with the Azure Fabric Controller.


What is Azure fabric?

The Azure Fabric Controller that waagent talks to is the platform-level controller that provisions and monitors VMs. It is distinct from Azure Service Fabric, which is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers.


Operations   ->

Monitoring  ->

Automation  ->


Support + troubleshooting  ->  Resource health

Support + troubleshooting  ->  Boot diagnostics

Support + troubleshooting  ->  Performance diagnostics

Support + troubleshooting  ->  Serial console


Connect -> Connect with Bastion Host 

https://docs.microsoft.com/en-us/azure/bastion/tutorial-create-host-portal

https://www.rebeladmin.com/2019/11/step-step-guide-access-azure-vms-securely-using-azure-bastion/


Role

https://docs.microsoft.com/en-us/azure/role-based-access-control/check-access



Azure VM     ->    Support + troubleshooting  ->   Boot diagnostics

Boot Diagnostics

https://docs.microsoft.com/en-us/troubleshoot/azure/virtual-machines/serial-console-grub-single-user-mode

https://docs.microsoft.com/en-us/troubleshoot/azure/virtual-machines/boot-diagnostics

https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles#virtual-machine-contributor



GrubLoader Issue

https://docs.microsoft.com/en-us/troubleshoot/azure/virtual-machines/troubleshoot-vm-boot-error

https://gutsytechster.wordpress.com/2018/07/24/how-to-resolve-grub-error-file-grub-i386-pc-normal-mod-not-found/

https://docs.microsoft.com/en-us/troubleshoot/azure/virtual-machines/serial-console-grub-proactive-configuration

https://www.youtube.com/watch?v=KevOc3d_SG4

https://askubuntu.com/questions/266429/error-file-grub-i386-pc-normal-mod-not-found


10.79.202.45

10.79.202.5

fda

Rage@1234567




azureubuntu


azlinux

R....1......e2021


GRand Unified Bootloader (GRUB) is likely the first thing you see when you boot a virtual machine (VM). Because it's displayed before the operating system has started, GRUB isn't accessible via SSH. In GRUB, you can modify your boot configuration to boot into single-user mode, among other things.


REISUB



az vm start -g CORA-AI -n FDA-VEA


az vm restart -g CORA-AI -n FDA-VEA


az vm restart -g CORA-AI -n FDA-VEA --force  --no-wait


az serial-console send reset -g CORA-AI -n FDA-VEA


az serial-console send reset -g CORA-AI -n FDA-VEA


az vm boot-diagnostics get-boot-log -g CORA-AI -n FDA-VEA


az serial-console connect -g CORA-AI -n FDA-VEA


az serial-console send reset -g CORA-AI -n FDA-VEA


az vm boot-diagnostics enable -g CORA-AI -n FDA-VEA


--------------------------------------------------------------------------

# list unattached managed disks (managedBy is null) in the resource group
az disk list --query '[?managedBy==`null`].[id]' -o tsv -g CORA-AI 


id=

az disk delete --ids $id --yes

--------------------------------------------------------------------------


subscriptionId=$(az account show --output=json | jq -r .id)


az resource show --ids "/subscriptions/$subscriptionId/providers/Microsoft.SerialConsole/consoleServices/default" --output=json --api-version="2018-05-01" | jq .properties



-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


dmesg | grep SCSI


https://docs.microsoft.com/en-us/azure/virtual-machines/boot-diagnostics



-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

https://eaasblobstorage.blob.core.windows.net/default/SOP%20-%20Reimb%20(002).pdf?sp=r&st=2021-08-10T09:09:09Z&se=2021-08-10T17:09:09Z&spr=https&sv=2020-08-04&sr=b&sig=E%2BDVesrpOud11k%2Ftb5rjbdevUHkmruAU17Llnskjq9s%3D



sp=r&st=2021-08-10T09:09:09Z&se=2021-08-10T17:09:09Z&spr=https&sv=2020-08-04&sr=b&sig=E%2BDVesrpOud11k%2Ftb5rjbdevUHkmruAU17Llnskjq9s%3D



https://eaasblobstorage.blob.core.windows.net/




Connection string (Key or SAS)


https://eaasblobstorage.blob.core.windows.net/



sv=2020-04-08&ss=b&srt=sco&st=2021-08-10T09%3A16%3A31Z&se=2022-08-10T09%3A16%3A00Z&sp=rwdxftlacup&sig=NfSV5%2F9wbqXlBAhGBGxek8RAw723PYERkBgLe009Ifk%3D



SharedAccessSignature=sv=2020-04-08&ss=b&srt=sco&st=2021-08-10T09%3A16%3A31Z&se=2022-08-10T09%3A16%3A00Z&sp=rwdxftlacup&sig=NfSV5%2F9wbqXlBAhGBGxek8RAw723PYERkBgLe009Ifk%3D;BlobEndpoint=https://eaasblobstorage.blob.core.windows.net/;


eaasblobstorage

SharedAccessSignature=sv=2020-04-08&ss=b&srt=sco&st=2021-08-10T09%3A16%3A31Z&se=2022-08-10T09%3A16%3A00Z&sp=rwdxftlacup&sig=NfSV5%2F9wbqXlBAhGBGxek8RAw723PYERkBgLe009Ifk%3D;BlobEndpoint=https://eaasblobstorage.blob.core.windows.net/;




Query string:

?sv=2020-04-08&ss=b&srt=sco&st=2021-08-10T09%3A16%3A31Z&se=2022-08-10T09%3A16%3A00Z&sp=rwdxftlacup&sig=NfSV5%2F9wbqXlBAhGBGxek8RAw723PYERkBgLe009Ifk%3D



How will you connect to the storage account?

Connection string (Key or SAS)

Shared access signature URL (SAS)

Account name and key


?sv=2020-08-04&ss=b&srt=sco&sp=rwdlactfx&se=2022-08-10T17:52:53Z&st=2021-08-10T09:52:53Z&spr=https&sig=Iiz0%2FBxajPuPU9mBbbvb1OIw5dviL%2BzOkqVL%2Ft1wh3U%3D


https://eaasblobstorage.blob.core.windows.net/?sv=2020-08-04&ss=b&srt=sco&sp=rwdlactfx&se=2022-08-10T17:52:53Z&st=2021-08-10T09:52:53Z&spr=https&sig=Iiz0%2FBxajPuPU9mBbbvb1OIw5dviL%2BzOkqVL%2Ft1wh3U%3D


SharedAccessSignature=sv=2020-08-04&ss=b&srt=sco&sp=rwdlactfx&se=2022-08-10T17:52:53Z&st=2021-08-10T09:52:53Z&spr=https&sig=Iiz0%2FBxajPuPU9mBbbvb1OIw5dviL%2BzOkqVL%2Ft1wh3U%3D;

BlobEndpoint=https://eaasblobstorage.blob.core.windows.net/;



------------------------


SharedAccessSignature=sv=2020-08-04&ss=b&srt=sco&sp=rwdlactfx&se=2022-08-10T17:52:53Z&st=2021-08-10T09:52:53Z&spr=https&sig=Iiz0%2FBxajPuPU9mBbbvb1OIw5dviL%2BzOkqVL%2Ft1wh3U%3D;BlobEndpoint=https://eaasblobstorage.blob.core.windows.net/



------------------------

Azure - Pipeline - Add Approver for Stage

https://learn.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass