
Azure Bastion undocumented requirement gotcha

Just a quick post to highlight an undocumented requirement for Azure Bastion that I came across when deploying a Landing Zone.

I’m creating a new landing zone for a client, and we’re using Azure Bastion for secure access to IaaS VMs. I decided to create the resource in a separate resource group from the Virtual Network, as it was uncertain whether it would be required long term. There’s nothing in the current documentation indicating this isn’t possible, so I tried to deploy.

After a few minutes, it failed:

Here’s the less-than-helpful error:

No matter what I tried (Portal, Terraform, Azure CLI), the result was the same.

Upon speaking to Azure Support, I learned this is a known issue; the mitigation is to deploy the Bastion host within the same Resource Group as the Virtual Network that it connects to.
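If you hit this, pointing the Bastion deployment at the VNet’s resource group is straightforward. Here’s a minimal PowerShell sketch, assuming a VNet named ‘vnet-hub’ in resource group ‘rg-network’ that already contains an AzureBastionSubnet (all names are examples, not from my deployment):

# Assumed names; the VNet must already contain an 'AzureBastionSubnet'
$vnet = Get-AzVirtualNetwork -Name 'vnet-hub' -ResourceGroupName 'rg-network'

$pip = New-AzPublicIpAddress -Name 'pip-bastion' -ResourceGroupName 'rg-network' `
    -Location $vnet.Location -AllocationMethod Static -Sku Standard

# Deploy Bastion into the SAME resource group as the VNet to avoid the failure above
New-AzBastion -Name 'bastion-hub' -ResourceGroupName 'rg-network' `
    -PublicIpAddress $pip -VirtualNetwork $vnet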

I’ve experienced similar when deploying API Management in Azure, but at least there the errors from ARM were meaningful and pointed me in the right direction.

Hopefully, if you come across the same issue before it’s resolved, this will help you out.


Deploying Azure Stack HCI 20H2 on PowerEdge R630

Ever since the new version of Azure Stack HCI was announced at Microsoft Inspire 2020, there has been a real buzz about the solution and where it is going. Thomas Maurer has written a great article detailing the technology improvements here. If you haven’t read it, I highly recommend you do!

If you want to get hands-on, there are a few options here to do so:

  • Deploy the preview to existing Hardware

  • Deploy on your existing Virtualization platform (use Nested Virtualization)

  • Deploy to a VM in Azure (using a VM Series that supports Nested Virtualization)

Matt McSpirit has written a fantastic step-by-step guide for deploying to virtualized platforms; again - highly recommend reading it.

You can also check out MSLab (https://github.com/microsoft/MSLab), which automates the deployment to Hyper-V environments.

However, I want to write about deploying to physical hardware that I’m lucky enough to have access to, which will be more representative of what real-world deployments look like. I won’t go into the full process, as the Microsoft documentation goes through all the steps, but I will explain how I installed it onto physical servers that aren’t officially supported.

Hardware

I have access to a number of Dell EMC PowerEdge R630 servers that were originally used to host ASDK, so I know they meet the majority of the requirements to run Hyper-V / S2D. Dell EMC don’t officially support this server for HCI, but I’m only using it to kick the tires, so I’m not bothered about that.

Each Server has:

  • 2 x 12 core Xeon E5-2670 processors

  • 384 GB RAM

  • 5 x 480GB SSD drives

  • Perc H730 mini RAID controller

  • 4 x Emulex 10GB NICs

The only real issue with the list above is the H730 controller. As it is a RAID controller, it could cause issues, and Dell EMC recommend using an HBA330 controller, as per this thread. I don’t have any HBA330s to hand, so I had to make do and try to make it work (spoiler alert: I did get it working, details later!).

Preparing the Hardware

The first thing I had to do was to ensure that each server was configured correctly from a hardware perspective. To do this, connect to the iDRAC interface for each of the servers and make the following changes, if needed.

First, I had to make sure the PERC H730 controller was set to ‘HBA’ mode.

Navigate to Storage / Controllers

From the Setup tab, I checked the current value for the Controller Mode. It should be set to ‘HBA’

If it isn’t, from the corresponding Action dropdown, select HBA.

From the Apply Operation Mode dropdown, select At Next Reboot and then click on Apply.

Next, navigate to iDRAC Settings / Network. Select the OS to iDRAC Pass-Through tab. Make sure Pass-through configuration is set to Disabled, and apply the changes if necessary.

Reboot the server now for the changes to apply to the controller mode.

Next on the list was to ensure the BIOS and firmware were up to date. I used the Server Update Utility from the Lifecycle Controller, as documented here, to do this.

Once all that was complete, I could start to deploy the Azure Stack HCI OS.

Installing Azure Stack HCI OS

There are two things we need to download: the Azure Stack HCI ISO and Windows Admin Center (latest version is 2103.2). To get the necessary files, you need to sign up for the public preview: fill in your details here and go grab them. WAC can be downloaded here as well.

Once you have the ISO, go ahead and connect to your Virtual Console and connect the Virtual Media (the version downloaded via the Eval link is AzureStackHCI_17784.1408_EN-US.iso). On more than one occasion I forgot to click on Map Device, which wasted a few minutes!

To make things a little easier, go to the iDRAC, Server / Setup. Change the First Boot Device to Virtual CD/DVD/ISO and then apply the changes.

Next, restart the server. I had to make sure I didn’t get distracted, as after a couple of minutes of running through the BIOS steps, it prompts you to press any key to boot from the CD/DVD. The image below shows it’s about to boot from the Virtual CD…

…and the following is if you don’t press a key in time :)

I won’t detail the installation process of the Azure Stack HCI OS, as there are very few configuration items required; just make sure you select the correct drive/partition to install the OS on (Drive 0 for me; I wiped all the volumes/partitions).

After a while (it can be slow doing an install via the Virtual Media), the HCI OS will be installed. You’ll need to set an administrator password on first login.

Once that’s set, the config menu appears.

From here, the first thing was to confirm I had a valid IP address, so I selected option 8 and then the NIC I wanted to use for management. The only other thing I really had to do was set the computer name, so I did that, but I also joined the system to a domain, just so I knew that name resolution and networking were working as expected.
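If you’d rather not drive the config menu on every node, the same basics can be scripted. A rough sketch (node and domain names are examples only):

# Run on each node, e.g. via the iDRAC console or PowerShell remoting
Rename-Computer -NewName 'HCI-NODE01' -Restart

# After the reboot, join the domain to confirm name resolution and networking
Add-Computer -DomainName 'contoso.local' -Credential (Get-Credential) -Restart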

I rinsed/repeated for all the servers (4 of them) that I wanted to form my HCI cluster.

Follow the instructions here to deploy the cluster.
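For reference, the broad shape of what those instructions cover looks like this in PowerShell (cluster name, node names and IP address are placeholders, not the official steps verbatim):

$nodes = 'HCI-NODE01','HCI-NODE02','HCI-NODE03','HCI-NODE04'

# Validate first - this produces the report where my S2D failure (below) showed up
Test-Cluster -Node $nodes -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'

# Create the cluster without claiming storage, then enable S2D
New-Cluster -Name 'HCICLUS01' -Node $nodes -StaticAddress '10.0.0.50' -NoStorage
Enable-ClusterStorageSpacesDirect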

Fixing Storage Spaces Direct Deployment

When going through the cluster installation, I did encounter one error that blocked the deployment of S2D. This could be seen in the Failover Cluster Validation report.

List All Disks for Storage Spaces Direct Failed

Bus type is RAID - it should be SAS, SATA or NVMe

Fortunately, we can change this by running the following PowerShell command:

(Get-Cluster).S2DBusTypes = "0x100"

S2DBusTypes should report back as Decimal 256.

Running the Cluster Validation again will now report as Success
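If you want to sanity-check the override from PowerShell before re-running validation, something like this works from any cluster node (my own quick check, not from the official docs):

(Get-Cluster).S2DBusTypes   # should return 256 (0x100)

# BusType still reports RAID; the override just tells S2D to accept it
Get-PhysicalDisk | Select-Object FriendlyName, BusType, CanPool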


You can now go ahead and create the S2D cluster :)


Hands on with GitHub Actions and Azure Stack Hub

The past few months, I’ve been working with GitHub Actions as the CI/CD platform for Azure hosted applications. Whilst that wasn’t using Azure Stack Hub, I wanted to see how I could use Actions within my environment, as I really like the low entry barrier, and GitHub is fairly ubiquitous.

From a high level, in order to achieve the integration, we need to use an Actions runner, which in simple terms is an agent running on a VM (or container) that polls for workflow events and executes jobs on the system where the agent resides. GitHub provides hosted runners out of the box, and that’s great if you’re using a public cloud, but in the vast majority of cases Azure Stack Hub sits on a corporate network, so we can use a self-hosted runner for these scenarios, and that’s what I’ll be using here.

There are already some tutorials and videos on how you can integrate with your Azure Stack Hub environment, but I wanted to go a little deeper: from showing and explaining how to get your environment set up, to automating the creation of a self-hosted runner in your Azure Stack Hub tenant.

First, here are some links to some great content:

https://channel9.msdn.com/Shows/DevOps-Lab/GitHub-Actions-on-Azure-Stack-Hub

and the official docs:

https://docs.microsoft.com/en-gb/azure-stack/user/ci-cd-github-action-login-cli?WT.mc_id=devopslab-c9-cxa&view=azs-2102

As ever, the documentation assumes you are familiar with the platform. I won’t assume that, so I’ll go through what you need to do step by step, and hopefully you’ll be successful first time!

Pre-requisites

Before you get started, here’s what you’ll need up-front:

  1. A GitHub account and a repository. I would highly recommend a private repo, as you will be deploying a self-hosted runner which is linked to this repo - this will be able to run code in your environment. GitHub make this recommendation here.

  2. An Azure Stack Hub environment with a Tenant Subscription that you can access (assumption here is we’re using Azure AD as the identity provider)

  3. (Optional) A service principal with contributor rights to the Azure Stack Hub tenant subscription. If you don’t have this, I detail how it can be created later.

  4. Azure CLI

  5. A GitHub Personal Access Token (PAT)

Credentials to connect to Azure Stack Hub

First thing we need is a service principal that has rights to the tenant subscription (I recommend contributor role). We will use these credentials to create a secret within our GitHub repo that we will use to connect to the subscription.

If you don’t already have an SP, following the official documentation helps us create one.

All my examples are being run from Linux and Azure CLI (Ubuntu WSL to be specific :) )

First, register your Azure Stack Hub tenant environment (if not already done so)

az cloud register \
    -n "AzureStackHubTenant" \
    --endpoint-resource-manager "https://management.<region>.<FQDN>" \
    --suffix-storage-endpoint ".<region>.<FQDN>" \
    --suffix-keyvault-dns ".vault.<region>.<FQDN>" \
    --endpoint-active-directory-graph-resource-id "https://graph.windows.net/" \
    --profile 2019-03-01-hybrid

We need to connect to our Azure Stack Hub environment so that when the Service Principal is created, we can assign it to a scope.

Once the environment is defined, we need to make sure this is active.

az cloud set -n AzureStackHubTenant

Run this command to confirm that it is set correctly:

az cloud list -o table

Next, let’s connect to our subscription hosted on Azure Stack Hub.

az login

If there’s more than one subscription, you might need to specify which subscription you want to connect to.

az account set --subscription <subName>

We can see above that I have set the active subscription to ‘DannyTestSub’.

Next, we want to create our service principal. To make things easier, let’s have the CLI do the work in assigning the scope to the user subscription:

# Retrieve the subscription ID
SUBID=$(az account show --query id -o tsv)

az ad sp create-for-rbac --name "ASH-github-runner" --role contributor \
    --scopes /subscriptions/$SUBID \
    --sdk-auth

Running that should produce something like the following in my environment:


It is important that we copy the JSON output as-is; we need this exact format to create our GitHub secret. Theoretically, if you already have a clientId and secret, you could construct your own JSON-formatted credential like this:

{ "clientId": "<your_ClientID>", "clientSecret": "<your_Client_secret>", "subscriptionId": "<Azure_Stack_Hub_Tenant SubscriptionID>", "tenantId": "<Your_Azure_AD_Tenant_Id>", "activeDirectoryEndpointUrl": "https://login.microsoftonline.com/", "resourceManagerEndpointUrl": "https://management.<REGION>.<FQDN>", "activeDirectoryGraphResourceId": "https://graph.windows.net/", "sqlManagementEndpointUrl": null, "galleryEndpointUrl": "https://providers.<REGION>.local:30016/", "managementEndpointUrl": "https://management.<REGION>.<FQDN>" }

Now we have the credentials, we need to set up a secret within GitHub.

From the GitHub portal, connect to your private repo that you will use for Azure Stack Hub automation.

Click on the Settings cog

Click on Secrets

Click on ‘New repository secret’

Enter a name (I’m using AZURESTACKHUB_CREDENTIALS), paste in the JSON content for the SP that was previously created, and then click on Add Secret.
You should now see your newly added credentials under Action Secrets.
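As an aside, the GitHub CLI can set the same secret from the command line; a hedged sketch, assuming you saved the SP JSON to creds.json and have gh installed and authenticated (the repo path is an example):

gh secret set AZURESTACKHUB_CREDENTIALS --repo dmc-tech/AzsHubTools --body (Get-Content .\creds.json -Raw)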

So we have our credentials and have set up an Actions runner secret; now we need an Actions runner within our Azure Stack Hub environment to run our workflows.

If you already have a Windows Server or Linux host running in your Azure Stack Hub tenant subscription, you can follow the manual steps, per the guidance given under the Settings / Actions / Runners config page:


You can select the OS type of the system you have running and follow the commands.

Note: The ./config.(cmd|sh) command uses a token which has a short lifetime, so be aware if you use this method and are expecting to use it for automating self-hosted runner deployments!
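This is why my automation below fetches a fresh token at deploy time. For illustration, here’s roughly how a registration token can be requested from the GitHub REST API using the PAT (owner/repo values are placeholders; the real logic lives in the template’s custom script):

$owner = 'dmc-tech'; $repo = 'AzsHubTools'; $pat = '<your PAT>'

$headers = @{ Authorization = "token $pat"; Accept = 'application/vnd.github+json' }
$resp = Invoke-RestMethod -Method Post -Headers $headers `
    -Uri "https://api.github.com/repos/$owner/$repo/actions/runners/registration-token"

$resp.token   # short-lived token to pass to ./config.sh --token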


The above method works and is OK if you want to quickly test capabilities. However, I wanted the ability to automate the runner provisioning process, from the VM to the installation of the runner agent.

I did this by creating an ARM template that deploys an Ubuntu VM and runs a Bash script that installs the necessary tools (Azure CLI, Docker, kubectl, Helm, etc.) and, most importantly, deploys the agent and dynamically retrieves a token from GitHub to register our runner. One crucial parameter we need is a GitHub Personal Access Token (PAT); we need this to authenticate to the GitHub Actions API to generate the runner registration token.

To create the PAT, click on your user account at the top right of the GitHub portal:

Click Settings

Click Developer Settings

Select Personal access tokens and then Generate new token

Set the scope to repo

Scroll to the bottom and then click Generate token

Make sure to copy the PAT, as you can’t retrieve it afterwards (you could always regenerate it if needed :) )

Now we have the PAT, we can go ahead and deploy the VM using the ARM template I created.

Go ahead and get it from

https://github.com/dmc-tech/AzsHubTools/blob/main/ghRunner/template.json

There’s nothing fancy; it deploys a VNet, NIC, public IP, and VM (it uses Ubuntu 18.04 - make sure you have it available via the Azure Stack Hub Marketplace!), and then runs a custom script to install a bunch of tools and the runner agent. The only parameters you will need to provide are:

Parameter        Description
gitHubPat        Personal Access Token used to access GitHub
gitHubRepo       GitHub repo in which to create the self-hosted runner
gitHubOwner      GitHub owner or organisation where the repo is located
adminPublicKey   Public SSH key used to log in to the VM

If you’re not sure how the Owner and Repo are derived, it’s simple: for https://github.com/dmc-tech/AzsHubTools, the owner is ‘dmc-tech’ and the repo is ‘AzsHubTools’.

To generate the adminPublicKey on Windows systems, I prefer to use MobaXterm. See the end of this post for how to generate the private/public key pair.

Deploy using the ARM template within your tenant subscription.
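If you’d rather deploy from PowerShell than the portal, something like the following should work (resource group, location and parameter values are examples, and I’m assuming the raw GitHub URL for the template; your Az context should already target your Hub tenant subscription):

New-AzResourceGroup -Name 'rg-gh-runner' -Location 'local'   # use your Hub region name

New-AzResourceGroupDeployment -ResourceGroupName 'rg-gh-runner' `
    -TemplateUri 'https://raw.githubusercontent.com/dmc-tech/AzsHubTools/main/ghRunner/template.json' `
    -TemplateParameterObject @{
        gitHubPat      = '<your PAT>'
        gitHubOwner    = 'dmc-tech'
        gitHubRepo     = 'AzsHubTools'
        adminPublicKey = '<your SSH public key>'
    }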


The deployment takes about 10 minutes to complete in my environment.


We can see that the agent has successfully deployed by checking the Actions / Runners settings within our GitHub repo:

Success!!!

Now we can go ahead and test a workflow.

Within your repo, if it doesn’t already exist, create the following directory structure:

/.github/workflows

This is where the workflow YAML files that our actions use are stored.

For a simple test, go ahead and copy the following into this folder in your repo (and commit it to the main branch!):

https://github.com/dmc-tech/AzsHubTools/blob/main/.github/workflows/testAzureStackHub.yml

The workflow is manually triggered (workflow_dispatch) and prompts for a parameter (the name of the subscription you want to run the action against):

on: 
  workflow_dispatch:
    inputs:
      subscription:
        description: 'Azure Stack Hub User subscription'
        required: true
        default: 'TenantSubscription'

name: Test GitHub Runner in an Azure Stack Hub environment

env:
  ACTIONS_ALLOW_UNSECURE_COMMANDS: 'true'

jobs: 
  azurestackhub-test:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@main
      - name: Login to AzureStackHub with CLI
        uses: azure/login@releases/v1
        with:
          creds: ${{ secrets.AZURESTACKHUB_CREDENTIALS }}
          environment: 'AzureStack'
          enable-AzPSSession: false

      - name: Run Azure CLI Script Against AzureStackHub
        run: |
          hostname
          subId=$(az account show --subscription "${{ github.event.inputs.subscription }}" --query id -o tsv)
          az account set --subscription $subId

          az group list --output table

You can see that the workflow refers to the secret we defined earlier: secrets.AZURESTACKHUB_CREDENTIALS

The workflow configures the Azure Stack Hub tenant environment on the runner VM (using the values from the JSON stored in the secret), connects using the service principal and secret and then runs the Azure CLI commands to list the resource groups in the specified subscription.

To run the workflow, head over to the GitHub site:

Click on Actions and then the name of the workflow we added: ‘Test GitHub Runner in an Azure Stack Hub environment’

Click on Run workflow and type in the name of your Azure Stack Hub tenant subscription (this caters for multiple subscriptions)

Hopefully you will see that the action ran successfully as denoted above. Click on the entry to check out the results.

Click on the job and you can see the output: the Azure CLI command to list the resource groups for the subscription completed and returned the results.

With that, we’ve shown how we can automate the deployment of a self-hosted runner on Azure Stack Hub and demonstrated how to run a workflow.

I really like GitHub Actions, and there’s scope for some powerful automation, so although what I’ve shown is very simple, I hope you find it useful and that it helps you get started.

Appendix: Creating SSH keys in MobaXterm

1. Click on Tools

2. Select MobaKeyGen (SSH key generator)

1. Click on Generate

2. Copy the Public key for use with the ARM template.

3. Save the Private Key (I recommend setting a Key passphrase). You’ll need this if you need to SSH to the VM!
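Alternatively, if you have OpenSSH available (it ships with recent Windows 10 builds), ssh-keygen does the same job without MobaXterm:

# Writes the key pair to the given path; set a passphrase when prompted
ssh-keygen -t rsa -b 4096 -f "$env:USERPROFILE\.ssh\ghrunner_rsa"

# The .pub contents are what the adminPublicKey template parameter expects
Get-Content "$env:USERPROFILE\.ssh\ghrunner_rsa.pub"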


ASDK 2008 installation fix

Here’s a quick post on installing ASDK 2008, as there seems to have been a recent change to the names of Azure AD roles that causes the ASDK install routine to fail when it is checking to see if the supplied Azure AD account has the correct permissions on the AAD tenant.

Following the installation docs: https://docs.microsoft.com/en-us/azure-stack/asdk/asdk-install?view=azs-2008, after a short time running the PowerShell routine you will be prompted for the Azure AD global admin account. Once you’ve entered it, the account is verified, but the check fails with an error similar to the one below:

Get-AzureAdTenantDetails : The account you entered 'admin@contoso.onmicrosoft.com' is not an administrator of any Azure Active
Directory tenant.
At C:\CloudDeployment\Setup\Common\InstallAzureStackCommon.psm1:546 char:27
+ ... ntDetails = Get-AzureAdTenantDetails -AADAdminCredential $InfraAzureD ...
+                 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,Get-AzureADTenantDetails

The reason for this is that a function called Get-AADTenantDetail in the c:\CloudDeployment\Setup\Common\AzureADConfiguration.psm1 module checks whether the supplied account has the correct permissions. It currently checks for “Company Administrator”, but that role name no longer exists; it should be changed to “Global Administrator”.

To fix this, edit c:\CloudDeployment\Setup\Common\AzureADConfiguration.psm1, and navigate to line 339. Make the following change:

$roleOid = Invoke-Graph -method Get -uri $getUri -authorization $authorization | Select-Object -ExpandProperty Value | Where displayName -EQ 'Global Administrator' | Select-Object -ExpandProperty objectId
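If you’d rather script the edit than open the module in an editor, a quick (unofficial) patch along these lines does the same thing:

# Swap the outdated role name for the current one in the setup module
$module = 'C:\CloudDeployment\Setup\Common\AzureADConfiguration.psm1'
(Get-Content $module -Raw) -replace "'Company Administrator'", "'Global Administrator'" | Set-Content $module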

Note: In order to see the module that needs the change, you have to run through the install routine once and let it fail, as it expands the setup directories from a NuGet package if they don’t exist.

If you re-run the install routine, it should get past this stage (assuming the account you provided is a global admin for the tenant :) )


Hands on with Azure Arc enabled data services on AKS HCI - part 3

This is part 3 in a series of articles on my experiences of deploying Azure Arc enabled data services on Azure Stack HCI AKS.

  • Part 1 discusses installation of the tools required to deploy and manage the data controller.

  • Part 2 describes how to deploy and manage a PostgreSQL hyperscale instance.

  • Part 3 describes how we can monitor our instances from Azure.

In the preview version, if you want to view usage data and metrics for your Arc enabled data services instances, you have to run manual azdata CLI commands: one to first dump the data to a JSON file, and another to upload it to your Azure subscription. This can be automated by running them as a scheduled task or cron job.

Pre-requisites

Before we can upload any data, we need to make sure that some pre-reqs are in place.

If you’ve been following the previous two parts, the required tools should already be in place: the Azure (az) and Azure Data (azdata) CLIs.

We also need to make sure the necessary resource provider is registered (Microsoft.AzureArcData). The instructions here show how you can do it using the Azure CLI, and it’s straightforward. The docs don’t show how to achieve the same outcome with PowerShell, so here are the commands needed, if you want to try it out :) :

$Subscription = '<your subscriptionName>'
$ResourceProviderName = 'Microsoft.AzureArcData'

$AzContext = Get-AzContext

if (-not ($AzContext.Subscription.Name -eq $Subscription)) {
    Login-AzAccount -Subscription $Subscription
}


$resourceProviders = Get-AzResourceProvider -ProviderNamespace $ResourceProviderName  
$resourceProviders | Select-Object ProviderNamespace, RegistrationState
$resourceProviders | Where-Object RegistrationState -eq 'NotRegistered' | Register-AzResourceProvider

Get-AzResourceProvider -ProviderNamespace $ResourceProviderName  | Select-Object ProviderNamespace, RegistrationState

The next recommended pre-req is to create a service principal that can be used to automate the upload of the data. This is easy to create using the Azure CLI, as detailed here.

az ad sp create-for-rbac --name azure-arc-metrics

Make a note of the appId (SPN_CLIENT_ID), password (SPN_CLIENT_SECRET) and tenant (SPN_TENANT_ID) values that are returned after the command has been run.

Run the following command to get the Subscription Id.

az account show --query {SubscriptionId:id}


Make a note of the output.



The next thing to do is to assign the service principal to the Monitoring Metrics Publisher role in the subscription.

az role assignment create --assignee <appId> --role "Monitoring Metrics Publisher" --scope /subscriptions/<Subscription ID>

Next, we need to set up a Log Analytics workspace (if you don’t already have one). Run the following Azure CLI commands to create a resource group and workspace:

az group create --location EastUs --name AzureArcMonitoring
az monitor log-analytics workspace create --resource-group AzureArcMonitoring --workspace-name AzureArcMonitoring

From the output, take note of the Workspace Id.

The last thing we need to retrieve is the shared key for the workspace. Run the following:

az monitor log-analytics workspace get-shared-keys --resource-group AzureArcMonitoring --workspace-name AzureArcMonitoring

Take note of the primary or secondary key.

Retrieving data

Retrieving data from our data controller and uploading it to Azure is currently a two-step process.

The following three commands will export the usage, metrics and log data to your local system:

azdata arc dc export --path c:\temp\arc-dc-usage.json --type usage --force
azdata arc dc export --path c:\temp\arc-dc-metrics.json --type metrics --force
azdata arc dc export --path c:\temp\arc-dc-logs.json --type logs --force

Note: Be careful using the --force switch, as it will overwrite the specified file. If you haven’t uploaded the existing data from that file to Azure, there is a potential you will miss those collected metrics for the period.

The second step is to upload the data to our Azure subscription.

You can check the resource group that the data controller was deployed to via the data controller dashboard in ADS.

Once you have the service principal details, run the following commands:

azdata arc dc upload --path c:\temp\arc-dc-usage.json
azdata arc dc upload --path c:\temp\arc-dc-metrics.json
azdata arc dc upload --path c:\temp\arc-dc-logs.json

For each command, it will prompt for the tenant id, client id, and client secret of the service principal. Azdata currently does not allow you to specify these as parameters, but you can set them as environment variables so you are not prompted for them. For the log upload, a Log Analytics workspace ID and shared key are also required.

Here’s an example PowerShell script that you can use to automate the connection to your data controller, retrieve the metrics and also upload to your Azure subscription:

$Env:SPN_AUTHORITY='https://login.microsoftonline.com'
$Env:SPN_TENANT_ID = "<SPN Tenant Id>"
$Env:SPN_CLIENT_ID = "<SPN Client Id>"
$Env:SPN_CLIENT_SECRET = "<SPN Client secret>"
$Env:WORKSPACE_ID = "<Your LogAnalytics Workspace ID>"
$Env:WORKSPACE_SHARED_KEY = "<Your LogAnalytics Workspace Shared Key>"
$Subscription = "<Your SubscriptionName>"


$Env:AZDATA_USERNAME = "<Data controller admin user>"
$Env:AZDATA_PASSWORD = "<Data controller admin password>"
$DataControllerEP = "https://<DC IP>:30080"

# Find your contexts: kubectl config get-contexts
$kubeContextName = 'my-workload-cluster-admin@my-workload-cluster'

$dataPath = 'c:\temp'

kubectl config use-context $kubeContextName

azdata login -e $DataControllerEP

az login --service-principal -u $Env:SPN_CLIENT_ID -p $Env:SPN_CLIENT_SECRET --tenant $Env:SPN_TENANT_ID

if ($Subscription) {
    az account set --subscription $Subscription
}

if (-not (test-path -path $dataPath)) {
    mkdir $dataPath
}

cd $dataPath

. azdata arc dc export --type metrics --path metrics.json --force
. azdata arc dc upload --path metrics.json

. azdata arc dc export --type usage --path usage.json --force
. azdata arc dc upload --path usage.json

. azdata arc dc export --type logs --path logs.json --force
. azdata arc dc upload --path logs.json

After you have uploaded the data to Azure, you should be able to start to view it in the Azure Portal.

At the time of writing, if you try and use the link from the Azure Arc Data Controller Dashboard in ADS, it will throw an error:


The reason for this is that the URI constructed by the Arc extension isn’t targeting the correct resource provider: it uses Microsoft.AzureData, when the correct one is Microsoft.AzureArcData. I assume this will be fixed in an imminent release of the extension, as I think the namespace changed very recently. In the meantime, it can be manually patched as follows (correct for version 0.6.5 of the Azure Arc extension):

Edit:

%USERPROFILE%\.azuredatastudio\extensions\microsoft.arc-0.6.5\dist\extension.js

Find and replace all instances of Microsoft.AzureData with Microsoft.AzureArcData (there should be 4 in total).
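If the file is minified and awkward to search by eye, the replacement can be scripted too; a small sketch (path correct for version 0.6.5, per above):

$file = "$env:USERPROFILE\.azuredatastudio\extensions\microsoft.arc-0.6.5\dist\extension.js"

# 'Microsoft.AzureArcData' is never matched by this pattern, so it's safe to re-run
(Get-Content $file -Raw) -replace 'Microsoft\.AzureData\b', 'Microsoft.AzureArcData' | Set-Content $file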


If the file has no formatting (due to it being .js), just do the find/replace. I used a VS Code extension (JS-CSS-HTML Formatter) to beautify the formatting, as can be seen in the screen grab above.

Save the file and restart the Azure Data Studio Session. When you now click on Open in Azure Portal, it should open as expected.

As it is, we can’t actually do that much from the portal, but enhanced capabilities will come in time. We can, however, view a bit more about our PostgreSQL instances, looking similar to the Azure Data Studio dashboard:

When looking at Metrics, make sure you select the correct namespace - in my example it’s postgres01

The integration with the portal and the uploaded logs is a bit hit and miss. I found that clicking on the Logs link from my resource did not point me to the Log Analytics workspace I specified when I ran my script, so I had to manually target it before I could query the logs.



Thanks for reading this series, and I hope it will help others get around some of the small gotchas I encountered when evaluating this exciting technology stack!
