AD FS, Azure Stack | Danny McDermott

Azure Stack update 1811 - my favorite feature in this release


Microsoft have just released Azure Stack Update 1.1811.0.101, and for me, it is one I am looking forward to implementing now that I have read the release notes on the new capabilities. For some, the headline feature is the introduction of the Extension Host, which simplifies access to the portals and management endpoints over SSL (it acts as a reverse proxy).  This has been known about for some months, as Microsoft have been warning operators about the additional certificate requirements so they can be ready for it: https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-extension-host-prepare. This is good, as it means fewer firewall rules are required and I'm all for simplification, but it's not the most exciting introduction for me - that's the support for service principals using client secrets.

Why's that?

I've been working with Azure Stack with AD FS as the identity provider for many months, and previously the only way to provision Service Principals (for use by automation or applications) was to use X509 certificates for authentication.  Setting up the certs is pretty cumbersome: they have to be generated, imported to the systems that will run the automation, the thumbprint grabbed, and PEM files generated with the private key for use with Azure CLI.  For me, that's too many places where things can go wrong (e.g., the certificate may not be present in the local computer store where the automation is running, which throws an error).

Using X509 certs to authenticate worked for some scenarios, but not for others.  For instance, a number of third-party solutions/tools (and first-party ones!) couldn't be used, as they were written to be compatible with Azure AD Service Principals (which primarily use secrets).  One example is the Terraform provider; prior to this update, it could only be used with Azure AD implementations, but in theory it's now open to AD FS as well.  This release also opens up the possibility of deploying the Kubernetes ARM template that is currently in preview.  The template requires a Service Principal Client ID and Client Secret, which previously blocked deployment on disconnected (AD FS) systems.
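To give a flavour of why this is a big deal, here's a minimal sketch of what a client-secret login should look like from PowerShell once 1811 is in place. I haven't tested this yet, and the environment name, placeholder IDs and the secret below are all assumptions on my part:

# All values below are placeholders - substitute the ClientId, secret and TenantID
# returned when the service principal is created on the 1811 stamp.
$clientId     = "<ClientID of the AD FS service principal>"
$clientSecret = ConvertTo-SecureString "<client secret>" -AsPlainText -Force
$cred         = New-Object System.Management.Automation.PSCredential ($clientId, $clientSecret)

# Assumes the "azurestacktenant" environment has already been registered with Add-AzureRMEnvironment
Add-AzureRmAccount -EnvironmentName "azurestacktenant" -ServicePrincipal -Credential $cred -TenantId "<TenantID>"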

I haven't had the chance to apply the update yet, but I will do so ASAP and look forward to testing whether client secrets with AD FS work as I expect.

 

 

Azure Stack | Danny McDermott

Adding Additional Nodes to Azure Stack


Last week, I had the opportunity to add some extra capacity to a four-node appliance that I look after. Luckily, I got to double the capacity, making it an eight-node scale unit. This post documents my experience and fills in the gaps that the official documentation doesn’t tell you 😊 From a high-level perspective, these are the activities that take place:

  • Additional nodes/servers are racked and stacked in the same rack as the existing scale unit.

  • Each node's BMC interface is configured with an IP address and gateway within the management network, along with the correct username/password (the same as for the existing nodes in the cluster).

  • The Azure Stack OEM updates the BMC and Top of Rack switch configurations to enable the switch ports for the additional nodes.

  • Azure Stack Operator adds each additional node to the Scale Unit (one at a time)

    • Compute resource becomes available first

    • S2D rebalances the cluster; once that completes, the additional storage is made available

Easy huh?

Here’s a bit more detail into each of the steps:

Each of the additional nodes being added to the scale unit must be identical to the existing servers. This includes CPU, memory, storage capacity and hardware versions. The hardware must be installed as prescribed by the OEM and connected to the correct ports on the BMC and TOR switches. It is unclear whether this is the responsibility of the OEM or the operator, so check beforehand when you purchase your additional nodes.

For Azure Stack to be able to add the additional nodes into the scale unit, the BMC interface must be configured so that the IP/subnet/gateway are correct, as well as the username and password matching the existing nodes in the scale unit. This is critical, as any misconfiguration will prevent the node from being added. As an example, assume that the management network is set to 10.0.0.0/27 and we have 4 existing nodes in our scale unit. 10.0.0.1 would be our gateway address and 10.0.0.3 – 10.0.0.6 would be the IP addresses of nodes 1 – 4, so for our first additional node we would use 10.0.0.7, incrementing from there up to a maximum of 16 nodes (10.0.0.18).

The network switches must be configured by the OEM. There is currently no provision for the additional configuration to be carried out via automation, and if the network switches were to be opened to an operator, this breaks the principle of Azure Stack being a blackbox appliance. The switches need reconfiguring to enable the additional ports on the BMC and two Top of Rack switches. Unused ports are purposely not enabled to keep the configuration as secure as possible.

Before attempting to add the additional nodes, check in the Administrator Portal whether any existing FRU operations are taking place (e.g. a rebuild of an existing node due to a hardware issue).

OK, so once all the above has been carried out, the Azure Stack operator can start to add the additional nodes. From the Administrator portal:

Select Dashboard -> Region Management

Select Scale Units to open the blade:

We currently have a nice and healthy four-node cluster 😊. Select Add Node to open the configuration blade.

With the current release, as there is only one region and scale unit, there is only one option that we can select for the first two drop downs.

Enter the BMC IP address of the additional node and select OK

My first attempt didn't work as the BMC was incorrectly configured (the gateway address for the BMC adapter was not set).

This is the error you will see if there is a problem:

Correcting the gateway address solved the problem. Here is what you’ll see when the scale unit is expanding:

After a few minutes, you will see the additional node being listed as a member of the s-cluster scale unit, albeit listed as Stopped.

If you click on the new node from the blade, it will show the status as ‘Adding’.

If you prefer, you can check the status via PowerShell. From a system that has the Azure Stack PowerShell modules installed, connect to the Admin Endpoint environment and run:

 
#Retrieve Status for the Scale Unit
Get-AzsScaleUnit | Select-Object Name, State

#Retrieve Status for each Scale Unit Node
Get-AzsScaleUnitNode | Select-Object Name, ScaleUnitNodeStatus, PowerState
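For completeness, if you haven't already registered and signed in to the admin environment on that system, it looks something like this (the environment name is arbitrary and the admin ARM endpoint is a placeholder for your own region/FQDN):

# Register the admin ARM endpoint for the stamp (placeholder endpoint - use your own region and FQDN)
Add-AzureRMEnvironment -Name "AzureStackAdmin" -ArmEndpoint "https://adminmanagement.<region>.<fqdn>"

# Sign in with an Azure Stack operator account, then run the Azs cmdlets above
Add-AzureRmAccount -EnvironmentName "AzureStackAdmin"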

Whilst the expansion takes place, a critical alert is fired. It's safe to ignore this.

A successfully completed node addition will show the power status as running, plus the additional cores and memory available to the cluster:

It takes a little shy of 3 hours to complete the addition of a single cluster node:

Note: you can only add one extra node at a time; if you try to add another while an addition is in progress, an error will be thrown, as below:

I found that the first node added without a hitch, but subsequent nodes had some issues. I got error messages stating ‘Device not found’ on a couple of occasions. In hindsight, I guess that the cluster was performing some S2D operations in the background and this caused clashes for the newly added node. To fix it, I had to perform a ‘Repair’ on the new node, which invariably fixed the problem on the first attempt. If there were more information about what is actually happening under the hood, I could give a more qualified answer.
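The Repair can be triggered from the node's blade in the Administrator portal (the Azs.Fabric.Admin module also has a Repair-AzsScaleUnitNode cmdlet if you'd rather script it, though I didn't go that route). To keep an eye on whether the repair had cleared the issue, you can just watch the node state; the node name below is a placeholder:

# Watch the state of the node that needed the repair until it comes back as Running
Get-AzsScaleUnitNode | Where-Object Name -like "*<new node name>*" |
    Select-Object Name, ScaleUnitNodeStatus, PowerState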

Eventually, all nodes were added 😊

Adding those additional nodes does not add additional storage to the Scale Unit until the S2D cluster has rebalanced. The only way you know that a rebalance is taking place is that the status of the scale unit shows as ‘expanding’, and it will do so for a long time after adding the additional node(s)!

Here’s how the Infrastructure File Shares blade looks whilst expansion is taking place:

Once expansion has completed, the additional infrastructure file shares are created:

Unfortunately, there is no way to check the progress of the rebalance operation either in the portal or via PowerShell. The Privileged Endpoint does include the Get-StorageJob cmdlet, but this is of no use unless the support session is unlocked. If it is unlocked, the following script could be used to check:


# Name of the S2D cluster that backs the scale unit
$ClusterName = "s-cluster"

# Get any storage jobs currently running against the cluster
$jobs = Get-StorageSubSystem -CimSession $ClusterName -FriendlyName Clus* | Get-StorageJob -CimSession $ClusterName

if ($jobs) {
	do {
		# Refresh the job list and summarise the overall progress
		$jobs = Get-StorageSubSystem -CimSession $ClusterName -FriendlyName Clus* | Get-StorageJob -CimSession $ClusterName
		$count = ($jobs | Measure-Object).Count
		$BytesTotal = ($jobs | Measure-Object BytesTotal -Sum).Sum
		$BytesProcessed = ($jobs | Measure-Object BytesProcessed -Sum).Sum
		$percent = $jobs.PercentComplete

		Write-Output "$count Storage Job(s) Running. GBytes Processed: $($BytesProcessed/1GB) GBytes Total: $($BytesTotal/1GB) Percent: $($percent)%"

		Start-Sleep 10
	} until ($null -eq $jobs)
}

AD FS, Azure Stack | Danny McDermott

Azure Stack portal bug


As I’m mainly working with Azure Stack deployments that use AD FS as the identity provider, I’m coming across some differences and bugs compared to deployments where Azure AD is used. One such bug is the following:

A user is a member of a global AD group that is assigned the Contributor role on a Tenant Subscription. They aren’t added directly as a user to the subscription.

When that user connects to the portal, they will be presented with the following if they click on the subscription:

If they try and create a resource within the subscription, they get the following:

By connecting as this same user via PowerShell or Azure CLI, they can create a resource group and resources and do everything expected of a Contributor.

I logged a support case with Microsoft and they have confirmed this is a bug in the portal and that it will be fixed in an imminent release (potentially 1811).

In the meantime, the workaround is to assign users directly to the role rather than via a global group or to use the API / PowerShell / Az CLI to manage resources.
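To illustrate the PowerShell route, a group-based Contributor can still deploy resources once the tenant environment is registered; the environment name, location and resource group name below are just examples:

# Sign in interactively as the affected user against the tenant ARM endpoint
# (assumes the "azurestacktenant" environment has already been registered with Add-AzureRMEnvironment)
Add-AzureRmAccount -EnvironmentName "azurestacktenant" -TenantId "<TenantID>"

# This succeeds for the group-based Contributor, even though the portal shows no access
New-AzureRmResourceGroup -Name "portal-bug-test-rg" -Location "<region>"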

AD FS, Azure Stack, Uncategorized | Danny McDermott

Creating an Azure Stack AD FS SPN for use with az CLI


Following on from my previous blog post on filling in the gaps for AD FS on Azure Stack integrated systems, here are some more complete instructions on creating a Service Principal on Azure Stack systems using AD FS as the identity provider. Why do you need this? Well, check out the following scenarios as taken from https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-integrate-identity#spn-creation:

There are many scenarios that require the use of a service principal name (SPN) for authentication. The following are some examples:

  • CLI usage with AD FS deployment of Azure Stack

  • System Center Management Pack for Azure Stack when deployed with AD FS

  • Resource providers in Azure Stack when deployed with AD FS

  • Various third party applications

  • You require a non-interactive logon

I’ve highlighted the first point, ‘CLI usage with AD FS deployment of Azure Stack’. This is significant, as AD FS only supports interactive logins for user accounts and, at this point in time, the AZ CLI does not support interactive mode against AD FS, so you must use a service principal.

There are a few areas that weren’t clear to me at first, so I worked it all out and tried to simplify the process.

At a high level, these are the tasks:

  • Create an X509 certificate (or use an existing one) to use for authentication

  • Create a new Service Principal (Graph Application) on the internal Azure Stack domain via PEP PowerShell session

  • Return pertinent details, such as Client ID, cert thumbprint, Tenant ID and relevant external endpoints for the Azure Stack instance

  • Export the certificate as PFX (for use on clients using PowerShell) and as a PEM file including the private key (for use with Azure CLI)

  • Give the Service Principal permissions to the subscription

Here’s the link to the official docs: https://docs.microsoft.com/en-gb/azure/azure-stack/azure-stack-create-service-principals#create-service-principal-for-ad-fs

I’ve automated the process by augmenting the script provided in the link above. It creates a self-signed cert, AD FS SPN and files required to connect. It needs to be run on a system that has access to the PEP and also has the Azure Stack PowerShell module installed.

The script includes the steps to export the PFX (so you can use it with PowerShell on other systems) and PEM files, plus it outputs all the relevant info you will need to connect via AZ CLI / PowerShell.


# Following code taken from https://github.com/mongodb/support-tools/blob/master/ssl-windows/Convert-PfxToPem.ps1

Add-Type @'
   using System;
   using System.Security.Cryptography;
   using System.Security.Cryptography.X509Certificates;
   using System.Collections.Generic;
   using System.Text;
   public class Cert_Utils
   {
      public const int Base64LineLength = 64;
      private static byte[] EncodeInteger(byte[] value)
      {
         var i = value;
         if (value.Length > 0 && value[0] > 0x7F)
         {
            i = new byte[value.Length + 1];
            i[0] = 0;
            Array.Copy(value, 0, i, 1, value.Length);
         }
         return EncodeData(0x02, i);
      }
      private static byte[] EncodeLength(int length)
      {
         if (length < 0x80)
            return new byte[1] { (byte)length };
         var temp = length;
         var bytesRequired = 0;
         while (temp > 0)
         {
            temp >>= 8;
            bytesRequired++;
         }
         var encodedLength = new byte[bytesRequired + 1];
         encodedLength[0] = (byte)(bytesRequired | 0x80);
         for (var i = bytesRequired - 1; i >= 0; i--)
            encodedLength[bytesRequired - i] = (byte)(length >> (8 * i) & 0xff);
         return encodedLength;
      }
      private static byte[] EncodeData(byte tag, byte[] data)
      {
         List<byte> result = new List<byte>();
         result.Add(tag);
         result.AddRange(EncodeLength(data.Length));
         result.AddRange(data);
         return result.ToArray();
      }
       
      public static string RsaPrivateKeyToPem(RSAParameters privateKey)
      {
         // Version: (INTEGER)0 - v1998
         var version = new byte[] { 0x02, 0x01, 0x00 };
         // OID: 1.2.840.113549.1.1.1 - with trailing null
         var encodedOID = new byte[] { 0x30, 0x0D, 0x06, 0x09, 0x2A, 0x86, 0x48, 0x86, 0xF7, 0x0D, 0x01, 0x01, 0x01, 0x05, 0x00 };
         List<byte> privateKeySeq = new List<byte>();
         privateKeySeq.AddRange(version);
         privateKeySeq.AddRange(EncodeInteger(privateKey.Modulus));
         privateKeySeq.AddRange(EncodeInteger(privateKey.Exponent));
         privateKeySeq.AddRange(EncodeInteger(privateKey.D));
         privateKeySeq.AddRange(EncodeInteger(privateKey.P));
         privateKeySeq.AddRange(EncodeInteger(privateKey.Q));
         privateKeySeq.AddRange(EncodeInteger(privateKey.DP));
         privateKeySeq.AddRange(EncodeInteger(privateKey.DQ));
         privateKeySeq.AddRange(EncodeInteger(privateKey.InverseQ));
         List<byte> privateKeyInfo = new List<byte>();
         privateKeyInfo.AddRange(version);
         privateKeyInfo.AddRange(encodedOID);
         privateKeyInfo.AddRange(EncodeData(0x04, EncodeData(0x30, privateKeySeq.ToArray())));
         StringBuilder output = new StringBuilder();
         var encodedPrivateKey = EncodeData(0x30, privateKeyInfo.ToArray());
         var base64Encoded = Convert.ToBase64String(encodedPrivateKey, 0, (int)encodedPrivateKey.Length);
         output.AppendLine("-----BEGIN PRIVATE KEY-----");
         for (var i = 0; i < base64Encoded.Length; i += Base64LineLength)
            output.AppendLine(base64Encoded.Substring(i, Math.Min(Base64LineLength, base64Encoded.Length - i)));
         output.Append("-----END PRIVATE KEY-----");
         return output.ToString();
      }
      public static string PfxCertificateToPem(X509Certificate2 certificate)
      {
         var certBase64 = Convert.ToBase64String(certificate.Export(X509ContentType.Cert));
         var builder = new StringBuilder();
         builder.AppendLine("-----BEGIN CERTIFICATE-----");
         for (var i = 0; i < certBase64.Length; i += Cert_Utils.Base64LineLength)
            builder.AppendLine(certBase64.Substring(i, Math.Min(Cert_Utils.Base64LineLength, certBase64.Length - i)));
         builder.Append("-----END CERTIFICATE-----");
         return builder.ToString();
      }
   }
'@

# Credential for accessing the ERCS PrivilegedEndpoint typically domain\cloudadmin 
$creds = Get-Credential 
$pepIP = "172.16.101.224"
$date = (get-date).ToString("yyMMddHHmm")
$appName = "appSPN"

$PEMFile = "c:\temp\$appName-$date.pem"
$PFXFile = "c:\temp\$appName-$date.pfx"

# Creating a PSSession to the ERCS PrivilegedEndpoint 
$session = New-PSSession -ComputerName $pepIP -ConfigurationName PrivilegedEndpoint -Credential $creds

 # This produces a self signed cert for testing purposes. It is preferred to use a managed certificate for this. 
$cert = New-SelfSignedCertificate -CertStoreLocation "cert:\CurrentUser\My" -Subject "CN=$appName" -KeySpec KeyExchange 
$ServicePrincipal = Invoke-Command -Session $session {New-GraphApplication -Name $args[0] -ClientCertificates $args[1]} -ArgumentList $appName,$cert
$AzureStackInfo = Invoke-Command -Session $session -ScriptBlock { get-azurestackstampinformation } 
$session|remove-pssession 

# For Azure Stack development kit, this value is set to https://management.local.azurestack.external. We will read this from the AzureStackStampInformation output of the ERCS VM. 
$ArmEndpoint = $AzureStackInfo.TenantExternalEndpoints.TenantResourceManager 
$AdminEndpoint = $AzureStackInfo.AdminExternalEndpoints.AdminResourceManager 
# For Azure Stack development kit, this value is set to https://graph.local.azurestack.external/. We will read this from the AzureStackStampInformation output of the ERCS VM. 
$GraphAudience = "https://graph." + $AzureStackInfo.ExternalDomainFQDN + "/" 
# TenantID for the stamp. We will read this from the AzureStackStampInformation output of the ERCS VM. 
$TenantID = $AzureStackInfo.AADTenantID 
# Register an AzureRM environment that targets your Azure Stack instance 
Add-AzureRMEnvironment -Name "azurestacktenant" -ArmEndpoint $ArmEndpoint
Add-AzureRMEnvironment -Name "azurestackadmin" -ArmEndpoint $AdminEndpoint

# Set the GraphEndpointResourceId value 
Set-AzureRmEnvironment -Name "azurestacktenant" -GraphAudience $GraphAudience -EnableAdfsAuthentication:$true
    
Add-AzureRmAccount -EnvironmentName "azurestacktenant" `
    -ServicePrincipal `
    -CertificateThumbprint $ServicePrincipal.Thumbprint `
    -ApplicationId $ServicePrincipal.ClientId `
    -TenantId $TenantID
 

# Output details required to pass to PowerShell or AZ CLI 
write-host "ClientID          : $($ServicePrincipal.ClientId)"
write-host "Cert Thumbprint   : $($ServicePrincipal.Thumbprint)"
write-host "Application Name  : $($ServicePrincipal.ApplicationName)"
write-host "TenantID          : $TenantID"
write-host "ARM EndPoint      : $ArmEndpoint"
write-host "Admin Endpoint    : $AdminEndpoint"
write-host ""
write-host "PEM Cert path     : $PEMFile"
write-host "PFX Cert Path     : $PFXFile"


# Export the cert to a PEM file for use with Azure CLI
$result = [Cert_Utils]::PfxCertificateToPem($cert)

$parameters = ([Security.Cryptography.RSACryptoServiceProvider] $cert.PrivateKey).ExportParameters($true)
$result += "`r`n" + [Cert_Utils]::RsaPrivateKeyToPem($parameters);

$result | Out-File -Encoding ASCII -ErrorAction Stop  $PEMFile


# Now Export the cert to PFX
$pw = Read-Host "Enter PFX Certificate Password" -AsSecureString
Export-PfxCertificate -cert $cert -FilePath $PFXFile -Password $pw

Here is an example of the output produced:

Next, connect to the Tenant Portal and give the Service Principal access to the subscription you want it to have access to:
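If you'd rather script that role assignment than click through the portal, something along these lines should do it when run as a user who owns the target subscription (the subscription ID is a placeholder, and I'm passing the SPN's ClientId as the service principal name):

# Run as a user with Owner rights on the target subscription - substitute the real subscription ID
New-AzureRmRoleAssignment -RoleDefinitionName "Contributor" `
    -ServicePrincipalName $ServicePrincipal.ClientId `
    -Scope "/subscriptions/<subscription id>"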

Once you’ve done the above, here are the high-level steps to use the Service Principal account with Azure CLI:

  • Trust the Azure Stack CA Root Certificate (if using Enterprise CA / ASDK) within AZ CLI (Python). This is a one-time operation per system you’re running AZ CLI on (see the sketch after this list).

  • Register Azure Stack environment (either tenant/user or admin)

  • Set the active cloud environment for CLI

  • Set the CLI to use Azure Stack compatible API version

  • Sign into the Azure Stack environment with service principal account
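For the first bullet, trusting the root CA boils down to appending it (in Base-64/PEM format) to the certifi bundle that the CLI's bundled Python uses. Here's a sketch for a default Windows install of AZ CLI; both paths are assumptions, so adjust them for your system:

# Path to the exported Azure Stack root CA certificate in Base-64 (PEM) format - placeholder path
$rootCaPem = "C:\temp\AzureStackRootCA.pem"

# Default certifi bundle location for the Windows AZ CLI installer - adjust if installed elsewhere
$caBundle = "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\Lib\site-packages\certifi\cacert.pem"

# Append the root CA so the CLI trusts the Azure Stack endpoints
Add-Content -Path $caBundle -Value (Get-Content -Path $rootCaPem -Raw)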

For reference, here are the official links with the information on how to do it. It works well, so just follow those:

https://docs.microsoft.com/en-us/azure/azure-stack/user/azure-stack-version-profiles-azurecli2


 
az cloud register -n AzureStackUser --endpoint-resource-manager 'https://management.<region>.<fqdn>' --suffix-storage-endpoint '<region>.<fqdn>' --suffix-keyvault-dns '.vault.<region>.<fqdn>'

az cloud register -n AzureStackAdmin --endpoint-resource-manager 'https://adminmanagement.<region>.<fqdn>' --suffix-storage-endpoint '<region>.<fqdn>' --suffix-keyvault-dns '.vault.<region>.<fqdn>'

az cloud set -n AzureStackUser

az cloud update --profile 2017-03-09-profile

az login --tenant <TenantID> --service-principal -u <ClientID> -p <path to PEM certificate file>
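Once the login succeeds, a quick sanity check is to list what the service principal can see after you've granted it access to a subscription:

az group list --output table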

AD FS, Azure Stack | Danny McDermott

AD FS identity integration on Azure Stack – filling in the gaps


One of the clients I’ve been engaged with uses AD FS as the identity provider for their Azure Stack integrated system. All well and good, as setting that up using the instructions provided here is *fairly* straightforward: https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-integrate-identity. Here’s a high-level view of the tasks that need to be performed:

On Azure Stack (by the operator via Privileged Endpoint PowerShell session):

  1. Set up Graph integration (configure it to point to the on-premises AD domain so user/group searches can be performed; used for IAM/RBAC)

  2. Set up AD FS integration (create the federation metadata file and use automation to configure the claims provider trust with the on-premises AD FS)

  3. Set the Service Admin Owner to a user in the on-premises AD domain (see the sketch after this list)
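As a rough sketch, the operator-side steps map onto these PEP cmdlets (the domain name, UPN and PEP address are placeholders; step 2 and its metadata handling are covered in the code further down this post):

# From a PEP session: New-PSSession -ComputerName <PEP IP> -ConfigurationName PrivilegedEndpoint -Credential <cloudadmin creds>

# 1. Point Graph at the on-premises AD forest (prompts for a standard domain user credential)
Register-DirectoryService -CustomADGlobalCatalog "contoso.local"

# 3. Hand the default provider subscription over to an on-premises user
Set-ServiceAdminOwner -ServiceAdminOwnerUpn "azurestackadmin@contoso.local"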

On the customer's AD FS server, by an admin with the correct permissions:

  1. Configure claims provider trust (either by helper script provided in Azure Stack tools, or manually)

  2. If performing manually:

    1. Enable Windows forms-based authentication

    2. Add the relying party trust

    3. Configure AD FS properties to ignore token bindings (if using IE / Edge browsers and AD FS is running on WS 2016)

    4. Set AD FS Relying party trust with Token lifetime of 1440

  3. If performing via the helper script:

    1. From the Azure Stack tools directory, navigate to \DatacenterIntegration\Identity and run setupadfs.ps1

The only gotcha I encountered with the instructions was that the certificate chain for AD FS was different from the one provisioned for the Azure Stack endpoints, so I tried to follow the instructions for this scenario provided in the link above, but they didn’t work.

It turns out that there was a problem with me running the provided PowerShell code:


[XML]$Metadata = Invoke-WebRequest -URI https://<AD FS FQDN>/federationmetadata/2007-06/federationmetadata.xml -UseBasicParsing

$Metadata.outerxml | out-file c:\metadata.xml

$federationMetadataFileContent = get-content c:\metadata.xml

$creds = Get-Credential

Enter-PSSession -ComputerName <PEP IP address> -ConfigurationName PrivilegedEndpoint -Credential $creds

Register-CustomAdfs -CustomAdfsName Contoso -CustomADFSFederationMetadataFileContent $using:federationMetadataFileContent

…and here’s one that is correctly configured:

The things to check are that the correct FQDN for the AD domain is provided and that the username/password combination is correct. Graph just needs a ‘normal’ user account with no special permissions. Make sure the password for the user is set to not expire!

You can re-run the command from the PEP without having to run Reset-DatacenterIntegrationConfiguration first; only run that cmdlet when AD FS integration is broken.

If you want to use AD Groups via Graph for RBAC control, keep in mind that they need to be Universal, otherwise they will not appear.
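If an existing group has the wrong scope, the ActiveDirectory module can change it; the group name here is made up:

# Convert a Global group to Universal so it shows up in Azure Stack user/group searches
Set-ADGroup -Identity "AzureStack-Contributors" -GroupScope Universal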

Hopefully this information will help some of you out.

In my next blog post, I’ll fill in the gaps on creating AD FS SPN’s for use by automation / Azure CLI on Azure Stack.
