Azure Automation RBAC permissions for a single runbook

Microsoft Azure

Azure Automation is great, but one major feature it has never had is the ability to give users access to only certain runbooks rather than to the whole automation account and every single runbook. Microsoft have now released the ability to assign permissions on individual runbooks to users or groups. At the time of writing this is only possible via PowerShell, but I am sure a GUI version will be made available over the next few months.

In order to get this working there are two RBAC roles you need to apply. The first gives the users the ability to run automation jobs and is set on the automation account (you only need to do this once); the second gives the users the ability to see the runbook (you need to apply this to every runbook you want the users to see). Also make sure that the users do not have read permission over the automation account’s resource group, otherwise they will be able to see and run all runbooks.

 

Automation Job Operator – this is required once so the users or groups can run the runbooks, and is set at the automation account level

Automation Runbook Operator – this is required for every runbook you want the user or group to be able to see
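
As a rough sketch of how the two assignments might be made with the AzureRM PowerShell module (the subscription, resource group, account, runbook and user names below are placeholders to replace with your own):

# Scope of the automation account itself
$accountScope = "/subscriptions/<subscription-id>/resourceGroups/MyAutomationRG/providers/Microsoft.Automation/automationAccounts/MyAutomationAccount"

# Automation Job Operator – assigned once, at the automation account level
New-AzureRmRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Automation Job Operator" -Scope $accountScope

# Automation Runbook Operator – assigned on each runbook the user should be able to see
New-AzureRmRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Automation Runbook Operator" -Scope "$accountScope/runbooks/MyRunbook"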

Once you do this the users will be able to see individual runbooks only. I recommend you create a shared dashboard to improve the user experience.

This is what it will look like from a user’s perspective:

SSH to Azure HDInsight Premium cluster nodes

Microsoft Azure

With an HDInsight Standard cluster any user can SSH to the cluster nodes. In comparison, HDInsight Premium cluster nodes by default restrict SSH access to two groups, sudo and root. My initial assumption was that Microsoft may have done this for security reasons, but then why allow the root user to log in over SSH – this is something that most sysadmins disable.

HDInsight Premium cluster nodes have the following line in the /etc/ssh/sshd_config:

AllowGroups  sudo root

This line states that members of the groups sudo and root (in the latter case that’s just the root user) are permitted to log in via SSH. If you would like to allow any user to log in via SSH, simply remove this line.

A better approach is to create a group in AD (and ensure this group is synchronised to the HDInsight cluster – something you must configure when you deploy the cluster) and use that instead.

There seems to be a limitation in that AllowGroups does not work with AD groups other than those shown via id <username>. I suspect this behaviour may be due to a limitation with winbind – when using SSSD and realmd to domain join a Linux VM, the full group membership is shown for a user. Furthermore, if your AD group names contain spaces this won’t work, because the space character is used to separate group names – you can partially work around this by using the asterisk character:

AllowGroups  sudo domain*users
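
As a rough sketch (the AD group name below is hypothetical), a script action or a manual edit along these lines would swap the default line for your own group and restart sshd:

# Check which groups are resolved for a user – only groups shown here work with AllowGroups
id clusteradmin@mydomain.com

# Replace the default AllowGroups line, using * in place of the space in the AD group name
sudo sed -i 's/^AllowGroups.*/AllowGroups  sudo hdinsight*admins/' /etc/ssh/sshd_config
sudo service ssh restart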

How to enable LZO compression on HDInsight

Microsoft Azure

This blog post explains how to enable LZO compression on an HDInsight cluster.

ARM Template

You will need to modify the ARM template: under the clusterDefinition, configurations section you should:

  • Add a core-site section specifying the compression codecs and the LZO codec class
  • Add a mapred-site section enabling map output compression and setting the map output compression codec (see the sketch below)
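
The relevant fragment of the ARM template might look something like this (the surrounding template is omitted and the codec list is only an example, but the property names are the standard Hadoop ones):

"clusterDefinition": {
  "configurations": {
    "core-site": {
      "io.compression.codecs": "org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec",
      "io.compression.codec.lzo.class": "com.hadoop.compression.lzo.LzoCodec"
    },
    "mapred-site": {
      "mapreduce.map.output.compress": "true",
      "mapreduce.map.output.compress.codec": "com.hadoop.compression.lzo.LzoCodec"
    }
  }
}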

Install compression libraries on cluster nodes

You will also need to install the compression libraries on the cluster nodes.

On the point of compression libraries, if you are using Snappy you will need to install the Snappy compression libraries on the nodes as well.
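
For example, something along these lines on each node – the exact package name depends on the Ubuntu release the cluster is running, so treat this as an assumption to verify:

# Install the Snappy runtime library (libsnappy1 on Ubuntu 14.04, libsnappy1v5 on 16.04)
sudo apt-get update
sudo apt-get install -y libsnappy1v5 || sudo apt-get install -y libsnappy1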

Displaying HDInsight cluster information at login time

Microsoft Azure

This blog post describes how to display HDInsight cluster information when a user logs in via SSH.

Linux HDInsight clusters run Ubuntu, which allows you to customise the Message of the Day (MOTD) by placing scripts under /etc/update-motd.d and making them executable.
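
For example, a script action could drop the script (linked below) into place along these lines – the 99- prefix is just a hypothetical name that controls the order in which the MOTD fragments run:

# Copy the cluster-info script into the MOTD directory and make it executable
sudo cp get-cluster-info.sh /etc/update-motd.d/99-hdinsight-info
sudo chmod +x /etc/update-motd.d/99-hdinsight-info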

The script has been published to GitHub https://github.com/vijayjt/AzureHDInsight/blob/master/script-actions/get-cluster-info.sh

Azure HDInsight Premium

Microsoft Azure

This blog post discusses HDInsight Premium, which is currently in preview. HDInsight Premium adds the ability to domain join HDInsight clusters and includes Apache Ranger, which can then be used to control access to databases/tables on HDInsight.

At the time of writing the documentation for HDInsight Premium is very poor and there are a number of limitations and issues, most of which are not documented, so I hope this post will help others.

Overview

HDInsight Premium allows you to join clusters to Azure AD Domain Services (AAD DS) domains. This then allows you to use accounts in your on-premise domain (provided you are synchronising users/groups via AAD Connect and have enabled password hash synchronisation) in HDInsight. Furthermore, you can then configure role based access control for Hive using Apache Ranger.

At the time of writing HDInsight Premium is in preview and has not GA’d – this means it is not backed by a full SLA. The Premium SKU is only available for “Hadoop” clusters – which do not come with Spark. However, HDInsight Premium with Spark clusters is available in private preview to a limited number of customers.

The domain-joining feature relies on Azure AD Domain Services (AAD DS) – which provisions a Microsoft managed read-only domain controller. Until recently it was only possible to deploy AAD DS to a classic VNET, which then required a VNET peering connection to the ARM VNET containing your HDInsight cluster (this obviously requires that your VNETs are in the same region).

AD Connect and Password Synchronisation

In order to use accounts in your on-premise domain to authenticate with HDInsight you need two things:
  • Firstly you must use Azure AD Connect to synchronise users and groups to Azure AD
  • Secondly you need to enable password synchronisation.
Since HDInsight Premium implements authentication using Kerberos, Azure AD Domain Services needs to hold the users’ passwords. This in turn requires that we synchronise password hashes from the on-premise domain to our Azure AD directory.
It should be noted that:
  • Password synchronisation will apply to all users that are being synchronised to Azure AD.
  • Synchronisation traffic uses HTTPS
  • When synchronizing passwords, the plain-text version of your password is not exposed to the password synchronization feature, to Azure AD, or any of the associated services.
  • The original hash is not transmitted to Azure AD. Instead, the SHA256 hash of the original MD5 hash is transmitted. As a result, if the hash stored in Azure AD is obtained, it cannot be used in an on-premises pass-the-hash attack.
Accounts are synchronised from the on-premise Active Directory to Azure AD, the AD objects are then synchronised to the Azure AD Domain Services instance. The synchronization process from Azure AD to Azure AD Domain Services is one-way/unidirectional in nature. Your managed domain is largely read-only except for any custom OUs you create. Therefore, you cannot make changes to user attributes, user passwords, or group memberships within the managed domain. As a result, there is no reverse synchronization of changes from your managed domain back to your Azure AD tenant.
  • On-Premise to Azure AD Synchronisation: this usually happens on an hourly basis unless you have a newer version of Azure AD Connect and have customised the synchronisation interval.
  • Azure AD to AAD DS: the documentation states this takes 20 minutes, but in my experience this usually takes closer to 1 hour.
What if you don’t want to synchronise the password hash (e.g. if your security department objects)? In this case you can use cloud only users and AD groups instead.

Azure AD Domain Services

Create an Azure AD Domain Services (AAD DS) instance from the Azure portal. Once the AAD DS instance is created you will receive two IP addresses, which are the domain controllers.
Note that it may take 10-20 minutes before the AAD DS IP addresses are available.

VNET DNS

The ARM VNET that contains the HDInsight cluster and the VNET that contains the AAD DS instance will need to be reconfigured to use the two IPs as DNS servers – this is required, otherwise cluster creation will fail.
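
A minimal sketch of doing this with the AzureRM module, assuming a VNET name, resource group and AAD DS IPs of your own:

$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "MyHDInsightRG" -Name "hdinsight-vnet"
# Replace the VNET DNS servers with the two AAD DS domain controller IPs
$vnet.DhcpOptions.DnsServers = @("10.1.0.4", "10.1.0.5")
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
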
When you create your Azure AD DS instance the actual domain used will match the domain that you have set as primary in Azure AD. If the primary domain is of the form: <MyAADTenant>.onmicrosoft.com – then this is the domain that will be used. As we will see later this has some implications in terms of LDAPS configuration.

Enabling SSL/TLS for AAD DS

HDInsight requires that you enable LDAPS for AAD DS. If you have a public domain configured as your primary in Azure AD then you can obtain a certificate from a public CA such as Symantec or DigiTrust. However, if your primary is the default Microsoft provided domain <MyAADTenant>.onmicrosoft.com, then since you don’t own onmicrosoft.com you will need to use a self-signed certificate and request an exception by raising a support case with Microsoft.
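
If you do need a self-signed certificate, a sketch along the lines below will produce a wildcard certificate for the managed domain and export it as a PFX ready for upload – replace the tenant name and password with your own:

$lifetime = (Get-Date).AddYears(1)
$cert = New-SelfSignedCertificate -DnsName "*.mytenant.onmicrosoft.com" -CertStoreLocation "cert:\LocalMachine\My" -KeyExportPolicy Exportable -Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" -NotAfter $lifetime
$pfxPassword = ConvertTo-SecureString -String "<pfx-password>" -Force -AsPlainText
Export-PfxCertificate -Cert "cert:\LocalMachine\My\$($cert.Thumbprint)" -FilePath C:\certificates\aadds-ldaps.pfx -Password $pfxPassword
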
Next, the SSL certificate needs to be uploaded in PFX format with the private key (you will also need the password) via the Azure portal, and Secure LDAP enabled.
Ensure that “Allow secure LDAP access over the internet” is disabled (which is the default).

Management Server

You cannot RDP to the two IP addresses or otherwise log on directly to the domain controllers. So how do you manage AAD DS?
The answer is to create a management Windows Server 2012 R2 VM within the VNET that contains the AAD DS instance, and then join the server to the domain using an account that is a member of the “AAD DC Administrators” AD group (which is created when the AAD DS instance is created).
Next install the RSAT and DNS management tools.
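
On the management VM this can be done with something like the following (feature names are to the best of my knowledge – verify them with Get-WindowsFeature):

# Install the AD DS and DNS management tools
Install-WindowsFeature -Name RSAT-ADDS-Tools, RSAT-AD-PowerShell, RSAT-DNS-Server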

OUs

Although the Microsoft documentation does not mention this, my recommendation is that you create an HDInsight OU and then OUs under that for each HDInsight cluster. This will make it easy to find the computer, account and SPN objects for each cluster.
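
For example, from the management server with the ActiveDirectory module (the names below are hypothetical):

New-ADOrganizationalUnit -Name "HDInsight" -Path "DC=mytenant,DC=onmicrosoft,DC=com"
New-ADOrganizationalUnit -Name "hdi-dev01" -Path "OU=HDInsight,DC=mytenant,DC=onmicrosoft,DC=com"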

Cluster Domain Join Account

When creating an HDInsight Premium cluster, you must specify a “domain account” which is used by the cluster to join the nodes to the AAD DS instance. The account will require the following permissions:
  • Permissions to join machines to the domain
  • Permissions to place the machines into the OU created for HDInsight clusters
  • Permissions to create service principals within the OU
  • Permissions to create reverse DNS entries
The Microsoft documentation appears to give an example of using an account that is a member of “AAD DC Administrators”.
However, given that the account used to domain join the cluster also becomes the cluster admin (e.g. in Ambari), I would strongly advise against doing this, as such an account would have full control over the AAD DS instance. Furthermore, if you have multiple clusters, e.g. dev, test and production or one per business group, they would all have admin access to AAD DS.
Therefore a separate account should be used for each cluster, since this prevents a compromise of one cluster being used to gain access to another. Using separate accounts also allows administration of clusters to be delegated to different teams.
The permissions can then be granted as follows:
  • Right-click the OU and select Delegate Control
  • Click Next
  • Click Add
  • Select the account to be used for domain joining and click OK
  • Click Next, select Delegate the following common tasks, and select Create, delete, and manage user accounts
  • Click Next then click Finish
  • From ADUC click View > Advanced Features
  • Right-click the OU and click Properties
  • Click the Security tab
  • Grant the domain join account the following permissions
    • Read
    • Write
    • Create all child objects
    • Delete all child objects
The username (sAMAccountName) must be 15 characters or less and all lowercase – otherwise cluster provisioning using this account will fail. This is not documented by Microsoft – I had to find this out the hard way by digging through log files and looking at how Microsoft have implemented domain joined clusters. Microsoft are doing this using winbind/samba, which is where this limitation comes from (that, and compatibility with Win2K). It’s not clear to me why Microsoft are not using SSSD and realmd instead.

DNS

A forward DNS zone is automatically created when Azure AD Domain Services is provisioned, however reverse zones are not. HDInsight Premium relies upon Kerberos for authentication, which requires that reverse DNS entries exist for the nodes in the cluster. As a result we must configure (via the management server) reverse DNS zones for all the subnets that will contain HDInsight Premium clusters and enable secure updates.
The reverse DNS zones need to be configured based on the /8, /16 or /24 boundaries (classless ranges are not supported directly).
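
From the management server this can be scripted with the DnsServer module, for example (the subnet is hypothetical):

# AD-integrated reverse zone for a /24 HDInsight subnet, allowing only secure dynamic updates
Add-DnsServerPrimaryZone -NetworkId "10.1.2.0/24" -ReplicationScope "Domain" -DynamicUpdate "Secure"
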
You might also want to consider adding conditional forwarding for your on-premise domains if you have connectivity to them.

Issues and Limitations

I’ve summarised below the main issues and limitations that I have come across (this is based on testing with HDInsight Premium Spark clusters):
  • HDInsight Premium is in public preview – which means that it is not subject to any SLAs
  • The synchronisation lag can be quite large – in theory this should be 1 hour 20 minutes from on-premise AD to AAD DS. However, in practice this is more like 2 hours. You need to keep this in mind when troubleshooting permission / access issues.
  • The documentation for HDInsight is pretty bare bones and contains mistakes/errors.
    • For example, this article https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-domain-joined-configure-use-powershell#run-the-powershell-script links to a repo in GitHub that is supposed to do the AAD DS configuration for you. However, apart from a README.md file it is an empty repo;
    • It does not explain the permissions required to domain join a cluster in enough detail e.g. on the OU, the exact DNS permissions, how to create reverse DNS zones (unless you are a DNS admin you won’t know this);
    • There are special requirements for the username of the domain join account but these are not documented anywhere.
  • If you delete a cluster it leaves behind the DNS entries (forward and reverse), computer accounts, as well as the user and service principal objects. This obviously clutters AAD DS but can also cause problems if you want to do CI/CD and the objects already exist.
  • The components that are available with HDInsight are also not well documented e.g.
    • Jupyter is currently not available – presumably because it’s not trivial to integrate with Kerberos. You can use Zeppelin though.
    • The Microsoft provided Hue script action will not work because it does not support Kerberos – a significant amount of effort would be required to add this. In light of this you would have to use Ambari Hive views.
    • Oozie is not available on the cluster either.
    • Applications are not supported – which means you cannot add edge nodes via an ARM template
  • Other things that are not documented include
    • If you are using Azure Data Factory (ADF) then Hive activities do not work.
    • Spark activities with ADF do work, but you have to disable CSRF protection in the livy.conf configuration file (you can do this via Ambari), which isn’t a good idea from a security standpoint.
  • Ranger policies are only provided for Hive/Spark – they do not cover HDFS. I believe this is because of the limitations with Azure Storage authorisation and authentication listed here https://hadoop.apache.org/docs/current3/hadoop-azure/index.html#Limitations

How to configure Apache Zeppelin to use LDAP Authentication on HDInsight

Microsoft Azure

Apache Zeppelin supports integration with Active Directory/LDAP via the Shiro pluggable authentication module.

Configuration files

  • zeppelin-config: set zeppelin.anonymous.allowed to false – this disables anonymous access to Zeppelin
  • zeppelin-env: the shiro_ini_content setting should be configured with the following:

[users]
# List of users with their password allowed to access Zeppelin.
# To use a different strategy (LDAP / Database / …) check the shiro doc at http://shiro.apache.org/configuration.html#Configuration-INI Sections
# LDAP configuration, for user Authentication, currently tested for single Realm
[main]
activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm
activeDirectoryRealm.systemUsername = CN=<service account tbc>,CN=Users,DC=my,DC=domain,DC=com
activeDirectoryRealm.systemPassword = <not the password>
#activeDirectoryRealm.hadoopSecurityCredentialPath = jceks://user/zeppelin/zeppelin.jceks
activeDirectoryRealm.searchBase = CN=Users,DC=my,DC=domain,DC=com
activeDirectoryRealm.url = ldap://<domain controller fqdn>:389
#activeDirectoryRealm.groupRolesMap = 'tbc'
#activeDirectoryRealm.authorizationCachingEnabled = true
shiro.loginUrl = /api/login
[urls]
# anon means the access is anonymous.
# authcBasic means Basic Auth Security
# To enforce security, comment the line below and uncomment the next one
/** = authc

The first few lines under [main] define the user account and password to use to connect to the domain controller.

We then define the search base path to use when looking up users/groups.

We then define the domain controller to connect to.

The last line enables authentication for all URLs.

You have two options for applying these configuration changes:
  • Through the Ambari web interface, or
  • At cluster deployment time via the HDInsight bootstrap configuration in an ARM template – although these configuration files are not officially listed in the Microsoft documentation, it is possible to configure them in the ARM template (in the clusterDefinition, configurations section).

The only problem is that you will likely not want to add the password to the ARM template, so you could either add the password via the Ambari web interface post deployment or inject it into the template at runtime.
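
As a rough sketch, the bootstrap fragment in the ARM template might look like the following – the configuration section names are the ones shown above, and the password would come from a secure template parameter rather than being hard-coded:

"clusterDefinition": {
  "configurations": {
    "zeppelin-config": {
      "zeppelin.anonymous.allowed": "false"
    },
    "zeppelin-env": {
      "shiro_ini_content": "<the shiro.ini content shown above, with systemPassword injected from a secure ARM template parameter>"
    }
  }
}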

How to create user specific databases on HDInsight Standard

Microsoft Azure

This post describes a way of creating user-specific databases on HDInsight Standard. It uses a similar technique to the one described in a previous post.

Overview

The script creates databases using beeline, taking a list of the database names from a CSV file. Since we are creating user-specific databases the database names should match the usernames.
  • First create a CSV file
    • The first line should contain the header name dbname
    • The subsequent lines should contain the database names, one per line
  • Store the CSV file on the default Azure Storage account
  • Attach the Storage account to the HDInsight cluster
  • Deploy the cluster with an ARM template that uses a custom script
  • The script
    • Determines the cluster name
    • Based on the cluster name it looks for a file named <clustername>-user-db-list.csv on the storage account
    • Copies the file to the node and iterates through the lines in the file to create the databases (a rough sketch of this step is shown below)
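
A minimal sketch of the database-creation step (the CSV path, credentials and connection string below are assumptions to adjust; the full script also handles locating the file on the storage account):

CSV_FILE="/tmp/${CLUSTERNAME}-user-db-list.csv"

# Skip the header line, strip Windows line endings and create one database per line
tail -n +2 "$CSV_FILE" | tr -d '\r' | while IFS=',' read -r dbname; do
  [ -z "$dbname" ] && continue
  beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' \
          -n "$ADMIN_USER" -p "$ADMIN_PASSWORD" \
          -e "CREATE DATABASE IF NOT EXISTS \`${dbname}\`;"
done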

Future improvements

If we wanted to create user-specific databases but use a different name for the database, the CSV and script could be modified to use two columns: the first the database name and the second the owner of the database.
The script assumes the storage account that contains the CSV file contains the string artifacts in its name; the script could and should be updated to take the storage account and container name as parameters.

Modifying the PAM Configuration on HDInsight Standard

Microsoft Azure

As mentioned in a previous blog post on HDInsight standard, Microsoft modify the PAM configuration (at least this is the case on HDInsight 3.5) such that when you create a user and try to set the password you are asked to set the password twice.

The gist below can be used to reset the PAM configuration. In the code below the various PAM configuration files have been gzipped and base64 encoded.

This technique of using gzip compressed and base64 encoded files is very useful when running script actions on HDInsight or even configuring VMs via custom script actions on Linux.
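
For reference, a blob like the ones below can be produced from an existing file with:

# Compress and base64-encode a PAM file on a single line, ready to paste into a script
gzip -c /etc/pam.d/common-auth | base64 -w 0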

The code creates files:

  • common-account
  • common-auth
  • common-password
  • common-session
  • common-session-noninteractive

echo 'H4sIACqlxVgAA5VUyW7bMBA9218xiA9JANtpeigKFDnkAwoEaG9FEYzFkT0xRapc7Khf30dKNrK0
h54Mk5w3b5nRYr6gG0nNTc/d2tw0vuu8W3HT+OwSrYhz2vmgvzmpdxQlJXXbSOM7Sp7YWhyHgzYS
5wugfd9ppFatEH7VNTYbMdQG35FPOwmn16vYS6OtNvRw/xWArtVtrYtLoLAzFHc+W1OuEqsjJqsx
kW8JMG+Idd5kVOKGExlp1QlAyrtGXApsCZIkRuq91Wag1gfKUahowKM4xCTdmkBeSjlnmwr95IHi
nR1w6IYT8yIbxSHScecBMrkViYOQPPcaIBiEq69xx8Yf19Wa+1jYw2m6XX9Y364+LdH9hVsdO96i
djOUR6sicZV7w0nK0cRrXTz2lHiPzubAMGcroysal6SVeJCSkLhiffVk8HnyOIMkuwEo1jcw5mSd
aE1nI/AGeIG4TfifXjiyQcV+WbOBfiC8ZQljRg2wykpTowGzMfepEVz+Jn8pvvp8XWMxgrRtLI7h
EQql+lp49BJWPTf70uBE+6pcXDwE7TgMFyPF6/kUyexHzDX3u4/k5PhY2iW/fwzyy9wZ787S7nTr
IPvnDKQes9PndfT0HuX2P1CO6jbqTAUadVzGqqLFymyggrQl5ychVFuIieemQM8aNclsVvHKBAKs
GAexoyExFZwjksN29B6v9SAIP+Xg6MA2S2mSqoka3SW2B2zZBmEzfKkbgmHhg1cTEelUiQ1HxiQh
II6njJ3bSMNlXRySLLf4DqCMJlswWKYEGrHtI69TOLzx4HNUfCOEm90I9pS7HpFCpHktFmszaUXQ
naZRbZm28xR0ZTb/PQb3xmgZOrbnSVgQtmBau1ezOm7D/A8+arm/AQUAAA==' | base64 -d | gunzip > common-account
echo 'H4sIAK2lxVgAA3VUTWvbQBA9279iiA9JwFaaQkuh5GBoC6UpBNqeSjGr3ZE9sbSr7odd/fu+Xckh
TdKTbO3M2/cxo8V8QVcc9VWvuspcadd1zq5UijtaUX6wjaJVFGcpcIxit4HGKoqOVNvitT+I5jBf
AOv7TgI10jLhKVa3ybChxruOHMD8qXoVetbSiKa79VcA2ka2pS8sgaKsobBzqTX5KCqxpKiVEMk1
BJinzDpnElpxpCIZbsQyUHKhRpVX7TMpescdU+M8pcCUxaA4DCFyh84LrrbVcjQm7JRxxyXdfljf
LekL+5q9C0vCWXVZERRzvlKlNmbNMCUjAi4z8MpIvhAMflj584w3652yErpQFffWIQtEFHRdvaqu
V2+XAHpkaKes2sLPeshFJaZV6o2KnF9NLKocg6Oo9vDJHBT82/JonIC3FJqec4hsczrFtcGlKYbk
0WcHoLROg/jJXJYSYM1wDXieVBPxPz7SX6NjvyzxwQQgPGUJd0YNmIOWdTEBzMbRmC6Cp9/4heaL
d5clMMMYiBaOoQRtIOuL39SzX/VK7zP8ifRFPji789IpP5yNBC/nGXX2MySNsQ03r0/8b2Rroe3X
DDdvEvKqgiOb2tbtN4E1jHnSef1i51FsLdbk5r2v32zKNpVfWisM3iYOPd98+nz7kcp/s2ndFiMe
/bBpxIe46VUIk7rzULQ12LQa2kgasm6SR4UHmzDS8vw7SZDIs1nhYdgOIJGdhP7RoxAzyBFRYqN6
h2o5MKYhJm/poNrE+YZYfJVgz7FxFg63npUZ3pelwvSogxMTkPHUia8CQif2HvncJ+xpzVrlPbCI
Np/i24E2mozDpJmccMAXYuR1ykvVDnyOgu8Kw5sR7D51PVJ2yZpHSj2bSSiC7ySOUvPsPUxFlyf1
/2OxNqftfJiMBWEnpiX8Z3LH3Zj/BcXPIoQwBQAA' | base64 -d | gunzip > common-auth
echo 'H4sIAPSlxVgAA3VU224TSRB9tr+i5DwEpHiCg2BXu/KDiSIhIS7agHhYIW/PdI1duKd76IuD/35P
zy0EyNNcuuvUqVOn6mx+Rpccq8tWNYW+rFzTOLtsVQh3zmta0vi69GxUZE2N08lwoP4qRUfKGArs
j1JxmJ8B8ONeAtVimPAUW5mkEVh715CLe/bj7WVouZJaKvqweQtAW8uuiwsXQFFWU9i7ZHQ+ikos
KTISIrl6YhH3KpLmWizjnScemVfJQEkBqfFR7ZXdcf70U02hIHDlHK+SiZktZNgmK9+LOWJvvrdG
WRUFdSLneEauzb/CX0OxTIuwVy9WV4vhhNiqMrMLymTJbl9vcPog7WeJKC2CM5L2Ubnm+JDNp5yt
8qc2IuSDF+cJbWAVgN0XhvtDzkWjXyyKe0quDFXyPHHyjGI6ZXIMRP3v/avb60//3GyvX99cv7nd
3rzbvDofb4sFjnE7sQX4hB73lnuRJyEaZVsFVWsQ6zs7SNPJtwmDarQqnhWr5cuLvtzRGYhGsKby
lC8tVYr7ZWo1XJZ/DTIUuR5HUR2YlD4qGAEJgZuhLkg6nTxnN7LVnSRwxMmlwU+QAE46ddVUykzG
Yen4lgzuwPOk6ojvHxtQIuJw0fkQYgPhZ5awVV8DbGe4Gn3SKzEkQuMg26/BT/582smmGc42vWAI
A1k/iMwec1gdMvxI+kk+WMAIjfKnRU/w6Xz01czztyRBIs9ms9yjyiPeSFkEB4miP62fk5baHfKz
8qwlrpcrMuPrFTViDdv16hm5+/M0vd6n+jekCmYK66tRrrXsLKT8MptGCFkHE2b9trn46A4EHtta
fIjbjEb96IBfw03Jfv3Hb5KsfpvkTmwpVuc8j+MPsp73vq+xq0qIQlKTdYOu1KVhHR5XUrM9IU9u
I8QfVk3MQHfwEfZS63Bbjpx1Tt7SUZnEOUvsmirBnmNvYUkp41np09/dsMO66uhE52keIsXu4Dhi
72GOrwnbruRKoT7wRQBOA0eE0aANbK6zvQL2bM9rNIsqHfjcCbYzq2rfg31NTQuLuWT1T9Wix0Ox
cF4jsS83m3+yZZNH5XFfbjRcghlQZrLmGVahHrbAg9Hph3P+Pyb57/5/BgAA' | base64 -d | gunzip > common-password
echo 'H4sIADemxVgAA3VUy27bMBA8x1+xsA95wFKaS1Gg6CEfUCBAcyuKghJX9sYUqfLhVH/fIfVImqYH
wQa5Ozs7O8vdZke3HNvbQfW1vm1d3ztbBQ5BnKWK5n+VZ6Mia+qdToYDTYEUHSljEOXP0nLY7AD3
eJRAnRgm/IptTdJI7LzrycUj+yW6CgO30klLD/dfAWg7OZS8sAeKsprC0SWj81VUYkmRkRDJdSuL
eFSRNHdimaIKp5AJNUwD+875nnUGighjClH5WFAZHzDmzkL+f6PseEMnwc1VA5KgHdmrNsqZcw5g
LER4dXpdl17vSzq0o7v6Q31Xfdyj2Kv2e2XVAd03Yw6qVIrHKg0aUuYjMFfJxDqL5tDACcX0WaHb
A2fcDLUniRnJc5Yc3IFW2h5dmkVLPpMcgWJcq8yqDkuRu2FoATxPqgP/osZcmRpknPZFlhQYCG9Z
Qs+pB+hlGK1j6GA2DXIuVBN943eSrz5dE0qjGMZnQr1BCNJAFl9mgTFVg2pPGX4hfZUvtg9eeuXH
7UTwejMP6+L7TPzL3Y+LiwvU+wmMXmId3Ax+GQp0B1s2gCbpMLoZnUJqW2YdVjzPv5IEiTyjabbj
hDWAAS/OAc4zxIQDB4fo7ArPMXlLZ2US5yKxdCbBXsKhsKMynpUePwOqOEKdnegAledMsQfITuw9
FHpK8HXDrcIQQBcJuA0ckTZxDnnjdNY4YKMmXotiqnHg8yzYQ1btcQJ7Sv0AnV2Cef9q1rN+R7nH
PA2cpR5btKhVIMGiVJtuVNs6rzM7GKOIM4bI/eonsct7YtxBbI3zsLgrL36MyIWngzPngpLretcY
YMBWWroOMtoIlKngkjLpv97jaWBjAKQlDEaNs0k9Tjz3LvLLeoNMXjA4lLaIemlzWy/KkBuysZWZ
lSnXkzCZ++rZPu/R/017r7VMOP/4dpW+wFv5DXRab9+Ufxbb4Cl6PyQHTKJPAbvlPXu7udPbsLa4
MCj9G8HkQ+7wD73dztYABgAA' | base64 -d | gunzip > common-session
echo 'H4sIAH6mxVgAA3VUy27cMAw8b76C2BySAGunuRQFih7yAQUCNLeiKGSJ3mVWllw9NvXfdyTbmzRN
D4YBkxwOh0NfXlzSLSd9O6qhNbfaD4N3TeQYBW/nnbjEQekkJ6aG1kBgqxIbGrzJliNA5kpKnpS1
hMLmdWXkcBJdEpH6eJBIvVgmvMVpmw2g+uAH8unAYc1u4shaetH0cP8VDVwv+1oXd0BRzlA8+GxN
CSUljhRZiYl8v/KidFCJDPfimJKKx1gIdkwjh96HgU0BSkgDxaRCqqiMBxjvz1Hnj20d5D6WPChH
d+2H9q75uAPSq9kG5dQeo3VTSWpUTocmjwbKlU+gpbJNbVHEg92RSZmTwih7LrgFakeSClLgoi+I
Aa3ONPm8KJID6twEFOu1sufRWaqWHWNQ4AVSPeaooy6dqUPFcVdnzpGB8JYlxJpnwOCWIQE2DGbz
lpZGLdE3fqf4+tMNoTWaYTcWiiEFZSCLp7DADppR6WOBX0lfl8D2IcigwrSdCd5cLKpvvi/Ev9z9
2Gw26PcTGIOkNvoF/CpW6B676wBN0mOFCzrFrDWziWe8wL+yREm8oBl204w1ggGvtgDOM8SEvUaP
7OKDwCkHRydlM5cmqU4m0V3BfvCasoGVmT4DqjpCnbyYCJWXSnF7yE4cAhR6yjBtx1phCaCLAkQj
J5TNnGPErk3ROOJcZl6rYqrz4PMsMCsrfZjBnvIwQmefnfl72MDmHeUeyzbwLQ84kVWtCgkWtdsc
UVr7YAo7GKOKM8XEw9lP4ta/ifV7cS2+x9Vd5apTQi08Hb09VZTSN/jOAgO2MtL3kNEloMwN15JZ
/3Mcd8/WAshIHK2aFpMGfAk8+PRypwQy5cDgUNoi62XMbbsqQ34sxlZ2UaaGZ2EK97Nnh3JH/zft
vTEy4/zj27P0Fd7Jb6DTOfqm/bO4TpypKZfrv+jtYc6nf/EH3Yf0SL4FAAA=' | base64 -d | gunzip > common-session-noninteractive

These should replace the corresponding files in /etc/pam.d/.

HDInsight Creating Local OS and Ambari users via the REST API

Microsoft Azure

HDInsight is a semi-managed Hadoop cluster offering on the Microsoft Azure cloud. Although the Standard version isn’t geared towards multiple users from a security perspective, I recently had to figure out a way to create local users across the cluster at cluster build time. This blog post describes one way you could do this.

Overview

The way we will create users on the cluster at boot time is:

  • Create a CSV file containing usernames, user and group IDs, shell, etc.
  • Store the CSV file on the default Azure Storage account
  • Attach the Storage account to the HDInsight cluster
  • Deploy the cluster with an ARM template that uses a custom script which creates the user accounts
  • The script
    • Determines the cluster name
    • Based on the cluster name it looks for a file named <clustername>-user-list.csv on the storage account
    • Copies the file to the node and iterates through the lines in the file and:
      • Creates local OS users;
      • Create local OS groups for the users;
      • If the user is an admin user it adds them to sudoers;
      • Creates user accounts in Ambari – note that in a HDInsight standard cluster the user accounts in Ambari are separate to the OS level accounts;
      • Creates Pig and Oozie views if they do not already exist;
      • Adds the user to either:
        • the clusteruser group, which will have read-only access to cluster stats/configuration and access to Hive views etc. in Ambari, or
        • the clusteradministrator group, which will have full access to manage everything through Ambari
      • Grants access to various Ambari views to the aforementioned groups

The CSV file containing the user details includes the uid; we do this to ensure the users have the same uid across all nodes in the cluster.
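
A rough sketch of the per-user creation step with fixed IDs (the values are hypothetical; the full script reads them from the CSV):

# Create the group and user with explicit IDs so they are identical on every node
sudo groupadd -g 15000 clusterusers
sudo useradd -u 15001 -g 15000 -s /bin/bash -m alice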

The creation of Ambari users, groups, checking membership of the Ambari groups and creating views makes heavy use of the Ambari REST API.

At the moment the script lists all the storage accounts associated with the cluster and finds one that contains the string artifacts and looks on this storage account for the user list CSV file. The script needs to be improved by making the storage account and container names parameters.

I have uploaded the script to github here https://github.com/vijayjt/AzureHDInsight/blob/master/script-actions/create-local-users.sh

Ambari REST API

As mentioned the script makes heavy use of the Ambari REST API.

  • Checking a user exists
    • We do this by calling the REST API endpoint http://${ACTIVEAMBARIHOST}:8080/api/v1/users
    • Then iterating through the users returned to see if the username is in this list
    • Lines 2 – 16 in the gist shown below provides some example code for checking if a user exists in Ambari (this is an extract from the full script mentioned above).
  • Check if a user is a member of an Ambari Group
    • We call the REST API endpoint http://${ACTIVEAMBARIHOST}:8080/api/v1/groups/${GROUP_TO_CHECK}/members
    • To obtain a list of users that are a member of the specified group
    • Then we iterate through the list to see if the user is in the list
  • Adding a user to Ambari
    • As shown in line 88 of the gist, we make a post request to the endpoint http://${ACTIVEAMBARIHOST}:8080/api/v1/users with a JSON body that contains the username and password
  • Adding a group to Ambari
    • As shown in line 91 of the gist, we make a post request to the endpoint http://${ACTIVEAMBARIHOST}:8080/api/v1/groups with a JSON body that contains the group name
  • Adding a user to a group in Ambari
    • As shown in line 94 of the gist we make a post request to the endpoint http://${ACTIVEAMBARIHOST}:8080/api/v1/groups/${ambarigroup}/members with a JSON body that contains the username and group name

function check_ambari_user_exists()
{
USER_TO_CHECK=$1
# Get list of users in Ambari
USER_LIST=$(curl -u "$USERID:$PASSWD" -sS -G "http://${ACTIVEAMBARIHOST}:8080/api/v1/users" | grep 'user_name' | cut -d":" -f2 | tr -d '"','',' ' )
for User in $( echo "$USER_LIST" | tr '\r' ' '); do
echo "-${User}-"
if [ "$User" == "$USER_TO_CHECK" ];then
echo 0
return
fi
done
# the user does not exist
echo 1
}
#end function check_ambari_user_exists
function check_ambari_group_exists()
{
GROUP_TO_CHECK=$1
# store the whole response with the status at the end
HTTP_RESPONSE=$(curl -u "$USERID:$PASSWD" --silent --write-out "HTTPSTATUS:%{http_code}" -G "http://${ACTIVEAMBARIHOST}:8080/api/v1/groups")
# extract the status
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
#http_response=$(curl -u "$USERID:$PASSWD" -sS -G "http://${ACTIVEAMBARIHOST}:8080/api/v1/groups" | grep 'group_name' | cut -d":" -f2 | tr -d '"','',' ' )
if [[ "$HTTP_STATUS" -ge 200 && "$HTTP_STATUS" -le 299 ]]; then
# Get list of groups in Ambari
IFS=$'\n'
GROUP_LIST=$(curl -u "$USERID:$PASSWD" -sS -G "http://${ACTIVEAMBARIHOST}:8080/api/v1/groups" | grep 'group_name' | cut -d":" -f2 | tr -d '"','',' ' )
unset IFS
for GROUP in "$GROUP_LIST"; do
#echo "$GROUP"
if [ "$GROUP" == "$GROUP_TO_CHECK" ];then
echo 0
return
fi
done
else
echo 1
return
fi
# the group does not exist
echo 1
}
#end function check_ambari_group_exists
function check_user_is_member_of_ambari_group()
{
USER_TO_CHECK=$1
GROUP_TO_CHECK=$2
# store the whole response with the status at the end
HTTP_RESPONSE=$(curl -u "$USERID:$PASSWD" --silent --write-out "HTTPSTATUS:%{http_code}" -G "http://${ACTIVEAMBARIHOST}:8080/api/v1/groups/${GROUP_TO_CHECK}/members")
# extract the status
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
#http_response=$(curl -u "$USERID:$PASSWD" -sS -G "http://${ACTIVEAMBARIHOST}:8080/api/v1/groups/${GROUP_TO_CHECK}/members" | grep 'user_name' | cut -d":" -f2 | tr -d '"','',' ' )
if [[ "$HTTP_STATUS" -ge 200 && "$HTTP_STATUS" -le 299 ]]; then
# Get members of the group
IFS=$'\n'
USER_LIST=$(curl -u "$USERID:$PASSWD" -sS -G "http://${ACTIVEAMBARIHOST}:8080/api/v1/groups/${GROUP_TO_CHECK}/members" | grep 'user_name' | cut -d":" -f2 | tr -d '"','',' ' )
unset IFS
for User in "$USER_LIST"; do
#echo "$User"
if [ "$User" == "$USER_TO_CHECK" ];then
echo 0
return
fi
done
else
echo 1
return
fi
# the user is not a member of the specified group
echo 1
}
#end function check_user_is_member_of_ambari_group
# Creating a user in Ambari
curl -iv --write-out "HTTPSTATUS:%{http_code}" --output /dev/null --silent -u "$USERID:$PASSWD" -H "X-Requested-By: ambari" -X POST -d "{\"Users/user_name\":\"${username}\",\"Users/password\":\"${userpassword}\",\"Users/active\":\"true\",\"Users/admin\":\"false\"}" "http://${ACTIVEAMBARIHOST}:8080/api/v1/users"
# Create a group in Ambari
curl -iv --write-out "HTTPSTATUS:%{http_code}" --output /dev/null --silent -u "$USERID:$PASSWD" -H "X-Requested-By: ambari" -X POST -d "{\"Groups/group_name\":\"${ambarigroup}\"}" "http://${ACTIVEAMBARIHOST}:8080/api/v1/groups"
# Add a user to a group in Ambari
curl -iv --write-out "HTTPSTATUS:%{http_code}" --output /dev/null --silent -u "$USERID:$PASSWD" -H "X-Requested-By: ambari" -X POST -d "[{\"MemberInfo/user_name\":\"${username}\", \"MemberInfo/group_name\":\"${ambarigroup}\"}]" "http://${ACTIVEAMBARIHOST}:8080/api/v1/groups/${ambarigroup}/members"

 

Important considerations

PAM Configuration

It should be noted that Microsoft do modify the PAM configuration files, which can lead to an issue where the passwd <username> command prompts you twice, because they have added Kerberos related configuration items to PAM. I suspect they made this change for Azure HDInsight Premium, which supports domain joined clusters, but accidentally included it on Standard clusters as well. I will write a separate post on how to change the PAM configuration.

 

Force password change

When the account is created you should force the user to change their password shortly afterwards.
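
For example, expiring the password immediately forces the user to change it at their next login:

sudo chage -d 0 alice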

Security of user list file on Azure storage

The user passwords are stored in plaintext on the Azure storage account. This is one of the reasons we delete the file after cluster provisioning. If you were to leave it on the storage account, any user on the cluster would be able to view it using hdfs commands.

If you persisted the script action and you later scale the cluster from the Azure portal to add additional nodes, you will need to reinstate the file, otherwise the local OS users will not be created on the new nodes.
It goes without saying but you should ensure storage account encryption is enabled and the container is private.

As an alternative you could look to distribute SSH keys instead of using password authentication.

What other options are there for provisioning users on HDInsight Standard?

If you are using a configuration management tool such as Chef, Ansible or Puppet, you could create the accounts via one of these tools and also use it to distribute SSH keys so that no passwords are involved.

If you do take this approach you need to be careful that there are no script actions that rely on the Chef/Ansible code executing first to create the users, otherwise the script actions may fail or you may end up with a race condition.

How to create an Azure AD Application and Service Principal that uses certificate authentication

Microsoft Azure

Creating Azure AD Applications and Service Principals that use certificate based authentication is not quite as straightforward as you might expect.

The following article provides the instructions on how to do this https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authenticate-service-principal#create-service-principal-with-self-signed-certificate

However, what if you want to use multiple certificates via the KeyCredentials parameter of New-AzureRmADApplication? You might guess that you could create an array of objects of type

Microsoft.Azure.Commands.Resources.Models.ActiveDirectory.PSADKeyCredential

The problem is if you have a version of the Azure PowerShell module newer than 4.2.1, then the object will not have a type property as per this issue: https://github.com/Azure/azure-powershell/issues/4491

Assuming you don’t want to downgrade to version 4.2.1, how do you achieve this? The issue mentions that the correct way of doing this is to use the New-AzureRmADAppCredential cmdlet, as shown in the example code below:

Login-AzureRmAccount
# Create the self signed cert
mkdir c:\certificates
$currentDate = Get-Date
$endDate = $currentDate.AddYears(1)
$notAfter = $endDate.AddYears(1)
$pwdplaintext = "P@ssW0rd1"
$thumb = (New-SelfSignedCertificate -CertStoreLocation cert:\localmachine\my -DnsName AadAppCertTest1 -KeyExportPolicy Exportable -Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" -NotAfter $notAfter).Thumbprint
$pwd = ConvertTo-SecureString -String $pwdplaintext -Force -AsPlainText
Export-PfxCertificate -cert "cert:\localmachine\my\$thumb" -FilePath c:\certificates\AadAppCertTest1.pfx -Password $pwd
$thumb = (New-SelfSignedCertificate -CertStoreLocation cert:\localmachine\my -DnsName AadAppCertTest2 -KeyExportPolicy Exportable -Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" -NotAfter $notAfter).Thumbprint
$pwd = ConvertTo-SecureString -String $pwdplaintext -Force -AsPlainText
Export-PfxCertificate -cert "cert:\localmachine\my\$thumb" -FilePath c:\certificates\AadAppCertTest2.pfx -Password $pwd
# Load the certificate
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate("C:\certificates\AadAppCertTest1.pfx", $pwdplaintext)
$certValue = [System.Convert]::ToBase64String($cert.GetRawCertData())
# Create the Azure AD Application using the first certificate
$adapp = New-AzureRmADApplication -DisplayName "TestAzureAdApp01" -HomePage "http://TestAzureAdApp01.azurewebsites.net/" -IdentifierUris "http://TestAzureAdApp01.azurewebsites.net/" -CertValue $certValue -StartDate (Get-Date $cert.GetEffectiveDateString()) -EndDate $notAfter
# Next add the second certificate using the New-AzureRmAdAppCredential
$cert2 = New-Object System.Security.Cryptography.X509Certificates.X509Certificate("C:\certificates\AadAppCertTest2.pfx", $pwdplaintext)
$certValue2 = [System.Convert]::ToBase64String($cert2.GetRawCertData())
New-AzureRmADAppCredential -ApplicationId $adapp.ApplicationId -CertValue $certValue2
# Running Get-AzureRmADApplication and piping it to Get-AzureRmADAppCredential should show the two keys
Get-AzureRmADApplication -ApplicationId $adapp.ApplicationId | Get-AzureRmADAppCredential
# Finally create the Azure AD Service Principal
$sp = New-AzureRmADServicePrincipal -ApplicationId $adapp.ApplicationId