Enable ADFS OAUTH2 for Mattermost 3.0


Since Mattermost released a new version with a lot of bug fixes, features and security enhancements, I decided to release a second version of Mattermost with ADFS integration. This is a modified version of the May 17, 2016 stable Mattermost release, v3.0.2.


The advantages of using ADFS over other methods:

  • True SSO
  • Much more secure than LDAP or GitLab with LDAP
  • Proven for Enterprise

We have also made sure that the following features are available:

  • Other domains and forests can also use Mattermost if invited and a trust exists
  • Authentication is based on the AD SID, so if a user is deleted or leaves the company, a new user with the same domain username will get a new account with a different username. This is very important, as it ensures that users are unique; even if you have two users with the same username in different domains, they will each get their own unique username and not affect one another.
  • Please note that emails do need to be unique. If a user tries to register with an email which is already in the system, they will get an error informing them that a user already exists.
  • Visual error message if user is denied access from ADFS (Added on 21/06/2016)

Here is the guide on where to get it and how to configure it:

(Screenshots: ADFS and Mattermost configuration, and the error page Mattermost shows when ADFS denies access.)

You will first need to download/compile and install the new version, which can be found below.

You can download the compiled version from https://github.com/lubenk/platform/releases or here:

Linux: https://gi-architects.co.uk/wp-content/uploads/2016/05/mattermost-team-linux-amd64.tar.gz
OSX: https://gi-architects.co.uk/wp-content/uploads/2016/05/mattermost-team-osx-amd64.tar.gz
Windows: https://gi-architects.co.uk/wp-content/uploads/2016/05/mattermost-team-windows-amd64.tar.gz

You can get the code from: https://github.com/lubenk/platform/tree/ADFS-3.0.2


Now that you have a working copy, it's time to configure ADFS 3.0 for OAuth 2.0. Please use the instructions at https://gi-architects.co.uk/2016/04/setup-oauth2-on-adfs-3-0/

with the following additional notes:

ClientID: just generate one at https://www.guidgenerator.com/online-guid-generator.aspx (please make sure this GUID is not exactly 26 characters long, i.e. either more or fewer than 26 characters).
Redirect URI: https://mattermost.local/signup/adfs/complete (where mattermost.local is the DNS address of your Mattermost app)
Relying party identifier: you can just use the DNS address of your Mattermost app
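If you prefer not to use the online generator, the ClientID can be generated on the command line. A minimal sketch, assuming a Linux box with `uuidgen` (from util-linux) or the kernel's random UUID file available:

```shell
# Generate a GUID to use as the ADFS ClientID.
# Fall back to the kernel's UUID source if uuidgen is not installed.
CLIENT_ID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)

# A standard GUID is 36 characters (32 hex digits plus 4 hyphens),
# so it is never the 26-character length that must be avoided.
echo "ClientID: ${CLIENT_ID} (length: ${#CLIENT_ID})"
```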

Set up the following claims. Please make sure the claims are exact; the rule names can be anything:






Once you have set up ADFS, you need to configure Mattermost. You can do this either via config.json or via the admin interface, as shown below:


Please make sure you copy the public key of the ADFS root CA of your Service Communications certificate, in PEM format (the format that contains -----BEGIN CERTIFICATE-----), into /usr/local/share/ca-certificates, name it with a .crt file extension, then run "sudo update-ca-certificates".
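The steps above boil down to two commands. A sketch, assuming Ubuntu/Debian and that you have exported the root CA public key to a file (the filename "adfs-root-ca.pem" is a placeholder for wherever you saved it):

```shell
# Copy the ADFS root CA public key (PEM format) into the local trust store;
# update-ca-certificates only picks up files with a .crt extension here.
sudo cp adfs-root-ca.pem /usr/local/share/ca-certificates/adfs-root-ca.crt

# Rebuild the system CA bundle so the Mattermost server trusts ADFS.
sudo update-ca-certificates
```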

You also need the public key of the signing certificate, in PEM format, somewhere on the server; you will reference its path in the settings.

And that is it: you should now have a working version of Mattermost with ADFS.


Additional Update (21/06/2016)

I have added error handling for the case where you deny access from the ADFS side, so Mattermost now displays a nice message, as shown above.

If you want to configure ADFS to deny access for users based on group, email or other variables, you can easily do so:

Go into your Mattermost relying party and edit the claims. Once in, go to Issuance Authorization Rules and delete the default rule, which permits access for everyone.


Once deleted, add a new rule based on "Permit or Deny Users Based on an Incoming Claim".


Then choose the type of filtering; for example, I filtered based on group membership and chose to permit.


You can create multiple rules as well as deny rules; just make sure you order them correctly.

Azure Tier Comparison (Benchmark)


So I had some free time and an itch to understand how the different tiers in Azure really differ from each other. It's important to understand that this is not a comparison against other cloud providers, it's not designed to show the maximum potential of Azure, and the results are valid as of 05/08/2016 (they will most likely be outdated in 3 months).

To elaborate on the three points above:

  • I wanted to see the differences in performance between the different VM sizes which Azure offers. I don't want to compare them with other cloud providers, as there are too many variables to consider and it would be like comparing apples to oranges.
  • The tests I ran were identical for each VM, so as to show the true differences between them. While I tried to configure the tests to get maximum performance, I can't guarantee that my results are the absolute maximum you can achieve, nor are they the guaranteed performance you will get when you spin up a VM, especially when it comes to the network results.
  • As with anything, the results will probably be outdated in the next 3 months, as Azure replaces old hardware and migrates servers to new, more powerful ones; a prime example would be the network for the A series, which can now achieve a lot more than the 5 – 400 Mbit/s it was initially configured to have.

While the CPU and RAM tests are quite accurate, it was very tricky to get consistent network results due to the network's complexity; I guess that's why Microsoft does not publish official results for their network, and you should in no case take these results as concrete. If you think about it, you have different networks over different racks, over different hardware, with Network Security Group rules and so on. In my case, even creating different machines of the same size showed some slight variation, as a machine most likely landed on the other side of the data center. As such, I have given averages from running a lot of tests.

The way I ran the tests was to create a G5 machine, install iperf3 and run it as a server. Then I spun up a VM, starting with A0, in the same network, subnet and NSG group. I installed iperf3 and ran it as a client against the G5 machine. I ran a few tests (to get the average), after which I would resize the instance to the next size up. One thing to stress: this is the network speed of the internal network between virtual machines.

I did not post or test the storage I/O extensively, as I found it to be very close to what Microsoft publishes (500 IOPS per standard disk).
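For anyone who wants to reproduce the disk check, the Diskspd utility mentioned in the tools list below takes its parameters on the command line. The flags here are illustrative of a typical random I/O run, not the exact invocation used in this benchmark, and the file path is a placeholder:

```shell
# Diskspd (Windows): 30-second test against a 1 GiB test file,
# 4 threads, 8 outstanding I/Os per thread, random access,
# 30% writes, 4 KiB block size -- parameters are illustrative.
diskspd.exe -c1G -d30 -t4 -o8 -r -w30 -b4K C:\test\testfile.dat
```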

Iperf3 settings:

  • Number of threads: 30
  • Packet size: 2048 KB
  • Test duration: 30 seconds
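The settings above map onto iperf3 invocations roughly as follows (10.0.0.4 is a placeholder for the G5 machine's internal IP):

```shell
# On the G5 machine: run iperf3 as a server.
iperf3 -s

# On the VM under test: 30 parallel streams (-P), 30-second run (-t),
# 2048 KB buffer length (-l). Note: recent iperf3 builds cap -l at 1M
# and will reject 2048K; use -l 1M there instead.
iperf3 -c 10.0.0.4 -P 30 -t 30 -l 2048K
```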


Tools:

  • CPU, RAM tests: Novabench
  • Disk tests: Diskspd utility (superseding SQLIO)
  • Network: iperf3


Environment:

  • VM OS: Windows Server 2012 R2
  • Location: Azure West Europe
  • Type: ARM
  • Network: same virtual network, subnet and NSG
| VM Size | CPU Floating Point (OPS/s) | CPU Integer (OPS/s) | CPU MD5 Hashing (generated/s) | RAM Speed (MB/s) | Iperf Internal Network (Mbit/s) | Latency |
|---|---|---|---|---|---|---|
| A0 | 7,342,245 | 17,403,189 | 195,629 | 2,158 | 450 | 2ms |
| A1 | 14,356,746 | 34,109,137 | 390,896 | 4,126 | 1200 | 2ms |
| A2, A5 | 28,304,496 | 72,205,882 | 413,747 | 4,451 | 2000 | 2ms |
| A3, A6 | 56,267,428 | 142,505,408 | 389,636 | 4,515 | 2000 | 2ms |
| A4, A7 | 113,955,256 | 279,482,496 | 381,127 | 4,518 | 2000 | 2ms |
| A8, A10 | 204,486,040 | 702,109,104 | 1,009,893 | 11,731 | 3200 | 2ms |
| A9, A11 | 409,001,600 | 1,412,715,328 | 1,011,888 | 12,026 | 3200 | 2ms |
| D1 | 24,268,074 | 59,110,993 | 665,754 | 7,987 | 2000 | 2ms |
| D2, D11 | 49,442,928 | 125,056,984 | 704,646 | 8,695 | 2000 | 2ms |
| D3, D12 | 100,603,964 | 246,954,620 | 704,402 | 8,418 | 2000 | 2ms |
| D4, D13 | 195,991,600 | 491,080,272 | 684,456 | 8,403 | 2000 | 2ms |
| D1_V2 | 25,528,753 | 89,765,315 | 941,641 | 11,068 | 2900 | 2ms |
| D2_V2, D11_V2 | 50,998,032 | 176,682,360 | 978,454 | 10,924 | 2900 | 2ms |
| D3_V2, D12_V2 | 101,907,360 | 346,187,376 | 987,393 | 10,761 | 2900 | 2ms |
| D4_V2, D13_V2 | 203,695,704 | 682,719,000 | 955,868 | 10,410 | 2900 | 2ms |
| D5_V2, D14_V2 | 407,601,120 | 1,405,837,744 | 939,464 | 10,247 | 2900 | 2ms |
| GS1 | 50,991,708 | 173,447,442 | 971,055 | 10,335 | 3200 | 2ms |
| GS2 | 102,068,092 | 367,824,420 | 975,045 | 10,833 | 3200 | 2ms |
| GS3 | 204,035,376 | 696,130,296 | 977,296 | 10,070 | 3200 | 2ms |
| GS4 | 408,175,696 | 1,421,048,528 | 980,252 | 10,551 | 3200 | 2ms |
| GS5 | 816,471,872 | Error | 1,013,625 | 9,734 | 3200 | 2ms |