Properly monitoring end-user experience on SaaS platforms is a highly advanced topic, but understanding the results should not be. That is why we present the results as an easy-to-understand score. Think of it like a test score in school: 100 is the best score you can get, and 0 is the worst.

How Should I Read the Scores?

Our scores come out of the box with a few built-in ranges. 

How do we score?

There are many factors that go into our scoring mechanism. First of all, we don’t just connect to M365 and declare it OK.

Being able to ‘reach’ a SaaS service doesn’t mean that service is working satisfactorily for your users. So, our score is the product of multiple real-user simulations that we run continuously from the user’s context, culminating in three distinct scores that help us identify and isolate potential cause areas:

  1. Authentication: Can users successfully authenticate and stay authenticated to the SaaS service?
  2. Networking: Can users successfully and satisfactorily reach the SaaS service?
  3. API: Can users successfully and satisfactorily interact with the SaaS service?

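To make this more tangible, here is a simplified sketch of how three such checks could roll up into a single 0–100 score. The names, weights, and penalty curve below are purely illustrative and not our actual production model:

```python
# Hypothetical sketch: combining the three simulation results into one 0-100 score.
# The weights and penalty curve are illustrative only.

from dataclasses import dataclass

@dataclass
class SimulationResult:
    succeeded: bool       # did the simulated action complete?
    latency_ms: float     # how long it took
    baseline_ms: float    # what is 'normal' for this user at this time

def sub_score(result: SimulationResult) -> float:
    """Score one check (authentication, networking, or API) from 0 to 100."""
    if not result.succeeded:
        return 0.0
    # Penalise latency only to the extent it exceeds the user's expected normal.
    slowdown = max(result.latency_ms / result.baseline_ms - 1.0, 0.0)
    return max(100.0 - 50.0 * slowdown, 0.0)

def workload_score(auth: SimulationResult, net: SimulationResult, api: SimulationResult) -> float:
    """Overall score for one workload, weighting the three distinct checks."""
    weights = {"auth": 0.4, "net": 0.3, "api": 0.3}   # illustrative weights
    return (weights["auth"] * sub_score(auth)
            + weights["net"] * sub_score(net)
            + weights["api"] * sub_score(api))

# Example: authentication and API are near normal, networking is twice as slow as usual.
print(workload_score(
    SimulationResult(True, 250, 240),
    SimulationResult(True, 180, 90),
    SimulationResult(True, 320, 300),
))
```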
We do this for each of the monitored M365 workloads/services and for each user individually, and then compare the results against what is expected as ‘normal’.

What do we mean by ‘expected normal’?

To calculate the expected normal, we look at the user’s individual situation and expectations. A user’s network experience expectations while working in an office with a 100 Gbps LAN connection are probably a little different from those of their home office with a 10 Mbps Wi-Fi network.

You also need to account for the fact that the expected ‘normal’ can depend on the time of day and the day of the week. For instance, on Monday morning when everyone logs in at the same time, authentication is likely a little slower for everyone and therefore shouldn’t be compared to a Friday afternoon. The user might not even notice this, but an ordinary monitoring tool would, and it could raise a false alert if it doesn’t account for those time-related situations.
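One way to picture this is as a baseline keyed by the user, their network context, the weekday, and the hour, against which each new measurement is compared. The structure and the two-sigma threshold below are a simplified, hypothetical sketch of that idea, not our actual model:

```python
# Hypothetical sketch of an 'expected normal' lookup keyed by user, network
# context, weekday, and hour. The 2-sigma threshold is illustrative only.

from statistics import mean, stdev
from collections import defaultdict
from datetime import datetime

# (user, network_context, weekday, hour) -> observed latencies in ms
history: dict[tuple, list[float]] = defaultdict(list)

def record(user: str, context: str, when: datetime, latency_ms: float) -> None:
    history[(user, context, when.weekday(), when.hour)].append(latency_ms)

def is_unusual(user: str, context: str, when: datetime, latency_ms: float) -> bool:
    """Compare a new measurement against that user's normal for this time slot."""
    samples = history[(user, context, when.weekday(), when.hour)]
    if len(samples) < 30:          # not enough history yet -> don't alert
        return False
    expected = mean(samples)
    spread = stdev(samples)
    return latency_ms > expected + 2 * spread

# A slow Monday-morning sign-in is only 'unusual' relative to other Monday
# mornings for that user, not relative to a quiet Friday afternoon.
now = datetime(2024, 6, 3, 9, 0)           # a Monday morning
record("alice", "office-lan", now, 310.0)
print(is_unusual("alice", "office-lan", now, 900.0))   # False: not enough history yet
```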

The same goes for all the different services. Each workload/service consists of multiple functions and APIs that need to be accessible for the service to be usable, sometimes individually and sometimes in sequence. Understanding the intricacies of the connectivity between them, and being able to track them, lets us indicate whether an outage is affecting, for instance, all of OneDrive (including the web interface) or only OneDrive opened in the desktop client. That gives you the ability to redirect your users when only one of the two clients is affected.
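As a rough illustration of that isolation logic, the sketch below probes a few made-up OneDrive functions separately and works out which client paths a failure actually affects:

```python
# Hypothetical sketch: the same workload (OneDrive) is reached through several
# functions/endpoints; probing each separately tells us whether an outage hits
# the whole service or only one client path. Probe names are illustrative.

ONEDRIVE_PROBES = {
    "web interface":  ["auth", "list_files_api", "render_web_ui"],
    "desktop client": ["auth", "list_files_api", "sync_session"],
}

def affected_clients(failed_probes: set[str]) -> list[str]:
    """Return which OneDrive clients are impacted by the failing probes."""
    return [client for client, probes in ONEDRIVE_PROBES.items()
            if any(p in failed_probes for p in probes)]

# Only the sync session is failing -> web access still works, so users can be
# redirected to the browser while the desktop client is down.
print(affected_clients({"sync_session"}))   # ['desktop client']
```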

The above examples are just a few of the many criteria and advanced calculations that come into play when calculating a user’s ‘expected normal’. In reality it consists of hundreds of millions of data points that are calculated and considered every minute.

What scores do we identify?

We score the user experience for both individual users and the organization.

Additionally, we score Teams call quality.
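As a rough illustration of how call quality can be assessed from basic network metrics, the sketch below classifies calls by packet loss, jitter, and round-trip time. The thresholds are illustrative only and do not represent our actual scoring model:

```python
# Hypothetical sketch of a per-call quality check based on common network
# metrics (packet loss, jitter, round-trip time). Thresholds are illustrative.

def call_is_poor(packet_loss_rate: float, jitter_ms: float, rtt_ms: float) -> bool:
    return (packet_loss_rate > 0.10   # more than 10% of packets lost
            or jitter_ms > 30         # inter-arrival jitter above 30 ms
            or rtt_ms > 500)          # round trip above 500 ms

def call_quality_score(calls: list[dict]) -> float:
    """Percentage of calls not classified as poor, expressed as a 0-100 score."""
    if not calls:
        return 100.0
    good = sum(1 for c in calls
               if not call_is_poor(c["loss"], c["jitter"], c["rtt"]))
    return 100.0 * good / len(calls)

print(call_quality_score([
    {"loss": 0.01, "jitter": 12, "rtt": 80},
    {"loss": 0.15, "jitter": 45, "rtt": 320},   # poor: loss and jitter too high
]))   # 50.0
```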

Requirements & considerations

To provide accurate user or organization-wide scoring, we ideally need a minimum of one week of collected data, and for accurate organizational scoring ideally at least 100 monitored users.
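As a final, simplified sketch, the organizational score can be thought of as an aggregate of individual user scores that is only reported once enough data is available. The aggregation below (a plain average) is illustrative, not our actual calculation:

```python
# Hypothetical sketch: report an organization-wide score only when there is
# enough history (>= 1 week of data) and enough monitored users (>= 100).

from statistics import mean

MIN_DAYS_OF_DATA = 7
MIN_MONITORED_USERS = 100

def organization_score(user_scores: dict[str, float], days_of_data: int) -> float | None:
    """Aggregate individual user scores into one org score, or None when the
    data set is too small to be meaningful."""
    if days_of_data < MIN_DAYS_OF_DATA or len(user_scores) < MIN_MONITORED_USERS:
        return None   # not enough data yet for an accurate score
    return mean(user_scores.values())

# With only 3 users and 2 days of data, no organizational score is reported yet.
print(organization_score({"alice": 92.0, "bob": 78.0, "carol": 85.0}, days_of_data=2))   # None
```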