Many monitoring systems offer server assessment capabilities and a degree of performance evaluation through PowerShell scripts that, for example, open an application or web page and perform certain actions, such as opening or generating a random or dedicated object.
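In essence, such a scripted check executes a few actions and times each one. Here is a minimal sketch in Python (not any vendor's actual PowerShell; the step names and stubbed actions are hypothetical, standing in for real calls against a server or web page):

```python
import time
from typing import Callable

def timed_check(name: str, action: Callable[[], None]) -> float:
    """Run one scripted action and return its duration in seconds."""
    start = time.perf_counter()
    action()
    return time.perf_counter() - start

# Hypothetical check steps. In a real script these lambdas would call a
# web page or an application API; sleep stubs keep the sketch self-contained.
steps = {
    "open_mailbox": lambda: time.sleep(0.01),
    "create_test_item": lambda: time.sleep(0.02),
}

results = {name: timed_check(name, act) for name, act in steps.items()}
for name, duration in results.items():
    print(f"{name}: {duration * 1000:.1f} ms")
```

Note that everything here runs from wherever the script is hosted, typically a server in the data center, which is exactly the limitation discussed below.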
This is indeed a good starting point for measuring system performance, and it is certainly more advanced than mere availability checks. Still, ask yourself the following questions:
- Does it provide all necessary information for troubleshooting?
- Can user experience be derived from server-to-server checks?
- Is it enough to perform such tests within the data center?
- Are your users working with Office 365 applications via PowerShell? 🙂
The answer to these questions is “No”, even though some vendors may insist that their server-to-server script approach is still state of the art.
Your users either work via web access (the Web API) or, in the majority of cases, use a regular client installation of Outlook, Skype for Business, etc., i.e. the Client API.
It is a fact that Web API and Client API measurements differ in the time needed to complete the same simulation process. PowerShell is simply not applicable if you want to know about the quality of your users’ experience.
It follows that the simple execution of PowerShell scripts does not provide the real-life, step-by-step process analytics that give you the level of insight needed to troubleshoot and optimize client-based performance issues.
Last but not least, the best way to perform a true quality-of-service analysis is to operate a whole network of decentralized simulation bots that use the same solutions and access methods your real end users do. And here again, the Client API makes the difference.