QuickTip: Disabling workflows to optimize for large environments

Comments

  • Anonymous
    August 16, 2017
Another great post! Thanks for sharing.
  • Anonymous
    August 16, 2017
Thanks for the tips!
  • Anonymous
    August 16, 2017
    Thanks as always for your tips, Kevin. I was wondering why my 'Active Alerts' count was showing so low (vastly incorrect). The rule indeed uses Get-SCOMAlert, but it specifies -ResolutionState 0, and for those of us with connectors, an alert doesn't stay New for longer than a minute. To be more accurate, as you said, if someone really wanted to keep using it: disable the original and create a duplicate, changing Get-SCOMAlert -ResolutionState 0 to Get-SCOMAlert -Criteria "ResolutionState 255"
    • Anonymous
      August 16, 2017
      Correction: It should be Get-SCOMAlert -Criteria "ResolutionState != 255"
    • Anonymous
      August 16, 2017
      Hi Bill! Yep. This whole MP was just an add-on dashboard to show "cool" stuff, but it is inefficient in how it gets the data, and it isn't applicable to all customers because of resolution states, just as you pointed out. It would be better to consider "active alerts" as "not equal 255" (see the sketch after the comments).
  • Anonymous
    August 16, 2017
    Kevin, you and others at Microsoft use environment size definitions of small, large, very large, etc. What is the ballpark of what you (as an individual, or Microsoft as a group) consider each of these to be?
    • Anonymous
      August 16, 2017
      Ugh. I don't think there really is a "standard," and I am not sure everyone would agree with what I think. A SCOM management group can range in agent count from 0 to 15,000 agents. We know that for anything over 6,000 agents we reduce the number of allowed console sessions to 25, just because of the performance impact there.

      In my personal experience, it is usually around the 1,000-agent mark that I start to see performance become a huge factor: consoles running slow, SDK responses slow, and disk latency and CPU becoming super important on SQL and management servers. The number and complexity of the management packs imported start to take a much bigger toll once the agent count grows. By comparison, I have seen 5,000-agent management groups SCREAM, because the customer was only monitoring about 30 custom critical rules and monitors, and they did not import all the typical enterprise MPs you normally see. That environment was super fast and responsive in the consoles.

      So personally, I consider anything less than 500 agents to be "small," 500 to 2,000 agents "large," and anything over 2,000 agents "very large." What is interesting is that deployments with 2,000 agents or 10,000 agents often have very similar characteristics and responsiveness, and similar problems and challenges. The largest single management group I have ever seen was a little over 17,000 agents. That customer ran on all VMs, for SQL and management servers, and while the console responsiveness was far less than ideal, the monitoring environment was stable and very maintainable. I would still advocate for any environment over 2,000 agents to use dedicated physical hardware for the SQL database servers, but we see more and more customers adopt VMs for everything these days.
      • Anonymous
        August 23, 2017
        As for physical SQL servers: I gave up on requesting those. I tell my customers (always over 500 servers) what the performance should be and let them decide. For example, I ask for a maximum of 10 ms disk latency on SQL (and a maximum of 20 ms disk latency on management servers), measured from the Windows OS (a counter-based spot check is sketched after the comments). The ones who choose VMs usually suffer from performance issues, but a well-thought-out virtualization strategy can provide the required specs.
  • Anonymous
    August 19, 2017
    Thanks for sharing this!
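
For anyone who wants to apply the Get-SCOMAlert change discussed in the comments, here is a minimal sketch, not the management pack's actual script. It assumes the OperationsManager PowerShell module is installed and that you are connected to the management group.

    # Count "active" alerts as anything not Closed (255) instead of only New (0),
    # per the correction in the comments above.
    Import-Module OperationsManager

    # The original rule's approach misses alerts a connector has already moved out of "New":
    $newOnly = (Get-SCOMAlert -ResolutionState 0).Count

    # Suggested replacement: everything that is not Closed counts as active.
    $active = (Get-SCOMAlert -Criteria "ResolutionState != 255").Count

    "New only: $newOnly / Active (not closed): $active"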
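For the disk-latency targets mentioned in the thread (a maximum of 10 ms on SQL servers and 20 ms on management servers, measured from the Windows OS), a quick spot check with the built-in LogicalDisk counters might look like the sketch below. The 0.010-second threshold reflects the SQL server target from the comment and is an assumption to adjust per role.

    # Sample read/write latency from the OS for about a minute and flag anything over the threshold.
    $threshold = 0.010   # seconds; use 0.020 when checking a management server instead
    $counters  = '\LogicalDisk(*)\Avg. Disk sec/Read',
                 '\LogicalDisk(*)\Avg. Disk sec/Write'

    $samples = Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12
    $samples.CounterSamples |
        Where-Object { $_.CookedValue -gt $threshold } |
        Select-Object Path, @{ Name = 'LatencyMs'; Expression = { [math]::Round($_.CookedValue * 1000, 1) } }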