
EPM for Mac performance reports

EPM-M 24.7 performance report

Introduction

The aim of this document is to provide data on agreed performance metrics of the Privilege Management for Mac desktop client compared to the previous release.

The content of this document should be used to provide general guidance only. There are many different factors in a live environment which could show different results, such as hardware configuration, macOS configuration and background activities, 3rd party products, and the nature of the EPM policy being used.

Performance benchmarking

Test scenario

The tests were run on a virtual machine with the following configuration:

  • macOS: 13.4.1
  • Apple Silicon
  • 8GB RAM

Tests were completed with the GA releases.

  • 24.5
  • 24.7

Test names

  • Quick Start policy in High flex, admin user
  • Running the Automation Tests Suite with a Monitoring tool
  • Installing and building Qt5 from source

Test method

We have included three methods to test our product, all of which make use of the caching feature. To find out how to use the caching feature, see the Configure Caching on Policies section in our administrator guides.

There were three types of tests conducted:

  • The first one uses the default quick start policy where we expect to match on an allowed binary with no dialog presented to the user, ensuring consistency and reducing user interaction.

The Quick Start policy is the policy most commonly used as a base by our customers. It can be applied by the MMC and WPE with Import Template. It was chosen as it's our most common use case. The binary we're launching is bash with the -help argument, and we run it through the hyperfine tool during the test process to produce the minimum, maximum, and mean time of our rule matching engine. A sketch of such an invocation is shown after this list.

  • The next one involves running our Automation Tests Suite and measuring each component's resource consumption. We conduct tests on all of our core functionality with our automation suite, which keeps our software under load for approximately an hour. We've used the Automation Tests Suite to portray real-life usage as closely as possible and to monitor PMfM through all types of intensive and non-intensive scenarios.
  • The last one installs the Qt5 package on the system and builds it from source to simulate a developer setting up an environment and compiling code. We use the hyperfine tool to run the 'HOMEBREW_NO_AUTO=1 brew reinstall qt@5' command 10 times in a row, recording the minimum, maximum, and mean times in seconds across the runs.
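
For illustration, the test 1 measurement could be reproduced with an invocation along the following lines; the run count, the --warmup option, and the JSON export are additions for this sketch and were not necessarily part of the original test harness:

hyperfine --warmup 3 --runs 1000 --export-json rule-match.json 'bash -help'

hyperfine reports the mean, minimum, and maximum wall-clock time across the runs, which are the figures shown in the rule matching latency tables in the Results section.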

Summary

Comparing 24.7 results against 24.5, we can observe a small increase in performance for the 24.7 candidate, with or without caching enabled, for synthetic tests 1 and 3, ranging from 2% up to 12%. This is because we have either removed some of our logging messages or not included them in our release builds. The tests were done at a micro level focusing on a single binary and at a macro level focusing on multiple binaries being run while installing and building from source.

On the other hand, looking at our automation performance data for the 24.7 release candidate, we can see a small increase in resource consumption for some components when compared to the previous release of PMfM, and this can be explained by the new addition of the JIT Admin and Application features.

Overall, we can observe a small variable increase in performance on focused time-boxed tests and a slight performance degradation in resource consumption when running the full automation suite.

Results

Rule matching latency

Note: Hyperfine v1.19.0 was used for both 24.5 and 24.7 for the results of tests 1 and 3.

Test/version | Mean | Min | Max
24.5 Process Matching Rule Latency With Caching | 4.5 ms ± 3.1 ms [User: 0.3 ms, System: 0.7 ms] | 1.9 ms | 48.3 ms
24.7 Process Matching Rule Latency With Caching | 4.3 ms ± 1.6 ms [User: 0.3 ms, System: 0.8 ms] | 2.3 ms | 36.1 ms
24.5 Process Matching Rule Latency No Caching | 5.4 ms ± 3.9 ms [User: 0.4 ms, System: 1.1 ms] | 2.6 ms | 56.7 ms
24.7 Process Matching Rule Latency No Caching | 5.3 ms ± 3.3 ms [User: 0.4 ms, System: 1.1 ms] | 2.5 ms | 53.9 ms
No EPM-M installed | 1.4 ms ± 1.6 ms [User: 0.3 ms, System: 0.6 ms] | 0.0 ms | 34.0 ms

µs is microsecond. ms is millisecond.

Automation Tests resource consumption per component

EPM-M 24.7 caching disabled

Defendpoint Component | Mean | Min | Max
CPU % | 0.68 | 0.0 | 5.0
Memory (MB) | 10.09 | 6.64 | 14.0
Energy Impact | 0.69 | 0.0 | 5.0

Endpoint Security Component | Mean | Min | Max
CPU % | 0.62 | 0.0 | 5.1
Memory (MB) | 5.7 | 4.47 | 5.78
Energy Impact | 0.67 | 0.0 | 5.1

PrivilegeManagement Component | Mean | Min | Max
CPU % | 0.06 | 0.0 | 2.4
Memory (MB) | 9.99 | 7.55 | 10.0
Energy Impact | 0.09 | 0.0 | 5.2

Interrogator Component | Mean | Min | Max
CPU % | 1.05 | 0.0 | 8.0
Memory (MB) | 107.44 | 56.0 | 158.0
Energy Impact | 1.05 | 0.0 | 8.0

EPM-M 24.5 caching disabled

Defendpoint Component | Mean | Min | Max
CPU % | 0.6 | 0.0 | 5.0
Memory (MB) | 9.89 | 6.4 | 17
Energy Impact | 0.6 | 0.0 | 5.0

Endpoint Security Component | Mean | Min | Max
CPU % | 0.53 | 0.0 | 5.3
Memory (MB) | 5.49 | 3.12 | 5.6
Energy Impact | 0.58 | 0.0 | 5.3

PrivilegeManagement Component | Mean | Min | Max
CPU % | 0.07 | 0.0 | 8.7
Memory (MB) | 10.49 | 9.76 | 33
Energy Impact | 0.10 | 0.0 | 8.9

Interrogator Component | Mean | Min | Max
CPU % | 0.93 | 0.0 | 7.8
Memory (MB) | 103.2 | 54.19 | 135.0
Energy Impact | 0.93 | 0.0 | 7.8

Installing and building Qt5 from source with a QuickStart High Flexibility policy applied

Hyperfine --runs 10 'HOMEBREW_NO_AUTO=1 brew reinstall qt@5'
Test/version | Mean (s) | Min (s) | Max (s)
24.5 with Caching Enabled | 37.972 s ± 1.254 s [User: 6.533 s, System: 13.859 s] | 36.182 s | 40.011 s
24.7 with Caching Enabled | 35.031 s ± 2.143 s [User: 6.201 s, System: 13.159 s] | 32.126 s | 38.467 s
24.5 with Caching Disabled | 59.099 s ± 2.677 s [User: 6.997 s, System: 16.516 s] | 54.506 s | 62.046 s
24.7 with Caching Disabled | 52.814 s ± 3.105 s [User: 6.549 s, System: 15.011 s] | 48.409 s | 58.204 s
No EPM-M installed | 29.046 s ± 3.159 s [User: 6.294 s, System: 11.815 s] | 23.995 s | 32.816 s

EPM-M 24.5 performance report

Introduction

The aim of this document is to provide data on agreed performance metrics of the EPM-M desktop client compared to the previous release.

The content of this document should be used to provide general guidance only. There are many different factors in a live environment which could show different results such as hardware configuration, macOS configuration and background activities, 3rd party products, and the nature of the EPM policy being used.

Performance benchmarking

Test scenario

The tests were run on a physical machine with the following configuration:

  • macOS: 14.5
  • Apple Silicon
  • 16GB RAM

Tests were completed with the GA releases.

  • 24.3
  • 24.5

Test names

  • Quick Start policy in High flex, admin user
  • Running the Automation Tests Suite with a Monitoring tool
  • Installing and building Qt5 from source

Test method

We have included three methods to test our product, all of which make use of the caching feature.

There were three types of tests conducted:

  • The first one uses the default quick start policy where we expect to match on an allowed binary with no dialog presented to the user, ensuring consistency and reducing user interaction.

The Quick Start policy is the policy that's most commonly used as a base for all of our customers. It can be applied by the MMC and WPE with Import Template. It was chosen as it's our most common use case. The binary we're launching is bash with the -help argument, and we run it through the hyperfine tool during the test process to produce the minimum, maximum and mean time of our rule matching engine.

  • The next one involves running our Automation Tests Suite and measuring each component's resource consumption. We conduct tests on all of our core functionality with our automation suite, which keeps our software under load for approximately an hour. We've used the Automation Tests Suite to portray real-life usage as closely as possible and to monitor EPM-M through all types of intensive and non-intensive scenarios. A sketch of one way such resource samples could be collected is shown after this list.
  • The last one installs the Qt5 package on the system and builds it from source to simulate a developer setting up an environment and compiling code. We use the hyperfine tool to run the 'HOMEBREW_NO_AUTO=1 brew reinstall qt@5' command 10 times in a row, recording the minimum, maximum, and mean times in seconds across the runs.
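
The monitoring tool used for these measurements is not named in this report, so the following is only a rough sketch of how per-component CPU and memory samples of this kind could be collected on macOS; the process-name patterns and the sampling interval are assumptions for this example, and the Energy Impact figures would come from the monitoring tool itself rather than from ps:

# Sample CPU (%) and resident memory (RSS, in KB) for the EPM-M components every 5 seconds
while true; do
  ps -axo %cpu,rss,comm | grep -iE 'defendpoint|endpoint|privilege|interrogator' >> samples.log
  sleep 5
done

The mean, minimum, and maximum values shown in the tables below can then be derived from such samples once the automation run completes.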

Summary

Comparing 24.3 results against 24.5, we can observe no major differences between these two builds when caching is disabled but there is a 10-15% improvement when caching is enabled. This is due to a new performance improvement feature that introduces caching for file reads on the system.

The tests were done at a micro level focusing on a single binary and at a macro level focusing on multiple binaries being run while installing and building from source.

You may see a negative impact on performance when compared to the previous release of EPM-M, but this can be attributed to an increase in automation coverage, especially around URM, and the addition of URM itself, which is a Just In Time procedure, so modifications are made instantly.

Overall there is no performance degradation and the resource consumption differences between the automation runs are negligible, with a substantial performance improvement when caching is enabled.

Results

Rule matching latency

Note: Hyperfine v1.18 was used for both 24.3 and 24.5 for the results of tests 1 and 3.

Test/version | Mean | Min | Max
24.5 Process Matching Rule Latency With Caching | 7.3 ms ± 5.9 ms [User: 0.7 ms, System: 1.5 ms] | 0.2 ms | 35.0 ms
24.3 Process Matching Rule Latency With Caching | 6.9 ms ± 2.1 ms [User: 0.7 ms, System: 1.6 ms] | 1.4 ms | 11.0 ms
24.3 Process Matching Rule Latency No Caching | 9.0 ms ± 3.9 ms [User: 0.6 ms, System: 1.6 ms] | 1.3 ms | 20.5 ms
24.5 Process Matching Rule Latency No Caching | 8.1 ms ± 11.3 ms [User: 0.6 ms, System: 1.4 ms] | 2.1 ms | 107.7 ms
No EPM-M installed | 2.1 ms ± 1.6 ms [User: 0.5 ms, System: 1.0 ms] | 0.0 ms | 6.0 ms

µs is microsecond. ms is millisecond.

Automation tests resource consumption per component

EPM-M 24.5 caching disabled

Defendpoint Component | Mean | Min | Max
CPU % | 2.44556896 | 0 | 10.4
Memory (MB) | 12.8003314 | 8.578 | 19
Energy Impact | 2.44556896 | 0 | 10.4

Endpoint Security Component | Mean | Min | Max
CPU % | 3.50531527 | 0.9 | 12.9
Memory (MB) | 7.98339708 | 3.089 | 8.29
Energy Impact | 3.50531527 | 0.9 | 12.9

PrivilegeManagement Component | Mean | Min | Max
CPU % | 0.23777315 | 0 | 17.6
Memory (MB) | 17.6498439 | 12 | 47
Energy Impact | 0.23777315 | 0 | 17.6

Interrogator Component | Mean | Min | Max
CPU % | 1.72966805 | 0 | 15.4
Memory (MB) | 83.9830643 | 5.169 | 117
Energy Impact | 1.72966805 | 0 | 15.4

EPM-M 24.3 caching disabled

Defendpoint Component | Mean | Min | Max
CPU % | 2.27095032 | 0 | 7.8
Memory (MB) | 12.6131652 | 8.018 | 18
Energy Impact | 2.27095032 | 0 | 7.8

Endpoint Security Component | Mean | Min | Max
CPU % | 2.79196141 | 0.7 | 11
Memory (MB) | 8.01512594 | 2.977 | 8.386
Energy Impact | 2.79196141 | 0.7 | 11

PrivilegeManagement Component | Mean | Min | Max
CPU % | 0.22316747 | 0 | 17.5
Memory (MB) | 22.9058026 | 4.945 | 50
Energy Impact | 0.22316747 | 0 | 17.5

Interrogator Component | Mean | Min | Max
CPU % | 1.66139883 | 0 | 14
Memory (MB) | 83.1811671 | 5.922 | 117
Energy Impact | 1.66139883 | 0 | 14

Installing and building Qt5 from source with a QuickStart High Flexibility policy applied

Hyperfine --runs 10 'HOMEBREW_NO_AUTO=1 brew reinstall qt@5'
Test/version | Mean (s) | Min (s) | Max (s)
24.5 with Caching Disabled | 38.247 s ± 0.557 s [User: 6.170 s, System: 16.760 s] | 37.401 s | 39.142 s
24.3 with Caching Enabled | 30.048 s ± 0.567 s [User: 6.252 s, System: 16.721 s] | 29.358 s | 31.233 s
24.3 with Caching Disabled | 37.156 s ± 0.300 s [User: 6.155 s, System: 16.749 s] | 36.636 s | 37.571 s
24.5 with Caching Enabled | 30.460 s ± 0.274 s [User: 6.333 s, System: 16.923 s] | 30.145 s | 30.924 s
No EPM-M installed | 56.466 s ± 29.484 s [User: 14.027 s, System: 26.076 s] | 46.478 s | 140.366 s

EPM-M 24.3 performance report

Introduction

The aim of this document is to provide data on agreed performance metrics of the EPM-M desktop client compared to the previous release.

The content of this document should be used to provide general guidance only. There are many different factors in a live environment which could show different results, such as hardware configuration, macOS configuration and background activities, 3rd party products, and the nature of the EPM policy being used.

Performance benchmarking

Test scenario

The tests were run on a Parallels virtual machine with the following configuration:

  • macOS: 13.4.1
  • Apple Silicon
  • 8GB RAM

Tests were completed with the GA releases.

  • 24.1
  • 24.3

Test names

  • Quick Start policy in High flex, admin user
  • Running the Automation Tests Suite with a Monitoring tool
  • Installing and building Qt5 from source

Test method

We have included three methods to test our product, all of which make use of the caching feature.

There were three types of tests conducted:

  • The first one uses the default quick start policy where we expect to match on an allowed binary with no dialog presented to the user, ensuring consistency and reducing user interaction.

The Quick Start policy is the policy that's most commonly used as a base for all of our customers. It can be applied by the MMC and WPE with Import Template. It was chosen as it's our most common use case. The binary we're launching is bash with the -help argument, and we run it through the hyperfine tool during the test process to produce the minimum, maximum and mean time of our rule matching engine.

  • The next one involves running our Automation Tests Suite and measuring each component's resource consumption. We conduct tests on all of our core functionality with our automation suite, which keeps our software under load for approximately an hour. We've used the Automation Tests Suite to portray real-life usage as closely as possible and to monitor EPM-M through all types of intensive and non-intensive scenarios.
  • The last one installs the Qt5 package on the system and builds it from source to simulate a developer setting up an environment and compiling code. We use the hyperfine tool to run the 'HOMEBREW_NO_AUTO=1 brew reinstall qt@5' command 10 times in a row, recording the minimum, maximum, and mean times in seconds across the runs (an example invocation is shown after this list).
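
As an illustration, the same measurement can also be made to write its summary statistics to a file; the --export-markdown option here is an addition for this sketch and was not necessarily used to produce the published figures:

hyperfine --runs 10 --export-markdown qt5-results.md 'HOMEBREW_NO_AUTO=1 brew reinstall qt@5'

The exported summary contains the same mean, minimum, and maximum build times reported in the Qt5 tables in the Results section.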

Summary

Comparing 24.1 results against 24.3, we can observe no major differences between these two builds when caching is disabled but there is a 10-15% improvement when caching is enabled. This is due to a new performance improvement feature that introduces caching for file reads on the system.

The tests were done at a micro level focusing on a single binary and at a macro level focusing on multiple binaries being run while installing and building from source.

Overall there is no performance degradation and the resource consumption differences between the automation runs are negligible, with a substantial performance improvement when caching is enabled.

Results

Rule matching latency

Note: Hyperfine v1.18 was used for both 24.1 and 24.3 for the results of tests 1 and 3.

Test/version | Mean | Min | Max
24.1 Process Matching Rule Latency No Caching | 4.6 ms ± 1.0 ms | 2.1 ms | 11.2 ms
24.3 Process Matching Rule Latency No Caching | 2.4 ms ± 0.9 ms | 0.8 ms | 7.0 ms
24.1 Process Matching Rule Latency With Caching | 1.2 ms ± 0.4 ms | 0.6 ms | 2.9 ms
24.3 Process Matching Rule Latency With Caching | 1.1 ms ± 0.3 ms | 0.6 ms | 2.2 ms
No EPM-M installed | 925.2 µs ± 332.6 µs | 390.1 µs | 2742.0 µs

µs is microsecond. ms is millisecond.

We’ve included a visualization of the resources our components are using when running the following command on the 24.1 release candidate with caching disabled:

Hyperfine --runs 1000 'bash -help'


This shows how our components behave under load while the command runs and for a short time afterwards, which is due to our components needing additional time to fully process the 1000 actions in this test case.

All resources drop to near zero after that.

Automation tests resource consumption per component

Defendpoint component

Process load (%)
Test/version | Mean | Min | Max
24.3 | 0.56 | 0 | 5.3
24.1 | 0.378 | 0 | 5.6

Memory (MB)
Test/version | Mean | Min | Max
24.3 | 10.123 | 6.194 | 16
24.1 | 10.454 | 5.825 | 16

Energy impact
Test/version | Mean | Min | Max
24.3 | 0.569 | 0 | 5.3
24.1 | 0.378 | 0 | 5.6

PrivilegeManagement component

Process load (%)
Test/version | Mean | Min | Max
24.3 | 0.032 | 0 | 0.9
24.1 | 0.020 | 0 | 9.1

Memory (MB)
Test/version | Mean | Min | Max
24.3 | 9.998 | 9.906 | 10
24.1 | 10.340 | 9.586 | 32

Energy impact
Test/version | Mean | Min | Max
24.3 | 0.053 | 0 | 2.7
24.1 | 0.0295 | 0 | 9.3

EndpointSecurity component

Process load (%)
Test/version | Mean | Min | Max
24.3 | 0.522 | 0 | 9.5
24.1 | 0.405 | 0 | 2.9

Memory (MB)
Test/version | Mean | Min | Max
24.3 | 6.182 | 4.401 | 6.273
24.1 | 5.736 | 4.353 | 6.081

Energy impact
Test/version | Mean | Min | Max
24.3 | 0.554 | 0 | 9.5
24.1 | 0.433 | 0 | 2.9

Interrogator component

Process load (%)
Test/version | Mean | Min | Max
24.3 | 0.987 | 0 | 7.5
24.1 | 0.465 | 0 | 5.7

Memory (MB)
Test/version | Mean | Min | Max
24.3 | 116.236 | 6.865 | 148
24.1 | 107.954 | 4.898 | 137

Energy impact
Test/version | Mean | Min | Max
24.3 | 0.988 | 0 | 7.5
24.1 | 0.465 | 0 | 5.7

Install and build Qt5 from source with a QuickStart high flexibility policy applied

Hyperfine --runs 10 'HOMEBREW_NO_AUTO=1 brew reinstall qt@5'
Test/version | Mean (s) | Min (s) | Max (s)
23.9.1 with Caching Disabled | 36.242 s ± 0.317 s | 35.715 s | 36.631 s
24.1 with Caching Disabled | 39.747 s ± 0.765 s | 38.058 s | 40.558 s
24.3 with Caching Disabled | 40.563 s ± 1.406 s | 38.542 s | 43.180 s
23.9.1 with Caching Enabled | 29.470 s ± 0.408 s | 28.890 s | 30.342 s
24.1 with Caching Enabled | 35.653 s ± 1.002 s | 33.657 s | 36.570 s
24.3 with Caching Enabled | 35.521 s ± 0.893 s | 33.701 s | 36.668 s
No EPM-M installed | 26.607 s ± 0.845 s | 25.385 s | 28.556 s

EPM-M 24.1 performance report

Introduction

The aim of this document is to provide data on agreed performance metrics of the EPM-M desktop client compared to the previous release.

The content of this document should be used to provide general guidance only. There are many different factors in a live environment which could show different results, such as hardware configuration, macOS configuration and background activities, 3rd party products, and the nature of the EPM policy being used.

Performance benchmarking

Test scenario

The tests were run on a Parallels virtual machine with the following configuration:

  • macOS: 13.4.1
  • Apple Silicon
  • 8GB RAM

Tests were completed with the GA releases.

  • 23.9.1.1
  • 24.1

Test names

  • Quick Start policy in High flex, admin user
  • Running the Automation Tests Suite with a Monitoring tool
  • Installing and building Qt5 from source

Test method

We have included three methods to test our product, all of which make use of the caching feature.

There were three types of tests conducted:

  • The first one uses the default quick start policy where we expect to match on an allowed binary with no dialog presented to the user, ensuring consistency and reducing user interaction.

The Quick Start policy is the policy that's most commonly used as a base for all of our customers. It can be applied by the MMC and WPE with Import Template. It was chosen as it's our most common use case. The binary we're launching is bash with the -help argument, and we run it through the hyperfine tool during the test process to produce the minimum, maximum and mean time of our rule matching engine.

  • The next one involves running our Automation Tests Suite and measuring each component's resource consumption. We conduct tests on all of our core functionality with our automation suite, which keeps our software under load for approximately an hour. We've used the Automation Tests Suite to portray real-life usage as closely as possible and to monitor PMfM through all types of intensive and non-intensive scenarios.
  • The last one installs the Qt5 package on the system and builds it from source to simulate a developer setting up an environment and compiling code. We use the hyperfine tool to run the 'HOMEBREW_NO_AUTO=1 brew reinstall qt@5' command 10 times in a row, recording the minimum, maximum, and mean times in seconds across the runs.

Summary

Comparing 23.9.1 results against 24.1, we can observe no major differences between these two builds when caching is disabled but there is a 10-15% improvement when caching is enabled. This is due to a new performance improvement feature that introduces caching for file reads on the system.
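
For instance, in the Qt5 build results below, the 24.1 mean drops from 39.747 s with caching disabled to 35.653 s with caching enabled, a reduction of (39.747 - 35.653) / 39.747 ≈ 10.3%, which is consistent with the range quoted above.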

The tests were done at a micro level focusing on a single binary and at a macro level focusing on multiple binaries being run while installing and building from source.

Overall there is no performance degradation and the resource consumption differences between the automation runs are negligible, with a substantial performance improvement when caching is enabled.

Results

Rule matching latency

Note: Hyperfine v1.18 was used for both 23.9.1 and 24.1 for the results of tests 1 and 3.

Test/version | Mean | Min | Max
23.9.1 Process Matching Rule Latency No Caching | 4.6 ms ± 1.8 ms | 3.4 ms | 10.1 ms
24.1 Process Matching Rule Latency No Caching | 4.6 ms ± 1.0 ms | 2.1 ms | 11.2 ms
23.9.1 Process Matching Rule Latency With Caching | 615.8 µs ± 323.3 µs | 209.1 µs | 1888.5 µs
24.1 Process Matching Rule Latency With Caching | 1.2 ms ± 0.4 ms | 0.6 ms | 2.9 ms
No EPM-M installed | 965.3 µs ± 338.9 µs | 468.2 µs | 2245.1 µs

µs is microsecond. ms is millisecond.

We’ve included a visualization of the resources our components are using when running the following command on the 24.1 release candidate with caching disabled:

Hyperfine --runs 1000 'bash -help'

This shows how our components behave under load while the command runs and for a short time afterwards, which is due to our components needing additional time to fully process the 1000 actions in this test case.

All resources drop to near zero after that.

Automation tests resource consumption per component

Defendpoint component

Process load (%)
Test/version | Mean | Min | Max
24.1 | 0.378 | 0 | 5.6
23.9.1 | 1.162 | 0 | 11.8
23.9 | 0.785 | 0 | 7.4

Memory (MB)
Test/version | Mean | Min | Max
24.1 | 10.454 | 5.825 | 16
23.9.1 | 9.778 | 5.537 | 14
23.9 | 8.937 | 5.537 | 12

Energy impact
Test/version | Mean | Min | Max
24.1 | 0.378 | 0 | 5.6
23.9.1 | 1.163 | 0 | 11.8
23.9 | 0.788 | 0 | 9.3

PrivilegeManagement component

Process load (%)
Test/version | Mean | Min | Max
24.1 | 0.020 | 0 | 9.1
23.9.1 | 0.513 | 0 | 10.1
23.9 | 0.575 | 0 | 17.5

Memory (MB)
Test/version | Mean | Min | Max
24.1 | 10.340 | 9.586 | 32
23.9.1 | 8.340 | 4.497 | 14
23.9 | 28.259 | 4.897 | 35

Energy impact
Test/version | Mean | Min | Max
24.1 | 0.0295 | 0 | 9.3
23.9.1 | 0.528 | 0 | 10.1
23.9 | 0.787 | 0 | 17.9

EndpointSecurity component

Process load (%)
Test/version | Mean | Min | Max
24.1 | 0.405 | 0 | 2.9
23.9.1 | 1.046 | 0 | 10.1
23.9 | 0.936 | 0 | 16.7

Memory (MB)
Test/version | Mean | Min | Max
24.1 | 5.736 | 4.353 | 6.081
23.9.1 | 5.990 | 4.497 | 6.161
23.9 | 5.571 | 4.433 | 5.665

Energy impact
Test/version | Mean | Min | Max
24.1 | 0.433 | 0 | 2.9
23.9.1 | 1.095 | 0 | 10.1
23.9 | 0.979 | 0 | 16.7

Interrogator component

Process load (%)
Test/version | Mean | Min | Max
24.1 | 0.465 | 0 | 5.7
23.9.1 | 3.517 | 0 | 28.6
23.9 | 2.149 | 0 | 19.8

Memory (MB)
Test/version | Mean | Min | Max
24.1 | 107.954 | 4.898 | 137
23.9.1 | 98.365 | 7.665 | 134
23.9 | 107.904 | 6.097 | 143

Energy impact
Test/version | Mean | Min | Max
24.1 | 0.465 | 0 | 5.7
23.9.1 | 3.517 | 0 | 28.6
23.9 | 2.149 | 0 | 19.8

Install and build Qt5 from source with a QuickStart high flexibility policy applied

Hyperfine --runs 10 'HOMEBREW_NO_AUTO=1 brew reinstall qt@5'
Test/version | Mean (s) | Min (s) | Max (s)
23.9.1 with Caching Disabled | 36.242 s ± 0.317 s | 35.715 s | 36.631 s
24.1 with Caching Disabled | 39.747 s ± 0.765 s | 38.058 s | 40.558 s
23.9.1 with Caching Enabled | 29.470 s ± 0.408 s | 28.890 s | 30.342 s
24.1 with Caching Enabled | 35.653 s ± 1.002 s | 33.657 s | 36.570 s
No EPM-M installed | 27.226 s ± 0.807 s | 25.804 s | 28.146 s
