
EPM for Windows performance reports

EPM-W 24.7 performance report

Introduction

The aim of this document is to provide data on agreed performance metrics of the EPM-W desktop client compared to the previous release.

The content of this document should be used as general guidance only. Many factors in a live environment can produce different results, such as hardware configuration, Windows configuration and background activities, third-party products, and the nature of the EPM policy being used.

Performance benchmarking

Test Scenario

Tests are performed on dedicated VMs hosted in our data center. Each VM is configured with:

  • Windows 10 22H2
  • 4 Core 3.3GHz CPU
  • 4GB RAM

Testing was performed using the GA release of 24.7.

Test Name

Low Flexibility Quick Start policy with a single matching rule.

Test Method

This test involves a modified Low Flexibility Quick Start policy to which a single matching rule is added that auto-elevates an application based on its name. Auditing is also turned on for the application. The application is a trivial command-line app and is executed once per second for 30 minutes. Performance counters and EPM-W activity logging are collected and recorded by the test.

The Quick Start policy is commonly used as a base by our customers and can be applied using our Policy Editor Import Template function. It was chosen because it is our most common use case. The application being elevated is a dummy .exe created specifically for this testing; it terminates quickly and has no UI, making it ideal for the testing scenario.
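The test harness itself is not part of this report, but the shape of the loop is simple. Below is a minimal C++ sketch under stated assumptions: the dummy app lives at a hypothetical path (C:\Tests\dummy.exe), and only a single illustrative performance counter is sampled via the Windows PDH API, whereas the real test records a wider set of counters plus EPM-W activity logs.

    // Minimal sketch of the benchmark loop (not the actual test harness).
    #include <windows.h>
    #include <pdh.h>
    #include <cstdio>
    #pragma comment(lib, "pdh.lib")

    int main()
    {
        // Open a PDH query and add one counter; the real test tracks many more.
        PDH_HQUERY query;
        PDH_HCOUNTER cpu;
        PdhOpenQuery(nullptr, 0, &query);
        PdhAddEnglishCounterW(query, L"\\Processor(_Total)\\% Processor Time", 0, &cpu);
        PdhCollectQueryData(query); // prime the query: rate counters need two samples

        for (int i = 0; i < 30 * 60; ++i) // once per second for 30 minutes
        {
            STARTUPINFOW si = { sizeof(si) };
            PROCESS_INFORMATION pi = {};
            wchar_t cmd[] = L"C:\\Tests\\dummy.exe"; // hypothetical path to the dummy app
            if (CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE, 0,
                               nullptr, nullptr, &si, &pi))
            {
                WaitForSingleObject(pi.hProcess, 5000); // the dummy app exits quickly
                CloseHandle(pi.hThread);
                CloseHandle(pi.hProcess);
            }

            Sleep(1000);

            // Sample the counter and emit a CSV row: elapsed second, CPU %.
            PDH_FMT_COUNTERVALUE v;
            PdhCollectQueryData(query);
            if (PdhGetFormattedCounterValue(cpu, PDH_FMT_DOUBLE, nullptr, &v) == ERROR_SUCCESS)
                wprintf(L"%d,%.2f\n", i, v.doubleValue);
        }

        PdhCloseQuery(query);
        return 0;
    }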

Results

Listed below are the results from the test runs on 24.7 and our previous release, 24.5. Due to the nature of our product, results are very sensitive to OS and general computer activity, so some fluctuation is to be expected.

This report uses a different test methodology from previous reports, so the results aren't directly comparable with them. However, the previous release was re-run using the new methodology so that results can be compared within this report.

Rule matching latency

Shows the time taken for the rule to match. A small increase in mean and max latency was observed, but this should not affect customers in any significant way, as it is attributable to small run-to-run variation in our tests.

Series                                  | Mean | Min  | Max
24.7 Process Matching Rule Latency (ms) | 5.64 | 1.25 | 20.64
24.5 Process Matching Rule Latency (ms) | 5.38 | 4.23 | 17.74

% Processor Time

Percentage of processor time used. In 24.5 we observed a single large spike which contributed to a higher max value; this spike is not present in 24.7, resulting in a lower max. The minimum is lower in 24.7 and the means differ by less than 1%, so it's unlikely that there would be any meaningful difference for a customer.

Series                | Mean  | Min  | Max
24.7 % Processor Time | 11.74 | 6.91 | 14.47
24.5 % Processor Time | 11.85 | 8.29 | 19.46

Thread Count (Defendpoint)

Number of threads being used by Defendpoint. 24.7 peaked one thread higher than 24.5, but the additional thread was short-lived, so the means remain very similar.

Series                 | Mean  | Min   | Max
24.7 Thread Count (DP) | 35.58 | 35.00 | 40.00
24.5 Thread Count (DP) | 35.48 | 34.00 | 39.00

Memory testing

For each release, we run a series of automation tests (covering Application control, token modification, DLL control, and Event auditing) against a build with memory leak analysis enabled, to ensure there are no memory leaks. We use Visual Leak Detector (VLD) version 2.51, which, when enabled at compile time, replaces various memory allocation and deallocation functions in order to record memory usage. When a service built with this functionality is stopped, an output file is saved to disk listing all leaks VLD has detected. We use these builds with automation so that an output file is generated for each test scenario within a suite.
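As an illustration of how this mechanism works (a minimal sketch, not EPM-W source): including the VLD header in a debug build hooks the CRT allocation functions for the whole module, and VLD emits its leak report when the process, or in our case the service, shuts down.

    // Minimal sketch of enabling Visual Leak Detector in a debug build.
    #ifdef _DEBUG
    #include <vld.h> // Visual Leak Detector; has no effect in release builds
    #endif

    #include <cstdlib>

    int main()
    {
        // A deliberate leak for demonstration: VLD flags this allocation,
        // with a full call stack, in its report at shutdown.
        void* leaked = std::malloc(64);
        (void)leaked; // intentionally never freed
        return 0;
    }

Where the report goes (debugger output, a file on disk, or both) is controlled by VLD's vld.ini configuration; the per-scenario output files described above come from that mechanism.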

The output files are then examined by a developer, who reviews the results looking for anything notable. Due to the number of automation tests, the test suites covering impacted features are identified and run. If nothing concerning is found, the build proceeds to production.

In the 24.7 release, our testing uncovered no concerns.

EPM-W 24.5 performance report

Introduction

The aim of this document is to provide data on agreed performance metrics of the EPM-W desktop client compared to the previous release.

The content of this document should be used as general guidance only. Many factors in a live environment can produce different results, such as hardware configuration, Windows configuration and background activities, third-party products, and the nature of the EPM policy being used.

Performance benchmarking

Test Scenario

Tests are performed on dedicated VMs hosted in our data center. Each VM is configured with:

  • Windows 10 22H2
  • 4 Core 3.3GHz CPU
  • 4GB RAM

Testing was performed using the GA release of 24.5.

Test Name

Low Flexibility Quick Start policy with a single matching rule.

Test Method

This test involves a modified Low Flexibility Quick Start policy to which a single matching rule is added that auto-elevates an application based on its name. Auditing is also turned on for the application. The application is a trivial command-line app and is executed once per second for 30 minutes. Performance counters and EPM-W activity logging are collected and recorded by the test.

The Quick Start policy is commonly used as a base by our customers and can be applied using our Policy Editor Import Template function. It was chosen because it is our most common use case. The application being elevated is a dummy .exe created specifically for this testing; it terminates quickly and has no UI, making it ideal for the testing scenario.

Results

Listed below are the results from the test runs on 24.5 and our previous release, 24.3. Due to the nature of our product, results are very sensitive to OS and general computer activity, so some fluctuation is to be expected.

This report uses a different test methodology from previous reports, so the results aren't directly comparable with them. However, the previous release was re-run using the new methodology so that results can be compared within this report.

Rule matching latency

Shows the time taken for the rule to match. We observed a small decrease in mean and max latency from 24.3 to 24.5, and a seemingly large increase in minimum latency. On closer inspection, the 24.3 run contained only a single dip to that much lower latency figure; otherwise the graphs look very similar. These minor changes are unlikely to make any difference to user experience.

Series                                  | Mean | Min  | Max
24.3 Process Matching Rule Latency (ms) | 5.46 | 1.25 | 19.67
24.5 Process Matching Rule Latency (ms) | 5.38 | 4.23 | 17.74

% Processor Time

Percentage of processor time used. In 24.5 we observed a single large spike which contributed to a higher max value, but the minimum and mean were within 0.4% of 24.3. It's unlikely that there would be any meaningful difference for a customer.

Series                | Mean  | Min  | Max
24.3 % Processor Time | 11.92 | 7.93 | 15.76
24.5 % Processor Time | 11.85 | 8.29 | 19.46

Thread Count (Defendpoint)

Number of threads being used by Defendpoint. 24.5 used roughly one more thread on average than 24.3.

Series                 | Mean  | Min   | Max
24.3 Thread Count (DP) | 34.59 | 33.00 | 38.00
24.5 Thread Count (DP) | 35.48 | 34.00 | 39.00

Memory testing

For each release, we run a series of automation tests (covering Application control, token modification, DLL control, and Event auditing) against a build with memory leak analysis enabled, to ensure there are no memory leaks. We use Visual Leak Detector (VLD) version 2.51, which, when enabled at compile time, replaces various memory allocation and deallocation functions in order to record memory usage. When a service built with this functionality is stopped, an output file is saved to disk listing all leaks VLD has detected. We use these builds with automation so that an output file is generated for each test scenario within a suite.

The output files are then examined by a developer, who reviews the results looking for anything notable. Due to the number of automation tests, the test suites covering impacted features are identified and run. If nothing concerning is found, the build proceeds to production.

In the 24.5 release, our testing uncovered no concerns.

EPM-W 24.3 performance report

Introduction

The aim of this document is to provide data on agreed performance metrics of the EPM-W desktop client compared to the previous release.

The content of this document should be used as general guidance only. Many factors in a live environment can produce different results, such as hardware configuration, Windows configuration and background activities, third-party products, and the nature of the EPM policy being used.

Performance benchmarking

Test Scenario

Tests are performed on dedicated VMs hosted in our data center. Each VM is configured with:

  • Windows 10 21H2
  • 4 Core 3.3GHz CPU
  • 8GB RAM

Testing was performed using the GA release of 24.3.

Test Name

Quick Start policy with a single matching rule.

Test Method

This test involves a modified Quick Start policy to which a single matching rule is added that auto-elevates an application based on its name. Auditing is also turned on for the application. The application is a trivial command-line app and is executed once per second for 60 minutes. Performance counters and EPM-W activity logging are collected and recorded by the test.

The Quick Start policy is commonly used as a base by our customers and can be applied using our Policy Editor Import Template function. It was chosen because it is our most common use case. The application being elevated is a dummy .exe created specifically for this testing; it terminates quickly and has no UI, making it ideal for the testing scenario.

Results

Listed below are the results from the test runs on 24.3 and our previous release, 24.1. Due to the nature of our product, results are very sensitive to OS and general computer activity, so some fluctuation is to be expected.

Rule matching latency

Shows the time taken for the rule to match. An increase in mean and max latency was observed, but the absolute values remain small, and this should not affect customers in a significant way, as it is attributable to small run-to-run variation in our tests.

Series                                  | Mean | Min  | Max
24.3 Process Matching Rule Latency (ms) | 1.74 | 0.45 | 10.36
24.1 Process Matching Rule Latency (ms) | 0.80 | 0.45 | 4.57

% Processor Time

Percentage of processor time used. A small decrease in mean processor time was observed, along with a transient spike in maximum CPU utilization caused by background tasks being performed by Windows.

Series                | Mean | Min  | Max
24.3 % Processor Time | 1.73 | 0.92 | 25.58
24.1 % Processor Time | 2.52 | 1.70 | 8.52

Thread Count (Defendpoint)

Number of threads being used by Defendpoint. There was a small decrease in average thread count between 24.1 and 24.3.

Series                 | Mean  | Min   | Max
24.3 Thread Count (DP) | 37.60 | 35.00 | 39.00
24.1 Thread Count (DP) | 38.80 | 35.00 | 41.00

Memory testing

For each release, we run a series of automation tests (covering Application control, token modification, DLL control, and Event auditing) against a build with memory leak analysis enabled, to ensure there are no memory leaks. We use Visual Leak Detector (VLD) version 2.51, which, when enabled at compile time, replaces various memory allocation and deallocation functions in order to record memory usage. When a service built with this functionality is stopped, an output file is saved to disk listing all leaks VLD has detected. We use these builds with automation so that an output file is generated for each test scenario within a suite.

The output files are then examined by a developer, who reviews the results looking for anything notable. Due to the number of automation tests, the test suites covering impacted features are identified and run. If nothing concerning is found, the build proceeds to production.

In the 24.3 release, our testing uncovered no concerns.

EPM-W 24.1 performance report

Introduction

The aim of this document is to provide data on agreed performance metrics of the EPM-W desktop client compared to the previous release.

The content of this document should be used as general guidance only. Many factors in a live environment can produce different results, such as hardware configuration, Windows configuration and background activities, third-party products, and the nature of the EPM policy being used.

Performance benchmarking

Test scenario

Tests are performed on dedicated VMs hosted in our data center. Each VM is configured with:

  • Windows 10 21H2
  • 4 Core 3.3GHz CPU
  • 8GB RAM

Testing was performed using the GA release of 24.1.

Test name

Quick Start policy with a single matching rule.

Test method

This test involves a modified Quick Start policy to which a single matching rule is added that auto-elevates an application based on its name. Auditing is also turned on for the application. The application is a trivial command-line app and is executed once per second for 60 minutes. Performance counters and EPM-W activity logging are collected and recorded by the test; this data is available as an archive which should be attached to this report.

Process start latency after rule match

Series                             | Mean | Min  | Max
Process Matching Rule Latency (ms) | 0.80 | 0.45 | 4.57

Process start latency vs. processor time

Series                             | Mean | Min  | Max
Process Matching Rule Latency (ms) | 0.80 | 0.45 | 4.57
% Processor Time                   | 2.52 | 1.70 | 8.52

CPU user/system time

Series            | Mean | Min  | Max
% User Time       | 1.01 | 0.31 | 4.58
% Privileged Time | 1.50 | 0.89 | 4.08

Defendpoint CPU user/system time

Series                 | Mean | Min  | Max
% User Time (DP)       | 1.02 | 0.16 | 2.02
% Privileged Time (DP) | 0.93 | 0.00 | 2.18

Defendpoint private bytes

Series             | Mean          | Min           | Max
Private Bytes (DP) | 13,905,599.00 | 12,357,630.00 | 14,520,320.00

Defendpoint handle/thread counts

Series            | Mean   | Min    | Max
Thread Count (DP) | 38.80  | 35.00  | 41.00
Handle Count (DP) | 565.84 | 489.00 | 641.00

