Database maintenance
Microsoft SQL Server Management Studio offers database administrators (DBAs) a set of tools for ensuring databases are optimized, regularly backed up, and free of inconsistencies. Tasks can be created as Maintenance Plans, and SQL Server Management Studio includes a wizard for creating core Maintenance Plans.
We recommend database maintenance tasks be performed only by a qualified DBA.
Below is a subset of the Maintenance Plan tasks BeyondTrust recommends for the Endpoint Privilege Management Reporting database.
Other Maintenance Plan tasks are offered by Microsoft SQL Server. Your DBA may suggest other plans that best suit your organization.
Manage the Endpoint Privilege Management database cache
When Endpoint Privilege Management Reporting is deployed on larger corporate networks, the volume of data can affect the performance of the reports: reports become less interactive and load times increase.
Once the database reaches a certain size, upgrades to the processor, disk, or memory make only a negligible difference to report load times. To improve the user experience, BeyondTrust has developed a caching feature that introduces tables storing summary data, so report information can be returned quickly.
Data summarization
The data summarization is linked to the current mechanism that normalizes data from the staging area. This is called by either the SQL Agent or Service Broker. Service Broker is installed by default, but you can run the Create_ER_Database_Agent.sql script to use the SQL Agent instead.
The data is stored against the following intervals:
- 12 Months
- 6 Months
- 30 Days
- 7 Days
- 24 Hours
Each interval has an associated configuration that determines the amount of time until the data needs to be refreshed.
The data is refreshed depending on the interval:
- Once a day for 12-month, 6-month, and 30-day intervals
- Once an hour for 7-day interval
- Refreshed with new data for a 24-hour interval
Summarizing the grouping data adds overhead to the CopyFromStaging process, so on large installations summarizing is not done on every iteration.
The 24-hour interval is an exception: all of its data is refreshed every time, because only cached data is stored for the Discovery dashboard.
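The refresh configuration for each interval lives in the dbo.Config table. As a sketch, assuming the Cache…Refresh ConfigId naming used by the refresh script later in this section (Cache12MonthsRefresh, Cache7DaysRefresh, and so on), you can inspect the current settings with:

```sql
-- Inspect the cache refresh configuration rows for each interval
SELECT ConfigId, DateTimeValue, BitValue
FROM dbo.Config
WHERE ConfigId LIKE 'Cache%'
```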
Enable and disable report caching
The cache feature can be enabled or disabled when you install the Endpoint Privilege Management Reporting database. After installation, change cache settings in one of the following ways:
- In SQL Server Management Studio, connect to the database and run the following query:
UPDATE Config
SET BitValue = 1
WHERE ConfigId = 'CacheEnabled'
- Open SQL Server Reporting Services and navigate to the Admin folder. Open the ErpCacheAdmin report and click Enable or Disable, depending on the current state.
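To verify the current state without changing it, you can read the same Config row the update query above targets:

```sql
-- 1 = caching enabled, 0 = disabled
SELECT BitValue
FROM Config
WHERE ConfigId = 'CacheEnabled'
```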
The grouping data is cached in a table and refreshed at a configurable interval. This can be every x minutes or at a specific time of the day. The values that determine this behavior are in the dbo.Config table.
The values can be manually edited by running the following script in SQL Server Management Studio.
DECLARE @Interval NVARCHAR(40)= NULL -- Set the interval Last12Months, Last6Months, Last30Days, Last7Days
-- Set the local time of the server to run fresh cache
-- daily for 12 / 6 Month, 30 Day eg '13:00'
-- Time on the hour for 7 Day eg '13:05'
DECLARE @LocalRefreshTime TIME = NULL
DECLARE @ForceCache BIT = NULL -- Reset the force cache
DECLARE @MyRefreshTime DATETIME
IF @LocalRefreshTime IS NOT NULL AND @Interval IS NOT NULL
BEGIN
DECLARE @UTCOffset INT = DATEDIFF(MINUTE, GETDATE(), GETUTCDATE())
DECLARE @MinutesPastMidnight INT = DATEDIFF(MINUTE, '00:00:00.000', @LocalRefreshTime)
SET @MyRefreshTime = DATEADD(MINUTE, @MinutesPastMidnight + @UTCOffset, CAST(CAST(GETUTCDATE() AS DATE) AS DATETIME))
UPDATE [dbo].[Config]
SET DateTimeValue = ISNULL(@MyRefreshTime, DateTimeValue)
WHERE ConfigId = 'Cache' + REPLACE(REPLACE(@Interval, 'Last', ''), ' ', '') + 'Refresh'
END
IF @ForceCache IS NOT NULL
UPDATE [dbo].[Config]
SET BitValue = @ForceCache,
DateTimeValue = GETUTCDATE()
WHERE ConfigId = 'ForceCache'
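For example, to schedule the 7-day cache to refresh at five minutes past one (server local time), set the variable declarations at the top of the script above as follows; the time shown is illustrative, and the rest of the script is unchanged:

```sql
-- Example values for the cache refresh script: refresh the 7-day cache at 13:05 server local time
DECLARE @Interval NVARCHAR(40) = 'Last7Days'
DECLARE @LocalRefreshTime TIME = '13:05'
DECLARE @ForceCache BIT = NULL -- Leave NULL unless you want to force a cache refresh
```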
Rebuild indexes in SQL Server Management Studio
Microsoft SQL Server maintains indexes whenever events and data are added to or purged from the Endpoint Privilege Management database. Over time, these indexes can become fragmented. Heavily fragmented indexes can cause degradations in database performance and result in increased report generation times. Rebuilding the indexes improves database performance and restores report generation times to their optimum.
Therefore, we recommend that a maintenance plan be applied to SQL Server to ensure indexes are regularly rebuilt.
If you are unsure whether Endpoint Privilege Management Reporting database indexes are causing a degradation in performance, you can check their fragmentation percentage in SQL Server Management Studio. The Processes table of the Endpoint Privilege Management database is typically a high-volume table and is most likely to become fragmented first.
To check the index fragmentation of the Endpoint Privilege Management database in SQL Server Management Studio:
- Navigate to Databases > BeyondTrust Endpoint Privilege Management(Privilege Guard) > Tables > dbo.Processes > Indexes.
- Right-click IDC Processes and select Properties.
- In the Index Properties dialog box, select Fragmentation.
- The Total Fragmentation is displayed as a percentage.
To manually rebuild indexes for a table, right-click the Indexes node of any table and select Rebuild All.
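The same fragmentation check can be scripted. A minimal sketch, assuming the default dbo.Processes table name, that returns the fragmentation of each index on the table:

```sql
-- Fragmentation of each index on dbo.Processes, most fragmented first
SELECT i.name AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Processes'), NULL, NULL, 'SAMPLED') ips
INNER JOIN sys.indexes i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC
```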
Rebuild indexes in a SQL Azure environment
As indexes grow they are not stored contiguously on disk and the database becomes fragmented. This results in degraded performance as more disk reads are required to load an index into memory.
It is best practice to regularly rebuild indexes. This can have a significant performance benefit.
Azure does not support rebuilding indexes using SQL Server Management Studio. It also does not support maintenance plans and does not use a SQL Server Agent. Therefore, a manual approach to rebuilding indexes is required.
Check for index fragmentation
The following SQL query returns a list of the indexes in the database, with the most fragmented first.
SELECT
OBJECT_SCHEMA_NAME(ips.OBJECT_ID) 'Schema',
OBJECT_NAME(ips.OBJECT_ID) 'Table',
i.NAME,
ips.index_id,
index_type_desc,
avg_fragmentation_in_percent,
avg_page_space_used_in_percent,
page_count
FROM
sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') ips
INNER JOIN
sys.indexes i
ON (ips.object_id = i.object_id)
AND
(
ips.index_id = i.index_id
)
ORDER BY
avg_fragmentation_in_percent DESC
Rebuild indexes
The guidance from Microsoft is that indexes be rebuilt if the fragmentation is over 30% and reorganized if the fragmentation is between 5% and 30%. Reorganizing an index is a faster, lightweight version of rebuilding, and the indexes remain online.
To rebuild an index:
ALTER INDEX IDX_Processes5 ON Processes REBUILD WITH (ONLINE = ON)
If the index can be unavailable for a short time, you can speed up the rebuild by specifying:
(ONLINE = OFF)
The syntax to reorganize an index is:
ALTER INDEX IDX_Processes5 ON Processes REORGANIZE
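The two statements can be combined so the appropriate action is chosen from the measured fragmentation. A sketch using the IDX_Processes5 index from the examples above and Microsoft's 5%/30% thresholds:

```sql
-- Rebuild when fragmentation > 30%, reorganize when between 5% and 30%
DECLARE @Frag FLOAT =
(
    SELECT MAX(ips.avg_fragmentation_in_percent)
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Processes'), NULL, NULL, 'SAMPLED') ips
    INNER JOIN sys.indexes i
        ON ips.object_id = i.object_id AND ips.index_id = i.index_id
    WHERE i.name = 'IDX_Processes5'
)
IF @Frag > 30
    ALTER INDEX IDX_Processes5 ON Processes REBUILD WITH (ONLINE = ON)
ELSE IF @Frag >= 5
    ALTER INDEX IDX_Processes5 ON Processes REORGANIZE
```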
Rebuild all database indexes
Manually rebuilding indexes can be time-consuming and error-prone. A Microsoft engineer has provided a stored procedure that builds a list of the indexes in the database and rebuilds or reorganizes each one as appropriate.
We recommend you inspect and change the code as required. Run the procedure when the database is not busy. The procedure is processor and I/O intensive.
After you create the stored procedure, run as follows:
EXEC AzureSQLMaintenance 'index'
Schedule index rebuilding
Schedule index rebuilding to run regularly, either daily or weekly, depending on how quickly the indexes become fragmented.
Microsoft supplies Azure Automation to allow the scheduling of stored procedure calls.
For more information, see Managing Azure SQL databases using Azure Automation.
Database backup recommendations
Backing up the Endpoint Privilege Management database on a regular basis is important for preserving Endpoint Privilege Management activity in the event of a hardware or system failure on the SQL Server that could cause corruption. Backed-up databases can be restored quickly with minimal disruption to the business.
There are several options for backing up components of a database. We recommend backing up the entire database.
We recommend a Maintenance Plan be applied to SQL Server to ensure the Endpoint Privilege Management database is fully backed up on a regular basis.
Create a maintenance plan in SQL Server Management Studio
Maintenance Plans allow you to create a workflow of maintenance tasks in SQL Server to ensure your databases are fully optimized and backed up. Plans can be created manually, or by using the built-in wizard, and can be performed manually or automatically on a schedule.
Maintenance Plans are executed as SQL Server Agent jobs. The SQL Server Agent must be running.
To create a Maintenance Plan in SQL Server Management Studio:
- Navigate to Management > Maintenance Plans.
- Right-click Maintenance Plans and choose Maintenance Plan Wizard.
- Proceed through the wizard to the Select Maintenance Tasks page and check the following recommended tasks (as a minimum):
  - Rebuild Index
  - Back Up Database (Full)
- Proceed through the wizard (setting any options as appropriate) to the Define Rebuild Index Task page.
- Select the BeyondTrustReporting Database and click OK.
- Proceed through the wizard (setting any options as appropriate) to the Define Back Up Database (Full) Task page.
- Select the BeyondTrustReporting Database and click OK.
- Set the backup schedule, backup location, and any other options as appropriate.
- Proceed through the wizard (setting any options as appropriate). Click Finish to complete the wizard and create the new maintenance plan.
The new plan is now listed under the Maintenance Plans node and can be edited at any time. The maintenance plan can be run manually by right-clicking and choosing Execute.
Purge reporting data
Purge data with automation
Automated purging is built into Endpoint Privilege Management Reporting. The automatic purge of data is enabled during a new installation or an upgrade of Endpoint Privilege Management Reporting and defaults to keeping 90 days of data in the database.
Configure automated purging via the dbo.Config table with the following records:
| ConfigId | Description |
| --- | --- |
| MaintenanceEnabled | A config flag that indicates whether maintenance is enabled (1 is enabled, 0 is disabled). Update it with the following SQL query: UPDATE Config SET BitValue = 1 WHERE ConfigId = 'MaintenanceEnabled' |
| PurgeEnabled | A config flag that indicates whether automatic purging is enabled (1 is enabled, 0 is disabled). Update it with the following SQL query: UPDATE Config SET BitValue = 1 WHERE ConfigId = 'PurgeEnabled' |
| LastMaintenanceRun | Indicates the last time that maintenance was run. This is an internal configuration that should not be updated without consulting our support team. |
| DataRetentionDays | The number of days' worth of data to keep. Update it with the following SQL query: UPDATE Config SET BigIntValue = 90 WHERE ConfigId = 'DataRetentionDays' |
Both the MaintenanceEnabled and the PurgeEnabled flags need to be enabled for purging to work.
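For example, to turn purging on and keep 180 days of data (the 180-day value is for illustration; the default is 90):

```sql
-- Enable maintenance and purging, and retain 180 days of data
UPDATE Config SET BitValue = 1 WHERE ConfigId IN ('MaintenanceEnabled', 'PurgeEnabled')
UPDATE Config SET BigIntValue = 180 WHERE ConfigId = 'DataRetentionDays'
```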
Purge data by stored procedure
You can use the stored procedure PurgeData to purge data from the Privilege Management Reporting database. PurgeData accepts three arguments. Either the first argument or the second argument can be used, but not both; pass NULL for the unused argument.
- Date: All events from before this date are purged.
- Integer: All events older than this number of months are purged.
- Integer: A declared integer variable to store the return value of the PurgeData stored procedure (number of events purged). See example below. This can be set to NULL if not required.
This example purges all data before the specified date in the format YYYYMMDD:
EXEC PurgeData '20230601', NULL, NULL
This purges all data older than 6 months, and outputs the number of events deleted to the @EventsDeleted variable:
DECLARE @EventsDeleted INT
EXEC PurgeData NULL, 6, @EventsDeleted = @EventsDeleted OUTPUT
Purge by individual user, host, or workstyle
You can use the stored procedure PurgeEventsByEntity to purge events from a specified host, user, or Workstyle by running the script below with the specified ID.
You can query the Hosts, FullDetail_Users, or Policies tables for the HostID, UserID, or Workstyle name, respectively. Populate the script with the ID to purge the events for that specific entity. Only one ID can be used each time you run the script.
To obtain the HostID from the Hosts table:
SELECT HostID FROM Hosts WHERE NAME = 'YourHostName'
To obtain the HostID from the Users table:
SELECT UserID FROM FullDetail_Users WHERE FormattedName = 'YourDomain\YourUser'
To obtain the PolicyID:
SELECT ID FROM Policies WHERE PGPolicyName = 'WorkstyleName'
Script
Only one ID can be used each time you run this script. Replace the other two IDs with NULL.
EXEC PurgeEventsByEntity HostID, UserID, PolicyID
To purge events for HostID 12, run:
EXEC PurgeEventsByEntity 12, NULL, NULL
To purge events for UserID 17, run:
EXEC PurgeEventsByEntity NULL, 17, NULL
To purge events for Workstyle 5, run:
EXEC PurgeEventsByEntity NULL, NULL, 5
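The lookup and the purge can also be combined in one batch. A sketch, assuming a hypothetical host name YourHostName:

```sql
-- Look up the host's ID and purge its events in a single batch
DECLARE @HostID INT = (SELECT HostID FROM Hosts WHERE NAME = 'YourHostName')
EXEC PurgeEventsByEntity @HostID, NULL, NULL
```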
Shrink the database
If a large amount of data is being purged from the Endpoint Privilege Management database, we recommend the database is shrunk once the purge is complete. Shrinking the database reduces the disk space consumed by the database and log files by removing empty data and log pages.
A database shrink can be configured as a Maintenance Plan in SQL Server Management Studio, and can be configured to run on a regular schedule.
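Outside of a Maintenance Plan, the shrink can also be run directly after a large purge. A sketch, assuming the database is named BeyondTrustReporting:

```sql
-- Reclaim empty data and log pages after a large purge
DBCC SHRINKDATABASE (N'BeyondTrustReporting')
```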
Populate the database with Endpoint Privilege Management event data
Event data enters the database using four staging tables:
- Staging for process type events
- Staging_UserLogon for user logon events
- Staging_ServiceStart for service start events
- Staging_ServiceStop for service stop events
The data in the tables is normalized across the database using the CopyFromStaging stored procedure.
This procedure is normally called every 10 seconds by a Service Broker contract, which is set up by the database installer.
CopyFromStaging locks
The CopyFromStaging stored procedure requires exclusive access to the staging data; otherwise, data could be lost. To prevent it from being run concurrently in separate sessions, a configuration table called Config manages which process currently holds a lock on the system.
A row in the table has a ConfigId of CopyFromStagingLocked and a BitValue. If the BitValue is set to 1, the CopyFromStaging stored procedure terminates without processing any events. The StringValue column shows what currently holds the lock on the system.
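To see whether the lock is currently held, and by what, query the Config row directly:

```sql
-- Inspect the CopyFromStaging lock (BitValue 1 = locked; StringValue names the holder)
SELECT BitValue, StringValue
FROM Config
WHERE ConfigId = 'CopyFromStagingLocked'
```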
Restart the database
If the database is restarted while CopyFromStaging is running, the lock remains in place.
Use the following procedure before restarting the database (or rebooting the database host).
- Prevent CopyFromStaging from being called by renaming it.
- If CopyFromStaging is currently executing, wait for it to complete. It is still executing while the Config table row with a ConfigId of CopyFromStagingLocked has a BitValue of 1. If you instructed CopyFromStaging to run for a certain amount of time by setting the @MinDurationToRunForInMinutes parameter, you can stop it after the next batch by executing InterruptCopyFromStaging.
CopyFromStaging processes a batch of 10,000 events at a time. It checks new events against existing events for duplicates, so on a large database each batch can take some time to process.
- Restart the database / machine.
- Rename CopyFromStaging back to its original name.
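The renames in the first and last steps can be done with sp_rename; the temporary name used here is only an example:

```sql
-- Before the restart: prevent Service Broker from invoking the procedure
EXEC sp_rename 'CopyFromStaging', 'CopyFromStaging_Disabled'

-- After the restart: restore the original name
-- EXEC sp_rename 'CopyFromStaging_Disabled', 'CopyFromStaging'
```

Note that sp_rename issues a warning when renaming a stored procedure; this is expected.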
Recover from a restart leaving the lock in place
If you restarted the database without following the above procedure, the lock may remain in place and CopyFromStaging terminates without processing any events. If you are sure that no events are being processed, you can release the lock by executing the stored procedure ReleaseStagingLock with the argument CopyFromStaging. This results in any data in the current batch being lost. To recover without losing the current data, contact BeyondTrust Technical Support.
Database error management
If an error occurs during the execution of CopyFromStaging, the batches in StagingTemp and the three other StagingTemp_x tables are copied to StagingTempBadBatches and StagingTemp_xBadBatches, and the error message is stored in the StagingErrors table. Processing of new events then continues as normal.
If a batch has an error, the whole batch is copied to StagingTempBadBatches, not just the bad rows. To process the data in StagingTempBadBatches and leave only the bad rows, call RetryCopyFromStaging. This processes the rows one at a time and leaves only the offending ones.
If CopyFromStaging is running, then RetryCopyFromStaging does not run. If you are using the Service Broker you can temporarily rename CopyFromStaging to stop it being run again. If you are using jobs you can disable the job.
If CopyFromStaging is set to run for a long time you can call InterruptCopyFromStaging to stop CopyFromStaging after the current batch is processed.
Fixed column
The StagingTempBadBatches table contains a column called Fixed. Occasionally, data issues or bugs may redirect rows to the StagingTempBadBatches table. Once these issues are identified and remedied, set Fixed to 1 and the rows are automatically reprocessed.
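Once the underlying issue is remedied, flag the affected rows for reprocessing. The WHERE clause here is illustrative; narrow it to only the rows you have actually fixed:

```sql
-- Mark bad-batch rows as fixed so they are reprocessed automatically
UPDATE StagingTempBadBatches
SET Fixed = 1
WHERE Fixed = 0
```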
By default, 100 rows are copied back into the Staging table on each CopyFromStaging cycle. You can change this by updating a row in the Config table.
An example of the T-SQL query:
UPDATE Config
SET BigIntValue = 100
WHERE ConfigId = 'BadRowRetrySize'