Overview
To effectively administer Endpoint Privilege Management for Unix and Linux (EPM-UL), it is necessary to understand how the product works. A typical configuration consists of the following primary components:
- pbrun: Used for secured task submission
- pbmasterd: Used for security policy file processing
- pblocald: Used for task execution
- pblogd: Used for writing event logs and I/O logs
It is possible to install any or all of these components on a single machine, or to distribute them among different machines. For optimal security, the policy server hosts and log hosts should be separate machines that are isolated from normal activity.
Task requests
There are two types of task requests:
- Secured: Requests must undergo security validation processing by EPM-UL before they can be run.
- Unsecured: Do not undergo security validation processing. These should be tasks that are not potential threats to the system and therefore do not fall under a company’s security policy implementation. Unsecured tasks are handled by the operating system. EPM-UL is not involved in the processing of such tasks.
Secured task submission to SSH-managed devices - pbssh
Secured tasks can also be submitted through pbssh. pbssh is the Endpoint Privilege Management component used to access SSH-managed devices on which Endpoint Privilege Management is not installed, such as routers, firewalls, Windows devices, or Unix/Linux devices without an EPM-UL installation. pbssh connects to the target device using the SSH configuration.
Secured task submission and execution - pbrun
All secured tasks must be submitted through pbrun, the EPM-UL component that receives task requests. A separate pbrun process is started for each secured task request that is submitted. If the use of pbrun is not enforced for secured tasks, then a company’s security policy implementation could be compromised.
If the task request is accepted by the policy server, pbrun executes the task and logs pertinent task information to the EPM-UL event log.
Note
pbrun is part of the EPM-UL Client, which must be installed on any machine from which a user can submit a secured task request.
Policy file processing - pbmasterd
pbmasterd is responsible for applying the security rules (as defined in the EPM-UL policy files) that make up a company’s network security policy. It performs security verification processing to determine whether a request is accepted or rejected, based on the logic in the EPM-UL security policy files.
- If a request is rejected, then the result is logged and processing terminates.
- If a request is accepted, then it is immediately passed to pblocald for execution (or run directly by the client in local and optimized run modes).
If pblogd is used, then pbmasterd terminates when the request is passed to pblocald. A separate pbmasterd process is started for each secured task request that is submitted. If the pblogd component is not being used, then pbmasterd waits for the pblocald process to complete before terminating.
Note
During security verification processing, the first accept or reject condition that is met causes security policy file processing to immediately terminate. No further security verification processing is performed.
If pbmasterd recognizes that a command is to be run on the host that submitted the request, then pblocald is optimized out of the connection. The command is run directly under the control of the client (that is, pbrun, pbsh, or pbksh), along with all logging and services that would have otherwise been provided by pblocald.
Task execution - pblocald
pblocald executes task requests that have passed security verification processing (that is, requests that have been accepted by pbmasterd). After a task request is accepted, it is immediately passed from pbmasterd to pblocald in normal mode, or to pbrun, pbsh, or pbksh in local and optimized run modes. pblocald executes the task request as the user who is specified in the policy variable runuser, typically root or an administrative account, and transfers all task input and output information back to pbrun.
In addition, pblocald logs pertinent task information to the EPM-UL event log (using pbmasterd or pblogd, depending on how EPM-UL has been deployed). The run host can also record task keystroke information to an EPM-UL I/O log (through pbmasterd or pblogd, depending on how EPM-UL has been deployed). A separate pblocald process is started for each secured task request that is submitted.
Logging - pblogd
pblogd is an optional EPM-UL component that is responsible for writing event and I/O log records.
If pblogd is not installed, then pbmasterd writes log records directly to the appropriate log files rather than passing these records to pblogd. In addition, if pblogd is not installed, then pbmasterd must wait for the pblocald process to complete. If pblogd is used, then pbmasterd terminates after task execution starts, and pblocald sends its log records directly to pblogd.
Using pblogd optimizes EPM-UL processing by centralizing the writing of log records in a single, dedicated component and eliminating the need for the pbmasterd process to wait for task execution to complete.
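Log hosts are typically identified by the logservers setting in /etc/pb.settings (discussed later in this guide). A minimal, illustrative excerpt follows; the host names are hypothetical, and a real deployment may list several log hosts for failover:
logservers   loghost1.example.com loghost2.example.com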
Logging - message router (pblighttpd-svc)
In EPM-UL v10.1.0, a new Message Router service was introduced to streamline the processing of events and other important messages throughout the system. It allows a single log server to quickly accept, process, and store tens of thousands of events every second.
Step-by-step task processing
To make the following information more concise and easier to understand, this guide assumes:
- EPM-UL is installed on all of the machines.
- The network is functioning and there are sufficient resources (memory and disk space) to run the application and log what is required. Error processing in EPM-UL reports these problems when they occur.
This section describes the process that occurs when a task is submitted in EPM-UL, and indicates which modes use each part of the process.
There are three modes for EPM-UL:
- Normal Mode: All tasks are performed, including those that are run by pblocald.
- Optimized Run Mode: After pbmasterd has accepted a request, the specified task runs directly on the submit host, without invoking pblocald. Doing this enables the administrator to use pbmasterd to validate a command, log the commands that are started in the event log, and record an I/O log for the secured task. The optimized run mode also reconfirms the password, performs time-out processing, and logs the status.
- Local Mode: After pbmasterd has accepted a request, the specified task runs directly on the submit host, without invoking pblocald. This mode enables the administrator to use pbmasterd to authorize a command and log the accepted task. All other EPM-UL functionality is bypassed.
The following table summarizes the steps that are used for each of the three modes. An X represents a task that is processed by a specific mode, and N/A means that the task does not apply in the specified mode.
Process Task | Normal Mode | Optimized Run Mode | Local Mode |
---|---|---|---|
Secure task submitted | X | X | X |
Policy Server daemon starts | X | X | X |
Policy file processing | X | X | X |
Local daemon started | X | N/A | N/A |
Log daemon started | pblocald | pbrun | pbrun |
pbrun/pblocald reconnect | X | N/A | N/A |
runconfirmuser check | pblocald | pbrun | N/A |
Executable check | pblocald | pbrun | pbrun |
Secured task runs | X | X | X |
Time-out processing | X | X | N/A |
Secured task ends | X | X | X |
pblocald completes | X | N/A | N/A |
pblogd completes | Logs exit status and closes the I/O log | Logs exit status and closes the I/O log | Closes I/O log |
pbmasterd completes | X | X | X |
pbrun completes | X | X | X |
Task submitted (all modes)
The initial step is for a user to execute pbrun. This is done either from the command line as:
pbrun list
or from a shell script such as:
#!/bin/sh
/usr/local/bin/pbrun list
where list is the task that is being requested. pbrun checks the settings file and sends the request with other information from the submit host to a policy server daemon that is specified in the submitmasters setting.
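The submitmasters setting lives in the submit host's /etc/pb.settings file. A minimal, illustrative excerpt is shown below; the host names are hypothetical, and a real deployment may list several policy servers for failover:
submitmasters   policyserver1.example.com policyserver2.example.com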
Policy server daemon starts (all modes)
The policy server daemon (pbmasterd) listens for requests from pbrun. When a request arrives, the policy server daemon checks its settings file. The policy server host settings file may be different from the settings file on the submit host because they may be on different machines. Validation that pbrun is trying to connect is performed and the rest of the policy server processing continues.
If there is an error at any point in the settings file validation or the pbrun connect verification, then pbmasterd stops and, when possible, sends a message via the pbrun session to the user. Client host name validation checks are also performed.
Policy file processing (all modes)
The main action of the policy server daemon is to confirm that the user may run a request, and to modify or set values for the request. Values can be set in the policy file that affect how the policy server daemon runs.
The values that are set in the policy file are shown in the following table:
Policy Values | Description |
---|---|
eventlog | Specifies the file in which the events are logged. |
iolog | Identifies the file in which the I/O streams are logged. |
localmode | Deprecated in favor of Optimized Run Mode processing. This mechanism allowed execution on the local host without the use of pblocald, at the expense of several features that were not available. Optimized Run Mode processing enables all the features that localmode lacks, also without using pblocald. |
lognopassword | Specifies whether passwords should be logged. |
lognoreconnect | Identifies whether the log server should be allowed to run through pblocald or stay connected to pbmasterd, and whether pblocald should be allowed to connect to pbrun on the submit host or stay connected to pbmasterd. In Optimized Run Mode, this has no effect. |
noreconnect | Controls whether the policy server daemon should stay connected. |
If necessary as part of the processing, the policy server daemon communicates with the pbrun session to get further information from the user, such as passwords or input.
If the log daemon is used and the logmktemp() function is called, then pbmasterd starts the log daemon to create a log file on the log host. If the policy language variable lognoreconnect allows it, the log server reconnects to pblocald when the secured task is ready to run.
If the processing of the policy file reaches an accept statement, then pbmasterd tries to connect to pblocald on the run host.
If the processing of the policy file reaches a reject statement, then pbmasterd logs the result (possibly through the log server daemon) and terminates the request.
If the log daemon is being used, then pbmasterd tries to connect to the log daemon on the log host.
EPM-UL 8.0.2 adds a policytimeout mechanism to protect against policies that appear nonresponsive.
Note
As soon as an accept or reject statement executes, policy file processing stops. No further policy file processing takes place.
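To make the accept/reject flow concrete, the following is a minimal policy-language sketch, not a production policy; the user name, command, and I/O log path template are hypothetical:
# Allow one user to run one command as root, with keystroke (I/O) logging
if (user == "jdoe" && basename(command) == "backup.sh") {
    runuser = "root";
    iolog   = logmktemp("/var/log/pb/jdoe.XXXXXX");
    accept;
}
reject;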
Local daemon started (normal mode)
- The local daemon listens for requests from pbmasterd.
- When one arrives, it checks its settings file. The run host settings file may be different from the settings file on the policy server host because they can be on different machines.
- Validation that pbmasterd is trying to connect is performed and the rest of the local processing continues. The local daemon immediately determines whether it can accept requests from the policy server daemon by comparing the host to the acceptmasters line in the settings file.
- If there is an error at any point in the settings file validation or the verification that pbmasterd is trying to connect to the local daemon, then the process stops. When possible, a message is sent via the pbmasterd session to the pbrun session for the user. Policy server host name validation checks are also performed.
Log daemon started (all modes)
- The log daemon listens for requests from pbmasterd or pblocald.
- When one arrives, it checks its settings file. The log host settings file can be different from the settings file on the policy server host or run host because they can be on different machines.
- Validation that pbmasterd or pblocald is trying to connect is performed and the rest of the local processing continues.
- If there is an error at any point in the settings file validation or the verification that pbmasterd or pblocald is trying to connect, then the log daemon stops. When possible, a message is sent via the requesting session to the pbrun session for the user. pblocald starts the log daemon in normal mode; pbrun starts the log daemon in local mode and optimized run mode.
pbrun/pblocald reconnect (normal mode)
If pbmasterd does not need to stay in the middle of the connection between pbrun and pblocald, it instructs pbrun and pblocald to connect directly to each other. pbmasterd then exits.
pbmasterd removes itself when the following are all true:
- A log daemon is used.
- The noreconnect and lognoreconnect variables are false.
If these conditions are not met, then pbmasterd remains in the job stream and passes the data from pbrun to pblocald.
The only reason a policy server daemon would need to stay in the middle of a connection is that the policy server daemon is located between two subnets that do not normally allow traffic between them.
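Expressed in the policy language, the conditions above amount to the following minimal sketch, shown with the values that allow pbmasterd to drop out of the connection:
# Let pbrun and pblocald reconnect directly so pbmasterd can exit early
noreconnect    = false;
lognoreconnect = false;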
runconfirmuser check (normal mode and optimized run mode)
With all sessions now established, the pblocald session determines whether the runconfirmuser variable is set and requests the run host password for the runconfirmuser user from the pbrun session. If this request fails three times, then the pblocald session stops.
Executable check (all modes)
pblocald does some final checking before starting the actual command. If the runcksum or runcksumlist variable is set, pblocald determines whether the checksum of the command in runcommand matches the value in runcksum or runcksumlist. If the runmd5sum or runmd5sumlist variable is set, pblocald determines whether the MD5 checksum of the command in runcommand matches the value in runmd5sum or runmd5sumlist.
To log the checksum of the runcommand being compared against runcksum, runcksumlist, runmd5sum, or runmd5sumlist, use the policy variable logcksum.
These actions provide protection against viruses, Trojan horses, or other unintentional changes to the program file. pblocald also runs secure command checks. The final checking is done by pblocald for the normal mode and by pbrun in the optimized run mode and local mode.
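A hedged policy sketch of the checksum check follows; the checksum value is a hypothetical placeholder that would normally be generated from a known-good copy of the target binary:
# Compare the submitted binary against a recorded checksum before it runs
if (basename(runcommand) == "passwd") {
    runcksum = "1a2b3c4d";   # hypothetical checksum of a trusted /usr/bin/passwd
    logcksum = true;         # assumed boolean; records the computed checksum in the event log
}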
Secured task runs (normal mode)
When pblocald reaches this point, it can finally execute the command specified in the runcommand variable. pblocald first checks that runcommand points to an executable file. If the file is not found or cannot be executed, pblocald stops and an error is sent back to the pbrun session.
pblocald sets up the run environment as follows:
- The runtime environment to execute the command is established according to the values in the runenv list.
- The user that is specified in the runuser variable runs the command.
- The utmp entry is written with the runutmpuser variable value as the user.
- The syslog is updated.
- The group is the value of the rungroup variable.
- The secondary groups are the value of the rungroups variable.
- The arguments to the command are the values that are specified in the runargv variable. The current directory is the value that is specified in the runcwd variable.
- The umask is the value of the runumask variable.
- The nice priority is the value of the runnice variable.
- If the runchroot variable is set, then the top of the file system is set via chroot.
- The processing of HUP signals is set based on the value of the runbkgd variable.
- pblocald then starts the command.
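The environment described in this list is driven entirely by policy variables set during policy file processing. A minimal, illustrative sketch (all values are hypothetical):
# Shape the execution environment for the accepted task
runuser  = "root";
rungroup = "wheel";
runcwd   = "/";
runnice  = 10;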
Timeout processing (normal mode and optimized run mode)
If there is a mastertimeout, submittimeout, or runtimeout in effect (as specified in the policy or overridden by a client’s runtimeout keyword in the settings), then the session terminates if there is no input or output activity within the specified number of seconds. These timeouts are effective only after the policy has accepted a request, during the lifetime of the secured task.
The EPM-UL 8.0.2 policytimeout() procedure provides a timeout mechanism that is effective during policy processing (before an accept or reject). This protects against pbmasterd/policy processing that appears nonresponsive while waiting for user input, infinite loops within the policy, and so on.
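A hedged policy sketch of both mechanisms; the values (in seconds) are illustrative, and policytimeout() is assumed here to take the number of seconds to wait:
runtimeout = 300;    # end the session after 5 minutes without input or output
policytimeout(60);   # abort policy processing itself if it runs longer than 60 seconds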
Secured task ends (all modes)
At some point the task ends, because the command finished, the user interrupted it by pressing CTRL+C, or it was exited in some other way.
pblocald completes (normal mode)
pblocald recognizes task completion and stops processing. It captures the reason for the completion (such as a signal or an exit code) and sends it for logging as the exitstatus variable. The exittime and exitdate are also logged. In normal mode, pblocald completes.
pblogd completes (all modes)
If a log server is used, then the I/O log is closed. For normal mode and optimized run mode, the exit status of the secured task is also logged.
pbmasterd completes (normal mode only)
If the pbmasterd session is still running, then it shuts down. The pblogd session also shuts down.
pbrun completes (normal mode and optimized run mode)
pbrun displays the exitstatus string of the secured task if the task ends with an error or abnormal exit.
The exit status of the secured task is also returned in the pbrun exit status value.
Note
For more information, see your Unix/Linux man pages.
Normal mode processing
In normal mode:
- The machine from which a task is submitted is the submit host.
- The machine on which security policy file processing takes place is the policy server host.
- The machine on which a task is executed is referred to as the run host.
- The machine on which event log records and I/O log records are written is referred to as the log host. Use of the log server daemon pblogd is optional, but highly recommended.
EPM-UL workflow
Optimized run mode processing
- Version 3.5 and earlier: Optimized run mode not available.
- Version 4.0 and later: Optimized run mode available.
In optimized run mode, after pbmasterd has accepted a request, the specified task runs directly on the submit host, without invoking pblocald. This feature enables the administrator to use pbmasterd to validate a command, log the commands that were started in the event log, and log the I/O streams for the secured task. The optimized run mode also reconfirms the password, performs time-out processing, and logs the status.
Optimized run mode availability
Optimized run mode is enabled when all of the following conditions are met:
- The policy server host is configured to use a log server.
- The values of the submithost and runhost variables are equal.
- pbrun is invoked without the --disable_optimized_runmode command line option.
- pbmasterd is invoked without the --disable_optimized_runmode command line option.
- The settings file on the submit host has the clientdisableoptimizedrunmode setting set to no.
- The settings file on the policy server host has the masterdisableoptimizedrunmode setting set to no.
- The policy sets the runoptimizedrunmode variable to true.
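For example, the last three conditions could be satisfied with the following illustrative fragments; the first two lines belong in /etc/pb.settings on the submit host and policy server host respectively, and the last line belongs in the policy:
clientdisableoptimizedrunmode  no
masterdisableoptimizedrunmode  no
runoptimizedrunmode = true;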
Local mode processing
Deprecated in favor of Optimized Run Mode.
In local mode, after pbmasterd has accepted a request, the specified task runs directly on the submit host, without invoking pblocald and without using optimized run mode.
This feature enables the administrator to use pbmasterd to authorize a command and to log the accepted task in the event log. However, unlike optimized run mode, this mode does not perform timeout processing, log the exit status of the accepted task, or support Advanced Control and Audit (ACA). In local mode, the pbrun process is replaced by the secured task, unless I/O logging is on.
With the introduction of optimized run mode, local mode no longer offers a benefit: optimized run mode also allows the task to run without invoking pblocald (when the run host is the submit host), and it additionally supports time-out processing, I/O logging, exit status logging, and ACA.
Local mode processing can be controlled in the /etc/pb.settings file (allowlocalmode setting) or in the policy (localmode and runlocalmode variables).
Local mode availability
Deprecated in favor of Optimized Run Mode.
Local mode is enabled when the allowlocalmode setting on the submit host, policy server host, and run host is set to yes.
Note
pbrun must be invoked with the -l command line option, or the policy must set the runlocalmode variable to true.
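Illustrative, hedged fragments: the setting belongs in /etc/pb.settings on the submit host, policy server host, and run host, and the command shows the explicit client option (the command being run is arbitrary):
allowlocalmode   yes
pbrun -l whoami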
Local mode effects
Local mode does the same processing on the submit host and the policy server host, including the logging of the accepted request. However, instead of the policy server daemon requesting pblocald to run the task, the pbrun session is replaced with the task. pblocald is not run when using local mode. pblogd may be run to record the Accept event.
In local mode, the accepted task runs on the submithost. Local mode fails with an error if a different runhost is specified.
Local mode limitations
Because the Endpoint Privilege Management for Unix and Linux programs are not active when a program runs with local mode, the following limitations exist:
- Exit status of the job is not logged.
- runtimeout and submittimeout cannot be processed.
- Keystroke actions cannot be processed.
- The setkeystrokeaction function is not supported in local mode.
- The program specified by iologcloseaction() policy procedure is not executed.
- ACA is not compatible with local mode.
Cached policy and logging
Cached policies and logging allow users who are temporarily disconnected to continue to work. When your computer gets disconnected from the corporate network (or corporate host/client loses network connectivity to policy/log servers), pbrun, pbsh, and pbksh are no longer able to connect either to a policy server or to a log server.
EPM-UL v23.1 now offers an option to cache the role-based policy (RBP) stored on the client, and store EPM-UL event and iologs in cache on the clients, so clients can continue to work when network connectivity to the policy and log server is lost. Once the connectivity is re-established, the client resumes using the server, and the logs are synchronized back to the log server. Moreover, if the server supports Elasticsearch delivery, then the synchronized records are forwarded to Elasticsearch.
EPM-UL v23.1 client hosts support policy and log caching only on the x86-64 Linux platform. Any supported platform can serve as the role-based policy server for the cached policy. Cached policy clients must be one of:
- RHEL7+
- Ubuntu 18+
- Suse 12+
- Debian 9+
Installation and configuration of the policy caching feature on a host is done by either pbinstall or the EPM-UL Linux package installer. To protect the cached policy on client host side, the role-based policy database is automatically encrypted using the new cachedrbpencryption setting which defaults to the AES-256 encryption method. The client uses its copy of the public policy certificate to digitally verify the signature of any cached policies that are fetched.
The caching capability is optional, and first needs to be allowed on the Policy Servers. When Allow caching is enabled on the policy servers, a client on Linux connecting to this policy server can be installed to Enable policy and log caching. The Client Registration feature is highly recommended when installing the Linux client host, since the policy server hostname is needed at install time to determine if Allow caching is enabled on the policy server.
When network connectivity is not available, the event and IO logs stored on the client are also encrypted. Event log data is stored as encrypted write queue files using the new cachedwqencryption keyword set on the client, while IO log data is encrypted using the existing iologencryption keyword also set on the client. If these keywords are empty or set to none, then a default encryption scheme is selected.
After connectivity to the log server is re-established, the scheduler tasks running on the computer (pblighttpd-svc #mon and #sched) periodically (at the interval set by the new cachedforwardinterval keyword) transfer back the event and IO logs that were stored on the computer while the log server was unreachable. The scheduler also retrieves the cached policy from the policy server and stores it until connectivity becomes unavailable and it is needed. The new keyword cachedpolicylimitdays limits the number of days a client can remain disconnected and continue to use the cached policy.
Once the connectivity is re-established, the last update date/time of record for the client in the license database will be updated. If the client is retired, the cached policy is removed from the client.
The log server to which cached event and IO logs are forwarded is based on the logservers setting in the computer’s /etc/pb.settings file. Some logserver-related variables recorded in event and IO log headers that would be populated from pblogd are instead written by log caching-related processes on the logserver. These include eventlog, iolog_list, logpid, loghostip, loghostname, logserver, logserverlocale, and logserver_utcoffset. Finally, the logserver’s log caching process will write logkeystroke_utc for keystroke events.
Only role-based policy is supported when caching is enabled. The script-based policy is currently unsupported. When the Allow Caching feature is enabled, the install automatically enables role-based policy. If role-based policy is set to no, the caching feature will not be functional. Since this feature needs to work when there is no network connectivity, if the role-based policy contains external queries requiring network connectivity, those will not work.
The rbptransactions keyword is also enabled by default. This allows the tracking of the RBP changes and is used to show which version of the RBP is used on the clients that have caching enabled in the eventlog records.
If rbptransactions is disabled, the eventlog will show that cached policy was used, but will not show the policy version. There is also a pbdbutil command on the policy server, that shows the clients and the version of the policy they are using.
Because this feature must function when there is no network connectivity, pbrun -h and pbssh -h are not supported.
RNS also requires network connectivity and is therefore not supported.
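Pulling the caching-related keywords mentioned above into a single illustrative /etc/pb.settings excerpt; the values, and the unit assumed for cachedforwardinterval, are hypothetical:
# encryption for the cached role-based policy database
cachedrbpencryption    aes-256
# encryption for cached event log write queue files
cachedwqencryption     aes-256
# interval for forwarding cached logs once connectivity returns
cachedforwardinterval  5
# maximum days a disconnected client may keep using the cached policy
cachedpolicylimitdays  14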
EPM virtualization
- Cost-effective solution for consistent granular privilege identity management across guest operating systems as well as hypervisor hosts.
- Provides granular delegation of administrative privileges on virtual guest and host hypervisors, including detailed and flexible reporting with keystroke logging of administrative actions, for a secure and compliant virtualized datacenter environment.
- Enables organizations that move to virtualized platforms to control administrative access to the hypervisor/VMM layer while still realizing all virtualization cost efficiencies. Administrative tools prevent the virtualization layer from being compromised, which would otherwise pose significant security risks to all hosted workloads.
- Programmable role-constraint mechanisms enforce segregation of duties for users, and virtual platform-specific, cost-effective deployment capabilities enable secure datacenter virtualization.
The following diagram shows how Endpoint Privilege Management virtualization works.
Features
The features of virtualization include:
- Automated workflows for policy creation and change management
- Granular delegation of administrative privileges
- Detailed and flexible reporting including keystroke logging of administrative activities
- Two-click entitlement reports
- Programmable role-constraint mechanisms for segregation of duties
- Secures virtual guest and host hypervisors
- Supports VMware ESX, Solaris Zones, AIX WPAR, and IBM z/VM
EPM-UL and AD Bridge
Starting in v7.0, you can integrate BeyondTrust's Endpoint Privilege Management for Unix and Linux with AD Bridge.
The AD Bridge Enterprise tools on Windows include:
- A management console that supports a number of plug-ins for performing various tasks.
- A report plug-in for viewing configuration and event related queries. EPM-UL has dedicated reports for the various operations that it performs.
Event log central collection
AD Bridge features:
- A database-centric reporting architecture that enables event collection from multiple devices and the ability to report this data from a central location using plug-ins.
- The AD Bridge service, BTCollector, collects events. The collector machines aggregate all events in an enterprise-wide MS SQL Server database.
The following EPM-UL events can be collected and queried using the AD Bridge collectors and report plug-in: Accept, Reject, Finish, and Keystroke Action.
EPM-UL health check
Starting with v7.0, Endpoint Privilege Management for Unix and Linux:
- Sends events to AD Bridge collectors based on the responsiveness of policy server hosts, log hosts, and pblocald.
- The clients pbrun, pbsh, pbksh, and pbssh optionally report a new failover event every time a policy server host or log host fails to respond in a timely manner.
This feature is closely tied to the current Endpoint Privilege Management for Unix and Linux failover mechanism. In this integration, failover events are written to syslog and the AD Bridge event log database, when:
- Any policy server host fails to respond within the number of seconds specified by the masterdelay setting.
- Any log host fails to respond within the number of seconds specified by the logserverdelay setting.
- pbmasterd reports events any time pblocald fails to respond.
This feature also allows for the optional recording of successful connection events.
The EPM Operations Dashboard tool provides a view on key metrics that an administrator can configure to show green, yellow, and red status indicators depending on user-defined thresholds.
Integration process
The AD Bridge agent must be installed on these Endpoint Privilege Management for Unix and Linux machines:
- On the policy server host and log host computers to send the event log records (Accept, Reject, Finish, and Keystroke Actions events) and the health event log records (related to pblocald) to AD Bridge.
- On the client computers (where pbrun, pbksh or pbsh, and pbssh are installed), policy server host, and run host (where pblocald is installed), to send the health event log records related to the policy server host and log host to AD Bridge.
To send event logs to AD Bridge, set the following in the pb.settings file:
- sharedlibpbisdependencies
- pbis_event_logging
To send event records about the health of the policy server host, log host, and pblocald, set the following in the pb.settings file:
- sharedlibpbisdependencies
- pbis_log_failover
- pbis_log_connect_success
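A hedged example of how these keywords might look in pb.settings; the shared library path and the yes/no values are illustrative assumptions, not documented defaults:
# location of the AD Bridge shared libraries (path is an assumption)
sharedlibpbisdependencies  /opt/pbis/lib64
pbis_event_logging         yes
pbis_log_failover          yes
pbis_log_connect_success   no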
Solr indexing and search
Note
As of version 23.1, Solr is deprecated. EPM-UL no longer supports installing Solr, but features that use an existing Solr installation will continue to work.
There are separate tar files for Solr installation. Each log server and policy server host is able to communicate with a Solr server and submit I/O log output data for indexing. BeyondInsight and BIUL (BeyondInsight for Unix & Linux) provide a search GUI, allowing users to search indexed I/O logs using a selected set of variables, and also to search the content of I/O log sessions using queries such as this AND that AND NOT other OR somethingelse.
For each I/O log file, the result of pbreplay -O output of the I/O log file is sent to Solr to be indexed. Some of the event log variables in the header of the I/O log are indexed as well. These variables are:
- user
- runuser
- runhost
- runcommand
- runargv
The I/O log file name is also indexed, as well as the start and end time of the I/O log session.
You can add user-defined eventlog variables (defined in the policy) to the list of variables to be indexed by setting Solrvariables in pb.settings to the list of user variables defined in the policy. These variables must be named _pbul.
The result displayed contains a path to the actual I/O log file, which can then be replayed using Endpoint Privilege Management for Unix and Linux GUI (this requires Endpoint Privilege Management GUI to be installed on the log server and policy server hosts where the I/O log files reside).
If I/O log indexing with Solr is enabled, the Solr index is updated when I/O logs get archived.
If a problem occurs while trying to contact the Solr server (broken connection, miscellaneous errors, etc.), an appropriate error is logged in the diagnostics log file, and the unsent I/O log file name is saved to be forwarded to the Solr server at a later time.
Endpoint Privilege Management periodically checks to see if there are events that are outstanding, and are older than the autofwdtime setting. If conditions are met, it launches the pbreplay admin binary to forward the I/O log data to the Solr server for indexing. The path where pbreplay resides is specified by the setting keyword pbadminpath.
Starting with Endpoint Privilege Management for Unix and Linux v10.0.0, a queue mechanism is used to process I/O logs for Solr, while limiting the number of indexing processes. This mechanism is shared by the feature I/O Log Close action.
I/O log close action
The EPM policy procedure iologcloseaction allows the policy to specify a program that is executed for each completed iolog.
This mechanism allows the I/O log to be processed in some way determined by the specified program. For example, Endpoint Privilege Management includes a Perl script that sends ACA data from the I/O log to Splunk.
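A minimal policy-language sketch of wiring up a close action; the I/O log path template is hypothetical, the script path matches the Splunk example described later in this guide, and the call is assumed to take the program to execute as its argument:
iolog = logmktemp("/var/log/pb/iolog.XXXXXX");
iologcloseaction("/opt/pbul/scripts/closeactionsplunk.pl");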
The iologcloseaction mechanism and the Solr indexing mechanism share a queue that allows pbconfigd to control and monitor pbreplay processes, which in turn perform the Solr indexing and iologcloseaction actions. This mechanism uses a combination of fast writes to queue files and a database. Each I/O logging process writes the iolog path and filename to the queue, as well as periodic heartbeat information to inform the queue mechanism that the I/O log is still being generated. When an iolog is closed (normally), that information is written to the queue as well. pbconfigd runs a scheduled task that transfers data from the I/O Log Action queue to both Solr and any specified I/O Log Close Action scripts; once an entry has been successfully processed, it is deleted. The scheduler automatically tries to resend any outstanding entries if the Solr service is down or unavailable.
Both Solr indexing and iologcloseaction are processed by pbreplay.
- pbconfigd runs a scheduled task that monitors the pbreplay processes handling Solr indexing and/or iologcloseaction.
- The number of allowed pbreplay processes is configured with the iologactionmaxprocs keyword.
- pbreplay processes are launched as needed to process the database queue.
- The iologactionretry keyword controls the number of retries to acquire a database lock for the internal database queue operations.
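An illustrative pb.settings excerpt for the queue tuning keywords above; the values are hypothetical:
iologactionmaxprocs  4
iologactionretry     3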
Splunk integration
EPM can send accept and reject event data to Splunk via syslog, using the syslogsession_start_format and syslog_reject_format settings.
EPM-UL 10.0.1 adds a new keyword syslogsession_finished_format_logserver, which adds exit status data, and operates from the log server (as opposed to the syslogsession_finished_format keyword that operates from each runhost). Both syslog and syslogsessions must be set to yes to enable those keywords. The syslog keyword needs to be configured to send data to Splunk.
Note
Various syslog implementations have data rate limiting and must be configured accordingly.
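Before the formatting keywords take effect, syslog delivery itself must be enabled on those hosts. A minimal /etc/pb.settings excerpt reflecting the requirement above:
syslog          yes
syslogsessions  yes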
EPM can send ACA data to Splunk, via the iologcloseaction() procedure defined in the policy language. This makes use of the Perl script closeactionsplunk.pl, normally located in /opt/pbul/scripts/.
Note
The use of this Perl script may require additional Perl modules to be installed. This script requires an Endpoint Privilege Management for Unix and Linux REST App ID and App Key to be configured near the top of the script.
Example Splunk app
Endpoint Privilege Management for Unix and Linux has an example Splunk app available from the Splunk website.
Once the Splunk App is installed in Splunk, if Splunk is to be configured to accept syslog data, do the following within the Splunk GUI:
- Click Settings > Data Inputs > UDP + Add New.
- Enter port 514, then click Next.
- Click App Context.
- Select BeyondTrust App for Splunk (App-BeyondTrust).
- Click Select Source Type.
- Enter the first few characters of beyondtrust:syslog. The search box should find beyondtrust:syslog.
- Select that, click Review, and then Submit.
- Click Settings > Advanced Search > Search Macros.
- Select the app: BeyondTrust App for Splunk.
- Verify that the macro named get_beyondtrust_index_sourcetype has the Definition: (index="main" sourcetype="beyondtrust:syslog").
To send Reject and Finish event data to Splunk (in a format that the Splunk app recognizes), set the following syslog formatting keywords in /etc/pb.settings on the policy servers and log servers:
syslog_reject_format "BeyondTrust_PBUL_REJECT_Event: Time_Zone='%timezone%'; Request_Date='%date%'; Request_Time='%time%'; Request_End_Date='%date%'; Request_End_Time='%time%'; Submit_User='%user%'; Submit_Host='%submithost%'; Submit_Host_IP='%submithostip%'; Run_User='None'; Run_Host='None'; Run_Host_IP='No IP Address'; Current_Working_Directory='%cwd%'; Requested_Command='%command%'; Requested_Arguments='%argv%'; Command_Executed='None'; Command_Arguments='%runargv%'; ACA_Event='False'; ACA_Date='NA'; ACA_Time='NA'; ACA_Authorization='NA'; ACA_CWD='NA'; ACA_Action='NA'; ACA_Target='NA'; ACA_Arguments='NA'; Log_Servers='%logservers%'; Session_Recording_File='Session Not Recorded'; Risk_Rating='%pbrisklevel%'; Authorizing_Server='%masterhost%'; Event_Status='Reject'; Exit_Status='%exitstatus%'"
syslogsession_finished_format_logserver "BeyondTrust_PBUL_ACCEPT_Event: Time_Zone='%timezone%'; Request_Date='%date%'; Request_Time='%time%'; Request_End_Date='%exitdate%'; Request_End_Time='%exittime%'; Submit_User='%user%'; Submit_Host='%submithost%'; Submit_Host_IP='%submithostip%'; Run_User='%runuser%'; Run_Host='%runhost%'; Run_Host_IP='%runhostip%'; Current_Working_Directory='%cwd%'; Requested_Command='%command%'; Requested_Arguments='%argv%'; Command_Executed='%runcommand%'; Command_Arguments='%runargv%'; ACA_Event='False'; ACA_Date='NA'; ACA_Time='NA'; ACA_Authorization='NA'; ACA_CWD='NA'; ACA_Action='NA'; ACA_Target='NA'; ACA_Arguments='NA'; Log_Servers='%pblogdnodename%'; Session_Recording_File='%iolog_list%'; Risk_Rating='%pbrisklevel%'; Authorizing_Server='%masterhost%'; Event_Status='Accept'; Exit_Status='%exitstatus%'"
The log servers require the -r option to syslog rejects.
For example, on RHEL 6.x, edit /etc/xinetd.d/pblogd, changing server_args to include the -r, then restart xinetd.
Example
server_args = -r -i xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Similarly, for RHEL 7.x, edit /etc/systemd/system/[email protected] so that ExecStart includes the -r, and restart the pblogd service.
Example
ExecStart=-/usr/sbin/pblogd -r -i xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
To send ACA data to Splunk (in a format that the Splunk app recognizes), the policy must specify an I/O log and enable session history as well as specify an iologcloseaction to run the Perl script. The example Endpoint Privilege Management for Unix and Linux policy /opt/pbul/policies/pbul_functions.conf includes example Procedure SplunkRole() to accomplish all the necessary tasks (to enable this procedure, set EnableSplunkRole = true in /opt/pbul/policies/pbul_policy.conf).
Note
Perl modules such as perl-JSON and perl-Sys-Syslog may need to be installed.
Create an App ID and App Key for the Splunk script on the log server:
pbadmin --rest -g SPLUNK-DATA -m SplunkDataAppID
Then edit the /opt/pbul/scripts/closeactionsplunk.pl script on the log server(s) and change the configurable items appropriately:
my $pbr_appid = "REPLACE-ME";
my $pbr_appkey = "REPLACE-ME";
In addition to editing the App ID and App Key, several other edits may be necessary:
The closeactionsplunk.pl script currently uses the auth syslog facility. Depending on the log server OS, this may need to be changed to authpriv in closeactionsplunk.pl, or auth may need to be configured in addition to authpriv in /etc/syslog.conf (rsyslog.conf, etc.).
The closeactionsplunk.pl script uses /usr/sbin/pbrestcall internally. This works for installations without a prefix or suffix. If a prefix/suffix installation is used, edit the script to use the appropriate prefix/suffix for pbrestcall. The closeactionsplunk.pl script uses the default rest port (24351), which may need to be changed depending on the actual port used. That port number currently appears in the line:
my $pbr_url = "https://localhost:24351/REST";
Note
The Splunk app can be located at https://splunkbase.splunk.com/app/4017/ or from within the Splunk GUI under Apps > Find More Apps.
REST API
A REST API has been developed for EPM-UL to allow other software to configure, customize, and retrieve data from EPM-UL. The API is web based and uses industry-standard components, connectors, and data elements within a distributed and secure enterprise environment. The software is installed on the policy server, log server, or run/submit hosts, alongside a suitable HTTP service (one that supports FastCGI) that provides the communication between the client and the REST services.
The REST API provides a RESTful interface for product settings, policy configuration, and I/O log retrieval. The REST API can be used with EPM-UL v7.1.0 and later.
Multi-byte character set support
EPM-UL uses the locale settings on the host operating systems to support UTF-8 multi-byte character sets in policy files, I/O logs, and installation scripts. To correctly use EPM-UL in a multi-byte character set environment, you must ensure the following:
- All EPM-UL hosts (policy server host, log host, run host, submit host, and so on) have their locale settings correctly configured to the same locale.
- All processes that start at boot time or that are started by inetd or xinetd inherit the locale settings.
Note
UTF-8 multi-byte character sets are not yet supported in the following Endpoint Privilege Management for Unix and Linux components:
- shells (pbsh, pbksh)
- utilities (pbvi, pbnvi, pbless, pbmg, pbumacs)
- browser interface (pbgui)
Note
If the environment variable LANG, or one of the LC_xxxx environment variables, is set to an invalid value, Endpoint Privilege Management for Unix and Linux components do not report an error; they set LANG to C. You must ensure LANG is set correctly, or, if it is not, ensure that the other Endpoint Privilege Management for Unix and Linux components (policy server, log server, run host, and submit host) are also using C or a single-byte character set.
Manage locale data and virtual memory usage on Red Hat Enterprise Linux
On RHEL 6 and 7, all available locales are stored by default in the /usr/lib/locale/locale-archive file. This archive file, whose size could be upwards of 100MB, is provided by the glibc-common package.
When Endpoint Privilege Management for Unix and Linux binaries run on an RHEL 6/7 machine and set their locale based on the system settings of that host, the locale-archive file is mapped into their process memory space. This increases the virtual memory usage of each EPM-UL binary by the physical size of the locale-archive file, which could be around 100MB.
If this increase in EPM-UL virtual memory usage is a concern on your system, it is possible to reduce the physical size of locale-archive. This can be done by discarding unnecessary locales from locale-archive and rebuilding it using localedef and build-locale-archive commands. However, be mindful of caveats and risks that arise when you do this, such as:
- Your customized file could be overwritten whenever you update glibc.
- Other users on that host who try to use a locale that was removed will encounter errors.
RHEL 8 and later versions have individual langpack packages for each language, and you can choose to install a minimal set of locales during system configuration. The issue with the increased virtual memory usage of EPM-UL binaries might still occur if the glibc-all-langpacks package is installed.
Note
For more information about localedef and build-locale-archive commands, refer to Red Hat Enterprise Linux product documentation.
PAM to RADIUS authentication module
Starting in v8.5, EPM-UL includes a PAM module (pam_radius_auth) to support authentication against a configured RADIUS server. The module allows EPM-UL to act as a RADIUS client for authentication and accounting requests.
You must have a RADIUS server already installed and configured before using this module. Your RADIUS server must also have the EPM-UL host requesting authentication already defined as a RADIUS client.
To configure EPM-UL to use pam_radius_auth, perform the following steps.
- Locate the PAM to RADIUS Authentication Module:
Upon installation, the PAM module (pam_radius_auth) can be found in /usr/lib/beyondtrust/pb. It may be copied to a custom location or the system’s default PAM module directory (for example, /lib/security or/usr/lib/security).
- Configure the PAM configuration to use pam_radius_auth:
Configure a PAM configuration file for pam_radius_auth that defines a service stack using the pam_radius_auth module. For most Unix operating systems, it can be added to /etc/pam.conf. On Linux, it is a separate file in the /etc/pam.d directory. The service name defined here may be used in the PAM-related Endpoint Privilege Management for Unix and Linux settings keywords, policy functions, and variables.
Example
/etc/pam.d/pbul_pam_radius:
#task control module
auth required pam_radius_auth.so
account required pam_radius_auth.so
password required pam_radius_auth.so
- Create/locate the pam_radius_auth configuration file:
The pam_radius_auth configuration file identifies the RADIUS server(s) that performs the authentication. By default, the pam_radius_auth configuration file is /etc/raddb/server. You can use a different path/filename and use the module option field in the PAM config file to specify the location:
Example
/etc/pam.d/pbul_pam_radius:
auth required pam_radius_auth.so conf=<filepathname>
- Set up the pam_radius_auth configuration file:
Edit the pam_radius_auth configuration file and add a line that represents your RADIUS server using this format:
server[:port] shared_secret [timeout]
Name | Description |
---|---|
server | Required. RADIUS server name or IP address. |
port | Optional. Specify the port name or number if it differs from the RADIUS port defined in /etc/services. |
shared_secret | Required. The authentication key defined in the client configuration file for this host on the RADIUS server. |
timeout | Optional. The number of seconds the module waits before deciding that the server has failed to respond. The default timeout is 3 seconds. |
Example
216.27.61.130:1812 secretCnz9CkUtIeHqtCya89LzPTJEq0VnLCNA2SB9KWhIoSnC 10
- Set up EPM-UL to use the pam_radius_auth module.
Note
For more information on using the services defined here, see Pluggable Authentication Modules.
Component, directory, and file locations
Note
For the locations of the Endpoint Privilege Management for Unix and Linux components, directories, and files, along with other changes and post-installation instructions, see the EPM-UL Installation Guide.
Sudo wrapper
After EPM-UL is installed and its clients deployed throughout the enterprise, you can ideally start using pbrun instead of sudo to request secured tasks. However, you might need time to modify preexisting scripts or become accustomed to typing pbrun.
On Linux x86-64 systems, administrators have the option to install and configure a sudo wrapper, a Perl script that translates sudo options into pbrun options and uses pbrun to execute the requested command. This way, users can continue typing sudo, but pbrun is used to elevate privileges.
Important
Consider the following before installing the sudo wrapper:
- The sudo wrapper is currently supported on Linux x86 64-bit systems only.
- The sudo wrapper and its installation do not touch the preexisting sudoers file. The system administrator must migrate the rules from sudoers to the EPM-UL policy before installing the sudo wrapper.
- Many of sudo's switches need to be implemented in the policy of EPM-UL. This modified policy must be in place prior to installing the sudo wrapper to have those options available.
Packaging
The Perl script pbsudo-wrapper.pl is added to the bin directory in the TAR file.
Starting with EPM-UL version 22.1, the sudo wrapper is available only with the Linux distribution: pmul_linux.x86-64.
Install details
pbinstall
The pbinstall program has a new -O switch to install the sudo wrapper. The Linux host where pbinstall is run must already have an unprefixed/unsuffixed pbrun installed and configured. Before attempting to install the sudo wrapper, you must already have an updated Endpoint Privilege Management for Unix and Linux policy in place that contains the important prerequisites mentioned in the notes above. To ensure that the sudo wrapper is installed in the correct environment, the -O switch is purposely exclusive and cannot be combined with other pbinstall options.
When installing the sudo wrapper, pbinstall locates the actual sudo binary and renames it to a backup name (with the suffix .orig), after which, the Perl script pbsudo-wrapper.pl is copied from the distribution to the same location and renamed sudo.
pbuninstall
The pbuninstall program has a new -O switch to manually uninstall the sudo wrapper but leave Endpoint Privilege Management for Unix and Linux components intact.
When uninstalling the sudo wrapper, pbuninstall locates the backup sudo binary (suffixed with .orig) and renames it back to its regular name.
Demo policy files
The default policy files /opt/pbul/policies/pbul_policy.conf and /opt/pbul/policies/pbul_functions.conf contain sample instructions that define a Sudo role. This Sudo role is disabled by default, but it illustrates how you can craft a policy to support the sudo wrapper options.
/opt/pbul/policies/pbul_policy.conf:
# This enables "Sudo role", which allows root (or any user in SudoUsers) to run any command on the current host (or any host in SudoHosts)
# By default, this role is disabled. To enable it, set EnableSudoRole to true below.
#
EnableSudoRole = false;
SudoUsers = {"root"};
SudoHosts = {submithost, TargetSubmitHostShortName};
SudoRole();
/opt/pbul/policies/pbul_functions.conf:
## Procedure SudoRole:
## If 'EnableSudoRole' is enabled, it allows any user in SudoUsers list to run any command on hosts in SudoHosts
##
procedure SudoRole()
{
if ( EnableSudoRole && user in SudoUsers && (runhost in SudoHosts || TargetRunHostShortName in SudoHosts) )
{
SetRunEnv("root", false);
if (getenv("SUDOLOGIN") == "true") {
setenv("SHELL", "!!!");
setenv("HOME", "!~!");
runcwd = "!~!";
runargv[0] = "-" + basename(getenv("SHELL","/bin/sh"));
unsetenv("SUDOLOGIN");
unsetenv("SUDOUSERSHELL");
}
if (getenv("SUDOPRESERVE") == "true") {
setenv("USER", runuser);
setenv("USERNAME", runuser);
setenv("LOGNAME", runuser);
unsetenv("SUDOPRESERVE");
} else {
#runcwd = "!~!";
#setenv("SHELL", "!!!");
#setenv("HOME", "!~!");
setenv("USER", runuser);
setenv("USERNAME", runuser);
setenv("LOGNAME", runuser);
setenv("PWD", runcwd);
setenv("PATH", "/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/opt/pbis/bin");
keepenv("SHELL", "HOME", "USER", "USERNAME", "LOGNAME", "PWD", "PATH",
"TERM", "DISPLAY", "SUDO_GID", "SUDO_UID", "SUDO_USER",
"SUDO_COMMAND");
}
accept;
}
}