The AWS Elastic Load Balancer directs traffic to a specific PrivX application EC2 instance. Load balancing can be based on sticky sessions (enabled on the ELB) or on source IP (requires Nginx configuration changes). The load balancer tracks the status of the PrivX application servers; if it detects an anomaly, it requests the autoscaling group to terminate the instance.
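For the source-IP option, a minimal sketch of the Nginx side could look like the following. This is an illustration only: the upstream name and backend addresses are hypothetical, not PrivX defaults, and `ip_hash` is the standard Nginx directive for source-IP-based balancing.

```nginx
# Sketch only: ip_hash pins each client IP to one backend.
# Upstream name and server addresses are example placeholders.
upstream privx_backend {
    ip_hash;
    server 10.0.1.10:443;
    server 10.0.1.11:443;
}

server {
    listen 443 ssl;
    location / {
        proxy_pass https://privx_backend;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```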
The PrivX EC2 autoscaling group has been configured to keep at least 2 PrivX instances running. The number of instances can be adjusted according to load.
A PrivX application server consists of an Nginx reverse proxy and a number of PrivX microservices. The Nginx reverse proxy also serves the PrivX HTML5 UI static resources to requesting clients. The PrivX microservices offer REST APIs over HTTPS. The PrivX application servers store all persistent data in AWS RDS - once a PrivX application server has been configured, adding application nodes is just a matter of taking a snapshot of the server and deploying new instances from the snapshot.
The PrivX microservices use AWS Elasticache to sync state between themselves - the cache is used only to trigger updates, which are performed via REST calls.
The PrivX microservices persist data to AWS RDS. The RDS database engine should be PostgreSQL.
PrivX stores the audit trail for recorded SSH/RDP/HTTPS sessions in AWS EFS.
To set up the environment:
- Configure AWS RDS database for PrivX
- Configure AWS Elasticache for PrivX
- Create an EC2 autoscaling group for PrivX EC2 instances
- Create an AWS Elastic Load Balancer for PrivX
- Create an EC2 instance for PrivX (Amazon Linux, RHEL)
- Install PrivX and configure it to connect to the RDS and Elasticache instances created in the earlier steps
- Create AWS EFS for PrivX (NFS accessible from the PrivX EC2 instance) and mount it so that the mount persists across boots and is owned by the privx user
- Attach PrivX EC2 instance to the ELB and ensure that it works
- Take a snapshot of the PrivX EC2 instance and attach the snapshot to the autoscaling group. Set the minimum number of running instances for the autoscaling group.
- Terminate initial EC2 instance and observe the autoscaling group starting a new instance from the snapshot
- Configure the ELB to inform the autoscaling group of an instance malfunction (the ELB health check needs to poll a suitable PrivX health-check path)
For production environments, it is recommended to use CloudFormation or a similar infrastructure-as-code template to set up the environment.
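Assuming the AWS CLI is available, the setup steps above could be sketched roughly as follows. All resource names, instance classes, IDs, and sizes are illustrative placeholders rather than PrivX defaults, and the cache engine choice is an assumption; adapt everything to your environment:

```shell
# Illustrative sketch only - names, classes, IDs, and the health-check
# path are placeholders to adapt before use.

# 1. PostgreSQL database on RDS for persistent PrivX data
aws rds create-db-instance \
  --db-instance-identifier privx-db \
  --engine postgres \
  --db-instance-class db.m5.large \
  --allocated-storage 50 \
  --master-username privx \
  --master-user-password "$DB_PASSWORD"

# 2. Elasticache cluster used to trigger state updates between microservices
aws elasticache create-cache-cluster \
  --cache-cluster-id privx-cache \
  --engine redis \
  --cache-node-type cache.m5.large \
  --num-cache-nodes 1

# 3. EFS file system for the audit-trail storage
aws efs create-file-system --creation-token privx-trails

# 4. Target group whose health check the ELB uses; configure it to poll
#    a PrivX health-check path (see the PrivX documentation for the path)
aws elbv2 create-target-group \
  --name privx-tg \
  --protocol HTTPS --port 443 \
  --health-check-protocol HTTPS \
  --vpc-id "$VPC_ID"

# 5. Autoscaling group keeping at least 2 instances, with ELB health checks
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name privx-asg \
  --launch-template "LaunchTemplateId=$TEMPLATE_ID,Version=1" \
  --min-size 2 --max-size 4 \
  --target-group-arns "$TG_ARN" \
  --health-check-type ELB \
  --vpc-zone-identifier "$SUBNET_IDS"
```

Setting `--health-check-type ELB` is what makes the autoscaling group replace instances that fail the load balancer's health check, matching the behavior described above.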
Upgrading PrivX
- Detach instances from the ELB
- Set autoscaling group instance count to 1
- Transfer PrivX upgrade package to the remaining host or use PrivX repository
- Upgrade the host: run yum update PrivX
- Attach the instance to the ELB and verify that PrivX works
- Take a snapshot of the instance, attach the snapshot to autoscaling group
- PrivX is updated
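Under the assumption that the AWS CLI is in use and the group/target-group names match whatever was chosen at setup time, the upgrade steps above might be scripted along these lines (all identifiers are placeholders):

```shell
# Placeholder identifiers - set these for your environment.
ASG_NAME="privx-asg"
TG_ARN="$PRIVX_TARGET_GROUP_ARN"
INSTANCE_ID="$REMAINING_INSTANCE_ID"

# Scale the autoscaling group down to the single instance being upgraded
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name "$ASG_NAME" \
  --min-size 1 --desired-capacity 1

# On the remaining host: upgrade PrivX from the configured repository
sudo yum update PrivX

# Re-attach the upgraded instance to the ELB and verify PrivX responds
aws elbv2 register-targets \
  --target-group-arn "$TG_ARN" \
  --targets "Id=$INSTANCE_ID"

# Snapshot the upgraded instance for the autoscaling group to launch from
aws ec2 create-image --instance-id "$INSTANCE_ID" --name privx-upgraded
```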
Backup & restore
PrivX automatically creates full backups (certificates and configuration files) and stores them under /var/backups/privx. Ideally, this directory is mounted on AWS EFS.
- Transfer the backup directory from /var/backups/privx/hostname_yyyy-mm-dd-hhmm to a new PrivX instance
- Install PrivX: set the environment variable, then run yum install PrivX (do not run postinstall.sh afterwards)
- Ensure that the PrivX service is functional
- Take a snapshot of the instance and attach the snapshot to the autoscaling group
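A rough sketch of the restore flow on a new instance follows. The backup directory name pattern comes from the text above; the restore-related environment variable is deliberately left unnamed here because the original does not spell it out - consult the PrivX restore documentation for its name and value. Hostnames and paths are placeholders:

```shell
# Placeholder host and backup directory name; the restore environment
# variable is intentionally not named (it is not given in the source text).
BACKUP_DIR="hostname_yyyy-mm-dd-hhmm"   # pattern from /var/backups/privx

# Transfer the backup directory from the old host
scp -r "old-privx-host:/var/backups/privx/$BACKUP_DIR" /var/backups/privx/

# Set the restore-related environment variable, then install PrivX
# (do not run postinstall.sh afterwards)
yum install PrivX

# Verify the service is functional before snapshotting, e.g. via the UI
curl -ks https://localhost/ >/dev/null && echo "PrivX responds"
```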