Deploy Prebuilt NGINX Agent Image

Background

tCell agents are designed to be easy to install and to have a low footprint, so deploying them in applications is generally fast. However, on some systems, such as a blog hosted on WordPress, modifying the production application at all is impossible. Rapid7 tCell has a simple solution for this use case: a preconfigured NGINX server with the tCell NGINX agent deployed, running inside a Docker container. This tutorial provides the end-to-end steps needed to implement it.

Here is a sample customer deployment with existing application:

example deployment with existing app

The new deployment will be as follows:

example new deployment

Prerequisites

The deployment uses a reverse-proxy-based configuration, as explained at https://linuxize.com/post/nginx-reverse-proxy/.

We assume the nginx.conf file is located at /host/path/nginx.conf for the purposes of the tutorial.
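A minimal nginx.conf for this reverse-proxy setup might look like the following sketch. The server name app.example.com and the backend address 10.0.0.5:8080 are placeholders for your application's actual hostname and your legacy backend's actual address; the load_module line is the one added in the Steps below.

```nginx
# Minimal reverse-proxy sketch; hostname and backend address are placeholders.
user nginx;
worker_processes auto;

# tCell agent module (added in Step 1 below).
load_module modules/ngx_http_tcell_agent_module.so;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name app.example.com;  # placeholder hostname

        location / {
            # Forward all requests to the legacy backend (placeholder address).
            proxy_pass http://10.0.0.5:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```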

Steps

  1. The nginx.conf file must have the tCell module enabled. This is done by adding load_module modules/ngx_http_tcell_agent_module.so; before the events section in the nginx.conf file mentioned in Prerequisites. An example of the beginning of the nginx.conf file:

    user nginx;
    worker_processes auto;

    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;

    load_module modules/ngx_http_tcell_agent_module.so;

    events {
        worker_connections 1024;
    }
  2. Start a Docker container from the image hosted on Docker Hub using the following docker run template.

    Docker execution template
    docker run -d -p 8090:80 -e "TCELL_AGENT_APP_ID=APPID" -e "TCELL_AGENT_API_KEY=APIKEY" -v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro -t rapid7/nginx-tcell

    More detailed information about the flags and their usage is provided below. All flags marked required must be passed to docker run or the agent will not work correctly.

    -d (optional): Run in detached mode.

    -p 8090:80 (required): Maps port 80 (the standard HTTP port) inside the container to port 8090 on the host. If, for example, you want to map container port 443 to host port 443, use -p 443:443.

    -e "TCELL_AGENT_APP_ID=APPID" (required): Tells the tCell agent which App ID to use when sending data to the tCell cloud. In this example, the App ID is "APPID".

    -e "TCELL_AGENT_API_KEY=APIKEY" (required): Tells the tCell agent which API key to use when authenticating the agent with the tCell cloud.

    -v /host/path/nginx.conf:/etc/nginx/nginx.conf:ro (required): Mounts the custom nginx.conf configuration file, described in the Prerequisites section and the NGINX Beginner's Guide, into the container read-only via Docker volume mounting. In this example, the nginx.conf file is located in the /host/path folder on the host machine.

    -t rapid7/nginx-tcell (required): -t allocates a pseudo-TTY for the container; rapid7/nginx-tcell is the image to run, in this case the tCell NGINX agent image pulled from Docker Hub.
  3. Update your DNS records so the application hostname resolves to the machine and port exposed by the new NGINX Docker container instead of the legacy backend. We recommend testing the new deployment without DNS (for example, directly by IP address or via a local hosts-file entry) before updating DNS, in case of a problem.
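The pre-DNS test mentioned in step 3 can be run directly against the new container. The commands below are a sketch; the proxy host address 192.0.2.10 and the hostname app.example.com are assumptions for illustration, and port 8090 follows the docker run template above.

```shell
# Confirm the container is running and port 8090 is mapped
# (assumes the docker run command from Step 2 was used).
docker ps --filter "ancestor=rapid7/nginx-tcell"

# Exercise the proxy directly by IP, supplying the production Host header
# so NGINX routes the request the same way it will after the DNS change.
# 192.0.2.10 and app.example.com are placeholder values.
curl -i -H "Host: app.example.com" http://192.0.2.10:8090/
```

If the curl request returns the legacy application's response, the proxy path is working and DNS can be cut over.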

Congratulations! A tCell agent is now deployed, protecting and monitoring your legacy backend application!

Validation

The easiest way to validate that an agent is online is to log into the tCell console, as documented in Quick Start.

Troubleshooting

No data appears in the tCell console during validation. In this case, it is likely that the App ID or API key passed to the docker run command was missing or invalid. Check the tcell.log file to see if errors occurred while sending data.
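The container's output and log files can be inspected with standard Docker commands to narrow down the cause. This is a sketch: <container-id> is whatever docker ps reports for the rapid7/nginx-tcell container, and because the tcell.log path inside the container is not specified here, the example searches for the file rather than assuming a location.

```shell
# Show the container's stdout/stderr, which includes NGINX startup errors.
docker logs <container-id>

# Locate and print tcell.log inside the container; the search is used
# because the log path is an assumption and may vary by agent version.
docker exec <container-id> sh -c 'find / -name "tcell.log" 2>/dev/null | head -1 | xargs cat'
```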

Next Steps

The steps above describe a minimal quick deployment. For a more robust, highly available setup, we recommend a clustered deployment. See https://docs.docker.com/ee/ucp/admin/configure/join-nodes/join-linux-nodes-to-cluster/ or https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/ for details on how to achieve this.