
Getting Started

The following articles cover getting up and running with Trickster.

1 - Installing Trickster

Install Trickster to get started.

Installing with Docker

Docker images are available on Docker Hub. To install Trickster with Docker, run the following command:

$ docker run --name trickster -d -v /path/to/trickster.conf:/etc/trickster/trickster.conf -p 0.0.0.0:8480:8480 tricksterproxy/trickster

See the Deployment documentation for more information about using or creating Trickster Docker images.

Installing with Kubernetes

To install Trickster with Kubernetes, see Deployment.

Helm

Trickster Helm Charts are located at https://helm.tricksterproxy.io for installation, and maintained at https://github.com/tricksterproxy/helm-charts. We welcome chart contributions.

Building from source

To build Trickster from source, you need a working Go environment with version 1.9 or greater installed.

You can directly use the go tool to download and install the trickster binary into your GOPATH:

$ go get github.com/tricksterproxy/trickster
$ trickster -origin-url http://prometheus.example.com:9090 -origin-type prometheus

You can also clone the repository yourself and build using make:

$ mkdir -p $GOPATH/src/github.com/tricksterproxy
$ cd $GOPATH/src/github.com/tricksterproxy
$ git clone https://github.com/tricksterproxy/trickster.git
$ cd trickster
$ make build
$ ./OPATH/trickster -origin-url http://prometheus.example.com:9090 -origin-type prometheus

The Makefile provides several targets, including:

  • build: build the trickster binary
  • docker: build a Docker container for the current HEAD
  • clean: delete previously-built binaries and object files
  • test: run unit tests
  • bench: run benchmark tests
  • rpm: build a Trickster RPM

2 - Configuring Trickster

Learn how to configure Trickster for your project.

There are 3 ways to configure Trickster, listed here in the order of evaluation.

  • Configuration File
  • Environment Variables
  • Command Line Arguments

Note that while the Configuration File provides a robust set of tunable options, the environment variable and command-line argument options support only basic use cases.

Internal Defaults

Internal Defaults are set for all configuration values, and are overridden by the configuration methods described below. All Internal Defaults are described in examples/conf/example.full.yaml comments.

Configuration File

Trickster accepts a -config /path/to/trickster.yaml command line argument to specify a custom path to a Trickster configuration file. If the provided path cannot be accessed by Trickster, it will exit with a fatal error.

When a -config parameter is not provided, Trickster will check for the presence of a config file at /etc/trickster/trickster.yaml and load it if present, or proceed with the Internal Defaults if not present.

Refer to examples/conf/example.full.yaml for full documentation on the format of a configuration file.
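As a minimal illustration, a configuration file for the basic Prometheus-acceleration use case might look like the sketch below. The key names follow examples/conf/example.full.yaml; the backend name default and the origin URL are placeholder values.

```yaml
# Minimal sketch of a Trickster configuration file.
# The backend name "default" and the URL below are placeholders;
# see examples/conf/example.full.yaml for the authoritative format.
backends:
  default:
    provider: prometheus
    origin_url: http://prometheus.example.com:9090
```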

Environment Variables

Trickster will then check for and evaluate the following Environment Variables:

  • TRK_ORIGIN_URL=http://prometheus.example.com:9090 - The default origin URL for proxying all http requests
  • TRK_ORIGIN_TYPE=prometheus - The type of supported backend server
  • TRK_LOG_LEVEL=INFO - Level of Logging that Trickster will output
  • TRK_PROXY_PORT=8480 - Listener port for the HTTP Proxy Endpoint
  • TRK_METRICS_PORT=8481 - Listener port for the Metrics and pprof debugging HTTP Endpoint
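For example, the basic Prometheus-acceleration use case can be configured entirely through the environment. This sketch assumes the trickster binary is on your PATH and that the origin URL is a placeholder for your own server:

```shell
# Configure Trickster via environment variables only (no config file).
export TRK_ORIGIN_URL=http://prometheus.example.com:9090
export TRK_ORIGIN_TYPE=prometheus
export TRK_PROXY_PORT=8480
export TRK_METRICS_PORT=8481
trickster
```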

Command Line Arguments

Finally, Trickster will check for and evaluate the following Command Line Arguments:

  • -log-level INFO - Level of Logging that Trickster will output
  • -config /path/to/trickster.yaml - See Configuration File section above
  • -origin-url http://prometheus.example.com:9090 - The default origin URL for proxying all http requests
  • -provider prometheus - The type of supported backend server
  • -proxy-port 8480 - Listener port for the HTTP Proxy Endpoint
  • -metrics-port 8481 - Listener port for the Metrics and pprof debugging HTTP Endpoint

Configuration Validation

Trickster can validate a configuration file by running trickster -validate-config -config /path/to/config. Trickster will load the configuration and exit with the validation result, without running the configuration.

Reloading the Configuration

Trickster can gracefully reload the configuration file from disk without impacting the uptime or responsiveness of the application.

Trickster provides 2 ways to reload the Trickster configuration: by requesting an HTTP endpoint, or by sending a SIGHUP (e.g., kill -1 $TRICKSTER_PID) to the Trickster process. In both cases, the underlying Configuration File must have been modified such that the file's last-modified time differs from when it was previously loaded.

Config Reload via SIGHUP

Once you have made the desired modifications to your config file, send a SIGHUP to the Trickster process by running kill -1 $TRICKSTER_PID. The Trickster log will indicate whether the reload attempt was successful or not.
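For example, assuming a single running Trickster process and that pgrep is available on your system, the reload described above could be triggered with:

```shell
# Send SIGHUP (signal 1) to the running Trickster process to request
# a config reload; check the Trickster log afterward for the result.
kill -1 "$(pgrep -o trickster)"
```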

Config Reload via HTTP Endpoint

Trickster provides an HTTP Endpoint for viewing the running Configuration, as well as requesting a configuration reload.

The reload endpoint is configured by default to listen on address 127.0.0.1 and port 8484, at /trickster/config/reload. These values can be customized, as demonstrated in example.full.yaml. The examples in this section assume the defaults. Set the port to -1 to disable the reload HTTP interface altogether.

To reload the config, simply make a GET request to the reload endpoint. If the underlying configuration file has changed, the configuration will be reloaded, and the caller will receive a success response. If the underlying file has not changed, the caller will receive an unsuccessful response, and reloading will be disabled for the duration of the Reload Rate Limiter. By default, this is 3 seconds, but can be customized as demonstrated in the example config file. The Reload Rate Limiter applies to the HTTP interface only, and not SIGHUP.
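Assuming the default listener address and port described above, a reload request might look like:

```shell
# Request a config reload via the default reload endpoint; the response
# indicates whether the file had changed and the reload was performed.
curl -i http://127.0.0.1:8484/trickster/config/reload
```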

If an HTTP listener must spin down (e.g., the listen port is changed in the refreshed config), the old listener will remain alive for a period of time to allow existing connections to organically finish. This period is called the Drain Timeout and is configurable. Trickster uses 30 seconds by default. The Drain Timeout also applies to old log files, in the event that a new log filename has been provided.

View the Running Configuration

Trickster also provides a http://127.0.0.1:8484/trickster/config endpoint, which returns the YAML output of the currently-running Trickster configuration. The YAML-formatted configuration will include all defaults populated, overlaid with any configuration file settings, command-line arguments, and/or applicable environment variables. This read-only interface is also available via the metrics endpoint, in the event that the reload endpoint has been disabled. This path is configurable as demonstrated in the example config file.
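For example, to capture the effective running configuration to a file (assuming the default listener address and port):

```shell
# Dump the currently-running, fully-resolved configuration as YAML.
curl -s http://127.0.0.1:8484/trickster/config > running-config.yaml
```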

3 - Where to Place Trickster

Choose the best Trickster placement for your project.

Depending upon the size of your existing or planned deployment, there are several placement configurations available. These designs are suggestions based on common usage, and you may find alternative or hybrid placement configurations that make the most sense for your situation, based on the activity of your Dashboard and TSDB instance(s).

Single “Everything”

Diagram of one optional dashboard endpoint, one Trickster endpoint, and one HTTP or TSDB endpoint.

Single “Everything” is the most common placement model. In this configuration, you have one optional dashboard endpoint, one Trickster endpoint, and one HTTP or TSDB endpoint. Behind each endpoint, you may have a single instance or a cluster. Each component is only aware of the other component’s endpoint exposure and not the underlying configuration. This configuration represents a one-for-one-for-one deployment of your Dashboard, Origin, and Trickster endpoints.

Multiple Backends

Diagram of one dashboard endpoint, one Trickster endpoint, and multiple TSDB and/or HTTP endpoints.

In a Multiple Backend placement, you have one dashboard endpoint, one Trickster endpoint, and multiple TSDB and/or HTTP endpoints. Trickster is aware of each upstream endpoint and treats each as a unique backend to which it proxies and caches data independently from the others. Trickster routes a request to a specific backend based on Host Header or URL Path in the client request.
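As an illustrative sketch of this placement, two backends might be declared as below. The backend names and URLs are placeholders, and the exact key names should be checked against the multi-origin documentation and example.full.yaml:

```yaml
# Hypothetical sketch of two independently cached backends.
# Trickster routes each request to a backend by Host header or URL path.
backends:
  prom-east:
    provider: prometheus
    origin_url: http://prometheus-east.example.com:9090
  prom-west:
    provider: prometheus
    origin_url: http://prometheus-west.example.com:9090
```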

This setup may benefit situations where you have one or more static file server origins serving HTML, CSS, and JavaScript assets, and/or one or more API endpoints, all supporting a common platform.

For Time Series Dashboard acceleration, this is a good configuration to use when you have a single dashboard that displays data about multiple redundant clusters (each with its own TSDB), or when you have a single dashboard representing information about many different kinds of systems. For example, if you operate a “Dashboard as a Service” solution under which many teams use your Dashboard system by designing their own dashboard screens and bringing their own databases, a single Trickster endpoint can be used to accelerate dashboards for all of your customers.

You will need to configure each Trickster-to-TSDB mapping separately in your dashboard application as a separately named TSDB data source. Refer to the multi-origin documentation for configuring multi-origin support in Trickster and Grafana.

In this configuration, be aware that the default ‘memory’ cache may be underpowered depending on the number of customers, as well as the size and number of queries that need to be cached by each customer. Refer to the caches document to select and configure the caching layers as needed to meet your specific situation.

Multi-Trickster

Diagram of one dashboard endpoint, multiple Trickster endpoints, and multiple TSDB or HTTP endpoints, with each Trickster Endpoint having a one-to-one mapping to a TSDB/HTTP Endpoint as a pair.

In a Multi-Trickster configuration, you have one dashboard endpoint, multiple Trickster endpoints, and multiple TSDB or HTTP endpoints, with each Trickster Endpoint having a one-to-one mapping to a TSDB/HTTP Endpoint as a pair. This is a good design if Multiple Backends is not performant enough for the amount of activity associated with your solution (e.g., you need more Tricksters). If the Dashboard system owner is different from the TSDB system owner, either party could own and operate the Trickster instance.

4 - Deployment

Learn how to deploy Trickster for your project.

Docker

$ docker run --name trickster -d [-v /path/to/trickster.yaml:/etc/trickster/trickster.yaml] -p 0.0.0.0:9090:9090 trickstercache/trickster:latest

Kubernetes, Helm, RBAC

If you want to use Helm and Kubernetes RBAC, use the following install steps in the deploy/helm directory.

Bootstrap Local Kubernetes-Helm Dev

  1. Install Helm Client Version 2.9.1

    brew install kubernetes-helm
    
  2. Install kubectl version 1.13.4 (server and client)

    curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.4/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
    
  3. Install minikube version 0.35.0

    curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.23.2/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
    
  4. Start minikube and enable RBAC with make start-minikube, or manually with minikube start --extra-config=apiserver.Authorization.Mode=RBAC --kubernetes-version=v1.8.0.

  5. Install Tiller: make bootstrap-peripherals

  6. Wait until Tiller is running: kubectl get po --namespace trickster -w

  7. Deploy all K8s artifacts: make bootstrap-trickster-dev

Deployment

  1. Make any necessary configuration changes to deploy/helm/values.yaml or deploy/helm/template/configmap.yaml
  2. Set your kubectl context to your target cluster: kubectl config use-context <context>
  3. Make sure Tiller is running: kubectl get po --namespace trickster -w
  4. Run the deployment script ./deploy from within deploy/helm

Kubernetes

Bootstrap Local Kubernetes Dev

  1. Install kubectl version 1.8.0 (server and client)

    brew install https://raw.githubusercontent.com/Homebrew/homebrew-core/e4b03ca8689987364852d645207be16a1ec1b349/Formula/kubernetes-cli.rb
    brew pin kubernetes-cli
    
  2. Install minikube version 0.25.0

    brew cask install https://raw.githubusercontent.com/caskroom/homebrew-cask/903f1507e1aeea7fc826c6520a8403b4076ed6f4/Casks/minikube.rb
    
  3. Start minikube with make start-minikube, or manually with minikube start.

  4. Deploy all K8s artifacts: make bootstrap-trickster-dev

Deployment

  1. Make any necessary configuration changes to deploy/kube/configmap.yaml
  2. Set your kubectl context to your target cluster: kubectl config use-context <context>
  3. Run the deployment script ./deploy from within deploy/kube

Local Binary

Binary Dev

  1. From the parent directory, run make, then start Trickster with ./trickster [-config <path>]