Getting Started
1 - Installing Trickster
Try Trickster
To try a demo of Trickster with Docker before installing, see our end-to-end Quickstart for a zero-configuration running environment.
Installing with Docker
Docker images are available on Docker Hub. To install Trickster with Docker, run the following command:
$ docker run --name trickster -d -v /path/to/trickster.conf:/etc/trickster/trickster.conf -p 0.0.0.0:8480:8480 tricksterproxy/trickster
See the Deployment documentation for more information about using or creating Trickster Docker images.
Installing with Kubernetes
To install Trickster with Kubernetes, see Deployment.
Helm
Trickster Helm Charts are located at https://helm.tricksterproxy.io for installation, and maintained at https://github.com/tricksterproxy/helm-charts. We welcome chart contributions.
Building from source
To build Trickster from source yourself, you need a working Go environment with version 1.9 or greater installed.
You can use the go tool directly to download and install the trickster binary into your GOPATH:
$ go get github.com/tricksterproxy/trickster
$ trickster -origin-url http://prometheus.example.com:9090 -origin-type prometheus
You can also clone the repository yourself and build using make:
$ mkdir -p $GOPATH/src/github.com/tricksterproxy
$ cd $GOPATH/src/github.com/tricksterproxy
$ git clone https://github.com/tricksterproxy/trickster.git
$ cd trickster
$ make build
$ ./OPATH/trickster -origin-url http://prometheus.example.com:9090 -origin-type prometheus
The Makefile provides several targets, including:
- build: builds the trickster binary
- docker: builds a docker container for the current HEAD
- clean: deletes previously-built binaries and object files
- test: runs unit tests
- bench: runs benchmark tests
- rpm: builds a Trickster RPM
2 - Configuring Trickster
There are 3 ways to configure Trickster, listed here in the order of evaluation.
- Configuration File
- Environment Variables
- Command Line Arguments
Note that while the configuration file exposes a robust set of knobs you can adjust, the environment variable and command line options support only basic use cases.
Internal Defaults
Internal Defaults are set for all configuration values, and are overridden by the configuration methods described below. All Internal Defaults are described in examples/conf/example.full.yaml comments.
Configuration File
Trickster accepts a -config /path/to/trickster.yaml command line argument to specify a custom path to a Trickster configuration file. If the provided path cannot be accessed by Trickster, it will exit with a fatal error.
When a -config parameter is not provided, Trickster will check for the presence of a config file at /etc/trickster/trickster.yaml and load it if present, or proceed with the Internal Defaults if not present.
Refer to examples/conf/example.full.yaml for full documentation on the format of a configuration file.
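As a quick orientation, a minimal configuration file might look like the following sketch. The key names are based on the example.full.yaml referenced above, but exact names can vary between Trickster versions, so verify them against the example file for your release:

```yaml
# Hypothetical minimal trickster.yaml; verify key names against
# examples/conf/example.full.yaml for your Trickster version.
backends:
  default:
    provider: prometheus
    origin_url: http://prometheus.example.com:9090
```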
Environment Variables
Trickster will then check for and evaluate the following Environment Variables:
- TRK_ORIGIN_URL=http://prometheus.example.com:9090 - The default origin URL for proxying all HTTP requests
- TRK_ORIGIN_TYPE=prometheus - The type of supported backend server
- TRK_LOG_LEVEL=INFO - The level of logging that Trickster will output
- TRK_PROXY_PORT=8480 - Listener port for the HTTP Proxy Endpoint
- TRK_METRICS_PORT=8481 - Listener port for the Metrics and pprof debugging HTTP Endpoint
Command Line Arguments
Finally, Trickster will check for and evaluate the following Command Line Arguments:
- -log-level INFO - The level of logging that Trickster will output
- -config /path/to/trickster.yaml - See the Configuration File section above
- -origin-url http://prometheus.example.com:9090 - The default origin URL for proxying all HTTP requests
- -provider prometheus - The type of supported backend server
- -proxy-port 8480 - Listener port for the HTTP Proxy Endpoint
- -metrics-port 8481 - Listener port for the Metrics and pprof debugging HTTP Endpoint
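The evaluation order above can be sketched as a simple shell fragment. This is illustrative only (not Trickster code): internal defaults are overridden by the config file, then by environment variables, then by command line arguments; the file and CLI values here are hypothetical stand-ins.

```shell
# Illustrative precedence sketch: default < config file < env var < CLI flag.
default_port=8480                 # internal default
file_port=9000                    # hypothetical value from trickster.yaml
env_port="${TRK_PROXY_PORT:-}"    # environment variable, if set
cli_port=""                       # -proxy-port argument, if given

port="$default_port"
[ -n "$file_port" ] && port="$file_port"
[ -n "$env_port" ] && port="$env_port"
[ -n "$cli_port" ] && port="$cli_port"
echo "effective proxy port: $port"
```

With only the hypothetical config-file value set, the effective port resolves to 9000; exporting TRK_PROXY_PORT or supplying -proxy-port would override it in turn.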
Configuration Validation
Trickster can validate a configuration file by running trickster -validate-config -config /path/to/config. Trickster will load the configuration and exit with the validation result, without running the configuration.
Reloading the Configuration
Trickster can gracefully reload the configuration file from disk without impacting the uptime and responsiveness of the application.
Trickster provides 2 ways to reload the Trickster configuration: by requesting an HTTP endpoint, or by sending a SIGHUP (e.g., kill -1 $TRICKSTER_PID) to the Trickster process. In both cases, the underlying running Configuration File must have been modified such that the last modified time of the file differs from when it was previously loaded.
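The modification check is based on the file's last-modified time, which can be demonstrated with a runnable sketch (a temp file stands in for the Trickster config):

```shell
# Sketch of the mtime condition: a reload only proceeds when the config
# file's last-modified time differs from the previous load.
cfg="$(mktemp)"
before="$(stat -c %Y "$cfg" 2>/dev/null || stat -f %m "$cfg")"
sleep 1
touch "$cfg"    # stands in for editing the config file
after="$(stat -c %Y "$cfg" 2>/dev/null || stat -f %m "$cfg")"
if [ "$after" != "$before" ]; then
  echo "mtime changed; a reload would proceed"
else
  echo "mtime unchanged; a reload would be skipped"
fi
rm -f "$cfg"
```

Note that simply re-saving the file without changes still updates its mtime, so Trickster would consider it modified.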
Config Reload via SIGHUP
Once you have made the desired modifications to your config file, send a SIGHUP to the Trickster process by running kill -1 $TRICKSTER_PID. The Trickster log will indicate whether the reload attempt was successful or not.
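The signal delivery itself can be sketched as follows; a background sleep stands in for the Trickster process so the commands are runnable, since the kill semantics are identical:

```shell
# Send SIGHUP (signal 1) to a process by PID. A background sleep is the
# stand-in target here; substitute the real Trickster PID in practice.
sleep 60 &
TRICKSTER_PID=$!
kill -1 "$TRICKSTER_PID" && sighup_sent=yes
echo "sent SIGHUP to $TRICKSTER_PID"
wait "$TRICKSTER_PID" 2>/dev/null || true
```

In a real deployment you would obtain the PID from your process manager (e.g., systemd) rather than a shell job.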
Config Reload via HTTP Endpoint
Trickster provides an HTTP Endpoint for viewing the running Configuration, as well as requesting a configuration reload.
The reload endpoint is configured by default to listen on address 127.0.0.1 and port 8484, at /trickster/config/reload. These values can be customized, as demonstrated in example.full.yaml. The examples in this section assume the defaults. Set the port to -1 to disable the reload HTTP interface altogether.
To reload the config, simply make a GET request to the reload endpoint. If the underlying configuration file has changed, the configuration will be reloaded, and the caller will receive a success response. If the underlying file has not changed, the caller will receive an unsuccessful response, and reloading will be disabled for the duration of the Reload Rate Limiter. By default, this is 3 seconds, but it can be customized as demonstrated in the example config file. The Reload Rate Limiter applies to the HTTP interface only, and not to SIGHUP.
If an HTTP listener must spin down (e.g., the listen port is changed in the refreshed config), the old listener will remain alive for a period of time to allow existing connections to organically finish. This period is called the Drain Timeout and is configurable. Trickster uses 30 seconds by default. The Drain Timeout also applies to old log files, in the event that a new log filename has been provided.
View the Running Configuration
Trickster also provides a http://127.0.0.1:8484/trickster/config endpoint, which returns the YAML output of the currently-running Trickster configuration. The YAML-formatted configuration will include all defaults populated, overlaid with any configuration file settings, command line arguments, and any applicable environment variables. This read-only interface is also available via the metrics endpoint, in the event that the reload endpoint has been disabled. This path is configurable as demonstrated in the example config file.
3 - Where to Place Trickster
Depending upon the size of your existing or planned deployment, there are several placement configurations available. These designs are suggestions based on common usage, and you may find alternative or hybrid placement configurations that make the most sense for your situation, based on the activity of your Dashboard and TSDB instance(s).
Single “Everything”
Single “Everything” is the most common placement model. In this configuration, you have one optional dashboard endpoint, one Trickster endpoint, and one HTTP or TSDB endpoint. Behind each endpoint, you may have a single instance or a cluster. Each component is only aware of the other component’s endpoint exposure and not the underlying configuration. This configuration represents a one-for-one-for-one deployment of your Dashboard, Origin, and Trickster endpoints.
Multiple Backends
In a Multiple Backend placement, you have one dashboard endpoint, one Trickster endpoint, and multiple TSDB and/or HTTP endpoints. Trickster is aware of each upstream endpoint and treats each as a unique backend to which it proxies and caches data independently from the others. Trickster routes a request to a specific backend based on Host Header or URL Path in the client request.
This setup may benefit situations where you have one or more static file server origins serving HTML, CSS, and JavaScript assets, and/or one or more API endpoints, all supporting a common platform.
For Time Series Dashboard acceleration, this is a good configuration to use when you have a single dashboard that displays data about multiple redundant clusters (each with its own TSDB), or when you have a single dashboard representing information about many different kinds of systems. For example, if you operate a “Dashboard as a Service” solution under which many teams use your Dashboard system by designing their own dashboard screens and bringing their own databases, a single Trickster endpoint can be used to accelerate dashboards for all of your customers.
You will need to configure each Trickster-to-TSDB mapping separately in your dashboard application as a separately named TSDB data source. Refer to the multi-origin documentation for configuring multi-origin support in Trickster and Grafana.
In this configuration, be aware that the default ‘memory’ cache may be underpowered depending on the number of customers, as well as the size and number of queries that need to be cached by each customer. Refer to the caches document to select and configure the caching layers as needed to meet your specific situation.
Multi-Trickster
In a Multi-Trickster configuration, you have one dashboard endpoint, multiple Trickster endpoints, and multiple TSDB or HTTP endpoints, with each Trickster Endpoint having a one-to-one mapping to a TSDB/HTTP Endpoint as a pair. This is a good design if Multiple Backends is not performant enough for the amount of activity associated with your solution (e.g., you need more Tricksters). If the Dashboard system owner is different from the TSDB system owner, either party could own and operate the Trickster instance.
4 - Deployment
Docker
$ docker run --name trickster -d [-v /path/to/trickster.yaml:/etc/trickster/trickster.yaml] -p 0.0.0.0:9090:9090 trickstercache/trickster:latest
Kubernetes, Helm, RBAC
If you want to use Helm and Kubernetes RBAC, use the following install steps in the deploy/helm directory.
Bootstrap Local Kubernetes-Helm Dev
- Install Helm Client Version 2.9.1:
  brew install kubernetes-helm
- Install kubectl client server 1.13.4, client version 1.13.4:
  curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.4/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
- Install minikube version 0.35.0:
  curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.23.2/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
- Start minikube and enable RBAC: make start-minikube, or manually with --extra-config=apiserver.Authorization.Mode=RBAC --kubernetes-version=v1.8.0
- Install Tiller: make bootstrap-peripherals
- Wait until Tiller is running: kubectl get po --namespace trickster -w
- Deploy all K8s artifacts: make bootstrap-trickster-dev
Deployment
- Make any necessary configuration changes to deploy/helm/values.yaml or deploy/helm/template/configmap.yaml
- Set your kubectl context to your target cluster: kubectl config use-context <context>
- Make sure Tiller is running: kubectl get po --namespace trickster -w
- Run the deployment script ./deploy from within deploy/helm
Kubernetes
Bootstrap Local Kubernetes Dev
- Install kubectl client server 1.8.0, client version 1.8.0:
  brew install https://raw.githubusercontent.com/Homebrew/homebrew-core/e4b03ca8689987364852d645207be16a1ec1b349/Formula/kubernetes-cli.rb
  brew pin kubernetes-cli
- Install minikube version 0.25.0:
  brew cask install https://raw.githubusercontent.com/caskroom/homebrew-cask/903f1507e1aeea7fc826c6520a8403b4076ed6f4/Casks/minikube.rb
- Start minikube: make start-minikube, or manually with minikube start
- Deploy all K8s artifacts: make bootstrap-trickster-dev
Deployment
- Make any necessary configuration changes to deploy/kube/configmap.yaml
- Set your kubectl context to your target cluster: kubectl config use-context <context>
- Run the deployment script ./deploy from within deploy/kube
Local Binary
Binary Dev
- From the parent directory, run make, then run ./trickster [-config <path>]