Benchmarking tool

Usage

valkey-benchmark [ OPTIONS ] [ COMMAND ARGS… ]

Description

Valkey includes the valkey-benchmark utility, which simulates N clients issuing commands in parallel while sending a total of M queries. The utility provides a default set of tests, or you can supply a custom set of tests.

Options

-h hostname
Server hostname (default 127.0.0.1)
-p port
Server port (default 6379)
-s socket
Server socket (overrides host and port)
-a password
Password for Valkey Auth
--user username
Used to send ACL style ‘AUTH username pass’. Needs -a.
-u uri
Server URI in the format valkey://user:password@host:port/dbnum. User, password and dbnum are optional. For authentication without a username, use username ‘default’. For TLS, use the scheme ‘valkeys’. See the authentication example in the Examples section below.
-c clients
Number of parallel connections (default 50). Note: If --cluster is used, the number of clients must be equal to or higher than the number of nodes.
-n requests
Total number of requests (default 100000)
-d size
Data size of SET/GET value in bytes (default 3)
--dbnum db
SELECT the specified db number (default 0)
-3
Start session in RESP3 protocol mode.
--threads num
Enable multi-thread mode.
--cluster
Enable cluster mode. If a command is supplied on the command line in cluster mode, the key must contain “{tag}”; otherwise, the command will not be sent to the right cluster node. See the cluster example in the Examples section below.
--enable-tracking
Send CLIENT TRACKING ON before starting benchmark.
-k boolean
1=keep alive 0=reconnect (default 1)
-r keyspacelen
Use random keys for SET/GET/INCR, random values for SADD, and random members and scores for ZADD. With this option, the benchmark expands the string __rand_int__ inside an argument to a 12-digit number in the range from 0 to keyspacelen - 1. The substitution changes every time a command is executed. The default tests use this to hit random keys in the specified range. Note: If -r is omitted, all commands in a benchmark will use the same key.
-P numreq
Pipeline numreq requests. Default 1 (no pipeline).
-q
Quiet. Just show query/sec values
--precision
Number of decimal places to display in latency output (default 0)
--csv
Output in CSV format
-l
Loop. Run the tests forever
-t tests
Only run the comma separated list of tests. The test names are the same as the ones produced as output. The -t option is ignored if a specific command is supplied on the command line.
-I
Idle mode. Just open N idle connections and wait.
-x
Read last argument from STDIN.
--seed num
Set the seed for the random number generator. The default seed is based on the current time.
--tls
Establish a secure TLS connection. See the TLS example in the Examples section below.
--sni host
Server name indication for TLS.
--cacert file
CA Certificate file to verify with.
--cacertdir dir
Directory where trusted CA certificates are stored. If neither --cacert nor --cacertdir is specified, the default system-wide trusted root certificate configuration applies.
--insecure
Allow insecure TLS connection by skipping cert validation.
--cert file
Client certificate to authenticate with.
--key file
Private key file to authenticate with.
--tls-ciphers list
Sets the list of preferred ciphers (TLSv1.2 and below) in order of preference from highest to lowest separated by colon (“:”). See the ciphers(1ssl) manpage for more information about the syntax of this string.
--tls-ciphersuites list
Sets the list of preferred ciphersuites (TLSv1.3) in order of preference from highest to lowest separated by colon (“:”). See the ciphers(1ssl) manpage for more information about the syntax of this string, and specifically for TLSv1.3 ciphersuites.
--help
Output help and exit.
--version
Output version and exit.

Examples

Run the benchmark with the default configuration against 127.0.0.1:6379. You need to have a running Valkey instance before launching the benchmark:

$ valkey-benchmark

Run a benchmark with 20 parallel clients, pipelining 10 commands at a time, using 2 threads and less verbose output:

$ valkey-benchmark -c 20 -P 10 --threads 2 -q

Use 20 parallel clients, for a total of 100k requests, against 192.168.1.1:

$ valkey-benchmark -h 192.168.1.1 -p 6379 -n 100000 -c 20

Fill 127.0.0.1:6379 with about 1 million keys, using only the SET test:

$ valkey-benchmark -t set -n 1000000 -r 100000000

Benchmark 127.0.0.1:6379 for a few commands producing CSV output:

$ valkey-benchmark -t ping,set,get -n 100000 --csv

Benchmark a specific command line:

$ valkey-benchmark -r 10000 -n 10000 eval 'return redis.call("ping")' 0

Fill a list with 10000 random elements:

$ valkey-benchmark -r 10000 -n 10000 lpush mylist __rand_int__

On user-specified command lines, __rand_int__ is replaced with a random integer in the range selected by the -r option.
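
Authenticate before benchmarking. A minimal sketch, assuming an ACL user named myuser with password mypassword (both are placeholders); the same credentials can also be passed as a URI:

# myuser / mypassword are placeholder credentials
$ valkey-benchmark --user myuser -a mypassword -t ping -n 10000 -q
$ valkey-benchmark -u valkey://myuser:mypassword@127.0.0.1:6379/0 -t ping -n 10000 -q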
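
Benchmark a custom command against a cluster. Because the command is supplied on the command line, the key must contain a hash tag so it maps to a single slot. This is a sketch that assumes a cluster node is reachable on 127.0.0.1:7000 (adjust host and port for your setup):

# 127.0.0.1:7000 is an assumed cluster node address
$ valkey-benchmark --cluster -h 127.0.0.1 -p 7000 -r 10000 -n 100000 set 'key{tag}__rand_int__' value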
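
Benchmark over TLS. A minimal sketch, assuming the server has TLS enabled; ca.crt, client.crt and client.key are placeholder file names for the CA certificate and the client certificate/key pair (the client certificate and key are only needed if the server requires client authentication):

# ca.crt, client.crt and client.key are placeholder file names
$ valkey-benchmark --tls --cacert ca.crt --cert client.crt --key client.key -t ping,set,get -n 100000 -q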

Running only a subset of the tests

You don’t need to run all the default tests every time you execute valkey-benchmark. To select only a subset of tests, use the -t option as in the following example:

$ valkey-benchmark -t set,lpush -n 100000 -q
SET: 74239.05 requests per second
LPUSH: 79239.30 requests per second

This example runs the tests for the SET and LPUSH commands and uses quiet mode (see the -q switch).

You can even benchmark a specific command:

$ valkey-benchmark -n 100000 -q script load "server.call('set','foo','bar')"
script load server.call('set','foo','bar'): 69881.20 requests per second

Selecting the size of the key space

By default, the benchmark runs against a single key. In Valkey the difference between such a synthetic benchmark and a real one is not huge, since it is an in-memory system. However, using a large key space makes it possible to stress cache misses and, in general, to simulate a more realistic workload.

This is obtained by using the -r switch. For instance, to run one million SET operations, using a random key for every operation out of 100k possible keys, use the following command line:

$ valkey-cli flushall
OK

$ valkey-benchmark -t set -r 100000 -n 1000000
====== SET ======
  1000000 requests completed in 13.86 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

99.76% <= 1 milliseconds
99.98% <= 2 milliseconds
100.00% <= 3 milliseconds
100.00% <= 3 milliseconds
72144.87 requests per second

$ valkey-cli dbsize
(integer) 99993

Using pipelining

By default, every client (the benchmark simulates 50 clients unless otherwise specified with -c) sends the next command only after the reply to the previous command has been received. This means the server will likely need a read call to read each command from every client, and a full network round-trip time is paid for every command.

Valkey supports pipelining, so it is possible to send multiple commands at once, a feature often exploited by real-world applications. Valkey pipelining is able to dramatically improve the number of operations per second a server is able to deliver.

Consider this example of running the benchmark using a pipelining of 16 commands:

$ valkey-benchmark -n 1000000 -t set,get -P 16 -q
SET: 403063.28 requests per second
GET: 508388.41 requests per second

Using pipelining results in a significant increase in performance.

Pitfalls and misconceptions

The first point is obvious: the golden rule of a useful benchmark is to only compare apples and apples. You can compare different versions of Valkey on the same workload, or the same version of Valkey with different options. If you plan to compare Valkey to something else, then it is important to evaluate the functional and technical differences, and take them into account.

The valkey-benchmark program is a quick and useful way to get some figures and evaluate the performance of a Valkey instance on given hardware. However, by default, it does not represent the maximum throughput a Valkey instance can sustain. Actually, by using pipelining and a fast client (hiredis), it is fairly easy to write a program generating more throughput than valkey-benchmark. By default, valkey-benchmark achieves throughput by exploiting concurrency only (i.e. it creates several connections to the server). It does not use pipelining or any parallelism at all (one pending query per connection at most, and no multi-threading), unless pipelining is explicitly enabled with the -P option or multi-threading with the --threads option. So, in some ways, running valkey-benchmark while, for example, a BGSAVE operation is in progress in the background will give numbers closer to the worst case than to the best case.

To run a benchmark using pipelining mode (and achieve higher throughput), you need to explicitly use the -P option. Please note that this is still realistic behavior, since a lot of Valkey-based applications actively use pipelining to improve performance. However, to get realistic numbers you should use a pipeline size that is more or less the average pipeline length you’ll be able to use in your application.

The benchmark should apply the same operations, and work in the same way, with the data stores you want to compare. It is absolutely pointless to compare the result of valkey-benchmark to the result of another benchmark program and extrapolate.

For instance, Valkey and memcached in single-threaded mode can be compared on GET/SET operations. Both are in-memory data stores, working mostly in the same way at the protocol level. Provided their respective benchmark applications aggregate queries in the same way (pipelining) and use a similar number of connections, the comparison is actually meaningful.

When you’re benchmarking a high-performance, in-memory database like Valkey, it may be difficult to saturate the server. Sometimes, the performance bottleneck is on the client side, and not the server-side. In that case, the client (i.e., the benchmarking program itself) must be fixed, or perhaps scaled out, to reach the maximum throughput.
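
If a single benchmark process is the limiting factor, one illustrative option (a sketch, not a tuning recommendation) is to enable the benchmark's own multi-thread mode together with pipelining and check whether throughput improves:

# the thread count and pipeline size below are illustrative only
$ valkey-benchmark -t set,get -n 1000000 --threads 4 -P 16 -q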

Factors impacting Valkey performance

There are multiple factors that have direct consequences on Valkey performance, and they can alter the result of any benchmark. Note, however, that a typical Valkey instance running on a low-end, untuned box usually provides good enough performance for most applications.

Other things to consider

One important goal of any benchmark is to get reproducible results, so they can be compared to the results of other tests.

Other Valkey benchmarking tools

There are several third-party tools that can be used for benchmarking Valkey. Refer to each tool’s documentation for more information about its goals and capabilities.

See also

valkey-cli, valkey-server