@knutin
Last active April 20, 2017 22:21
Similar benchmark to protostore, same client, but hacked to speak to Redis.
10M keys, with value sizes randomly distributed between 1024 and 4096 bytes. Generated with mk_redis_data.py
using Python's random.randrange. I think the average value size is higher than in previous benchmarks,
so I need to redo the benchmark with a similar distribution.
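mk_redis_data.py itself is not included in this gist; below is a minimal sketch of what such a generator might look like, assuming values are opaque byte strings whose sizes are drawn with random.randrange over [1024, 4096), encoded as RESP SET commands for Redis mass insertion. The key format and helper names are hypothetical:

```python
import random

def gen_records(n, min_size=1024, max_size=4096, seed=None):
    """Yield (key, value) pairs with value sizes drawn uniformly
    from [min_size, max_size) via random.randrange."""
    rng = random.Random(seed)
    for i in range(n):
        size = rng.randrange(min_size, max_size)
        yield ("key:%d" % i, "x" * size)

def to_redis_protocol(key, value):
    """Encode a SET command in the RESP wire format that
    `redis-cli --pipe` accepts for mass insertion."""
    parts = ["SET", key, value]
    out = ["*%d\r\n" % len(parts)]
    for p in parts:
        out.append("$%d\r\n%s\r\n" % (len(p), p))
    return "".join(out)
```

For 10M keys, the output of `to_redis_protocol` for each record would be written to stdout and piped into `redis-cli --pipe`, the standard Redis mass-insertion path.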
The server runs on an r3.4xlarge. Redis uses 27G of RAM. With 50 clients it saturates one core at 100% CPU,
although it seems we are also limited by network bandwidth at this point: 243MB/s is the max. With two Redis
server instances on the same machine, each with 50 clients, we no longer saturate the CPU,
but still cannot go higher than 243MB/s. The test client also runs on an r3.4xlarge, so I should redo the
test with more client instances; maybe it's possible to squeeze more network throughput out of this machine.
The avg rps is lower than in previous benchmarks, probably because the values are larger now.
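A quick back-of-envelope check that the reported throughput is consistent with the value-size distribution, assuming "MB" in the report means MiB (2^20 bytes):

```python
# Average value size implied by the 50-client run below.
total_bytes = 7170.76 * 1024 ** 2   # "Bytes transferred: 7170.76 MB"
requests = 2937017                  # "Total requests: 2937017"
avg_size = total_bytes / requests

# randrange(1024, 4096) has mean (1024 + 4095) / 2 = 2559.5, which the
# implied average should sit close to if every request fetched one value.
print(avg_size)
```

The implied average lands near 2560 bytes, matching the expected mean of the size distribution.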
============================
Concurrent clients: 50
Runtime: 30 s
Total requests: 2937017
Avg rps: 97900.57
Bytes transferred: 7170.76 MB
Bytes per second: 239.03 MB/s
Roundtrip latencies: 50th: 497us 75th: 543us 90th: 582us 95th: 611us 99th: 778us 99.9: 1600us
============================
Concurrent clients: 25
Runtime: 30 s
Total requests: 2252323
Avg rps: 75077.43
Bytes transferred: 5499.40 MB
Bytes per second: 183.31 MB/s
Roundtrip latencies: 50th: 321us 75th: 353us 90th: 393us 95th: 422us 99th: 548us 99.9: 1615us