Served is a C++ library for easily creating high-performance web servers. It presents a clean, elegant, modern C++ interface that drastically reduces the amount of boilerplate code you would normally need. Overall, it looks very promising for when you want everything to just work, without compromising on performance. Now, let’s dive right into it.
Getting To Hello World
Getting started is fairly standard for a from-source installation; you can also opt to compile it into a deb or rpm package via the build flags (sketched after the commands below). Running the following commands installs served on your system. The installation requires Boost 1.53 or newer; if you do not have Boost installed, you can get it through your favorite package manager or by following the instructions on the Boost website.
git clone https://github.com/meltwater/served.git
mkdir served/served.build && cd served/served.build
cmake ../served && make
sudo make install
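The deb/rpm packaging mentioned above is driven by CMake options at configure time. Treat the exact flag names below as assumptions based on the project's CMakeLists rather than verified documentation:

cmake ../served -DSERVED_BUILD_DEB=ON   # or -DSERVED_BUILD_RPM=ON (flag names assumed)
make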
Now that you have it installed, it is time to build a simple web server. This server will just give back “Hello world!” when you query the endpoint GET /hello.
#include <served/served.hpp>

int main() {
    // Register a handler for GET /hello that writes the response body.
    served::multiplexer mux;
    mux.handle("/hello")
        .get([](served::response & res, const served::request & req) {
            res << "Hello world!\n";
        });

    // Listen on all interfaces, port 8080, with a pool of 10 threads.
    served::net::server server("0.0.0.0", "8080", mux);
    server.run(10);

    return 0;
}
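The fixed "/hello" path is the simplest case. The multiplexer also supports dynamic path segments; here is a minimal sketch, assuming the {name} placeholder syntax and the req.params accessor shown in the project's README:

mux.handle("/users/{id}")
    .get([](served::response & res, const served::request & req) {
        // {id} captures the matching path segment, e.g. GET /users/42 -> "42".
        res << "User id: " << req.params["id"] << "\n";
    });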
To compile your program, you will need to link against the pthread, boost_system, and served shared objects, and use at least C++11. On Linux, this would look roughly like the following.
g++ main.cpp -o demo -std=c++17 -pthread -lboost_system -lserved
Just run the binary and visit http://localhost:8080/hello in your browser, and you have successfully reached hello world. All of the code for this demo is available on GitHub.
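You can also verify it from the command line:

curl http://localhost:8080/hello

which should print Hello world!.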
Performance
Let’s take a quick look at the performance of C++ Served. At only 60k on my system, the output binary from that demo is surprisingly small. Of course, this is not the statically linked binary size. For this benchmark, we will be using Vegeta for load testing.
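A Vegeta invocation along these lines (the exact flags here are an assumption, not a transcript) drives 20,000 requests per second at the demo endpoint for 5 seconds, matching the numbers in the report below:

echo "GET http://localhost:8080/hello" | vegeta attack -rate=20000 -duration=5s | vegeta report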
Requests      [total, rate, throughput]    99999, 20000.17, 19057.02
Duration      [total, attack, wait]        5.000256833s, 4.999908146s, 348.687µs
Latencies     [mean, 50, 90, 95, 99, max]  2.539834ms, 375.457µs, 7.240899ms, 8.184653ms, 38.421652ms, 1.691368283s
Bytes In      [total, mean]                1238770, 12.39
Bytes Out     [total, mean]                0, 0.00
Success       [ratio]                      95.29%
Status Codes  [code:count]                 0:4709  200:95290
On my machine (Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz 16 threads), the basic hello world server was able to handle around 19,000 requests per second! Additionally, it was able to maintain relatively low CPU usage during this attack, using only ~30% of the capacity available to it.
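Pushing the rate to 30,000 requests per second, with the same (assumed) invocation apart from the rate flag, tells a different story:

echo "GET http://localhost:8080/hello" | vegeta attack -rate=30000 -duration=5s | vegeta report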
Requests      [total, rate, throughput]    150000, 30000.10, 14094.09
Duration      [total, attack, wait]        7.948649187s, 4.999982507s, 2.94866668s
Latencies     [mean, 50, 90, 95, 99, max]  9.756488ms, 5.137985ms, 7.529834ms, 7.989994ms, 51.273487ms, 6.782636245s
Bytes In      [total, mean]                1456377, 9.71
Bytes Out     [total, mean]                0, 0.00
Success       [ratio]                      74.69%
Status Codes  [code:count]                 0:37971  200:112029
As the request rate increased above 20k, performance started to degrade; 20k requests per second seems to be the sweet spot here. Even so, the library performed very satisfactorily under the given load: the majority of requests completed in under 1ms during the 20k stress test. This should be more than enough to meet all but the most extreme demands.