Learn how to set ulimit on Linux

Oftentimes you will need to increase the maximum number of file descriptors for your application to function properly. Perhaps you are running a database, or you have run into the “Too many open files” error. In this tutorial, we will go over increasing this value and other ulimits. But first, what is a Linux ulimit?

What is a Linux ulimit?

A ulimit, or user limit, in Linux is a restriction placed on a user’s access to certain system resources. You can view them with the ulimit -a command, which lists every ulimit you can set along with its current value. It is important to note that there are two types of limits: soft and hard.
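
For example, in bash you can print just the soft and hard values of the open-files limit (the limit this tutorial focuses on) by combining the -S or -H flag with -n:

ulimit -Sn   # soft limit for open file descriptors
ulimit -Hn   # hard limit for open file descriptors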

Hard Limits

A hard limit is the maximum value allowed for the user. It is set by the root user and acts as a ceiling: a user cannot raise any limit above it unless the root user changes it first. There are a few ways to update a hard limit, but the most common is editing the /etc/security/limits.conf file. In most situations, it is best not to change hard limits at all.
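
If you do need to raise one, hard limits use the same /etc/security/limits.conf format covered below under Permanently Updating Limits. A hypothetical entry raising the apache user’s hard open-file ceiling to 10000 would look like this:

apache hard nofile 10000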

Soft Limits

A soft limit is the current effective maximum for the user. The user can change it at any time, up to the hard limit, either through setrlimit(2) or the ulimit command. Most of the time, this is the only value you will need to set, as very few situations call for resources above the hard limit.
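
The ulimit command is covered below; as a quick illustration of the setrlimit(2) route, here is a minimal C++ sketch (not from the original post) in which a process raises its own soft open-file limit up to the hard limit:

#include <sys/resource.h>
#include <cstdio>

int main() {
	struct rlimit rl;

	// Read the current soft and hard limits for open file descriptors.
	if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
		perror("getrlimit");
		return 1;
	}

	// Raise the soft limit as far as the hard limit allows.
	rl.rlim_cur = rl.rlim_max;
	if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}

	printf("Soft open-file limit is now %llu\n", (unsigned long long)rl.rlim_cur);
	return 0;
}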

Permanently Updating Limits

The best method for permanently updating soft and hard limits is via the /etc/security/limits.conf file. In this file, you can set new soft-limit defaults for a user or update the hard limits. The format is simple: each line is just <domain> <type> <item> <value>. So, if you wanted to update the default open-file limit for the user apache, it would look like this.

apache soft nofile 5000

Now, all you would have to do is reboot the system, and the user apache’s file descriptor limit would default to 5000.
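
After the reboot, one way to verify the new default (assuming the apache user has a usable login shell; adjust the command for your system) is to open a shell as that user and print the limit:

sudo su - apache -c 'ulimit -n'   # should now report 5000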

Temporarily Updating Limits

The easiest way to temporarily update a soft limit is with the ulimit command. If you want to set your file descriptor limit to 5000 for the duration of your shell session, you can use this command.

ulimit -S -n 5000

The -S flag signals that you are updating the soft limit, and the -n flag states that you wish to change the open-files (file descriptor) value. You can also update hard limits using this method; just swap out -S for -H. However, as a non-root user you can only set a lower value, not a higher one.
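
For example, to lower the hard limit on open files for the current session and confirm the change (4096 is just an illustrative value, and a regular user cannot raise it again afterwards):

ulimit -H -n 4096   # lower the hard limit for open files
ulimit -H -n        # print the new hard limit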

Served; Build RESTful C++ servers

Served is a C++ library for building highly performant web servers with minimal effort. It presents a clean and elegant modern C++ interface, drastically reducing the amount of boilerplate code that would normally be needed. Overall, it looks very promising for when you want everything to just work, without compromising on performance. Now, let’s dive right into it.

Getting To Hello World

Getting started is fairly standard for a from-source installation; you can also opt to compile it into a deb or rpm package via the build flags. The build requires Boost 1.53 or newer, so if you do not have Boost installed on your system, install it using your favorite package manager or by following these instructions. Running the following commands installs Served on your system.

git clone https://github.com/meltwater/served.git
mkdir served/served.build && cd served/served.build
cmake ../served && make
sudo make install

Now that you have it installed, it is time to build a simple web server. This server will just return “Hello world!” when you query the GET /hello endpoint.

#include <served/served.hpp>

int main() {
	// The multiplexer maps request paths to their handlers.
	served::multiplexer mux;

	// Register a handler for GET /hello that writes the response body.
	mux.handle("/hello")
		.get([](served::response & res, const served::request & req) {
			res << "Hello world!\n";
		});

	// Listen on all interfaces, port 8080, and serve with 10 threads.
	served::net::server server("0.0.0.0", "8080", mux);
	server.run(10);

	return 0;
}

To compile your program, you will need to link against the pthread, boost_system, and served shared objects and use at least C++11. On Linux, this would look roughly like the following.

g++ main.cpp -o demo -std=c++17 -pthread -lboost_system -lserved

Just run the binary and visit http://localhost:8080/hello in your browser, and you have successfully reached hello world. All of the code for this demo is available on GitHub.
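
You can also hit the endpoint from the command line, for example with curl:

curl http://localhost:8080/hello   # prints: Hello world!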

Performance

Let’s take a quick look at the performance of C++ Served. At only 60k on my system, the output binary from that demo is surprisingly small. Of course, this is not the statically linked binary size. For this benchmark, we will be using Vegeta for load testing.
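
The post does not show the exact Vegeta invocation, but a command along these lines (the target and flags are illustrative assumptions) produces a report in the format shown below:

echo "GET http://localhost:8080/hello" | vegeta attack -rate=20000 -duration=5s | vegeta report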

Requests      [total, rate, throughput]    99999, 20000.17, 19057.02
Duration      [total, attack, wait]        5.000256833s, 4.999908146s, 348.687µs
Latencies     [mean, 50, 90, 95, 99, max]  2.539834ms, 375.457µs, 7.240899ms, 8.184653ms, 38.421652ms, 1.691368283s
Bytes In      [total, mean]                1238770, 12.39
Bytes Out     [total, mean]                0, 0.00
Success       [ratio]                      95.29%
Status Codes  [code:count]                 0:4709  200:95290  

On my machine (Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz 16 threads), the basic hello world server was able to handle around 19,000 requests per second! Additionally, it was able to maintain relatively low CPU usage during this attack, using only ~30% of the capacity available to it.

Requests      [total, rate, throughput]    150000, 30000.10, 14094.09
Duration      [total, attack, wait]        7.948649187s, 4.999982507s, 2.94866668s
Latencies     [mean, 50, 90, 95, 99, max]  9.756488ms, 5.137985ms, 7.529834ms, 7.989994ms, 51.273487ms, 6.782636245s
Bytes In      [total, mean]                1456377, 9.71
Bytes Out     [total, mean]                0, 0.00
Success       [ratio]                      74.69%
Status Codes  [code:count]                 0:37971  200:112029 

As the request rate increased above 20k, performance started to degrade; 20k requests per second seems to be the sweet spot here. Despite this, the performance of the library is very satisfactory under the given load: the majority of requests took under 1ms during the 20k stress test. This should be more than enough to meet the needs of all but the most extreme demands.
