Learning heterogeneous parallelism in C++ with AMP

At first, GPUs could be used only for a very narrow range of tasks (try to guess which), but they looked very attractive, and software developers decided to harness their power by offloading part of the computation to graphics accelerators. Since a GPU cannot be programmed in the same way as a CPU, new tools were required, and they were not long in coming: this is how CUDA, OpenCL and DirectCompute originated. The new wave was named ‘GPGPU’ (general-purpose computing on graphics processing units), a term for the technique of using the GPU for general-purpose computing. As a result, several completely different microprocessors came to be used together to solve quite ordinary tasks, which gave rise to the term “heterogeneous parallelism”, the topic of today’s discussion.
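
To give a flavor of what such code looks like, below is a minimal C++ AMP sketch that offloads an element-wise vector addition to the GPU. It assumes a compiler that ships the <amp.h> header (such as Visual C++); the array size and values are illustrative.

```cpp
// Minimal C++ AMP sketch: element-wise vector addition on the accelerator.
#include <amp.h>
#include <iostream>
#include <vector>

int main() {
    const int n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // array_view wraps host memory so that the accelerator can access it.
    concurrency::array_view<const float, 1> av(n, a);
    concurrency::array_view<const float, 1> bv(n, b);
    concurrency::array_view<float, 1> cv(n, c);
    cv.discard_data(); // c's initial contents need not be copied to the GPU

    // parallel_for_each runs the lambda once per index on the accelerator;
    // restrict(amp) limits the body to operations the GPU can execute.
    concurrency::parallel_for_each(cv.extent,
        [=](concurrency::index<1> i) restrict(amp) {
            cv[i] = av[i] + bv[i];
        });

    cv.synchronize(); // copy the results back to host memory
    std::cout << "c[0] = " << cv[0] << std::endl; // expect 3
}
```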

Read full article →


Tempesta FW, a handy firewall against DDoS attacks

Open source tools for protection against DDoS attacks, such as the IPS Snort, are based on DPI, that is, they analyze the entire protocol stack. However, they cannot control the opening and closing of TCP connections, because they sit too high in the Linux network stack and act as neither the server nor the client side. This makes it possible to bypass such an IPS. Proxy servers, on the other hand, do take part in establishing connections, but they cannot protect against large DDoS attacks because they are relatively slow: they work on the same principle as the server itself. For them, it is desirable to use hardware which, even if not as powerful as the back end's, can withstand heavy loads.

Read full article →


0-day attacks using “keep-alive” connections

Before turning to unconventional ways of using it, I will describe how “keep-alive” works. The mechanism is utterly simple: multiple requests are sent within a single connection instead of just one, and multiple responses come back from the server. The benefits are obvious: less time is spent establishing connections, and the load on the CPU and memory is lower. The number of requests in a single connection is usually limited by the server's settings (in most cases to at least several dozen). The procedure for establishing the connection itself is universal.
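
As an illustration, here is a minimal sketch of keep-alive at the socket level on a POSIX system: two HTTP/1.1 requests share one TCP connection (here they are simply pipelined back to back for brevity). The host name and the absence of error handling are illustrative simplifications.

```cpp
// Sketch: two HTTP requests over a single keep-alive connection.
#include <iostream>
#include <string>
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    addrinfo hints{}, *res;
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("example.com", "80", &hints, &res) != 0) return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;
    freeaddrinfo(res);

    // HTTP/1.1 connections are persistent by default, so both requests
    // travel over the same TCP connection; only the second asks to close it.
    const std::string req1 = "GET / HTTP/1.1\r\nHost: example.com\r\n\r\n";
    const std::string req2 =
        "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n";
    send(fd, req1.data(), req1.size(), 0);
    send(fd, req2.data(), req2.size(), 0);

    // Read everything the server sends back: two responses, one connection.
    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        std::cout.write(buf, n);

    close(fd);
}
```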

Read full article →


Using synctool for server configuration management

*nix systems come with remote management tools out of the box, and the way configuration files are stored and formatted lets you quickly distribute updated settings by simply copying them to a node. This scheme is good enough up to a certain number of systems; once there are several dozen servers, however, they can no longer be handled without a special tool. This is when it becomes interesting to look at configuration management systems, which allow servers to be configured programmatically rather than manually. As a result, systems can be configured quickly and with fewer errors, and the administrator gets a comprehensive report. A CM system also knows how to keep track of all changes on a server while maintaining the desired configuration.

Read full article →


How to find vulnerabilities in routers and what to do about them

Router manufacturers often do not care much about the quality of their code, so vulnerabilities are not uncommon. Routers are now a priority target of network attacks, as they allow criminals to steal money and data while bypassing local protection systems. How can you personally check the quality of the firmware and the adequacy of the settings? You can do this with free utilities, online test services, and this article.
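
As a first, very basic check, you can see which admin-interface ports on the router answer at all. Below is a minimal sketch assuming the common 192.168.1.1 gateway address; the port list is an illustrative set of usual suspects.

```cpp
// Sketch: probe a router for admin services that may be exposed.
#include <arpa/inet.h>
#include <iostream>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Attempt a plain TCP connect; a blocking connect may take a while
// on filtered ports, which is acceptable for a quick manual check.
static bool port_open(const char *ip, int port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return false;

    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    bool open = connect(fd, reinterpret_cast<sockaddr *>(&addr), sizeof addr) == 0;
    close(fd);
    return open;
}

int main() {
    const char *router = "192.168.1.1"; // assumed gateway address
    // Telnet, HTTP, HTTPS, alternative HTTP and TR-069, respectively.
    const int ports[] = {23, 80, 443, 8080, 7547};

    for (int p : ports)
        std::cout << "port " << p << ": "
                  << (port_open(router, p) ? "open" : "closed") << "\n";
}
```

An open telnet or TR-069 port reachable from the LAN (or, worse, from the WAN side) is a good reason to dig deeper into the firmware settings.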

Read full article →