Linux in 30 Minutes: A Beginner’s Guide to Choosing and Using Linux

Date: 02/06/2025

It doesn’t matter which operating system you prefer on your desktop: Linux is literally everywhere today — on servers, on hardware like the Raspberry Pi, on smart devices, on computers in government institutions… You can even run Linux inside Windows now, which makes it easier, for example, to test server software. A hacker simply has to know their way around Linux.

A huge amount of hacker software runs only on Linux and comes bundled in specialized distributions like Kali. In addition, you will often encounter Linux systems during a pentest and should be able to handle them. And it is simply useful to have experience with this powerful and completely free system. It will come in handy in life, believe us!

In this article, we will try to tell you everything we ourselves would have liked to know when we first became interested in Linux many years ago. It is both theoretical background that will help you find your bearings and quite practical advice.

Important warning

In terms of complexity, this is not a typical article for Hacker: it is aimed at very, very new users. The idea for it came when we started putting together a collection of materials about using Linux and found that we had nothing suitable for absolute beginners, and what we did have was covered in thick moss.

If you are against such articles in Hacker, you can, of course, voice your objections in the comments, but honestly, you would be better off reading about kernel internals or how to write a minimalistic reverse shell in assembler. Fortunately, we have plenty of such articles and are not going to change anything in that regard.

If the topic seems just right for you, then buckle up — our spaceship is going to sweep through the basics of Linux at superluminal speed.

What Linux can be

The first thing a person who wants to install Linux faces is the huge variety of distributions. It is simply impossible to remember all their names, but fortunately, you don’t have to.

The three main distribution families that you need to know about first are Debian, Red Hat, and Arch. You can also remember SUSE, Mandriva and Gentoo, but their glory days are over, may their users forgive us!

info

Chrome OS is also a real Linux inside, and new versions support running Linux programs. But you still can’t put this OS on a par with other distributions.

Of the Debian family, Ubuntu is the first one to pay attention to. This is the most obvious choice, if you’re wondering where to start. Canonical, the company behind Ubuntu, makes great efforts to ensure that this distribution works well and is user-friendly. There is a wide range of stable programs for this distribution — you definitely won’t lack them.

Freshly installed Ubuntu
Standard set of applications in Ubuntu

In addition, Ubuntu has a huge community, which greatly simplifies problem solving: in 99% of cases, you will not be the first to run into one difficulty or another. Just search for the error message, and you will probably find a forum thread where more experienced users explain to fellow sufferers how to deal with it.

There are other popular Debian-based distributions, such as Raspberry Pi OS, MX Linux, and Kali Linux. Linux Mint (https://linuxmint.com/), elementary OS, and many others are in turn based on Ubuntu. By the way, Mint and elementary are also good options for beginners.

Installing Kali as your first system is usually not recommended: this highly specialized hacker distribution is poorly suited for everyday work and is meant to be installed in a virtual machine or as a second OS. Besides, it is packed to the brim with hacking tools instead of everyday applications, which will be confusing. But if you are installing Linux precisely to get all this wealth, then who are we to stop you?

As for Debian itself, its main feature is licensing purity. The developers carefully ensure that not a single line of code distributed under a non-free license gets into it (commendable from an ideological standpoint, but when you are just getting used to Linux, it is likely to cause all sorts of difficulties).

The Red Hat distribution family primarily includes Fedora, Red Hat Enterprise Linux (RHEL), and Rocky Linux. It makes sense to install Fedora on a regular PC, whereas RHEL is a commercial server solution, and Rocky Linux is a non-commercial clone created by the community.

Fedora Linux

And finally, Arch is an extremely interesting “geeky” distribution that you can build yourself brick by brick and customize however you want. However, we do not recommend diving into it without prior Linux knowledge. While solving its problems you will, of course, gain a lot of valuable knowledge, but this is far from the easiest path, and it is better to postpone it for later.

Arch also has less hardcore variations: Manjaro and EndeavourOS. Both come with preconfigured, ready-made environments, although they are still fairly ascetic. On the other hand, the lack of unbridled variety can be a plus when you are first getting acquainted, so starting with Manjaro is not such a bad idea, especially if you plan to install Linux on a weak computer.

Manjaro for ARM

Not Unix

What is the difference between Linux and Unix? To simplify greatly, we can say that Unix is Linux’s ancestor. A more complete answer requires going a little deeper into the story.

In the early eighties, Richard Stallman came up with the idea of cloning the then commercial and expensive Unix and creating his own operating system, which he called GNU’s Not Unix, or simply GNU. Stallman and company rewrote the Unix components and published them under the “viral” GPL license they invented.

Initially, the word Linux was used only for the kernel created by Linus Torvalds. But the name Linux and the mascot penguin quickly caught on, and they now refer to the entire OS — despite Stallman’s objections and requests to write only GNU/Linux.

Linux quickly gained popularity in the Internet age, and commercial Unix variants were gradually squeezed out. Nevertheless, Unix’s descendants are still alive: the FreeBSD and OpenBSD operating systems, which are free and borrow a lot from modern Linux. By the way, macOS and iOS are built on Darwin, which incorporates a great deal of FreeBSD code.

It turns out that choosing a distribution is primarily a choice of approach and even ideology. However, a more mundane guideline is usually a set of basic components from which distributions are built. Let’s discuss the main ones.

Kernel

The kernel, although critically important for the system operation, is not particularly interesting from the user’s point of view — you probably won’t have to interact with it directly until you become a real guru.

You may have heard about “building the kernel,” and you can even try it yourself. Since the Linux kernel is monolithic, it must include support for everything related to the computer’s operation. Accordingly, before building, you can and should tweak a pile of settings. There is nothing particularly exciting about it, believe us, although the process is extremely educational.

There can be any number of kernels in the system at the same time, and you can choose which one will be used in the bootloader. Upgrading the kernel to a newer version is a completely routine matter on Linux and usually happens automatically.
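To see which kernel your system is currently running, one command is enough. As a small illustration (the exact file names under /boot vary between distributions):

```shell
# Print the release of the currently running kernel
uname -r

# List installed kernel images in /boot (names and paths vary by distro;
# there may be several if you keep older kernels around)
ls /boot/vmlinuz-* 2>/dev/null || true
```

After a kernel update you will typically see more than one vmlinuz file, and the bootloader lets you pick among them.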

Package Manager

Any Linux system consists of thousands of small components — programs, libraries, and resources (for example, configuration files, icon sets, and so on). They are distributed in the form of packages.

A package manager is a special program that installs, configures, deletes, and updates both individual applications and the entire system or its components.

Very often, one package requires others to work, and it is impossible to monitor these dependencies manually on a modern system. Therefore, the basis of each distribution is a package manager that manages the installation and updating of software. In Debian-based distributions, it is called APT, in Red Hat — DNF, and in Arch — pacman.

The manager takes packages from a repository, a large warehouse to which the distribution’s maintainers upload them. You can often connect multiple repositories at once. For example, Ubuntu has four basic ones: Main (officially supported free software), Universe (community-maintained free software), Restricted (proprietary drivers), and Multiverse (software restricted by copyright or legal issues).

Since soon after installing Linux you will discover the need for hardware drivers, additional fonts, codecs, and the like, you will most likely have to allow the system access to the commercially tainted repositories. In Ubuntu, this is done in the Software & Updates settings.

Graphics system

Not every Linux is equipped with a graphical system or even needs one — a lot of actions here can be done from the command line. However, the modern desktop is still icons and windows.

To work with graphics, Linux needs a display server: either the traditional X.Org or the newfangled Wayland compositor, which the most advanced distributions are now switching to. On top of that, you need a window manager, a program responsible for how interface elements look and behave.

However, these are all pretty low-level details that you don’t have to dive into right away. Much sooner you will have to think about choosing a desktop environment (DE). This is a combination of a window manager and various programs, small ones (for example, those drawing panels, the desktop, and widgets) and large ones like a file manager. A DE usually includes a set of basic software as well: a calendar, a mail client, and so on.

The most famous desktop environments are GNOME and KDE, but the full list is much longer. Minimalism lovers can take a closer look at Xfce or LXDE, while Ratpoison, dwm, i3, and xmonad are tiling window managers that arrange windows without overlapping, which some find convenient.

The MATE and Cinnamon projects split off from GNOME: their developers did not like the GNOME 3 interface. MATE continues the GNOME 2 codebase, while Cinnamon builds a more traditional desktop on top of GNOME 3 technologies. And elementary OS uses its own environment called Pantheon, which you won’t find anywhere else. In general, the variety is enormous!

The originators of distributions that include a graphical environment usually choose one or more environments that they will officially support. But at the same time, nothing prevents you from changing the DE or installing more than one at the same time in order to switch between them or use programs from one environment from another. Try, experiment, and you’ll find out for yourself which is closer to you.

Command interpreter

Windows users are used to the fact that this operating system has a standard command prompt cmd.exe, which is commonly referred to as the command line. In recent versions of Windows, PowerShell has organically supplemented it, but these two environments exhaust the range of command interpreters in Windows.

There are many command interpreters in Linux, and whereas in Windows they are mostly an administrative tool, here the shell is one of the main and most powerful tools for working with the system.

Actually, the history of Linux itself began with the command line, or more precisely, with the terminal, or even the teletype. The graphical interface was bolted on much later. That is why the command-line window in Linux is called a “terminal emulator,” and the device files behind terminal sessions are named tty (from “teletype”).

As you know, using commands in Windows, you can write scripts that automate any actions: batch files have been in use since the days of MS-DOS, and PowerShell has significantly expanded and deepened this technology. In Linux, you can do the same thing: an interpreter’s command set assembled into a file can work as a complex program, and the commands themselves are, by and large, a programming language.

Sets of commands saved in a single file are called scripts. Scripts in Linux typically start with the characters #! (this combination is called a “shebang”) followed by the path to the interpreter, the program that will execute the script.
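As a minimal sketch of the idea (the file name hello.sh is arbitrary):

```shell
cd "$(mktemp -d)"              # work in a scratch directory

# Write a tiny script: the first line, the shebang, names the interpreter
cat > hello.sh <<'EOF'
#!/bin/bash
echo "Hello from a script"
EOF

chmod +x hello.sh              # grant the execute permission
./hello.sh                     # prints: Hello from a script
```

The shell reads the shebang, launches /bin/bash, and hands it the rest of the file to execute.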

The standard command interpreter on Linux is bash, an updated and upgraded version of the Bourne shell, which was invented by Stephen Bourne in 1978 and was used back in classic Unix.

Hardcore linuxoids prefer to replace bash with a more advanced interpreter, the Z shell (zsh), which is backward compatible with bash but has many improvements over it. For this shell, the community has developed the open and free Oh My ZSH framework, which contains many plug-ins for automating work with commands and scripts. At a minimum, Oh My ZSH lets you use beautiful command-line themes, thanks to which others will definitely take you for a genius hacker.

info

Read more about ZSH and Oh My ZSH in the article «Upgrade the terminal! Useful tricks that will make you a console guru».

Let’s warn you about the problem that every new Linux user immediately faces. If you go into some directory and try to write the name of an executable to run it, it won’t work. Why?

The reason is that the interpreter searches for files only in the directories listed in the $PATH environment variable. In other words, you either need to specify the full path to the executable or explicitly point to the current directory. As you may know, the parent directory is denoted by two dots (../), and the current one by ./. So instead of program, write ./program, and everything will work!
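You can see this for yourself with a throwaway file (the name program is just an example):

```shell
cd "$(mktemp -d)"                          # scratch directory for the experiment
printf '#!/bin/sh\necho it works\n' > program
chmod +x program

# program        # would fail: the shell searches only the directories in $PATH
./program        # works: the path is given explicitly; prints "it works"
```

The commented-out line shows the mistake every newcomer makes; the ./ prefix is the fix.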

Another important point. In Windows, the file type is determined by its extension: based on it, the command interpreter and the shell decide how to handle the file. Things work differently on Linux: bash has no respect for file extensions at all. An executable differs from a regular file not by its extension but by the execute permission: if it is set, the system considers the file a program (or script) and tries to run it. We will discuss file permissions in more detail later, in the relevant section.
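A quick demonstration that the extension really is irrelevant (the .txt name below is deliberately misleading):

```shell
cd "$(mktemp -d)"
printf '#!/bin/sh\necho ran\n' > runme.txt   # a script with a "wrong" extension
ls -l runme.txt       # no x bit yet: the system treats it as plain data
chmod +x runme.txt    # the execute bit, not the name, makes it a program
./runme.txt           # prints "ran" despite the .txt extension
```

Only the chmod +x step changed anything; renaming the file would have changed nothing.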

Home directory and hidden files

Since Linux was originally conceived as a multiuser operating system, all paths to “home” folders, environment variables, programs that run when the terminal is opened, and other settings are set in the user profile. They are correspondingly different for various users. Thanks to this, you can, for example, configure the system environment in a way that is comfortable for you.

It is very convenient to use the ~ symbol to point to the home directory. So, instead of /home/vasya/, you can just write ~/, if you are logged in as vasya.

On Linux, there is often something that is simply impossible on Windows: files whose names begin with a dot (Windows users are unaccustomed to the idea of files with no name and only an extension). This is what hidden files look like in Linux. For example, the name .htaccess tells us that this file is hidden. Thanks to the leading dot, it is easy to distinguish from other file objects.

The user’s home directory contains several hidden files that can be very useful when working on Linux. To view hidden files in the current directory, use the ls -a console command or browse through the file manager menu: for example, in Nautilus, the “Show hidden files” option is hidden in the “View” menu. Pay attention to the following hidden files:

  • .bash_profile — contains information about the user’s environment and the programs that run when the user logs in. In some Debian-based distributions this file does not exist by default, but you can create it yourself;
  • .bash_login — executed if there is no .bash_profile and performs a similar function. This file does not exist by default in either Debian or Red Hat distributions;
  • .profile — executed in the absence of both .bash_profile and .bash_login;
  • .bash_logout — a script that runs automatically when the shell session ends;
  • .bash_history — stores all the commands typed in bash;
  • .ssh — the directory where keys for SSH connections are stored;
  • .bashrc — a script executed for every new interactive shell; it is commonly used to set aliases and environment variables and to start daemons.
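To experiment safely without touching your real configuration, you can point $HOME at a throwaway directory (a trick for illustration only):

```shell
export HOME="$(mktemp -d)"                    # throwaway home: the real one stays untouched
echo 'alias ll="ls -lha"' >> "$HOME/.bashrc"  # a typical personal tweak
ls "$HOME"                                    # prints nothing: dot-files are hidden
ls -a "$HOME"                                 # . .. .bashrc — the -a flag reveals them
```

Any new interactive bash session started with this $HOME would pick up the ll alias from .bashrc.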

Minimum required commands

So, remember the most important commands, if you don’t already know them:

  • man — almost the most important command: it displays the manual for the command whose name you write after it;
  • ls (from the word “list”) — list the files in the current directory, analogous to the Windows dir command. The most important flags: -a (all) — show hidden files, -l (long) — show details, -h (human) — show sizes in “human” units rather than bytes. You can write all the flags at once: ls -lha;
  • cd (change directory) — change the directory. After it, specify the folder you want to go to;
  • pwd (print working directory) — find out the current path;
  • cp (copy) — copy a file. Next, specify what to copy and where to;
  • mv (move) — move a file. Again, specify which one and where to;
  • rm (remove) — delete a file. If you are deleting a directory, add the -r (recursive) flag to delete all the subdirectories inside, the subdirectories inside them, and so on;
  • chmod and chown — change a file’s permissions or its owner;
  • cat (concatenate) — designed to concatenate files, but often used simply to print the contents of a text file. Just write its name after cat;
  • less — if a file is long, it is convenient to scroll through it. That is what less is for;
  • head and tail — with the -n flag, show the given number of lines from the beginning (head) or end (tail) of the specified file;
  • grep — search lines of text for a substring or regular expression;
  • find — search for files;
  • mkdir (make directory) — create a directory;
  • touch — create an empty file. Just specify its name;
  • sudo — run the following command as the superuser;
  • df (disk free) — see how much free space is left on the disks. We recommend writing df -h, by analogy with ls -h;
  • du (disk usage) — find out how much space a directory takes up. It also has the -h flag;
  • ps (processes) — view the list of processes you have started and their IDs;
  • kill and killall — terminate a process, by its ID (kill) or by name (killall).
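A short practice session stringing several of these commands together (all names are arbitrary; everything happens in a scratch directory):

```shell
cd "$(mktemp -d)"           # scratch directory to experiment in
mkdir docs                  # create a directory
touch docs/note.txt         # create an empty file inside it
cp docs/note.txt copy.txt   # copy the file
mv copy.txt renamed.txt     # rename (move) the copy
ls -lha                     # inspect the result, with "human" sizes
pwd                         # confirm where we are
rm -r docs                  # remove the directory and everything in it
```

Running each line yourself and watching ls output change is the fastest way to make these commands stick.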

A few important network commands:

  • ping — check whether a host is reachable;
  • nslookup — look up DNS information about a host;
  • traceroute — trace the path packets take to a host;
  • netstat — information about open ports and connections;
  • whois — information about domain registration.

In addition, Linux usually has several utilities that will make your life much easier. If there aren’t any, then it’s worth installing them:

  • git — the most popular version control system, created, like the Linux kernel, by Linus Torvalds;
  • nano — the simplest text editor that runs in a terminal;
  • unzip and unrar — we think you can guess what they are for;
  • curl — for making web requests;
  • wget — for downloading files;
  • htop — shows the system load and a list of processes.

Important: you can usually exit programs that do not close themselves by pressing Q. To interrupt the work — Ctrl-C. And to exit vim, if you opened it by accident, type the sequence :q! and press Enter.

I/O and pipes

Most programs running from the command line accept input data and output something. In this case, the output of one program can be directed to the input of another and thus achieve some more complex goal or automate some process. Let’s take a closer look.

The standard input stream to which the keyboard is “bound” by default is called standard input (stdin). The standard output stream is called standard output (stdout). There is also a separate output stream dedicated exclusively to error messages. It is called standard error, or stderr. By default, a monitor is associated with these two output streams.

Application and command streams can be redirected to files or other commands. Since standard I/O streams are designed primarily for exchanging text information, this redirection allows programs to “communicate” with each other.

The simplest example of such communication is passing the standard output (stdout) of one program to the standard input (stdin) of another. This redirection is denoted in Linux by the | symbol and is called a “pipeline” or “pipe.” An entry like command1 | command2 means that the entire standard output of command1, which by default would go to the display, is fed to the standard input of command2. This is the simplest possible pipeline.

You may have already come across the use of pipes in combination with the grep command, designed to filter text data. It works like this:

$ command | grep [options] pattern

where command is the command whose standard output is piped into grep, options are the search options, and pattern is the string or regular expression being searched for.

For example, ls | grep string means: take the listing of the current directory produced by ls and keep only the entries whose names contain the substring string.

It is also convenient to redirect a command’s output to a file. Write ls -lha > list.txt, and you will get the file list.txt with a detailed listing of everything in the current directory.
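Here is a pipe and a redirect working together (the file names are made up for the demonstration):

```shell
cd "$(mktemp -d)"
touch alpha.log beta.txt gamma.log

# Pipe the directory listing into grep, keeping only the .log names,
# then redirect the filtered result into a file
ls | grep '\.log$' > logs.txt
cat logs.txt                 # alpha.log and gamma.log, one per line
```

The same pattern — generate, filter, save — scales to much longer pipelines with more commands in the chain.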

Useful cheat sheets

All the power of Linux commands is in the additional parameters that you can specify. To find out about them, you need to read the help (man), but there are ways to cheat and make your life easier.

  • tldr pages — an abridged version of man, in which meticulous descriptions have been reduced to an absolute minimum (more details);
  • cheat.sh — online database with examples of popular command options (more details);
  • Marker — it’s a similar thing, but offline and with on-the-fly hints (more details);
  • explainshell.com — a service that automatically parses a complex command and explains the meaning of its component parts.

It is impossible to remember all the parameters of all commands, so even avid Linuxoids resort to such tricks (and tirelessly invent new ones)!

File systems

Linux supports various file systems. Any modern Linux installs on ext4 by default and traditionally uses a separate swap partition (analogous to the Windows page file), although many distributions now use a swap file instead. In addition, ext2 and ext3, XFS, and various versions of FAT are usually supported. In Ubuntu and some other distributions, NTFS partitions can be read and written out of the box. Apple’s HFS+ and APFS usually require a separate driver.

To work with a file system (on an internal disk or external media), it must be mounted, and before disconnecting, unmounted. The mount and umount commands are responsible for this. The /etc/fstab file lists the file systems that Linux mounts automatically at boot.
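For illustration, this is roughly what /etc/fstab entries look like. The device names and mount points below are made-up examples, not something to copy verbatim:

```
# <device>   <mount point>  <type>  <options>  <dump>  <pass>
/dev/sda2    /              ext4    defaults   0       1
/dev/sda3    none           swap    sw         0       0
/dev/sdb1    /mnt/backup    ext4    defaults   0       2
```

Each line tells the system what to mount, where, with which file system type, and with which options; swap has no mount point, hence the none.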

FUSE (Filesystem in Userspace) deserves special mention: unlike the kernel-level support above, it works in user space. With FUSE you can mount file systems that are not supported out of the box, or even turn a cloud service’s API into something resembling a file system. Read more in the article «Everything is a file! We mount Git repositories, FTP and SSH resources, ZIP archives, torrents, magnet links and much more».

Directory structure

Any OS has directories with system files that are best left alone without need. But while in Windows the system folders just sit off to the side, in Linux it is the other way around: you work inside an already defined directory structure, and many of its directories can and should be explored.

One of the reasons for this organization is that files in Linux can be not only data on disk, but also ports, processes, and other entities. As you might guess, it’s sometimes very convenient to read and write in them using the same means as when working with regular files.

Slashes “in the wrong direction” are unlikely to confuse you, but it is much more unusual after Windows that the paths are completely virtual and have nothing to do with disks. Data on different paths can be on different partitions, on different media, or even on different computers.

So let’s look at the directory structure that you’ll see in almost any Linux:

  • / — the root folder, or, as it is also called, the root directory — the folder in which all other contents of the file system are stored;
  • /bin (from the word binary) — here are binary executable files with all the basic commands;
  • /boot — the bootloader and the OS kernel live here (the vmlinuz files are exactly that);
  • /dev — files in this folder are ports and devices. By working with these files, applications and drivers can exchange information directly with the hardware. However, some files are not real devices, but virtual ones. For example, the famous /dev/null accepts any information and does nothing with it, while /dev/random generates random numbers;
  • /etc — this folder contains system-wide configuration files (whereas user configuration files are located in each user’s home directory). If you are a system administrator, then you will have to look here often when configuring different programs;
  • /home — this folder contains the home directories of Linux users. For example, if your username is xakep, your home folder will be /home/xakep/;
  • /lib — a folder for storing libraries needed by executable files in bin and sbin folders;
  • /lost+found — files recovered in case of a system failure are saved to this folder;
  • /media — on some systems, an additional directory where all removable media mounted in the system appear. On older systems, it may be called /cdrom;
  • /mnt — a folder containing temporary mount points: file systems are mounted here for temporary use;
  • /opt — contains subdirectories for additional software packages. It is usually used by proprietary software that does not follow the standard Linux file system hierarchy;
  • /proc — a directory with special files that provide information about the system and processes;
  • /root — root superuser’s home directory;
  • /run — gives applications a standard location for files that exist only while the system is running (hence the name), such as sockets and process IDs;
  • /sbin — this folder is similar in purpose to the bin folder. Here are binary executable files, which are usually designed to be run by the root user for system administration purposes;
  • /tmp — default folder for storing temporary files;
  • /srv — contains information about the services provided by the system;
  • /usr — this folder contains applications and files of the system users. In the old Unix systems, this was the equivalent of /home, but then these things were separated. Conditionally: in /usr/ — programs, in /home — all sorts of junk. The directories /usr/bin, /usr/sbin and /usr/lib located here used to have the same purpose as their counterparts at the root, but for user files (while the folders at the root are for files used by the system itself). And then there’s the /usr/local directory, which has its own bin, sbin, and lib! It was once assumed that there would be programs specific to a particular computer, that is, theoretically depending on its hardware. In practice, software gets here for a variety of reasons;
  • /var — from the word variable, that is, something that can change. Backups, caches, libraries, logs, and the like are stored here. One of the important directories is /var/www, where website data is stored if a web server is installed on the machine.
In modern Ubuntu, /bin, /sbin, and /lib are symbolic links to the corresponding directories in /usr

If this directory layout seems a bit confusing to you, don’t worry, that is totally fine! It has evolved over decades, and what has grown has grown. No one is going to simplify it anytime soon, since it is a standard and any change would break compatibility.

Users, file permissions

Linux was originally conceived as a multiuser system, so the separation of files between user profiles is thoroughly organized. A user with limited rights can interact only with the files and directories they are permitted to.

It is important to remember that Linux has a superuser named root, who has full administrative privileges in the operating system, so to speak, the boss of all bosses. Root can create and delete other users’ accounts and change global OS settings. A regular user can run individual commands as root using the sudo command (substitute user and do, literally “substitute the user and execute”). Note that sudo only works if your account is allowed to use it (that is, it is listed in the sudoers configuration), and it normally asks for your own password, not root’s.

Each file in Linux is assigned a set of permissions that determine who can do what with this file. These permissions are indicated by special letters:

  • r (read) — permission to read the file;
  • w (write) — permission to write to a file;
  • x (execute) — permission to run the file;
  • - (dash) — permission has not been set.

It is important to remember that Linux also considers directories to be files, so all the same permissions and restrictions apply to them. However, these permissions would not make much sense if they applied to all users of the operating system. Fortunately, this is not the case: Linux has three categories of users, for each of which you can set your own file permissions:

  • owner — the user who created this file or was assigned as its owner. The owner can be not only a regular account but also the operating system itself or the application that created the file;
  • group — a group of users “linked” to this file. You can find out which groups your account belongs to with the groups <username> console command. The list of all groups registered in the system is usually stored in the /etc/group file;
  • other — everyone who is neither the file’s owner nor a member of its group.

Thus, access permissions to any file or folder can be written as a string consisting of nine characters and having the following format:

rwxrwxrwx

The first three characters define permissions for the file’s owner, the next three for the file’s group, and the last three for everyone else. Permissions always come in exactly this order: read, write, execute, that is, rwx. For example, a designation like rwxrw-r-- means that the owner can do anything with the file, members of its group can read and write it but not execute it (the x permission is not set), and everyone else can only read it.

If these permissions are set on a folder, it means that group users will also be unable to run the files stored in it, and all other users can access the folder’s contents only in read-only mode.

You can view the rights and permissions of files and folders using the ls console command, equipped with the -l key.

To change access permissions, there is the chmod (change mode) command. You don’t even have to spell out all the required permissions by hand: for the lazy, Linux provides numeric notation for standard permission sets. For example, chmod 755 filename assigns the permissions rwxr-xr-x (everyone may read and run; only the owner may edit), chmod 777 filename yields rwxrwxrwx (everyone can do anything), and the “diabolical” chmod 666 filename results in rw-rw-rw- (all users can read and edit the file).
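You can watch the permission string change as you apply the numeric modes (script.sh is an arbitrary example file):

```shell
cd "$(mktemp -d)"
touch script.sh
chmod 755 script.sh   # rwxr-xr-x: owner may edit, everyone may read and run
ls -l script.sh
chmod 644 script.sh   # rw-r--r--: a typical set for plain data files
ls -l script.sh
```

Each octal digit is simply r=4, w=2, x=1 added together, one digit each for owner, group, and others: 7 = rwx, 5 = r-x, 6 = rw-, 4 = r--.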

In modern Linux there are also so-called special permissions, but we will not cover them here: what we have described is enough to start feeling reasonably confident in the system.

Links

There are shortcuts in Windows — we don’t think anyone needs to explain what they are. On Linux, there are two types of links instead: hard links and symbolic links.

A hard link is basically a file name. It’s just that on Linux a file can have several names, and they can live in different directories. So if you create a hard link and then delete the original file, the data will still be available via the link — it is a name no worse than the one you erased!

If you delete the last hard link, the file system will no longer assume that the file exists and will recognize the location where it is located as suitable for recording other information.

Symbolic links are more like standard Windows shortcuts. They contain the path to the target file or directory (hard links to directories are not allowed, so for directories they are the only option), and if the target disappears, the link leads “nowhere”.

Hard links are created with the ln file link command; if you need a symbolic link, add the -s flag.
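A minimal sketch in a scratch directory shows the difference in behavior when the original file is deleted:

```shell
cd "$(mktemp -d)"                  # work in a throwaway directory
echo "hello" > original.txt
ln original.txt hard.txt           # hard link: a second name for the same data
ln -s original.txt soft.txt        # symbolic link: stores the path to the target
rm original.txt                    # remove the original name
cat hard.txt                       # prints: hello  (the data survives via the hard link)
cat soft.txt 2>/dev/null || echo "dangling"   # prints: dangling (the symlink target is gone)
```

The hard link keeps the data alive because it counts as one of the file’s names; the symbolic link only remembers a path that no longer exists.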

Software installation

Of course, no one prevents you from downloading a program as a single binary file and running it. The main thing is not to forget to make it executable! But such self-contained files are rare. Usually, for a program to work, a whole set of things must be installed into the system at once. That is why programs are distributed as packages through repositories.

For example, in Ubuntu, to install a package it is enough to run sudo apt install package. However, it is recommended to first run sudo apt update so that the OS refreshes its package lists and learns about newly released versions.

An important peculiarity of Linux is that after installation a program ends up scattered across different directories: executable files go to one place, graphical resources to another, settings to a third, and so on. At the same time, programs usually share common libraries, which saves a little disk space but sometimes creates awkward situations with library versions.

With such an installation scheme, it is almost useless to try to work out by hand where a program has been installed. If you suddenly need to remove it, run sudo apt remove package, and its files will leave your disk; a follow-up sudo apt autoremove will also clean out the dependencies that no other package uses.
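On a Debian-based system such as Ubuntu, a typical package lifecycle might look like the sketch below (htop is just an example package; these commands need root privileges, so this is an illustration rather than runnable output):

```shell
sudo apt update         # refresh the package lists
sudo apt install htop   # install a package together with its dependencies
dpkg -L htop            # list which files the package scattered where
sudo apt remove htop    # remove the package itself
sudo apt autoremove     # clean out dependencies no other package needs
```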

But Linux is the land of free source code, so building programs from source is a common thing. Think of it as buying a flat-pack kit from IKEA instead of a ready-made armchair. The important difference is that instead of instructions with funny little figures, you get a Makefile — a script for the make program, which assembles everything on autopilot. Along the way it examines the particulars of your system and will warn you if some component is missing (unlike the package manager, which would have installed it itself).

So, let’s say you found the nnn utility on GitHub (a minimalistic file manager that runs in the terminal) and you want to install it from source. You will need to do the following.

  1. Make sure that you have Git in your system. If it’s not there, install it:

    sudo apt install git
  2. Install the dependencies for nnn. Building all of them from source would be overkill, so just run

    sudo apt install pkg-config libncursesw5-dev libreadline-dev
  3. Now it’s time to get nnn from GitHub. This is done with the following command:

    git clone https://github.com/jarun/nnn.git
  4. Go to the downloaded directory: cd nnn.

  5. Write make and press Enter. This command will find the Makefile and execute the compilation instructions.

  6. Write sudo make install — this command will copy the built binaries into the appropriate system directories.

Done! Now you can type nnn from any directory, and the file manager will start.

Building nnn

Note that this was just a demonstration. In practice it is easier and better to install nnn from the repository: then the package manager can update the installed program and remove it cleanly if necessary. Building from source is for when the software is rare or you need the very latest version.

By the way, besides binary packages, repositories also contain source packages. They are built automatically, so you don’t have to mess around with installing the dependencies yourself.

Recently, new packaging systems have been gaining popularity in which programs ship together with all their dependencies and libraries: AppImage, Flatpak, and Snap. This method is less economical with disk space, but more convenient and reliable. Some programs are also convenient to install via Docker, that is, together with a miniature Linux image. But all of that is beyond the scope of today’s article.

Init and systemd

On Unix and Linux, an important role is played by the system initialization process, which is the job of the init program. Ancient Unix versions up to the fifth simply executed a script at startup — consider it an analog of autoexec.bat. When the amount of software grew too large, the notion of runlevels had to be introduced.

While booting, the system moves from one level to the next, and at each transition it runs the scripts from the corresponding /etc/rcX.d/ directory, where X is one of the boot levels:

  • 0 — system is turned off;
  • 1 — single-user mode;
  • 2 — multi-user;
  • 3 — with network support;
  • 5 — full boot (usually with a graphical interface);
  • 6 — reboot.

So, if you add a link to the script in the rc0.d folder, it will be executed every time before shutdown.
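As a hedged sketch — the exact contents vary by distribution, and modern systemd-based setups may keep these directories nearly empty — you can peek at a runlevel directory yourself:

```shell
ls /etc/rc2.d
# Typical entries (your output will differ):
#   S01cron  S01ssh  K01apache2
# S = start the service when entering this level, K = kill (stop) it;
# the two-digit number controls the order in which the scripts run.
```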

/etc/rc* in Ubuntu

This is how the system does everything it is supposed to: for example, it checks the disk after a sudden shutdown, rotates logs, and starts and stops the services running in the background (on Unix and Linux they are called daemons).

In modern Linux, this scheme has been replaced by an even more sophisticated one — systemd. It can also manage devices and network connections and do many other things. In systemd, a configuration file (a unit) is created for every action or service, specifying when and under what conditions something should be started or stopped. You can find them in /lib/systemd/, and work with services using the systemctl command (the older service command still works as a compatibility wrapper).
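A few common systemctl invocations, shown as a sketch (the ssh daemon is just an example service; substitute any service installed on your system, and note that starting and stopping require root):

```shell
systemctl status ssh                 # is the service running? also shows recent log lines
sudo systemctl restart ssh           # restart the service
sudo systemctl enable ssh            # start it automatically at boot
systemctl list-units --type=service  # list all loaded services
```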

At the same time, a system with boot levels is still supported in popular distributions, despite the presence of systemd. Otherwise, there would be compatibility issues.

Other applications

When setting up networking on Linux, you will most likely encounter iptables, the standard firewall tool and front end to the kernel’s Netfilter system. If you have chosen Ubuntu, you can configure it using the convenient ufw utility.
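A minimal ufw session might look like this (a sketch — these commands need root, and you should open your SSH port before enabling the firewall on a remote machine):

```shell
sudo ufw allow 22/tcp    # permit incoming SSH connections
sudo ufw enable          # turn the firewall on
sudo ufw status verbose  # review the active rules
```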

Another very useful utility on Linux is cron. It does roughly the same thing as the Task Scheduler in Windows. On new systems cron works side by side with the already mentioned (and much more sophisticated) systemd, but using cron is much simpler. Just run crontab -e, and a text editor will open a file listing the programs that run at specified times. The format is usually described right in that file.
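For illustration, a crontab file fragment might look like this (the script path is hypothetical; the five fields are minute, hour, day of month, month, and day of week):

```
# m  h  dom mon dow  command
30   2  *   *   *    /home/user/backup.sh                    # every day at 02:30
*/10 *  *   *   *    /usr/bin/uptime >> /tmp/uptime.log      # every 10 minutes
```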

www

The crontab.guru website makes working with the crontab format much easier.

Conclusions

One could go on talking about Linux almost indefinitely, but we’ll leave that to other articles. Our goal was not to replace serious books on Linux with this text. We just wanted to give the necessary minimum to those who want to start learning in practice as soon as possible. We hope we have provided the basic knowledge that will keep you from getting lost in the dark corners of Linux!
