- Versions
- Packages
- Users
- Groups
- Network
- CPU temperature
- Web-servers
- Set up a new server for .NET Core deployment
- Files and folders
- List files
- Get a list of distinct extensions in a folder
- Get the size of a directory
- Create a directory and open it
- Do something based on directory existence
- Sync folders
- Copy one folder contents to another
- Deleting
- Execute some command for all the files in the folder
- Preview ZIP archive contents
- Count
- Get absolute path from relative path
- Get the path folder and name
- Get the last section of path
- Fix files permissions
- Get numerical chmod value
- Copy files based on a list from text file
- Remove duplicate lines from the file
- Watch the progress of a packing operation
- Get access rights for every section in the path
- Symlinks
- Pack several files into individual archives with 7z
- Mounting USB drives
- Build something from source
- Get return code
- systemd
- Run commands in background
- GRUB
- Set time zone
- Cron
- screen
- Define a variable using configure
- List only files from ZIP archive contents
- Most frequent commands from Bash history
- fail2ban
- xargs
- Making a file out of a template by substituting variables
- Installing newer JDK
- Swap and cache
Best way:
$ lsb_release -a
Good way:
$ cat /etc/*-release
Kernel and stuff:
$ uname -a
More stuff:
$ cat /proc/version
$ glxinfo | grep "OpenGL version"
$ sudo apt install SOMETHING
If you need to reinstall the package and restore its original configs:
$ sudo apt install --reinstall \
-o Dpkg::Options::="--force-confask,confnew,confmiss" \
SOMETHING
$ apt list --installed
$ sudo apt update
$ sudo apt upgrade
$ sudo apt autoremove
$ sudo apt install update-manager-core
Switch Prompt from lts to normal:
$ sudo nano /etc/update-manager/release-upgrades
Start a screen session and run:
$ do-release-upgrade
$ sudo apt search ninja-build
Sorting... Done
Full Text Search... Done
ninja-build/focal 1.10.0-1build1 amd64
small build system closest in spirit to Make
List all the available package versions:
$ apt list -a ninja-build
Listing... Done
ninja-build/focal 1.10.0-1build1 amd64
or:
$ apt-cache policy ninja-build
ninja-build:
Installed: (none)
Candidate: 1.10.0-1build1
Version table:
1.10.0-1build1 500
500 http://archive.ubuntu.com/ubuntu focal/universe amd64 Packages
For example, we want to delete LibreOffice. All of its package names start with libreoffice, so:
$ sudo apt remove --purge libreoffice*
$ sudo apt clean
$ sudo apt autoremove
Or, in a more civilized way, let's remove minidlna and its dependencies:
$ sudo apt remove --auto-remove minidlna
Yeah, fuck Snap:
$ sudo systemctl stop snapd
$ sudo systemctl disable snapd
$ sudo apt autoremove --purge snapd gnome-software-plugin-snap
$ rm -rf ~/snap
$ sudo rm -rf /var/snap /var/cache/snapd /usr/lib/snapd
If it gives you something like:
rm: cannot remove '/var/snap/firefox/common/host-hunspell/en_US.aff': Read-only file system
rm: cannot remove '/var/snap/firefox/common/host-hunspell/en_US.dic': Read-only file system
then do that first:
$ sudo umount /var/snap/firefox/common/host-hunspell
and then repeat.
https://ubuntuhandbook.org/index.php/2022/04/install-firefox-deb-ubuntu-22-04/
$ sudo add-apt-repository ppa:mozillateam/ppa
$ sudo apt update
$ sudo apt install -t 'o=LP-PPA-mozillateam' firefox
$ sudo nano /etc/apt/preferences.d/mozillateamppa
Package: firefox*
Pin: release o=LP-PPA-mozillateam
Pin-Priority: 501
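To check that the pin actually applies:
$ apt-cache policy firefox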
$ sudo dpkg -i /path/to/somePackage.deb
$ dpkg -l
or:
$ dpkg --get-selections | grep -v deinstall
$ sudo dpkg -r some-package
Unpack it somewhere:
$ dpkg-deb -R ./some-package.deb ./tmp
Edit files, add/remove files and pack it back:
$ dpkg-deb -b ./tmp ./some-package-new.deb
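To double-check the rebuilt package before installing it, you can inspect its metadata and contents:
$ dpkg-deb -I ./some-package-new.deb
$ dpkg-deb -c ./some-package-new.deb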
$ cat /etc/passwd | awk -F ':' '{ print $1 }'
With a home directory, and change his password:
$ useradd -m vasya
$ passwd vasya
Another option, if you want a system user for some service needs:
$ adduser \
--system \
--group \
--disabled-password \
--home /home/vasya \
vasya
If later you want to be able to log in as this user:
$ sudo usermod -s /bin/bash vasya
$ sudo --login --user vasya
$ passwd
$ lastlog
Say you have a user teamcity and you want to allow it to restart a certain systemd service. Edit the following file as root:
$ sudo nano /etc/sudoers.d/teamcity
%teamcity ALL= NOPASSWD: /bin/systemctl restart some.service
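Before relying on it, it's worth validating the syntax of the new sudoers file:
$ sudo visudo -c -f /etc/sudoers.d/teamcity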
Now log in as that user and test the new rights:
root@somehost:~# sudo --login --user teamcity
teamcity@somehost:~$ sudo systemctl restart some.service
$ groups
$ cut -d: -f1 /etc/group | sort
$ groupadd NEW-GROUP
$ usermod -a -G GROUP-NAME USER-NAME
$ deluser USER-NAME GROUP-NAME
$ grep NEW-GROUP /etc/group
$ chgrp -R NEW-GROUP /etc/SOME-FOLDER/
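To check which groups a particular user belongs to:
$ id -nG USER-NAME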
Check what network you have:
# sudo lshw -C network
To turn off Wi-Fi, use its logical name:
$ sudo ifconfig wlp4s0 down
$ sudo systemd-resolve --flush-caches
$ sudo nano /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    ens18:
      dhcp4: true
      addresses: [10.200.16.96/24]
      gateway4: 10.200.16.1
      nameservers:
        addresses: [10.200.16.110]
$ sudo netplan apply
You will lose connection and will need to reconnect.
For example, if the host machine has changed its network and you need to update the IP address in your guest VM:
$ dhclient -v -r
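and then, if the address doesn't come back on its own, request a new lease:
$ dhclient -v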
$ sudo netstat -lntup
or
$ sudo ss -lntup
If your host is inside some cloud provider infrastructure (for example, Oracle Cloud), aside from creating routes to external internet from their subnet you might also need to open ports on the host:
$ sudo firewall-cmd --zone=public --permanent --add-port=80/tcp
$ sudo firewall-cmd --reload
For example, you want to allow Grafana to bind to port 80 without running it as root:
$ setcap 'cap_net_bind_service=+ep' /usr/sbin/grafana-server
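To verify that the capability was applied (getcap usually comes in the same package as setcap):
$ getcap /usr/sbin/grafana-server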
All current rules:
$ iptables -L
Just incoming rules:
$ iptables -L INPUT
Block incoming requests from some IP:
$ iptables -A INPUT -s 178.128.230.58 -j DROP
Delete a rule:
$ iptables -D INPUT -s 178.128.230.58 -j DROP
Install this thing:
$ sudo apt install iptables-persistent
and then either:
$ sudo /etc/init.d/iptables-persistent save
$ sudo /etc/init.d/iptables-persistent reload
or:
$ sudo netfilter-persistent save
$ sudo netfilter-persistent reload
If saved rules (/etc/iptables/rules.v4) are not restored after reboot, then perhaps the service loading order is wrong. Add iptables.service and ip6tables.service both to Wants and Before in /usr/lib/systemd/system/netfilter-persistent.service:
...
Wants=network-pre.target systemd-modules-load.service local-fs.target iptables.service ip6tables.service
Before=network-pre.target shutdown.target iptables.service ip6tables.service
...
$ nmap -sP 192.168.1.0/24
$ cat /sys/class/thermal/thermal_zone*/temp
or, more usefully:
$ paste <(cat /sys/class/thermal/thermal_zone*/type) <(cat /sys/class/thermal/thermal_zone*/temp) | column -s $'\t' -t | sed 's/\(.\)..$/.\1°C/'
$ curl -s -I example.com|awk '$1~/Server:/ {print $2}'
Log files are split every week and rotated every 8 weeks (2 months).
$ sudo nano /etc/logrotate.d/nginx
/var/log/nginx/*.log {
    weekly
    missingok
    rotate 8
    maxage 90
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
    sharedscripts
    prerotate
        if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
            run-parts /etc/logrotate.d/httpd-prerotate; \
        fi \
    endscript
    postrotate
        invoke-rc.d nginx rotate >/dev/null 2>&1
    endscript
}
here:
- weekly - switch to a new log file each week
- rotate 8 - number of files to keep based on the rotation interval, so here it's 8 weeks
- maxage 90 - number of days to keep files regardless of the rotation value, so here it's 90 days
After editing the file:
$ sudo kill -USR1 $(cat /var/run/nginx.pid)
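You can also dry-run the new logrotate config to make sure it parses correctly, without actually rotating anything:
$ sudo logrotate -d /etc/logrotate.d/nginx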
Get the htpasswd utility:
$ sudo apt install apache2-utils
Add a new user/password:
$ sudo htpasswd -c /etc/nginx/.htpasswd someusername
And configure your website to use this file for Basic Authentication.
location / {
    try_files $uri $uri/ =404;
    auth_basic "restricted area";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
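Then check the config and reload NGINX:
$ sudo nginx -t
$ sudo nginx -s reload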
<VirtualHost *:8998>
    ...
    <Directory "/var/www/website">
        AuthType Basic
        AuthName "Restricted Content"
        AuthUserFile /etc/apache2/.htpasswd
        Require valid-user
    </Directory>
</VirtualHost>
Install .NET Core: https://www.microsoft.com/net/download/linux-package-manager/ubuntu16-04/sdk-current
Install NGINX and edit the config:
apt install nginx
nano /etc/nginx/sites-available/default
server {
    listen 80;
    listen [::]:80;
    location / {
        proxy_pass http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
nginx -s reload
Install MySQL:
apt install mysql-server
mysql_secure_installation
Check the root user authentication and set the password if it's not set:
SELECT host, user, authentication_string FROM mysql.user;
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'YOUR-PASSWORD';
Create new .NET Core Web API project for test:
mkdir -p /var/www/test
cd /var/www/test
dotnet new webapi
chown -R www-data:www-data /var/www/
Comment out the app.UseHttpsRedirection(); line in Startup.cs.
dotnet run
Open http://YOUR-IP/api/values
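Or check it from the command line:
$ curl -i http://YOUR-IP/api/values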
$ ls -lah
drwxr-xr-x 12 vasya root 384B Mar 5 20:45 ./
drwxr-xr-x 138 vasya root 4.3K Mar 5 20:38 ../
-rw-r--r-- 1 vasya root 136K Dec 31 2017 4k-nok-mistake.jpg
-rw-r--r-- 1 vasya root 105K Jan 1 2018 fisherman-friend.jpg
-rw-r--r-- 1 vasya root 188K Jan 1 2018 import-calculator.png
-rw-r--r-- 1 vasya root 134K Jan 1 2018 mobile-internet.png
-rw-r--r-- 1 vasya root 88K Jan 3 2018 nordea-feil.png
-rw-r--r-- 1 vasya root 42K Jan 3 2018 online-payment-fail.png
-rw-r--r-- 1 vasya root 92K Jan 3 2018 ruter-app-fail.png
-rw-r--r-- 1 vasya root 481K Dec 31 2017 ruter-fail.JPG
-rw-r--r-- 1 vasya root 222K Jan 3 2018 ruter-transports.png
-rw-r--r-- 1 vasya root 49K Dec 29 2010 tvoe-litso.jpg
$ ls -A1
Reverse order:
$ ls -A1r
$ ls -lah | awk '{print $9 " | " $5}'
|
./ | 384B
../ | 4.3K
4k-nok-mistake.jpg | 136K
fisherman-friend.jpg | 105K
import-calculator.png | 188K
mobile-internet.png | 134K
nordea-feil.png | 88K
online-payment-fail.png | 42K
ruter-app-fail.png | 92K
ruter-fail.JPG | 481K
ruter-transports.png | 222K
tvoe-litso.jpg | 49K
$ find . | sed -e "s/[^-][^\/]*\// |/g" -e "s/|\([^ ]\)/|-\1/"
You have files named image01.png, image02.png, image09.png, image10.png, image11.png and so on. You need to list them sorted by name, respecting the numeric order, and perform some action on each, for example update the modification timestamp. You can do it with a pipe and sort:
$ for f in `ls ./*.png | sort -V`; do touch "$f" && sleep 1; done
or directly with ls:
$ for f in `ls -v ./*.png`; do touch "$f" && sleep 1; done
If you have something like:
$ tree .
.
├── 01\ Jan\ 1999
│ └── 01_01_01.mp3
├── 02\ Feb\ 1999
│ ├── 01_02_01.mp3
│ ├── 01_02_02.mp3
│ ├── 01_02_03.mp3
│ ├── 01_02_04.mp3
│ └── 01_02_05.mp3
├── 03\ Aug\ 1999
│ └── 01_03_01.mp3
├── 04\ Aug\ 1999
│ ├── 01_04_01.mp3
│ ├── 01_04_02.mp3
│ ├── 01_04_03.mp3
│ ├── 01_04_04.mp3
│ ├── 01_04_05.mp3
│ └── 01_04_06.mp3
├── 05\ Mar\ 2000
│ └── 01_05_01.mp3
├── 06\ Apr\ 2000
│ ├── 01_06_01.mp3
Then you can flatten this structure like so:
$ find . ! -type d -exec gls -1v {} +
'./01 Jan 1999/01_01_01.mp3'
'./02 Feb 1999/01_02_01.mp3'
'./02 Feb 1999/01_02_02.mp3'
'./02 Feb 1999/01_02_03.mp3'
'./02 Feb 1999/01_02_04.mp3'
'./02 Feb 1999/01_02_05.mp3'
'./03 Aug 1999/01_03_01.mp3'
'./04 Aug 1999/01_04_01.mp3'
'./04 Aug 1999/01_04_02.mp3'
'./04 Aug 1999/01_04_03.mp3'
'./04 Aug 1999/01_04_04.mp3'
'./04 Aug 1999/01_04_05.mp3'
'./04 Aug 1999/01_04_06.mp3'
'./05 Mar 2000/01_05_01.mp3'
'./06 Apr 2000/01_06_01.mp3'
$ find . -type f -exec file --mime {} \; | grep "charset=utf-16"
The file tree:
$ tree .
└── wasm
├── cmake
│ ├── pdfLayerConfig-release.cmake
│ └── pdfLayerConfig.cmake
├── include
│ ├── Addons
│ │ └── PdfLayer
│ │ └── somePDF.h
│ └── pdfium
│ ├── fpdf_annot.h
│ ├── fpdf_attachment.h
│ ├── fpdf_catalog.h
│ ├── fpdf_dataavail.h
│ ├── fpdf_doc.h
│ ├── fpdf_edit.h
│ ├── fpdf_ext.h
│ ├── fpdf_flatten.h
│ ├── fpdf_formfill.h
│ ├── fpdf_fwlevent.h
│ ├── fpdf_javascript.h
│ ├── fpdf_ppo.h
│ ├── fpdf_progressive.h
│ ├── fpdf_save.h
│ ├── fpdf_searchex.h
│ ├── fpdf_signature.h
│ ├── fpdf_structtree.h
│ ├── fpdf_sysfontinfo.h
│ ├── fpdf_text.h
│ ├── fpdf_thumbnail.h
│ ├── fpdf_transformpage.h
│ └── fpdfview.h
├── lib
│ ├── libPdfLayer.a
│ └── libpdfium.a
└── share
└── pdfium
├── PDFiumConfig-release.cmake
├── PDFiumConfig.cmake
├── copyright
├── vcpkg.spdx.json
└── vcpkg_abi_info.txt
The flattened list:
$ find . ! -type d -exec ls -L1 {} +
./wasm/cmake/pdfLayerConfig-release.cmake
./wasm/cmake/pdfLayerConfig.cmake
./wasm/include/Addons/PdfLayer/somePDF.h
./wasm/include/pdfium/fpdf_annot.h
./wasm/include/pdfium/fpdf_attachment.h
./wasm/include/pdfium/fpdf_catalog.h
./wasm/include/pdfium/fpdf_dataavail.h
./wasm/include/pdfium/fpdf_doc.h
./wasm/include/pdfium/fpdf_edit.h
./wasm/include/pdfium/fpdf_ext.h
./wasm/include/pdfium/fpdf_flatten.h
./wasm/include/pdfium/fpdf_formfill.h
./wasm/include/pdfium/fpdf_fwlevent.h
./wasm/include/pdfium/fpdf_javascript.h
./wasm/include/pdfium/fpdf_ppo.h
./wasm/include/pdfium/fpdf_progressive.h
./wasm/include/pdfium/fpdf_save.h
./wasm/include/pdfium/fpdf_searchex.h
./wasm/include/pdfium/fpdf_signature.h
./wasm/include/pdfium/fpdf_structtree.h
./wasm/include/pdfium/fpdf_sysfontinfo.h
./wasm/include/pdfium/fpdf_text.h
./wasm/include/pdfium/fpdf_thumbnail.h
./wasm/include/pdfium/fpdf_transformpage.h
./wasm/include/pdfium/fpdfview.h
./wasm/lib/libPdfLayer.a
./wasm/lib/libpdfium.a
./wasm/share/pdfium/PDFiumConfig-release.cmake
./wasm/share/pdfium/PDFiumConfig.cmake
./wasm/share/pdfium/copyright
./wasm/share/pdfium/vcpkg.spdx.json
./wasm/share/pdfium/vcpkg_abi_info.txt
$ find . -type f | perl -ne 'print $1 if m/\.([^.\/]+)$/' | sort | uniq -c | sort -n
1 gitattributes
1 md
7 mp4
8 PNG
13 sample
89 jpg
158 png
There is also a variant without Perl:
$ find . -type f | rev | cut -d. -f1 | rev | tr '[:upper:]' '[:lower:]' | sort | uniq --count | sort -rn
but it won't work on Windows in Git BASH, because there is no rev there.
$ du -hs /path/to/directory
- -h - human-readable size
- -s - summary, shows the total size only for that directory, otherwise it will show it for all the child ones too
$ mkdir ololo && cd "$_"
- $_ - a special parameter that holds the last argument of the previous command
$ [ -d "somedir" ] && echo "directory exists" || echo "directory does not exist"
For example, when you need to restore NGINX config from a backup:
$ tree etc/
etc/
`-- nginx
|-- nginx.conf
|-- sites-available
| |-- default
| `-- protvshows
`-- sites-enabled
`-- protvshows -> /etc/nginx/sites-available/protvshows
$ mv etc/ /
mv: cannot move 'etc/' to '/etc': Directory not empty
$ rsync -a etc/ /etc/
Given this:
$ tree .
├── a
│ ├── another
│ │ └── thing.txt
│ └── some.txt
└── b
├── another
│ └── different.txt
└── ololo.txt
To copy the contents of a to b:
$ cp -an ./a/* ./b/
But if, for whatever reason, you can't use cp, or if it behaves differently on other systems/environments, you could use rsync or some other similar tool. And if none of those are available either, there is this trick with tar (which should be present on any system):
$ (cd ./a && tar -c .) | (cd ./b && tar -xf -)
$ tree .
├── a
│ ├── another
│ │ └── thing.txt
│ └── some.txt
└── b
├── another
│ ├── different.txt
│ └── thing.txt
├── ololo.txt
└── some.txt
For example, delete all .php files from the folder (and all the subfolders):
$ find . -type f -name "*.php" -exec rm {} +
$ find . -type d -empty -delete
For example, convert line endings with dos2unix:
$ find . -type f -print0 | xargs -0 dos2unix
$ less archive.zip
or
$ unzip -l archive.zip | tail -10
Folders:
On the current level only:
$ find . -mindepth 1 -maxdepth 1 -type d | wc -l
Files recursively:
$ find . -type f -name '*.log' -printf x | wc -c
Files non-recursively:
$ find . -maxdepth 1 -type f -name '*.log' -printf x | wc -c
$ cd ~
$ readlink -f .bash_profile
/home/USERNAME/.bash_profile
$ dirname /var/www/html/index.html
/var/www/html
$ basename /var/www/html/index.html
index.html
If you need just the last section:
$ echo "/var/www/html/index.html" | rev | cut -d '/' -f 1 | rev
index.html
or, in case of Git BASH on Windows, where there is no rev:
$ echo "/var/www/html/index.html" | tr '/' '\n' | tail -n1
index.html
And if you need the parent folder of this last section:
$ echo "/var/www/html/index.html" | rev | cut -d '/' -f 2 | rev
html
or:
$ dirname "/var/www/html/index.html" | tr '/' '\n' | tail -n1
html
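Both parts can also be extracted with plain Bash parameter expansion, without spawning any external tools:
$ p="/var/www/html/index.html"
$ echo "${p##*/}"
index.html
$ d="${p%/*}" && echo "${d##*/}"
html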
$ find /home/user -type d -print0 | xargs -0 chmod 0775
$ find /home/user -type f -print0 | xargs -0 chmod 0664
$ stat -c %a ~/.ssh/github
600
Flat structure (will fail if there are files with the same name in different sub-folders):
$ cp $(<list.txt) /path/to/destination/folder
Preserving the folder structure and succeeding even if there are missing files:
$ cp --parents $(<list.txt) /path/to/destination/folder || :
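Both variants will break on file names with spaces, because the $(<list.txt) expansion gets split on whitespace. A more defensive sketch reads the list line by line:
$ while IFS= read -r f; do cp --parents "$f" /path/to/destination/folder; done < list.txt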
Note that the output will be sorted:
$ sort -u ./file-with-duplicates.txt > ./permutted-file.txt
Also note that you cannot redirect output to the same file - it will be erased. If you'd like to put results into the same file, then do this:
$ sort -u -o ./some-file.txt ./some-file.txt
or a shorter version:
$ sort -u -o ./some-file.txt{,}
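If the original line order must be preserved, there is a common awk idiom that removes duplicates without sorting:
$ awk '!seen[$0]++' ./file-with-duplicates.txt > ./deduplicated-file.txt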
Packing and compression:
$ tar cf - /path/to/folder/to/pack -P | pv -s $(du -sb /path/to/folder/to/pack | awk '{print $1}') | gzip > archive.tar.gz
Only packing, no compression:
$ tar -c /path/to/folder/to/pack | pv -s $(du -sb /path/to/folder/to/pack | awk '{print $1}') > archive.tar
alternative:
$ tar cf - /path/to/folder/to/pack -P | pv -s $(du -sb /path/to/folder/to/pack | awk '{print $1}') > archive.tar
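The reverse also works, for example watching the progress of unpacking:
$ pv ./archive.tar.gz | tar -xzf -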
$ namei -mov /home/user/some/certificate.pem
Absolute:
$ pwd
/home/vasya/Downloads
$ ln -s /home/vasya/programs/some/executable ../bin/executable
Relative:
$ pwd
/home/vasya/Downloads
$ ln -sr ../programs/some/executable ../bin/executable
On Mac OS, for -r you'll need to use gln (GNU ln) instead of ln.
$ cd /some/path/
$ ls -L1 ./*.vul
./ME1MS03_00010H2O-1.vul
./ME1MS03_00010H2O-2.vul
./ME1MS03_00010H2O-3.vul
./ME1MS03_00010H2O-4.vul
./ME1MS03_00010H2O-5.vul
...
$ time for f in ./*.vul; do 7za a "./${f%.*}.7z" "$f"; done
...
real 59m51.181s
$ du -hc ./*.vul
6.6G ./ME1MS03_00010H2O-1.vul
6.2G ./ME1MS03_00010H2O-2.vul
6.4G ./ME1MS03_00010H2O-3.vul
6.3G ./ME1MS03_00010H2O-4.vul
6.9G ./ME1MS03_00010H2O-5.vul
...
232G total
$ du -hc ./*.7z
3.9G ./ME1MS03_00010H2O-1.7z
3.6G ./ME1MS03_00010H2O-2.7z
3.7G ./ME1MS03_00010H2O-3.7z
3.6G ./ME1MS03_00010H2O-4.7z
3.9G ./ME1MS03_00010H2O-5.7z
...
129G total
You can try setting a different compression level with -mx3 (from 0 to 9, where 9 is the slowest and best compression), but actually it does just fine with the default level.
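For example, the same loop with a faster (lower) compression level:
$ for f in ./*.vul; do 7za a -mx3 "./${f%.*}.7z" "$f"; done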
Suppose you have an NTFS-formatted external HDD. Find out its "path" (/dev/sda1) and:
$ sudo nano /etc/fstab
/dev/sda1 /media/hdd ntfs-3g defaults 0 0
But media can be discovered with different paths from time to time, so it's more reliable to use UUID or labels:
$ sudo blkid
/dev/mmcblk0p1: LABEL_FATBOOT="boot" LABEL="boot" UUID="5203-DF71" TYPE="vfat" PARTUUID="6c526e13-04"
/dev/mmcblk0p2: LABEL="rootfs" UUID="2ab3f8e1-7dc6-41f5-b0db-dd5959d54d4e" TYPE="ext4" PARTUUID="6c586e13-02"
/dev/sda1: LABEL="some" UUID="581e681f-9d3c-4945-b459-eb5086d3002b" TYPE="ext4" PARTUUID="6664625e-01"
/dev/sdb1: LABEL="another" UUID="34E9-3319" TYPE="exfat" PARTUUID="bf87c135-03"
/dev/mmcblk0: PTUUID="6c596e14" PTTYPE="dos"
We need the USB drives with labels some and another, but the latter has an exfat filesystem, so add support for it first:
$ sudo apt install exfat-fuse exfat-utils
And then:
$ sudo mkdir /media/some
$ sudo mkdir /media/another
$ sudo nano /etc/fstab
LABEL=some /media/some ext4 defaults,nofail 0 0
LABEL=another /media/another exfat defaults,nofail 0 0
$ sudo mount -a
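To confirm that both drives got mounted:
$ findmnt /media/some
$ findmnt /media/another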
https://unix.stackexchange.com/a/83157/254512
$ sudo fdisk -l
$ sudo umount /dev/sdb
$ sudo eject -s /dev/sdb
where sdb is your disk.
An example of building glibc - this one is recommended to be installed into a different directory than the default one, as otherwise it may corrupt the system.
Get sources (either clone or unpack the archive) and then:
mkdir build && cd "$_"
../glibc/configure --prefix=/opt/glibc-2.28
make -j4
sudo make install
And then you can refer to it with LD_LIBRARY_PATH=/opt/glibc-2.28/lib/.
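For example (your-program here is just a placeholder for whatever binary needs the newer glibc):
$ LD_LIBRARY_PATH=/opt/glibc-2.28/lib/ ./your-program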
Say you have some Python script and you want to get its return/exit code:
python some.py
returnCode=$?
echo "Exit code: $returnCode"
Create a config for the new service:
$ nano /etc/systemd/system/some.service
Specify the command, environment and user:
[Unit]
Description=some
[Service]
WorkingDirectory=/var/www/some/
ExecStart=/usr/bin/dotnet /var/www/some/some.dll
Restart=always
RestartSec=10
SyslogIdentifier=kestrel-some
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Production
[Install]
WantedBy=multi-user.target
Enable and launch it:
$ systemctl enable some.service
$ systemctl start some.service
$ systemctl status YOUR-SERVICE.service
$ journalctl -u YOUR-SERVICE.service
Navigation:
- f - scroll one page down
- g - scroll to the first line
- SHIFT + g - scroll to the last line
Get last 10 lines from the service journal:
$ journalctl --unit=YOUR-SERVICE.service -n 10 --no-pager
https://andreaskaris.github.io/blog/linux/setting-journalctl-limits/
$ sudo nano /etc/systemd/journald.conf
[Journal]
# ...
SystemMaxUse=111M
#SystemKeepFree=5G
SystemMaxFileSize=11M
# ...
MaxFileSec=1month
# ...
$ sudo systemctl restart systemd-journald.service
$ sudo systemctl restart YOUR-SERVICE.service
$ sudo systemctl daemon-reload
$ service --status-all
Add & to the end of the command in order to run it in the background:
$ ping ya.ru >> ping.txt &
To see the list of running jobs:
$ jobs
[1]+ Running ping ya.ru >> ping.txt &
To stop it by ID:
kill %1
Or bring it to the foreground (also by ID):
fg 1
And stop it as usual with CTRL + C.
$ sudo nano /etc/default/grub
Set the default option; enumeration starts from 0. To be sure, check the list in the boot menu.
$ sudo update-grub
You can also delete unwanted items from /boot/grub/grub.cfg.
$ sudo dpkg-reconfigure tzdata
$ crontab -e
# run at 23:01 every 3 days
1 23 */3 * * /root/backup.sh > /dev/null 2>&1
# run at 01:01 on the 1st of every month
1 1 1 * * certbot renew
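To check what is currently scheduled for the user:
$ crontab -l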
$ systemctl restart cron.service
$ sudo nano /etc/rsyslog.d/50-default.conf
# uncomment
cron.* /var/log/cron.log
$ sudo systemctl restart rsyslog.service
When you need to run some long process and you're worried that your SSH connection might break, the solution is to start a screen session, which you can detach from and reattach to at any moment. That is especially useful when you do system upgrades.
$ screen -S updating
Do your stuff, run the upgrading process, whatever.
You can detach from the session by pressing Ctrl + A, then D.
To attach back to it:
$ screen -r
But if you have several screen sessions, then you might need to list them first:
$ screen -list
There are screens on:
27734.updating (01/22/20 12:16:16) (Detached)
27718.pts-0.283746 (01/22/20 12:14:32) (Detached)
27706.pts-0.283746 (01/22/20 12:14:00) (Detached)
3 Sockets in /run/screen/S-root.
And then reattach using the session ID:
$ screen -r 27734
To close a session:
$ screen -XS 27706 quit
$ screen -dmS ydl yt-dlp https://youtu.be/dQw4w9WgXcQ
here:
- -d -m - start screen in detached mode
- -S ydl - name the session ydl, not required
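Later you can reattach to it by that name:
$ screen -r ydl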
At some point some motherfucker changed something in the default behavior of systemd, or perhaps motherfucking systemd is the reason, but anyway, you'll be surprised to discover that your screen sessions are fucking killed as soon as you are disconnected from the server, which defeats the whole motherfucking point.
I don't know what exactly from the following returned the normal behavior, but something did:
$ loginctl enable-linger USERNAME
$ ls /var/lib/systemd/linger
and/or:
$ sudo nano /etc/systemd/logind.conf
[Login]
#NAutoVTs=6
#ReserveVT=6
KillUserProcesses=no
#KillOnlyUsers=
KillExcludeUsers=root USERNAME
If the source code allows you to define some variable on configuration step, here's how you can do that:
$ sudo apt install automake autoconf
$ nano configure.ac
AC_DEFINE([SOME_VAR], [9000], [Set some variable to 9000])
$ touch configure.ac
$ ./configure
$ lesspipe /path/to/some.zip | tail --lines=+4 | head --lines=-2 | awk '{print $NF}'
Though it seems to use spaces as separators, so the listing might be incorrect if paths contain spaces.
$ cat ~/.bash_history | sort | uniq -c | sort -n
Block those bastards from brute-forcing your server. A detailed guide: https://www.linode.com/docs/guides/how-to-use-fail2ban-for-ssh-brute-force-protection/.
$ sudo apt install fail2ban
$ sudo nano /etc/fail2ban/jail.local
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
findtime = 667
bantime = 11111
ignoreip = 127.0.0.1
$ sudo systemctl enable fail2ban.service
$ sudo systemctl start fail2ban.service
$ sudo fail2ban-client status
$ sudo fail2ban-client status sshd
$ sudo fail2ban-client set sshd unbanip 1.2.3.4
Use the output of the previous command as argument(s) for the next one in a pipe.
Read a file and pass all of its lines as arguments to a single command:
$ cat ./some.txt
ololo
fuuu
some
another
thing
and
TEH END
$ cat ./some.txt | xargs echo "rrrargh"
rrrargh ololo fuuu some another thing and TEH END
To use every line in the file as a separate argument, i.e. to execute the next command once per line, use a placeholder/replacement:
$ cat ./some.txt | xargs -I {} echo "{} rrrargh"
ololo rrrargh
fuuu rrrargh
some rrrargh
another rrrargh
thing rrrargh
and rrrargh
TEH END rrrargh
Find all the files (except for .pyc) that contain the someuser string and replace certain paths in those files with sed:
$ grep -irnl --exclude \*.pyc -e "someuser" \
| xargs -I {} sed -i 's/\/Users\/someuser\/code\/python\/_venvs\/altaipony\//\/home\/anotheruser\/_venvs\/altaipony\//g' {}
A template file some.template:
Here comes first variable value: "$SOME_VARIABLE".
And lastly, another thing: $ANOTHER_VARIABLE
And those variables can now be substituted with envsubst:
$ export SOME_VARIABLE="some value"
$ export ANOTHER_VARIABLE="something else"
$ envsubst < /path/to/some.template > /path/to/some.txt
or:
$ SOME_VARIABLE="some value" ANOTHER_VARIABLE="something else" envsubst < /path/to/some.template > /path/to/resulting.txt
For example, if you have Java 8 but you need Java 11:
$ java --version
$ echo $JAVA_HOME
$ sudo apt update
$ sudo apt install openjdk-11-jdk
$ sudo update-java-alternatives --list
java-1.11.0-openjdk-amd64 1111 /usr/lib/jvm/java-1.11.0-openjdk-amd64
java-1.8.0-openjdk-amd64 1081 /usr/lib/jvm/java-1.8.0-openjdk-amd64
$ sudo update-alternatives --config java
There are 2 choices for the alternative java (providing /usr/bin/java).
Selection Path Priority Status
------------------------------------------------------------
* 0 /usr/lib/jvm/java-11-openjdk-amd64/bin/java 1111 auto mode
1 /usr/lib/jvm/java-11-openjdk-amd64/bin/java 1111 manual mode
2 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java 1081 manual mode
Press <enter> to keep the current choice[*], or type selection number:
$ nano ~/.bash_profile
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
$ source ~/.bash_profile
$ java --version
$ echo $JAVA_HOME
https://www.digitalocean.com/community/tutorials/how-to-add-swap-space-on-ubuntu-20-04
Check if you already have it enabled:
$ sudo swapon --show
If it outputs nothing, then you don't. This is also the case if you have 0 in the Swap row here:
$ free -h
total used free shared buff/cache available
Mem: 857Mi 406Mi 66Mi 89Mi 384Mi 215Mi
Swap: 0B 0B 0B
If you have 1 GB of RAM on your server, then swap of 1 GB is reasonable (you can make it twice as big, if you want, or if you are building Qt / something massive, then you might need to set it to 4 GB or even more):
# there is also an option of using `sudo fallocate -l 1G /swapfile`, but it isn't recommended: https://askubuntu.com/a/1177620
$ sudo dd if=/dev/zero of=/swapfile bs=1M count=1024 oflag=append conv=notrunc
$ ls -lh /swapfile
-rw-r--r-- 1 root root 1.0G Jan 9 11:15 /swapfile
$ sudo chmod 600 /swapfile
$ ls -lh /swapfile
-rw------- 1 root root 1.0G Jan 9 11:15 /swapfile
$ sudo mkswap /swapfile
Setting up swapspace version 1, size = 1024 MiB (1073737728 bytes)
no label, UUID=43a09269-dc36-4e51-901d-a3450e7413e6
$ sudo swapon /swapfile
$ sudo swapon --show
NAME TYPE SIZE USED PRIO
/swapfile file 1024M 0B -2
$ free -h
total used free shared buff/cache available
Mem: 857Mi 387Mi 83Mi 89Mi 386Mi 235Mi
Swap: 1.0Gi 0B 1.0Gi
It will exist for the current session, but will not persist between reboots, so you need to make it permanent:
$ cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 discard,errors=remount-ro 0 1
LABEL=UEFI /boot/efi vfat umask=0077 0 1
$ echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
$ cat /etc/fstab
LABEL=cloudimg-rootfs / ext4 discard,errors=remount-ro 0 1
LABEL=UEFI /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
Some more settings for swap and cache:
$ cat /proc/sys/vm/swappiness
60
$ cat /proc/sys/vm/vfs_cache_pressure
100
$ sudo sysctl vm.swappiness=10
$ sudo sysctl vm.vfs_cache_pressure=50
These also need to be made permanent to persist between reboots:
$ sudo nano /etc/sysctl.conf
vm.swappiness=10
vm.vfs_cache_pressure=50
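Apply them right away without rebooting:
$ sudo sysctl -p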