diff --git a/docs/aleph.md b/docs/aleph.md index fa547b75908..5458ac82c75 100644 --- a/docs/aleph.md +++ b/docs/aleph.md @@ -592,5 +592,5 @@ Once you have the files that triggered the errors, the best way to handle them i - [Source](https://github.com/alephdata/aleph) - [Docs](https://docs.alephdata.org/) -- [Support chat](https://alephdata.slack.com) +- [Support](https://aleph.discourse.group/) - [API docs](https://redocly.github.io/redoc/?url=https://aleph.occrp.org/api/openapi.json) diff --git a/docs/authentik.md b/docs/authentik.md index 4aab4d72510..315d2d797d8 100644 --- a/docs/authentik.md +++ b/docs/authentik.md @@ -899,6 +899,9 @@ This export can be triggered via the API or the Web UI by clicking the download I've skimmed through the prometheus metrics exposed at `:9300/metrics` in the core and they aren't that useful :( +# [Using the API](https://docs.goauthentik.io/docs/developer-docs/api/) + +There is a [python library](https://pypi.org/project/authentik-client/) # Troubleshooting ## [I can't log in to authentik](https://goauthentik.io/docs/troubleshooting/login/) diff --git a/docs/badblocks.md b/docs/badblocks.md new file mode 100644 index 00000000000..759e5aa823b --- /dev/null +++ b/docs/badblocks.md @@ -0,0 +1,13 @@ +## Check the health of a disk with badblocks + +The `badblocks` command will write and read the disk with different patterns, thus overwriting the whole disk, so you will loose all the data in the disk. + +This test is good for rotational disks as there is no disk degradation on massive writes, do not use it on SSD though. + +WARNING: be sure that you specify the correct disk!! + +```bash +badblocks -wsv -b 4096 /dev/sde | tee disk_analysis_log.txt +``` + +If errors are shown is that all of the spare sectors of the disk are used, so you must not use this disk anymore. Again, check `dmesg` for traces of disk errors. diff --git a/docs/devops/kubectl/kubectl_commands.md b/docs/devops/kubectl/kubectl_commands.md index 84b10d2123e..15bc9668acf 100644 --- a/docs/devops/kubectl/kubectl_commands.md +++ b/docs/devops/kubectl/kubectl_commands.md @@ -239,6 +239,31 @@ kubectl -n exec -- df -ah You may need to use `kubectl get pod -o yaml` to know what volume is mounted where. +### Get the node architecture of the pods of a deployment + +Here are a few ways to check the node architecture of pods in a deployment: + +1. Get the nodes where the pods are running: +```bash +kubectl get pods -l app=your-deployment-label -o wide +``` +This will show which nodes are running your pods. + +2. Then check the architecture of those nodes: +```bash +kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture +``` + +Or you can combine this into a single command: +```bash +kubectl get pods -l app=your-deployment-label -o json | jq -r '.items[].spec.nodeName' | xargs -I {} kubectl get node {} -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture +``` + +You can also check if your deployment is explicitly targeting specific architectures through node selectors or affinity rules: +```bash +kubectl get deployment your-deployment-name -o yaml | grep -A 5 nodeSelector +``` + ## Services ### List services in namespace diff --git a/docs/dragonsweeper.md b/docs/dragonsweeper.md new file mode 100644 index 00000000000..eaa0c7de7e1 --- /dev/null +++ b/docs/dragonsweeper.md @@ -0,0 +1,13 @@ +[DragonSweeper](https://danielben.itch.io/dragonsweeper) is an addictive simple RPG-tinged take on the Minesweeper formula. 
You can [play it for free](https://danielben.itch.io/dragonsweeper) in your browser. + +If you're lost at the beginning start reading the [ArsTechnica blog post](https://arstechnica.com/gaming/2025/02/dragonsweeper-is-my-favorite-game-of-2025-so-far). + +# Tips + +- Use `Shift` to mark numbers you already know. + +# References + +- [Play](https://danielben.itch.io/dragonsweeper) +- [Home](https://danielben.itch.io/dragonsweeper) +- [ArsTechnica blog post](https://arstechnica.com/gaming/2025/02/dragonsweeper-is-my-favorite-game-of-2025-so-far) diff --git a/docs/emojis.md b/docs/emojis.md index 414e276d7fd..7e084b6a3c1 100644 --- a/docs/emojis.md +++ b/docs/emojis.md @@ -17,6 +17,8 @@ Curated list of emojis to copy paste. \\ ٩( ᐛ )و // +(•‿•) + (✿◠‿◠) (/゚Д゚)/ diff --git a/docs/fzf_nvim.md b/docs/fzf_nvim.md new file mode 100644 index 00000000000..26077402e5c --- /dev/null +++ b/docs/fzf_nvim.md @@ -0,0 +1,17 @@ +# Tips + +## [How to exclude some files from the search](https://github.com/junegunn/fzf.vim/issues/453) + +If anyone else comes here in the future and have the following setup + +- Using `fd` as default command: `export FZF_DEFAULT_COMMAND='fd --type file --hidden --follow'` +- Using `:Rg` to grep in files + +And want to exclude a specific path in a git project say `path/to/exclude` (but that should not be included in `.gitignore`) from both `fd` and `rg` as used by `fzf.vim`, then the easiest way I found to solve to create ignore files for the respective tool then ignore this file in the local git clone (as they are only used by me) + +```bash +cd git_proj/ +echo "path/to/exclude" > .rgignore +echo "path/to/exclude" > .fdignore +printf ".rgignore\n.fdignore" >> .git/info/exclude +``` diff --git a/docs/hacktivist_collectives.md b/docs/hacktivist_collectives.md index 25a82b10fa4..a0c5d5835ee 100644 --- a/docs/hacktivist_collectives.md +++ b/docs/hacktivist_collectives.md @@ -2,7 +2,13 @@ # Germany - Chaos Computer Club: [here](https://fediverse.tv/w/g76dg9qTaG7XiB4R2EfovJ) is a documentary on it's birth -# Galicia + +# Estado español + +- [Critical Switch](https://critical-switch.org/): una colectiva transhackfeminista no mixta1 interesades en la cultura libre, la privacidad y la seguridad digital. Promovemos la cultura de la seguridad para generar espacios más seguros en los movimientos sociales y activistas. + + +## Galicia Algunos colectivos de galiza son: @@ -15,3 +21,7 @@ Algunos colectivos de galiza son: - Enxeñeiros sen fronteiras: hicieron cosas de reciclar hardware para dárselo a gente sin recursos - [PonteLabs](https://pontelabs.org/) - [Mancomun](https://mancomun.gal/a-nosa-rede/): Web que intenta listar colectivos pero son asociaciones muy oficiales todas. + +# México? +- [Sursiendo](https://sursiendo.org/quienes-somos/) +- [Tecnoafecciones](https://tecnoafecciones.net) diff --git a/docs/hard_drive_health.md b/docs/hard_drive_health.md index c58152e0333..3007958ae65 100644 --- a/docs/hard_drive_health.md +++ b/docs/hard_drive_health.md @@ -57,50 +57,7 @@ hard drive, such as: # Check the disk health -You can run at least two tests, one with `smartctl` and another with `badblocks` - -## Check the health of a disk with smartctl - -Start with a long self test with `smartctl`. Assuming the disk to test is -`/dev/sdd`: - -```bash -smartctl -t long /dev/sdd -``` - -The command will respond with an estimate of how long it thinks the test will -take to complete. 
- -To check progress use: - -```bash -smartctl -A /dev/sdd | grep remaining -# or -smartctl -c /dev/sdd | grep remaining -``` - -Don't check too often because it can abort the test with some drives. If you -receive an empty output, examine the reported status with: - -```bash -smartctl -l selftest /dev/sdd -``` - -If errors are shown, check the `dmesg` as there are usually useful traces of the error. - -## Check the health of a disk with badblocks - -The `badblocks` command will write and read the disk with different patterns, thus overwriting the whole disk, so you will loose all the data in the disk. - -This test is good for rotational disks as there is no disk degradation on massive writes, do not use it on SSD though. - -WARNING: be sure that you specify the correct disk!! - -```bash -badblocks -wsv -b 4096 /dev/sde | tee disk_analysis_log.txt -``` - -If errors are shown is that all of the spare sectors of the disk are used, so you must not use this disk anymore. Again, check `dmesg` for traces of disk errors. +You can run at least two tests, one with [`smartctl`](smartctl.md) and another with [`badblocks`](badblocks.md). # Check the warranty status diff --git a/docs/himalaya.md b/docs/himalaya.md index 9383e255295..098740a2bdb 100644 --- a/docs/himalaya.md +++ b/docs/himalaya.md @@ -234,6 +234,19 @@ return { ## Show notifications when emails arrive You can set up [mirador](mirador.md) to get those notifications. + +## Configure GPG + +Himalaya relies on cargo features to enable gpg. You can see the default enabled features in the [Cargo.toml](https://github.com/pimalaya/himalaya/blob/master/Cargo.toml#L18) file. As of 2025-01-27 the `pgp-commands` is enabled. + +You only need to add the next section to your config: + +```ini +pgp.type = "commands" +``` + +And then you can use both the cli and the vim plugin with gpg. Super easy + # Usage ## Searching emails diff --git a/docs/img/x.mp4 b/docs/img/x.mp4 new file mode 100644 index 00000000000..b47c32b75b5 Binary files /dev/null and b/docs/img/x.mp4 differ diff --git a/docs/instant_messages_management.md b/docs/instant_messages_management.md index 8ffe5a178e5..b59048709fb 100644 --- a/docs/instant_messages_management.md +++ b/docs/instant_messages_management.md @@ -155,3 +155,7 @@ Sometimes the client applications don't give enough granularity, or you would like to show notifications based on more complex conditions, that's why I created the seed project to [improve the notification management in Linux](projects.md#improve-the-notification-management-in-linux). 
+ +# Merge all your instant message apps into one + +You can [use bridges to merge all into matrix](https://technicallyrural.ca/2021/04/05/unify-signal-whatsapp-and-sms-in-a-personal-matrix-server-part-1-matrix/) diff --git a/docs/k9.md b/docs/k9.md new file mode 100644 index 00000000000..1f521c13a1b --- /dev/null +++ b/docs/k9.md @@ -0,0 +1,5 @@ +# Tips + +## [How to set a master password](https://forum.k9mail.app/t/password-protection-for-launch-of-k9/6871/11) + +You can't, it's not supported and it doesn't look that it will ([1](https://forum.k9mail.app/t/password-protection-for-launch-of-k9/6871/11), [2](https://forum.k9mail.app/t/can-i-password-protect-app-on-startup/6755/6)) diff --git a/docs/linux/zfs.md b/docs/linux/zfs.md index 75cb8d5e137..d72bf62a64a 100644 --- a/docs/linux/zfs.md +++ b/docs/linux/zfs.md @@ -77,7 +77,7 @@ It doesn't matter how big your disks are, you'll eventually reach it's limit bef To sort the datasets on the amount of space they use for their backups use `zfs list -o space -s usedds` -#### Clean it up +#### Clean it up Then you can go dataset by dataset using `ncdu` cleaning up. @@ -120,6 +120,7 @@ zfs diff @ | grep '^-' ``` This will help you identify which files or directories were in the snapshot but are no longer in the current dataset. + ## Get read and write stats from pool ```bash @@ -139,6 +140,7 @@ zfs get all {{ pool_name }} ``` ## [Set zfs module parameters or options](https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html) + Most of the ZFS kernel module parameters are accessible in the SysFS `/sys/module/zfs/parameters` directory. Current values can be observed by ```bash @@ -198,7 +200,7 @@ You'll lose the snapshots though, as explained below. ### [Rename the topmost dataset](https://www.solaris-cookbook.eu/solaris/solaris-zpool-rename/) -If you want to rename the topmost dataset you [need to rename the pool too](https://github.com/openzfs/zfs/issues/4681) as these two are tied. +If you want to rename the topmost dataset you [need to rename the pool too](https://github.com/openzfs/zfs/issues/4681) as these two are tied. ```bash $: zpool status -v @@ -272,7 +274,7 @@ The following snapshot rename operation is not supported because the target pool ```bash $: zfs rename tank/home/cindys@today pool/home/cindys@saturday -cannot rename to 'pool/home/cindys@today': snapshots must be part of same +cannot rename to 'pool/home/cindys@today': snapshots must be part of same dataset ``` @@ -305,10 +307,20 @@ users/home/neil@2daysago 0 - 18K - ## [Repair a DEGRADED pool](https://blog.cavelab.dev/2021/01/zfs-replace-disk-expand-pool/) -First you need to make sure that it is in fact a problem of the disk. Check the `dmesg` to see if there are any traces of reading errors, or SATA cable errors. +First you need to make sure that it is in fact a problem of the disk. Check the `dmesg` to see if there are any traces of reading errors, or SATA cable errors. A friend suggested to mark the disk as healthy and do a resilver on the same disk. If the error is reproduced in the next days, then replace the disk. A safer approach is to resilver on a new disk, analyze the disk when it's not connected to the pool, and if you feel it's safe then save it as a cold spare. +A resilver process will try to rebuild the missing disk from the data of the rest of the disks of the VDEV, the rest of disks of the zpool don't take part of this process. 
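You can follow how the resilver is going with `zpool status`; a minimal sketch, using the example pool `tank0` from the snippets below:

```bash
# Show the resilver state, the estimated time to completion and any errors found so far
zpool status -v tank0

# Refresh the output every minute until the resilver finishes
watch -n 60 zpool status tank0
```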
+ +### Removing a disk from the pool + +```bash +zpool remove tank0 sda +``` + +This will trigger the data evacuation from the disk. Check `zpool status` to see when it finishes. + ### Replacing a disk in the pool If you are going to replace the disk, you need to bring offline the device to be replaced: @@ -335,7 +347,7 @@ tank0 DEGRADED 0 0 0 ata-ST4000VX007-2DT166_xxxxxxxx ONLINE 0 0 0 ``` -Sweet, the device is offline (last time it didn't show as offline for me, but the offline command returned a status code of 0). +Sweet, the device is offline (last time it didn't show as offline for me, but the offline command returned a status code of 0). Time to shut the server down and physically replace the disk. @@ -397,16 +409,170 @@ Follow [these instructions](hard_drive_health.md#check-the-disk-health). ### RMA the degraded disk Follow [these instructions](hard_drive_health.md#check-the-warranty-status). + +## [Encrypting ZFS Drives with LUKS](https://www.ogselfhosting.com/index.php/2022/06/24/zfs-on-luks/) + +### Warning: Proceed with Extreme Caution + +**IMPORTANT SAFETY NOTICE:** + +- These instructions will COMPLETELY WIPE the target drive +- Do NOT attempt on production servers +- Experiment only on drives with no valuable data +- Seek professional help if anything is unclear + +### Prerequisites + +- A drive you want to encrypt (will be referred to as `/dev/sdx`) +- Root access +- Basic understanding of Linux command line +- Backup of all important data + +### Step 1: Create LUKS Encryption Layer + +First, format the drive with LUKS encryption: + +```bash +sudo cryptsetup luksFormat /dev/sdx +``` + +- You'll be prompted for a sudo password +- Create a strong encryption password (mix of uppercase, lowercase, numbers, symbols) +- Note the precise capitalization in commands + +### Step 2: Open the Encrypted Disk + +Open the newly encrypted disk: + +```bash +sudo cryptsetup luksOpen /dev/sdx sdx_crypt +``` + +This creates a mapped device at `/dev/mapper/sdx_crypt` + +### Step 3: Create ZFS Pool or the vdev + +For example to create a ZFS pool on the encrypted device: + +```bash +sudo zpool create -f -o ashift=12 \ + -O compression=lz4 \ + zpool /dev/mapper/sdx_crypt +``` + +Check the [create zpool section](#create-your-pool) to know which configuration flags to use. + +### Step 4: Set Up Automatic Unlocking + +#### Generate a Keyfile + +Create a random binary keyfile: + +```bash +sudo dd bs=1024 count=4 if=/dev/urandom of=/etc/zfs/keys/sdx.key +sudo chmod 0400 /etc/zfs/keys/sdx.key +``` + +#### Add Keyfile to LUKS + +Add the keyfile to the LUKS disk: + +```bash +sudo cryptsetup luksAddKey /dev/sdx /etc/zfs/keys/sdx.key +``` + +- You'll be asked to enter the original encryption password +- This adds the binary file to the LUKS disk header +- Now you can unlock the drive using either the password or the keyfile + +### Step 5: Configure Automatic Mounting + +#### Find Drive UUID + +Get the drive's UUID: + +```bash +sudo blkid +``` + +Look for the line with `TYPE="crypto_LUKS"`. Copy the UUID. 
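If you don't want to scan the whole output by eye, `blkid` can filter and print only the value; a small sketch assuming the encrypted drive is `/dev/sdx`:

```bash
# List only the LUKS containers
sudo blkid -t TYPE=crypto_LUKS

# Print just the UUID of the drive, which is the value crypttab expects
sudo blkid -s UUID -o value /dev/sdx
```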
+ +#### Update Crypttab + +Edit the crypttab file: + +```bash +sudo vim /etc/crypttab +``` + +Add an entry like: + +``` +sdx_crypt UUID=your-uuid-here /etc/zfs/keys/sdx.key luks,discard +``` + +### Final Step: Reboot + +- Reboot your system +- The drive will be automatically decrypted and imported + +### Best Practices + +- Keep your keyfile and encryption password secure +- Store keyfiles with restricted permissions +- Consider backing up the LUKS header + +### Troubleshooting + +- Double-check UUIDs +- Verify keyfile permissions +- Ensure cryptsetup and ZFS are installed + +### Security Notes + +- This method provides full-disk encryption at rest +- Data is inaccessible without the key or password +- Protects against physical drive theft + +### Disclaimer + +While these instructions are comprehensive, they come with inherent risks. Always: + +- Have backups +- Test in non-critical environments first +- Understand each step before executing + +### Further reading + +- [Setting up ZFS on LUKS - Alpine Linux Wiki](https://wiki.alpinelinux.org/wiki/Setting_up_ZFS_on_LUKS) +- [Decrypt Additional LUKS Encrypted Volumes on Boot](https://www.malachisoord.com/2023/11/04/decrypt-additiona-luks-encrypted-volumes-on-boot/) +- [Auto-Unlock LUKS Encrypted Drive - Dradis Support Guide](https://dradis.com/support/guides/customization/auto-unlock-luks-encrypted-drive.html) +- [How do I automatically decrypt an encrypted filesystem on the next reboot? - Ask Ubuntu](https://askubuntu.com/questions/996155/how-do-i-automatically-decrypt-an-encrypted-filesystem-on-the-next-reboot) + +## Add a disk to an existing vdev + +```bash +zpool add tank /dev/sdx +``` + +## Add a vdev to an existing pool + +```bash +zpool add main raidz1-1 /dev/disk-1 /dev/disk-2 /dev/disk-3 /dev/disk-4 +``` + +You don't need to specify the `ashift` or the `autoexpand` as they are set on zpool creation. + # Installation ## Install the required programs -OpenZFS is not in the mainline kernel for license issues (fucking capitalism...) so it's not yet suggested to use it for the root of your filesystem. +OpenZFS is not in the mainline kernel for license issues (fucking capitalism...) so it's not yet suggested to use it for the root of your filesystem. To install it in a Debian device: -* ZFS packages are included in the `contrib` repository, but the `backports` repository often provides newer releases of ZFS. You can use it as follows. - +- ZFS packages are included in the `contrib` repository, but the `backports` repository often provides newer releases of ZFS. You can use it as follows. + Add the backports repository: ```bash @@ -428,7 +594,7 @@ To install it in a Debian device: Pin-Priority: 990 ``` -* Install the packages: +- Install the packages: ```bash apt update @@ -444,8 +610,8 @@ First read the [ZFS storage planning](zfs_storage_planning.md) article and then ```bash zpool create \ - -o ashift=12 \ - -o autoexpand=on \ + -o ashift=12 \ + -o autoexpand=on \ main raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd \ log mirror \ /dev/disk/by-id/nvme-eui.e823gqkwadgp32uhtpobsodkjfl2k9d0-part4 \ @@ -457,10 +623,10 @@ main raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd \ Where: -* `-o ashift=12`: Adjusts the disk sector size to the disks in use. -* `/dev/sda /dev/sdb /dev/sdc /dev/sdd` are the rotational data disks configured in RAIDZ1 -* We set two partitions in mirror for the ZLOG -* We set two partitions in stripe for the L2ARC +- `-o ashift=12`: Adjusts the disk sector size to the disks in use. 
+- `/dev/sda /dev/sdb /dev/sdc /dev/sdd` are the rotational data disks configured in RAIDZ1 +- We set two partitions in mirror for the ZLOG +- We set two partitions in stripe for the L2ARC If you don't want the main pool to be mounted use `zfs set mountpoint=none main`. @@ -479,7 +645,7 @@ dd if=/dev/random of=/etc/zfs/keys/home.key bs=1 count=32 Then create the filesystem: ```bash -zfs create \ +zfs create \ -o mountpoint=/home/lyz \ -o encryption=on \ -o keyformat=raw \ @@ -530,9 +696,9 @@ With ZFS you can share a specific dataset via NFS. If for whatever reason the da You still must install the necessary daemon software to make the share available. For example, if you wish to share a dataset via NFS, then you need to install the NFS server software, and it must be running. Then, all you need to do is flip the sharing NFS switch on the dataset, and it will be immediately available. -### Install NFS +### Install NFS -To share a dataset via NFS, you first need to make sure the NFS daemon is running. On Debian and Ubuntu, this is the `nfs-kernel-server` package. +To share a dataset via NFS, you first need to make sure the NFS daemon is running. On Debian and Ubuntu, this is the `nfs-kernel-server` package. ```bash sudo apt-get install nfs-kernel-server @@ -584,7 +750,8 @@ mount -t nfs hostname.example.com:/srv /mnt To permanently mount it you need to add it to your `/etc/fstab`, check [this section for more details](linux_snippets.md#configure-fstab-to-mount-nfs). ## Configure a watchdog -[Watchdogs](watchdog.md) are programs that make sure that the services are working as expected. This is useful for example if you're suffering the [ZFS pool is stuck](#zfs-pool-is-stuck) error. + +[Watchdogs](watchdog.md) are programs that make sure that the services are working as expected. This is useful for example if you're suffering the [ZFS pool is stuck](#zfs-pool-is-stuck) error. - Install [Python bindings for systemd](python_systemd.md) to get logging functionality. @@ -723,7 +890,7 @@ To permanently mount it you need to add it to your `/etc/fstab`, check [this sec if __name__ == "__main__": - + log(f"Using socket {os.environ.get('NOTIFY_SOCKET', None)}") for send_signal in (signal.SIGINT, signal.SIGABRT, signal.SIGTERM): signal.signal(send_signal, socket_notify_stop) @@ -743,7 +910,7 @@ To permanently mount it you need to add it to your `/etc/fstab`, check [this sec - Create a systemd service `systemctl edit --full -force zfs_watchdog` and add: - ```ini + ```ini [Unit] Description=ZFS watchdog Requires=zfs.target @@ -764,15 +931,17 @@ To permanently mount it you need to add it to your `/etc/fstab`, check [this sec ``` If you're debugging still the script use `StartLimitAction=` instead so that you don't get unexpected reboots. + - Start the service with `systemctl start zfs_watchdog`. - Check that it's working as expected with `journalctl -feu zfs_watchdog` - Once you're ready everything is fine enable the service `systemctl enable zfs_watchdog` ### Monitor the watchdog + If you're using [Prometheus](prometheus.md) with the [Node exporter](node_exporter.md) in theory if the watchdog fails it will show up as a failed service. For the sake of redundancy we can create a Loki alert that checks that the watchdog is still alive, if the watchdog fails or if it restarts the server. 
```yaml -groups: +groups: - name: zfs_watchdog rules: - alert: ZFSWatchdogIsDeadError @@ -802,6 +971,7 @@ groups: ``` ## Configure the deadman failsafe measure + ZFS has a safety measure called the [zfs_deadman_failmode](https://openzfs.github.io/openzfs-docs/man/master/4/zfs.4.html#zfs_deadman_enabled). When a pool sync operation takes longer than `zfs_deadman_synctime_ms`, or when an individual I/O operation takes longer than `zfs_deadman_ziotime_ms`, then the operation is considered to be "hung". If `zfs_deadman_enabled` is set, then the deadman behavior is invoked as described by `zfs_deadman_failmode`. By default, the deadman is enabled and set to wait which results in "hung" I/O operations only being logged. The deadman is automatically disabled when a pool gets suspended. `zfs_deadman_failmode` configuration can have the next values: @@ -811,20 +981,21 @@ ZFS has a safety measure called the [zfs_deadman_failmode](https://openzfs.githu - `panic`: Panic the system. This can be used to facilitate automatic fail-over to a properly configured fail-over partner. Follow the guides under [Set zfs module parameters or options](#set-zfs-module-parameters-or-options) to change this value. + # Backup Please remember that [RAID is not a backup](https://serverfault.com/questions/2888/why-is-raid-not-a-backup), it guards against one kind of hardware failure. There's lots of failure modes that it doesn't guard against though: -* File corruption -* Human error (deleting files by mistake) -* Catastrophic damage (someone dumps water onto the server) -* Viruses and other malware -* Software bugs that wipe out data -* Hardware problems that wipe out data or cause hardware damage (controller malfunctions, firmware bugs, voltage spikes, ...) +- File corruption +- Human error (deleting files by mistake) +- Catastrophic damage (someone dumps water onto the server) +- Viruses and other malware +- Software bugs that wipe out data +- Hardware problems that wipe out data or cause hardware damage (controller malfunctions, firmware bugs, voltage spikes, ...) That's why you still need to make backups. -ZFS has the builtin feature to make snapshots of the pool. A snapshot is a first class read-only filesystem. It is a mirrored copy of the state of the filesystem at the time you took the snapshot. They are persistent across reboots, and they don't require any additional backing store; they use the same storage pool as the rest of your data. +ZFS has the builtin feature to make snapshots of the pool. A snapshot is a first class read-only filesystem. It is a mirrored copy of the state of the filesystem at the time you took the snapshot. They are persistent across reboots, and they don't require any additional backing store; they use the same storage pool as the rest of your data. If you remember [ZFS's awesome nature of copy-on-write](https://pthree.org/2012/12/14/zfs-administration-part-ix-copy-on-write/) filesystems, you will remember the discussion about Merkle trees. A ZFS snapshot is a copy of the Merkle tree in that state, except we make sure that the snapshot of that Merkle tree is never modified. @@ -834,10 +1005,10 @@ Creating snapshots is near instantaneous, and they are cheap. However, once the ZFS doesn't though have a clean way to manage the lifecycle of those snapshots. 
There are many tools to fill the gap: -* [`sanoid`](sanoid.md): Made in Perl, 2.4k stars, last commit April 2022, last release April 2021 -* [zfs-auto-snapshot](https://github.com/zfsonlinux/zfs-auto-snapshot): Made in Bash, 767 stars, last commit/release on September 2019 -* [pyznap](https://github.com/yboetz/pyznap): Made in Python, 176 stars, last commit/release on September 2020 -* Custom scripts. +- [`sanoid`](sanoid.md): Made in Perl, 2.4k stars, last commit April 2022, last release April 2021 +- [zfs-auto-snapshot](https://github.com/zfsonlinux/zfs-auto-snapshot): Made in Bash, 767 stars, last commit/release on September 2019 +- [pyznap](https://github.com/yboetz/pyznap): Made in Python, 176 stars, last commit/release on September 2020 +- Custom scripts. It seems that the state of the art of ZFS backups is not changing too much in the last years, possibly because the functionality is covered so there is no need for further development. So I'm going to manage the backups with [`sanoid`](sanoid.md) despite it being done in Perl because [it's the most popular, it looks simple but flexible for complex cases, and it doesn't look I'd need to tweak the code](sanoid.md). @@ -848,6 +1019,7 @@ zfs list -t snapshot -o name path/to/dataset | tail -n+2 | tac | xargs -n 1 zfs ``` ## [Manually create a backup](https://docs.oracle.com/cd/E19253-01/819-5461/gbcya/index.html) + To create a snapshot of `tank/home/ahrens` that is named `friday` run: ```bash @@ -860,7 +1032,7 @@ You can list the available snapshots of a filesystem with `zfs list -t snapshot You have two ways to restore a backup: -* [Mount the snapshot in a directory and manually copy the needed files](https://askubuntu.com/questions/103369/ubuntu-how-to-mount-zfs-snapshot): +- [Mount the snapshot in a directory and manually copy the needed files](https://askubuntu.com/questions/103369/ubuntu-how-to-mount-zfs-snapshot): ```bash mount -t zfs main/lyz@autosnap_2023-02-17_13:15:06_hourly /mnt @@ -868,7 +1040,7 @@ You have two ways to restore a backup: To umount the snapshot run `umount /mnt`. -* Rolling back the filesystem to the snapshot state: Rolling back to a previous snapshot will discard any data changes between that snapshot and the current time. Further, by default, you can only rollback to the most recent snapshot. In order to rollback to an earlier snapshot, you must destroy all snapshots between the current time and that snapshot you wish to rollback to. If that's not enough, the filesystem must be unmounted before the rollback can begin. This means downtime. +- Rolling back the filesystem to the snapshot state: Rolling back to a previous snapshot will discard any data changes between that snapshot and the current time. Further, by default, you can only rollback to the most recent snapshot. In order to rollback to an earlier snapshot, you must destroy all snapshots between the current time and that snapshot you wish to rollback to. If that's not enough, the filesystem must be unmounted before the rollback can begin. This means downtime. To rollback the "tank/test" dataset to the "tuesday" snapshot, we would issue: @@ -905,19 +1077,18 @@ rpool/ROOT/solaris/var@install - 2.51M - - - From this output, you can see the amount of space that is: -* AVAIL: The amount of space available to the dataset and all its children, assuming that there is no other activity in the pool. -* USED: The amount of space consumed by this dataset and all its descendants. This is the value that is checked against this dataset's quota and reservation. 
The space used does not include this dataset's reservation, but does take into account the reservations of any descendants datasets. - - The used space of a snapshot is the space referenced exclusively by this snapshot. If this snapshot is destroyed, the amount of `used` space will be freed. Space that is shared by multiple snapshots isn't accounted for in this metric. -* USEDSNAP: Space being consumed by snapshots of each data set -* USEDDS: Space being used by the dataset itself -* USEDREFRESERV: Space being used by a refreservation set on the dataset that would be freed if it was removed. -* USEDCHILD: Space being used by the children of this dataset. +- AVAIL: The amount of space available to the dataset and all its children, assuming that there is no other activity in the pool. +- USED: The amount of space consumed by this dataset and all its descendants. This is the value that is checked against this dataset's quota and reservation. The space used does not include this dataset's reservation, but does take into account the reservations of any descendants datasets. + The used space of a snapshot is the space referenced exclusively by this snapshot. If this snapshot is destroyed, the amount of `used` space will be freed. Space that is shared by multiple snapshots isn't accounted for in this metric. +- USEDSNAP: Space being consumed by snapshots of each data set +- USEDDS: Space being used by the dataset itself +- USEDREFRESERV: Space being used by a refreservation set on the dataset that would be freed if it was removed. +- USEDCHILD: Space being used by the children of this dataset. Other space properties are: -* LUSED: The amount of space that is "logically" consumed by this dataset and all its descendents. It ignores the effect of `compression` and `copies` properties, giving a quantity closer to the amount of data that aplication ssee. However it does include space consumed by metadata. -* REFER: The amount of data that is accessible by this dataset, which may or may not be shared with other dataserts in the pool. When a snapshot or clone is created, it initially references the same amount of space as the filesystem or snapshot it was created from, since its contents are identical. +- LUSED: The amount of space that is "logically" consumed by this dataset and all its descendents. It ignores the effect of `compression` and `copies` properties, giving a quantity closer to the amount of data that aplication ssee. However it does include space consumed by metadata. +- REFER: The amount of data that is accessible by this dataset, which may or may not be shared with other dataserts in the pool. When a snapshot or clone is created, it initially references the same amount of space as the filesystem or snapshot it was created from, since its contents are identical. ## [See the differences between two backups](https://docs.oracle.com/cd/E36784_01/html/E36835/gkkqz.html) @@ -931,12 +1102,12 @@ M /tank/home/tim/ The following table summarizes the file or directory changes that are identified by the `zfs diff` command. 
-| File or Directory Change | Identifier | -| --- | --- | -| File or directory has been modified or file or directory link has changed | M | -| File or directory is present in the older snapshot but not in the more recent snapshot | — | -| File or directory is present in the more recent snapshot but not in the older snapshot | + | -| File or directory has been renamed | R | +| File or Directory Change | Identifier | +| -------------------------------------------------------------------------------------- | ---------- | +| File or directory has been modified or file or directory link has changed | M | +| File or directory is present in the older snapshot but not in the more recent snapshot | — | +| File or directory is present in the more recent snapshot but not in the older snapshot | + | +| File or directory has been renamed | R | ## Create a cold backup of a series of datasets @@ -948,7 +1119,8 @@ If you've used the `-o keyformat=raw -o keylocation=file:///etc/zfs/keys/home.ke WARNING: substitute `/dev/sde` for the partition you need to work on in the next snippets To do it: -- Create the partitions: + +- Create the partitions: ```bash fdisk /dev/sde @@ -965,11 +1137,12 @@ To do it: ``` ### Sync an already created cold backup + #### Mount the existent pool Imagine your pool is at `/dev/sdf2`: -- Connect your device +- Connect your device - Check for available ZFS pools: First, check if the system detects any ZFS pools that can be imported: ```bash @@ -1016,8 +1189,11 @@ Additional options: # Monitorization ## Monitor the ZFS events + You can see the ZFS events using `zpool events -v`. If you want to be alerted on these events you can use [this service](https://codeberg.org/lyz/zfs_events) to ingest them into Loki and raise alerts. + ## Monitor the `dbgmsg` file + If you use [loki](loki.md) remember to monitor the `/proc/spl/kstat/zfs/dbgmsg` file: ```yaml @@ -1029,6 +1205,7 @@ If you use [loki](loki.md) remember to monitor the `/proc/spl/kstat/zfs/dbgmsg` job: zfs __path__: /proc/spl/kstat/zfs/dbgmsg ``` + # [Troubleshooting](https://openzfs.github.io/openzfs-docs/Basic%20Concepts/Troubleshooting.html) To debug ZFS errors you can check: @@ -1037,23 +1214,24 @@ To debug ZFS errors you can check: - ZFS Kernel Module Debug Messages: The ZFS kernel modules use an internal log buffer for detailed logging information. This log information is available in the pseudo file `/proc/spl/kstat/zfs/dbgmsg` for ZFS builds where ZFS module parameter `zfs_dbgmsg_enable = 1` ## [ZFS pool is stuck](https://openzfs.github.io/openzfs-docs/Basic%20Concepts/Troubleshooting.html#unkillable-process) + Symptom: zfs or zpool command appear hung, does not return, and is not killable Likely cause: kernel thread hung or panic -If a kernel thread is stuck, then a backtrace of the stuck thread can be in the logs. In some cases, the stuck thread is not logged until the deadman timer expires. +If a kernel thread is stuck, then a backtrace of the stuck thread can be in the logs. In some cases, the stuck thread is not logged until the deadman timer expires. The only way I've yet found to solve this is rebooting the machine (not ideal). I even have to use the magic keys -.- . A solution would be to [reboot server on kernel panic ](linux_snippets.md#reboot-server-on-kernel-panic) but it's not the kernel who does the panic but a module of the kernel, so that solution doesn't work. 
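If you still have a responsive shell, you can trigger the same magic-key reboot without a keyboard through SysRq; a last-resort sketch, as it reboots immediately without unmounting the filesystems:

```bash
# Enable all SysRq functions (check /proc/sys/kernel/sysrq, it may already be enabled)
echo 1 > /proc/sys/kernel/sysrq

# Try to flush the data to disk first (this may hang if the pool is stuck)
echo s > /proc/sysrq-trigger

# Reboot immediately, without syncing or unmounting
echo b > /proc/sysrq-trigger
```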
You can monitor this issue with loki using the next alerts: ```yaml -groups: +groups: - name: zfs rules: - alert: SlowSpaSyncZFSError expr: | - count_over_time({job="zfs"} |~ `spa_deadman.*slow spa_sync` [5m]) + count_over_time({job="zfs"} |~ `spa_deadman.*slow spa_sync` [5m]) for: 1m labels: severity: critical @@ -1069,6 +1247,7 @@ And to patch it you can use a [software watchdog that reproduces the error](#con There are many issues open with this behaviour: [1](https://github.com/openzfs/zfs/issues/11804), [2](https://github.com/openzfs/zfs/issues/6639) In my case I feel it happens when running `syncoid` to send the backups to the backup server. + ## [Clear a permanent ZFS error in a healthy pool](https://serverfault.com/questions/576898/clear-a-permanent-zfs-error-in-a-healthy-pool) Sometimes when you do a `zpool status` you may see that the pool is healthy but that there are "Permanent errors" that may point to files themselves or directly to memory locations. @@ -1085,7 +1264,6 @@ zpool scrub -s my_pool If you're close to the event that made the error you can check the `zpool events -v` to shed some light. - A few notes: - repaired errors are shown in the counters, but won't elicit a "permanent errors" message @@ -1096,6 +1274,7 @@ A few notes: - similarly, if you want detailed information on a specific failure, zpool events -v shows detailed information about both correctable and uncorrectable errors. You can read [this long discussion](https://github.com/openzfs/zfs/discussions/9705) if you want more info. + ## ZFS pool is in suspended mode Probably because you've unplugged a device without unmounting it. @@ -1115,28 +1294,31 @@ sudo zpool import WD_1TB ``` If you don't care about the zpool anymore, sadly your only solution is to [reboot the server](https://github.com/openzfs/zfs/issues/5242). Real ugly, so be careful when you umount zpools. + ## Cannot receive incremental stream: invalid backup stream Error + This is usually caused when you try to send a snapshot that is corrupted. To solve it: -- Look at the context on loki to identify the snapshot in question. -- Delete it + +- Look at the context on loki to identify the snapshot in question. +- Delete it - Run the sync again This can be monitored with loki through the next alert: -```yaml - - alert: SyncoidCorruptedSnapshotSendError - expr: | - count_over_time({syslog_identifier="syncoid_send_backups"} |= `cannot receive incremental stream: invalid backup stream` [15m]) > 0 - for: 0m - labels: - severity: critical - annotations: - summary: "Error tryig to send a corrupted snapshot at {{ $labels.hostname}}" - message: "Look at the context on loki to identify the snapshot in question. Delete it and then run the sync again" - +```yaml +- alert: SyncoidCorruptedSnapshotSendError + expr: | + count_over_time({syslog_identifier="syncoid_send_backups"} |= `cannot receive incremental stream: invalid backup stream` [15m]) > 0 + for: 0m + labels: + severity: critical + annotations: + summary: "Error tryig to send a corrupted snapshot at {{ $labels.hostname}}" + message: "Look at the context on loki to identify the snapshot in question. Delete it and then run the sync again" ``` + # Learning I've found that learning about ZFS was an interesting, intense and time @@ -1155,3 +1337,7 @@ pleasant to read. 
For further information check - [Docs](https://openzfs.github.io/openzfs-docs/) - [JRS articles](https://jrs-s.net/category/open-source/zfs/) - [ZFS basic introduction video](https://yewtu.be/watch?v=MsY-BafQgj4) + +## Books + +- [FreeBSD Mastery: ZFS by Michael W Lucas and Allan Jude](https://mwl.io/nonfiction/os#fmzfs) diff --git a/docs/linux_snippets.md b/docs/linux_snippets.md index f34ac36ee8d..f96b478030c 100644 --- a/docs/linux_snippets.md +++ b/docs/linux_snippets.md @@ -4,7 +4,32 @@ date: 20200826 author: Lyz --- +# Record the audio from your computer + +You can record audio being played in a browser using `ffmpeg` + +1. Check your default audio source: + + ```sh + pactl list sources | grep -E 'Name|Description' + ``` + +2. Record using `ffmpeg`: + + ```sh + ffmpeg -f pulse -i output.wav + ``` + + Example: + + ```sh + ffmpeg -f pulse -i alsa_output.pci-0000_00_1b.0.analog-stereo.monitor output.wav + ``` + +3. Stop recording with **Ctrl+C**. + # [Prevent the screen from turning off](https://wiki.archlinux.org/title/Display_Power_Management_Signaling#Runtime_settings) + VESA Display Power Management Signaling (DPMS) enables power saving behaviour of monitors when the computer is not in use. The time of inactivity before the monitor enters into a given saving power level—standby, suspend or off—can be set as described in DPMSSetTimeouts(3). It is possible to turn off your monitor with the xset command @@ -12,6 +37,7 @@ It is possible to turn off your monitor with the xset command ```bash xset s off -dpms ``` + It will disable DPMS and prevent screen from blanking To query the current settings: @@ -24,7 +50,7 @@ If that doesn't work you can use the [keep-presence](https://github.com/carrot69 ```bash pip install keep-presence -keep-presence -c +keep-presence -c ``` That will move the cursor one pixel in circles each 300s, if you need to move it more often use the `-s` flag. @@ -56,6 +82,7 @@ To check if it has set the password correctly you [can run](https://stackoverflo ```bash pdftk "input.pdf" dump_data output /dev/null dont_ask ``` + # [Reduce the size of an image](https://www.digitalocean.com/community/tutorials/reduce-file-size-of-images-linux) The simplest way of reducing the size of the image is by degrading the quality of the image. @@ -69,6 +96,7 @@ The main difference between `convert` and `mogrify` command is that `mogrify` co ```bash mogrify -quality 50 *.jpg ``` + # Change the default shell of a user using the command line ```bash @@ -86,9 +114,10 @@ weasyprint input.html output.pdf ``` It gave me better result than `wkhtmltopdf` - + ## Using wkhtmltopdf -To convert the given HTML into a PDF with proper styling and formatting using a simple method on Linux, you can use `wkhtmltopdf` with some custom options. + +To convert the given HTML into a PDF with proper styling and formatting using a simple method on Linux, you can use `wkhtmltopdf` with some custom options. First, ensure that you have `wkhtmltopdf` installed on your system. If not, install it using your package manager (e.g., Debian: `sudo apt-get install wkhtmltopdf`). @@ -99,17 +128,22 @@ wkhtmltopdf --page-size A4 --margin-top 15mm --margin-bottom 15mm --encoding utf ``` In this command: + - `--page-size A4`: Sets the paper size to A4. - `--margin-top 15mm` and `--margin-bottom 15mm`: Adds top and bottom margins of 15 mm to the PDF. After running the command, you should have a nicely formatted `output.pdf` file in your current directory. 
This method preserves most of the original HTML styling while providing a simple way to export it as a PDF on Linux. If you need to zoom in, you can use the `--zoom 1.2` flag. For this to work you need your css to be using the `em` sizes. + # Format a drive to use a FAT32 system + ```bash sudo mkfs.vfat -F 32 /dev/sdX ``` + Replace /dev/sdX with your actual drive identifier + # Get the newest file of a directory with nested directories and files ```bash @@ -127,7 +161,7 @@ If the docker is using less resources than the limits but they are still small ( To set up a systemd service as a **non-root user**, you can create a user-specific service file under your home directory. User services are defined in `~/.config/systemd/user/` and can be managed without root privileges. 1. Create the service file: - + Open a terminal and create a new service file in `~/.config/systemd/user/`. For example, if you want to create a service for a script named `my_script.py`, follow these steps: ```bash @@ -136,7 +170,7 @@ To set up a systemd service as a **non-root user**, you can create a user-specif ``` 2. Edit the service file: - + In the `my_script.service` file, add the following configuration: ```ini @@ -160,18 +194,20 @@ To set up a systemd service as a **non-root user**, you can create a user-specif - **Description**: A short description of what the service does. - **ExecStart**: The command to run your script. Replace `/path/to/your/script/my_script.py` with the full path to your Python script. If you want to run the script within a virtualenv you can use `/path/to/virtualenv/bin/python` instead of `/usr/bin/python3`. - You'll need to add the virtualenv path to Path - ```ini - # Add virtualenv's bin directory to PATH - Environment="PATH=/path/to/virtualenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" - ``` + You'll need to add the virtualenv path to Path + + ```ini + # Add virtualenv's bin directory to PATH + Environment="PATH=/path/to/virtualenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin" + ``` + - **WorkingDirectory**: Set the working directory to where your script is located (optional). - **Restart**: Restart the service if it fails. - **StandardOutput** and **StandardError**: This ensures that the output is captured in the systemd journal. - **WantedBy**: Specifies the target to which this service belongs. `default.target` is commonly used for user services. 3. Reload systemd to recognize the new service: - + Run the following command to reload systemd's user service files: ```bash @@ -179,7 +215,7 @@ To set up a systemd service as a **non-root user**, you can create a user-specif ``` 4. Enable and start the service: - + To start the service immediately and enable it to run on boot (for your user session), use the following commands: ```bash @@ -190,13 +226,13 @@ To set up a systemd service as a **non-root user**, you can create a user-specif 5. 
Check the status and logs: - To check if the service is running: - + ```bash systemctl --user status my_script.service ``` - To view logs specific to your service: - + ```bash journalctl --user -u my_script.service -f ``` @@ -246,169 +282,184 @@ After modifying the service, reload and restart it: systemctl --user daemon-reload systemctl --user restart my_script.service ``` -# Debugging high IOwait + +# Debugging high IOwait High I/O wait (`iowait`) on the CPU, especially at 50%, typically indicates that your system is spending a large portion of its time waiting for I/O operations (such as disk access) to complete. This can be caused by a variety of factors, including disk bottlenecks, overloaded storage systems, or inefficient applications making disk-intensive operations. Here’s a structured approach to debug and analyze high I/O wait on your server: ## Monitor disk I/O - First, verify if disk I/O is indeed the cause. Tools like `iostat`, `iotop`, and `dstat` can give you an overview of disk activity: - - **`iostat`**: This tool reports CPU and I/O statistics. You can install it with `apt-get install sysstat`. Run the following command to check disk I/O stats: +First, verify if disk I/O is indeed the cause. Tools like `iostat`, `iotop`, and `dstat` can give you an overview of disk activity: - ```bash - iostat -x 1 - ``` - The `-x` flag provides extended statistics, and `1` means it will report every second. Look for high values in the `%util` and `await` columns, which represent: - - `%util`: Percentage of time the disk is busy (ideally should be below 90% for most systems). - - `await`: Average time for I/O requests to complete. +- **`iostat`**: This tool reports CPU and I/O statistics. You can install it with `apt-get install sysstat`. Run the following command to check disk I/O stats: - If either of these values is unusually high, it indicates that the disk subsystem is likely overloaded. + ```bash + iostat -x 1 + ``` - - **`iotop`**: If you want a more granular look at which processes are consuming disk I/O, use `iotop`: + The `-x` flag provides extended statistics, and `1` means it will report every second. Look for high values in the `%util` and `await` columns, which represent: - ```bash - sudo iotop -o - ``` + - `%util`: Percentage of time the disk is busy (ideally should be below 90% for most systems). + - `await`: Average time for I/O requests to complete. - This will show you the processes that are actively performing I/O operations. + If either of these values is unusually high, it indicates that the disk subsystem is likely overloaded. - - **`dstat`**: Another useful tool for monitoring disk I/O in real-time: +- **`iotop`**: If you want a more granular look at which processes are consuming disk I/O, use `iotop`: - ```bash - dstat -cdl 1 - ``` + ```bash + sudo iotop -o + ``` + + This will show you the processes that are actively performing I/O operations. + +- **`dstat`**: Another useful tool for monitoring disk I/O in real-time: - This shows CPU, disk, and load stats, refreshing every second. Pay attention to the `dsk/await` value. + ```bash + dstat -cdl 1 + ``` + + This shows CPU, disk, and load stats, refreshing every second. Pay attention to the `dsk/await` value. ### Check disk health - Disk issues such as bad sectors or failing drives can also lead to high I/O wait times. To check the health of your disks: - - **Use `smartctl`**: This tool can give you a health check of your disks if they support S.M.A.R.T. 
+Disk issues such as bad sectors or failing drives can also lead to high I/O wait times. To check the health of your disks: - ```bash - sudo smartctl -a /dev/sda - ``` +- **Use `smartctl`**: This tool can give you a health check of your disks if they support S.M.A.R.T. - Check for any errors or warnings in the output. Particularly look for things like reallocated sectors or increasing "pending sectors." + ```bash + sudo smartctl -a /dev/sda + ``` - - **`dmesg` logs**: Look at the system logs for disk errors or warnings: + Check for any errors or warnings in the output. Particularly look for things like reallocated sectors or increasing "pending sectors." - ```bash - dmesg | grep -i "error" - ``` +- **`dmesg` logs**: Look at the system logs for disk errors or warnings: + + ```bash + dmesg | grep -i "error" + ``` - If there are frequent disk errors, it may be time to replace the disk or investigate hardware issues. + If there are frequent disk errors, it may be time to replace the disk or investigate hardware issues. ### Look for disk saturation - If the disk is saturated, no matter how fast the CPU is, it will be stuck waiting for data to come back from the disk. To further investigate disk saturation: - - **`df -h`**: Check if your disk partitions are full or close to full. +If the disk is saturated, no matter how fast the CPU is, it will be stuck waiting for data to come back from the disk. To further investigate disk saturation: - ```bash - df -h - ``` +- **`df -h`**: Check if your disk partitions are full or close to full. + + ```bash + df -h + ``` - - **`lsblk`**: Check how your disks are partitioned and how much data is written to each partition: +- **`lsblk`**: Check how your disks are partitioned and how much data is written to each partition: - ```bash - lsblk -o NAME,SIZE,TYPE,MOUNTPOINT - ``` + ```bash + lsblk -o NAME,SIZE,TYPE,MOUNTPOINT + ``` - - **`blktrace`**: For advanced debugging, you can use `blktrace`, which traces block layer events on your system. +- **`blktrace`**: For advanced debugging, you can use `blktrace`, which traces block layer events on your system. - ```bash - sudo blktrace -d /dev/sda -o - | blkparse -i - - ``` + ```bash + sudo blktrace -d /dev/sda -o - | blkparse -i - + ``` - This will give you very detailed insights into how the system is interacting with the block device. + This will give you very detailed insights into how the system is interacting with the block device. ### Check for heavy disk-intensive processes - Identify processes that might be using excessive disk I/O. You can use tools like `iotop` (as mentioned earlier) or `pidstat` to look for processes with high disk usage: - - **`pidstat`**: Track per-process disk activity: +Identify processes that might be using excessive disk I/O. You can use tools like `iotop` (as mentioned earlier) or `pidstat` to look for processes with high disk usage: - ```bash - pidstat -d 1 - ``` +- **`pidstat`**: Track per-process disk activity: + + ```bash + pidstat -d 1 + ``` - This command will give you I/O statistics per process every second. Look for processes with high `I/O` values (`r/s` and `w/s`). + This command will give you I/O statistics per process every second. Look for processes with high `I/O` values (`r/s` and `w/s`). - - **`top`** or **`htop`**: While `top` or `htop` can show CPU usage, they can also show process-level disk activity. Focus on processes consuming high CPU or memory, as they might also be performing heavy I/O operations. 
+- **`top`** or **`htop`**: While `top` or `htop` can show CPU usage, they can also show process-level disk activity. Focus on processes consuming high CPU or memory, as they might also be performing heavy I/O operations. ### check file system issues - Sometimes the file system itself can be the source of I/O bottlenecks. Check for any file system issues that might be causing high I/O wait. - - **Check file system consistency**: If you suspect the file system is causing issues (e.g., due to corruption), run a file system check. For `ext4`: +Sometimes the file system itself can be the source of I/O bottlenecks. Check for any file system issues that might be causing high I/O wait. - ```bash - sudo fsck /dev/sda1 - ``` +- **Check file system consistency**: If you suspect the file system is causing issues (e.g., due to corruption), run a file system check. For `ext4`: - Ensure you unmount the disk first or do this in single-user mode. + ```bash + sudo fsck /dev/sda1 + ``` - - **Check disk scheduling**: Some disk schedulers (like `cfq` or `deadline`) might perform poorly depending on your workload. You can check the scheduler used by your disk with: + Ensure you unmount the disk first or do this in single-user mode. - ```bash - cat /sys/block/sda/queue/scheduler - ``` +- **Check disk scheduling**: Some disk schedulers (like `cfq` or `deadline`) might perform poorly depending on your workload. You can check the scheduler used by your disk with: - You can change the scheduler with: + ```bash + cat /sys/block/sda/queue/scheduler + ``` - ```bash - echo deadline > /sys/block/sda/queue/scheduler - ``` + You can change the scheduler with: - This might improve disk performance, especially for certain workloads. + ```bash + echo deadline > /sys/block/sda/queue/scheduler + ``` + + This might improve disk performance, especially for certain workloads. ### Examine system logs - The system logs (`/var/log/syslog` or `/var/log/messages`) may contain additional information about hardware issues, I/O bottlenecks, or kernel-related warnings: - ```bash - sudo tail -f /var/log/syslog - ``` +The system logs (`/var/log/syslog` or `/var/log/messages`) may contain additional information about hardware issues, I/O bottlenecks, or kernel-related warnings: - or +```bash +sudo tail -f /var/log/syslog +``` - ```bash - sudo tail -f /var/log/messages - ``` +or + +```bash +sudo tail -f /var/log/messages +``` - Look for I/O or disk-related warnings or errors. +Look for I/O or disk-related warnings or errors. ### Consider hardware upgrades or tuning - - **SSD vs HDD**: If you're using HDDs, consider upgrading to SSDs. HDDs can be much slower in terms of I/O, especially if you have a high number of random read/write operations. - - **RAID Configuration**: If you are using RAID, check the RAID configuration and ensure it's properly tuned for performance (e.g., using RAID-10 for a good balance of speed and redundancy). - - **Memory and CPU Tuning**: If the server is swapping due to insufficient RAM, it can result in increased I/O wait. You might need to add more RAM or optimize the system to avoid excessive swapping. + +- **SSD vs HDD**: If you're using HDDs, consider upgrading to SSDs. HDDs can be much slower in terms of I/O, especially if you have a high number of random read/write operations. +- **RAID Configuration**: If you are using RAID, check the RAID configuration and ensure it's properly tuned for performance (e.g., using RAID-10 for a good balance of speed and redundancy). 
+- **Memory and CPU Tuning**: If the server is swapping due to insufficient RAM, it can result in increased I/O wait. You might need to add more RAM or optimize the system to avoid excessive swapping. ### Check for swapping issues - Excessive swapping can contribute to high I/O wait times. If your system is swapping (which happens when physical RAM is exhausted), I/O wait spikes as the system reads from and writes to swap space on disk. - - **Check swap usage**: +Excessive swapping can contribute to high I/O wait times. If your system is swapping (which happens when physical RAM is exhausted), I/O wait spikes as the system reads from and writes to swap space on disk. - ```bash - free -h - ``` +- **Check swap usage**: - If swap usage is high, you may need to add more physical RAM or optimize applications to reduce memory pressure. + ```bash + free -h + ``` + + If swap usage is high, you may need to add more physical RAM or optimize applications to reduce memory pressure. --- -# Create a file with random data +# Create a file with random data -Of 3.5 GB +Of 3.5 GB ```bash dd if=/dev/urandom of=random_file.bin bs=1M count=3584 ``` + # [Set the vim filetype syntax in a comment](https://unix.stackexchange.com/questions/19867/is-there-a-way-to-place-a-comment-in-a-file-which-vim-will-process-in-order-to-s) + Add somewhere in your file: ``` # vi: ft=yaml ``` + # Export environment variables in a crontab + If you need to expand the `PATH` in theory you can do it like this: ``` @@ -431,44 +482,56 @@ journalctl --vacuum-time=1s --unit=your.service ``` If you wish to clear all logs use `journalctl --vacuum-time=1s` + # [Send logs of a cronjob to journal](https://stackoverflow.com/questions/52200878/crontab-journalctl-extra-messages) + You can use `systemd-cat` to send the logs of a script or cron to the journal to the unit specified after the `-t` flag. It works better than piping the output to `logger -t` + ```bash systemd-cat -t syncoid_send_backups /root/send_backups.sh ``` + # [Set dependencies between systemd services](https://stackoverflow.com/questions/21830670/start-systemd-service-after-specific-service) + Use `Wants` or `Requires`: -```ini +```ini website.service [Unit] Wants=mongodb.service After=mongodb.service ``` + # [Set environment variable in systemd service](https://www.baeldung.com/linux/systemd-services-environment-variables) -```ini +```ini [Service] # ... Environment="FOO=foo" ``` # [Get info of a mkv file](https://superuser.com/questions/595177/how-to-retrieve-video-file-information-from-command-line-under-linux) + ```bash ffprobe file.mkv ``` -# [Send multiline messages with notify-send](https://stackoverflow.com/questions/35628702/display-multi-line-notification-using-notify-send-in-python) + +# [Send multiline messages with notify-send](https://stackoverflow.com/questions/35628702/display-multi-line-notification-using-notify-send-in-python) + The title can't have new lines, but the body can. ```bash notify-send "Title" "This is the first line.\nAnd this is the second.") ``` + # [Find BIOS version](https://www.cyberciti.biz/faq/check-bios-version-linux/) -```bash +```bash dmidecode | less ``` -# [Reboot server on kernel panic ](https://www.supertechcrew.com/kernel-panics-and-lockups/) + +# [Reboot server on kernel panic ](https://www.supertechcrew.com/kernel-panics-and-lockups/) + The `proc/sys/kernel/panic` file gives read/write access to the kernel variable `panic_timeout`. 
If this is zero, the kernel will loop on a panic; if nonzero it indicates that the kernel should autoreboot after this number of seconds. When you use the software watchdog device driver, the recommended setting is `60`. To set the value add the next contents to the `/etc/sysctl.d/99-panic.conf` @@ -488,6 +551,7 @@ Or with an ansible task: create: true state: present ``` + There are other things that can cause a machine to lock up or become unstable. Some of them will even make a machine responsive to pings and network heartbeat monitors, but will cause programs to crash and internal systems to lockup. If you want the machine to automatically reboot, make sure you set `kernel.panic` to something above 0. Otherwise these settings could cause a hung machine that you will have to reboot manually. @@ -522,7 +586,7 @@ kernel.panic_on_unrecovered_nmi=1 # kernel.panic_on_oops=30 ``` -# [Share a calculated value between github actions steps](https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-an-output-parameter) +# [Share a calculated value between github actions steps](https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-an-output-parameter) You need to set a step's output parameter. Note that the step will need an `id` to be defined to later retrieve the output value. @@ -542,7 +606,8 @@ For example: run: echo "The selected color is $SELECTED_COLOR" ``` -# [Split a zip into sizes with restricted size ](https://unix.stackexchange.com/questions/198982/zip-files-with-size-limit) +# [Split a zip into sizes with restricted size ](https://unix.stackexchange.com/questions/198982/zip-files-with-size-limit) + Something like: ```bash @@ -551,11 +616,13 @@ zipsplit -n 250000000 myfile.zip ``` Would produce `myfile1.zip`, `myfile2.zip`, etc., all independent of each other, and none larger than 250MB (in powers of ten). `zipsplit` will even try to organize the contents so that each resulting archive is as close as possible to the maximum size. -# [find files that were modified between dates](https://unix.stackexchange.com/questions/29245/how-to-list-files-that-were-changed-in-a-certain-range-of-time) + +# [find files that were modified between dates](https://unix.stackexchange.com/questions/29245/how-to-list-files-that-were-changed-in-a-certain-range-of-time) + The best option is the `-newerXY`. The m and t flags can be used. -- `m` The modification time of the file reference -- `t` reference is interpreted directly as a time +- `m` The modification time of the file reference +- `t` reference is interpreted directly as a time So the solution is @@ -563,8 +630,10 @@ So the solution is find . -type f -newermt 20111222 \! -newermt 20111225 ``` -The lower bound in inclusive, and upper bound is exclusive, so I added 1 day to it. And it is recursive. -# [Rotate image with the command line ](https://askubuntu.com/questions/591733/rotate-images-from-terminal) +The lower bound in inclusive, and upper bound is exclusive, so I added 1 day to it. And it is recursive. 
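If you want to double-check that the matched files really fall inside the range, GNU `find` can print each file's modification date next to its path (a quick verification sketch; the `-printf` directives below assume GNU findutils):

```bash
find . -type f -newermt 20111222 \! -newermt 20111225 -printf '%TY-%Tm-%Td %p\n' | sort
```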
+ +# [Rotate image with the command line ](https://askubuntu.com/questions/591733/rotate-images-from-terminal) + If you want to overwrite in-place, `mogrify` from the ImageMagick suite seems to be the easiest way to achieve this: ```bash @@ -574,14 +643,19 @@ mogrify -rotate -90 *.jpg # clockwise: mogrify -rotate 90 *.jpg ``` + # [Configure desktop icons in gnome](https://gitlab.gnome.org/GNOME/nautilus/-/issues/158#instructions) + Latest versions of gnome dont have desktop icons [read this article to fix this](https://gitlab.gnome.org/GNOME/nautilus/-/issues/158#instructions) -# [Make a file executable in a git repository ](https://docs.github.com/en/actions/creating-actions/creating-a-docker-container-action) -```bash + +# [Make a file executable in a git repository ](https://docs.github.com/en/actions/creating-actions/creating-a-docker-container-action) + +```bash git add entrypoint.sh git update-index --chmod=+x entrypoint.sh ``` -# [Configure autologin in Debian with Gnome](https://linux.how2shout.com/enable-or-disable-automatic-login-in-debian-11-bullseye/) + +# [Configure autologin in Debian with Gnome](https://linux.how2shout.com/enable-or-disable-automatic-login-in-debian-11-bullseye/) Edit the `/etc/gdm3/daemon.conf` file and include: @@ -590,16 +664,19 @@ AutomaticLoginEnable = true AutomaticLogin = ``` -# [See errors in the journalctl ](https://unix.stackexchange.com/questions/332886/how-to-see-error-message-in-journald ) +# [See errors in the journalctl ](https://unix.stackexchange.com/questions/332886/how-to-see-error-message-in-journald) To get all errors for running services using journalctl: ```bash journalctl -p 3 -xb ``` + where `-p 3` means priority err, `-x` provides extra message information, and `-b` means since last boot. + # [Fix rsyslog builtin:omfile suspended error](https://ubuntu-mate.community/t/rsyslogd-action-action-0-builtin-omfile-resumed-module-builtin-omfile/24105/21) -It may be a permissions error. I have not been able to pinpoint the reason behind it. + +It may be a permissions error. I have not been able to pinpoint the reason behind it. What did solve it though is to remove the [aledgely deprecated paramenters](https://www.rsyslog.com/doc/configuration/modules/omfile.html) from `/etc/rsyslog.conf`: @@ -607,10 +684,10 @@ What did solve it though is to remove the [aledgely deprecated paramenters](http # $FileOwner syslog # $FileGroup adm # $FileCreateMode 0640 -# $DirCreateMode 0755 -# $Umask 0022 -# $PrivDropToUser syslog -# $PrivDropToGroup syslog +# $DirCreateMode 0755 +# $Umask 0022 +# $PrivDropToUser syslog +# $PrivDropToGroup syslog ``` I hope that as they are the default parameters, they don't need to be set. @@ -621,12 +698,12 @@ I hope that as they are the default parameters, they don't need to be set. 
server { listen 80; server_name yourdomain.com; - + location / { if ($request_method !~ ^(GET|POST)$ ) { return 405; } - + try_files $uri $uri/ =404; } } @@ -639,6 +716,7 @@ location ~* /share/[\w-]+ { root /home/project_root; } ``` + # [Configure nginx location to accept many paths](https://serverfault.com/questions/564127/nginx-location-regex-for-multiple-paths) ``` @@ -648,6 +726,7 @@ location ~ ^/(static|media)/ { ``` # [Remove image metadata](https://stackoverflow.com/questions/66192531/exiftool-how-to-remove-all-metadata-from-all-files-possible-inside-a-folder-an) + ```bash exiftool -all:all= /path/to/file ``` @@ -664,7 +743,7 @@ It finds all the files in that directory that were created in the 2023, it only # [Makefile use bash instead of sh](https://stackoverflow.com/questions/589276/how-can-i-use-bash-syntax-in-makefile-targets) -The program used as the shell is taken from the variable `SHELL`. If +The program used as the shell is taken from the variable `SHELL`. If this variable is not set in your makefile, the program `/bin/sh` is used as the shell. @@ -714,7 +793,7 @@ Probably, your `ls` is aliased or defined as a function in your config files. Use the full path to `ls` like: ```bash -/bin/ls /var/lib/mysql/ +/bin/ls /var/lib/mysql/ ``` # [Convert png to svg](https://askubuntu.com/questions/470495/how-do-i-convert-a-png-to-svg) @@ -740,7 +819,7 @@ Once you are comfortable with the tracing options. You can automate it by using # Error when unmounting a device: Target is busy -- Check the processes that are using the mountpoint with `lsof /path/to/mountpoint` +- Check the processes that are using the mountpoint with `lsof /path/to/mountpoint` - Kill those processes - Try the umount again @@ -781,12 +860,10 @@ git describe --tags --abbrev=0 # [Configure gpg-agent cache ttl](https://superuser.com/questions/624343/keep-gnupg-credentials-cached-for-entire-user-session) - The user configuration (in `~/.gnupg/gpg-agent.conf`) can only define the default and maximum caching duration; it can't be disabled. The `default-cache-ttl` option sets the timeout (in seconds) after the last GnuPG activity (so it resets if you use it), the `max-cache-ttl` option set the timespan (in seconds) it caches after entering your password. The default value is 600 seconds (10 minutes) for `default-cache-ttl` and 7200 seconds (2 hours) for max-cache-ttl. - ``` default-cache-ttl 21600 max-cache-ttl 21600 @@ -850,11 +927,10 @@ Pin-Priority: 990 # [Rename multiple files matching a pattern](https://stackoverflow.com/questions/6840332/rename-multiple-files-by-replacing-a-particular-pattern-in-the-filenames-using-a) - There is `rename` that looks nice, but you need to install it. Using only `find` you can do: ```bash -find . -name '*yml' -exec bash -c 'echo mv $0 ${0/yml/yaml}' {} \; +find . -name '*yml' -exec bash -c 'echo mv $0 ${0/yml/yaml}' {} \; ``` If it shows what you expect, remove the `echo`. @@ -864,6 +940,7 @@ If it shows what you expect, remove the `echo`. ```bash ssh -o PreferredAuthentications=password -o PubkeyAuthentication=no exampleUser@example.com ``` + # [Do a tail -f with grep](https://stackoverflow.com/questions/23395665/tail-f-grep) ```bash @@ -918,7 +995,7 @@ sudo update-grub This will make your machine display the boot options for 5 seconds before it boot the default option (instead of waiting forever for you to choose one). 
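To confirm that the regenerated configuration picked up the new value you can grep the generated file (assuming the usual Debian location `/boot/grub/grub.cfg`):

```bash
sudo grep -m1 'set timeout=' /boot/grub/grub.cfg
```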
-# SSH tunnel +# SSH tunnel ```bash ssh -D 9090 -N -f user@host @@ -968,21 +1045,21 @@ Server:/path/to/export /local_mountpoint nfs 0 0 Where: -* `Server`: The hostname or IP address of the NFS server where the exported directory resides. -* `/path/to/export`: The shared directory (exported folder) path. -* `/local_mountpoint`: Existing directory in the host where you want to mount the NFS share. +- `Server`: The hostname or IP address of the NFS server where the exported directory resides. +- `/path/to/export`: The shared directory (exported folder) path. +- `/local_mountpoint`: Existing directory in the host where you want to mount the NFS share. You can specify a number of options that you want to set on the NFS mount: -* `soft/hard`: When the mount option `hard` is set, if the NFS server crashes or becomes unresponsive, the NFS requests will be retried indefinitely. You can set the mount option `intr`, so that the process can be interrupted. When the NFS server comes back online, the process can be continued from where it was while the server became unresponsive. +- `soft/hard`: When the mount option `hard` is set, if the NFS server crashes or becomes unresponsive, the NFS requests will be retried indefinitely. You can set the mount option `intr`, so that the process can be interrupted. When the NFS server comes back online, the process can be continued from where it was while the server became unresponsive. When the option `soft` is set, the process will be reported an error when the NFS server is unresponsive after waiting for a period of time (defined by the `timeo` option). In certain cases `soft` option can cause data corruption and loss of data. So, it is recommended to use `hard` and `intr` options. -* `noexec`: Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system via NFS containing incompatible binaries. -* `nosuid`: Disables set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program. -* `tcp`: Specifies the NFS mount to use the TCP protocol. -* `udp`: Specifies the NFS mount to use the UDP protocol. -* `nofail`: Prevent issues when rebooting the host. The downside is that if you have services that depend on the volume to be mounted they won't behave as expected. +- `noexec`: Prevents execution of binaries on mounted file systems. This is useful if the system is mounting a non-Linux file system via NFS containing incompatible binaries. +- `nosuid`: Disables set-user-identifier or set-group-identifier bits. This prevents remote users from gaining higher privileges by running a setuid program. +- `tcp`: Specifies the NFS mount to use the TCP protocol. +- `udp`: Specifies the NFS mount to use the UDP protocol. +- `nofail`: Prevent issues when rebooting the host. The downside is that if you have services that depend on the volume to be mounted they won't behave as expected. # [Fix limit on the number of inotify watches](https://stackoverflow.com/questions/47075661/error-user-limit-of-inotify-watches-reached-extreact-build) @@ -1008,19 +1085,19 @@ Where `100000` is the desired number of inotify watches. # Manage users -* Change main group of user +- Change main group of user ```bash usermod -g {{ group_name }} {{ user_name }} ``` -* Add user to group +- Add user to group ```bash usermod -a -G {{ group_name }} {{ user_name }} ``` -* Remove user from group. +- Remove user from group. 
```bash usermod -G {{ remaining_group_names }} {{ user_name }} @@ -1028,7 +1105,7 @@ Where `100000` is the desired number of inotify watches. You have to execute `groups {{ user }}` get the list and pass the remaining to the above command -* Change uid and gid of the user +- Change uid and gid of the user ```bash usermod -u {{ newuid }} {{ login }} @@ -1040,37 +1117,37 @@ Where `100000` is the desired number of inotify watches. # Manage ssh keys -* Generate ed25519 key +- Generate ed25519 key - ```bash - ssh-keygen -t ed25519 -f {{ path_to_keyfile }} - ``` + ```bash + ssh-keygen -t ed25519 -f {{ path_to_keyfile }} + ``` -* Generate RSA key +- Generate RSA key ```bash ssh-keygen -t rsa -b 4096 -o -a 100 -f {{ path_to_keyfile }} ``` -* Generate different comment +- Generate different comment ```bash ssh-keygen -t ed25519 -f {{ path_to_keyfile }} -C {{ email }} ``` -* Generate key headless, batch +- Generate key headless, batch ```bash ssh-keygen -t ed25519 -f {{ path_to_keyfile }} -q -N "" ``` -* Generate public key from private key +- Generate public key from private key ```bash ssh-keygen -y -f {{ path_to_keyfile }} > {{ path_to_public_key_file }} ``` -* Get fingerprint of key +- Get fingerprint of key ```bash ssh-keygen -lf {{ path_to_key }} ``` @@ -1087,8 +1164,8 @@ server#: iperf3 -i 10 -s Where: -* `-i`: the interval to provide periodic bandwidth updates -* `-s`: listen as a server +- `-i`: the interval to provide periodic bandwidth updates +- `-s`: listen as a server On the client system: @@ -1098,20 +1175,19 @@ client#: iperf3 -i 10 -w 1M -t 60 -c [server hostname or ip address] Where: -* `-i`: the interval to provide periodic bandwidth updates -* `-w`: the socket buffer size (which affects the TCP Window). The buffer size is also set on the server by this client command. -* `-t`: the time to run the test in seconds -* `-c`: connect to a listening server at… - +- `-i`: the interval to provide periodic bandwidth updates +- `-w`: the socket buffer size (which affects the TCP Window). The buffer size is also set on the server by this client command. +- `-t`: the time to run the test in seconds +- `-c`: connect to a listening server at… Sometimes is interesting to test both ways as they may return different outcomes I've got the next results at home: -* From new NAS to laptop through wifi 67.5 MB/s -* From laptop to new NAS 59.25 MB/s -* From intel Nuc to new NAS 116.75 MB/s (934Mbit/s) -* From old NAS to new NAS 11 MB/s +- From new NAS to laptop through wifi 67.5 MB/s +- From laptop to new NAS 59.25 MB/s +- From intel Nuc to new NAS 116.75 MB/s (934Mbit/s) +- From old NAS to new NAS 11 MB/s # [Measure the performance, IOPS of a disk](https://woshub.com/check-disk-performance-iops-latency-linux/) @@ -1123,7 +1199,7 @@ apt-get install fio Then you need to go to the directory where your disk is mounted. The test is done by performing read/write operations in this directory. -To do a random read/write operation test an 8 GB file will be created. Then `fio` will read/write a 4KB block (a standard block size) with the 75/25% by the number of reads and writes operations and measure the performance. +To do a random read/write operation test an 8 GB file will be created. Then `fio` will read/write a 4KB block (a standard block size) with the 75/25% by the number of reads and writes operations and measure the performance. 
```bash fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randrw --rwmixread=75 @@ -1131,7 +1207,8 @@ fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest - I've run this test in different environments with awesome results: -* New NAS server NVME: +- New NAS server NVME: + ``` read: IOPS=297k, BW=1159MiB/s (1215MB/s)(3070MiB/2649msec) bw ( MiB/s): min= 1096, max= 1197, per=99.80%, avg=1156.61, stdev=45.31, samples=5 @@ -1142,7 +1219,7 @@ I've run this test in different environments with awesome results: cpu : usr=15.67%, sys=67.18%, ctx=233314, majf=0, minf=8 ``` -* New NAS server ZFS pool with RAIDZ: +- New NAS server ZFS pool with RAIDZ: ``` read: IOPS=271k, BW=1059MiB/s (1111MB/s)(3070MiB/2898msec) @@ -1154,7 +1231,7 @@ I've run this test in different environments with awesome results: cpu : usr=12.84%, sys=63.20%, ctx=234345, majf=0, minf=6 ``` -* Laptop NVME: +- Laptop NVME: ``` read: IOPS=36.8k, BW=144MiB/s (151MB/s)(3070MiB/21357msec) @@ -1166,8 +1243,8 @@ I've run this test in different environments with awesome results: cpu : usr=14.32%, sys=32.17%, ctx=356674, majf=0, minf=7 ``` -* Laptop ZFS pool through NFS (running in parallel with other network processes): - +- Laptop ZFS pool through NFS (running in parallel with other network processes): + ``` read: IOPS=4917, BW=19.2MiB/s (20.1MB/s)(3070MiB/159812msec) bw ( KiB/s): min=16304, max=22368, per=100.00%, avg=19681.46, stdev=951.52, samples=319 @@ -1178,7 +1255,8 @@ I've run this test in different environments with awesome results: cpu : usr=5.21%, sys=10.59%, ctx=175825, majf=0, minf=8 ``` -* Intel Nuc server disk SSD: +- Intel Nuc server disk SSD: + ``` read: IOPS=11.0k, BW=46.9MiB/s (49.1MB/s)(3070MiB/65525msec) bw ( KiB/s): min= 280, max=73504, per=100.00%, avg=48332.30, stdev=25165.49, samples=130 @@ -1189,11 +1267,13 @@ I've run this test in different environments with awesome results: cpu : usr=8.04%, sys=25.87%, ctx=268055, majf=0, minf=8 ``` -* Intel Nuc server external HD usb disk : +- Intel Nuc server external HD usb disk : + ``` + ``` -* Intel Nuc ZFS pool through NFS (running in parallel with other network processes): +- Intel Nuc ZFS pool through NFS (running in parallel with other network processes): ``` read: IOPS=18.7k, BW=73.2MiB/s (76.8MB/s)(3070MiB/41929msec) @@ -1205,7 +1285,7 @@ I've run this test in different environments with awesome results: cpu : usr=6.29%, sys=13.21%, ctx=575927, majf=0, minf=10 ``` -* Old NAS with RAID5: +- Old NAS with RAID5: ``` read : io=785812KB, bw=405434B/s, iops=98, runt=1984714msec write: io=262764KB, bw=135571B/s, iops=33, runt=1984714msec @@ -1214,15 +1294,15 @@ I've run this test in different environments with awesome results: Conclusions: -* New NVME are **super fast** (1215MB/s read, 406MB/s write) -* ZFS rocks, with a RAIDZ1, L2ARC and ZLOG it returned almost the same performance as the NVME ( 1111MB/s read, 371MB/s write) -* Old NAS with RAID is **super slow** (0.4KB/s read, 0.1KB/s write!) -* I should replace the laptop's NVME, the NAS one has 10x performace both on read and write. +- New NVME are **super fast** (1215MB/s read, 406MB/s write) +- ZFS rocks, with a RAIDZ1, L2ARC and ZLOG it returned almost the same performance as the NVME ( 1111MB/s read, 371MB/s write) +- Old NAS with RAID is **super slow** (0.4KB/s read, 0.1KB/s write!) +- I should replace the laptop's NVME, the NAS one has 10x performace both on read and write. 
There is a huge difference between ZFS in local and through NFS. In local you get (1111MB/s read and 371MB/s write) while through NFS I got (20.1MB/s read and 6.7MB/s write). I've measured the network performance between both machines with `iperf3` and got: -* From NAS to laptop 67.5 MB/s -* From laptop to NAS 59.25 MB/s +- From NAS to laptop 67.5 MB/s +- From laptop to NAS 59.25 MB/s It was because I was running it over wifi. @@ -1252,7 +1332,6 @@ sudo bash -c "openvpn --config config.ovpn --auth-user-pass <(echo -e 'user_nam Assuming that `vpn` is an entry of your `pass` password store. - # Download TS streams Some sites give stream content with small `.ts` files that you can't download @@ -1384,20 +1463,21 @@ Without the `-n` it won't work well. - Configure `apt` to only use `unstable` when specified - File: `/etc/apt/preferences` - ``` - Package: * - Pin: release a=stable - Pin-Priority: 700 + File: `/etc/apt/preferences` + + ``` + Package: * + Pin: release a=stable + Pin-Priority: 700 - Package: * - Pin: release a=testing - Pin-Priority: 600 + Package: * + Pin: release a=testing + Pin-Priority: 600 - Package: * - Pin: release a=unstable - Pin-Priority: 100 - ``` + Package: * + Pin: release a=unstable + Pin-Priority: 100 + ``` - Update the package data with `apt-get update`. - See that the new versions are available with diff --git a/docs/nas.md b/docs/nas.md index 0bf5f414d89..4548c907131 100644 --- a/docs/nas.md +++ b/docs/nas.md @@ -69,6 +69,8 @@ Depending the amount of data you need to hold and how do you expect it to grow you need to find the solution that suits your needs. After looking to many I've decided to make my own from scratch. +But I built a server pretty much the same as the [slimbook](https://slimbook.com/en/shop/product/nas-cube-1510?category=10). + Warning: If you pursue the beautiful and hard path of building one yourself, don't just buy the components online, there are thousands of things that can go wrong that will make you loose money. Instead go to your local hardware store diff --git a/docs/org_rw.md b/docs/org_rw.md index 92320c2f5ec..c42601f1e18 100644 --- a/docs/org_rw.md +++ b/docs/org_rw.md @@ -23,7 +23,6 @@ with open('your_file.org', 'r') as f: doc = load(f) ``` - ## Write to an orgmode file ```python diff --git a/docs/orgmode.md b/docs/orgmode.md index 0c463fc03a8..173244631f5 100644 --- a/docs/orgmode.md +++ b/docs/orgmode.md @@ -1,11 +1,12 @@ [`nvim-orgmode`](https://github.com/nvim-orgmode/orgmode#agenda) is a Orgmode clone written in Lua for Neovim. Org-mode is a flexible note-taking system that was originally created for Emacs. It has gained wide-spread acclaim and was eventually ported to Neovim. This page is heavily focused to the nvim plugin, but you can follow the concepts for emacs as well. If you use Android try [orgzly](orgzly.md). 
+ # [Installation](https://github.com/nvim-orgmode/orgmode#installation) ## Using lazyvim -```lua +````lua return { 'nvim-orgmode/orgmode', ```lua @@ -35,14 +36,15 @@ return { }) end, } - ``` - dependencies = { - { 'nvim-treesitter/nvim-treesitter', lazy = true }, - }, - event = 'VeryLazy', - config = function() - -- Load treesitter grammar for org - require('orgmode').setup_ts_grammar() +```` + +dependencies = { +{ 'nvim-treesitter/nvim-treesitter', lazy = true }, +}, +event = 'VeryLazy', +config = function() +-- Load treesitter grammar for org +require('orgmode').setup_ts_grammar() -- Setup treesitter require('nvim-treesitter.configs').setup({ @@ -58,9 +60,11 @@ return { org_agenda_files = '~/orgfiles/**/*', org_default_notes_file = '~/orgfiles/refile.org', }) - end, + +end, } -``` + +```` ## Using packer Add to your plugin config: @@ -70,7 +74,7 @@ use {'nvim-orgmode/orgmode', config = function() require('orgmode').setup{} end } -``` +```` Then install it with `:PackerInstall`. @@ -107,10 +111,10 @@ You can check the default configuration file [here](https://github.com/nvim-orgm Mappings or Key bindings can be changed on the `mappings` attribute of the `setup`. The program has these kinds of mappings: -* [Org](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md#org-mappings) -* [Agenda](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md#agenda-mappings) -* [Capture](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md#capture-mappings) -* [Global](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md#global-mappings) +- [Org](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md#org-mappings) +- [Agenda](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md#agenda-mappings) +- [Capture](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md#capture-mappings) +- [Global](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md#global-mappings) For example the `global` mappings live under `mappings.global` and can be overridden like this: @@ -128,6 +132,7 @@ require('orgmode').setup({ ## Be ready when breaking changes come The developers have [created an issue](https://github.com/nvim-orgmode/orgmode/issues/217) to track breaking changes, subscribe to it so you're notified in advance. + # Usage If you are new to Orgmode, check the [vim Dotoo video](https://www.youtube.com/watch?v=nsv33iOnH34), it's another plugin but the developers say it's the same. If you, like me, prefer written tutorials check the hands-on [tutorial](https://github.com/nvim-orgmode/orgmode/wiki/Getting-Started). @@ -194,10 +199,10 @@ STARS KEYWORD PRIORITY TITLE TAGS Where: -* `KEYWORD`: if present, turns the heading into a [`TODO` item](#todo-items). -* `PRIORITY` sets a [priority level](#priority) to be used in the Agenda. -* `TITLE` is the main body of the heading. -* `TAGS` is a colon surrounded and delimited list of [tags](#tags) used in searching in the Agenda. +- `KEYWORD`: if present, turns the heading into a [`TODO` item](#todo-items). +- `PRIORITY` sets a [priority level](#priority) to be used in the Agenda. +- `TITLE` is the main body of the heading. +- `TAGS` is a colon surrounded and delimited list of [tags](#tags) used in searching in the Agenda. #### Toogle line to headline @@ -213,7 +218,7 @@ If you have a checkbox inside a TODO item, it will transform it to a children TO #### Change heading level -To change the heading level use `<<` or `>>`. It doesn't work in visual mode though, if you want to change the level of the whole subtree you can use `S`. 
+To change the heading level use `<<` or `>>`. It doesn't work in visual mode though, if you want to change the level of the whole subtree you can use `S`. ```lua org = { @@ -257,9 +262,9 @@ To fold the headings you can use either the normal vim bindings `zc`, `zo`, `zM` It's easy to navigate through your heading tree with: -* Next/previous heading of any level with `j`/`k` (Default `}`/`{`) -* Next/previous heading of the same level with `n`/`p` (Default `]]`/`[[`) -* Go to the parent heading with `gp` (Default `g{`) +- Next/previous heading of any level with `j`/`k` (Default `}`/`{`) +- Next/previous heading of the same level with `n`/`p` (Default `]]`/`[[`) +- Go to the parent heading with `gp` (Default `g{`) ```lua org = { @@ -284,12 +289,12 @@ vim.cmd[[ ### TODO items -`TODO` items are meant to model tasks that evolve between states. Check [this article](time_management_abstraction_levels.md) to see advanced uses of `TODO` items. +`TODO` items are meant to model tasks that evolve between states. Check [this article](time_management_abstraction_levels.md) to see advanced uses of `TODO` items. As creating `TODO` items is quite common you can: -* Create an item with the same level as the item above in the current position with `;t` (by default is `oit`). -* Create an item with the same level as the item above after all the children of the item above with `;T` (by default is `oit`). +- Create an item with the same level as the item above in the current position with `;t` (by default is `oit`). +- Create an item with the same level as the item above after all the children of the item above with `;T` (by default is `oit`). ```lua org = { @@ -318,7 +323,7 @@ org = { #### TODO state customization -By default they are `TODO` or `DONE` but you can define your own using the `org_todo_keywords` configuration. It accepts a list of *unfinished* states and *finished* states separated by a `'|'`. For example: +By default they are `TODO` or `DONE` but you can define your own using the `org_todo_keywords` configuration. It accepts a list of _unfinished_ states and _finished_ states separated by a `'|'`. For example: ```lua org_todo_keywords = { 'TODO', 'NEXT', '|', 'DONE' } @@ -370,16 +375,16 @@ TODO items can also have [timestamps](https://orgmode.org/manual/Timestamps.html ##### Appointments -Meant to be used for elements of the org file that have a defined date to occur, think of a calendar appointment. In the [agenda](#agenda) display, the headline of an entry associated with a plain timestamp is shown exactly on that date. +Meant to be used for elements of the org file that have a defined date to occur, think of a calendar appointment. In the [agenda](#agenda) display, the headline of an entry associated with a plain timestamp is shown exactly on that date. ```org * TODO Meet with Marie <2023-02-24 Fri> ``` -When you insert the timestamps with the date popup picker with `;d` (Default: `oi.`) you can only select the day and not the time, but you can add it manually. +When you insert the timestamps with the date popup picker with `;d` (Default: `oi.`) you can only select the day and not the time, but you can add it manually. -You can also define a timestamp range that spans through many days `<2023-02-24 Fri>--<2023-02-26 Sun>`. The headline then is shown on the first and last day of the range, and on any dates that are displayed and fall in the range. +You can also define a timestamp range that spans through many days `<2023-02-24 Fri>--<2023-02-26 Sun>`. 
The headline then is shown on the first and last day of the range, and on any dates that are displayed and fall in the range. ##### Start working on a task dates @@ -392,9 +397,9 @@ The headline is listed under the given date. In addition, a reminder that the sc SCHEDULED: <2004-12-25 Sat> ``` -Although is not a good idea (as it promotes the can pushing through the street), if you want to delay the display of this task in the agenda, use `SCHEDULED: <2004-12-25 Sat -2d>` the task is still scheduled on the 25th but will appear two days later. In case the task contains a repeater, the delay is considered to affect all occurrences; if you want the delay to only affect the first scheduled occurrence of the task, use `--2d` instead. +Although is not a good idea (as it promotes the can pushing through the street), if you want to delay the display of this task in the agenda, use `SCHEDULED: <2004-12-25 Sat -2d>` the task is still scheduled on the 25th but will appear two days later. In case the task contains a repeater, the delay is considered to affect all occurrences; if you want the delay to only affect the first scheduled occurrence of the task, use `--2d` instead. -Scheduling an item in Org mode should not be understood in the same way that we understand scheduling a meeting. Setting a date for a meeting is just [a simple appointment](#appointments), you should mark this entry with a simple plain timestamp, to get this item shown on the date where it applies. This is a frequent misunderstanding by Org users. In Org mode, scheduling means setting a date when you want to start working on an action item. +Scheduling an item in Org mode should not be understood in the same way that we understand scheduling a meeting. Setting a date for a meeting is just [a simple appointment](#appointments), you should mark this entry with a simple plain timestamp, to get this item shown on the date where it applies. This is a frequent misunderstanding by Org users. In Org mode, scheduling means setting a date when you want to start working on an action item. You can set it with `s` (Default: `ois`) @@ -405,15 +410,15 @@ You can set it with `s` (Default: `ois`) An example: ```org -* TODO Do this +* TODO Do this DEADLINE: <2023-02-24 Fri> ``` You can set it with `d` (Default: `oid`). -If you need a different warning period for a special task, you can specify it. For example setting a warning period of 5 days `DEADLINE: <2004-02-29 Sun -5d>`. +If you need a different warning period for a special task, you can specify it. For example setting a warning period of 5 days `DEADLINE: <2004-02-29 Sun -5d>`. -If you're as me, you may want to remove the warning feature of `DEADLINES` to be able to keep your agenda clean. Most of times you are able to finish the task in the day, and for those that you can't specify a `SCHEDULED` date. To do so set the default number of days to `0`. +If you're as me, you may want to remove the warning feature of `DEADLINES` to be able to keep your agenda clean. Most of times you are able to finish the task in the day, and for those that you can't specify a `SCHEDULED` date. To do so set the default number of days to `0`. ```lua require('orgmode').setup({ @@ -434,7 +439,7 @@ A timestamp may contain a repeater interval, indicating that it applies not only When you mark a recurring task with the TODO keyword ‘DONE’, it no longer produces entries in the agenda. The problem with this is, however, is that then also the next instance of the repeated entry will not be active. 
Org mode deals with this in the following way: when you try to mark such an entry as done, it shifts the base date of the repeating timestamp by the repeater interval, and immediately sets the entry state back to TODO. -As a consequence of shifting the base date, this entry is no longer visible in the agenda when checking past dates, but all future instances will be visible. +As a consequence of shifting the base date, this entry is no longer visible in the agenda when checking past dates, but all future instances will be visible. With the `+1m` cookie, the date shift is always exactly one month. So if you have not paid the rent for three months, marking this entry DONE still keeps it as an overdue deadline. Depending on the task, this may not be the best way to handle it. For example, if you forgot to call your father for 3 weeks, it does not make sense to call him 3 times in a single day to make up for it. For these tasks you can use the `++` operator, for example `++1m`. Finally, there are tasks, like changing batteries, which should always repeat a certain time after the last time you did it you can use the `.+` operator. For example: @@ -500,7 +505,7 @@ For those tasks that you want to always check before closing you can add a `(CHE ```orgmode * TODO Do X the first thursday of the month (CHECK) DEADLINE: <2024-01-04 ++1m> - + - [ ] Step 1 - [ ] Step 2 - [ ] Step ... @@ -517,7 +522,6 @@ By default when you mark a recurrent task as `DONE` it will transition the date The idea is that once an INACTIVE task reaches your agenda, either because the warning days of the `DEADLINE` make it show up, or because it's the `SCHEDULED` date you need to decide whether to change it to `TODO` if it's to be acted upon immediately or to `READY` and deactivate the date. - `INACTIVE` then should be the default state transition for the recurring tasks once you mark it as `DONE`. To do this, set in your config: ```lua @@ -527,9 +531,10 @@ org_todo_repeat_to_state = "INACTIVE", If a project gathers a list of recurrent subprojects or subactions it can have the next states: - `READY`: If there is at least one subelement in state `READY` and the rest are `INACTIVE` -- `TODO`: If there is at least one subelement in state `TODO` and the rest may have `READY` or `INACTIVE` -- `INACTIVE`: The project is not planned to be acted upon soon. +- `TODO`: If there is at least one subelement in state `TODO` and the rest may have `READY` or `INACTIVE` +- `INACTIVE`: The project is not planned to be acted upon soon. - `WAITING`: The project is planned to be acted upon but all its subelements are in `INACTIVE` state. 
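For example, a project that gathers recurrent subactions could look like this (just a sketch of the convention above, assuming you have added `READY` and `INACTIVE` to your `org_todo_keywords`):

```org
* READY Maintain the garden
** READY Water the plants
   DEADLINE: <2024-03-07 Thu ++1w>
** INACTIVE Prune the roses
```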
+ #### Date management ```lua @@ -542,9 +547,9 @@ If a project gathers a list of recurrent subprojects or subactions it can have t To edit existing dates you can: -* Increase/decrease the date under the cursor by 1 day with ``/` -* Increase/decrease the part of the date under the cursor with `a`/`x` -* Bring the date pop up with `e` (Default `cid`) +- Increase/decrease the date under the cursor by 1 day with ``/` +- Increase/decrease the part of the date under the cursor with `a`/`x` +- Bring the date pop up with `e` (Default `cid`) ```lua org = { @@ -563,10 +568,10 @@ vim.cmd[[ You can also use the next [abbreviations](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md#abbreviations): -* `:today:`: expands to today's date (example: <2021-06-29 Tue>) -* `:itoday:`: expands to an invactive version of today's date (example: [2021-06-29 Tue]) -* `:now:`: expands to today's date and current time (example: <2021-06-29 Tue 15:32>) -* `:inow:`: expands to invactive version of today's date and current time (example: [2021-06-29 Tue 15:32] +- `:today:`: expands to today's date (example: <2021-06-29 Tue>) +- `:itoday:`: expands to an invactive version of today's date (example: [2021-06-29 Tue]) +- `:now:`: expands to today's date and current time (example: <2021-06-29 Tue 15:32>) +- `:inow:`: expands to invactive version of today's date and current time (example: [2021-06-29 Tue 15:32] ### [Tags](https://orgmode.org/manual/Tag-Inheritance.html) @@ -580,9 +585,9 @@ You can also use tags to organize your items. To edit them use `g` (Defa When you press that key you can type: -* `tag1`: It will add `:tag1:`. -* `tag1:tag2`: It will add `:tag1:tag2:`. -* Press `ESC`: It will remove all tags from the item. +- `tag1`: It will add `:tag1:`. +- `tag1:tag2`: It will add `:tag1:tag2:`. +- Press `ESC`: It will remove all tags from the item. Tags are seen as `:tag1:tag2:` on the right of the TODO item description. @@ -607,17 +612,17 @@ If you plan refile elements to the root of a file (such as using a bare [Capture Tags are useful for [Agenda searches](#agenda-searches). I've found interesting to create tags based on: - Temporal context: - - lunch + - lunch - dinner - night - Spatial context: - kitchen - couch - - mobile - - bathroom + - mobile + - bathroom - Event context: - daily - - retro + - retro - planning - Mental context: - down @@ -626,23 +631,24 @@ Tags are useful for [Agenda searches](#agenda-searches). I've found interesting - design - inspired - People context: - - mom - - dad + - mom + - dad - ... - Roadmap area context: - - activism - - well-being + - activism + - well-being - care - - work + - work - Focus area context: - - maintenance + - maintenance - improvement - Knowledge area context: - - efficiency + - efficiency - politics - ... So that it's easy to find elements to work on based on each context. + ### `Lists Lists start with a dash: @@ -653,7 +659,7 @@ Lists start with a dash: To create new list item press ``. -### Checkboxes +### Checkboxes Checkboxes or checklists are a special type of [list](#lists): @@ -664,7 +670,7 @@ Checkboxes or checklists are a special type of [list](#lists): - [ ] Item 2 ``` -If you're over an item you can create new ones with `` (if you have the `org_meta_return = ''` binding set). +If you're over an item you can create new ones with `` (if you have the `org_meta_return = ''` binding set). 
You can change the checkbox state with ``, if you check a subitem the parent item will be marked as started `<3` automatically: @@ -683,8 +689,8 @@ Follow [this issue](https://github.com/nvim-orgmode/orgmode/issues/305) if you w One final aspect of the org file syntax are links. Links are of the form `[[link][description]]`, where link can be an: -* [Internal reference](#internal-document-links) -* [External reference](#external-links) +- [Internal reference](#internal-document-links) +- [External reference](#external-links) A link that does not look like a URL refers to the current document. You can follow it with `gx` when point is on the link (Default `oo`) if you use the next configuration. @@ -698,8 +704,8 @@ org = { Org provides several refinements to internal navigation within a document. Most notably: -* `[[Some section]]`: points to a headline with the name `Some section`. -* `[[#my-custom-id]]`: targets the entry with the `CUSTOM_ID` property set to `my-custom-id`. +- `[[Some section]]`: points to a headline with the name `Some section`. +- `[[#my-custom-id]]`: targets the entry with the `CUSTOM_ID` property set to `my-custom-id`. When the link does not belong to any of the cases above, Org looks for a dedicated target: the same string in double angular brackets, like `<>`. @@ -716,14 +722,13 @@ Ultimately, if none of the above succeeds, Org searches for a headline that is e Note that you must make sure custom IDs, dedicated targets, and names are unique throughout the document. Org provides a linter to assist you in the process, if needed, but I have not searched yet one for nvim. - #### [External links](https://orgmode.org/guide/Hyperlinks.html) -* URL (`http://`, `https://`) -* Path to a file (`file:/path/to/org/file`). File links can contain additional information to jump to a particular location in the file when following a link. This can be: - * `file:~/code/main.c::255`: A line number - * `file:~/xx.org::*My Target`: A search for `<>` heading. - * `file:~/xx.org::#my-custom-id`: A search for- a custom ID +- URL (`http://`, `https://`) +- Path to a file (`file:/path/to/org/file`). File links can contain additional information to jump to a particular location in the file when following a link. This can be: + - `file:~/code/main.c::255`: A line number + - `file:~/xx.org::*My Target`: A search for `<>` heading. + - `file:~/xx.org::#my-custom-id`: A search for- a custom ID ### [Properties](https://orgmode.org/guide/Properties.html) @@ -762,10 +767,12 @@ This can be interesting for example if you want to track when was a header creat ```org *** Title of header :PROPERTIES: - :CREATED: <2023-03-03 Fri 12:11> + :CREATED: <2023-03-03 Fri 12:11> :END: ``` +You can [define the properties that an be inherited with the `org_use_property_inheritance` configuration](https://github.com/nvim-orgmode/orgmode/commit/544e347c9ee12042234f6a2e3d741bd3240324dd) + ### [Code blocks](https://orgmode.org/manual/Structure-of-Code-Blocks.html) Org offers two ways to structure source code in Org documents: in a source code block, and directly inline. Both specifications are shown below. @@ -784,6 +791,7 @@ You need to use snippets for this to be usable. An inline code block has two possibilies - Language agnostic inline block is any string between `=` or `~` such as: + ```org If ~variable == true~ where =variable= is ... ``` @@ -807,12 +815,25 @@ Where: - ``: (Mandatory) It is the identifier of the source code language in the block. 
See [Languages](https://orgmode.org/worg/org-contrib/babel/languages/index.html) for identifiers of supported languages. - ``: (Optional) Switches provide finer control of the code execution, export, and format. - `
`: (Optional) Heading arguments control many aspects of evaluation, export and tangling of code blocks. Using Org’s properties feature, header arguments can be selectively applied to the entire buffer or specific subtrees of the Org document. -- ``: Source code in the dialect of the specified language identifier. +- ``: Source code in the dialect of the specified language identifier. + +### [Footnotes](https://orgmode.org/manual/Creating-Footnotes.html) + +A footnote is started by a footnote marker in square brackets in column 0, no indentation allowed. It ends at the next footnote definition, headline, or after two consecutive empty lines. The footnote reference is simply the marker in square brackets, inside text. Markers always start with ‘fn:’. For example: + +``` +The Org website[fn:1] now looks a lot better than it used to. +... +[fn:50] The link is: https://orgmode.org +``` + +Nvim-orgmode has [some basic support for footnotes](https://github.com/nvim-orgmode/orgmode/commit/4f62b7f#diff-fa091537281e07e5e58902b6484b097442300c98e115ab29f4374abbe98b8d3d). + ## Archiving When we no longer need certain parts of our org files, they can be archived. You can archive items by pressing `;A` (Default `o$`) while on the heading. This will also archive any child headings. The default location for archived headings is `.org_archive`, which can be changed with the `org_archive_location` option. -The problem is that when you archive an element you loose the context of the item unless it's a first level item. +The problem is that when you archive an element you loose the context of the item unless it's a first level item. Another way to archive is by adding the `:ARCHIVE:` tag with `;a` and once all elements are archived move it to the archive. @@ -823,7 +844,7 @@ org = { } ``` -There are some work in progress to improve archiving in the next issues [1](https://github.com/nvim-orgmode/orgmode/issues/413), [2](https://github.com/nvim-orgmode/orgmode/issues/369) and [3](https://github.com/joaomsa/telescope-orgmode.nvim/issues/2). +There are some work in progress to improve archiving in the next issues [1](https://github.com/nvim-orgmode/orgmode/issues/413), [2](https://github.com/nvim-orgmode/orgmode/issues/369) and [3](https://github.com/joaomsa/telescope-orgmode.nvim/issues/2). If you [don't want to have dangling org_archive files](https://github.com/nvim-orgmode/orgmode/issues/628) you can create an `archive` directory somewhere and then set: @@ -838,6 +859,7 @@ local org = require('orgmode').setup({ When you have big tasks that have nested checklists, when you finish the day working on the task you may want to clean the checklist without loosing what you've done, for example for reporting purposes. In those cases what I do is archive the task, and then undo the archiving. That way you have a copy of the state of the task in your archive with a defined date. Then you can safely remove the done checklist items. + ## Refiling Refiling lets you easily move around elements of your org file, such as headings or TODOs. You can refile with `r` with the next snippet: @@ -884,6 +906,7 @@ If you refile from the capture window, [until this issue is solved](https://gith Be careful that it only refiles the first task there is, so you need to close the capture before refiling the next The plugin also allows you to use `telescope` to search through the headings of the different files with `search_headings`, with the configuration above you'd use `g`. 
+ ## Agenda The org agenda is used to get an overview of all your different org files. Pressing `ga` (Default: `oa`) gives you an overview of the various specialized views into the agenda that are available. Remember that you can press `g?` to see all the available key mappings for each view. @@ -896,12 +919,12 @@ The org agenda is used to get an overview of all your different org files. Press You'll be presented with the next views: -* `a`: Agenda for current week or day -* `t`: List of all TODO entries -* `m`: Match a TAGS/PROP/TODO query -* `M`: Like `m`, but only TODO entries -* `s`: Search for keywords -* `q`: Quit +- `a`: Agenda for current week or day +- `t`: List of all TODO entries +- `m`: Match a TAGS/PROP/TODO query +- `M`: Like `m`, but only TODO entries **that are active** (it won't show the DONE elements, for that use `m`) +- `s`: Search for keywords +- `q`: Quit So far the `nvim-orgmode` agenda view lacks the next features: @@ -913,26 +936,26 @@ So far the `nvim-orgmode` agenda view lacks the next features: ### Move around the agenda view -* `.`: Go to Today -* `J`: Opens a popup that allows you to select the date to jump to. -* `f`: Next agenda span. For example if you are in the week view it will go to the next week. -* `b`: Previous agenda span . -* `/`: Opens a prompt that allows filtering current agenda view by category, tags and title. - - For example, having a `todos.org` file with headlines that have tags `mytag` or `myothertag`, and some of them have check in content, searching by `todos+mytag/check/` returns all headlines that are in `todos.org` file, that have `mytag` tag, and have `check` in headline title. +- `.`: Go to Today +- `J`: Opens a popup that allows you to select the date to jump to. +- `f`: Next agenda span. For example if you are in the week view it will go to the next week. +- `b`: Previous agenda span . +- `/`: Opens a prompt that allows filtering current agenda view by category, tags and title. + + For example, having a `todos.org` file with headlines that have tags `mytag` or `myothertag`, and some of them have check in content, searching by `todos+mytag/check/` returns all headlines that are in `todos.org` file, that have `mytag` tag, and have `check` in headline title. Note that `regex` is case sensitive by default. Use the vim regex flag `\c` to make it case insensitive. For more information see `:help vim.regex()` and `:help /magic`. Pressing `` in filter prompt autocompletes categories and tags. -* `q`: Quit +- `q`: Quit ### Act on the agenda elements -* ``: Open the file containing the element on your cursor position. By default it opens it in the same buffer as the agenda view, which is a bit uncomfortable for me, I prefer the behaviour of `` so I'm using that instead. -* `t`: Change `TODO` state of an item both in the agenda and the original Org file -* `=`/`-`: Change the priority of the element -* `r`: Reload all org files and refresh the current agenda view. +- ``: Open the file containing the element on your cursor position. By default it opens it in the same buffer as the agenda view, which is a bit uncomfortable for me, I prefer the behaviour of `` so I'm using that instead. +- `t`: Change `TODO` state of an item both in the agenda and the original Org file +- `=`/`-`: Change the priority of the element +- `r`: Reload all org files and refresh the current agenda view. 
```lua agenda = { @@ -947,10 +970,10 @@ So far the `nvim-orgmode` agenda view lacks the next features: ### Agenda views: -* `vd`: Show the agenda of the day -* `vw`: Show the agenda of the week -* `vm`: Show the agenda of the month -* `vy`: Show the agenda of the year +- `vd`: Show the agenda of the day +- `vw`: Show the agenda of the week +- `vm`: Show the agenda of the month +- `vy`: Show the agenda of the year Once you open one of the views you can do most of the same stuff that you on othe org mode file: @@ -958,13 +981,13 @@ Once you open one of the views you can do most of the same stuff that you on oth When using the search agenda view you can: -* Search by TODO states with `/WAITING` -* Search by tags `+home`. The syntax for such searches follows a simple boolean logic: +- Search by TODO states with `/WAITING` +- Search by tags `+home`. The syntax for such searches follows a simple boolean logic: - `|`: or - `&`: and - `+`: include matches - - `-`: exclude matches + - `-`: exclude matches Here are a few examples: @@ -972,7 +995,6 @@ When using the search agenda view you can: - `+computer|+urgent`: Returns all items tagged either `computer` or `urgent`. - `+computer&-urgent`: Returns all items tagged `computer` and not `urgent`. - As you may have noticed, the syntax above can be a little verbose, so org-mode offers convenient ways of shortening it. First, `-` and `+` imply `and` if no boolean operator is stated, so example three above could be rewritten simply as: ``` @@ -1003,36 +1025,159 @@ When using the search agenda view you can: +{computer\|work}+email ``` -* [Search by properties](https://orgmode.org/worg/org-tutorials/advanced-searching.html#property-searches): You can search by properties with the `PROPERTY="value"` syntax. Properties with numeric values can be queried with inequalities `PAGES>100`. To search by partial searches use a regular expression, for example if the entry had `:BIB_TITLE: Mysteries of the Amazon` you could use `BIB_TITLE={Amazon}` +- [Search by properties](https://orgmode.org/worg/org-tutorials/advanced-searching.html#property-searches): You can search by properties with the `PROPERTY="value"` syntax. Properties with numeric values can be queried with inequalities `PAGES>100`. To search by partial searches use a regular expression, for example if the entry had `:BIB_TITLE: Mysteries of the Amazon` you could use `BIB_TITLE={Amazon}` + + For example [if you want to search for the recurrent tasks that have been completed today you could use](https://github.com/nvim-orgmode/orgmode/pull/842) `LAST_REPEAT>"<2024-12-05>"`. You can also use relative values, like `<-1d>`, ``, ``, `<+3d>`, etc. -### Custom agendas +### Custom agendas -There is still no easy way to define your [custom agenda views](https://orgmode.org/manual/Custom-Agenda-Views.html), but it looks possible [1](https://github.com/nvim-orgmode/orgmode/issues/478) and [2](https://github.com/nvim-orgmode/orgmode/issues/135). +You an use [custom agenda commands](https://github.com/nvim-orgmode/orgmode/blob/d62fd3cdb2958e2e76fb0af4ea64d6209703fbe0/DOCS.md#org_agenda_custom_commands) -I've made an [ugly fix](https://github.com/nvim-orgmode/orgmode/pull/831) to be able to use it with the `tags` agenda. Until it's solved you can use [my fork](https://github.com/lyz-code/orgmode). To define your custom agenda you can set for example: +Define custom agenda views that are available through the `org_agenda` mapping. It is possible to combine multiple agenda types into single view. 
An example: -```Lua +```lua +require('orgmode').setup({ + org_agenda_files = {'~/org/**/*'}, + org_agenda_custom_commands = { + -- "c" is the shortcut that will be used in the prompt + c = { + description = 'Combined view', -- Description shown in the prompt for the shortcut + types = { + { + type = 'tags_todo', -- Type can be agenda | tags | tags_todo + match = '+PRIORITY="A"', --Same as providing a "Match:" for tags view oa + m, See: https://orgmode.org/manual/Matching-tags-and-properties.html + org_agenda_overriding_header = 'High priority todos', + org_agenda_todo_ignore_deadlines = 'far', -- Ignore all deadlines that are too far in future (over org_deadline_warning_days). Possible values: all | near | far | past | future + }, + { + type = 'agenda', + org_agenda_overriding_header = 'My daily agenda', + org_agenda_span = 'day' -- can be any value as org_agenda_span + }, + { + type = 'tags', + match = 'WORK', --Same as providing a "Match:" for tags view oa + m, See: https://orgmode.org/manual/Matching-tags-and-properties.html + org_agenda_overriding_header = 'My work todos', + org_agenda_todo_ignore_scheduled = 'all', -- Ignore all headlines that are scheduled. Possible values: past | future | all + }, + { + type = 'agenda', + org_agenda_overriding_header = 'Whole week overview', + org_agenda_span = 'week', -- 'week' is default, so it's not necessary here, just an example + org_agenda_start_on_weekday = 1 -- Start on Monday + org_agenda_remove_tags = true -- Do not show tags only for this view + }, + } + }, + p = { + description = 'Personal agenda', + types = { + { + type = 'tags_todo', + org_agenda_overriding_header = 'My personal todos', + org_agenda_category_filter_preset = 'todos', -- Show only headlines from `todos` category. Same value providad as when pressing `/` in the Agenda view + org_agenda_sorting_strategy = {'todo-state-up', 'priority-down'} -- See all options available on org_agenda_sorting_strategy + }, + { + type = 'agenda', + org_agenda_overriding_header = 'Personal projects agenda', + org_agenda_files = {'~/my-projects/**/*'}, -- Can define files outside of the default org_agenda_files + }, + { + type = 'tags', + org_agenda_overriding_header = 'Personal projects notes', + org_agenda_files = {'~/my-projects/**/*'}, + org_agenda_tag_filter_preset = 'NOTES-REFACTOR' -- Show only headlines with NOTES tag that does not have a REFACTOR tag. Same value providad as when pressing `/` in the Agenda view + }, + } + } + } +}) +``` + +You can also define the `org_agenda_sorting_strategy`. The default value is `{ agenda = {'time-up', 'priority-down', 'category-keep'}, todo = {'priority-down', 'category-keep'}, tags = {'priority-down', 'category-keep'}}`. + +The available list of sorting strategies to apply to a given view are: + +- `time-up`: Sort entries by time of day. 
Applicable only in agenda view +- `time-down`: Opposite of time-up +- `priority-down`: Sort by priority, from highest to lowest +- `priority-up`: Sort by priority, from lowest to highest +- `tag-up`: Sort by sorted tags string, ascending +- `tag-down`: Sort by sorted tags string, descending +- `todo-state-up`: Sort by todo keyword by position (example: 'TODO, PROGRESS, DONE' has a sort value of 1, 2 and 3), ascending +- `todo-state-down`: Sort by todo keyword, descending +- `clocked-up`: Show clocked in headlines first +- `clocked-down`: Show clocked in headines last +- `category-up`: Sort by category name, ascending +- `category-down`: Sort by category name, descending +- `category-keep`: Keep default category sorting, as it appears in org-agenda-files + +You can open the custom agendas with the API too. For example to open the agenda stored under `t`: + +```lua keys = { { - "gt", + "gt", function() - require("orgmode.api.agenda").tags({ - query = "+today/-INACTIVE-DONE-REJECTED", - todo_only = true, - }) + vim.notify("Opening today's agenda", vim.log.levels.INFO) + require("orgmode.api.agenda").open_by_key("t") + end, + desc = "Open orgmode agenda for today", + }, + }, +``` + +In that case I'm configuring the `keys` section of the lazyvim plugin. Through the API you can also configure these options: + +- `org_agenda_files` +- `org_agenda_sorting_strategy` +- `org_agenda_category_filter_preset` +- `org_agenda_todo_ignore_deadlines`: Ignore all deadlines that are too far in future (over org_deadline_warning_days). Possible values: all | near | far | past | future +- `org_agenda_todo_ignore_scheduled`: Ignore all headlines that are scheduled. Possible values: past | future | all + +#### Load different agendas with the same binding depending on the time + +I find it useful to bind `gt` to Today's agenda, but what today means is different between week days. Imagine that you want to load an agenda if you're from monday to friday before 18:00 (a work agenda) versus a personal agenda the rest of the time. + +You could then configure this function: + +```lua + keys = { + { + "gt", + function() + local current_time = os.date("*t") + local day = current_time.wday -- 1 = Sunday, 2 = Monday, etc. + local hour = current_time.hour + + local agenda_key = "t" + local agenda_name = "Today's" -- default + + -- Monday (2) through Friday (6) + if day >= 2 and day <= 6 then + if hour < 17 then + agenda_key = "w" + agenda_name = "Today + Work" + end + end + + vim.notify("Opening " .. agenda_name .. " agenda", vim.log.levels.INFO) + require("orgmode.api.agenda").open_by_key(agenda_key) end, desc = "Open orgmode agenda for today", }, - } ``` ### [Reload the agenda con any file change](https://github.com/nvim-orgmode/orgmode/issues/656) + There are two ways of doing this: - Reload the agenda each time you save a document - Reload the agenda each X seconds #### Reload the agenda each time you save a document + Add this to your configuration: ```lua @@ -1048,7 +1193,9 @@ vim.api.nvim_create_autocmd('BufWritePost', { ``` This will reload agenda window if it's open each time you write any org file, it won't work if you archive without saving though yet. But that can be easily fixed if you use [the auto-save plugin](vim_autosave.md). 
+ #### Reload the agenda each X seconds + Add this to your configuration: ```lua @@ -1056,7 +1203,7 @@ vim.api.nvim_create_autocmd("FileType", { pattern = "org", group = vim.api.nvim_create_augroup("orgmode", { clear = true }), callback = function() - -- Reload the agenda each second if its opened so that unsaved changes + -- Reload the agenda each second if its opened so that unsaved changes -- in the files are shown local timer = vim.loop.new_timer() timer:start( @@ -1072,11 +1219,12 @@ vim.api.nvim_create_autocmd("FileType", { end, }) ``` + ## [Capture](https://orgmode.org/manual/Capture.html) Capture lets you quickly store notes with little interruption of your work flow. It works the next way: -- Open the interface with `;c` (Default `oc`) that asks you what kind of element you want to capture. +- Open the interface with `;c` (Default `oc`) that asks you what kind of element you want to capture. - Select the template you want to use. By default you only have the `Task` template, that introduces a task into the same file where you're at, select it by pressing `t`. - Fill up the template. - Choose what to do with the captured content: @@ -1099,7 +1247,6 @@ mappings = { If you're outside vim you can trigger the capture (if you're using i3) by adding this config: - ```bash for_window [title="Capture"] floating enable, resize set 50 ppt 30 ppt bindsym $mod+c exec PATH="$PATH:/home/lyz/.local/bin" kitty --title Capture nvim +"lua require('orgmode').action('capture.prompt')" @@ -1111,11 +1258,11 @@ By pressing `alt+c` a floating terminal will open with the capture template. Capture lets you define different templates for the different inputs. Each template has the next elements: -* Keybinding: Keys to press to activate the template -* Description: What to show in the capture menu to describe the template -* Template: The actual template of the capture, look below to see how to create them. -* Target: The place where the captured element will be inserted to. For example `~/org/todo.org`. If you don't define it it will go to the file configured in `org_default_notes_file`. -* Headline: An [optional headline](https://github.com/nvim-orgmode/orgmode/issues/196) of the Target file to insert the element. +- Keybinding: Keys to press to activate the template +- Description: What to show in the capture menu to describe the template +- Template: The actual template of the capture, look below to see how to create them. +- Target: The place where the captured element will be inserted to. For example `~/org/todo.org`. If you don't define it it will go to the file configured in `org_default_notes_file`. +- Headline: An [optional headline](https://github.com/nvim-orgmode/orgmode/issues/196) of the Target file to insert the element. For example: @@ -1144,7 +1291,7 @@ For the template you can use the next variables: For example: ```lua -{ +{ T = { description = 'Todo', template = '* TODO %?\n %u', @@ -1178,8 +1325,8 @@ For example: } ``` - ### Use capture + ## Links Orgmode supports the insertion of links with the `org_insert_link` and `org_store_link` commands. I've changed the default `oli` and `ols` bindings to some quicker ones: @@ -1193,6 +1340,7 @@ mappings = { }, } ``` + There are the next possible workflows: - Discover links as you go: If you more less know in which file are the headings you want to link: @@ -1201,14 +1349,15 @@ There are the next possible workflows: - Then type `::*` and press `` again to get the list of available headings. 
- Store the links you want to paste:
  - Go to the heading you want to link
-  - Press `ls` to store the link
-  - Go to the place where you want to paste the link
+  - Press `ls` to store the link
+  - Go to the place where you want to paste the link
  - Press `l` and then `` to iterate over the saved links.
+
 ## The orgmode repository file organization
 
-How to structure the different orgmode files is something that has always confused me, each one does it's own way, and there are no good posts on why one structure is better than other, people just state what they do.
+How to structure the different orgmode files is something that has always confused me, everyone does it their own way, and there are no good posts on why one structure is better than another, people just state what they do.
 
-I've started with a typical [gtd](gtd.md) structure with a directory for the `todo` another for the `calendar` then another for the `references`. In the `todo` I had a file for personal stuff, another for each of my work clients, and the `someday.org`. Soon making the internal links was cumbersome so I decided to merge the personal `todo.org` and the `someday.org` into the same file and use folds to hide uninteresting parts of the file. The reality is that I feel that orgmode is less responsive and that I often feel lost in the file.
+I started with a typical [gtd](gtd.md) structure with a directory for the `todo`, another for the `calendar` and another for the `references`. In the `todo` I had a file for personal stuff, another for each of my work clients, and the `someday.org`. Soon making the internal links became cumbersome, so I decided to merge the personal `todo.org` and the `someday.org` into the same file and use folds to hide uninteresting parts of the file. The reality is that I feel that orgmode is less responsive and that I often feel lost in the file.
 
 I'm now more into the idea of having files per project in a flat structure and use an index.org file to give it some sense in the same way I do with the mkdocs repositories. Then I'd use internal links in the todo.org file to organize the priorities of what to do next.
 
@@ -1223,6 +1372,7 @@ Cons:
 
 - Filenames must be unique. It hasn't been a problem in blue.
-- Blue won't be flattened into Vida as it's it's own knowledge repository
+- Blue won't be flattened into Vida as it's its own knowledge repository
+
 ## Synchronizations
 
 ### Synchronize with other orgmode repositories
@@ -1231,10 +1381,10 @@ I use orgmode both at the laptop and the mobile, I want to syncronize some files
 
 - The files should be available on the devices when I'm not at home
 - The synchronization will be done only on the local network
-- The synchronization mechanism will only be able to see the files that need to be synched.
+- The synchronization mechanism will only be able to see the files that need to be synced.
-- Different files can be synced to different devices. If I have three devices (laptop, mobile, tablet) I want to sync all mobile files to the laptop but just some to the tablet).
+- Different files can be synced to different devices. If I have three devices (laptop, mobile, tablet) I want to sync all mobile files to the laptop but just some to the tablet.
 
-Right now I'm already using [syncthing](syncthing.md) to sync files between the mobile and my server, so it's tempting to use it also to solve this issue. So the first approach is to spawn a syncthing docker at the laptop that connects with the server to sync the files whenever I'm at home.
+Right now I'm already using [syncthing](syncthing.md) to sync files between the mobile and my server, so it's tempting to also use it to solve this issue. So the first approach is to spawn a syncthing docker at the laptop that connects with the server to sync the files whenever I'm at home.
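+
+A minimal sketch of what that laptop-side container could look like (the image is the official `syncthing/syncthing` one; ports, paths and folder names are assumptions to adapt to your setup):
+
+```yaml
+---
+services:
+  syncthing:
+    container_name: syncthing-laptop
+    image: syncthing/syncthing
+    volumes:
+      # Syncthing's own state and configuration
+      - ./syncthing-config:/var/syncthing/config
+      # Only the org files that need to be shared
+      - ~/org:/var/syncthing/org
+    ports:
+      - "8384:8384" # Web UI
+      - "22000:22000" # Sync protocol
+    restart: unless-stopped
+```
+
+With something like this running, the remaining question is which directories to expose to it, which is what the following approaches explore.
+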
#### Mount the whole orgmode repository with syncthing @@ -1256,7 +1406,6 @@ This is also a good solution for the different authorization syncs as you can on We could also select which files to mount on the syncthing docker of the laptop. I find this to be an ugly solution because we'd first need to mount a directory so that syncthing can write it's internal data and then map each of the files we want to sync. So each time a new file is added, we need to change the docker command... Unpleasant. - #### Use the org-orgzly script Another solution would be to use [org-orgzly script](https://codeberg.org/anoduck/org-orgzly) to parse a chosen org file or files, check if an entry meets required parameters, and if it does, write the entry in a new file located inside the directory you desire to sync with orgzly. In theory it may work but I feel it's too Dropbox focused. @@ -1271,23 +1420,25 @@ You may want to synchronize your calendar entries with external ones shared with The orgmode docs have a tutorial to [sync with google](https://orgmode.org/worg/org-tutorials/org-google-sync.html) and suggests some orgmode packages that do that, sadly it won't work with `nvim-orgmode`. We'll need to go the "ugly way" by: -* Downloading external calendar events to ics with [`vdirsyncer`](vdirsyncer.md). -* [Importing the ics to orgmode](#importing-the-ics-to-orgmode) -* Editing the events in orgmode -* [Exporting from orgmode to ics](#exporting-from-orgmode-to-ics) -* Uploading then changes to the external calendar events with [`vdirsyncer`](vdirsyncer.md). +- Downloading external calendar events to ics with [`vdirsyncer`](vdirsyncer.md). +- [Importing the ics to orgmode](#importing-the-ics-to-orgmode) +- Editing the events in orgmode +- [Exporting from orgmode to ics](#exporting-from-orgmode-to-ics) +- Uploading then changes to the external calendar events with [`vdirsyncer`](vdirsyncer.md). #### Importing the ics to orgmode There are many tools that do this: -* [`ical2orgpy`](https://github.com/ical2org-py/ical2org.py) -* [`ical2org` in go](https://github.com/rjhorniii/ical2org) +- [`ical2orgpy`](https://github.com/ical2org-py/ical2org.py) +- [`ical2org` in go](https://github.com/rjhorniii/ical2org) They import an `ics` file #### Exporting from orgmode to ics + ## Clocking + There is partial support for [Clocking work time](https://orgmode.org/manual/Clocking-Work-Time.html). I've changed the default bindings to make them more comfortable: @@ -1309,41 +1460,131 @@ mappings = { ``` In theory you can use the key `R` in any agenda to report the time, although I still find it kind of buggy. + +## [Better handle indentations](https://github.com/nvim-orgmode/orgmode/issues/859#issuecomment-2614561947) + +There is something called [virtual indents](https://github.com/nvim-orgmode/orgmode/blob/master/docs/configuration.org#org_startup_indented) that will prevent you from many indentation headaches. To enable them set the `org_startup_indented = true` configuration. + +If you need to adjust the indentation of your document (for example after enabling the option on existent orgmode code), visually select the lines to correct the indentation (`V`) and then press `=`. You can do this with the whole file `(╥﹏╥)`. 
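+
+A minimal sketch of where that option goes (only the relevant key of the `setup` call is shown):
+
+```lua
+require('orgmode').setup({
+  -- Enable virtual indents so body text is visually indented under its headline
+  org_startup_indented = true,
+})
+```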
+ ## Other interesting features Some interesting features for the future are: -* [Effort estimates](https://orgmode.org/manual/Effort-Estimates.html) -* [Clocking](https://orgmode.org/manual/Clocking-Work-Time.html) +- [Effort estimates](https://orgmode.org/manual/Effort-Estimates.html) +- [Clocking](https://orgmode.org/manual/Clocking-Work-Time.html) + +# Nice tweaks + +## Remove some tags when the state has changed so DONE + +For example if you want to remove them for recurrent tasks + +```lua + local function remove_specific_tags(headline) + local tagsToRemove = { "t", "w", "m", "q", "y" } + local currentTags = headline:get_tags() + local newTags = {} + local needsUpdate = false + + -- Build new tags list excluding t, w, m + for _, tag in ipairs(currentTags) do + local shouldKeep = true + for _, removeTag in ipairs(tagsToRemove) do + if tag == removeTag then + shouldKeep = false + needsUpdate = true + break + end + end + if shouldKeep then + table.insert(newTags, tag) + end + end + -- Only update if we actually removed something + if needsUpdate then + headline:set_tags(table.concat(newTags, ":")) + headline:refresh() + end + end + + local EventManager = require("orgmode.events") + EventManager.listen(EventManager.event.TodoChanged, function(event) + ---@cast event OrgTodoChangedEvent + if event.headline then + if type == "DONE" then + remove_specific_tags(event.headline) + end + end + end) + +``` +## [Register the todo changes in the logbook](https://github.com/nvim-orgmode/orgmode/issues/466) + +You can now register the changes with events. Add this to your plugin config. If you're using lazyvim: + +```lua +return { + { + "nvim-orgmode/orgmode", + config = function() + require("orgmode").setup({...}) + + local EventManager = require("orgmode.events") + local Date = require("orgmode.objects.date") + + EventManager.listen(EventManager.event.TodoChanged, function(event) + ---@cast event OrgTodoChangedEvent + if event.headline then + local current_todo, _, _ = event.headline:get_todo() + local now = Date.now() + + event.headline:add_note({ + 'State "' .. current_todo .. '" from "' .. event.old_todo_state .. '" [' .. now:to_string() .. "]", + }) + end + end) + end, + }, +} +``` + # Troubleshooting + ## doesn't go up in the jump list + It's because [ is a synonym of ](https://github.com/neovim/neovim/issues/5916), and `org_cycle` is [mapped by default as ](https://github.com/nvim-orgmode/orgmode/blob/c0584ec5fbe472ad7e7556bc97746b09aa7b8221/lua/orgmode/config/defaults.lua#L146) If you're used to use `zc` then you can disable the `org_cycle` by setting the mapping `org_cycle = ""`. ## [Prevent Enter to create `*` on headings](https://github.com/LazyVim/LazyVim/discussions/2529) + With a clean install of LazyVim distribution when pressing `` from a heading it creates a new heading instead of moving the cursor to the body of the heading: ```org * Test <-- press enter in insert mode ``` + The result is: + ```org * Test * <-- cursor here ``` + The expected behaviour is: + ```org * Test <-- cursor here ``` -It's because of the [`formatoptions`](https://vimhelp.org/change.txt.html#fo-table). If you do `:set fo-=r`, you will observe the difference. +It's because of the [`formatoptions`](https://vimhelp.org/change.txt.html#fo-table). If you do `:set fo-=r`, you will observe the difference. The `r` option automatically inserts the current comment leader after pressing `` in Insert mode. To make the change permanent, you should enforce it with an auto-command. 
I really have no idea what makes Neovim think that the character `*` is a comment leader in `.org` files. -```lua +```lua vim.api.nvim_create_autocmd("FileType", { pattern = "org", group = vim.api.nvim_create_augroup("orgmode", { clear = true }), @@ -1359,6 +1600,7 @@ Note: if you want to debug orgmode with DAP use [this config instead](#troublesh - [Create a new issue](https://github.com/nvim-orgmode/orgmode/issues/new/choose) - Create the `minimal_init.lua` file from [this file](https://github.com/nvim-orgmode/orgmode/blob/master/scripts/minimal_init.lua) + ```lua vim.cmd([[set runtimepath=$VIMRUNTIME]]) vim.cmd([[set packpath=/tmp/nvim/site]]) @@ -1422,10 +1664,12 @@ Note: if you want to debug orgmode with DAP use [this config instead](#troublesh load_config() end ``` + - Add the leader configuration at the top of the file `vim.g.mapleader = ' '` - Open it with `nvim -u minimal_init.lua` ## Troubleshoot orgmode from within lazyvim + To start a fresh instance of lazyvim with orgmode you can run ```bash @@ -1433,10 +1677,10 @@ mkdir ~/.config/newstarter && cd ~/.config/newstarter git clone https://github.com/LazyVim/starter . rm -rf .git* NVIM_APPNAME=newstarter nvim # and wait for installation of plugins to finish -# Quit Neovim and start again with +# Quit Neovim and start again with NVIM_APPNAME=newstarter nvim lua/plugins/orgmode.lua -# Paste the contents of the installation steps for `lazy.nvim` mentioned [here](https://github.com/nvim-orgmode/orgmode#installation) in the file that you opened. -# Quit and restart Neovim again with the aforementioned command +# Paste the contents of the installation steps for `lazy.nvim` mentioned [here](https://github.com/nvim-orgmode/orgmode#installation) in the file that you opened. +# Quit and restart Neovim again with the aforementioned command ``` Once you're done clean up with: @@ -1444,12 +1688,15 @@ Once you're done clean up with: ```bash rm -rf ~/.config/newstarter ~/.local/share/newstarter ``` + ## Troubleshoot orgmode with dap ### To debug You already have configured `dap` just press `` in the window where you are running orgmode and set the breakpoints with the other nvim. + ### To open an issue + Use the next config and follow the steps of [Create an issue in the orgmode repository](#create-an-issue-in-the-orgmode-repository). ```lua @@ -1692,9 +1939,9 @@ local org = require('orgmode').setup({ } }) local dap = require"dap" -dap.configurations.lua = { - { - type = 'nlua', +dap.configurations.lua = { + { + type = 'nlua', request = 'attach', name = "Attach to running Neovim instance", } @@ -1728,7 +1975,7 @@ The folding of the recurring tasks iterations is also kind of broken. For the ne - State "DONE" from "TODO" [2024-01-03 Wed 19:39] - State "DONE" from "TODO" [2023-12-11 Mon 21:30] - State "DONE" from "TODO" [2023-11-24 Fri 13:10] - + - [ ] Do X ``` @@ -1737,8 +1984,8 @@ When folded the State changes is not added to the Properties fold. It's shown so ```orgmode ** TODO Recurring task DEADLINE: <2024-02-08 Thu .+14d -0d> - :PROPERTIES:... - + :PROPERTIES:... + - State "DONE" from "TODO" [2024-01-25 Thu 11:53] - State "DONE" from "TODO" [2024-01-10 Wed 23:24] - State "DONE" from "TODO" [2024-01-03 Wed 19:39] @@ -1762,43 +2009,85 @@ It's [not yet supported](https://github.com/nvim-orgmode/orgmode/issues/200) to ## Attempt to index local 'src_file' (a nil value) using telescope orgmode -This happens when not all the files are loaded in the telescope cache. You just need to wait until they are. 
+This happens when not all the files are loaded in the telescope cache. You just need to wait until they are. I've made some tests and it takes more time to load many small files than a few big ones. Be careful then with which files you add to your `org_agenda_files`. For example you can take the following precautions:
 
 - When adding a wildcard, use `*.org` not to load the `*.org_archive` files into the ones to process. Or [save your archive files elsewhere](#archiving).
-# Plugins
+
+# Plugins
+
 nvim-orgmode supports plugins. Check [org-checkbox](https://github.com/massix/org-checkbox.nvim/blob/trunk/lua/orgcheckbox/init.lua) to see a simple one
+
+# [API usage](https://github.com/nvim-orgmode/orgmode/blob/master/doc/orgmode_api.txt)
+
+## [Get the headline under the cursor](https://github.com/nvim-orgmode/orgmode/commit/2c806ca)
+
+## [Read and write files](https://github.com/nvim-orgmode/orgmode/commit/500004ff315475033e3a9247b61addd922d1f5da)
+
+You have information on how to do it in [this PR](https://github.com/nvim-orgmode/orgmode/commit/500004ff315475033e3a9247b61addd922d1f5da).
+
+## [Create custom hyperlink types](https://github.com/nvim-orgmode/orgmode/commit/8cdfc8d34bd9c5993ea8f933b5f5c306081ffb97)
+
+Custom types can trigger functionality such as opening a terminal and pinging the provided URL.
+
+To add your own custom hyperlink type, provide a custom handler to the
+`hyperlinks.sources` setting. Each handler needs to have a `get_name()` method
+that returns a name for the handler. Additionally, the optional `follow(link)` and
+`autocomplete(link)` methods are available to open the link and
+provide the autocompletion.
 ## [Refile a headline to another destination](https://github.com/nvim-orgmode/orgmode/issues/471#event-16071077147)
+
+You can do this [with the API](https://github.com/nvim-orgmode/orgmode/blob/master/doc/orgmode_api.txt#L27).
+
+Assuming you are in the file where your TODOs are:
+
+```lua
+local api = require('orgmode.api')
+local closest_headline = api.current():get_closest_headline()
+local destination_file = api.load('~/org/journal.org')
+local destination_headline = vim.tbl_filter(function(headline)
+  return headline.title == 'My journal'
+end, destination_file.headlines)[1]
+
+api.refile({ source = closest_headline, destination = destination_headline })
+```
+
+## [Use events](https://github.com/nvim-orgmode/orgmode/tree/master/lua/orgmode/events)
+
 # Comparison with Markdown
 
 What I like of Org mode over Markdown:
 
-* The whole interface to interact with the elements of the document through key bindings:
-  * Move elements around.
-  * Create elements
-* The TODO system is awesome
-* The Agenda system
-* How it handles checkboxes <3
-* Easy navigation between references in the document
-* Archiving feature
-* Refiling feature
-* `#` is used for comments.
-* Create internal document links is easier, you can just copy and paste the heading similar to `[[*This is the heading]]` on markdown you need to edit it to `[](#this-is-the-heading)`.
+- The whole interface to interact with the elements of the document through key bindings:
+  - Move elements around.
+  - Create elements
+- The TODO system is awesome
+- The Agenda system
+- How it handles checkboxes <3
+- Easy navigation between references in the document
+- Archiving feature
+- Refiling feature
+- `#` is used for comments.
+- Create internal document links is easier, you can just copy and paste the heading similar to `[[*This is the heading]]` on markdown you need to edit it to `[](#this-is-the-heading)`. What I like of markdown over Org mode: -* The syntax of the headings `## Title` better than `** Title`. Although it makes sense to have `#` for comments. -* The syntax of the links: `[reference](link)` is prettier to read and write than `[[link][reference]]`, although this can be improved if only the reference is shown by your editor (nvim-orgmode doesn't do his yet) +- The syntax of the headings `## Title` better than `** Title`. Although it makes sense to have `#` for comments. +- The syntax of the links: `[reference](link)` is prettier to read and write than `[[link][reference]]`, although this can be improved if only the reference is shown by your editor (nvim-orgmode doesn't do his yet) + # Interesting things to investigate + - [org-bullets.nvim](https://github.com/akinsho/org-bullets.nvim): Show org mode bullets as UTF-8 characters. - [headlines.nvim](https://github.com/lukas-reineke/headlines.nvim): Add few highlight options for code blocks and headlines. - [Sniprun](https://github.com/michaelb/sniprun): A neovim plugin to run lines/blocs of code (independently of the rest of the file), supporting multiples languages. + ## Convert source code in the fly from markdown to orgmode + It would be awesome that when you do `nvim myfile.md` it automatically converts it to orgmode so that you can use all the power of it and once you save the file it converts it back to markdown -I've started playing around with this but got nowhere. I leave you my breadcrumbs in case you want to follow this path. +I've started playing around with this but got nowhere. I leave you my breadcrumbs in case you want to follow this path. ```lua -- Load the markdown documents as orgmode documents @@ -1815,6 +2104,7 @@ vim.api.nvim_create_autocmd("BufReadPost", { ``` If you make it work please [tell me how you did it!](contact.md) + # Things that are still broken or not developed - [The agenda does not get automatically refreshed](https://github.com/nvim-orgmode/orgmode/issues/656) @@ -1823,11 +2113,15 @@ If you make it work please [tell me how you did it!](contact.md) - [Refiling from the agenda](https://github.com/nvim-orgmode/orgmode/issues/657) - [Interacting with the logbook](https://github.com/nvim-orgmode/orgmode/issues/149) - [Easy list item management](https://github.com/nvim-orgmode/orgmode/issues/472) + # Python libraries + ## [org-rw](https://code.codigoparallevar.com/kenkeiras/org-rw) -`org-rw` is a library designed to handle Org-mode files, offering the ability to modify data and save it back to the disk. + +`org-rw` is a library designed to handle Org-mode files, offering the ability to modify data and save it back to the disk. 
- **Pros**: + - Allows modification of data and saving it back to the disk - Includes tests to ensure functionality @@ -1837,12 +2131,14 @@ If you make it work please [tell me how you did it!](contact.md) - Uses `unittest` instead of `pytest`, which some developers may prefer - Tests are not easy to read - Last commit was made five months ago, indicating potential inactivity - - [Not very popular]( https://github.com/kenkeiras/org-rw), with only one contributor, three stars, and no forks + - [Not very popular](https://github.com/kenkeiras/org-rw), with only one contributor, three stars, and no forks ## [orgparse](https://github.com/karlicoss/orgparse) + `orgparse` is a more popular library for parsing Org-mode files, with better community support and more contributors. However, it has significant limitations in terms of editing and saving changes. - **Pros**: + - More popular with 13 contributors, 43 forks, and 366 stars - Includes tests to ensure functionality - Provides some documentation, available [here](https://orgparse.readthedocs.io/en/latest/) @@ -1856,9 +2152,11 @@ If you make it work please [tell me how you did it!](contact.md) - The `ast` is geared towards single-pass document reading. While it is possible to modify the document object tree, writing back changes is more complicated and not a common use case for the author. ## [Tree-sitter](https://tree-sitter.github.io/tree-sitter/) + Tree-sitter is a powerful parser generator tool and incremental parsing library. It can build a concrete syntax tree for a source file and efficiently update the syntax tree as the source file is edited. - **Pros**: + - General enough to parse any programming language - Fast enough to parse on every keystroke in a text editor - Robust enough to provide useful results even in the presence of syntax errors @@ -1879,14 +2177,16 @@ To get a better grasp of Tree-sitter you can check their talks: - [Github Universe 2017](https://www.youtube.com/watch?v=a1rC79DHpmY). ## [lazyblorg orgparser.py](https://github.com/novoid/lazyblorg/blob/master/lib/orgparser.py) + `lazyblorg orgparser.py` is another tool for working with Org-mode files. However, I didn't look at it. 
+ # References -* [Source](https://github.com/nvim-orgmode/orgmode) -* [Getting started guide](https://github.com/nvim-orgmode/orgmode/wiki/Getting-Started) -* [Docs](https://nvim-orgmode.github.io/) -* [Developer docs](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md) -* [Default configuration file](https://github.com/nvim-orgmode/orgmode/blob/master/lua/orgmode/config/defaults.lua) -* [List of supported commands](https://github.com/nvim-orgmode/orgmode/wiki/Feature-Completeness#nvim-org-commands-not-in-emacs) -* [Default mappings](https://github.com/nvim-orgmode/orgmode/blob/master/lua/orgmode/config/mappings/init.lua) -* [List of plugins](https://github.com/topics/orgmode-nvim) +- [Source](https://github.com/nvim-orgmode/orgmode) +- [Getting started guide](https://github.com/nvim-orgmode/orgmode/wiki/Getting-Started) +- [Docs](https://nvim-orgmode.github.io/) +- [Developer docs](https://github.com/nvim-orgmode/orgmode/blob/master/DOCS.md) +- [Default configuration file](https://github.com/nvim-orgmode/orgmode/blob/master/lua/orgmode/config/defaults.lua) +- [List of supported commands](https://github.com/nvim-orgmode/orgmode/wiki/Feature-Completeness#nvim-org-commands-not-in-emacs) +- [Default mappings](https://github.com/nvim-orgmode/orgmode/blob/master/lua/orgmode/config/mappings/init.lua) +- [List of plugins](https://github.com/topics/orgmode-nvim) diff --git a/docs/orgzly.md b/docs/orgzly.md index 242fa6bc176..4fff2ef08db 100644 --- a/docs/orgzly.md +++ b/docs/orgzly.md @@ -1,5 +1,11 @@ [Orgzly](https://www.orgzlyrevived.com/) is an android application to interact with [orgmode](orgmode.md) files. +# Tips + +## Not adding a todo state when creating a new element by default + +The default state `NOTE` doesn't add any state. + # Troubleshooting ## All files give conflicts when nothing has changed diff --git a/docs/pdm.md b/docs/pdm.md index aad99116af2..f08ca709f76 100644 --- a/docs/pdm.md +++ b/docs/pdm.md @@ -4,6 +4,8 @@ date: 20211217 author: Lyz --- +Note: Maybe use [uv](https://astral.sh/blog/uv) instead (although so far I'm still using `pdm`) + [PDM](https://pdm.fming.dev/) is a modern Python package manager with [PEP 582](https://www.python.org/dev/peps/pep-0582/) support. It installs and manages packages in a similar way to npm that doesn't need to create a diff --git a/docs/pretalx.md b/docs/pretalx.md new file mode 100644 index 00000000000..81a75fcf270 --- /dev/null +++ b/docs/pretalx.md @@ -0,0 +1,3 @@ +# Import a pretalx calendar in giggity + +Search the url similar to https://pretalx.com//schedule/export/schedule.xml diff --git a/docs/psu.md b/docs/psu.md index 3c2ac58ad1f..b066fc6ff11 100644 --- a/docs/psu.md +++ b/docs/psu.md @@ -92,7 +92,6 @@ manufacturers offer warranties. Calculator](https://www.bequiet.com/en/psucalculator) * [Cooler Master Power Calculator](http://www.coolermaster.com/power-supply-calculator/) * [Seasonic Wattage Calculator](https://seasonic.com/wattage-calculator) - * [MSI PSU Calculator](https://www.msi.com/calculator) * [Newegg PSU Calculator](https://www.newegg.com/tools/power-supply-calculator/) * *Consider upcoming GPU power requirements*: Although the best graphics cards diff --git a/docs/renovate.md b/docs/renovate.md index a23ebb42093..72e8e285d67 100644 --- a/docs/renovate.md +++ b/docs/renovate.md @@ -9,18 +9,39 @@ dependency updates. Multi-platform and multi-language. Why use Renovate? -* Get pull requests to update your dependencies and lock files. -* Reduce noise by scheduling when Renovate creates PRs. 
-* Renovate finds relevant package files automatically, including in monorepos. -* You can customize the bot's behavior with configuration files. -* Share your configuration with ESLint-like config presets. -* Get replacement PRs to migrate from a deprecated dependency to the community - suggested replacement (npm packages only). -* Open source. -* Popular (more than 9.7k stars and 1.3k forks) -* Beautifully integrate with main Git web applications (Gitea, Gitlab, Github). -* It supports most important languages: Python, Docker, Kubernetes, Terraform, - Ansible, Node, ... +- Get pull requests to update your dependencies and lock files. +- Reduce noise by scheduling when Renovate creates PRs. +- Renovate finds relevant package files automatically, including in monorepos. +- You can customize the bot's behavior with configuration files. +- Share your configuration with ESLint-like config presets. +- Get replacement PRs to migrate from a deprecated dependency to the community + suggested replacement (npm packages only). +- Open source. +- Popular (more than 9.7k stars and 1.3k forks) +- Beautifully integrate with main Git web applications (Gitea, Gitlab, Github). +- It supports most important languages: Python, Docker, Kubernetes, Terraform, + Ansible, Node, ... + +# [Installation](https://about.gitea.com/resources/tutorials/use-gitea-and-renovate-bot-to-automatically-monitor-software-packages) + +- Create Renovate Bot Account and generate a token for the Gitea Action secret +- Add the renovate bot account as collaborator with write permissions to the repository you want to update. +- Create a repository to store our Renovate bot configurations, assuming called renovate-config. + +In renovate-config, create a file config.js to configure Renovate: + +```json +module.exports = { + "endpoint": "https://gitea.com/api/v1", // replace it with your actual endpoint + "gitAuthor": "Renovate Bot ", + "platform": "gitea", + "onboardingConfigFileName": "renovate.json", + "autodiscover": true, + "optimizeForDisabled": true, +}; +``` + +If you're using mysql or you see errors like `.../repository/pulls 500 internal error` you [may need to set `unicodeEmoji: false`](https://github.com/renovatebot/renovate/issues/10264). # Behind the scenes @@ -28,18 +49,18 @@ Why use Renovate? Renovate: -* Scans your repositories to detect package files and their dependencies. -* Checks if any newer versions exist. -* Raises Pull Requests for available updates. +- Scans your repositories to detect package files and their dependencies. +- Checks if any newer versions exist. +- Raises Pull Requests for available updates. The Pull Requests patch the package files directly, and include Release Notes for the newer versions (if they are available). By default: -* You'll get separate Pull Requests for each dependency. -* Major updates are kept separate from non-major updates. +- You'll get separate Pull Requests for each dependency. +- Major updates are kept separate from non-major updates. # References -* [Docs](https://docs.renovatebot.com/) +- [Docs](https://docs.renovatebot.com/) diff --git a/docs/roadmap_adjustment.md b/docs/roadmap_adjustment.md index 438b333f427..3f85d04be7f 100644 --- a/docs/roadmap_adjustment.md +++ b/docs/roadmap_adjustment.md @@ -495,65 +495,67 @@ It's important that you prepare your environment for the review. 
You need to be - Remove from your environment everything else that may distract you - Close all windows in your laptop that you're not going to use -To record the results of the review create the file `references/reviews/YYYY_MM.org`, where the month is the one that is ending with the following template: +To record the results of the review create the section in `pages/reviews.org` with the following template: ```org -:inow: -* Discover -* Analyze -* Decide +* winter +** january review +*** work +*** personal +**** month review +***** mental dump +****** What worries you right now? +****** What drained your energy or brought you down emotionally this last month? +****** What are the little things that burden you or slow you down? +****** What do you desire right now? +****** Where is your mind these days? +****** What did you enjoy most this last month? +****** What did help you most this last month? +****** What things would you want to finish throughout the month so you can carry them to the next? +****** What things do you feel you need to do? +****** What are you most proud of this month? +***** month checks +***** analyze +***** decide ``` -##### Personal integrity review discover +I'm assuming it's the january's review and that you have two kinds of reviews, one personal and one for work. -Try not to, but if you think of decisions you want to make that address the elements you're discovering, write them down in the `Decide` section of your review document. - -There are different paths to discover actionable items: - -- Analyze what is in your mind: Take 10 minutes to answer to the next questions (you don't need to answer them all): - - - What did you enjoy most this last month? - - - [ ] - - - What do you desire right now? - - - [ ] - - - What worries you right now? +##### Dump your mind - - [ ] +The first thing we want to do in the review is to dump all that's in our mind into our system to free up mental load. - - What did drain your energy or brought you down emotionally this last month? - - - [ ] - - - What month accomplishments are you proud of? - - - [ ] +Try not to, but if you think of decisions you want to make that address the elements you're discovering, write them down in the `Decide` section of your review document. - - Where is your mind these days? +There are different paths to discover actionable items: - - [ ] +- Analyze what is in your mind: Take 10 minutes to answer to the questions of the template under the "mental dump" section (you don't need to answer them all). Notice that we do not need to review our life logging tools (diary, action manager, ...) to answer these questions. This means that we're doing an analysis of what is in our minds right now, not throughout the month. It's flawed but as we do this analysis often, it's probably fine. We add more importance to the latest events in our life anyway. - - What did help you most this last month? +##### Clean your notebook - - [ ] +- Empty the elements you added to the `review box`. I have them in my inbox with the tag `:review:` (you have it in the month agenda view `gM`) - - What do you want for the next month? - - [ ] Notice that we do not need to review our life logging tools (diary, action manager, ...) to answer these questions. This means that we're doing an analysis of what is in our minds right now, not throughout the month. It's flawed but as we do this analysis often, it's probably fine. We add more importance to the latest events in our life anyway. 
+- Clean your life notebook by: -- Empty the elements you added to the `review box`. + - Iterate over the areas of `proyects.org` only checking the first level of projects, don't go deeper and for each element: + - Move the done elements either to `archive.org` or `logbook.org`. + - Move to `backlog.org` the elements that don't make sense to be active anymore + - Check if you have any `DONE` element in `calendar.org`. + - Empty the `inbox.org` + - Empty the `DONE` elements of `talk.org` + - Clean the elements that don't make sense anymore from `think.org` -- Process your `Month checks`. For each of them: +- Process your `month checks`. For each of them: - - If you need, add action elements in the `Discover` section of the review. + - If you need, add action elements in the `mental dump` section of the review. - Think of whether you've met the check. -- Process your `Month objectives`. For each of them: - - Think of whether you've met the objective. - - If you need, add action elements in the `Discover` section of the review. - - If you won't need the objective in the next month, archive it. +##### Refresh your idea of how the month go + +- Open your `bitácora.org` agenda view to see what has been completed in the last month `match = 'CLOSED>"<-30d>"-work-steps-done',` ordered by name `org_agenda_sorting_strategy = { "category-keep" },` and change the priority of the elements according to the impact. +- Open your `recurrent.org` agenda view to see what has been done the last month `match = 'LAST_REPEAT>"<-30d>"-work'` +- Check what has been left of your month objectives `+m` and refile the elements that don't make sense anymore. +- Check the reports of your weekly reviews of the month in the `reviews.org` document. ##### Personal integrity review analyze @@ -630,14 +632,12 @@ It's important that you prepare your environment for the planning. You need to b - Your _Reading list_. - Remove from your environment everything else that may distract you -#### Clarify your state +#### Check your close compromises -To be able to make a good decision on your month's path you need to sort out which is your current state. To do so: +Check all your action management tools (in my case `orgmode` and `ikhal`) to identify: -- Clean your todo: Review each todo element by deciding if they should still be in the todo. If they do and they belong to a month objective, add it. If they don't need to be in the todo, refile it. -- Clean your agenda and get an feeling of the busyness of the month: - - Open the orgmode month view agenda and clean it - - Read the rest of your calendars +- Arranged compromises +- trips #### Decide the month objectives @@ -750,6 +750,7 @@ The quarter review requires an analysis that doesn't fill in a day session. It r *** Objectives ** Axis 2 ... + ``` Where: @@ -782,10 +783,6 @@ It's important that you prepare your environment for the review. 
You need to be To record the results of the review create the file `references/reviews/YYYY_MM_SSSS.org`, where the month is the one that is starting and the `SSSS` is the season name with the following template: -```org - -``` - ##### Quarter review discover ###### Do an overall area review @@ -940,7 +937,7 @@ With the use of [mediatracker](mediatracker.md) and other life logging tools I t - [Videogames](videogames.md) - [Boardgames](board_games.md) -## Life review +## Life roadmap adjustment Life reviews are meant to give you an idea of: @@ -949,13 +946,49 @@ Life reviews are meant to give you an idea of: - With the context you have now, you can think of how you could have avoided the bad decisions. -If you have the year's planning you can analyze it against your task management -tools and life logs and create a review document analyzing all. +It's also the time to set your life goals for this year. ### Life review timeline As you can see the amount of stuff to review is not something that can be done in a day, my current plan is to prepare the review from the 15th of December till the 15th of January and then carry it out until the 23rd of February, to leave space to do the spring quarter and March month reviews. +### Create next stage's life notebook + +After reading "The Bulletproof Journal", I was drawn to the idea of changing notebooks each year, carrying over only the necessary things. + +I find this to be a powerful concept since you start each stage with a clean canvas. This brings you closer to desire versus duty as it removes the commitments you made to yourself, freeing up significant mental load. From this point, it's much easier to allow yourself to dream about what you want to do in this new stage. + +I want to apply this concept to my digital life notebook as I see the following advantages: + +- It lightens my files making them easier to manage and faster to process with orgmode +- It's a very easy way to clean up +- It's an elegant way to preserve what you've recorded without it becoming a hindrance +- In each stage, you can start with a different notebook structure, meaning new axes, tools, and structures. This helps avoid falling into the rigidity of a constrained system or artifacts defined by inertia rather than conscious decision +- It allows you to avoid maintaining files that follow an old scheme or having to migrate them to the new system +- Additionally, you get rid of all those actions you've been reluctant to delete in one fell swoop + +The notebook change can be done in two phases: + +- Notebook Construction +- Stage Closure + +#### Notebook Construction + +This phase spans from when you start making stage adjustments until you finally close the current stage. +You can follow these steps: + +- Create a directory with the name of the new stage. In my case, it's the number of my predominant age during the stage +- Create a directory for the current stage's notebook within "notebooks" in your references. Here we'll move everything that doesn't make sense to maintain. It's important that this directory isn't within your agenda files +- Quickly review the improvements you've noted that you want to implement in next year's notebook to keep them in mind. You can note the references in the "Create new notebook" action + +As you review the stage, decide if it makes sense for the file you're viewing to exist as-is in the new notebook. Remember that the idea is to migrate minimal structure and data. + +- If it makes sense: + - Create a symbolic link in the new notebook. 
When closing the stage, we'll replace the link with the file's final state +- If the file no longer makes sense, move it to `references/notebooks` + +#### Stage Closure + # References ## Books diff --git a/docs/smartctl.md b/docs/smartctl.md new file mode 100644 index 00000000000..4005d2a19a0 --- /dev/null +++ b/docs/smartctl.md @@ -0,0 +1,451 @@ +[Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T. or SMART)](https://en.wikipedia.org/wiki/Self-Monitoring,_Analysis_and_Reporting_Technology) is a monitoring system included in computer hard disk drives (HDDs) and solid-state drives (SSDs). Its primary function is to detect and report various indicators of drive reliability, or how long a drive can function while anticipating imminent hardware failures. + +When S.M.A.R.T. data indicates a possible imminent drive failure, software running on the host system may notify the user so action can be taken to prevent data loss, and the failing drive can be replaced and no data is lost. + +# General information + +## [Accuracy](https://en.wikipedia.org/wiki/Self-Monitoring,_Analysis_and_Reporting_Technology#Accuracy) + +A field study at Google covering over 100,000 consumer-grade drives from December 2005 to August 2006 found correlations between certain S.M.A.R.T. information and annualized failure rates: + +- In the 60 days following the first uncorrectable error on a drive (S.M.A.R.T. attribute 0xC6 or 198) detected as a result of an offline scan, the drive was, on average, 39 times more likely to fail than a similar drive for which no such error occurred. +- First errors in reallocations, offline reallocations (S.M.A.R.T. attributes 0xC4 and 0x05 or 196 and 5) and probational counts (S.M.A.R.T. attribute 0xC5 or 197) were also strongly correlated to higher probabilities of failure. +- Conversely, little correlation was found for increased temperature and no correlation for usage level. However, the research showed that a large proportion (56%) of the failed drives failed without recording any count in the "four strong S.M.A.R.T. warnings" identified as scan errors, reallocation count, offline reallocation, and probational count. +- Further, 36% of failed drives did so without recording any S.M.A.R.T. error at all, except the temperature, meaning that S.M.A.R.T. data alone was of limited usefulness in anticipating failures. + +# [Installation](https://blog.shadypixel.com/monitoring-hard-drive-health-on-linux-with-smartmontools/) + +On Debian systems: + +```bash +sudo apt-get install smartmontools +``` + +By default when you install it all your drives are checked periodically with the `smartd` daemon under the `smartmontools` systemd service. + +# Usage + +## Running the tests + +### [Test types](https://en.wikipedia.org/wiki/Self-Monitoring,_Analysis_and_Reporting_Technology#Self-tests) + +S.M.A.R.T. drives may offer a number of self-tests: + +- Short: Checks the electrical and mechanical performance as well as the read performance of the disk. Electrical tests might include a test of buffer RAM, a read/write circuitry test, or a test of the read/write head elements. Mechanical test includes seeking and servo on data tracks. Scans small parts of the drive's surface (area is vendor-specific and there is a time limit on the test). Checks the list of pending sectors that may have read errors, and it usually takes under two minutes. +- Long/extended: A longer and more thorough version of the short self-test, scanning the entire disk surface with no time limit. 
This test usually takes several hours, depending on the read/write speed of the drive and its size. It is possible for the long test to pass even if the short test fails. +- Conveyance: Intended as a quick test to identify damage incurred during transporting of the device from the drive manufacturer to the computer manufacturer. Only available on ATA drives, and it usually takes several minutes. + +Drives remain operable during self-test, unless a "captive" option (ATA only) is requested. + +#### Long test + +Start with a long self test with `smartctl`. Assuming the disk to test is +`/dev/sdd`: + +```bash +smartctl -t long /dev/sdd +``` + +The command will respond with an estimate of how long it thinks the test will +take to complete. + +To check progress use: + +```bash +smartctl -A /dev/sdd | grep remaining +# or +smartctl -c /dev/sdd | grep remaining +``` + +Don't check too often because it can abort the test with some drives. If you +receive an empty output, examine the reported status with: + +```bash +smartctl -l selftest /dev/sdd +``` + +If errors are shown, check the `dmesg` as there are usually useful traces of the error. + +## Understanding the tests + +The output of a `smartctl` command is difficult to read: + +``` +smartctl 5.40 2010-03-16 r3077 [x86_64-unknown-linux-gnu] (local build) +Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net + +=== START OF INFORMATION SECTION === +Model Family: SAMSUNG SpinPoint F2 EG series +Device Model: SAMSUNG HD502HI +Serial Number: S1VZJ9CS712490 +Firmware Version: 1AG01118 +User Capacity: 500,107,862,016 bytes +Device is: In smartctl database [for details use: -P show] +ATA Version is: 8 +ATA Standard is: ATA-8-ACS revision 3b +Local Time is: Wed Feb 9 15:30:42 2011 CET +SMART support is: Available - device has SMART capability. +SMART support is: Enabled + +=== START OF READ SMART DATA SECTION === +SMART overall-health self-assessment test result: PASSED + +General SMART Values: +Offline data collection status: (0x00) Offline data collection activity + was never started. + Auto Offline Data Collection: Disabled. +Self-test execution status: ( 0) The previous self-test routine completed + without error or no self-test has ever + been run. +Total time to complete Offline +data collection: (6312) seconds. +Offline data collection +capabilities: (0x7b) SMART execute Offline immediate. + Auto Offline data collection on/off support. + Suspend Offline collection upon new + command. + Offline surface scan supported. + Self-test supported. + Conveyance Self-test supported. + Selective Self-test supported. +SMART capabilities: (0x0003) Saves SMART data before entering + power-saving mode. + Supports SMART auto save timer. +Error logging capability: (0x01) Error logging supported. + General Purpose Logging supported. +Short self-test routine +recommended polling time: ( 2) minutes. +Extended self-test routine +recommended polling time: ( 106) minutes. +Conveyance self-test routine +recommended polling time: ( 12) minutes. +SCT capabilities: (0x003f) SCT Status supported. + SCT Error Recovery Control supported. + SCT Feature Control supported. + SCT Data Table supported. 
+ +SMART Attributes Data Structure revision number: 16 +Vendor Specific SMART Attributes with Thresholds: +ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE + 1 Raw_Read_Error_Rate 0x000f 099 099 051 Pre-fail Always - 2376 + 3 Spin_Up_Time 0x0007 091 091 011 Pre-fail Always - 3620 + 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 405 + 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0 + 7 Seek_Error_Rate 0x000f 253 253 051 Pre-fail Always - 0 + 8 Seek_Time_Performance 0x0025 100 100 015 Pre-fail Offline - 0 + 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 717 + 10 Spin_Retry_Count 0x0033 100 100 051 Pre-fail Always - 0 + 11 Calibration_Retry_Count 0x0012 100 100 000 Old_age Always - 0 + 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 405 + 13 Read_Soft_Error_Rate 0x000e 099 099 000 Old_age Always - 2375 +183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 0 +184 End-to-End_Error 0x0033 100 100 000 Pre-fail Always - 0 +187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 2375 +188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 +190 Airflow_Temperature_Cel 0x0022 084 074 000 Old_age Always - 16 (Lifetime Min/Max 16/16) +194 Temperature_Celsius 0x0022 084 071 000 Old_age Always - 16 (Lifetime Min/Max 16/16) +195 Hardware_ECC_Recovered 0x001a 100 100 000 Old_age Always - 3558 +196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0 +197 Current_Pending_Sector 0x0012 098 098 000 Old_age Always - 81 +198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0 +199 UDMA_CRC_Error_Count 0x003e 100 100 000 Old_age Always - 1 +200 Multi_Zone_Error_Rate 0x000a 100 100 000 Old_age Always - 0 +201 Soft_Read_Error_Rate 0x000a 253 253 000 Old_age Always - 0 + +SMART Error Log Version: 1 +No Errors Logged + +SMART Self-test log structure revision number 1 +No self-tests have been logged. [To run self-tests, use: smartctl -t] + + +SMART Selective self-test log data structure revision number 1 + SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS + 1 0 0 Not_testing + 2 0 0 Not_testing + 3 0 0 Not_testing + 4 0 0 Not_testing + 5 0 0 Not_testing +Selective self-test flags (0x0): + After scanning selected spans, do NOT read-scan remainder of disk. +If Selective self-test is pending on power-up, resume after 0 minute delay. +``` + +### Checking overall health + +Somewhere in your report you'll see something like: + +``` +=== START OF READ SMART DATA SECTION === +SMART overall-health self-assessment test result: PASSED +``` + +If it doesn’t return PASSED, you should immediately backup all your data. Your hard drive is probably failing. + +That message can also be shown with `smartctl -H /dev/sda` + +### [Checking the SMART attributes](https://en.wikipedia.org/wiki/Self-Monitoring,_Analysis_and_Reporting_Technology#Known_ATA_S.M.A.R.T._attributes) + +Each drive manufacturer defines a set of attributes, and sets threshold values beyond which attributes should not pass under normal operation. But they do not agree on precise attribute definitions and measurement units, the following list of attributes is a general guide only. + +If one or more attribute have the "prefailure" flag, and the "current value" of such prefailure attribute is smaller than or equal to its "threshold value" (unless the "threshold value" is 0), that will be reported as a "drive failure". In addition, a utility software can send SMART RETURN STATUS command to the ATA drive, it may report three status: "drive OK", "drive warning" or "drive failure". 
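+
+To pull up this attribute table for a drive (assuming the device is `/dev/sda`), you can run:
+
+```bash
+# Print the vendor-specific SMART attribute table
+sudo smartctl -A /dev/sda
+
+# Focus on the critical attributes discussed below (IDs 5, 10, 197 and 198)
+sudo smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Spin_Retry_Count|Current_Pending_Sector|Offline_Uncorrectable'
+```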
+ +#### [SMART attributes columns](https://ma.juii.net/blog/interpret-smart-attributes) + +Every of the SMART attributes has several columns as shown by “smartctl -a ”: + +- ID: The ID number of the attribute, good for comparing with other lists like [Wikipedia: S.M.A.R.T.: Known ATA S.M.A.R.T. attributes](https://en.wikipedia.org/wiki/Self-Monitoring,_Analysis_and_Reporting_Technology#Known_ATA_S.M.A.R.T._attributes) because the attribute names sometimes differ. +- Name: The name of the SMART attribute. +- Value: The current, normalized value of the attribute. Higher values are always better (except for temperature for hard disks of some manufacturers). The range is normally 0-100, for some attributes 0-255 (so that 100 resp. 255 is best, 0 is worst). There is no standard on how manufacturers convert their raw value to this normalized one: when the normalized value approaches threshold, it can do linearily, exponentially, logarithmically or any other way, meaning that a doubled normalized value does not necessarily mean “twice as good”. +- Worst: The worst (normalized) value that this attribute had at any point of time where SMART was enabled. There seems to be no mechanism to reset current SMART attribute values, but this still makes sense as some SMART attributes, for some manufacturers, fluctuate over time so that keeping the worst one ever is meaningful. +- Threshold: The threshold below which the normalized value will be considered “exceeding specifications”. If the attribute type is “Pre-fail”, this means that SMART thinks the hard disk is just before failure. This will “trigger” SMART: setting it from “SMART test passed” to “SMART impending failure” or similar status. +- Type: The type of the attribute. Either “Pre-fail” for attributes that are said to indicate impending failure, or “Old_age” for attributes that just indicate wear and tear. Note that one and the same attribute can be classified as “Pre-fail” by one manufacturer or for one model and as “Old_age” by another or for another model. This is the case for example for attribute Seek_Error_Rate (ID 7), which is a widespread phenomenon on many disks and not considered critical by some manufacturers, but Seagate has it as “Pre-fail”. +- Raw value: The current raw value that was converted to the normalized value above. smartctl shows all as decimal values, but some attribute values of some manufacturers cannot be reasonably interpreted that way + +#### [Reacting to SMART Values](https://ma.juii.net/blog/interpret-smart-attributes) + +It is said that a drive that starts getting bad sectors (attribute ID 5) or “pending” bad sectors (attribute ID 197; they most likely are bad, too) will usually be trash in 6 months or less. The only exception would be if this does not happen: that is, bad sector count increases, but then stays stable for a long time, like a year or more. For that reason, one normally needs a diagramming / journaling tool for SMART. Many admins will exchange the hard drive if it gets reallocated sectors (ID 5) or sectors “under investigation” (ID 197) + +#### [Critical SMART attributes](https://en.wikipedia.org/wiki/Self-Monitoring,_Analysis_and_Reporting_Technology#Known_ATA_S.M.A.R.T._attributes) + +Of all the attributes I'm going to analyse only the critical ones + +##### Read Error Rate + +ID: 01 (0x01) +Ideal: Low +Correlation with probability of failure: not clear + +(Vendor specific raw value.) Stores data related to the rate of hardware read errors that occurred when reading data from a disk surface. 
The raw value has different structure for different vendors and is often not meaningful as a decimal number. For some drives, this number may increase during normal operation without necessarily signifying errors. + +##### Reallocated Sectors Count + +ID: 05 (0x05) +Ideal: Low +Correlation with probability of failure: Strong + +Count of reallocated sectors. The raw value represents a count of the bad sectors that have been found and remapped. Thus, the higher the attribute value, the more sectors the drive has had to reallocate. This value is primarily used as a metric of the life expectancy of the drive; a drive which has had any reallocations at all is significantly more likely to fail in the immediate months. If Raw value of 0x05 attribute is higher than its Threshold value, that will reported as "drive warning". + +##### Spin Retry Count + +ID: 10 (0x0A) +Ideal: Low +Correlation with probability of failure: Strong + +Count of retry of spin start attempts. This attribute stores a total count of the spin start attempts to reach the fully operational speed (under the condition that the first attempt was unsuccessful). An increase of this attribute value is a sign of problems in the hard disk mechanical subsystem. + +##### Current Pending Sector Count + +ID: 197 (0xC5) +Ideal: Low +Correlation with probability of failure: Strong + +Count of "unstable" sectors (waiting to be remapped, because of unrecoverable read errors). If an unstable sector is subsequently read successfully, the sector is remapped and this value is decreased. Read errors on a sector will not remap the sector immediately (since the correct value cannot be read and so the value to remap is not known, and also it might become readable later); instead, the drive firmware remembers that the sector needs to be remapped, and will remap it the next time it has been successfully read.[76] + +However, some drives will not immediately remap such sectors when successfully read; instead the drive will first attempt to write to the problem sector, and if the write operation is successful the sector will then be marked as good (in this case, the "Reallocation Event Count" (0xC4) will not be increased). This is a serious shortcoming, for if such a drive contains marginal sectors that consistently fail only after some time has passed following a successful write operation, then the drive will never remap these problem sectors. If Raw value of 0xC5 attribute is higher than its Threshold value, that will reported as "drive warning" + +##### (Offline) Uncorrectable Sector Count + +ID: 198 (0xC6) +Ideal: Low +Correlation with probability of failure: Strong + +The total count of uncorrectable errors when reading/writing a sector. A rise in the value of this attribute indicates defects of the disk surface and/or problems in the mechanical subsystem. + +In the 60 days following the first uncorrectable error on a drive (S.M.A.R.T. attribute 0xC6 or 198) detected as a result of an offline scan, the drive was, on average, 39 times more likely to fail than a similar drive for which no such error occurred. + +#### [Non critical SMART attributes](https://en.wikipedia.org/wiki/Self-Monitoring,_Analysis_and_Reporting_Technology#Known_ATA_S.M.A.R.T._attributes) + +The next attributes appear to change in the logs but that doesn't mean that there is anything going wrong + +##### Hardware ECC Recovered + +ID: 195 (0xC3) +Ideal: Varies +Correlation with probability of failure: Low + +(Vendor-specific raw value.) 
The raw value has different structure for different vendors and is often not meaningful as a decimal number. For some drives, this number may increase during normal operation without necessarily signifying errors. + +# Monitorization + +To monitor your drive health you can use [prometheus](prometheus.md) with [alertmanager](alertmanager.md) for alerts and [grafana](grafana.md) for dashboards. + +## Installing the exporter + +The prometheus community has it's own [smartctl exporter](https://github.com/prometheus-community/smartctl_exporter) + +### Using the binary + +You can download the latest binary from the repository [releases](https://github.com/prometheus-community/smartctl_exporter/releases) and configure the [systemd service](https://github.com/prometheus-community/smartctl_exporter/blob/master/systemd/smartctl_exporter.service) + +```bash +unp smartctl_exporter-0.13.0.linux-amd64.tar.gz +sudo mv smartctl_exporter-0.13.0.linux-amd64/smartctl_exporter /usr/bin +``` + +Add the [service](https://github.com/prometheus-community/smartctl_exporter/blob/master/systemd/smartctl_exporter.service) to `/etc/systemd/system/smartctl-exporter.service` + +```ini +[Unit] +Description=smartctl exporter service +After=network-online.target + +[Service] +Type=simple +PIDFile=/run/smartctl_exporter.pid +ExecStart=/usr/bin/smartctl_exporter +User=root +Group=root +SyslogIdentifier=smartctl_exporter +Restart=on-failure +RemainAfterExit=no +RestartSec=100ms +StandardOutput=journal +StandardError=journal + +[Install] +WantedBy=multi-user.target +``` + +Then enable it: + +```bash +sudo systemctl enable smartctl-exporter +sudo service smartctl-exporter start +``` + +### [Using docker](https://github.com/prometheus-community/smartctl_exporter?tab=readme-ov-file#example-of-running-in-docker) + +```yaml +--- +services: + smartctl-exporter: + container_name: smartctl-exporter + image: prometheuscommunity/smartctl-exporter + privileged: true + user: root + ports: + - "9633:9633" +``` + +### Configuring prometheus + +Add the next scraping metrics: + +```yaml +- job_name: smartctl_exporter + metrics_path: /metrics + scrape_timeout: 60s + static_configs: + - targets: [smartctl-exporter:9633] + labels: + hostname: "your-hostname" +``` + +### Configuring the alerts + +Taking as a reference the [awesome prometheus rules](https://samber.github.io/awesome-prometheus-alerts/rules#s.m.a.r.t-device-monitoring) and [this wired post](https://www.wirewd.com/hacks/blog/monitoring_a_mixed_fleet_of_flash_hdd_and_nvme_devices_with_node_exporter_and_prometheus) I'm using the next rules: + +```yaml +--- +groups: + - name: smartctl exporter + rules: + - alert: SmartDeviceTemperatureWarning + expr: smartctl_device_temperature > 60 + for: 2m + labels: + severity: warning + annotations: + summary: Smart device temperature warning (instance {{ $labels.hostname }}) + description: "Device temperature warning (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" + + - alert: SmartDeviceTemperatureCritical + expr: smartctl_device_temperature > 80 + for: 2m + labels: + severity: critical + annotations: + summary: Smart device temperature critical (instance {{ $labels.hostname }}) + description: "Device temperature critical (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" + + - alert: SmartCriticalWarning + expr: smartctl_device_critical_warning > 0 + for: 15m + labels: + severity: critical + annotations: + summary: Smart critical warning (instance {{ $labels.hostname }}) + 
description: "device has critical warning (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" + + - alert: SmartNvmeWearoutIndicator + expr: smartctl_device_available_spare{device=~"nvme.*"} < smartctl_device_available_spare_threshold{device=~"nvme.*"} + for: 15m + labels: + severity: critical + annotations: + summary: Smart NVME Wearout Indicator (instance {{ $labels.hostname }}) + description: "NVMe device is wearing out (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" + + - alert: SmartNvmeMediaError + expr: smartctl_device_media_errors > 0 + for: 15m + labels: + severity: warning + annotations: + summary: Smart NVME Media errors (instance {{ $labels.hostname }}) + description: "Contains the number of occurrences where the controller detected an unrecovered data integrity error. Errors such as uncorrectable ECC, CRC checksum failure, or LBA tag mismatch are included in this field (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" + + - alert: SmartSmartStatusError + expr: smartctl_device_smart_status < 1 + for: 15m + labels: + severity: critical + annotations: + summary: Smart general status error (instance {{ $labels.hostname }}) + description: " (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" + + - alert: DiskReallocatedSectorsIncreased + expr: smartctl_device_attribute{attribute_id="5", attribute_value_type="raw"} > max_over_time(smartctl_device_attribute{attribute_id="5", attribute_value_type="raw"}[1h]) + labels: + severity: warning + annotations: + summary: "SMART Attribute Reallocated Sectors Count Increased" + description: "The SMART attribute 5 (Reallocated Sectors Count) has increased on {{ $labels.device }} (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" + + - alert: DiskSpinRetryCountIncreased + expr: smartctl_device_attribute{attribute_id="10", attribute_value_type="raw"} > max_over_time(smartctl_device_attribute{attribute_id="10", attribute_value_type="raw"}[1h]) + labels: + severity: warning + annotations: + summary: "SMART Attribute Spin Retry Count Increased" + description: "The SMART attribute 10 (Spin Retry Count) has increased on {{ $labels.device }} (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" + + - alert: DiskCurrentPendingSectorCountIncreased + expr: smartctl_device_attribute{attribute_id="197", attribute_value_type="raw"} > max_over_time(smartctl_device_attribute{attribute_id="197", attribute_value_type="raw"}[1h]) + labels: + severity: warning + annotations: + summary: "SMART Attribute Current Pending Sector Count Increased" + description: "The SMART attribute 197 (Current Pending Sector Count) has increased on {{ $labels.device }} (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" + + - alert: DiskUncorrectableSectorCountIncreased + expr: smartctl_device_attribute{attribute_id="198", attribute_value_type="raw"} > max_over_time(smartctl_device_attribute{attribute_id="198", attribute_value_type="raw"}[1h]) + labels: + severity: warning + annotations: + summary: "SMART Attribute Uncorrectable Sector Count Increased" + description: "The SMART attribute 198 (Uncorrectable Sector Count) has increased on {{ $labels.device }} (instance {{ $labels.hostname }})\n VALUE = {{ $value }}\n LABELS = {{ $labels }}" +``` + +### Configuring the grafana dashboards + +Of the different grafana dashboards 
([1](https://grafana.com/grafana/dashboards/22604-smartctl-exporter-dashboard/), [2](https://grafana.com/grafana/dashboards/20204-smart-hdd/), [3](https://grafana.com/grafana/dashboards/22381-smartctl-exporter/)) I went for the first one. + +Import it with the UI of grafana, make it work and then export the json to store it in your infra as code respository. + +# References + +- [Wikipedia](https://en.wikipedia.org/wiki/Self-Monitoring,_Analysis_and_Reporting_Technology) +- [Home](https://sourceforge.net/projects/smartmontools/) +- [Documentation](https://www.smartmontools.org/wiki/TocDoc) diff --git a/docs/year_reviews.md b/docs/year_reviews.md index d44d49edede..cb55dda2ecb 100644 --- a/docs/year_reviews.md +++ b/docs/year_reviews.md @@ -1,5 +1,14 @@ # 2025 +## Fascismo + +En el acto de toma de posesión del cargo de trump, elon musk hace el saludo nazi. + +![elon nazi](x.mp4) + +## Feminismo + +- [trump nada más llegar al poder restablece el género de nacimiento a las personas trans y amenaza con terminar los programas de diversidad, inclusión e igualdad](https://www.usnews.com/news/business/articles/2025-01-20/trump-orders-reflect-his-promises-to-roll-back-transgender-protections-and-end-dei-programs) ## Cambio climático ![](2025-california-fire.jpg) diff --git a/docs/zfs_storage_planning.md b/docs/zfs_storage_planning.md index 04d065fb55a..38452b33968 100644 --- a/docs/zfs_storage_planning.md +++ b/docs/zfs_storage_planning.md @@ -51,7 +51,7 @@ choosing two different models of disk from two different manufacturers. To reduce the chances of getting disks from the same manufacturing batch, you can buy them from different vendors. -# Choosing the disks +# Choosing the disks There are many things to take into account when choosing the different disks for your pool. @@ -173,6 +173,7 @@ External air cannot re-enter the drive to refill these gaps, as air atoms are to Hard drive manufacturers offer limited warranties (typically 5 years) for helium-sealed drives, acknowledging this gradual performance degradation. While under warranty, these issues are unlikely to manifest; however, all units will eventually succumb to helium escape and increased head-disk interference. You can read more of this issue in [1](https://www.truenas.com/community/threads/helium-drives-long-term-use.96649/), [2](https://linustechtips.com/topic/1359644-helium-hdd-health-update-after-5-years/), [3](https://foro.noticias3d.com/vbulletin/showthread.php?t=468562), [4](https://blog.westerndigital.com/helium-hard-drives-explained/) + ### [Data disk brands](https://www.nasmaster.com/best-nas-drives/) #### [Western Digital](https://www.nasmaster.com/wd-red-vs-red-plus-vs-red-pro-nas-hdd/) @@ -182,26 +183,26 @@ offering and you should consider these if you can find them at more affordable prices. WD splits its NAS drives into three sub-categories, normal, Plus, and Pro. -| Specs | WD Red | WD Red Plus | WD Red Pro | WD Ultrastar HC520 | -| ---------------------- | --------- | ------------------ | --------------- | --- | -| Technology | SMR | CMR | CMR | PMR | -| Bays | 1-8 | 1-8 | 1-24 | | -| Capacity | 2-6TB | 1-14TB | 2-18TB | 12TB | -| Speed | 5,400 RPM | 5,400 RPM (1-4TB) | 7200 RPM | 7200 RPM | -| Speed | 5,400 RPM | 5,640 RPM (6-8TB) | 7200 RPM | 7200 RPM | -| Speed | 5,400 RPM | 7,200 RPM (8-14TB) | 7200 RPM | 7200 RPM | -| Speed | ? 
| 210MB/s | 235MB/s | 255 MB/s | -| Cache | 256MB | 16MB (1TB) | | 256 MB | -| Cache | 256MB | 64MB (1TB) | 64MB (2TB) | 256 MB | -| Cache | 256MB | 128MB (2-8TB) | 256MB (4-12TB) | 256 MB | -| Cache | 256MB | 256MB (8-12TB) | 512MB (14-18TB) | 256 MB | -| Cache | 256MB | 512MB (14TB) | | 256 MB | -| Workload | 180TB/yr | 180TB/yr | 300TB/yr | | -| MTBF | 1 million | 1 million | 1 million | 2.5 M | -| Warranty | 3 years | 3 years | 5 years | 5 years | -| Power Consumption | ? | ? | 8.8 W | 5.0 W | -| Power Consumption Rest | ? | ? | 4.6 W | 6.9 W | -| Price | From $50 | From $45 | From $78 | | +| Specs | WD Red | WD Red Plus | WD Red Pro | WD Ultrastar HC520 | +| ---------------------- | --------- | ------------------ | --------------- | ------------------ | +| Technology | SMR | CMR | CMR | PMR | +| Bays | 1-8 | 1-8 | 1-24 | | +| Capacity | 2-6TB | 1-14TB | 2-18TB | 12TB | +| Speed | 5,400 RPM | 5,400 RPM (1-4TB) | 7200 RPM | 7200 RPM | +| Speed | 5,400 RPM | 5,640 RPM (6-8TB) | 7200 RPM | 7200 RPM | +| Speed | 5,400 RPM | 7,200 RPM (8-14TB) | 7200 RPM | 7200 RPM | +| Speed | ? | 210MB/s | 235MB/s | 255 MB/s | +| Cache | 256MB | 16MB (1TB) | | 256 MB | +| Cache | 256MB | 64MB (1TB) | 64MB (2TB) | 256 MB | +| Cache | 256MB | 128MB (2-8TB) | 256MB (4-12TB) | 256 MB | +| Cache | 256MB | 256MB (8-12TB) | 512MB (14-18TB) | 256 MB | +| Cache | 256MB | 512MB (14TB) | | 256 MB | +| Workload | 180TB/yr | 180TB/yr | 300TB/yr | | +| MTBF | 1 million | 1 million | 1 million | 2.5 M | +| Warranty | 3 years | 3 years | 5 years | 5 years | +| Power Consumption | ? | ? | 8.8 W | 5.0 W | +| Power Consumption Rest | ? | ? | 4.6 W | 6.9 W | +| Price | From $50 | From $45 | From $78 | | #### Seagate @@ -212,25 +213,27 @@ advanced than IronWolf Pro and are best suited for server environments. They sport incredible levels of performance and reliability, including a workload rate of 550TB per year. -| Specs | IronWolf | IronWolf Pro | Exos 7E8 8TB | Exos 7E10 8TB | Exos X18 16TB | Enterpri. Capacity | -| ---------------------------- | ------------------ | -------------------- | ------------ | ------------- | ------------- | --- | -| Technology | CMR | CMR | CMR | SMR | CMR | -| Bays | 1-8 | 1-24 | ? | ? | ? | ? | -| Capacity | 1-12TB | 2-20TB | 8TB | 8TB | 16 TB | 10 TB | -| RPM | 5,400 RPM (3-6TB) | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | -| RPM | 5,900 RPM (1-3TB) | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | -| RPM | 7,200 RPM (8-12TB) | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | -| Speed | 180MB/s (1-12TB) | 214-260MB/s (4-18TB) | 249 MB/s | 255 MB/s | 258 MB/s | 254 MB/s | -| Cache | 64MB (1-4TB) | 256 MB | 256 MB | 256 MB | 256 MB | 256 MB | -| Cache | 256MB (3-12TB) | 256 MB | 256 MB | 256 MB | 256 MB | 256 MB | -| Power Consumption | 10.1 W | 10.1 W | 12.81 W | 11.03 W | 9.31 W | 8 W | -| Power Consumption Rest | 7.8 W | 7.8 W | 7.64 W | 7.06 W | 5.08 W | 4.5 W | -| Workload | 180TB/yr | 300TB/yr | 550TB/yr | 550TB/yr | 550TB/yr | < 550TB/yr | -| MTBF | 1 million | 1 million | 2 millions | 2 millions | 2.5 millions | 2.5 millions | -| Noise idle | ? | ? | ? | ? | ? | 3.0 bels max | -| Noise performance seek | ? | ? | ? | ? | ? | 3.4 bels max | -| Warranty | 3 years | 5 years | 5 years | 5 years | 5 years | -| Price | From $60 (2022) | From $83 (2022) | 249$ (2022) | 249$ (2022) | 249$ (2023) | +| Specs | IronWolf | IronWolf Pro | Exos 7E8 8TB | Exos 7E10 8TB | Exos X18 16TB | Enterpri. 
Capacity | +| ---------------------- | ------------------ | -------------------- | ------------ | ------------- | ------------- | ------------------ | +| Technology | CMR | CMR | CMR | SMR | CMR | CMR\* | +| Bays | 1-8 | 1-24 | ? | ? | ? | ? | +| Capacity | 1-12TB | 2-20TB | 8TB | 8TB | 16 TB | 10 TB | +| RPM | 5,400 RPM (3-6TB) | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | +| RPM | 5,900 RPM (1-3TB) | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | +| RPM | 7,200 RPM (8-12TB) | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | 7200 RPM | +| Speed | 180MB/s (1-12TB) | 214-260MB/s (4-18TB) | 249 MB/s | 255 MB/s | 258 MB/s | 254 MB/s | +| Cache | 64MB (1-4TB) | 256 MB | 256 MB | 256 MB | 256 MB | 256 MB | +| Cache | 256MB (3-12TB) | 256 MB | 256 MB | 256 MB | 256 MB | 256 MB | +| Power Consumption | 10.1 W | 10.1 W | 12.81 W | 11.03 W | 9.31 W | 8 W | +| Power Consumption Rest | 7.8 W | 7.8 W | 7.64 W | 7.06 W | 5.08 W | 4.5 W | +| Workload | 180TB/yr | 300TB/yr | 550TB/yr | 550TB/yr | 550TB/yr | < 550TB/yr | +| MTBF | 1 million | 1 million | 2 millions | 2 millions | 2.5 millions | 2.5 millions | +| Noise idle | ? | ? | ? | ? | ? | 3.0 bels max | +| Noise performance seek | ? | ? | ? | ? | ? | 3.4 bels max | +| Warranty | 3 years | 5 years | 5 years | 5 years | 5 years | ? | +| Price | From $60 (2022) | From $83 (2022) | 249$ (2022) | 249$ (2022) | 249$ (2023) | ? | + +I was not able to find if the Enterprise capacity of 10TB ST10000NM0046 is CMR or SMR on their specifications, but in [this page](https://www.seagate.com/es/es/products/cmr-smr-list/) all exos are CMR and there is no SMR over 8TB. And according to [this page](https://nascompares.com/answer/list-of-wd-cmr-and-smr-hard-drives-hdd/) is CMR, so I'm betting it is. Exos 7E10 is SMR so it's ruled out. @@ -363,18 +366,18 @@ won't break so I don't feel like having a spare one. ## Pool configuration -* [Use `ashift=12` or `ashift=13`](https://wiki.debian.org/ZFS) when creating the pool if applicable (though ZFS can detect correctly for most cases). Value of `ashift` is exponent of 2, which should be aligned to the physical sector size of disks, for example `2^9=512`, `2^12=4096`, `2^13=8192`. Some disks are reporting a logical sector size of 512 bytes while having 4KiB physical sector size , and some SSDs have 8KiB physical sector size. +- [Use `ashift=12` or `ashift=13`](https://wiki.debian.org/ZFS) when creating the pool if applicable (though ZFS can detect correctly for most cases). Value of `ashift` is exponent of 2, which should be aligned to the physical sector size of disks, for example `2^9=512`, `2^12=4096`, `2^13=8192`. Some disks are reporting a logical sector size of 512 bytes while having 4KiB physical sector size , and some SSDs have 8KiB physical sector size. - Consider using `ashift=12` or `ashift=13` even if currently using only disks with 512 bytes sectors. Adding devices with bigger sectors to the same VDEV can severely impact performance due to wrong alignment, while a device with 512 sectors will work also with a higher `ashift`. + Consider using `ashift=12` or `ashift=13` even if currently using only disks with 512 bytes sectors. Adding devices with bigger sectors to the same VDEV can severely impact performance due to wrong alignment, while a device with 512 sectors will work also with a higher `ashift`. -* Set "autoexpand" to on, so you can expand the storage pool automatically after all disks in the pool have been replaced with larger ones. Default is off. 
+- Set "autoexpand" to on, so you can expand the storage pool automatically after all disks in the pool have been replaced with larger ones. Default is off. ## [ZIL or SLOG](https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/) Before we can begin, we need to get a few terms out of the way that seem to be confusing: -* ZFS Intent Log, or ZIL is a logging mechanism where all of the data to be the written is stored, then later flushed as a transactional write. Similar in function to a journal for journaled filesystems, like `ext3` or `ext4`. Typically stored on platter disk. Consists of a ZIL header, which points to a list of records, ZIL blocks and a ZIL trailer. The ZIL behaves differently for different writes. For writes smaller than 64KB (by default), the ZIL stores the write data. For writes larger, the write is not stored in the ZIL, and the ZIL maintains pointers to the synched data that is stored in the log record. -* Separate Intent Log, or SLOG, is a separate logging device that caches the synchronous parts of the ZIL before flushing them to slower disk. This would either be a battery-backed DRAM drive or a fast SSD. The SLOG only caches synchronous data, and does not cache asynchronous data. Asynchronous data will flush directly to spinning disk. Further, blocks are written a block-at-a-time, rather than as simultaneous transactions to the SLOG. If the SLOG exists, the ZIL will be moved to it rather than residing on platter disk. Everything in the SLOG will always be in system memory. +- ZFS Intent Log, or ZIL is a logging mechanism where all of the data to be the written is stored, then later flushed as a transactional write. Similar in function to a journal for journaled filesystems, like `ext3` or `ext4`. Typically stored on platter disk. Consists of a ZIL header, which points to a list of records, ZIL blocks and a ZIL trailer. The ZIL behaves differently for different writes. For writes smaller than 64KB (by default), the ZIL stores the write data. For writes larger, the write is not stored in the ZIL, and the ZIL maintains pointers to the synched data that is stored in the log record. +- Separate Intent Log, or SLOG, is a separate logging device that caches the synchronous parts of the ZIL before flushing them to slower disk. This would either be a battery-backed DRAM drive or a fast SSD. The SLOG only caches synchronous data, and does not cache asynchronous data. Asynchronous data will flush directly to spinning disk. Further, blocks are written a block-at-a-time, rather than as simultaneous transactions to the SLOG. If the SLOG exists, the ZIL will be moved to it rather than residing on platter disk. Everything in the SLOG will always be in system memory. When you read online about people referring to "adding an SSD ZIL to the pool", they are meaning adding an SSD SLOG, of where the ZIL will reside. The ZIL is a subset of the SLOG in this case. The SLOG is the device, the ZIL is data on the device. Further, not all applications take advantage of the ZIL. Applications such as databases (MySQL, PostgreSQL, Oracle), NFS and iSCSI targets do use the ZIL. Typical copying of data around the filesystem will not use it. Lastly, the ZIL is generally never read, except at boot to see if there is a missing transaction. The ZIL is basically "write-only", and is very write-intensive. 
@@ -386,7 +389,7 @@ If you use a SLOG you will see improved disk latencies, disk utilization and sys WARNING: Some motherboards will not present disks in a consistent manner to the Linux kernel across reboots. As such, a disk identified as `/dev/sda` on one boot might be `/dev/sdb` on the next. For the main pool where your data is stored, this is not a problem as ZFS can reconstruct the VDEVs based on the metadata geometry. For your L2ARC and SLOG devices, however, no such metadata exists. So, rather than adding them to the pool by their `/dev/sd?` names, you should use the `/dev/disk/by-id/*` names, as these are symbolic pointers to the ever-changing `/dev/sd?` files. If you don't heed this warning, your SLOG device may not be added to your hybrid pool at all, and you will need to re-add it later. This could drastically affect the performance of the applications depending on the existence of a fast SLOG. -Adding a SLOG to your existing zpool is not difficult. However, it is considered best practice to mirror the SLOG. Suppose that there are 4 platter disks in the pool, and two NVME. +Adding a SLOG to your existing zpool is not difficult. However, it is considered best practice to mirror the SLOG. Suppose that there are 4 platter disks in the pool, and two NVME. First you need to create a partition of 5 GB on each the nvme drive: @@ -401,10 +404,11 @@ Then mirror the partitions as SLOG ```bash zpool add tank log mirror \ /dev/disk/by-id/nvme0n1-part1 \ -/dev/disk/by-id/nvme1n1-part1 +/dev/disk/by-id/nvme1n1-part1 ``` Check that it worked with + ```bash # zpool status pool: tank @@ -429,7 +433,7 @@ You will likely not need a large ZIL, take into account that zfs dumps it's cont ## [Adjustable Replacement Cache](https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/) -The ZFS adjustable replacement cache (ARC) is one such caching mechanism that caches both recent block requests as well as frequent block requests. +The ZFS adjustable replacement cache (ARC) is one such caching mechanism that caches both recent block requests as well as frequent block requests. It will occupy 1/2 of available RAM. However, this isn't static. If you have 32 GB of RAM in your server, this doesn't mean the cache will always be 16 GB. Rather, the total cache will adjust its size based on kernel decisions. If the kernel needs more RAM for a scheduled process, the ZFS ARC will be adjusted to make room for whatever the kernel needs. However, if there is space that the ZFS ARC can occupy, it will take it up. The ARC can be extended using the level 2 ARC or L2ARC. This means that as the MRU (the most recently requested blocks from the filesystem) or MFU (the most frequently requested blocks from the filesystem) grow, they don't both simultaneously share the ARC in RAM and the L2ARC on your SSD. Instead, when a page is about to be evicted, a walking algorithm will evict the MRU and MFU pages into an 8 MB buffer, which is later set as an atomic write transaction to the L2ARC. The advantage is that the latency of evicting pages from the cache is not impacted. Further, if a large read of data blocks is sent to the cache, the blocks are evicted before the L2ARC walk, rather than sent to the L2ARC. This minimizes polluting the L2ARC with massive sequential reads. Filling the L2ARC can also be very slow, or very fast, depending on the access to your data. @@ -451,10 +455,11 @@ It is recommended that you stripe the L2ARC to maximize both size and speed. 
```bash zpool add tank cache \ /dev/disk/by-id/nvme0n1-part2 \ -/dev/disk/by-id/nvme1n1-part2 +/dev/disk/by-id/nvme1n1-part2 ``` Check that it worked with + ```bash # zpool status pool: tank @@ -480,65 +485,201 @@ To check hte size of the L2ARC use `zpool iostat -v`. As with all recommendations, some of these guidelines carry a great amount of weight, while others might not. You may not even be able to follow them as rigidly as you would like. Regardless, you should be aware of them. The idea of "best practices" is to optimize space efficiency, performance and ensure maximum data integrity. -* Keep pool capacity under 80% for best performance. Due to the copy-on-write nature of ZFS, the filesystem gets heavily fragmented. -* Only run ZFS on 64-bit kernels. It has 64-bit specific code that 32-bit kernels cannot do anything with. -* Install ZFS only on a system with lots of RAM. 1 GB is a bare minimum, 2 GB is better, 4 GB would be preferred to start. Remember, ZFS will use 1/2 of the available RAM for the ARC. -* Use ECC RAM when possible for scrubbing data in registers and maintaining data consistency. The ARC is an actual read-only data cache of valuable data in RAM. -* Use whole disks rather than partitions. ZFS can make better use of the on-disk cache as a result. If you must use partitions, backup the partition table, and take care when reinstalling data into the other partitions, so you don't corrupt the data in your pool. -* Keep each VDEV in a storage pool the same size. If VDEVs vary in size, ZFS will favor the larger VDEV, which could lead to performance bottlenecks. -* Use redundancy when possible, as ZFS can and will want to correct data errors that exist in the pool. You cannot fix these errors if you do not have a redundant good copy elsewhere in the pool. Mirrors and RAID-Z levels accomplish this. -* Do not use raidz1 for disks 1TB or greater in size. -* For raidz1, do not use less than 3 disks, nor more than 7 disks in each vdev -* For raidz2, do not use less than 6 disks, nor more than 10 disks in each vdev (8 is a typical average). -* For raidz3, do not use less than 7 disks, nor more than 15 disks in each vdev (13 & 15 are typical average). -* Consider using RAIDZ-2 or RAIDZ-3 over RAIDZ-1. You've heard the phrase "when it rains, it pours". This is true for disk failures. If a disk fails in a RAIDZ-1, and the hot spare is getting resilvered, until the data is fully copied, you cannot afford another disk failure during the resilver, or you will suffer data loss. With RAIDZ-2, you can suffer two disk failures, instead of one, increasing the probability you have fully resilvered the necessary data before the second, and even third disk fails. -* Perform regular (at least weekly) backups of the full storage pool. It's not a backup, unless you have multiple copies. Just because you have redundant disk, does not ensure live running data in the event of a power failure, hardware failure or disconnected cables. -* Use hot spares to quickly recover from a damaged device. Set the "autoreplace" property to on for the pool. -* Consider using a hybrid storage pool with fast SSDs or NVRAM drives. Using a fast SLOG and L2ARC can greatly improve performance. -* If using a hybrid storage pool with multiple devices, mirror the SLOG and stripe the L2ARC. -* If using a hybrid storage pool, and partitioning the fast SSD or NVRAM drive, unless you know you will need it, 1 GB is likely sufficient for your SLOG. Use the rest of the SSD or NVRAM drive for the L2ARC. 
The more storage for the L2ARC, the better. -* If possible, scrub consumer-grade SATA and SCSI disks weekly and enterprise-grade SAS and FC disks monthly. Depending on a lot factors, this might not be possible, so your mileage may vary. But, you should scrub as frequently as possible, basically. -* Email reports of the storage pool health weekly for redundant arrays, and bi-weekly for non-redundant arrays. -* When using advanced format disks that read and write data in 4 KB sectors, set the "ashift" value to 12 on pool creation for maximum performance. Default is 9 for 512-byte sectors. -* Set "autoexpand" to on, so you can expand the storage pool automatically after all disks in the pool have been replaced with larger ones. Default is off. -* Always export your storage pool when moving the disks from one physical system to another. -* When considering performance, know that for sequential writes, mirrors will always outperform RAID-Z levels. For sequential reads, RAID-Z levels will perform more slowly than mirrors on smaller data blocks and faster on larger data blocks. For random reads and writes, mirrors and RAID-Z seem to perform in similar manners. Striped mirrors will outperform mirrors and RAID-Z in both sequential, and random reads and writes. -* Compression is disabled by default. This doesn't make much sense with today's hardware. ZFS compression is extremely cheap, extremely fast, and barely adds any latency to the reads and writes. In fact, in some scenarios, your disks will respond faster with compression enabled than disabled. A further benefit is the massive space benefits. -* Unless you have the RAM, avoid using deduplication. Unlike compression, deduplication is very costly on the system. The deduplication table consumes massive amounts of RAM. -* Avoid running a ZFS root filesystem on GNU/Linux for the time being. It's a bit too experimental for /boot and GRUB. However, do create datasets for /home/, /var/log/ and /var/cache/. -* Snapshot frequently and regularly. Snapshots are cheap, and can keep a plethora of file versions over time. -* Snapshots are not a backup. Use "zfs send" and "zfs receive" to send your ZFS snapshots to an external storage. -* If using NFS, use ZFS NFS rather than your native exports. This can ensure that the dataset is mounted and online before NFS clients begin sending data to the mountpoint. -* Don't mix NFS kernel exports and ZFS NFS exports. This is difficult to administer and maintain. -* For /home/ ZFS installations, setting up nested datasets for each user. For example, pool/home/atoponce and pool/home/dobbs. Consider using quotas on the datasets. -* When using "zfs send" and "zfs receive", send incremental streams with the "zfs send -i" switch. This can be an exceptional time saver. -* Consider using "zfs send" over "rsync", as the "zfs send" command can preserve dataset properties. +- Keep pool capacity under 80% for best performance. Due to the copy-on-write nature of ZFS, the filesystem gets heavily fragmented. +- Only run ZFS on 64-bit kernels. It has 64-bit specific code that 32-bit kernels cannot do anything with. +- Install ZFS only on a system with lots of RAM. 1 GB is a bare minimum, 2 GB is better, 4 GB would be preferred to start. Remember, ZFS will use 1/2 of the available RAM for the ARC. +- Use ECC RAM when possible for scrubbing data in registers and maintaining data consistency. The ARC is an actual read-only data cache of valuable data in RAM. +- Use whole disks rather than partitions. 
ZFS can make better use of the on-disk cache as a result. If you must use partitions, backup the partition table, and take care when reinstalling data into the other partitions, so you don't corrupt the data in your pool. +- Keep each VDEV in a storage pool the same size. If VDEVs vary in size, ZFS will favor the larger VDEV, which could lead to performance bottlenecks. +- Use redundancy when possible, as ZFS can and will want to correct data errors that exist in the pool. You cannot fix these errors if you do not have a redundant good copy elsewhere in the pool. Mirrors and RAID-Z levels accomplish this. +- Do not use raidz1 for disks 1TB or greater in size. +- For raidz1, do not use less than 3 disks, nor more than 7 disks in each vdev +- For raidz2, do not use less than 6 disks, nor more than 10 disks in each vdev (8 is a typical average). +- For raidz3, do not use less than 7 disks, nor more than 15 disks in each vdev (13 & 15 are typical average). +- Consider using RAIDZ-2 or RAIDZ-3 over RAIDZ-1. You've heard the phrase "when it rains, it pours". This is true for disk failures. If a disk fails in a RAIDZ-1, and the hot spare is getting resilvered, until the data is fully copied, you cannot afford another disk failure during the resilver, or you will suffer data loss. With RAIDZ-2, you can suffer two disk failures, instead of one, increasing the probability you have fully resilvered the necessary data before the second, and even third disk fails. +- Perform regular (at least weekly) backups of the full storage pool. It's not a backup, unless you have multiple copies. Just because you have redundant disk, does not ensure live running data in the event of a power failure, hardware failure or disconnected cables. +- Use hot spares to quickly recover from a damaged device. Set the "autoreplace" property to on for the pool. +- Consider using a hybrid storage pool with fast SSDs or NVRAM drives. Using a fast SLOG and L2ARC can greatly improve performance. +- If using a hybrid storage pool with multiple devices, mirror the SLOG and stripe the L2ARC. +- If using a hybrid storage pool, and partitioning the fast SSD or NVRAM drive, unless you know you will need it, 1 GB is likely sufficient for your SLOG. Use the rest of the SSD or NVRAM drive for the L2ARC. The more storage for the L2ARC, the better. +- If possible, scrub consumer-grade SATA and SCSI disks weekly and enterprise-grade SAS and FC disks monthly. Depending on a lot factors, this might not be possible, so your mileage may vary. But, you should scrub as frequently as possible, basically. +- Email reports of the storage pool health weekly for redundant arrays, and bi-weekly for non-redundant arrays. +- When using advanced format disks that read and write data in 4 KB sectors, set the "ashift" value to 12 on pool creation for maximum performance. Default is 9 for 512-byte sectors. +- Set "autoexpand" to on, so you can expand the storage pool automatically after all disks in the pool have been replaced with larger ones. Default is off. +- Always export your storage pool when moving the disks from one physical system to another. +- When considering performance, know that for sequential writes, mirrors will always outperform RAID-Z levels. For sequential reads, RAID-Z levels will perform more slowly than mirrors on smaller data blocks and faster on larger data blocks. For random reads and writes, mirrors and RAID-Z seem to perform in similar manners. 
Striped mirrors will outperform mirrors and RAID-Z in both sequential, and random reads and writes. +- Compression is disabled by default. This doesn't make much sense with today's hardware. ZFS compression is extremely cheap, extremely fast, and barely adds any latency to the reads and writes. In fact, in some scenarios, your disks will respond faster with compression enabled than disabled. A further benefit is the massive space benefits. +- Unless you have the RAM, avoid using deduplication. Unlike compression, deduplication is very costly on the system. The deduplication table consumes massive amounts of RAM. +- Avoid running a ZFS root filesystem on GNU/Linux for the time being. It's a bit too experimental for /boot and GRUB. However, do create datasets for /home/, /var/log/ and /var/cache/. +- Snapshot frequently and regularly. Snapshots are cheap, and can keep a plethora of file versions over time. +- Snapshots are not a backup. Use "zfs send" and "zfs receive" to send your ZFS snapshots to an external storage. +- If using NFS, use ZFS NFS rather than your native exports. This can ensure that the dataset is mounted and online before NFS clients begin sending data to the mountpoint. +- Don't mix NFS kernel exports and ZFS NFS exports. This is difficult to administer and maintain. +- For /home/ ZFS installations, setting up nested datasets for each user. For example, pool/home/atoponce and pool/home/dobbs. Consider using quotas on the datasets. +- When using "zfs send" and "zfs receive", send incremental streams with the "zfs send -i" switch. This can be an exceptional time saver. +- Consider using "zfs send" over "rsync", as the "zfs send" command can preserve dataset properties. There are some caveats though. The point of the caveat list is by no means to discourage you from using ZFS. Instead, as a storage administrator planning out your ZFS storage server, these are things that you should be aware of. If you don't head these warnings, you could end up with corrupted data. The line may be blurred with the "best practices" list above. -* Your VDEVs determine the IOPS of the storage, and the slowest disk in that VDEV will determine the IOPS for the entire VDEV. -* ZFS uses 1/64 of the available raw storage for metadata. So, if you purchased a 1 TB drive, the actual raw size is 976 GiB. After ZFS uses it, you will have 961 GiB of available space. The "zfs list" command will show an accurate representation of your available storage. Plan your storage keeping this in mind. -* ZFS wants to control the whole block stack. It checksums, resilvers live data instead of full disks, self-heals corrupted blocks, and a number of other unique features. If using a RAID card, make sure to configure it as a true JBOD (or "passthrough mode"), so ZFS can control the disks. If you can't do this with your RAID card, don't use it. Best to use a real HBA. -* Do not use other volume management software beneath ZFS. ZFS will perform better, and ensure greater data integrity, if it has control of the whole block device stack. As such, avoid using dm-crypt, mdadm or LVM beneath ZFS. -* Do not share a SLOG or L2ARC DEVICE across pools. Each pool should have its own physical DEVICE, not logical drive, as is the case with some PCI-Express SSD cards. Use the full card for one pool, and a different physical card for another pool. If you share a physical device, you will create race conditions, and could end up with corrupted data. -* Do not share a single storage pool across different servers. 
ZFS is not a clustered filesystem. Use GlusterFS, Ceph, Lustre or some other clustered filesystem on top of the pool if you wish to have a shared storage backend. -* Other than a spare, SLOG and L2ARC in your hybrid pool, do not mix VDEVs in a single pool. If one VDEV is a mirror, all VDEVs should be mirrors. If one VDEV is a RAIDZ-1, all VDEVs should be RAIDZ-1. Unless of course, you know what you are doing, and are willing to accept the consequences. ZFS attempts to balance the data across VDEVs. Having a VDEV of a different redundancy can lead to performance issues and space efficiency concerns, and make it very difficult to recover in the event of a failure. -* Do not mix disk sizes or speeds in a single VDEV. Do mix fabrication dates, however, to prevent mass drive failure. -* In fact, do not mix disk sizes or speeds in your storage pool at all. -* Do not mix disk counts across VDEVs. If one VDEV uses 4 drives, all VDEVs should use 4 drives. -* Do not put all the drives from a single controller in one VDEV. Plan your storage, such that if a controller fails, it affects only the number of disks necessary to keep the data online. -* When using advanced format disks, you must set the ashift value to 12 at pool creation. It cannot be changed after the fact. Use "zpool create -o ashift=12 tank mirror sda sdb" as an example. -* Hot spare disks will not be added to the VDEV to replace a failed drive by default. You MUST enable this feature. Set the autoreplace feature to on. Use "zpool set autoreplace=on tank" as an example. - * The storage pool will not auto resize itself when all smaller drives in the pool have been replaced by larger ones. You MUST enable this feature, and you MUST enable it before replacing the first disk. Use "zpool set autoexpand=on tank" as an example. -* ZFS does not restripe data in a VDEV nor across multiple VDEVs. Typically, when adding a new device to a RAID array, the RAID controller will rebuild the data, by creating a new stripe width. This will free up some space on the drives in the pool, as it copies data to the new disk. ZFS has no such mechanism. Eventually, over time, the disks will balance out due to the writes, but even a scrub will not rebuild the stripe width. -* You cannot shrink a zpool, only grow it. This means you cannot remove VDEVs from a storage pool. -* You can only remove drives from mirrored VDEV using the "zpool detach" command. You can replace drives with another drive in RAIDZ and mirror VDEVs however. -* Do not create a storage pool of files or ZVOLs from an existing zpool. Race conditions will be present, and you will end up with corrupted data. Always keep multiple pools separate. -* The Linux kernel may not assign a drive the same drive letter at every boot. Thus, you should use the /dev/disk/by-id/ convention for your SLOG and L2ARC. If you don't, your zpool devices could end up as a SLOG device, which would in turn clobber your ZFS data. -* Don't create massive storage pools "just because you can". Even though ZFS can create 78-bit storage pool sizes, that doesn't mean you need to create one. -* Don't put production directly into the zpool. Use ZFS datasets instead. -* Don't commit production data to file VDEVs. Only use file VDEVs for testing scripts or learning the ins and outs of ZFS. -* A "zfs destroy" can cause downtime for other datasets. A "zfs destroy" will touch every file in the dataset that resides in the storage pool. 
The larger the dataset, the longer this will take, and it will use all the possible IOPS out of your drives to make it happen. Thus, if it take 2 hours to destroy the dataset, that's 2 hours of potential downtime for the other datasets in the pool. -* Debian and Ubuntu will not start the NFS daemon without a valid export in the /etc/exports file. You must either modify the /etc/init.d/nfs init script to start without an export, or create a local dummy export. -* When creating ZVOLs, make sure to set the block size as the same, or a multiple, of the block size that you will be formatting the ZVOL with. If the block sizes do not align, performance issues could arise. -* When loading the "zfs" kernel module, make sure to set a maximum number for the ARC. Doing a lot of "zfs send" or snapshot operations will cache the data. If not set, RAM will slowly fill until the kernel invokes OOM killer, and the system becomes responsive. For example set in the `/etc/modprobe.d/zfs.conf` file "options zfs zfs_arc_max=2147483648", which is a 2 GB limit for the ARC. +- Your VDEVs determine the IOPS of the storage, and the slowest disk in that VDEV will determine the IOPS for the entire VDEV. +- ZFS uses 1/64 of the available raw storage for metadata. So, if you purchased a 1 TB drive, the actual raw size is 976 GiB. After ZFS uses it, you will have 961 GiB of available space. The "zfs list" command will show an accurate representation of your available storage. Plan your storage keeping this in mind. +- ZFS wants to control the whole block stack. It checksums, resilvers live data instead of full disks, self-heals corrupted blocks, and a number of other unique features. If using a RAID card, make sure to configure it as a true JBOD (or "passthrough mode"), so ZFS can control the disks. If you can't do this with your RAID card, don't use it. Best to use a real HBA. +- Do not use other volume management software beneath ZFS. ZFS will perform better, and ensure greater data integrity, if it has control of the whole block device stack. As such, avoid using dm-crypt, mdadm or LVM beneath ZFS. +- Do not share a SLOG or L2ARC DEVICE across pools. Each pool should have its own physical DEVICE, not logical drive, as is the case with some PCI-Express SSD cards. Use the full card for one pool, and a different physical card for another pool. If you share a physical device, you will create race conditions, and could end up with corrupted data. +- Do not share a single storage pool across different servers. ZFS is not a clustered filesystem. Use GlusterFS, Ceph, Lustre or some other clustered filesystem on top of the pool if you wish to have a shared storage backend. +- Other than a spare, SLOG and L2ARC in your hybrid pool, do not mix VDEVs in a single pool. If one VDEV is a mirror, all VDEVs should be mirrors. If one VDEV is a RAIDZ-1, all VDEVs should be RAIDZ-1. Unless of course, you know what you are doing, and are willing to accept the consequences. ZFS attempts to balance the data across VDEVs. Having a VDEV of a different redundancy can lead to performance issues and space efficiency concerns, and make it very difficult to recover in the event of a failure. +- Do not mix disk sizes or speeds in a single VDEV. Do mix fabrication dates, however, to prevent mass drive failure. +- In fact, do not mix disk sizes or speeds in your storage pool at all. +- Do not mix disk counts across VDEVs. If one VDEV uses 4 drives, all VDEVs should use 4 drives. +- Do not put all the drives from a single controller in one VDEV. 
Plan your storage, such that if a controller fails, it affects only the number of disks necessary to keep the data online.
+- When using advanced format disks, you must set the ashift value to 12 at pool creation. It cannot be changed after the fact. Use "zpool create -o ashift=12 tank mirror sda sdb" as an example.
+- Hot spare disks will not be added to the VDEV to replace a failed drive by default. You MUST enable this feature. Set the autoreplace feature to on. Use "zpool set autoreplace=on tank" as an example.
+- The storage pool will not auto resize itself when all smaller drives in the pool have been replaced by larger ones. You MUST enable this feature, and you MUST enable it before replacing the first disk. Use "zpool set autoexpand=on tank" as an example.
+- ZFS does not restripe data in a VDEV nor across multiple VDEVs. Typically, when adding a new device to a RAID array, the RAID controller will rebuild the data, by creating a new stripe width. This will free up some space on the drives in the pool, as it copies data to the new disk. ZFS has no such mechanism. Eventually, over time, the disks will balance out due to the writes, but even a scrub will not rebuild the stripe width.
+- You cannot shrink a zpool, only grow it. This means you cannot remove VDEVs from a storage pool.
+- You can only remove drives from a mirrored VDEV using the "zpool detach" command. You can replace drives with another drive in RAIDZ and mirror VDEVs however.
+- Do not create a storage pool of files or ZVOLs from an existing zpool. Race conditions will be present, and you will end up with corrupted data. Always keep multiple pools separate.
+- The Linux kernel may not assign a drive the same drive letter at every boot. Thus, you should use the /dev/disk/by-id/ convention for your SLOG and L2ARC. If you don't, your zpool devices could end up as a SLOG device, which would in turn clobber your ZFS data.
+- Don't create massive storage pools "just because you can". Even though ZFS can create 78-bit storage pool sizes, that doesn't mean you need to create one.
+- Don't put production directly into the zpool. Use ZFS datasets instead.
+- Don't commit production data to file VDEVs. Only use file VDEVs for testing scripts or learning the ins and outs of ZFS.
+- A "zfs destroy" can cause downtime for other datasets. A "zfs destroy" will touch every file in the dataset that resides in the storage pool. The larger the dataset, the longer this will take, and it will use all the possible IOPS out of your drives to make it happen. Thus, if it takes 2 hours to destroy the dataset, that's 2 hours of potential downtime for the other datasets in the pool.
+- Debian and Ubuntu will not start the NFS daemon without a valid export in the /etc/exports file. You must either modify the /etc/init.d/nfs init script to start without an export, or create a local dummy export.
+- When creating ZVOLs, make sure to set the block size as the same, or a multiple, of the block size that you will be formatting the ZVOL with. If the block sizes do not align, performance issues could arise.
+- When loading the "zfs" kernel module, make sure to set a maximum size for the ARC. Doing a lot of "zfs send" or snapshot operations will cache the data. If not set, RAM will slowly fill until the kernel invokes the OOM killer, and the system becomes unresponsive. For example, set "options zfs zfs_arc_max=2147483648" in the `/etc/modprobe.d/zfs.conf` file, which is a 2 GB limit for the ARC.
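+
+To tie several of the recommendations and caveats above together, here is a minimal sketch of creating a pool that follows them. The pool name, disk ids and ARC limit are illustrative; adapt them to your hardware:
+
+```bash
+# raidz2 pool of six disks, referenced by /dev/disk/by-id, with ashift=12,
+# autoexpand and autoreplace enabled from the start
+zpool create -o ashift=12 -o autoexpand=on -o autoreplace=on tank \
+  raidz2 \
+    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
+    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4 \
+    /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
+
+# Hot spare, mirrored SLOG and striped L2ARC on the fast devices
+zpool add tank spare /dev/disk/by-id/ata-DISK7
+zpool add tank log mirror /dev/disk/by-id/nvme-NVME1-part1 /dev/disk/by-id/nvme-NVME2-part1
+zpool add tank cache /dev/disk/by-id/nvme-NVME1-part2 /dev/disk/by-id/nvme-NVME2-part2
+
+# Cap the ARC (2 GB here) so zfs send and snapshot bursts can't exhaust RAM
+echo "options zfs zfs_arc_max=2147483648" | sudo tee /etc/modprobe.d/zfs.conf
+```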
+ +## Thoughts on adding new disks to ZFS + +When it comes to expanding an existing ZFS storage system, careful consideration is crucial. In my case, I faced a decision point with my storage cluster: after two years of reliable service from my 8TB drives, I needed more capacity. This led me to investigate the best way to integrate newly acquired refurbished 12TB drives into the system. Here's my journey through this decision-making process and the insights gained along the way. + +### The Starting Point + +My existing setup consisted of 8TB drives purchased new, which had been running smoothly for two years. The need for expansion led me to consider refurbished 12TB drives as a cost-effective solution. However, mixing new and refurbished drives, especially of different capacities, raised several important considerations that needed careful analysis. + +### Initial Drive Assessment + +The first step was to evaluate the reliability of all drives. Using `smartctl`, I analyzed the SMART data across both the existing and new drives: + +```bash +for disk in a b c d e f g h i; do + echo "/dev/sd$disk: old $(smartctl -a /dev/sd$disk | grep Old | wc -l) pre-fail: $(smartctl -a /dev/sd$disk | grep Pre- | wc -l)" +done +``` + +The results showed similar values across all drives, with "Old_Age" attributes ranging from 14-17 and "Pre-fail" attributes between 3-6. While this indicated all drives were aging, they were still functioning with acceptable parameters. However, raw SMART data doesn't tell the whole story, especially when comparing new versus refurbished drives. + +### Drive Reliability Considerations + +After careful evaluation, I found myself trusting the existing 8TB drives more than the newer refurbished 12TB ones. This conclusion was based on several factors: + +- The 8TB drives had a proven track record in my specific environment +- Their smaller size meant faster resilver times, reducing the window of vulnerability during recovery +- One of the refurbished 12TB drives was already showing concerning symptoms (8 reallocated sectors, although a badblocks didn't increase that number), which reduced confidence in the entire batch +- The existing drives were purchased new, while the 12TB drives were refurbished, adding an extra layer of uncertainty + +### Layout Options Analysis + +When expanding a ZFS system, there's always the temptation to simply add more vdevs to the existing pool. However, I investigated two main approaches: + +1. Creating a new separate ZFS pool with the new disks +2. Add another vdev to the existent pool + +#### Resilver time + +Adding the 12TB drives to the pool and redistributing the data across all 8 drives will help reduce the resilver time. Here's a detailed breakdown: + +1. **Current Situation** + +- 4x 8TB drives at 95% capacity means each drive is heavily packed +- High data density means longer resilver times +- Limited free space for data movement and reconstruction + +2. **After Adding 12TB Drives** + +- Total pool capacity increases significantly +- ZFS will automatically start rebalancing data across all 8 drives +- This process (sometimes called "data shuffling" or "data redistribution") has several benefits: + - Reduces data density per drive + - Creates more free space + - Improves overall pool performance + - Potentially reduces future resilver times + +3. 
**Resilver Time Reduction Mechanism**
+
+- With data spread across more drives, each individual drive has less data to resilver
+- Less data per drive = faster resilver process
+- The redistribution happens gradually and in the background
+
+#### Understanding Failure Scenarios
+
+The key differentiator between these approaches came down to failure scenarios:
+
+##### Single Drive Failure
+
+Both configurations handle single drive failures similarly, though the 12TB drives' longer resilver time creates a longer window of vulnerability in the two-vdev configuration if the data load is evenly shared between the disks. This is particularly concerning with refurbished drives, where the failure probability might be higher.
+
+However, if you redistribute the data inside ZFS as soon as you add the other vdev to the pool, the 8TB drives will be less full, so until more data is added their resilver time may be reduced, as they hold less data.
+
+##### Double Drive Failure
+
+This is where the configurations differ significantly:
+
+- In a two-vdev pool, losing two drives from the same vdev would cause complete pool failure
+- With separate pools, a double drive failure would only affect one pool, allowing the other to continue operating. This way you can store the critical data on the pool you trust more.
+- Given the mixed drive origins (new vs refurbished), isolating potential failures becomes more critical
+
+#### Performance Considerations
+
+While investigating performance implications, I found several interesting points about IOPS and throughput:
+
+- ZFS stripes data across vdevs, meaning more vdevs generally means better IOPS
+- In RAIDZ configurations, IOPS are limited by the slowest drive in the vdev
+- Multiple mirrored vdevs provide the best combined IOPS performance
+- Streaming speeds scale with the number of data disks in a RAIDZ vdev
+- When mixing drive sizes, ZFS tends to favor larger vdevs, which could lead to uneven wear
+
+#### Ease of configuration
+
+##### Cache and log
+
+If you already have a zpool with a cache and log on NVMe, then using two pools means you'd need to repartition your NVMe drives to create space for the new partitions needed by the new zpool.
+
+This would allow you to specify different cache sizes for each pool, but it comes at the cost of a more complex operation.
+
+##### New pool creation
+
+Adding a vdev to an existing pool is quicker and easier than creating a new zpool. You need to make sure that you initialise it with the correct configuration.
+
+##### Storage management
+
+Having two pools doubles the operational tasks. One of the pools will fill up soon, so you may need to manually move files and directories around to rebalance it.
+
+### Final Decision
+
+After weighing all factors, if you favour reliability over convenience, implement two separate ZFS pools. This recommendation is primarily driven by:
+
+1. **Enhanced Reliability**: By separating the pools, we can maintain service availability even if one pool fails completely
+2. **Data Prioritization**: This allows placing critical application data on the more reliable pool (8TB drives), while using the refurbished drives for less critical data like media files
+3. **Risk Isolation**: Keeping the proven, new-purchased drives separate from the refurbished ones minimizes the impact of potential issues with the refurbished drives
+4. 
**Consistent Performance**: Following the best practice of keeping same-sized drives together in pools + +However I'm currently favouring easiness of life and trust my backup solution (I hope not to read this line in the future with regret :P), so I'll go with two vdevs. + +### Key Takeaways + +Through this investigation, I learned several important lessons about ZFS storage design: + +1. Raw parity drive count isn't the only reliability metric - configuration matters more than simple redundancy numbers +2. Pool layout significantly impacts both performance and failure scenarios +3. Sometimes simpler configurations (like separate pools) can provide better overall reliability than more complex ones +4. Consider the full lifecycle of the storage, including maintenance operations like resilver times +5. When expanding storage, don't underestimate the value of isolating different generations or sources of hardware +6. The history and source of drives (new vs refurbished) should influence your pool design decisions + +This investigation reinforced that storage design isn't just about maximizing space or performance - it's about finding the right balance of reliability, performance, and manageability for your specific needs. When dealing with mixed drive sources and different capacities, this balance becomes even more critical. + +### References and further reading + +- [Truenas post](https://www.truenas.com/blog/zfs-pool-performance-2/) +- [Freebsd post](https://forums.freebsd.org/threads/when-does-it-make-more-sense-to-use-multiple-vdevs-in-a-zfs-pool.83586/) +- [Klarasystems post](https://klarasystems.com/articles/choosing-the-right-zfs-pool-layout/) diff --git a/mkdocs.yml b/mkdocs.yml index 74292dd7175..2707d5b076d 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -35,7 +35,9 @@ nav: - Environmentalism: environmentalism.md - Laboral: laboral.md - Collaborating tools: collaborating_tools.md - - Conference organisation: conference_organisation.md + - Conference organisation: + - conference_organisation.md + - pretalx: pretalx.md - Ludditest: luddites.md - Life Management: - life_management.md @@ -103,6 +105,8 @@ nav: - Email clients: - himalaya: himalaya.md - alot: alot.md + - k9: k9.md + - Email protocols: - Maildir: maildir.md - Instant Messages Management: @@ -371,6 +375,7 @@ nav: - File management configuration: - NeoTree: neotree.md - Telescope: telescope.md + - fzf.nvim: fzf_nvim.md - Editing specific configuration: - vim_editor_plugins.md - Vim formatters: vim_formatters.md @@ -566,7 +571,10 @@ nav: - OpenZFS storage planning: zfs_storage_planning.md - Sanoid: sanoid.md - ZFS Prometheus exporter: zfs_exporter.md - - Hard drive health: hard_drive_health.md + - Hard drive health: + - hard_drive_health.md + - Smartctl: smartctl.md + - badblocks: badblocks.md - Resilience: - linux_resilience.md - Memtest: memtest.md @@ -768,7 +776,8 @@ nav: # - Streaming channels: streaming_channels.md - Music: - Sister Rosetta Tharpe: sister_rosetta_tharpe.md - - Video Gaming: + - Videogames: + - DragonSweeper: dragonsweeper.md - King Arthur Gold: kag.md - The Battle for Wesnoth: - The Battle for Wesnoth: wesnoth.md