From 0b9020c1dbe566c5317dd5deeb7f26d4d7aabbe6 Mon Sep 17 00:00:00 2001
From: Souvik Sarkar
Date: Fri, 14 Jul 2023 00:08:20 +0530
Subject: [PATCH] Vale style checks for tuning guide

---
 xml/tuning_cgroups.xml | 42 ++++++++++++++++++++----------------------
 1 file changed, 20 insertions(+), 22 deletions(-)

diff --git a/xml/tuning_cgroups.xml b/xml/tuning_cgroups.xml
index 01e5d3cde2..305fc78c5f 100644
--- a/xml/tuning_cgroups.xml
+++ b/xml/tuning_cgroups.xml
@@ -37,7 +37,7 @@
   Every process is assigned exactly one administrative cgroup. cgroups are
   ordered in a hierarchical tree structure. You can set resource
-  limitations, such as CPU, memory, disk I/O, or network bandwidth usage,
+  limitations such as CPU, memory, disk I/O, or network bandwidth usage,
   for single processes or for whole branches of the hierarchy tree.
@@ -52,9 +52,8 @@
-  The kernel cgroup API comes in two variants, v1 and v2. Additionally,
-  there can be multiple cgroup hierarchies exposing different APIs. From
-  the numerous possible combinations, there are two practical choices:
+  The kernel cgroup API comes in two variants: v1 and v2. Additionally,
+  there can be multiple cgroup hierarchies exposing different APIs. From the many possible combinations, there are two practical choices:
@@ -98,7 +97,7 @@
-  simpler handling of the single hierararchy
+  simpler handling of the single hierarchy
@@ -108,8 +107,8 @@
   To enable the unified control group hierarchy, append as a kernel command
-  line parameter to the &grub; boot loader. (Refer to
-  for more details about configuring &grub;.)
+  line parameter to the &grub; boot loader. For more details about configuring &grub;, refer to
+  .
@@ -122,9 +121,9 @@
-  The accounting has relatively small but non-zero overhead, whose impact
-  depends on the workload. Activating accounting for one unit will also
-  implicitly activate it for all units in the same slice, and for all its
+  The accounting has comparatively small but non-zero overhead, whose impact
+  depends on the workload. Activating accounting for one unit also
+  implicitly activates it for all units in the same slice, and for all its
   parent slices, and the units contained in them.
@@ -214,7 +213,7 @@
 TasksMax=infinity
 DefaultTasksMax=infinity
   infinity means having no limit. It is not a requirement
-  to change the default, but setting some limits may help to prevent system
+  to change the default, but setting certain limits may help to prevent system
   crashes from runaway processes.
@@ -292,7 +291,7 @@
 DefaultTasksMax=256
   Default TasksMax limit on users
-  The default limit on users should be fairly high, because user sessions
+  The default limit on users should be high, because user sessions
   need more resources. Set your own default for any user by creating a new
   file, for example /etc/systemd/system/user-.slice.d/40-user-taskmask.conf.
@@ -320,7 +319,7 @@
 TasksMax=16284
   How do you know what values to use? This varies according to your
   workloads, system resources, and other resource configurations. When your
-  TasksMax value is too low, you will see error messages
+  TasksMax value is too low, you may see error messages
   such as Failed to fork (Resources temporarily unavailable),
   Can't create thread to handle new connection, and
   Error: Function call 'fork' failed
@@ -402,7 +401,7 @@
 The throttling policy is implemented higher in the stack, therefore it does
 not require any additional adjustments. The proportional I/O control
 policies have two different implementations: the BFQ controller, and the
 cost-based model.
-We describe the BFQ controller here. In order to exert its proportional
+We describe the BFQ controller here. To exert its proportional
 implementation for a particular device, we must make sure that BFQ is the
 chosen scheduler. Check the current scheduler:
@@ -418,9 +417,8 @@
 Switch the scheduler to BFQ:
 You must specify the disk device (not a partition). The
-optimal way to set this attribute is a udev rule specific to the device
-(note that &slsa; ships udev rules that already enable BFQ for rotational
-disk drives).
+optimal way to set this attribute is a udev rule specific to the device. Note that &slsa; ships udev rules that already enable BFQ for rotational
+disk drives.
@@ -444,7 +442,7 @@
 r
 I/O is originating only from cgroups c and b. Even though c has a higher
-weight, it will be treated with lower priority because it is level-competing
+weight, it is treated with lower priority because it is level-competing
 with b.
@@ -476,7 +474,7 @@
 example:
 I/O control behavior and setting expectations
   The following list items describe I/O control behavior, and what you
-  should expect under various conditions.
+  should expect under different conditions.
@@ -485,7 +483,7 @@
 I/O control works best for direct I/O operations (bypassing page cache),
 the situations where the actual I/O is decoupled from the caller (typically
 writeback via page cache) may manifest variously. For example, delayed I/O
 control or even no observed I/O control (consider little bursts or competing
-workloads that happen to never "meet", submitting I/O at the same time, and
+workloads that happen to never meet, submitting I/O at the same time, and
 saturating the bandwidth). For these reasons, the resulting ratio of I/O
 throughputs does not strictly follow the ratio of configured weights.
@@ -530,7 +528,7 @@
 each other (but responsible resource design perhaps avoids that).
 The I/O device bandwidth is not the only shared resource on the I/O
 path. Global file system structures are involved, which is relevant
-when I/O control is meant to guarantee certain bandwidth; it will not, and
+when I/O control is meant to guarantee certain bandwidth; it does not, and
 it may even lead to priority inversion (prioritized cgroup waiting for a
 transaction of slower cgroup).
@@ -539,7 +537,7 @@
 So far, we have been discussing only explicit I/O of file system data, but
 swap-in and swap-out can also be controlled. Although if such a need
-arises, it usually points out to improperly provisioned memory (or memory limits).
+arises, it points to improperly provisioned memory (or memory limits).
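
For reference, a minimal sketch of the per-user TasksMax drop-in that the
hunks above describe. The UID 1000 and the exact drop-in file path are
hypothetical (the UID placeholder in the original XML is not reproduced
here), and the [Slice] section header is an assumption based on standard
systemd drop-in syntax:

    # /etc/systemd/system/user-1000.slice.d/40-user-taskmask.conf (hypothetical path)
    [Slice]
    TasksMax=16284

    # check the effective limit for that slice
    systemctl show -p TasksMax user-1000.slice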
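
For the I/O scheduler hunks, a minimal sketch of checking and switching the
scheduler, plus a persistent udev rule. The device name sda and the rule
file name are hypothetical, and &slsa;'s shipped rules are not reproduced
here:

    # the active scheduler is shown in brackets
    cat /sys/block/sda/queue/scheduler

    # as root: switch the whole device (not a partition) to BFQ
    echo bfq > /sys/block/sda/queue/scheduler

    # /etc/udev/rules.d/62-io-scheduler.rules (illustrative rule for rotational disks)
    ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"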