Vale style checks for tuning guide
sounix000 committed Jul 13, 2023
1 parent 11f03fb commit 0b9020c
Showing 1 changed file with 20 additions and 22 deletions.
42 changes: 20 additions & 22 deletions xml/tuning_cgroups.xml
@@ -37,7 +37,7 @@
 <para>
  Every process is assigned exactly one administrative cgroup. cgroups are
  ordered in a hierarchical tree structure. You can set resource
- limitations, such as CPU, memory, disk I/O, or network bandwidth usage,
+ limitations such as CPU, memory, disk I/O, or network bandwidth usage,
  for single processes or for whole branches of the hierarchy tree.
 </para>

Expand All @@ -52,9 +52,8 @@
</para>

<para>
The kernel cgroup API comes in two variants, v1 and v2. Additionally,
there can be multiple cgroup hierarchies exposing different APIs. From
the numerous possible combinations, there are two practical choices:
The kernel cgroup API comes in two variants &dash; v1 and v2. Additionally,
there can be multiple cgroup hierarchies exposing different APIs. From many possible combinations, there are two practical choices:
</para>
<itemizedlist>
<listitem>
@@ -98,7 +97,7 @@
  </listitem>
  <listitem>
   <para>
-   simpler handling of the single hierararchy
+   simpler handling of the single hierarchy
   </para>
  </listitem>
 </itemizedlist>
Expand All @@ -108,8 +107,8 @@
<para>
To enable the unified control group hierarchy, append
<option>systemd.unified_cgroup_hierarchy=1</option> as a kernel command
line parameter to the &grub; boot loader. (Refer to
<xref linkend="cha-grub2"/> for more details about configuring &grub;.)
line parameter to the &grub; boot loader. For more details about configuring &grub;, refer to
<xref linkend="cha-grub2"/>.
</para>
</sect1>

Expand All @@ -122,9 +121,9 @@
</para>

<para>
The accounting has relatively small but non-zero overhead, whose impact
depends on the workload. Activating accounting for one unit will also
implicitly activate it for all units in the same slice, and for all its
The accounting has comparatively small but non-zero overhead, whose impact
depends on the workload. Activating accounting for one unit also
implicitly activates it for all units in the same slice, and for all its
parent slices, and the units contained in them.
</para>

@@ -214,7 +213,7 @@ TasksMax=infinity
 DefaultTasksMax=infinity</screen>
 <para>
  <literal>infinity</literal> means having no limit. It is not a requirement
- to change the default, but setting some limits may help to prevent system
+ to change the default, but setting certain limits may help to prevent system
  crashes from runaway processes.
 </para>
</sect2>
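One hedged way to change that default without editing /etc/systemd/system.conf itself is a `[Manager]` drop-in; the file name and the value below are illustrative, and a temporary directory stands in for the real drop-in path:

```shell
# Illustrative drop-in that replaces DefaultTasksMax=infinity with a cap.
# A scratch directory stands in for /etc/systemd/system.conf.d.
dropin_dir=$(mktemp -d)
cat > "$dropin_dir/90-taskmax.conf" <<'EOF'
[Manager]
DefaultTasksMax=4096
EOF
cat "$dropin_dir/90-taskmax.conf"
```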
@@ -292,7 +291,7 @@ DefaultTasksMax=256
<sect2>
 <title>Default <literal>TasksMax</literal> limit on users</title>
 <para>
-  The default limit on users should be fairly high, because user sessions
+  The default limit on users should be high, because user sessions
   need more resources. Set your own default for any user by creating a new
   file, for example
   <filename>/etc/systemd/system/user-.slice.d/40-user-taskmask.conf</filename>.
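A sketch of what that drop-in might contain, using the file name and the 16284 value the guide shows; a temporary directory stands in for the real `/etc/systemd/system/user-.slice.d/` path:

```shell
# Sketch: 40-user-taskmask.conf raising the per-user-slice task limit.
# A scratch directory stands in for /etc/systemd/system/user-.slice.d.
slice_dir=$(mktemp -d)
cat > "$slice_dir/40-user-taskmask.conf" <<'EOF'
[Slice]
TasksMax=16284
EOF
cat "$slice_dir/40-user-taskmask.conf"
```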
@@ -320,7 +319,7 @@ TasksMax=16284
 <para>
  How do you know what values to use? This varies according to your
  workloads, system resources, and other resource configurations. When your
- <literal>TasksMax</literal> value is too low, you will see error messages
+ <literal>TasksMax</literal> value is too low, you may see error messages
  such as <emphasis>Failed to fork (Resources temporarily
  unavailable)</emphasis>, <emphasis>Can't create thread to handle new
  connection</emphasis>, and <emphasis>Error: Function call 'fork' failed
@@ -402,7 +401,7 @@
 The throttling policy is implemented higher in the stack, therefore it
 does not require any additional adjustments.
 The proportional I/O control policies have two different implementations:
 the BFQ controller, and the cost-based model.
-We describe the BFQ controller here. In order to exert its proportional
+We describe the BFQ controller here. To exert its proportional
 implementation for a particular device, we must make sure that BFQ is the
 chosen scheduler. Check the current scheduler:
 </para>
Expand All @@ -418,9 +417,8 @@ Switch the scheduler to BFQ:
</screen>
<para>
You must specify the disk device (not a partition). The
optimal way to set this attribute is a udev rule specific to the device
(note that &slsa; ships udev rules that already enable BFQ for rotational
disk drives).
optimal way to set this attribute is a udev rule specific to the device &slsa; ships udev rules that already enable BFQ for rotational
disk drives.
</para>
</sect3>
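Such a device-specific udev rule might look like the following sketch; the rule file name, its priority prefix, and the `sda` device match are assumptions, not taken from the guide, and a temporary directory stands in for `/etc/udev/rules.d/`:

```shell
# Sketch: a udev rule pinning the bfq scheduler for one disk device.
# A scratch directory stands in for /etc/udev/rules.d.
rules_dir=$(mktemp -d)
cat > "$rules_dir/60-ioscheduler.rules" <<'EOF'
ACTION=="add|change", KERNEL=="sda", ATTR{queue/scheduler}="bfq"
EOF
cat "$rules_dir/60-ioscheduler.rules"
```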

Expand All @@ -444,7 +442,7 @@ r
</screen>
<para>
I/O is originating only from cgroups c and b. Even though c has a higher
weight, it will be treated with lower priority because it is level-competing
weight, it is treated with lower priority because it is level-competing
with b.
</para>
</sect3>
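On a cgroup v2 mount, these weights live in each cgroup's `io.weight` file. A sketch of the layout behind the example (a scratch tree stands in for `/sys/fs/cgroup`, since real writes need root and the io controller enabled; the values are illustrative):

```shell
# Sketch: cgroup b and the deeper-nested cgroup c (under a).
# c's higher weight competes at a's level, not directly with b,
# which is why b can still win. A temp dir mimics /sys/fs/cgroup.
cgroot=$(mktemp -d)
mkdir -p "$cgroot/b" "$cgroot/a/c"
echo 100 > "$cgroot/b/io.weight"
echo 200 > "$cgroot/a/c/io.weight"
cat "$cgroot/a/c/io.weight"
```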
@@ -476,7 +474,7 @@ example:
 <title>I/O control behavior and setting expectations</title>
 <para>
  The following list items describe I/O control behavior, and what you
- should expect under various conditions.
+ should expect under different conditions.
 </para>
 <itemizedlist>
  <listitem>
Expand All @@ -485,7 +483,7 @@ I/O control works best for direct I/O operations (bypassing page cache),
the situations where the actual I/O is decoupled from the caller (typically
writeback via page cache) may manifest variously. For example, delayed I/O
control or even no observed I/O control (consider little bursts or competing
workloads that happen to never "meet", submitting I/O at the same time, and
workloads that happen to never <quote>meet</quote>, submitting I/O at the same time, and
saturating the bandwidth). For these reasons, the resulting ratio of
I/O throughputs does not strictly follow the ratio of configured weights.
</para>
@@ -530,7 +528,7 @@ each other (but responsible resource design perhaps avoids that).
 <para>
  The I/O device bandwidth is not the only shared resource on the I/O path.
  Global file system structures are involved, which is relevant
- when I/O control is meant to guarantee certain bandwidth; it will not, and
+ when I/O control is meant to guarantee certain bandwidth; it does not, and
  it may even lead to priority inversion (prioritized cgroup waiting for a
  transaction of slower cgroup).
 </para>
Expand All @@ -539,7 +537,7 @@ transaction of slower cgroup).
<para>
So far, we have been discussing only explicit I/O of file system data, but
swap-in and swap-out can also be controlled. Although if such a need
arises, it usually points out to improperly provisioned memory (or memory limits).
arises, it points out to improperly provisioned memory (or memory limits).
</para>
</listitem>
</itemizedlist>
