Hello,

I am currently attempting to write a custom async IO engine for a distributed file system that I am working on, and I am having trouble understanding the semantics of `io_engine::getevents` and `io_engine::events`. The file system exposes a bespoke epollable IO interface: the file system kicks the epoll instance whenever there are events that can be reaped from the filesystem's completion queue. Something like this:
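(Rough sketch only; `fs_fd`, `fs_reap()`, and `struct fs_completion` below are placeholder names for my filesystem's interface, nothing from fio itself.)

```c
/* Placeholder sketch of the filesystem side: a pollable fd that the
 * filesystem "kicks" whenever completions can be reaped from its queue. */
#include <sys/epoll.h>
#include <sys/types.h>

struct fs_completion {
	void    *cookie;	/* whatever I attached when submitting the IO */
	ssize_t  result;	/* bytes transferred, or a negative error */
};

/* Non-blocking: drains up to 'nr' completions, returns how many it reaped. */
int fs_reap(int fs_fd, struct fs_completion *out, int nr);

static int wait_and_reap(int epfd, int fs_fd, struct fs_completion *out, int nr)
{
	struct epoll_event ev;
	int ret = epoll_wait(epfd, &ev, 1, -1);	/* filesystem kicks this fd */

	if (ret <= 0)
		return ret;
	return fs_reap(fs_fd, out, nr);
}
```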
What I am having trouble with is understanding how to map this to the `io_engine::getevents` and `io_engine::events` functions.

At the moment my understanding is that an `io_u` object should map to some IO object in my own file system, and that the number of completions broadly maps to the return value of `io_engine::getevents`.

However, what is not clear to me is how the `min` and `max` parameters of `io_engine::getevents` interact with `io_engine::events`, and what exactly `io_engine::events` indexes into.

My main question is: do I have to buffer completions if `io_engine::getevents` specifies a `max` that is smaller than the number of events available?
For example, let's say I do the following:

1. Queue and submit $N$ IOs into my filesystem by implementing `io_engine::queue` and `io_engine::commit`.
2. At some point FIO polls the filesystem with `io_engine::getevents`; let's say the `max` argument is some number `max` < $N$. However, the filesystem in fact already has $N$ completions available (I believe it could theoretically have up to `iodepth` events available).
3. FIO now thinks there are `max` available events, indexes them over $[0, max)$ with `io_engine::events`, and processes them as complete.

Now, do I have to buffer the remaining $N - max$ events somewhere and have them be available for the next `io_engine::getevents`/`io_engine::events` round to index into, or should they be dropped?
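To make the question concrete, below is a rough sketch of the buffering variant I have in mind, assuming I am reading the `getevents`/`event` prototypes in `ioengines.h` correctly. The `fsio_*` names and `struct fsio_data` are placeholders from my engine, not fio API, and `fsio_reap_into_buffer()` is a hypothetical helper that drains my filesystem's completion queue:

```c
/* Sketch only: in a real engine this lives in fio's tree and pulls in
 * struct thread_data / struct io_u from fio.h. */
#include "../fio.h"
#include <string.h>

struct fsio_data {
	struct io_u **completed;	/* reaped but not yet handed to fio       */
	unsigned int  nr;		/* how many completions are buffered       */
	unsigned int  consumed;		/* how many fio indexed in the last round */
};

/* Placeholder: drains my filesystem's queue (epoll + reap) and appends the
 * corresponding io_u pointers to fd->completed, bumping fd->nr. */
static void fsio_reap_into_buffer(struct thread_data *td, struct fsio_data *fd,
				  const struct timespec *t);

static int fio_fsio_getevents(struct thread_data *td, unsigned int min,
			      unsigned int max, const struct timespec *t)
{
	struct fsio_data *fd = td->io_ops_data;
	unsigned int ret;

	/* Drop the completions fio consumed via event() in the previous round. */
	if (fd->consumed) {
		memmove(fd->completed, fd->completed + fd->consumed,
			(fd->nr - fd->consumed) * sizeof(struct io_u *));
		fd->nr -= fd->consumed;
		fd->consumed = 0;
	}

	/* Reap from the filesystem; this may buffer more than max completions. */
	while (fd->nr < min)
		fsio_reap_into_buffer(td, fd, t);

	/* Report at most max; anything beyond max stays buffered for later. */
	ret = fd->nr < max ? fd->nr : max;
	fd->consumed = ret;
	return ret;
}

static struct io_u *fio_fsio_event(struct thread_data *td, int event)
{
	struct fsio_data *fd = td->io_ops_data;

	/* fio calls this with event in [0, return value of getevents) */
	return fd->completed[event];
}
```

In particular I am unsure whether this memmove-style carry-over in `getevents()` is the intended pattern, or whether an engine is expected to never reap more than `max` completions from its backend in the first place.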
Best,
lukerip