diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 7f9b8d8..36addcc 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-23T20:08:29","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-23T20:22:16","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/dev/api/index.html b/dev/api/index.html index 32d5866..72ca39c 100644 --- a/dev/api/index.html +++ b/dev/api/index.html @@ -1,5 +1,5 @@ -API · ConcurrentSim

API

ConcurrentSim.ContainerType
Container{N<:Real, T<:Number}(env::Environment, capacity::N=one(N); level::N=zero(N))

A "Container" resource object, storing up to capacity units of a resource (of type N).

There is a Resource alias for Container{Int, Int}.

Resource() with default capacity of 1 is very similar to a typical lock. The lock and unlock functions are a convenient way to interact with such a "lock", in a way mostly compatible with other discrete event and concurrency frameworks. The request and release aliases are also available for these two functions.

See Store for a more channel-like resource.

Think of Resource and Container as locks and of Store as channels. They block only if empty (on taking) or full (on storing).
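A minimal sketch of the lock-like usage described above (the `worker` name and the 5-unit hold time are illustrative, not from the original docs):

```julia
using ConcurrentSim, ResumableFunctions

@resumable function worker(env::Simulation, res::Resource, id::Int)
    @yield lock(res)                 # suspends until the resource is free
    println("worker $id acquired the lock at t=", now(env))
    @yield timeout(env, 5.0)         # hold the "lock" for 5 time units
    @yield unlock(res)               # instantaneous, but still an event
end

sim = Simulation()
res = Resource(sim)                  # default capacity 1: behaves like a lock
for id in 1:2
    @process worker(sim, res, id)
end
run(sim)
```

With capacity 1, the second worker only acquires the lock once the first releases it at t=5.0.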

source
ConcurrentSim.DelayQueueType
DelayQueue{T}

A queue in which items are stored in a FIFO order, but are only available after a delay.

julia> sim = Simulation()
        queue = DelayQueue{Symbol}(sim, 10)
        @resumable function producer(env, queue)
            for item in [:a,:b,:a,:c]
@@ -25,7 +25,7 @@
 [ Info: taking a at time 10.0
 [ Info: taking b at time 12.0
 [ Info: taking a at time 14.0
-[ Info: taking c at time 16.0
source
ConcurrentSim.QueueStoreType
QueueStore{N, T<:Number}

A store in which items are stored in a FIFO order.

julia> sim = Simulation()
        store = Store{Symbol}(sim)
        queue = QueueStore{Symbol}(sim)
        items = [:a,:b,:a,:c];
@@ -46,7 +46,7 @@
  :a
  :b
  :a
- :c

See also: StackStore, Store

source
ConcurrentSim.StackStoreType
StackStore{N, T<:Number}

A store in which items are stored in a FILO order.

julia> sim = Simulation()
        store = Store{Symbol}(sim)
        stack = StackStore{Symbol}(sim)
        items = [:a,:b,:a,:c];
@@ -67,7 +67,7 @@
  :c
  :a
  :b
- :a

See also: QueueStore, Store

source
ConcurrentSim.StoreType
Store{N, T<:Number}(env::Environment; capacity::UInt=typemax(UInt))

A store is a resource that can hold a number of items of type N. It is similar to a Base.Channel with a finite capacity (put! blocks after reaching capacity). The put! and take! functions are a convenient way to interact with such a "channel" in a way mostly compatible with other discrete event and concurrency frameworks.

See Container for a more lock-like resource.

Think of Resource and Container as locks and of Store as channels/stacks. They block only if empty (on taking) or full (on storing).

Store does not guarantee any order of items. See StackStore and QueueStore for ordered variants.

julia> sim = Simulation(); store = Store{Int}(sim);
 
 julia> put!(store, 1); run(sim, 1); put!(store, 2);
 
@@ -75,4 +75,4 @@
 2
 
 julia> value(take!(store))
-1
source
Base.put!Method
put!(sto::Store, item::T)

Put an item into the store. Returns the put event, blocking if the store is full.

source
Base.lockMethod
lock(res::Resource)

Locks the Resource and returns the lock event. If the capacity of the Container is greater than 1, multiple requests can be made before blocking occurs.

source
Base.unlockMethod
unlock(res::Resource)

Unlocks the Resource and returns the unlock event.

source
Base.take!Function
take!(::Store)

An alias for get(::Store) for easier interoperability with the Base.Channel interface. Blocks if the store is empty.

source
diff --git a/dev/examples/Latency/index.html b/dev/examples/Latency/index.html index 409b15e..2f2fb05 100644 --- a/dev/examples/Latency/index.html +++ b/dev/examples/Latency/index.html @@ -81,4 +81,4 @@ Received this at 80.0 while sender sent this at 70.0 Received this at 85.0 while sender sent this at 75.0 Received this at 90.0 while sender sent this at 80.0 -Received this at 95.0 while sender sent this at 85.0 +Received this at 95.0 while sender sent this at 85.0 diff --git a/dev/examples/mmc/index.html b/dev/examples/mmc/index.html index 0f658e2..7a47cf2 100644 --- a/dev/examples/mmc/index.html +++ b/dev/examples/mmc/index.html @@ -69,4 +69,4 @@ Customer 6 exited service: 17.401833154870328 Customer 10 entered service: 17.401833154870328 Customer 9 exited service: 17.586065352135993 -Customer 10 exited service: 18.690264775280085 +Customer 10 exited service: 18.690264775280085 diff --git a/dev/examples/ross/index.html b/dev/examples/ross/index.html index e00e8e1..210ae83 100644 --- a/dev/examples/ross/index.html +++ b/dev/examples/ross/index.html @@ -68,4 +68,4 @@ At time 30844.62667837361: No more spares! At time 1601.2524911974856: No more spares! At time 824.1048708405848: No more spares! -Average crash time: 16664.247588264083 +Average crash time: 16664.247588264083 diff --git a/dev/guides/basics/index.html b/dev/guides/basics/index.html index bb7539a..f5f30db 100644 --- a/dev/guides/basics/index.html +++ b/dev/guides/basics/index.html @@ -14,4 +14,4 @@ # output -now=1.0, value=42

The example process function above first creates a timeout event. It passes the environment, a delay, and a value to it. The timeout schedules itself at now + delay (that’s why the environment is required); other event types usually schedule themselves at the current simulation time.

The process function then yields the event and thus gets suspended. It is resumed when ConcurrentSim processes the timeout event. The process function also receives the event’s value (42) – this is optional, however, so @yield event would have been fine if you were not interested in the value or if the event had no value at all.

Finally, the process function prints the current simulation time (that is accessible via the now function) and the timeout’s value.

If all required process functions are defined, you can instantiate all objects for your simulation. In most cases, you start by creating an instance of Environment, e.g. a Simulation, because you’ll need to pass it around a lot when creating everything else.

Starting a process function involves two things:
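Both steps are typically handled at once by the @process macro. A minimal sketch mirroring the example above:

```julia
using ConcurrentSim, ResumableFunctions

@resumable function example(env::Simulation)
    val = @yield timeout(env, 1.0; value=42)
    println("now=", now(env), ", value=", val)
end

sim = Simulation()       # the Environment everything else needs
@process example(sim)    # calls the process function and schedules the process
run(sim)                 # prints "now=1.0, value=42"
```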

diff --git a/dev/guides/blockingandyielding/index.html b/dev/guides/blockingandyielding/index.html index de81100..8aebd16 100644 --- a/dev/guides/blockingandyielding/index.html +++ b/dev/guides/blockingandyielding/index.html @@ -1,2 +1,2 @@ -Resource API · ConcurrentSim

Blocking and Yielding Resource API

The goal of this page is to list the most common synchronization and resource management patterns used in ConcurrentSim.jl simulations and to briefly compare them to Julia's base capabilities for asynchronous and parallel programming.

There are many different approaches to discrete event simulation in particular and to asynchronous and parallel programming in general. This page assumes some rudimentary understanding of concurrency in programming. While not necessary, you are encouraged to explore the following resources for a more holistic understanding:

  • "concurrency" vs "parallelism" - see stackoverflow.com on the topic;
  • "threads" vs "tasks": A task is the actual piece of work, a thread is the "runway" on which a task runs. You can have more tasks than threads and you can even have tasks that jump between threads - see Julia's parallel programming documentation (in particular the async and multithreading docs), and multiple Julia blog posts on multithreading and its misuses;
  • "locks" used to guard (or synchronize) access to a given resource: e.g. one thread locks an array while modifying it in order to ensure that another thread will not be modifying it at the same time. Julia's Base multithreading capabilities provide a ReentrantLock, together with a lock, trylock, unlock, and islocked API;
  • "channels" used to organize concurrent tasks. Julia's Base multithreading capabilities provide Channel, together with take!, put!, isready;
  • the "red/blue-colored functions" metaphor is also worth knowing, as is learning about "promises" and "futures".

Programming discrete event simulations can be very similar to async parallel programming, except for the fact that in the simulation the "time" is fictitious (and tracking it is a big part of the value proposition in the simulation software). On the other hand, in usual parallel programming the goal is simply to do as much work as possible in the shortest (actual) time. In that context, one possible use of discrete event simulations is to cheaply model and optimize various parallel implementations of actual expensive algorithms (whether numerical computer algorithms or the algorithms used to schedule a real factory or a fleet of trucks).

In particular, the ConcurrentSim.jl package uses the async "coroutines" model of parallel programming. ConcurrentSim builds its coroutines on the ResumableFunctions.jl package, which provides the @resumable macro to mark a function as an "async" coroutine and the @yield macro to yield between coroutines.
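As an illustration of this coroutine style (a classic ResumableFunctions example, not specific to simulation):

```julia
using ResumableFunctions

@resumable function fibonacci(n::Int) :: Int
    a, b = 0, 1
    for _ in 1:n
        @yield a          # suspend here; resume on the next iteration
        a, b = b, a + b
    end
end

for f in fibonacci(5)     # the resumable function acts as an iterator
    println(f)            # prints 0, 1, 1, 2, 3
end
```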

Base Julia coroutines vs ConcurrentSim coroutines

The ConcurrentSim and ResumableFunctions coroutines are currently incompatible with Julia's base coroutines (which are based around wait and fetch). A separate coroutine implementation was necessary because Julia's coroutines are designed for computationally heavy tasks and practical parallel algorithms, leading to significant overhead when they are used with extremely large numbers of computationally cheap tasks, as is common in discrete event simulators. ResumableFunctions' coroutines are single-threaded but have drastically lower call overhead. A long-term goal of ours is to unify the API used by ResumableFunctions and base Julia, but this will not be achieved in the near term, hence the need for pages like this one.

Without further ado, here is the typical API used with:

  • ConcurrentSim.Resource, which is used to represent a scarce resource that can be used by at most a fixed number of tasks. If the limit is just one task (the default), this is very similar to Base.ReentrantLock. Resource is a special case of Container with an integer "resource counter".
  • ConcurrentSim.Store, which is used to represent an unordered heap. For ordered versions, consider QueueStore or StackStore.
|          | Base ReentrantLock | Base Channel | ConcurrentSim Container | ConcurrentSim Resource, i.e. Container{Int} | ConcurrentSim Store |                                                          |
|:---------|:------------------:|:------------:|:-----------------------:|:-------------------------------------------:|:-------------------:|:---------------------------------------------------------|
| put!     | ❌                 | block        | @yield                  | @yield                                      | @yield              | low-level "put an object in" API                         |
| take!    | ❌                 | block        | ❌                      | ❌                                          | @yield              | the Channel-like API for Store                           |
| lock     | block              | ❌           | ❌                      | @yield                                      | ❌                  | the Lock-like API for Resource (there is also trylock)   |
| unlock   | ✔️                 | ❌           | ❌                      | @yield                                      | ❌                  | the Lock-like API for Resource                           |
| isready  | ❌                 | ✔️           | ✔️                      | ✔️                                          | ✔️                  | something is stored in the resource                      |
| islocked | ✔️                 | ❌           | ✔️                      | ✔️                                          | ✔️                  | the resource can not store anything more                 |

The table denotes which methods exist (✔️), are blocking (block), need to be explicitly yielded with ResumableFunctions (@yield), or are not applicable (❌).

As you can see, Resource shares some properties with ReentrantLock and avails itself of the lock/unlock/trylock Base API. Store similarly shares some properties with Channel and uses the put!/take! Base API. Of note is that where the Base API would block, the corresponding ConcurrentSim methods instead return events that need to be @yield-ed.
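For instance, a minimal Store round-trip in which the calls that would block on a Base.Channel become yielded events (the producer/consumer names are illustrative):

```julia
using ConcurrentSim, ResumableFunctions

@resumable function consumer(env::Simulation, sto::Store)
    item = @yield take!(sto)   # on a Base.Channel this would be a blocking take!
    println("got $item at t=", now(env))
end

@resumable function producer(env::Simulation, sto::Store)
    @yield timeout(env, 2.0)
    @yield put!(sto, :job)     # the @yield matters when the store can be full
end

sim = Simulation()
sto = Store{Symbol}(sim)
@process consumer(sim, sto)
@process producer(sim, sto)
run(sim)                       # prints "got job at t=2.0"
```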

take! and unlock are both implemented on top of the lower-level get.

Base.lock and Base.unlock are aliased to ConcurrentSim.request and ConcurrentSim.release respectively, for semantic convenience when working with Resource.

unlock(::Resource) is instantaneous so the @yield is not strictly necessary. Similarly for put!(::Store) if the store has infinite capacity.

diff --git a/dev/guides/environments/index.html b/dev/guides/environments/index.html index 05113b2..05e868d 100644 --- a/dev/guides/environments/index.html +++ b/dev/guides/environments/index.html @@ -57,4 +57,4 @@ end

In ConcurrentSim, this can be used to provide return values for processes that can be used by other processes:

@resumable function other_proc(env::Environment)
   ret_val = @yield @process my_proc(env)
   @assert ret_val == 150
-end
+end diff --git a/dev/guides/events/index.html b/dev/guides/events/index.html index c9c8a82..38fa2e5 100644 --- a/dev/guides/events/index.html +++ b/dev/guides/events/index.html @@ -74,4 +74,4 @@ bell is ringing at t=90.0 pupil 1 leaves class at t=90.0 pupil 2 leaves class at t=90.0 -pupil 3 leaves class at t=90.0 +pupil 3 leaves class at t=90.0 diff --git a/dev/index.html b/dev/index.html index f8bb2b8..26bfb76 100644 --- a/dev/index.html +++ b/dev/index.html @@ -26,4 +26,4 @@ fast 0.5 slow 1.0 fast 1.0 -fast 1.5 +fast 1.5 diff --git a/dev/tutorial/index.html b/dev/tutorial/index.html index 3b03266..c7ea9cf 100644 --- a/dev/tutorial/index.html +++ b/dev/tutorial/index.html @@ -143,4 +143,4 @@ 2 leaving the bcs at 9.0 4 starting to charge at 9.0 3 leaving the bcs at 12.0 -4 leaving the bcs at 14.0

Note that the first two cars can start charging immediately after they arrive at the BCS, while cars 3 and 4 have to wait.
