
feat: support Atomics.waitAsync #687

Open — wants to merge 108 commits into base: `current`

Commits (108)
2cf4b46
refactor: emit `drain` as soon as capacity is available (#367)
metcoder95 Jul 14, 2023
7b86105
refactor!: drop runTask method (#363)
metcoder95 Sep 21, 2023
b42bb14
chore(deps-dev): Bump @types/node from 20.6.0 to 20.6.2 (#406)
dependabot[bot] Sep 19, 2023
4df0ca4
fix: add signal reason support (#403)
metcoder95 Sep 20, 2023
986659a
chore(deps): Bump actions/checkout from 3 to 4 (#413)
dependabot[bot] Oct 2, 2023
bf92e5a
chore(deps-dev): Bump @types/node from 20.6.2 to 20.8.2 (#417)
dependabot[bot] Oct 2, 2023
352bc84
chore(deps-dev): Bump @types/node from 20.8.2 to 20.8.3 (#419)
dependabot[bot] Oct 15, 2023
a998a69
chore(deps): Bump @babel/traverse (#425)
dependabot[bot] Oct 20, 2023
31f7a2b
fix: do not re-create threads when calling `.destory()` (#430)
alan-agius4 Oct 20, 2023
3d67926
chore(deps-dev): Bump @types/node from 20.8.3 to 20.8.7 (#428)
dependabot[bot] Oct 23, 2023
5c2264c
fix: migrate to EventEmitterAsyncResource from core (#433)
groozin Oct 30, 2023
69cae53
chore(deps): Bump actions/setup-node from 3 to 4 (#437)
dependabot[bot] Nov 6, 2023
0cfa8d7
chore(deps-dev): Bump @types/node from 20.8.7 to 20.8.10 (#440)
dependabot[bot] Nov 6, 2023
c9e2abc
chore(deps-dev): Bump @typescript-eslint/parser from 6.9.1 to 6.10.0 …
dependabot[bot] Nov 13, 2023
ef6e205
chore(deps-dev): Bump @types/node from 20.8.10 to 20.9.0 (#443)
dependabot[bot] Nov 13, 2023
1f85c16
chore(deps-dev): Bump @typescript-eslint/eslint-plugin from 6.9.1 to …
dependabot[bot] Nov 15, 2023
b7895b9
chore(release): 4.2.0
metcoder95 Nov 19, 2023
cc8fca5
chore(deps-dev): Bump @types/node from 20.9.0 to 20.9.2 (#448)
dependabot[bot] Nov 20, 2023
4260e7e
chore(deps-dev): Bump @typescript-eslint/parser from 6.10.0 to 6.11.0…
dependabot[bot] Nov 20, 2023
99ae84b
chore(deps-dev): Bump @typescript-eslint/eslint-plugin from 6.10.0 to…
dependabot[bot] Nov 21, 2023
27cac34
chore(deps-dev): Bump @typescript-eslint/eslint-plugin from 6.11.0 to…
dependabot[bot] Nov 27, 2023
b6e2559
chore(deps-dev): Bump @typescript-eslint/parser from 6.11.0 to 6.12.0…
dependabot[bot] Nov 27, 2023
b08a78e
chore(deps-dev): Bump @types/node from 20.9.2 to 20.10.0 (#455)
dependabot[bot] Nov 27, 2023
be15076
chore(deps-dev): Bump typescript from 5.1.6 to 5.3.2 (#454)
dependabot[bot] Nov 27, 2023
a150938
fix: default minThreads with odd CPU count (#457)
Chocobozzz Dec 3, 2023
6ea35db
chore(deps-dev): Bump @types/node from 20.10.0 to 20.10.4 (#466)
dependabot[bot] Dec 13, 2023
41408c1
chore(deps-dev): Bump ts-node from 9.1.1 to 10.9.2 (#463)
dependabot[bot] Dec 13, 2023
682a931
chore(release): 4.2.1
metcoder95 Dec 13, 2023
d30c6b3
chore(deps-dev): Bump @typescript-eslint/parser from 6.12.0 to 6.14.0…
dependabot[bot] Dec 13, 2023
89ee83a
chore(deps-dev): Bump typescript from 5.3.2 to 5.3.3 (#464)
dependabot[bot] Dec 13, 2023
8deb653
chore(deps-dev): Bump @typescript-eslint/eslint-plugin from 6.12.0 to…
dependabot[bot] Dec 13, 2023
88b40d8
chore(deps): Bump @babel/traverse from 7.22.8 to 7.23.2 (#469)
dependabot[bot] Dec 13, 2023
8dadd43
chore(deps-dev): Bump @types/node from 20.10.4 to 20.10.5 (#470)
dependabot[bot] Dec 19, 2023
d84adb3
chore(deps-dev): Bump @typescript-eslint/eslint-plugin from 6.14.0 to…
dependabot[bot] Dec 28, 2023
c1996d4
chore(deps-dev): Bump @typescript-eslint/parser from 6.14.0 to 6.16.0…
dependabot[bot] Dec 28, 2023
cff5414
chore(deps-dev): Bump @types/node from 20.10.5 to 20.10.6 (#476)
dependabot[bot] Jan 3, 2024
ef07d02
chore(deps-dev): Bump @typescript-eslint/eslint-plugin from 6.15.0 to…
dependabot[bot] Jan 3, 2024
40b5e6b
chore(deps-dev): Bump @types/node from 20.10.6 to 20.10.7 (#479)
dependabot[bot] Jan 12, 2024
a3571fd
feat: use native Node.js histogram support (#482)
clydin Jan 12, 2024
e848054
chore(deps-dev): Bump @types/node from 20.11.0 to 20.11.1 (#485)
dependabot[bot] Jan 16, 2024
79d7401
chore(release): 4.3.0
metcoder95 Jan 16, 2024
aabaa5b
chore(deps-dev): Bump @typescript-eslint/parser from 6.18.1 to 6.19.0…
dependabot[bot] Jan 26, 2024
177a58d
chore: workflows: drop v16.x (#495)
RafaelGSS Jan 27, 2024
6f4ef50
fix(#491): out of bounds histogram value (#496)
metcoder95 Jan 28, 2024
c1f533d
chore(release): 4.3.1
metcoder95 Jan 30, 2024
335e406
chore(deps-dev): Bump @typescript-eslint/eslint-plugin from 6.18.1 to…
dependabot[bot] Jan 31, 2024
be9e83b
fix(#513): forward errors correctly to Piscina (#514)
metcoder95 Feb 16, 2024
86a5b21
chore(release): 4.3.2
metcoder95 Feb 16, 2024
e0686f6
chore(deps-dev): Bump @types/node from 20.11.1 to 20.11.19 (#516)
dependabot[bot] Feb 21, 2024
07b03eb
chore(deps): Bump actions/cache from 3 to 4 (#505)
dependabot[bot] Feb 21, 2024
01726c3
feat: add option to disable run/wait time recording (#518)
clydin Feb 21, 2024
ff8dee0
feat: allow named import usage (#517)
clydin Feb 21, 2024
683cda4
chore(release): 4.4.0
metcoder95 Feb 28, 2024
b2d7e62
chore(deps-dev): Bump @types/node from 20.11.19 to 20.11.24 (#524)
dependabot[bot] Mar 6, 2024
eb3008c
feat!: narrow TS types for histograms
metcoder95 Mar 10, 2024
2877294
chore: add v21.x to CI
metcoder95 Mar 10, 2024
b60d8ba
chore: adjust cache
metcoder95 Mar 10, 2024
4b391ac
chore: update package-lock.json
metcoder95 Mar 10, 2024
4bb872f
Merge remote-tracking branch 'origin/current' into next
metcoder95 May 29, 2024
57570c6
refactor: cleanups
metcoder95 May 29, 2024
6cbc507
fix: lint
metcoder95 May 29, 2024
7e40e2d
Merge remote-tracking branch 'origin/current' into next
metcoder95 May 31, 2024
d701046
fix: merge with current
metcoder95 May 31, 2024
144bd1c
feat!: set FixedQueue as default task queue (#578)
metcoder95 Jun 4, 2024
363b5af
docs: remove runtask references (#581)
metcoder95 Jun 12, 2024
3729907
feat!: drop v16 (#582)
metcoder95 Jun 12, 2024
2f58db9
feat: initial shape
metcoder95 Jun 12, 2024
e72310a
Merge branch 'next' into feat/custom_balancer
metcoder95 Jun 12, 2024
f9af8a4
feat: implement balancer
metcoder95 Jun 19, 2024
f2ad1a3
fix: onclose
metcoder95 Jun 19, 2024
22e9642
fix: smaller tweaks
metcoder95 Jun 23, 2024
c5946d1
Merge remote-tracking branch 'origin/current' into next
metcoder95 Jul 21, 2024
4417fc1
Merge remote-tracking branch 'origin/next' into feat/custom_balancer
metcoder95 Jul 24, 2024
93d9c52
test: fix
metcoder95 Jul 24, 2024
3634822
feat: add histogram to workers (#619)
metcoder95 Jul 25, 2024
176c399
feat: add state properties to worker (#620)
metcoder95 Aug 2, 2024
267bde4
Merge remote-tracking branch 'origin/current' into next
metcoder95 Sep 15, 2024
53e41f4
chore: reconcile lockfile
metcoder95 Sep 15, 2024
bbb684d
docs: update README
metcoder95 Sep 15, 2024
a828c8d
Merge remote-tracking branch 'origin/current' into next
metcoder95 Sep 18, 2024
a8447c0
fix: bad merge
metcoder95 Sep 18, 2024
b2463b7
Merge branch 'next' into feat/custom_balancer
metcoder95 Sep 18, 2024
f329b81
feat: pool events for workers (#624)
metcoder95 Sep 20, 2024
0dfc689
feat: add worker events to pool (#625)
metcoder95 Oct 2, 2024
65c81f8
Merge remote-tracking branch 'origin' into feat/custom_balancer
metcoder95 Oct 2, 2024
fe6968d
refactor: drop commands from balancer
metcoder95 Oct 2, 2024
93a1504
refactor: cleanup
metcoder95 Oct 2, 2024
bb0aceb
chore: fix lockfile
metcoder95 Oct 2, 2024
34e3ef2
refactor: Update src/index.ts
metcoder95 Oct 3, 2024
d82b8e2
fix: linting
metcoder95 Oct 11, 2024
6484ba8
Merge branch 'current' into feat/custom_balancer
metcoder95 Oct 15, 2024
d9a7b9f
refactor: use performance.now instead of process.hrtime
metcoder95 Oct 16, 2024
8714a6f
refactor: decouple shcheduling from distribution
metcoder95 Oct 16, 2024
27b1f84
refactor: small tweaks
metcoder95 Oct 16, 2024
750646c
types: adjust types and documentation
metcoder95 Oct 25, 2024
86f265e
docs: add documentation
metcoder95 Oct 27, 2024
4093dfa
Merge remote-tracking branch 'origin/current' into feat/custom_balancer
metcoder95 Oct 30, 2024
05dd04f
feat: Support Atomics.waitAsync
metcoder95 Oct 30, 2024
b550f5c
test: add testing
metcoder95 Nov 1, 2024
ba71411
fix: linting
metcoder95 Nov 1, 2024
bb6de93
chore: benchmarks
metcoder95 Nov 1, 2024
3f2d96a
fix: consume messages after wake up
metcoder95 Nov 1, 2024
19c7de0
refactor: await upon wake
metcoder95 Nov 3, 2024
ccda657
fix: lint
metcoder95 Nov 3, 2024
fdbc664
test: scope per platform
metcoder95 Nov 3, 2024
c8dd7e2
refactor: drop comments
metcoder95 Nov 3, 2024
c3b6fad
Merge remote-tracking branch 'origin/current' into feat/custom_balancer
metcoder95 Nov 4, 2024
bf45d08
Merge branch 'feat/custom_balancer' into feat/async_atomics
metcoder95 Nov 5, 2024
16 changes: 10 additions & 6 deletions README.md
@@ -278,7 +278,7 @@ const { resolve } = require('path');
const Piscina = require('piscina');
const piscina = new Piscina({
filename: resolve(__dirname, 'worker.js'),
useAtomics: false
atomics: 'disabled'
});

async function main () {
@@ -362,12 +362,16 @@ This class extends [`EventEmitter`][] from Node.js.
only makes sense to specify if there is some kind of asynchronous component
to the task. Keep in mind that Worker threads are generally not built for
handling I/O in parallel.
* `useAtomics`: (`boolean`) Use the [`Atomics`][] API for faster communication
* `atomics`: (`sync` | `async` | `disabled`) Use the [`Atomics`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Atomics) API for faster communication
between threads. This is on by default. You can disable `Atomics` globally by
setting the environment variable `PISCINA_DISABLE_ATOMICS` to `1`.
If `useAtomics` is `true`, it will cause to pause threads (stoping all execution)
between tasks. Ideally, threads should wait for all operations to finish before
returning control to the main thread (avoid having open handles within a thread).
setting the environment variable `PISCINA_DISABLE_ATOMICS` to `1`.
If `atomics` is `sync`, it will pause threads (stopping all execution)
between tasks. Ideally, threads should wait for all operations to finish before
returning control to the main thread (avoid having open handles within a thread). If you still want the possibility
of keeping open handles or handling asynchronous tasks, you can set the environment variable `PISCINA_ENABLE_ASYNC_ATOMICS` to `1` or set `options.atomics` to `async`.

> **Note**: The `async` mode comes with performance penalties and can lead to undesired behaviour if open handles are not tracked correctly.
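A minimal sketch of how these knobs could resolve to an effective mode — the precedence between `options.atomics` and the environment variables shown here is an assumption for illustration, not lifted from Piscina's source:

```js
// Hypothetical resolution of the effective atomics mode.
// Assumption: an explicit `atomics` option wins over the env vars.
function resolveAtomicsMode (option, env = process.env) {
  if (option !== undefined) return option; // 'sync' | 'async' | 'disabled'
  if (env.PISCINA_DISABLE_ATOMICS === '1') return 'disabled';
  if (env.PISCINA_ENABLE_ASYNC_ATOMICS === '1') return 'async';
  return 'sync'; // Atomics-based communication is on by default
}
```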
Collaborator commented:

Can you explain the open handle thing? Why is that an issue with async?

Member (author) replied:

Mostly as we are not doing event loop ticks, these can add some overhead if the workers start doing more than they should.

Even waiting for just the worker to wake up added some ticks that showed a difference of ±5%, which is really negligible.

I don't have a strong opinion on this one; but if we want to make this the default behaviour, we should document that it is important not to keep the workers doing too many background tasks, as they can face some penalty (also, if using async, the implementer needs to account for the teardown of the worker when idle, i.e. no more tasks scheduled).

Member (author) replied:

If that sounds good, I can adjust the PR for that 👍


* `resourceLimits`: (`object`) See [Node.js new Worker options][]
* `maxOldGenerationSizeMb`: (`number`) The maximum size of each worker thread's
main heap in MB.
29 changes: 29 additions & 0 deletions benchmark/simple-benchmark-async.js
@@ -0,0 +1,29 @@
'use strict';
const { Piscina } = require('../dist');
const { resolve } = require('path');

async function simpleBenchmark ({ duration = 10000 } = {}) {
const pool = new Piscina({ filename: resolve(__dirname, 'fixtures/add.js'), atomics: 'async' });
let done = 0;

const results = [];
const start = process.hrtime.bigint();
while (pool.queueSize === 0) {
results.push(scheduleTasks());
}

async function scheduleTasks () {
while ((process.hrtime.bigint() - start) / 1_000_000n < duration) {
await pool.run({ a: 4, b: 6 });
done++;
}
}

await Promise.all(results);

return done / duration * 1e3;
}

simpleBenchmark().then((opsPerSecond) => {
console.log(`opsPerSecond: ${opsPerSecond} (with default taskQueue)`);
});
42 changes: 42 additions & 0 deletions docs/docs/advanced-topics/loadbalancer.mdx
@@ -0,0 +1,42 @@
---
id: Custom Load Balancers
sidebar_position: 2
---

[Load Balancers](https://www.cloudflare.com/learning/performance/what-is-load-balancing/) are a crucial part of any distributed workload.
By default, Piscina uses a [Resource Based](https://www.cloudflare.com/en-gb/learning/performance/types-of-load-balancing-algorithms/) algorithm (least-busy)
to distribute the load across the workers managed by the pool.

### Choosing the Load Balancing Algorithm

Choosing the right algorithm depends heavily on the requirements of each individual problem; assess
candidates by testing the implementation against several variations of your workload.

The algorithms can be grouped into:

#### Dynamic

Focused on taking decisions based on heuristics gathered from the workload.

It aims to balance the work by adapting to the environment, distributing the workload equally across the different workers/nodes in an attempt to make better use
of resources or to minimize the time to finish each individual workload.

> Piscina uses a Dynamic algorithm, based on how busy the worker is (least-busy)

#### Static

Aims to distribute the workload based on a group of predefined factors.
It does not adapt to the environment, but rather aims to preserve the state of the distribution according
to the values set beforehand.

This can be helpful in situations where we want the workload to be distributed evenly or to stick to a specific worker/node over time.

> Round Robin and Ring Hash are examples of such algorithms

It is strongly advised to understand the problem and the workload your pool will be facing, and to test the different algorithms that match your
use case against several workload combinations to better understand their impact and how to manage them.
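As an illustration, a static policy such as Round Robin can be sketched against the `PiscinaLoadBalancer` shape documented in the API reference — the worker objects below are assumed to match that interface:

```js
// Minimal sketch of a static (round-robin) balancer: each call hands the
// task to the next worker in line, regardless of how busy it is.
function RoundRobinBalancer () {
  let next = 0;
  return (task, workers) => {
    if (workers.length === 0) return null; // null asks the pool to spawn a worker
    const worker = workers[next % workers.length];
    next++;
    return worker;
  };
}
```

It could then be passed as `loadBalancer: RoundRobinBalancer()` when constructing the pool.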

### Resources

- [Load Balancing Algorithms](https://en.wikipedia.org/wiki/Load_balancing_(computing))
- [Types of Load Balancing Algorithms](https://www.cloudflare.com/en-gb/learning/performance/types-of-load-balancing-algorithms/)
2 changes: 1 addition & 1 deletion docs/docs/advanced-topics/performance.mdx
@@ -1,6 +1,6 @@
---
id: Performance Notes
sidebar_position: 2
sidebar_position: 3
---

Workers are generally optimized for offloading synchronous,
148 changes: 142 additions & 6 deletions docs/docs/api-reference/class.md
@@ -1,5 +1,5 @@
---
id: Class
id: Instance
sidebar_position: 2
---

@@ -36,7 +36,6 @@ This class extends [`EventEmitter`](https://nodejs.org/api/events.html) from Node.js.
:::info
The default `idleTimeout` can lead to some performance loss in the application because of the overhead involved with stopping and starting new worker threads. To improve performance, try setting the `idleTimeout` explicitly.
:::

- `maxQueue`: (`number` | `string`) The maximum number of tasks that may be
scheduled to run, but not yet running due to lack of available threads, at
a given time. By default, there is no limit. The special value `'auto'`
@@ -48,12 +47,18 @@ This class extends [`EventEmitter`](https://nodejs.org/api/events.html) from Node.js.
only makes sense to specify if there is some kind of asynchronous component
to the task. Keep in mind that Worker threads are generally not built for
handling I/O in parallel.
- `useAtomics`: (`boolean`) Use the [`Atomics`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Atomics) API for faster communication
- `atomics`: (`sync` | `async` | `disabled`) Use the [`Atomics`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Atomics) API for faster communication
between threads. This is on by default. You can disable `Atomics` globally by
setting the environment variable `PISCINA_DISABLE_ATOMICS` to `1`.
If `useAtomics` is `true`, it will cause to pause threads (stoping all execution)
setting the environment variable `PISCINA_DISABLE_ATOMICS` to `1`.
If `atomics` is `sync`, it will pause threads (stopping all execution)
between tasks. Ideally, threads should wait for all operations to finish before
returning control to the main thread (avoid having open handles within a thread). If you still want the possibility
of keeping open handles or handling asynchronous tasks, you can set the environment variable `PISCINA_ENABLE_ASYNC_ATOMICS` to `1` or set `options.atomics` to `async`.
of having open handles or handle asynchrnous tasks, you can set the environment variable `PISCINA_ENABLE_ASYNC_ATOMICS` to `1` or setting `options.atomics` to `async`.

:::info
**Note**: The `async` mode comes with performance penalties and can lead to undesired behaviour if open handles are not tracked correctly.
:::

- `resourceLimits`: (`object`) See [Node.js new Worker options](https://nodejs.org/api/worker_threads.html#worker_threads_new_worker_filename_options)
- `maxOldGenerationSizeMb`: (`number`) The maximum size of each worker thread's
main heap in MB.
@@ -91,8 +96,139 @@ This class extends [`EventEmitter`](https://nodejs.org/api/events.html) from Node.js.
complete all in-flight tasks when `close()` is called. The default is `30000`
- `recordTiming`: (`boolean`) By default, run and wait time will be recorded
for the pool. To disable, set to `false`.
- `workerHistogram`: (`boolean`) By default `false`. Hints the worker pool to record statistics for each individual worker.
- `loadBalancer`: ([`PiscinaLoadBalancer`](#piscinaloadbalancer)) By default, Piscina uses a least-busy algorithm. The `loadBalancer`
option can be used to provide an alternative implementation. See [Custom Load Balancers](../advanced-topics/loadbalancer.mdx) for additional detail.

:::caution
Use caution when setting resource limits. Setting limits that are too low may
result in the `Piscina` worker threads being unusable.
:::

## `PiscinaLoadBalancer`

The `PiscinaLoadBalancer` interface is used to implement a custom load-balancing algorithm that determines which worker thread should be assigned a task.

> For more information, see [Custom Load Balancers](../advanced-topics/loadbalancer.mdx).

### Interface: `PiscinaLoadBalancer`

```ts
type PiscinaLoadBalancer = (
task: PiscinaTask, // Task to be distributed
workers: PiscinaWorker[] // Array of Worker instances
) => PiscinaWorker | null; // Worker instance to be assigned the task
```

If the `PiscinaLoadBalancer` returns `null`, `Piscina` will attempt to spawn a new worker; otherwise, the task will be queued until a worker is available.

### Interface: `PiscinaTask`

```ts
interface PiscinaTask {
taskId: number; // Unique identifier for the task
filename: string; // Filename of the worker module
name: string; // Name of the worker function
created: number; // Timestamp when the task was created
isAbortable: boolean; // Indicates if the task can be aborted through AbortSignal
}
```

### Interface: `PiscinaWorker`

```ts
interface PiscinaWorker {
id: number; // Unique identifier for the worker
currentUsage: number; // Number of tasks currently running on the worker
isRunningAbortableTask: boolean; // Indicates if the worker is running an abortable task
histogram: HistogramSummary | null; // Worker histogram
terminating: boolean; // Indicates if the worker is terminating
destroyed: boolean; // Indicates if the worker has been destroyed
}
```

### Example: Custom Load Balancer

#### JavaScript

<a id="custom-load-balancer-example-js"> </a>

```js
const { Piscina } = require('piscina');

function LeastBusyBalancer(opts) {
const { maximumUsage } = opts;

return (task, workers) => {
let candidate = null;
let checkpoint = maximumUsage;
for (const worker of workers) {
if (worker.currentUsage === 0) {
candidate = worker;
break;
}

if (worker.isRunningAbortableTask) continue;

if (!task.isAbortable && worker.currentUsage < checkpoint) {
candidate = worker;
checkpoint = worker.currentUsage;
}
}

return candidate;
};
}

const piscina = new Piscina({
loadBalancer: LeastBusyBalancer({ maximumUsage: 2 }),
});

piscina
  .run({}, { filename: 'worker.js', name: 'default' })
.then((result) => console.log(result))
.catch((err) => console.error(err));
```

#### TypeScript

<a id="custom-load-balancer-example-ts"> </a>

```ts
import { Piscina, PiscinaLoadBalancer, PiscinaWorker } from 'piscina';

interface LeastBusyBalancerOptions {
  maximumUsage: number; // max concurrent tasks per worker before it is skipped
}

function LeastBusyBalancer(
opts: LeastBusyBalancerOptions
): PiscinaLoadBalancer {
const { maximumUsage } = opts;

return (task, workers) => {
let candidate: PiscinaWorker | null = null;
let checkpoint = maximumUsage;
for (const worker of workers) {
if (worker.currentUsage === 0) {
candidate = worker;
break;
}

if (worker.isRunningAbortableTask) continue;

if (!task.isAbortable && worker.currentUsage < checkpoint) {
candidate = worker;
checkpoint = worker.currentUsage;
}
}

return candidate;
};
}

const piscina = new Piscina({
loadBalancer: LeastBusyBalancer({ maximumUsage: 2 }),
});

piscina
  .run({}, { filename: 'worker.js', name: 'default' })
.then((result) => console.log(result))
.catch((err) => console.error(err));
```
14 changes: 13 additions & 1 deletion docs/docs/api-reference/event.md
@@ -27,4 +27,16 @@ by number of tasks enqueued that are pending of execution.

## Event: `'message'`

A `'message'` event is emitted whenever a message is received from a worker thread.

## Event: `'workerCreate'`

Emitted when a new worker is created.

It receives the worker instance as an argument.

## Event: `'workerDestroy'`

Emitted when a worker is destroyed.

It receives the destroyed worker instance as an argument.
2 changes: 1 addition & 1 deletion docs/docs/examples/broadcast.mdx
@@ -20,7 +20,7 @@ const { resolve } = require('path');
const Piscina = require('piscina');
const piscina = new Piscina({
filename: resolve(__dirname, 'worker.js'),
useAtomics: false
atomics: 'disabled'
});

async function main () {
4 changes: 2 additions & 2 deletions examples/broadcast/main.js
@@ -6,8 +6,8 @@ const { resolve } = require('path');
const Piscina = require('piscina');
const piscina = new Piscina({
filename: resolve(__dirname, 'worker.js'),
// Set useAtomics to false to avoid threads being blocked when idle
useAtomics: false
// Set atomics to disabled to avoid threads being blocked when idle
atomics: 'disabled'
});

async function main () {