
Commit d855dc3

Address tfjs-models typos in documentation strings (#1407)
1 parent a6f4f00 commit d855dc3

File tree

13 files changed: +17 -17 lines

body-pix/README_Archive.md (+1 -1)

@@ -141,7 +141,7 @@ const net = await bodyPix.load({
 - `2`. 2 bytes per float. Leads to slightly lower accuracy and 2x model size reduction.
 - `1`. 1 byte per float. Leads to lower accuracy and 4x model size reduction.

-The following table contains the corresponding BodyPix 2.0 model checkpoint sizes (widthout gzip) when using different quantization bytes:
+The following table contains the corresponding BodyPix 2.0 model checkpoint sizes (without gzip) when using different quantization bytes:

 | Architecture | quantBytes=4 | quantBytes=2 | quantBytes=1 |
 | ------------------ |:------------:|:------------:|:------------:|
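
The `quantBytes` option documented in this hunk is a config field of `bodyPix.load`, visible in the hunk header. A minimal usage sketch, assuming the published BodyPix 2.0 API (the MobileNetV1 settings are illustrative, not part of this diff):

```ts
import * as bodyPix from '@tensorflow-models/body-pix';

async function loadQuantizedNet() {
  return bodyPix.load({
    architecture: 'MobileNetV1',  // illustrative architecture choice
    outputStride: 16,
    multiplier: 0.75,
    // 4 = full precision, 2 = ~2x smaller, 1 = ~4x smaller checkpoint.
    quantBytes: 2,
  });
}
```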

body-pix/src/setup_test.ts (+1 -1)

@@ -27,7 +27,7 @@ import {setTestEnvs} from '@tensorflow/tfjs-core/dist/jasmine_util';
 // Increase test timeout since we are fetching the model files from GCS.
 jasmine.DEFAULT_TIMEOUT_INTERVAL = 20000;

-// Run browser tests againts both the cpu and webgl backends.
+// Run browser tests against both the cpu and webgl backends.
 setTestEnvs([
   // WebGL.
   {
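
The `setTestEnvs` call is truncated in this hunk. A plausible completion, assuming the standard `TestEnv` shape (`{name, backendName, flags?}`) from tfjs-core's jasmine_util; the flags in the real file may differ:

```ts
import {setTestEnvs} from '@tensorflow/tfjs-core/dist/jasmine_util';

// Run each browser test suite once per backend.
setTestEnvs([
  // WebGL.
  {name: 'webgl', backendName: 'webgl'},
  // CPU.
  {name: 'cpu', backendName: 'cpu'},
]);
```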

body-segmentation/demos/shared/params.js (+1 -1)

@@ -57,7 +57,7 @@ export const BLAZE_POSE_CONFIG = {
   visualization: 'binaryMask'
 };
 /**
- * This map descripes tunable flags and theior corresponding types.
+ * This map describes tunable flags and theior corresponding types.
  *
  * The flags (keys) in the map satisfy the following two conditions:
  * - Is tunable. For example, `IS_BROWSER` and `IS_CHROME` is not tunable,
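
For context, a tunable-flag map of the kind this doc comment describes might look like the following sketch (the map name and keys are hypothetical, not taken from the demo):

```ts
// Hypothetical flag map: each key is a tunable tfjs flag, each value lists
// the settings the demo UI may switch between.
const TUNABLE_FLAG_VALUE_RANGE_MAP: Record<string, Array<number | boolean>> = {
  WEBGL_VERSION: [1, 2],      // enum-like flag
  WEBGL_PACK: [true, false],  // boolean flag
};
```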

body-segmentation/demos/shared/util.js (+1 -1)

@@ -37,7 +37,7 @@ export function isMobile() {
 async function resetBackend(backendName) {
   const ENGINE = tf.engine();
   if (!(backendName in ENGINE.registryFactory)) {
-    throw new Error(`${backendName} backend is not registed.`);
+    throw new Error(`${backendName} backend is not registered.`);
   }

   if (backendName in ENGINE.registry) {
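
The hunk cuts off mid-function. A sketch of how such a reset typically completes, using only public tfjs-core calls (`tf.findBackendFactory`, `tf.removeBackend`, `tf.registerBackend`, `tf.setBackend`); the demo's actual body may differ:

```ts
import * as tf from '@tensorflow/tfjs-core';

async function resetBackend(backendName: string): Promise<void> {
  const ENGINE = tf.engine();
  if (!(backendName in ENGINE.registryFactory)) {
    throw new Error(`${backendName} backend is not registered.`);
  }
  if (backendName in ENGINE.registry) {
    // Re-create the backend from its factory so no state leaks between runs.
    const factory = tf.findBackendFactory(backendName);
    tf.removeBackend(backendName);
    tf.registerBackend(backendName, factory);
  }
  await tf.setBackend(backendName);
}
```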

body-segmentation/src/body_pix/README.md (+1 -1)

@@ -74,7 +74,7 @@ Pass in `bodySegmentation.SupportedModels.BodyPix` from the
 - `2`. 2 bytes per float. Leads to slightly lower accuracy and 2x model size reduction.
 - `1`. 1 byte per float. Leads to lower accuracy and 4x model size reduction.

-The following table contains the corresponding BodyPix 2.0 model checkpoint sizes (widthout gzip) when using different quantization bytes:
+The following table contains the corresponding BodyPix 2.0 model checkpoint sizes (without gzip) when using different quantization bytes:

 | Architecture | quantBytes=4 | quantBytes=2 | quantBytes=1 |
 | ------------------ |:------------:|:------------:|:------------:|

body-segmentation/src/body_pix/impl/setup_test.ts (+1 -1)

@@ -27,7 +27,7 @@ import { setTestEnvs } from '@tensorflow/tfjs-core/dist/jasmine_util';
 // Increase test timeout since we are fetching the model files from GCS.
 jasmine.DEFAULT_TIMEOUT_INTERVAL = 20000;

-// Run browser tests againts both the cpu and webgl backends.
+// Run browser tests against both the cpu and webgl backends.
 setTestEnvs([
   // WebGL.
   {

coco-ssd/src/index.ts (+2 -2)

@@ -39,7 +39,7 @@ export interface DetectedObject {
  */
 export interface ModelConfig {
   /**
-   * It determines wich object detection architecture to load. The supported
+   * It determines which object detection architecture to load. The supported
    * architectures are: 'mobilenet_v1', 'mobilenet_v2' and 'lite_mobilenet_v2'.
    * It is default to 'lite_mobilenet_v2'.
    */
@@ -212,7 +212,7 @@ export class ObjectDetection {

   /**
    * Detect objects for an image returning a list of bounding boxes with
-   * assocated class and score.
+   * associated class and score.
    *
    * @param img The image to detect objects from. Can be a tensor or a DOM
    * element image, video, or canvas.
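
A minimal usage sketch of the two pieces this diff touches, the ModelConfig architecture option and the `detect` method, assuming the published @tensorflow-models/coco-ssd API (the element id is a placeholder):

```ts
import * as cocoSsd from '@tensorflow-models/coco-ssd';

async function run() {
  // 'lite_mobilenet_v2' is the documented default architecture.
  const model = await cocoSsd.load({base: 'lite_mobilenet_v2'});

  const img = document.getElementById('img') as HTMLImageElement;
  // Each result: {bbox: [x, y, width, height], class: string, score: number}.
  const predictions = await model.detect(img);
  console.log(predictions);
}
```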

deeplab/README.md (+2 -2)

@@ -29,7 +29,7 @@ loadModel()
   console.log(`The predicted classes are ${JSON.stringify(legend)}`));
 ```

-By default, calling `load` initalizes the PASCAL variant of the model quantized to 2 bytes.
+By default, calling `load` initializes the PASCAL variant of the model quantized to 2 bytes.

 If you would rather load custom weights, you can pass the URL in the config instead:

@@ -136,7 +136,7 @@ const classify = async (image) => {

 ### Producing a Semantic Segmentation Map

-To segment an arbitrary image and generate a two-dimensional tensor with class labels assigned to each cell of the grid overlayed on the image (with the maximum number of cells on the side fixed to 513), use the `predict` method of the `SemanticSegmentation` object.
+To segment an arbitrary image and generate a two-dimensional tensor with class labels assigned to each cell of the grid overlaid on the image (with the maximum number of cells on the side fixed to 513), use the `predict` method of the `SemanticSegmentation` object.

 #### `model.predict(image)` input
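
The default and custom-weight load paths this hunk documents, sketched against the @tensorflow-models/deeplab API (the model URL is a placeholder, and the config keys are assumptions based on the README wording):

```ts
import {load} from '@tensorflow-models/deeplab';

async function loadModels() {
  // Default: the PASCAL variant quantized to 2 bytes.
  const defaultModel = await load();

  // Custom weights: pass the URL in the config instead (placeholder URL).
  const customModel = await load({
    base: 'pascal',
    quantizationBytes: 2,
    modelUrl: 'https://example.com/model.json',  // hypothetical location
  });
  return {defaultModel, customModel};
}
```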

deeplab/src/index.ts (+2 -2)

@@ -112,7 +112,7 @@ export class SemanticSegmentation {

   /**
    * Segments an arbitrary image and generates a two-dimensional tensor with
-   * class labels assigned to each cell of the grid overlayed on the image ( the
+   * class labels assigned to each cell of the grid overlaid on the image ( the
    * maximum number of cells on the side is fixed to 513).
    *
    * @param input ::
@@ -133,7 +133,7 @@ export class SemanticSegmentation {

   /**
    * Segments an arbitrary image and generates a two-dimensional tensor with
-   * class labels assigned to each cell of the grid overlayed on the image ( the
+   * class labels assigned to each cell of the grid overlaid on the image ( the
    * maximum number of cells on the side is fixed to 513).
    *
    * @param image :: `ImageData | HTMLImageElement | HTMLCanvasElement |
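
Calling the method these doc comments describe, per the README's `model.predict(image)` section above (the element id is a placeholder):

```ts
import {load} from '@tensorflow-models/deeplab';

async function segment() {
  const model = await load();
  const image = document.getElementById('scene') as HTMLImageElement;
  // Two-dimensional tensor of class labels; longest side capped at 513.
  const segmentationMap = model.predict(image);
  console.log(segmentationMap.shape);
}
```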

depth-estimation/demos/depth_map/js/gl-class.js (+1 -1)

@@ -29,7 +29,7 @@ class GlTextureImpl {
   }
 }

-// A wrapper class for WebGL texture and its associted framebuffer and utility
+// A wrapper class for WebGL texture and its associated framebuffer and utility
 // functions.
 class GlTextureFramebuffer extends GlTextureImpl {
   constructor(gl, framebuffer, texture, width, height) {

depth-estimation/demos/relighting/js/gl-class.js (+1 -1)

@@ -29,7 +29,7 @@ class GlTextureImpl {
   }
 }

-// A wrapper class for WebGL texture and its associted framebuffer and utility
+// A wrapper class for WebGL texture and its associated framebuffer and utility
 // functions.
 class GlTextureFramebuffer extends GlTextureImpl {
   constructor(gl, framebuffer, texture, width, height) {

depth-estimation/demos/relighting/js/gl-shaders.js (+2 -2)

@@ -47,7 +47,7 @@ out vec4 out_color;
 #define GetDepth(uv) (texture(uDepth, uv).r)
 #define GetColor(uv) (texture(uColor, uv).rgb)

-// Computes the aspect ratio for portait and landscape modes.
+// Computes the aspect ratio for portrait and landscape modes.
 vec2 CalculateAspectRatio(in vec2 size) {
   return pow(size.yy / size, vec2(step(size.x, size.y) * 2.0 - 1.0));
 }
@@ -177,7 +177,7 @@ vec3 RenderMotionLights(in vec2 uv) {
   col = smoothstep(0.0, 0.7, col + 0.05);
   col = pow(col, vec3(1.0 / 1.8));

-  // Perceptual light radius propotional to percentage in the screen space.
+  // Perceptual light radius proportional to percentage in the screen space.
   float light_radius = 2.0 * atan(kLightRadius, 2.0 * (1.0 - center.z));

   float l = distance(center.xy, normalized_uv);
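
The branchless `CalculateAspectRatio` one-liner in the first hunk is dense; an equivalent TypeScript port (illustrative, not from the repo) spells out what it computes:

```ts
// GLSL: pow(size.yy / size, vec2(step(size.x, size.y) * 2.0 - 1.0))
// step(x, y) is 1 when y >= x (portrait) and 0 otherwise (landscape),
// so the exponent is +1 in portrait mode and -1 in landscape mode.
// Either way the result is (longSide / shortSide, 1).
function calculateAspectRatio(width: number, height: number): [number, number] {
  const exponent = height >= width ? 1 : -1;
  return [Math.pow(height / width, exponent), 1];
}
```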

depth-estimation/src/ar_portrait_depth/estimator.ts (+1 -1)

@@ -128,7 +128,7 @@ class ARPortraitDepthEstimator implements DepthEstimator {
     // Shape after expansion is [1, height, width, 3].
     const batchInput = tf.expandDims(imageResized);

-    // Depth prediction (ouput shape is [1, height, width, 1]).
+    // Depth prediction (output shape is [1, height, width, 1]).
     const depth4D = this.estimatorModel.predict(batchInput) as tf.Tensor4D;

     // Normalize to user requirements.
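
A self-contained sketch of the shape handling these comments describe, with zero tensors standing in for the real image and for `estimatorModel.predict` (both placeholders):

```ts
import * as tf from '@tensorflow/tfjs';

// Stand-in for the resized portrait image: [height, width, 3].
const imageResized = tf.zeros([256, 192, 3]);

// Models expect a batch axis: [height, width, 3] -> [1, height, width, 3].
const batchInput = tf.expandDims(imageResized);

// Stand-in for `estimatorModel.predict(batchInput)`: [1, height, width, 1].
const depth4D = tf.zeros([1, 256, 192, 1]);

// Drop the batch and channel axes to get a [height, width] depth map.
const depthMap = tf.squeeze(depth4D, [0, 3]);
console.log(batchInput.shape, depthMap.shape);  // [1,256,192,3] [256,192]
```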
