Address tfjs-models typos in documentation strings #1407

New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Merged: 1 commit, Apr 29, 2024
body-pix/README_Archive.md (1 addition & 1 deletion)

@@ -141,7 +141,7 @@ const net = await bodyPix.load({
   - `2`. 2 bytes per float. Leads to slightly lower accuracy and 2x model size reduction.
   - `1`. 1 byte per float. Leads to lower accuracy and 4x model size reduction.

-The following table contains the corresponding BodyPix 2.0 model checkpoint sizes (widthout gzip) when using different quantization bytes:
+The following table contains the corresponding BodyPix 2.0 model checkpoint sizes (without gzip) when using different quantization bytes:

 | Architecture | quantBytes=4 | quantBytes=2 | quantBytes=1 |
 | ------------------ |:------------:|:------------:|:------------:|
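For context, the `quantBytes` option documented above is passed straight to `bodyPix.load`, as the hunk header shows. A minimal sketch (the `architecture`, `outputStride`, and `multiplier` values here are illustrative, not prescribed by this diff):

```js
import * as bodyPix from '@tensorflow-models/body-pix';

// Load BodyPix with 2-byte quantization: roughly half the checkpoint
// size of the 4-byte default, at slightly lower accuracy.
const net = await bodyPix.load({
  architecture: 'MobileNetV1',
  outputStride: 16,
  multiplier: 0.75,
  quantBytes: 2
});
```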
body-pix/src/setup_test.ts (1 addition & 1 deletion)

@@ -27,7 +27,7 @@ import {setTestEnvs} from '@tensorflow/tfjs-core/dist/jasmine_util';
 // Increase test timeout since we are fetching the model files from GCS.
 jasmine.DEFAULT_TIMEOUT_INTERVAL = 20000;

-// Run browser tests againts both the cpu and webgl backends.
+// Run browser tests against both the cpu and webgl backends.
 setTestEnvs([
   // WebGL.
   {
body-segmentation/demos/shared/params.js (1 addition & 1 deletion)

@@ -57,7 +57,7 @@ export const BLAZE_POSE_CONFIG = {
   visualization: 'binaryMask'
 };
 /**
- * This map descripes tunable flags and theior corresponding types.
+ * This map describes tunable flags and their corresponding types.
  *
  * The flags (keys) in the map satisfy the following two conditions:
  * - Is tunable. For example, `IS_BROWSER` and `IS_CHROME` is not tunable,
body-segmentation/demos/shared/util.js (1 addition & 1 deletion)

@@ -37,7 +37,7 @@ export function isMobile() {
 async function resetBackend(backendName) {
   const ENGINE = tf.engine();
   if (!(backendName in ENGINE.registryFactory)) {
-    throw new Error(`${backendName} backend is not registed.`);
+    throw new Error(`${backendName} backend is not registered.`);
   }

   if (backendName in ENGINE.registry) {
body-segmentation/src/body_pix/README.md (1 addition & 1 deletion)

@@ -74,7 +74,7 @@ Pass in `bodySegmentation.SupportedModels.BodyPix` from the
   - `2`. 2 bytes per float. Leads to slightly lower accuracy and 2x model size reduction.
   - `1`. 1 byte per float. Leads to lower accuracy and 4x model size reduction.

-The following table contains the corresponding BodyPix 2.0 model checkpoint sizes (widthout gzip) when using different quantization bytes:
+The following table contains the corresponding BodyPix 2.0 model checkpoint sizes (without gzip) when using different quantization bytes:

 | Architecture | quantBytes=4 | quantBytes=2 | quantBytes=1 |
 | ------------------ |:------------:|:------------:|:------------:|
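In the unified body-segmentation package, the same `quantBytes` option travels through the segmenter config instead. A minimal sketch, assuming the package's standard `createSegmenter` entry point (the other config values are illustrative):

```js
import * as bodySegmentation from '@tensorflow-models/body-segmentation';

// Create a BodyPix segmenter backed by 2-byte quantized weights.
const segmenter = await bodySegmentation.createSegmenter(
    bodySegmentation.SupportedModels.BodyPix, {
      architecture: 'MobileNetV1',
      outputStride: 16,
      multiplier: 0.75,
      quantBytes: 2
    });
```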
body-segmentation/src/body_pix/impl/setup_test.ts (1 addition & 1 deletion)

@@ -27,7 +27,7 @@ import { setTestEnvs } from '@tensorflow/tfjs-core/dist/jasmine_util';
 // Increase test timeout since we are fetching the model files from GCS.
 jasmine.DEFAULT_TIMEOUT_INTERVAL = 20000;

-// Run browser tests againts both the cpu and webgl backends.
+// Run browser tests against both the cpu and webgl backends.
 setTestEnvs([
   // WebGL.
   {
coco-ssd/src/index.ts (2 additions & 2 deletions)

@@ -39,7 +39,7 @@ export interface DetectedObject {
  */
 export interface ModelConfig {
   /**
-   * It determines wich object detection architecture to load. The supported
+   * It determines which object detection architecture to load. The supported
    * architectures are: 'mobilenet_v1', 'mobilenet_v2' and 'lite_mobilenet_v2'.
    * It is default to 'lite_mobilenet_v2'.
    */

@@ -212,7 +212,7 @@ export class ObjectDetection {
   /**
    * Detect objects for an image returning a list of bounding boxes with
-   * assocated class and score.
+   * associated class and score.
    *
    * @param img The image to detect objects from. Can be a tensor or a DOM
    * element image, video, or canvas.
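As a usage sketch of the API these docstrings belong to, assuming the standard `@tensorflow-models/coco-ssd` entry points (the image element is illustrative):

```js
import * as cocoSsd from '@tensorflow-models/coco-ssd';

// `base` defaults to 'lite_mobilenet_v2'; spelled out here for clarity.
const model = await cocoSsd.load({base: 'lite_mobilenet_v2'});

// Returns a list of {bbox: [x, y, width, height], class, score} objects,
// i.e. the bounding boxes with associated class and score described above.
const predictions = await model.detect(document.querySelector('img'));
```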
deeplab/README.md (2 additions & 2 deletions)

@@ -29,7 +29,7 @@ loadModel()
     console.log(`The predicted classes are ${JSON.stringify(legend)}`));
 ```

-By default, calling `load` initalizes the PASCAL variant of the model quantized to 2 bytes.
+By default, calling `load` initializes the PASCAL variant of the model quantized to 2 bytes.

 If you would rather load custom weights, you can pass the URL in the config instead:

@@ -136,7 +136,7 @@ const classify = async (image) => {

 ### Producing a Semantic Segmentation Map

-To segment an arbitrary image and generate a two-dimensional tensor with class labels assigned to each cell of the grid overlayed on the image (with the maximum number of cells on the side fixed to 513), use the `predict` method of the `SemanticSegmentation` object.
+To segment an arbitrary image and generate a two-dimensional tensor with class labels assigned to each cell of the grid overlaid on the image (with the maximum number of cells on the side fixed to 513), use the `predict` method of the `SemanticSegmentation` object.

 #### `model.predict(image)` input
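A minimal sketch of the `load` and `predict` calls described above, assuming the `base`/`quantizationBytes` config keys (the image element is illustrative):

```js
import * as deeplab from '@tensorflow-models/deeplab';

// The documented default: the PASCAL variant quantized to 2 bytes.
const model = await deeplab.load({base: 'pascal', quantizationBytes: 2});

// A two-dimensional tensor of class labels, at most 513 cells on a side.
const segmentationMap = model.predict(document.querySelector('img'));
```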
deeplab/src/index.ts (2 additions & 2 deletions)

@@ -112,7 +112,7 @@ export class SemanticSegmentation {
   /**
    * Segments an arbitrary image and generates a two-dimensional tensor with
-   * class labels assigned to each cell of the grid overlayed on the image ( the
+   * class labels assigned to each cell of the grid overlaid on the image ( the
    * maximum number of cells on the side is fixed to 513).
    *
    * @param input ::

@@ -133,7 +133,7 @@
   /**
    * Segments an arbitrary image and generates a two-dimensional tensor with
-   * class labels assigned to each cell of the grid overlayed on the image ( the
+   * class labels assigned to each cell of the grid overlaid on the image ( the
    * maximum number of cells on the side is fixed to 513).
    *
    * @param image :: `ImageData | HTMLImageElement | HTMLCanvasElement |
depth-estimation/demos/depth_map/js/gl-class.js (1 addition & 1 deletion)

@@ -29,7 +29,7 @@ class GlTextureImpl {
   }
 }

-// A wrapper class for WebGL texture and its associted framebuffer and utility
+// A wrapper class for WebGL texture and its associated framebuffer and utility
 // functions.
 class GlTextureFramebuffer extends GlTextureImpl {
   constructor(gl, framebuffer, texture, width, height) {
depth-estimation/demos/relighting/js/gl-class.js (1 addition & 1 deletion)

@@ -29,7 +29,7 @@ class GlTextureImpl {
   }
 }

-// A wrapper class for WebGL texture and its associted framebuffer and utility
+// A wrapper class for WebGL texture and its associated framebuffer and utility
 // functions.
 class GlTextureFramebuffer extends GlTextureImpl {
   constructor(gl, framebuffer, texture, width, height) {
depth-estimation/demos/relighting/js/gl-shaders.js (2 additions & 2 deletions)

@@ -47,7 +47,7 @@ out vec4 out_color;
 #define GetDepth(uv) (texture(uDepth, uv).r)
 #define GetColor(uv) (texture(uColor, uv).rgb)

-// Computes the aspect ratio for portait and landscape modes.
+// Computes the aspect ratio for portrait and landscape modes.
 vec2 CalculateAspectRatio(in vec2 size) {
   return pow(size.yy / size, vec2(step(size.x, size.y) * 2.0 - 1.0));
 }

@@ -177,7 +177,7 @@ vec3 RenderMotionLights(in vec2 uv) {
   col = smoothstep(0.0, 0.7, col + 0.05);
   col = pow(col, vec3(1.0 / 1.8));

-  // Perceptual light radius propotional to percentage in the screen space.
+  // Perceptual light radius proportional to percentage in the screen space.
   float light_radius = 2.0 * atan(kLightRadius, 2.0 * (1.0 - center.z));

   float l = distance(center.xy, normalized_uv);
depth-estimation/src/ar_portrait_depth/estimator.ts (1 addition & 1 deletion)

@@ -128,7 +128,7 @@ class ARPortraitDepthEstimator implements DepthEstimator {
     // Shape after expansion is [1, height, width, 3].
     const batchInput = tf.expandDims(imageResized);

-    // Depth prediction (ouput shape is [1, height, width, 1]).
+    // Depth prediction (output shape is [1, height, width, 1]).
     const depth4D = this.estimatorModel.predict(batchInput) as tf.Tensor4D;

     // Normalize to user requirements.
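For context, a minimal sketch of driving this estimator through the public depth-estimation API, assuming the standard `createEstimator`/`estimateDepth` entry points (the `minDepth`/`maxDepth` values are illustrative):

```js
import * as depthEstimation from '@tensorflow-models/depth-estimation';

// ARPortraitDepth is the model whose estimator internals appear above.
const estimator = await depthEstimation.createEstimator(
    depthEstimation.SupportedModels.ARPortraitDepth);

// The [1, height, width, 1] prediction is normalized into [minDepth, maxDepth].
const depthMap = await estimator.estimateDepth(
    document.querySelector('img'), {minDepth: 0, maxDepth: 1});
```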