This repository has been archived by the owner on Sep 11, 2024. It is now read-only.

moar vision docs #603

Merged: 4 commits, Nov 27, 2023
5 changes: 3 additions & 2 deletions HelpPage/doc/docs/opencv/WIP.md
@@ -1,8 +1,9 @@
---
sidebar_position: 9999
pagination_next: null
---

# WIP

This section is a work in progress. Please check back later.

32 changes: 32 additions & 0 deletions HelpPage/doc/docs/opencv/bitwise-operations.md
@@ -0,0 +1,32 @@
---
sidebar_position: 5
title: Bitwise Operations
sidebar_label: Bitwise Operations
---

# Bitwise Operations

Bitwise operations act on the binary representation of each pixel value. They are very useful for
combining images, applying masks, and more.

The two most common bitwise operations are `and` and `not`.

There are also `or` and `xor`, but I have yet to find a use for these, so I will not be covering them. You can google them if you want to learn more.

## And

The `and` operation is used to combine two images, which lets us use masks to show only certain parts of an image.
A common use is to take a mask (say, one produced by the `inRange` method) and use it to keep only the parts of the image
where the mask is white.

```java
Core.bitwise_and(Mat src1, Mat src2, Mat dst)
```
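
For example, here is a minimal sketch of the masking pattern described above. It uses the overload of `bitwise_and` that also takes a mask as a fourth argument; `input` and `mask` are placeholder names, with `mask` assumed to be a binary image such as the output of `inRange`:

```java
// Keep only the pixels of `input` where `mask` is white; everything else becomes black.
// `mask` is assumed to be a single-channel binary Mat of the same size as `input`.
Mat masked = new Mat();
Core.bitwise_and(input, input, masked, mask);
```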

## Not

The `not` operation is used to invert an image: every bit of every pixel is flipped, so in a binary mask white becomes black and black becomes white.

```java
Core.bitwise_not(Mat src, Mat dst)
```
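
For example, a short sketch that inverts a binary mask (`mask` is a placeholder name, e.g. the output of `inRange`):

```java
// Flip the mask: white regions become black and black regions become white.
Mat invertedMask = new Mat();
Core.bitwise_not(mask, invertedMask);
```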
11 changes: 11 additions & 0 deletions HelpPage/doc/docs/opencv/blob-and-edge.md
@@ -0,0 +1,11 @@
---
sidebar_position: 6
title: Blob and Edge Detection
sidebar_label: Blob and Edge Detection
---

# WIP

This section is a work in progress. Please check back later.

For now, just google it (or use GitHub Copilot).
42 changes: 42 additions & 0 deletions HelpPage/doc/docs/opencv/colors.mdx
@@ -64,3 +64,45 @@ Now why is this useful? The YCrCb color scheme is great because **_brightness an
that we can interpret the colors, regardless of what the brightness of our surroundings are, as opposed to RGB where a
change in brightness affects all three channels.

## Converting between colors

Converting between color spaces is really easy, but you have to keep track of when you switch, because a `Mat` does not
record which color space it is in, so OpenCV cannot tell you.


First, we have to know what our source color space is. By default, the `input` parameter is in RGB.

Second, we need to know what our intended color space is. Supported color spaces are RGB, GRAY, HSV, LAB, XYZ, YCrCb (and more).

Then we can simply use the following function:
```java
Imgproc.cvtColor(source, destination, color_space);
```

Source and destination can both be any `Mat`, though ideally the destination should be a new, empty `Mat`.

For the color space parameter, there are constants in the `Imgproc` class. Some examples are:
- `Imgproc.COLOR_RGB2HSV`
- `Imgproc.COLOR_HSV2RGB`

:::tip
You can use the autocomplete menu to list what conversions are supported. Just type in `Imgproc.COLOR_` and let the IDE
list all the options.

Not every pair of color spaces has a direct conversion constant, so you may have to convert to an intermediate color space (such as RGB) first.
:::
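
Here is a rough sketch of such a two-step conversion. It assumes `hsv` is a `Mat` that is already in HSV, and that there is no direct HSV-to-YCrCb constant (which seems to be the case as far as I can tell):

```java
// Go through RGB as an intermediate step, since there is no single HSV-to-YCrCb conversion.
Mat rgb = new Mat();
Mat ycrcb = new Mat();
Imgproc.cvtColor(hsv, rgb, Imgproc.COLOR_HSV2RGB);
Imgproc.cvtColor(rgb, ycrcb, Imgproc.COLOR_RGB2YCrCb);
```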

As an example, we can put the grayscale version of RGB in a new `Mat` like so:
```java
Mat inputGray = new Mat();
Imgproc.cvtColor(input, inputGray, Imgproc.COLOR_RGB2GRAY);
```

Whenever we are done, we should make sure `input` is still in RGB. If it isn't, the frame will still be treated as RGB,
and the display in the dashboard will have incorrect colors.
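
One hedged sketch of that pattern, assuming this code runs inside a pipeline's `processFrame(Mat input)` method and that the returned `Mat` is what gets displayed:

```java
// Work in a separate Mat so `input` itself stays in RGB.
Mat hsv = new Mat();
Imgproc.cvtColor(input, hsv, Imgproc.COLOR_RGB2HSV);

// ... do any HSV-based processing on `hsv` here ...

// `input` was never converted, so it is still RGB and safe to return for the display.
return input;
```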

:::caution
Converting to grayscale discards all color information, so converting back to RGB afterwards will still not give you color.
This is because grayscale has only one channel, whereas the other color spaces have three, and there is no way to recover
the color information that was thrown away.
:::
76 changes: 76 additions & 0 deletions HelpPage/doc/docs/opencv/common-methods.md
@@ -0,0 +1,76 @@
---
sidebar_position: 4
title: Common Methods
sidebar_label: Common Methods
---

# Common Methods

While we can't go over every single method in OpenCV, we will briefly go over some of the more common ones; which of these
you need will depend on what you are trying to accomplish.

We are always adding more methods to this page, but if you don't see a method you need, you can always google it.

## Mean value
```java
Core.mean(Mat mat)
```

Find the mean value of an image.

**Parameters:**
- `mat`: The image to find the mean value of.

**Returns:** A `Scalar` object containing the mean value of the image, separated by channel.

**Example:**
```java
Scalar mean = Core.mean(input);

double channel0 = mean.val[0]; // Mean value of the first channel
double channel1 = mean.val[1]; // Mean value of the second channel
double channel2 = mean.val[2]; // Mean value of the third channel
```

## In Range
```java
Core.inRange(Mat src, Scalar lowerBound, Scalar upperBound, Mat dst)
```

Get a binary image of the pixels in a certain range. If the pixel is in the range, it will be white, otherwise it will be black.

**Parameters:**
- `src`: The image to check against the range.
- `lowerBound`: The lower bound of the range.
- `upperBound`: The upper bound of the range.
- `dst`: The output image.

**Returns:** None

**Example:**
```java
Scalar lowerBound = new Scalar(0, 0, 0);
Scalar upperBound = new Scalar(255, 255, 255);
Mat dst = new Mat();

Core.inRange(input, lowerBound, upperBound, dst);
```
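
The bounds above let every pixel through, so in practice you would tighten them. Here is a hedged sketch that keeps only strongly red pixels of an RGB image (the threshold values are made up and would need tuning for your camera and lighting):

```java
// Keep pixels with a high red channel and low green/blue channels (a rough "red" filter).
// These numbers are placeholders, not tested values.
Scalar lowerBound = new Scalar(150, 0, 0);
Scalar upperBound = new Scalar(255, 100, 100);
Mat redMask = new Mat();

Core.inRange(input, lowerBound, upperBound, redMask);
```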

## Split
```java
Core.split(Mat src, List<Mat> dst)
```

Split an image into its channels.

**Parameters:**
- `src`: The image to split.
- `dst`: The list of images to store the channels in.

**Returns:** None

**Example:**
```java
ArrayList<Mat> channels = new ArrayList<>();
Core.split(input, channels);
```
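
Each element of the list is a single-channel `Mat`. As a quick sketch, assuming `input` was in YCrCb, the Cr channel would be at index 1:

```java
// For a YCrCb image, index 0 is Y (brightness), 1 is Cr, and 2 is Cb.
Mat crChannel = channels.get(1);
```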
58 changes: 58 additions & 0 deletions HelpPage/doc/docs/opencv/cropping.md
@@ -0,0 +1,58 @@
---
sidebar_position: 3
title: Cropping Images
sidebar_label: Cropping Images
---

# Cropping Images

Cropping an image is a very simple operation. First, we need to define a rectangle marking the region we want to keep
(referred to as a _**Region of Interest**_, or **_roi_**).

## Defining a Rectangle

To define a rectangle, we need to create a `Rect` object. The `Rect` class has multiple constructors: one takes two `Point` objects
and another takes four `int`s. Both can describe the same rectangle, but their arguments mean different things. With the
four-`int` constructor, the first two `int`s are the x and y coordinates of the top-left corner of the rectangle, and the
last two `int`s are its width and height. With the two-`Point` constructor, the first `Point` is the top-left corner of the
rectangle, and the second `Point` is the bottom-right corner.

### Using `Point`s

To use the two `Point` method, we need to know the top-left and bottom-right corners of the rectangle. We can create two `Point` objects
to represent these corners, then pass them into the `Rect` constructor.

```java
Point topLeft = new Point(0, 0);
Point bottomRight = new Point(100, 100);
Rect roi = new Rect(topLeft, bottomRight);
```

This will create a rectangle with the top-left corner at (0, 0) and the bottom-right corner at (100, 100).

### Using `int`s
Using the four `int` method is different. Instead of using the top-left and bottom-right corners, we use the top-left corner and the width and height of the rectangle.

```java
Rect roi = new Rect(0, 0, 100, 100);
```

This will create the same rectangle as the previous example.

## Extracting the Region of Interest

Once we have our rectangle, we can use it to extract the region of interest from the image. To do this, we will create a new `Mat` object
and use the `submat` method to extract the region of interest.

```java
Mat cropped = input.submat(roi);
```

This cropped image is a reference to the original image's data, so any changes made to the cropped image will also be made to the original image.
This is an interesting property, but it can also be a problem. If we don't want this behavior, we can use the `clone` method to create an independent copy of the region.

```java
Mat cropped = input.submat(roi).clone();
```

The `clone` method creates a copy of the image, so any changes made to the cropped image will not be made to the original image.
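
Putting it together, here is a small sketch of a common pattern: crop a region of interest and then take its mean value with `Core.mean` (covered on the Common Methods page). The coordinates here are placeholders:

```java
// Crop a 100x100 region starting at the top-left corner, then average its pixel values.
Rect roi = new Rect(0, 0, 100, 100);
Mat cropped = input.submat(roi);
Scalar mean = Core.mean(cropped);
```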
@@ -42,15 +42,15 @@ public Mat processFrame(Mat input) {
ArrayList<Mat> YCrCbChannels = new ArrayList<>();
split(labColorSpace, YCrCbChannels);

// Get the channel of interest (Cb for blue team, Cr for red team)
int channelOfInterest = isBlueTeam ? 2 : 1;
Mat channel = YCrCbChannels.get(channelOfInterest);

/*
* Define the box surrounding where each position is
* Zone 1: x1 = 100, x2 = 200, y1 = 240, y2 = 370
* Zone 2: x1 = 400, x2 = 560, y1 = 230, y2 = 430
*/

// Zone 1
Rect zone1Rect = new Rect(ZONE1_X, ZONE1_Y, ZONE1_WIDTH, ZONE1_HEIGHT);
Mat zone1 = new Mat(channel, zone1Rect);