Camera node holistic record & replay #1117

Open: wants to merge 60 commits into base: v3_develop
Commits (60):
ceb441c  Implemented holistic record for Camera node (asahtik, Aug 29, 2024)
8419fe8  Added Holistic R&R support for the Camera node (asahtik, Sep 2, 2024)
2e7029e  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Sep 2, 2024)
085a7fb  Bump fw (asahtik, Sep 2, 2024)
b064bf9  Added RVC4 support for holistic record & replay (asahtik, Sep 4, 2024)
b9526bb  Bump fw (asahtik, Sep 4, 2024)
62beaab  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Oct 7, 2024)
4916b3b  Move cross-platform examples (asahtik, Oct 7, 2024)
52e78e2  PR fixes (asahtik, Oct 7, 2024)
3dd2eef  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Oct 7, 2024)
2caba46  Bump firmware (asahtik, Oct 7, 2024)
6b00d3c  Merge branch 'v3_visualizer_support' of github.com:luxonis/depthai-co… (asahtik, Oct 22, 2024)
c578a1f  Merge branch 'v3_visualizer_support' of github.com:luxonis/depthai-co… (asahtik, Oct 28, 2024)
fa6471a  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Oct 28, 2024)
2c1d6f0  Merge branch 'v3_visualizer_support' of github.com:luxonis/depthai-co… (asahtik, Oct 28, 2024)
380c898  Added IMUData protobuf support (asahtik, Oct 29, 2024)
f0012ad  Merge branch 'v3_visualizer_support' of github.com:luxonis/depthai-co… (asahtik, Nov 4, 2024)
f614d76  Moved Record & Replay from jsonschema to protobuf, TODO testing (asahtik, Nov 5, 2024)
1acbf71  Removed wrapper from recording schema (asahtik, Nov 7, 2024)
fae97f5  Merge branch 'v3_visualizer_support' of github.com:luxonis/depthai-co… (asahtik, Nov 8, 2024)
eddfb12  Merge branch 'v3_visualizer_support' of github.com:luxonis/depthai-co… (asahtik, Nov 8, 2024)
7b2fac6  WIP: migrate record&replay to protobuf (asahtik, Nov 8, 2024)
fd7b032  Merge branch 'v3_visualizer_support' of github.com:luxonis/depthai-co… (asahtik, Nov 8, 2024)
cfa03f0  Remove unused header (asahtik, Nov 8, 2024)
b8480d8  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Nov 11, 2024)
952914f  Fix replay bugs (asahtik, Nov 11, 2024)
87ae43b  Bump RVC4 fw [no ci] (asahtik, Nov 11, 2024)
69d582d  Bump rvc2 fw (asahtik, Nov 11, 2024)
4f7c7bb  Bump rvc2 fw (asahtik, Nov 15, 2024)
33b1bd7  Holistic record/replay bugfixes (asahtik, Nov 15, 2024)
eb11ab8  Build camera (asahtik, Nov 15, 2024)
1764042  Merge branch 'v3_camera_record_n_replay' of github.com:luxonis/deptha… (asahtik, Nov 19, 2024)
ed6b70b  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Dec 5, 2024)
a74acc6  Fix build (asahtik, Dec 5, 2024)
0054a47  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Dec 10, 2024)
801606b  Holistic camera record improvements (asahtik, Dec 11, 2024)
a6a45fe  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Dec 11, 2024)
45aa00b  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Dec 11, 2024)
320bbba  Holistic record fixes (asahtik, Dec 12, 2024)
29194c1  Bump rvc4 fw (asahtik, Dec 12, 2024)
ce8df2d  Bump rvc2 fw (asahtik, Dec 12, 2024)
2ab1748  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Dec 12, 2024)
b635c72  Fix imgtransformation proto deserialization (asahtik, Dec 12, 2024)
c0560af  Bump rvc2 fw (asahtik, Dec 12, 2024)
aa5977a  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Dec 13, 2024)
b1aaabe  Fix example (asahtik, Dec 13, 2024)
a178a16  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Dec 18, 2024)
a121df9  Bump rvc4 fw [no ci] (asahtik, Dec 18, 2024)
8a82faf  Bump rvc2 fw (asahtik, Dec 18, 2024)
9088650  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Jan 7, 2025)
ea2217a  Fix holistic record&replay examples to use Camera instead of color ca… (asahtik, Jan 7, 2025)
877a90b  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Jan 8, 2025)
c6c84ac  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Jan 10, 2025)
7cc30ca  Remove version control markers [no ci] (asahtik, Jan 10, 2025)
1708e10  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Jan 23, 2025)
6814926  Fix missing timestamp when recording encoded frames (asahtik, Jan 23, 2025)
25507e2  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Jan 30, 2025)
4e0b436  Added helper functions to Camera [no ci] (asahtik, Jan 30, 2025)
2c5dd0d  Merge branch 'v3_develop' of github.com:luxonis/depthai-core into v3_… (asahtik, Jan 30, 2025)
3001ef9  Bump rvc4 fw (asahtik, Jan 30, 2025)
9 changes: 8 additions & 1 deletion bindings/python/src/pipeline/node/CameraBindings.cpp
@@ -3,6 +3,7 @@
#include "depthai/pipeline/Node.hpp"
#include "depthai/pipeline/Pipeline.hpp"
#include "depthai/pipeline/node/Camera.hpp"
#include "depthai/pipeline/node/host/Replay.hpp"

void bind_camera(pybind11::module& m, void* pCallstack) {
using namespace dai;
@@ -21,9 +22,15 @@ void bind_camera(pybind11::module& m, void* pCallstack) {
// Actual bindings
camera.def_readonly("inputControl", &Camera::inputControl, DOC(dai, node, Camera, inputControl))
.def_readonly("initialControl", &Camera::initialControl, DOC(dai, node, Camera, initialControl))
.def("build", &Camera::build, "boardSocket"_a = CameraBoardSocket::AUTO, DOC(dai, node, Camera, build))
.def_readonly("mockIsp", &Camera::mockIsp, DOC(dai, node, Camera, mockIsp))
.def("build", py::overload_cast<dai::CameraBoardSocket>(&Camera::build), "boardSocket"_a = CameraBoardSocket::AUTO, DOC(dai, node, Camera, build))
#ifdef DEPTHAI_HAVE_OPENCV_SUPPORT
.def("build", py::overload_cast<dai::CameraBoardSocket, ReplayVideo&>(&Camera::build), py::arg("boardSocket"), py::arg("replayNode"), DOC(dai, node, Camera, build))
.def("build", py::overload_cast<ReplayVideo&>(&Camera::build), py::arg("replayNode"), DOC(dai, node, Camera, build))
#endif
// .def("setBoardSocket", &Camera::setBoardSocket, "boardSocket"_a, DOC(dai, node, Camera, setBoardSocket))
.def("getBoardSocket", &Camera::getBoardSocket, DOC(dai, node, Camera, getBoardSocket))
.def("setMockIsp", &Camera::setMockIsp, "mockIsp"_a, DOC(dai, node, Camera, setMockIsp))
// .def("setCamera", &Camera::setCamera, "name"_a, DOC(dai, node, Camera, setCamera))
// .def("getCamera", &Camera::getCamera, DOC(dai, node, Camera, getCamera))
.def("requestOutput",
4 changes: 3 additions & 1 deletion bindings/python/src/pipeline/node/IMUBindings.cpp
@@ -60,7 +60,9 @@ void bind_imu(pybind11::module& m, void* pCallstack) {
.def_readwrite("enableFirmwareUpdate", &IMUProperties::enableFirmwareUpdate, DOC(dai, IMUProperties, enableFirmwareUpdate));

// Node
imu.def_readonly("out", &IMU::out, DOC(dai, node, IMU, out))
imu
.def_readonly("out", &IMU::out, DOC(dai, node, IMU, out))
.def_readonly("mockIn", &IMU::mockIn, DOC(dai, node, IMU, mockIn))
.def("enableIMUSensor",
static_cast<void (IMU::*)(IMUSensorConfig imuSensor)>(&IMU::enableIMUSensor),
py::arg("sensorConfig"),
2 changes: 1 addition & 1 deletion cmake/Depthai/DepthaiDeviceRVC4Config.cmake
@@ -4,4 +4,4 @@ set(DEPTHAI_DEVICE_RVC4_MATURITY "snapshot")

# "version if applicable"
# set(DEPTHAI_DEVICE_RVC4_VERSION "0.0.1+93f7b75a885aa32f44c5e9f53b74470c49d2b1af")
set(DEPTHAI_DEVICE_RVC4_VERSION "0.0.1+e15e34e96495e1ac3d4c064499d45f33784cba8a")
set(DEPTHAI_DEVICE_RVC4_VERSION "0.0.1+cb02832972593df1fa893eb81796b84c5b28fd3c")
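The firmware pins above follow a `<semver>+<commit sha>` local-version convention. As a side illustration (plain Python, not depthai or CMake code), splitting such a pin into its parts:

```python
def parse_fw_version(pin: str) -> tuple[str, str]:
    """Split a firmware pin like '0.0.1+<sha>' into (version, commit sha)."""
    version, _, commit = pin.partition("+")
    return version, commit

# The RVC4 pin introduced by this PR:
version, commit = parse_fw_version(
    "0.0.1+cb02832972593df1fa893eb81796b84c5b28fd3c"
)
```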
2 changes: 1 addition & 1 deletion cmake/Depthai/DepthaiDeviceSideConfig.cmake
@@ -2,7 +2,7 @@
set(DEPTHAI_DEVICE_SIDE_MATURITY "snapshot")

# "full commit hash of device side binary"
set(DEPTHAI_DEVICE_SIDE_COMMIT "9af6aa33e593ce207ad00d09e528ce85181d543b")
set(DEPTHAI_DEVICE_SIDE_COMMIT "bb2e7846c91edf48b2a881caff3e47d5c4764c2a")

# "version if applicable"
set(DEPTHAI_DEVICE_SIDE_VERSION "")
17 changes: 10 additions & 7 deletions examples/cpp/CMakeLists.txt
@@ -263,15 +263,17 @@ dai_add_example(image_manip_v2_multi_ops ImageManip/image_manip_v2_multi_ops.cpp
dai_add_example(image_manip_color_conversion ImageManip/image_manip_color_conversion.cpp OFF OFF)

# Record
dai_add_example(record_encoded_h264 RVC2/Record/record_encoded_h264.cpp OFF OFF)
dai_add_example(record_encoded_mjpeg RVC2/Record/record_encoded_mjpeg.cpp OFF OFF)
dai_add_example(record_raw_color RVC2/Record/record_raw_color.cpp OFF OFF)
dai_add_example(record_raw_gray RVC2/Record/record_raw_gray.cpp OFF OFF)
dai_add_example(record_imu RVC2/Record/record_imu.cpp OFF OFF)
dai_add_example(record_encoded_h264 Record/record_encoded_h264.cpp OFF OFF)
dai_add_example(record_encoded_mjpeg Record/record_encoded_mjpeg.cpp OFF OFF)
dai_add_example(record_raw_color Record/record_raw_color.cpp OFF OFF)
dai_add_example(record_raw_gray Record/record_raw_gray.cpp OFF OFF)
dai_add_example(record_imu Record/record_imu.cpp OFF OFF)
dai_add_example(holistic_record Record/holistic_record.cpp OFF OFF)

# Replay
dai_add_example(replay_video_meta RVC2/Replay/replay_video_meta.cpp OFF OFF)
dai_add_example(replay_imu RVC2/Replay/replay_imu.cpp OFF OFF)
dai_add_example(replay_video_meta Replay/replay_video_meta.cpp OFF OFF)
dai_add_example(replay_imu Replay/replay_imu.cpp OFF OFF)
dai_add_example(holistic_replay Replay/holistic_replay.cpp OFF OFF)

# StereoDepth
dai_add_example(depth_preview StereoDepth/depth_preview.cpp OFF OFF)
@@ -281,6 +283,7 @@ dai_add_example(thermal RVC2/Thermal/thermal.cpp OFF OFF)

# Host nodes
dai_add_example(rgb_video RVC2/ColorCamera/rgb_video.cpp OFF OFF)
dai_add_example(rgb_video_camera RVC2/Camera/rgb_video_camera.cpp OFF OFF)
dai_add_example(host_node HostNodes/host_node.cpp OFF OFF)
dai_add_example(threaded_host_node HostNodes/threaded_host_node.cpp OFF OFF)

31 changes: 31 additions & 0 deletions examples/cpp/RVC2/Camera/rgb_video_camera.cpp
@@ -0,0 +1,31 @@
// Includes common necessary includes for development using depthai library
#include "depthai/common/CameraBoardSocket.hpp"
#include "depthai/depthai.hpp"
#include "depthai/pipeline/datatype/ImgFrame.hpp"
#include "depthai/pipeline/node/host/Replay.hpp"

int main(int argc, char** argv) {
// Create pipeline
auto device = argc > 1 ? std::make_shared<dai::Device>(argv[1]) : std::make_shared<dai::Device>();
dai::Pipeline pipeline(device);

// Define source and output
auto camRgb = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_A);

auto outputQueue = camRgb->requestOutput({1280, 720}, dai::ImgFrame::Type::NV12)->createOutputQueue();

pipeline.start();
while(pipeline.isRunning()) {
auto videoIn = outputQueue->get<dai::ImgFrame>();

// Get BGR frame from NV12 encoded video frame to show with opencv
// Visualizing the frame on slower hosts might have overhead
cv::imshow("video", videoIn->getCvFrame());

int key = cv::waitKey(1);
if(key == 'q' || key == 'Q') {
pipeline.stop();
}
}
return 0;
}
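The example above requests a 1280x720 NV12 stream and converts each frame to BGR for display. As a side note on what such a stream costs per frame (my own arithmetic, not a depthai API): NV12 stores a full-resolution luma plane plus an interleaved half-resolution chroma plane, i.e. 1.5 bytes per pixel.

```python
def nv12_size(width: int, height: int) -> int:
    """Byte size of one NV12 frame: Y plane plus interleaved UV plane."""
    assert width % 2 == 0 and height % 2 == 0, "NV12 needs even dimensions"
    # Y: width*height bytes; UV: (width/2)*(height/2)*2 bytes
    return width * height * 3 // 2

# The 1280x720 output requested in the example above:
nv12_720p = nv12_size(1280, 720)
```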
(next file)
@@ -7,21 +7,20 @@
#endif

int main(int argc, char** argv) {
dai::Pipeline pipeline(true);
auto cam = pipeline.create<dai::node::ColorCamera>();
dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::Camera>()->build();
auto imu = pipeline.create<dai::node::IMU>();
auto display = pipeline.create<dai::node::Display>();

cam->setBoardSocket(dai::CameraBoardSocket::CAM_A);
cam->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
cam->setFps(30);
auto camOut = cam->requestOutput({720, 1280}, dai::ImgFrame::Type::NV12);

// enable ACCELEROMETER_RAW at 500 hz rate
imu->enableIMUSensor(dai::IMUSensor::ACCELEROMETER_RAW, 500);
// enable GYROSCOPE_RAW at 400 hz rate
imu->enableIMUSensor(dai::IMUSensor::GYROSCOPE_RAW, 400);
imu->setBatchReportThreshold(100);

cam->video.link(display->input);
camOut->link(display->input);
auto q = imu->out.createOutputQueue();

dai::RecordConfig config;
@@ -36,17 +35,21 @@

auto start = std::chrono::steady_clock::now();

while(std::chrono::steady_clock::now() - start < std::chrono::seconds(15)) {
auto imuData = q->get<dai::IMUData>();
auto imuPackets = imuData->packets;
for(auto& imuPacket : imuPackets) {
auto& acceleroValues = imuPacket.acceleroMeter;
auto& gyroValues = imuPacket.gyroscope;

printf("Accelerometer [m/s^2]: x: %.3f y: %.3f z: %.3f \n", acceleroValues.x, acceleroValues.y, acceleroValues.z);
printf("Gyroscope [rad/s]: x: %.3f y: %.3f z: %.3f \n", gyroValues.x, gyroValues.y, gyroValues.z);
try {
while(std::chrono::steady_clock::now() - start < std::chrono::seconds(15)) {
auto imuData = q->get<dai::IMUData>();
auto imuPackets = imuData->packets;
for(auto& imuPacket : imuPackets) {
auto& acceleroValues = imuPacket.acceleroMeter;
auto& gyroValues = imuPacket.gyroscope;

// printf("Accelerometer [m/s^2]: x: %.3f y: %.3f z: %.3f \n", acceleroValues.x, acceleroValues.y, acceleroValues.z);
// printf("Gyroscope [rad/s]: x: %.3f y: %.3f z: %.3f \n", gyroValues.x, gyroValues.y, gyroValues.z);
}
std::this_thread::sleep_for(std::chrono::milliseconds(1));
}
std::this_thread::sleep_for(std::chrono::milliseconds(1));
} catch(...) {

}

pipeline.stop();
(next file)
@@ -1,5 +1,7 @@
#include <depthai/depthai.hpp>

#include "depthai/capabilities/ImgFrameCapability.hpp"
#include "depthai/common/CameraBoardSocket.hpp"
#include "depthai/pipeline/node/host/Record.hpp"

#ifndef DEPTHAI_HAVE_OPENCV_SUPPORT
@@ -8,7 +10,7 @@

int main(int argc, char** argv) {
dai::Pipeline pipeline(true);
auto cam = pipeline.create<dai::node::ColorCamera>();
auto cam = pipeline.create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_A);
auto display = pipeline.create<dai::node::Display>();
auto videoEncoder = pipeline.create<dai::node::VideoEncoder>();
auto record = pipeline.create<dai::node::RecordVideo>();
@@ -17,14 +19,12 @@ int main(int argc, char** argv) {
record->setRecordVideoFile(path + std::string(".mp4"));
record->setRecordMetadataFile(path + std::string(".mcap"));

cam->setBoardSocket(dai::CameraBoardSocket::CAM_A);
cam->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
cam->setFps(30);
auto* camOut = cam->requestOutput({1920, 1440}, dai::ImgFrame::Type::NV12, dai::ImgResizeMode::CROP, 30.f);

videoEncoder->setProfile(dai::VideoEncoderProperties::Profile::H264_MAIN);

cam->video.link(videoEncoder->input);
cam->video.link(display->input);
camOut->link(videoEncoder->input);
camOut->link(display->input);
videoEncoder->out.link(record->input);

pipeline.run(); // Let the display node stop the pipeline
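The record example above writes a video file and an `.mcap` metadata file sharing one stem (`path + ".mp4"` and `path + ".mcap"`). A small pathlib sketch of deriving such a pair from one base path; the helper name is hypothetical, not part of depthai:

```python
from pathlib import Path

def record_paths(base: str) -> tuple[Path, Path]:
    """Return (video, metadata) paths with one stem, mirroring the example."""
    p = Path(base)
    return p.with_suffix(".mp4"), p.with_suffix(".mcap")

video, meta = record_paths("recordings/color")
```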
(next file)
@@ -7,21 +7,19 @@
#endif

int main(int argc, char** argv) {
dai::Pipeline pipeline(true);
auto cam = pipeline.create<dai::node::ColorCamera>();
dai::Pipeline pipeline;
auto cam = pipeline.create<dai::node::Camera>()->build();
auto imu = pipeline.create<dai::node::IMU>();
auto display = pipeline.create<dai::node::Display>();

cam->setBoardSocket(dai::CameraBoardSocket::CAM_A);
cam->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
cam->setFps(30);
auto camOut = cam->requestOutput({720, 1280}, dai::ImgFrame::Type::NV12);

// enable ACCELEROMETER_RAW at 500 hz rate
imu->enableIMUSensor(dai::IMUSensor::ACCELEROMETER_RAW, 500);
// enable GYROSCOPE_RAW at 400 hz rate
imu->enableIMUSensor(dai::IMUSensor::GYROSCOPE_RAW, 400);

cam->video.link(display->input);
camOut->link(display->input);
auto q = imu->out.createOutputQueue();

pipeline.enableHolisticReplay(argc > 1 ? std::string(argv[1]) : "recording.tar.gz");
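`enableHolisticReplay` in the example above takes a `recording.tar.gz` archive. The archive layout is internal to depthai, so the member names below are placeholders; the sketch only shows the generic in-memory tarfile round trip a replay loader would perform before opening its streams:

```python
import io
import tarfile

# Build a tiny stand-in archive in memory (placeholder member names,
# not the real depthai recording layout).
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    for name in ("CAM_A.mp4", "CAM_A.mcap", "imu.mcap"):
        data = b"placeholder"
        info = tarfile.TarInfo(name=name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# List the members, as a loader would before wiring up replay nodes
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    members = sorted(m.name for m in tar.getmembers())
```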
(next file)
@@ -15,17 +15,13 @@
# Create pipeline
with dai.Pipeline(True) as pipeline:
# Define source and output
camRgb = pipeline.create(dai.node.ColorCamera)

# Properties
camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setVideoSize(1920, 1080)
camRgb.setFps(30)
camRgb = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
camRgbOut = camRgb.requestOutput((1920, 1080), fps = 30)

imu = pipeline.create(dai.node.IMU)
imu.enableIMUSensor(dai.IMUSensor.ACCELEROMETER_RAW, 500);
imu.enableIMUSensor(dai.IMUSensor.GYROSCOPE_RAW, 400);
imu.setBatchReportThreshold(100)

config = dai.RecordConfig()
config.outputDir = args.output;
@@ -35,7 +31,7 @@

pipeline.enableHolisticRecord(config)

videoQueue = camRgb.preview.createOutputQueue()
videoQueue = camRgbOut.createOutputQueue()
imuQueue = imu.out.createOutputQueue()

# Connect to device and start pipeline
(next file)
@@ -11,21 +11,17 @@
# Create pipeline
with dai.Pipeline(True) as pipeline:
# Define source and output
camRgb = pipeline.create(dai.node.ColorCamera)

# Properties
camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setVideoSize(1920, 1080)
camRgb.setFps(30)
camRgb = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
camRgbOut = camRgb.requestOutput((1920, 1080), fps = 30)

imu = pipeline.create(dai.node.IMU)
imu.enableIMUSensor(dai.IMUSensor.ACCELEROMETER_RAW, 500);
imu.enableIMUSensor(dai.IMUSensor.GYROSCOPE_RAW, 400);
imu.setBatchReportThreshold(100)

pipeline.enableHolisticReplay(args.source)

videoQueue = camRgb.preview.createOutputQueue()
videoQueue = camRgbOut.createOutputQueue()
imuQueue = imu.out.createOutputQueue()

# Connect to device and start pipeline
(next file)
@@ -5,7 +5,8 @@
import cv2

parser = argparse.ArgumentParser()
parser.add_argument("-i", "--inputVideo", default="test_video.mp4", help="Input video name")
parser.add_argument("-v", "--inputVideo", default="test_video.mp4", help="Input video name")
parser.add_argument("-m", "--inputMetadata", default="test_video.mcap", help="Input metadata name")

args = parser.parse_args()

@@ -17,19 +18,17 @@
with dai.Pipeline() as pipeline:
replay = pipeline.create(dai.node.ReplayVideo)
replay.setReplayVideoFile(Path(args.inputVideo))
if Path(args.inputMetadata).exists():
replay.setReplayMetadataFile(Path(args.inputMetadata))
replay.setOutFrameType(dai.ImgFrame.Type.NV12)
replay.setLoop(False)

imageManip = pipeline.create(dai.node.ImageManip)
imageManip.initialConfig.setResize(300, 300)
imageManip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)
replay.out.link(imageManip.inputImage)
manipOutQueue = imageManip.out.createOutputQueue()
videoOut = replay.out.createOutputQueue()

pipeline.start()
while pipeline.isRunning() and replay.isRunning():
try:
outFrame : dai.ImgFrame = manipOutQueue.get()
outFrame : dai.ImgFrame = videoOut.get()
except dai.MessageQueue.QueueException:
# Replay stopped the pipeline
break
5 changes: 3 additions & 2 deletions include/depthai/common/ImgTransformations.hpp
@@ -26,12 +26,12 @@ struct ImgTransformation {
size_t width = 0;
size_t height = 0;

std::vector<dai::RotatedRect> srcCrops = {};

dai::RotatedRect srcCrop;
dai::RotatedRect dstCrop;
bool cropsValid = false;

std::vector<dai::RotatedRect> srcCrops = {};

void calcCrops();

public:
@@ -151,6 +151,7 @@
* @return Vertical field of view in degrees
*/
float getVFov(bool source = false) const;

std::vector<dai::RotatedRect> getSrcCrops() const;

/**
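The reordered fields above (`srcCrop`, `dstCrop`, `cropsValid`, `calcCrops()`) suggest a cached-derived-value pattern: crops are recomputed lazily and the cache is invalidated when the geometry changes. A Python sketch of that pattern only; the names are illustrative and not the actual ImgTransformation API:

```python
class Transformation:
    """Lazy crop cache: recompute on demand, invalidate on geometry change."""

    def __init__(self, width: int, height: int):
        self.width = width
        self.height = height
        self._crops_valid = False  # mirrors the cropsValid flag
        self._src_crop = None

    def set_size(self, width: int, height: int) -> None:
        self.width, self.height = width, height
        self._crops_valid = False  # any geometry change invalidates the cache

    def src_crop(self) -> tuple[int, int, int, int]:
        if not self._crops_valid:
            # full-frame crop as the trivial default (stand-in for calcCrops)
            self._src_crop = (0, 0, self.width, self.height)
            self._crops_valid = True
        return self._src_crop
```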
14 changes: 8 additions & 6 deletions include/depthai/pipeline/datatype/EncodedFrame.hpp
@@ -172,13 +172,20 @@
*/
EncodedFrame& setProfile(Profile profile);

ImgFrame getImgFrameMeta() const;

void serialize(std::vector<std::uint8_t>& metadata, DatatypeEnum& datatype) const override {
metadata = utility::serialize(*this);
datatype = DatatypeEnum::EncodedFrame;
};

#ifdef DEPTHAI_ENABLE_PROTOBUF
/**
* Serialize message to proto buffer
*
* @returns serialized message
*/
std::vector<std::uint8_t> serializeProto() const override;
std::vector<std::uint8_t> serializeProto(bool metadataOnly = false) const override;

/**
* Serialize schema to proto buffer
@@ -204,11 +211,6 @@ class EncodedFrame : public Buffer, public ProtoSerializable {
Buffer::sequenceNum,
Buffer::ts,
Buffer::tsDevice);

void serialize(std::vector<std::uint8_t>& metadata, DatatypeEnum& datatype) const override {
metadata = utility::serialize(*this);
datatype = DatatypeEnum::EncodedFrame;
};
};

} // namespace dai
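The new `metadataOnly` flag on `serializeProto` lets an `EncodedFrame` be serialized without its payload, which fits holistic recording where the bitstream already lives in the video file and only metadata goes to the `.mcap`. A generic sketch of that metadata-only pattern in plain Python and JSON (not the actual protobuf schema):

```python
import json

class Frame:
    """Toy stand-in for a message with a heavy payload and light metadata."""

    def __init__(self, seq: int, timestamp_us: int, payload: bytes):
        self.seq = seq
        self.timestamp_us = timestamp_us
        self.payload = payload

    def serialize(self, metadata_only: bool = False) -> bytes:
        doc = {"seq": self.seq, "ts": self.timestamp_us}
        if not metadata_only:
            # full serialization carries the payload too
            doc["payload"] = self.payload.hex()
        return json.dumps(doc).encode()

frame = Frame(seq=1, timestamp_us=123456, payload=b"\x00" * 64)
full = frame.serialize()
meta_only = frame.serialize(metadata_only=True)
```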