Merged — Changes from 1 commit
Commits (70)
017beae
Initial pointcloud implementation
Jun 15, 2023
9c6a5ce
Merge remote-tracking branch 'origin/develop' into HEAD
Jun 15, 2023
70b821a
Update core
Jun 15, 2023
be16a68
Merge remote-tracking branch 'origin/develop' into HEAD
Jul 5, 2023
855e020
Merge pull request #939 from luxonis/main
moratom Dec 14, 2023
a6963c9
Merge branch 'develop' of github.com:luxonis/depthai-python into poin…
asahtik Dec 20, 2023
1da4317
Fixed large buffer processing in MessageGroups
asahtik Jan 8, 2024
fbb9e3d
Added PointCloudDataBindings
asahtik Jan 8, 2024
4516699
Bump core
asahtik Jan 8, 2024
ac9ea49
Merge pull request #945 from luxonis/msg_grp_bugfix
asahtik Jan 10, 2024
369c01a
clangformat
asahtik Jan 10, 2024
0517a4d
Run HIL tests after all wheels are available
jakgra Jan 10, 2024
4ff848c
Make notify hil workflow depend only on builds it needs
jakgra Jan 15, 2024
845880b
Bugfixes, added python pointcloud visualization example with open3d
asahtik Jan 15, 2024
50a6485
Update core
Jan 15, 2024
8545d25
Update core
Jan 15, 2024
8c1a729
Merge pull request #956 from luxonis/bootloader_watchdog_bugfix
moratom Jan 15, 2024
4802dfa
Merge pull request #950 from luxonis/run_hil_workflow_after_build
jakgra Jan 16, 2024
54e8575
Added rgb pointcloud capability
asahtik Jan 17, 2024
c21f783
Revert "Added rgb pointcloud capability"
asahtik Jan 18, 2024
fba5559
Bump core
asahtik Jan 18, 2024
e357a76
Revert shared
asahtik Jan 22, 2024
d1d3754
Bump core
asahtik Jan 22, 2024
dbb1d9a
Sparse pointcloud bindings
asahtik Jan 22, 2024
c6c2032
Bump core
asahtik Jan 22, 2024
72b0b72
[RVC2] Added getStereoPairs and getAvailableStereoPairs API (#959)
zrezke Jan 24, 2024
30630ef
Added c++ tests for pointcloud
asahtik Jan 26, 2024
bea383e
Add intensity output
whoactuallycares Jan 31, 2024
adbd8c7
Bump core
asahtik Feb 1, 2024
36abd80
Merge pull request #967 from luxonis/tof_intensity
whoactuallycares Feb 2, 2024
cbe0c76
Added ability to set the lens position via a float, to enable a more …
zrezke Feb 5, 2024
2a4bff2
Update depthai-core
whoactuallycares Feb 9, 2024
ba29e0a
Merge pull request #977 from luxonis/tof_intensity
moratom Feb 9, 2024
99fca06
FW: fix CAM_C failing to stream in certain cases,
alex-luxonis Feb 13, 2024
c682f16
Added the initial version of the depthai binary. (#979)
zrezke Feb 15, 2024
c014e27
OAK-T Support (#957)
zrezke Feb 20, 2024
53601fb
Merge branch 'develop' of github.com:luxonis/depthai-python into poin…
asahtik Feb 21, 2024
393aa6d
Example fixes & improvements, bump fw
asahtik Feb 22, 2024
57c36aa
Clangformat
asahtik Feb 22, 2024
9f67ed8
Bugfix
asahtik Feb 22, 2024
8b8a4da
Default cam test (#845)
zrezke Feb 25, 2024
ebca0f2
Reduce amount of pointcloud copying
asahtik Feb 26, 2024
ee001db
Make cam_test.py available through the depthai binary. (#983)
zrezke Feb 26, 2024
b592eeb
Fix tests
asahtik Feb 26, 2024
4af09fd
Add colorized example for pointcloud
Feb 26, 2024
8bfa675
Implemented transformation matrix for pointcloud
asahtik Feb 28, 2024
daa90fb
Merge branch 'pointcloud' of github.com:luxonis/depthai-python into p…
asahtik Feb 28, 2024
1d8d790
Merge pull request #961 from luxonis/pointcloud
asahtik Feb 29, 2024
5248efe
Fixed windows build
asahtik Mar 4, 2024
3dcae95
Optimized getPoints
asahtik Mar 4, 2024
42c3a2d
Minor fixes
asahtik Mar 4, 2024
6a26eee
Enhance camera features (#987)
zrezke Mar 4, 2024
00f0d32
FW: -fix `setAutoExposureLimit` flicker during AF lens move,
alex-luxonis Mar 4, 2024
9cb7719
Merge pull request #989 from luxonis/pcl_fix_build
moratom Mar 4, 2024
e37db4f
Improve the pointcloud example to work with 30FPS
Mar 5, 2024
b505473
Change the example to camel case for consistency
Mar 5, 2024
50e3683
Minor fixes
Mar 5, 2024
ae4fa01
Merge pull request #991 from luxonis/improve_pointcloud_example
moratom Mar 5, 2024
7b33b42
Merge pull request #990 from luxonis/main
moratom Mar 5, 2024
37f6553
Print a nice error if Open3D is not installed
Mar 5, 2024
11b925f
Update the handling for uninstalled open3d
Mar 5, 2024
a2e419e
Merge pull request #992 from luxonis/improve_pointcloud_example
moratom Mar 5, 2024
e13e231
Bump version to 2.25.0.0
Mar 6, 2024
d3917a7
Add manylinux wheels for 32 bit arm
Mar 6, 2024
a03c585
Add only the wheel for bookworm
Mar 6, 2024
8402008
Don't install global packages
Mar 6, 2024
5a8a85d
Add missing PCL API & change the RH system
Mar 7, 2024
a579364
Update FW to develop
Mar 8, 2024
2b24633
Merge remote-tracking branch 'origin/main' into release_v2.25.0.0
Mar 8, 2024
0465260
Update core to main
Mar 8, 2024
Merge remote-tracking branch 'origin/develop' into HEAD
SzabolcsGergely committed Jul 5, 2023
commit be16a6892c6131f9b310617986cf0049ade17880
8 changes: 6 additions & 2 deletions .github/workflows/main.yml
@@ -156,11 +156,15 @@ jobs:
run: python3 -m pip wheel . -w ./wheelhouse/ --verbose
- name: Auditing wheel
run: for whl in wheelhouse/*.whl; do auditwheel repair "$whl" --plat linux_armv7l -w wheelhouse/audited/; done
- name: Install tweaked auditwheel and add armv6l tag
run: |
python3 -m pip install git+https://github.com/luxonis/auditwheel@main
for whl in wheelhouse/*.whl; do python3 -m auditwheel addtag -t linux_armv7l linux_armv6l -w wheelhouse/postaudited/ "$whl"; done
- name: Archive wheel artifacts
uses: actions/upload-artifact@v3
with:
name: audited-wheels
path: wheelhouse/audited/
path: wheelhouse/postaudited/
- name: Deploy wheels to artifactory (if not a release)
if: startsWith(github.ref, 'refs/tags/v') != true
run: bash ./ci/upload-artifactory.sh
@@ -566,7 +570,7 @@ jobs:
owner: luxonis
workflow: regression_test.yml
workflow_inputs: '{"commit": "${{ github.ref }}", "parent_url": "https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }}"}'
workflow_timeout_seconds: 120 # Default: 300
workflow_timeout_seconds: 300 # was 120 Default: 300

- name: Release
run: echo "https://github.com/luxonis/depthai-core-hil-tests/actions/runs/${{steps.return_dispatch.outputs.run_id}}" >> $GITHUB_STEP_SUMMARY
33 changes: 33 additions & 0 deletions examples/Script/script_read_calibration.py
@@ -0,0 +1,33 @@
#!/usr/bin/env python3
import depthai as dai

# Start defining a pipeline
pipeline = dai.Pipeline()

# Script node
script = pipeline.create(dai.node.Script)
script.setProcessor(dai.ProcessorType.LEON_CSS)
script.setScript("""
import time

cal = Device.readCalibration2()
left_camera_id = cal.getStereoLeftCameraId()
right_camera_id = cal.getStereoRightCameraId()

extrinsics = cal.getCameraExtrinsics(left_camera_id, right_camera_id)
intrinsics_left = cal.getCameraIntrinsics(left_camera_id)

node.info(extrinsics.__str__())
node.info(intrinsics_left.__str__())

time.sleep(1)
node.io['end'].send(Buffer(32))
""")

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName('end')
script.outputs['end'].link(xout.input)

# Connect to device with pipeline
with dai.Device(pipeline) as device:
device.getOutputQueue('end').get()
5 changes: 4 additions & 1 deletion examples/SpatialDetection/spatial_calculator_multi_roi.py
@@ -67,7 +67,10 @@
depthFrame = inDepth.getFrame() # depthFrame values are in millimeters

depth_downscaled = depthFrame[::4]
min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
if np.all(depth_downscaled == 0):
    min_depth = 0  # Set a default minimum depth value when all elements are zero
else:
    min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
max_depth = np.percentile(depth_downscaled, 99)
depthFrameColor = np.interp(depthFrame, (min_depth, max_depth), (0, 255)).astype(np.uint8)
depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)
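The guard in this hunk replaces a bare `np.percentile` call that raised an `IndexError` whenever every downscaled depth pixel was zero (an empty boolean-masked array). As a standalone sketch of the pattern — the `depth_range` helper name is illustrative, not part of the examples:

```python
import numpy as np

def depth_range(depth_frame, downscale=4):
    # Downscale by taking every 4th row, as the examples do with depthFrame[::4]
    d = depth_frame[::downscale]
    nonzero = d[d != 0]
    # Fall back to 0 when every sample is invalid; np.percentile on an
    # empty array would raise an IndexError
    min_depth = 0 if nonzero.size == 0 else np.percentile(nonzero, 1)
    max_depth = np.percentile(d, 99)
    return min_depth, max_depth
```

The same fix is applied verbatim in each of the spatial-detection and stereo-depth examples touched by this commit.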
5 changes: 4 additions & 1 deletion examples/SpatialDetection/spatial_location_calculator.py
@@ -75,7 +75,10 @@
depthFrame = inDepth.getFrame() # depthFrame values are in millimeters

depth_downscaled = depthFrame[::4]
min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
if np.all(depth_downscaled == 0):
    min_depth = 0  # Set a default minimum depth value when all elements are zero
else:
    min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
max_depth = np.percentile(depth_downscaled, 99)
depthFrameColor = np.interp(depthFrame, (min_depth, max_depth), (0, 255)).astype(np.uint8)
depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)
5 changes: 4 additions & 1 deletion examples/SpatialDetection/spatial_mobilenet.py
@@ -115,7 +115,10 @@
depthFrame = depth.getFrame() # depthFrame values are in millimeters

depth_downscaled = depthFrame[::4]
min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
if np.all(depth_downscaled == 0):
    min_depth = 0  # Set a default minimum depth value when all elements are zero
else:
    min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
max_depth = np.percentile(depth_downscaled, 99)
depthFrameColor = np.interp(depthFrame, (min_depth, max_depth), (0, 255)).astype(np.uint8)
depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)
5 changes: 4 additions & 1 deletion examples/SpatialDetection/spatial_mobilenet_mono.py
@@ -118,7 +118,10 @@
depthFrame = inDepth.getFrame() # depthFrame values are in millimeters

depth_downscaled = depthFrame[::4]
min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
if np.all(depth_downscaled == 0):
    min_depth = 0  # Set a default minimum depth value when all elements are zero
else:
    min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
max_depth = np.percentile(depth_downscaled, 99)
depthFrameColor = np.interp(depthFrame, (min_depth, max_depth), (0, 255)).astype(np.uint8)
depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)
5 changes: 4 additions & 1 deletion examples/SpatialDetection/spatial_tiny_yolo.py
@@ -148,7 +148,10 @@
depthFrame = depth.getFrame() # depthFrame values are in millimeters

depth_downscaled = depthFrame[::4]
min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
if np.all(depth_downscaled == 0):
    min_depth = 0  # Set a default minimum depth value when all elements are zero
else:
    min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
max_depth = np.percentile(depth_downscaled, 99)
depthFrameColor = np.interp(depthFrame, (min_depth, max_depth), (0, 255)).astype(np.uint8)
depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)
5 changes: 4 additions & 1 deletion examples/StereoDepth/depth_crop_control.py
@@ -64,7 +64,10 @@

# Frame is transformed, the color map will be applied to highlight the depth info
depth_downscaled = depthFrame[::4]
min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
if np.all(depth_downscaled == 0):
    min_depth = 0  # Set a default minimum depth value when all elements are zero
else:
    min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
max_depth = np.percentile(depth_downscaled, 99)
depthFrameColor = np.interp(depthFrame, (min_depth, max_depth), (0, 255)).astype(np.uint8)
depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)
29 changes: 19 additions & 10 deletions examples/StereoDepth/rgb_depth_aligned.py
@@ -3,11 +3,16 @@
import cv2
import numpy as np
import depthai as dai
import argparse

# Weights to use when blending depth/rgb image (should equal 1.0)
rgbWeight = 0.4
depthWeight = 0.6

parser = argparse.ArgumentParser()
parser.add_argument('-alpha', type=float, default=None, help="Alpha scaling parameter to increase FOV. [0,1] valid interval.")
args = parser.parse_args()
alpha = args.alpha

def updateBlendWeights(percent_rgb):
"""
@@ -21,9 +26,6 @@ def updateBlendWeights(percent_rgb):
depthWeight = 1.0 - rgbWeight


# Optional. If set (True), the ColorCamera is downscaled from 1080p to 720p.
# Otherwise (False), the aligned depth is automatically upscaled to 1080p
downscaleColor = True
fps = 30
# The disparity is computed at this resolution, then upscaled to RGB resolution
monoResolution = dai.MonoCameraProperties.SensorResolution.THE_720_P
@@ -34,7 +36,7 @@ def updateBlendWeights(percent_rgb):
queueNames = []

# Define sources and outputs
camRgb = pipeline.create(dai.node.ColorCamera)
camRgb = pipeline.create(dai.node.Camera)
left = pipeline.create(dai.node.MonoCamera)
right = pipeline.create(dai.node.MonoCamera)
stereo = pipeline.create(dai.node.StereoDepth)
@@ -48,15 +50,17 @@ def updateBlendWeights(percent_rgb):
queueNames.append("disp")

#Properties
camRgb.setBoardSocket(dai.CameraBoardSocket.CAM_A)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
rgbCamSocket = dai.CameraBoardSocket.CAM_A

camRgb.setBoardSocket(rgbCamSocket)
camRgb.setSize(1280, 720)
camRgb.setFps(fps)
if downscaleColor: camRgb.setIspScale(2, 3)

# For now, RGB needs fixed focus to properly align with depth.
# This value was used during calibration
try:
calibData = device.readCalibration2()
lensPosition = calibData.getLensPosition(dai.CameraBoardSocket.CAM_A)
lensPosition = calibData.getLensPosition(rgbCamSocket)
if lensPosition:
camRgb.initialControl.setManualFocus(lensPosition)
except:
@@ -71,14 +75,19 @@ def updateBlendWeights(percent_rgb):
stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
# LR-check is required for depth alignment
stereo.setLeftRightCheck(True)
stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)
stereo.setDepthAlign(rgbCamSocket)

# Linking
camRgb.isp.link(rgbOut.input)
camRgb.video.link(rgbOut.input)
left.out.link(stereo.left)
right.out.link(stereo.right)
stereo.disparity.link(disparityOut.input)

camRgb.setMeshSource(dai.CameraProperties.WarpMeshSource.CALIBRATION)
if alpha is not None:
camRgb.setCalibrationAlpha(alpha)
stereo.setAlphaScaling(alpha)

# Connect to device and start pipeline
with device:
device.startPipeline(pipeline)
3 changes: 3 additions & 0 deletions examples/StereoDepth/stereo_depth_from_host.py
@@ -588,6 +588,9 @@ def __init__(self, config):
fov = 71.86
focal = width / (2 * math.tan(fov / 2 / 180 * math.pi))

stereo.setBaseline(baseline/10)
stereo.setFocalLength(focal)

streams = ['left', 'right']
if outRectified:
streams.extend(['rectified_left', 'rectified_right'])
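The new `setBaseline`/`setFocalLength` calls feed the host-side stereo node the values computed just above from the horizontal FOV. The focal-length formula can be checked in isolation — the `focal_length_px` name is illustrative:

```python
import math

def focal_length_px(width_px, hfov_deg):
    # focal = width / (2 * tan(hfov / 2)), as in stereo_depth_from_host.py
    return width_px / (2 * math.tan(math.radians(hfov_deg) / 2))
```

For a 1280-pixel-wide image and the 71.86-degree FOV used in the example this comes out to roughly 883 pixels.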
5 changes: 4 additions & 1 deletion examples/mixed/rotated_spatial_detections.py
@@ -106,7 +106,10 @@
depthFrame = depth.getFrame() # depthFrame values are in millimeters

depth_downscaled = depthFrame[::4]
min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
if np.all(depth_downscaled == 0):
    min_depth = 0  # Set a default minimum depth value when all elements are zero
else:
    min_depth = np.percentile(depth_downscaled[depth_downscaled != 0], 1)
max_depth = np.percentile(depth_downscaled, 99)
depthFrameColor = np.interp(depthFrame, (min_depth, max_depth), (0, 255)).astype(np.uint8)
depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,2 +1,2 @@
[build-system]
requires = ["setuptools", "wheel", "mypy", "cmake==3.25"]
requires = ["setuptools", "wheel", "mypy<=1.3.0", "cmake==3.25"]
3 changes: 2 additions & 1 deletion src/pipeline/datatype/ToFConfigBindings.cpp
@@ -48,10 +48,10 @@ void bind_tofconfig(pybind11::module& m, void* pCallstack){

depthParams
.def(py::init<>())
.def_readwrite("enable", &RawToFConfig::DepthParams::enable, DOC(dai, RawToFConfig, DepthParams, enable))
.def_readwrite("freqModUsed", &RawToFConfig::DepthParams::freqModUsed, DOC(dai, RawToFConfig, DepthParams, freqModUsed))
.def_readwrite("avgPhaseShuffle", &RawToFConfig::DepthParams::avgPhaseShuffle, DOC(dai, RawToFConfig, DepthParams, avgPhaseShuffle))
.def_readwrite("minimumAmplitude", &RawToFConfig::DepthParams::minimumAmplitude, DOC(dai, RawToFConfig, DepthParams, minimumAmplitude))
.def_readwrite("median", &RawToFConfig::DepthParams::median, DOC(dai, RawToFConfig, DepthParams, median))
;

// Message
@@ -63,6 +63,7 @@ void bind_tofconfig(pybind11::module& m, void* pCallstack){
.def("setFreqModUsed", static_cast<ToFConfig&(ToFConfig::*)(dai::ToFConfig::DepthParams::TypeFMod)>(&ToFConfig::setFreqModUsed), DOC(dai, ToFConfig, setFreqModUsed))
.def("setAvgPhaseShuffle", &ToFConfig::setAvgPhaseShuffle, DOC(dai, ToFConfig, setAvgPhaseShuffle))
.def("setMinAmplitude", &ToFConfig::setMinAmplitude, DOC(dai, ToFConfig, setMinAmplitude))
.def("setMedianFilter", &ToFConfig::setMedianFilter, DOC(dai, ToFConfig, setMedianFilter))

.def("set", &ToFConfig::set, py::arg("config"), DOC(dai, ToFConfig, set))
.def("get", &ToFConfig::get, DOC(dai, ToFConfig, get))
12 changes: 12 additions & 0 deletions utilities/cam_test.py
@@ -89,6 +89,8 @@ def socket_type_pair(arg):
help="Show also ToF amplitude output alongside depth")
parser.add_argument('-tofcm', '--tof-cm', action='store_true',
help="Show ToF depth output in centimeters, capped to 255")
parser.add_argument('-tofmedian', '--tof-median', choices=[0,3,5,7], default=5, type=int,
help="ToF median filter kernel size")
parser.add_argument('-rgbprev', '--rgb-preview', action='store_true',
help="Show RGB `preview` stream instead of full size `isp`")

@@ -221,6 +223,16 @@ def get(self):
tofConfig.depthParams.freqModUsed = dai.RawToFConfig.DepthParams.TypeFMod.MIN
tofConfig.depthParams.avgPhaseShuffle = False
tofConfig.depthParams.minimumAmplitude = 3.0

if args.tof_median == 0:
    tofConfig.depthParams.median = dai.MedianFilter.MEDIAN_OFF
elif args.tof_median == 3:
    tofConfig.depthParams.median = dai.MedianFilter.KERNEL_3x3
elif args.tof_median == 5:
    tofConfig.depthParams.median = dai.MedianFilter.KERNEL_5x5
elif args.tof_median == 7:
    tofConfig.depthParams.median = dai.MedianFilter.KERNEL_7x7

tof[c].initialConfig.set(tofConfig)
if args.tof_amplitude:
amp_name = 'tof_amplitude_' + c
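The if/elif chain in this hunk maps the new `--tof-median` flag to a `dai.MedianFilter` member. The same mapping can be written table-driven; in this sketch plain strings stand in for the enum members, since the snippet deliberately does not import `depthai`:

```python
# Maps the --tof-median kernel-size argument to a dai.MedianFilter member name
TOF_MEDIAN_MODES = {
    0: "MEDIAN_OFF",
    3: "KERNEL_3x3",
    5: "KERNEL_5x5",
    7: "KERNEL_7x7",
}

def tof_median_mode(kernel_size):
    # kernel_size is already restricted by argparse choices=[0, 3, 5, 7]
    return TOF_MEDIAN_MODES[kernel_size]
```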