@nkos Understood. We will arrange to adapt to the new kernel version; it will take a few days.
-
RE: radxa zero 3e, yocto linux 6.13, imx287
We have released a Debian installation package that supports the Radxa Zero 3E; please see the link below. Note that it is based on kernel 5.10.
https://github.com/veyeimaging/rk35xx_radxa/releases/tag/v1.2
-
RE: VEYE-LVDS-327
@lozinski Sorry, we currently do not have products that meet your requirements. The VEYE-LVDS-327 has been discontinued for a long time.
-
RE: MV-MIPI-GMAX4002M compatibility with Jetson Xavier NX, Orin NX
@stiwana
https://wiki.veye.cc/index.php/Mv_series_camera_appnotes_4_jetson
For specific information, please refer to the link above. We provide the veye_viewer client so customers can get started quickly and verify basic operation.
We haven't done extensive debugging on GStreamer yet. As for OpenCV, there are examples in the article above. Regarding the latency issue: although the camera itself adds negligible delay, each image entering the Jetson system incurs some latency from the V4L2 buffer, format conversion, and preview stages, all of which involve memory buffering queues.
The most fundamental solution is to develop your own program tailored to your product's specific needs, so that buffers can be managed flexibly. Additionally, note that each V4L2 buffer carrying a camera image into the Jetson system includes a timestamp; we recommend using these timestamps.
Regarding the supply of this product, we will maintain long-term availability unless the chip is discontinued. We will keep a regular inventory sufficient to meet retail demand. For bulk orders of several hundred units or more, the production cycle is typically within one month.
-
RE: MV-MIPI-GMAX4002M compatibility with Jetson Xavier NX, Orin NX
@stiwana
I think the post on the forum might have misled you. In fact, the latency of the VEYE series (models starting with VEYE-) is relatively high.
The latency of the MV series, for instance the MV-MIPI-GMAX4002M, is very low (below 1 ms) because our ISP pipeline does not use frame buffering and only has a small amount of line buffering.
-
RE: OrangePi cm5-tablet RAW-MIPI-SC132M
@mparem
https://github.com/veyeimaging/rk35xx_orangepi/releases/tag/v1.2_mvcam_tablet_cm5
Please try this image file on the cm5-tablet.
-
RE: Capture MV-MIPI-IMX178M with OpenCV
@barbanevosa
The issue occurs because the camera sensor outputs a resolution of 3088 × 2064, but the receiving side (V4L2 buffers) aligns each line to a stride of 3136 bytes for memory alignment. When OpenCV is used directly with cv2.CAP_V4L2, it applies the width setting (3088) both to the sensor and to the buffer interpretation. Since the actual buffer has padding (3136 vs. 3088), OpenCV misinterprets the data layout, which results in visible image artifacts (stripes).
We recommend using a GStreamer pipeline to handle the stride and alignment internally, so that OpenCV receives a clean image at the correct resolution.
-
RE: VEYE imx385 driver issue
@flaty
Support for DMABUF mode is not determined by the camera driver, but by Rockchip's rkcif driver. The data path is: camera --> mipi rx --> rkcif. From the material I have reviewed, the 5.10-kernel rk3588 driver fairly clearly does not support DMABUF mode. This limits your ability to pass buffers zero-copy from the camera to other peripheral drivers. However, if you only want to get data out of the camera driver zero-copy and feed it into your algorithm, V4L2_MEMORY_MMAP mode should be sufficient.
Whether the rkcif driver supports DMA buffer mode on kernel 6 and later is not definitively documented.
-
RE: VEYE imx385 driver issue
@flaty In my understanding, the camera driver itself does not impose the buffer-mode limitation. The actual reception of data and its placement into memory are determined by the driver for RK's MIPI receive unit.
Below is some information I found online; I hope it helps. Judging from the code you posted, the error
Failed to queue buffer: Invalid argument
(EINVAL) usually means either a struct member is set incorrectly or the driver does not support DMABUF mode. Going through your code, let me check item by item:
1. v4l2_buffer initialization
Your code:
struct v4l2_buffer buf = {0};
struct v4l2_plane planes[1] = {0};
This is correct, but note that buf.length must match the number of elements in the planes array, and buf.m.planes must not be NULL.
2. Key buf fields
In your code:
buf.index = 0;
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
buf.memory = V4L2_MEMORY_DMABUF;
buf.length = 1;
buf.m.planes = planes;
Using V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE for buf.type is correct (it means the driver is multiplanar). Setting buf.memory to DMABUF is correct. buf.index does not have to be 0, but it must be a valid index within the buffer count you allocated at VIDIOC_REQBUFS. For example, if req.count = 4, only indices 0..3 are valid.
3. Plane fields
planes[0].m.fd = frame_buffers[i].dmabuf_fd;
planes[0].length = buffer_size;
The problem may be here:
planes[0].m.fd must be a valid dma-buf fd.
planes[0].length should be the size of the plane, although drivers usually do not rely on this value. Some drivers require bytesused to be set, i.e.:
planes[0].bytesused = buffer_size;
If bytesused is not set, the driver may return EINVAL directly.
4. Suggested fix
Try changing it to:
struct v4l2_buffer buf;
struct v4l2_plane planes[VIDEO_MAX_PLANES]; /* usually >= 1 */

memset(&buf, 0, sizeof(buf));
memset(planes, 0, sizeof(planes));

buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
buf.memory = V4L2_MEMORY_DMABUF;
buf.index = i;             /* buffer index in the loop */
buf.length = 1;            /* number of planes */
buf.m.planes = planes;

planes[0].m.fd = frame_buffers[i].dmabuf_fd;
planes[0].length = buffer_size;    /* total plane size */
planes[0].bytesused = buffer_size; /* must be set */
5. Confirm the driver supports DMABUF
Not all V4L2 drivers support V4L2_MEMORY_DMABUF. You can check with:
v4l2-ctl -d /dev/video0 --querycap
If the driver's ioctl does not support DMABUF, you will get EINVAL even if the code is correct. Some drivers only support MMAP, in which case you must use V4L2_MEMORY_MMAP.
Suggestion: before QBUF, print the values of buf and planes[0] to confirm that index, fd, and bytesused are all reasonable.