How to read VideoDevice into 1D array?
We have a custom frame grabber that is recognized as a video device by
v=imaq.VideoDevice("winvideo",DeviceID)
The ReturnedDataType defaults to ‘single’ but can be set to ‘uint16’; the frame grabber outputs 16 bits per image-sensor pixel.
The ReturnedColorSpace does not show ‘Bayer’ as an option, only ‘rgb’, ‘grayscale’, and ‘YCbCr’. However, the frame grabber outputs the 2D image-sensor pixels in row-major order (i.e., row 1, then row 2, etc.), which can be transposed and then demosaiced using a ‘gbrg’ BayerSensorAlignment.
The ROI defaults to [1, 1, Height, Width] of the sensor.
It seems that step(v) reshapes the data in column-major order.
Since the ReturnedColorSpace does not offer ‘Bayer’ as an option, and step(v) appears to reshape its output in column-major order, is there a way to call step(v) so that the output is a 1D vector of length Height*Width? The image data could then be reshaped in row-major order into a 2D Bayer mosaic, transposed, and demosaiced.
For reference, a video capture object can be created in Python with
v=cv2.VideoCapture(DeviceID)
and the RGB conversion/reshaping of the v.read() output can be disabled using
v.set(cv2.CAP_PROP_CONVERT_RGB, 0)
v.read() then returns a 1D vector (although its length is then 2*Height*Width of ‘uint8’ values, which can be typecast to uint16).
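For the typecast step, a minimal numpy sketch (using a synthetic stand-in buffer rather than a live capture, and assuming the device packs each 16-bit pixel as two little-endian bytes) of reinterpreting the 1D uint8 vector as uint16 and reshaping it row-major into the 2D Bayer mosaic:

```python
import numpy as np

# Hypothetical stand-in for the 1-D uint8 buffer of length 2*Height*Width
# that v.read() returns once CAP_PROP_CONVERT_RGB is set to 0.
H, W = 4, 6
raw = np.arange(2 * H * W, dtype=np.uint8)

# Reinterpret byte pairs as uint16 (assumes little-endian packing),
# then reshape row-major (numpy's default order) into H-by-W.
bayer = raw.view(np.uint16).reshape(H, W)

assert bayer.shape == (H, W)
assert bayer.dtype == np.uint16
assert bayer[0, 0] == 256  # bytes (0, 1) little-endian -> 0 + 1*256
```

The `.view(np.uint16)` call reinterprets the existing bytes in place with no copy; if the device packs big-endian instead, `raw.view('>u2')` would be the equivalent. The resulting 2D array can then be passed to a demosaicing routine.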