The GPU processing supports 8-bit sensor formats and 10/12-bit packed
formats. Support for 10/12-bit unpacked formats is missing; let's add
it.

10/12-bit unpacked formats store each value in two adjacent bytes.
This means the 8-bit shaders can be reused if we modify them to
additionally support 16-bit addressing. This requires the following
modifications:

- Using GL_RG (two bytes per pixel) instead of GL_LUMINANCE (one byte
  per pixel) as the texture format for the given input formats, as
  sketched below.

- Setting the texture width to the number of pixels rather than the
  number of bytes.

- Making the definition of the `fetch' macro variable, according to
  the pixel format (see the second sketch below).

- Using only `fetch' to access the texture.

Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Signed-off-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
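---

As a minimal illustration of the first two points, here is a sketch of
the texture upload, not taken from the patch itself. It assumes a
little-endian 16-bit input buffer, a plain glTexImage2D() upload path,
and a context where GL_RG is usable (core in OpenGL ES 3.0, or via
GL_EXT_texture_rg on ES 2.0); the function and parameter names are
hypothetical.

#include <GLES2/gl2.h>

#ifndef GL_RG
#define GL_RG 0x8227	/* value of GL_RG_EXT from GL_EXT_texture_rg */
#endif

/* Hypothetical helper: upload one input plane as a texture. */
static void uploadInputTexture(GLsizei width, GLsizei height,
			       unsigned int bytesPerPixel, const void *data)
{
	if (bytesPerPixel == 2) {
		/*
		 * 10/12-bit unpacked: two adjacent bytes per value.
		 * With little-endian data, GL_RG puts the low byte in
		 * .r and the high byte in .g.  The width is the number
		 * of pixels, not the number of bytes.
		 */
		glTexImage2D(GL_TEXTURE_2D, 0, GL_RG, width, height,
			     0, GL_RG, GL_UNSIGNED_BYTE, data);
	} else {
		/* 8-bit formats: one byte per pixel, as before. */
		glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width,
			     height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE,
			     data);
	}
}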
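The variable `fetch' definition can be prepended to the shader source
at compile time. The sketch below shows one way to express that; the
sampler name tex_y is an assumption, and the extra scaling a real
implementation needs to map 10- or 12-bit samples to the full [0, 1]
range (a factor of 64 or 16) is omitted for brevity.

/*
 * Hypothetical shader preamble snippets; one of them is prepended to
 * the fragment shader source before compilation.  The sampler name
 * tex_y is an assumption.
 */

/* 8-bit formats: one byte per texel, value in the .r channel. */
static const char fetch8[] =
	"#define fetch(x, y) texture2D(tex_y, vec2(x, y)).r\n";

/*
 * 10/12-bit unpacked formats: two bytes per texel in a GL_RG texture,
 * low byte in .r, high byte in .g, each normalized by 255.
 * (r + 256 * g) / 257 equals (lo + 256 * hi) / 65535, i.e. the 16-bit
 * value normalized to [0, 1].
 */
static const char fetch16[] =
	"#define fetch(x, y) "
	"(dot(texture2D(tex_y, vec2(x, y)).rg, vec2(1.0, 256.0)) / 257.0)\n";

With the macro in place, the shared shader body only ever calls
fetch(), which is what the last point in the list amounts to: any
remaining direct texture2D() accesses are replaced so the same source
compiles for both layouts.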