libcamera: software_isp: GPU support for unpacked 10/12-bit formats
The GPU processing supports 8-bit sensor formats and 10/12-bit packed formats. Support for 10/12-bit unpacked formats is missing, let's add it.

10/12-bit unpacked formats use two adjacent bytes to store the value. This means the 8-bit shaders can be used if we can modify them for additional support of 16-bit addressing. This requires the following modifications:

- Using GL_RG (two bytes per pixel) instead of GL_LUMINANCE (one byte per pixel) as the texture format for the given input formats.
- Setting the texture width to the number of pixels rather than the number of bytes.
- Making the definition of the `fetch' macro variable, according to the pixel format.
- Using only `fetch' for accessing the texture.

Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Signed-off-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
This commit is contained in:
parent 9b66144aad
commit f28498a2fb
3 changed files with 44 additions and 14 deletions
@@ -32,9 +32,17 @@ uniform mat3 ccm;
 void main(void) {
 	vec3 rgb;
 
+#if defined(RAW10P)
+#define pixel(p) p.r / 4.0 + p.g * 64.0
+#define fetch(x, y) pixel(texture2D(tex_y, vec2(x, y)))
+#elif defined(RAW12P)
+#define pixel(p) p.r / 16.0 + p.g * 16.0
+#define fetch(x, y) pixel(texture2D(tex_y, vec2(x, y)))
+#else
+#define fetch(x, y) texture2D(tex_y, vec2(x, y)).r
+#endif
+
-	float C = texture2D(tex_y, center.xy).r; // ( 0, 0)
+	float C = fetch(center.x, center.y); // ( 0, 0)
 	const vec4 kC = vec4( 4.0, 6.0, 5.0, 5.0) / 8.0;
 
 	// Determine which of four types of pixels we are on.