Compare commits

...

147 commits

Author SHA1 Message Date
Laurent Pinchart
afd9890b7b libcamera: delayed_controls: Inherit from Object class
A second use-after-free bug related to signals staying connected after
the receiver DelayedControls instance gets deleted has been found, this
time in the simple pipeline handler. Fix the issue once and for all by
making the DelayedControls class inherit from Object. This will
disconnect signals automatically upon deletion of the receiver.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
Tested-by: Isaac Scott <isaac.scott@ideasonboard.com>
Reviewed-by: Isaac Scott <isaac.scott@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-11 12:25:46 +01:00
Umang Jain
fb72083975 camera: Fix spell error
Correct 'CameraConfigutation' spell error to 'CameraConfiguration'.

Signed-off-by: Umang Jain <uajain@igalia.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-08 14:10:14 +01:00
Naushir Patuck
29a88d85b7 libcamera: controls: Use nanoseconds units for FrameWallClock
Use nanoseconds for the FrameWallClock control to match the units for
other timestamp controls, including SensorTimestamp.

Update the RPi pipeline handlers to match the new nanoseconds units when
converting from SensorTimestamp to FrameWallClock.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-08 11:18:58 +01:00
Naushir Patuck
a437212753 libcamera: controls: Remove hyphenation in control description text
Remove the hyphenation in "micro-seconds" in the description for the
ExposureTime control to match the rest of the document.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-08 11:18:47 +01:00
Nick Hollinghurst
e6fb24ffdb ipa: rpi: Fix bug in AfState reporting
A previous change introduced a bug in which AfStateIdle was reported
when idle in Auto mode, when it should continue to report the most
recent AF cycle's outcome (AfStateFocused or AfStateFailed).

Also fix the Pause method so it won't reset state to AfStateIdle
when paused in Continuous AF mode (to match documented behaviour).

Fixes: ea5f451c56 ("ipa: rpi: controller: AutoFocus bidirectional scanning")
Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Tested-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-08 11:18:17 +01:00
Harvey Yang
525325440b V4L2VideoDevice: Call FrameBuffer::Private::cancel() in streamOff()
At the moment `V4L2VideoDevice::streamOff()` sets
`FrameBuffer::Private`'s metadata directly, which is equivalent to
calling `FrameBuffer::Private::cancel()`. To ease code tracing, this
patch replaces the manual modification with the function call.

Signed-off-by: Harvey Yang <chenghaoyang@chromium.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <uajain@igalia.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-08 11:18:12 +01:00
Christian Rauch
17eed522e8 subprojects: libpisp: Update to 1.2.1
Update the libpisp wrap to use the latest 1.2.1 release which silences
an 'unused-parameter' warning.

Bug: https://github.com/raspberrypi/libpisp/pull/43
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Christian Rauch <Rauch.Christian@gmx.de>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-07-08 13:05:54 +03:00
Nick Hollinghurst
619da07f73 ipa: rpi: Update IMX708 camera tuning files for AutoFocus changes
Explicitly add new parameters: "retrigger_ratio", "retrigger_delay",
"check_for_ir". Tweak other parameters to suit algorithm changes.
(Though existing tuning files should still work acceptably.)

Add AfSpeedFast parameters for the Raspberry Pi V3 standard lens.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:33 +01:00
Nick Hollinghurst
ea5f451c56 ipa: rpi: controller: AutoFocus bidirectional scanning
To reduce unnecessary lens movements, allow the CDAF-based
search procedure to start from either end of the range;
or if not near an end, from the current lens position.

This sometimes requires a second coarse scan, if the first
one started in the middle and did not find peak contrast.

Shorten the fine scan from 5 steps to 3 steps; allow the fine scan
to be omitted altogether when "step_fine" is 0 in the tuning file.

Move updateLensPosition() out of startProgrammedScan() to avoid
calling it more than once per iteration.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
686f88707c ipa: rpi: controller: Autofocus to use AWB statistics; re-trigger
Analyse AWB statistics: used both for scene change detection
and to detect IR lighting (when a flag is set in the tuning file).

Option to suppress PDAF altogether when IR lighting is detected.

Rather than being based solely on PDAF "dropout", allow a scan to
be (re-)triggered whenever the scene changes and then stabilizes,
based on contrast and average RGB statistics within the AF window.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
3d44987bc6 ipa: rpi: controller: AutoFocus tweak earlyTerminationByPhase()
Increase threshold for ETBP, from "confEpsilon" to "confThresh".
Correct sign test to take account of pdafGain sign (typically -ve).
Reduce allowed extrapolation range, but relax the check in the
case of Continuous AF, when we go back into the PDAF closed loop.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
429a5ab48f ipa: rpi: controller: Autofocus CAF/PDAF stability tweak
When in Continuous AF mode using PDAF, only move the lens when
phase has had the same sign for at least 4 frames. This reduces
lens wobble in e.g. noisy conditions.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
0fa2b05a86 ipa: rpi: controller: AutoFocus weighting tweak
In getPhase(), stop using different weights for sumWc and sumWcp.
This should improve linearity e.g. in earlyTerminationByPhase().
Phases are slightly larger but confidence values slightly reduced.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
a283287fbf ipa: rpi: controller: Improve findPeak() function in AF algorithm
Improve quadratic peak fitting in findPeak(). The old approximation
was good but only valid when points were equally spaced and the
MAX was not at one end of the series.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
30114cadd8 ipa: rpi: Defer initialising AF LensPosition ControlInfo and value
This fixes two small bugs:

We previously populated LensPosition's ControlInfo with hard-coded
values, ignoring the tuning file. Now we query the AfAlgorithm to
get limits (over all AF ranges) and default (for AfRangeNormal).

We previously sent a default position to the lens driver, even when
a user-specified starting position would follow. Defer doing this,
to reduce unnecessary lens movement at startup (for some drivers).

Bug: https://bugs.libcamera.org/show_bug.cgi?id=258
Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:26 +01:00
Barnabás Pőcze
6b5cc1c92a libcamera: pipeline: uvcvideo: Handle controls during startup
Process the control list passed to `Camera::start()`, and set
the V4L2 controls accordingly.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <uajain@igalia.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-07-02 10:26:41 +02:00
Naushir Patuck
5f94209b1d pipeline: rpi: Fix for enumerating the media graphs
When there are multiple entities between the sensor and CFE device (e.g.
a serialiser and deserialiser or multiple mux devices), the media graph
enumeration would work incorrectly and report that the frontend entity
was not found. This is because the found flag was stored locally in a
boolean and got lost in the recursion.

Fix this by explicitly tracking and returning the frontend found flag
through the return value of enumerateVideoDevices(). This ensures the
flag does not get lost through nested recursion.

This flag can also be used to fail a camera registration if the frontend
is not found.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Umang Jain <uajain@igalia.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-07-01 02:10:34 +03:00
Barnabás Pőcze
35ee8752b7 libcamera: pipeline: uvcvideo: Silently ignore AeEnable
The `AeEnable` control is handled in `Camera::queueRequest()` but it
still reaches the pipeline handler because a single element cannot be
removed from a `ControlList`. So ignore it silently.

Fixes: ffcecda4d5 ("libcamera: pipeline: uvcvideo: Report new AeEnable control as available")
Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-06-30 14:46:19 +02:00
Umang Jain
e9528306f2 camera_sensor: Expand on computeTransform() documentation
The description of computeTransform() for the case where the desired
orientation cannot be achieved can be expanded a little further, to
clearly state that the orientation will be adjusted to the native
camera sensor mounting rotation.

Signed-off-by: Umang Jain <uajain@igalia.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-26 16:20:53 +03:00
Barnabás Pőcze
a29c53f6a6 meson: Use libyaml wrap file from wrapdb
Use the libyaml wrap file from the meson wrapdb instead of
creating the wrap file manually and using the cmake module.
This provides better integration with meson, such as the
`force_fallback_for` built-in option.

This is also needed because the upstream CMakeLists.txt is
out of date, failing with a sufficiently new cmake version:

    CMake Error at CMakeLists.txt:2 (cmake_minimum_required):
    Compatibility with CMake < 3.5 has been removed from CMake.

The above is addressed upstream by https://github.com/yaml/libyaml/pull/314,
but the project seems a bit inactive at the moment.

The wrap file was added using `meson wrap install libyaml`,
and it can be updated using `meson wrap update libyaml`.

`default_library=static` is used to match the behaviour of the
previously used cmake build. `werror=false` needs to be set
because libyaml does not compile without warnings, and that
would abort the build process otherwise.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-26 14:01:19 +02:00
Kieran Bingham
5f4d2ac935 libcamera: controls: Revert incorrect SPDX removal
In commit 6a09deaf7d ("controls: Add FrameWallClock control") the
existing SPDX was accidentally removed, likely from a rebase operation
at some point.

Unfortunately, as this patch had already collected Reviewed-by tags, the
surreptitious removal wasn't noticed until after it was merged.

Re-insert the existing SPDX and copyright banner as the header to the
control definitions file.

Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-26 12:59:03 +01:00
Stefan Klug
0dfb052fbd libcamera: base: Fix log level parsing when multiple categories are listed

For a list of log levels like LIBCAMERA_LOG_LEVELS="CatA:0,CatB:1" only
the severity of the last entry is correctly parsed.

Due to the change of level to a string_view in 24c2caa1c1 ("libcamera:
base: log: Use `std::string_view` to avoid some copies") the level is no
longer necessarily null terminated as it is a view on the original data.

Replace the check for a terminating null by a check for the end position
to fix the issue.

Fixes: 24c2caa1c1 ("libcamera: base: log: Use `std::string_view` to avoid some copies")
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-23 16:40:43 +02:00
Stefan Klug
8ea3ef083f libcamera: test: Add a failing test for the log level parser
Log level parsing doesn't always work as expected. Add a failing test
for that.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-23 16:39:47 +02:00
Laurent Pinchart
c19047dfdf gstreamer: Use std::exchange() instead of g_steal_pointer()
g_steal_pointer() only preserves the type since glib 2.68, requiring
casts. Use std::exchange() instead.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
2025-06-23 02:30:47 +03:00
Laurent Pinchart
02a3b436c4 ipa: rkisp1: Move Sharpness control creation to Filter algorithm
The Sharpness control is used solely by the Filter algorithm. Create it
there, to avoid exposing it to applications when the algorithm is
disabled.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-06-23 02:30:47 +03:00
David Plowman
1537da7442 pipeline: rpi: Add wallclock timestamp support
A ClockRecovery object is added for derived classes to use, and
wallclock timestamps are copied into the request metadata for
applications.

Wallclock timestamps are derived corresponding to the sensor
timestamp, and made available to the base pipeline handler class and
to IPAs, for both vc4 and pisp platforms.

Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-19 11:12:26 +01:00
David Plowman
1d1ba78b45 controls: Add camera synchronisation controls for Raspberry Pi
New controls are added to control the camera "sync" algorithm, which
allows different cameras to synchronise their frames. For the time
being, the controls are Raspberry Pi specific, though this is expected
to change in future.

Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-19 11:12:26 +01:00
David Plowman
2a4e347dfe libcamera: Add ClockRecovery class to generate wallclock timestamps
The ClockRecovery class takes pairs of timestamps from two different
clocks, and models the second ("output") clock from the first ("input")
clock.

We can use it, in particular, to get a good wallclock estimate for a
frame's SensorTimestamp.

Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-19 11:12:26 +01:00
David Plowman
6a09deaf7d controls: Add FrameWallClock control
Add a FrameWallClock control that reports the same moment as the
frame's SensorTimestamp, but in wallclock units.

Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-19 11:12:26 +01:00
Hou Qi
4a277906a4 gstreamer: Fix libcamerasrc responding latency before setting caps
Whenever a downstream element queries latency, libcamerasrc will always reply,
even though it has not yet determined the latency.

However, some downstream elements (e.g. glvideomixer/aggregator) query the
latency before libcamerasrc sets the caps. When these elements get the latency,
they start caps negotiation. Since libcamerasrc has not yet determined its
caps, an invalid negotiation is performed and the workflow is disrupted.

So, set the latency to GST_CLOCK_TIME_NONE during initialization, and reply to
the query only after libcamerasrc has confirmed the latency. By that time,
libcamerasrc has also completed caps negotiation and downstream elements work
fine.

In addition, every time the src pad task stops, reset the latency to
GST_CLOCK_TIME_NONE to ensure that, when the task next starts, the downstream
elements can generate output buffers after receiving the effective latency.

Signed-off-by: Hou Qi <qi.hou@nxp.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-19 01:50:38 +01:00
Barnabás Pőcze
b4c92a61bf ipa: rpi: Initialize enum controls with a list of values
This is how uvcvideo and rkisp1 do it. See ee918b370a
("ipa: rkisp1: agc: Initialize enum controls with a list of values")
for the motivation. In summary, having a list of values is used as a sign
that the control is an enum in multiple places (e.g. `cam`, `camshark`).

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-17 10:59:12 +02:00
Laurent Pinchart
b3ff75d758 gstreamer: Replace NULL with nullptr
Usage of NULL has slowly crept in the libcamerasrc sources. Replace it
with nullptr.

Reported-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:31 +03:00
Laurent Pinchart
a8f90517e0 gstreamer: Drop incorrect unref on caps
The caps object passed to the gst_libcamera_create_video_pool()
function is managed as a g_autoptr() in the caller. The function doesn't
acquire any new reference, so it shouldn't call gst_caps_unref(). Fix
it.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:29 +03:00
Laurent Pinchart
772b06bd8c gstreamer: Fix leak of GstQuery and GstBufferPool in error path
The gst_libcamera_create_video_pool() function leaks a GstQuery instance
and a GstBufferPool instance in an error path. Fix the leaks with
g_autoptr().

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
2025-06-17 01:01:26 +03:00
Laurent Pinchart
f7c4fcd301 gstreamer: Rename variable in gst_libcamera_create_video_pool()
Now that the code is isolated in a function, the video_pool variable in
gst_libcamera_create_video_pool() can be renamed to pool without
clashing with another local variable. Do so to reduce line length.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:23 +03:00
Laurent Pinchart
613202b809 gstreamer: Reduce indentation in gst_libcamera_create_video_pool()
Now that video pool creation is handled by a dedicated function, the
logic can be simplified by returning early instead of nesting scopes. Do
so to decrease indentation and improve readability, and document the
implementation of the function with comments.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:20 +03:00
Laurent Pinchart
3b68207789 gstreamer: Factor out video pool creation
The gst_libcamera_src_negotiate() function uses 5 indentation levels,
causing long lines. Move video pool creation to a separate function to
increase readability.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:17 +03:00
Laurent Pinchart
04e7823eb2 gstreamer: Document improvements when updating minimum GStreamer version
A const_cast<> was recently added to fix a compilation issue with older
GStreamer versions. Add a comment to indicate it can be removed when
bumping the minimum GStreamer version requirement. While at it, also
document a possible future improvement in the same function, and wrap
long lines.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:14 +03:00
Antoine Bouyer
d3f3b95b64 pipeline: imx8-isi: Dynamically compute crossbar subdevice's first source.
So far, the imx8-isi pipeline supports a _symmetrical_ crossbar, with
the same number of sink and source pads.

But for some other i.MX SoCs, such as the i.MX8QM or i.MX95, the
crossbar is no longer symmetrical.

Since each crossbar source is already captured as a pipes_ vector entry,
use the pipes_ vector's size to compute the first source index:

  "1st source index" = "total number of crossbar pads" - pipes_.count()

Signed-off-by: Antoine Bouyer <antoine.bouyer@nxp.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-17 00:44:05 +03:00
Antoine Bouyer
5621ac27a2 pipeline: imx8-isi: Fix match returned value in error case
The match() function returns a boolean type, while it could return int
in case of error when opening the capture file.

Fixes: 0ec982d210 ("libcamera: pipeline: Add IMX8 ISI pipeline")
Signed-off-by: Antoine Bouyer <antoine.bouyer@nxp.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-17 00:19:54 +03:00
Antoine Bouyer
5c8de8a08e pipeline: imx8-isi: Cosmetic changes
Change indentation to pass checkstyle script.

Fixes: 680cde6005 ("libcamera: imx8-isi: Split Bayer/YUV config generation")
Signed-off-by: Antoine Bouyer <antoine.bouyer@nxp.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-17 00:19:53 +03:00
Barnabás Pőcze
b544ce1c19 apps: common: image: Fix assertion
`plane` must be strictly less than the vector's size,
it cannot be equal to it.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-16 10:57:54 +02:00
Naushir Patuck
8d2cd0b5b8 ipa: rpi: Rename dropFrameCount_ to invalidCount_
Rename dropFrameCount_ to invalidCount_ to better reflect its use as
frames are no longer dropped by the pipeline handler.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:55 +01:00
Naushir Patuck
a402f9ebc1 pipeline: rpi: Remove ispOutputCount_ and ispOutputTotal_
With the drop frame logic removed from the pipeline handler, these
member variables are no longer used, so remove them.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:55 +01:00
Naushir Patuck
98d144fef3 pipeline: rpi: Remove disable_startup_frame_drops config option
With the previous change to not drop frames in the pipeline handler,
the "disable_startup_frame_drops" pipeline config option is not used.
Remove it, and throw a warning if the option is present in the YAML
config file.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:55 +01:00
Naushir Patuck
6cf9c4d34f pipeline: ipa: rpi: Split RPiCameraData::dropFrameCount_
Split the pipeline handler drop frame tracking into startup frames and
invalid frames, as reported by the IPA.

Remove the drop buffer handling logic in the pipeline handler. Now all
image buffers are returned out with the appropriate FrameStatus set
for startup or invalid frames.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:54 +01:00
Naushir Patuck
b114c155a7 ipa: rpi: Replace dropFrameCount in the IPA -> PH interface
Replace the dropFrameCount parameter returned from ipa::start() to the
pipeline handler by startupFrameCount and invalidFrameCount. The former
counts the number of frames required for AWB/AGC to converge, and the
latter counts the number of invalid frames produced by the sensor when
starting up.

In the pipeline handler, use the sum of these 2 values to replicate the
existing dropFrameCount behaviour.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:54 +01:00
Naushir Patuck
c50eb1f04a libcamera: framebuffer: Add FrameMetadata::Status::FrameStartup
Add a new status enum, FrameStartup, used to denote that even though
the frame has been successfully captured, the IQ parameters set by the
IPA make the frame unusable, and applications are advised not to
consume it. An example of this would be a cold start of the 3A
algorithms, where large oscillations occur in order to converge to a
stable state quickly.

Additionally, update the definition of the FrameError state to cover
the case where the sensor is known to produce a number of invalid/error
frames after stream-on.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:54 +01:00
Barnabás Pőcze
8d168f3348 libcamera: process: Ensure that file descriptors are nonnegative
Return `-EINVAL` from `Process::start()` if any of the file descriptors
are negative as those most likely signal some kind of issue such as
missed error checking.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-09 15:26:11 +02:00
Barnabás Pőcze
fae2b506d7 libcamera: process: Return error if already running
Returning 0 when a running process is already managed can be confusing
since the parameters might be completely different, causing the caller
to mistakenly assume that the program it specified has been started.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-09 15:25:59 +02:00
Barnabás Pőcze
0a591eaf8c libcamera: process: Misc. cleanup around execv()
Firstly, get the number of arguments once, and use that to determine the
size of the allocation instead of retrieving it twice.

Secondly, use `const_cast` instead of a C-style cast when calling `execv()`.

Thirdly, use `size_t` to match the type of `args.size()`.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-09 15:25:22 +02:00
Barnabás Pőcze
081554db34 libcamera: process: Disable copy/move
A `Process` object has address identity because a pointer to it is
stored inside the `ProcessManager`. However, copy/move special
methods are still generated by the compiler. So disable them to
avoid potential issues and confusion.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-09 15:25:18 +02:00
Barnabás Pőcze
633063e099 android: camera_device: Do not pass nullptr to Request::addBuffer()
The default argument already takes care of passing no fence to
`addBuffer()`, so there is no reason to specify `nullptr` explicitly.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-04 09:31:23 +02:00
Kieran Bingham
290d3f82e3 libcamera v0.5.1
The abi-compliance checker reports 100% compatibility in this release.
As such the SONAME is maintained at 0.5.

 Binary compatibility: 100%
 Source compatibility: 100%
 Total binary compatibility problems: 0, warnings: 0
 Total source compatibility problems: 0, warnings: 0

This release brings 93 commits with a large proportion of fixes and
cleanup against earlier releases. Improvements have been made to the
Raspberry Pi Camera Tuning Tools, and the geometry, matrix and vector
class helpers have been expanded for greater reuse throughout the
project.

Notably for packagers - IPA modules now have their own subdirectory,
which should prevent spurious error messages that would otherwise occur
if packagers chose to install the V4L2 adaptation layer in the same
folder as the IPA modules.

The RKISP1 can now adapt to more complex input pipelines, including
FPGAs and multiplexors, which has been beneficial for users on the
i.MX8MP, and the IPA algorithms for i.MX8MP and RKISP1 continue to get
improvements.

The software ISP has a new Saturation control (available when the CCM is
enabled).

The documentation and the pipeline handler writer's guide have been
re-reviewed and cleaned up.

On the application and test side, lc-compliance now includes
multi-stream tests, and cam has extended support for display formats and
now prevents issues on non-display GPUs when rendering direct to DRM.

Contributors:

    36  Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
    15  Stefan Klug <stefan.klug@ideasonboard.com>
     5  David Plowman <david.plowman@raspberrypi.com>
     5  Kieran Bingham <kieran.bingham@ideasonboard.com>
     5  Laurent Pinchart <laurent.pinchart@ideasonboard.com>
     4  Milan Zamazal <mzamazal@redhat.com>
     4  Quentin Schulz <quentin.schulz@cherry.de>
     3  Daniel Scally <dan.scally@ideasonboard.com>
     3  Paul Elder <paul.elder@ideasonboard.com>
     2  Hou Qi <qi.hou@nxp.com>
     2  Julien Vuillaumier <julien.vuillaumier@nxp.com>
     2  Naushir Patuck <naush@raspberrypi.com>
     2  Niklas Söderlund <niklas.soderlund@ragnatech.se>
     2  Pavel Machek <pavel@ucw.cz>
     1  Benjamin Mugnier <benjamin.mugnier@foss.st.com>
     1  Nícolas F. R. A. Prado <nfraprado@collabora.com>
     1  Sven Püschel <s.pueschel@pengutronix.de>

 108 files changed, 3359 insertions(+), 528 deletions(-)

Integration overview:

The following commits in this release relate to either a bug fix or an
improvement to an existing commit.

 - meson: Do not automatically build documentation if sphinx-build-3 is found
   - Fixes: aba567338b ("Documentation: Move all dependencies into features")
 - Revert "libcamera: rkisp1: Eliminate hard-coded resizer limits"
   - Fixes: 761545407c ("pipeline: rkisp1: Filter out sensor sizes not supported by the pipeline")
 - pipeline: rkisp1: Fix vblank delay
   - Fixes: f72c76eb6e ("rkisp1: Honor the FrameDurationLimits control")
 - utils: raspberrypi: ctt: Fix NaNs in lens shading tables
   - Bug: https://github.com/raspberrypi/libcamera/issues/254
 - utils: raspberrypi: ctt: Fix NaNs in chromatic aberration tables
   - Bug: https://github.com/raspberrypi/libcamera/issues/254
 - utils: raspberrypi: ctt: Fix integer division error calculating LSC cell size
   - Bug: https://github.com/raspberrypi/libcamera/issues/260
 - apps: qcam: Push the viewfinder role to vector
   - Fixes: ee2b011b65 ("apps: cam: Try raw role if default viewfinder role fails")
 - ipa: Move IPA installations to a subdir
   - Bug: https://bugs.libcamera.org/show_bug.cgi?id=268
 - ipa: rkisp1: algorithms: awb: Fix wrong colour temperature reporting
   - Fixes: b60bd37b1a ("ipa: rkisp1: Move calculation of RGB means into own function")
 - ipa: rkisp1: ccm/lsc: Fix CCM/LSC based on manual color temperature
   - Fixes: 0230880954 ("ipa: rkisp1: awb: Implement ColourTemperature control")
 - libcamera: controls: Fix `ControlInfoMap::count(unsigned int)`
   - Fixes: 76b9923e55 ("libcamera: controls: Avoid exception in ControlInfoMap count() and find()")
 - apps: cam: capture_script: Disallow arrays of strings
   - Fixes: b35f04b3c1 ("cam: capture_script: Support parsing array controls")
 - libcamera: matrix: Fix compilation error in inverse() function
   - Fixes: 6287ceff5a ("libcamera: matrix: Add inverse() function")
 - ipa: rpi: controller: rpi: Fix colour gain typo in AGC
   - Fixes: 29892f1c56 ("ipa: libipa: colour: Use the RGB class to model RGB values")

And the following updates have been made in this release, grouped by category:

core:
 - meson: Make the default value of "documentation" feature explicit
 - meson: Do not automatically build documentation if sphinx-build-3 is found
 - libcamera: request: Avoid double map lookup
 - utils: rkisp1: gen-csc-table: Support printing CCM in decimal
 - libcamera: ipa_module: Avoid unnecessary copy when getting signature
 - libcamera: controls: Disallow arrays of arrays
 - libcamera: media_device: Add helper to return matching entities
 - libcamera: internal: Add MediaPipeline helper
 - libcamera: stream: Add color space to configuration string representation
 - README.rst: remove unnecessary dependency for qcam
 - libcamera: v4l2_videodevice: Log buffer count on allocation error
 - libcamera: matrix: Replace SFINAE with static_asserts
 - libcamera: matrix: Make most functions constexpr
 - libcamera: matrix: Add a Span based constructor
 - libcamera: vector: Add a Span based constructor
 - libcamera: matrix: Add inverse() function
 - libcamera: matrix: Extend multiplication operator to heterogeneous types
 - libcamera: vector: Extend matrix multiplication operator to heterogeneous types
 - libcamera: controls: Fix `ControlInfoMap::count(unsigned int)`
 - utils: codegen: Make users depend on `controls.py` in meson
 - libcamera: matrix: Fix compilation error in inverse() function
 - libcamera: sensor: Fix the gain delay for IMX283
 - treewide: Do not use `*NameValueMap` for known values
 - utils: codegen: ipc: Use `any()` instead of `len([]) > 0`
 - utils: codegen: ipc: Remove `namespace` argument
 - utils: codegen: ipc: Add `deserializer()` function
 - utils: codegen: ipc: Log error code when remote call fails
 - utils: codegen: ipc: Simplify `return` statements
 - libcamera: ipa_data_serializer: Remove some vector `reserve()` calls
 - libcamera: mali-c55: Remove tpgCodes_
 - libcamera: mali-c55: Remove tpgSizes_ member from MaliC55CameraData
 - libcamera: process: Use _exit in child process
 - libcamera: process: Pass stderr and reserve stdin and stdout fds
 - guides: pipeline-handler: Update name of pipeline handler stop function
 - libcamera: mali-c55: Fix error paths in ::init()

pipeline:
 - libcamera: software_isp: Add a clarification comment to AWB
 - libcamera: pipeline: uvcvideo: Expose `Gamma` control
 - libcamera: software_isp: Fix CCM multiplication
 - libcamera: pipeline: virtual: Fix typo in log message
 - libcamera: pipeline: imx8-isi: Remove unused variable
 - pipeline: rkisp1: Fix vblank delay
 - libcamera: pipeline: rkisp1: Convert to use MediaPipeline
 - libcamera: pipeline: uvcvideo: Report new AeEnable control as available
 - ipu3: cio2: Remove unused function definition
 - libcamera: software_isp: Add saturation control
 - Revert "libcamera: rkisp1: Eliminate hard-coded resizer limits"

apps:
 - apps: lc-compliance: Support multiple streams in helpers
 - apps: lc-compliance: Add multi-stream tests
 - apps: cam: capture_script: Simplify bool array parsing
 - gstreamer: Fixate colorimetry field during caps negotiation
 - apps: cam: Try raw role if default viewfinder role fails
 - apps: qcam: Push the viewfinder role to vector
 - py: Set `PYTHONPATH` in devenv
 - apps: cam: sdl_texture: Take list of buffers in span
 - apps: cam: sdl_texture: Drop `&rect_` from `SDL_Update{NV,}Texture()` call
 - apps: cam: sdl_texture: Add `SDLTexture1Plane`
 - apps: cam: sdl_sink: Support more single-plane formats
 - gstreamer: Add GstVideoMeta support
 - apps: cam: capture_script: Disallow arrays of strings
 - apps: cam: Skip non-display GPUs

ipa:
 - utils: ipc: Do not duplicate signals in proxy object
 - utils: ipc: Do not define variables in signal handler up front
 - ipa: rpi: common: Avoid warnings when AeEnable control is used
 - ipa: rpi: awb: Remove "fast" parameter
 - ipa: Move IPA installations to a subdir
 - ipa: rkisp1: awb: Declare ControlInfo in AWB
 - ipa: rkisp1: awb: Ignore empty AWB statistics
 - ipa: rkisp1: Refactor automatic/manual structure in IPAActiveState
 - ipa: rkisp1: algorithms: awb: Fix wrong colour temperature reporting
 - ipa: rkisp1: ccm/lsc: Fix CCM/LSC based on manual color temperature
 - ipa: rkisp1: Implement manual ColourCorrectionMatrix control
 - libipa: awb: Make result of gainsFromColourTemp optional
 - ipa: rkisp1: Damp color temperature regulation
 - ipa: rkisp1: awb: Take the CCM into account for the AWB gains calculation
 - ipa: rkisp1: awb: Avoid division by zero
 - ipa: rpi: controller: rpi: Fix colour gain typo in AGC
 - ipa: rpi: Add tuning for IMX283
 - ipa: rpi: Prevent segfault if AGC algorithm is absent

tuning:
 - utils: raspberrypi: ctt: Fix NaNs in lens shading tables
 - utils: raspberrypi: ctt: Fix NaNs in chromatic aberration tables
 - utils: raspberrypi: ctt: Fix integer division error calculating LSC cell size

documentation:
 - Documentation: guides: pipeline-handler: Fix camera creation
 - Documentation: guides: pipeline-handler: Fix property list file name
 - Documentation: guides: pipeline-handler: Fix configuration creation
 - Documentation: guides: pipeline-handler: Fix `Camera::create()` link
 - Documentation: guides: pipeline-handler: Simplify format collection
 - Documentation: guides: pipeline-handler: Query pixel formats once
 - Documentation: guides: application-developer: Remove unnecessary argument
 - Documentation: Fix `INCLUDE_PATH` doxygen configuration option
 - doc: Mention right meson version
 - doc: document libtiff dependency for cam

test:
 - test: Add minimal test for Matrix
 - lc-compliance: Move camera setup to CameraHolder class

Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-01 23:57:29 +01:00
Barnabás Pőcze
a8bc540653 Documentation: Fix INCLUDE_PATH doxygen configuration option
libcamera header files should be included using the `libcamera/...` prefix.
However, `INCLUDE_PATH` is currently set to `@TOP_SRCDIR@/include/libcamera`
meaning that doxygen, when encountering `libcamera/x.h`, will try to open
`@TOP_SRCDIR@/include/libcamera/libcamera/x.h`, which is not the correct
path.

Fix that by using `@TOP_{BUILD,SRC}DIR@/include`. This removes the extra
`libcamera` component from the path and adds the corresponding directory
from the build directory as well since that is an implicit include
directory added by meson.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-01 23:11:54 +01:00
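A sketch of the resulting setting, assuming meson substitutes the `@...@` placeholders (the exact Doxyfile line is not shown in this log):

```
INCLUDE_PATH = "@TOP_SRCDIR@/include" "@TOP_BUILDDIR@/include"
```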
Milan Zamazal
59ac34b728 libcamera: software_isp: Add saturation control
Saturation control is added on top of the colour correction matrix.  A
method of saturation adjustment that can be fully integrated into the
colour correction matrix is used.  The control is available only if the
Ccm algorithm is enabled.

The control uses a 0.0-2.0 value range, with 1.0 being unmodified
saturation, 0.0 full desaturation and 2.0 quite saturated.

The saturation is adjusted by converting to Y'CbCr colour space,
applying the saturation value on the colour axes, and converting back to
RGB.  ITU-R BT.601 conversion is used to convert between the colour
spaces, for no particular reason.

The colour correction matrix is applied before gamma and the given
matrix is suitable for such a case.  Alternatively, the transformation
used in libcamera rpi ccm.cpp could be used.

Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-01 23:08:01 +01:00
Daniel Scally
e342f050c2 libcamera: mali-c55: Fix error paths in ::init()
In the MaliC55CameraData::init() function there are two places that
return values they shouldn't: the ret variable is returned after
checking a pointer is not null instead of an explicit -ENODEV, and later
the boolean value false is returned on failure instead of the error
value returned by V4L2Subdevice::open(). Fix both problems.

Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-01 22:53:47 +01:00
Niklas Söderlund
fabee6055f guides: pipeline-handler: Update name of pipeline handler stop function
Since commit f6b6f15b54 ("libcamera: pipeline: Introduce
stopDevice()") the stop function that pipeline handlers need to
implement was renamed to stopDevice().

Update the pipeline handler writers guide to match this.

Signed-off-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Reviewed-by: Jai Luthra <jai.luthra@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-01 22:41:32 +01:00
Niklas Söderlund
4b5856533a ipu3: cio2: Remove unused function definition
The private function cio2BufferReady is declared but not implemented or
used; remove it from the class definition.

Signed-off-by: Niklas Söderlund <niklas.soderlund@ragnatech.se>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-01 22:30:49 +01:00
Milan Zamazal
663ab2ee8e apps: cam: Skip non-display GPUs
Device::openCard() in the cam DRM helpers looks for a /dev/dri/card*
device that can be opened and that doesn't fail when asked about
the DRM_CAP_DUMB_BUFFER capability (regardless of whether the capability is
supported by the device).

There can be matching devices that are not display devices.  This can
lead to the selection of such a device and the inability to use KMS
output with the `cam` application.  The ultimate goal is to display
something on the device, and the KMS sink will later fail if there is
no connector attached to the device (although it can actually fail
earlier, when trying to set the DRM_CLIENT_CAP_ATOMIC capability if
that is not supported).

Let's avoid selecting devices without connectors, CRTCs or encoders.
This added check most likely makes the original DRM_CAP_DUMB_BUFFER
check unnecessary, so remove it.

Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Mattijs Korpershoek <mkorpershoek@kernel.org>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-05-30 12:26:53 +03:00
Benjamin Mugnier
1ee330c058 ipa: rpi: Prevent segfault if AGC algorithm is absent
Even without an AGC definition in the tuning file, the application would
still dereference agc unconditionally, leading to a segmentation fault
if AGC is absent. This is relevant for sensors that already provide
AGC/AEC by themselves. Check that AGC is present prior to setting the
maximum exposure time.

Signed-off-by: Benjamin Mugnier <benjamin.mugnier@foss.st.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Tested-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com> # RPi4 + imx708_wide
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-29 15:00:09 +01:00
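The guard can be sketched as follows; the types and method names here are stand-ins for the RPi IPA classes, not the actual code.

```cpp
#include <cassert>

// Only dereference the AGC algorithm if the tuning file actually defined
// one; sensors that run their own AGC/AEC may legitimately omit it.
struct AgcAlgorithm {
	double maxExposureTime = 0.0;
	void setMaxExposureTime(double t) { maxExposureTime = t; }
};

void applyMaxExposureTime(AgcAlgorithm *agc, double t)
{
	if (!agc)
		return; /* sensor handles AGC/AEC itself; nothing to set */

	agc->setMaxExposureTime(t);
}
```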
Julien Vuillaumier
5b7c83d8cc libcamera: process: Pass stderr and reserve stdin and stdout fds
When a child process is started from Process::start(), the file
descriptors inherited from the parent process are closed, except
the ones explicitly listed in the fds[] argument.

One issue is that, with the file descriptors for stdin, stdout and
stderr closed, subsequent file descriptors created by the child
process will reuse the values 0, 1 and 2 that are now available.
Thus, usage of printf(), assert() or the like may direct its output
to the new resource bound to one of these reused file descriptors.
The other issue is that the child process can no longer log to the
console because stderr has been closed.

To address these two issues, Process::start() is amended as follows:
- The child process inherits the parent's stderr fd in order to share
the same logging descriptor.
- The child process's stdin, stdout and stderr fds are bound to
/dev/null if not inherited from the parent. This prevents those
descriptors from being reused for any other resource, which could be
corrupted by the presence of printf(), assert() or the like.

Signed-off-by: Julien Vuillaumier <julien.vuillaumier@nxp.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-29 12:30:12 +01:00
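The second point above can be sketched with a small helper; the name and error handling are simplified illustrations, not libcamera's exact code.

```cpp
#include <cassert>
#include <fcntl.h>
#include <unistd.h>

// Bind a standard file descriptor to /dev/null so that later descriptor
// allocations in the child cannot silently reuse slot 0, 1 or 2.
// Returns 0 on success, -1 on error.
int bindFdToDevNull(int fd, int oflag)
{
	int null = open("/dev/null", oflag);
	if (null < 0)
		return -1;

	int ret = dup2(null, fd) < 0 ? -1 : 0;
	if (null != fd)
		close(null);

	return ret;
}
```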
Julien Vuillaumier
32905fdd0b libcamera: process: Use _exit in child process
Use _exit() in the child process in case of an execv() error. This
avoids interfering with the parent process, as exit() may call its
atexit() handlers and flush its I/O buffers.

Signed-off-by: Julien Vuillaumier <julien.vuillaumier@nxp.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-29 12:30:12 +01:00
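The pattern can be sketched as follows; `execChild()` is a hypothetical helper name, not libcamera's code.

```cpp
#include <cassert>
#include <cstdlib>
#include <sys/wait.h>
#include <unistd.h>

// After fork(), a failed execv() in the child must terminate with
// _exit(), which skips the atexit() handlers and stdio buffer flushing
// inherited from the parent.
[[noreturn]] void execChild(const char *path, char *const argv[])
{
	execv(path, argv);

	/* Only reached if execv() failed. */
	_exit(EXIT_FAILURE);
}
```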
Daniel Scally
f58077f073 libcamera: mali-c55: Remove tpgSizes_ member from MaliC55CameraData
The tpgSizes_ vector is only used within the initTPGData() function.
Drop it and use a local variable instead.

Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>
2025-05-29 11:56:23 +01:00
Daniel Scally
b55943714f libcamera: mali-c55: Remove tpgCodes_
MaliC55CameraData stores a vector of the TPG's mbus codes (if the
camera in question is a TPG). This is never used - remove it.

Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Daniel Scally <dan.scally@ideasonboard.com>
2025-05-29 11:56:23 +01:00
Barnabás Pőcze
4709f8442b libcamera: ipa_data_serializer: Remove some vector reserve() calls
`appendPOD()` does a single insertion, so if only a single `appendPOD()`
call is made on a vector before returning, calling `reserve()` brings
little benefit; remove it.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
2025-05-27 11:10:23 +02:00
Barnabás Pőcze
e633d85be9 utils: codegen: ipc: Simplify return statements
Returning an expression of type `void` from a function returning `void`
is legal, so do not handle those cases specially.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
2025-05-27 11:10:23 +02:00
Barnabás Pőcze
4adefc100d utils: codegen: ipc: Log error code when remote call fails
The error code can be useful in diagnosing the underlying issue,
so log that as well, not just the existence of the issue.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
2025-05-27 11:10:23 +02:00
Barnabás Pőcze
d58ccabab7 utils: codegen: ipc: Add deserializer() function
Add `deserializer()` in `serializer.tmpl` so that, like `serializer()`,
a single function generates all the necessary functions in the template
specialization. This also avoids duplicating some conditional logic.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
2025-05-27 11:10:23 +02:00
Barnabás Pőcze
0a1539a4f1 utils: codegen: ipc: Remove namespace argument
The `serializer()`, `deserializer_{fd,no_fd,simple}()` functions
take a string argument named "namespace", but they do not use it.
So remove the argument.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
2025-05-27 11:10:23 +02:00
Barnabás Pőcze
d4ef160b1a utils: codegen: ipc: Use any() instead of len([]) > 0
Use `any()` with a generator expression instead of constructing
a list and checking its length.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
2025-05-27 11:10:23 +02:00
Barnabás Pőcze
eecb270085 treewide: Do not use *NameValueMap for known values
When the value is known, do not look it up via the control's `NameValueMap`,
instead, just refer to the value directly.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-05-27 09:42:46 +02:00
Naushir Patuck
aca8b701ac libcamera: sensor: Fix the gain delay for IMX283
The IMX283 uses a gain delay of 1 instead of the value of 2 currently
defined in the sensor properties. Fix it.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Acked-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-23 11:52:31 +01:00
Naushir Patuck
eb9bb35d80 ipa: rpi: Add tuning for IMX283
Add calibrated tuning for the IMX283 sensor for pisp. Update the vc4
tuning file to match the new calibration.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Paul Elder <paul.elder@ideasonboard.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-23 11:49:20 +01:00
David Plowman
ad5326c926 ipa: rpi: controller: rpi: Fix colour gain typo in AGC
A simple typo crept in where the red gain had been re-typed rather
than using the correct green gain. In particular, this was causing
very dark images for sensors that use large red gains, such as the
IMX477 outdoors.

Fixes: 29892f1c56 ("ipa: libipa: colour: Use the RGB class to model RGB values")
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Tested-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-05-23 09:44:57 +02:00
Laurent Pinchart
516f365670 libcamera: matrix: Fix compilation error in inverse() function
Some gcc versions report uninitialized variable usage:

In member function ‘constexpr T& libcamera::Span<T, 4294967295>::operator[](size_type) const [with T = unsigned int]’,
    inlined from ‘void libcamera::matrixInvert(Span<const T>, Span<T, 4294967295>, unsigned int, Span<T, 4294967295>, Span<unsigned int>)::MatrixAccessor::swap(unsigned int, unsigned int) [with T = float]’ at ../../src/libcamera/matrix.cpp:194:13,
    inlined from ‘bool libcamera::matrixInvert(Span<const T>, Span<T, 4294967295>, unsigned int, Span<T, 4294967295>, Span<unsigned int>) [with T = float]’ at ../../src/libcamera/matrix.cpp:255:14:
../../include/libcamera/base/span.h:362:76: error: ‘row’ may be used uninitialized [-Werror=maybe-uninitialized]
  362 |         constexpr reference operator[](size_type idx) const { return data()[idx]; }
      |                                                                      ~~~~~~^
../../src/libcamera/matrix.cpp: In function ‘bool libcamera::matrixInvert(Span<const T>, Span<T, 4294967295>, unsigned int, Span<T, 4294967295>, Span<unsigned int>) [with T = float]’:
../../src/libcamera/matrix.cpp:232:30: note: ‘row’ was declared here
  232 |                 unsigned int row;
      |                              ^~~

This is a false positive. Fix it by initializing the variable when
declaring it.

Fixes: 6287ceff5a ("libcamera: matrix: Add inverse() function")
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Tested-by: Milan Zamazal <mzamazal@redhat.com>
2025-05-22 19:04:15 +02:00
Barnabás Pőcze
d997e97512 utils: codegen: Make users depend on controls.py in meson
Currently, modifying `controls.py` does not dirty the build targets
that use a script which includes it (e.g. `gen-controls.py`), because
meson has no knowledge of this dependency. Add `depend_files` to each
`custom_target()` invocation to fix this.

Ideally it would be possible to attach this dependency to `gen_controls`,
`gen_gst_controls`, etc. objects themselves, so that repetition is
avoided, but this does not seem possible at the moment.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Acked-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-22 13:16:07 +02:00
Barnabás Pőcze
702af1a1d0 apps: cam: capture_script: Disallow arrays of strings
The current `ControlValue` mechanism does not support arrays
of strings, the assignment in the removed snippet will in fact
trigger an assertion failure in `ControlValue::set()` because
`sizeof(std::string) != ControlValueSize[ControlTypeString]`.

Fixes: b35f04b3c1 ("cam: capture_script: Support parsing array controls")
Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-22 13:16:07 +02:00
Barnabás Pőcze
ffcecda4d5 libcamera: pipeline: uvcvideo: Report new AeEnable control as available
The `AeEnable` control is handled by the `Camera` class directly, but it
still has to be added because `ControlInfoMap`s are not easily modifiable.

See 338ba00e7a ("ipa: rkisp1: agc: Report new AeEnable control as available")
for more details and a similar change in rkisp1.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-22 12:34:54 +02:00
Barnabás Pőcze
efdbe39698 libcamera: controls: Fix ControlInfoMap::count(unsigned int)
The two overloads of `find()` and `at()` have the same behaviour
regardless of the argument type: `unsigned int` or `const ControlId *`.
However, `count()` behaves differently, because `count(unsigned int)` only checks
the `ControlIdMap`, and it does not check if the given id is actually
present in the map storing the `ControlInfo` objects.

So `count()` returns 1 for every control id that is present in the
associated `ControlIdMap` regardless of whether there is an actual
entry for the `ControlId` associated with the given numeric id.

Fix that by simply using `find()` to determine the return value.

Fixes: 76b9923e55 ("libcamera: controls: Avoid exception in ControlInfoMap count() and find()")
Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-22 11:22:34 +02:00
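The bug and the fix can be modelled with a toy stand-in (not the real `ControlInfoMap`): `count()` must consult the map that actually stores the `ControlInfo` entries, exactly as `find()` does, rather than the broader id map.

```cpp
#include <cassert>
#include <cstddef>
#include <map>

struct InfoMap {
	std::map<unsigned int, int> infos; /* id -> ControlInfo stand-in */
	std::map<unsigned int, int> idMap; /* every known control id */

	/* Old behaviour: any known id counts as present. */
	std::size_t countBuggy(unsigned int id) const
	{
		return idMap.count(id);
	}

	/* Fixed behaviour: presence requires an actual ControlInfo entry. */
	std::size_t count(unsigned int id) const
	{
		return infos.find(id) != infos.end() ? 1 : 0;
	}
};
```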
Stefan Klug
969df3db31 ipa: rkisp1: awb: Avoid division by zero
As the gains can also be specified manually, the regulation can run into
numeric instabilities by dividing by near zero. Mitigate that by
applying a small minimum value.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 11:20:08 +02:00
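The mitigation can be sketched as a clamp on the divisor; the constant and helper name here are illustrative, not the values used by the rkisp1 IPA.

```cpp
#include <algorithm>
#include <cassert>

// Clamp the measured value away from zero before computing a gain, so
// manually specified gains cannot push the regulation into numeric
// instability.
constexpr double kMinDivisor = 1e-6;

double safeGain(double reference, double measured)
{
	return reference / std::max(measured, kMinDivisor);
}
```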
Stefan Klug
7991293cec ipa: rkisp1: awb: Take the CCM into account for the AWB gains calculation
The AWB measurements are taken after the CCM. This can be seen by
enabling debug logging on AWB, disabling AWB (stats will still be
processed) and manually changing the CCM.

This means that the estimated colour temperature and the corresponding
CCM also lead to changed rgbMeans, which in turn leads to oscillations.
Fix that by applying the inverse transform on the rgbMeans.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 11:20:08 +02:00
Stefan Klug
71b680c863 ipa: rkisp1: Damp color temperature regulation
Damp the regulation of the color temperature with the same factor as the
gains.  Not damping the color temperature leads to visible flicker, as
the CCM changes too much.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 11:20:08 +02:00
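The damping described above amounts to a first-order low-pass step: each frame, move only a fraction of the distance from the current colour temperature towards the newly measured one. The speed value below is illustrative, not the tuned factor.

```cpp
#include <cassert>
#include <cmath>

// One damping step: with speed in (0, 1], return a value part-way from
// `current` towards `measured`; speed = 1.0 means no damping.
double damp(double current, double measured, double speed)
{
	return current + speed * (measured - current);
}
```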
Stefan Klug
c699d26573 libipa: awb: Make result of gainsFromColourTemp optional
In the grey world AWB case, if no colour gains are contained in the
tuning file, the colour gains get reset to 1 when the colour temperature
is set manually. This is unexpected and undesirable. Allow the
gainsFromColourTemp() function to return std::nullopt to handle that
case.

While at it, remove an unnecessary import from rkisp1/algorithms/awb.h.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 11:20:08 +02:00
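The new contract can be sketched with simplified types (not the real libipa signature): return `std::nullopt` when the tuning data provides no colour gains, so the caller keeps the previous gains instead of resetting them to 1.

```cpp
#include <cassert>
#include <optional>

struct RGBGains {
	double red;
	double blue;
};

// Stand-in for the tuned-gains lookup: without tuned gains there is
// nothing meaningful to return, so signal that with std::nullopt.
std::optional<RGBGains> gainsFromColourTemp([[maybe_unused]] double colourTemp,
					    bool haveTunedGains)
{
	if (!haveTunedGains)
		return std::nullopt;

	/* Stand-in for interpolating the tuned gains at colourTemp. */
	return RGBGains{ 1.5, 2.0 };
}
```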
Stefan Klug
66e9604684 ipa: rkisp1: Implement manual ColourCorrectionMatrix control
Add a manual ColourCorrectionMatrix control. This was already discussed
while implementing manual colour temperature but was never implemented.
The control allows the CCM to be specified manually when AwbEnable is false.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 11:20:07 +02:00
Stefan Klug
f1ac420eb1 ipa: rkisp1: ccm/lsc: Fix CCM/LSC based on manual color temperature
In RkISP1Awb::process(), the color temperature in the active state is
updated every time new statistics are available.  The CCM/LSC algorithms
use that value in prepare() to update the CCM/LSC. This is not correct
if the color temperature was specified manually and leads to visible
flicker even when AwbEnable is set to false.

To fix that, track the auto and manual color temperature separately in
active state. In Awb::prepare() the current frame context is updated
with the corresponding value from active state. Change the algorithms to
fetch the color temperature from the frame context instead of the active
state in prepare().

Fixes: 0230880954 ("ipa: rkisp1: awb: Implement ColourTemperature control")
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-05-20 11:16:36 +02:00
Stefan Klug
3fcc6b06c3 ipa: rkisp1: algorithms: awb: Fix wrong colour temperature reporting
In commit b60bd37b1a ("ipa: rkisp1: Move calculation of RGB means into
own function") the output of the current measured colour temperature as
metadata was incorrectly added. Remove it.

Fixes: b60bd37b1a ("ipa: rkisp1: Move calculation of RGB means into own function")
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 09:59:23 +02:00
Stefan Klug
5010b65a08 ipa: rkisp1: Refactor automatic/manual structure in IPAActiveState
Swap gains and automatic/manual in the IPAActiveState structure. This is
in preparation to adding another member, which is easier in the new
structure. The patch contains no functional changes.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-05-20 09:58:56 +02:00
Laurent Pinchart
1e67b96fb0 libcamera: vector: Extend matrix multiplication operator to heterogeneous types
It is useful to multiply matrices and vectors of heterogeneous types, for
instance float and double. Extend the multiplication operator to support
this, avoiding the need to convert one of the operands. The type of the
returned vector is selected automatically to avoid losing precision.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 09:49:09 +02:00
Laurent Pinchart
754798b664 libcamera: matrix: Extend multiplication operator to heterogeneous types
It is useful to multiply matrices of heterogeneous types, for instance
float and double. Extend the multiplication operator to support this,
avoiding the need to convert one of the matrices. The type of the
returned matrix is selected automatically to avoid losing precision.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 09:49:09 +02:00
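The type-selection idea in the two commits above can be sketched with `std::common_type`; this is an illustrative stand-in, not libcamera's actual Matrix/Vector API (names here are hypothetical):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <type_traits>

/*
 * Sketch: the element type of a mixed-type multiplication is picked
 * automatically via std::common_type so no precision is lost when
 * combining, e.g., float and double operands.
 */
template<typename T1, typename T2, std::size_t N>
auto multiply(const std::array<T1, N> &a, const std::array<T2, N> &b)
{
	using R = std::common_type_t<T1, T2>;
	std::array<R, N> out{};
	for (std::size_t i = 0; i < N; ++i)
		out[i] = static_cast<R>(a[i]) * static_cast<R>(b[i]);
	return out;
}
```

Multiplying a float array by a double array then yields a double array without any explicit conversion at the call site.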
Stefan Klug
dacbcc7d77 test: Add minimal test for Matrix
Add a few tests for the Matrix class. They are not exhaustive, but
provide a starting point.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 09:49:01 +02:00
Stefan Klug
6287ceff5a libcamera: matrix: Add inverse() function
For calculations in upcoming algorithm patches, the inverse of a matrix
is required. Add an implementation of the inverse() function for square
matrices.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 09:46:12 +02:00
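As a minimal illustration of matrix inversion, here is the closed-form 2x2 case; libcamera's inverse() handles arbitrary square matrices, so this sketch only shows the concept, not the actual implementation:

```cpp
#include <array>
#include <cassert>
#include <cmath>

/*
 * Invert a 2x2 matrix { a, b, c, d } (row-major) analytically:
 * inv = 1/det * {  d, -b,
 *                 -c,  a }
 */
std::array<double, 4> inverse2x2(const std::array<double, 4> &m)
{
	double det = m[0] * m[3] - m[1] * m[2];
	assert(std::abs(det) > 1e-12 && "matrix is singular");
	return { m[3] / det, -m[1] / det, -m[2] / det, m[0] / det };
}
```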
Stefan Klug
bcba580546 libcamera: vector: Add a Span based constructor
When one wants to create a Vector from existing data, currently the only
way is via std::array. Add a Span based constructor to allow creation
from std::vector and the like.

While at it, replace the manual loop with std::copy.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 09:46:12 +02:00
Stefan Klug
aca9042abd libcamera: matrix: Add a Span based constructor
When one wants to create a Matrix from existing data, currently the only
way is via std::array. Add a Span based constructor to allow creation
from vectors and the like.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 09:46:11 +02:00
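The shape of such a view-based constructor can be sketched as follows; the `Vec` type and its (pointer, size) interface are illustrative only, as libcamera uses its own Span type:

```cpp
#include <algorithm>
#include <cassert>
#include <array>
#include <cstddef>
#include <vector>

/*
 * Sketch: a fixed-size vector type gains a constructor from a
 * (pointer, size) view, so it can be filled from std::vector,
 * C arrays and the like, not just std::array.
 */
template<typename T, std::size_t N>
struct Vec {
	std::array<T, N> data{};

	Vec() = default;
	Vec(const T *ptr, std::size_t size)
	{
		/* Copy at most N elements from the external storage. */
		std::copy(ptr, ptr + std::min(size, N), data.begin());
	}
};
```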
Stefan Klug
5234e4936f libcamera: matrix: Make most functions constexpr
By zero-initializing the data_ member we can make most functions
constexpr, which will come in handy in upcoming patches. Note that this
limitation is specific to C++17: in C++20 we will be able to leave data_
uninitialized for constexpr. The Matrix(std::array) version of the
constructor cannot be constexpr because std::copy only became constexpr
in C++20.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-20 09:46:11 +02:00
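The effect of the zero-initialized member can be sketched as below; `Mat` is a hypothetical stand-in for libcamera's Matrix:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

/*
 * Sketch: with data_{} the default constructor is a valid constant
 * expression in C++17, so instances can be built and populated entirely
 * at compile time.
 */
template<typename T, std::size_t N>
class Mat {
public:
	constexpr Mat() = default;

	constexpr T &operator[](std::size_t i) { return data_[i]; }
	constexpr const T &operator[](std::size_t i) const { return data_[i]; }

private:
	std::array<T, N> data_{}; /* zero-init enables constexpr in C++17 */
};

constexpr Mat<int, 4> makeIdentity2x2()
{
	Mat<int, 4> m;
	m[0] = 1;
	m[3] = 1;
	return m;
}
```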
Stefan Klug
1d8a6db31c libcamera: matrix: Replace SFINAE with static_asserts
SFINAE is difficult to read and not needed in these cases. Replace it
with static_asserts. The idea came from [1] where it is stated:

"The use of enable_if seems misguided to me. SFINAE is useful for the
situation where we consider multiple candidates for something (overloads
or class template specializations) and try to choose the correct one,
without causing compilation to fail."

[1]: https://stackoverflow.com/questions/62109526/c-friend-template-that-use-sfinae

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
2025-05-20 09:46:11 +02:00
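The two styles contrasted in the commit above can be sketched side by side; both functions below are hypothetical examples of the technique, not code from the patch:

```cpp
#include <cassert>
#include <type_traits>

/* SFINAE style: harder to read, silently removes the overload from
 * consideration, producing an opaque "no matching function" error. */
template<typename T,
	 std::enable_if_t<std::is_arithmetic_v<T>, int> = 0>
T scaleSfinae(T v)
{
	return v * 2;
}

/* static_assert style: the same constraint, but since there is only one
 * candidate anyway, a plain assertion gives a clear error message. */
template<typename T>
T scaleAssert(T v)
{
	static_assert(std::is_arithmetic_v<T>,
		      "scaleAssert() requires an arithmetic type");
	return v * 2;
}
```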
Stefan Klug
0069b9ceb1 ipa: rkisp1: awb: Ignore empty AWB statistics
When the AWB engine doesn't find a valid pixel because all pixels lie
outside the configured colour range, it returns an AWB measurement value
of 255, 255, 255. This leaves the regulation in an unrecoverable state,
noticeable as a completely green image. Fix that by skipping the AWB
calculation in case there were no valid pixels.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-19 15:35:47 +02:00
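The guard described above can be sketched as a validity check on the measured means; the names and the 255 sentinel follow the commit message, not the actual rkisp1 code:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

struct RgbMeans {
	uint8_t r, g, b;
};

/*
 * Sketch: when the hardware found no pixel inside the configured colour
 * range it reports saturated means, so the AWB update is skipped instead
 * of driving the gains into an unrecoverable state.
 */
std::optional<RgbMeans> validMeans(const RgbMeans &m)
{
	/* 255/255/255 means no pixel passed the colour-range filter. */
	if (m.r == 255 && m.g == 255 && m.b == 255)
		return std::nullopt;
	return m;
}
```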
Hou Qi
848a3017b8 gstreamer: Add GstVideoMeta support
GStreamer video-info calculated stride and offset may differ from
those used by the camera.

For stride and offset mismatches, this patch adds video meta to the
buffer if downstream supports VideoMeta through the allocation query.
Otherwise, it creates an internal video pool using the caps and copies
each video frame to this system memory.

Signed-off-by: Hou Qi <qi.hou@nxp.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-19 09:28:19 +01:00
Barnabás Pőcze
e5442c3150 apps: cam: sdl_sink: Support more single-plane formats
With the newly introduced `SDLTexture1Plane` it is easy to handle
any single-plane format that has an SDL equivalent. So use it for
more YUV and RGB formats.

The mapping of RGB formats is not entirely straightforward because
`SDL_PIXELFORMAT_ZZZ...888...` defines a format where the order of
the components is endian dependent, while libcamera's `ZZZ...888...`
formats are derived from the matching DRM formats, and the RGB formats
in question are defined to be little-endian there. So the
endian-independent `SDL_PIXELFORMAT_{ZZZ24,ZZZZ32}` are used.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-05-15 17:41:36 +02:00
Barnabás Pőcze
b24cd12293 apps: cam: sdl_texture: Add SDLTexture1Plane
`SDLTextureYUYV` uses `SDL_PIXELFORMAT_YUY2`, which is a single plane
format. To support other single plane formats, replace `SDLTextureYUYV`
with `SDLTexture1Plane` that can be instantiated with an arbitrary SDL
pixel format and that uses `SDL_UpdateTexture()` to update the texture
using exactly a single plane.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-05-15 17:41:36 +02:00
Barnabás Pőcze
41b0997114 apps: cam: sdl_texture: Drop &rect_ from SDL_Update{NV,}Texture() call
If the entire texture is to be updated, there is no need to specify
the target area explicitly.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-05-15 17:41:36 +02:00
Barnabás Pőcze
02f60006cf apps: cam: sdl_texture: Take list of buffers in span
A non-owning span is sufficient, so use that instead of a vector.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-15 17:41:36 +02:00
Nícolas F. R. A. Prado
f3a12332f6 lc-compliance: Move camera setup to CameraHolder class
Different base classes can be used for different setups on tests, but
all of them will need to setup the camera for the test. To reuse that
code, move it to a separate CameraHolder class that is inherited by test
classes.

Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Sven Püschel <s.pueschel@pengutronix.de>
2025-05-13 20:17:19 +02:00
Paul Elder
d01342f1dc ipa: rkisp1: awb: Declare ControlInfo in AWB
The ControlInfo information for AwbEnable and ColourGains are declared
and exposed in the top-level IPA. These should instead be exposed by the
AWB part of the IPA, as it doesn't make sense to support these controls
when AWB is disabled, for example.

Move the declaration of these controls out of the top-level IPA and into
AWB.

Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-05-13 11:57:24 +02:00
Kieran Bingham
37dccb4584 ipa: Move IPA installations to a subdir
IPAs are expected to live within a directory that is searched by the
IPAManager.  If other non-IPA so files are installed in the same
location, then the user may be presented with an error message reporting
that the module could not be parsed.

Move IPA modules to an IPA-specific subdirectory to ensure we only parse
.so files that are expected to be IPA modules at load time.

Bug: https://bugs.libcamera.org/show_bug.cgi?id=268
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Isaac Scott <isaac.scott@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-12 16:13:06 +02:00
Barnabás Pőcze
54aeb0447c py: Set PYTHONPATH in devenv
If the python bindings are built, then set the `PYTHONPATH` environment
variable in the meson devenv accordingly to make it easy to use.

  $ meson devenv -C build
  [libcamera] $ echo $PYTHONPATH
  /libcamera/build/src/py
  [libcamera] $ python
  Python 3.13.3 (main, Apr  9 2025, 07:44:25) [GCC 14.2.1 20250207] on linux
  Type "help", "copyright", "credits" or "license" for more information.
  >>> import libcamera
  >>> cm = libcamera.CameraManager.singleton()
  [...]
  [129:52:33.293860558] [4133380]  INFO Camera camera_manager.cpp:326 libcamera v0.5.0+169-7dbe74b5-dirty (2025-05-01)
  [...]

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-05-12 09:21:14 +02:00
Sven Püschel
fabfdd8559 libcamera: v4l2_videodevice: Log buffer count on allocation error
Log the actual and requested buffer counts in case of a V4L2 buffer
allocation error, when the requested number of buffers could not be
allocated.

Signed-off-by: Sven Püschel <s.pueschel@pengutronix.de>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-05-09 15:39:17 +02:00
Kieran Bingham
a799415017 apps: qcam: Push the viewfinder role to vector
In commit ee2b011b65 ("apps: cam: Try raw role if default viewfinder
role fails"), the viewfinder role is specified as the default if no role
is yet chosen.

This was unfortunately added by directly accessing the vector rather
than extending the size when the vector is empty. Fix the code to push
the default viewfinder role on to the back of the vector, increasing the
size appropriately.

Fixes: ee2b011b65 ("apps: cam: Try raw role if default viewfinder role fails")
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Tested-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-08 10:51:11 +02:00
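The bug pattern fixed above can be sketched as follows; writing through `operator[]` on an empty vector is undefined behaviour because it does not grow the vector, while `push_back()` does. `StreamRole` stands in for the real libcamera enum:

```cpp
#include <cassert>
#include <vector>

enum class StreamRole { Raw, Viewfinder };

/*
 * Sketch of the fix: append the default role so the vector's size grows,
 * instead of assigning through roles[0] on an empty vector (UB).
 */
void addDefaultRole(std::vector<StreamRole> &roles)
{
	if (roles.empty())
		roles.push_back(StreamRole::Viewfinder);
}
```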
David Plowman
e0405b171e utils: raspberrypi: ctt: Fix integer division error calculating LSC cell size
The cell sizes must be cast to integers as the parameters that
were passed in may be floats.

Bug: https://github.com/raspberrypi/libcamera/issues/260
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Fixes: 36ba0e5515 ("utils: raspberrypi: ctt: Fix NaNs in lens shading tables")
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-08 10:51:10 +02:00
Barnabás Pőcze
2f62701e9e Documentation: guides: application-developer: Remove unnecessary argument
`required: true` is the default for meson's `dependency()` function.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-05 09:20:39 +02:00
Barnabás Pőcze
1200775986 Documentation: guides: pipeline-handler: Query pixel formats once
There is no reason to create an entire new copy of the same thing,
so use the already existing `formats` object.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-02 17:25:30 +02:00
Barnabás Pőcze
b03992e66f Documentation: guides: pipeline-handler: Simplify format collection
I believe a simple range based for loop is easier to understand
here than `std::transform()`. Furthermore, using a for loop enables
the easy filtering of invalid pixel formats.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-02 17:25:30 +02:00
Barnabás Pőcze
f83bab529c Documentation: guides: pipeline-handler: Fix Camera::create() link
Since 6b4771d460 ("libcamera: camera: Hide Camera::create() from the public API")
`Camera::create()` is documented in the internal documentation.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-02 17:25:30 +02:00
Barnabás Pőcze
28d2d4f43c Documentation: guides: pipeline-handler: Fix configuration creation
`PipelineHandler::generateConfiguration()` returns a `std::unique_ptr`.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-02 17:25:30 +02:00
Barnabás Pőcze
dd2ddea8bf Documentation: guides: pipeline-handler: Fix property list file name
It is `property_ids_core.yaml`.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-02 17:25:30 +02:00
Barnabás Pőcze
8e10804413 Documentation: guides: pipeline-handler: Fix camera creation
1. The unique_ptr containing the private data must be passed to
`Camera::create()`.

2. `registerCamera()` needs only the pointer to the `Camera`.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-02 17:25:30 +02:00
Quentin Schulz
ab508f2b55 README.rst: remove unnecessary dependency for qcam
The introducing commit (dff416a84b ("README: Add missing package for
Qt5 tools"); for Qt 5 originally) stated that without the dependency we
would get the following messages:

	Program /usr/lib/x86_64-linux-gnu/qt5/bin/lrelease found: NO
	Program lrelease-qt5 found: NO
	Program lrelease found: NO found  but need: '== 5.14.2'

That was the case for qt5 and is still true for qt6, but the missing
package neither breaks the build nor changes its outcome (for both qt5
and qt6), as qcam is bit-for-bit identical with and without it.

Therefore, let's not mislead users to install an unnecessary package.

Signed-off-by: Quentin Schulz <quentin.schulz@cherry.de>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-05-01 16:48:48 +01:00
Barnabás Pőcze
92ed6140ee ipa: rpi: awb: Remove "fast" parameter
The "fast" parameter has not been used since it first appeared in the
source code. And not only is it not used, but its retrieval from
the configuration since c1597f9896 ("ipa: raspberrypi: Use YamlParser
to replace dependency on boost") has been incorrect. So remove it.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
2025-04-30 14:36:06 +02:00
David Plowman
e4677362a1 ipa: rpi: common: Avoid warnings when AeEnable control is used
The AeEnable control is now just a wrapper that is converted to
ExposureTimeMode and AnalogueGainMode controls instead. Therefore, it
should simply be ignored when we encounter it, without the need for
any warnings.

Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-04-29 16:14:49 +01:00
David Plowman
17e41b2a3a utils: raspberrypi: ctt: Fix NaNs in chromatic aberration tables
NaNs can appear if no black dots can be found and analysed in a
particular region of the calibration image. There needs to be at least
one such dot in every 8x8 cell covering the image.

This is now detected, and an error message issued. No CAC tables are
generated, so CAC is disabled.

Bug: https://github.com/raspberrypi/libcamera/issues/254
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-04-29 16:14:48 +01:00
David Plowman
36ba0e5515 utils: raspberrypi: ctt: Fix NaNs in lens shading tables
The problem occurs when the calculation could lead to a final row (or
column) of grid squares with no pixels in them (and hence, NaNs).

One specific case is a Pi 5 with an image width (or height) of 1364,
so that's 682 Bayer quads. To give 32 grid squares it was calculating
22 quads per cell. However, 31 * 22 = 682 leaving nothing in the final
column.

The fix is to do a rounding-down division by the number of cells minus
one, rather than a rounding-up division by the number of cells. This
turns the corner case from one where the final row/column has no
pixels to one where we don't quite cover the full image, which is how
we have to handle these cases.

Bug: https://github.com/raspberrypi/libcamera/issues/254
Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-04-29 16:14:48 +01:00
Laurent Pinchart
9b50d3c23d libcamera: stream: Add color space to configuration string representation
Extend the string representation of StreamConfiguration, as returned by
the toString() and operator<<() functions, with color space information.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Milan Zamazal <mzamazal@redhat.com>
2025-04-29 17:32:19 +03:00
Kieran Bingham
8751369c5b libcamera: pipeline: rkisp1: Convert to use MediaPipeline
Use the new MediaPipeline to manage and identify all sensors connected
to complex pipelines that can connect to the CSI2 receiver before the
ISP.

This can include chained multiplexors that supply multiple cameras, so
make use of the MediaDevice::locateEntities to search for all cameras
and construct a pipeline for each.

Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Acked-by: Stefan Klug <stefan.klug@ideasonboard.com>
2025-04-29 02:45:21 +09:00
Kieran Bingham
f1721c2f9f libcamera: internal: Add MediaPipeline helper
Provide a MediaPipeline class to help identify and manage pipelines
across a MediaDevice graph.

Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
2025-04-29 02:45:21 +09:00
Kieran Bingham
0785f5f99a libcamera: media_device: Add helper to return matching entities
Provide a helper on the MediaDevice to return a list of all
available entities which match a given function in the graph.

As a drive by, also fix a whitespace error in the documentation of
MediaDevice::setupLink.

Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <umang.jain@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Daniel Scally <dan.scally@ideasonboard.com>
Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Stefan Klug <stefan.klug@ideasonboard.com>
2025-04-29 01:20:50 +09:00
Paul Elder
ee2b011b65 apps: cam: Try raw role if default viewfinder role fails
cam currently defaults to the viewfinder role when no role is specified.
This means that on platforms that only support the raw role (such as a
raw sensor with no softISP on a simple pipeline platform),
generateConfiguration() would return nullptr and cam would bail out.

At least this is what is supposed to happen based on the little
documentation that we have written regarding generateConfiguration(),
specifically in the application writer's guide, which is probably the
most influential piece of documentation:

  The ``Camera::generateConfiguration()`` function accepts a list of
  desired roles and generates a ``CameraConfiguration`` with the best
  stream parameters configuration for each of the requested roles. If the
  camera can handle the requested roles, it returns an initialized
  ``CameraConfiguration`` and a null pointer if it can't.

Currently the simple pipeline handler will return a raw configuration
anyway (if it only supports raw) even if a non-raw role was requested.
Thus cam receives a raw configuration instead of a nullptr when no role
is specified and viewfinder is requested.

However, in the near future, support for raw streams with softISP on the
simple pipeline handler will be merged. This will notably change the
behavior of the simple pipeline handler to return nullptr if a non-raw
role was requested on a platform that only supports raw. This is proper
behavior according to the documentation, but it changes cam's behavior:
it used to capture fine with no parameters, but will no longer be able
to.

Technically this is an issue with the roles API, as we are mixing
roles in the sense of "configuration hints" (eg. viewfinder vs recording
vs still capture) with roles in the sense of "platform capabilities"
(raw vs everything else). In the long term the proper solution is to
rework the roles API.

In the meantime, fix cam so that it will try the raw role if the default
viewfinder role returns no configuration. cam is an app that is capable
of using the raw stream, so this is appropriate behavior. If roles are
specified, then do not retry, as in this situation the user knows what
streams they can use and what they want.

Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Milan Zamazal <mzamazal@redhat.com>
2025-04-29 00:57:54 +09:00
Barnabás Pőcze
72c3deffbb libcamera: controls: Disallow arrays of arrays
Arrays of arrays, even arrays of strings, are not supported by
the current `ControlValue` mechanism, so disable them for now
to trigger compile time errors if attempts are made to use them.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-25 18:06:05 +02:00
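The compile-time guard described above can be sketched with a `static_assert` in a traits template; this is purely illustrative of the technique, not libcamera's actual `ControlValue` machinery:

```cpp
#include <cassert>
#include <string>
#include <type_traits>

/*
 * Sketch: a control traits template rejects element types that are
 * themselves arrays (or strings, which serialize like arrays), so an
 * attempt to declare an array-of-array control fails to compile instead
 * of misbehaving at runtime.
 */
template<typename T>
struct ArrayControlTraits {
	static_assert(!std::is_array_v<T> && !std::is_same_v<T, std::string>,
		      "arrays of arrays (or of strings) are not supported");
	using element_type = T;
};

/* ArrayControlTraits<int> compiles; ArrayControlTraits<int[4]> does not. */
```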
Hou Qi
3569fed7af gstreamer: Fixate colorimetry field during caps negotiation
When libcamerasrc is negotiating with downstream element, it first
extracts colorimetry field from downstream supported caps, then set
this colorimetry to its stream configuration and propagates the
colorimetry downstream.

Currently libcamerasrc only considers the case where there is a single
colorimetry in the colorimetry field of the downstream caps. The issue
is that downstream caps may report a list of supported colorimetries,
which causes libcamerasrc to set an unknown colorimetry in the stream
configuration and fail negotiation with the downstream element.

In order to fix the issue, fixate the colorimetry field before getting
the colorimetry string.

Signed-off-by: Hou Qi <qi.hou@nxp.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-04-23 13:50:53 +01:00
Barnabás Pőcze
e1818265ae utils: ipc: Do not define variables in signal handler up front
Defining the variables at the beginning of the function forces the types
to be default constructible, which may not be desirable; furthermore, it
also forces the move/copy assignment operator to be used when the
deserialized value is retrieved.

Having `T val = f()` has the advantage of benefiting from potential RVO
as well as not requiring `T` to be default constructible, so generate
code in that form by calling `deserialize_call()` with `declare=true`.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-22 20:52:42 +02:00
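The difference between the two forms can be sketched with a type that has no default constructor; all names below are illustrative, not the generated IPC code:

```cpp
#include <cassert>
#include <string>
#include <utility>

/* A payload type without a default constructor. */
struct Payload {
	explicit Payload(std::string d) : data(std::move(d)) {}
	std::string data;
};

Payload deserializePayload()
{
	return Payload("hello");
}

std::string handler()
{
	/*
	 * Before: `Payload val; val = deserializePayload();` -- requires a
	 * default constructor plus an assignment. After: declare at the
	 * point of use, which compiles for this type and also allows RVO.
	 */
	Payload val = deserializePayload();
	return val.data;
}
```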
Barnabás Pőcze
f31da7272e libcamera: ipa_module: Avoid unnecessary copy when getting signature
The `signature()` getter can just return a reference to the private vector
member variable, and let the caller make a copy if needed. Since the
return type is const qualified, this was likely the original intention.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-04-22 14:49:10 +02:00
Paul Elder
86c45c8fdf pipeline: rkisp1: Fix vblank delay
The vblank delay for delayed controls was incorrectly hardcoded to 1.
Get it from the camera sensor properties instead.

Fixes: f72c76eb6e ("rkisp1: Honor the FrameDurationLimits control")
Signed-off-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-22 20:18:11 +09:00
Quentin Schulz
6e24360d3f Revert "libcamera: rkisp1: Eliminate hard-coded resizer limits"
This reverts commit e85c7ddd38.

Linux kernels predating 6.4 (specifically commit 7cfb35d3a800 ("media:
rkisp1: Implement ENUM_FRAMESIZES")) do not have the ioctl in the rkisp1
driver required to dynamically query the resizer limits.

Because of that, maxResolution and minResolution are both {0, 0}
(default value for Size objects) which means filterSensorResolution()
will create an entry for the sensor in sensorSizesMap_ but because the
sensor resolution cannot fit inside the min and max resolution of the
rkisp1, no size is put into this entry in sensorSizesMap_.
On the next call to filterSensorResolution(),
sensorSizesMap_.find(sensor) will return the entry but when attempting
to call back() on iter->second, it'll trigger an assert because the size
array is empty.

Linux kernel 6.1 is supported until December 2027, so it seems premature
to get rid of those hard-coded resizer limits before this happens.

Let's restore the hard-coded resizer limits as fallbacks, actual limits
are still queried from the driver on recent enough kernels.

Fixes: 761545407c ("pipeline: rkisp1: Filter out sensor sizes not supported by the pipeline")
Signed-off-by: Quentin Schulz <quentin.schulz@cherry.de>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-22 13:24:01 +03:00
Barnabás Pőcze
5b73d25967 utils: ipc: Do not duplicate signals in proxy object
The specific proxy type (see `module_ipa_proxy.h.tmpl`) inherits `IPAProxy`,
the specific interface type, and `Object`. The interface type already
provides public definitions of the necessary `Signal<>` objects (see
`module_ipa_interface.h.tmpl`), so do not duplicate them.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-21 16:33:40 +02:00
Barnabás Pőcze
3e4de5f54e apps: cam: capture_script: Simplify bool array parsing
`std::vector<bool>` is a specialization that implements a dynamic
bit vector, therefore it is not suitable to provide storage for
an array of `bool`. Hence a statically sized array is used when
parsing an array of boolean values.

Instead, use the array overload of `std::make_unique` since the
size is known beforehand.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
2025-04-21 16:24:16 +02:00
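The storage issue described above can be sketched as follows: `std::vector<bool>` is a packed bit vector and cannot hand out a `bool *` to its elements, whereas `std::unique_ptr<bool[]>` can. The function name is illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>

/*
 * Sketch: allocate a contiguous, addressable array of bool of a size
 * known up front. The array overload of std::make_unique
 * value-initializes every element to false.
 */
std::unique_ptr<bool[]> makeBoolArray(std::size_t n)
{
	return std::make_unique<bool[]>(n);
}
```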
Barnabás Pőcze
83543f08d5 libcamera: pipeline: imx8-isi: Remove unused variable
The `mbusCodes` variable in `ISICameraConfiguration::validateRaw()`
has been unused since

  87fed43253 ("libcamera: imx8-isi: Break out RAW format selection"),

so remove it.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-21 16:15:21 +02:00
Barnabás Pőcze
ee92b5211c libcamera: pipeline: virtual: Fix typo in log message
pass -> parse

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-21 16:05:43 +02:00
Laurent Pinchart
5d1380f7df utils: rkisp1: gen-csc-table: Support printing CCM in decimal
Add an option to the gen-csc-table.py script to output the CCM matrix in
decimal format instead of hexadecimal. This makes no functional
difference, but is useful to adapt to different coding styles.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-04-17 14:50:44 +03:00
Pavel Machek
50d143ad1d doc: document libtiff dependency for cam
DNG writing is useful when working with Bayer data, but libtiff is
needed for that.

Signed-off-by: Pavel Machek <pavel@ucw.cz>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
[Kieran: Updated text to match other entries]
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-04-15 18:52:31 +01:00
Milan Zamazal
026ed62739 libcamera: software_isp: Fix CCM multiplication
A colour correction matrix (CCM) is applied like this to an RGB pixel
vector P:

  CCM * P

White balance must be applied before CCM.  If CCM is used, software ISP
makes a combined matrix by multiplying the CCM by a white balance gains
CCM-like matrix (WBG).  The multiplication should be as follows to do it
in the correct order:

  CCM * (WBG * P) = (CCM * WBG) * P

The multiplication order in the Lut software ISP algorithm is reversed,
resulting in colour casts.  Let's fix the order.

Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-04-15 18:47:17 +01:00
Barnabás Pőcze
78d9f7bb75 libcamera: pipeline: uvcvideo: Expose Gamma control
Commit 294ead848c ("libcamera: Add gamma control id")
introduced the "Gamma" control, so expose it for UVC
cameras as well using the `V4L2_CID_GAMMA` control.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Tested-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-04-15 13:07:21 +02:00
Barnabás Pőcze
5553efc6b1 libcamera: request: Avoid double map lookup
Use `try_emplace()` that more or less combines `find()` and `operator[]`
in one function.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-15 12:11:40 +02:00
Barnabás Pőcze
7fd317adf0 apps: lc-compliance: Add multi-stream tests
Rename the `SingleStream` test to `SimpleCapture`, and extend it
to support using multiple roles. Then instantiate another test suite
from the `SimpleCapture` test that tests multiple streams in one
capture session.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
2025-04-15 09:54:18 +02:00
Barnabás Pőcze
13cca98046 apps: lc-compliance: Support multiple streams in helpers
Prepare to add a test suite for capture operations with multiple
streams.

Modify the Capture helper class to support multiple roles and streams
in the configure() and capture() operations. The buffer count
of each stream is asserted to be the same.

Multi-stream support will be added in the next patches.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
2025-04-15 09:54:18 +02:00
Pavel Machek
886f877dd3 doc: Mention right meson version
Documentation says 0.60, but in fact 0.63 is required.

Signed-off-by: Pavel Machek <pavel@ucw.cz>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-14 22:07:31 +03:00
Quentin Schulz
ae2b6cb3ca meson: Do not automatically build documentation if sphinx-build-3 is found
Commit aba567338b ("Documentation: Move all dependencies into
features") did an incomplete migration of the documentation boolean
option into a documentation feature.

If sphinx-build-3 binary is found on the host system, the documentation
is built, regardless of the value of the feature option.

This makes sure that the presence of sphinx-build-3 is only checked if
the documentation feature is not disabled (which it is not by default,
as the feature defaults to "auto").

This is essential for reproducibility for build systems where
sphinx-build-3 may or may not be present when libcamera is built, and
also to declutter the generated package if documentation isn't desired.

Fixes: aba567338b ("Documentation: Move all dependencies into features")
Signed-off-by: Quentin Schulz <quentin.schulz@cherry.de>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Quentin Schulz <quentin.schulz@cherry.de>
Tested-by: Quentin Schulz <quentin.schulz@cherry.de>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-14 22:07:17 +03:00
Quentin Schulz
eea723ad72 meson: Make the default value of "documentation" feature explicit
The meson documentation on the feature build options isn't clear if a
missing "value" is legal and if it is, what its default value is ([1]).

Therefore, let's make it explicit by using what is experimentally the
default: auto.

[1] https://mesonbuild.com/Build-options.html#features

Signed-off-by: Quentin Schulz <quentin.schulz@cherry.de>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-14 22:07:11 +03:00
Milan Zamazal
21088e605c libcamera: software_isp: Add a clarification comment to AWB
The computed AWB gains are applied when constructing LUT tables rather
than in awb.cpp itself.  This can look confusing when reading awb.cpp,
let's add a clarifying comment.

Signed-off-by: Milan Zamazal <mzamazal@redhat.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-04-10 00:09:40 +03:00
148 changed files with 4553 additions and 899 deletions

@@ -57,7 +57,8 @@ GENERATE_LATEX = NO
 MACRO_EXPANSION = YES
 EXPAND_ONLY_PREDEF = YES
-INCLUDE_PATH = "@TOP_SRCDIR@/include/libcamera"
+INCLUDE_PATH = "@TOP_BUILDDIR@/include" \
+               "@TOP_SRCDIR@/include"
 INCLUDE_FILE_PATTERNS = *.h
 IMAGE_PATH = "@TOP_SRCDIR@/Documentation/images"

@@ -618,7 +618,7 @@ accordingly. In this example, the application file has been named
   simple_cam = executable('simple-cam',
       'simple-cam.cpp',
-      dependencies: dependency('libcamera', required : true))
+      dependencies: dependency('libcamera'))
 
 The ``dependencies`` line instructs meson to ask ``pkgconfig`` (or ``cmake``) to
 locate the ``libcamera`` library, which the test application will be

@@ -213,7 +213,7 @@ implementations for the overridden class members.
                       std::vector<std::unique_ptr<FrameBuffer>> *buffers) override;
 
       int start(Camera *camera, const ControlList *controls) override;
-      void stop(Camera *camera) override;
+      void stopDevice(Camera *camera) override;
 
       int queueRequestDevice(Camera *camera, Request *request) override;
@@ -247,7 +247,7 @@ implementations for the overridden class members.
          return -1;
   }
 
-  void PipelineHandlerVivid::stop(Camera *camera)
+  void PipelineHandlerVivid::stopDevice(Camera *camera)
   {
   }
@@ -521,14 +521,14 @@ handler and camera manager using `registerCamera`_.
 Finally with a successful construction, we return 'true' indicating that the
 PipelineHandler successfully matched and constructed a device.
 
-.. _Camera::create: https://libcamera.org/api-html/classlibcamera_1_1Camera.html#a453740e0d2a2f495048ae307a85a2574
+.. _Camera::create: https://libcamera.org/internal-api-html/classlibcamera_1_1Camera.html#adf5e6c22411f953bfaa1ae21155d6c31
 .. _registerCamera: https://libcamera.org/api-html/classlibcamera_1_1PipelineHandler.html#adf02a7f1bbd87aca73c0e8d8e0e6c98b
 
 .. code-block:: cpp
 
    std::set<Stream *> streams{ &data->stream_ };
-   std::shared_ptr<Camera> camera = Camera::create(this, data->video_->deviceName(), streams);
-   registerCamera(std::move(camera), std::move(data));
+   std::shared_ptr<Camera> camera = Camera::create(std::move(data), data->video_->deviceName(), streams);
+   registerCamera(std::move(camera));
 
    return true;
@@ -554,8 +554,7 @@ Our match function should now look like the following:
       /* Create and register the camera. */
       std::set<Stream *> streams{ &data->stream_ };
-      const std::string &id = data->video_->deviceName();
-      std::shared_ptr<Camera> camera = Camera::create(data.release(), id, streams);
+      std::shared_ptr<Camera> camera = Camera::create(std::move(data), data->video_->deviceName(), streams);
       registerCamera(std::move(camera));
 
       return true;
@@ -593,11 +592,11 @@ immutable properties of the ``Camera`` device.
 The libcamera controls and properties are defined in YAML form which is
 processed to automatically generate documentation and interfaces. Controls are
 defined by the src/libcamera/`control_ids_core.yaml`_ file and camera properties
-are defined by src/libcamera/`properties_ids_core.yaml`_.
+are defined by src/libcamera/`property_ids_core.yaml`_.
 
 .. _controls framework: https://libcamera.org/api-html/controls_8h.html
 .. _control_ids_core.yaml: https://libcamera.org/api-html/control__ids_8h.html
-.. _properties_ids_core.yaml: https://libcamera.org/api-html/property__ids_8h.html
+.. _property_ids_core.yaml: https://libcamera.org/api-html/property__ids_8h.html
 
 Pipeline handlers can optionally register the list of controls an application
 can set as well as a list of immutable camera properties. Being both
@@ -800,8 +799,7 @@ derived class, and assign it to a base class pointer.
 .. code-block:: cpp
 
-   VividCameraData *data = cameraData(camera);
-   CameraConfiguration *config = new VividCameraConfiguration();
+   auto config = std::make_unique<VividCameraConfiguration>();
 
 A ``CameraConfiguration`` is specific to each pipeline, so you can only create
 it from the pipeline handler code path. Applications can also generate an empty
@@ -829,9 +827,7 @@ To generate a ``StreamConfiguration``, you need a list of pixel formats and
 frame sizes which are supported as outputs of the stream. You can fetch a map of
 the ``V4LPixelFormat`` and ``SizeRange`` supported by the underlying output
 device, but the pipeline handler needs to convert this to a
-``libcamera::PixelFormat`` type to pass to applications. We do this here using
-``std::transform`` to convert the formats and populate a new ``PixelFormat`` map
-as shown below.
+``libcamera::PixelFormat`` type to pass to applications.
 
 Continue adding the following code example to our ``generateConfiguration``
 implementation.
@@ -841,14 +837,12 @@ implementation.
    std::map<V4L2PixelFormat, std::vector<SizeRange>> v4l2Formats =
        data->video_->formats();
    std::map<PixelFormat, std::vector<SizeRange>> deviceFormats;
-   std::transform(v4l2Formats.begin(), v4l2Formats.end(),
-                  std::inserter(deviceFormats, deviceFormats.begin()),
-                  [&](const decltype(v4l2Formats)::value_type &format) {
-                          return decltype(deviceFormats)::value_type{
-                                  format.first.toPixelFormat(),
-                                  format.second
-                          };
-                  });
+
+   for (auto &[v4l2PixelFormat, sizes] : v4l2Formats) {
+           PixelFormat pixelFormat = v4l2PixelFormat.toPixelFormat();
+           if (pixelFormat.isValid())
+                   deviceFormats.try_emplace(pixelFormat, std::move(sizes));
+   }
 
 The `StreamFormats`_ class holds information about the pixel formats and frame
 sizes that a stream can support. The class groups size information by the pixel
@@ -938,9 +932,9 @@ Add the following function implementation to your file:
 
       StreamConfiguration &cfg = config_[0];
-      const std::vector<libcamera::PixelFormat> formats = cfg.formats().pixelformats();
+      const std::vector<libcamera::PixelFormat> &formats = cfg.formats().pixelformats();
       if (std::find(formats.begin(), formats.end(), cfg.pixelFormat) == formats.end()) {
-            cfg.pixelFormat = cfg.formats().pixelformats()[0];
+            cfg.pixelFormat = formats[0];
             LOG(VIVID, Debug) << "Adjusting format to " << cfg.pixelFormat.toString();
             status = Adjusted;
       }
@@ -1158,7 +1152,7 @@ available to the devices which have to be started and ready to produce
 images. At the end of a capture session the ``Camera`` device needs to be
 stopped, to gracefully clean up any allocated memory and stop the hardware
 devices. Pipeline handlers implement two functions for these purposes, the
-``start()`` and ``stop()`` functions.
+``start()`` and ``stopDevice()`` functions.
 
 The memory initialization phase that happens at ``start()`` time serves to
 configure video devices to be able to use memory buffers exported as dma-buf
@@ -1261,8 +1255,8 @@ algorithms, or other devices you should also stop them.
 .. _releaseBuffers: https://libcamera.org/api-html/classlibcamera_1_1V4L2VideoDevice.html#a191619c152f764e03bc461611f3fcd35
 
 Of course we also need to handle the corresponding actions to stop streaming on
-a device, Add the following to the ``stop`` function, to stop the stream with
-the `streamOff`_ function and release all buffers.
+a device, Add the following to the ``stopDevice()`` function, to stop the
+stream with the `streamOff`_ function and release all buffers.
 
 .. _streamOff: https://libcamera.org/api-html/classlibcamera_1_1V4L2VideoDevice.html#a61998710615bdf7aa25a046c8565ed66

@@ -116,10 +116,8 @@ endif
 #
 # Sphinx
 #
-sphinx = find_program('sphinx-build-3', required : false)
-if not sphinx.found()
-    sphinx = find_program('sphinx-build', required : get_option('documentation'))
-endif
+sphinx = find_program('sphinx-build-3', 'sphinx-build',
+                      required : get_option('documentation'))
 
 if sphinx.found()
     docs_sources = [

@@ -44,7 +44,7 @@ A C++ toolchain: [required]
         Either {g++, clang}
 
 Meson Build system: [required]
-        meson (>= 0.60) ninja-build pkg-config
+        meson (>= 0.63) ninja-build pkg-config
 
 for the libcamera core: [required]
         libyaml-dev python3-yaml python3-ply python3-jinja2
@@ -83,9 +83,10 @@ for cam: [optional]
         - libdrm-dev: Enables the KMS sink
         - libjpeg-dev: Enables MJPEG on the SDL sink
         - libsdl2-dev: Enables the SDL sink
+        - libtiff-dev: Enables writing DNG
 
 for qcam: [optional]
-        libtiff-dev qt6-base-dev qt6-tools-dev-tools
+        libtiff-dev qt6-base-dev
 
 for tracing with lttng: [optional]
         liblttng-ust-dev python3-jinja2 lttng-tools

@@ -120,7 +120,7 @@ struct control_type<Point> {
 };
 
 template<typename T, std::size_t N>
-struct control_type<Span<T, N>> : public control_type<std::remove_cv_t<T>> {
+struct control_type<Span<T, N>, std::enable_if_t<control_type<std::remove_cv_t<T>>::size == 0>> : public control_type<std::remove_cv_t<T>> {
 	static constexpr std::size_t size = N;
 };

@@ -26,6 +26,7 @@ struct FrameMetadata {
 		FrameSuccess,
 		FrameError,
 		FrameCancelled,
+		FrameStartup,
 	};
 
 	struct Plane {


@@ -0,0 +1,68 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2024, Raspberry Pi Ltd
*
* Camera recovery algorithm
*/
#pragma once
#include <stdint.h>
namespace libcamera {
class ClockRecovery
{
public:
ClockRecovery();
void configure(unsigned int numSamples = 100, unsigned int maxJitter = 2000,
unsigned int minSamples = 10, unsigned int errorThreshold = 50000);
void reset();
void addSample();
void addSample(uint64_t input, uint64_t output);
uint64_t getOutput(uint64_t input);
private:
/* Approximate number of samples over which the model state persists. */
unsigned int numSamples_;
/* Remove any output jitter larger than this immediately. */
unsigned int maxJitter_;
/* Number of samples required before we start to use model estimates. */
unsigned int minSamples_;
/* Threshold above which we assume the wallclock has been reset. */
unsigned int errorThreshold_;
/* How many samples seen (up to numSamples_). */
unsigned int count_;
/* This gets subtracted from all input values, just to make the numbers easier. */
uint64_t inputBase_;
/* As above, for the output. */
uint64_t outputBase_;
/* The previous input sample. */
uint64_t lastInput_;
/* The previous output sample. */
uint64_t lastOutput_;
/* Average x value seen so far. */
double xAve_;
/* Average y value seen so far */
double yAve_;
/* Average x^2 value seen so far. */
double x2Ave_;
/* Average x*y value seen so far. */
double xyAve_;
/*
* The latest estimate of linear parameters to derive the output clock
* from the input.
*/
double slope_;
double offset_;
/* Use this cumulative error to monitor for spontaneous clock updates. */
double error_;
};
} /* namespace libcamera */

@@ -10,13 +10,15 @@
 #include <stdint.h>
 #include <unordered_map>
 
+#include <libcamera/base/object.h>
+
 #include <libcamera/controls.h>
 
 namespace libcamera {
 
 class V4L2Device;
 
-class DelayedControls
+class DelayedControls : public Object
 {
 public:
 	struct ControlParams {

@@ -309,7 +309,6 @@ public:
 	serialize(const Flags<E> &data, [[maybe_unused]] ControlSerializer *cs = nullptr)
 	{
 		std::vector<uint8_t> dataVec;
-		dataVec.reserve(sizeof(Flags<E>));
 		appendPOD<uint32_t>(dataVec, static_cast<typename Flags<E>::Type>(data));
 
 		return { dataVec, {} };

@@ -29,7 +29,7 @@ public:
 	bool isValid() const;
 
 	const struct IPAModuleInfo &info() const;
-	const std::vector<uint8_t> signature() const;
+	const std::vector<uint8_t> &signature() const;
 	const std::string &path() const;
 
 	bool load();

@@ -8,6 +8,7 @@
 #include <algorithm>
 #include <sstream>
+#include <type_traits>
 #include <vector>
 
 #include <libcamera/base/log.h>
@@ -20,17 +21,19 @@ namespace libcamera {
 LOG_DECLARE_CATEGORY(Matrix)
 
 #ifndef __DOXYGEN__
-template<typename T, unsigned int Rows, unsigned int Cols,
-	 std::enable_if_t<std::is_arithmetic_v<T>> * = nullptr>
-#else
-template<typename T, unsigned int Rows, unsigned int Cols>
+template<typename T>
+bool matrixInvert(Span<const T> dataIn, Span<T> dataOut, unsigned int dim,
+		  Span<T> scratchBuffer, Span<unsigned int> swapBuffer);
 #endif /* __DOXYGEN__ */
+
+template<typename T, unsigned int Rows, unsigned int Cols>
 class Matrix
 {
+	static_assert(std::is_arithmetic_v<T>, "Matrix type must be arithmetic");
+
 public:
-	Matrix()
+	constexpr Matrix()
 	{
-		data_.fill(static_cast<T>(0));
 	}
 
 	Matrix(const std::array<T, Rows * Cols> &data)
@@ -38,7 +41,12 @@ public:
 		std::copy(data.begin(), data.end(), data_.begin());
 	}
 
-	static Matrix identity()
+	Matrix(const Span<const T, Rows * Cols> data)
+	{
+		std::copy(data.begin(), data.end(), data_.begin());
+	}
+
+	static constexpr Matrix identity()
 	{
 		Matrix ret;
 		for (size_t i = 0; i < std::min(Rows, Cols); i++)
@@ -66,14 +74,14 @@ public:
 		return out.str();
 	}
 
-	Span<const T, Rows * Cols> data() const { return data_; }
+	constexpr Span<const T, Rows * Cols> data() const { return data_; }
 
-	Span<const T, Cols> operator[](size_t i) const
+	constexpr Span<const T, Cols> operator[](size_t i) const
 	{
 		return Span<const T, Cols>{ &data_.data()[i * Cols], Cols };
 	}
 
-	Span<T, Cols> operator[](size_t i)
+	constexpr Span<T, Cols> operator[](size_t i)
 	{
 		return Span<T, Cols>{ &data_.data()[i * Cols], Cols };
 	}
@@ -90,8 +98,30 @@ public:
 		return *this;
 	}
 
+	Matrix<T, Rows, Cols> inverse(bool *ok = nullptr) const
+	{
+		static_assert(Rows == Cols, "Matrix must be square");
+
+		Matrix<T, Rows, Cols> inverse;
+		std::array<T, Rows * Cols * 2> scratchBuffer;
+		std::array<unsigned int, Rows> swapBuffer;
+		bool res = matrixInvert(Span<const T>(data_),
+					Span<T>(inverse.data_),
+					Rows,
+					Span<T>(scratchBuffer),
+					Span<unsigned int>(swapBuffer));
+		if (ok)
+			*ok = res;
+		return inverse;
+	}
+
 private:
-	std::array<T, Rows * Cols> data_;
+	/*
+	 * \todo The initializer is only necessary for the constructor to be
+	 * constexpr in C++17. Remove the initializer as soon as we are on
+	 * C++20.
+	 */
+	std::array<T, Rows * Cols> data_ = {};
 };
 
 #ifndef __DOXYGEN__
@@ -123,21 +153,16 @@ Matrix<U, Rows, Cols> operator*(const Matrix<U, Rows, Cols> &m, T d)
 	return d * m;
 }
 
-#ifndef __DOXYGEN__
-template<typename T,
-	 unsigned int R1, unsigned int C1,
-	 unsigned int R2, unsigned int C2,
-	 std::enable_if_t<C1 == R2> * = nullptr>
-#else
-template<typename T, unsigned int R1, unsigned int C1, unsigned int R2, unsigned int C2>
-#endif /* __DOXYGEN__ */
-Matrix<T, R1, C2> operator*(const Matrix<T, R1, C1> &m1, const Matrix<T, R2, C2> &m2)
+template<typename T1, unsigned int R1, unsigned int C1, typename T2, unsigned int R2, unsigned int C2>
+constexpr Matrix<std::common_type_t<T1, T2>, R1, C2> operator*(const Matrix<T1, R1, C1> &m1,
+							       const Matrix<T2, R2, C2> &m2)
 {
-	Matrix<T, R1, C2> result;
+	static_assert(C1 == R2, "Matrix dimensions must match for multiplication");
+
+	Matrix<std::common_type_t<T1, T2>, R1, C2> result;
 
 	for (unsigned int i = 0; i < R1; i++) {
 		for (unsigned int j = 0; j < C2; j++) {
-			T sum = 0;
+			std::common_type_t<T1, T2> sum = 0;
 
 			for (unsigned int k = 0; k < C1; k++)
 				sum += m1[i][k] * m2[k][j];
@@ -150,7 +175,7 @@ Matrix<T, R1, C2> operator*(const Matrix<T, R1, C1> &m1, const Matrix<T, R2, C2>
 }
 
 template<typename T, unsigned int Rows, unsigned int Cols>
-Matrix<T, Rows, Cols> operator+(const Matrix<T, Rows, Cols> &m1, const Matrix<T, Rows, Cols> &m2)
+constexpr Matrix<T, Rows, Cols> operator+(const Matrix<T, Rows, Cols> &m1, const Matrix<T, Rows, Cols> &m2)
 {
 	Matrix<T, Rows, Cols> result;

@@ -55,6 +55,8 @@ public:
 	Signal<> disconnected;
 
+	std::vector<MediaEntity *> locateEntities(unsigned int function);
+
 protected:
 	std::string logPrefix() const override;


@@ -0,0 +1,59 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2024, Ideas on Board Oy
*
* Media pipeline support
*/
#pragma once
#include <list>
#include <string>
#include <libcamera/base/log.h>
namespace libcamera {
class CameraSensor;
class MediaEntity;
class MediaLink;
class MediaPad;
struct V4L2SubdeviceFormat;
class MediaPipeline
{
public:
int init(MediaEntity *source, std::string_view sink);
int initLinks();
int configure(CameraSensor *sensor, V4L2SubdeviceFormat *);
private:
struct Entity {
/* The media entity, always valid. */
MediaEntity *entity;
/*
* Whether or not the entity is a subdev that supports the
* routing API.
*/
bool supportsRouting;
/*
* The local sink pad connected to the upstream entity, null for
* the camera sensor at the beginning of the pipeline.
*/
const MediaPad *sink;
/*
* The local source pad connected to the downstream entity, null
* for the video node at the end of the pipeline.
*/
const MediaPad *source;
/*
* The link on the source pad, to the downstream entity, null
* for the video node at the end of the pipeline.
*/
MediaLink *sourceLink;
};
std::list<Entity> entities_;
};
} /* namespace libcamera */

@@ -11,6 +11,7 @@ libcamera_internal_headers = files([
     'camera_manager.h',
     'camera_sensor.h',
     'camera_sensor_properties.h',
+    'clock_recovery.h',
     'control_serializer.h',
     'control_validator.h',
     'converter.h',
@@ -32,6 +33,7 @@ libcamera_internal_headers = files([
     'matrix.h',
     'media_device.h',
     'media_object.h',
+    'media_pipeline.h',
     'pipeline_handler.h',
     'process.h',
     'pub_key.h',

@@ -11,6 +11,7 @@
 #include <string>
 #include <vector>
 
+#include <libcamera/base/class.h>
 #include <libcamera/base/signal.h>
 #include <libcamera/base/unique_fd.h>
@@ -42,6 +43,8 @@ public:
 	Signal<enum ExitStatus, int> finished;
 
 private:
+	LIBCAMERA_DISABLE_COPY_AND_MOVE(Process)
+
 	void closeAllFdsExcept(const std::vector<int> &fds);
 	int isolate();
 	void died(int wstatus);

@@ -13,6 +13,7 @@
 #include <numeric>
 #include <optional>
 #include <ostream>
+#include <type_traits>
 
 #include <libcamera/base/log.h>
 #include <libcamera/base/span.h>
@@ -42,8 +43,12 @@ public:
 	constexpr Vector(const std::array<T, Rows> &data)
 	{
-		for (unsigned int i = 0; i < Rows; i++)
-			data_[i] = data[i];
+		std::copy(data.begin(), data.end(), data_.begin());
+	}
+
+	constexpr Vector(const Span<const T, Rows> data)
+	{
+		std::copy(data.begin(), data.end(), data_.begin());
 	}
 
 	const T &operator[](size_t i) const
@@ -291,13 +296,13 @@ private:
 template<typename T>
 using RGB = Vector<T, 3>;
 
-template<typename T, unsigned int Rows, unsigned int Cols>
-Vector<T, Rows> operator*(const Matrix<T, Rows, Cols> &m, const Vector<T, Cols> &v)
+template<typename T, typename U, unsigned int Rows, unsigned int Cols>
+Vector<std::common_type_t<T, U>, Rows> operator*(const Matrix<T, Rows, Cols> &m, const Vector<U, Cols> &v)
 {
-	Vector<T, Rows> result;
+	Vector<std::common_type_t<T, U>, Rows> result;
 
 	for (unsigned int i = 0; i < Rows; i++) {
-		T sum = 0;
+		std::common_type_t<T, U> sum = 0;
 
 		for (unsigned int j = 0; j < Cols; j++)
 			sum += m[i][j] * v[j];
 
 		result[i] = sum;

@@ -52,7 +52,8 @@ struct ConfigResult {
 struct StartResult {
 	libcamera.ControlList controls;
-	int32 dropFrameCount;
+	int32 startupFrameCount;
+	int32 invalidFrameCount;
 };
 
 struct PrepareParams {

@@ -90,6 +90,7 @@ foreach mode, entry : controls_map
         command : [gen_controls, '-o', '@OUTPUT@',
                    '--mode', mode, '-t', template_file,
                    '-r', ranges_file, '@INPUT@'],
+        depend_files : [py_mod_controls],
         env : py_build_env,
         install : true,
         install_dir : libcamera_headers_install_dir)

@@ -2,7 +2,7 @@
 project('libcamera', 'c', 'cpp',
     meson_version : '>= 0.63',
-    version : '0.5.0',
+    version : '0.5.1',
     default_options : [
         'werror=true',
         'warning_level=2',

@@ -18,6 +18,7 @@ option('cam',
 
 option('documentation',
         type : 'feature',
+        value : 'auto',
         description : 'Generate the project documentation')
 
 option('doc_werror',

@@ -1079,7 +1079,7 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques
 			buffer.internalBuffer = frameBuffer;
 
 			descriptor->request_->addBuffer(sourceStream->stream(),
-							frameBuffer, nullptr);
+							frameBuffer);
 
 			requestedStreams.insert(sourceStream);
 		}


@@ -62,11 +62,32 @@ CameraSession::CameraSession(CameraManager *cm,
 		return;
 	}
-	std::vector<StreamRole> roles = StreamKeyValueParser::roles(options_[OptStream]);
+	std::vector<StreamRole> roles =
+		StreamKeyValueParser::roles(options_[OptStream]);
+	std::vector<std::vector<StreamRole>> tryRoles;
+	if (!roles.empty()) {
+		/*
+		 * If the roles are explicitly specified then there's no need
+		 * to try other roles
+		 */
+		tryRoles.push_back(roles);
+	} else {
+		tryRoles.push_back({ StreamRole::Viewfinder });
+		tryRoles.push_back({ StreamRole::Raw });
+	}
-	std::unique_ptr<CameraConfiguration> config =
-		camera_->generateConfiguration(roles);
-	if (!config || config->size() != roles.size()) {
+	std::unique_ptr<CameraConfiguration> config;
+	bool valid = false;
+	for (std::vector<StreamRole> &rolesIt : tryRoles) {
+		config = camera_->generateConfiguration(rolesIt);
+		if (config && config->size() == rolesIt.size()) {
+			roles = rolesIt;
+			valid = true;
+			break;
+		}
+	}
+	if (!valid) {
 		std::cerr << "Failed to get default stream configuration"
 			  << std::endl;
 		return;


@@ -8,6 +8,7 @@
 #include "capture_script.h"
 #include <iostream>
+#include <memory>
 #include <stdio.h>
 #include <stdlib.h>
@@ -521,45 +522,22 @@ ControlValue CaptureScript::parseArrayControl(const ControlId *id,
 	case ControlTypeNone:
 		break;
 	case ControlTypeBool: {
-		/*
-		 * This is unpleasant, but we cannot use an std::vector<> as its
-		 * boolean type overload does not allow to access the raw data,
-		 * as boolean values are stored in a bitmask for efficiency.
-		 *
-		 * As we need a contiguous memory region to wrap in a Span<>,
-		 * use an array instead but be strict about not overflowing it
-		 * by limiting the number of controls we can store.
-		 *
-		 * Be loud but do not fail, as the issue would present at
-		 * runtime and it's not fatal.
-		 */
-		static constexpr unsigned int kMaxNumBooleanControls = 1024;
-		std::array<bool, kMaxNumBooleanControls> values;
-		unsigned int idx = 0;
+		auto values = std::make_unique<bool[]>(repr.size());
-		for (const std::string &s : repr) {
-			bool val;
+		for (std::size_t i = 0; i < repr.size(); i++) {
+			const auto &s = repr[i];
 			if (s == "true") {
-				val = true;
+				values[i] = true;
 			} else if (s == "false") {
-				val = false;
+				values[i] = false;
 			} else {
 				unpackFailure(id, s);
 				return value;
 			}
-			if (idx == kMaxNumBooleanControls) {
-				std::cerr << "Cannot parse more than "
-					  << kMaxNumBooleanControls
-					  << " boolean controls" << std::endl;
-				break;
-			}
-			values[idx++] = val;
 		}
-		value = Span<bool>(values.data(), idx);
+		value = Span<bool>(values.get(), repr.size());
 		break;
 	}
 	case ControlTypeByte: {
@@ -600,10 +578,6 @@ ControlValue CaptureScript::parseArrayControl(const ControlId *id,
 		value = Span<const float>(values.data(), values.size());
 		break;
 	}
-	case ControlTypeString: {
-		value = Span<const std::string>(repr.data(), repr.size());
-		break;
-	}
 	default:
 		std::cerr << "Unsupported control type" << std::endl;
 		break;


@@ -450,8 +450,6 @@ int Device::openCard()
 	}
 	for (struct dirent *res; (res = readdir(folder));) {
-		uint64_t cap;
 		if (strncmp(res->d_name, "card", 4))
 			continue;
@@ -465,15 +463,22 @@ int Device::openCard()
 		}
 		/*
-		 * Skip devices that don't support the modeset API, to avoid
-		 * selecting a DRM device corresponding to a GPU. There is no
-		 * modeset capability, but the kernel returns an error for most
-		 * caps if mode setting isn't support by the driver. The
-		 * DRM_CAP_DUMB_BUFFER capability is one of those, other would
-		 * do as well. The capability value itself isn't relevant.
+		 * Skip non-display devices. While this could in theory be done
+		 * by checking for support of the mode setting API, some
+		 * out-of-tree render-only GPU drivers (namely powervr)
+		 * incorrectly set the DRIVER_MODESET driver feature. Check for
+		 * the presence of at least one CRTC, encoder and connector
+		 * instead.
		 */
-		ret = drmGetCap(fd_, DRM_CAP_DUMB_BUFFER, &cap);
-		if (ret < 0) {
+		std::unique_ptr<drmModeRes, decltype(&drmModeFreeResources)> resources{
+			drmModeGetResources(fd_),
+			&drmModeFreeResources
+		};
+		if (!resources ||
+		    resources->count_connectors <= 0 ||
+		    resources->count_crtcs <= 0 ||
+		    resources->count_encoders <= 0) {
+			resources.reset();
 			drmClose(fd_);
 			fd_ = -1;
 			continue;


@@ -34,6 +34,7 @@ if libsdl2.found()
 	cam_sources += files([
 		'sdl_sink.cpp',
 		'sdl_texture.cpp',
+		'sdl_texture_1plane.cpp',
 		'sdl_texture_yuv.cpp',
 	])


@@ -11,6 +11,7 @@
 #include <fcntl.h>
 #include <iomanip>
 #include <iostream>
+#include <optional>
 #include <signal.h>
 #include <sstream>
 #include <string.h>
@@ -22,6 +23,7 @@
 #include "../common/event_loop.h"
 #include "../common/image.h"
+#include "sdl_texture_1plane.h"
 #ifdef HAVE_LIBJPEG
 #include "sdl_texture_mjpg.h"
 #endif
@@ -31,6 +33,46 @@ using namespace libcamera;
 using namespace std::chrono_literals;
+namespace {
+
+std::optional<SDL_PixelFormatEnum> singlePlaneFormatToSDL(const libcamera::PixelFormat &f)
+{
+	switch (f) {
+	case libcamera::formats::RGB888:
+		return SDL_PIXELFORMAT_BGR24;
+	case libcamera::formats::BGR888:
+		return SDL_PIXELFORMAT_RGB24;
+	case libcamera::formats::RGBA8888:
+		return SDL_PIXELFORMAT_ABGR32;
+	case libcamera::formats::ARGB8888:
+		return SDL_PIXELFORMAT_BGRA32;
+	case libcamera::formats::BGRA8888:
+		return SDL_PIXELFORMAT_ARGB32;
+	case libcamera::formats::ABGR8888:
+		return SDL_PIXELFORMAT_RGBA32;
+#if SDL_VERSION_ATLEAST(2, 29, 1)
+	case libcamera::formats::RGBX8888:
+		return SDL_PIXELFORMAT_XBGR32;
+	case libcamera::formats::XRGB8888:
+		return SDL_PIXELFORMAT_BGRX32;
+	case libcamera::formats::BGRX8888:
+		return SDL_PIXELFORMAT_XRGB32;
+	case libcamera::formats::XBGR8888:
+		return SDL_PIXELFORMAT_RGBX32;
+#endif
+	case libcamera::formats::YUYV:
+		return SDL_PIXELFORMAT_YUY2;
+	case libcamera::formats::UYVY:
+		return SDL_PIXELFORMAT_UYVY;
+	case libcamera::formats::YVYU:
+		return SDL_PIXELFORMAT_YVYU;
+	}
+
+	return {};
+}
+
+} /* namespace */
 SDLSink::SDLSink()
 	: window_(nullptr), renderer_(nullptr), rect_({}),
	  init_(false)
@@ -62,25 +104,20 @@ int SDLSink::configure(const libcamera::CameraConfiguration &config)
 	rect_.w = cfg.size.width;
 	rect_.h = cfg.size.height;
-	switch (cfg.pixelFormat) {
+	if (auto sdlFormat = singlePlaneFormatToSDL(cfg.pixelFormat))
+		texture_ = std::make_unique<SDLTexture1Plane>(rect_, *sdlFormat, cfg.stride);
 #ifdef HAVE_LIBJPEG
-	case libcamera::formats::MJPEG:
+	else if (cfg.pixelFormat == libcamera::formats::MJPEG)
 		texture_ = std::make_unique<SDLTextureMJPG>(rect_);
-		break;
 #endif
 #if SDL_VERSION_ATLEAST(2, 0, 16)
-	case libcamera::formats::NV12:
+	else if (cfg.pixelFormat == libcamera::formats::NV12)
 		texture_ = std::make_unique<SDLTextureNV12>(rect_, cfg.stride);
-		break;
 #endif
-	case libcamera::formats::YUYV:
-		texture_ = std::make_unique<SDLTextureYUYV>(rect_, cfg.stride);
-		break;
-	default:
-		std::cerr << "Unsupported pixel format "
-			  << cfg.pixelFormat.toString() << std::endl;
+	else {
+		std::cerr << "Unsupported pixel format " << cfg.pixelFormat << std::endl;
 		return -EINVAL;
-	};
+	}
 	return 0;
 }


@@ -7,7 +7,7 @@
 #pragma once
-#include <vector>
+#include <libcamera/base/span.h>
 #include <SDL2/SDL.h>
@@ -19,7 +19,7 @@ public:
 	SDLTexture(const SDL_Rect &rect, uint32_t pixelFormat, const int stride);
 	virtual ~SDLTexture();
 	int create(SDL_Renderer *renderer);
-	virtual void update(const std::vector<libcamera::Span<const uint8_t>> &data) = 0;
+	virtual void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) = 0;
 	SDL_Texture *get() const { return ptr_; }
 protected:


@@ -0,0 +1,17 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2025, Ideas on Board Oy
*
* SDL single plane textures
*/
#include "sdl_texture_1plane.h"
#include <assert.h>
void SDLTexture1Plane::update(libcamera::Span<const libcamera::Span<const uint8_t>> data)
{
assert(data.size() == 1);
assert(data[0].size_bytes() == std::size_t(rect_.h) * std::size_t(stride_));
SDL_UpdateTexture(ptr_, nullptr, data[0].data(), stride_);
}


@@ -0,0 +1,18 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2025, Ideas on Board Oy
*
* SDL single plane textures
*/
#pragma once
#include "sdl_texture.h"
class SDLTexture1Plane final : public SDLTexture
{
public:
using SDLTexture::SDLTexture;
void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) override;
};


@@ -76,7 +76,7 @@ int SDLTextureMJPG::decompress(Span<const uint8_t> data)
 	return 0;
 }
-void SDLTextureMJPG::update(const std::vector<libcamera::Span<const uint8_t>> &data)
+void SDLTextureMJPG::update(libcamera::Span<const libcamera::Span<const uint8_t>> data)
 {
 	decompress(data[0]);
 	SDL_UpdateTexture(ptr_, nullptr, rgb_.get(), stride_);


@@ -14,7 +14,7 @@ class SDLTextureMJPG : public SDLTexture
 public:
 	SDLTextureMJPG(const SDL_Rect &rect);
-	void update(const std::vector<libcamera::Span<const uint8_t>> &data) override;
+	void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) override;
 private:
 	int decompress(libcamera::Span<const uint8_t> data);


@@ -15,19 +15,9 @@ SDLTextureNV12::SDLTextureNV12(const SDL_Rect &rect, unsigned int stride)
 {
 }
-void SDLTextureNV12::update(const std::vector<libcamera::Span<const uint8_t>> &data)
+void SDLTextureNV12::update(libcamera::Span<const libcamera::Span<const uint8_t>> data)
 {
-	SDL_UpdateNVTexture(ptr_, &rect_, data[0].data(), stride_,
+	SDL_UpdateNVTexture(ptr_, nullptr, data[0].data(), stride_,
 			    data[1].data(), stride_);
 }
 #endif
-SDLTextureYUYV::SDLTextureYUYV(const SDL_Rect &rect, unsigned int stride)
-	: SDLTexture(rect, SDL_PIXELFORMAT_YUY2, stride)
-{
-}
-
-void SDLTextureYUYV::update(const std::vector<libcamera::Span<const uint8_t>> &data)
-{
-	SDL_UpdateTexture(ptr_, &rect_, data[0].data(), stride_);
-}


@@ -14,13 +14,6 @@ class SDLTextureNV12 : public SDLTexture
 {
 public:
 	SDLTextureNV12(const SDL_Rect &rect, unsigned int stride);
-	void update(const std::vector<libcamera::Span<const uint8_t>> &data) override;
+	void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) override;
 };
 #endif
-
-class SDLTextureYUYV : public SDLTexture
-{
-public:
-	SDLTextureYUYV(const SDL_Rect &rect, unsigned int stride);
-	void update(const std::vector<libcamera::Span<const uint8_t>> &data) override;
-};


@@ -98,12 +98,12 @@ unsigned int Image::numPlanes() const
 Span<uint8_t> Image::data(unsigned int plane)
 {
-	assert(plane <= planes_.size());
+	assert(plane < planes_.size());
 	return planes_[plane];
 }
 Span<const uint8_t> Image::data(unsigned int plane) const
 {
-	assert(plane <= planes_.size());
+	assert(plane < planes_.size());
 	return planes_[plane];
 }


@@ -42,9 +42,8 @@ KeyValueParser::Options StreamKeyValueParser::parse(const char *arguments)
 std::vector<StreamRole> StreamKeyValueParser::roles(const OptionValue &values)
 {
-	/* If no configuration values to examine default to viewfinder. */
 	if (values.empty())
-		return { StreamRole::Viewfinder };
+		return {};
 	const std::vector<OptionValue> &streamParameters = values.toArray();


@@ -23,12 +23,29 @@ Capture::~Capture()
 	stop();
 }
-void Capture::configure(StreamRole role)
+void Capture::configure(libcamera::Span<const libcamera::StreamRole> roles)
 {
-	config_ = camera_->generateConfiguration({ role });
+	assert(!roles.empty());
+	config_ = camera_->generateConfiguration(roles);
 	if (!config_)
-		GTEST_SKIP() << "Role not supported by camera";
+		GTEST_SKIP() << "Roles not supported by camera";
+	ASSERT_EQ(config_->size(), roles.size()) << "Unexpected number of streams in configuration";
+	/*
+	 * Set the buffers count to the largest value across all streams.
+	 * \todo: Should all streams from a Camera have the same buffer count ?
+	 */
+	auto largest =
+		std::max_element(config_->begin(), config_->end(),
+				 [](const StreamConfiguration &l, const StreamConfiguration &r)
+				 { return l.bufferCount < r.bufferCount; });
+	assert(largest != config_->end());
+	for (auto &cfg : *config_)
+		cfg.bufferCount = largest->bufferCount;
 	if (config_->validate() != CameraConfiguration::Valid) {
 		config_.reset();
@@ -103,29 +120,37 @@ void Capture::start()
 	assert(!allocator_.allocated());
 	assert(requests_.empty());
-	Stream *stream = config_->at(0).stream();
-	int count = allocator_.allocate(stream);
-	ASSERT_GE(count, 0) << "Failed to allocate buffers";
-	EXPECT_EQ(count, config_->at(0).bufferCount) << "Allocated less buffers than expected";
-	const std::vector<std::unique_ptr<FrameBuffer>> &buffers = allocator_.buffers(stream);
+	const auto bufferCount = config_->at(0).bufferCount;
 	/* No point in testing less requests then the camera depth. */
-	if (queueLimit_ && *queueLimit_ < buffers.size()) {
-		GTEST_SKIP() << "Camera needs " << buffers.size()
+	if (queueLimit_ && *queueLimit_ < bufferCount) {
+		GTEST_SKIP() << "Camera needs " << bufferCount
 			     << " requests, can't test only " << *queueLimit_;
 	}
-	for (const std::unique_ptr<FrameBuffer> &buffer : buffers) {
+	for (std::size_t i = 0; i < bufferCount; i++) {
 		std::unique_ptr<Request> request = camera_->createRequest();
 		ASSERT_TRUE(request) << "Can't create request";
-		ASSERT_EQ(request->addBuffer(stream, buffer.get()), 0) << "Can't set buffer for request";
 		requests_.push_back(std::move(request));
 	}
+	for (const auto &cfg : *config_) {
+		Stream *stream = cfg.stream();
+		int count = allocator_.allocate(stream);
+		ASSERT_GE(count, 0) << "Failed to allocate buffers";
+		const auto &buffers = allocator_.buffers(stream);
+		ASSERT_EQ(buffers.size(), bufferCount) << "Mismatching buffer count";
+		for (std::size_t i = 0; i < bufferCount; i++) {
+			ASSERT_EQ(requests_[i]->addBuffer(stream, buffers[i].get()), 0)
+				<< "Failed to add buffer to request";
+		}
+	}
+	ASSERT_TRUE(allocator_.allocated());
 	camera_->requestCompleted.connect(this, &Capture::requestComplete);
 	ASSERT_EQ(camera_->start(), 0) << "Failed to start camera";
@@ -140,7 +165,12 @@ void Capture::stop()
 	camera_->requestCompleted.disconnect(this);
-	Stream *stream = config_->at(0).stream();
 	requests_.clear();
-	allocator_.free(stream);
+	for (const auto &cfg : *config_) {
+		EXPECT_EQ(allocator_.free(cfg.stream()), 0)
+			<< "Failed to free buffers associated with stream";
+	}
+	EXPECT_FALSE(allocator_.allocated());
 }


@@ -20,7 +20,7 @@ public:
 	Capture(std::shared_ptr<libcamera::Camera> camera);
 	~Capture();
-	void configure(libcamera::StreamRole role);
+	void configure(libcamera::Span<const libcamera::StreamRole> roles);
 	void run(unsigned int captureLimit, std::optional<unsigned int> queueLimit = {});
 private:


@@ -15,6 +15,7 @@ lc_compliance_sources = files([
 	'environment.cpp',
 	'helpers/capture.cpp',
 	'main.cpp',
+	'test_base.cpp',
 	'tests/capture_test.cpp',
 ])


@@ -0,0 +1,28 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2021, Collabora Ltd.
*
* test_base.cpp - Base definitions for tests
*/
#include "test_base.h"
#include "environment.h"
void CameraHolder::acquireCamera()
{
Environment *env = Environment::get();
camera_ = env->cm()->get(env->cameraId());
ASSERT_EQ(camera_->acquire(), 0);
}
void CameraHolder::releaseCamera()
{
if (!camera_)
return;
camera_->release();
camera_.reset();
}


@@ -0,0 +1,24 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2021, Collabora Ltd.
*
* test_base.h - Base definitions for tests
*/
#ifndef __LC_COMPLIANCE_TEST_BASE_H__
#define __LC_COMPLIANCE_TEST_BASE_H__
#include <libcamera/libcamera.h>
#include <gtest/gtest.h>
class CameraHolder
{
protected:
void acquireCamera();
void releaseCamera();
std::shared_ptr<libcamera::Camera> camera_;
};
#endif /* __LC_COMPLIANCE_TEST_BASE_H__ */


@@ -8,72 +8,54 @@
 #include "capture.h"
-#include <iostream>
+#include <sstream>
+#include <string>
+#include <tuple>
+#include <vector>
 #include <gtest/gtest.h>
-#include "environment.h"
+#include "test_base.h"
 namespace {
 using namespace libcamera;
-const int NUMREQUESTS[] = { 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };
-
-const StreamRole ROLES[] = {
-	StreamRole::Raw,
-	StreamRole::StillCapture,
-	StreamRole::VideoRecording,
-	StreamRole::Viewfinder
-};
-
-class SingleStream : public testing::TestWithParam<std::tuple<StreamRole, int>>
+class SimpleCapture : public testing::TestWithParam<std::tuple<std::vector<StreamRole>, int>>, public CameraHolder
 {
 public:
-	static std::string nameParameters(const testing::TestParamInfo<SingleStream::ParamType> &info);
+	static std::string nameParameters(const testing::TestParamInfo<SimpleCapture::ParamType> &info);
 protected:
 	void SetUp() override;
 	void TearDown() override;
-
-	std::shared_ptr<Camera> camera_;
 };
 /*
  * We use gtest's SetUp() and TearDown() instead of constructor and destructor
  * in order to be able to assert on them.
  */
-void SingleStream::SetUp()
+void SimpleCapture::SetUp()
 {
-	Environment *env = Environment::get();
-
-	camera_ = env->cm()->get(env->cameraId());
-
-	ASSERT_EQ(camera_->acquire(), 0);
+	acquireCamera();
 }
-void SingleStream::TearDown()
+void SimpleCapture::TearDown()
 {
-	if (!camera_)
-		return;
-
-	camera_->release();
-	camera_.reset();
+	releaseCamera();
 }
-std::string SingleStream::nameParameters(const testing::TestParamInfo<SingleStream::ParamType> &info)
+std::string SimpleCapture::nameParameters(const testing::TestParamInfo<SimpleCapture::ParamType> &info)
 {
-	std::map<StreamRole, std::string> rolesMap = {
-		{ StreamRole::Raw, "Raw" },
-		{ StreamRole::StillCapture, "StillCapture" },
-		{ StreamRole::VideoRecording, "VideoRecording" },
-		{ StreamRole::Viewfinder, "Viewfinder" }
-	};
-
-	std::string roleName = rolesMap[std::get<0>(info.param)];
-	std::string numRequestsName = std::to_string(std::get<1>(info.param));
-
-	return roleName + "_" + numRequestsName;
+	const auto &[roles, numRequests] = info.param;
+	std::ostringstream ss;
+
+	for (StreamRole r : roles)
+		ss << r << '_';
+
+	ss << '_' << numRequests;
+
+	return ss.str();
 }
 /*
@@ -83,13 +65,13 @@ std::string SingleStream::nameParameters(const testing::TestParamInfo<SingleStre
  * failure is a camera that completes less requests than the number of requests
  * queued.
  */
-TEST_P(SingleStream, Capture)
+TEST_P(SimpleCapture, Capture)
 {
-	auto [role, numRequests] = GetParam();
+	const auto &[roles, numRequests] = GetParam();
 	Capture capture(camera_);
-	capture.configure(role);
+	capture.configure(roles);
 	capture.run(numRequests, numRequests);
 }
@@ -101,14 +83,14 @@ TEST_P(SimpleCapture, Capture)
  * a camera that does not clean up correctly in its error path but is only
  * tested by single-capture applications.
  */
-TEST_P(SingleStream, CaptureStartStop)
+TEST_P(SimpleCapture, CaptureStartStop)
 {
-	auto [role, numRequests] = GetParam();
+	const auto &[roles, numRequests] = GetParam();
 	unsigned int numRepeats = 3;
 	Capture capture(camera_);
-	capture.configure(role);
+	capture.configure(roles);
 	for (unsigned int starts = 0; starts < numRepeats; starts++)
 		capture.run(numRequests, numRequests);
@@ -121,21 +103,43 @@ TEST_P(SimpleCapture, CaptureStartStop)
  * is a camera that does not handle cancelation of buffers coming back from the
 * video device while stopping.
 */
-TEST_P(SingleStream, UnbalancedStop)
+TEST_P(SimpleCapture, UnbalancedStop)
 {
-	auto [role, numRequests] = GetParam();
+	const auto &[roles, numRequests] = GetParam();
 	Capture capture(camera_);
-	capture.configure(role);
+	capture.configure(roles);
 	capture.run(numRequests);
 }
-INSTANTIATE_TEST_SUITE_P(CaptureTests,
-			 SingleStream,
-			 testing::Combine(testing::ValuesIn(ROLES),
+const int NUMREQUESTS[] = { 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };
+
+const std::vector<StreamRole> SINGLEROLES[] = {
+	{ StreamRole::Raw, },
+	{ StreamRole::StillCapture, },
+	{ StreamRole::VideoRecording, },
+	{ StreamRole::Viewfinder, },
+};
+
+const std::vector<StreamRole> MULTIROLES[] = {
+	{ StreamRole::Raw, StreamRole::StillCapture },
+	{ StreamRole::Raw, StreamRole::VideoRecording },
+	{ StreamRole::StillCapture, StreamRole::VideoRecording },
+	{ StreamRole::VideoRecording, StreamRole::VideoRecording },
+};
+
+INSTANTIATE_TEST_SUITE_P(SingleStream,
+			 SimpleCapture,
+			 testing::Combine(testing::ValuesIn(SINGLEROLES),
 					  testing::ValuesIn(NUMREQUESTS)),
-			 SingleStream::nameParameters);
+			 SimpleCapture::nameParameters);
+
+INSTANTIATE_TEST_SUITE_P(MultiStream,
+			 SimpleCapture,
+			 testing::Combine(testing::ValuesIn(MULTIROLES),
+					  testing::ValuesIn(NUMREQUESTS)),
+			 SimpleCapture::nameParameters);
 } /* namespace */


@@ -356,6 +356,9 @@ int MainWindow::startCapture()
 	/* Verify roles are supported. */
 	switch (roles.size()) {
+	case 0:
+		roles.push_back(StreamRole::Viewfinder);
+		break;
 	case 1:
 		if (roles[0] != StreamRole::Viewfinder) {
 			qWarning() << "Only viewfinder supported for single stream";


@@ -68,7 +68,7 @@ static const GEnumValue {{ ctrl.name|snake_case }}_types[] = {
 		"{{ enum.gst_name }}"
 	},
 {%- endfor %}
-	{0, NULL, NULL}
+	{0, nullptr, nullptr}
 };
 #define TYPE_{{ ctrl.name|snake_case|upper }} \


@@ -494,9 +494,12 @@ void gst_libcamera_configure_stream_from_caps(StreamConfiguration &stream_cfg,
 	/* Configure colorimetry */
 	if (gst_structure_has_field(s, "colorimetry")) {
-		const gchar *colorimetry_str = gst_structure_get_string(s, "colorimetry");
+		const gchar *colorimetry_str;
 		GstVideoColorimetry colorimetry;
+		gst_structure_fixate_field(s, "colorimetry");
+		colorimetry_str = gst_structure_get_string(s, "colorimetry");
+
 		if (!gst_video_colorimetry_from_string(&colorimetry, colorimetry_str))
 			g_critical("Invalid colorimetry %s", colorimetry_str);
@@ -596,6 +599,43 @@ gst_task_resume(GstTask *task)
 }
 #endif
+#if !GST_CHECK_VERSION(1, 22, 0)
+/*
+ * Copyright (C) <1999> Erik Walthinsen <omega@cse.ogi.edu>
+ * Library <2002> Ronald Bultje <rbultje@ronald.bitfreak.net>
+ * Copyright (C) <2007> David A. Schleef <ds@schleef.org>
+ */
+/*
+ * This function has been imported directly from the gstreamer project to
+ * support backwards compatibility and should be removed when the older version
+ * is no longer supported.
+ */
+gint gst_video_format_info_extrapolate_stride(const GstVideoFormatInfo *finfo, gint plane, gint stride)
+{
+	gint estride;
+	gint comp[GST_VIDEO_MAX_COMPONENTS];
+	gint i;
+
+	/* There is nothing to extrapolate on first plane. */
+	if (plane == 0)
+		return stride;
+
+	gst_video_format_info_component(finfo, plane, comp);
+
+	/*
+	 * For now, all planar formats have a single component on first plane, but
+	 * if there was a planar format with more, we'd have to make a ratio of the
+	 * number of component on the first plane against the number of component on
+	 * the current plane.
+	 */
+	estride = 0;
+	for (i = 0; i < GST_VIDEO_MAX_COMPONENTS && comp[i] >= 0; i++)
+		estride += GST_VIDEO_FORMAT_INFO_SCALE_WIDTH(finfo, comp[i], stride);
+
+	return estride;
+}
+#endif
+
 G_LOCK_DEFINE_STATIC(cm_singleton_lock);
 static std::weak_ptr<CameraManager> cm_singleton_ptr;


@@ -36,6 +36,11 @@ static inline void gst_clear_event(GstEvent **event_ptr)
 #if !GST_CHECK_VERSION(1, 17, 1)
 gboolean gst_task_resume(GstTask *task);
 #endif
+
+#if !GST_CHECK_VERSION(1, 22, 0)
+gint gst_video_format_info_extrapolate_stride(const GstVideoFormatInfo *finfo, gint plane, gint stride);
+#endif
+
 std::shared_ptr<libcamera::CameraManager> gst_libcamera_get_camera_manager(int &ret);
 /**


@@ -18,6 +18,8 @@ struct _GstLibcameraPad {
 	GstPad parent;
 	StreamRole role;
 	GstLibcameraPool *pool;
+	GstBufferPool *video_pool;
+	GstVideoInfo info;
 	GstClockTime latency;
 };
@@ -70,6 +72,10 @@ gst_libcamera_pad_query(GstPad *pad, GstObject *parent, GstQuery *query)
 	if (query->type != GST_QUERY_LATENCY)
 		return gst_pad_query_default(pad, parent, query);
+	GLibLocker lock(GST_OBJECT(self));
+	if (self->latency == GST_CLOCK_TIME_NONE)
+		return FALSE;
+
 	/* TRUE here means live, we assumes that max latency is the same as min
 	 * as we have no idea that duration of frames. */
 	gst_query_set_latency(query, TRUE, self->latency, self->latency);
@@ -79,6 +85,7 @@ gst_libcamera_pad_query(GstPad *pad, GstObject *parent, GstQuery *query)
 static void
 gst_libcamera_pad_init(GstLibcameraPad *self)
 {
+	self->latency = GST_CLOCK_TIME_NONE;
 	GST_PAD_QUERYFUNC(self) = gst_libcamera_pad_query;
 }
@@ -100,7 +107,7 @@ gst_libcamera_stream_role_get_type()
 			"libcamera::Viewfinder",
 			"view-finder",
 		},
-		{ 0, NULL, NULL }
+		{ 0, nullptr, nullptr }
 	};
 	if (!type)
@@ -153,6 +160,35 @@ gst_libcamera_pad_set_pool(GstPad *pad, GstLibcameraPool *pool)
 	self->pool = pool;
 }
+GstBufferPool *
+gst_libcamera_pad_get_video_pool(GstPad *pad)
+{
+	auto *self = GST_LIBCAMERA_PAD(pad);
+	return self->video_pool;
+}
+
+void gst_libcamera_pad_set_video_pool(GstPad *pad, GstBufferPool *video_pool)
+{
+	auto *self = GST_LIBCAMERA_PAD(pad);
+
+	if (self->video_pool)
+		g_object_unref(self->video_pool);
+
+	self->video_pool = video_pool;
+}
+
+GstVideoInfo gst_libcamera_pad_get_video_info(GstPad *pad)
+{
+	auto *self = GST_LIBCAMERA_PAD(pad);
+	return self->info;
+}
+
+void gst_libcamera_pad_set_video_info(GstPad *pad, const GstVideoInfo *info)
+{
+	auto *self = GST_LIBCAMERA_PAD(pad);
+	self->info = *info;
+}
+
 Stream *
 gst_libcamera_pad_get_stream(GstPad *pad)
 {


@@ -23,6 +23,14 @@ GstLibcameraPool *gst_libcamera_pad_get_pool(GstPad *pad);
 void gst_libcamera_pad_set_pool(GstPad *pad, GstLibcameraPool *pool);
+GstBufferPool *gst_libcamera_pad_get_video_pool(GstPad *pad);
+
+void gst_libcamera_pad_set_video_pool(GstPad *pad, GstBufferPool *video_pool);
+
+GstVideoInfo gst_libcamera_pad_get_video_info(GstPad *pad);
+
+void gst_libcamera_pad_set_video_info(GstPad *pad, const GstVideoInfo *info);
+
 libcamera::Stream *gst_libcamera_pad_get_stream(GstPad *pad);
 void gst_libcamera_pad_set_latency(GstPad *pad, GstClockTime latency);


@@ -134,8 +134,20 @@ gst_libcamera_pool_class_init(GstLibcameraPoolClass *klass)
 				      G_TYPE_NONE, 0);
 }
+static void
+gst_libcamera_buffer_add_video_meta(GstBuffer *buffer, GstVideoInfo *info)
+{
+	GstVideoMeta *vmeta;
+
+	vmeta = gst_buffer_add_video_meta_full(buffer, GST_VIDEO_FRAME_FLAG_NONE,
+					       GST_VIDEO_INFO_FORMAT(info), GST_VIDEO_INFO_WIDTH(info),
+					       GST_VIDEO_INFO_HEIGHT(info), GST_VIDEO_INFO_N_PLANES(info),
+					       info->offset, info->stride);
+	GST_META_FLAGS(vmeta) = (GstMetaFlags)(GST_META_FLAGS(vmeta) | GST_META_FLAG_POOLED);
+}
+
 GstLibcameraPool *
-gst_libcamera_pool_new(GstLibcameraAllocator *allocator, Stream *stream)
+gst_libcamera_pool_new(GstLibcameraAllocator *allocator, Stream *stream,
+		       GstVideoInfo *info)
 {
 	auto *pool = GST_LIBCAMERA_POOL(g_object_new(GST_TYPE_LIBCAMERA_POOL, nullptr));
@@ -145,6 +157,7 @@ gst_libcamera_pool_new(GstLibcameraAllocator *allocator, Stream *stream)
 	gsize pool_size = gst_libcamera_allocator_get_pool_size(allocator, stream);
 	for (gsize i = 0; i < pool_size; i++) {
 		GstBuffer *buffer = gst_buffer_new();
+		gst_libcamera_buffer_add_video_meta(buffer, info);
 		pool->queue->push_back(buffer);
 	}


@@ -14,6 +14,7 @@
#include "gstlibcameraallocator.h"
#include <gst/gst.h>
#include <gst/video/video.h>
#include <libcamera/stream.h>
@@ -21,7 +22,7 @@
G_DECLARE_FINAL_TYPE(GstLibcameraPool, gst_libcamera_pool, GST_LIBCAMERA, POOL, GstBufferPool)
GstLibcameraPool *gst_libcamera_pool_new(GstLibcameraAllocator *allocator,
libcamera::Stream *stream, GstVideoInfo *info);
libcamera::Stream *gst_libcamera_pool_get_stream(GstLibcameraPool *self);


@@ -29,6 +29,8 @@
#include <atomic>
#include <queue>
#include <tuple>
#include <utility>
#include <vector>
#include <libcamera/camera.h>
@@ -268,6 +270,69 @@ GstLibcameraSrcState::requestCompleted(Request *request)
gst_task_resume(src_->task);
}
static void
gst_libcamera_extrapolate_info(GstVideoInfo *info, guint32 stride)
{
guint i, estride;
gsize offset = 0;
/* This should be updated if tiled formats get added in the future. */
for (i = 0; i < GST_VIDEO_INFO_N_PLANES(info); i++) {
estride = gst_video_format_info_extrapolate_stride(info->finfo, i, stride);
info->stride[i] = estride;
info->offset[i] = offset;
offset += estride * GST_VIDEO_FORMAT_INFO_SCALE_HEIGHT(info->finfo, i,
GST_VIDEO_INFO_HEIGHT(info));
}
}
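The extrapolation above derives every plane's stride from the first plane's stride using the format's subsampling factors, accumulating offsets as stride times scaled plane height. The same arithmetic can be sketched in isolation for a two-plane 4:2:0 layout such as NV12 (the helper name and fixed plane count are illustrative, not the GStreamer API):

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the per-plane stride/offset extrapolation above for an
// NV12-style layout: plane 0 is full-height luma, plane 1 is
// half-height interleaved chroma sharing the luma stride.
struct PlaneLayout {
	std::size_t stride[2];
	std::size_t offset[2];
	std::size_t size;
};

static PlaneLayout extrapolateNv12(std::size_t stride, std::size_t height)
{
	PlaneLayout layout{};
	std::size_t offset = 0;

	// Heights per plane: luma is full height, chroma rows are halved.
	const std::size_t planeHeight[2] = { height, (height + 1) / 2 };

	for (unsigned int i = 0; i < 2; i++) {
		layout.stride[i] = stride; /* NV12 chroma stride equals luma stride */
		layout.offset[i] = offset;
		offset += stride * planeHeight[i];
	}

	layout.size = offset;
	return layout;
}
```

As in the function above, a padded camera stride (say 768 bytes for a 640-pixel-wide frame) propagates into both planes, and the chroma offset lands exactly at the end of the padded luma plane.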
static GstFlowReturn
gst_libcamera_video_frame_copy(GstBuffer *src, GstBuffer *dest,
const GstVideoInfo *dest_info, guint32 stride)
{
/*
* When dropping support for versions earlier than v1.22.0, use
*
* g_auto (GstVideoFrame) src_frame = GST_VIDEO_FRAME_INIT;
* g_auto (GstVideoFrame) dest_frame = GST_VIDEO_FRAME_INIT;
*
* and drop the gst_video_frame_unmap() calls.
*/
GstVideoFrame src_frame, dest_frame;
GstVideoInfo src_info = *dest_info;
gst_libcamera_extrapolate_info(&src_info, stride);
src_info.size = gst_buffer_get_size(src);
if (!gst_video_frame_map(&src_frame, &src_info, src, GST_MAP_READ)) {
GST_ERROR("Could not map src buffer");
return GST_FLOW_ERROR;
}
/*
* When dropping support for versions earlier than 1.20.0, drop the
* const_cast<>().
*/
if (!gst_video_frame_map(&dest_frame, const_cast<GstVideoInfo *>(dest_info),
dest, GST_MAP_WRITE)) {
GST_ERROR("Could not map dest buffer");
gst_video_frame_unmap(&src_frame);
return GST_FLOW_ERROR;
}
if (!gst_video_frame_copy(&dest_frame, &src_frame)) {
GST_ERROR("Could not copy frame");
gst_video_frame_unmap(&src_frame);
gst_video_frame_unmap(&dest_frame);
return GST_FLOW_ERROR;
}
gst_video_frame_unmap(&src_frame);
gst_video_frame_unmap(&dest_frame);
return GST_FLOW_OK;
}
/* Must be called with stream_lock held. */
int GstLibcameraSrcState::processRequest()
{
@@ -292,11 +357,41 @@ int GstLibcameraSrcState::processRequest()
GstFlowReturn ret = GST_FLOW_OK;
gst_flow_combiner_reset(src_->flow_combiner);
for (gsize i = 0; i < srcpads_.size(); i++) {
GstPad *srcpad = srcpads_[i];
Stream *stream = gst_libcamera_pad_get_stream(srcpad);
GstBuffer *buffer = wrap->detachBuffer(stream);
FrameBuffer *fb = gst_libcamera_buffer_get_frame_buffer(buffer);
const StreamConfiguration &stream_cfg = config_->at(i);
GstBufferPool *video_pool = gst_libcamera_pad_get_video_pool(srcpad);
if (video_pool) {
/* Only set video pool when a copy is needed. */
GstBuffer *copy = nullptr;
const GstVideoInfo info = gst_libcamera_pad_get_video_info(srcpad);
ret = gst_buffer_pool_acquire_buffer(video_pool, &copy, nullptr);
if (ret != GST_FLOW_OK) {
gst_buffer_unref(buffer);
GST_ELEMENT_ERROR(src_, RESOURCE, SETTINGS,
("Failed to acquire buffer"),
("GstLibcameraSrcState::processRequest() failed: %s", g_strerror(-ret)));
return -EPIPE;
}
ret = gst_libcamera_video_frame_copy(buffer, copy, &info, stream_cfg.stride);
gst_buffer_unref(buffer);
if (ret != GST_FLOW_OK) {
gst_buffer_unref(copy);
GST_ELEMENT_ERROR(src_, RESOURCE, SETTINGS,
("Failed to copy buffer"),
("GstLibcameraSrcState::processRequest() failed: %s", g_strerror(-ret)));
return -EPIPE;
}
buffer = copy;
}
if (GST_CLOCK_TIME_IS_VALID(wrap->pts_)) {
GST_BUFFER_PTS(buffer) = wrap->pts_;
@@ -428,6 +523,73 @@ gst_libcamera_src_open(GstLibcameraSrc *self)
return true;
}
/**
* \brief Create a video pool for a pad
* \param[in] self The libcamerasrc instance
* \param[in] srcpad The pad
* \param[in] caps The pad caps
* \param[in] info The video info for the pad
*
* This function creates and returns a video buffer pool for the given pad if
* needed to accommodate stride mismatch. If the peer element supports stride
* negotiation through the meta API, no pool is needed and the function will
* return a null pool.
*
* \return A tuple containing the video buffers pool pointer and an error code
*/
static std::tuple<GstBufferPool *, int>
gst_libcamera_create_video_pool(GstLibcameraSrc *self, GstPad *srcpad,
GstCaps *caps, const GstVideoInfo *info)
{
g_autoptr(GstQuery) query = nullptr;
g_autoptr(GstBufferPool) pool = nullptr;
const gboolean need_pool = true;
/*
* Get the peer allocation hints to check if it supports the meta API.
* If so, the stride will be negotiated, and there's no need to create a
* video pool.
*/
query = gst_query_new_allocation(caps, need_pool);
if (!gst_pad_peer_query(srcpad, query))
GST_DEBUG_OBJECT(self, "Didn't get downstream ALLOCATION hints");
else if (gst_query_find_allocation_meta(query, GST_VIDEO_META_API_TYPE, nullptr))
return { nullptr, 0 };
GST_WARNING_OBJECT(self, "Downstream doesn't support video meta, need to copy frame.");
/*
* If the allocation query has pools, use the first one. Otherwise,
* create a new pool.
*/
if (gst_query_get_n_allocation_pools(query) > 0)
gst_query_parse_nth_allocation_pool(query, 0, &pool, nullptr,
nullptr, nullptr);
if (!pool) {
GstStructure *config;
guint min_buffers = 3;
pool = gst_video_buffer_pool_new();
config = gst_buffer_pool_get_config(pool);
gst_buffer_pool_config_set_params(config, caps, info->size, min_buffers, 0);
GST_DEBUG_OBJECT(self, "Own pool config is %" GST_PTR_FORMAT, config);
gst_buffer_pool_set_config(GST_BUFFER_POOL_CAST(pool), config);
}
if (!gst_buffer_pool_set_active(pool, true)) {
GST_ELEMENT_ERROR(self, RESOURCE, SETTINGS,
("Failed to active buffer pool"),
("gst_libcamera_src_negotiate() failed."));
return { nullptr, -EINVAL };
}
return { std::exchange(pool, nullptr), 0 };
}
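The `std::exchange(pool, nullptr)` in the return statement above is an ownership-transfer idiom: the `g_autoptr` cleanup only unrefs a non-null pointer, so nulling the local while returning its old value hands the reference to the caller in a single expression. The same pattern, reduced to a raw-pointer slot for illustration:

```cpp
#include <utility>

// Sketch of the ownership-transfer idiom used with g_autoptr above:
// std::exchange() stores the new value (nullptr) into the slot and
// returns the previous one, so the scope-exit cleanup sees a null
// pointer and does nothing, while the caller receives the live
// reference.
static int *takeOwnership(int *&slot)
{
	/* Equivalent to: int *old = slot; slot = nullptr; return old; */
	return std::exchange(slot, nullptr);
}
```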
/* Must be called with stream_lock held. */
static bool
gst_libcamera_src_negotiate(GstLibcameraSrc *self)
@@ -499,13 +661,33 @@ gst_libcamera_src_negotiate(GstLibcameraSrc *self)
for (gsize i = 0; i < state->srcpads_.size(); i++) {
GstPad *srcpad = state->srcpads_[i];
const StreamConfiguration &stream_cfg = state->config_->at(i);
GstBufferPool *video_pool = nullptr;
GstVideoInfo info;
g_autoptr(GstCaps) caps = gst_libcamera_stream_configuration_to_caps(stream_cfg, transfer[i]);
gst_video_info_from_caps(&info, caps);
gst_libcamera_pad_set_video_info(srcpad, &info);
/* Stride mismatch between camera stride and that calculated by video-info. */
if (static_cast<unsigned int>(info.stride[0]) != stream_cfg.stride &&
GST_VIDEO_INFO_FORMAT(&info) != GST_VIDEO_FORMAT_ENCODED) {
gst_libcamera_extrapolate_info(&info, stream_cfg.stride);
std::tie(video_pool, ret) =
gst_libcamera_create_video_pool(self, srcpad,
caps, &info);
if (ret)
return false;
}
GstLibcameraPool *pool = gst_libcamera_pool_new(self->allocator,
stream_cfg.stream(), &info);
g_signal_connect_swapped(pool, "buffer-notify",
G_CALLBACK(gst_task_resume), self->task);
gst_libcamera_pad_set_pool(srcpad, pool);
gst_libcamera_pad_set_video_pool(srcpad, video_pool);
/* Clear all reconfigure flags. */
gst_pad_check_reconfigure(srcpad);
@@ -699,8 +881,10 @@ gst_libcamera_src_task_leave([[maybe_unused]] GstTask *task,
{
GLibRecLocker locker(&self->stream_lock);
for (GstPad *srcpad : state->srcpads_) {
gst_libcamera_pad_set_latency(srcpad, GST_CLOCK_TIME_NONE);
gst_libcamera_pad_set_pool(srcpad, nullptr);
}
}
g_clear_object(&self->allocator);
@@ -884,7 +1068,7 @@ gst_libcamera_src_request_new_pad(GstElement *element, GstPadTemplate *templ,
const gchar *name, [[maybe_unused]] const GstCaps *caps)
{
GstLibcameraSrc *self = GST_LIBCAMERA_SRC(element);
g_autoptr(GstPad) pad = nullptr;
GST_DEBUG_OBJECT(self, "new request pad created");
@@ -898,12 +1082,12 @@ gst_libcamera_src_request_new_pad(GstElement *element, GstPadTemplate *templ,
GST_ELEMENT_ERROR(element, STREAM, FAILED,
("Internal data stream error."),
("Could not add pad to element"));
return nullptr;
}
gst_child_proxy_child_added(GST_CHILD_PROXY(self), G_OBJECT(pad), GST_OBJECT_NAME(pad));
return std::exchange(pad, nullptr);
}
static void
@@ -922,6 +1106,12 @@ gst_libcamera_src_release_pad(GstElement *element, GstPad *pad)
auto end_iterator = pads.end();
auto pad_iterator = std::find(begin_iterator, end_iterator, pad);
GstBufferPool *video_pool = gst_libcamera_pad_get_video_pool(pad);
if (video_pool) {
gst_buffer_pool_set_active(video_pool, false);
gst_object_unref(video_pool);
}
if (pad_iterator != end_iterator) {
g_object_unref(*pad_iterator);
pads.erase(pad_iterator);


@@ -33,6 +33,7 @@ libcamera_gst_sources += custom_target('gstlibcamera-controls.cpp',
output : 'gstlibcamera-controls.cpp',
command : [gen_gst_controls, '-o', '@OUTPUT@',
'-t', gen_gst_controls_template, '@INPUT@'],
depend_files : [py_mod_controls],
env : py_build_env)
libcamera_gst_cpp_args = [


@@ -218,8 +218,7 @@ int AgcMeanLuminance::parseConstraintModes(const YamlObject &tuningData)
constraintModes_[controls::ConstraintNormal].insert(
constraintModes_[controls::ConstraintNormal].begin(),
constraint);
availableConstraintModes.push_back(controls::ConstraintNormal);
}
controls_[&controls::AeConstraintMode] = ControlInfo(availableConstraintModes);
@@ -287,7 +286,7 @@ int AgcMeanLuminance::parseExposureModes(const YamlObject &tuningData)
* possible before touching gain.
*/
if (availableExposureModes.empty()) {
int32_t exposureModeId = controls::ExposureNormal;
std::vector<std::pair<utils::Duration, double>> stages = { };
std::shared_ptr<ExposureModeHelper> helper =

@@ -114,7 +114,7 @@ namespace ipa {
* does not take any statistics into account. It is used to compute the colour
* gains when the user manually specifies a colour temperature.
*
* \return The colour gains or std::nullopt if the conversion is not possible
*/
/**


@@ -8,6 +8,7 @@
#pragma once
#include <map>
#include <optional>
#include <libcamera/control_ids.h>
#include <libcamera/controls.h>
@@ -39,7 +40,7 @@ public:
virtual int init(const YamlObject &tuningData) = 0;
virtual AwbResult calculateAwb(const AwbStats &stats, unsigned int lux) = 0;
virtual std::optional<RGB<double>> gainsFromColourTemperature(double colourTemperature) = 0;
const ControlInfoMap::Map &controls() const
{


@@ -270,7 +270,7 @@ void AwbBayes::handleControls(const ControlList &controls)
}
}
std::optional<RGB<double>> AwbBayes::gainsFromColourTemperature(double colourTemperature)
{
/*
* \todo In the RaspberryPi code, the ct curve was interpolated in
@@ -278,7 +278,7 @@ RGB<double> AwbBayes::gainsFromColourTemperature(double colourTemperature)
* intuitive, as the gains are in linear space. But I can't prove it.
*/
const auto &gains = colourGainCurve_.getInterpolated(colourTemperature);
return RGB<double>{ { gains[0], 1.0, gains[1] } };
}
AwbResult AwbBayes::calculateAwb(const AwbStats &stats, unsigned int lux)


@@ -27,7 +27,7 @@ public:
int init(const YamlObject &tuningData) override;
AwbResult calculateAwb(const AwbStats &stats, unsigned int lux) override;
std::optional<RGB<double>> gainsFromColourTemperature(double temperatureK) override;
void handleControls(const ControlList &controls) override;
private:


@@ -98,15 +98,15 @@ AwbResult AwbGrey::calculateAwb(const AwbStats &stats, [[maybe_unused]] unsigned
* \return The colour gains if a colour temperature curve is available,
* [1, 1, 1] otherwise.
*/
std::optional<RGB<double>> AwbGrey::gainsFromColourTemperature(double colourTemperature)
{
if (!colourGainCurve_) {
LOG(Awb, Error) << "No gains defined";
return std::nullopt;
}
auto gains = colourGainCurve_->getInterpolated(colourTemperature);
return RGB<double>{ { gains[0], 1.0, gains[1] } };
}
} /* namespace ipa */


@@ -25,7 +25,7 @@ public:
int init(const YamlObject &tuningData) override;
AwbResult calculateAwb(const AwbStats &stats, unsigned int lux) override;
std::optional<RGB<double>> gainsFromColourTemperature(double colourTemperature) override;
private:
std::optional<Interpolator<Vector<double, 2>>> colourGainCurve_;


@@ -4,7 +4,7 @@ ipa_includes = [
libcamera_includes,
]
ipa_install_dir = libcamera_libdir / 'ipa'
ipa_data_dir = libcamera_datadir / 'ipa'
ipa_sysconf_dir = libcamera_sysconfdir / 'ipa'


@@ -68,10 +68,9 @@ int Agc::parseMeteringModes(IPAContext &context, const YamlObject &tuningData)
if (meteringModes_.empty()) {
LOG(RkISP1Agc, Warning)
<< "No metering modes read from tuning file; defaulting to matrix";
std::vector<uint8_t> weights(context.hw->numHistogramWeights, 1);
meteringModes_[controls::MeteringMatrix] = weights;
}
std::vector<ControlValue> meteringModes;


@@ -90,6 +90,8 @@ int Awb::init(IPAContext &context, const YamlObject &tuningData)
cmap[&controls::ColourTemperature] = ControlInfo(kMinColourTemperature,
kMaxColourTemperature,
kDefaultColourTemperature);
cmap[&controls::AwbEnable] = ControlInfo(false, true);
cmap[&controls::ColourGains] = ControlInfo(0.0f, 3.996f, 1.0f);
if (!tuningData.contains("algorithm"))
LOG(RkISP1Awb, Info) << "No AWB algorithm specified."
@@ -124,11 +126,16 @@
int Awb::configure(IPAContext &context,
const IPACameraSensorInfo &configInfo)
{
context.activeState.awb.manual.gains = RGB<double>{ 1.0 };
auto gains = awbAlgo_->gainsFromColourTemperature(kDefaultColourTemperature);
if (gains)
context.activeState.awb.automatic.gains = *gains;
else
context.activeState.awb.automatic.gains = RGB<double>{ 1.0 };
context.activeState.awb.autoEnabled = true;
context.activeState.awb.manual.temperatureK = kDefaultColourTemperature;
context.activeState.awb.automatic.temperatureK = kDefaultColourTemperature;
/*
* Define the measurement window for AWB as a centered rectangle
@@ -173,8 +180,8 @@ void Awb::queueRequest(IPAContext &context,
const auto &colourTemperature = controls.get(controls::ColourTemperature);
bool update = false;
if (colourGains) {
awb.manual.gains.r() = (*colourGains)[0];
awb.manual.gains.b() = (*colourGains)[1];
/*
* \todo Colour temperature reported in metadata is now
* incorrect, as we can't deduce the temperature from the gains.
@@ -182,19 +189,21 @@ void Awb::queueRequest(IPAContext &context,
*/
update = true;
} else if (colourTemperature) {
awb.manual.temperatureK = *colourTemperature;
const auto &gains = awbAlgo_->gainsFromColourTemperature(*colourTemperature);
if (gains) {
awb.manual.gains.r() = gains->r();
awb.manual.gains.b() = gains->b();
update = true;
}
}
if (update)
LOG(RkISP1Awb, Debug)
<< "Set colour gains to " << awb.manual.gains;
frameContext.awb.gains = awb.manual.gains;
frameContext.awb.temperatureK = awb.manual.temperatureK;
}
/**
@@ -208,8 +217,9 @@ void Awb::prepare(IPAContext &context, const uint32_t frame,
* most up-to-date automatic values we can read.
*/
if (frameContext.awb.autoEnabled) {
const auto &awb = context.activeState.awb;
frameContext.awb.gains = awb.automatic.gains;
frameContext.awb.temperatureK = awb.automatic.temperatureK;
}
auto gainConfig = params->block<BlockType::AwbGain>();
@@ -296,6 +306,11 @@ void Awb::process(IPAContext &context,
const rkisp1_cif_isp_stat *params = &stats->params;
const rkisp1_cif_isp_awb_stat *awb = &params->awb;
if (awb->awb_mean[0].cnt == 0) {
LOG(RkISP1Awb, Debug) << "AWB statistics are empty";
return;
}
RGB<double> rgbMeans = calculateRgbMeans(frameContext, awb);
/*
@@ -309,11 +324,6 @@ void Awb::process(IPAContext &context,
RkISP1AwbStats awbStats{ rgbMeans };
AwbResult awbResult = awbAlgo_->calculateAwb(awbStats, frameContext.lux.lux);
/*
* Clamp the gain values to the hardware, which expresses gains as Q2.8
* unsigned integer values. Set the minimum just above zero to avoid
@@ -324,16 +334,19 @@ void Awb::process(IPAContext &context,
/* Filter the values to avoid oscillations. */
double speed = 0.2;
double ct = awbResult.colourTemperature;
ct = ct * speed + activeState.awb.automatic.temperatureK * (1 - speed);
awbResult.gains = awbResult.gains * speed +
activeState.awb.automatic.gains * (1 - speed);
activeState.awb.automatic.temperatureK = static_cast<unsigned int>(ct);
activeState.awb.automatic.gains = awbResult.gains;
LOG(RkISP1Awb, Debug)
<< std::showpoint
<< "Means " << rgbMeans << ", gains "
<< activeState.awb.automatic.gains << ", temp "
<< activeState.awb.automatic.temperatureK << "K";
}
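The `speed = 0.2` filter applied to the gains and colour temperature is a first-order exponential smoothing step, y[n] = s·x[n] + (1 − s)·y[n−1]: each new AWB result only moves the filtered value by a fraction of the remaining error, damping frame-to-frame oscillations. Its behaviour can be sketched in isolation:

```cpp
#include <cmath>

// First-order IIR smoothing as used for the AWB gains and colour
// temperature above: 'speed' controls how quickly the filtered
// value tracks a new measurement.
static double smooth(double current, double previous, double speed)
{
	return current * speed + previous * (1.0 - speed);
}
```

With speed 0.2, a sudden colour-temperature jump from 6500 K to 3000 K is absorbed over several frames rather than applied at once.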
RGB<double> Awb::calculateRgbMeans(const IPAFrameContext &frameContext, const rkisp1_cif_isp_awb_stat *awb) const
@@ -391,12 +404,18 @@ RGB<double> Awb::calculateRgbMeans(const IPAFrameContext &frameContext, const rk
rgbMeans = rgbMeans.max(0.0);
}
/*
* The ISP computes the AWB means after applying the CCM. Apply the
* inverse as we want to get the raw means before the colour gains.
*/
rgbMeans = frameContext.ccm.ccm.inverse() * rgbMeans;
/*
* The ISP computes the AWB means after applying the colour gains,
* divide by the gains that were used to get the raw means from the
* sensor. Apply a minimum value to avoid divisions by near-zero.
*/
rgbMeans /= frameContext.awb.gains.max(0.01);
return rgbMeans;
}


@@ -7,8 +7,6 @@
#pragma once
#include "libcamera/internal/vector.h"
#include "libipa/awb.h"


@@ -36,17 +36,25 @@ namespace ipa::rkisp1::algorithms {
LOG_DEFINE_CATEGORY(RkISP1Ccm)
constexpr Matrix<float, 3, 3> kIdentity3x3 = Matrix<float, 3, 3>::identity();
/**
* \copydoc libcamera::ipa::Algorithm::init
*/
int Ccm::init([[maybe_unused]] IPAContext &context, const YamlObject &tuningData)
{
auto &cmap = context.ctrlMap;
cmap[&controls::ColourCorrectionMatrix] = ControlInfo(
ControlValue(-8.0f),
ControlValue(7.993f),
ControlValue(kIdentity3x3.data()));
int ret = ccm_.readYaml(tuningData["ccms"], "ct", "ccm");
if (ret < 0) {
LOG(RkISP1Ccm, Warning)
<< "Failed to parse 'ccm' "
<< "parameter from tuning file; falling back to unit matrix";
ccm_.setData({ { 0, kIdentity3x3 } });
}
ret = offsets_.readYaml(tuningData["ccms"], "ct", "offsets");
@@ -61,13 +69,51 @@ int Ccm::init([[maybe_unused]] IPAContext &context, const YamlObject &tuningData
return 0;
}
/**
* \copydoc libcamera::ipa::Algorithm::configure
*/
int Ccm::configure(IPAContext &context,
[[maybe_unused]] const IPACameraSensorInfo &configInfo)
{
auto &as = context.activeState;
as.ccm.manual = kIdentity3x3;
as.ccm.automatic = ccm_.getInterpolated(as.awb.automatic.temperatureK);
return 0;
}
void Ccm::queueRequest(IPAContext &context,
[[maybe_unused]] const uint32_t frame,
IPAFrameContext &frameContext,
const ControlList &controls)
{
/* Nothing to do here, the ccm will be calculated in prepare() */
if (frameContext.awb.autoEnabled)
return;
auto &ccm = context.activeState.ccm;
const auto &colourTemperature = controls.get(controls::ColourTemperature);
const auto &ccmMatrix = controls.get(controls::ColourCorrectionMatrix);
if (ccmMatrix) {
ccm.manual = Matrix<float, 3, 3>(*ccmMatrix);
LOG(RkISP1Ccm, Debug)
<< "Setting manual CCM from CCM control to " << ccm.manual;
} else if (colourTemperature) {
ccm.manual = ccm_.getInterpolated(*colourTemperature);
LOG(RkISP1Ccm, Debug)
<< "Setting manual CCM from CT control to " << ccm.manual;
}
frameContext.ccm.ccm = ccm.manual;
}
void Ccm::setParameters(struct rkisp1_cif_isp_ctk_config &config,
const Matrix<float, 3, 3> &matrix,
const Matrix<int16_t, 3, 1> &offsets)
{
/*
* 4 bit integer and 7 bit fractional, ranging from -8 (0x400) to
* +7.9921875 (0x3ff)
*/
for (unsigned int i = 0; i < 3; i++) {
for (unsigned int j = 0; j < 3; j++)
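The range in the comment above follows directly from the fixed-point encoding: with 7 fractional bits the step is 1/128, so an 11-bit two's-complement field spans −1024/128 = −8.0 (0x400) to 1023/128 = +7.9921875 (0x3ff). A conversion sketch under these assumptions (not the driver's actual helper):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Convert a float CCM coefficient to an 11-bit two's-complement
// fixed-point value with 7 fractional bits: range -8.0 (0x400) to
// +7.9921875 (0x3ff), step 1/128. Out-of-range inputs saturate.
static uint16_t toFixedPoint(float value)
{
	int scaled = static_cast<int>(std::lround(value * 128.0f));
	scaled = std::clamp(scaled, -1024, 1023);

	/* Two's complement, masked to 11 bits. */
	return static_cast<uint16_t>(scaled) & 0x7ff;
}
```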
@@ -88,14 +134,16 @@ void Ccm::setParameters(struct rkisp1_cif_isp_ctk_config &config,
void Ccm::prepare(IPAContext &context, const uint32_t frame,
IPAFrameContext &frameContext, RkISP1Params *params)
{
if (!frameContext.awb.autoEnabled) {
auto config = params->block<BlockType::Ctk>();
config.setEnabled(true);
setParameters(*config, frameContext.ccm.ccm, Matrix<int16_t, 3, 1>());
return;
}
uint32_t ct = frameContext.awb.temperatureK;
if (frame > 0 && ct == ct_) {
frameContext.ccm.ccm = context.activeState.ccm.automatic;
return;
}
@@ -103,7 +151,7 @@ void Ccm::prepare(IPAContext &context, const uint32_t frame,
Matrix<float, 3, 3> ccm = ccm_.getInterpolated(ct);
Matrix<int16_t, 3, 1> offsets = offsets_.getInterpolated(ct);
context.activeState.ccm.automatic = ccm;
frameContext.ccm.ccm = ccm;
auto config = params->block<BlockType::Ctk>();


@@ -26,6 +26,12 @@ public:
~Ccm() = default;
int init(IPAContext &context, const YamlObject &tuningData) override;
int configure(IPAContext &context,
const IPACameraSensorInfo &configInfo) override;
void queueRequest(IPAContext &context,
const uint32_t frame,
IPAFrameContext &frameContext,
const ControlList &controls) override;
void prepare(IPAContext &context, const uint32_t frame,
IPAFrameContext &frameContext,
RkISP1Params *params) override;


@@ -39,6 +39,17 @@ LOG_DEFINE_CATEGORY(RkISP1Filter)
static constexpr uint32_t kFiltLumWeightDefault = 0x00022040;
static constexpr uint32_t kFiltModeDefault = 0x000004f2;
/**
* \copydoc libcamera::ipa::Algorithm::init
*/
int Filter::init(IPAContext &context,
[[maybe_unused]] const YamlObject &tuningData)
{
auto &cmap = context.ctrlMap;
cmap[&controls::Sharpness] = ControlInfo(0.0f, 10.0f, 1.0f);
return 0;
}
/**
* \copydoc libcamera::ipa::Algorithm::queueRequest
*/


@@ -21,6 +21,7 @@ public:
Filter() = default;
~Filter() = default;
int init(IPAContext &context, const YamlObject &tuningData) override;
void queueRequest(IPAContext &context, const uint32_t frame,
IPAFrameContext &frameContext,
const ControlList &controls) override;


@@ -404,12 +404,12 @@ void LensShadingCorrection::copyTable(rkisp1_cif_isp_lsc_config &config,
 /**
  * \copydoc libcamera::ipa::Algorithm::prepare
  */
-void LensShadingCorrection::prepare(IPAContext &context,
+void LensShadingCorrection::prepare([[maybe_unused]] IPAContext &context,
 				    [[maybe_unused]] const uint32_t frame,
-				    [[maybe_unused]] IPAFrameContext &frameContext,
+				    IPAFrameContext &frameContext,
 				    RkISP1Params *params)
 {
-	uint32_t ct = context.activeState.awb.temperatureK;
+	uint32_t ct = frameContext.awb.temperatureK;
 	if (std::abs(static_cast<int>(ct) - static_cast<int>(lastAppliedCt_)) <
 	    kColourTemperatureChangeThreshhold)
 		return;


@@ -191,22 +191,36 @@ namespace libcamera::ipa::rkisp1 {
  * \var IPAActiveState::awb
  * \brief State for the Automatic White Balance algorithm
  *
- * \struct IPAActiveState::awb.gains
+ * \struct IPAActiveState::awb::AwbState
+ * \brief Struct for the AWB regulation state
+ *
+ * \var IPAActiveState::awb::AwbState.gains
  * \brief White balance gains
  *
- * \var IPAActiveState::awb.gains.manual
- * \brief Manual white balance gains (set through requests)
+ * \var IPAActiveState::awb::AwbState.temperatureK
+ * \brief Color temperature
  *
- * \var IPAActiveState::awb.gains.automatic
- * \brief Automatic white balance gains (computed by the algorithm)
+ * \var IPAActiveState::awb.manual
+ * \brief Manual regulation state (set through requests)
  *
- * \var IPAActiveState::awb.temperatureK
- * \brief Estimated color temperature
+ * \var IPAActiveState::awb.automatic
+ * \brief Automatic regulation state (computed by the algorithm)
  *
  * \var IPAActiveState::awb.autoEnabled
  * \brief Whether the Auto White Balance algorithm is enabled
  */

+/**
+ * \var IPAActiveState::ccm
+ * \brief State for the Colour Correction Matrix algorithm
+ *
+ * \var IPAActiveState::ccm.manual
+ * \brief Manual CCM (set through requests)
+ *
+ * \var IPAActiveState::awb.automatic
+ * \brief Automatic CCM (computed by the algorithm)
+ */
+
 /**
  * \var IPAActiveState::cproc
  * \brief State for the Color Processing algorithm
@@ -346,12 +360,23 @@ namespace libcamera::ipa::rkisp1 {
  * \brief White balance gains
  *
  * \var IPAFrameContext::awb.temperatureK
- * \brief Estimated color temperature
+ * \brief Color temperature used for processing this frame
+ *
+ * This does not match the color temperature estimated for this frame as the
+ * measurements were taken on a previous frame.
  *
  * \var IPAFrameContext::awb.autoEnabled
  * \brief Whether the Auto White Balance algorithm is enabled
  */

+/**
+ * \var IPAFrameContext::ccm
+ * \brief Colour Correction Matrix parameters for this frame
+ *
+ * \struct IPAFrameContext::ccm.ccm
+ * \brief Colour Correction Matrix
+ */
+
 /**
  * \var IPAFrameContext::cproc
  * \brief Color Processing parameters for this frame


@@ -89,17 +89,20 @@ struct IPAActiveState {
 	} agc;

 	struct {
-		struct {
-			RGB<double> manual;
-			RGB<double> automatic;
-		} gains;
-		unsigned int temperatureK;
+		struct AwbState {
+			RGB<double> gains;
+			unsigned int temperatureK;
+		};
+
+		AwbState manual;
+		AwbState automatic;
 		bool autoEnabled;
 	} awb;

 	struct {
-		Matrix<float, 3, 3> ccm;
+		Matrix<float, 3, 3> manual;
+		Matrix<float, 3, 3> automatic;
 	} ccm;

 	struct {


@@ -115,10 +115,7 @@ const IPAHwSettings ipaHwSettingsV12{

 /* List of controls handled by the RkISP1 IPA */
 const ControlInfoMap::Map rkisp1Controls{
-	{ &controls::AwbEnable, ControlInfo(false, true) },
-	{ &controls::ColourGains, ControlInfo(0.0f, 3.996f, 1.0f) },
 	{ &controls::DebugMetadataEnable, ControlInfo(false, true, false) },
-	{ &controls::Sharpness, ControlInfo(0.0f, 10.0f, 1.0f) },
 	{ &controls::draft::NoiseReductionMode, ControlInfo(controls::draft::NoiseReductionModeValues) },
 };


@@ -58,23 +58,24 @@ const ControlInfoMap::Map ipaControls{
 	/* \todo Move this to the Camera class */
 	{ &controls::AeEnable, ControlInfo(false, true, true) },
 	{ &controls::ExposureTimeMode,
-	  ControlInfo(static_cast<int32_t>(controls::ExposureTimeModeAuto),
-		      static_cast<int32_t>(controls::ExposureTimeModeManual),
-		      static_cast<int32_t>(controls::ExposureTimeModeAuto)) },
+	  ControlInfo({ { ControlValue(controls::ExposureTimeModeAuto),
+			  ControlValue(controls::ExposureTimeModeManual) } },
+		      ControlValue(controls::ExposureTimeModeAuto)) },
 	{ &controls::ExposureTime,
 	  ControlInfo(1, 66666, static_cast<int32_t>(defaultExposureTime.get<std::micro>())) },
 	{ &controls::AnalogueGainMode,
-	  ControlInfo(static_cast<int32_t>(controls::AnalogueGainModeAuto),
-		      static_cast<int32_t>(controls::AnalogueGainModeManual),
-		      static_cast<int32_t>(controls::AnalogueGainModeAuto)) },
+	  ControlInfo({ { ControlValue(controls::AnalogueGainModeAuto),
+			  ControlValue(controls::AnalogueGainModeManual) } },
+		      ControlValue(controls::AnalogueGainModeAuto)) },
 	{ &controls::AnalogueGain, ControlInfo(1.0f, 16.0f, 1.0f) },
 	{ &controls::AeMeteringMode, ControlInfo(controls::AeMeteringModeValues) },
 	{ &controls::AeConstraintMode, ControlInfo(controls::AeConstraintModeValues) },
 	{ &controls::AeExposureMode, ControlInfo(controls::AeExposureModeValues) },
 	{ &controls::ExposureValue, ControlInfo(-8.0f, 8.0f, 0.0f) },
-	{ &controls::AeFlickerMode, ControlInfo(static_cast<int>(controls::FlickerOff),
-						static_cast<int>(controls::FlickerManual),
-						static_cast<int>(controls::FlickerOff)) },
+	{ &controls::AeFlickerMode,
+	  ControlInfo({ { ControlValue(controls::FlickerOff),
+			  ControlValue(controls::FlickerManual) } },
+		      ControlValue(controls::FlickerOff)) },
 	{ &controls::AeFlickerPeriod, ControlInfo(100, 1000000) },
 	{ &controls::Brightness, ControlInfo(-1.0f, 1.0f, 0.0f) },
 	{ &controls::Contrast, ControlInfo(0.0f, 32.0f, 1.0f) },
@@ -232,25 +233,6 @@ int32_t IpaBase::configure(const IPACameraSensorInfo &sensorInfo, const ConfigPa
 		agcStatus.analogueGain = defaultAnalogueGain;
 		applyAGC(&agcStatus, ctrls);
-
-		/*
-		 * Set the lens to the default (typically hyperfocal) position
-		 * on first start.
-		 */
-		if (lensPresent_) {
-			RPiController::AfAlgorithm *af =
-				dynamic_cast<RPiController::AfAlgorithm *>(controller_.getAlgorithm("af"));
-
-			if (af) {
-				float defaultPos =
-					ipaAfControls.at(&controls::LensPosition).def().get<float>();
-				ControlList lensCtrl(lensCtrls_);
-				int32_t hwpos;
-
-				af->setLensPosition(defaultPos, &hwpos);
-				lensCtrl.set(V4L2_CID_FOCUS_ABSOLUTE, hwpos);
-				result->lensControls = std::move(lensCtrl);
-			}
-		}
 	}

 	result->sensorControls = std::move(ctrls);
@@ -280,8 +262,20 @@ int32_t IpaBase::configure(const IPACameraSensorInfo &sensorInfo, const ConfigPa
 	ctrlMap.merge(ControlInfoMap::Map(ipaColourControls));

 	/* Declare Autofocus controls, only if we have a controllable lens */
-	if (lensPresent_)
+	if (lensPresent_) {
 		ctrlMap.merge(ControlInfoMap::Map(ipaAfControls));
+		RPiController::AfAlgorithm *af =
+			dynamic_cast<RPiController::AfAlgorithm *>(controller_.getAlgorithm("af"));
+		if (af) {
+			double min, max, dflt;
+			af->getLensLimits(min, max);
+			dflt = af->getDefaultLensPosition();
+			ctrlMap[&controls::LensPosition] =
+				ControlInfo(static_cast<float>(min),
+					    static_cast<float>(max),
+					    static_cast<float>(dflt));
+		}
+	}

 	result->controlInfo = ControlInfoMap(std::move(ctrlMap), controls::controls);
@@ -319,14 +313,35 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
 	/* Make a note of this as it tells us the HDR status of the first few frames. */
 	hdrStatus_ = agcStatus.hdr;

+	/*
+	 * AF: If no lens position was specified, drive lens to a default position.
+	 * This had to be deferred (not initialised by a constructor) until here
+	 * to ensure that exactly ONE starting position is sent to the lens driver.
+	 * It should be the static API default, not dependent on AF range or mode.
+	 */
+	if (firstStart_ && lensPresent_) {
+		RPiController::AfAlgorithm *af = dynamic_cast<RPiController::AfAlgorithm *>(
+			controller_.getAlgorithm("af"));
+		if (af && !af->getLensPosition()) {
+			int32_t hwpos;
+			double pos = af->getDefaultLensPosition();
+			if (af->setLensPosition(pos, &hwpos, true)) {
+				ControlList lensCtrls(lensCtrls_);
+				lensCtrls.set(V4L2_CID_FOCUS_ABSOLUTE, hwpos);
+				setLensControls.emit(lensCtrls);
+			}
+		}
+	}
+
 	/*
 	 * Initialise frame counts, and decide how many frames must be hidden or
 	 * "mistrusted", which depends on whether this is a startup from cold,
 	 * or merely a mode switch in a running system.
 	 */
+	unsigned int agcConvergenceFrames = 0, awbConvergenceFrames = 0;
 	frameCount_ = 0;
 	if (firstStart_) {
-		dropFrameCount_ = helper_->hideFramesStartup();
+		invalidCount_ = helper_->hideFramesStartup();
 		mistrustCount_ = helper_->mistrustFramesStartup();

 		/*
@@ -336,7 +351,6 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
 		 * (mistrustCount_) that they won't see. But if zero (i.e.
 		 * no convergence necessary), no frames need to be dropped.
 		 */
-		unsigned int agcConvergenceFrames = 0;
 		RPiController::AgcAlgorithm *agc = dynamic_cast<RPiController::AgcAlgorithm *>(
 			controller_.getAlgorithm("agc"));
 		if (agc) {
@@ -345,7 +359,6 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
 				agcConvergenceFrames += mistrustCount_;
 		}

-		unsigned int awbConvergenceFrames = 0;
 		RPiController::AwbAlgorithm *awb = dynamic_cast<RPiController::AwbAlgorithm *>(
 			controller_.getAlgorithm("awb"));
 		if (awb) {
@@ -353,15 +366,18 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
 			if (awbConvergenceFrames)
 				awbConvergenceFrames += mistrustCount_;
 		}
-
-		dropFrameCount_ = std::max({ dropFrameCount_, agcConvergenceFrames, awbConvergenceFrames });
-		LOG(IPARPI, Debug) << "Drop " << dropFrameCount_ << " frames on startup";
 	} else {
-		dropFrameCount_ = helper_->hideFramesModeSwitch();
+		invalidCount_ = helper_->hideFramesModeSwitch();
 		mistrustCount_ = helper_->mistrustFramesModeSwitch();
 	}

-	result->dropFrameCount = dropFrameCount_;
+	result->startupFrameCount = std::max({ agcConvergenceFrames, awbConvergenceFrames });
+	result->invalidFrameCount = invalidCount_;
+
+	invalidCount_ = std::max({ invalidCount_, agcConvergenceFrames, awbConvergenceFrames });
+
+	LOG(IPARPI, Debug) << "Startup frames: " << result->startupFrameCount
+			   << " Invalid frames: " << result->invalidFrameCount;

 	firstStart_ = false;
 	lastRunTimestamp_ = 0;
@@ -441,7 +457,7 @@ void IpaBase::prepareIsp(const PrepareParams &params)
 	/* Allow a 10% margin on the comparison below. */
 	Duration delta = (frameTimestamp - lastRunTimestamp_) * 1.0ns;
-	if (lastRunTimestamp_ && frameCount_ > dropFrameCount_ &&
+	if (lastRunTimestamp_ && frameCount_ > invalidCount_ &&
 	    delta < controllerMinFrameDuration * 0.9 && !hdrChange) {
 		/*
 		 * Ensure we merge the previous frame's metadata with the current
@@ -946,6 +962,17 @@ void IpaBase::applyControls(const ControlList &controls)
 		break;
 	}

+	case controls::AE_ENABLE: {
+		/*
+		 * The AeEnable control is now just a wrapper that will already have been
+		 * converted to ExposureTimeMode and AnalogueGainMode equivalents, so there
+		 * would be nothing to do here. Nonetheless, "handle" the control so as to
+		 * avoid warnings from the "default:" clause of the switch statement.
+		 */
+		break;
+	}
+
 	case controls::AE_FLICKER_MODE: {
 		RPiController::AgcAlgorithm *agc = dynamic_cast<RPiController::AgcAlgorithm *>(
 			controller_.getAlgorithm("agc"));
@@ -1552,7 +1579,8 @@ void IpaBase::applyFrameDurations(Duration minFrameDuration, Duration maxFrameDu
 	RPiController::AgcAlgorithm *agc = dynamic_cast<RPiController::AgcAlgorithm *>(
 		controller_.getAlgorithm("agc"));
-	agc->setMaxExposureTime(maxExposureTime);
+	if (agc)
+		agc->setMaxExposureTime(maxExposureTime);
 }

 void IpaBase::applyAGC(const struct AgcStatus *agcStatus, ControlList &ctrls)


@@ -115,8 +115,8 @@ private:
 	/* How many frames we should avoid running control algos on. */
 	unsigned int mistrustCount_;
-	/* Number of frames that need to be dropped on startup. */
-	unsigned int dropFrameCount_;
+	/* Number of frames that need to be marked as dropped on startup. */
+	unsigned int invalidCount_;
 	/* Frame timestamp for the last run of the controller. */
 	uint64_t lastRunTimestamp_;


@@ -33,6 +33,10 @@ public:
 	 *
 	 * getMode() is provided mainly for validating controls.
 	 * getLensPosition() is provided for populating DeviceStatus.
+	 *
+	 * getDefaultlensPosition() and getLensLimits() were added for
+	 * populating ControlInfoMap. They return the static API limits
+	 * which should be independent of the current range or mode.
 	 */

 	enum AfRange { AfRangeNormal = 0,
@@ -66,7 +70,9 @@ public:
 	}
 	virtual void setMode(AfMode mode) = 0;
 	virtual AfMode getMode() const = 0;
-	virtual bool setLensPosition(double dioptres, int32_t *hwpos) = 0;
+	virtual double getDefaultLensPosition() const = 0;
+	virtual void getLensLimits(double &min, double &max) const = 0;
+	virtual bool setLensPosition(double dioptres, int32_t *hwpos, bool force = false) = 0;
 	virtual std::optional<double> getLensPosition() const = 0;
 	virtual void triggerScan() = 0;
 	virtual void cancelScan() = 0;


@@ -46,6 +46,8 @@ Af::SpeedDependentParams::SpeedDependentParams()
 	: stepCoarse(1.0),
 	  stepFine(0.25),
 	  contrastRatio(0.75),
+	  retriggerRatio(0.75),
+	  retriggerDelay(10),
 	  pdafGain(-0.02),
 	  pdafSquelch(0.125),
 	  maxSlew(2.0),
@@ -60,6 +62,7 @@ Af::CfgParams::CfgParams()
 	  confThresh(16),
 	  confClip(512),
 	  skipFrames(5),
+	  checkForIR(false),
 	  map()
 {
 }
@@ -87,6 +90,8 @@ void Af::SpeedDependentParams::read(const libcamera::YamlObject &params)
 	readNumber<double>(stepCoarse, params, "step_coarse");
 	readNumber<double>(stepFine, params, "step_fine");
 	readNumber<double>(contrastRatio, params, "contrast_ratio");
+	readNumber<double>(retriggerRatio, params, "retrigger_ratio");
+	readNumber<uint32_t>(retriggerDelay, params, "retrigger_delay");
 	readNumber<double>(pdafGain, params, "pdaf_gain");
 	readNumber<double>(pdafSquelch, params, "pdaf_squelch");
 	readNumber<double>(maxSlew, params, "max_slew");
@@ -137,6 +142,7 @@ int Af::CfgParams::read(const libcamera::YamlObject &params)
 	readNumber<uint32_t>(confThresh, params, "conf_thresh");
 	readNumber<uint32_t>(confClip, params, "conf_clip");
 	readNumber<uint32_t>(skipFrames, params, "skip_frames");
+	readNumber<bool>(checkForIR, params, "check_for_ir");

 	if (params.contains("map"))
 		map = params["map"].get<ipa::Pwl>(ipa::Pwl{});
@@ -176,27 +182,38 @@ Af::Af(Controller *controller)
 	  useWindows_(false),
 	  phaseWeights_(),
 	  contrastWeights_(),
+	  awbWeights_(),
 	  scanState_(ScanState::Idle),
 	  initted_(false),
+	  irFlag_(false),
 	  ftarget_(-1.0),
 	  fsmooth_(-1.0),
 	  prevContrast_(0.0),
+	  oldSceneContrast_(0.0),
+	  prevAverage_{ 0.0, 0.0, 0.0 },
+	  oldSceneAverage_{ 0.0, 0.0, 0.0 },
+	  prevPhase_(0.0),
 	  skipCount_(0),
 	  stepCount_(0),
 	  dropCount_(0),
+	  sameSignCount_(0),
+	  sceneChangeCount_(0),
 	  scanMaxContrast_(0.0),
 	  scanMinContrast_(1.0e9),
+	  scanStep_(0.0),
 	  scanData_(),
 	  reportState_(AfState::Idle)
 {
 	/*
-	 * Reserve space for data, to reduce memory fragmentation. It's too early
-	 * to query the size of the PDAF (from camera) and Contrast (from ISP)
-	 * statistics, but these are plausible upper bounds.
+	 * Reserve space for data structures, to reduce memory fragmentation.
+	 * It's too early to query the size of the PDAF sensor data, so guess.
 	 */
+	windows_.reserve(1);
 	phaseWeights_.w.reserve(16 * 12);
 	contrastWeights_.w.reserve(getHardwareConfig().focusRegions.width *
 				   getHardwareConfig().focusRegions.height);
+	contrastWeights_.w.reserve(getHardwareConfig().awbRegions.width *
+				   getHardwareConfig().awbRegions.height);
 	scanData_.reserve(32);
 }
@@ -235,13 +252,14 @@ void Af::switchMode(CameraMode const &cameraMode, [[maybe_unused]] Metadata *met
 			   << statsRegion_.height;
 	invalidateWeights();

-	if (scanState_ >= ScanState::Coarse && scanState_ < ScanState::Settle) {
+	if (scanState_ >= ScanState::Coarse1 && scanState_ < ScanState::Settle) {
 		/*
 		 * If a scan was in progress, re-start it, as CDAF statistics
 		 * may have changed. Though if the application is just about
 		 * to take a still picture, this will not help...
 		 */
 		startProgrammedScan();
+		updateLensPosition();
 	}
 	skipCount_ = cfg_.skipFrames;
 }
@@ -307,6 +325,7 @@ void Af::invalidateWeights()
 {
 	phaseWeights_.sum = 0;
 	contrastWeights_.sum = 0;
+	awbWeights_.sum = 0;
 }

 bool Af::getPhase(PdafRegions const &regions, double &phase, double &conf)
@@ -328,9 +347,8 @@ bool Af::getPhase(PdafRegions const &regions, double &phase, double &conf)
 			if (c >= cfg_.confThresh) {
 				if (c > cfg_.confClip)
 					c = cfg_.confClip;
-				c -= (cfg_.confThresh >> 2);
+				c -= (cfg_.confThresh >> 1);
 				sumWc += w * c;
-				c -= (cfg_.confThresh >> 2);
 				sumWcp += (int64_t)(w * c) * (int64_t)data.phase;
 			}
 		}
@@ -364,6 +382,54 @@ double Af::getContrast(const FocusRegions &focusStats)
 	return (contrastWeights_.sum > 0) ? ((double)sumWc / (double)contrastWeights_.sum) : 0.0;
 }

+/*
+ * Get the average R, G, B values in AF window[s] (from AWB statistics).
+ * Optionally, check if all of {R,G,B} are within 4:5 of each other
+ * across more than 50% of the counted area and within the AF window:
+ * for an RGB sensor this strongly suggests that IR lighting is in use.
+ */
+bool Af::getAverageAndTestIr(const RgbyRegions &awbStats, double rgb[3])
+{
+	libcamera::Size size = awbStats.size();
+	if (size.height != awbWeights_.rows ||
+	    size.width != awbWeights_.cols || awbWeights_.sum == 0) {
+		LOG(RPiAf, Debug) << "Recompute RGB weights " << size.width << 'x' << size.height;
+		computeWeights(&awbWeights_, size.height, size.width);
+	}
+
+	uint64_t sr = 0, sg = 0, sb = 0, sw = 1;
+	uint64_t greyCount = 0, allCount = 0;
+	for (unsigned i = 0; i < awbStats.numRegions(); ++i) {
+		uint64_t r = awbStats.get(i).val.rSum;
+		uint64_t g = awbStats.get(i).val.gSum;
+		uint64_t b = awbStats.get(i).val.bSum;
+		uint64_t w = awbWeights_.w[i];
+		if (w) {
+			sw += w;
+			sr += w * r;
+			sg += w * g;
+			sb += w * b;
+		}
+		if (cfg_.checkForIR) {
+			if (4 * r < 5 * b && 4 * b < 5 * r &&
+			    4 * r < 5 * g && 4 * g < 5 * r &&
+			    4 * b < 5 * g && 4 * g < 5 * b)
+				greyCount += awbStats.get(i).counted;
+			allCount += awbStats.get(i).counted;
+		}
+	}
+
+	rgb[0] = sr / (double)sw;
+	rgb[1] = sg / (double)sw;
+	rgb[2] = sb / (double)sw;
+
+	return (cfg_.checkForIR && 2 * greyCount > allCount &&
+		4 * sr < 5 * sb && 4 * sb < 5 * sr &&
+		4 * sr < 5 * sg && 4 * sg < 5 * sr &&
+		4 * sb < 5 * sg && 4 * sg < 5 * sb);
+}
+
 void Af::doPDAF(double phase, double conf)
 {
 	/* Apply loop gain */
@@ -410,7 +476,7 @@ void Af::doPDAF(double phase, double conf)
 bool Af::earlyTerminationByPhase(double phase)
 {
 	if (scanData_.size() > 0 &&
-	    scanData_[scanData_.size() - 1].conf >= cfg_.confEpsilon) {
+	    scanData_[scanData_.size() - 1].conf >= cfg_.confThresh) {
 		double oldFocus = scanData_[scanData_.size() - 1].focus;
 		double oldPhase = scanData_[scanData_.size() - 1].phase;
@@ -419,11 +485,12 @@ bool Af::earlyTerminationByPhase(double phase)
 		 * Interpolate/extrapolate the lens position for zero phase.
 		 * Check that the extrapolation is well-conditioned.
 		 */
-		if ((ftarget_ - oldFocus) * (phase - oldPhase) > 0.0) {
+		if ((ftarget_ - oldFocus) * (phase - oldPhase) * cfg_.speeds[speed_].pdafGain < 0.0) {
 			double param = phase / (phase - oldPhase);
-			if (-3.0 <= param && param <= 3.5) {
-				ftarget_ += param * (oldFocus - ftarget_);
+			if ((-2.5 <= param || mode_ == AfModeContinuous) && param <= 3.0) {
 				LOG(RPiAf, Debug) << "ETBP: param=" << param;
+				param = std::max(param, -2.5);
+				ftarget_ += param * (oldFocus - ftarget_);
 				return true;
 			}
 		}
@@ -436,15 +503,28 @@ double Af::findPeak(unsigned i) const
 {
 	double f = scanData_[i].focus;

-	if (i > 0 && i + 1 < scanData_.size()) {
-		double dropLo = scanData_[i].contrast - scanData_[i - 1].contrast;
-		double dropHi = scanData_[i].contrast - scanData_[i + 1].contrast;
-		if (0.0 <= dropLo && dropLo < dropHi) {
-			double param = 0.3125 * (1.0 - dropLo / dropHi) * (1.6 - dropLo / dropHi);
-			f += param * (scanData_[i - 1].focus - f);
-		} else if (0.0 <= dropHi && dropHi < dropLo) {
-			double param = 0.3125 * (1.0 - dropHi / dropLo) * (1.6 - dropHi / dropLo);
-			f += param * (scanData_[i + 1].focus - f);
+	if (scanData_.size() >= 3) {
+		/*
+		 * Given the sample with the highest contrast score and its two
+		 * neighbours either side (or same side if at the end of a scan),
+		 * solve for the best lens position by fitting a parabola.
+		 * Adapted from awb.cpp: interpolateQaudaratic()
+		 */
+
+		if (i == 0)
+			i++;
+		else if (i + 1 >= scanData_.size())
+			i--;
+
+		double abx = scanData_[i - 1].focus - scanData_[i].focus;
+		double aby = scanData_[i - 1].contrast - scanData_[i].contrast;
+		double cbx = scanData_[i + 1].focus - scanData_[i].focus;
+		double cby = scanData_[i + 1].contrast - scanData_[i].contrast;
+		double denom = 2.0 * (aby * cbx - cby * abx);
+		if (std::abs(denom) >= (1.0 / 64.0) && denom * abx > 0.0) {
+			f = (aby * cbx * cbx - cby * abx * abx) / denom;
+			f = std::clamp(f, std::min(abx, cbx), std::max(abx, cbx));
+			f += scanData_[i].focus;
 		}
 	}
@@ -458,36 +538,49 @@ void Af::doScan(double contrast, double phase, double conf)
 	if (scanData_.empty() || contrast > scanMaxContrast_) {
 		scanMaxContrast_ = contrast;
 		scanMaxIndex_ = scanData_.size();
+		if (scanState_ != ScanState::Fine)
+			std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
 	}
 	if (contrast < scanMinContrast_)
 		scanMinContrast_ = contrast;
 	scanData_.emplace_back(ScanRecord{ ftarget_, contrast, phase, conf });

-	if (scanState_ == ScanState::Coarse) {
-		if (ftarget_ >= cfg_.ranges[range_].focusMax ||
-		    contrast < cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
-			/*
-			 * Finished course scan, or termination based on contrast.
-			 * Jump to just after max contrast and start fine scan.
-			 */
-			ftarget_ = std::min(ftarget_, findPeak(scanMaxIndex_) +
-					    2.0 * cfg_.speeds[speed_].stepFine);
-			scanState_ = ScanState::Fine;
-			scanData_.clear();
-		} else
-			ftarget_ += cfg_.speeds[speed_].stepCoarse;
-	} else { /* ScanState::Fine */
-		if (ftarget_ <= cfg_.ranges[range_].focusMin || scanData_.size() >= 5 ||
-		    contrast < cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
-			/*
-			 * Finished fine scan, or termination based on contrast.
-			 * Use quadratic peak-finding to find best contrast position.
-			 */
-			ftarget_ = findPeak(scanMaxIndex_);
-			scanState_ = ScanState::Settle;
-		} else
-			ftarget_ -= cfg_.speeds[speed_].stepFine;
-	}
+	if ((scanStep_ >= 0.0 && ftarget_ >= cfg_.ranges[range_].focusMax) ||
+	    (scanStep_ <= 0.0 && ftarget_ <= cfg_.ranges[range_].focusMin) ||
+	    (scanState_ == ScanState::Fine && scanData_.size() >= 3) ||
+	    contrast < cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
+		double pk = findPeak(scanMaxIndex_);
+
+		/*
+		 * Finished a scan, by hitting a limit or due to constrast dropping off.
+		 * If this is a first coarse scan and we didn't bracket the peak, reverse!
+		 * If this is a fine scan, or no fine step was defined, we've finished.
+		 * Otherwise, start fine scan in opposite direction.
+		 */
+		if (scanState_ == ScanState::Coarse1 &&
+		    scanData_[0].contrast >= cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
+			scanStep_ = -scanStep_;
+			scanState_ = ScanState::Coarse2;
+		} else if (scanState_ == ScanState::Fine || cfg_.speeds[speed_].stepFine <= 0.0) {
+			ftarget_ = pk;
+			scanState_ = ScanState::Settle;
+		} else if (scanStep_ >= 0.0) {
+			ftarget_ = std::min(pk + cfg_.speeds[speed_].stepFine,
+					    cfg_.ranges[range_].focusMax);
+			scanStep_ = -cfg_.speeds[speed_].stepFine;
+			scanState_ = ScanState::Fine;
+		} else {
+			ftarget_ = std::max(pk - cfg_.speeds[speed_].stepFine,
+					    cfg_.ranges[range_].focusMin);
+			scanStep_ = cfg_.speeds[speed_].stepFine;
+			scanState_ = ScanState::Fine;
+		}
+		scanData_.clear();
+	} else
+		ftarget_ += scanStep_;

 	stepCount_ = (ftarget_ == fsmooth_) ? 0 : cfg_.speeds[speed_].stepFrames;
 }
@@ -501,26 +594,70 @@ void Af::doAF(double contrast, double phase, double conf)
 		return;
 	}

+	/* Count frames for which PDAF phase has had same sign */
+	if (phase * prevPhase_ <= 0.0)
+		sameSignCount_ = 0;
+	else
+		sameSignCount_++;
+	prevPhase_ = phase;
+
+	if (mode_ == AfModeManual)
+		return; /* nothing to do */
+
 	if (scanState_ == ScanState::Pdaf) {
 		/*
 		 * Use PDAF closed-loop control whenever available, in both CAF
 		 * mode and (for a limited number of iterations) when triggered.
-		 * If PDAF fails (due to poor contrast, noise or large defocus),
-		 * fall back to a CDAF-based scan. To avoid "nuisance" scans,
-		 * scan only after a number of frames with low PDAF confidence.
+		 * If PDAF fails (due to poor contrast, noise or large defocus)
+		 * for at least dropoutFrames, fall back to a CDAF-based scan
+		 * immediately (in triggered-auto) or on scene change (in CAF).
 		 */
-		if (conf > (dropCount_ ? 1.0 : 0.25) * cfg_.confEpsilon) {
-			doPDAF(phase, conf);
+		if (conf >= cfg_.confEpsilon) {
+			if (mode_ == AfModeAuto || sameSignCount_ >= 3)
+				doPDAF(phase, conf);
 			if (stepCount_ > 0)
 				stepCount_--;
 			else if (mode_ != AfModeContinuous)
 				scanState_ = ScanState::Idle;
+			oldSceneContrast_ = contrast;
+			std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
+			sceneChangeCount_ = 0;
 			dropCount_ = 0;
-		} else if (++dropCount_ == cfg_.speeds[speed_].dropoutFrames)
-			startProgrammedScan();
-	} else if (scanState_ >= ScanState::Coarse && fsmooth_ == ftarget_) {
+			return;
+		} else {
+			dropCount_++;
+			if (dropCount_ < cfg_.speeds[speed_].dropoutFrames)
+				return;
+			if (mode_ != AfModeContinuous) {
+				startProgrammedScan();
+				return;
+			}
+			/* else fall through to waiting for a scene change */
+		}
+	}
+
+	if (scanState_ < ScanState::Coarse1 && mode_ == AfModeContinuous) {
 		/*
-		 * Scanning sequence. This means PDAF has become unavailable.
+		 * In CAF mode, not in a scan, and PDAF is unavailable.
+		 * Wait for a scene change, followed by stability.
+		 */
+		if (contrast + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneContrast_ ||
+		    oldSceneContrast_ + 1.0 < cfg_.speeds[speed_].retriggerRatio * contrast ||
+		    prevAverage_[0] + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneAverage_[0] ||
+		    oldSceneAverage_[0] + 1.0 < cfg_.speeds[speed_].retriggerRatio * prevAverage_[0] ||
+		    prevAverage_[1] + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneAverage_[1] ||
+		    oldSceneAverage_[1] + 1.0 < cfg_.speeds[speed_].retriggerRatio * prevAverage_[1] ||
+		    prevAverage_[2] + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneAverage_[2] ||
+		    oldSceneAverage_[2] + 1.0 < cfg_.speeds[speed_].retriggerRatio * prevAverage_[2]) {
+			oldSceneContrast_ = contrast;
+			std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
+			sceneChangeCount_ = 1;
+		} else if (sceneChangeCount_)
+			sceneChangeCount_++;
+		if (sceneChangeCount_ >= cfg_.speeds[speed_].retriggerDelay)
+			startProgrammedScan();
+	} else if (scanState_ >= ScanState::Coarse1 && fsmooth_ == ftarget_) {
+		/*
+		 * CDAF-based scanning sequence.
 		 * Allow a delay between steps for CDAF FoM statistics to be
 		 * updated, and a "settling time" at the end of the sequence.
 		 * [A coarse or fine scan can be abandoned if two PDAF samples
@ -539,11 +676,14 @@ void Af::doAF(double contrast, double phase, double conf)
scanState_ = ScanState::Pdaf; scanState_ = ScanState::Pdaf;
else else
scanState_ = ScanState::Idle; scanState_ = ScanState::Idle;
dropCount_ = 0;
sceneChangeCount_ = 0;
oldSceneContrast_ = std::max(scanMaxContrast_, prevContrast_);
scanData_.clear(); scanData_.clear();
} else if (conf >= cfg_.confEpsilon && earlyTerminationByPhase(phase)) { } else if (conf >= cfg_.confThresh && earlyTerminationByPhase(phase)) {
std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
scanState_ = ScanState::Settle; scanState_ = ScanState::Settle;
stepCount_ = (mode_ == AfModeContinuous) ? 0 stepCount_ = (mode_ == AfModeContinuous) ? 0 : cfg_.speeds[speed_].stepFrames;
: cfg_.speeds[speed_].stepFrames;
} else } else
doScan(contrast, phase, conf); doScan(contrast, phase, conf);
} }
@@ -573,7 +713,8 @@ void Af::updateLensPosition()
void Af::startAF()
{
	/* Use PDAF if the tuning file allows it; else CDAF. */
-	if (cfg_.speeds[speed_].dropoutFrames > 0 &&
	if (cfg_.speeds[speed_].pdafGain != 0.0 &&
	    cfg_.speeds[speed_].dropoutFrames > 0 &&
	    (mode_ == AfModeContinuous || cfg_.speeds[speed_].pdafFrames > 0)) {
		if (!initted_) {
			ftarget_ = cfg_.ranges[range_].focusDefault;
@@ -583,16 +724,30 @@ void Af::startAF()
		scanState_ = ScanState::Pdaf;
		scanData_.clear();
		dropCount_ = 0;
		oldSceneContrast_ = 0.0;
		sceneChangeCount_ = 0;
		reportState_ = AfState::Scanning;
-	} else
	} else {
		startProgrammedScan();
		updateLensPosition();
	}
}
void Af::startProgrammedScan()
{
-	ftarget_ = cfg_.ranges[range_].focusMin;
-	updateLensPosition();
-	scanState_ = ScanState::Coarse;
	if (!initted_ || mode_ != AfModeContinuous ||
	    fsmooth_ <= cfg_.ranges[range_].focusMin + 2.0 * cfg_.speeds[speed_].stepCoarse) {
		ftarget_ = cfg_.ranges[range_].focusMin;
		scanStep_ = cfg_.speeds[speed_].stepCoarse;
		scanState_ = ScanState::Coarse2;
	} else if (fsmooth_ >= cfg_.ranges[range_].focusMax - 2.0 * cfg_.speeds[speed_].stepCoarse) {
		ftarget_ = cfg_.ranges[range_].focusMax;
		scanStep_ = -cfg_.speeds[speed_].stepCoarse;
		scanState_ = ScanState::Coarse2;
	} else {
		scanStep_ = -cfg_.speeds[speed_].stepCoarse;
		scanState_ = ScanState::Coarse1;
	}
	scanMaxContrast_ = 0.0;
	scanMinContrast_ = 1.0e9;
	scanMaxIndex_ = 0;
@@ -633,7 +788,7 @@ void Af::prepare(Metadata *imageMetadata)
		uint32_t oldSt = stepCount_;
		if (imageMetadata->get("pdaf.regions", regions) == 0)
			getPhase(regions, phase, conf);
-		doAF(prevContrast_, phase, conf);
		doAF(prevContrast_, phase, irFlag_ ? 0 : conf);
		updateLensPosition();
		LOG(RPiAf, Debug) << std::fixed << std::setprecision(2)
				  << static_cast<unsigned int>(reportState_)
@@ -643,7 +798,8 @@ void Af::prepare(Metadata *imageMetadata)
				  << " ft" << oldFt << "->" << ftarget_
				  << " fs" << oldFs << "->" << fsmooth_
				  << " cont=" << (int)prevContrast_
-				  << " phase=" << (int)phase << " conf=" << (int)conf;
				  << " phase=" << (int)phase << " conf=" << (int)conf
				  << (irFlag_ ? " IR" : "");
	}

	/* Report status and produce new lens setting */
@@ -656,6 +812,8 @@ void Af::prepare(Metadata *imageMetadata)
	if (mode_ == AfModeAuto && scanState_ != ScanState::Idle)
		status.state = AfState::Scanning;
	else if (mode_ == AfModeManual)
		status.state = AfState::Idle;
	else
		status.state = reportState_;
	status.lensSetting = initted_ ? std::optional<int>(cfg_.map.eval(fsmooth_))
@@ -667,6 +825,7 @@ void Af::process(StatisticsPtr &stats, [[maybe_unused]] Metadata *imageMetadata)
{
	(void)imageMetadata;
	prevContrast_ = getContrast(stats->focusRegions);
	irFlag_ = getAverageAndTestIr(stats->awbRegions, prevAverage_);
}

/* Controls */
@@ -715,11 +874,23 @@ void Af::setWindows(libcamera::Span<libcamera::Rectangle const> const &wins)
	invalidateWeights();
}

-bool Af::setLensPosition(double dioptres, int *hwpos)
double Af::getDefaultLensPosition() const
{
	return cfg_.ranges[AfRangeNormal].focusDefault;
}

void Af::getLensLimits(double &min, double &max) const
{
	/* Limits for manual focus are set by map, not by ranges */
	min = cfg_.map.domain().start;
	max = cfg_.map.domain().end;
}

bool Af::setLensPosition(double dioptres, int *hwpos, bool force)
{
	bool changed = false;

-	if (mode_ == AfModeManual) {
	if (mode_ == AfModeManual || force) {
		LOG(RPiAf, Debug) << "setLensPosition: " << dioptres;
		ftarget_ = cfg_.map.domain().clamp(dioptres);
		changed = !(initted_ && fsmooth_ == ftarget_);
@@ -763,7 +934,7 @@ void Af::setMode(AfAlgorithm::AfMode mode)
		pauseFlag_ = false;
		if (mode == AfModeContinuous)
			scanState_ = ScanState::Trigger;
-		else if (mode != AfModeAuto || scanState_ < ScanState::Coarse)
		else if (mode != AfModeAuto || scanState_ < ScanState::Coarse1)
			goIdle();
	}
}
@@ -779,12 +950,14 @@ void Af::pause(AfAlgorithm::AfPause pause)
	if (mode_ == AfModeContinuous) {
		if (pause == AfPauseResume && pauseFlag_) {
			pauseFlag_ = false;
-			if (scanState_ < ScanState::Coarse)
			if (scanState_ < ScanState::Coarse1)
				scanState_ = ScanState::Trigger;
		} else if (pause != AfPauseResume && !pauseFlag_) {
			pauseFlag_ = true;
-			if (pause == AfPauseImmediate || scanState_ < ScanState::Coarse)
-				goIdle();
			if (pause == AfPauseImmediate || scanState_ < ScanState::Coarse1) {
				scanState_ = ScanState::Idle;
				scanData_.clear();
			}
		}
	}
}


@@ -15,20 +15,28 @@
/*
 * This algorithm implements a hybrid of CDAF and PDAF, favouring PDAF.
 *
- * Whenever PDAF is available, it is used in a continuous feedback loop.
- * When triggered in auto mode, we simply enable AF for a limited number
- * of frames (it may terminate early if the delta becomes small enough).
 * Whenever PDAF is available (and reports sufficiently high confidence),
 * it is used for continuous feedback control of the lens position. When
 * triggered in Auto mode, we enable the loop for a limited number of frames
 * (it may terminate sooner if the phase becomes small). In CAF mode, the
 * PDAF loop runs continuously. Very small lens movements are suppressed.
 *
 * When PDAF confidence is low (due e.g. to low contrast or extreme defocus)
 * or PDAF data are absent, fall back to CDAF with a programmed scan pattern.
- * A coarse and fine scan are performed, using ISP's CDAF focus FoM to
- * estimate the lens position with peak contrast. This is slower due to
- * extra latency in the ISP, and requires a settling time between steps.
 * A coarse and fine scan are performed, using the ISP's CDAF contrast FoM
 * to estimate the lens position with peak contrast. (This is slower due to
 * extra latency in the ISP, and requires a settling time between steps.)
 * The scan may terminate early if PDAF recovers and allows the zero-phase
 * lens position to be interpolated.
 *
- * Some hysteresis is applied to the switch between PDAF and CDAF, to avoid
- * "nuisance" scans. During each interval where PDAF is not working, only
- * ONE scan will be performed; CAF cannot track objects using CDAF alone.
 * In CAF mode, the fallback to a CDAF scan is triggered when PDAF fails to
 * report high confidence and a configurable number of frames have elapsed
 * since the last image change since either PDAF was working or a previous
 * scan found peak contrast. Image changes are detected using both contrast
 * and AWB statistics (within the AF window[s]).
 *
 * IR lighting can interfere with the correct operation of PDAF, so we
 * optionally try to detect it (from AWB statistics).
 */
namespace RPiController {

@@ -54,7 +62,9 @@ public:
	void setWindows(libcamera::Span<libcamera::Rectangle const> const &wins) override;
	void setMode(AfMode mode) override;
	AfMode getMode() const override;
-	bool setLensPosition(double dioptres, int32_t *hwpos) override;
	double getDefaultLensPosition() const override;
	void getLensLimits(double &min, double &max) const override;
	bool setLensPosition(double dioptres, int32_t *hwpos, bool force) override;
	std::optional<double> getLensPosition() const override;
	void triggerScan() override;
	void cancelScan() override;
@@ -65,7 +75,8 @@ private:
		Idle = 0,
		Trigger,
		Pdaf,
-		Coarse,
		Coarse1,
		Coarse2,
		Fine,
		Settle
	};
@@ -80,9 +91,11 @@ private:
	};

	struct SpeedDependentParams {
-		double stepCoarse;       /* used for scans */
-		double stepFine;         /* used for scans */
		double stepCoarse;       /* in dioptres; used for scans */
		double stepFine;         /* in dioptres; used for scans */
		double contrastRatio;    /* used for scan termination and reporting */
		double retriggerRatio;   /* contrast and RGB ratio for re-triggering */
		uint32_t retriggerDelay; /* frames of stability before re-triggering */
		double pdafGain;         /* coefficient for PDAF feedback loop */
		double pdafSquelch;      /* PDAF stability parameter (device-specific) */
		double maxSlew;          /* limit for lens movement per frame */
@@ -101,6 +114,7 @@ private:
		uint32_t confThresh;     /* PDAF confidence cell min (sensor-specific) */
		uint32_t confClip;       /* PDAF confidence cell max (sensor-specific) */
		uint32_t skipFrames;     /* frames to skip at start or modeswitch */
		bool checkForIR;         /* Set this if PDAF is unreliable in IR light */
		libcamera::ipa::Pwl map; /* converts dioptres -> lens driver position */

		CfgParams();
@@ -129,6 +143,7 @@ private:
	void invalidateWeights();
	bool getPhase(PdafRegions const &regions, double &phase, double &conf);
	double getContrast(const FocusRegions &focusStats);
	bool getAverageAndTestIr(const RgbyRegions &awbStats, double rgb[3]);
	void doPDAF(double phase, double conf);
	bool earlyTerminationByPhase(double phase);
	double findPeak(unsigned index) const;
@@ -150,15 +165,20 @@ private:
	bool useWindows_;
	RegionWeights phaseWeights_;
	RegionWeights contrastWeights_;
	RegionWeights awbWeights_;

	/* Working state. */
	ScanState scanState_;
-	bool initted_;
	bool initted_, irFlag_;
	double ftarget_, fsmooth_;
-	double prevContrast_;
	double prevContrast_, oldSceneContrast_;
	double prevAverage_[3], oldSceneAverage_[3];
	double prevPhase_;
	unsigned skipCount_, stepCount_, dropCount_;
	unsigned sameSignCount_;
	unsigned sceneChangeCount_;
	unsigned scanMaxIndex_;
-	double scanMaxContrast_, scanMinContrast_;
	double scanMaxContrast_, scanMinContrast_, scanStep_;
	std::vector<ScanRecord> scanData_;
	AfState reportState_;
};


@@ -717,7 +717,7 @@ static double computeInitialY(StatisticsPtr &stats, AwbStatus const &awb,
	/* Factor in the AWB correction if needed. */
	if (stats->agcStatsPos == Statistics::AgcStatsPos::PreWb)
-		sum *= RGB<double>{ { awb.gainR, awb.gainR, awb.gainB } };
		sum *= RGB<double>{ { awb.gainR, awb.gainG, awb.gainB } };

	double ySum = ipa::rec601LuminanceFromRGB(sum);


@@ -165,7 +165,6 @@ int AwbConfig::read(const libcamera::YamlObject &params)
			bayes = false;
		}
	}
-	fast = params[fast].get<int>(bayes); /* default to fast for Bayesian, otherwise slow */
	whitepointR = params["whitepoint_r"].get<double>(0.0);
	whitepointB = params["whitepoint_b"].get<double>(0.0);
	if (bayes == false)


@@ -43,7 +43,6 @@ struct AwbConfig {
	uint16_t startupFrames;
	unsigned int convergenceFrames; /* approx number of frames to converge */
	double speed;                   /* IIR filter speed applied to algorithm results */
-	bool fast;                      /* "fast" mode uses a 16x16 rather than 32x32 grid */
	libcamera::ipa::Pwl ctR;        /* function maps CT to r (= R/G) */
	libcamera::ipa::Pwl ctB;        /* function maps CT to b (= B/G) */
	libcamera::ipa::Pwl ctRInverse; /* inverse of ctR */

File diff suppressed because it is too large


@@ -1139,11 +1139,27 @@
                    "step_coarse": 1.0,
                    "step_fine": 0.25,
                    "contrast_ratio": 0.75,
                    "retrigger_ratio": 0.8,
                    "retrigger_delay": 10,
                    "pdaf_gain": -0.016,
                    "pdaf_squelch": 0.125,
                    "max_slew": 1.5,
                    "pdaf_frames": 20,
                    "dropout_frames": 6,
                    "step_frames": 5
                },
                "fast":
                {
                    "step_coarse": 1.25,
                    "step_fine": 0.0,
                    "contrast_ratio": 0.75,
                    "retrigger_ratio": 0.8,
                    "retrigger_delay": 8,
                    "pdaf_gain": -0.02,
                    "pdaf_squelch": 0.125,
                    "max_slew": 2.0,
-                   "pdaf_frames": 20,
-                   "dropout_frames": 6,
                    "pdaf_frames": 16,
                    "dropout_frames": 4,
                    "step_frames": 4
                }
            },
@@ -1151,6 +1167,7 @@
                "conf_thresh": 16,
                "conf_clip": 512,
                "skip_frames": 5,
                "check_for_ir": false,
                "map": [ 0.0, 445, 15.0, 925 ]
            }
        },
@@ -1267,4 +1284,4 @@
        }
    }
]
}


@@ -1156,11 +1156,27 @@
                    "step_coarse": 1.0,
                    "step_fine": 0.25,
                    "contrast_ratio": 0.75,
                    "retrigger_ratio": 0.8,
                    "retrigger_delay": 10,
                    "pdaf_gain": -0.016,
                    "pdaf_squelch": 0.125,
                    "max_slew": 1.5,
                    "pdaf_frames": 20,
                    "dropout_frames": 6,
                    "step_frames": 5
                },
                "fast":
                {
                    "step_coarse": 1.25,
                    "step_fine": 0.0,
                    "contrast_ratio": 0.75,
                    "retrigger_ratio": 0.8,
                    "retrigger_delay": 8,
                    "pdaf_gain": -0.02,
                    "pdaf_squelch": 0.125,
                    "max_slew": 2.0,
-                   "pdaf_frames": 20,
-                   "dropout_frames": 6,
                    "pdaf_frames": 16,
                    "dropout_frames": 4,
                    "step_frames": 4
                }
            },
@@ -1168,6 +1184,7 @@
                "conf_thresh": 16,
                "conf_clip": 512,
                "skip_frames": 5,
                "check_for_ir": true,
                "map": [ 0.0, 445, 15.0, 925 ]
            }
        },
@@ -1230,4 +1247,4 @@
        }
    }
]
}


@@ -1148,23 +1148,27 @@
                    "step_coarse": 2.0,
                    "step_fine": 0.5,
                    "contrast_ratio": 0.75,
                    "retrigger_ratio" : 0.8,
                    "retrigger_delay" : 10,
                    "pdaf_gain": -0.03,
                    "pdaf_squelch": 0.2,
-                   "max_slew": 4.0,
                    "max_slew": 3.0,
                    "pdaf_frames": 20,
                    "dropout_frames": 6,
-                   "step_frames": 4
                    "step_frames": 5
                },
                "fast":
                {
-                   "step_coarse": 2.0,
-                   "step_fine": 0.5,
                    "step_coarse": 2.5,
                    "step_fine": 0.0,
                    "contrast_ratio": 0.75,
                    "retrigger_ratio" : 0.8,
                    "retrigger_delay" : 8,
                    "pdaf_gain": -0.05,
                    "pdaf_squelch": 0.2,
-                   "max_slew": 5.0,
                    "max_slew": 4.0,
                    "pdaf_frames": 16,
-                   "dropout_frames": 6,
                    "dropout_frames": 4,
                    "step_frames": 4
                }
            },
@@ -1172,6 +1176,7 @@
                "conf_thresh": 12,
                "conf_clip": 512,
                "skip_frames": 5,
                "check_for_ir": false,
                "map": [ 0.0, 420, 35.0, 920 ]
            }
        },
@@ -1290,4 +1295,4 @@
        }
    }
]
}


@@ -1057,23 +1057,27 @@
                    "step_coarse": 2.0,
                    "step_fine": 0.5,
                    "contrast_ratio": 0.75,
                    "retrigger_ratio" : 0.8,
                    "retrigger_delay" : 10,
                    "pdaf_gain": -0.03,
                    "pdaf_squelch": 0.2,
-                   "max_slew": 4.0,
                    "max_slew": 3.0,
                    "pdaf_frames": 20,
                    "dropout_frames": 6,
-                   "step_frames": 4
                    "step_frames": 5
                },
                "fast":
                {
-                   "step_coarse": 2.0,
-                   "step_fine": 0.5,
                    "step_coarse": 2.5,
                    "step_fine": 0.0,
                    "contrast_ratio": 0.75,
                    "retrigger_ratio" : 0.8,
                    "retrigger_delay" : 8,
                    "pdaf_gain": -0.05,
                    "pdaf_squelch": 0.2,
-                   "max_slew": 5.0,
                    "max_slew": 4.0,
                    "pdaf_frames": 16,
-                   "dropout_frames": 6,
                    "dropout_frames": 4,
                    "step_frames": 4
                }
            },
@@ -1081,6 +1085,7 @@
                "conf_thresh": 12,
                "conf_clip": 512,
                "skip_frames": 5,
                "check_for_ir": true,
                "map": [ 0.0, 420, 35.0, 920 ]
            }
        },
@@ -1145,4 +1150,4 @@
        }
    }
]
}


@@ -3,6 +3,7 @@
conf_files = files([
    'imx219.json',
    'imx219_noir.json',
    'imx283.json',
    'imx290.json',
    'imx296.json',
    'imx296_mono.json',


@@ -14,25 +14,25 @@
    {
        "rpi.lux":
        {
-           "reference_shutter_speed": 2461,
-           "reference_gain": 1.0,
            "reference_shutter_speed": 10857,
            "reference_gain": 1.49,
            "reference_aperture": 1.0,
-           "reference_lux": 1148,
-           "reference_Y": 13314
            "reference_lux": 1050,
            "reference_Y": 13959
        }
    },
    {
        "rpi.noise":
        {
            "reference_constant": 0,
-           "reference_slope": 2.204
            "reference_slope": 2.147
        }
    },
    {
        "rpi.geq":
        {
-           "offset": 199,
-           "slope": 0.01947
            "offset": 249,
            "slope": 0.02036
        }
    },
    {
@@ -104,19 +104,35 @@
            {
                "lo": 5500,
                "hi": 6500
            },
            "cloudy":
            {
                "lo": 6000,
                "hi": 6800
            }
        },
        "bayes": 1,
        "ct_curve":
        [
-           2213.0, 0.9607, 0.2593,
-           5313.0, 0.4822, 0.5909,
-           6237.0, 0.4739, 0.6308
            2500.0, 0.9429, 0.2809,
            2820.0, 0.8488, 0.3472,
            2830.0, 0.8303, 0.3609,
            2885.0, 0.8177, 0.3703,
            3601.0, 0.6935, 0.4705,
            3615.0, 0.6918, 0.4719,
            3622.0, 0.6894, 0.4741,
            4345.0, 0.5999, 0.5546,
            4410.0, 0.5942, 0.5601,
            4486.0, 0.5878, 0.5661,
            4576.0, 0.5779, 0.5756,
            5672.0, 0.5211, 0.6318,
            5710.0, 0.5168, 0.6362,
            6850.0, 0.4841, 0.6702
        ],
        "sensitivity_r": 1.0,
        "sensitivity_b": 1.0,
-       "transverse_pos": 0.0144,
-       "transverse_neg": 0.01
        "transverse_pos": 0.02601,
        "transverse_neg": 0.0246
    }
},
{
@@ -209,7 +225,136 @@
    {
        "omega": 1.3,
        "n_iter": 100,
-       "luminance_strength": 0.7
        "luminance_strength": 0.8,
"calibrations_Cr": [
{
"ct": 2940,
"table":
[
1.021, 1.026, 1.028, 1.029, 1.031, 1.029, 1.029, 1.029, 1.029, 1.031, 1.031, 1.028, 1.027, 1.022, 1.013, 1.008,
1.022, 1.026, 1.027, 1.028, 1.027, 1.026, 1.026, 1.025, 1.026, 1.026, 1.027, 1.027, 1.027, 1.022, 1.014, 1.009,
1.023, 1.026, 1.026, 1.027, 1.026, 1.025, 1.024, 1.024, 1.024, 1.025, 1.026, 1.027, 1.026, 1.023, 1.017, 1.012,
1.024, 1.026, 1.026, 1.026, 1.025, 1.024, 1.024, 1.023, 1.023, 1.024, 1.025, 1.026, 1.026, 1.024, 1.018, 1.013,
1.024, 1.026, 1.026, 1.026, 1.025, 1.024, 1.023, 1.023, 1.023, 1.023, 1.024, 1.026, 1.026, 1.025, 1.019, 1.013,
1.025, 1.026, 1.026, 1.026, 1.025, 1.024, 1.023, 1.023, 1.023, 1.023, 1.024, 1.026, 1.026, 1.025, 1.018, 1.013,
1.025, 1.027, 1.027, 1.027, 1.026, 1.025, 1.024, 1.023, 1.023, 1.024, 1.024, 1.026, 1.026, 1.024, 1.018, 1.013,
1.025, 1.027, 1.028, 1.028, 1.027, 1.026, 1.025, 1.024, 1.024, 1.024, 1.025, 1.026, 1.026, 1.024, 1.017, 1.012,
1.024, 1.027, 1.029, 1.029, 1.028, 1.027, 1.026, 1.026, 1.025, 1.025, 1.026, 1.026, 1.025, 1.022, 1.014, 1.009,
1.024, 1.027, 1.029, 1.031, 1.031, 1.029, 1.028, 1.028, 1.028, 1.028, 1.027, 1.026, 1.025, 1.021, 1.011, 1.007,
1.022, 1.026, 1.031, 1.031, 1.031, 1.032, 1.031, 1.031, 1.029, 1.029, 1.028, 1.026, 1.022, 1.017, 1.007, 1.003,
1.019, 1.024, 1.029, 1.031, 1.032, 1.032, 1.032, 1.031, 1.029, 1.029, 1.027, 1.024, 1.019, 1.013, 1.003, 1.001
]
},
{
"ct": 4000,
"table":
[
1.027, 1.035, 1.039, 1.041, 1.043, 1.043, 1.043, 1.043, 1.044, 1.044, 1.044, 1.041, 1.041, 1.034, 1.021, 1.014,
1.029, 1.035, 1.038, 1.039, 1.039, 1.039, 1.039, 1.039, 1.041, 1.041, 1.041, 1.041, 1.041, 1.035, 1.024, 1.017,
1.029, 1.034, 1.036, 1.038, 1.038, 1.038, 1.039, 1.039, 1.039, 1.039, 1.039, 1.041, 1.039, 1.036, 1.027, 1.021,
1.031, 1.034, 1.036, 1.036, 1.037, 1.037, 1.038, 1.037, 1.037, 1.038, 1.038, 1.039, 1.039, 1.037, 1.029, 1.021,
1.031, 1.034, 1.035, 1.036, 1.037, 1.037, 1.037, 1.037, 1.037, 1.037, 1.038, 1.038, 1.038, 1.037, 1.029, 1.022,
1.031, 1.034, 1.035, 1.036, 1.037, 1.037, 1.037, 1.036, 1.036, 1.036, 1.037, 1.038, 1.038, 1.037, 1.029, 1.022,
1.031, 1.035, 1.036, 1.037, 1.037, 1.037, 1.037, 1.036, 1.036, 1.036, 1.037, 1.038, 1.038, 1.036, 1.028, 1.021,
1.031, 1.034, 1.036, 1.037, 1.037, 1.037, 1.036, 1.036, 1.036, 1.036, 1.036, 1.037, 1.037, 1.035, 1.026, 1.019,
1.028, 1.034, 1.037, 1.037, 1.037, 1.037, 1.037, 1.036, 1.036, 1.036, 1.037, 1.037, 1.037, 1.033, 1.022, 1.016,
1.028, 1.034, 1.037, 1.038, 1.039, 1.038, 1.037, 1.037, 1.037, 1.037, 1.037, 1.037, 1.035, 1.031, 1.017, 1.011,
1.025, 1.031, 1.036, 1.039, 1.039, 1.039, 1.038, 1.038, 1.038, 1.038, 1.038, 1.036, 1.031, 1.024, 1.011, 1.006,
1.021, 1.028, 1.034, 1.037, 1.039, 1.039, 1.039, 1.038, 1.038, 1.038, 1.036, 1.033, 1.027, 1.019, 1.006, 1.001
]
},
{
"ct": 6000,
"table":
[
1.026, 1.037, 1.048, 1.054, 1.057, 1.058, 1.059, 1.059, 1.061, 1.059, 1.059, 1.056, 1.049, 1.038, 1.019, 1.013,
1.031, 1.039, 1.049, 1.054, 1.057, 1.058, 1.059, 1.059, 1.059, 1.059, 1.059, 1.056, 1.051, 1.042, 1.026, 1.018,
1.033, 1.044, 1.051, 1.054, 1.057, 1.058, 1.059, 1.059, 1.059, 1.059, 1.058, 1.058, 1.055, 1.046, 1.031, 1.023,
1.035, 1.045, 1.051, 1.055, 1.057, 1.059, 1.059, 1.059, 1.059, 1.059, 1.059, 1.058, 1.056, 1.049, 1.035, 1.026,
1.037, 1.046, 1.052, 1.055, 1.058, 1.059, 1.059, 1.059, 1.059, 1.059, 1.059, 1.058, 1.057, 1.051, 1.037, 1.027,
1.037, 1.047, 1.053, 1.056, 1.059, 1.059, 1.061, 1.059, 1.059, 1.059, 1.059, 1.058, 1.057, 1.051, 1.037, 1.027,
1.037, 1.047, 1.053, 1.057, 1.059, 1.059, 1.061, 1.061, 1.059, 1.059, 1.059, 1.058, 1.056, 1.049, 1.036, 1.026,
1.037, 1.047, 1.054, 1.057, 1.059, 1.059, 1.061, 1.061, 1.059, 1.059, 1.059, 1.058, 1.056, 1.048, 1.034, 1.025,
1.034, 1.045, 1.054, 1.057, 1.059, 1.059, 1.059, 1.059, 1.059, 1.059, 1.058, 1.057, 1.053, 1.045, 1.029, 1.021,
1.032, 1.043, 1.052, 1.057, 1.058, 1.059, 1.059, 1.059, 1.059, 1.059, 1.058, 1.055, 1.049, 1.041, 1.022, 1.013,
1.028, 1.037, 1.048, 1.053, 1.057, 1.059, 1.059, 1.059, 1.059, 1.058, 1.056, 1.051, 1.044, 1.032, 1.013, 1.007,
1.021, 1.033, 1.044, 1.051, 1.055, 1.058, 1.059, 1.059, 1.058, 1.057, 1.052, 1.047, 1.039, 1.026, 1.007, 1.001
]
}
],
"calibrations_Cb": [
{
"ct": 2940,
"table":
[
1.002, 1.012, 1.031, 1.042, 1.051, 1.056, 1.058, 1.058, 1.058, 1.058, 1.057, 1.055, 1.045, 1.033, 1.017, 1.016,
1.011, 1.026, 1.041, 1.048, 1.056, 1.063, 1.066, 1.067, 1.067, 1.066, 1.064, 1.061, 1.051, 1.045, 1.028, 1.017,
1.016, 1.033, 1.047, 1.056, 1.063, 1.067, 1.071, 1.072, 1.072, 1.071, 1.068, 1.064, 1.061, 1.051, 1.033, 1.024,
1.021, 1.038, 1.051, 1.061, 1.067, 1.071, 1.073, 1.075, 1.075, 1.074, 1.071, 1.068, 1.063, 1.054, 1.036, 1.025,
1.023, 1.041, 1.054, 1.063, 1.069, 1.073, 1.075, 1.077, 1.077, 1.076, 1.074, 1.069, 1.064, 1.055, 1.038, 1.027,
1.023, 1.043, 1.055, 1.063, 1.069, 1.074, 1.076, 1.078, 1.078, 1.077, 1.075, 1.071, 1.065, 1.056, 1.039, 1.028,
1.023, 1.043, 1.055, 1.063, 1.069, 1.074, 1.076, 1.077, 1.078, 1.076, 1.074, 1.071, 1.065, 1.056, 1.039, 1.028,
1.023, 1.041, 1.052, 1.062, 1.068, 1.072, 1.074, 1.076, 1.076, 1.075, 1.073, 1.069, 1.064, 1.055, 1.038, 1.028,
1.021, 1.038, 1.051, 1.059, 1.066, 1.069, 1.072, 1.074, 1.074, 1.073, 1.069, 1.067, 1.062, 1.052, 1.036, 1.027,
1.018, 1.032, 1.046, 1.055, 1.061, 1.066, 1.069, 1.069, 1.069, 1.069, 1.067, 1.062, 1.057, 1.047, 1.031, 1.021,
1.011, 1.023, 1.039, 1.049, 1.056, 1.061, 1.062, 1.064, 1.065, 1.064, 1.062, 1.058, 1.049, 1.038, 1.021, 1.016,
1.001, 1.019, 1.035, 1.046, 1.053, 1.058, 1.061, 1.062, 1.062, 1.062, 1.059, 1.053, 1.043, 1.033, 1.016, 1.011
]
},
{
"ct": 4000,
"table":
[
1.001, 1.003, 1.011, 1.016, 1.019, 1.019, 1.021, 1.021, 1.019, 1.019, 1.019, 1.017, 1.017, 1.013, 1.007, 1.006,
1.003, 1.011, 1.015, 1.021, 1.024, 1.026, 1.027, 1.027, 1.027, 1.026, 1.025, 1.023, 1.022, 1.016, 1.012, 1.007,
1.007, 1.015, 1.021, 1.024, 1.027, 1.029, 1.031, 1.031, 1.031, 1.029, 1.028, 1.026, 1.024, 1.022, 1.015, 1.011,
1.011, 1.017, 1.023, 1.027, 1.029, 1.032, 1.033, 1.033, 1.033, 1.033, 1.031, 1.028, 1.026, 1.024, 1.016, 1.011,
1.012, 1.019, 1.025, 1.029, 1.032, 1.033, 1.034, 1.035, 1.035, 1.034, 1.033, 1.031, 1.028, 1.025, 1.018, 1.014,
1.013, 1.021, 1.026, 1.031, 1.033, 1.034, 1.036, 1.036, 1.036, 1.035, 1.034, 1.032, 1.029, 1.026, 1.019, 1.015,
1.013, 1.021, 1.026, 1.031, 1.033, 1.035, 1.036, 1.037, 1.037, 1.036, 1.034, 1.032, 1.029, 1.027, 1.019, 1.016,
1.013, 1.021, 1.026, 1.031, 1.033, 1.035, 1.036, 1.036, 1.036, 1.036, 1.035, 1.033, 1.031, 1.027, 1.021, 1.016,
1.013, 1.021, 1.025, 1.029, 1.032, 1.034, 1.035, 1.035, 1.036, 1.035, 1.034, 1.032, 1.031, 1.027, 1.021, 1.015,
1.012, 1.019, 1.024, 1.027, 1.029, 1.032, 1.034, 1.034, 1.034, 1.034, 1.032, 1.031, 1.029, 1.026, 1.019, 1.015,
1.009, 1.015, 1.022, 1.025, 1.028, 1.029, 1.031, 1.032, 1.032, 1.031, 1.031, 1.029, 1.026, 1.023, 1.017, 1.015,
1.005, 1.014, 1.021, 1.025, 1.027, 1.029, 1.029, 1.031, 1.031, 1.031, 1.029, 1.029, 1.024, 1.021, 1.016, 1.015
]
},
{
"ct": 6000,
"table":
[
1.001, 1.001, 1.006, 1.007, 1.008, 1.009, 1.009, 1.009, 1.009, 1.009, 1.009, 1.011, 1.011, 1.011, 1.009, 1.008,
1.001, 1.005, 1.008, 1.011, 1.012, 1.013, 1.014, 1.014, 1.014, 1.013, 1.013, 1.014, 1.014, 1.012, 1.011, 1.009,
1.004, 1.008, 1.011, 1.012, 1.014, 1.016, 1.016, 1.016, 1.016, 1.016, 1.015, 1.015, 1.015, 1.014, 1.012, 1.011,
1.005, 1.009, 1.012, 1.014, 1.016, 1.017, 1.018, 1.018, 1.018, 1.018, 1.017, 1.016, 1.016, 1.015, 1.012, 1.011,
1.006, 1.011, 1.013, 1.015, 1.017, 1.018, 1.018, 1.019, 1.019, 1.019, 1.018, 1.017, 1.016, 1.015, 1.012, 1.011,
1.007, 1.011, 1.013, 1.015, 1.017, 1.018, 1.019, 1.019, 1.019, 1.019, 1.019, 1.018, 1.017, 1.016, 1.013, 1.011,
1.007, 1.012, 1.013, 1.015, 1.017, 1.018, 1.019, 1.019, 1.019, 1.019, 1.019, 1.018, 1.018, 1.017, 1.014, 1.013,
1.007, 1.012, 1.013, 1.015, 1.016, 1.018, 1.019, 1.019, 1.019, 1.019, 1.019, 1.018, 1.018, 1.017, 1.015, 1.014,
1.007, 1.011, 1.012, 1.014, 1.016, 1.017, 1.018, 1.018, 1.019, 1.019, 1.019, 1.018, 1.018, 1.018, 1.016, 1.015,
1.007, 1.011, 1.012, 1.013, 1.015, 1.016, 1.017, 1.017, 1.018, 1.018, 1.018, 1.018, 1.018, 1.017, 1.016, 1.015,
1.006, 1.009, 1.012, 1.013, 1.014, 1.015, 1.015, 1.016, 1.017, 1.017, 1.017, 1.017, 1.017, 1.017, 1.017, 1.016,
1.005, 1.009, 1.012, 1.013, 1.015, 1.015, 1.015, 1.015, 1.016, 1.017, 1.017, 1.017, 1.017, 1.017, 1.017, 1.017
]
}
],
"luminance_lut":
[
1.223, 1.187, 1.129, 1.085, 1.061, 1.049, 1.046, 1.046, 1.046, 1.051, 1.061, 1.089, 1.134, 1.212, 1.359, 1.367,
1.188, 1.141, 1.098, 1.065, 1.048, 1.037, 1.029, 1.029, 1.034, 1.036, 1.046, 1.066, 1.095, 1.158, 1.269, 1.359,
1.158, 1.109, 1.073, 1.049, 1.035, 1.025, 1.019, 1.016, 1.017, 1.022, 1.033, 1.047, 1.072, 1.127, 1.219, 1.269,
1.147, 1.092, 1.058, 1.039, 1.026, 1.017, 1.011, 1.007, 1.009, 1.015, 1.022, 1.035, 1.058, 1.107, 1.191, 1.236,
1.144, 1.082, 1.051, 1.033, 1.021, 1.011, 1.005, 1.002, 1.004, 1.009, 1.017, 1.031, 1.051, 1.097, 1.177, 1.232,
1.144, 1.081, 1.049, 1.031, 1.018, 1.008, 1.002, 1.001, 1.001, 1.006, 1.015, 1.029, 1.048, 1.096, 1.177, 1.232,
1.144, 1.084, 1.051, 1.032, 1.018, 1.009, 1.004, 1.001, 1.002, 1.009, 1.016, 1.029, 1.051, 1.098, 1.183, 1.232,
1.149, 1.096, 1.057, 1.037, 1.022, 1.014, 1.008, 1.005, 1.007, 1.012, 1.019, 1.033, 1.059, 1.113, 1.205, 1.248,
1.166, 1.117, 1.071, 1.046, 1.031, 1.021, 1.014, 1.012, 1.014, 1.019, 1.029, 1.045, 1.078, 1.141, 1.247, 1.314,
1.202, 1.151, 1.096, 1.061, 1.044, 1.031, 1.023, 1.021, 1.022, 1.029, 1.044, 1.067, 1.109, 1.182, 1.314, 1.424,
1.242, 1.202, 1.134, 1.088, 1.061, 1.045, 1.038, 1.036, 1.039, 1.048, 1.066, 1.103, 1.157, 1.248, 1.424, 1.532,
1.318, 1.238, 1.162, 1.111, 1.078, 1.059, 1.048, 1.048, 1.049, 1.063, 1.089, 1.133, 1.189, 1.296, 1.532, 1.606
],
"sigma": 0.00175,
"sigma_Cb": 0.00268
 }
 },
 {
@@ -259,48 +404,138 @@
     {
         "ccms": [
             {
-                "ct": 2213,
+                "ct": 2500,
                 "ccm":
                 [
-                    1.91264, -0.27609, -0.63655,
-                    -0.65708, 2.11718, -0.46009,
-                    0.03629, -1.38441, 2.34811
+                    1.82257, -0.40941, -0.41316,
+                    -0.52091, 1.83005, -0.30915,
+                    0.22503, -1.41259, 2.18757
                 ]
             },
             {
-                "ct": 2255,
+                "ct": 2820,
                 "ccm":
                 [
-                    1.90369, -0.29309, -0.61059,
-                    -0.64693, 2.08169, -0.43476,
-                    0.04086, -1.29999, 2.25914
+                    1.80564, -0.47587, -0.32977,
+                    -0.47385, 1.83075, -0.35691,
+                    0.21369, -1.22609, 2.01239
                 ]
             },
             {
-                "ct": 2259,
+                "ct": 2830,
                 "ccm":
                 [
-                    1.92762, -0.35134, -0.57628,
-                    -0.63523, 2.08481, -0.44958,
-                    0.06754, -1.32953, 2.26199
+                    1.80057, -0.51479, -0.28578,
+                    -0.64031, 2.16074, -0.52044,
+                    0.11794, -0.95667, 1.83873
                 ]
             },
             {
-                "ct": 5313,
+                "ct": 2885,
                 "ccm":
                 [
-                    1.75924, -0.54053, -0.21871,
-                    -0.38159, 1.88671, -0.50511,
-                    -0.00747, -0.53492, 1.54239
+                    1.78452, -0.49769, -0.28683,
+                    -0.63651, 2.13634, -0.49983,
+                    0.08547, -0.86501, 1.77954
                 ]
             },
             {
-                "ct": 6237,
+                "ct": 3601,
                 "ccm":
                 [
-                    2.19299, -0.74764, -0.44536,
-                    -0.51678, 2.27651, -0.75972,
-                    -0.06498, -0.74269, 1.80767
+                    1.85165, -0.57008, -0.28156,
+                    -0.56249, 2.08321, -0.52072,
+                    0.03724, -0.70964, 1.67239
                 ]
             },
+            {
+                "ct": 3615,
+                "ccm":
+                [
+                    1.87611, -0.60772, -0.26839,
+                    -0.55497, 2.07257, -0.51761,
+                    0.04151, -0.70635, 1.66485
+                ]
+            },
+            {
+                "ct": 3622,
+                "ccm":
+                [
+                    1.85505, -0.58542, -0.26963,
+                    -0.55053, 2.05981, -0.50928,
+                    0.04005, -0.69302, 1.65297
+                ]
+            },
+            {
+                "ct": 4345,
+                "ccm":
+                [
+                    1.81872, -0.57511, -0.24361,
+                    -0.49071, 2.16621, -0.67551,
+                    0.02641, -0.67838, 1.65196
+                ]
+            },
+            {
+                "ct": 4410,
+                "ccm":
+                [
+                    1.83689, -0.60178, -0.23512,
+                    -0.48204, 2.14729, -0.66525,
+                    0.02773, -0.67615, 1.64841
+                ]
+            },
+            {
+                "ct": 4486,
+                "ccm":
+                [
+                    1.85101, -0.60733, -0.24368,
+                    -0.47635, 2.13101, -0.65465,
+                    0.02229, -0.66412, 1.64183
+                ]
+            },
+            {
+                "ct": 4576,
+                "ccm":
+                [
+                    1.84076, -0.59449, -0.24626,
+                    -0.47307, 2.13369, -0.66062,
+                    0.01984, -0.65788, 1.63804
+                ]
+            },
+            {
+                "ct": 5657,
+                "ccm":
+                [
+                    1.84536, -0.57827, -0.26709,
+                    -0.44532, 2.04086, -0.59554,
+                    -0.01738, -0.52806, 1.54544
+                ]
+            },
+            {
+                "ct": 5672,
+                "ccm":
+                [
+                    1.84251, -0.57486, -0.26765,
+                    -0.44925, 2.04615, -0.59689,
+                    -0.03179, -0.51748, 1.54928
+                ]
+            },
+            {
+                "ct": 5710,
+                "ccm":
+                [
+                    1.84081, -0.58127, -0.25953,
+                    -0.44169, 2.03593, -0.59424,
+                    -0.02503, -0.52696, 1.55199
+                ]
+            },
+            {
+                "ct": 6850,
+                "ccm":
+                [
+                    1.80426, -0.22567, -0.57859,
+                    -0.48629, 2.49024, -1.00395,
+                    -0.10865, -0.63841, 1.74705
                 ]
             }
         ]

View file

@@ -638,11 +638,27 @@
             "step_coarse": 1.0,
             "step_fine": 0.25,
             "contrast_ratio": 0.75,
+            "retrigger_ratio": 0.8,
+            "retrigger_delay": 10,
+            "pdaf_gain": -0.016,
+            "pdaf_squelch": 0.125,
+            "max_slew": 1.5,
+            "pdaf_frames": 20,
+            "dropout_frames": 6,
+            "step_frames": 5
+        },
+        "fast":
+        {
+            "step_coarse": 1.25,
+            "step_fine": 0.0,
+            "contrast_ratio": 0.75,
+            "retrigger_ratio": 0.8,
+            "retrigger_delay": 8,
             "pdaf_gain": -0.02,
             "pdaf_squelch": 0.125,
             "max_slew": 2.0,
-            "pdaf_frames": 20,
-            "dropout_frames": 6,
+            "pdaf_frames": 16,
+            "dropout_frames": 4,
             "step_frames": 4
         }
     },
@@ -650,6 +666,7 @@
             "conf_thresh": 16,
             "conf_clip": 512,
             "skip_frames": 5,
+            "check_for_ir": false,
             "map": [ 0.0, 445, 15.0, 925 ]
         }
     },
@@ -668,4 +685,4 @@
         }
     }
 ]
 }

View file

@@ -737,11 +737,27 @@
             "step_coarse": 1.0,
             "step_fine": 0.25,
             "contrast_ratio": 0.75,
+            "retrigger_ratio": 0.8,
+            "retrigger_delay": 10,
+            "pdaf_gain": -0.016,
+            "pdaf_squelch": 0.125,
+            "max_slew": 1.5,
+            "pdaf_frames": 20,
+            "dropout_frames": 6,
+            "step_frames": 5
+        },
+        "fast":
+        {
+            "step_coarse": 1.25,
+            "step_fine": 0.0,
+            "contrast_ratio": 0.75,
+            "retrigger_ratio": 0.8,
+            "retrigger_delay": 8,
             "pdaf_gain": -0.02,
             "pdaf_squelch": 0.125,
             "max_slew": 2.0,
-            "pdaf_frames": 20,
-            "dropout_frames": 6,
+            "pdaf_frames": 16,
+            "dropout_frames": 4,
             "step_frames": 4
         }
     },
@@ -749,6 +765,7 @@
             "conf_thresh": 16,
             "conf_clip": 512,
             "skip_frames": 5,
+            "check_for_ir": true,
             "map": [ 0.0, 445, 15.0, 925 ]
         }
     },
@@ -767,4 +784,4 @@
         }
     }
 ]
 }

View file

@@ -637,23 +637,27 @@
             "step_coarse": 2.0,
             "step_fine": 0.5,
             "contrast_ratio": 0.75,
+            "retrigger_ratio": 0.8,
+            "retrigger_delay": 10,
             "pdaf_gain": -0.03,
             "pdaf_squelch": 0.2,
-            "max_slew": 4.0,
+            "max_slew": 3.0,
             "pdaf_frames": 20,
             "dropout_frames": 6,
-            "step_frames": 4
+            "step_frames": 5
         },
         "fast":
         {
-            "step_coarse": 2.0,
-            "step_fine": 0.5,
+            "step_coarse": 2.5,
+            "step_fine": 0.0,
             "contrast_ratio": 0.75,
+            "retrigger_ratio": 0.8,
+            "retrigger_delay": 8,
             "pdaf_gain": -0.05,
             "pdaf_squelch": 0.2,
-            "max_slew": 5.0,
+            "max_slew": 4.0,
             "pdaf_frames": 16,
-            "dropout_frames": 6,
+            "dropout_frames": 4,
             "step_frames": 4
         }
     },
@@ -661,6 +665,7 @@
             "conf_thresh": 12,
             "conf_clip": 512,
             "skip_frames": 5,
+            "check_for_ir": false,
             "map": [ 0.0, 420, 35.0, 920 ]
         }
     },
@@ -679,4 +684,4 @@
         }
     }
 ]
 }

View file

@@ -628,23 +628,27 @@
             "step_coarse": 2.0,
             "step_fine": 0.5,
             "contrast_ratio": 0.75,
+            "retrigger_ratio": 0.8,
+            "retrigger_delay": 10,
             "pdaf_gain": -0.03,
             "pdaf_squelch": 0.2,
-            "max_slew": 4.0,
+            "max_slew": 3.0,
             "pdaf_frames": 20,
             "dropout_frames": 6,
-            "step_frames": 4
+            "step_frames": 5
         },
         "fast":
         {
-            "step_coarse": 2.0,
-            "step_fine": 0.5,
+            "step_coarse": 2.5,
+            "step_fine": 0.0,
             "contrast_ratio": 0.75,
+            "retrigger_ratio": 0.8,
+            "retrigger_delay": 8,
             "pdaf_gain": -0.05,
             "pdaf_squelch": 0.2,
-            "max_slew": 5.0,
+            "max_slew": 4.0,
             "pdaf_frames": 16,
-            "dropout_frames": 6,
+            "dropout_frames": 4,
             "step_frames": 4
         }
     },
@@ -652,6 +656,7 @@
             "conf_thresh": 12,
             "conf_clip": 512,
             "skip_frames": 5,
+            "check_for_ir": true,
             "map": [ 0.0, 420, 35.0, 920 ]
         }
     },
@@ -670,4 +675,4 @@
         }
     }
 ]
 }

View file

@@ -40,6 +40,7 @@ void Awb::prepare(IPAContext &context,
 		  [[maybe_unused]] DebayerParams *params)
 {
 	auto &gains = context.activeState.awb.gains;
+	/* Just report, the gains are applied in LUT algorithm. */
 	frameContext.gains.red = gains.r();
 	frameContext.gains.blue = gains.b();
 }

View file

@@ -3,7 +3,7 @@
  * Copyright (C) 2024, Ideas On Board
  * Copyright (C) 2024-2025, Red Hat Inc.
  *
- * Color correction matrix
+ * Color correction matrix + saturation
  */

 #include "ccm.h"
@@ -13,6 +13,8 @@
 #include <libcamera/control_ids.h>

+#include "libcamera/internal/matrix.h"
+
 namespace {

 constexpr unsigned int kTemperatureThreshold = 100;
@@ -35,28 +37,77 @@ int Ccm::init([[maybe_unused]] IPAContext &context, const YamlObject &tuningData
 	}

 	context.ccmEnabled = true;
+	context.ctrlMap[&controls::Saturation] = ControlInfo(0.0f, 2.0f, 1.0f);

 	return 0;
 }

+int Ccm::configure(IPAContext &context,
+		   [[maybe_unused]] const IPAConfigInfo &configInfo)
+{
+	context.activeState.knobs.saturation = std::optional<double>();
+
+	return 0;
+}
+
+void Ccm::queueRequest(typename Module::Context &context,
+		       [[maybe_unused]] const uint32_t frame,
+		       [[maybe_unused]] typename Module::FrameContext &frameContext,
+		       const ControlList &controls)
+{
+	const auto &saturation = controls.get(controls::Saturation);
+	if (saturation.has_value()) {
+		context.activeState.knobs.saturation = saturation;
+		LOG(IPASoftCcm, Debug) << "Setting saturation to " << saturation.value();
+	}
+}
+
+void Ccm::applySaturation(Matrix<float, 3, 3> &ccm, float saturation)
+{
+	/* https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion */
+	const Matrix<float, 3, 3> rgb2ycbcr{
+		{ 0.256788235294, 0.504129411765, 0.0979058823529,
+		  -0.148223529412, -0.290992156863, 0.439215686275,
+		  0.439215686275, -0.367788235294, -0.0714274509804 }
+	};
+	const Matrix<float, 3, 3> ycbcr2rgb{
+		{ 1.16438356164, 0, 1.59602678571,
+		  1.16438356164, -0.391762290094, -0.812967647235,
+		  1.16438356164, 2.01723214285, 0 }
+	};
+	const Matrix<float, 3, 3> saturationMatrix{
+		{ 1, 0, 0,
+		  0, saturation, 0,
+		  0, 0, saturation }
+	};
+
+	ccm = ycbcr2rgb * saturationMatrix * rgb2ycbcr * ccm;
+}
+
 void Ccm::prepare(IPAContext &context, const uint32_t frame,
 		  IPAFrameContext &frameContext, [[maybe_unused]] DebayerParams *params)
 {
+	auto &saturation = context.activeState.knobs.saturation;
+
 	const unsigned int ct = context.activeState.awb.temperatureK;

-	/* Change CCM only on bigger temperature changes. */
+	/* Change CCM only on saturation or bigger temperature changes. */
 	if (frame > 0 &&
-	    utils::abs_diff(ct, lastCt_) < kTemperatureThreshold) {
+	    utils::abs_diff(ct, lastCt_) < kTemperatureThreshold &&
+	    saturation == lastSaturation_) {
 		frameContext.ccm.ccm = context.activeState.ccm.ccm;
 		context.activeState.ccm.changed = false;
 		return;
 	}

 	lastCt_ = ct;
+	lastSaturation_ = saturation;
 	Matrix<float, 3, 3> ccm = ccm_.getInterpolated(ct);
+	if (saturation)
+		applySaturation(ccm, saturation.value());

 	context.activeState.ccm.ccm = ccm;
 	frameContext.ccm.ccm = ccm;
+	frameContext.saturation = saturation;
 	context.activeState.ccm.changed = true;
 }

@@ -67,6 +118,9 @@ void Ccm::process([[maybe_unused]] IPAContext &context,
 		  ControlList &metadata)
 {
 	metadata.set(controls::ColourCorrectionMatrix, frameContext.ccm.ccm.data());
+
+	const auto &saturation = frameContext.saturation;
+	metadata.set(controls::Saturation, saturation.value_or(1.0));
 }

 REGISTER_IPA_ALGORITHM(Ccm, "Ccm")
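The construction in applySaturation() can be checked numerically: wrapping a chroma-scaling diagonal between the BT.601 RGB-to-YCbCr and YCbCr-to-RGB matrices from the patch yields the identity at saturation 1.0, and at 0.0 every row collapses to the luma weights (0.299, 0.587, 0.114), i.e. grayscale. A standalone sketch, where `Mat3`, `mul()` and `saturationMatrix()` are local helpers rather than the libcamera `Matrix` API:

```cpp
#include <array>
#include <cassert>
#include <cmath>

/* Local 3x3 helpers, not the libcamera Matrix class. */
using Mat3 = std::array<double, 9>;

Mat3 mul(const Mat3 &a, const Mat3 &b)
{
	Mat3 r{};
	for (int i = 0; i < 3; i++)
		for (int j = 0; j < 3; j++)
			for (int k = 0; k < 3; k++)
				r[i * 3 + j] += a[i * 3 + k] * b[k * 3 + j];
	return r;
}

/* Compose ycbcr2rgb * diag(1, s, s) * rgb2ycbcr, as applySaturation() does. */
Mat3 saturationMatrix(double saturation)
{
	const Mat3 rgb2ycbcr = { 0.256788235294, 0.504129411765, 0.0979058823529,
				 -0.148223529412, -0.290992156863, 0.439215686275,
				 0.439215686275, -0.367788235294, -0.0714274509804 };
	const Mat3 ycbcr2rgb = { 1.16438356164, 0.0, 1.59602678571,
				 1.16438356164, -0.391762290094, -0.812967647235,
				 1.16438356164, 2.01723214285, 0.0 };
	const Mat3 sat = { 1.0, 0.0, 0.0,
			   0.0, saturation, 0.0,
			   0.0, 0.0, saturation };
	return mul(mul(ycbcr2rgb, sat), rgb2ycbcr);
}
```

Folding this product into the CCM means the whole chain still costs a single matrix per pixel.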

View file

@@ -7,6 +7,8 @@
 #pragma once

+#include <optional>
+
 #include "libcamera/internal/matrix.h"

 #include <libipa/interpolator.h>
@@ -24,6 +26,12 @@ public:
 	~Ccm() = default;

 	int init(IPAContext &context, const YamlObject &tuningData) override;
+	int configure(IPAContext &context,
+		      const IPAConfigInfo &configInfo) override;
+	void queueRequest(typename Module::Context &context,
+			  const uint32_t frame,
+			  typename Module::FrameContext &frameContext,
+			  const ControlList &controls) override;
 	void prepare(IPAContext &context,
 		     const uint32_t frame,
 		     IPAFrameContext &frameContext,
@@ -34,7 +42,10 @@ public:
 		     ControlList &metadata) override;

 private:
+	void applySaturation(Matrix<float, 3, 3> &ccm, float saturation);
+
 	unsigned int lastCt_;
+	std::optional<float> lastSaturation_;
 	Interpolator<Matrix<float, 3, 3>> ccm_;
 };

View file

@@ -122,7 +122,7 @@ void Lut::prepare(IPAContext &context,
 	Matrix<float, 3, 3> gainCcm = { { gains.r(), 0, 0,
 					  0, gains.g(), 0,
 					  0, 0, gains.b() } };
-	auto ccm = gainCcm * context.activeState.ccm.ccm;
+	auto ccm = context.activeState.ccm.ccm * gainCcm;
 	auto &red = params->redCcm;
 	auto &green = params->greenCcm;
 	auto &blue = params->blueCcm;
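The one-line Lut change is an operand-order fix: matrix multiplication is not commutative, and with column vectors (A * B) v = A (B v), so `ccm * gainCcm` applies the white-balance gains to the sensor RGB values first and colour correction second. A toy check with plain 3x3 helpers and made-up values (not real tuning data):

```cpp
#include <array>
#include <cassert>
#include <cmath>

/* Toy 3x3 matrix/vector helpers; all numbers below are illustrative only. */
using Mat3 = std::array<double, 9>;
using Vec3 = std::array<double, 3>;

Vec3 apply(const Mat3 &m, const Vec3 &v)
{
	Vec3 r{};
	for (int i = 0; i < 3; i++)
		for (int k = 0; k < 3; k++)
			r[i] += m[i * 3 + k] * v[k];
	return r;
}

Mat3 mul(const Mat3 &a, const Mat3 &b)
{
	Mat3 r{};
	for (int i = 0; i < 3; i++)
		for (int j = 0; j < 3; j++)
			for (int k = 0; k < 3; k++)
				r[i * 3 + j] += a[i * 3 + k] * b[k * 3 + j];
	return r;
}
```

Here `apply(mul(ccm, gain), v)` matches `apply(ccm, apply(gain, v))` (gains first), while `mul(gain, ccm)` produces a different pixel, which was the bug.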

View file

@@ -63,6 +63,7 @@ struct IPAActiveState {
 	struct {
 		/* 0..2 range, 1.0 = normal */
 		std::optional<double> contrast;
+		std::optional<float> saturation;
 	} knobs;
 };

@@ -75,11 +76,14 @@ struct IPAFrameContext : public FrameContext {
 		int32_t exposure;
 		double gain;
 	} sensor;
+
 	struct {
 		double red;
 		double blue;
 	} gains;
+
 	std::optional<double> contrast;
+	std::optional<float> saturation;
 };

 struct IPAContext {

View file

@@ -690,8 +690,9 @@ LogSeverity Logger::parseLogLevel(std::string_view level)
 	unsigned int severity = LogInvalid;

 	if (std::isdigit(level[0])) {
-		auto [end, ec] = std::from_chars(level.data(), level.data() + level.size(), severity);
-		if (ec != std::errc() || *end != '\0' || severity > LogFatal)
+		const char *levelEnd = level.data() + level.size();
+		auto [end, ec] = std::from_chars(level.data(), levelEnd, severity);
+		if (ec != std::errc() || end != levelEnd || severity > LogFatal)
 			severity = LogInvalid;
 	} else {
 		for (unsigned int i = 0; i < std::size(names); ++i) {
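The logger fix matters because a std::string_view is not guaranteed to be null-terminated: dereferencing the `end` pointer returned by std::from_chars (the old `*end != '\0'` test) can read past the buffer. Comparing the pointer against the view's known end is the safe way to require that the whole input was consumed. A minimal sketch, where `parseNumericLevel()` is a hypothetical stand-in rather than the libcamera function:

```cpp
#include <cassert>
#include <charconv>
#include <string_view>

/* Sentinel for "not a valid level", standing in for LogInvalid. */
constexpr unsigned int kInvalid = ~0U;

/* Parse a fully-numeric level from a (possibly non-null-terminated) view. */
unsigned int parseNumericLevel(std::string_view level, unsigned int maxLevel)
{
	unsigned int severity = kInvalid;
	const char *levelEnd = level.data() + level.size();
	auto [end, ec] = std::from_chars(level.data(), levelEnd, severity);
	/* Reject parse errors, trailing characters and out-of-range values. */
	if (ec != std::errc() || end != levelEnd || severity > maxLevel)
		severity = kInvalid;
	return severity;
}
```

Checking `end != levelEnd` rejects inputs such as "3x" without ever reading beyond the view.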

View file

@@ -488,7 +488,7 @@ std::size_t CameraConfiguration::size() const
  *
  * \return A CameraConfiguration::Status value that describes the validation
  * status.
- * \retval CameraConfigutation::Adjusted The configuration has been adjusted
+ * \retval CameraConfiguration::Adjusted The configuration has been adjusted
  * and is now valid. The color space of some or all of the streams may have
  * been changed. The caller shall check the color spaces carefully.
  * \retval CameraConfiguration::Valid The configuration was already valid and

View file

@ -0,0 +1,230 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2024, Raspberry Pi Ltd
*
* Clock recovery algorithm
*/
#include "libcamera/internal/clock_recovery.h"
#include <time.h>
#include <libcamera/base/log.h>
/**
* \file clock_recovery.h
* \brief Clock recovery - deriving one clock from another independent clock
*/
namespace libcamera {
LOG_DEFINE_CATEGORY(ClockRec)
/**
* \class ClockRecovery
* \brief Recover an output clock from an input clock
*
* The ClockRecovery class derives an output clock from an input clock,
* modelling the output clock as being linearly related to the input clock.
* For example, we may use it to derive wall clock timestamps from timestamps
* measured by the internal system clock which counts local time since boot.
*
* When pairs of corresponding input and output timestamps are available,
* they should be submitted to the model with addSample(). The model will
* update, and output clock values for known input clock values can be
* obtained using getOutput().
*
* As a convenience, if the input clock is indeed the time since boot, and the
* output clock represents a real wallclock time, then addSample() can be
* called with no arguments, and a pair of timestamps will be captured at
* that moment.
*
* The configure() function accepts some configuration parameters to control
* the linear fitting process.
*/
/**
* \brief Construct a ClockRecovery
*/
ClockRecovery::ClockRecovery()
{
configure();
reset();
}
/**
* \brief Set configuration parameters
* \param[in] numSamples The approximate duration for which the state of the model
* is persistent
* \param[in] maxJitter New output samples are clamped to no more than this
* amount of jitter, to prevent sudden swings from having a large effect
* \param[in] minSamples The fitted clock model is not used to generate outputs
* until this many samples have been received
* \param[in] errorThreshold If the accumulated differences between input and
* output clocks reaches this amount over a few frames, the model is reset
*/
void ClockRecovery::configure(unsigned int numSamples, unsigned int maxJitter,
unsigned int minSamples, unsigned int errorThreshold)
{
LOG(ClockRec, Debug)
<< "configure " << numSamples << " " << maxJitter << " " << minSamples << " " << errorThreshold;
numSamples_ = numSamples;
maxJitter_ = maxJitter;
minSamples_ = minSamples;
errorThreshold_ = errorThreshold;
}
/**
* \brief Reset the clock recovery model and start again from scratch
*/
void ClockRecovery::reset()
{
LOG(ClockRec, Debug) << "reset";
lastInput_ = 0;
lastOutput_ = 0;
xAve_ = 0;
yAve_ = 0;
x2Ave_ = 0;
xyAve_ = 0;
count_ = 0;
error_ = 0.0;
/*
* Setting slope_ and offset_ to zero initially means that the clocks
* advance at exactly the same rate.
*/
slope_ = 0.0;
offset_ = 0.0;
}
/**
* \brief Add a sample point to the clock recovery model, for recovering a wall
* clock value from the internal system time since boot
*
* This is a convenience function to make it easy to derive a wall clock value
* (using the Linux CLOCK_REALTIME) from the time since the system started
* (measured by CLOCK_BOOTTIME).
*/
void ClockRecovery::addSample()
{
LOG(ClockRec, Debug) << "addSample";
struct timespec bootTime1;
struct timespec bootTime2;
struct timespec wallTime;
/* Get boot and wall clocks in microseconds. */
clock_gettime(CLOCK_BOOTTIME, &bootTime1);
clock_gettime(CLOCK_REALTIME, &wallTime);
clock_gettime(CLOCK_BOOTTIME, &bootTime2);
uint64_t boot1 = bootTime1.tv_sec * 1000000ULL + bootTime1.tv_nsec / 1000;
uint64_t boot2 = bootTime2.tv_sec * 1000000ULL + bootTime2.tv_nsec / 1000;
uint64_t boot = (boot1 + boot2) / 2;
uint64_t wall = wallTime.tv_sec * 1000000ULL + wallTime.tv_nsec / 1000;
addSample(boot, wall);
}
/**
* \brief Add a sample point to the clock recovery model, specifying the exact
* input and output clock values
* \param[in] input The input clock value
* \param[in] output The value of the output clock at the same moment, as far
* as possible, that the input clock was sampled
*
* This function should be used for corresponding clocks other than the Linux
* BOOTTIME and REALTIME clocks.
*/
void ClockRecovery::addSample(uint64_t input, uint64_t output)
{
LOG(ClockRec, Debug) << "addSample " << input << " " << output;
if (count_ == 0) {
inputBase_ = input;
outputBase_ = output;
}
/*
* We keep an eye on cumulative drift over the last several frames. If this exceeds a
* threshold, then probably the system clock has been updated and we're going to have to
* reset everything and start over.
*/
if (lastOutput_) {
int64_t inputDiff = getOutput(input) - getOutput(lastInput_);
int64_t outputDiff = output - lastOutput_;
error_ = error_ * 0.95 + (outputDiff - inputDiff);
if (std::abs(error_) > errorThreshold_) {
reset();
inputBase_ = input;
outputBase_ = output;
}
}
lastInput_ = input;
lastOutput_ = output;
/*
* Never let the new output value be more than maxJitter_ away from what
* we would have expected. This is just to reduce the effect of sudden
* large delays in the measured output.
*/
uint64_t expectedOutput = getOutput(input);
output = std::clamp(output, expectedOutput - maxJitter_, expectedOutput + maxJitter_);
/*
* We use x, y, x^2 and x*y sums to calculate the best fit line. Here we
* update them by pretending we have count_ samples at the previous fit,
* and now one new one. Gradually the effect of the older values gets
* lost. This is a very simple way of updating the fit (there are much
* more complicated ones!), but it works well enough. Using averages
* instead of sums makes the relative effect of old values and the new
* sample clearer.
*/
double x = static_cast<int64_t>(input - inputBase_);
double y = static_cast<int64_t>(output - outputBase_) - x;
unsigned int count1 = count_ + 1;
xAve_ = (count_ * xAve_ + x) / count1;
yAve_ = (count_ * yAve_ + y) / count1;
x2Ave_ = (count_ * x2Ave_ + x * x) / count1;
xyAve_ = (count_ * xyAve_ + x * y) / count1;
/*
* Don't update slope and offset until we've seen "enough" sample
* points. Note that the initial settings for slope_ and offset_
* ensures that the wallclock advances at the same rate as the realtime
* clock (but with their respective initial offsets).
*/
if (count_ > minSamples_) {
/* These are the standard equations for least squares linear regression. */
slope_ = (count1 * count1 * xyAve_ - count1 * xAve_ * count1 * yAve_) /
(count1 * count1 * x2Ave_ - count1 * xAve_ * count1 * xAve_);
offset_ = yAve_ - slope_ * xAve_;
}
/*
* Don't increase count_ above numSamples_, as this controls the long-term
* amount of the residual fit.
*/
if (count1 < numSamples_)
count_++;
}
/**
* \brief Calculate the output clock value according to the model from an input
* clock value
* \param[in] input The input clock value
*
* \return Output clock value
*/
uint64_t ClockRecovery::getOutput(uint64_t input)
{
double x = static_cast<int64_t>(input - inputBase_);
double y = slope_ * x + offset_;
uint64_t output = y + x + outputBase_;
LOG(ClockRec, Debug) << "getOutput " << input << " " << output;
return output;
}
} /* namespace libcamera */
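The slope and offset expressions in addSample() reduce to the textbook regression formulas over the running averages, since the count1 factors cancel: slope = (E[xy] - E[x]E[y]) / (E[x²] - E[x]²). A condensed sketch of the same update, where `Fitter` is a local stand-in for the ClockRecovery state and the jitter clamp, sample-count cap and error reset are omitted:

```cpp
#include <cassert>
#include <cmath>

/* Running least-squares line fit over incrementally updated averages. */
struct Fitter {
	double xAve = 0.0, yAve = 0.0, x2Ave = 0.0, xyAve = 0.0;
	unsigned int count = 0;
	double slope = 0.0, offset = 0.0;

	void add(double x, double y)
	{
		/* Fold the new sample into the running averages. */
		unsigned int count1 = count + 1;
		xAve = (count * xAve + x) / count1;
		yAve = (count * yAve + y) / count1;
		x2Ave = (count * x2Ave + x * x) / count1;
		xyAve = (count * xyAve + x * y) / count1;

		/* Standard least-squares slope/offset over the averages. */
		double denom = x2Ave - xAve * xAve;
		if (denom != 0.0) {
			slope = (xyAve - xAve * yAve) / denom;
			offset = yAve - slope * xAve;
		}
		count = count1;
	}
};
```

Feeding exactly linear samples recovers the line: after add(x, 0.001 * x + 5.0) for x = 0, 1000, ..., 99000, slope is 0.001 and offset 5.0 up to floating-point error.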

View file

@@ -212,7 +212,7 @@ controls:
       description: |
         Exposure time for the frame applied in the sensor device.

-        This value is specified in micro-seconds.
+        This value is specified in microseconds.

         This control will only take effect if ExposureTimeMode is Manual. If
         this control is set when ExposureTimeMode is Auto, the value will be
@@ -1268,4 +1268,20 @@ controls:
       description: |
         Enable or disable the debug metadata.

+  - FrameWallClock:
+      type: int64_t
+      direction: out
+      description: |
+        This timestamp corresponds to the same moment in time as the
+        SensorTimestamp, but is represented as a wall clock time as measured by
+        the CLOCK_REALTIME clock. Like SensorTimestamp, the timestamp value is
+        expressed in nanoseconds.
+
+        Being a wall clock measurement, it can be used to synchronise timing
+        across different devices.
+
+        \sa SensorTimestamp
+
+        The FrameWallClock control can only be returned in metadata.
+
 ...

View file

@ -71,4 +71,116 @@ controls:
        \sa StatsOutputEnable
- SyncMode:
type: int32_t
direction: in
description: |
Enable or disable camera synchronisation ("sync") mode.
When sync mode is enabled, a camera will synchronise frames temporally
with other cameras, either attached to the same device or a different
one. There should be one "server" device, which broadcasts timing
information to one or more "clients". Communication is one-way, from
server to clients only, and it is only clients that adjust their frame
timings to match the server.
Sync mode requires all cameras to be running at (as far as possible) the
same fixed framerate. Clients may continue to make adjustments to keep
their cameras synchronised with the server for the duration of the
session, though any updates after the initial ones should remain small.
\sa SyncReady
\sa SyncTimer
\sa SyncFrames
enum:
- name: SyncModeOff
value: 0
description: Disable sync mode.
- name: SyncModeServer
value: 1
description: |
Enable sync mode, act as server. The server broadcasts timing
messages to any clients that are listening, so that the clients can
synchronise their camera frames with the server's.
- name: SyncModeClient
value: 2
description: |
Enable sync mode, act as client. A client listens for any server
messages, and arranges for its camera frames to synchronise as
closely as possible with the server's. Many clients can listen out
for the same server. Clients can also be started ahead of any
servers, causing them merely to wait for the server to start.
- SyncReady:
type: bool
direction: out
description: |
When using the camera synchronisation algorithm, the server broadcasts
timing information to the clients. This also includes the time (some
number of frames in the future, called the "ready time") at which the
server will signal its controlling application, using this control, to
start using the image frames.
The client receives the "ready time" from the server, and will signal
its application to start using the frames at this same moment.
While this control value is false, applications (on both client and
server) should continue to wait, and not use the frames.
Once this value becomes true, it means that this is the first frame
where the server and its clients have agreed that they will both be
synchronised and that applications should begin consuming frames.
Thereafter, this control will continue to signal the value true for
the rest of the session.
\sa SyncMode
\sa SyncTimer
\sa SyncFrames
- SyncTimer:
type: int64_t
direction: out
description: |
This reports the amount of time, in microseconds, until the "ready
time", at which the server and client will signal their controlling
applications that the frames are now synchronised and should be
used. The value may be refined slightly over time, becoming more precise
as the "ready time" approaches.
Servers always report this value, whereas clients will omit this control
until they have received a message from the server that enables them to
calculate it.
Normally the value will start positive (the "ready time" is in the
future), and decrease towards zero, before becoming negative (the "ready
time" has elapsed). So there should be just one frame where the timer
value is, or is very close to, zero - the one for which the SyncReady
control becomes true. At this moment, the value indicates how closely
synchronised the client believes it is with the server.
But note that if frames are being dropped, then the "near zero" valued
frame, or indeed any other, could be skipped. In these cases the timer
value allows an application to deduce that this has happened.
\sa SyncMode
\sa SyncReady
\sa SyncFrames
- SyncFrames:
type: int32_t
direction: in
description: |
The number of frames the server should wait, after enabling
SyncModeServer, before signalling (via the SyncReady control) that
frames should be used. This therefore determines the "ready time" for
all synchronised cameras.
This control value should be set only for the device that is to act as
the server, before or at the same moment at which SyncModeServer is
enabled.
\sa SyncMode
\sa SyncReady
\sa SyncTimer
...

Some files were not shown because too many files have changed in this diff.