Compare commits

64 commits

Author SHA1 Message Date
2c5bda6f08 gstreamer: remove duplicated property
Signed-off-by: Vasiliy Doylov <nekocwd@mainlining.org>
2025-07-12 00:00:59 +00:00
f6d95130f7 gstreamer: fix crash on stream stop
Signed-off-by: Vasiliy Doylov <nekocwd@mainlining.org>
2025-07-12 00:00:59 +00:00
a106a43632 libcamera: software_isp: Add autofocus
Signed-off-by: Vasiliy Doylov <nekocwd@mainlining.org>
2025-07-12 00:00:59 +00:00
7fe3e610cd HACK: WIP: Clean queued request before assertion :D
Signed-off-by: Vasiliy Doylov <nekocwd@mainlining.org>
2025-07-12 00:00:59 +00:00
6e8d4d86d7 libcamera: software_isp: Add control to disable statistic collection
Signed-off-by: Vasiliy Doylov <nekocwd@mainlining.org>
2025-07-12 00:00:59 +00:00
d0bf6e7f88 libcamera: software_isp: Add manual exposure control
Signed-off-by: Vasiliy Doylov <nekocwd@mainlining.org>
2025-07-12 00:00:59 +00:00
7d19533589 libcamera: software_isp: Add focus control
Signed-off-by: Vasiliy Doylov <nekocwd@mainlining.org>
2025-07-12 00:00:59 +00:00
30e17d48c7 libcamera: software_isp: Add AGC disable control
Signed-off-by: Vasiliy Doylov <nekocwd@mainlining.org>
2025-07-12 00:00:59 +00:00
e3b7163254 libcamera: software_isp: Add brightness control
Signed-off-by: Vasiliy Doylov <nekocwd@mainlining.org>
2025-07-12 00:00:59 +00:00
f7bf4c8d4f CI: Add local forgejo CI
Signed-off-by: Vasiliy Doylov <nekocwd@mainlining.org>
2025-07-12 00:00:59 +00:00
98921d93d0 gitignore: ignore my setup
Signed-off-by: Vasiliy Doylov <nekocwd@mainlining.org>
2025-07-12 00:00:59 +00:00
Laurent Pinchart
afd9890b7b libcamera: delayed_controls: Inherit from Object class
A second use-after-free bug related to signals staying connected after
the receiver DelayedControls instance gets deleted has been found, this
time in the simple pipeline handler. Fix the issue once and for all by
making the DelayedControls class inherit from Object. This will
disconnect signals automatically upon deletion of the receiver.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Tested-by: Stanislaw Gruszka <stanislaw.gruszka@linux.intel.com>
Tested-by: Isaac Scott <isaac.scott@ideasonboard.com>
Reviewed-by: Isaac Scott <isaac.scott@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-11 12:25:46 +01:00
Umang Jain
fb72083975 camera: Fix spell error
Correct 'CameraConfigutation' spell error to 'CameraConfiguration'.

Signed-off-by: Umang Jain <uajain@igalia.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-08 14:10:14 +01:00
Naushir Patuck
29a88d85b7 libcamera: controls: Use nanoseconds units for FrameWallClock
Use nanoseconds for the FrameWallClock control to match the units for
other timestamp controls, including SensorTimestamp.

Update the RPi pipeline handlers to match the new nanoseconds units when
converting from SensorTimestamp to FrameWallClock.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-08 11:18:58 +01:00
Naushir Patuck
a437212753 libcamera: controls: Remove hyphenation in control description text
Remove the hyphenation in "micro-seconds" in the description for the
ExposureTime control to match the rest of the document.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-08 11:18:47 +01:00
Nick Hollinghurst
e6fb24ffdb ipa: rpi: Fix bug in AfState reporting
A previous change introduced a bug in which the algorithm reported AfStateIdle
when idle in Auto mode, when it should have continued to report the most
recent AF cycle's outcome (AfStateFocused or AfStateFailed).

Also fix the Pause method so it won't reset state to AfStateIdle
when paused in Continuous AF mode (to match documented behaviour).

Fixes: ea5f451c56 ("ipa: rpi: controller: AutoFocus bidirectional scanning")
Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Tested-by: David Plowman <david.plowman@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-08 11:18:17 +01:00
Harvey Yang
525325440b V4L2VideoDevice: Call FrameBuffer::Private::cancel() in streamOff()
At the moment `V4L2VideoDevice::streamOff()` sets
`FrameBuffer::Private`'s metadata directly, even though that is equivalent to
calling `FrameBuffer::Private::cancel()`. To ease code tracing, this
patch replaces the manual modification with the function call.

Signed-off-by: Harvey Yang <chenghaoyang@chromium.org>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <uajain@igalia.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-08 11:18:12 +01:00
Christian Rauch
17eed522e8 subprojects: libpisp: Update to 1.2.1
Update the libpisp wrap to use the latest 1.2.1 release which silences
an 'unused-parameter' warning.

Bug: https://github.com/raspberrypi/libpisp/pull/43
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Christian Rauch <Rauch.Christian@gmx.de>
Acked-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-07-08 13:05:54 +03:00
Nick Hollinghurst
619da07f73 ipa: rpi: Update IMX708 camera tuning files for AutoFocus changes
Explicitly add new parameters: "retrigger_ratio", "retrigger_delay",
"check_for_ir". Tweak other parameters to suit algorithm changes.
(Though existing tuning files should still work acceptably.)

Add AfSpeedFast parameters for the Raspberry Pi V3 standard lens.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:33 +01:00
Nick Hollinghurst
ea5f451c56 ipa: rpi: controller: AutoFocus bidirectional scanning
To reduce unnecessary lens movements, allow the CDAF-based
search procedure to start from either end of the range;
or if not near an end, from the current lens position.

This sometimes requires a second coarse scan, if the first
one started in the middle and did not find peak contrast.

Shorten the fine scan from 5 steps to 3 steps; allow fine scan
to be omitted altogether when "step_fine": 0 in the tuning file.

Move updateLensPosition() out of startProgrammedScan() to avoid
calling it more than once per iteration.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
686f88707c ipa: rpi: controller: Autofocus to use AWB statistics; re-trigger
Analyse AWB statistics: used both for scene change detection
and to detect IR lighting (when a flag is set in the tuning file).

Option to suppress PDAF altogether when IR lighting is detected.

Rather than being based solely on PDAF "dropout", allow a scan to
be (re-)triggered whenever the scene changes and then stabilizes,
based on contrast and average RGB statistics within the AF window.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
3d44987bc6 ipa: rpi: controller: AutoFocus tweak earlyTerminationByPhase()
Increase threshold for ETBP, from "confEpsilon" to "confThresh".
Correct sign test to take account of pdafGain sign (typically -ve).
Reduce allowed extrapolation range, but relax the check in the
case of Continuous AF, when we go back into the PDAF closed loop.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
429a5ab48f ipa: rpi: controller: Autofocus CAF/PDAF stability tweak
When in Continuous AF mode using PDAF, only move the lens when
phase has had the same sign for at least 4 frames. This reduces
lens wobble in e.g. noisy conditions.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
0fa2b05a86 ipa: rpi: controller: AutoFocus weighting tweak
In getPhase(), stop using different weights for sumWc and sumWcp.
This should improve linearity e.g. in earlyTerminationByPhase().
Phases are slightly larger but confidence values slightly reduced.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
a283287fbf ipa: rpi: controller: Improve findPeak() function in AF algorithm
Improve quadratic peak fitting in findPeak(). The old approximation
was good but only valid when points were equally spaced and the
MAX was not at one end of the series.

Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:32 +01:00
Nick Hollinghurst
30114cadd8 ipa: rpi: Defer initialising AF LensPosition ControlInfo and value
This fixes two small bugs:

We previously populated LensPosition's ControlInfo with hard-coded
values, ignoring the tuning file. Now we query the AfAlgorithm to
get limits (over all AF ranges) and default (for AfRangeNormal).

We previously sent a default position to the lens driver, even when
a user-specified starting position would follow. Defer doing this,
to reduce unnecessary lens movement at startup (for some drivers).

Bug: https://bugs.libcamera.org/show_bug.cgi?id=258
Signed-off-by: Nick Hollinghurst <nick.hollinghurst@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-07-03 10:24:26 +01:00
Barnabás Pőcze
6b5cc1c92a libcamera: pipeline: uvcvideo: Handle controls during startup
Process the control list passed to `Camera::start()`, and set
the V4L2 controls accordingly.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Umang Jain <uajain@igalia.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-07-02 10:26:41 +02:00
Naushir Patuck
5f94209b1d pipeline: rpi: Fix for enumerating the media graphs
When there are multiple entities between the sensor and CFE device (e.g.
a serialiser and deserialiser or multiple mux devices), the media graph
enumeration would work incorrectly and report that the frontend entity
was not found. This is because the found flag was stored locally in a
boolean and got lost in the recursion.

Fix this by explicitly tracking and returning the frontend found flag
through the return value of enumerateVideoDevices(). This ensures the
flag does not get lost through nested recursion.

This flag can also be used to fail a camera registration if the frontend
is not found.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Umang Jain <uajain@igalia.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-07-01 02:10:34 +03:00
Barnabás Pőcze
35ee8752b7 libcamera: pipeline: uvcvideo: Silently ignore AeEnable
The `AeEnable` control is handled in `Camera::queueRequest()` but it
still reaches the pipeline handler because a single element cannot be
removed from a `ControlList`. So ignore it silently.

Fixes: ffcecda4d5 ("libcamera: pipeline: uvcvideo: Report new AeEnable control as available")
Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-06-30 14:46:19 +02:00
Umang Jain
e9528306f2 camera_sensor: Expand on computeTransform() documentation
The description of computeTransform() for the case where the desired
orientation cannot be achieved can be expanded a bit further, to clearly
report that the orientation will be adjusted to the native camera sensor
mounting rotation.

Signed-off-by: Umang Jain <uajain@igalia.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-26 16:20:53 +03:00
Barnabás Pőcze
a29c53f6a6 meson: Use libyaml wrap file from wrapdb
Use the libyaml wrap file from the meson wrapdb instead of
creating the wrap file manually and using the cmake module.
This provides better integration with meson, such as the
`force_fallback_for` built-in option.

This is also needed because the upstream CMakeLists.txt is
out of date, failing with a sufficiently new cmake version:

    CMake Error at CMakeLists.txt:2 (cmake_minimum_required):
    Compatibility with CMake < 3.5 has been removed from CMake.

The above is nonetheless addressed by https://github.com/yaml/libyaml/pull/314,
but the project seems a bit inactive at the moment.

The wrap file was added using `meson wrap install libyaml`,
and it can be updated using `meson wrap update libyaml`.

`default_library=static` is used to match the behaviour of the
previously used cmake build. `werror=false` needs to be set
because libyaml does not compile without warnings, and that
would abort the build process otherwise.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-26 14:01:19 +02:00
Kieran Bingham
5f4d2ac935 libcamera: controls: Revert incorrect SPDX removal
In commit 6a09deaf7d ("controls: Add FrameWallClock control") the
existing SPDX was accidentally removed, likely from a rebase operation
at some point.

Unfortunately, as this patch had already collected Reviewed-by tags, the
surreptitious removal wasn't noticed until after it was merged.

Re-insert the existing SPDX and copyright banner as the header to the
control definitions file.

Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-26 12:59:03 +01:00
Stefan Klug
0dfb052fbd libcamera: base: Fix log level parsing when multiple categories are listed

For a list of log levels like LIBCAMERA_LOG_LEVELS="CatA:0,CatB:1" only
the severity of the last entry is correctly parsed.

Due to the change of level to a string_view in 24c2caa1c1 ("libcamera:
base: log: Use `std::string_view` to avoid some copies") the level is no
longer necessarily null terminated as it is a view on the original data.

Replace the check for a terminating null by a check for the end position
to fix the issue.

Fixes: 24c2caa1c1 ("libcamera: base: log: Use `std::string_view` to avoid some copies")
Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-23 16:40:43 +02:00
Stefan Klug
8ea3ef083f libcamera: test: Add a failing test for the log level parser
Log level parsing doesn't always work as expected.  Add a failing test
for that.

Signed-off-by: Stefan Klug <stefan.klug@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-23 16:39:47 +02:00
Laurent Pinchart
c19047dfdf gstreamer: Use std::exchange() instead of g_steal_pointer()
g_steal_pointer() only preserves the type since glib 2.68, requiring
casts on older versions. Use std::exchange() instead.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
2025-06-23 02:30:47 +03:00
Laurent Pinchart
02a3b436c4 ipa: rkisp1: Move Sharpness control creation to Filter algorithm
The Sharpness control is used solely by the Filter algorithm. Create it
there, to avoid exposing it to applications when the algorithm is
disabled.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Paul Elder <paul.elder@ideasonboard.com>
2025-06-23 02:30:47 +03:00
David Plowman
1537da7442 pipeline: rpi: Add wallclock timestamp support
A ClockRecovery object is added for derived classes to use, and
wallclock timestamps are copied into the request metadata for
applications.

Wallclock timestamps are derived corresponding to the sensor
timestamp, and made available to the base pipeline handler class and
to IPAs, for both vc4 and pisp platforms.

Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-19 11:12:26 +01:00
David Plowman
1d1ba78b45 controls: Add camera synchronisation controls for Raspberry Pi
New controls are added to control the camera "sync" algorithm, which
allows different cameras to synchronise their frames. For the time
being, the controls are Raspberry Pi specific, though this is expected
to change in future.

Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-19 11:12:26 +01:00
David Plowman
2a4e347dfe libcamera: Add ClockRecovery class to generate wallclock timestamps
The ClockRecovery class takes pairs of timestamps from two different
clocks, and models the second ("output") clock from the first ("input")
clock.

We can use it, in particular, to get a good wallclock estimate for a
frame's SensorTimestamp.

Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-19 11:12:26 +01:00
David Plowman
6a09deaf7d controls: Add FrameWallClock control
Add a FrameWallClock control that reports the same moment as the
frame's SensorTimestamp, but in wallclock units.

Signed-off-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-19 11:12:26 +01:00
Hou Qi
4a277906a4 gstreamer: Fix libcamerasrc responding latency before setting caps
Whenever a downstream element queries the latency, libcamerasrc always
replies, even when it has not yet determined the latency.

However, some downstream elements (e.g. glvideomixer/aggregator) query the
latency before libcamerasrc sets the caps. Once these elements get the
latency, they start caps negotiation. Since libcamerasrc has not yet
determined its caps, an invalid negotiation is performed and the workflow is
disrupted.

So, set the latency to GST_CLOCK_TIME_NONE during initialization, and reply
to the query only after libcamerasrc confirms the latency. By then,
libcamerasrc has also completed caps negotiation and downstream elements
work fine.

In addition, every time the src pad task stops, reset the latency to
GST_CLOCK_TIME_NONE to ensure that when the task next starts, the downstream
elements can generate output buffers after receiving the effective latency.

Signed-off-by: Hou Qi <qi.hou@nxp.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-19 01:50:38 +01:00
Barnabás Pőcze
b4c92a61bf ipa: rpi: Initialize enum controls with a list of values
This is how uvcvideo and rkisp1 do it. See ee918b370a
("ipa: rkisp1: agc: Initialize enum controls with a list of values")
for the motivation. In summary, having a list of values is used as a sign
that the control is an enum in multiple places (e.g. `cam`, `camshark`).

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-17 10:59:12 +02:00
Laurent Pinchart
b3ff75d758 gstreamer: Replace NULL with nullptr
Usage of NULL has slowly crept in the libcamerasrc sources. Replace it
with nullptr.

Reported-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:31 +03:00
Laurent Pinchart
a8f90517e0 gstreamer: Drop incorrect unref on caps
The caps object passed to the gst_libcamera_create_video_pool()
function is managed as a g_autoptr() in the caller. The function doesn't
acquire any new reference, so it shouldn't call gst_caps_unref(). Fix
it.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:29 +03:00
Laurent Pinchart
772b06bd8c gstreamer: Fix leak of GstQuery and GstBufferPool in error path
The gst_libcamera_create_video_pool() function leaks a GstQuery instance
and a GstBufferPool instance in an error path. Fix the leaks with
g_autoptr().

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
2025-06-17 01:01:26 +03:00
Laurent Pinchart
f7c4fcd301 gstreamer: Rename variable in gst_libcamera_create_video_pool()
Now that the code is isolated in a function, the video_pool variable in
gst_libcamera_create_video_pool() can be renamed to pool without
clashing with another local variable. Do so to reduce line length.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:23 +03:00
Laurent Pinchart
613202b809 gstreamer: Reduce indentation in gst_libcamera_create_video_pool()
Now that video pool creation is handled by a dedicated function, the
logic can be simplified by returning early instead of nesting scopes. Do
so to decrease indentation and improve readability, and document the
implementation of the function with comments.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:20 +03:00
Laurent Pinchart
3b68207789 gstreamer: Factor out video pool creation
The gst_libcamera_src_negotiate() function uses 5 indentation levels,
causing long lines. Move video pool creation to a separate function to
increase readability.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:17 +03:00
Laurent Pinchart
04e7823eb2 gstreamer: Document improvements when updating minimum GStreamer version
A const_cast<> was recently added to fix a compilation issue with older
GStreamer versions. Add a comment to indicate it can be removed when
bumping the minimum GStreamer version requirement. While at it, also
document a possible future improvement in the same function, and wrap
long lines.

Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
2025-06-17 01:01:14 +03:00
Antoine Bouyer
d3f3b95b64 pipeline: imx8-isi: Dynamically compute crossbar subdevice's first source
So far, the imx8-isi pipeline supports a _symmetrical_ crossbar, with the
same number of sink and source pads.

But for some other i.MX SoCs, such as the i.MX8QM or i.MX95, the crossbar is
no longer symmetrical.

Since each crossbar source is already captured as a pipes_ vector entry,
use the pipes_ vector's size to compute the first source index:

  "1st source index" = "total number of crossbar pads" - pipes_.count()

Signed-off-by: Antoine Bouyer <antoine.bouyer@nxp.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-17 00:44:05 +03:00
Antoine Bouyer
5621ac27a2 pipeline: imx8-isi: Fix match returned value in error case
The match() function returns a boolean type, while it could return int
in case of error when opening the capture file.

Fixes: 0ec982d210 ("libcamera: pipeline: Add IMX8 ISI pipeline")
Signed-off-by: Antoine Bouyer <antoine.bouyer@nxp.com>
Reviewed-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-17 00:19:54 +03:00
Antoine Bouyer
5c8de8a08e pipeline: imx8-isi: Cosmetic changes
Change indentation to pass checkstyle script.

Fixes: 680cde6005 ("libcamera: imx8-isi: Split Bayer/YUV config generation")
Signed-off-by: Antoine Bouyer <antoine.bouyer@nxp.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-17 00:19:53 +03:00
Barnabás Pőcze
b544ce1c19 apps: common: image: Fix assertion
`plane` must be strictly less than the vector's size,
it cannot be equal to it.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-16 10:57:54 +02:00
Naushir Patuck
8d2cd0b5b8 ipa: rpi: Rename dropFrameCount_ to invalidCount_
Rename dropFrameCount_ to invalidCount_ to better reflect its use as
frames are no longer dropped by the pipeline handler.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:55 +01:00
Naushir Patuck
a402f9ebc1 pipeline: rpi: Remove ispOutputCount_ and ispOutputTotal_
With the drop frame logic removed from the pipeline handler, these
member variables are not used, so remove them.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:55 +01:00
Naushir Patuck
98d144fef3 pipeline: rpi: Remove disable_startup_frame_drops config option
With the previous change to not drop frames in the pipeline handler,
the "disable_startup_frame_drops" pipeline config option is not used.
Remove it, and throw a warning if the option is present in the YAML
config file.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:55 +01:00
Naushir Patuck
6cf9c4d34f pipeline: ipa: rpi: Split RPiCameraData::dropFrameCount_
Split the pipeline handler drop frame tracking into startup frames and
invalid frames, as reported by the IPA.

Remove the drop buffer handling logic in the pipeline handler. Now all
image buffers are returned out with the appropriate FrameStatus set
for startup or invalid frames.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:54 +01:00
Naushir Patuck
b114c155a7 ipa: rpi: Replace dropFrameCount in the IPA -> PH interface
Replace the dropFrameCount parameter returned from ipa::start() to the
pipeline handler by startupFrameCount and invalidFrameCount. The former
counts the number of frames required for AWB/AGC to converge, and the
latter counts the number of invalid frames produced by the sensor when
starting up.

In the pipeline handler, use the sum of these 2 values to replicate the
existing dropFrameCount behaviour.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:54 +01:00
Naushir Patuck
c50eb1f04a libcamera: framebuffer: Add FrameMetadata::Status::FrameStartup
Add a new status enum, FrameStartup, used to denote that even though
the frame has been successfully captured, the IQ parameters set by the
IPA will make the frame unusable, and applications are advised not to
consume it. An example would be a cold start of the 3A algorithms, where
large oscillations occur while converging quickly to a stable state.

Additionally, update the definition of the FrameError state to cover the
case where the sensor is known to produce a number of invalid/error
frames after stream-on.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: David Plowman <david.plowman@raspberrypi.com>
Reviewed-by: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
Signed-off-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-12 17:26:54 +01:00
Barnabás Pőcze
8d168f3348 libcamera: process: Ensure that file descriptors are nonnegative
Return `-EINVAL` from `Process::start()` if any of the file descriptors
are negative as those most likely signal some kind of issue such as
missed error checking.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-09 15:26:11 +02:00
Barnabás Pőcze
fae2b506d7 libcamera: process: Return error if already running
Returning 0 when a running process is already managed can be confusing
since the parameters might be completely different, causing the caller
to mistakenly assume that the program it specified has been started.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-09 15:25:59 +02:00
Barnabás Pőcze
0a591eaf8c libcamera: process: Misc. cleanup around execv()
First, get the number of arguments once, and use that to determine the
size of the allocation instead of retrieving it twice.

Second, use `const_cast` instead of a C-style cast when calling `execv()`.

Third, use `size_t` to match the type of `args.size()`.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-09 15:25:22 +02:00
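The `const_cast` point stems from `execv()` taking `char *const argv[]` while the argument strings are naturally held as `const`. A sketch of the pattern under discussion (illustrative, not the exact libcamera code; `makeArgv` is a hypothetical helper):

```cpp
#include <cassert>
#include <string>
#include <vector>

/* Build a null-terminated argv array for execv() from C++ strings.
 * execv() takes char *const argv[], so the const has to be cast away;
 * const_cast documents the intent better than a C-style cast. */
std::vector<char *> makeArgv(const std::vector<std::string> &args)
{
	/* Query the number of arguments once and allocate up front
	 * (+1 for the terminating nullptr), as the commit suggests. */
	std::vector<char *> argv;
	argv.reserve(args.size() + 1);

	for (const std::string &arg : args)
		argv.push_back(const_cast<char *>(arg.c_str()));
	argv.push_back(nullptr);

	/* The caller would then run: execv(argv[0], argv.data()); */
	return argv;
}
```

Note the returned pointers alias the input strings, so `args` must outlive the `execv()` call.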
Barnabás Pőcze
081554db34 libcamera: process: Disable copy/move
A `Process` object has address identity because a pointer to it is
stored inside the `ProcessManager`. However, copy/move special
methods are still generated by the compiler. So disable them to
avoid potential issues and confusion.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
2025-06-09 15:25:18 +02:00
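The pattern being applied is the standard C++ one for objects with address identity: delete the copy and move special members. A sketch with an illustrative `Tracked` class standing in for `Process`/`ProcessManager`:

```cpp
#include <cassert>
#include <set>

class Tracked
{
public:
	Tracked() { registry().insert(this); }
	~Tracked() { registry().erase(this); }

	/* The object's address is stored in the registry, so copying or
	 * moving it would leave dangling or duplicate entries. Deleting
	 * these members makes misuse a compile-time error. */
	Tracked(const Tracked &) = delete;
	Tracked &operator=(const Tracked &) = delete;
	Tracked(Tracked &&) = delete;
	Tracked &operator=(Tracked &&) = delete;

	static std::set<Tracked *> &registry()
	{
		static std::set<Tracked *> set;
		return set;
	}
};
```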
Barnabás Pőcze
633063e099 android: camera_device: Do not pass nullptr to Request::addBuffer()
The default argument already takes care of passing no fence to
`addBuffer()`, so there is no reason to specify `nullptr` explicitly.

Signed-off-by: Barnabás Pőcze <barnabas.pocze@ideasonboard.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Kieran Bingham <kieran.bingham@ideasonboard.com>
2025-06-04 09:31:23 +02:00
74 changed files with 1898 additions and 451 deletions


@ -0,0 +1,58 @@
name: PostmarketOS Build
run-name: PostmarketOS Build
on:
push:
workflow_dispatch:
jobs:
prepare:
name: Prepare
runs-on: Pmbootstrap
outputs:
time: ${{ steps.time.outputs.time }}
steps:
- name: Set start Time
id: time
shell: sh
run: echo time=$(date +"%Y%m%d%H%M%S") >> $GITHUB_OUTPUT
- name: Update pmbootstrap
uses: actions/pmbootstrap-update@master
- name: Remove libcamera aport
run: rm -rf ${{env.PMB_PMAPORTS}}/temp/libcamera
build:
name: Build for ${{ matrix.info.arch }}
runs-on: Pmbootstrap
strategy:
matrix:
info:
- arch: x86_64
- arch: aarch64
needs: prepare
steps:
- name: Check out repository code
uses: actions/checkout@v4
- name: Build packages
id: build
uses: actions/pmbootstrap-build@main
with:
name: libcamera
aports: ${{github.workspace}}/package/alpine
arch: ${{ matrix.info.arch }}
src: ${{github.workspace}}
time: ${{ needs.prepare.outputs.time }}
- name: "Upload packages"
uses: actions/upload-alpine-package@main
with:
files: ${{steps.build.outputs.packages}}
secret: ${{secrets.PACKAGE_TOKEN}}
clean:
name: "Clean"
runs-on: Pmbootstrap
needs: build
if: always()
continue-on-error: true
steps:
- name: Update pmbootstrap
uses: actions/pmbootstrap-update@master


@ -0,0 +1,18 @@
name: Sync fork with upstream
run-name: Sync fork with upstream
on:
schedule:
- cron: "@daily"
workflow_dispatch:
jobs:
sync:
name: Sync
runs-on: Misc
steps:
- name: Sync repository with upstream
uses: actions/sync-with-mirror@main
with:
secret: ${{ secrets.PUSH_TOKEN }}
name: libcamera
branch: master

.gitignore vendored (+3)

@ -7,3 +7,6 @@
*.pyc
__pycache__/
venv/
.vscode/
.cache/
compile_commands.json


@ -26,6 +26,7 @@ struct FrameMetadata {
FrameSuccess,
FrameError,
FrameCancelled,
FrameStartup,
};
struct Plane {


@ -0,0 +1,68 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2024, Raspberry Pi Ltd
*
* Camera recovery algorithm
*/
#pragma once
#include <stdint.h>
namespace libcamera {
class ClockRecovery
{
public:
ClockRecovery();
void configure(unsigned int numSamples = 100, unsigned int maxJitter = 2000,
unsigned int minSamples = 10, unsigned int errorThreshold = 50000);
void reset();
void addSample();
void addSample(uint64_t input, uint64_t output);
uint64_t getOutput(uint64_t input);
private:
/* Approximate number of samples over which the model state persists. */
unsigned int numSamples_;
/* Remove any output jitter larger than this immediately. */
unsigned int maxJitter_;
/* Number of samples required before we start to use model estimates. */
unsigned int minSamples_;
/* Threshold above which we assume the wallclock has been reset. */
unsigned int errorThreshold_;
/* How many samples seen (up to numSamples_). */
unsigned int count_;
/* This gets subtracted from all input values, just to make the numbers easier. */
uint64_t inputBase_;
/* As above, for the output. */
uint64_t outputBase_;
/* The previous input sample. */
uint64_t lastInput_;
/* The previous output sample. */
uint64_t lastOutput_;
/* Average x value seen so far. */
double xAve_;
/* Average y value seen so far. */
double yAve_;
/* Average x^2 value seen so far. */
double x2Ave_;
/* Average x*y value seen so far. */
double xyAve_;
/*
* The latest estimate of linear parameters to derive the output clock
* from the input.
*/
double slope_;
double offset_;
/* Use this cumulative error to monitor for spontaneous clock updates. */
double error_;
};
} /* namespace libcamera */
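The private members of `ClockRecovery` suggest a running least-squares fit of output clock against input clock. A condensed sketch of the math implied by `xAve_`, `yAve_`, `x2Ave_`, `xyAve_`, `slope_` and `offset_` (an illustration only, not the actual libcamera implementation, which also ages out old samples and handles jitter):

```cpp
#include <cassert>

/* Running linear regression: output ≈ slope * input + offset. */
struct LinearFit {
	double xAve = 0, yAve = 0, x2Ave = 0, xyAve = 0;
	unsigned int count = 0;

	void addSample(double x, double y)
	{
		/* Exact running means: mean_n = mean_{n-1} + (x_n - mean_{n-1}) / n.
		 * The real code instead keeps roughly numSamples_ of history. */
		count++;
		double w = 1.0 / count;
		xAve += (x - xAve) * w;
		yAve += (y - yAve) * w;
		x2Ave += (x * x - x2Ave) * w;
		xyAve += (x * y - xyAve) * w;
	}

	double slope() const
	{
		double var = x2Ave - xAve * xAve;
		return var > 0.0 ? (xyAve - xAve * yAve) / var : 1.0;
	}

	double offset() const { return yAve - slope() * xAve; }

	double getOutput(double x) const { return slope() * x + offset(); }
};
```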


@ -10,13 +10,15 @@
#include <stdint.h>
#include <unordered_map>
#include <libcamera/base/object.h>
#include <libcamera/controls.h>
namespace libcamera {
class V4L2Device;
class DelayedControls
class DelayedControls : public Object
{
public:
struct ControlParams {


@ -11,6 +11,7 @@ libcamera_internal_headers = files([
'camera_manager.h',
'camera_sensor.h',
'camera_sensor_properties.h',
'clock_recovery.h',
'control_serializer.h',
'control_validator.h',
'converter.h',


@ -11,6 +11,7 @@
#include <string>
#include <vector>
#include <libcamera/base/class.h>
#include <libcamera/base/signal.h>
#include <libcamera/base/unique_fd.h>
@ -42,6 +43,8 @@ public:
Signal<enum ExitStatus, int> finished;
private:
LIBCAMERA_DISABLE_COPY_AND_MOVE(Process)
void closeAllFdsExcept(const std::vector<int> &fds);
int isolate();
void died(int wstatus);


@ -49,6 +49,15 @@ struct DebayerParams {
CcmLookupTable greenCcm;
CcmLookupTable blueCcm;
LookupTable gammaLut;
/*
* Statistics controls
*
* Statistics collection is very slow, so allow disabling it for actions
* like video capture or streaming.
* TODO: Add a statistics window control
*/
bool collect_stats;
};
} /* namespace libcamera */


@ -86,11 +86,11 @@ public:
Signal<FrameBuffer *> outputBufferReady;
Signal<uint32_t, uint32_t> ispStatsReady;
Signal<uint32_t, const ControlList &> metadataReady;
Signal<const ControlList &> setSensorControls;
Signal<const ControlList &, const ControlList &> setSensorControls;
private:
void saveIspParams();
void setSensorCtrls(const ControlList &sensorControls);
void setSensorCtrls(const ControlList &sensorControls, const ControlList &lensControls);
void statsReady(uint32_t frame, uint32_t bufferId);
void inputReady(FrameBuffer *input);
void outputReady(FrameBuffer *output);


@ -44,6 +44,10 @@ struct SwIspStats {
* \brief A histogram of luminance values
*/
Histogram yHistogram;
/**
* \brief Holds the sharpness of an image
*/
uint64_t sharpness;
};
} /* namespace libcamera */

View file

@ -52,7 +52,8 @@ struct ConfigResult {
struct StartResult {
libcamera.ControlList controls;
int32 dropFrameCount;
int32 startupFrameCount;
int32 invalidFrameCount;
};
struct PrepareParams {


@ -10,6 +10,7 @@ import "include/libcamera/ipa/core.mojom";
struct IPAConfigInfo {
libcamera.ControlInfoMap sensorControls;
libcamera.ControlInfoMap lensControls;
};
interface IPASoftInterface {
@ -32,7 +33,7 @@ interface IPASoftInterface {
};
interface IPASoftEventInterface {
setSensorControls(libcamera.ControlList sensorControls);
setSensorControls(libcamera.ControlList sensorControls, libcamera.ControlList lensControls);
setIspParams();
metadataReady(uint32 frame, libcamera.ControlList metadata);
};

package/alpine/APKBUILD (new file, +132)

@ -0,0 +1,132 @@
pkgname=libcamera
pkgver=9999999
pkgrel=0
pkgdesc="Linux camera framework"
url="https://libcamera.org/"
arch="all"
license="LGPL-2.1-or-later AND GPL-2.0-or-later"
depends_dev="
eudev-dev
glib-dev
gnutls-dev
gst-plugins-bad-dev
qt6-qtbase-dev
"
makedepends="$depends_dev
coreutils
doxygen
graphviz
gtest-dev
libevent-dev
libpisp-dev
libunwind-dev
libyuv-dev
linux-headers
meson
py3-jinja2
py3-ply
py3-sphinx
py3-yaml
qt6-qttools-dev
yaml-dev
"
subpackages="
$pkgname-dbg
$pkgname-dev
$pkgname-doc
qcam
$pkgname-gstreamer
$pkgname-v4l2
$pkgname-tools
"
source=""
builddir="$srcdir/$pkgname-v$_pkgver"
# gstreamer tests fail
# manual strip because ipa .sign files depend on the file contents - have to re-sign after stripping
options="!strip !check"
case "$CARCH" in
arm*|aarch64)
subpackages="$subpackages $pkgname-raspberrypi"
;;
esac
case "$CARCH" in
ppc64le|s390x|riscv64|loongarch64)
# doesn't install any ipa
;;
*)
# WIP: HACK? Don't depend on this
# depends="$pkgname-ipa=$pkgver-r$pkgrel"
subpackages="$subpackages $pkgname-ipa"
;;
esac
build() {
abuild-meson \
-Dtest=false \
-Dv4l2=true \
-Dwerror=false \
. output
meson compile -C output
}
package() {
DESTDIR="$pkgdir" meson install --no-rebuild -C output
# manual strip first..
scanelf --recursive \
--nobanner \
--etype "ET_DYN,ET_EXEC" \
--format "%F" \
"$pkgdir" \
| while read -r file; do
strip "$file"
done
}
ipa() {
depends=""
amove usr/lib/libcamera
# then sign ipa's
local ipa
for ipa in "$subpkgdir"/usr/lib/libcamera/ipa/ipa*.so; do
msg "signing $ipa"
"$builddir"/src/ipa/ipa-sign.sh \
"$(find "$builddir"/output -type f -iname "*ipa-priv-key.pem")" \
"$ipa" \
"$ipa".sign
done
}
qcam() {
depends=""
amove usr/bin/qcam
}
gstreamer() {
depends=""
amove usr/lib/gstreamer-1.0
}
v4l2() {
depends=""
amove usr/libexec/libcamera/v4l2-compat.so
}
raspberrypi() {
depends=""
amove usr/share/libcamera/ipa/rpi
amove usr/libexec/libcamera/raspberrypi_ipa_proxy
amove usr/share/libcamera/pipeline/rpi/vc4
}
tools() {
depends=""
amove usr/bin/cam
amove usr/bin/lc-compliance
}
sha512sums=""


@ -1079,7 +1079,7 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques
buffer.internalBuffer = frameBuffer;
descriptor->request_->addBuffer(sourceStream->stream(),
frameBuffer, nullptr);
frameBuffer);
requestedStreams.insert(sourceStream);
}


@ -98,12 +98,12 @@ unsigned int Image::numPlanes() const
Span<uint8_t> Image::data(unsigned int plane)
{
assert(plane <= planes_.size());
assert(plane < planes_.size());
return planes_[plane];
}
Span<const uint8_t> Image::data(unsigned int plane) const
{
assert(plane <= planes_.size());
assert(plane < planes_.size());
return planes_[plane];
}


@ -68,7 +68,7 @@ static const GEnumValue {{ ctrl.name|snake_case }}_types[] = {
"{{ enum.gst_name }}"
},
{%- endfor %}
{0, NULL, NULL}
{0, nullptr, nullptr}
};
#define TYPE_{{ ctrl.name|snake_case|upper }} \


@ -72,6 +72,10 @@ gst_libcamera_pad_query(GstPad *pad, GstObject *parent, GstQuery *query)
if (query->type != GST_QUERY_LATENCY)
return gst_pad_query_default(pad, parent, query);
GLibLocker lock(GST_OBJECT(self));
if (self->latency == GST_CLOCK_TIME_NONE)
return FALSE;
/* TRUE here means live; we assume that the max latency is the same as
* the min, as we have no idea of the duration of frames. */
gst_query_set_latency(query, TRUE, self->latency, self->latency);
@ -81,6 +85,7 @@ gst_libcamera_pad_query(GstPad *pad, GstObject *parent, GstQuery *query)
static void
gst_libcamera_pad_init(GstLibcameraPad *self)
{
self->latency = GST_CLOCK_TIME_NONE;
GST_PAD_QUERYFUNC(self) = gst_libcamera_pad_query;
}
@ -102,7 +107,7 @@ gst_libcamera_stream_role_get_type()
"libcamera::Viewfinder",
"view-finder",
},
{ 0, NULL, NULL }
{ 0, nullptr, nullptr }
};
if (!type)


@ -32,7 +32,7 @@ GST_DEBUG_CATEGORY_STATIC(provider_debug);
*/
enum {
PROP_DEVICE_NAME = 1,
PROP_DEVICE_ = 1,
};
#define GST_TYPE_LIBCAMERA_DEVICE gst_libcamera_device_get_type()
@ -76,14 +76,11 @@ gst_libcamera_device_reconfigure_element(GstDevice *device,
static void
gst_libcamera_device_set_property(GObject *object, guint prop_id,
const GValue *value, GParamSpec *pspec)
[[maybe_unused]]const GValue *value, GParamSpec *pspec)
{
GstLibcameraDevice *device = GST_LIBCAMERA_DEVICE(object);
// GstLibcameraDevice *device = GST_LIBCAMERA_DEVICE(object);
switch (prop_id) {
case PROP_DEVICE_NAME:
device->name = g_value_dup_string(value);
break;
default:
G_OBJECT_WARN_INVALID_PROPERTY_ID(object, prop_id, pspec);
break;
@ -117,12 +114,6 @@ gst_libcamera_device_class_init(GstLibcameraDeviceClass *klass)
object_class->set_property = gst_libcamera_device_set_property;
object_class->finalize = gst_libcamera_device_finalize;
GParamSpec *pspec = g_param_spec_string("name", "Name",
"The name of the camera device", "",
(GParamFlags)(G_PARAM_STATIC_STRINGS | G_PARAM_WRITABLE |
G_PARAM_CONSTRUCT_ONLY));
g_object_class_install_property(object_class, PROP_DEVICE_NAME, pspec);
}
static GstDevice *


@ -29,6 +29,8 @@
#include <atomic>
#include <queue>
#include <tuple>
#include <utility>
#include <vector>
#include <libcamera/camera.h>
@ -234,6 +236,8 @@ GstLibcameraSrcState::requestCompleted(Request *request)
GLibLocker locker(&lock_);
controls_.readMetadata(request);
if(queuedRequests_.empty())
return;
wrap = std::move(queuedRequests_.front());
queuedRequests_.pop();
@ -285,10 +289,19 @@ gst_libcamera_extrapolate_info(GstVideoInfo *info, guint32 stride)
}
static GstFlowReturn
gst_libcamera_video_frame_copy(GstBuffer *src, GstBuffer *dest, const GstVideoInfo *dest_info, guint32 stride)
gst_libcamera_video_frame_copy(GstBuffer *src, GstBuffer *dest,
const GstVideoInfo *dest_info, guint32 stride)
{
GstVideoInfo src_info = *dest_info;
/*
* When dropping support for versions earlier than v1.22.0, use
*
* g_auto (GstVideoFrame) src_frame = GST_VIDEO_FRAME_INIT;
* g_auto (GstVideoFrame) dest_frame = GST_VIDEO_FRAME_INIT;
*
* and drop the gst_video_frame_unmap() calls.
*/
GstVideoFrame src_frame, dest_frame;
GstVideoInfo src_info = *dest_info;
gst_libcamera_extrapolate_info(&src_info, stride);
src_info.size = gst_buffer_get_size(src);
@ -298,7 +311,12 @@ gst_libcamera_video_frame_copy(GstBuffer *src, GstBuffer *dest, const GstVideoIn
return GST_FLOW_ERROR;
}
if (!gst_video_frame_map(&dest_frame, const_cast<GstVideoInfo *>(dest_info), dest, GST_MAP_WRITE)) {
/*
* When dropping support for versions earlier than 1.20.0, drop the
* const_cast<>().
*/
if (!gst_video_frame_map(&dest_frame, const_cast<GstVideoInfo *>(dest_info),
dest, GST_MAP_WRITE)) {
GST_ERROR("Could not map dest buffer");
gst_video_frame_unmap(&src_frame);
return GST_FLOW_ERROR;
@ -352,10 +370,10 @@ int GstLibcameraSrcState::processRequest()
if (video_pool) {
/* Only set video pool when a copy is needed. */
GstBuffer *copy = NULL;
GstBuffer *copy = nullptr;
const GstVideoInfo info = gst_libcamera_pad_get_video_info(srcpad);
ret = gst_buffer_pool_acquire_buffer(video_pool, &copy, NULL);
ret = gst_buffer_pool_acquire_buffer(video_pool, &copy, nullptr);
if (ret != GST_FLOW_OK) {
gst_buffer_unref(buffer);
GST_ELEMENT_ERROR(src_, RESOURCE, SETTINGS,
@ -507,6 +525,73 @@ gst_libcamera_src_open(GstLibcameraSrc *self)
return true;
}
/**
* \brief Create a video pool for a pad
* \param[in] self The libcamerasrc instance
* \param[in] srcpad The pad
* \param[in] caps The pad caps
* \param[in] info The video info for the pad
*
* This function creates and returns a video buffer pool for the given pad if
* needed to accommodate stride mismatch. If the peer element supports stride
* negotiation through the meta API, no pool is needed and the function will
* return a null pool.
*
* \return A tuple containing the video buffer pool pointer and an error code
*/
static std::tuple<GstBufferPool *, int>
gst_libcamera_create_video_pool(GstLibcameraSrc *self, GstPad *srcpad,
GstCaps *caps, const GstVideoInfo *info)
{
g_autoptr(GstQuery) query = nullptr;
g_autoptr(GstBufferPool) pool = nullptr;
const gboolean need_pool = true;
/*
* Get the peer allocation hints to check if it supports the meta API.
* If so, the stride will be negotiated, and there's no need to create a
* video pool.
*/
query = gst_query_new_allocation(caps, need_pool);
if (!gst_pad_peer_query(srcpad, query))
GST_DEBUG_OBJECT(self, "Didn't get downstream ALLOCATION hints");
else if (gst_query_find_allocation_meta(query, GST_VIDEO_META_API_TYPE, nullptr))
return { nullptr, 0 };
GST_WARNING_OBJECT(self, "Downstream doesn't support video meta, need to copy frame.");
/*
* If the allocation query has pools, use the first one. Otherwise,
* create a new pool.
*/
if (gst_query_get_n_allocation_pools(query) > 0)
gst_query_parse_nth_allocation_pool(query, 0, &pool, nullptr,
nullptr, nullptr);
if (!pool) {
GstStructure *config;
guint min_buffers = 3;
pool = gst_video_buffer_pool_new();
config = gst_buffer_pool_get_config(pool);
gst_buffer_pool_config_set_params(config, caps, info->size, min_buffers, 0);
GST_DEBUG_OBJECT(self, "Own pool config is %" GST_PTR_FORMAT, config);
gst_buffer_pool_set_config(GST_BUFFER_POOL_CAST(pool), config);
}
if (!gst_buffer_pool_set_active(pool, true)) {
GST_ELEMENT_ERROR(self, RESOURCE, SETTINGS,
("Failed to activate buffer pool"),
("gst_libcamera_src_negotiate() failed."));
return { nullptr, -EINVAL };
}
return { std::exchange(pool, nullptr), 0 };
}
/* Must be called with stream_lock held. */
static bool
gst_libcamera_src_negotiate(GstLibcameraSrc *self)
@ -578,7 +663,7 @@ gst_libcamera_src_negotiate(GstLibcameraSrc *self)
for (gsize i = 0; i < state->srcpads_.size(); i++) {
GstPad *srcpad = state->srcpads_[i];
const StreamConfiguration &stream_cfg = state->config_->at(i);
GstBufferPool *video_pool = NULL;
GstBufferPool *video_pool = nullptr;
GstVideoInfo info;
g_autoptr(GstCaps) caps = gst_libcamera_stream_configuration_to_caps(stream_cfg, transfer[i]);
@ -589,50 +674,13 @@ gst_libcamera_src_negotiate(GstLibcameraSrc *self)
/* Stride mismatch between camera stride and that calculated by video-info. */
if (static_cast<unsigned int>(info.stride[0]) != stream_cfg.stride &&
GST_VIDEO_INFO_FORMAT(&info) != GST_VIDEO_FORMAT_ENCODED) {
GstQuery *query = NULL;
const gboolean need_pool = true;
gboolean has_video_meta = false;
gst_libcamera_extrapolate_info(&info, stream_cfg.stride);
query = gst_query_new_allocation(caps, need_pool);
if (!gst_pad_peer_query(srcpad, query))
GST_DEBUG_OBJECT(self, "Didn't get downstream ALLOCATION hints");
else
has_video_meta = gst_query_find_allocation_meta(query, GST_VIDEO_META_API_TYPE, NULL);
if (!has_video_meta) {
GstBufferPool *pool = NULL;
if (gst_query_get_n_allocation_pools(query) > 0)
gst_query_parse_nth_allocation_pool(query, 0, &pool, NULL, NULL, NULL);
if (pool)
video_pool = pool;
else {
GstStructure *config;
guint min_buffers = 3;
video_pool = gst_video_buffer_pool_new();
config = gst_buffer_pool_get_config(video_pool);
gst_buffer_pool_config_set_params(config, caps, info.size, min_buffers, 0);
GST_DEBUG_OBJECT(self, "Own pool config is %" GST_PTR_FORMAT, config);
gst_buffer_pool_set_config(GST_BUFFER_POOL_CAST(video_pool), config);
}
GST_WARNING_OBJECT(self, "Downstream doesn't support video meta, need to copy frame.");
if (!gst_buffer_pool_set_active(video_pool, true)) {
gst_caps_unref(caps);
GST_ELEMENT_ERROR(self, RESOURCE, SETTINGS,
("Failed to active buffer pool"),
("gst_libcamera_src_negotiate() failed."));
return false;
}
}
gst_query_unref(query);
std::tie(video_pool, ret) =
gst_libcamera_create_video_pool(self, srcpad,
caps, &info);
if (ret)
return false;
}
GstLibcameraPool *pool = gst_libcamera_pool_new(self->allocator,
@ -835,8 +883,10 @@ gst_libcamera_src_task_leave([[maybe_unused]] GstTask *task,
{
GLibRecLocker locker(&self->stream_lock);
for (GstPad *srcpad : state->srcpads_)
for (GstPad *srcpad : state->srcpads_) {
gst_libcamera_pad_set_latency(srcpad, GST_CLOCK_TIME_NONE);
gst_libcamera_pad_set_pool(srcpad, nullptr);
}
}
g_clear_object(&self->allocator);
@ -1020,7 +1070,7 @@ gst_libcamera_src_request_new_pad(GstElement *element, GstPadTemplate *templ,
const gchar *name, [[maybe_unused]] const GstCaps *caps)
{
GstLibcameraSrc *self = GST_LIBCAMERA_SRC(element);
g_autoptr(GstPad) pad = NULL;
g_autoptr(GstPad) pad = nullptr;
GST_DEBUG_OBJECT(self, "new request pad created");
@ -1034,12 +1084,12 @@ gst_libcamera_src_request_new_pad(GstElement *element, GstPadTemplate *templ,
GST_ELEMENT_ERROR(element, STREAM, FAILED,
("Internal data stream error."),
("Could not add pad to element"));
return NULL;
return nullptr;
}
gst_child_proxy_child_added(GST_CHILD_PROXY(self), G_OBJECT(pad), GST_OBJECT_NAME(pad));
return reinterpret_cast<GstPad *>(g_steal_pointer(&pad));
return std::exchange(pad, nullptr);
}
static void


@ -39,6 +39,17 @@ LOG_DEFINE_CATEGORY(RkISP1Filter)
static constexpr uint32_t kFiltLumWeightDefault = 0x00022040;
static constexpr uint32_t kFiltModeDefault = 0x000004f2;
/**
* \copydoc libcamera::ipa::Algorithm::init
*/
int Filter::init(IPAContext &context,
[[maybe_unused]] const YamlObject &tuningData)
{
auto &cmap = context.ctrlMap;
cmap[&controls::Sharpness] = ControlInfo(0.0f, 10.0f, 1.0f);
return 0;
}
/**
* \copydoc libcamera::ipa::Algorithm::queueRequest
*/


@ -21,6 +21,7 @@ public:
Filter() = default;
~Filter() = default;
int init(IPAContext &context, const YamlObject &tuningData) override;
void queueRequest(IPAContext &context, const uint32_t frame,
IPAFrameContext &frameContext,
const ControlList &controls) override;


@ -116,7 +116,6 @@ const IPAHwSettings ipaHwSettingsV12{
/* List of controls handled by the RkISP1 IPA */
const ControlInfoMap::Map rkisp1Controls{
{ &controls::DebugMetadataEnable, ControlInfo(false, true, false) },
{ &controls::Sharpness, ControlInfo(0.0f, 10.0f, 1.0f) },
{ &controls::draft::NoiseReductionMode, ControlInfo(controls::draft::NoiseReductionModeValues) },
};


@ -58,23 +58,24 @@ const ControlInfoMap::Map ipaControls{
/* \todo Move this to the Camera class */
{ &controls::AeEnable, ControlInfo(false, true, true) },
{ &controls::ExposureTimeMode,
ControlInfo(static_cast<int32_t>(controls::ExposureTimeModeAuto),
static_cast<int32_t>(controls::ExposureTimeModeManual),
static_cast<int32_t>(controls::ExposureTimeModeAuto)) },
ControlInfo({ { ControlValue(controls::ExposureTimeModeAuto),
ControlValue(controls::ExposureTimeModeManual) } },
ControlValue(controls::ExposureTimeModeAuto)) },
{ &controls::ExposureTime,
ControlInfo(1, 66666, static_cast<int32_t>(defaultExposureTime.get<std::micro>())) },
{ &controls::AnalogueGainMode,
ControlInfo(static_cast<int32_t>(controls::AnalogueGainModeAuto),
static_cast<int32_t>(controls::AnalogueGainModeManual),
static_cast<int32_t>(controls::AnalogueGainModeAuto)) },
ControlInfo({ { ControlValue(controls::AnalogueGainModeAuto),
ControlValue(controls::AnalogueGainModeManual) } },
ControlValue(controls::AnalogueGainModeAuto)) },
{ &controls::AnalogueGain, ControlInfo(1.0f, 16.0f, 1.0f) },
{ &controls::AeMeteringMode, ControlInfo(controls::AeMeteringModeValues) },
{ &controls::AeConstraintMode, ControlInfo(controls::AeConstraintModeValues) },
{ &controls::AeExposureMode, ControlInfo(controls::AeExposureModeValues) },
{ &controls::ExposureValue, ControlInfo(-8.0f, 8.0f, 0.0f) },
{ &controls::AeFlickerMode, ControlInfo(static_cast<int>(controls::FlickerOff),
static_cast<int>(controls::FlickerManual),
static_cast<int>(controls::FlickerOff)) },
{ &controls::AeFlickerMode,
ControlInfo({ { ControlValue(controls::FlickerOff),
ControlValue(controls::FlickerManual) } },
ControlValue(controls::FlickerOff)) },
{ &controls::AeFlickerPeriod, ControlInfo(100, 1000000) },
{ &controls::Brightness, ControlInfo(-1.0f, 1.0f, 0.0f) },
{ &controls::Contrast, ControlInfo(0.0f, 32.0f, 1.0f) },
@ -232,25 +233,6 @@ int32_t IpaBase::configure(const IPACameraSensorInfo &sensorInfo, const ConfigPa
agcStatus.analogueGain = defaultAnalogueGain;
applyAGC(&agcStatus, ctrls);
/*
* Set the lens to the default (typically hyperfocal) position
* on first start.
*/
if (lensPresent_) {
RPiController::AfAlgorithm *af =
dynamic_cast<RPiController::AfAlgorithm *>(controller_.getAlgorithm("af"));
if (af) {
float defaultPos =
ipaAfControls.at(&controls::LensPosition).def().get<float>();
ControlList lensCtrl(lensCtrls_);
int32_t hwpos;
af->setLensPosition(defaultPos, &hwpos);
lensCtrl.set(V4L2_CID_FOCUS_ABSOLUTE, hwpos);
result->lensControls = std::move(lensCtrl);
}
}
}
result->sensorControls = std::move(ctrls);
@ -280,8 +262,20 @@ int32_t IpaBase::configure(const IPACameraSensorInfo &sensorInfo, const ConfigPa
ctrlMap.merge(ControlInfoMap::Map(ipaColourControls));
/* Declare Autofocus controls, only if we have a controllable lens */
if (lensPresent_)
if (lensPresent_) {
ctrlMap.merge(ControlInfoMap::Map(ipaAfControls));
RPiController::AfAlgorithm *af =
dynamic_cast<RPiController::AfAlgorithm *>(controller_.getAlgorithm("af"));
if (af) {
double min, max, dflt;
af->getLensLimits(min, max);
dflt = af->getDefaultLensPosition();
ctrlMap[&controls::LensPosition] =
ControlInfo(static_cast<float>(min),
static_cast<float>(max),
static_cast<float>(dflt));
}
}
result->controlInfo = ControlInfoMap(std::move(ctrlMap), controls::controls);
@ -319,14 +313,35 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
/* Make a note of this as it tells us the HDR status of the first few frames. */
hdrStatus_ = agcStatus.hdr;
/*
* AF: If no lens position was specified, drive lens to a default position.
* This had to be deferred (not initialised by a constructor) until here
* to ensure that exactly ONE starting position is sent to the lens driver.
* It should be the static API default, not dependent on AF range or mode.
*/
if (firstStart_ && lensPresent_) {
RPiController::AfAlgorithm *af = dynamic_cast<RPiController::AfAlgorithm *>(
controller_.getAlgorithm("af"));
if (af && !af->getLensPosition()) {
int32_t hwpos;
double pos = af->getDefaultLensPosition();
if (af->setLensPosition(pos, &hwpos, true)) {
ControlList lensCtrls(lensCtrls_);
lensCtrls.set(V4L2_CID_FOCUS_ABSOLUTE, hwpos);
setLensControls.emit(lensCtrls);
}
}
}
/*
* Initialise frame counts, and decide how many frames must be hidden or
* "mistrusted", which depends on whether this is a startup from cold,
* or merely a mode switch in a running system.
*/
unsigned int agcConvergenceFrames = 0, awbConvergenceFrames = 0;
frameCount_ = 0;
if (firstStart_) {
dropFrameCount_ = helper_->hideFramesStartup();
invalidCount_ = helper_->hideFramesStartup();
mistrustCount_ = helper_->mistrustFramesStartup();
/*
@ -336,7 +351,6 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
* (mistrustCount_) that they won't see. But if zero (i.e.
* no convergence necessary), no frames need to be dropped.
*/
unsigned int agcConvergenceFrames = 0;
RPiController::AgcAlgorithm *agc = dynamic_cast<RPiController::AgcAlgorithm *>(
controller_.getAlgorithm("agc"));
if (agc) {
@ -345,7 +359,6 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
agcConvergenceFrames += mistrustCount_;
}
unsigned int awbConvergenceFrames = 0;
RPiController::AwbAlgorithm *awb = dynamic_cast<RPiController::AwbAlgorithm *>(
controller_.getAlgorithm("awb"));
if (awb) {
@ -353,15 +366,18 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
if (awbConvergenceFrames)
awbConvergenceFrames += mistrustCount_;
}
dropFrameCount_ = std::max({ dropFrameCount_, agcConvergenceFrames, awbConvergenceFrames });
LOG(IPARPI, Debug) << "Drop " << dropFrameCount_ << " frames on startup";
} else {
dropFrameCount_ = helper_->hideFramesModeSwitch();
invalidCount_ = helper_->hideFramesModeSwitch();
mistrustCount_ = helper_->mistrustFramesModeSwitch();
}
result->dropFrameCount = dropFrameCount_;
result->startupFrameCount = std::max({ agcConvergenceFrames, awbConvergenceFrames });
result->invalidFrameCount = invalidCount_;
invalidCount_ = std::max({ invalidCount_, agcConvergenceFrames, awbConvergenceFrames });
LOG(IPARPI, Debug) << "Startup frames: " << result->startupFrameCount
<< " Invalid frames: " << result->invalidFrameCount;
firstStart_ = false;
lastRunTimestamp_ = 0;
@ -441,7 +457,7 @@ void IpaBase::prepareIsp(const PrepareParams &params)
/* Allow a 10% margin on the comparison below. */
Duration delta = (frameTimestamp - lastRunTimestamp_) * 1.0ns;
if (lastRunTimestamp_ && frameCount_ > dropFrameCount_ &&
if (lastRunTimestamp_ && frameCount_ > invalidCount_ &&
delta < controllerMinFrameDuration * 0.9 && !hdrChange) {
/*
* Ensure we merge the previous frame's metadata with the current


@ -115,8 +115,8 @@ private:
/* How many frames we should avoid running control algos on. */
unsigned int mistrustCount_;
/* Number of frames that need to be dropped on startup. */
unsigned int dropFrameCount_;
/* Number of frames that need to be marked as dropped on startup. */
unsigned int invalidCount_;
/* Frame timestamp for the last run of the controller. */
uint64_t lastRunTimestamp_;


@ -33,6 +33,10 @@ public:
*
* getMode() is provided mainly for validating controls.
* getLensPosition() is provided for populating DeviceStatus.
*
* getDefaultLensPosition() and getLensLimits() were added for
* populating ControlInfoMap. They return the static API limits
* which should be independent of the current range or mode.
*/
enum AfRange { AfRangeNormal = 0,
@ -66,7 +70,9 @@ public:
}
virtual void setMode(AfMode mode) = 0;
virtual AfMode getMode() const = 0;
virtual bool setLensPosition(double dioptres, int32_t *hwpos) = 0;
virtual double getDefaultLensPosition() const = 0;
virtual void getLensLimits(double &min, double &max) const = 0;
virtual bool setLensPosition(double dioptres, int32_t *hwpos, bool force = false) = 0;
virtual std::optional<double> getLensPosition() const = 0;
virtual void triggerScan() = 0;
virtual void cancelScan() = 0;


@ -46,6 +46,8 @@ Af::SpeedDependentParams::SpeedDependentParams()
: stepCoarse(1.0),
stepFine(0.25),
contrastRatio(0.75),
retriggerRatio(0.75),
retriggerDelay(10),
pdafGain(-0.02),
pdafSquelch(0.125),
maxSlew(2.0),
@ -60,6 +62,7 @@ Af::CfgParams::CfgParams()
confThresh(16),
confClip(512),
skipFrames(5),
checkForIR(false),
map()
{
}
@ -87,6 +90,8 @@ void Af::SpeedDependentParams::read(const libcamera::YamlObject &params)
readNumber<double>(stepCoarse, params, "step_coarse");
readNumber<double>(stepFine, params, "step_fine");
readNumber<double>(contrastRatio, params, "contrast_ratio");
readNumber<double>(retriggerRatio, params, "retrigger_ratio");
readNumber<uint32_t>(retriggerDelay, params, "retrigger_delay");
readNumber<double>(pdafGain, params, "pdaf_gain");
readNumber<double>(pdafSquelch, params, "pdaf_squelch");
readNumber<double>(maxSlew, params, "max_slew");
@ -137,6 +142,7 @@ int Af::CfgParams::read(const libcamera::YamlObject &params)
readNumber<uint32_t>(confThresh, params, "conf_thresh");
readNumber<uint32_t>(confClip, params, "conf_clip");
readNumber<uint32_t>(skipFrames, params, "skip_frames");
readNumber<bool>(checkForIR, params, "check_for_ir");
if (params.contains("map"))
map = params["map"].get<ipa::Pwl>(ipa::Pwl{});
@ -176,27 +182,38 @@ Af::Af(Controller *controller)
useWindows_(false),
phaseWeights_(),
contrastWeights_(),
awbWeights_(),
scanState_(ScanState::Idle),
initted_(false),
irFlag_(false),
ftarget_(-1.0),
fsmooth_(-1.0),
prevContrast_(0.0),
oldSceneContrast_(0.0),
prevAverage_{ 0.0, 0.0, 0.0 },
oldSceneAverage_{ 0.0, 0.0, 0.0 },
prevPhase_(0.0),
skipCount_(0),
stepCount_(0),
dropCount_(0),
sameSignCount_(0),
sceneChangeCount_(0),
scanMaxContrast_(0.0),
scanMinContrast_(1.0e9),
scanStep_(0.0),
scanData_(),
reportState_(AfState::Idle)
{
/*
* Reserve space for data, to reduce memory fragmentation. It's too early
* to query the size of the PDAF (from camera) and Contrast (from ISP)
* statistics, but these are plausible upper bounds.
* Reserve space for data structures, to reduce memory fragmentation.
* It's too early to query the size of the PDAF sensor data, so guess.
*/
windows_.reserve(1);
phaseWeights_.w.reserve(16 * 12);
contrastWeights_.w.reserve(getHardwareConfig().focusRegions.width *
getHardwareConfig().focusRegions.height);
awbWeights_.w.reserve(getHardwareConfig().awbRegions.width *
getHardwareConfig().awbRegions.height);
scanData_.reserve(32);
}
@ -235,13 +252,14 @@ void Af::switchMode(CameraMode const &cameraMode, [[maybe_unused]] Metadata *met
<< statsRegion_.height;
invalidateWeights();
if (scanState_ >= ScanState::Coarse && scanState_ < ScanState::Settle) {
if (scanState_ >= ScanState::Coarse1 && scanState_ < ScanState::Settle) {
/*
* If a scan was in progress, re-start it, as CDAF statistics
* may have changed. Though if the application is just about
* to take a still picture, this will not help...
*/
startProgrammedScan();
updateLensPosition();
}
skipCount_ = cfg_.skipFrames;
}
@ -307,6 +325,7 @@ void Af::invalidateWeights()
{
phaseWeights_.sum = 0;
contrastWeights_.sum = 0;
awbWeights_.sum = 0;
}
bool Af::getPhase(PdafRegions const &regions, double &phase, double &conf)
@ -328,9 +347,8 @@ bool Af::getPhase(PdafRegions const &regions, double &phase, double &conf)
if (c >= cfg_.confThresh) {
if (c > cfg_.confClip)
c = cfg_.confClip;
c -= (cfg_.confThresh >> 2);
c -= (cfg_.confThresh >> 1);
sumWc += w * c;
c -= (cfg_.confThresh >> 2);
sumWcp += (int64_t)(w * c) * (int64_t)data.phase;
}
}
@ -364,6 +382,54 @@ double Af::getContrast(const FocusRegions &focusStats)
return (contrastWeights_.sum > 0) ? ((double)sumWc / (double)contrastWeights_.sum) : 0.0;
}
/*
* Get the average R, G, B values in AF window[s] (from AWB statistics).
* Optionally, check if all of {R,G,B} are within 4:5 of each other
* across more than 50% of the counted area and within the AF window:
* for an RGB sensor this strongly suggests that IR lighting is in use.
*/
bool Af::getAverageAndTestIr(const RgbyRegions &awbStats, double rgb[3])
{
libcamera::Size size = awbStats.size();
if (size.height != awbWeights_.rows ||
size.width != awbWeights_.cols || awbWeights_.sum == 0) {
LOG(RPiAf, Debug) << "Recompute RGB weights " << size.width << 'x' << size.height;
computeWeights(&awbWeights_, size.height, size.width);
}
uint64_t sr = 0, sg = 0, sb = 0, sw = 1;
uint64_t greyCount = 0, allCount = 0;
for (unsigned i = 0; i < awbStats.numRegions(); ++i) {
uint64_t r = awbStats.get(i).val.rSum;
uint64_t g = awbStats.get(i).val.gSum;
uint64_t b = awbStats.get(i).val.bSum;
uint64_t w = awbWeights_.w[i];
if (w) {
sw += w;
sr += w * r;
sg += w * g;
sb += w * b;
}
if (cfg_.checkForIR) {
if (4 * r < 5 * b && 4 * b < 5 * r &&
4 * r < 5 * g && 4 * g < 5 * r &&
4 * b < 5 * g && 4 * g < 5 * b)
greyCount += awbStats.get(i).counted;
allCount += awbStats.get(i).counted;
}
}
rgb[0] = sr / (double)sw;
rgb[1] = sg / (double)sw;
rgb[2] = sb / (double)sw;
return (cfg_.checkForIR && 2 * greyCount > allCount &&
4 * sr < 5 * sb && 4 * sb < 5 * sr &&
4 * sr < 5 * sg && 4 * sg < 5 * sr &&
4 * sb < 5 * sg && 4 * sg < 5 * sb);
}
void Af::doPDAF(double phase, double conf)
{
/* Apply loop gain */
@ -410,7 +476,7 @@ void Af::doPDAF(double phase, double conf)
bool Af::earlyTerminationByPhase(double phase)
{
if (scanData_.size() > 0 &&
scanData_[scanData_.size() - 1].conf >= cfg_.confEpsilon) {
scanData_[scanData_.size() - 1].conf >= cfg_.confThresh) {
double oldFocus = scanData_[scanData_.size() - 1].focus;
double oldPhase = scanData_[scanData_.size() - 1].phase;
@ -419,11 +485,12 @@ bool Af::earlyTerminationByPhase(double phase)
* Interpolate/extrapolate the lens position for zero phase.
* Check that the extrapolation is well-conditioned.
*/
if ((ftarget_ - oldFocus) * (phase - oldPhase) > 0.0) {
if ((ftarget_ - oldFocus) * (phase - oldPhase) * cfg_.speeds[speed_].pdafGain < 0.0) {
double param = phase / (phase - oldPhase);
if (-3.0 <= param && param <= 3.5) {
ftarget_ += param * (oldFocus - ftarget_);
if ((-2.5 <= param || mode_ == AfModeContinuous) && param <= 3.0) {
LOG(RPiAf, Debug) << "ETBP: param=" << param;
param = std::max(param, -2.5);
ftarget_ += param * (oldFocus - ftarget_);
return true;
}
}
@ -436,15 +503,28 @@ double Af::findPeak(unsigned i) const
{
double f = scanData_[i].focus;
if (i > 0 && i + 1 < scanData_.size()) {
double dropLo = scanData_[i].contrast - scanData_[i - 1].contrast;
double dropHi = scanData_[i].contrast - scanData_[i + 1].contrast;
if (0.0 <= dropLo && dropLo < dropHi) {
double param = 0.3125 * (1.0 - dropLo / dropHi) * (1.6 - dropLo / dropHi);
f += param * (scanData_[i - 1].focus - f);
} else if (0.0 <= dropHi && dropHi < dropLo) {
double param = 0.3125 * (1.0 - dropHi / dropLo) * (1.6 - dropHi / dropLo);
f += param * (scanData_[i + 1].focus - f);
if (scanData_.size() >= 3) {
/*
* Given the sample with the highest contrast score and its two
* neighbours either side (or same side if at the end of a scan),
* solve for the best lens position by fitting a parabola.
* Adapted from awb.cpp: interpolateQuadratic()
*/
if (i == 0)
i++;
else if (i + 1 >= scanData_.size())
i--;
double abx = scanData_[i - 1].focus - scanData_[i].focus;
double aby = scanData_[i - 1].contrast - scanData_[i].contrast;
double cbx = scanData_[i + 1].focus - scanData_[i].focus;
double cby = scanData_[i + 1].contrast - scanData_[i].contrast;
double denom = 2.0 * (aby * cbx - cby * abx);
if (std::abs(denom) >= (1.0 / 64.0) && denom * abx > 0.0) {
f = (aby * cbx * cbx - cby * abx * abx) / denom;
f = std::clamp(f, std::min(abx, cbx), std::max(abx, cbx));
f += scanData_[i].focus;
}
}
@ -458,36 +538,49 @@ void Af::doScan(double contrast, double phase, double conf)
if (scanData_.empty() || contrast > scanMaxContrast_) {
scanMaxContrast_ = contrast;
scanMaxIndex_ = scanData_.size();
if (scanState_ != ScanState::Fine)
std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
}
if (contrast < scanMinContrast_)
scanMinContrast_ = contrast;
scanData_.emplace_back(ScanRecord{ ftarget_, contrast, phase, conf });
if (scanState_ == ScanState::Coarse) {
if (ftarget_ >= cfg_.ranges[range_].focusMax ||
contrast < cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
/*
* Finished coarse scan, or termination based on contrast.
* Jump to just after max contrast and start fine scan.
*/
ftarget_ = std::min(ftarget_, findPeak(scanMaxIndex_) +
2.0 * cfg_.speeds[speed_].stepFine);
scanState_ = ScanState::Fine;
scanData_.clear();
} else
ftarget_ += cfg_.speeds[speed_].stepCoarse;
} else { /* ScanState::Fine */
if (ftarget_ <= cfg_.ranges[range_].focusMin || scanData_.size() >= 5 ||
contrast < cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
/*
* Finished fine scan, or termination based on contrast.
* Use quadratic peak-finding to find best contrast position.
*/
ftarget_ = findPeak(scanMaxIndex_);
if ((scanStep_ >= 0.0 && ftarget_ >= cfg_.ranges[range_].focusMax) ||
(scanStep_ <= 0.0 && ftarget_ <= cfg_.ranges[range_].focusMin) ||
(scanState_ == ScanState::Fine && scanData_.size() >= 3) ||
contrast < cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
double pk = findPeak(scanMaxIndex_);
/*
* Finished a scan, by hitting a limit or due to contrast dropping off.
* If this is a first coarse scan and we didn't bracket the peak, reverse!
* If this is a fine scan, or no fine step was defined, we've finished.
* Otherwise, start fine scan in opposite direction.
*/
if (scanState_ == ScanState::Coarse1 &&
scanData_[0].contrast >= cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
scanStep_ = -scanStep_;
scanState_ = ScanState::Coarse2;
} else if (scanState_ == ScanState::Fine || cfg_.speeds[speed_].stepFine <= 0.0) {
ftarget_ = pk;
scanState_ = ScanState::Settle;
} else
ftarget_ -= cfg_.speeds[speed_].stepFine;
}
} else if (scanState_ == ScanState::Coarse1 &&
scanData_[0].contrast >= cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
scanStep_ = -scanStep_;
scanState_ = ScanState::Coarse2;
} else if (scanStep_ >= 0.0) {
ftarget_ = std::min(pk + cfg_.speeds[speed_].stepFine,
cfg_.ranges[range_].focusMax);
scanStep_ = -cfg_.speeds[speed_].stepFine;
scanState_ = ScanState::Fine;
} else {
ftarget_ = std::max(pk - cfg_.speeds[speed_].stepFine,
cfg_.ranges[range_].focusMin);
scanStep_ = cfg_.speeds[speed_].stepFine;
scanState_ = ScanState::Fine;
}
scanData_.clear();
} else
ftarget_ += scanStep_;
stepCount_ = (ftarget_ == fsmooth_) ? 0 : cfg_.speeds[speed_].stepFrames;
}
@ -501,26 +594,70 @@ void Af::doAF(double contrast, double phase, double conf)
return;
}
/* Count frames for which PDAF phase has had same sign */
if (phase * prevPhase_ <= 0.0)
sameSignCount_ = 0;
else
sameSignCount_++;
prevPhase_ = phase;
if (mode_ == AfModeManual)
return; /* nothing to do */
if (scanState_ == ScanState::Pdaf) {
/*
* Use PDAF closed-loop control whenever available, in both CAF
* mode and (for a limited number of iterations) when triggered.
* If PDAF fails (due to poor contrast, noise or large defocus),
* fall back to a CDAF-based scan. To avoid "nuisance" scans,
* scan only after a number of frames with low PDAF confidence.
* If PDAF fails (due to poor contrast, noise or large defocus)
* for at least dropoutFrames, fall back to a CDAF-based scan
* immediately (in triggered-auto) or on scene change (in CAF).
*/
if (conf > (dropCount_ ? 1.0 : 0.25) * cfg_.confEpsilon) {
doPDAF(phase, conf);
if (conf >= cfg_.confEpsilon) {
if (mode_ == AfModeAuto || sameSignCount_ >= 3)
doPDAF(phase, conf);
if (stepCount_ > 0)
stepCount_--;
else if (mode_ != AfModeContinuous)
scanState_ = ScanState::Idle;
oldSceneContrast_ = contrast;
std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
sceneChangeCount_ = 0;
dropCount_ = 0;
} else if (++dropCount_ == cfg_.speeds[speed_].dropoutFrames)
startProgrammedScan();
} else if (scanState_ >= ScanState::Coarse && fsmooth_ == ftarget_) {
return;
} else {
dropCount_++;
if (dropCount_ < cfg_.speeds[speed_].dropoutFrames)
return;
if (mode_ != AfModeContinuous) {
startProgrammedScan();
return;
}
/* else fall through to waiting for a scene change */
}
}
if (scanState_ < ScanState::Coarse1 && mode_ == AfModeContinuous) {
/*
* Scanning sequence. This means PDAF has become unavailable.
* In CAF mode, not in a scan, and PDAF is unavailable.
* Wait for a scene change, followed by stability.
*/
if (contrast + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneContrast_ ||
oldSceneContrast_ + 1.0 < cfg_.speeds[speed_].retriggerRatio * contrast ||
prevAverage_[0] + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneAverage_[0] ||
oldSceneAverage_[0] + 1.0 < cfg_.speeds[speed_].retriggerRatio * prevAverage_[0] ||
prevAverage_[1] + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneAverage_[1] ||
oldSceneAverage_[1] + 1.0 < cfg_.speeds[speed_].retriggerRatio * prevAverage_[1] ||
prevAverage_[2] + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneAverage_[2] ||
oldSceneAverage_[2] + 1.0 < cfg_.speeds[speed_].retriggerRatio * prevAverage_[2]) {
oldSceneContrast_ = contrast;
std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
sceneChangeCount_ = 1;
} else if (sceneChangeCount_)
sceneChangeCount_++;
if (sceneChangeCount_ >= cfg_.speeds[speed_].retriggerDelay)
startProgrammedScan();
} else if (scanState_ >= ScanState::Coarse1 && fsmooth_ == ftarget_) {
/*
* CDAF-based scanning sequence.
* Allow a delay between steps for CDAF FoM statistics to be
* updated, and a "settling time" at the end of the sequence.
* [A coarse or fine scan can be abandoned if two PDAF samples
@ -539,11 +676,14 @@ void Af::doAF(double contrast, double phase, double conf)
scanState_ = ScanState::Pdaf;
else
scanState_ = ScanState::Idle;
dropCount_ = 0;
sceneChangeCount_ = 0;
oldSceneContrast_ = std::max(scanMaxContrast_, prevContrast_);
scanData_.clear();
} else if (conf >= cfg_.confEpsilon && earlyTerminationByPhase(phase)) {
} else if (conf >= cfg_.confThresh && earlyTerminationByPhase(phase)) {
std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
scanState_ = ScanState::Settle;
stepCount_ = (mode_ == AfModeContinuous) ? 0
: cfg_.speeds[speed_].stepFrames;
stepCount_ = (mode_ == AfModeContinuous) ? 0 : cfg_.speeds[speed_].stepFrames;
} else
doScan(contrast, phase, conf);
}
@ -573,7 +713,8 @@ void Af::updateLensPosition()
void Af::startAF()
{
/* Use PDAF if the tuning file allows it; else CDAF. */
if (cfg_.speeds[speed_].dropoutFrames > 0 &&
if (cfg_.speeds[speed_].pdafGain != 0.0 &&
cfg_.speeds[speed_].dropoutFrames > 0 &&
(mode_ == AfModeContinuous || cfg_.speeds[speed_].pdafFrames > 0)) {
if (!initted_) {
ftarget_ = cfg_.ranges[range_].focusDefault;
@ -583,16 +724,30 @@ void Af::startAF()
scanState_ = ScanState::Pdaf;
scanData_.clear();
dropCount_ = 0;
oldSceneContrast_ = 0.0;
sceneChangeCount_ = 0;
reportState_ = AfState::Scanning;
} else
} else {
startProgrammedScan();
updateLensPosition();
}
}
void Af::startProgrammedScan()
{
ftarget_ = cfg_.ranges[range_].focusMin;
updateLensPosition();
scanState_ = ScanState::Coarse;
if (!initted_ || mode_ != AfModeContinuous ||
fsmooth_ <= cfg_.ranges[range_].focusMin + 2.0 * cfg_.speeds[speed_].stepCoarse) {
ftarget_ = cfg_.ranges[range_].focusMin;
scanStep_ = cfg_.speeds[speed_].stepCoarse;
scanState_ = ScanState::Coarse2;
} else if (fsmooth_ >= cfg_.ranges[range_].focusMax - 2.0 * cfg_.speeds[speed_].stepCoarse) {
ftarget_ = cfg_.ranges[range_].focusMax;
scanStep_ = -cfg_.speeds[speed_].stepCoarse;
scanState_ = ScanState::Coarse2;
} else {
scanStep_ = -cfg_.speeds[speed_].stepCoarse;
scanState_ = ScanState::Coarse1;
}
scanMaxContrast_ = 0.0;
scanMinContrast_ = 1.0e9;
scanMaxIndex_ = 0;
@ -633,7 +788,7 @@ void Af::prepare(Metadata *imageMetadata)
uint32_t oldSt = stepCount_;
if (imageMetadata->get("pdaf.regions", regions) == 0)
getPhase(regions, phase, conf);
doAF(prevContrast_, phase, conf);
doAF(prevContrast_, phase, irFlag_ ? 0 : conf);
updateLensPosition();
LOG(RPiAf, Debug) << std::fixed << std::setprecision(2)
<< static_cast<unsigned int>(reportState_)
@ -643,7 +798,8 @@ void Af::prepare(Metadata *imageMetadata)
<< " ft" << oldFt << "->" << ftarget_
<< " fs" << oldFs << "->" << fsmooth_
<< " cont=" << (int)prevContrast_
<< " phase=" << (int)phase << " conf=" << (int)conf;
<< " phase=" << (int)phase << " conf=" << (int)conf
<< (irFlag_ ? " IR" : "");
}
/* Report status and produce new lens setting */
@ -656,6 +812,8 @@ void Af::prepare(Metadata *imageMetadata)
if (mode_ == AfModeAuto && scanState_ != ScanState::Idle)
status.state = AfState::Scanning;
else if (mode_ == AfModeManual)
status.state = AfState::Idle;
else
status.state = reportState_;
status.lensSetting = initted_ ? std::optional<int>(cfg_.map.eval(fsmooth_))
@ -667,6 +825,7 @@ void Af::process(StatisticsPtr &stats, [[maybe_unused]] Metadata *imageMetadata)
{
(void)imageMetadata;
prevContrast_ = getContrast(stats->focusRegions);
irFlag_ = getAverageAndTestIr(stats->awbRegions, prevAverage_);
}
/* Controls */
@ -715,11 +874,23 @@ void Af::setWindows(libcamera::Span<libcamera::Rectangle const> const &wins)
invalidateWeights();
}
bool Af::setLensPosition(double dioptres, int *hwpos)
double Af::getDefaultLensPosition() const
{
return cfg_.ranges[AfRangeNormal].focusDefault;
}
void Af::getLensLimits(double &min, double &max) const
{
/* Limits for manual focus are set by map, not by ranges */
min = cfg_.map.domain().start;
max = cfg_.map.domain().end;
}
bool Af::setLensPosition(double dioptres, int *hwpos, bool force)
{
bool changed = false;
if (mode_ == AfModeManual) {
if (mode_ == AfModeManual || force) {
LOG(RPiAf, Debug) << "setLensPosition: " << dioptres;
ftarget_ = cfg_.map.domain().clamp(dioptres);
changed = !(initted_ && fsmooth_ == ftarget_);
@ -763,7 +934,7 @@ void Af::setMode(AfAlgorithm::AfMode mode)
pauseFlag_ = false;
if (mode == AfModeContinuous)
scanState_ = ScanState::Trigger;
else if (mode != AfModeAuto || scanState_ < ScanState::Coarse)
else if (mode != AfModeAuto || scanState_ < ScanState::Coarse1)
goIdle();
}
}
@ -779,12 +950,14 @@ void Af::pause(AfAlgorithm::AfPause pause)
if (mode_ == AfModeContinuous) {
if (pause == AfPauseResume && pauseFlag_) {
pauseFlag_ = false;
if (scanState_ < ScanState::Coarse)
if (scanState_ < ScanState::Coarse1)
scanState_ = ScanState::Trigger;
} else if (pause != AfPauseResume && !pauseFlag_) {
pauseFlag_ = true;
if (pause == AfPauseImmediate || scanState_ < ScanState::Coarse)
goIdle();
if (pause == AfPauseImmediate || scanState_ < ScanState::Coarse1) {
scanState_ = ScanState::Idle;
scanData_.clear();
}
}
}
}


@ -15,20 +15,28 @@
/*
* This algorithm implements a hybrid of CDAF and PDAF, favouring PDAF.
*
* Whenever PDAF is available, it is used in a continuous feedback loop.
* When triggered in auto mode, we simply enable AF for a limited number
* of frames (it may terminate early if the delta becomes small enough).
* Whenever PDAF is available (and reports sufficiently high confidence),
* it is used for continuous feedback control of the lens position. When
* triggered in Auto mode, we enable the loop for a limited number of frames
* (it may terminate sooner if the phase becomes small). In CAF mode, the
* PDAF loop runs continuously. Very small lens movements are suppressed.
*
* When PDAF confidence is low (due e.g. to low contrast or extreme defocus)
* or PDAF data are absent, fall back to CDAF with a programmed scan pattern.
* A coarse and fine scan are performed, using ISP's CDAF focus FoM to
* estimate the lens position with peak contrast. This is slower due to
* extra latency in the ISP, and requires a settling time between steps.
* A coarse and fine scan are performed, using the ISP's CDAF contrast FoM
* to estimate the lens position with peak contrast. (This is slower due to
* extra latency in the ISP, and requires a settling time between steps.)
* The scan may terminate early if PDAF recovers and allows the zero-phase
* lens position to be interpolated.
*
* Some hysteresis is applied to the switch between PDAF and CDAF, to avoid
* "nuisance" scans. During each interval where PDAF is not working, only
* ONE scan will be performed; CAF cannot track objects using CDAF alone.
* In CAF mode, the fallback to a CDAF scan is triggered when PDAF fails to
* report high confidence and a configurable number of frames have elapsed
* since the last image change, and since either PDAF was working or a previous
* scan found peak contrast. Image changes are detected using both contrast
* and AWB statistics (within the AF window[s]).
*
* IR lighting can interfere with the correct operation of PDAF, so we
* optionally try to detect it (from AWB statistics).
*/
namespace RPiController {
@ -54,7 +62,9 @@ public:
void setWindows(libcamera::Span<libcamera::Rectangle const> const &wins) override;
void setMode(AfMode mode) override;
AfMode getMode() const override;
bool setLensPosition(double dioptres, int32_t *hwpos) override;
double getDefaultLensPosition() const override;
void getLensLimits(double &min, double &max) const override;
bool setLensPosition(double dioptres, int32_t *hwpos, bool force) override;
std::optional<double> getLensPosition() const override;
void triggerScan() override;
void cancelScan() override;
@ -65,7 +75,8 @@ private:
Idle = 0,
Trigger,
Pdaf,
Coarse,
Coarse1,
Coarse2,
Fine,
Settle
};
@ -80,9 +91,11 @@ private:
};
struct SpeedDependentParams {
double stepCoarse; /* used for scans */
double stepFine; /* used for scans */
double stepCoarse; /* in dioptres; used for scans */
double stepFine; /* in dioptres; used for scans */
double contrastRatio; /* used for scan termination and reporting */
double retriggerRatio; /* contrast and RGB ratio for re-triggering */
uint32_t retriggerDelay; /* frames of stability before re-triggering */
double pdafGain; /* coefficient for PDAF feedback loop */
double pdafSquelch; /* PDAF stability parameter (device-specific) */
double maxSlew; /* limit for lens movement per frame */
@ -101,6 +114,7 @@ private:
uint32_t confThresh; /* PDAF confidence cell min (sensor-specific) */
uint32_t confClip; /* PDAF confidence cell max (sensor-specific) */
uint32_t skipFrames; /* frames to skip at start or modeswitch */
bool checkForIR; /* Set this if PDAF is unreliable in IR light */
libcamera::ipa::Pwl map; /* converts dioptres -> lens driver position */
CfgParams();
@ -129,6 +143,7 @@ private:
void invalidateWeights();
bool getPhase(PdafRegions const &regions, double &phase, double &conf);
double getContrast(const FocusRegions &focusStats);
bool getAverageAndTestIr(const RgbyRegions &awbStats, double rgb[3]);
void doPDAF(double phase, double conf);
bool earlyTerminationByPhase(double phase);
double findPeak(unsigned index) const;
@ -150,15 +165,20 @@ private:
bool useWindows_;
RegionWeights phaseWeights_;
RegionWeights contrastWeights_;
RegionWeights awbWeights_;
/* Working state. */
ScanState scanState_;
bool initted_;
bool initted_, irFlag_;
double ftarget_, fsmooth_;
double prevContrast_;
double prevContrast_, oldSceneContrast_;
double prevAverage_[3], oldSceneAverage_[3];
double prevPhase_;
unsigned skipCount_, stepCount_, dropCount_;
unsigned sameSignCount_;
unsigned sceneChangeCount_;
unsigned scanMaxIndex_;
double scanMaxContrast_, scanMinContrast_;
double scanMaxContrast_, scanMinContrast_, scanStep_;
std::vector<ScanRecord> scanData_;
AfState reportState_;
};


@ -1139,11 +1139,27 @@
"step_coarse": 1.0,
"step_fine": 0.25,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.016,
"pdaf_squelch": 0.125,
"max_slew": 1.5,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
},
"fast":
{
"step_coarse": 1.25,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.02,
"pdaf_squelch": 0.125,
"max_slew": 2.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"pdaf_frames": 16,
"dropout_frames": 4,
"step_frames": 4
}
},
@ -1151,6 +1167,7 @@
"conf_thresh": 16,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": false,
"map": [ 0.0, 445, 15.0, 925 ]
}
},
@ -1267,4 +1284,4 @@
}
}
]
}
}


@ -1156,11 +1156,27 @@
"step_coarse": 1.0,
"step_fine": 0.25,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.016,
"pdaf_squelch": 0.125,
"max_slew": 1.5,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
},
"fast":
{
"step_coarse": 1.25,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.02,
"pdaf_squelch": 0.125,
"max_slew": 2.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"pdaf_frames": 16,
"dropout_frames": 4,
"step_frames": 4
}
},
@ -1168,6 +1184,7 @@
"conf_thresh": 16,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": true,
"map": [ 0.0, 445, 15.0, 925 ]
}
},
@ -1230,4 +1247,4 @@
}
}
]
}
}


@ -1148,23 +1148,27 @@
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio" : 0.8,
"retrigger_delay" : 10,
"pdaf_gain": -0.03,
"pdaf_squelch": 0.2,
"max_slew": 4.0,
"max_slew": 3.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 4
"step_frames": 5
},
"fast":
{
"step_coarse": 2.0,
"step_fine": 0.5,
"step_coarse": 2.5,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio" : 0.8,
"retrigger_delay" : 8,
"pdaf_gain": -0.05,
"pdaf_squelch": 0.2,
"max_slew": 5.0,
"max_slew": 4.0,
"pdaf_frames": 16,
"dropout_frames": 6,
"dropout_frames": 4,
"step_frames": 4
}
},
@ -1172,6 +1176,7 @@
"conf_thresh": 12,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": false,
"map": [ 0.0, 420, 35.0, 920 ]
}
},
@ -1290,4 +1295,4 @@
}
}
]
}
}


@ -1057,23 +1057,27 @@
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio" : 0.8,
"retrigger_delay" : 10,
"pdaf_gain": -0.03,
"pdaf_squelch": 0.2,
"max_slew": 4.0,
"max_slew": 3.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 4
"step_frames": 5
},
"fast":
{
"step_coarse": 2.0,
"step_fine": 0.5,
"step_coarse": 2.5,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio" : 0.8,
"retrigger_delay" : 8,
"pdaf_gain": -0.05,
"pdaf_squelch": 0.2,
"max_slew": 5.0,
"max_slew": 4.0,
"pdaf_frames": 16,
"dropout_frames": 6,
"dropout_frames": 4,
"step_frames": 4
}
},
@ -1081,6 +1085,7 @@
"conf_thresh": 12,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": true,
"map": [ 0.0, 420, 35.0, 920 ]
}
},
@ -1145,4 +1150,4 @@
}
}
]
}
}


@ -638,11 +638,27 @@
"step_coarse": 1.0,
"step_fine": 0.25,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.016,
"pdaf_squelch": 0.125,
"max_slew": 1.5,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
},
"fast":
{
"step_coarse": 1.25,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.02,
"pdaf_squelch": 0.125,
"max_slew": 2.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"pdaf_frames": 16,
"dropout_frames": 4,
"step_frames": 4
}
},
@ -650,6 +666,7 @@
"conf_thresh": 16,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": false,
"map": [ 0.0, 445, 15.0, 925 ]
}
},
@ -668,4 +685,4 @@
}
}
]
}
}


@ -737,11 +737,27 @@
"step_coarse": 1.0,
"step_fine": 0.25,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.016,
"pdaf_squelch": 0.125,
"max_slew": 1.5,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
},
"fast":
{
"step_coarse": 1.25,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.02,
"pdaf_squelch": 0.125,
"max_slew": 2.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"pdaf_frames": 16,
"dropout_frames": 4,
"step_frames": 4
}
},
@ -749,6 +765,7 @@
"conf_thresh": 16,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": true,
"map": [ 0.0, 445, 15.0, 925 ]
}
},
@ -767,4 +784,4 @@
}
}
]
}
}


@ -637,23 +637,27 @@
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.03,
"pdaf_squelch": 0.2,
"max_slew": 4.0,
"max_slew": 3.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 4
"step_frames": 5
},
"fast":
{
"step_coarse": 2.0,
"step_fine": 0.5,
"step_coarse": 2.5,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.05,
"pdaf_squelch": 0.2,
"max_slew": 5.0,
"max_slew": 4.0,
"pdaf_frames": 16,
"dropout_frames": 6,
"dropout_frames": 4,
"step_frames": 4
}
},
@ -661,6 +665,7 @@
"conf_thresh": 12,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": false,
"map": [ 0.0, 420, 35.0, 920 ]
}
},
@ -679,4 +684,4 @@
}
}
]
}
}


@ -628,23 +628,27 @@
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.03,
"pdaf_squelch": 0.2,
"max_slew": 4.0,
"max_slew": 3.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 4
"step_frames": 5
},
"fast":
{
"step_coarse": 2.0,
"step_fine": 0.5,
"step_coarse": 2.5,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.05,
"pdaf_squelch": 0.2,
"max_slew": 5.0,
"max_slew": 4.0,
"pdaf_frames": 16,
"dropout_frames": 6,
"dropout_frames": 4,
"step_frames": 4
}
},
@ -652,6 +656,7 @@
"conf_thresh": 12,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": true,
"map": [ 0.0, 420, 35.0, 920 ]
}
},
@ -670,4 +675,4 @@
}
}
]
}
}


@ -0,0 +1,131 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2025 Vasiliy Doylov <nekodevelopper@gmail.com>
*
* Auto focus
*/
#include "af.h"
#include <algorithm>
#include <stdint.h>
#include <libcamera/base/log.h>
#include "control_ids.h"
namespace libcamera {
LOG_DEFINE_CATEGORY(IPASoftAutoFocus)
namespace ipa::soft::algorithms {
Af::Af()
{
}
int Af::init(IPAContext &context,
[[maybe_unused]] const YamlObject &tuningData)
{
context.ctrlMap[&controls::LensPosition] = ControlInfo(0.0f, 100.0f, 50.0f);
context.ctrlMap[&controls::AfTrigger] = ControlInfo(0, 1, 0);
return 0;
}
int Af::configure(IPAContext &context,
[[maybe_unused]] const IPAConfigInfo &configInfo)
{
context.activeState.knobs.focus_sweep = false;
context.activeState.knobs.focus_pos = 0;
context.configuration.focus.skip = 10;
return 0;
}
void Af::queueRequest([[maybe_unused]] typename Module::Context &context,
[[maybe_unused]] const uint32_t frame,
[[maybe_unused]] typename Module::FrameContext &frameContext,
const ControlList &controls)
{
const auto &focus_pos = controls.get(controls::LensPosition);
const auto &af_trigger = controls.get(controls::AfTrigger);
if (focus_pos.has_value()) {
context.activeState.knobs.focus_pos = focus_pos;
LOG(IPASoftAutoFocus, Debug) << "Setting focus position to " << focus_pos.value();
}
if (af_trigger.has_value()) {
context.activeState.knobs.focus_sweep = af_trigger.value() == 1;
if (context.activeState.knobs.focus_sweep) {
context.activeState.knobs.focus_pos = 0;
context.configuration.focus.focus_max_pos = 0;
context.configuration.focus.sharpness_max = 0;
context.configuration.focus.start = 0;
context.configuration.focus.stop = 100;
context.configuration.focus.step = 25;
LOG(IPASoftAutoFocus, Info) << "Starting focus sweep";
}
}
}
void Af::updateFocus([[maybe_unused]] IPAContext &context, [[maybe_unused]] IPAFrameContext &frameContext, [[maybe_unused]] double exposureMSV)
{
frameContext.lens.focus_pos = context.activeState.knobs.focus_pos.value_or(50.0) / 100.0 * (context.configuration.focus.focus_max - context.configuration.focus.focus_min);
}
void Af::step(uint32_t &skip, double &start, double &stop, double &step, double &focus_pos, double &max_pos, uint64_t &max_sharp, uint64_t sharp, bool &sweep)
{
if (!sweep)
return;
if (skip != 0) {
skip--;
return;
}
skip = 2;
if (focus_pos < start) {
focus_pos = start;
return;
}
if (sharp > max_sharp) {
max_sharp = sharp;
max_pos = focus_pos;
}
if (focus_pos >= stop) {
LOG(IPASoftAutoFocus, Info) << "Best focus for step " << step << ": " << max_pos;
start = std::clamp(max_pos - step, 0.0, 100.0);
stop = std::clamp(max_pos + step, 0.0, 100.0);
focus_pos = start;
max_sharp = 0;
step /= 2;
if (step <= 0.2) {
sweep = false;
focus_pos = max_pos;
LOG(IPASoftAutoFocus, Info) << "Sweep ended. Best focus: " << focus_pos;
}
return;
}
focus_pos += step;
}
void Af::process([[maybe_unused]] IPAContext &context,
[[maybe_unused]] const uint32_t frame,
[[maybe_unused]] IPAFrameContext &frameContext,
[[maybe_unused]] const SwIspStats *stats,
[[maybe_unused]] ControlList &metadata)
{
step(context.configuration.focus.skip,
context.configuration.focus.start,
context.configuration.focus.stop,
context.configuration.focus.step,
context.activeState.knobs.focus_pos.value(),
context.configuration.focus.focus_max_pos,
context.configuration.focus.sharpness_max,
stats->sharpness,
context.activeState.knobs.focus_sweep.value());
updateFocus(context, frameContext, 0);
}
REGISTER_IPA_ALGORITHM(Af, "Af")
} /* namespace ipa::soft::algorithms */
} /* namespace libcamera */


@ -0,0 +1,41 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2025 Vasiliy Doylov <nekodevelopper@gmail.com>
*
* Auto focus
*/
#pragma once
#include "algorithm.h"
namespace libcamera {
namespace ipa::soft::algorithms {
class Af : public Algorithm
{
public:
Af();
~Af() = default;
int init(IPAContext &context, const YamlObject &tuningData) override;
int configure(IPAContext &context, const IPAConfigInfo &configInfo) override;
void queueRequest(typename Module::Context &context,
const uint32_t frame,
typename Module::FrameContext &frameContext,
const ControlList &controls)
override;
void process(IPAContext &context, const uint32_t frame,
IPAFrameContext &frameContext,
const SwIspStats *stats,
ControlList &metadata) override;
private:
void updateFocus(IPAContext &context, IPAFrameContext &frameContext, double focus);
void step(uint32_t& skip, double& start, double& stop, double& step, double& focus_pos, double& max_pos, uint64_t& max_sharp, uint64_t sharp, bool& sweep);
};
} /* namespace ipa::soft::algorithms */
} /* namespace libcamera */


@ -41,6 +41,47 @@ Agc::Agc()
{
}
int Agc::init(IPAContext &context,
[[maybe_unused]] const YamlObject &tuningData)
{
context.ctrlMap[&controls::AeEnable] = ControlInfo(false, true, true);
context.ctrlMap[&controls::Brightness] = ControlInfo(0.0f, 2.0f, 1.0f);
context.ctrlMap[&controls::ExposureValue] = ControlInfo(0.0f, 1.0f, 0.5f);
return 0;
}
int Agc::configure(IPAContext &context,
[[maybe_unused]] const IPAConfigInfo &configInfo)
{
context.activeState.knobs.brightness = std::optional<double>();
context.activeState.knobs.ae_enabled = std::optional<bool>();
return 0;
}
void Agc::queueRequest(typename Module::Context &context,
[[maybe_unused]] const uint32_t frame,
[[maybe_unused]] typename Module::FrameContext &frameContext,
const ControlList &controls)
{
const auto &brightness = controls.get(controls::Brightness);
const auto &ae_enabled = controls.get(controls::AeEnable);
const auto &exposure_value = controls.get(controls::ExposureValue);
if (brightness.has_value()) {
context.activeState.knobs.brightness = brightness;
LOG(IPASoftExposure, Debug) << "Setting brightness to " << brightness.value();
}
if (ae_enabled.has_value()) {
context.activeState.knobs.ae_enabled = ae_enabled;
LOG(IPASoftExposure, Debug) << "Setting ae_enable to " << ae_enabled.value();
}
if (exposure_value.has_value()) {
context.activeState.knobs.exposure_value = exposure_value.value();
LOG(IPASoftExposure, Debug) << "Setting exposure value to " << exposure_value.value();
}
}
void Agc::updateExposure(IPAContext &context, IPAFrameContext &frameContext, double exposureMSV)
{
/*
@ -54,6 +95,8 @@ void Agc::updateExposure(IPAContext &context, IPAFrameContext &frameContext, dou
double next;
int32_t &exposure = frameContext.sensor.exposure;
double &again = frameContext.sensor.gain;
const auto brightness = context.activeState.knobs.brightness.value_or(1.0);
exposureMSV /= brightness;
if (exposureMSV < kExposureOptimal - kExposureSatisfactory) {
next = exposure * kExpNumeratorUp / kExpDenominator;
@ -103,10 +146,17 @@ void Agc::process(IPAContext &context,
const SwIspStats *stats,
ControlList &metadata)
{
const auto ae_enable = context.activeState.knobs.ae_enabled.value_or(true);
if (!ae_enable)
frameContext.sensor.exposure = static_cast<int32_t>(
context.activeState.knobs.exposure_value.value_or(0.5) *
(context.configuration.agc.exposureMax - context.configuration.agc.exposureMin));
utils::Duration exposureTime =
context.configuration.agc.lineDuration * frameContext.sensor.exposure;
metadata.set(controls::ExposureTime, exposureTime.get<std::micro>());
metadata.set(controls::AnalogueGain, frameContext.sensor.gain);
LOG(IPASoftExposure, Debug) << "Setting exposure value to " << frameContext.sensor.exposure;
if (!ae_enable)
return;
/*
* Calculate Mean Sample Value (MSV) according to formula from:

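The brightness knob above works by dividing the measured mean sample value (MSV) before the comparison against the target, so a value above 1.0 biases the loop toward longer exposures. A minimal standalone sketch of that step — the constant names mirror the ones used in the hunk, but their values here are illustrative stand-ins, not the actual tuning values:

```cpp
#include <cassert>
#include <cstdint>

/* Assumed stand-ins for the AGC tuning constants (values are illustrative). */
constexpr double kExposureOptimal = 2.5;      /* target mean sample value */
constexpr double kExposureSatisfactory = 0.2; /* dead band around the target */
constexpr int kExpNumeratorUp = 4;            /* exposure *= 4/3 when too dark */
constexpr int kExpNumeratorDown = 2;          /* exposure *= 2/3 when too bright */
constexpr int kExpDenominator = 3;

/*
 * One AGC iteration: divide the measured MSV by the brightness knob, then
 * nudge the exposure up or down if the result falls outside the dead band.
 */
int32_t agcStep(double exposureMSV, double brightness, int32_t exposure)
{
	exposureMSV /= brightness;

	if (exposureMSV < kExposureOptimal - kExposureSatisfactory)
		return exposure * kExpNumeratorUp / kExpDenominator;
	if (exposureMSV > kExposureOptimal + kExposureSatisfactory)
		return exposure * kExpNumeratorDown / kExpDenominator;
	return exposure;
}
```

With brightness set to 2.0, a scene that already measures at the target reads as too dark, so the loop keeps raising exposure until the scene over-exposes by roughly the requested factor.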
View file

@ -19,6 +19,13 @@ public:
Agc();
~Agc() = default;
int init(IPAContext &context, const YamlObject &tuningData) override;
int configure(IPAContext &context, const IPAConfigInfo &configInfo) override;
void queueRequest(typename Module::Context &context,
const uint32_t frame,
typename Module::FrameContext &frameContext,
const ControlList &controls) override;
void process(IPAContext &context, const uint32_t frame,
IPAFrameContext &frameContext,
const SwIspStats *stats,

View file

@ -6,4 +6,6 @@ soft_simple_ipa_algorithms = files([
'blc.cpp',
'ccm.cpp',
'lut.cpp',
'af.cpp',
'stat.cpp',
])

View file

@ -0,0 +1,65 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2025 Vasiliy Doylov <nekodevelopper@gmail.com>
*
* Debayer statistic controls
*/
#include "stat.h"
#include <stdint.h>
#include <libcamera/base/log.h>
#include "control_ids.h"
namespace libcamera {
LOG_DEFINE_CATEGORY(IPASoftStatistic)
namespace ipa::soft::algorithms {
Stat::Stat()
{
}
int Stat::init(IPAContext &context,
[[maybe_unused]] const YamlObject &tuningData)
{
context.ctrlMap[&controls::DebugMetadataEnable] = ControlInfo(false, true, true);
return 0;
}
int Stat::configure(IPAContext &context,
[[maybe_unused]] const IPAConfigInfo &configInfo)
{
context.activeState.knobs.stats_enabled = std::optional<bool>();
return 0;
}
void Stat::queueRequest(typename Module::Context &context,
[[maybe_unused]] const uint32_t frame,
[[maybe_unused]] typename Module::FrameContext &frameContext,
const ControlList &controls)
{
const auto &stats_enabled = controls.get(controls::DebugMetadataEnable);
if (stats_enabled.has_value()) {
context.activeState.knobs.stats_enabled = stats_enabled;
LOG(IPASoftStatistic, Debug) << "Setting debayer statistics collection to " << stats_enabled.value();
}
}
void Stat::prepare(IPAContext &context,
[[maybe_unused]] const uint32_t frame,
[[maybe_unused]] IPAFrameContext &frameContext,
DebayerParams *params)
{
params->collect_stats = context.activeState.knobs.stats_enabled.value_or(true);
}
REGISTER_IPA_ALGORITHM(Stat, "Stat")
} /* namespace ipa::soft::algorithms */
} /* namespace libcamera */
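The knob pattern Stat follows — shared by the brightness, exposure and focus controls in this series — is: configure() resets the knob to an empty optional, queueRequest() latches a value only when the request actually carries the control, and the consumer falls back to a default with value_or(). A standalone sketch of that pattern (the struct and function names are illustrative, not libcamera API):

```cpp
#include <cassert>
#include <optional>

/* Illustrative knob holder, mirroring IPAActiveState::knobs. */
struct Knobs {
	std::optional<bool> statsEnabled;
};

/* configure(): forget any value carried over from a previous session. */
void configure(Knobs &knobs)
{
	knobs.statsEnabled = std::nullopt;
}

/* queueRequest(): latch the control only when the request carries it. */
void queueRequest(Knobs &knobs, std::optional<bool> control)
{
	if (control.has_value())
		knobs.statsEnabled = control;
}

/* prepare(): consumers fall back to the default when nothing was set. */
bool collectStats(const Knobs &knobs)
{
	return knobs.statsEnabled.value_or(true);
}
```

The optional makes "never set" distinguishable from "explicitly set to the default", and a latched value stays sticky across requests that omit the control.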

View file

@ -0,0 +1,38 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2025 Vasiliy Doylov <nekodevelopper@gmail.com>
*
* Debayer statistic controls
*/
#pragma once
#include "algorithm.h"
namespace libcamera {
namespace ipa::soft::algorithms {
class Stat : public Algorithm
{
public:
Stat();
~Stat() = default;
int init(IPAContext &context, const YamlObject &tuningData) override;
int configure(IPAContext &context, const IPAConfigInfo &configInfo) override;
void queueRequest(typename Module::Context &context,
const uint32_t frame,
typename Module::FrameContext &frameContext,
const ControlList &controls) override;
void prepare(IPAContext &context,
const uint32_t frame,
IPAFrameContext &frameContext,
DebayerParams *params) override;
};
} /* namespace ipa::soft::algorithms */
} /* namespace libcamera */

View file

@ -16,4 +16,6 @@ algorithms:
# 0, 0, 1]
- Lut:
- Agc:
- Af:
- Stat:
...

View file

@ -34,6 +34,13 @@ struct IPASessionConfiguration {
struct {
std::optional<uint8_t> level;
} black;
struct {
int32_t focus_min, focus_max;
double focus_max_pos;
uint64_t sharpness_max;
double start, stop, step;
uint32_t skip;
} focus;
};
struct IPAActiveState {
@ -64,6 +71,18 @@ struct IPAActiveState {
/* 0..2 range, 1.0 = normal */
std::optional<double> contrast;
std::optional<float> saturation;
/* 0..2 range, 1.0 = normal */
std::optional<double> brightness;
/* 0..1 range, 1 = normal */
std::optional<bool> ae_enabled;
/* 0..1 range, 0.5 = normal */
std::optional<double> exposure_value;
/* 0..100 range, 50.0 = normal */
std::optional<double> focus_pos;
/* 0..1 range, 1 = normal */
std::optional<bool> stats_enabled;
/* 0..1 range, 0 = normal */
std::optional<bool> focus_sweep;
} knobs;
};
@ -77,6 +96,10 @@ struct IPAFrameContext : public FrameContext {
double gain;
} sensor;
struct {
int32_t focus_pos;
} lens;
struct {
double red;
double blue;

View file

@ -77,6 +77,7 @@ private:
SwIspStats *stats_;
std::unique_ptr<CameraSensorHelper> camHelper_;
ControlInfoMap sensorInfoMap_;
ControlInfoMap lensInfoMap_;
/* Local parameter storage */
struct IPAContext context_;
@ -196,6 +197,7 @@ int IPASoftSimple::init(const IPASettings &settings,
int IPASoftSimple::configure(const IPAConfigInfo &configInfo)
{
sensorInfoMap_ = configInfo.sensorControls;
lensInfoMap_ = configInfo.lensControls;
const ControlInfo &exposureInfo = sensorInfoMap_.find(V4L2_CID_EXPOSURE)->second;
const ControlInfo &gainInfo = sensorInfoMap_.find(V4L2_CID_ANALOGUE_GAIN)->second;
@ -205,6 +207,17 @@ int IPASoftSimple::configure(const IPAConfigInfo &configInfo)
context_.activeState = {};
context_.frameContexts.clear();
if (lensInfoMap_.empty()) {
LOG(IPASoft, Warning) << "No camera lens found, focus control disabled.";
context_.configuration.focus.focus_min = 0;
context_.configuration.focus.focus_max = 0;
} else {
const ControlInfo &lensInfo = lensInfoMap_.find(V4L2_CID_FOCUS_ABSOLUTE)->second;
context_.configuration.focus.focus_min = lensInfo.min().get<int32_t>();
context_.configuration.focus.focus_max = lensInfo.max().get<int32_t>();
LOG(IPASoft, Debug) << "Camera lens found, focus range: "
<< context_.configuration.focus.focus_min << "-"
<< context_.configuration.focus.focus_max;
}
context_.configuration.agc.lineDuration =
context_.sensorInfo.minLineLength * 1.0s / context_.sensorInfo.pixelRate;
context_.configuration.agc.exposureMin = exposureInfo.min().get<int32_t>();
@ -327,7 +340,10 @@ void IPASoftSimple::processStats(const uint32_t frame,
ctrls.set(V4L2_CID_ANALOGUE_GAIN,
static_cast<int32_t>(camHelper_ ? camHelper_->gainCode(againNew) : againNew));
setSensorControls.emit(ctrls);
ControlList lens_ctrls(lensInfoMap_);
lens_ctrls.set(V4L2_CID_FOCUS_ABSOLUTE, frameContext.lens.focus_pos);
setSensorControls.emit(ctrls, lens_ctrls);
}
std::string IPASoftSimple::logPrefix() const

View file

@ -690,8 +690,9 @@ LogSeverity Logger::parseLogLevel(std::string_view level)
unsigned int severity = LogInvalid;
if (std::isdigit(level[0])) {
auto [end, ec] = std::from_chars(level.data(), level.data() + level.size(), severity);
if (ec != std::errc() || *end != '\0' || severity > LogFatal)
const char *levelEnd = level.data() + level.size();
auto [end, ec] = std::from_chars(level.data(), levelEnd, severity);
if (ec != std::errc() || end != levelEnd || severity > LogFatal)
severity = LogInvalid;
} else {
for (unsigned int i = 0; i < std::size(names); ++i) {

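The parseLogLevel fix above addresses a subtlety of std::from_chars on a std::string_view: the view is not guaranteed to be NUL-terminated, so dereferencing the returned pointer to look for '\0' can read past the view (and wrongly rejects views that end mid-buffer). Comparing the pointer against the view's end detects trailing junk without touching memory outside the view. A self-contained sketch of the corrected idiom:

```cpp
#include <cassert>
#include <charconv>
#include <string_view>

/*
 * Parse an unsigned integer that must span the whole view. Comparing the
 * from_chars output pointer against the view's end (rather than
 * dereferencing it to look for '\0') stays valid for non-NUL-terminated
 * views and still rejects trailing garbage.
 */
bool parseFull(std::string_view view, unsigned int &value)
{
	const char *end = view.data() + view.size();
	auto [ptr, ec] = std::from_chars(view.data(), end, value);
	return ec == std::errc() && ptr == end;
}
```

The last case below is the one the old `*end != '\0'` check got wrong: a view carved out of a larger buffer parses correctly even though the byte after it is not a NUL.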
View file

@ -488,7 +488,7 @@ std::size_t CameraConfiguration::size() const
*
* \return A CameraConfiguration::Status value that describes the validation
* status.
* \retval CameraConfigutation::Adjusted The configuration has been adjusted
* \retval CameraConfiguration::Adjusted The configuration has been adjusted
* and is now valid. The color space of some or all of the streams may have
* been changed. The caller shall check the color spaces carefully.
* \retval CameraConfiguration::Valid The configuration was already valid and

View file

@ -0,0 +1,230 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2024, Raspberry Pi Ltd
*
* Clock recovery algorithm
*/
#include "libcamera/internal/clock_recovery.h"
#include <time.h>
#include <libcamera/base/log.h>
/**
* \file clock_recovery.h
* \brief Clock recovery - deriving one clock from another independent clock
*/
namespace libcamera {
LOG_DEFINE_CATEGORY(ClockRec)
/**
* \class ClockRecovery
* \brief Recover an output clock from an input clock
*
* The ClockRecovery class derives an output clock from an input clock,
* modelling the output clock as being linearly related to the input clock.
* For example, we may use it to derive wall clock timestamps from timestamps
* measured by the internal system clock which counts local time since boot.
*
* When pairs of corresponding input and output timestamps are available,
* they should be submitted to the model with addSample(). The model will
* update, and output clock values for known input clock values can be
* obtained using getOutput().
*
* As a convenience, if the input clock is indeed the time since boot, and the
* output clock represents a real wallclock time, then addSample() can be
* called with no arguments, and a pair of timestamps will be captured at
* that moment.
*
* The configure() function accepts some configuration parameters to control
* the linear fitting process.
*/
/**
* \brief Construct a ClockRecovery
*/
ClockRecovery::ClockRecovery()
{
configure();
reset();
}
/**
* \brief Set configuration parameters
* \param[in] numSamples The approximate duration for which the state of the model
* is persistent
* \param[in] maxJitter New output samples are clamped to no more than this
* amount of jitter, to prevent sudden swings from having a large effect
* \param[in] minSamples The fitted clock model is not used to generate outputs
* until this many samples have been received
* \param[in] errorThreshold If the accumulated differences between input and
* output clocks reaches this amount over a few frames, the model is reset
*/
void ClockRecovery::configure(unsigned int numSamples, unsigned int maxJitter,
unsigned int minSamples, unsigned int errorThreshold)
{
LOG(ClockRec, Debug)
<< "configure " << numSamples << " " << maxJitter << " " << minSamples << " " << errorThreshold;
numSamples_ = numSamples;
maxJitter_ = maxJitter;
minSamples_ = minSamples;
errorThreshold_ = errorThreshold;
}
/**
* \brief Reset the clock recovery model and start again from scratch
*/
void ClockRecovery::reset()
{
LOG(ClockRec, Debug) << "reset";
lastInput_ = 0;
lastOutput_ = 0;
xAve_ = 0;
yAve_ = 0;
x2Ave_ = 0;
xyAve_ = 0;
count_ = 0;
error_ = 0.0;
/*
* Setting slope_ and offset_ to zero initially means that the clocks
* advance at exactly the same rate.
*/
slope_ = 0.0;
offset_ = 0.0;
}
/**
* \brief Add a sample point to the clock recovery model, for recovering a wall
* clock value from the internal system time since boot
*
* This is a convenience function to make it easy to derive a wall clock value
* (using the Linux CLOCK_REALTIME) from the time since the system started
* (measured by CLOCK_BOOTTIME).
*/
void ClockRecovery::addSample()
{
LOG(ClockRec, Debug) << "addSample";
struct timespec bootTime1;
struct timespec bootTime2;
struct timespec wallTime;
/* Get boot and wall clocks in microseconds. */
clock_gettime(CLOCK_BOOTTIME, &bootTime1);
clock_gettime(CLOCK_REALTIME, &wallTime);
clock_gettime(CLOCK_BOOTTIME, &bootTime2);
uint64_t boot1 = bootTime1.tv_sec * 1000000ULL + bootTime1.tv_nsec / 1000;
uint64_t boot2 = bootTime2.tv_sec * 1000000ULL + bootTime2.tv_nsec / 1000;
uint64_t boot = (boot1 + boot2) / 2;
uint64_t wall = wallTime.tv_sec * 1000000ULL + wallTime.tv_nsec / 1000;
addSample(boot, wall);
}
/**
* \brief Add a sample point to the clock recovery model, specifying the exact
* input and output clock values
* \param[in] input The input clock value
* \param[in] output The value of the output clock at the same moment, as far
* as possible, that the input clock was sampled
*
* This function should be used for corresponding clocks other than the Linux
* BOOTTIME and REALTIME clocks.
*/
void ClockRecovery::addSample(uint64_t input, uint64_t output)
{
LOG(ClockRec, Debug) << "addSample " << input << " " << output;
if (count_ == 0) {
inputBase_ = input;
outputBase_ = output;
}
/*
* We keep an eye on cumulative drift over the last several frames. If this exceeds a
* threshold, then probably the system clock has been updated and we're going to have to
* reset everything and start over.
*/
if (lastOutput_) {
int64_t inputDiff = getOutput(input) - getOutput(lastInput_);
int64_t outputDiff = output - lastOutput_;
error_ = error_ * 0.95 + (outputDiff - inputDiff);
if (std::abs(error_) > errorThreshold_) {
reset();
inputBase_ = input;
outputBase_ = output;
}
}
lastInput_ = input;
lastOutput_ = output;
/*
* Never let the new output value be more than maxJitter_ away from what
* we would have expected. This is just to reduce the effect of sudden
* large delays in the measured output.
*/
uint64_t expectedOutput = getOutput(input);
output = std::clamp(output, expectedOutput - maxJitter_, expectedOutput + maxJitter_);
/*
* We use x, y, x^2 and x*y sums to calculate the best fit line. Here we
* update them by pretending we have count_ samples at the previous fit,
* and now one new one. Gradually the effect of the older values gets
* lost. This is a very simple way of updating the fit (there are much
* more complicated ones!), but it works well enough. Using averages
* instead of sums makes the relative effect of old values and the new
* sample clearer.
*/
double x = static_cast<int64_t>(input - inputBase_);
double y = static_cast<int64_t>(output - outputBase_) - x;
unsigned int count1 = count_ + 1;
xAve_ = (count_ * xAve_ + x) / count1;
yAve_ = (count_ * yAve_ + y) / count1;
x2Ave_ = (count_ * x2Ave_ + x * x) / count1;
xyAve_ = (count_ * xyAve_ + x * y) / count1;
/*
* Don't update slope and offset until we've seen "enough" sample
* points. Note that the initial settings for slope_ and offset_
* ensures that the wallclock advances at the same rate as the realtime
* clock (but with their respective initial offsets).
*/
if (count_ > minSamples_) {
/* These are the standard equations for least squares linear regression. */
slope_ = (count1 * count1 * xyAve_ - count1 * xAve_ * count1 * yAve_) /
(count1 * count1 * x2Ave_ - count1 * xAve_ * count1 * xAve_);
offset_ = yAve_ - slope_ * xAve_;
}
/*
* Don't increase count_ above numSamples_, as this controls the long-term
* amount of the residual fit.
*/
if (count1 < numSamples_)
count_++;
}
/**
* \brief Calculate the output clock value according to the model from an input
* clock value
* \param[in] input The input clock value
*
* \return Output clock value
*/
uint64_t ClockRecovery::getOutput(uint64_t input)
{
double x = static_cast<int64_t>(input - inputBase_);
double y = slope_ * x + offset_;
uint64_t output = y + x + outputBase_;
LOG(ClockRec, Debug) << "getOutput " << input << " " << output;
return output;
}
} /* namespace libcamera */
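The fitting strategy described in the comments above — keep running averages of x, y, x² and x·y, blend each new point in as if it were one more sample, and read slope and offset off the standard regression formulas — can be sketched standalone. This is a simplified model of ClockRecovery::addSample(), not the actual implementation: it omits the jitter clamp, the error-based reset, and the cap on the sample count (which is what turns the exact mean into an exponentially decaying one so old samples fade out).

```cpp
#include <cassert>
#include <cmath>

/* Running least-squares line fit in the style of ClockRecovery::addSample(). */
struct LineFit {
	double xAve = 0.0, yAve = 0.0, x2Ave = 0.0, xyAve = 0.0;
	unsigned int count = 0;
	double slope = 0.0, offset = 0.0;

	void addSample(double x, double y)
	{
		/* Blend the new point in as one more sample among count. */
		unsigned int count1 = count + 1;
		xAve = (count * xAve + x) / count1;
		yAve = (count * yAve + y) / count1;
		x2Ave = (count * x2Ave + x * x) / count1;
		xyAve = (count * xyAve + x * y) / count1;

		/* Least-squares slope/offset from the running averages. */
		double var = x2Ave - xAve * xAve;
		if (var > 0.0) {
			slope = (xyAve - xAve * yAve) / var;
			offset = yAve - slope * xAve;
		}
		count = count1;
	}
};
```

Fed exact points on a line, the fit recovers the slope and offset; in the real class the recovered line maps input (boot) clock values to output (wall) clock values.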

View file

@ -212,7 +212,7 @@ controls:
description: |
Exposure time for the frame applied in the sensor device.
This value is specified in micro-seconds.
This value is specified in microseconds.
This control will only take effect if ExposureTimeMode is Manual. If
this control is set when ExposureTimeMode is Auto, the value will be
@ -1268,4 +1268,20 @@ controls:
description: |
Enable or disable the debug metadata.
- FrameWallClock:
type: int64_t
direction: out
description: |
This timestamp corresponds to the same moment in time as the
SensorTimestamp, but is represented as a wall clock time as measured by
the CLOCK_REALTIME clock. Like SensorTimestamp, the timestamp value is
expressed in nanoseconds.
Being a wall clock measurement, it can be used to synchronise timing
across different devices.
\sa SensorTimestamp
The FrameWallClock control can only be returned in metadata.
...

View file

@ -71,4 +71,116 @@ controls:
\sa StatsOutputEnable
- SyncMode:
type: int32_t
direction: in
description: |
Enable or disable camera synchronisation ("sync") mode.
When sync mode is enabled, a camera will synchronise frames temporally
with other cameras, either attached to the same device or a different
one. There should be one "server" device, which broadcasts timing
information to one or more "clients". Communication is one-way, from
server to clients only, and it is only clients that adjust their frame
timings to match the server.
Sync mode requires all cameras to be running at (as far as possible) the
same fixed framerate. Clients may continue to make adjustments to keep
their cameras synchronised with the server for the duration of the
session, though any updates after the initial ones should remain small.
\sa SyncReady
\sa SyncTimer
\sa SyncFrames
enum:
- name: SyncModeOff
value: 0
description: Disable sync mode.
- name: SyncModeServer
value: 1
description: |
Enable sync mode, act as server. The server broadcasts timing
messages to any clients that are listening, so that the clients can
synchronise their camera frames with the server's.
- name: SyncModeClient
value: 2
description: |
Enable sync mode, act as client. A client listens for any server
messages, and arranges for its camera frames to synchronise as
closely as possible with the server's. Many clients can listen out
for the same server. Clients can also be started ahead of any
servers, causing them merely to wait for the server to start.
- SyncReady:
type: bool
direction: out
description: |
When using the camera synchronisation algorithm, the server broadcasts
timing information to the clients. This also includes the time (some
number of frames in the future, called the "ready time") at which the
server will signal its controlling application, using this control, to
start using the image frames.
The client receives the "ready time" from the server, and will signal
its application to start using the frames at this same moment.
While this control value is false, applications (on both client and
server) should continue to wait, and not use the frames.
Once this value becomes true, it means that this is the first frame
where the server and its clients have agreed that they will both be
synchronised and that applications should begin consuming frames.
Thereafter, this control will continue to signal the value true for
the rest of the session.
\sa SyncMode
\sa SyncTimer
\sa SyncFrames
- SyncTimer:
type: int64_t
direction: out
description: |
This reports the amount of time, in microseconds, until the "ready
time", at which the server and client will signal their controlling
applications that the frames are now synchronised and should be
used. The value may be refined slightly over time, becoming more precise
as the "ready time" approaches.
Servers always report this value, whereas clients will omit this control
until they have received a message from the server that enables them to
calculate it.
Normally the value will start positive (the "ready time" is in the
future), and decrease towards zero, before becoming negative (the "ready
time" has elapsed). So there should be just one frame where the timer
value is, or is very close to, zero - the one for which the SyncReady
control becomes true. At this moment, the value indicates how closely
synchronised the client believes it is with the server.
But note that if frames are being dropped, then the "near zero" valued
frame, or indeed any other, could be skipped. In these cases the timer
value allows an application to deduce that this has happened.
\sa SyncMode
\sa SyncReady
\sa SyncFrames
- SyncFrames:
type: int32_t
direction: in
description: |
The number of frames the server should wait, after enabling
SyncModeServer, before signalling (via the SyncReady control) that
frames should be used. This therefore determines the "ready time" for
all synchronised cameras.
This control value should be set only for the device that is to act as
the server, before or at the same moment at which SyncModeServer is
enabled.
\sa SyncMode
\sa SyncReady
\sa SyncTimer
...

View file

@ -43,12 +43,19 @@ LOG_DEFINE_CATEGORY(Buffer)
* The frame has been captured with success and contains valid data. All fields
* of the FrameMetadata structure are valid.
* \var FrameMetadata::FrameError
* An error occurred during capture of the frame. The frame data may be partly
* or fully invalid. The sequence and timestamp fields of the FrameMetadata
* structure is valid, the other fields may be invalid.
* The frame data is partly or fully corrupted, missing or otherwise invalid.
* This can for instance indicate a hardware transmission error, or invalid data
* produced by the sensor during its startup phase. The sequence and timestamp
* fields of the FrameMetadata structure are valid; all the other fields may be
* invalid.
* \var FrameMetadata::FrameCancelled
* Capture stopped before the frame completed. The frame data is not valid. All
* fields of the FrameMetadata structure but the status field are invalid.
* \var FrameMetadata::FrameStartup
* The frame has been successfully captured. However, the IPA is in a
* cold-start or reset phase, and its image quality parameters are likely to
* produce unusable images. Applications are advised not to consume these
* frames. All other fields of the FrameMetadata structure are valid.
*/
/**

View file

@ -21,6 +21,7 @@ libcamera_internal_sources = files([
'byte_stream_buffer.cpp',
'camera_controls.cpp',
'camera_lens.cpp',
'clock_recovery.cpp',
'control_serializer.cpp',
'control_validator.cpp',
'converter.cpp',
@ -83,7 +84,10 @@ if not cc.has_function('dlopen')
libdl = cc.find_library('dl')
endif
libudev = dependency('libudev', required : get_option('udev'))
libyaml = dependency('yaml-0.1', required : false)
libyaml = dependency('yaml-0.1', default_options : [
'default_library=static',
'werror=false',
])
# Use one of gnutls or libcrypto (provided by OpenSSL), trying gnutls first.
libcrypto = dependency('gnutls', required : false)
@ -119,17 +123,6 @@ if libudev.found()
])
endif
# Fallback to a subproject if libyaml isn't found, as it's not packaged in AOSP.
if not libyaml.found()
cmake = import('cmake')
libyaml_vars = cmake.subproject_options()
libyaml_vars.add_cmake_defines({'CMAKE_POSITION_INDEPENDENT_CODE': 'ON'})
libyaml_vars.append_compile_args('c', '-Wno-unused-value')
libyaml_wrap = cmake.subproject('libyaml', options : libyaml_vars)
libyaml = libyaml_wrap.dependency('yaml')
endif
control_sources = []
controls_mode_files = {

View file

@ -761,30 +761,28 @@ PipelineHandlerISI::generateConfiguration(Camera *camera,
*/
StreamConfiguration cfg;
switch (role) {
case StreamRole::StillCapture:
case StreamRole::Viewfinder:
case StreamRole::VideoRecording: {
Size size = role == StreamRole::StillCapture
? data->sensor_->resolution()
: PipelineHandlerISI::kPreviewSize;
cfg = generateYUVConfiguration(camera, size);
if (cfg.pixelFormat.isValid())
break;
switch (role) {
case StreamRole::StillCapture:
case StreamRole::Viewfinder:
case StreamRole::VideoRecording: {
Size size = role == StreamRole::StillCapture
? data->sensor_->resolution()
: PipelineHandlerISI::kPreviewSize;
cfg = generateYUVConfiguration(camera, size);
if (cfg.pixelFormat.isValid())
break;
/*
* Fallback to use a Bayer format if that's what the
* sensor supports.
*/
[[fallthrough]];
}
/*
* Fallback to use a Bayer format if that's what the
* sensor supports.
*/
[[fallthrough]];
}
case StreamRole::Raw: {
cfg = generateRawConfiguration(camera);
break;
}
case StreamRole::Raw: {
cfg = generateRawConfiguration(camera);
break;
}
default:
LOG(ISI, Error) << "Requested stream role not supported: " << role;
@ -822,7 +820,7 @@ int PipelineHandlerISI::configure(Camera *camera, CameraConfiguration *c)
* routing table instead of resetting it.
*/
V4L2Subdevice::Routing routing = {};
unsigned int xbarFirstSource = crossbar_->entity()->pads().size() / 2 + 1;
unsigned int xbarFirstSource = crossbar_->entity()->pads().size() - pipes_.size();
for (const auto &[idx, config] : utils::enumerate(*c)) {
uint32_t sourcePad = xbarFirstSource + idx;
@ -1005,7 +1003,7 @@ bool PipelineHandlerISI::match(DeviceEnumerator *enumerator)
ret = capture->open();
if (ret)
return ret;
return false;
pipes_.push_back({ std::move(isi), std::move(capture) });
}

View file

@ -659,9 +659,9 @@ int PipelineHandlerBase::start(Camera *camera, const ControlList *controls)
if (!result.controls.empty())
data->setSensorControls(result.controls);
/* Configure the number of dropped frames required on startup. */
data->dropFrameCount_ = data->config_.disableStartupFrameDrops
? 0 : result.dropFrameCount;
/* Configure the number of startup and invalid frames reported by the IPA. */
data->startupFrameCount_ = result.startupFrameCount;
data->invalidFrameCount_ = result.invalidFrameCount;
for (auto const stream : data->streams_)
stream->resetBuffers();
@ -678,7 +678,6 @@ int PipelineHandlerBase::start(Camera *camera, const ControlList *controls)
data->buffersAllocated_ = true;
}
/* We need to set the dropFrameCount_ before queueing buffers. */
ret = queueAllBuffers(camera);
if (ret) {
LOG(RPI, Error) << "Failed to queue buffers";
@ -686,6 +685,9 @@ int PipelineHandlerBase::start(Camera *camera, const ControlList *controls)
return ret;
}
/* A good moment to add an initial clock sample. */
data->wallClockRecovery_.addSample();
/*
* Reset the delayed controls with the gain and exposure values set by
* the IPA.
@ -804,7 +806,8 @@ int PipelineHandlerBase::registerCamera(std::unique_ptr<RPi::CameraData> &camera
* chain. There may be a cascade of devices in this chain!
*/
MediaLink *link = sensorEntity->getPadByIndex(0)->links()[0];
data->enumerateVideoDevices(link, frontendName);
if (!data->enumerateVideoDevices(link, frontendName))
return -EINVAL;
ipa::RPi::InitResult result;
if (data->loadIPA(&result)) {
@ -894,28 +897,12 @@ int PipelineHandlerBase::queueAllBuffers(Camera *camera)
int ret;
for (auto const stream : data->streams_) {
if (!(stream->getFlags() & StreamFlag::External)) {
ret = stream->queueAllBuffers();
if (ret < 0)
return ret;
} else {
/*
* For external streams, we must queue up a set of internal
* buffers to handle the number of drop frames requested by
* the IPA. This is done by passing nullptr in queueBuffer().
*
* The below queueBuffer() call will do nothing if there
* are not enough internal buffers allocated, but this will
* be handled by queuing the request for buffers in the
* RPiStream object.
*/
unsigned int i;
for (i = 0; i < data->dropFrameCount_; i++) {
ret = stream->queueBuffer(nullptr);
if (ret)
return ret;
}
}
if (stream->getFlags() & StreamFlag::External)
continue;
ret = stream->queueAllBuffers();
if (ret < 0)
return ret;
}
return 0;
@ -1032,16 +1019,20 @@ void CameraData::freeBuffers()
* | Sensor2 | | Sensor3 |
* +---------+ +---------+
*/
void CameraData::enumerateVideoDevices(MediaLink *link, const std::string &frontend)
bool CameraData::enumerateVideoDevices(MediaLink *link, const std::string &frontend)
{
const MediaPad *sinkPad = link->sink();
const MediaEntity *entity = sinkPad->entity();
bool frontendFound = false;
/* Once we reach the Frontend entity, we are done. */
if (link->sink()->entity()->name() == frontend)
return true;
/* We only deal with Video Mux and Bridge devices in cascade. */
if (entity->function() != MEDIA_ENT_F_VID_MUX &&
entity->function() != MEDIA_ENT_F_VID_IF_BRIDGE)
return;
return false;
/* Find the source pad for this Video Mux or Bridge device. */
const MediaPad *sourcePad = nullptr;
@ -1053,7 +1044,7 @@ void CameraData::enumerateVideoDevices(MediaLink *link, const std::string &front
* and this branch in the cascade.
*/
if (sourcePad)
return;
return false;
sourcePad = pad;
}
@ -1070,12 +1061,9 @@ void CameraData::enumerateVideoDevices(MediaLink *link, const std::string &front
* other Video Mux and Bridge devices.
*/
for (MediaLink *l : sourcePad->links()) {
enumerateVideoDevices(l, frontend);
/* Once we reach the Frontend entity, we are done. */
if (l->sink()->entity()->name() == frontend) {
frontendFound = true;
frontendFound = enumerateVideoDevices(l, frontend);
if (frontendFound)
break;
}
}
/* This identifies the end of our entity enumeration recursion. */
@ -1090,12 +1078,13 @@ void CameraData::enumerateVideoDevices(MediaLink *link, const std::string &front
bridgeDevices_.clear();
}
}
return frontendFound;
}
int CameraData::loadPipelineConfiguration()
{
config_ = {
.disableStartupFrameDrops = false,
.cameraTimeoutValue = 0,
};
@ -1132,8 +1121,10 @@ int CameraData::loadPipelineConfiguration()
const YamlObject &phConfig = (*root)["pipeline_handler"];
config_.disableStartupFrameDrops =
phConfig["disable_startup_frame_drops"].get<bool>(config_.disableStartupFrameDrops);
if (phConfig.contains("disable_startup_frame_drops"))
LOG(RPI, Warning)
<< "The disable_startup_frame_drops key is now deprecated, "
<< "startup frames are now identified by the FrameMetadata::Status::FrameStartup flag";
config_.cameraTimeoutValue =
phConfig["camera_timeout_value_ms"].get<unsigned int>(config_.cameraTimeoutValue);
@ -1412,7 +1403,15 @@ void CameraData::handleStreamBuffer(FrameBuffer *buffer, RPi::Stream *stream)
* buffer back to the stream.
*/
Request *request = requestQueue_.empty() ? nullptr : requestQueue_.front();
if (!dropFrameCount_ && request && request->findBuffer(stream) == buffer) {
if (request && request->findBuffer(stream) == buffer) {
FrameMetadata &md = buffer->_d()->metadata();
/* Mark the non-converged and invalid frames in the metadata. */
if (invalidFrameCount_)
md.status = FrameMetadata::Status::FrameError;
else if (startupFrameCount_)
md.status = FrameMetadata::Status::FrameStartup;
/*
* Tag the buffer as completed, returning it to the
* application.
@ -1458,42 +1457,31 @@ void CameraData::handleState()
void CameraData::checkRequestCompleted()
{
bool requestCompleted = false;
/*
* If we are dropping this frame, do not touch the request, simply
* change the state to IDLE when ready.
*/
if (!dropFrameCount_) {
Request *request = requestQueue_.front();
if (request->hasPendingBuffers())
return;
Request *request = requestQueue_.front();
if (request->hasPendingBuffers())
return;
/* Must wait for metadata to be filled in before completing. */
if (state_ != State::IpaComplete)
return;
/* Must wait for metadata to be filled in before completing. */
if (state_ != State::IpaComplete)
return;
LOG(RPI, Debug) << "Completing request sequence: "
<< request->sequence();
LOG(RPI, Debug) << "Completing request sequence: "
<< request->sequence();
pipe()->completeRequest(request);
requestQueue_.pop();
requestCompleted = true;
}
pipe()->completeRequest(request);
requestQueue_.pop();
/*
* Make sure we have three outputs completed in the case of a dropped
* frame.
*/
if (state_ == State::IpaComplete &&
((ispOutputCount_ == ispOutputTotal_ && dropFrameCount_) ||
requestCompleted)) {
LOG(RPI, Debug) << "Going into Idle state";
state_ = State::Idle;
if (dropFrameCount_) {
dropFrameCount_--;
LOG(RPI, Debug) << "Dropping frame at the request of the IPA ("
<< dropFrameCount_ << " left)";
}
LOG(RPI, Debug) << "Going into Idle state";
state_ = State::Idle;
if (invalidFrameCount_) {
invalidFrameCount_--;
LOG(RPI, Debug) << "Decrementing invalid frames to "
<< invalidFrameCount_;
} else if (startupFrameCount_) {
startupFrameCount_--;
LOG(RPI, Debug) << "Decrementing startup frames to "
<< startupFrameCount_;
}
}
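The reworked completion path above replaces the single dropFrameCount_ with two counters that are consumed one per completed frame, invalid frames before startup frames. A minimal standalone sketch of that accounting (FrameTag and Accounting are illustrative names, not libcamera API):

```cpp
#include <cassert>

/* Illustrative stand-in for the counter semantics in checkRequestCompleted(). */
enum class FrameTag { Error, Startup, Valid };

struct Accounting {
	unsigned int invalidFrameCount;
	unsigned int startupFrameCount;

	/* Tag the current frame, then advance the counters. */
	FrameTag completeFrame()
	{
		if (invalidFrameCount) {
			invalidFrameCount--;
			return FrameTag::Error;
		}
		if (startupFrameCount) {
			startupFrameCount--;
			return FrameTag::Startup;
		}
		return FrameTag::Valid;
	}
};
```

The request is always completed; the application decides what to do with frames tagged FrameError or FrameStartup, which is the point of deprecating disable_startup_frame_drops.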
@@ -1501,6 +1489,8 @@ void CameraData::fillRequestMetadata(const ControlList &bufferControls, Request
{
request->metadata().set(controls::SensorTimestamp,
bufferControls.get(controls::SensorTimestamp).value_or(0));
request->metadata().set(controls::FrameWallClock,
bufferControls.get(controls::FrameWallClock).value_or(0));
if (cropParams_.size()) {
std::vector<Rectangle> crops;


@@ -20,6 +20,7 @@
#include "libcamera/internal/bayer_format.h"
#include "libcamera/internal/camera.h"
#include "libcamera/internal/camera_sensor.h"
#include "libcamera/internal/clock_recovery.h"
#include "libcamera/internal/framebuffer.h"
#include "libcamera/internal/media_device.h"
#include "libcamera/internal/media_object.h"
@@ -48,8 +49,7 @@ class CameraData : public Camera::Private
public:
CameraData(PipelineHandler *pipe)
: Camera::Private(pipe), state_(State::Stopped),
dropFrameCount_(0), buffersAllocated_(false),
ispOutputCount_(0), ispOutputTotal_(0)
startupFrameCount_(0), invalidFrameCount_(0), buffersAllocated_(false)
{
}
@@ -68,7 +68,7 @@ public:
void freeBuffers();
virtual void platformFreeBuffers() = 0;
void enumerateVideoDevices(MediaLink *link, const std::string &frontend);
bool enumerateVideoDevices(MediaLink *link, const std::string &frontend);
int loadPipelineConfiguration();
int loadIPA(ipa::RPi::InitResult *result);
@@ -151,7 +151,8 @@ public:
/* Mapping of CropParams keyed by the output stream order in CameraConfiguration */
std::map<unsigned int, CropParams> cropParams_;
unsigned int dropFrameCount_;
unsigned int startupFrameCount_;
unsigned int invalidFrameCount_;
/*
* If set, this stores the value that represents a gain of one for
@@ -163,11 +164,6 @@ public:
bool buffersAllocated_;
struct Config {
/*
* Override any request from the IPA to drop a number of startup
* frames.
*/
bool disableStartupFrameDrops;
/*
* Override the camera timeout value calculated by the IPA based
* on frame durations.
@@ -177,15 +173,14 @@ public:
Config config_;
ClockRecovery wallClockRecovery_;
protected:
void fillRequestMetadata(const ControlList &bufferControls,
Request *request);
virtual void tryRunPipeline() = 0;
unsigned int ispOutputCount_;
unsigned int ispOutputTotal_;
private:
void checkRequestCompleted();
};


@@ -16,11 +16,6 @@
#
# "num_cfe_config_queue": 2,
# Override any request from the IPA to drop a number of startup
# frames.
#
# "disable_startup_frame_drops": false,
# Custom timeout value (in ms) for camera to use. This overrides
# the value computed by the pipeline handler based on frame
# durations.


@@ -1755,9 +1755,15 @@ void PiSPCameraData::cfeBufferDequeue(FrameBuffer *buffer)
auto [ctrl, delayContext] = delayedCtrls_->get(buffer->metadata().sequence);
/*
* Add the frame timestamp to the ControlList for the IPA to use
* as it does not receive the FrameBuffer object.
* as it does not receive the FrameBuffer object. Also derive a
* corresponding wallclock value.
*/
ctrl.set(controls::SensorTimestamp, buffer->metadata().timestamp);
wallClockRecovery_.addSample();
uint64_t sensorTimestamp = buffer->metadata().timestamp;
uint64_t wallClockTimestamp = wallClockRecovery_.getOutput(sensorTimestamp);
ctrl.set(controls::SensorTimestamp, sensorTimestamp);
ctrl.set(controls::FrameWallClock, wallClockTimestamp);
job.sensorControls = std::move(ctrl);
job.delayContext = delayContext;
} else if (stream == &cfe_[Cfe::Config]) {
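The hunk above maps the monotonic sensor timestamp into the wall-clock domain via ClockRecovery. A minimal sketch of that data flow, assuming a single-offset model (the real ClockRecovery samples the clocks itself in addSample() and fits the relation over many samples; here both timestamps are passed explicitly and only the latest offset is kept, so OffsetClockRecovery is an illustrative stand-in, not the libcamera class):

```cpp
#include <cassert>
#include <cstdint>

/* Track the sensor-to-wallclock offset and translate sensor timestamps. */
class OffsetClockRecovery
{
public:
	void addSample(uint64_t sensorNs, uint64_t wallNs)
	{
		offset_ = wallNs - sensorNs;
	}

	uint64_t getOutput(uint64_t sensorNs) const
	{
		return sensorNs + offset_;
	}

private:
	uint64_t offset_ = 0;
};
```

This is enough to show why both SensorTimestamp and FrameWallClock can be set from one buffer timestamp.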
@@ -1834,12 +1840,6 @@ void PiSPCameraData::beOutputDequeue(FrameBuffer *buffer)
dmabufSyncEnd(buffer->planes()[0].fd);
handleStreamBuffer(buffer, stream);
/*
* Increment the number of ISP outputs generated.
* This is needed to track dropped frames.
*/
ispOutputCount_++;
handleState();
}
@@ -1885,7 +1885,6 @@ void PiSPCameraData::prepareIspComplete(const ipa::RPi::BufferIds &buffers, bool
* If there is no need to run the Backend, just signal that the
* input buffer is completed and all Backend outputs are ready.
*/
ispOutputCount_ = ispOutputTotal_;
buffer = cfe_[Cfe::Output0].getBuffers().at(bayerId).buffer;
handleStreamBuffer(buffer, &cfe_[Cfe::Output0]);
} else
@@ -1994,7 +1993,6 @@ int PiSPCameraData::configureBe(const std::optional<ColorSpace> &yuvColorSpace)
global.bayer_enables |= PISP_BE_BAYER_ENABLE_INPUT;
global.bayer_order = toPiSPBayerOrder(cfeFormat.fourcc);
ispOutputTotal_ = 1; /* Config buffer */
if (PISP_IMAGE_FORMAT_COMPRESSED(inputFormat.format)) {
pisp_decompress_config decompress;
decompress.offset = DefaultCompressionOffset;
@@ -2025,7 +2023,6 @@ int PiSPCameraData::configureBe(const std::optional<ColorSpace> &yuvColorSpace)
setupOutputClipping(ispFormat0, outputFormat0);
be_->SetOutputFormat(0, outputFormat0);
ispOutputTotal_++;
}
if (global.rgb_enables & PISP_BE_RGB_ENABLE_OUTPUT1) {
@@ -2049,7 +2046,6 @@ int PiSPCameraData::configureBe(const std::optional<ColorSpace> &yuvColorSpace)
setupOutputClipping(ispFormat1, outputFormat1);
be_->SetOutputFormat(1, outputFormat1);
ispOutputTotal_++;
}
/* Setup the TDN I/O blocks in case TDN gets turned on later. */
@@ -2256,8 +2252,6 @@ void PiSPCameraData::prepareCfe()
void PiSPCameraData::prepareBe(uint32_t bufferId, bool stitchSwapBuffers)
{
ispOutputCount_ = 0;
FrameBuffer *buffer = cfe_[Cfe::Output0].getBuffers().at(bufferId).buffer;
LOG(RPI, Debug) << "Input re-queue to ISP, buffer id " << bufferId


@@ -29,11 +29,6 @@
#
# "min_total_unicam_buffers": 4,
# Override any request from the IPA to drop a number of startup
# frames.
#
# "disable_startup_frame_drops": false,
# Custom timeout value (in ms) for camera to use. This overrides
# the value computed by the pipeline handler based on frame
# durations.


@@ -597,8 +597,6 @@ int Vc4CameraData::platformConfigure(const RPi::RPiCameraConfiguration *rpiConfi
stream->setFlags(StreamFlag::External);
}
ispOutputTotal_ = outStreams.size();
/*
* If ISP::Output0 stream has not been configured by the application,
* we must allow the hardware to generate an output so that the data
@@ -625,8 +623,6 @@ int Vc4CameraData::platformConfigure(const RPi::RPiCameraConfiguration *rpiConfi
return -EINVAL;
}
ispOutputTotal_++;
LOG(RPI, Debug) << "Defaulting ISP Output0 format to "
<< format;
}
@@ -662,8 +658,6 @@ int Vc4CameraData::platformConfigure(const RPi::RPiCameraConfiguration *rpiConfi
<< ret;
return -EINVAL;
}
ispOutputTotal_++;
}
/* ISP statistics output format. */
@@ -676,8 +670,6 @@ int Vc4CameraData::platformConfigure(const RPi::RPiCameraConfiguration *rpiConfi
return ret;
}
ispOutputTotal_++;
/*
* Configure the Unicam embedded data output format only if the sensor
* supports it.
@@ -781,9 +773,15 @@ void Vc4CameraData::unicamBufferDequeue(FrameBuffer *buffer)
auto [ctrl, delayContext] = delayedCtrls_->get(buffer->metadata().sequence);
/*
* Add the frame timestamp to the ControlList for the IPA to use
* as it does not receive the FrameBuffer object.
* as it does not receive the FrameBuffer object. Also derive a
* corresponding wallclock value.
*/
ctrl.set(controls::SensorTimestamp, buffer->metadata().timestamp);
wallClockRecovery_.addSample();
uint64_t sensorTimestamp = buffer->metadata().timestamp;
uint64_t wallClockTimestamp = wallClockRecovery_.getOutput(sensorTimestamp);
ctrl.set(controls::SensorTimestamp, sensorTimestamp);
ctrl.set(controls::FrameWallClock, wallClockTimestamp);
bayerQueue_.push({ buffer, std::move(ctrl), delayContext });
} else {
embeddedQueue_.push(buffer);
@@ -843,12 +841,6 @@ void Vc4CameraData::ispOutputDequeue(FrameBuffer *buffer)
handleStreamBuffer(buffer, stream);
}
/*
* Increment the number of ISP outputs generated.
* This is needed to track dropped frames.
*/
ispOutputCount_++;
handleState();
}
@@ -880,7 +872,6 @@ void Vc4CameraData::prepareIspComplete(const ipa::RPi::BufferIds &buffers,
<< ", timestamp: " << buffer->metadata().timestamp;
isp_[Isp::Input].queueBuffer(buffer);
ispOutputCount_ = 0;
if (sensorMetadata_ && embeddedId) {
buffer = unicam_[Unicam::Embedded].getBuffers().at(embeddedId & RPi::MaskID).buffer;


@@ -30,6 +30,7 @@
#include <libcamera/stream.h>
#include "libcamera/internal/camera.h"
#include "libcamera/internal/camera_lens.h"
#include "libcamera/internal/camera_sensor.h"
#include "libcamera/internal/camera_sensor_properties.h"
#include "libcamera/internal/converter.h"
@@ -41,6 +42,8 @@
#include "libcamera/internal/v4l2_subdevice.h"
#include "libcamera/internal/v4l2_videodevice.h"
#include "libcamera/controls.h"
namespace libcamera {
LOG_DEFINE_CATEGORY(SimplePipeline)
@@ -356,7 +359,7 @@ private:
void ispStatsReady(uint32_t frame, uint32_t bufferId);
void metadataReady(uint32_t frame, const ControlList &metadata);
void setSensorControls(const ControlList &sensorControls);
void setSensorControls(const ControlList &sensorControls, const ControlList &lensControls);
};
class SimpleCameraConfiguration : public CameraConfiguration
@@ -1002,7 +1005,7 @@ void SimpleCameraData::metadataReady(uint32_t frame, const ControlList &metadata
tryCompleteRequest(info->request);
}
void SimpleCameraData::setSensorControls(const ControlList &sensorControls)
void SimpleCameraData::setSensorControls(const ControlList &sensorControls, const ControlList &lensControls)
{
delayedCtrls_->push(sensorControls);
/*
@@ -1013,10 +1016,21 @@ void SimpleCameraData::setSensorControls(const ControlList &sensorControls)
* but it also bypasses delayedCtrls_, creating AGC regulation issues.
* Both problems should be fixed.
*/
if (!frameStartEmitter_) {
ControlList ctrls(sensorControls);
sensor_->setControls(&ctrls);
}
if (frameStartEmitter_)
return;
ControlList ctrls(sensorControls);
sensor_->setControls(&ctrls);
CameraLens *focusLens = sensor_->focusLens();
if (!focusLens)
return;
if (!lensControls.contains(V4L2_CID_FOCUS_ABSOLUTE))
return;
const ControlValue &focusValue = lensControls.get(V4L2_CID_FOCUS_ABSOLUTE);
focusLens->setFocusPosition(focusValue.get<int32_t>());
}
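The added path above forwards V4L2_CID_FOCUS_ABSOLUTE from the IPA's lens controls to the focus lens, guarding each precondition with an early return. A standalone sketch of that guard-clause structure, with a std::map standing in for libcamera's ControlList and a plain struct for CameraLens (both illustrative stand-ins, not the real API):

```cpp
#include <cassert>
#include <map>

struct StubLens {
	int position = -1;
	void setFocusPosition(int pos) { position = pos; }
};

/* V4L2_CID_FOCUS_ABSOLUTE = V4L2_CID_CAMERA_CLASS_BASE + 10 */
constexpr unsigned int kFocusAbsolute = 0x009a090a;

/* Apply the focus control only when a lens exists and the control is set. */
void applyLensControls(StubLens *lens,
		       const std::map<unsigned int, int> &lensControls)
{
	if (!lens)
		return;

	auto it = lensControls.find(kFocusAbsolute);
	if (it == lensControls.end())
		return;

	lens->setFocusPosition(it->second);
}
```

Each early return mirrors one of the checks in setSensorControls() above.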
/* Retrieve all source pads connected to a sink pad through active routes. */
@@ -1406,6 +1420,10 @@ int SimplePipelineHandler::configure(Camera *camera, CameraConfiguration *c)
} else {
ipa::soft::IPAConfigInfo configInfo;
configInfo.sensorControls = data->sensor_->controls();
if (data->sensor_->focusLens() != nullptr)
configInfo.lensControls = data->sensor_->focusLens()->controls();
else
configInfo.lensControls = ControlInfoMap();
return data->swIsp_->configure(inputCfg, outputCfgs, configInfo);
}
}


@@ -100,7 +100,7 @@ public:
private:
int processControl(const UVCCameraData *data, ControlList *controls,
unsigned int id, const ControlValue &value);
int processControls(UVCCameraData *data, Request *request);
int processControls(UVCCameraData *data, const ControlList &reqControls);
bool acquireDevice(Camera *camera) override;
void releaseDevice(Camera *camera) override;
@@ -287,7 +287,7 @@ int PipelineHandlerUVC::exportFrameBuffers(Camera *camera, Stream *stream,
return data->video_->exportBuffers(count, buffers);
}
int PipelineHandlerUVC::start(Camera *camera, [[maybe_unused]] const ControlList *controls)
int PipelineHandlerUVC::start(Camera *camera, const ControlList *controls)
{
UVCCameraData *data = cameraData(camera);
unsigned int count = data->stream_.configuration().bufferCount;
@@ -296,13 +296,22 @@ int PipelineHandlerUVC::start(Camera *camera, [[maybe_unused]] const ControlList
if (ret < 0)
return ret;
ret = data->video_->streamOn();
if (ret < 0) {
data->video_->releaseBuffers();
return ret;
if (controls) {
ret = processControls(data, *controls);
if (ret < 0)
goto err_release_buffers;
}
ret = data->video_->streamOn();
if (ret < 0)
goto err_release_buffers;
return 0;
err_release_buffers:
data->video_->releaseBuffers();
return ret;
}
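The rewritten start() above funnels every failure after buffer allocation through one err_release_buffers label, so the cleanup is written once instead of per error path. A standalone sketch of that single-exit pattern, with a stub device (hypothetical names) standing in for V4L2VideoDevice:

```cpp
#include <cassert>

struct StubDevice {
	bool released = false;
	int streamOnResult = 0;

	int streamOn() { return streamOnResult; }
	void releaseBuffers() { released = true; }
};

/* Any failure after the buffers are prepared jumps to one cleanup label. */
int startStub(StubDevice &dev, bool controlsFail)
{
	int ret;

	if (controlsFail) {
		ret = -22; /* -EINVAL */
		goto err_release_buffers;
	}

	ret = dev.streamOn();
	if (ret < 0)
		goto err_release_buffers;

	return 0;

err_release_buffers:
	dev.releaseBuffers();
	return ret;
}
```

On the success path the buffers stay queued; both failure paths release them exactly once.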
void PipelineHandlerUVC::stopDevice(Camera *camera)
@@ -331,6 +340,8 @@ int PipelineHandlerUVC::processControl(const UVCCameraData *data, ControlList *c
cid = V4L2_CID_GAIN;
else if (id == controls::Gamma)
cid = V4L2_CID_GAMMA;
else if (id == controls::AeEnable)
return 0; /* Handled in `Camera::queueRequest()`. */
else
return -EINVAL;
@@ -410,11 +421,11 @@ int PipelineHandlerUVC::processControl(const UVCCameraData *data, ControlList *c
return 0;
}
int PipelineHandlerUVC::processControls(UVCCameraData *data, Request *request)
int PipelineHandlerUVC::processControls(UVCCameraData *data, const ControlList &reqControls)
{
ControlList controls(data->video_->controls());
for (const auto &[id, value] : request->controls())
for (const auto &[id, value] : reqControls)
processControl(data, &controls, id, value);
for (const auto &ctrl : controls)
@@ -442,7 +453,7 @@ int PipelineHandlerUVC::queueRequestDevice(Camera *camera, Request *request)
return -ENOENT;
}
int ret = processControls(data, request);
int ret = processControls(data, request->controls());
if (ret < 0)
return ret;


@@ -372,6 +372,8 @@ void PipelineHandler::stop(Camera *camera)
/* Make sure no requests are pending. */
Camera::Private *data = camera->_d();
/* WIP: Clear any queued requests for now; possibly a thread synchronization issue. */
data->queuedRequests_.clear();
ASSERT(data->queuedRequests_.empty());
data->requestSequence_ = 0;


@@ -241,7 +241,12 @@ int Process::start(const std::string &path,
int ret;
if (running_)
return 0;
return -EBUSY;
for (int fd : fds) {
if (fd < 0)
return -EINVAL;
}
int childPid = fork();
if (childPid == -1) {
@@ -279,14 +284,15 @@ int Process::start(const std::string &path,
if (file && strcmp(file, "syslog"))
unsetenv("LIBCAMERA_LOG_FILE");
const char **argv = new const char *[args.size() + 2];
unsigned int len = args.size();
const size_t len = args.size();
auto argv = std::make_unique<const char *[]>(len + 2);
argv[0] = path.c_str();
for (unsigned int i = 0; i < len; i++)
for (size_t i = 0; i < len; i++)
argv[i + 1] = args[i].c_str();
argv[len + 1] = nullptr;
execv(path.c_str(), (char **)argv);
execv(path.c_str(), const_cast<char **>(argv.get()));
_exit(EXIT_FAILURE);
}
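The replacement argv construction above swaps a leaked `new const char *[]` for a std::unique_ptr array: execv() wants a NULL-terminated char *[], so the pointers are gathered into owned storage and const_cast only at the call site. The construction can be exercised standalone (buildArgv is an illustrative wrapper, not libcamera API):

```cpp
#include <cstring>
#include <memory>
#include <string>
#include <vector>

/*
 * Build a NULL-terminated argv array for execv(): argv[0] is the program
 * path, followed by the arguments. The returned array borrows the
 * c_str() pointers, so path and args must outlive it.
 */
std::unique_ptr<const char *[]> buildArgv(const std::string &path,
					  const std::vector<std::string> &args)
{
	const size_t len = args.size();
	auto argv = std::make_unique<const char *[]>(len + 2);

	argv[0] = path.c_str();
	for (size_t i = 0; i < len; i++)
		argv[i + 1] = args[i].c_str();
	argv[len + 1] = nullptr;

	return argv;
}
```

In the child the array is never freed anyway (execv replaces the image or _exit() follows), but the unique_ptr keeps the error path leak-free.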
@@ -297,6 +303,8 @@ void Process::closeAllFdsExcept(const std::vector<int> &fds)
std::vector<int> v(fds);
sort(v.begin(), v.end());
ASSERT(v.empty() || v.front() >= 0);
DIR *dir = opendir("/proc/self/fd");
if (!dir)
return;


@@ -302,8 +302,9 @@ int CameraSensor::setEmbeddedDataEnabled(bool enable)
* camera sensor, likely at configure() time.
*
* If the requested \a orientation cannot be obtained, the \a orientation
* parameter is adjusted to report the current image orientation and
* Transform::Identity is returned.
* parameter is adjusted to report the native image orientation (i.e. resulting
* from the physical mounting rotation of the camera sensor, without any
* transformation) and Transform::Identity is returned.
*
* If the requested \a orientation can be obtained, the function computes a
* Transform and does not adjust \a orientation.


@@ -668,7 +668,7 @@ void DebayerCpu::process2(const uint8_t *src, uint8_t *dst)
for (unsigned int y = window_.y; y < yEnd; y += 2) {
shiftLinePointers(linePointers, src);
memcpyNextLine(linePointers);
stats_->processLine0(y, linePointers);
if (enable_statistic)
stats_->processLine0(y, linePointers);
(this->*debayer0_)(dst, linePointers);
src += inputConfig_.stride;
dst += outputConfig_.stride;
@@ -683,7 +683,7 @@ void DebayerCpu::process2(const uint8_t *src, uint8_t *dst)
if (window_.y == 0) {
shiftLinePointers(linePointers, src);
memcpyNextLine(linePointers);
stats_->processLine0(yEnd, linePointers);
if (enable_statistic)
stats_->processLine0(yEnd, linePointers);
(this->*debayer0_)(dst, linePointers);
src += inputConfig_.stride;
dst += outputConfig_.stride;
@@ -720,7 +720,7 @@ void DebayerCpu::process4(const uint8_t *src, uint8_t *dst)
for (unsigned int y = window_.y; y < yEnd; y += 4) {
shiftLinePointers(linePointers, src);
memcpyNextLine(linePointers);
stats_->processLine0(y, linePointers);
if (enable_statistic)
stats_->processLine0(y, linePointers);
(this->*debayer0_)(dst, linePointers);
src += inputConfig_.stride;
dst += outputConfig_.stride;
@@ -733,7 +733,7 @@ void DebayerCpu::process4(const uint8_t *src, uint8_t *dst)
shiftLinePointers(linePointers, src);
memcpyNextLine(linePointers);
stats_->processLine2(y, linePointers);
if (enable_statistic)
stats_->processLine2(y, linePointers);
(this->*debayer2_)(dst, linePointers);
src += inputConfig_.stride;
dst += outputConfig_.stride;
@@ -771,7 +771,7 @@ void DebayerCpu::process(uint32_t frame, FrameBuffer *input, FrameBuffer *output
for (const FrameBuffer::Plane &plane : output->planes())
dmaSyncers.emplace_back(plane.fd, DmaSyncer::SyncType::Write);
enable_statistic = params.collect_stats;
green_ = params.green;
greenCcm_ = params.greenCcm;
if (swapRedBlueGains_) {
@@ -805,7 +805,7 @@ void DebayerCpu::process(uint32_t frame, FrameBuffer *input, FrameBuffer *output
return;
}
stats_->startFrame();
if (enable_statistic)
stats_->startFrame();
if (inputConfig_.patternSize.height == 2)
process2(in.planes()[0].data(), out.planes()[0].data());


@@ -165,6 +165,7 @@ private:
/* Skip 30 frames for things to stabilize then measure 30 frames */
static constexpr unsigned int kFramesToSkip = 30;
static constexpr unsigned int kLastFrameToMeasure = 60;
bool enable_statistic = true;
};
} /* namespace libcamera */


@@ -395,9 +395,9 @@ void SoftwareIsp::saveIspParams()
debayerParams_ = *sharedParams_;
}
void SoftwareIsp::setSensorCtrls(const ControlList &sensorControls)
void SoftwareIsp::setSensorCtrls(const ControlList &sensorControls, const ControlList &lensControls)
{
setSensorControls.emit(sensorControls);
setSensorControls.emit(sensorControls, lensControls);
}
void SoftwareIsp::statsReady(uint32_t frame, uint32_t bufferId)


@@ -147,7 +147,10 @@ static constexpr unsigned int kBlueYMul = 29; /* 0.114 * 256 */
\
uint64_t sumR = 0; \
uint64_t sumG = 0; \
uint64_t sumB = 0;
uint64_t sumB = 0; \
pixel_t r0 = 0, r1 = 0, b0 = 0, \
b1 = 0, g0 = 0, g1 = 0; \
uint64_t sharpness = 0;
#define SWSTATS_ACCUMULATE_LINE_STATS(div) \
sumR += r; \
@@ -157,12 +160,18 @@ static constexpr unsigned int kBlueYMul = 29; /* 0.114 * 256 */
yVal = r * kRedYMul; \
yVal += g * kGreenYMul; \
yVal += b * kBlueYMul; \
stats_.yHistogram[yVal * SwIspStats::kYHistogramSize / (256 * 256 * (div))]++;
stats_.yHistogram[yVal * SwIspStats::kYHistogramSize / (256 * 256 * (div))]++; \
if (r0 != 0) \
sharpness += abs(r - 2 * r1 + r0) * kRedYMul + \
abs(g - 2 * g1 + g0) * kGreenYMul + \
abs(b - 2 * b1 + b0) * kBlueYMul; \
r0 = r1; g0 = g1; b0 = b1; \
r1 = r; g1 = g; b1 = b;
#define SWSTATS_FINISH_LINE_STATS() \
stats_.sumR_ += sumR; \
stats_.sumG_ += sumG; \
stats_.sumB_ += sumB;
stats_.sumB_ += sumB; \
stats_.sharpness += sharpness;
void SwStatsCpu::statsBGGR8Line0(const uint8_t *src[])
{
@@ -306,6 +315,7 @@ void SwStatsCpu::startFrame(void)
stats_.sumR_ = 0;
stats_.sumB_ = 0;
stats_.sumG_ = 0;
stats_.sharpness = 0;
stats_.yHistogram.fill(0);
}
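The new sharpness statistic above accumulates luma-weighted absolute second differences along each line; an in-focus image has more high-frequency detail and thus a higher sum, which is what the autofocus commit maximizes. A single-channel sketch of the metric (lineSharpness is an illustrative helper, not the SwStatsCpu API):

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

/*
 * Sum of absolute second differences |x[i] - 2*x[i-1] + x[i-2]| along a
 * line: zero for a linear ramp, large across a hard edge.
 */
uint64_t lineSharpness(const std::vector<uint8_t> &line)
{
	uint64_t sharpness = 0;

	for (size_t i = 2; i < line.size(); i++)
		sharpness += std::abs(int(line[i]) - 2 * int(line[i - 1]) +
				      int(line[i - 2]));

	return sharpness;
}
```

A smooth gradient scores 0 while a step edge scores high, so stepping the lens to maximize the statistic converges on focus.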


@@ -2031,10 +2031,9 @@ int V4L2VideoDevice::streamOff()
/* Send back all queued buffers. */
for (auto it : queuedBuffers_) {
FrameBuffer *buffer = it.second;
FrameMetadata &metadata = buffer->_d()->metadata();
cache_->put(it.first);
metadata.status = FrameMetadata::FrameCancelled;
buffer->_d()->cancel();
bufferReady.emit(buffer);
}


@@ -2,5 +2,5 @@
[wrap-git]
url = https://github.com/raspberrypi/libpisp.git
revision = v1.2.0
revision = v1.2.1
depth = 1


@@ -1,7 +1,13 @@
# SPDX-License-Identifier: CC0-1.0
[wrap-git]
directory = libyaml
url = https://github.com/yaml/libyaml
# tags/0.2.5
revision = 2c891fc7a770e8ba2fec34fc6b545c672beb37e6
[wrap-file]
directory = yaml-0.2.5
source_url = https://pyyaml.org/download/libyaml/yaml-0.2.5.tar.gz
source_filename = yaml-0.2.5.tar.gz
source_hash = c642ae9b75fee120b2d96c712538bd2cf283228d2337df2cf2988e3c02678ef4
patch_filename = libyaml_0.2.5-1_patch.zip
patch_url = https://wrapdb.mesonbuild.com/v2/libyaml_0.2.5-1/get_patch
patch_hash = bf2e9b922be00b6b00c5fce29d9fb8dc83f0431c77239f3b73e8b254d3f3f5b5
[provide]
yaml-0.1 = yaml_dep


@@ -26,6 +26,11 @@ using namespace std;
using namespace libcamera;
LOG_DEFINE_CATEGORY(LogAPITest)
LOG_DEFINE_CATEGORY(Cat0)
LOG_DEFINE_CATEGORY(Cat1)
LOG_DEFINE_CATEGORY(Cat2)
LOG_DEFINE_CATEGORY(Cat3)
LOG_DEFINE_CATEGORY(Cat4)
class LogAPITest : public Test
{
@@ -74,6 +79,34 @@ protected:
return TestPass;
}
int testEnvLevels()
{
setenv("LIBCAMERA_LOG_LEVELS",
"Cat0:0,Cat0:9999,Cat1:INFO,Cat1:INVALID,Cat2:2,Cat2:-1,"
"Cat3:ERROR,Cat3:{[]},Cat4:4,Cat4:rubbish",
true);
logSetTarget(libcamera::LoggingTargetNone);
const std::pair<const LogCategory &, libcamera::LogSeverity> expected[] = {
{ _LOG_CATEGORY(Cat0)(), libcamera::LogDebug },
{ _LOG_CATEGORY(Cat1)(), libcamera::LogInfo },
{ _LOG_CATEGORY(Cat2)(), libcamera::LogWarning },
{ _LOG_CATEGORY(Cat3)(), libcamera::LogError },
{ _LOG_CATEGORY(Cat4)(), libcamera::LogFatal },
};
bool ok = true;
for (const auto &[c, s] : expected) {
if (c.severity() != s) {
ok = false;
cerr << "Severity of " << c.name() << " (" << c.severity() << ") "
<< "does not equal " << s << endl;
}
}
return ok ? TestPass : TestFail;
}
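The test above drives LIBCAMERA_LOG_LEVELS parsing: comma-separated "Category:severity" pairs, where a later valid entry overrides an earlier one and an unparseable severity is ignored. A sketch of that parsing behaviour, simplified to numeric severities 0-5 only (the real parser also accepts names such as INFO and ERROR; parseLogLevels is an illustrative helper, not libcamera API):

```cpp
#include <map>
#include <sstream>
#include <string>

/* Parse "Cat:level,..." keeping the last valid level per category. */
std::map<std::string, int> parseLogLevels(const std::string &env)
{
	std::map<std::string, int> levels;
	std::istringstream ss(env);
	std::string item;

	while (std::getline(ss, item, ',')) {
		size_t colon = item.find(':');
		if (colon == std::string::npos)
			continue;

		std::string category = item.substr(0, colon);
		std::string severity = item.substr(colon + 1);

		try {
			size_t pos;
			int level = std::stoi(severity, &pos);
			/* Reject trailing junk and out-of-range levels. */
			if (pos != severity.size() || level < 0 || level > 5)
				continue;
			levels[category] = level;
		} catch (...) {
			continue;
		}
	}

	return levels;
}
```

This matches the expectations in testEnvLevels(): the invalid half of each pair (9999, -1, rubbish) leaves the valid half in effect.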
int testFile()
{
int fd = open("/tmp", O_TMPFILE | O_RDWR, S_IRUSR | S_IWUSR);
@@ -135,7 +168,11 @@ protected:
int run() override
{
int ret = testFile();
int ret = testEnvLevels();
if (ret != TestPass)
return TestFail;
ret = testFile();
if (ret != TestPass)
return TestFail;


@@ -11,5 +11,6 @@ foreach test : log_test
link_with : test_libraries,
include_directories : test_includes_internal)
test(test['name'], exe, suite : 'log')
test(test['name'], exe, suite : 'log',
should_fail : test.get('should_fail', false))
endforeach