Compare commits


No commits in common. "master" and "v0.4.0" have entirely different histories.

309 changed files with 2011 additions and 38753 deletions

View file

@ -57,8 +57,7 @@ GENERATE_LATEX = NO
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
INCLUDE_PATH = "@TOP_BUILDDIR@/include" \
"@TOP_SRCDIR@/include"
INCLUDE_PATH = "@TOP_SRCDIR@/include/libcamera"
INCLUDE_FILE_PATTERNS = *.h
IMAGE_PATH = "@TOP_SRCDIR@/Documentation/images"

View file

@ -26,7 +26,6 @@ EXCLUDE = @TOP_SRCDIR@/include/libcamera/base/span.h \
@TOP_SRCDIR@/src/libcamera/ipc_pipe_unixsocket.cpp \
@TOP_SRCDIR@/src/libcamera/pipeline/ \
@TOP_SRCDIR@/src/libcamera/sensor/camera_sensor_legacy.cpp \
@TOP_SRCDIR@/src/libcamera/sensor/camera_sensor_raw.cpp \
@TOP_SRCDIR@/src/libcamera/tracepoints.cpp \
@TOP_BUILDDIR@/include/libcamera/internal/tracepoints.h \
@TOP_BUILDDIR@/include/libcamera/ipa/soft_ipa_interface.h \

View file

@ -1,331 +0,0 @@
.. SPDX-License-Identifier: CC-BY-SA-4.0
Design of Exposure and Gain controls
====================================
This document explains the design and rationale of the controls related to
exposure and gain. This includes the all-encompassing auto-exposure (AE), the
manual exposure control, and the manual gain control.
Description of the problem
--------------------------
Sub controls
^^^^^^^^^^^^
More than one control makes up the total exposure: exposure time,
gain, and aperture (though for now we will not consider aperture). We already
had individual controls for setting the values of manual exposure and manual
gain, but for switching between auto mode and manual mode we only had a
high-level boolean AeEnable control that would set *both* exposure and gain to
auto mode or manual mode; we had no way to set one to auto and the other to
manual.
So, we need to introduce two new controls to act as "levers" that indicate,
individually for exposure and gain, whether the value comes from the AEGC or
from the manual control value.
Aperture priority
^^^^^^^^^^^^^^^^^
We may eventually need to support aperture, so whatever solution we adopt for
having only some controls on auto and the others on manual needs to be
extensible.
Flickering when going from auto to manual
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When a manual exposure or gain value is requested by the application, it takes
a few frames' worth of time to take effect. This means that during a
transition from auto to manual, the control values would flicker and the
transition would not be smooth.
Take for instance the following flow, where we start on auto exposure (which
for the purposes of the example increments by 1 each frame) and we want to
switch seamlessly to manual exposure, which involves copying the exposure value
computed by the auto exposure algorithm:
::
+-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+
| N | | N+1 | | N+2 | | N+3 | | N+4 | | N+5 | | N+6 |
+-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+
Mode requested: Auto Auto Auto Manual Manual Manual Manual
Exp requested: N/A N/A N/A 2 2 2 2
Set in Frame: N+2 N+3 N+4 N+5 N+6 N+7 N+8
Mode used: Auto Auto Auto Auto Auto Manual Manual
Exp used: 0 1 2 3 4 2 2
As we can see, after frame N+2 completes, we copy the exposure value that was
used for frame N+2 (which was computed by the AE algorithm), and queue that value
into request N+3 with manual mode on. However, as it takes two frames for the
exposure to be set, the exposure still changes since it is set by AE, and we
get a flicker in the exposure during the switch from auto to manual.
A solution is to *not submit* any exposure value when manual mode is enabled,
and wait until the manual mode has been "applied" before copying the exposure
value:
::
+-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+
| N | | N+1 | | N+2 | | N+3 | | N+4 | | N+5 | | N+6 |
+-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+
Mode requested: Auto Auto Auto Manual Manual Manual Manual
Exp requested: N/A N/A N/A None None None 5
Set in Frame: N+2 N+3 N+4 N+5 N+6 N+7 N+8
Mode used: Auto Auto Auto Auto Auto Manual Manual
Exp used: 0 1 2 3 4 5 5
In practice, this works. However, libcamera has a policy where once a control
is submitted, its value is saved and does not need to be resubmitted. If the
manual exposure value was set while auto mode was on, in theory the value would
be saved, so when manual mode is enabled, the exposure value that was
previously set would immediately be used. Clearly this solution isn't correct,
but it can serve as the basis for a proper solution, with some more rigorous
rules.
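In other words, under this policy the following hypothetical sequence (a
sketch only; ``requestA`` and ``requestB`` are consecutive requests) would
defeat the wait-for-manual approach above:
::
    /* Queued while AE is still in auto mode... */
    requestA->controls().set(controls::ExposureTime, 10000);
    /* ...the value is saved, so when auto mode is later disabled the
     * saved 10000 would immediately take effect. */
    requestB->controls().set(controls::AeEnable, false);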
Existing solutions
------------------
Raspberry Pi
^^^^^^^^^^^^
The Raspberry Pi IPA gets around the lack of individual AeEnable controls for
exposure and gain by using magic values. When AeEnable is false, if one of the
manual control values was set to 0 then the value computed by the AEGC would be
used for just that control. This solution isn't desirable, as it prevents
that magic value from being used as a valid value.
To get around the flickering issue, when AeEnable is false, the Raspberry Pi
AEGC simply stops updating the values to be set, without restoring the
previously set manual exposure time and gain. This works, but is not a proper
solution.
Android
^^^^^^^
The Android HAL specification requires that exposure and gain (sensitivity)
must both be manual or both be auto. It cannot be that one is manual while the
other is auto, so Android simply doesn't support sub-controls.
For the flickering issue, the Android HAL has an AeLock control. To transition
from auto to manual, the application would keep AE on auto, and turn on the
lock. Once the lock has propagated through, then the value can be copied from
the result into the request and the lock disabled and the mode set to manual.
The problem with this solution is, besides the extra complexity, that it is
ambiguous what happens if there is a state transition from manual to locked
(even though it's a state transition that doesn't make sense). If locked is
defined to "use the last automatically computed values" then it could use the
values from the last time AE was set to auto, or it would be undefined if AE
was never auto (e.g. it started out as manual), or if AE is implemented to run
in the background it could just use the current values that are computed. If
locked is defined to "use the last value that was set" there would be less
ambiguity. Still, it's better if we can make it impossible to execute this
nonsensical state transition, and if we can reduce the complexity of having
this extra control or extra setting on a lever.
Summary of goals
----------------
- We need a lock of some sort, to instruct the AEGC to not update output
results
- We need manual modes, to override the values computed by the AEGC
- We need to support seamless transitions from auto to manual, and do so
without flickering
- We need custom minimum values for the manual controls; that is, no magic
values for enabling/disabling auto
- All of these need to be done with AE sub-controls (exposure time, analogue
gain) and be extensible to aperture in the future
Our solution
------------
A diagram of our solution:
::
+----------------------------+-------------+------------------+-----------------+
| INPUT | ALGORITHM | RESULT | OUTPUT |
+----------------------------+-------------+------------------+-----------------+
ExposureTimeMode ExposureTimeMode
---------------------+----------------------------------------+----------------->
0: Auto | |
1: Manual | V
| |\
| | \
| /----------------------------------> | 1| ExposureTime
| | +-------------+ exposure time | | -------------->
\--)--> | | --------------> | 0|
ExposureTime | | | | /
------------------------+--> | | |/
| | AeState
| AEGC | ----------------------------------->
AnalogueGain | |
------------------------+--> | | |\
| | | | \
/--)--> | | --------------> | 0| AnalogueGain
| | +-------------+ analogue gain | | -------------->
| \----------------------------------> | 1|
| | /
| |/
| ^
AnalogueGainMode | | AnalogueGainMode
---------------------+----------------------------------------+----------------->
0: Auto
1: Manual
AeEnable
- True -> ExposureTimeMode:Auto + AnalogueGainMode:Auto
- False -> ExposureTimeMode:Manual + AnalogueGainMode:Manual
The diagram is divided into four sections horizontally:
- Input: The values received from the request controls
- Algorithm: The algorithm itself
- Result: The values calculated by the algorithm
- Output: The values reported in result metadata and applied to the device
The four input controls are divided between manual values (ExposureTime and
AnalogueGain), and operation modes (ExposureTimeMode and AnalogueGainMode). The
former are the manual values, the latter control how they're applied. The two
modes are independent from each other, and each can take one of two values:
- Auto (0): The AEGC computes the value normally. The AEGC result is applied
to the output. The manual value is ignored *and is not retained*.
- Manual (1): The AEGC uses the manual value internally. The corresponding
manual control from the request is applied to the output. The AEGC result
is ignored.
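For illustration, a request can drive the two modes independently. A minimal
sketch using the libcamera C++ API, assuming the control identifiers generated
from the YAML definitions (``controls::ExposureTimeMode`` and friends):
::
    ControlList &ctrls = request->controls();
    /* Let the AEGC compute the exposure time... */
    ctrls.set(controls::ExposureTimeMode, controls::ExposureTimeModeAuto);
    /* ...while the analogue gain is fixed manually. */
    ctrls.set(controls::AnalogueGainMode, controls::AnalogueGainModeManual);
    ctrls.set(controls::AnalogueGain, 2.0f);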
The AeState control reports the state of the unified AEGC block. If both
ExposureTimeMode and AnalogueGainMode are set to manual then it will report
Idle. If at least one of the two is set to auto, then AeState will report
if the AEGC has Converged or not (Searching). This control replaces the old
AeLocked control, as it was insufficient for reporting the AE state.
There is a caveat to manual mode: the manual control value is not retained if
it is set during auto mode. This means that if manual mode is entered without
also setting the manual value, then it will enter a state similar to "locked",
where the last automatically computed value while the mode was auto will be
used. Once the manual value is set, then that will be used and retained as
usual.
This simulates an auto -> locked -> manual or auto -> manual state transition,
and makes it impossible to do the nonsensical manual -> locked state
transition.
AeEnable still exists to allow applications to set the mode of all the
sub-controls at once. Besides being convenient, this will also be useful when
we eventually implement an aperture control, because applications written
before aperture support becomes available would still be able to set the
aperture mode to auto or manual (via AeEnable), as opposed to having the
aperture stuck at auto while the application really wanted manual. Although
the aperture would still be stuck at an uncontrollable value, at least it
would be at a static usable value, as opposed to varying under the AEGC
algorithm's control.
With this solution, the earlier example would become:
::
+-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+
| N+2 | | N+3 | | N+4 | | N+5 | | N+6 | | N+7 | | N+8 | | N+9 | | N+10|
+-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+ +-----+
Mode requested: Auto Manual Manual Manual Manual Manual Manual Manual Manual
Exp requested: N/A None None None None 10 None 10 10
Set in Frame: N+4 N+5 N+6 N+7 N+8 N+9 N+10 N+11 N+12
Mode used: Auto Auto Auto Manual Manual Manual Manual Manual Manual
Exp used: 2 3 4 5 5 5 5 10 10
This example is extended by a few frames to exhibit the simulated "locked"
state. At frame N+5 the application has confirmed that the manual mode has been
entered, but does not provide a manual value until request N+7. Thus, the value
that is used in requests N+5 and N+6 (where auto mode is disabled but no
manual value has been provided) comes from
the last value that was used when the mode was auto, which comes from frame
N+4.
Then, in N+7, a manual value of 10 is supplied. It takes until frame N+9 for
the exposure to be applied. N+8 does not supply a manual value, but the last
supplied value is retained, so a manual value of 10 is still used and set in
frame N+10.
Although this behavior is the same as what we had with waiting for the manual
mode to propagate (in the section "Description of the problem"), this time it
is correct as we have defined specifically that if a manual value was specified
while the mode was auto, it will not be retained.
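Expressed as application code, this flow might look as follows (a hypothetical
sketch; ``request``, ``completedRequest`` and ``nextRequest`` stand for the
requests in flight at each step):
::
    /* 1. Request manual mode without a manual value; the last
     *    auto-computed exposure time is retained ("locked"). */
    request->controls().set(controls::ExposureTimeMode,
                            controls::ExposureTimeModeManual);
    /* 2. On completion, check that manual mode has propagated. */
    const ControlList &meta = completedRequest->metadata();
    auto mode = meta.get(controls::ExposureTimeMode);
    if (mode && *mode == controls::ExposureTimeModeManual) {
        /* 3. Optionally pin the exposure value explicitly; a value
         *    set while in manual mode is retained as usual. */
        auto exposure = meta.get(controls::ExposureTime);
        if (exposure)
            nextRequest->controls().set(controls::ExposureTime, *exposure);
    }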
Description of the controls
---------------------------
As described above, libcamera offers the following controls related to exposure
and gain:
- AnalogueGain
- AnalogueGainMode
- ExposureTime
- ExposureTimeMode
- AeState
- AeEnable
Auto-exposure and auto-gain can be enabled and disabled separately using the
ExposureTimeMode and AnalogueGainMode controls respectively. The AeEnable
control can also be used, as it sets both of the modes simultaneously. The
AeEnable control is not returned in metadata.
When the respective mode is set to auto, the respective value that is computed
by the AEGC algorithm is applied to the image sensor. Any value that is
supplied in the manual ExposureTime/AnalogueGain control is ignored and not
retained. Another way to understand this is that when the mode transitions from
auto to manual, the internally stored control value is overwritten with the
last value computed by the auto algorithm.
This means that when we transition from auto to manual without supplying a
manual control value, the last value that was set by the AEGC algorithm will
continue to be used. This can be used to perform a flickerless transition from
auto to
manual as described earlier. If the camera started out in manual mode and no
corresponding value has been supplied yet, then a best-effort default value
shall be set.
The manual control value can be set in the same request as setting the mode to
manual if the desired manual control value is already known.
Transitioning from manual to auto shall be implicitly flickerless, as the AEGC
algorithms are expected to start running from the last manual value.
The AeState metadata reports the state of the AE algorithm. As AE cannot
compute exposure and gain separately, the state of the AE component is
unified. There are three states: Idle, Searching, and Converged.
The state shall be Idle if both ExposureTimeMode and AnalogueGainMode
are set to Manual. If the camera only supports one of the two controls,
then the state shall be Idle if that one control is set to Manual. If
the camera does not support Manual for at least one of the two controls,
then the state will never be Idle, as AE will always be running.
The state shall be Searching if at least one of exposure or gain calculated
by the AE algorithm is used (that is, at least one of the two modes is Auto),
*and* the value(s) have not converged yet.
The state shall be Converged if at least one of exposure or gain calculated
by the AE algorithm is used (that is, at least one of the two modes is Auto),
*and* the value(s) have converged.
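For example, an application that wants to wait for convergence before
capturing a still could check the state in a completed request's metadata (a
sketch, assuming the generated ``controls::AeState*`` identifiers):
::
    auto state = request->metadata().get(controls::AeState);
    if (state) {
        switch (*state) {
        case controls::AeStateIdle:      /* both modes manual, AEGC idle */
            break;
        case controls::AeStateSearching: /* auto value(s) still changing */
            break;
        case controls::AeStateConverged: /* auto value(s) have settled */
            break;
        }
    }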

View file

@ -57,8 +57,8 @@ LIBCAMERA_RPI_CONFIG_FILE
Example value: ``/usr/local/share/libcamera/pipeline/rpi/vc4/minimal_mem.yaml``
LIBCAMERA_<NAME>_TUNING_FILE
Define a custom IPA tuning file to use with the pipeline handler `NAME`.
LIBCAMERA_RPI_TUNING_FILE
Define a custom JSON tuning file to use in the Raspberry Pi.
Example value: ``/usr/local/share/libcamera/ipa/rpi/vc4/custom_sensor.json``

View file

@ -128,7 +128,7 @@ available.
std::string cameraId = cameras[0]->id();
camera = cm->get(cameraId);
auto camera = cm->get(cameraId);
/*
* Note that `camera` may not compare equal to `cameras[0]`.
* In fact, it might simply be a `nullptr`, as the particular
@ -618,7 +618,7 @@ accordingly. In this example, the application file has been named
simple_cam = executable('simple-cam',
'simple-cam.cpp',
dependencies: dependency('libcamera'))
dependencies: dependency('libcamera', required : true))
The ``dependencies`` line instructs meson to ask ``pkgconfig`` (or ``cmake``) to
locate the ``libcamera`` library, which the test application will be

View file

@ -186,7 +186,7 @@ to the libcamera build options in the top level ``meson_options.txt``.
option('pipelines',
type : 'array',
choices : ['ipu3', 'rkisp1', 'rpi/pisp', 'rpi/vc4', 'simple', 'uvcvideo', 'vimc', 'vivid'],
choices : ['ipu3', 'rkisp1', 'rpi/vc4', 'simple', 'uvcvideo', 'vimc', 'vivid'],
description : 'Select which pipeline handlers to include')
@ -213,7 +213,7 @@ implementations for the overridden class members.
std::vector<std::unique_ptr<FrameBuffer>> *buffers) override;
int start(Camera *camera, const ControlList *controls) override;
void stopDevice(Camera *camera) override;
void stop(Camera *camera) override;
int queueRequestDevice(Camera *camera, Request *request) override;
@ -247,7 +247,7 @@ implementations for the overridden class members.
return -1;
}
void PipelineHandlerVivid::stopDevice(Camera *camera)
void PipelineHandlerVivid::stop(Camera *camera)
{
}
@ -521,14 +521,14 @@ handler and camera manager using `registerCamera`_.
Finally with a successful construction, we return 'true' indicating that the
PipelineHandler successfully matched and constructed a device.
.. _Camera::create: https://libcamera.org/internal-api-html/classlibcamera_1_1Camera.html#adf5e6c22411f953bfaa1ae21155d6c31
.. _Camera::create: https://libcamera.org/api-html/classlibcamera_1_1Camera.html#a453740e0d2a2f495048ae307a85a2574
.. _registerCamera: https://libcamera.org/api-html/classlibcamera_1_1PipelineHandler.html#adf02a7f1bbd87aca73c0e8d8e0e6c98b
.. code-block:: cpp
std::set<Stream *> streams{ &data->stream_ };
std::shared_ptr<Camera> camera = Camera::create(std::move(data), data->video_->deviceName(), streams);
registerCamera(std::move(camera));
std::shared_ptr<Camera> camera = Camera::create(this, data->video_->deviceName(), streams);
registerCamera(std::move(camera), std::move(data));
return true;
@ -554,7 +554,8 @@ Our match function should now look like the following:
/* Create and register the camera. */
std::set<Stream *> streams{ &data->stream_ };
std::shared_ptr<Camera> camera = Camera::create(std::move(data), data->video_->deviceName(), streams);
const std::string &id = data->video_->deviceName();
std::shared_ptr<Camera> camera = Camera::create(data.release(), id, streams);
registerCamera(std::move(camera));
return true;
@ -592,11 +593,11 @@ immutable properties of the ``Camera`` device.
The libcamera controls and properties are defined in YAML form which is
processed to automatically generate documentation and interfaces. Controls are
defined by the src/libcamera/`control_ids_core.yaml`_ file and camera properties
are defined by src/libcamera/`property_ids_core.yaml`_.
are defined by src/libcamera/`properties_ids_core.yaml`_.
.. _controls framework: https://libcamera.org/api-html/controls_8h.html
.. _control_ids_core.yaml: https://libcamera.org/api-html/control__ids_8h.html
.. _property_ids_core.yaml: https://libcamera.org/api-html/property__ids_8h.html
.. _properties_ids_core.yaml: https://libcamera.org/api-html/property__ids_8h.html
Pipeline handlers can optionally register the list of controls an application
can set as well as a list of immutable camera properties. Being both
@ -799,7 +800,8 @@ derived class, and assign it to a base class pointer.
.. code-block:: cpp
auto config = std::make_unique<VividCameraConfiguration>();
VividCameraData *data = cameraData(camera);
CameraConfiguration *config = new VividCameraConfiguration();
A ``CameraConfiguration`` is specific to each pipeline, so you can only create
it from the pipeline handler code path. Applications can also generate an empty
@ -827,7 +829,9 @@ To generate a ``StreamConfiguration``, you need a list of pixel formats and
frame sizes which are supported as outputs of the stream. You can fetch a map of
the ``V4LPixelFormat`` and ``SizeRange`` supported by the underlying output
device, but the pipeline handler needs to convert this to a
``libcamera::PixelFormat`` type to pass to applications.
``libcamera::PixelFormat`` type to pass to applications. We do this here using
``std::transform`` to convert the formats and populate a new ``PixelFormat`` map
as shown below.
Continue adding the following code example to our ``generateConfiguration``
implementation.
@ -837,12 +841,14 @@ implementation.
std::map<V4L2PixelFormat, std::vector<SizeRange>> v4l2Formats =
data->video_->formats();
std::map<PixelFormat, std::vector<SizeRange>> deviceFormats;
for (auto &[v4l2PixelFormat, sizes] : v4l2Formats) {
PixelFormat pixelFormat = v4l2PixelFormat.toPixelFormat();
if (pixelFormat.isValid())
deviceFormats.try_emplace(pixelFormat, std::move(sizes));
}
std::transform(v4l2Formats.begin(), v4l2Formats.end(),
std::inserter(deviceFormats, deviceFormats.begin()),
[&](const decltype(v4l2Formats)::value_type &format) {
return decltype(deviceFormats)::value_type{
format.first.toPixelFormat(),
format.second
};
});
The `StreamFormats`_ class holds information about the pixel formats and frame
sizes that a stream can support. The class groups size information by the pixel
@ -932,9 +938,9 @@ Add the following function implementation to your file:
StreamConfiguration &cfg = config_[0];
const std::vector<libcamera::PixelFormat> &formats = cfg.formats().pixelformats();
const std::vector<libcamera::PixelFormat> formats = cfg.formats().pixelformats();
if (std::find(formats.begin(), formats.end(), cfg.pixelFormat) == formats.end()) {
cfg.pixelFormat = formats[0];
cfg.pixelFormat = cfg.formats().pixelformats()[0];
LOG(VIVID, Debug) << "Adjusting format to " << cfg.pixelFormat.toString();
status = Adjusted;
}
@ -1152,7 +1158,7 @@ available to the devices which have to be started and ready to produce
images. At the end of a capture session the ``Camera`` device needs to be
stopped, to gracefully clean up any allocated memory and stop the hardware
devices. Pipeline handlers implement two functions for these purposes, the
``start()`` and ``stopDevice()`` functions.
``start()`` and ``stop()`` functions.
The memory initialization phase that happens at ``start()`` time serves to
configure video devices to be able to use memory buffers exported as dma-buf
@ -1255,8 +1261,8 @@ algorithms, or other devices you should also stop them.
.. _releaseBuffers: https://libcamera.org/api-html/classlibcamera_1_1V4L2VideoDevice.html#a191619c152f764e03bc461611f3fcd35
Of course we also need to handle the corresponding actions to stop streaming on
a device. Add the following to the ``stopDevice()`` function, to stop the
stream with the `streamOff`_ function and release all buffers.
a device. Add the following to the ``stop`` function, to stop the stream with
the `streamOff`_ function and release all buffers.
.. _streamOff: https://libcamera.org/api-html/classlibcamera_1_1V4L2VideoDevice.html#a61998710615bdf7aa25a046c8565ed66

View file

@ -23,9 +23,7 @@
SoftwareISP Benchmarking <software-isp-benchmarking>
Tracing guide <guides/tracing>
Design document: AE <design/ae>
.. toctree::
:hidden:
introduction
introduction

View file

@ -116,8 +116,10 @@ endif
# Sphinx
#
sphinx = find_program('sphinx-build-3', 'sphinx-build',
required : get_option('documentation'))
sphinx = find_program('sphinx-build-3', required : false)
if not sphinx.found()
sphinx = find_program('sphinx-build', required : get_option('documentation'))
endif
if sphinx.found()
docs_sources = [
@ -126,7 +128,6 @@ if sphinx.found()
'coding-style.rst',
'conf.py',
'contributing.rst',
'design/ae.rst',
'documentation-contents.rst',
'environment_variables.rst',
'feature_requirements.rst',

View file

@ -44,7 +44,7 @@ A C++ toolchain: [required]
Either {g++, clang}
Meson Build system: [required]
meson (>= 0.63) ninja-build pkg-config
meson (>= 0.60) ninja-build pkg-config
for the libcamera core: [required]
libyaml-dev python3-yaml python3-ply python3-jinja2
@ -83,10 +83,9 @@ for cam: [optional]
- libdrm-dev: Enables the KMS sink
- libjpeg-dev: Enables MJPEG on the SDL sink
- libsdl2-dev: Enables the SDL sink
- libtiff-dev: Enables writing DNG
for qcam: [optional]
libtiff-dev qt6-base-dev
libtiff-dev qt6-base-dev qt6-tools-dev-tools
for tracing with lttng: [optional]
liblttng-ust-dev python3-jinja2 lttng-tools
@ -94,6 +93,9 @@ for tracing with lttng: [optional]
for android: [optional]
libexif-dev libjpeg-dev
for Python bindings: [optional]
pybind11-dev
for lc-compliance: [optional]
libevent-dev libgtest-dev

View file

@ -98,15 +98,21 @@ public:
using PackType = BoundMethodPack<R, Args...>;
private:
template<std::size_t... I>
void invokePack(BoundMethodPackBase *pack, std::index_sequence<I...>)
template<std::size_t... I, typename T = R>
std::enable_if_t<!std::is_void<T>::value, void>
invokePack(BoundMethodPackBase *pack, std::index_sequence<I...>)
{
[[maybe_unused]] auto *args = static_cast<PackType *>(pack);
PackType *args = static_cast<PackType *>(pack);
args->ret_ = invoke(std::get<I>(args->args_)...);
}
if constexpr (!std::is_void_v<R>)
args->ret_ = invoke(std::get<I>(args->args_)...);
else
invoke(std::get<I>(args->args_)...);
template<std::size_t... I, typename T = R>
std::enable_if_t<std::is_void<T>::value, void>
invokePack(BoundMethodPackBase *pack, std::index_sequence<I...>)
{
/* args is effectively unused when the sequence I is empty. */
PackType *args [[gnu::unused]] = static_cast<PackType *>(pack);
invoke(std::get<I>(args->args_)...);
}
public:

View file

@ -0,0 +1,14 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2021, Google Inc.
*
* Compiler support
*/
#pragma once
#if __cplusplus >= 201703L
#define __nodiscard [[nodiscard]]
#else
#define __nodiscard
#endif

View file

@ -7,9 +7,7 @@
#pragma once
#include <atomic>
#include <sstream>
#include <string_view>
#include <libcamera/base/private.h>
@ -30,22 +28,19 @@ enum LogSeverity {
class LogCategory
{
public:
static LogCategory *create(std::string_view name);
static LogCategory *create(const char *name);
const std::string &name() const { return name_; }
LogSeverity severity() const { return severity_.load(std::memory_order_relaxed); }
void setSeverity(LogSeverity severity) { severity_.store(severity, std::memory_order_relaxed); }
LogSeverity severity() const { return severity_; }
void setSeverity(LogSeverity severity);
static const LogCategory &defaultCategory();
private:
friend class Logger;
explicit LogCategory(std::string_view name);
explicit LogCategory(const char *name);
const std::string name_;
std::atomic<LogSeverity> severity_;
static_assert(decltype(severity_)::is_always_lock_free);
LogSeverity severity_;
};
#define LOG_DECLARE_CATEGORY(name) \
@ -65,7 +60,9 @@ class LogMessage
public:
LogMessage(const char *fileName, unsigned int line,
const LogCategory &category, LogSeverity severity,
std::string prefix = {});
const std::string &prefix = std::string());
LogMessage(LogMessage &&);
~LogMessage();
std::ostream &stream() { return msgStream_; }
@ -78,7 +75,9 @@ public:
const std::string msg() const { return msgStream_.str(); }
private:
LIBCAMERA_DISABLE_COPY_AND_MOVE(LogMessage)
LIBCAMERA_DISABLE_COPY(LogMessage)
void init(const char *fileName, unsigned int line);
std::ostringstream msgStream_;
const LogCategory &category_;

View file

@ -5,6 +5,7 @@ libcamera_base_include_dir = libcamera_include_dir / 'base'
libcamera_base_public_headers = files([
'bound_method.h',
'class.h',
'compiler.h',
'flags.h',
'object.h',
'shared_fd.h',

View file

@ -23,6 +23,10 @@ namespace libcamera {
class LIBCAMERA_TSA_CAPABILITY("mutex") Mutex final
{
public:
constexpr Mutex()
{
}
void lock() LIBCAMERA_TSA_ACQUIRE()
{
mutex_.lock();
@ -80,6 +84,10 @@ private:
class ConditionVariable final
{
public:
ConditionVariable()
{
}
void notify_one() noexcept
{
cv_.notify_one();

View file

@ -9,11 +9,9 @@
#include <list>
#include <memory>
#include <utility>
#include <vector>
#include <libcamera/base/bound_method.h>
#include <libcamera/base/class.h>
namespace libcamera {
@ -40,7 +38,7 @@ public:
{
T *obj = static_cast<T *>(this);
auto *method = new BoundMethodMember<T, R, FuncArgs...>(obj, this, func, type);
return method->activate(std::forward<Args>(args)..., true);
return method->activate(args..., true);
}
Thread *thread() const { return thread_; }
@ -54,8 +52,6 @@ protected:
bool assertThreadBound(const char *message);
private:
LIBCAMERA_DISABLE_COPY_AND_MOVE(Object)
friend class SignalBase;
friend class Thread;

View file

@ -63,8 +63,11 @@ public:
#ifndef __DOXYGEN__
template<typename T, typename Func,
std::enable_if_t<std::is_base_of<Object, T>::value &&
std::is_invocable_v<Func, Args...>> * = nullptr>
std::enable_if_t<std::is_base_of<Object, T>::value
#if __cplusplus >= 201703L
&& std::is_invocable_v<Func, Args...>
#endif
> * = nullptr>
void connect(T *obj, Func func, ConnectionType type = ConnectionTypeAuto)
{
Object *object = static_cast<Object *>(obj);
@ -72,8 +75,11 @@ public:
}
template<typename T, typename Func,
std::enable_if_t<!std::is_base_of<Object, T>::value &&
std::is_invocable_v<Func, Args...>> * = nullptr>
std::enable_if_t<!std::is_base_of<Object, T>::value
#if __cplusplus >= 201703L
&& std::is_invocable_v<Func, Args...>
#endif
> * = nullptr>
#else
template<typename T, typename Func>
#endif

View file

@ -346,7 +346,13 @@ public:
}
constexpr Span(const Span &other) noexcept = default;
constexpr Span &operator=(const Span &other) noexcept = default;
constexpr Span &operator=(const Span &other) noexcept
{
data_ = other.data_;
size_ = other.size_;
return *this;
}
constexpr iterator begin() const { return data(); }
constexpr const_iterator cbegin() const { return begin(); }

View file

@ -13,7 +13,6 @@
#include <libcamera/base/private.h>
#include <libcamera/base/class.h>
#include <libcamera/base/message.h>
#include <libcamera/base/signal.h>
#include <libcamera/base/span.h>
@ -48,16 +47,13 @@ public:
EventDispatcher *eventDispatcher();
void dispatchMessages(Message::Type type = Message::Type::None,
Object *receiver = nullptr);
void dispatchMessages(Message::Type type = Message::Type::None);
protected:
int exec();
virtual void run();
private:
LIBCAMERA_DISABLE_COPY_AND_MOVE(Thread)
void startThread();
void finishThread();

View file

@ -10,6 +10,7 @@
#include <utility>
#include <libcamera/base/class.h>
#include <libcamera/base/compiler.h>
namespace libcamera {
@ -42,7 +43,7 @@ public:
return *this;
}
[[nodiscard]] int release()
__nodiscard int release()
{
int fd = fd_;
fd_ = -1;

View file

@ -13,7 +13,6 @@
#include <iterator>
#include <ostream>
#include <sstream>
#include <stdint.h>
#include <string.h>
#include <string>
#include <sys/time.h>

View file

@ -9,7 +9,6 @@
#include <memory>
#include <string>
#include <string_view>
#include <sys/types.h>
#include <vector>
@ -32,7 +31,7 @@ public:
void stop();
std::vector<std::shared_ptr<Camera>> cameras() const;
std::shared_ptr<Camera> get(std::string_view id);
std::shared_ptr<Camera> get(const std::string &id);
static const std::string &version() { return version_; }

View file

@ -120,12 +120,12 @@ struct control_type<Point> {
};
template<typename T, std::size_t N>
struct control_type<Span<T, N>, std::enable_if_t<control_type<std::remove_cv_t<T>>::size == 0>> : public control_type<std::remove_cv_t<T>> {
struct control_type<Span<T, N>> : public control_type<std::remove_cv_t<T>> {
static constexpr std::size_t size = N;
};
template<typename T>
struct control_type<T, std::enable_if_t<std::is_enum_v<T> && sizeof(T) == sizeof(int32_t)>> : public control_type<int32_t> {
struct control_type<T, std::enable_if_t<std::is_enum_v<T>>> : public control_type<int32_t> {
};
} /* namespace details */

View file

@ -26,7 +26,6 @@ struct FrameMetadata {
FrameSuccess,
FrameError,
FrameCancelled,
FrameStartup,
};
struct Plane {

View file

@ -11,6 +11,8 @@
#include <ostream>
#include <string>
#include <libcamera/base/compiler.h>
namespace libcamera {
class Rectangle;
@ -108,8 +110,8 @@ public:
return *this;
}
[[nodiscard]] constexpr Size alignedDownTo(unsigned int hAlignment,
unsigned int vAlignment) const
__nodiscard constexpr Size alignedDownTo(unsigned int hAlignment,
unsigned int vAlignment) const
{
return {
width / hAlignment * hAlignment,
@ -117,8 +119,8 @@ public:
};
}
[[nodiscard]] constexpr Size alignedUpTo(unsigned int hAlignment,
unsigned int vAlignment) const
__nodiscard constexpr Size alignedUpTo(unsigned int hAlignment,
unsigned int vAlignment) const
{
return {
(width + hAlignment - 1) / hAlignment * hAlignment,
@ -126,7 +128,7 @@ public:
};
}
[[nodiscard]] constexpr Size boundedTo(const Size &bound) const
__nodiscard constexpr Size boundedTo(const Size &bound) const
{
return {
std::min(width, bound.width),
@ -134,7 +136,7 @@ public:
};
}
[[nodiscard]] constexpr Size expandedTo(const Size &expand) const
__nodiscard constexpr Size expandedTo(const Size &expand) const
{
return {
std::max(width, expand.width),
@ -142,7 +144,7 @@ public:
};
}
[[nodiscard]] constexpr Size grownBy(const Size &margins) const
__nodiscard constexpr Size grownBy(const Size &margins) const
{
return {
width + margins.width,
@ -150,7 +152,7 @@ public:
};
}
[[nodiscard]] constexpr Size shrunkBy(const Size &margins) const
__nodiscard constexpr Size shrunkBy(const Size &margins) const
{
return {
width > margins.width ? width - margins.width : 0,
@ -158,10 +160,10 @@ public:
};
}
[[nodiscard]] Size boundedToAspectRatio(const Size &ratio) const;
[[nodiscard]] Size expandedToAspectRatio(const Size &ratio) const;
__nodiscard Size boundedToAspectRatio(const Size &ratio) const;
__nodiscard Size expandedToAspectRatio(const Size &ratio) const;
[[nodiscard]] Rectangle centeredTo(const Point &center) const;
__nodiscard Rectangle centeredTo(const Point &center) const;
Size operator*(float factor) const;
Size operator/(float factor) const;
@ -292,11 +294,11 @@ public:
Rectangle &scaleBy(const Size &numerator, const Size &denominator);
Rectangle &translateBy(const Point &point);
[[nodiscard]] Rectangle boundedTo(const Rectangle &bound) const;
[[nodiscard]] Rectangle enclosedIn(const Rectangle &boundary) const;
[[nodiscard]] Rectangle scaledBy(const Size &numerator,
const Size &denominator) const;
[[nodiscard]] Rectangle translatedBy(const Point &point) const;
__nodiscard Rectangle boundedTo(const Rectangle &bound) const;
__nodiscard Rectangle enclosedIn(const Rectangle &boundary) const;
__nodiscard Rectangle scaledBy(const Size &numerator,
const Size &denominator) const;
__nodiscard Rectangle translatedBy(const Point &point) const;
Rectangle transformedBetween(const Rectangle &source,
const Rectangle &target) const;

View file

@ -11,7 +11,6 @@
#include <list>
#include <memory>
#include <set>
#include <stdint.h>
#include <string>
#include <libcamera/base/class.h>

View file

@ -7,7 +7,6 @@
#pragma once
#include <memory>
#include <stdint.h>
#include <string>
#include <libcamera/base/class.h>

View file

@ -8,7 +8,6 @@
#pragma once
#include <memory>
#include <stdint.h>
#include <string>
#include <variant>
#include <vector>
@ -63,11 +62,6 @@ public:
Transform transform = Transform::Identity,
V4L2SubdeviceFormat *sensorFormat = nullptr) = 0;
virtual V4L2Subdevice::Stream imageStream() const;
virtual std::optional<V4L2Subdevice::Stream> embeddedDataStream() const;
virtual V4L2SubdeviceFormat embeddedDataFormat() const;
virtual int setEmbeddedDataEnabled(bool enable);
virtual const ControlList &properties() const = 0;
virtual int sensorInfo(IPACameraSensorInfo *info) const = 0;
virtual Transform computeTransform(Orientation *orientation) const = 0;

View file

@ -1,68 +0,0 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2024, Raspberry Pi Ltd
*
* Camera recovery algorithm
*/
#pragma once
#include <stdint.h>
namespace libcamera {
class ClockRecovery
{
public:
ClockRecovery();
void configure(unsigned int numSamples = 100, unsigned int maxJitter = 2000,
unsigned int minSamples = 10, unsigned int errorThreshold = 50000);
void reset();
void addSample();
void addSample(uint64_t input, uint64_t output);
uint64_t getOutput(uint64_t input);
private:
/* Approximate number of samples over which the model state persists. */
unsigned int numSamples_;
/* Remove any output jitter larger than this immediately. */
unsigned int maxJitter_;
/* Number of samples required before we start to use model estimates. */
unsigned int minSamples_;
/* Threshold above which we assume the wallclock has been reset. */
unsigned int errorThreshold_;
/* How many samples seen (up to numSamples_). */
unsigned int count_;
/* This gets subtracted from all input values, just to make the numbers easier. */
uint64_t inputBase_;
/* As above, for the output. */
uint64_t outputBase_;
/* The previous input sample. */
uint64_t lastInput_;
/* The previous output sample. */
uint64_t lastOutput_;
/* Average x value seen so far. */
double xAve_;
/* Average y value seen so far */
double yAve_;
/* Average x^2 value seen so far. */
double x2Ave_;
/* Average x*y value seen so far. */
double xyAve_;
/*
* The latest estimate of linear parameters to derive the output clock
* from the input.
*/
double slope_;
double offset_;
/* Use this cumulative error to monitor for spontaneous clock updates. */
double error_;
};
} /* namespace libcamera */
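For context, a hypothetical usage sketch of the interface removed above (the
clock sources and microsecond units are assumptions based on the comments):

/* Map a device timestamp to an estimated wallclock time. */
uint64_t toWallclock(ClockRecovery &recovery,
                     uint64_t monotonicUs, uint64_t wallclockUs,
                     uint64_t queryUs)
{
	/* Feed one (input, output) clock pair; the linear fit is updated. */
	recovery.addSample(monotonicUs, wallclockUs);
	/* Derive the output clock estimate for a new input timestamp. */
	return recovery.getOutput(queryUs);
}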

View file

@ -10,15 +10,13 @@
#include <stdint.h>
#include <unordered_map>
#include <libcamera/base/object.h>
#include <libcamera/controls.h>
namespace libcamera {
class V4L2Device;
class DelayedControls : public Object
class DelayedControls
{
public:
struct ControlParams {

View file

@ -8,7 +8,6 @@
#pragma once
#include <memory>
#include <stdint.h>
#include <string>
#include <vector>
@ -61,14 +60,9 @@ public:
explicit DmaSyncer(SharedFD fd, SyncType type = SyncType::ReadWrite);
DmaSyncer(DmaSyncer &&other) = default;
DmaSyncer &operator=(DmaSyncer &&other) = default;
~DmaSyncer();
private:
LIBCAMERA_DISABLE_COPY(DmaSyncer)
void sync(uint64_t step);
SharedFD fd_;

View file

@ -8,7 +8,6 @@
#pragma once
#include <memory>
#include <stdint.h>
#include <utility>
#include <libcamera/base/class.h>

View file

@ -7,7 +7,6 @@
#pragma once
#include <stdint.h>
#include <string.h>
#include <tuple>
#include <type_traits>
@ -309,6 +308,7 @@ public:
serialize(const Flags<E> &data, [[maybe_unused]] ControlSerializer *cs = nullptr)
{
std::vector<uint8_t> dataVec;
dataVec.reserve(sizeof(Flags<E>));
appendPOD<uint32_t>(dataVec, static_cast<typename Flags<E>::Type>(data));
return { dataVec, {} };

View file

@ -7,7 +7,6 @@
#pragma once
#include <memory>
#include <stdint.h>
#include <vector>
@ -68,7 +67,7 @@ private:
bool isSignatureValid(IPAModule *ipa) const;
std::vector<std::unique_ptr<IPAModule>> modules_;
std::vector<IPAModule *> modules_;
#if HAVE_IPA_PUBKEY
static const uint8_t publicKeyData_[];

View file

@ -29,7 +29,7 @@ public:
bool isValid() const;
const struct IPAModuleInfo &info() const;
const std::vector<uint8_t> &signature() const;
const std::vector<uint8_t> signature() const;
const std::string &path() const;
bool load();

View file

@ -7,7 +7,6 @@
#pragma once
#include <stdint.h>
#include <vector>
#include <libcamera/base/shared_fd.h>

View file

@ -9,7 +9,6 @@
#include <map>
#include <memory>
#include <stdint.h>
#include "libcamera/internal/ipc_pipe.h"
#include "libcamera/internal/ipc_unixsocket.h"

View file

@ -8,7 +8,6 @@
#include <algorithm>
#include <sstream>
#include <type_traits>
#include <vector>
#include <libcamera/base/log.h>
@ -21,19 +20,17 @@ namespace libcamera {
LOG_DECLARE_CATEGORY(Matrix)
#ifndef __DOXYGEN__
template<typename T>
bool matrixInvert(Span<const T> dataIn, Span<T> dataOut, unsigned int dim,
Span<T> scratchBuffer, Span<unsigned int> swapBuffer);
#endif /* __DOXYGEN__ */
template<typename T, unsigned int Rows, unsigned int Cols,
std::enable_if_t<std::is_arithmetic_v<T>> * = nullptr>
#else
template<typename T, unsigned int Rows, unsigned int Cols>
#endif /* __DOXYGEN__ */
class Matrix
{
static_assert(std::is_arithmetic_v<T>, "Matrix type must be arithmetic");
public:
constexpr Matrix()
Matrix()
{
data_.fill(static_cast<T>(0));
}
Matrix(const std::array<T, Rows * Cols> &data)
@ -41,12 +38,7 @@ public:
std::copy(data.begin(), data.end(), data_.begin());
}
Matrix(const Span<const T, Rows * Cols> data)
{
std::copy(data.begin(), data.end(), data_.begin());
}
static constexpr Matrix identity()
static Matrix identity()
{
Matrix ret;
for (size_t i = 0; i < std::min(Rows, Cols); i++)
@ -74,14 +66,12 @@ public:
return out.str();
}
constexpr Span<const T, Rows * Cols> data() const { return data_; }
constexpr Span<const T, Cols> operator[](size_t i) const
Span<const T, Cols> operator[](size_t i) const
{
return Span<const T, Cols>{ &data_.data()[i * Cols], Cols };
}
constexpr Span<T, Cols> operator[](size_t i)
Span<T, Cols> operator[](size_t i)
{
return Span<T, Cols>{ &data_.data()[i * Cols], Cols };
}
@ -98,30 +88,8 @@ public:
return *this;
}
Matrix<T, Rows, Cols> inverse(bool *ok = nullptr) const
{
static_assert(Rows == Cols, "Matrix must be square");
Matrix<T, Rows, Cols> inverse;
std::array<T, Rows * Cols * 2> scratchBuffer;
std::array<unsigned int, Rows> swapBuffer;
bool res = matrixInvert(Span<const T>(data_),
Span<T>(inverse.data_),
Rows,
Span<T>(scratchBuffer),
Span<unsigned int>(swapBuffer));
if (ok)
*ok = res;
return inverse;
}
private:
/*
* \todo The initializer is only necessary for the constructor to be
* constexpr in C++17. Remove the initializer as soon as we are on
* C++20.
*/
std::array<T, Rows * Cols> data_ = {};
std::array<T, Rows * Cols> data_;
};
#ifndef __DOXYGEN__
@ -153,16 +121,21 @@ Matrix<U, Rows, Cols> operator*(const Matrix<U, Rows, Cols> &m, T d)
return d * m;
}
template<typename T1, unsigned int R1, unsigned int C1, typename T2, unsigned int R2, unsigned int C2>
constexpr Matrix<std::common_type_t<T1, T2>, R1, C2> operator*(const Matrix<T1, R1, C1> &m1,
const Matrix<T2, R2, C2> &m2)
#ifndef __DOXYGEN__
template<typename T,
unsigned int R1, unsigned int C1,
unsigned int R2, unsigned int C2,
std::enable_if_t<C1 == R2> * = nullptr>
#else
template<typename T, unsigned int R1, unsigned int C1, unsigned int R2, unsigned int C2>
#endif /* __DOXYGEN__ */
Matrix<T, R1, C2> operator*(const Matrix<T, R1, C1> &m1, const Matrix<T, R2, C2> &m2)
{
static_assert(C1 == R2, "Matrix dimensions must match for multiplication");
Matrix<std::common_type_t<T1, T2>, R1, C2> result;
Matrix<T, R1, C2> result;
for (unsigned int i = 0; i < R1; i++) {
for (unsigned int j = 0; j < C2; j++) {
std::common_type_t<T1, T2> sum = 0;
T sum = 0;
for (unsigned int k = 0; k < C1; k++)
sum += m1[i][k] * m2[k][j];
@ -175,7 +148,7 @@ constexpr Matrix<std::common_type_t<T1, T2>, R1, C2> operator*(const Matrix<T1,
}
template<typename T, unsigned int Rows, unsigned int Cols>
constexpr Matrix<T, Rows, Cols> operator+(const Matrix<T, Rows, Cols> &m1, const Matrix<T, Rows, Cols> &m2)
Matrix<T, Rows, Cols> operator+(const Matrix<T, Rows, Cols> &m1, const Matrix<T, Rows, Cols> &m2)
{
Matrix<T, Rows, Cols> result;

View file

@ -55,8 +55,6 @@ public:
Signal<> disconnected;
std::vector<MediaEntity *> locateEntities(unsigned int function);
protected:
std::string logPrefix() const override;

View file

@ -112,7 +112,7 @@ public:
unsigned int deviceMinor() const { return minor_; }
const std::vector<MediaPad *> &pads() const { return pads_; }
const std::vector<MediaEntity *> &ancillaryEntities() const { return ancillaryEntities_; }
const std::vector<MediaEntity *> ancillaryEntities() const { return ancillaryEntities_; }
const MediaPad *getPadByIndex(unsigned int index) const;
const MediaPad *getPadById(unsigned int id) const;

View file

@ -1,59 +0,0 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2024, Ideas on Board Oy
*
* Media pipeline support
*/
#pragma once
#include <list>
#include <string>
#include <libcamera/base/log.h>
namespace libcamera {
class CameraSensor;
class MediaEntity;
class MediaLink;
class MediaPad;
struct V4L2SubdeviceFormat;
class MediaPipeline
{
public:
int init(MediaEntity *source, std::string_view sink);
int initLinks();
int configure(CameraSensor *sensor, V4L2SubdeviceFormat *);
private:
struct Entity {
/* The media entity, always valid. */
MediaEntity *entity;
/*
* Whether or not the entity is a subdev that supports the
* routing API.
*/
bool supportsRouting;
/*
* The local sink pad connected to the upstream entity, null for
* the camera sensor at the beginning of the pipeline.
*/
const MediaPad *sink;
/*
* The local source pad connected to the downstream entity, null
* for the video node at the end of the pipeline.
*/
const MediaPad *source;
/*
* The link on the source pad, to the downstream entity, null
* for the video node at the end of the pipeline.
*/
MediaLink *sourceLink;
};
std::list<Entity> entities_;
};
} /* namespace libcamera */

View file

@ -11,7 +11,6 @@ libcamera_internal_headers = files([
'camera_manager.h',
'camera_sensor.h',
'camera_sensor_properties.h',
'clock_recovery.h',
'control_serializer.h',
'control_validator.h',
'converter.h',
@ -33,7 +32,6 @@ libcamera_internal_headers = files([
'matrix.h',
'media_device.h',
'media_object.h',
'media_pipeline.h',
'pipeline_handler.h',
'process.h',
'pub_key.h',
@ -45,7 +43,6 @@ libcamera_internal_headers = files([
'v4l2_pixelformat.h',
'v4l2_subdevice.h',
'v4l2_videodevice.h',
'vector.h',
'yaml_parser.h',
])

View file

@ -63,8 +63,7 @@ public:
void cancelRequest(Request *request);
std::string configurationFile(const std::string &subdir,
const std::string &name,
bool silent = false) const;
const std::string &name) const;
const char *name() const { return name_; }

View file

@ -11,7 +11,6 @@
#include <string>
#include <vector>
#include <libcamera/base/class.h>
#include <libcamera/base/signal.h>
#include <libcamera/base/unique_fd.h>
@ -43,8 +42,6 @@ public:
Signal<enum ExitStatus, int> finished;
private:
LIBCAMERA_DISABLE_COPY_AND_MOVE(Process)
void closeAllFdsExcept(const std::vector<int> &fds);
int isolate();
void died(int wstatus);

View file

@ -10,7 +10,6 @@
#include <chrono>
#include <map>
#include <memory>
#include <stdint.h>
#include <unordered_set>
#include <libcamera/base/event_notifier.h>

View file

@ -1,6 +1,6 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2023-2025 Red Hat Inc.
* Copyright (C) 2023, 2024 Red Hat Inc.
*
* Authors:
* Hans de Goede <hdegoede@redhat.com>
@ -18,37 +18,11 @@ namespace libcamera {
struct DebayerParams {
static constexpr unsigned int kRGBLookupSize = 256;
struct CcmColumn {
int16_t r;
int16_t g;
int16_t b;
};
using ColorLookupTable = std::array<uint8_t, kRGBLookupSize>;
using LookupTable = std::array<uint8_t, kRGBLookupSize>;
using CcmLookupTable = std::array<CcmColumn, kRGBLookupSize>;
/*
* Color lookup tables when CCM is not used.
*
* Each color of a debayered pixel is amended by the corresponding
* value in the given table.
*/
LookupTable red;
LookupTable green;
LookupTable blue;
/*
* Color and gamma lookup tables when CCM is used.
*
* Each of the CcmLookupTable's corresponds to a CCM column; together they
* make a complete 3x3 CCM lookup table. The CCM is applied on debayered
* pixels and then the gamma lookup table is used to set the resulting
* values of all the three colors.
*/
CcmLookupTable redCcm;
CcmLookupTable greenCcm;
CcmLookupTable blueCcm;
LookupTable gammaLut;
ColorLookupTable red;
ColorLookupTable green;
ColorLookupTable blue;
};
} /* namespace libcamera */
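As an illustration of the CCM tables removed above, the per-pixel application
might look like the following sketch (hypothetical; the real software ISP code
may differ): sum the three column contributions for one output channel, clamp
with std::clamp from <algorithm>, then apply the gamma lookup table.

/* Hypothetical red-channel output for a debayered (r, g, b) pixel. */
uint8_t ccmRed(const DebayerParams &p, uint8_t r, uint8_t g, uint8_t b)
{
	int sum = p.redCcm[r].r + p.greenCcm[g].r + p.blueCcm[b].r;
	sum = std::clamp(sum, 0, int(DebayerParams::kRGBLookupSize) - 1);
	return p.gammaLut[sum];
}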

View file

@ -7,7 +7,6 @@
#pragma once
#include <deque>
#include <functional>
#include <initializer_list>
#include <map>
@ -19,7 +18,6 @@
#include <libcamera/base/class.h>
#include <libcamera/base/log.h>
#include <libcamera/base/object.h>
#include <libcamera/base/signal.h>
#include <libcamera/base/thread.h>
@ -45,7 +43,7 @@ struct StreamConfiguration;
LOG_DECLARE_CATEGORY(SoftwareIsp)
class SoftwareIsp : public Object
class SoftwareIsp
{
public:
SoftwareIsp(PipelineHandler *pipe, const CameraSensor *sensor,
@ -85,7 +83,6 @@ public:
Signal<FrameBuffer *> inputBufferReady;
Signal<FrameBuffer *> outputBufferReady;
Signal<uint32_t, uint32_t> ispStatsReady;
Signal<uint32_t, const ControlList &> metadataReady;
Signal<const ControlList &> setSensorControls;
private:
@ -100,11 +97,8 @@ private:
SharedMemObject<DebayerParams> sharedParams_;
DebayerParams debayerParams_;
DmaBufAllocator dmaHeap_;
bool ccmEnabled_;
std::unique_ptr<ipa::soft::IPAProxySoft> ipa_;
std::deque<FrameBuffer *> queuedInputBuffers_;
std::deque<FrameBuffer *> queuedOutputBuffers_;
};
} /* namespace libcamera */

View file

@ -5,8 +5,6 @@
* request.tp - Tracepoints for the request object
*/
#include <stdint.h>
#include <libcamera/framebuffer.h>
#include "libcamera/internal/request.h"

View file

@ -10,7 +10,6 @@
#include <map>
#include <memory>
#include <optional>
#include <stdint.h>
#include <vector>
#include <linux/videodev2.h>
@ -45,7 +44,6 @@ public:
const std::string &deviceNode() const { return deviceNode_; }
std::string devicePath() const;
bool supportsFrameStartEvent();
int setFrameStartEnabled(bool enable);
Signal<uint32_t> frameStart;

View file

@ -49,8 +49,6 @@ public:
static const std::vector<V4L2PixelFormat> &
fromPixelFormat(const PixelFormat &pixelFormat);
bool isGenericLineBasedMetadata() const;
private:
uint32_t fourcc_;
};

View file

@ -10,7 +10,6 @@
#include <memory>
#include <optional>
#include <ostream>
#include <stdint.h>
#include <string>
#include <vector>

View file

@ -8,6 +8,7 @@
#pragma once
#include <array>
#include <atomic>
#include <memory>
#include <optional>
#include <ostream>
@ -157,7 +158,7 @@ private:
std::vector<Plane> planes_;
};
uint64_t lastUsedCounter_;
std::atomic<uint64_t> lastUsedCounter_;
std::vector<Entry> cache_;
/* \todo Expose the miss counter through an instrumentation API. */
unsigned int missCounter_;

View file

@ -66,7 +66,6 @@ pipeline_ipa_mojom_mapping = {
'ipu3': 'ipu3.mojom',
'mali-c55': 'mali-c55.mojom',
'rkisp1': 'rkisp1.mojom',
'rpi/pisp': 'raspberrypi.mojom',
'rpi/vc4': 'raspberrypi.mojom',
'simple': 'soft.mojom',
'vimc': 'vimc.mojom',

View file

@ -52,8 +52,7 @@ struct ConfigResult {
struct StartResult {
libcamera.ControlList controls;
int32 startupFrameCount;
int32 invalidFrameCount;
int32 dropFrameCount;
};
struct PrepareParams {

View file

@ -16,9 +16,8 @@ interface IPASoftInterface {
init(libcamera.IPASettings settings,
libcamera.SharedFD fdStats,
libcamera.SharedFD fdParams,
libcamera.IPACameraSensorInfo sensorInfo,
libcamera.ControlInfoMap sensorControls)
=> (int32 ret, libcamera.ControlInfoMap ipaControls, bool ccmEnabled);
libcamera.ControlInfoMap sensorCtrlInfoMap)
=> (int32 ret, libcamera.ControlInfoMap ipaControls);
start() => (int32 ret);
stop();
configure(IPAConfigInfo configInfo)
@ -34,5 +33,4 @@ interface IPASoftInterface {
interface IPASoftEventInterface {
setSensorControls(libcamera.ControlList sensorControls);
setIspParams();
metadataReady(uint32 frame, libcamera.ControlList metadata);
};

View file

@ -37,7 +37,6 @@ controls_map = {
'core': 'control_ids_core.yaml',
'debug': 'control_ids_debug.yaml',
'draft': 'control_ids_draft.yaml',
'rpi/pisp': 'control_ids_rpi.yaml',
'rpi/vc4': 'control_ids_rpi.yaml',
},
@ -90,7 +89,6 @@ foreach mode, entry : controls_map
command : [gen_controls, '-o', '@OUTPUT@',
'--mode', mode, '-t', template_file,
'-r', ranges_file, '@INPUT@'],
depend_files : [py_mod_controls],
env : py_build_env,
install : true,
install_dir : libcamera_headers_install_dir)

View file

@ -1,4 +1,4 @@
# SPDX-License-Identifier: CC0-1.0
Files in this directory are imported from v6.13-rc1-68-gf9bbbd9a696d of the Linux kernel. Do not
Files in this directory are imported from next-media-rkisp1-20240814-14-ga043ea54bbb9 of the Linux kernel. Do not
modify them manually.

View file

@ -188,8 +188,4 @@
#define MEDIA_BUS_FMT_META_20 0x8006
#define MEDIA_BUS_FMT_META_24 0x8007
/* Specific metadata formats. Next is 0x9003. */
#define MEDIA_BUS_FMT_CCS_EMBEDDED 0x9001
#define MEDIA_BUS_FMT_OV2740_EMBEDDED 0x9002
#endif /* __LINUX_MEDIA_BUS_FORMAT_H */

View file

@ -206,7 +206,6 @@ struct media_entity_desc {
#define MEDIA_PAD_FL_SINK (1U << 0)
#define MEDIA_PAD_FL_SOURCE (1U << 1)
#define MEDIA_PAD_FL_MUST_CONNECT (1U << 2)
#define MEDIA_PAD_FL_INTERNAL (1U << 3)
struct media_pad_desc {
__u32 entity; /* entity ID */

View file

@ -204,11 +204,6 @@ struct v4l2_subdev_capability {
* on a video node.
*/
#define V4L2_SUBDEV_ROUTE_FL_ACTIVE (1U << 0)
/*
* Is the route immutable? The ACTIVE flag of an immutable route may not be
* unset.
*/
#define V4L2_SUBDEV_ROUTE_FL_IMMUTABLE (1U << 1)
/**
* struct v4l2_subdev_route - A route inside a subdev

View file

@ -843,18 +843,6 @@ struct v4l2_pix_format {
#define V4L2_META_FMT_MALI_C55_PARAMS v4l2_fourcc('C', '5', '5', 'P') /* ARM Mali-C55 Parameters */
#define V4L2_META_FMT_MALI_C55_3A_STATS v4l2_fourcc('C', '5', '5', 'S') /* ARM Mali-C55 3A Statistics */
/*
* Line-based metadata formats. Remember to update v4l_fill_fmtdesc() when
* adding new ones!
*/
#define V4L2_META_FMT_GENERIC_8 v4l2_fourcc('M', 'E', 'T', '8') /* Generic 8-bit metadata */
#define V4L2_META_FMT_GENERIC_CSI2_10 v4l2_fourcc('M', 'C', '1', 'A') /* 10-bit CSI-2 packed 8-bit metadata */
#define V4L2_META_FMT_GENERIC_CSI2_12 v4l2_fourcc('M', 'C', '1', 'C') /* 12-bit CSI-2 packed 8-bit metadata */
#define V4L2_META_FMT_GENERIC_CSI2_14 v4l2_fourcc('M', 'C', '1', 'E') /* 14-bit CSI-2 packed 8-bit metadata */
#define V4L2_META_FMT_GENERIC_CSI2_16 v4l2_fourcc('M', 'C', '1', 'G') /* 16-bit CSI-2 packed 8-bit metadata */
#define V4L2_META_FMT_GENERIC_CSI2_20 v4l2_fourcc('M', 'C', '1', 'K') /* 20-bit CSI-2 packed 8-bit metadata */
#define V4L2_META_FMT_GENERIC_CSI2_24 v4l2_fourcc('M', 'C', '1', 'O') /* 24-bit CSI-2 packed 8-bit metadata */
/* priv field value to indicates that subsequent fields are valid. */
#define V4L2_PIX_FMT_PRIV_MAGIC 0xfeedcafe

View file

@ -2,7 +2,7 @@
project('libcamera', 'c', 'cpp',
meson_version : '>= 0.63',
version : '0.5.1',
version : '0.4.0',
default_options : [
'werror=true',
'warning_level=2',
@ -110,9 +110,7 @@ common_arguments = [
]
c_arguments = []
cpp_arguments = [
'-Wnon-virtual-dtor',
]
cpp_arguments = []
cxx_stdlib = 'libstdc++'
@ -206,7 +204,7 @@ liblttng = dependency('lttng-ust', required : get_option('tracing'))
# Pipeline handlers
#
wanted_pipelines = get_option('pipelines')
pipelines = get_option('pipelines')
arch_arm = ['arm', 'aarch64']
arch_x86 = ['x86', 'x86_64']
@ -215,7 +213,6 @@ pipelines_support = {
'ipu3': arch_x86,
'mali-c55': arch_arm,
'rkisp1': arch_arm,
'rpi/pisp': arch_arm,
'rpi/vc4': arch_arm,
'simple': ['any'],
'uvcvideo': ['any'],
@ -223,18 +220,16 @@ pipelines_support = {
'virtual': ['test'],
}
if wanted_pipelines.contains('all')
if pipelines.contains('all')
pipelines = pipelines_support.keys()
elif wanted_pipelines.contains('auto')
elif pipelines.contains('auto')
host_cpu = host_machine.cpu_family()
pipelines = []
foreach pipeline, archs : pipelines_support
if pipeline in wanted_pipelines or host_cpu in archs or 'any' in archs
if host_cpu in archs or 'any' in archs
pipelines += pipeline
endif
endforeach
else
pipelines = wanted_pipelines
endif
# Tests require the vimc pipeline handler, include it automatically when tests

View file

@ -18,7 +18,6 @@ option('cam',
option('documentation',
type : 'feature',
value : 'auto',
description : 'Generate the project documentation')
option('doc_werror',
@ -33,8 +32,7 @@ option('gstreamer',
option('ipas',
type : 'array',
choices : ['ipu3', 'mali-c55', 'rkisp1', 'rpi/pisp', 'rpi/vc4', 'simple',
'vimc'],
choices : ['ipu3', 'mali-c55', 'rkisp1', 'rpi/vc4', 'simple', 'vimc'],
description : 'Select which IPA modules to build')
option('lc-compliance',
@ -52,7 +50,6 @@ option('pipelines',
'ipu3',
'mali-c55',
'rkisp1',
'rpi/pisp',
'rpi/vc4',
'simple',
'uvcvideo',
@ -87,7 +84,6 @@ option('udev',
description : 'Enable udev support for hotplug')
option('v4l2',
type : 'feature',
value : 'auto',
description : 'Compile the V4L2 compatibility layer',
deprecated : {'true': 'enabled', 'false': 'disabled'})
type : 'boolean',
value : false,
description : 'Compile the V4L2 compatibility layer')

View file

@ -1079,7 +1079,7 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques
buffer.internalBuffer = frameBuffer;
descriptor->request_->addBuffer(sourceStream->stream(),
frameBuffer);
frameBuffer, nullptr);
requestedStreams.insert(sourceStream);
}

View file

@ -5,12 +5,9 @@
* Camera capture session
*/
#include "camera_session.h"
#include <iomanip>
#include <iostream>
#include <limits.h>
#include <optional>
#include <sstream>
#include <libcamera/control_ids.h>
@ -19,6 +16,7 @@
#include "../common/event_loop.h"
#include "../common/stream_options.h"
#include "camera_session.h"
#include "capture_script.h"
#include "file_sink.h"
#ifdef HAVE_KMS
@ -62,32 +60,11 @@ CameraSession::CameraSession(CameraManager *cm,
return;
}
std::vector<StreamRole> roles =
StreamKeyValueParser::roles(options_[OptStream]);
std::vector<std::vector<StreamRole>> tryRoles;
if (!roles.empty()) {
/*
* If the roles are explicitly specified then there's no need
* to try other roles
*/
tryRoles.push_back(roles);
} else {
tryRoles.push_back({ StreamRole::Viewfinder });
tryRoles.push_back({ StreamRole::Raw });
}
std::vector<StreamRole> roles = StreamKeyValueParser::roles(options_[OptStream]);
std::unique_ptr<CameraConfiguration> config;
bool valid = false;
for (std::vector<StreamRole> &rolesIt : tryRoles) {
config = camera_->generateConfiguration(rolesIt);
if (config && config->size() == rolesIt.size()) {
roles = rolesIt;
valid = true;
break;
}
}
if (!valid) {
std::unique_ptr<CameraConfiguration> config =
camera_->generateConfiguration(roles);
if (!config || config->size() != roles.size()) {
std::cerr << "Failed to get default stream configuration"
<< std::endl;
return;
@ -196,11 +173,6 @@ void CameraSession::listControls() const
std::cout << "Control: " << io.str()
<< id->vendor() << "::" << id->name() << ":"
<< std::endl;
std::optional<int32_t> def;
if (!info.def().isNone())
def = info.def().get<int32_t>();
for (const auto &value : info.values()) {
int32_t val = value.get<int32_t>();
const auto &it = id->enumerators().find(val);
@ -210,10 +182,7 @@ void CameraSession::listControls() const
std::cout << "UNKNOWN";
else
std::cout << it->second;
std::cout << " (" << val << ")"
<< (val == def ? " [default]" : "")
<< std::endl;
std::cout << " (" << val << ")" << std::endl;
}
}

View file

@ -8,7 +8,6 @@
#include "capture_script.h"
#include <iostream>
#include <memory>
#include <stdio.h>
#include <stdlib.h>
@ -522,22 +521,45 @@ ControlValue CaptureScript::parseArrayControl(const ControlId *id,
case ControlTypeNone:
break;
case ControlTypeBool: {
auto values = std::make_unique<bool[]>(repr.size());
/*
* This is unpleasant, but we cannot use an std::vector<> as its
* boolean specialization does not allow access to the raw data:
* boolean values are stored in a bitmask for efficiency.
*
* As we need a contiguous memory region to wrap in a Span<>, use
* an array instead, but be strict about not overflowing it by
* limiting the number of controls we can store.
*
* Be loud but do not fail, as the issue would only present at
* runtime and it is not fatal.
*/
static constexpr unsigned int kMaxNumBooleanControls = 1024;
std::array<bool, kMaxNumBooleanControls> values;
unsigned int idx = 0;
for (std::size_t i = 0; i < repr.size(); i++) {
const auto &s = repr[i];
for (const std::string &s : repr) {
bool val;
if (s == "true") {
values[i] = true;
val = true;
} else if (s == "false") {
values[i] = false;
val = false;
} else {
unpackFailure(id, s);
return value;
}
if (idx == kMaxNumBooleanControls) {
std::cerr << "Cannot parse more than "
<< kMaxNumBooleanControls
<< " boolean controls" << std::endl;
break;
}
values[idx++] = val;
}
value = Span<bool>(values.get(), repr.size());
value = Span<bool>(values.data(), idx);
break;
}
case ControlTypeByte: {
@ -578,6 +600,10 @@ ControlValue CaptureScript::parseArrayControl(const ControlId *id,
value = Span<const float>(values.data(), values.size());
break;
}
case ControlTypeString: {
value = Span<const std::string>(repr.data(), repr.size());
break;
}
default:
std::cerr << "Unsupported control type" << std::endl;
break;
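
A minimal sketch of the constraint described in the comment above, in plain standard C++ (names are illustrative): std::vector<bool> is a packed specialization that provides no bool *data() to wrap in a Span<>, whereas std::array keeps its elements contiguous:

    #include <array>
    #include <vector>

    int main()
    {
        std::array<bool, 4> arr{};
        bool *p = arr.data();       /* fine: contiguous bool storage */
        (void)p;

        std::vector<bool> vec(4);
        /* bool *q = vec.data(); */ /* does not compile: the vector<bool>
                                     * specialization stores packed bits
                                     * and has no data() member */
        (void)vec;
    }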

View file

@ -450,6 +450,8 @@ int Device::openCard()
}
for (struct dirent *res; (res = readdir(folder));) {
uint64_t cap;
if (strncmp(res->d_name, "card", 4))
continue;
@ -463,22 +465,15 @@ int Device::openCard()
}
/*
* Skip non-display devices. While this could in theory be done
* by checking for support of the mode setting API, some
* out-of-tree render-only GPU drivers (namely powervr)
* incorrectly set the DRIVER_MODESET driver feature. Check for
* the presence of at least one CRTC, encoder and connector
* instead.
* Skip devices that don't support the modeset API, to avoid
* selecting a DRM device corresponding to a GPU. There is no
* modeset capability, but the kernel returns an error for most
* caps if mode setting isn't supported by the driver. The
* DRM_CAP_DUMB_BUFFER capability is one of those; others would
* do as well. The capability value itself isn't relevant.
*/
std::unique_ptr<drmModeRes, decltype(&drmModeFreeResources)> resources{
drmModeGetResources(fd_),
&drmModeFreeResources
};
if (!resources ||
resources->count_connectors <= 0 ||
resources->count_crtcs <= 0 ||
resources->count_encoders <= 0) {
resources.reset();
ret = drmGetCap(fd_, DRM_CAP_DUMB_BUFFER, &cap);
if (ret < 0) {
drmClose(fd_);
fd_ = -1;
continue;

View file

@ -5,8 +5,6 @@
* File Sink
*/
#include "file_sink.h"
#include <array>
#include <assert.h>
#include <fcntl.h>
@ -23,6 +21,8 @@
#include "../common/image.h"
#include "../common/ppm_writer.h"
#include "file_sink.h"
using namespace libcamera;
FileSink::FileSink([[maybe_unused]] const libcamera::Camera *camera,

View file

@ -11,7 +11,6 @@
#include <memory>
#include <string>
#include <libcamera/controls.h>
#include <libcamera/stream.h>
#include "frame_sink.h"

View file

@ -5,8 +5,6 @@
* cam - The libcamera swiss army knife
*/
#include "main.h"
#include <atomic>
#include <iomanip>
#include <iostream>
@ -21,6 +19,7 @@
#include "../common/stream_options.h"
#include "camera_session.h"
#include "main.h"
using namespace libcamera;

View file

@ -34,7 +34,6 @@ if libsdl2.found()
cam_sources += files([
'sdl_sink.cpp',
'sdl_texture.cpp',
'sdl_texture_1plane.cpp',
'sdl_texture_yuv.cpp',
])

View file

@ -11,7 +11,6 @@
#include <fcntl.h>
#include <iomanip>
#include <iostream>
#include <optional>
#include <signal.h>
#include <sstream>
#include <string.h>
@ -23,7 +22,6 @@
#include "../common/event_loop.h"
#include "../common/image.h"
#include "sdl_texture_1plane.h"
#ifdef HAVE_LIBJPEG
#include "sdl_texture_mjpg.h"
#endif
@ -33,46 +31,6 @@ using namespace libcamera;
using namespace std::chrono_literals;
namespace {
std::optional<SDL_PixelFormatEnum> singlePlaneFormatToSDL(const libcamera::PixelFormat &f)
{
switch (f) {
case libcamera::formats::RGB888:
return SDL_PIXELFORMAT_BGR24;
case libcamera::formats::BGR888:
return SDL_PIXELFORMAT_RGB24;
case libcamera::formats::RGBA8888:
return SDL_PIXELFORMAT_ABGR32;
case libcamera::formats::ARGB8888:
return SDL_PIXELFORMAT_BGRA32;
case libcamera::formats::BGRA8888:
return SDL_PIXELFORMAT_ARGB32;
case libcamera::formats::ABGR8888:
return SDL_PIXELFORMAT_RGBA32;
#if SDL_VERSION_ATLEAST(2, 29, 1)
case libcamera::formats::RGBX8888:
return SDL_PIXELFORMAT_XBGR32;
case libcamera::formats::XRGB8888:
return SDL_PIXELFORMAT_BGRX32;
case libcamera::formats::BGRX8888:
return SDL_PIXELFORMAT_XRGB32;
case libcamera::formats::XBGR8888:
return SDL_PIXELFORMAT_RGBX32;
#endif
case libcamera::formats::YUYV:
return SDL_PIXELFORMAT_YUY2;
case libcamera::formats::UYVY:
return SDL_PIXELFORMAT_UYVY;
case libcamera::formats::YVYU:
return SDL_PIXELFORMAT_YVYU;
}
return {};
}
} /* namespace */
SDLSink::SDLSink()
: window_(nullptr), renderer_(nullptr), rect_({}),
init_(false)
@ -104,20 +62,25 @@ int SDLSink::configure(const libcamera::CameraConfiguration &config)
rect_.w = cfg.size.width;
rect_.h = cfg.size.height;
if (auto sdlFormat = singlePlaneFormatToSDL(cfg.pixelFormat))
texture_ = std::make_unique<SDLTexture1Plane>(rect_, *sdlFormat, cfg.stride);
switch (cfg.pixelFormat) {
#ifdef HAVE_LIBJPEG
else if (cfg.pixelFormat == libcamera::formats::MJPEG)
case libcamera::formats::MJPEG:
texture_ = std::make_unique<SDLTextureMJPG>(rect_);
break;
#endif
#if SDL_VERSION_ATLEAST(2, 0, 16)
else if (cfg.pixelFormat == libcamera::formats::NV12)
case libcamera::formats::NV12:
texture_ = std::make_unique<SDLTextureNV12>(rect_, cfg.stride);
break;
#endif
else {
std::cerr << "Unsupported pixel format " << cfg.pixelFormat << std::endl;
case libcamera::formats::YUYV:
texture_ = std::make_unique<SDLTextureYUYV>(rect_, cfg.stride);
break;
default:
std::cerr << "Unsupported pixel format "
<< cfg.pixelFormat.toString() << std::endl;
return -EINVAL;
}
};
return 0;
}
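
The master side leans on the C++17 if-with-initializer idiom: the std::optional returned by the format mapper is scoped to the statement, and the texture branch runs only when a mapping exists. A self-contained sketch, with lookup() as an illustrative stand-in for the mapper:

    #include <iostream>
    #include <optional>

    static std::optional<int> lookup(int key)
    {
        if (key == 42)
            return 7; /* hypothetical mapping */
        return std::nullopt;
    }

    int main()
    {
        if (auto v = lookup(42))
            std::cout << "mapped to " << *v << '\n';
        else
            std::cout << "unsupported\n";
    }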

View file

@ -7,7 +7,7 @@
#pragma once
#include <libcamera/base/span.h>
#include <vector>
#include <SDL2/SDL.h>
@ -19,7 +19,7 @@ public:
SDLTexture(const SDL_Rect &rect, uint32_t pixelFormat, const int stride);
virtual ~SDLTexture();
int create(SDL_Renderer *renderer);
virtual void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) = 0;
virtual void update(const std::vector<libcamera::Span<const uint8_t>> &data) = 0;
SDL_Texture *get() const { return ptr_; }
protected:

View file

@ -1,17 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2025, Ideas on Board Oy
*
* SDL single plane textures
*/
#include "sdl_texture_1plane.h"
#include <assert.h>
void SDLTexture1Plane::update(libcamera::Span<const libcamera::Span<const uint8_t>> data)
{
assert(data.size() == 1);
assert(data[0].size_bytes() == std::size_t(rect_.h) * std::size_t(stride_));
SDL_UpdateTexture(ptr_, nullptr, data[0].data(), stride_);
}

View file

@ -1,18 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2025, Ideas on Board Oy
*
* SDL single plane textures
*/
#pragma once
#include "sdl_texture.h"
class SDLTexture1Plane final : public SDLTexture
{
public:
using SDLTexture::SDLTexture;
void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) override;
};

View file

@ -76,7 +76,7 @@ int SDLTextureMJPG::decompress(Span<const uint8_t> data)
return 0;
}
void SDLTextureMJPG::update(libcamera::Span<const libcamera::Span<const uint8_t>> data)
void SDLTextureMJPG::update(const std::vector<libcamera::Span<const uint8_t>> &data)
{
decompress(data[0]);
SDL_UpdateTexture(ptr_, nullptr, rgb_.get(), stride_);

View file

@ -14,7 +14,7 @@ class SDLTextureMJPG : public SDLTexture
public:
SDLTextureMJPG(const SDL_Rect &rect);
void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) override;
void update(const std::vector<libcamera::Span<const uint8_t>> &data) override;
private:
int decompress(libcamera::Span<const uint8_t> data);

View file

@ -15,9 +15,19 @@ SDLTextureNV12::SDLTextureNV12(const SDL_Rect &rect, unsigned int stride)
{
}
void SDLTextureNV12::update(libcamera::Span<const libcamera::Span<const uint8_t>> data)
void SDLTextureNV12::update(const std::vector<libcamera::Span<const uint8_t>> &data)
{
SDL_UpdateNVTexture(ptr_, nullptr, data[0].data(), stride_,
SDL_UpdateNVTexture(ptr_, &rect_, data[0].data(), stride_,
data[1].data(), stride_);
}
#endif
SDLTextureYUYV::SDLTextureYUYV(const SDL_Rect &rect, unsigned int stride)
: SDLTexture(rect, SDL_PIXELFORMAT_YUY2, stride)
{
}
void SDLTextureYUYV::update(const std::vector<libcamera::Span<const uint8_t>> &data)
{
SDL_UpdateTexture(ptr_, &rect_, data[0].data(), stride_);
}

View file

@ -14,6 +14,13 @@ class SDLTextureNV12 : public SDLTexture
{
public:
SDLTextureNV12(const SDL_Rect &rect, unsigned int stride);
void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) override;
void update(const std::vector<libcamera::Span<const uint8_t>> &data) override;
};
#endif
class SDLTextureYUYV : public SDLTexture
{
public:
SDLTextureYUYV(const SDL_Rect &rect, unsigned int stride);
void update(const std::vector<libcamera::Span<const uint8_t>> &data) override;
};

View file

@ -21,35 +21,12 @@ EventLoop::EventLoop()
evthread_use_pthreads();
base_ = event_base_new();
instance_ = this;
callsTrigger_ = event_new(base_, -1, EV_PERSIST, [](evutil_socket_t, short, void *closure) {
auto *self = static_cast<EventLoop *>(closure);
for (;;) {
std::function<void()> call;
{
std::lock_guard locker(self->lock_);
if (self->calls_.empty())
break;
call = std::move(self->calls_.front());
self->calls_.pop_front();
}
call();
}
}, this);
assert(callsTrigger_);
event_add(callsTrigger_, nullptr);
}
EventLoop::~EventLoop()
{
instance_ = nullptr;
event_free(callsTrigger_);
events_.clear();
event_base_free(base_);
libevent_global_shutdown();
@ -73,20 +50,20 @@ void EventLoop::exit(int code)
event_base_loopbreak(base_);
}
void EventLoop::callLater(std::function<void()> &&func)
void EventLoop::callLater(const std::function<void()> &func)
{
{
std::unique_lock<std::mutex> locker(lock_);
calls_.push_back(std::move(func));
calls_.push_back(func);
}
event_active(callsTrigger_, 0, 0);
event_base_once(base_, -1, EV_TIMEOUT, dispatchCallback, this, nullptr);
}
void EventLoop::addFdEvent(int fd, EventType type,
std::function<void()> &&callback)
const std::function<void()> &callback)
{
std::unique_ptr<Event> event = std::make_unique<Event>(std::move(callback));
std::unique_ptr<Event> event = std::make_unique<Event>(callback);
short events = (type & Read ? EV_READ : 0)
| (type & Write ? EV_WRITE : 0)
| EV_PERSIST;
@ -108,9 +85,9 @@ void EventLoop::addFdEvent(int fd, EventType type,
}
void EventLoop::addTimerEvent(const std::chrono::microseconds period,
std::function<void()> &&callback)
const std::function<void()> &callback)
{
std::unique_ptr<Event> event = std::make_unique<Event>(std::move(callback));
std::unique_ptr<Event> event = std::make_unique<Event>(callback);
event->event_ = event_new(base_, -1, EV_PERSIST, &EventLoop::Event::dispatch,
event.get());
if (!event->event_) {
@ -131,8 +108,31 @@ void EventLoop::addTimerEvent(const std::chrono::microseconds period,
events_.push_back(std::move(event));
}
EventLoop::Event::Event(std::function<void()> &&callback)
: callback_(std::move(callback)), event_(nullptr)
void EventLoop::dispatchCallback([[maybe_unused]] evutil_socket_t fd,
[[maybe_unused]] short flags, void *param)
{
EventLoop *loop = static_cast<EventLoop *>(param);
loop->dispatchCall();
}
void EventLoop::dispatchCall()
{
std::function<void()> call;
{
std::unique_lock<std::mutex> locker(lock_);
if (calls_.empty())
return;
call = calls_.front();
calls_.pop_front();
}
call();
}
EventLoop::Event::Event(const std::function<void()> &callback)
: callback_(callback), event_(nullptr)
{
}
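
The master side replaces the one-shot event_base_once() per call with a single persistent wakeup event that drains the whole queue in one dispatch. A condensed, self-contained sketch of that pattern, assuming libevent2 (the trigger is created once with event_new(base, -1, EV_PERSIST, drain, q) and event_add()ed):

    #include <deque>
    #include <functional>
    #include <mutex>

    #include <event2/event.h>

    struct CallQueue {
        struct event *trigger;
        std::mutex lock;
        std::deque<std::function<void()>> calls;
    };

    /* Run every queued call in a single dispatch of the trigger event. */
    static void drain(evutil_socket_t, short, void *closure)
    {
        auto *q = static_cast<CallQueue *>(closure);
        for (;;) {
            std::function<void()> call;
            {
                std::lock_guard<std::mutex> locker(q->lock);
                if (q->calls.empty())
                    break;
                call = std::move(q->calls.front());
                q->calls.pop_front();
            }
            call();
        }
    }

    void callLater(CallQueue *q, std::function<void()> func)
    {
        {
            std::lock_guard<std::mutex> locker(q->lock);
            q->calls.push_back(std::move(func));
        }
        event_active(q->trigger, 0, 0); /* wake the loop exactly once */
    }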

View file

@ -8,14 +8,11 @@
#pragma once
#include <chrono>
#include <deque>
#include <functional>
#include <list>
#include <memory>
#include <mutex>
#include <libcamera/base/class.h>
#include <event2/util.h>
struct event_base;
@ -36,20 +33,18 @@ public:
int exec();
void exit(int code = 0);
void callLater(std::function<void()> &&func);
void callLater(const std::function<void()> &func);
void addFdEvent(int fd, EventType type,
std::function<void()> &&handler);
const std::function<void()> &handler);
using duration = std::chrono::steady_clock::duration;
void addTimerEvent(const std::chrono::microseconds period,
std::function<void()> &&handler);
const std::function<void()> &handler);
private:
LIBCAMERA_DISABLE_COPY_AND_MOVE(EventLoop)
struct Event {
Event(std::function<void()> &&callback);
LIBCAMERA_DISABLE_COPY_AND_MOVE(Event)
Event(const std::function<void()> &callback);
~Event();
static void dispatch(int fd, short events, void *arg);
@ -63,9 +58,11 @@ private:
struct event_base *base_;
int exitCode_;
std::deque<std::function<void()>> calls_;
struct event *callsTrigger_ = nullptr;
std::list<std::function<void()>> calls_;
std::list<std::unique_ptr<Event>> events_;
std::mutex lock_;
static void dispatchCallback(evutil_socket_t fd, short flags,
void *param);
void dispatchCall();
};

View file

@ -98,12 +98,12 @@ unsigned int Image::numPlanes() const
Span<uint8_t> Image::data(unsigned int plane)
{
assert(plane < planes_.size());
assert(plane <= planes_.size());
return planes_[plane];
}
Span<const uint8_t> Image::data(unsigned int plane) const
{
assert(plane < planes_.size());
assert(plane <= planes_.size());
return planes_[plane];
}
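
With N planes the valid indices are 0..N-1, so `plane < planes_.size()` is the correct bound; `<=` also accepts an index one past the end. A minimal illustration:

    #include <cassert>
    #include <vector>

    int main()
    {
        std::vector<int> planes(2); /* valid indices: 0 and 1 */
        unsigned int plane = 2;
        /* '<' rejects the out-of-range index; '<=' would let it through
         * and planes[plane] would then read past the end. */
        assert(plane < planes.size()); /* aborts here, as it should */
    }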

View file

@ -1040,7 +1040,7 @@ void OptionsParser::usageOptions(const std::list<Option> &options,
std::cerr << std::setw(indent) << argument;
for (const char *help = option.help, *end = help; end;) {
for (const char *help = option.help, *end = help; end; ) {
end = strchr(help, '\n');
if (end) {
std::cerr << std::string(help, end - help + 1);

View file

@ -7,7 +7,6 @@
#include "ppm_writer.h"
#include <errno.h>
#include <fstream>
#include <iostream>
@ -29,7 +28,7 @@ int PPMWriter::write(const char *filename,
std::ofstream output(filename, std::ios::binary);
if (!output) {
std::cerr << "Failed to open ppm file: " << filename << std::endl;
return -EIO;
return -EINVAL;
}
output << "P6" << std::endl
@ -37,7 +36,7 @@ int PPMWriter::write(const char *filename,
<< "255" << std::endl;
if (!output) {
std::cerr << "Failed to write the file header" << std::endl;
return -EIO;
return -EINVAL;
}
const unsigned int rowLength = config.size.width * 3;
@ -46,7 +45,7 @@ int PPMWriter::write(const char *filename,
output.write(row, rowLength);
if (!output) {
std::cerr << "Failed to write image data at row " << y << std::endl;
return -EIO;
return -EINVAL;
}
}
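
For context, the P6 header written above is plain text (magic, dimensions, maxval, each newline-terminated) followed by raw RGB24 rows. A minimal stand-alone writer, with illustrative names:

    #include <fstream>

    /* Write a binary PPM: "P6\n<w> <h>\n255\n" then w*h*3 bytes of RGB. */
    static bool writePpm(const char *path, unsigned int w, unsigned int h,
                         const char *rgb)
    {
        std::ofstream out(path, std::ios::binary);
        out << "P6\n" << w << " " << h << "\n255\n";
        out.write(rgb, static_cast<std::streamsize>(w) * h * 3);
        return static_cast<bool>(out);
    }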

View file

@ -42,8 +42,9 @@ KeyValueParser::Options StreamKeyValueParser::parse(const char *arguments)
std::vector<StreamRole> StreamKeyValueParser::roles(const OptionValue &values)
{
/* If there are no configuration values to examine, default to viewfinder. */
if (values.empty())
return {};
return { StreamRole::Viewfinder };
const std::vector<OptionValue> &streamParameters = values.toArray();

View file

@ -23,5 +23,5 @@ private:
Environment() = default;
std::string cameraId_;
libcamera::CameraManager *cm_ = nullptr;
libcamera::CameraManager *cm_;
};

View file

@ -7,14 +7,13 @@
#include "capture.h"
#include <assert.h>
#include <gtest/gtest.h>
using namespace libcamera;
Capture::Capture(std::shared_ptr<Camera> camera)
: camera_(std::move(camera)), allocator_(camera_)
: loop_(nullptr), camera_(camera),
allocator_(std::make_unique<FrameBufferAllocator>(camera))
{
}
@ -23,29 +22,14 @@ Capture::~Capture()
stop();
}
void Capture::configure(libcamera::Span<const libcamera::StreamRole> roles)
void Capture::configure(StreamRole role)
{
assert(!roles.empty());
config_ = camera_->generateConfiguration({ role });
config_ = camera_->generateConfiguration(roles);
if (!config_)
GTEST_SKIP() << "Roles not supported by camera";
ASSERT_EQ(config_->size(), roles.size()) << "Unexpected number of streams in configuration";
/*
* Set the buffer count to the largest value across all streams.
* \todo Should all streams from a Camera have the same buffer count?
*/
auto largest =
std::max_element(config_->begin(), config_->end(),
[](const StreamConfiguration &l, const StreamConfiguration &r)
{ return l.bufferCount < r.bufferCount; });
assert(largest != config_->end());
for (auto &cfg : *config_)
cfg.bufferCount = largest->bufferCount;
if (!config_) {
std::cout << "Role not supported by camera" << std::endl;
GTEST_SKIP();
}
if (config_->validate() != CameraConfiguration::Valid) {
config_.reset();
@ -58,46 +42,144 @@ void Capture::configure(libcamera::Span<const libcamera::StreamRole> roles)
}
}
void Capture::run(unsigned int captureLimit, std::optional<unsigned int> queueLimit)
void Capture::start()
{
assert(!queueLimit || captureLimit <= *queueLimit);
Stream *stream = config_->at(0).stream();
int count = allocator_->allocate(stream);
captureLimit_ = captureLimit;
queueLimit_ = queueLimit;
ASSERT_GE(count, 0) << "Failed to allocate buffers";
EXPECT_EQ(count, config_->at(0).bufferCount) << "Allocated less buffers than expected";
captureCount_ = queueCount_ = 0;
camera_->requestCompleted.connect(this, &Capture::requestComplete);
EventLoop loop;
loop_ = &loop;
ASSERT_EQ(camera_->start(), 0) << "Failed to start camera";
}
void Capture::stop()
{
if (!config_ || !allocator_->allocated())
return;
camera_->stop();
camera_->requestCompleted.disconnect(this);
Stream *stream = config_->at(0).stream();
requests_.clear();
allocator_->free(stream);
}
/* CaptureBalanced */
CaptureBalanced::CaptureBalanced(std::shared_ptr<Camera> camera)
: Capture(camera)
{
}
void CaptureBalanced::capture(unsigned int numRequests)
{
start();
for (const auto &request : requests_)
queueRequest(request.get());
Stream *stream = config_->at(0).stream();
const std::vector<std::unique_ptr<FrameBuffer>> &buffers = allocator_->buffers(stream);
EXPECT_EQ(loop_->exec(), 0);
/* No point in testing fewer requests than the camera depth. */
if (buffers.size() > numRequests) {
std::cout << "Camera needs " + std::to_string(buffers.size())
+ " requests, can't test only "
+ std::to_string(numRequests) << std::endl;
GTEST_SKIP();
}
queueCount_ = 0;
captureCount_ = 0;
captureLimit_ = numRequests;
/* Queue the recommended number of requests. */
for (const std::unique_ptr<FrameBuffer> &buffer : buffers) {
std::unique_ptr<Request> request = camera_->createRequest();
ASSERT_TRUE(request) << "Can't create request";
ASSERT_EQ(request->addBuffer(stream, buffer.get()), 0) << "Can't set buffer for request";
ASSERT_EQ(queueRequest(request.get()), 0) << "Failed to queue request";
requests_.push_back(std::move(request));
}
/* Run capture session. */
loop_ = new EventLoop();
loop_->exec();
stop();
delete loop_;
EXPECT_LE(captureLimit_, captureCount_);
EXPECT_LE(captureCount_, queueCount_);
EXPECT_TRUE(!queueLimit_ || queueCount_ <= *queueLimit_);
ASSERT_EQ(captureCount_, captureLimit_);
}
int Capture::queueRequest(libcamera::Request *request)
int CaptureBalanced::queueRequest(Request *request)
{
if (queueLimit_ && queueCount_ >= *queueLimit_)
queueCount_++;
if (queueCount_ > captureLimit_)
return 0;
int ret = camera_->queueRequest(request);
if (ret < 0)
return ret;
queueCount_ += 1;
return 0;
return camera_->queueRequest(request);
}
void Capture::requestComplete(Request *request)
void CaptureBalanced::requestComplete(Request *request)
{
EXPECT_EQ(request->status(), Request::Status::RequestComplete)
<< "Request didn't complete successfully";
captureCount_++;
if (captureCount_ >= captureLimit_) {
loop_->exit(0);
return;
}
request->reuse(Request::ReuseBuffers);
if (queueRequest(request))
loop_->exit(-EINVAL);
}
/* CaptureUnbalanced */
CaptureUnbalanced::CaptureUnbalanced(std::shared_ptr<Camera> camera)
: Capture(camera)
{
}
void CaptureUnbalanced::capture(unsigned int numRequests)
{
start();
Stream *stream = config_->at(0).stream();
const std::vector<std::unique_ptr<FrameBuffer>> &buffers = allocator_->buffers(stream);
captureCount_ = 0;
captureLimit_ = numRequests;
/* Queue the recommended number of requests. */
for (const std::unique_ptr<FrameBuffer> &buffer : buffers) {
std::unique_ptr<Request> request = camera_->createRequest();
ASSERT_TRUE(request) << "Can't create request";
ASSERT_EQ(request->addBuffer(stream, buffer.get()), 0) << "Can't set buffer for request";
ASSERT_EQ(camera_->queueRequest(request.get()), 0) << "Failed to queue request";
requests_.push_back(std::move(request));
}
/* Run capture session. */
loop_ = new EventLoop();
int status = loop_->exec();
stop();
delete loop_;
ASSERT_EQ(status, 0);
}
void CaptureUnbalanced::requestComplete(Request *request)
{
captureCount_++;
if (captureCount_ >= captureLimit_) {
@ -109,68 +191,6 @@ void Capture::requestComplete(Request *request)
<< "Request didn't complete successfully";
request->reuse(Request::ReuseBuffers);
if (queueRequest(request))
if (camera_->queueRequest(request))
loop_->exit(-EINVAL);
}
void Capture::start()
{
assert(config_);
assert(!config_->empty());
assert(!allocator_.allocated());
assert(requests_.empty());
const auto bufferCount = config_->at(0).bufferCount;
/* No point in testing fewer requests than the camera depth. */
if (queueLimit_ && *queueLimit_ < bufferCount) {
GTEST_SKIP() << "Camera needs " << bufferCount
<< " requests, can't test only " << *queueLimit_;
}
for (std::size_t i = 0; i < bufferCount; i++) {
std::unique_ptr<Request> request = camera_->createRequest();
ASSERT_TRUE(request) << "Can't create request";
requests_.push_back(std::move(request));
}
for (const auto &cfg : *config_) {
Stream *stream = cfg.stream();
int count = allocator_.allocate(stream);
ASSERT_GE(count, 0) << "Failed to allocate buffers";
const auto &buffers = allocator_.buffers(stream);
ASSERT_EQ(buffers.size(), bufferCount) << "Mismatching buffer count";
for (std::size_t i = 0; i < bufferCount; i++) {
ASSERT_EQ(requests_[i]->addBuffer(stream, buffers[i].get()), 0)
<< "Failed to add buffer to request";
}
}
ASSERT_TRUE(allocator_.allocated());
camera_->requestCompleted.connect(this, &Capture::requestComplete);
ASSERT_EQ(camera_->start(), 0) << "Failed to start camera";
}
void Capture::stop()
{
if (!config_ || !allocator_.allocated())
return;
camera_->stop();
camera_->requestCompleted.disconnect(this);
requests_.clear();
for (const auto &cfg : *config_) {
EXPECT_EQ(allocator_.free(cfg.stream()), 0)
<< "Failed to free buffers associated with stream";
}
EXPECT_FALSE(allocator_.allocated());
}

View file

@ -8,7 +8,6 @@
#pragma once
#include <memory>
#include <optional>
#include <libcamera/libcamera.h>
@ -17,29 +16,51 @@
class Capture
{
public:
void configure(libcamera::StreamRole role);
protected:
Capture(std::shared_ptr<libcamera::Camera> camera);
~Capture();
void configure(libcamera::Span<const libcamera::StreamRole> roles);
void run(unsigned int captureLimit, std::optional<unsigned int> queueLimit = {});
private:
LIBCAMERA_DISABLE_COPY_AND_MOVE(Capture)
virtual ~Capture();
void start();
void stop();
int queueRequest(libcamera::Request *request);
void requestComplete(libcamera::Request *request);
virtual void requestComplete(libcamera::Request *request) = 0;
EventLoop *loop_;
std::shared_ptr<libcamera::Camera> camera_;
libcamera::FrameBufferAllocator allocator_;
std::unique_ptr<libcamera::FrameBufferAllocator> allocator_;
std::unique_ptr<libcamera::CameraConfiguration> config_;
std::vector<std::unique_ptr<libcamera::Request>> requests_;
EventLoop *loop_ = nullptr;
unsigned int captureLimit_ = 0;
std::optional<unsigned int> queueLimit_;
unsigned int captureCount_ = 0;
unsigned int queueCount_ = 0;
};
class CaptureBalanced : public Capture
{
public:
CaptureBalanced(std::shared_ptr<libcamera::Camera> camera);
void capture(unsigned int numRequests);
private:
int queueRequest(libcamera::Request *request);
void requestComplete(libcamera::Request *request) override;
unsigned int queueCount_;
unsigned int captureCount_;
unsigned int captureLimit_;
};
class CaptureUnbalanced : public Capture
{
public:
CaptureUnbalanced(std::shared_ptr<libcamera::Camera> camera);
void capture(unsigned int numRequests);
private:
void requestComplete(libcamera::Request *request) override;
unsigned int captureCount_;
unsigned int captureLimit_;
};
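
Typical use, as exercised by the tests further below (a sketch against the v0.4.0 API declared here; `camera` is a std::shared_ptr<libcamera::Camera> obtained elsewhere):

    CaptureBalanced capture(camera);
    capture.configure(libcamera::StreamRole::Viewfinder);
    capture.capture(8); /* queue and require completion of exactly 8 requests */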

View file

@ -45,11 +45,13 @@ class ThrowListener : public testing::EmptyTestEventListener
static void listCameras(CameraManager *cm)
{
for (const std::shared_ptr<Camera> &cam : cm->cameras())
std::cout << "- " << cam->id() << std::endl;
std::cout << "- " << cam.get()->id() << std::endl;
}
static int initCamera(CameraManager *cm, OptionsParser::Options options)
{
std::shared_ptr<Camera> camera;
int ret = cm->start();
if (ret) {
std::cout << "Failed to start camera manager: "
@ -64,7 +66,7 @@ static int initCamera(CameraManager *cm, OptionsParser::Options options)
}
const std::string &cameraId = options[OptCamera];
std::shared_ptr<Camera> camera = cm->get(cameraId);
camera = cm->get(cameraId);
if (!camera) {
std::cout << "Camera " << cameraId << " not found, available cameras:" << std::endl;
listCameras(cm);
@ -80,27 +82,45 @@ static int initCamera(CameraManager *cm, OptionsParser::Options options)
static int initGtestParameters(char *arg0, OptionsParser::Options options)
{
std::vector<const char *> argv;
const std::map<std::string, std::string> gtestFlags = { { "list", "--gtest_list_tests" },
{ "filter", "--gtest_filter" } };
int argc = 0;
std::string filterParam;
argv.push_back(arg0);
/*
* +2 to have space for both the 0th argument, which is needed but
* not used, and the terminating null.
*/
char **argv = new char *[(gtestFlags.size() + 2)];
if (!argv)
return -ENOMEM;
if (options.isSet(OptList))
argv.push_back("--gtest_list_tests");
argv[0] = arg0;
argc++;
if (options.isSet(OptList)) {
argv[argc] = const_cast<char *>(gtestFlags.at("list").c_str());
argc++;
}
if (options.isSet(OptFilter)) {
/*
* The filter flag needs to be passed as a single parameter, in
* the format --gtest_filter=filterStr
*/
filterParam = "--gtest_filter=" + options[OptFilter].toString();
argv.push_back(filterParam.c_str());
filterParam = gtestFlags.at("filter") + "=" +
static_cast<const std::string &>(options[OptFilter]);
argv[argc] = const_cast<char *>(filterParam.c_str());
argc++;
}
argv.push_back(nullptr);
argv[argc] = nullptr;
int argc = argv.size();
::testing::InitGoogleTest(&argc, const_cast<char **>(argv.data()));
::testing::InitGoogleTest(&argc, argv);
delete[] argv;
return 0;
}

View file

@ -15,7 +15,6 @@ lc_compliance_sources = files([
'environment.cpp',
'helpers/capture.cpp',
'main.cpp',
'test_base.cpp',
'tests/capture_test.cpp',
])

View file

@ -1,28 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2021, Collabora Ltd.
*
* test_base.cpp - Base definitions for tests
*/
#include "test_base.h"
#include "environment.h"
void CameraHolder::acquireCamera()
{
Environment *env = Environment::get();
camera_ = env->cm()->get(env->cameraId());
ASSERT_EQ(camera_->acquire(), 0);
}
void CameraHolder::releaseCamera()
{
if (!camera_)
return;
camera_->release();
camera_.reset();
}

View file

@ -1,24 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2021, Collabora Ltd.
*
* test_base.h - Base definitions for tests
*/
#ifndef __LC_COMPLIANCE_TEST_BASE_H__
#define __LC_COMPLIANCE_TEST_BASE_H__
#include <libcamera/libcamera.h>
#include <gtest/gtest.h>
class CameraHolder
{
protected:
void acquireCamera();
void releaseCamera();
std::shared_ptr<libcamera::Camera> camera_;
};
#endif /* __LC_COMPLIANCE_TEST_BASE_H__ */

View file

@ -8,54 +8,69 @@
#include "capture.h"
#include <sstream>
#include <string>
#include <tuple>
#include <vector>
#include <iostream>
#include <gtest/gtest.h>
#include "test_base.h"
namespace {
#include "environment.h"
using namespace libcamera;
class SimpleCapture : public testing::TestWithParam<std::tuple<std::vector<StreamRole>, int>>, public CameraHolder
const std::vector<int> NUMREQUESTS = { 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };
const std::vector<StreamRole> ROLES = {
StreamRole::Raw,
StreamRole::StillCapture,
StreamRole::VideoRecording,
StreamRole::Viewfinder
};
class SingleStream : public testing::TestWithParam<std::tuple<StreamRole, int>>
{
public:
static std::string nameParameters(const testing::TestParamInfo<SimpleCapture::ParamType> &info);
static std::string nameParameters(const testing::TestParamInfo<SingleStream::ParamType> &info);
protected:
void SetUp() override;
void TearDown() override;
std::shared_ptr<Camera> camera_;
};
/*
* We use gtest's SetUp() and TearDown() instead of constructor and destructor
* in order to be able to assert on them.
*/
void SimpleCapture::SetUp()
void SingleStream::SetUp()
{
acquireCamera();
Environment *env = Environment::get();
camera_ = env->cm()->get(env->cameraId());
ASSERT_EQ(camera_->acquire(), 0);
}
void SimpleCapture::TearDown()
void SingleStream::TearDown()
{
releaseCamera();
if (!camera_)
return;
camera_->release();
camera_.reset();
}
std::string SimpleCapture::nameParameters(const testing::TestParamInfo<SimpleCapture::ParamType> &info)
std::string SingleStream::nameParameters(const testing::TestParamInfo<SingleStream::ParamType> &info)
{
const auto &[roles, numRequests] = info.param;
std::ostringstream ss;
std::map<StreamRole, std::string> rolesMap = {
{ StreamRole::Raw, "Raw" },
{ StreamRole::StillCapture, "StillCapture" },
{ StreamRole::VideoRecording, "VideoRecording" },
{ StreamRole::Viewfinder, "Viewfinder" }
};
for (StreamRole r : roles)
ss << r << '_';
std::string roleName = rolesMap[std::get<0>(info.param)];
std::string numRequestsName = std::to_string(std::get<1>(info.param));
ss << '_' << numRequests;
return ss.str();
return roleName + "_" + numRequestsName;
}
/*
@ -65,15 +80,15 @@ std::string SimpleCapture::nameParameters(const testing::TestParamInfo<SimpleCap
* failure is a camera that completes fewer requests than the number of requests
* queued.
*/
TEST_P(SimpleCapture, Capture)
TEST_P(SingleStream, Capture)
{
const auto &[roles, numRequests] = GetParam();
auto [role, numRequests] = GetParam();
Capture capture(camera_);
CaptureBalanced capture(camera_);
capture.configure(roles);
capture.configure(role);
capture.run(numRequests, numRequests);
capture.capture(numRequests);
}
/*
@ -83,17 +98,17 @@ TEST_P(SimpleCapture, Capture)
* a camera that does not clean up correctly in its error path but is only
* tested by single-capture applications.
*/
TEST_P(SimpleCapture, CaptureStartStop)
TEST_P(SingleStream, CaptureStartStop)
{
const auto &[roles, numRequests] = GetParam();
auto [role, numRequests] = GetParam();
unsigned int numRepeats = 3;
Capture capture(camera_);
CaptureBalanced capture(camera_);
capture.configure(roles);
capture.configure(role);
for (unsigned int starts = 0; starts < numRepeats; starts++)
capture.run(numRequests, numRequests);
capture.capture(numRequests);
}
/*
@ -103,43 +118,19 @@ TEST_P(SimpleCapture, CaptureStartStop)
* is a camera that does not handle cancellation of buffers coming back from the
* video device while stopping.
*/
TEST_P(SimpleCapture, UnbalancedStop)
TEST_P(SingleStream, UnbalancedStop)
{
const auto &[roles, numRequests] = GetParam();
auto [role, numRequests] = GetParam();
Capture capture(camera_);
CaptureUnbalanced capture(camera_);
capture.configure(roles);
capture.configure(role);
capture.run(numRequests);
capture.capture(numRequests);
}
const int NUMREQUESTS[] = { 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };
const std::vector<StreamRole> SINGLEROLES[] = {
{ StreamRole::Raw, },
{ StreamRole::StillCapture, },
{ StreamRole::VideoRecording, },
{ StreamRole::Viewfinder, },
};
const std::vector<StreamRole> MULTIROLES[] = {
{ StreamRole::Raw, StreamRole::StillCapture },
{ StreamRole::Raw, StreamRole::VideoRecording },
{ StreamRole::StillCapture, StreamRole::VideoRecording },
{ StreamRole::VideoRecording, StreamRole::VideoRecording },
};
INSTANTIATE_TEST_SUITE_P(SingleStream,
SimpleCapture,
testing::Combine(testing::ValuesIn(SINGLEROLES),
INSTANTIATE_TEST_SUITE_P(CaptureTests,
SingleStream,
testing::Combine(testing::ValuesIn(ROLES),
testing::ValuesIn(NUMREQUESTS)),
SimpleCapture::nameParameters);
INSTANTIATE_TEST_SUITE_P(MultiStream,
SimpleCapture,
testing::Combine(testing::ValuesIn(MULTIROLES),
testing::ValuesIn(NUMREQUESTS)),
SimpleCapture::nameParameters);
} /* namespace */
SingleStream::nameParameters);

View file

@ -249,7 +249,7 @@ void FormatConverter::convertYUVPacked(const Image *srcImage, unsigned char *dst
dst_stride = width_ * 4;
for (src_y = 0, dst_y = 0; dst_y < height_; src_y++, dst_y++) {
for (src_x = 0, dst_x = 0; dst_x < width_;) {
for (src_x = 0, dst_x = 0; dst_x < width_; ) {
cb = src[src_y * src_stride + src_x * 4 + cb_pos_];
cr = src[src_y * src_stride + src_x * 4 + cr_pos];

View file

@ -356,9 +356,6 @@ int MainWindow::startCapture()
/* Verify roles are supported. */
switch (roles.size()) {
case 0:
roles.push_back(StreamRole::Viewfinder);
break;
case 1:
if (roles[0] != StreamRole::Viewfinder) {
qWarning() << "Only viewfinder supported for single stream";
@ -389,7 +386,10 @@ int MainWindow::startCapture()
/* Use a format supported by the viewfinder if available. */
std::vector<PixelFormat> formats = vfConfig.formats().pixelformats();
for (const PixelFormat &format : viewfinder_->nativeFormats()) {
auto match = std::find(formats.begin(), formats.end(), format);
auto match = std::find_if(formats.begin(), formats.end(),
[&](const PixelFormat &f) {
return f == format;
});
if (match != formats.end()) {
vfConfig.pixelFormat = format;
break;

View file

@ -39,7 +39,7 @@ static void value_set_rectangle(GValue *value, const Rectangle &rect)
GValue height = G_VALUE_INIT;
g_value_init(&height, G_TYPE_INT);
g_value_set_int(&height, size.height);
g_value_set_int(&x, size.height);
gst_value_array_append_and_take_value(value, &height);
}
@ -68,7 +68,7 @@ static const GEnumValue {{ ctrl.name|snake_case }}_types[] = {
"{{ enum.gst_name }}"
},
{%- endfor %}
{0, nullptr, nullptr}
{0, NULL, NULL}
};
#define TYPE_{{ ctrl.name|snake_case|upper }} \
@ -223,6 +223,7 @@ bool GstCameraControls::setProperty(guint propId, const GValue *value,
{%- for ctrl in ctrls %}
case controls::{{ ctrl.namespace }}{{ ctrl.name|snake_case|upper }}: {
ControlValue control;
{%- if ctrl.is_array %}
size_t size = gst_value_array_get_size(value);
{%- if ctrl.size != 0 %}
@ -253,9 +254,12 @@ bool GstCameraControls::setProperty(guint propId, const GValue *value,
}
{%- if ctrl.size == 0 %}
Span<const {{ ctrl.element_type }}> val(values.data(), size);
control.set(Span<const {{ ctrl.element_type }}>(values.data(),
size));
{%- else %}
Span<const {{ ctrl.element_type }}, {{ ctrl.size }}> val(values.data(), size);
control.set(Span<const {{ ctrl.element_type }},
{{ ctrl.size }}>(values.data(),
{{ ctrl.size }}));
{%- endif %}
{%- else %}
{%- if ctrl.is_rectangle %}
@ -269,9 +273,10 @@ bool GstCameraControls::setProperty(guint propId, const GValue *value,
{%- else %}
auto val = g_value_get_{{ ctrl.gtype }}(value);
{%- endif %}
control.set(val);
{%- endif %}
controls_.set(controls::{{ ctrl.namespace }}{{ ctrl.name }}, val);
controls_acc_.set(controls::{{ ctrl.namespace }}{{ ctrl.name }}, val);
controls_.set(propId, control);
controls_acc_.set(propId, control);
return true;
}
{%- endfor %}

View file

@ -74,7 +74,6 @@ static struct {
{ GST_VIDEO_FORMAT_I420, formats::YUV420 },
{ GST_VIDEO_FORMAT_YV12, formats::YVU420 },
{ GST_VIDEO_FORMAT_Y42B, formats::YUV422 },
{ GST_VIDEO_FORMAT_Y444, formats::YUV444 },
/* YUV Packed */
{ GST_VIDEO_FORMAT_UYVY, formats::UYVY },
@ -494,12 +493,9 @@ void gst_libcamera_configure_stream_from_caps(StreamConfiguration &stream_cfg,
/* Configure colorimetry */
if (gst_structure_has_field(s, "colorimetry")) {
const gchar *colorimetry_str;
const gchar *colorimetry_str = gst_structure_get_string(s, "colorimetry");
GstVideoColorimetry colorimetry;
gst_structure_fixate_field(s, "colorimetry");
colorimetry_str = gst_structure_get_string(s, "colorimetry");
if (!gst_video_colorimetry_from_string(&colorimetry, colorimetry_str))
g_critical("Invalid colorimetry %s", colorimetry_str);
@ -599,43 +595,6 @@ gst_task_resume(GstTask *task)
}
#endif
#if !GST_CHECK_VERSION(1, 22, 0)
/*
* Copyright (C) <1999> Erik Walthinsen <omega@cse.ogi.edu>
* Library <2002> Ronald Bultje <rbultje@ronald.bitfreak.net>
* Copyright (C) <2007> David A. Schleef <ds@schleef.org>
*/
/*
* This function has been imported directly from the gstreamer project to
* support backwards compatibility and should be removed when the older version
* is no longer supported.
*/
gint gst_video_format_info_extrapolate_stride(const GstVideoFormatInfo *finfo, gint plane, gint stride)
{
gint estride;
gint comp[GST_VIDEO_MAX_COMPONENTS];
gint i;
/* There is nothing to extrapolate on first plane. */
if (plane == 0)
return stride;
gst_video_format_info_component(finfo, plane, comp);
/*
* For now, all planar formats have a single component on the first
* plane, but if there were a planar format with more, we'd have to
* make a ratio of the number of components on the first plane
* against the number of components on the current plane.
*/
estride = 0;
for (i = 0; i < GST_VIDEO_MAX_COMPONENTS && comp[i] >= 0; i++)
estride += GST_VIDEO_FORMAT_INFO_SCALE_WIDTH(finfo, comp[i], stride);
return estride;
}
#endif
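
Worked example, based on GStreamer's format layout: for NV12, plane 1 carries both chroma components at half the luma width, so the loop adds 2 × (stride / 2) and the extrapolated stride equals the luma stride, as expected for NV12.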
G_LOCK_DEFINE_STATIC(cm_singleton_lock);
static std::weak_ptr<CameraManager> cm_singleton_ptr;

View file

@ -36,11 +36,6 @@ static inline void gst_clear_event(GstEvent **event_ptr)
#if !GST_CHECK_VERSION(1, 17, 1)
gboolean gst_task_resume(GstTask *task);
#endif
#if !GST_CHECK_VERSION(1, 22, 0)
gint gst_video_format_info_extrapolate_stride(const GstVideoFormatInfo *finfo, gint plane, gint stride);
#endif
std::shared_ptr<libcamera::CameraManager> gst_libcamera_get_camera_manager(int &ret);
/**

View file

@ -8,8 +8,6 @@
#include "gstlibcameraallocator.h"
#include <utility>
#include <libcamera/camera.h>
#include <libcamera/framebuffer_allocator.h>
#include <libcamera/stream.h>
@ -201,20 +199,22 @@ GstLibcameraAllocator *
gst_libcamera_allocator_new(std::shared_ptr<Camera> camera,
CameraConfiguration *config_)
{
g_autoptr(GstLibcameraAllocator) self = GST_LIBCAMERA_ALLOCATOR(g_object_new(GST_TYPE_LIBCAMERA_ALLOCATOR,
nullptr));
auto *self = GST_LIBCAMERA_ALLOCATOR(g_object_new(GST_TYPE_LIBCAMERA_ALLOCATOR,
nullptr));
gint ret;
self->cm_ptr = new std::shared_ptr<CameraManager>(gst_libcamera_get_camera_manager(ret));
if (ret)
if (ret) {
g_object_unref(self);
return nullptr;
}
self->fb_allocator = new FrameBufferAllocator(camera);
for (StreamConfiguration &streamCfg : *config_) {
Stream *stream = streamCfg.stream();
ret = self->fb_allocator->allocate(stream);
if (ret <= 0)
if (ret == 0)
return nullptr;
GQueue *pool = g_queue_new();
@ -228,7 +228,7 @@ gst_libcamera_allocator_new(std::shared_ptr<Camera> camera,
g_hash_table_insert(self->pools, stream, pool);
}
return std::exchange(self, nullptr);
return self;
}
bool

View file

@ -18,8 +18,6 @@ struct _GstLibcameraPad {
GstPad parent;
StreamRole role;
GstLibcameraPool *pool;
GstBufferPool *video_pool;
GstVideoInfo info;
GstClockTime latency;
};
@ -72,10 +70,6 @@ gst_libcamera_pad_query(GstPad *pad, GstObject *parent, GstQuery *query)
if (query->type != GST_QUERY_LATENCY)
return gst_pad_query_default(pad, parent, query);
GLibLocker lock(GST_OBJECT(self));
if (self->latency == GST_CLOCK_TIME_NONE)
return FALSE;
/* TRUE here means live; we assume that the max latency is the same as
 * the min, as we have no idea of the duration of frames. */
gst_query_set_latency(query, TRUE, self->latency, self->latency);
@ -85,7 +79,6 @@ gst_libcamera_pad_query(GstPad *pad, GstObject *parent, GstQuery *query)
static void
gst_libcamera_pad_init(GstLibcameraPad *self)
{
self->latency = GST_CLOCK_TIME_NONE;
GST_PAD_QUERYFUNC(self) = gst_libcamera_pad_query;
}
@ -107,7 +100,7 @@ gst_libcamera_stream_role_get_type()
"libcamera::Viewfinder",
"view-finder",
},
{ 0, nullptr, nullptr }
{ 0, NULL, NULL }
};
if (!type)
@ -160,35 +153,6 @@ gst_libcamera_pad_set_pool(GstPad *pad, GstLibcameraPool *pool)
self->pool = pool;
}
GstBufferPool *
gst_libcamera_pad_get_video_pool(GstPad *pad)
{
auto *self = GST_LIBCAMERA_PAD(pad);
return self->video_pool;
}
void gst_libcamera_pad_set_video_pool(GstPad *pad, GstBufferPool *video_pool)
{
auto *self = GST_LIBCAMERA_PAD(pad);
if (self->video_pool)
g_object_unref(self->video_pool);
self->video_pool = video_pool;
}
GstVideoInfo gst_libcamera_pad_get_video_info(GstPad *pad)
{
auto *self = GST_LIBCAMERA_PAD(pad);
return self->info;
}
void gst_libcamera_pad_set_video_info(GstPad *pad, const GstVideoInfo *info)
{
auto *self = GST_LIBCAMERA_PAD(pad);
self->info = *info;
}
Stream *
gst_libcamera_pad_get_stream(GstPad *pad)
{

View file

@ -23,14 +23,6 @@ GstLibcameraPool *gst_libcamera_pad_get_pool(GstPad *pad);
void gst_libcamera_pad_set_pool(GstPad *pad, GstLibcameraPool *pool);
GstBufferPool *gst_libcamera_pad_get_video_pool(GstPad *pad);
void gst_libcamera_pad_set_video_pool(GstPad *pad, GstBufferPool *video_pool);
GstVideoInfo gst_libcamera_pad_get_video_info(GstPad *pad);
void gst_libcamera_pad_set_video_info(GstPad *pad, const GstVideoInfo *info);
libcamera::Stream *gst_libcamera_pad_get_stream(GstPad *pad);
void gst_libcamera_pad_set_latency(GstPad *pad, GstClockTime latency);

Some files were not shown because too many files have changed in this diff.