Compare commits

..

No commits in common. "master" and "v0.5.0" have entirely different histories.

148 changed files with 898 additions and 4552 deletions

View file

@@ -57,8 +57,7 @@ GENERATE_LATEX = NO
MACRO_EXPANSION = YES
EXPAND_ONLY_PREDEF = YES
INCLUDE_PATH = "@TOP_BUILDDIR@/include" \
"@TOP_SRCDIR@/include"
INCLUDE_PATH = "@TOP_SRCDIR@/include/libcamera"
INCLUDE_FILE_PATTERNS = *.h
IMAGE_PATH = "@TOP_SRCDIR@/Documentation/images"

View file

@@ -618,7 +618,7 @@ accordingly. In this example, the application file has been named
simple_cam = executable('simple-cam',
'simple-cam.cpp',
dependencies: dependency('libcamera'))
dependencies: dependency('libcamera', required : true))
The ``dependencies`` line instructs meson to ask ``pkgconfig`` (or ``cmake``) to
locate the ``libcamera`` library, which the test application will be

View file

@@ -213,7 +213,7 @@ implementations for the overridden class members.
std::vector<std::unique_ptr<FrameBuffer>> *buffers) override;
int start(Camera *camera, const ControlList *controls) override;
void stopDevice(Camera *camera) override;
void stop(Camera *camera) override;
int queueRequestDevice(Camera *camera, Request *request) override;
@@ -247,7 +247,7 @@ implementations for the overridden class members.
return -1;
}
void PipelineHandlerVivid::stopDevice(Camera *camera)
void PipelineHandlerVivid::stop(Camera *camera)
{
}
@@ -521,14 +521,14 @@ handler and camera manager using `registerCamera`_.
Finally, with a successful construction, we return 'true', indicating that the
PipelineHandler successfully matched and constructed a device.
.. _Camera::create: https://libcamera.org/internal-api-html/classlibcamera_1_1Camera.html#adf5e6c22411f953bfaa1ae21155d6c31
.. _Camera::create: https://libcamera.org/api-html/classlibcamera_1_1Camera.html#a453740e0d2a2f495048ae307a85a2574
.. _registerCamera: https://libcamera.org/api-html/classlibcamera_1_1PipelineHandler.html#adf02a7f1bbd87aca73c0e8d8e0e6c98b
.. code-block:: cpp
std::set<Stream *> streams{ &data->stream_ };
std::shared_ptr<Camera> camera = Camera::create(std::move(data), data->video_->deviceName(), streams);
registerCamera(std::move(camera));
std::shared_ptr<Camera> camera = Camera::create(this, data->video_->deviceName(), streams);
registerCamera(std::move(camera), std::move(data));
return true;
@@ -554,7 +554,8 @@ Our match function should now look like the following:
/* Create and register the camera. */
std::set<Stream *> streams{ &data->stream_ };
std::shared_ptr<Camera> camera = Camera::create(std::move(data), data->video_->deviceName(), streams);
const std::string &id = data->video_->deviceName();
std::shared_ptr<Camera> camera = Camera::create(data.release(), id, streams);
registerCamera(std::move(camera));
return true;
@@ -592,11 +593,11 @@ immutable properties of the ``Camera`` device.
The libcamera controls and properties are defined in YAML form which is
processed to automatically generate documentation and interfaces. Controls are
defined by the src/libcamera/`control_ids_core.yaml`_ file and camera properties
are defined by src/libcamera/`property_ids_core.yaml`_.
are defined by src/libcamera/`properties_ids_core.yaml`_.
.. _controls framework: https://libcamera.org/api-html/controls_8h.html
.. _control_ids_core.yaml: https://libcamera.org/api-html/control__ids_8h.html
.. _property_ids_core.yaml: https://libcamera.org/api-html/property__ids_8h.html
.. _properties_ids_core.yaml: https://libcamera.org/api-html/property__ids_8h.html
Pipeline handlers can optionally register the list of controls an application
can set as well as a list of immutable camera properties. Being both
@@ -799,7 +800,8 @@ derived class, and assign it to a base class pointer.
.. code-block:: cpp
auto config = std::make_unique<VividCameraConfiguration>();
VividCameraData *data = cameraData(camera);
CameraConfiguration *config = new VividCameraConfiguration();
A ``CameraConfiguration`` is specific to each pipeline, so you can only create
it from the pipeline handler code path. Applications can also generate an empty
@@ -827,7 +829,9 @@ To generate a ``StreamConfiguration``, you need a list of pixel formats and
frame sizes which are supported as outputs of the stream. You can fetch a map of
the ``V4LPixelFormat`` and ``SizeRange`` supported by the underlying output
device, but the pipeline handler needs to convert this to a
``libcamera::PixelFormat`` type to pass to applications.
``libcamera::PixelFormat`` type to pass to applications. We do this here using
``std::transform`` to convert the formats and populate a new ``PixelFormat`` map
as shown below.
Continue adding the following code example to our ``generateConfiguration``
implementation.
@@ -837,12 +841,14 @@ implementation.
std::map<V4L2PixelFormat, std::vector<SizeRange>> v4l2Formats =
data->video_->formats();
std::map<PixelFormat, std::vector<SizeRange>> deviceFormats;
for (auto &[v4l2PixelFormat, sizes] : v4l2Formats) {
PixelFormat pixelFormat = v4l2PixelFormat.toPixelFormat();
if (pixelFormat.isValid())
deviceFormats.try_emplace(pixelFormat, std::move(sizes));
}
std::transform(v4l2Formats.begin(), v4l2Formats.end(),
std::inserter(deviceFormats, deviceFormats.begin()),
[&](const decltype(v4l2Formats)::value_type &format) {
return decltype(deviceFormats)::value_type{
format.first.toPixelFormat(),
format.second
};
});
The `StreamFormats`_ class holds information about the pixel formats and frame
sizes that a stream can support. The class groups size information by the pixel
@@ -932,9 +938,9 @@ Add the following function implementation to your file:
StreamConfiguration &cfg = config_[0];
const std::vector<libcamera::PixelFormat> &formats = cfg.formats().pixelformats();
const std::vector<libcamera::PixelFormat> formats = cfg.formats().pixelformats();
if (std::find(formats.begin(), formats.end(), cfg.pixelFormat) == formats.end()) {
cfg.pixelFormat = formats[0];
cfg.pixelFormat = cfg.formats().pixelformats()[0];
LOG(VIVID, Debug) << "Adjusting format to " << cfg.pixelFormat.toString();
status = Adjusted;
}
@@ -1152,7 +1158,7 @@ available to the devices which have to be started and ready to produce
images. At the end of a capture session the ``Camera`` device needs to be
stopped, to gracefully clean up any allocated memory and stop the hardware
devices. Pipeline handlers implement two functions for these purposes, the
``start()`` and ``stopDevice()`` functions.
``start()`` and ``stop()`` functions.
The memory initialization phase that happens at ``start()`` time serves to
configure video devices to be able to use memory buffers exported as dma-buf
@@ -1255,8 +1261,8 @@ algorithms, or other devices you should also stop them.
.. _releaseBuffers: https://libcamera.org/api-html/classlibcamera_1_1V4L2VideoDevice.html#a191619c152f764e03bc461611f3fcd35
Of course we also need to handle the corresponding actions to stop streaming on
a device. Add the following to the ``stopDevice()`` function, to stop the
stream with the `streamOff`_ function and release all buffers.
a device. Add the following to the ``stop`` function, to stop the stream with
the `streamOff`_ function and release all buffers.
.. _streamOff: https://libcamera.org/api-html/classlibcamera_1_1V4L2VideoDevice.html#a61998710615bdf7aa25a046c8565ed66

View file

@@ -116,8 +116,10 @@ endif
# Sphinx
#
sphinx = find_program('sphinx-build-3', 'sphinx-build',
required : get_option('documentation'))
sphinx = find_program('sphinx-build-3', required : false)
if not sphinx.found()
sphinx = find_program('sphinx-build', required : get_option('documentation'))
endif
if sphinx.found()
docs_sources = [

View file

@@ -44,7 +44,7 @@ A C++ toolchain: [required]
Either {g++, clang}
Meson Build system: [required]
meson (>= 0.63) ninja-build pkg-config
meson (>= 0.60) ninja-build pkg-config
for the libcamera core: [required]
libyaml-dev python3-yaml python3-ply python3-jinja2
@@ -83,10 +83,9 @@ for cam: [optional]
- libdrm-dev: Enables the KMS sink
- libjpeg-dev: Enables MJPEG on the SDL sink
- libsdl2-dev: Enables the SDL sink
- libtiff-dev: Enables writing DNG
for qcam: [optional]
libtiff-dev qt6-base-dev
libtiff-dev qt6-base-dev qt6-tools-dev-tools
for tracing with lttng: [optional]
liblttng-ust-dev python3-jinja2 lttng-tools

View file

@@ -120,7 +120,7 @@ struct control_type<Point> {
};
template<typename T, std::size_t N>
struct control_type<Span<T, N>, std::enable_if_t<control_type<std::remove_cv_t<T>>::size == 0>> : public control_type<std::remove_cv_t<T>> {
struct control_type<Span<T, N>> : public control_type<std::remove_cv_t<T>> {
static constexpr std::size_t size = N;
};

View file

@@ -26,7 +26,6 @@ struct FrameMetadata {
FrameSuccess,
FrameError,
FrameCancelled,
FrameStartup,
};
struct Plane {

View file

@@ -1,68 +0,0 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2024, Raspberry Pi Ltd
*
* Camera recovery algorithm
*/
#pragma once
#include <stdint.h>
namespace libcamera {
class ClockRecovery
{
public:
ClockRecovery();
void configure(unsigned int numSamples = 100, unsigned int maxJitter = 2000,
unsigned int minSamples = 10, unsigned int errorThreshold = 50000);
void reset();
void addSample();
void addSample(uint64_t input, uint64_t output);
uint64_t getOutput(uint64_t input);
private:
/* Approximate number of samples over which the model state persists. */
unsigned int numSamples_;
/* Remove any output jitter larger than this immediately. */
unsigned int maxJitter_;
/* Number of samples required before we start to use model estimates. */
unsigned int minSamples_;
/* Threshold above which we assume the wallclock has been reset. */
unsigned int errorThreshold_;
/* How many samples seen (up to numSamples_). */
unsigned int count_;
/* This gets subtracted from all input values, just to make the numbers easier. */
uint64_t inputBase_;
/* As above, for the output. */
uint64_t outputBase_;
/* The previous input sample. */
uint64_t lastInput_;
/* The previous output sample. */
uint64_t lastOutput_;
/* Average x value seen so far. */
double xAve_;
/* Average y value seen so far. */
double yAve_;
/* Average x^2 value seen so far. */
double x2Ave_;
/* Average x*y value seen so far. */
double xyAve_;
/*
* The latest estimate of linear parameters to derive the output clock
* from the input.
*/
double slope_;
double offset_;
/* Use this cumulative error to monitor for spontaneous clock updates. */
double error_;
};
} /* namespace libcamera */

View file

@@ -10,15 +10,13 @@
#include <stdint.h>
#include <unordered_map>
#include <libcamera/base/object.h>
#include <libcamera/controls.h>
namespace libcamera {
class V4L2Device;
class DelayedControls : public Object
class DelayedControls
{
public:
struct ControlParams {

View file

@@ -309,6 +309,7 @@ public:
serialize(const Flags<E> &data, [[maybe_unused]] ControlSerializer *cs = nullptr)
{
std::vector<uint8_t> dataVec;
dataVec.reserve(sizeof(Flags<E>));
appendPOD<uint32_t>(dataVec, static_cast<typename Flags<E>::Type>(data));
return { dataVec, {} };

View file

@@ -29,7 +29,7 @@ public:
bool isValid() const;
const struct IPAModuleInfo &info() const;
const std::vector<uint8_t> &signature() const;
const std::vector<uint8_t> signature() const;
const std::string &path() const;
bool load();

View file

@@ -8,7 +8,6 @@
#include <algorithm>
#include <sstream>
#include <type_traits>
#include <vector>
#include <libcamera/base/log.h>
@@ -21,19 +20,17 @@ namespace libcamera {
LOG_DECLARE_CATEGORY(Matrix)
#ifndef __DOXYGEN__
template<typename T>
bool matrixInvert(Span<const T> dataIn, Span<T> dataOut, unsigned int dim,
Span<T> scratchBuffer, Span<unsigned int> swapBuffer);
#endif /* __DOXYGEN__ */
template<typename T, unsigned int Rows, unsigned int Cols,
std::enable_if_t<std::is_arithmetic_v<T>> * = nullptr>
#else
template<typename T, unsigned int Rows, unsigned int Cols>
#endif /* __DOXYGEN__ */
class Matrix
{
static_assert(std::is_arithmetic_v<T>, "Matrix type must be arithmetic");
public:
constexpr Matrix()
Matrix()
{
data_.fill(static_cast<T>(0));
}
Matrix(const std::array<T, Rows * Cols> &data)
@@ -41,12 +38,7 @@ public:
std::copy(data.begin(), data.end(), data_.begin());
}
Matrix(const Span<const T, Rows * Cols> data)
{
std::copy(data.begin(), data.end(), data_.begin());
}
static constexpr Matrix identity()
static Matrix identity()
{
Matrix ret;
for (size_t i = 0; i < std::min(Rows, Cols); i++)
@@ -74,14 +66,14 @@ public:
return out.str();
}
constexpr Span<const T, Rows * Cols> data() const { return data_; }
Span<const T, Rows * Cols> data() const { return data_; }
constexpr Span<const T, Cols> operator[](size_t i) const
Span<const T, Cols> operator[](size_t i) const
{
return Span<const T, Cols>{ &data_.data()[i * Cols], Cols };
}
constexpr Span<T, Cols> operator[](size_t i)
Span<T, Cols> operator[](size_t i)
{
return Span<T, Cols>{ &data_.data()[i * Cols], Cols };
}
@@ -98,30 +90,8 @@ public:
return *this;
}
Matrix<T, Rows, Cols> inverse(bool *ok = nullptr) const
{
static_assert(Rows == Cols, "Matrix must be square");
Matrix<T, Rows, Cols> inverse;
std::array<T, Rows * Cols * 2> scratchBuffer;
std::array<unsigned int, Rows> swapBuffer;
bool res = matrixInvert(Span<const T>(data_),
Span<T>(inverse.data_),
Rows,
Span<T>(scratchBuffer),
Span<unsigned int>(swapBuffer));
if (ok)
*ok = res;
return inverse;
}
private:
/*
* \todo The initializer is only necessary for the constructor to be
* constexpr in C++17. Remove the initializer as soon as we are on
* C++20.
*/
std::array<T, Rows * Cols> data_ = {};
std::array<T, Rows * Cols> data_;
};
#ifndef __DOXYGEN__
@@ -153,16 +123,21 @@ Matrix<U, Rows, Cols> operator*(const Matrix<U, Rows, Cols> &m, T d)
return d * m;
}
template<typename T1, unsigned int R1, unsigned int C1, typename T2, unsigned int R2, unsigned int C2>
constexpr Matrix<std::common_type_t<T1, T2>, R1, C2> operator*(const Matrix<T1, R1, C1> &m1,
const Matrix<T2, R2, C2> &m2)
#ifndef __DOXYGEN__
template<typename T,
unsigned int R1, unsigned int C1,
unsigned int R2, unsigned int C2,
std::enable_if_t<C1 == R2> * = nullptr>
#else
template<typename T, unsigned int R1, unsigned int C1, unsigned int R2, unsigned int C2>
#endif /* __DOXYGEN__ */
Matrix<T, R1, C2> operator*(const Matrix<T, R1, C1> &m1, const Matrix<T, R2, C2> &m2)
{
static_assert(C1 == R2, "Matrix dimensions must match for multiplication");
Matrix<std::common_type_t<T1, T2>, R1, C2> result;
Matrix<T, R1, C2> result;
for (unsigned int i = 0; i < R1; i++) {
for (unsigned int j = 0; j < C2; j++) {
std::common_type_t<T1, T2> sum = 0;
T sum = 0;
for (unsigned int k = 0; k < C1; k++)
sum += m1[i][k] * m2[k][j];
@@ -175,7 +150,7 @@ constexpr Matrix<std::common_type_t<T1, T2>, R1, C2> operator*(const Matrix<T1,
}
template<typename T, unsigned int Rows, unsigned int Cols>
constexpr Matrix<T, Rows, Cols> operator+(const Matrix<T, Rows, Cols> &m1, const Matrix<T, Rows, Cols> &m2)
Matrix<T, Rows, Cols> operator+(const Matrix<T, Rows, Cols> &m1, const Matrix<T, Rows, Cols> &m2)
{
Matrix<T, Rows, Cols> result;

View file

@@ -55,8 +55,6 @@ public:
Signal<> disconnected;
std::vector<MediaEntity *> locateEntities(unsigned int function);
protected:
std::string logPrefix() const override;

View file

@@ -1,59 +0,0 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2024, Ideas on Board Oy
*
* Media pipeline support
*/
#pragma once
#include <list>
#include <string>
#include <libcamera/base/log.h>
namespace libcamera {
class CameraSensor;
class MediaEntity;
class MediaLink;
class MediaPad;
struct V4L2SubdeviceFormat;
class MediaPipeline
{
public:
int init(MediaEntity *source, std::string_view sink);
int initLinks();
int configure(CameraSensor *sensor, V4L2SubdeviceFormat *);
private:
struct Entity {
/* The media entity, always valid. */
MediaEntity *entity;
/*
* Whether or not the entity is a subdev that supports the
* routing API.
*/
bool supportsRouting;
/*
* The local sink pad connected to the upstream entity, null for
* the camera sensor at the beginning of the pipeline.
*/
const MediaPad *sink;
/*
* The local source pad connected to the downstream entity, null
* for the video node at the end of the pipeline.
*/
const MediaPad *source;
/*
* The link on the source pad, to the downstream entity, null
* for the video node at the end of the pipeline.
*/
MediaLink *sourceLink;
};
std::list<Entity> entities_;
};
} /* namespace libcamera */

View file

@@ -11,7 +11,6 @@ libcamera_internal_headers = files([
'camera_manager.h',
'camera_sensor.h',
'camera_sensor_properties.h',
'clock_recovery.h',
'control_serializer.h',
'control_validator.h',
'converter.h',
@@ -33,7 +32,6 @@ libcamera_internal_headers = files([
'matrix.h',
'media_device.h',
'media_object.h',
'media_pipeline.h',
'pipeline_handler.h',
'process.h',
'pub_key.h',

View file

@@ -11,7 +11,6 @@
#include <string>
#include <vector>
#include <libcamera/base/class.h>
#include <libcamera/base/signal.h>
#include <libcamera/base/unique_fd.h>
@@ -43,8 +42,6 @@ public:
Signal<enum ExitStatus, int> finished;
private:
LIBCAMERA_DISABLE_COPY_AND_MOVE(Process)
void closeAllFdsExcept(const std::vector<int> &fds);
int isolate();
void died(int wstatus);

View file

@@ -13,7 +13,6 @@
#include <numeric>
#include <optional>
#include <ostream>
#include <type_traits>
#include <libcamera/base/log.h>
#include <libcamera/base/span.h>
@@ -43,12 +42,8 @@ public:
constexpr Vector(const std::array<T, Rows> &data)
{
std::copy(data.begin(), data.end(), data_.begin());
}
constexpr Vector(const Span<const T, Rows> data)
{
std::copy(data.begin(), data.end(), data_.begin());
for (unsigned int i = 0; i < Rows; i++)
data_[i] = data[i];
}
const T &operator[](size_t i) const
@@ -296,13 +291,13 @@ private:
template<typename T>
using RGB = Vector<T, 3>;
template<typename T, typename U, unsigned int Rows, unsigned int Cols>
Vector<std::common_type_t<T, U>, Rows> operator*(const Matrix<T, Rows, Cols> &m, const Vector<U, Cols> &v)
template<typename T, unsigned int Rows, unsigned int Cols>
Vector<T, Rows> operator*(const Matrix<T, Rows, Cols> &m, const Vector<T, Cols> &v)
{
Vector<std::common_type_t<T, U>, Rows> result;
Vector<T, Rows> result;
for (unsigned int i = 0; i < Rows; i++) {
std::common_type_t<T, U> sum = 0;
T sum = 0;
for (unsigned int j = 0; j < Cols; j++)
sum += m[i][j] * v[j];
result[i] = sum;

View file

@@ -52,8 +52,7 @@ struct ConfigResult {
struct StartResult {
libcamera.ControlList controls;
int32 startupFrameCount;
int32 invalidFrameCount;
int32 dropFrameCount;
};
struct PrepareParams {

View file

@@ -90,7 +90,6 @@ foreach mode, entry : controls_map
command : [gen_controls, '-o', '@OUTPUT@',
'--mode', mode, '-t', template_file,
'-r', ranges_file, '@INPUT@'],
depend_files : [py_mod_controls],
env : py_build_env,
install : true,
install_dir : libcamera_headers_install_dir)

View file

@@ -2,7 +2,7 @@
project('libcamera', 'c', 'cpp',
meson_version : '>= 0.63',
version : '0.5.1',
version : '0.5.0',
default_options : [
'werror=true',
'warning_level=2',

View file

@@ -18,7 +18,6 @@ option('cam',
option('documentation',
type : 'feature',
value : 'auto',
description : 'Generate the project documentation')
option('doc_werror',

View file

@@ -1079,7 +1079,7 @@ int CameraDevice::processCaptureRequest(camera3_capture_request_t *camera3Reques
buffer.internalBuffer = frameBuffer;
descriptor->request_->addBuffer(sourceStream->stream(),
frameBuffer);
frameBuffer, nullptr);
requestedStreams.insert(sourceStream);
}

View file

@@ -62,32 +62,11 @@ CameraSession::CameraSession(CameraManager *cm,
return;
}
std::vector<StreamRole> roles =
StreamKeyValueParser::roles(options_[OptStream]);
std::vector<std::vector<StreamRole>> tryRoles;
if (!roles.empty()) {
/*
* If the roles are explicitly specified then there's no need
* to try other roles
*/
tryRoles.push_back(roles);
} else {
tryRoles.push_back({ StreamRole::Viewfinder });
tryRoles.push_back({ StreamRole::Raw });
}
std::vector<StreamRole> roles = StreamKeyValueParser::roles(options_[OptStream]);
std::unique_ptr<CameraConfiguration> config;
bool valid = false;
for (std::vector<StreamRole> &rolesIt : tryRoles) {
config = camera_->generateConfiguration(rolesIt);
if (config && config->size() == rolesIt.size()) {
roles = rolesIt;
valid = true;
break;
}
}
if (!valid) {
std::unique_ptr<CameraConfiguration> config =
camera_->generateConfiguration(roles);
if (!config || config->size() != roles.size()) {
std::cerr << "Failed to get default stream configuration"
<< std::endl;
return;

View file

@@ -8,7 +8,6 @@
#include "capture_script.h"
#include <iostream>
#include <memory>
#include <stdio.h>
#include <stdlib.h>
@@ -522,22 +521,45 @@ ControlValue CaptureScript::parseArrayControl(const ControlId *id,
case ControlTypeNone:
break;
case ControlTypeBool: {
auto values = std::make_unique<bool[]>(repr.size());
/*
* This is unpleasant, but we cannot use an std::vector<> as its
* boolean type overload does not allow access to the raw data,
* as boolean values are stored in a bitmask for efficiency.
*
* As we need a contiguous memory region to wrap in a Span<>,
* use an array instead but be strict about not overflowing it
* by limiting the number of controls we can store.
*
* Be loud but do not fail, as the issue would present at
* runtime and it's not fatal.
*/
static constexpr unsigned int kMaxNumBooleanControls = 1024;
std::array<bool, kMaxNumBooleanControls> values;
unsigned int idx = 0;
for (std::size_t i = 0; i < repr.size(); i++) {
const auto &s = repr[i];
for (const std::string &s : repr) {
bool val;
if (s == "true") {
values[i] = true;
val = true;
} else if (s == "false") {
values[i] = false;
val = false;
} else {
unpackFailure(id, s);
return value;
}
if (idx == kMaxNumBooleanControls) {
std::cerr << "Cannot parse more than "
<< kMaxNumBooleanControls
<< " boolean controls" << std::endl;
break;
}
values[idx++] = val;
}
value = Span<bool>(values.get(), repr.size());
value = Span<bool>(values.data(), idx);
break;
}
case ControlTypeByte: {
@@ -578,6 +600,10 @@ ControlValue CaptureScript::parseArrayControl(const ControlId *id,
value = Span<const float>(values.data(), values.size());
break;
}
case ControlTypeString: {
value = Span<const std::string>(repr.data(), repr.size());
break;
}
default:
std::cerr << "Unsupported control type" << std::endl;
break;

View file

@@ -450,6 +450,8 @@ int Device::openCard()
}
for (struct dirent *res; (res = readdir(folder));) {
uint64_t cap;
if (strncmp(res->d_name, "card", 4))
continue;
@@ -463,22 +465,15 @@
}
/*
* Skip non-display devices. While this could in theory be done
* by checking for support of the mode setting API, some
* out-of-tree render-only GPU drivers (namely powervr)
* incorrectly set the DRIVER_MODESET driver feature. Check for
* the presence of at least one CRTC, encoder and connector
* instead.
* Skip devices that don't support the modeset API, to avoid
* selecting a DRM device corresponding to a GPU. There is no
* modeset capability, but the kernel returns an error for most
* caps if mode setting isn't supported by the driver. The
* DRM_CAP_DUMB_BUFFER capability is one of those, others would
* do as well. The capability value itself isn't relevant.
*/
std::unique_ptr<drmModeRes, decltype(&drmModeFreeResources)> resources{
drmModeGetResources(fd_),
&drmModeFreeResources
};
if (!resources ||
resources->count_connectors <= 0 ||
resources->count_crtcs <= 0 ||
resources->count_encoders <= 0) {
resources.reset();
ret = drmGetCap(fd_, DRM_CAP_DUMB_BUFFER, &cap);
if (ret < 0) {
drmClose(fd_);
fd_ = -1;
continue;

View file

@@ -34,7 +34,6 @@ if libsdl2.found()
cam_sources += files([
'sdl_sink.cpp',
'sdl_texture.cpp',
'sdl_texture_1plane.cpp',
'sdl_texture_yuv.cpp',
])

View file

@@ -11,7 +11,6 @@
#include <fcntl.h>
#include <iomanip>
#include <iostream>
#include <optional>
#include <signal.h>
#include <sstream>
#include <string.h>
@@ -23,7 +22,6 @@
#include "../common/event_loop.h"
#include "../common/image.h"
#include "sdl_texture_1plane.h"
#ifdef HAVE_LIBJPEG
#include "sdl_texture_mjpg.h"
#endif
@@ -33,46 +31,6 @@ using namespace libcamera;
using namespace std::chrono_literals;
namespace {
std::optional<SDL_PixelFormatEnum> singlePlaneFormatToSDL(const libcamera::PixelFormat &f)
{
switch (f) {
case libcamera::formats::RGB888:
return SDL_PIXELFORMAT_BGR24;
case libcamera::formats::BGR888:
return SDL_PIXELFORMAT_RGB24;
case libcamera::formats::RGBA8888:
return SDL_PIXELFORMAT_ABGR32;
case libcamera::formats::ARGB8888:
return SDL_PIXELFORMAT_BGRA32;
case libcamera::formats::BGRA8888:
return SDL_PIXELFORMAT_ARGB32;
case libcamera::formats::ABGR8888:
return SDL_PIXELFORMAT_RGBA32;
#if SDL_VERSION_ATLEAST(2, 29, 1)
case libcamera::formats::RGBX8888:
return SDL_PIXELFORMAT_XBGR32;
case libcamera::formats::XRGB8888:
return SDL_PIXELFORMAT_BGRX32;
case libcamera::formats::BGRX8888:
return SDL_PIXELFORMAT_XRGB32;
case libcamera::formats::XBGR8888:
return SDL_PIXELFORMAT_RGBX32;
#endif
case libcamera::formats::YUYV:
return SDL_PIXELFORMAT_YUY2;
case libcamera::formats::UYVY:
return SDL_PIXELFORMAT_UYVY;
case libcamera::formats::YVYU:
return SDL_PIXELFORMAT_YVYU;
}
return {};
}
} /* namespace */
SDLSink::SDLSink()
: window_(nullptr), renderer_(nullptr), rect_({}),
init_(false)
@@ -104,20 +62,25 @@ int SDLSink::configure(const libcamera::CameraConfiguration &config)
rect_.w = cfg.size.width;
rect_.h = cfg.size.height;
if (auto sdlFormat = singlePlaneFormatToSDL(cfg.pixelFormat))
texture_ = std::make_unique<SDLTexture1Plane>(rect_, *sdlFormat, cfg.stride);
switch (cfg.pixelFormat) {
#ifdef HAVE_LIBJPEG
else if (cfg.pixelFormat == libcamera::formats::MJPEG)
case libcamera::formats::MJPEG:
texture_ = std::make_unique<SDLTextureMJPG>(rect_);
break;
#endif
#if SDL_VERSION_ATLEAST(2, 0, 16)
else if (cfg.pixelFormat == libcamera::formats::NV12)
case libcamera::formats::NV12:
texture_ = std::make_unique<SDLTextureNV12>(rect_, cfg.stride);
break;
#endif
else {
std::cerr << "Unsupported pixel format " << cfg.pixelFormat << std::endl;
case libcamera::formats::YUYV:
texture_ = std::make_unique<SDLTextureYUYV>(rect_, cfg.stride);
break;
default:
std::cerr << "Unsupported pixel format "
<< cfg.pixelFormat.toString() << std::endl;
return -EINVAL;
}
};
return 0;
}

View file

@@ -7,7 +7,7 @@
#pragma once
#include <libcamera/base/span.h>
#include <vector>
#include <SDL2/SDL.h>
@@ -19,7 +19,7 @@ public:
SDLTexture(const SDL_Rect &rect, uint32_t pixelFormat, const int stride);
virtual ~SDLTexture();
int create(SDL_Renderer *renderer);
virtual void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) = 0;
virtual void update(const std::vector<libcamera::Span<const uint8_t>> &data) = 0;
SDL_Texture *get() const { return ptr_; }
protected:

View file

@@ -1,17 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2025, Ideas on Board Oy
*
* SDL single plane textures
*/
#include "sdl_texture_1plane.h"
#include <assert.h>
void SDLTexture1Plane::update(libcamera::Span<const libcamera::Span<const uint8_t>> data)
{
assert(data.size() == 1);
assert(data[0].size_bytes() == std::size_t(rect_.h) * std::size_t(stride_));
SDL_UpdateTexture(ptr_, nullptr, data[0].data(), stride_);
}

View file

@@ -1,18 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2025, Ideas on Board Oy
*
* SDL single plane textures
*/
#pragma once
#include "sdl_texture.h"
class SDLTexture1Plane final : public SDLTexture
{
public:
using SDLTexture::SDLTexture;
void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) override;
};

View file

@@ -76,7 +76,7 @@ int SDLTextureMJPG::decompress(Span<const uint8_t> data)
return 0;
}
void SDLTextureMJPG::update(libcamera::Span<const libcamera::Span<const uint8_t>> data)
void SDLTextureMJPG::update(const std::vector<libcamera::Span<const uint8_t>> &data)
{
decompress(data[0]);
SDL_UpdateTexture(ptr_, nullptr, rgb_.get(), stride_);

View file

@@ -14,7 +14,7 @@ class SDLTextureMJPG : public SDLTexture
public:
SDLTextureMJPG(const SDL_Rect &rect);
void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) override;
void update(const std::vector<libcamera::Span<const uint8_t>> &data) override;
private:
int decompress(libcamera::Span<const uint8_t> data);

View file

@@ -15,9 +15,19 @@ SDLTextureNV12::SDLTextureNV12(const SDL_Rect &rect, unsigned int stride)
{
}
void SDLTextureNV12::update(libcamera::Span<const libcamera::Span<const uint8_t>> data)
void SDLTextureNV12::update(const std::vector<libcamera::Span<const uint8_t>> &data)
{
SDL_UpdateNVTexture(ptr_, nullptr, data[0].data(), stride_,
SDL_UpdateNVTexture(ptr_, &rect_, data[0].data(), stride_,
data[1].data(), stride_);
}
#endif
SDLTextureYUYV::SDLTextureYUYV(const SDL_Rect &rect, unsigned int stride)
: SDLTexture(rect, SDL_PIXELFORMAT_YUY2, stride)
{
}
void SDLTextureYUYV::update(const std::vector<libcamera::Span<const uint8_t>> &data)
{
SDL_UpdateTexture(ptr_, &rect_, data[0].data(), stride_);
}

View file

@@ -14,6 +14,13 @@ class SDLTextureNV12 : public SDLTexture
{
public:
SDLTextureNV12(const SDL_Rect &rect, unsigned int stride);
void update(libcamera::Span<const libcamera::Span<const uint8_t>> data) override;
void update(const std::vector<libcamera::Span<const uint8_t>> &data) override;
};
#endif
class SDLTextureYUYV : public SDLTexture
{
public:
SDLTextureYUYV(const SDL_Rect &rect, unsigned int stride);
void update(const std::vector<libcamera::Span<const uint8_t>> &data) override;
};

View file

@@ -98,12 +98,12 @@ unsigned int Image::numPlanes() const
Span<uint8_t> Image::data(unsigned int plane)
{
assert(plane < planes_.size());
assert(plane <= planes_.size());
return planes_[plane];
}
Span<const uint8_t> Image::data(unsigned int plane) const
{
assert(plane < planes_.size());
assert(plane <= planes_.size());
return planes_[plane];
}

View file

@@ -42,8 +42,9 @@ KeyValueParser::Options StreamKeyValueParser::parse(const char *arguments)
std::vector<StreamRole> StreamKeyValueParser::roles(const OptionValue &values)
{
/* If no configuration values to examine default to viewfinder. */
if (values.empty())
return {};
return { StreamRole::Viewfinder };
const std::vector<OptionValue> &streamParameters = values.toArray();

View file

@@ -23,29 +23,12 @@ Capture::~Capture()
stop();
}
void Capture::configure(libcamera::Span<const libcamera::StreamRole> roles)
void Capture::configure(StreamRole role)
{
assert(!roles.empty());
config_ = camera_->generateConfiguration({ role });
config_ = camera_->generateConfiguration(roles);
if (!config_)
GTEST_SKIP() << "Roles not supported by camera";
ASSERT_EQ(config_->size(), roles.size()) << "Unexpected number of streams in configuration";
/*
* Set the buffers count to the largest value across all streams.
* \todo: Should all streams from a Camera have the same buffer count ?
*/
auto largest =
std::max_element(config_->begin(), config_->end(),
[](const StreamConfiguration &l, const StreamConfiguration &r)
{ return l.bufferCount < r.bufferCount; });
assert(largest != config_->end());
for (auto &cfg : *config_)
cfg.bufferCount = largest->bufferCount;
GTEST_SKIP() << "Role not supported by camera";
if (config_->validate() != CameraConfiguration::Valid) {
config_.reset();
@@ -120,37 +103,29 @@ void Capture::start()
assert(!allocator_.allocated());
assert(requests_.empty());
const auto bufferCount = config_->at(0).bufferCount;
Stream *stream = config_->at(0).stream();
int count = allocator_.allocate(stream);
ASSERT_GE(count, 0) << "Failed to allocate buffers";
EXPECT_EQ(count, config_->at(0).bufferCount) << "Allocated less buffers than expected";
const std::vector<std::unique_ptr<FrameBuffer>> &buffers = allocator_.buffers(stream);
/* No point in testing fewer requests than the camera depth. */
if (queueLimit_ && *queueLimit_ < bufferCount) {
GTEST_SKIP() << "Camera needs " << bufferCount
if (queueLimit_ && *queueLimit_ < buffers.size()) {
GTEST_SKIP() << "Camera needs " << buffers.size()
<< " requests, can't test only " << *queueLimit_;
}
for (std::size_t i = 0; i < bufferCount; i++) {
for (const std::unique_ptr<FrameBuffer> &buffer : buffers) {
std::unique_ptr<Request> request = camera_->createRequest();
ASSERT_TRUE(request) << "Can't create request";
ASSERT_EQ(request->addBuffer(stream, buffer.get()), 0) << "Can't set buffer for request";
requests_.push_back(std::move(request));
}
for (const auto &cfg : *config_) {
Stream *stream = cfg.stream();
int count = allocator_.allocate(stream);
ASSERT_GE(count, 0) << "Failed to allocate buffers";
const auto &buffers = allocator_.buffers(stream);
ASSERT_EQ(buffers.size(), bufferCount) << "Mismatching buffer count";
for (std::size_t i = 0; i < bufferCount; i++) {
ASSERT_EQ(requests_[i]->addBuffer(stream, buffers[i].get()), 0)
<< "Failed to add buffer to request";
}
}
ASSERT_TRUE(allocator_.allocated());
camera_->requestCompleted.connect(this, &Capture::requestComplete);
ASSERT_EQ(camera_->start(), 0) << "Failed to start camera";
@@ -165,12 +140,7 @@ void Capture::stop()
camera_->requestCompleted.disconnect(this);
Stream *stream = config_->at(0).stream();
requests_.clear();
for (const auto &cfg : *config_) {
EXPECT_EQ(allocator_.free(cfg.stream()), 0)
<< "Failed to free buffers associated with stream";
}
EXPECT_FALSE(allocator_.allocated());
allocator_.free(stream);
}

View file

@@ -20,7 +20,7 @@ public:
Capture(std::shared_ptr<libcamera::Camera> camera);
~Capture();
void configure(libcamera::Span<const libcamera::StreamRole> roles);
void configure(libcamera::StreamRole role);
void run(unsigned int captureLimit, std::optional<unsigned int> queueLimit = {});
private:

View file

@@ -15,7 +15,6 @@ lc_compliance_sources = files([
'environment.cpp',
'helpers/capture.cpp',
'main.cpp',
'test_base.cpp',
'tests/capture_test.cpp',
])

View file

@@ -1,28 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2021, Collabora Ltd.
*
* test_base.cpp - Base definitions for tests
*/
#include "test_base.h"
#include "environment.h"
void CameraHolder::acquireCamera()
{
Environment *env = Environment::get();
camera_ = env->cm()->get(env->cameraId());
ASSERT_EQ(camera_->acquire(), 0);
}
void CameraHolder::releaseCamera()
{
if (!camera_)
return;
camera_->release();
camera_.reset();
}

View file

@@ -1,24 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 2021, Collabora Ltd.
*
* test_base.h - Base definitions for tests
*/
#ifndef __LC_COMPLIANCE_TEST_BASE_H__
#define __LC_COMPLIANCE_TEST_BASE_H__
#include <libcamera/libcamera.h>
#include <gtest/gtest.h>
class CameraHolder
{
protected:
void acquireCamera();
void releaseCamera();
std::shared_ptr<libcamera::Camera> camera_;
};
#endif /* __LC_COMPLIANCE_TEST_BASE_H__ */

View file

@@ -8,54 +8,72 @@
#include "capture.h"
#include <sstream>
#include <string>
#include <tuple>
#include <vector>
#include <iostream>
#include <gtest/gtest.h>
#include "test_base.h"
#include "environment.h"
namespace {
using namespace libcamera;
class SimpleCapture : public testing::TestWithParam<std::tuple<std::vector<StreamRole>, int>>, public CameraHolder
const int NUMREQUESTS[] = { 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };
const StreamRole ROLES[] = {
StreamRole::Raw,
StreamRole::StillCapture,
StreamRole::VideoRecording,
StreamRole::Viewfinder
};
class SingleStream : public testing::TestWithParam<std::tuple<StreamRole, int>>
{
public:
static std::string nameParameters(const testing::TestParamInfo<SimpleCapture::ParamType> &info);
static std::string nameParameters(const testing::TestParamInfo<SingleStream::ParamType> &info);
protected:
void SetUp() override;
void TearDown() override;
std::shared_ptr<Camera> camera_;
};
/*
* We use gtest's SetUp() and TearDown() instead of constructor and destructor
* in order to be able to assert on them.
*/
void SimpleCapture::SetUp()
void SingleStream::SetUp()
{
acquireCamera();
Environment *env = Environment::get();
camera_ = env->cm()->get(env->cameraId());
ASSERT_EQ(camera_->acquire(), 0);
}
void SimpleCapture::TearDown()
void SingleStream::TearDown()
{
releaseCamera();
if (!camera_)
return;
camera_->release();
camera_.reset();
}
std::string SimpleCapture::nameParameters(const testing::TestParamInfo<SimpleCapture::ParamType> &info)
std::string SingleStream::nameParameters(const testing::TestParamInfo<SingleStream::ParamType> &info)
{
const auto &[roles, numRequests] = info.param;
std::ostringstream ss;
std::map<StreamRole, std::string> rolesMap = {
{ StreamRole::Raw, "Raw" },
{ StreamRole::StillCapture, "StillCapture" },
{ StreamRole::VideoRecording, "VideoRecording" },
{ StreamRole::Viewfinder, "Viewfinder" }
};
for (StreamRole r : roles)
ss << r << '_';
std::string roleName = rolesMap[std::get<0>(info.param)];
std::string numRequestsName = std::to_string(std::get<1>(info.param));
ss << '_' << numRequests;
return ss.str();
return roleName + "_" + numRequestsName;
}
/*
@@ -65,13 +83,13 @@ std::string SimpleCapture::nameParameters(const testing::TestParamInfo<SimpleCap
* failure is a camera that completes fewer requests than the number of requests
* queued.
*/
TEST_P(SimpleCapture, Capture)
TEST_P(SingleStream, Capture)
{
const auto &[roles, numRequests] = GetParam();
auto [role, numRequests] = GetParam();
Capture capture(camera_);
capture.configure(roles);
capture.configure(role);
capture.run(numRequests, numRequests);
}
@@ -83,14 +101,14 @@ TEST_P(SimpleCapture, Capture)
* a camera that does not clean up correctly in its error path but is only
* tested by single-capture applications.
*/
TEST_P(SimpleCapture, CaptureStartStop)
TEST_P(SingleStream, CaptureStartStop)
{
const auto &[roles, numRequests] = GetParam();
auto [role, numRequests] = GetParam();
unsigned int numRepeats = 3;
Capture capture(camera_);
capture.configure(roles);
capture.configure(role);
for (unsigned int starts = 0; starts < numRepeats; starts++)
capture.run(numRequests, numRequests);
@@ -103,43 +121,21 @@ TEST_P(SimpleCapture, CaptureStartStop)
* is a camera that does not handle cancellation of buffers coming back from the
* video device while stopping.
*/
TEST_P(SimpleCapture, UnbalancedStop)
TEST_P(SingleStream, UnbalancedStop)
{
const auto &[roles, numRequests] = GetParam();
auto [role, numRequests] = GetParam();
Capture capture(camera_);
capture.configure(roles);
capture.configure(role);
capture.run(numRequests);
}
const int NUMREQUESTS[] = { 1, 2, 3, 5, 8, 13, 21, 34, 55, 89 };
const std::vector<StreamRole> SINGLEROLES[] = {
{ StreamRole::Raw, },
{ StreamRole::StillCapture, },
{ StreamRole::VideoRecording, },
{ StreamRole::Viewfinder, },
};
const std::vector<StreamRole> MULTIROLES[] = {
{ StreamRole::Raw, StreamRole::StillCapture },
{ StreamRole::Raw, StreamRole::VideoRecording },
{ StreamRole::StillCapture, StreamRole::VideoRecording },
{ StreamRole::VideoRecording, StreamRole::VideoRecording },
};
INSTANTIATE_TEST_SUITE_P(SingleStream,
SimpleCapture,
testing::Combine(testing::ValuesIn(SINGLEROLES),
INSTANTIATE_TEST_SUITE_P(CaptureTests,
SingleStream,
testing::Combine(testing::ValuesIn(ROLES),
testing::ValuesIn(NUMREQUESTS)),
SimpleCapture::nameParameters);
INSTANTIATE_TEST_SUITE_P(MultiStream,
SimpleCapture,
testing::Combine(testing::ValuesIn(MULTIROLES),
testing::ValuesIn(NUMREQUESTS)),
SimpleCapture::nameParameters);
SingleStream::nameParameters);
} /* namespace */

View file

@@ -356,9 +356,6 @@ int MainWindow::startCapture()
/* Verify roles are supported. */
switch (roles.size()) {
case 0:
roles.push_back(StreamRole::Viewfinder);
break;
case 1:
if (roles[0] != StreamRole::Viewfinder) {
qWarning() << "Only viewfinder supported for single stream";

View file

@@ -68,7 +68,7 @@ static const GEnumValue {{ ctrl.name|snake_case }}_types[] = {
"{{ enum.gst_name }}"
},
{%- endfor %}
{0, nullptr, nullptr}
{0, NULL, NULL}
};
#define TYPE_{{ ctrl.name|snake_case|upper }} \

View file

@@ -494,12 +494,9 @@ void gst_libcamera_configure_stream_from_caps(StreamConfiguration &stream_cfg,
/* Configure colorimetry */
if (gst_structure_has_field(s, "colorimetry")) {
const gchar *colorimetry_str;
const gchar *colorimetry_str = gst_structure_get_string(s, "colorimetry");
GstVideoColorimetry colorimetry;
gst_structure_fixate_field(s, "colorimetry");
colorimetry_str = gst_structure_get_string(s, "colorimetry");
if (!gst_video_colorimetry_from_string(&colorimetry, colorimetry_str))
g_critical("Invalid colorimetry %s", colorimetry_str);
@@ -599,43 +596,6 @@ gst_task_resume(GstTask *task)
}
#endif
#if !GST_CHECK_VERSION(1, 22, 0)
/*
* Copyright (C) <1999> Erik Walthinsen <omega@cse.ogi.edu>
* Library <2002> Ronald Bultje <rbultje@ronald.bitfreak.net>
* Copyright (C) <2007> David A. Schleef <ds@schleef.org>
*/
/*
* This function has been imported directly from the gstreamer project to
* support backwards compatibility and should be removed when the older version
* is no longer supported.
*/
gint gst_video_format_info_extrapolate_stride(const GstVideoFormatInfo *finfo, gint plane, gint stride)
{
gint estride;
gint comp[GST_VIDEO_MAX_COMPONENTS];
gint i;
/* There is nothing to extrapolate on first plane. */
if (plane == 0)
return stride;
gst_video_format_info_component(finfo, plane, comp);
/*
* For now, all planar formats have a single component on the first plane, but
* if there were a planar format with more, we'd have to compute a ratio of the
* number of components on the first plane against the number of components on
* the current plane.
*/
estride = 0;
for (i = 0; i < GST_VIDEO_MAX_COMPONENTS && comp[i] >= 0; i++)
estride += GST_VIDEO_FORMAT_INFO_SCALE_WIDTH(finfo, comp[i], stride);
return estride;
}
#endif
G_LOCK_DEFINE_STATIC(cm_singleton_lock);
static std::weak_ptr<CameraManager> cm_singleton_ptr;

View file

@@ -36,11 +36,6 @@ static inline void gst_clear_event(GstEvent **event_ptr)
#if !GST_CHECK_VERSION(1, 17, 1)
gboolean gst_task_resume(GstTask *task);
#endif
#if !GST_CHECK_VERSION(1, 22, 0)
gint gst_video_format_info_extrapolate_stride(const GstVideoFormatInfo *finfo, gint plane, gint stride);
#endif
std::shared_ptr<libcamera::CameraManager> gst_libcamera_get_camera_manager(int &ret);
/**

View file

@@ -18,8 +18,6 @@ struct _GstLibcameraPad {
GstPad parent;
StreamRole role;
GstLibcameraPool *pool;
GstBufferPool *video_pool;
GstVideoInfo info;
GstClockTime latency;
};
@@ -72,10 +70,6 @@ gst_libcamera_pad_query(GstPad *pad, GstObject *parent, GstQuery *query)
if (query->type != GST_QUERY_LATENCY)
return gst_pad_query_default(pad, parent, query);
GLibLocker lock(GST_OBJECT(self));
if (self->latency == GST_CLOCK_TIME_NONE)
return FALSE;
/* TRUE here means live, we assume that max latency is the same as min
* as we have no idea of the duration of frames. */
gst_query_set_latency(query, TRUE, self->latency, self->latency);
@@ -85,7 +79,6 @@ gst_libcamera_pad_query(GstPad *pad, GstObject *parent, GstQuery *query)
static void
gst_libcamera_pad_init(GstLibcameraPad *self)
{
self->latency = GST_CLOCK_TIME_NONE;
GST_PAD_QUERYFUNC(self) = gst_libcamera_pad_query;
}
@@ -107,7 +100,7 @@ gst_libcamera_stream_role_get_type()
"libcamera::Viewfinder",
"view-finder",
},
{ 0, nullptr, nullptr }
{ 0, NULL, NULL }
};
if (!type)
@@ -160,35 +153,6 @@ gst_libcamera_pad_set_pool(GstPad *pad, GstLibcameraPool *pool)
self->pool = pool;
}
GstBufferPool *
gst_libcamera_pad_get_video_pool(GstPad *pad)
{
auto *self = GST_LIBCAMERA_PAD(pad);
return self->video_pool;
}
void gst_libcamera_pad_set_video_pool(GstPad *pad, GstBufferPool *video_pool)
{
auto *self = GST_LIBCAMERA_PAD(pad);
if (self->video_pool)
g_object_unref(self->video_pool);
self->video_pool = video_pool;
}
GstVideoInfo gst_libcamera_pad_get_video_info(GstPad *pad)
{
auto *self = GST_LIBCAMERA_PAD(pad);
return self->info;
}
void gst_libcamera_pad_set_video_info(GstPad *pad, const GstVideoInfo *info)
{
auto *self = GST_LIBCAMERA_PAD(pad);
self->info = *info;
}
Stream *
gst_libcamera_pad_get_stream(GstPad *pad)
{

View file

@@ -23,14 +23,6 @@ GstLibcameraPool *gst_libcamera_pad_get_pool(GstPad *pad);
void gst_libcamera_pad_set_pool(GstPad *pad, GstLibcameraPool *pool);
GstBufferPool *gst_libcamera_pad_get_video_pool(GstPad *pad);
void gst_libcamera_pad_set_video_pool(GstPad *pad, GstBufferPool *video_pool);
GstVideoInfo gst_libcamera_pad_get_video_info(GstPad *pad);
void gst_libcamera_pad_set_video_info(GstPad *pad, const GstVideoInfo *info);
libcamera::Stream *gst_libcamera_pad_get_stream(GstPad *pad);
void gst_libcamera_pad_set_latency(GstPad *pad, GstClockTime latency);

View file

@@ -134,20 +134,8 @@ gst_libcamera_pool_class_init(GstLibcameraPoolClass *klass)
G_TYPE_NONE, 0);
}
static void
gst_libcamera_buffer_add_video_meta(GstBuffer *buffer, GstVideoInfo *info)
{
GstVideoMeta *vmeta;
vmeta = gst_buffer_add_video_meta_full(buffer, GST_VIDEO_FRAME_FLAG_NONE,
GST_VIDEO_INFO_FORMAT(info), GST_VIDEO_INFO_WIDTH(info),
GST_VIDEO_INFO_HEIGHT(info), GST_VIDEO_INFO_N_PLANES(info),
info->offset, info->stride);
GST_META_FLAGS(vmeta) = (GstMetaFlags)(GST_META_FLAGS(vmeta) | GST_META_FLAG_POOLED);
}
GstLibcameraPool *
gst_libcamera_pool_new(GstLibcameraAllocator *allocator, Stream *stream,
GstVideoInfo *info)
gst_libcamera_pool_new(GstLibcameraAllocator *allocator, Stream *stream)
{
auto *pool = GST_LIBCAMERA_POOL(g_object_new(GST_TYPE_LIBCAMERA_POOL, nullptr));
@@ -157,7 +145,6 @@ gst_libcamera_pool_new(GstLibcameraAllocator *allocator, Stream *stream,
gsize pool_size = gst_libcamera_allocator_get_pool_size(allocator, stream);
for (gsize i = 0; i < pool_size; i++) {
GstBuffer *buffer = gst_buffer_new();
gst_libcamera_buffer_add_video_meta(buffer, info);
pool->queue->push_back(buffer);
}

View file

@@ -14,7 +14,6 @@
#include "gstlibcameraallocator.h"
#include <gst/gst.h>
#include <gst/video/video.h>
#include <libcamera/stream.h>
@@ -22,7 +21,7 @@
G_DECLARE_FINAL_TYPE(GstLibcameraPool, gst_libcamera_pool, GST_LIBCAMERA, POOL, GstBufferPool)
GstLibcameraPool *gst_libcamera_pool_new(GstLibcameraAllocator *allocator,
libcamera::Stream *stream, GstVideoInfo *info);
libcamera::Stream *stream);
libcamera::Stream *gst_libcamera_pool_get_stream(GstLibcameraPool *self);

View file

@@ -29,8 +29,6 @@
#include <atomic>
#include <queue>
#include <tuple>
#include <utility>
#include <vector>
#include <libcamera/camera.h>
@@ -270,69 +268,6 @@ GstLibcameraSrcState::requestCompleted(Request *request)
gst_task_resume(src_->task);
}
static void
gst_libcamera_extrapolate_info(GstVideoInfo *info, guint32 stride)
{
guint i, estride;
gsize offset = 0;
/* This should be updated if tiled formats get added in the future. */
for (i = 0; i < GST_VIDEO_INFO_N_PLANES(info); i++) {
estride = gst_video_format_info_extrapolate_stride(info->finfo, i, stride);
info->stride[i] = estride;
info->offset[i] = offset;
offset += estride * GST_VIDEO_FORMAT_INFO_SCALE_HEIGHT(info->finfo, i,
GST_VIDEO_INFO_HEIGHT(info));
}
}
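The plane layout computed by `gst_libcamera_extrapolate_info()` above (per-plane strides scaled from the first plane's stride, offsets accumulated from stride × scaled height) can be sketched standalone. This is a hypothetical re-implementation hard-coded for an I420-style 4:2:0 layout, not the GStreamer API, which derives the scaling from `GstVideoFormatInfo`:

```cpp
#include <cassert>
#include <cstddef>

/* Per-plane strides and byte offsets for a three-plane format. */
struct PlaneLayout {
	std::size_t stride[3];
	std::size_t offset[3];
};

/*
 * Hypothetical layout extrapolation for I420: a full-size luma plane
 * followed by two half-width, half-height chroma planes.
 */
inline PlaneLayout extrapolateI420(std::size_t lumaStride, std::size_t height)
{
	PlaneLayout layout{};
	std::size_t offset = 0;

	for (unsigned int plane = 0; plane < 3; plane++) {
		/* Chroma planes are subsampled by two in both dimensions. */
		std::size_t stride = plane == 0 ? lumaStride : lumaStride / 2;
		std::size_t rows = plane == 0 ? height : height / 2;

		layout.stride[plane] = stride;
		layout.offset[plane] = offset;
		offset += stride * rows;
	}

	return layout;
}
```

For a 64-byte luma stride and 4 rows, the chroma planes land at offsets 256 and 320, which is the same accumulation the removed function performs generically.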
static GstFlowReturn
gst_libcamera_video_frame_copy(GstBuffer *src, GstBuffer *dest,
const GstVideoInfo *dest_info, guint32 stride)
{
/*
* When dropping support for versions earlier than v1.22.0, use
*
* g_auto (GstVideoFrame) src_frame = GST_VIDEO_FRAME_INIT;
* g_auto (GstVideoFrame) dest_frame = GST_VIDEO_FRAME_INIT;
*
* and drop the gst_video_frame_unmap() calls.
*/
GstVideoFrame src_frame, dest_frame;
GstVideoInfo src_info = *dest_info;
gst_libcamera_extrapolate_info(&src_info, stride);
src_info.size = gst_buffer_get_size(src);
if (!gst_video_frame_map(&src_frame, &src_info, src, GST_MAP_READ)) {
GST_ERROR("Could not map src buffer");
return GST_FLOW_ERROR;
}
/*
* When dropping support for versions earlier than 1.20.0, drop the
* const_cast<>().
*/
if (!gst_video_frame_map(&dest_frame, const_cast<GstVideoInfo *>(dest_info),
dest, GST_MAP_WRITE)) {
GST_ERROR("Could not map dest buffer");
gst_video_frame_unmap(&src_frame);
return GST_FLOW_ERROR;
}
if (!gst_video_frame_copy(&dest_frame, &src_frame)) {
GST_ERROR("Could not copy frame");
gst_video_frame_unmap(&src_frame);
gst_video_frame_unmap(&dest_frame);
return GST_FLOW_ERROR;
}
gst_video_frame_unmap(&src_frame);
gst_video_frame_unmap(&dest_frame);
return GST_FLOW_OK;
}
/* Must be called with stream_lock held. */
int GstLibcameraSrcState::processRequest()
{
@@ -357,41 +292,11 @@ int GstLibcameraSrcState::processRequest()
GstFlowReturn ret = GST_FLOW_OK;
gst_flow_combiner_reset(src_->flow_combiner);
for (gsize i = 0; i < srcpads_.size(); i++) {
GstPad *srcpad = srcpads_[i];
for (GstPad *srcpad : srcpads_) {
Stream *stream = gst_libcamera_pad_get_stream(srcpad);
GstBuffer *buffer = wrap->detachBuffer(stream);
FrameBuffer *fb = gst_libcamera_buffer_get_frame_buffer(buffer);
const StreamConfiguration &stream_cfg = config_->at(i);
GstBufferPool *video_pool = gst_libcamera_pad_get_video_pool(srcpad);
if (video_pool) {
/* Only set video pool when a copy is needed. */
GstBuffer *copy = nullptr;
const GstVideoInfo info = gst_libcamera_pad_get_video_info(srcpad);
ret = gst_buffer_pool_acquire_buffer(video_pool, &copy, nullptr);
if (ret != GST_FLOW_OK) {
gst_buffer_unref(buffer);
GST_ELEMENT_ERROR(src_, RESOURCE, SETTINGS,
("Failed to acquire buffer"),
("GstLibcameraSrcState::processRequest() failed: %s", g_strerror(-ret)));
return -EPIPE;
}
ret = gst_libcamera_video_frame_copy(buffer, copy, &info, stream_cfg.stride);
gst_buffer_unref(buffer);
if (ret != GST_FLOW_OK) {
gst_buffer_unref(copy);
GST_ELEMENT_ERROR(src_, RESOURCE, SETTINGS,
("Failed to copy buffer"),
("GstLibcameraSrcState::processRequest() failed: %s", g_strerror(-ret)));
return -EPIPE;
}
buffer = copy;
}
if (GST_CLOCK_TIME_IS_VALID(wrap->pts_)) {
GST_BUFFER_PTS(buffer) = wrap->pts_;
@@ -523,73 +428,6 @@ gst_libcamera_src_open(GstLibcameraSrc *self)
return true;
}
/**
* \brief Create a video pool for a pad
* \param[in] self The libcamerasrc instance
* \param[in] srcpad The pad
* \param[in] caps The pad caps
* \param[in] info The video info for the pad
*
* This function creates and returns a video buffer pool for the given pad if
* needed to accommodate stride mismatch. If the peer element supports stride
* negotiation through the meta API, no pool is needed and the function will
* return a null pool.
*
* \return A tuple containing the video buffers pool pointer and an error code
*/
static std::tuple<GstBufferPool *, int>
gst_libcamera_create_video_pool(GstLibcameraSrc *self, GstPad *srcpad,
GstCaps *caps, const GstVideoInfo *info)
{
g_autoptr(GstQuery) query = nullptr;
g_autoptr(GstBufferPool) pool = nullptr;
const gboolean need_pool = true;
/*
* Get the peer allocation hints to check if it supports the meta API.
* If so, the stride will be negotiated, and there's no need to create a
* video pool.
*/
query = gst_query_new_allocation(caps, need_pool);
if (!gst_pad_peer_query(srcpad, query))
GST_DEBUG_OBJECT(self, "Didn't get downstream ALLOCATION hints");
else if (gst_query_find_allocation_meta(query, GST_VIDEO_META_API_TYPE, nullptr))
return { nullptr, 0 };
GST_WARNING_OBJECT(self, "Downstream doesn't support video meta, need to copy frame.");
/*
* If the allocation query has pools, use the first one. Otherwise,
* create a new pool.
*/
if (gst_query_get_n_allocation_pools(query) > 0)
gst_query_parse_nth_allocation_pool(query, 0, &pool, nullptr,
nullptr, nullptr);
if (!pool) {
GstStructure *config;
guint min_buffers = 3;
pool = gst_video_buffer_pool_new();
config = gst_buffer_pool_get_config(pool);
gst_buffer_pool_config_set_params(config, caps, info->size, min_buffers, 0);
GST_DEBUG_OBJECT(self, "Own pool config is %" GST_PTR_FORMAT, config);
gst_buffer_pool_set_config(GST_BUFFER_POOL_CAST(pool), config);
}
if (!gst_buffer_pool_set_active(pool, true)) {
GST_ELEMENT_ERROR(self, RESOURCE, SETTINGS,
("Failed to activate buffer pool"),
("gst_libcamera_src_negotiate() failed."));
return { nullptr, -EINVAL };
}
return { std::exchange(pool, nullptr), 0 };
}
/* Must be called with stream_lock held. */
static bool
gst_libcamera_src_negotiate(GstLibcameraSrc *self)
@@ -661,33 +499,13 @@ gst_libcamera_src_negotiate(GstLibcameraSrc *self)
for (gsize i = 0; i < state->srcpads_.size(); i++) {
GstPad *srcpad = state->srcpads_[i];
const StreamConfiguration &stream_cfg = state->config_->at(i);
GstBufferPool *video_pool = nullptr;
GstVideoInfo info;
g_autoptr(GstCaps) caps = gst_libcamera_stream_configuration_to_caps(stream_cfg, transfer[i]);
gst_video_info_from_caps(&info, caps);
gst_libcamera_pad_set_video_info(srcpad, &info);
/* Stride mismatch between camera stride and that calculated by video-info. */
if (static_cast<unsigned int>(info.stride[0]) != stream_cfg.stride &&
GST_VIDEO_INFO_FORMAT(&info) != GST_VIDEO_FORMAT_ENCODED) {
gst_libcamera_extrapolate_info(&info, stream_cfg.stride);
std::tie(video_pool, ret) =
gst_libcamera_create_video_pool(self, srcpad,
caps, &info);
if (ret)
return false;
}
GstLibcameraPool *pool = gst_libcamera_pool_new(self->allocator,
stream_cfg.stream(), &info);
stream_cfg.stream());
g_signal_connect_swapped(pool, "buffer-notify",
G_CALLBACK(gst_task_resume), self->task);
gst_libcamera_pad_set_pool(srcpad, pool);
gst_libcamera_pad_set_video_pool(srcpad, video_pool);
/* Clear all reconfigure flags. */
gst_pad_check_reconfigure(srcpad);
@@ -881,10 +699,8 @@ gst_libcamera_src_task_leave([[maybe_unused]] GstTask *task,
{
GLibRecLocker locker(&self->stream_lock);
for (GstPad *srcpad : state->srcpads_) {
gst_libcamera_pad_set_latency(srcpad, GST_CLOCK_TIME_NONE);
for (GstPad *srcpad : state->srcpads_)
gst_libcamera_pad_set_pool(srcpad, nullptr);
}
}
g_clear_object(&self->allocator);
@@ -1068,7 +884,7 @@ gst_libcamera_src_request_new_pad(GstElement *element, GstPadTemplate *templ,
const gchar *name, [[maybe_unused]] const GstCaps *caps)
{
GstLibcameraSrc *self = GST_LIBCAMERA_SRC(element);
g_autoptr(GstPad) pad = nullptr;
g_autoptr(GstPad) pad = NULL;
GST_DEBUG_OBJECT(self, "new request pad created");
@@ -1082,12 +898,12 @@ gst_libcamera_src_request_new_pad(GstElement *element, GstPadTemplate *templ,
GST_ELEMENT_ERROR(element, STREAM, FAILED,
("Internal data stream error."),
("Could not add pad to element"));
return nullptr;
return NULL;
}
gst_child_proxy_child_added(GST_CHILD_PROXY(self), G_OBJECT(pad), GST_OBJECT_NAME(pad));
return std::exchange(pad, nullptr);
return reinterpret_cast<GstPad *>(g_steal_pointer(&pad));
}
static void
@@ -1106,12 +922,6 @@ gst_libcamera_src_release_pad(GstElement *element, GstPad *pad)
auto end_iterator = pads.end();
auto pad_iterator = std::find(begin_iterator, end_iterator, pad);
GstBufferPool *video_pool = gst_libcamera_pad_get_video_pool(pad);
if (video_pool) {
gst_buffer_pool_set_active(video_pool, false);
gst_object_unref(video_pool);
}
if (pad_iterator != end_iterator) {
g_object_unref(*pad_iterator);
pads.erase(pad_iterator);

View file

@@ -33,7 +33,6 @@ libcamera_gst_sources += custom_target('gstlibcamera-controls.cpp',
output : 'gstlibcamera-controls.cpp',
command : [gen_gst_controls, '-o', '@OUTPUT@',
'-t', gen_gst_controls_template, '@INPUT@'],
depend_files : [py_mod_controls],
env : py_build_env)
libcamera_gst_cpp_args = [

View file

@@ -218,7 +218,8 @@ int AgcMeanLuminance::parseConstraintModes(const YamlObject &tuningData)
constraintModes_[controls::ConstraintNormal].insert(
constraintModes_[controls::ConstraintNormal].begin(),
constraint);
availableConstraintModes.push_back(controls::ConstraintNormal);
availableConstraintModes.push_back(
AeConstraintModeNameValueMap.at("ConstraintNormal"));
}
controls_[&controls::AeConstraintMode] = ControlInfo(availableConstraintModes);
@@ -286,7 +287,7 @@ int AgcMeanLuminance::parseExposureModes(const YamlObject &tuningData)
* possible before touching gain.
*/
if (availableExposureModes.empty()) {
int32_t exposureModeId = controls::ExposureNormal;
int32_t exposureModeId = AeExposureModeNameValueMap.at("ExposureNormal");
std::vector<std::pair<utils::Duration, double>> stages = { };
std::shared_ptr<ExposureModeHelper> helper =

View file

@@ -114,7 +114,7 @@ namespace ipa {
* does not take any statistics into account. It is used to compute the colour
* gains when the user manually specifies a colour temperature.
*
* \return The colour gains or std::nullopt if the conversion is not possible
* \return The colour gains
*/
/**

View file

@@ -8,7 +8,6 @@
#pragma once
#include <map>
#include <optional>
#include <libcamera/control_ids.h>
#include <libcamera/controls.h>
@@ -40,7 +39,7 @@ public:
virtual int init(const YamlObject &tuningData) = 0;
virtual AwbResult calculateAwb(const AwbStats &stats, unsigned int lux) = 0;
virtual std::optional<RGB<double>> gainsFromColourTemperature(double colourTemperature) = 0;
virtual RGB<double> gainsFromColourTemperature(double colourTemperature) = 0;
const ControlInfoMap::Map &controls() const
{

View file

@@ -270,7 +270,7 @@ void AwbBayes::handleControls(const ControlList &controls)
}
}
std::optional<RGB<double>> AwbBayes::gainsFromColourTemperature(double colourTemperature)
RGB<double> AwbBayes::gainsFromColourTemperature(double colourTemperature)
{
/*
* \todo In the RaspberryPi code, the ct curve was interpolated in
@@ -278,7 +278,7 @@ std::optional<RGB<double>> AwbBayes::gainsFromColourTem
* intuitive, as the gains are in linear space. But I can't prove it.
*/
const auto &gains = colourGainCurve_.getInterpolated(colourTemperature);
return RGB<double>{ { gains[0], 1.0, gains[1] } };
return { { gains[0], 1.0, gains[1] } };
}
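The `colourGainCurve_.getInterpolated()` call above looks up (red, blue) gains from calibrated colour-temperature points. A minimal sketch of such a lookup with plain linear interpolation, in the spirit of libcamera's `Interpolator<Vector<double, 2>>` (the calibration points in the usage note are invented for illustration):

```cpp
#include <array>
#include <cassert>
#include <iterator>
#include <map>

using Gains = std::array<double, 2>; /* { red, blue } */

/*
 * Linearly interpolate the gains for a colour temperature from a sorted
 * curve of calibration points. Clamps to the endpoints rather than
 * extrapolating beyond the calibrated range.
 */
inline Gains interpolateGains(const std::map<double, Gains> &curve, double ct)
{
	auto upper = curve.lower_bound(ct);

	if (upper == curve.begin())
		return upper->second;
	if (upper == curve.end())
		return std::prev(upper)->second;

	auto lower = std::prev(upper);
	double t = (ct - lower->first) / (upper->first - lower->first);

	return { lower->second[0] * (1.0 - t) + upper->second[0] * t,
		 lower->second[1] * (1.0 - t) + upper->second[1] * t };
}
```

With hypothetical points { 2800 K → (0.5, 2.0), 6800 K → (2.5, 1.0) }, a request for 4800 K lands halfway and yields (1.5, 1.5); the \todo comment above notes that interpolating in a different (e.g. quadratic) space is also plausible, since gain response is not necessarily linear in temperature.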
AwbResult AwbBayes::calculateAwb(const AwbStats &stats, unsigned int lux)

View file

@@ -27,7 +27,7 @@ public:
int init(const YamlObject &tuningData) override;
AwbResult calculateAwb(const AwbStats &stats, unsigned int lux) override;
std::optional<RGB<double>> gainsFromColourTemperature(double temperatureK) override;
RGB<double> gainsFromColourTemperature(double temperatureK) override;
void handleControls(const ControlList &controls) override;
private:

View file

@@ -98,15 +98,15 @@ AwbResult AwbGrey::calculateAwb(const AwbStats &stats, [[maybe_unused]] unsigned
* \return The colour gains if a colour temperature curve is available,
* [1, 1, 1] otherwise.
*/
std::optional<RGB<double>> AwbGrey::gainsFromColourTemperature(double colourTemperature)
RGB<double> AwbGrey::gainsFromColourTemperature(double colourTemperature)
{
if (!colourGainCurve_) {
LOG(Awb, Error) << "No gains defined";
return std::nullopt;
return RGB<double>({ 1.0, 1.0, 1.0 });
}
auto gains = colourGainCurve_->getInterpolated(colourTemperature);
return RGB<double>{ { gains[0], 1.0, gains[1] } };
return { { gains[0], 1.0, gains[1] } };
}
} /* namespace ipa */

View file

@@ -25,7 +25,7 @@ public:
int init(const YamlObject &tuningData) override;
AwbResult calculateAwb(const AwbStats &stats, unsigned int lux) override;
std::optional<RGB<double>> gainsFromColourTemperature(double colourTemperature) override;
RGB<double> gainsFromColourTemperature(double colourTemperature) override;
private:
std::optional<Interpolator<Vector<double, 2>>> colourGainCurve_;

View file

@@ -4,7 +4,7 @@ ipa_includes = [
libcamera_includes,
]
ipa_install_dir = libcamera_libdir / 'ipa'
ipa_install_dir = libcamera_libdir
ipa_data_dir = libcamera_datadir / 'ipa'
ipa_sysconf_dir = libcamera_sysconfdir / 'ipa'

View file

@@ -68,9 +68,10 @@ int Agc::parseMeteringModes(IPAContext &context, const YamlObject &tuningData)
if (meteringModes_.empty()) {
LOG(RkISP1Agc, Warning)
<< "No metering modes read from tuning file; defaulting to matrix";
int32_t meteringModeId = controls::AeMeteringModeNameValueMap.at("MeteringMatrix");
std::vector<uint8_t> weights(context.hw->numHistogramWeights, 1);
meteringModes_[controls::MeteringMatrix] = weights;
meteringModes_[meteringModeId] = weights;
}
std::vector<ControlValue> meteringModes;

View file

@@ -90,8 +90,6 @@ int Awb::init(IPAContext &context, const YamlObject &tuningData)
cmap[&controls::ColourTemperature] = ControlInfo(kMinColourTemperature,
kMaxColourTemperature,
kDefaultColourTemperature);
cmap[&controls::AwbEnable] = ControlInfo(false, true);
cmap[&controls::ColourGains] = ControlInfo(0.0f, 3.996f, 1.0f);
if (!tuningData.contains("algorithm"))
LOG(RkISP1Awb, Info) << "No AWB algorithm specified."
@@ -126,16 +124,11 @@
int Awb::configure(IPAContext &context,
const IPACameraSensorInfo &configInfo)
{
context.activeState.awb.manual.gains = RGB<double>{ 1.0 };
auto gains = awbAlgo_->gainsFromColourTemperature(kDefaultColourTemperature);
if (gains)
context.activeState.awb.automatic.gains = *gains;
else
context.activeState.awb.automatic.gains = RGB<double>{ 1.0 };
context.activeState.awb.gains.manual = RGB<double>{ 1.0 };
context.activeState.awb.gains.automatic =
awbAlgo_->gainsFromColourTemperature(kDefaultColourTemperature);
context.activeState.awb.autoEnabled = true;
context.activeState.awb.manual.temperatureK = kDefaultColourTemperature;
context.activeState.awb.automatic.temperatureK = kDefaultColourTemperature;
context.activeState.awb.temperatureK = kDefaultColourTemperature;
/*
* Define the measurement window for AWB as a centered rectangle
@@ -180,8 +173,8 @@ void Awb::queueRequest(IPAContext &context,
const auto &colourTemperature = controls.get(controls::ColourTemperature);
bool update = false;
if (colourGains) {
awb.manual.gains.r() = (*colourGains)[0];
awb.manual.gains.b() = (*colourGains)[1];
awb.gains.manual.r() = (*colourGains)[0];
awb.gains.manual.b() = (*colourGains)[1];
/*
* \todo Colour temperature reported in metadata is now
* incorrect, as we can't deduce the temperature from the gains.
@@ -189,21 +182,19 @@ */
*/
update = true;
} else if (colourTemperature) {
awb.manual.temperatureK = *colourTemperature;
const auto &gains = awbAlgo_->gainsFromColourTemperature(*colourTemperature);
if (gains) {
awb.manual.gains.r() = gains->r();
awb.manual.gains.b() = gains->b();
update = true;
}
awb.gains.manual.r() = gains.r();
awb.gains.manual.b() = gains.b();
awb.temperatureK = *colourTemperature;
update = true;
}
if (update)
LOG(RkISP1Awb, Debug)
<< "Set colour gains to " << awb.manual.gains;
<< "Set colour gains to " << awb.gains.manual;
frameContext.awb.gains = awb.manual.gains;
frameContext.awb.temperatureK = awb.manual.temperatureK;
frameContext.awb.gains = awb.gains.manual;
frameContext.awb.temperatureK = awb.temperatureK;
}
/**
@@ -217,9 +208,8 @@ void Awb::prepare(IPAContext &context, const uint32_t frame,
* most up-to-date automatic values we can read.
*/
if (frameContext.awb.autoEnabled) {
const auto &awb = context.activeState.awb;
frameContext.awb.gains = awb.automatic.gains;
frameContext.awb.temperatureK = awb.automatic.temperatureK;
frameContext.awb.gains = context.activeState.awb.gains.automatic;
frameContext.awb.temperatureK = context.activeState.awb.temperatureK;
}
auto gainConfig = params->block<BlockType::AwbGain>();
@@ -306,11 +296,6 @@ void Awb::process(IPAContext &context,
const rkisp1_cif_isp_stat *params = &stats->params;
const rkisp1_cif_isp_awb_stat *awb = &params->awb;
if (awb->awb_mean[0].cnt == 0) {
LOG(RkISP1Awb, Debug) << "AWB statistics are empty";
return;
}
RGB<double> rgbMeans = calculateRgbMeans(frameContext, awb);
/*
@@ -324,6 +309,11 @@ void Awb::process(IPAContext &context,
RkISP1AwbStats awbStats{ rgbMeans };
AwbResult awbResult = awbAlgo_->calculateAwb(awbStats, frameContext.lux.lux);
activeState.awb.temperatureK = awbResult.colourTemperature;
/* Metadata shall contain the up-to-date measurement */
metadata.set(controls::ColourTemperature, activeState.awb.temperatureK);
/*
* Clamp the gain values to the hardware, which expresses gains as Q2.8
* unsigned integer values. Set the minimum just above zero to avoid
@@ -334,19 +324,16 @@
/* Filter the values to avoid oscillations. */
double speed = 0.2;
double ct = awbResult.colourTemperature;
ct = ct * speed + activeState.awb.automatic.temperatureK * (1 - speed);
awbResult.gains = awbResult.gains * speed +
activeState.awb.automatic.gains * (1 - speed);
activeState.awb.gains.automatic * (1 - speed);
activeState.awb.automatic.temperatureK = static_cast<unsigned int>(ct);
activeState.awb.automatic.gains = awbResult.gains;
activeState.awb.gains.automatic = awbResult.gains;
LOG(RkISP1Awb, Debug)
<< std::showpoint
<< "Means " << rgbMeans << ", gains "
<< activeState.awb.automatic.gains << ", temp "
<< activeState.awb.automatic.temperatureK << "K";
<< activeState.awb.gains.automatic << ", temp "
<< activeState.awb.temperatureK << "K";
}
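The `speed`-based filtering above is a plain exponential moving average, and the earlier comment describes the hardware gains as Q2.8 unsigned fixed point. A standalone sketch of both steps; the [1/256, 1023/256] quantization range is an assumption of this sketch, chosen to match the 3.996 upper bound advertised for `controls::ColourGains` in this patch:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

/*
 * Exponential moving average: blend the new measurement with the previous
 * filtered value to avoid oscillations, as the AWB gain/temperature
 * filter does with speed = 0.2.
 */
inline double emaFilter(double measured, double previous, double speed = 0.2)
{
	return measured * speed + previous * (1.0 - speed);
}

/*
 * Quantize a gain to Q2.8 unsigned fixed point (2 integer bits, 8
 * fractional bits). The minimum is set just above zero, mirroring the
 * comment above about avoiding divisions by near-zero.
 */
inline uint16_t gainToQ2_8(double gain)
{
	double clamped = std::clamp(gain, 1.0 / 256.0, 1023.0 / 256.0);
	return static_cast<uint16_t>(std::lround(clamped * 256.0));
}
```

For example, filtering a 3000 K measurement against a 5000 K previous value with speed 0.2 yields 4600 K, so a sudden scene change is absorbed over several frames rather than in one step.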
RGB<double> Awb::calculateRgbMeans(const IPAFrameContext &frameContext, const rkisp1_cif_isp_awb_stat *awb) const
@@ -404,18 +391,12 @@ RGB<double> Awb::calculateRgbMeans(const IPAFrameContext &frameContext, const rk
rgbMeans = rgbMeans.max(0.0);
}
/*
* The ISP computes the AWB means after applying the CCM. Apply the
* inverse as we want to get the raw means before the colour gains.
*/
rgbMeans = frameContext.ccm.ccm.inverse() * rgbMeans;
/*
* The ISP computes the AWB means after applying the colour gains,
* divide by the gains that were used to get the raw means from the
* sensor. Apply a minimum value to avoid divisions by near-zero.
* sensor.
*/
rgbMeans /= frameContext.awb.gains.max(0.01);
rgbMeans /= frameContext.awb.gains;
return rgbMeans;
}
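The division above recovers the raw sensor means from the post-gain statistics; the master side of the hunk floors each gain at 0.01 so a near-zero gain cannot blow up the result. A toy per-channel sketch of that guard (the helper name `rawMeans` is hypothetical):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>

/*
 * Recover approximate pre-gain sensor means by dividing the post-gain AWB
 * means by the colour gains that were applied, flooring each gain at 0.01
 * to avoid divisions by near-zero values. rawMeans() is a hypothetical
 * helper, not part of the libcamera API.
 */
std::array<double, 3> rawMeans(std::array<double, 3> means,
			       const std::array<double, 3> &gains)
{
	for (unsigned int i = 0; i < 3; i++)
		means[i] /= std::max(gains[i], 0.01);
	return means;
}
```

A zero gain is clamped to 0.01 rather than producing an infinite mean, which keeps the downstream AWB regulation finite even on degenerate statistics.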

View file

@@ -7,6 +7,8 @@
#pragma once
#include <optional>
#include "libcamera/internal/vector.h"
#include "libipa/awb.h"

View file

@@ -36,25 +36,17 @@ namespace ipa::rkisp1::algorithms {
LOG_DEFINE_CATEGORY(RkISP1Ccm)
constexpr Matrix<float, 3, 3> kIdentity3x3 = Matrix<float, 3, 3>::identity();
/**
* \copydoc libcamera::ipa::Algorithm::init
*/
int Ccm::init([[maybe_unused]] IPAContext &context, const YamlObject &tuningData)
{
auto &cmap = context.ctrlMap;
cmap[&controls::ColourCorrectionMatrix] = ControlInfo(
ControlValue(-8.0f),
ControlValue(7.993f),
ControlValue(kIdentity3x3.data()));
int ret = ccm_.readYaml(tuningData["ccms"], "ct", "ccm");
if (ret < 0) {
LOG(RkISP1Ccm, Warning)
<< "Failed to parse 'ccm' "
<< "parameter from tuning file; falling back to unit matrix";
ccm_.setData({ { 0, kIdentity3x3 } });
ccm_.setData({ { 0, Matrix<float, 3, 3>::identity() } });
}
ret = offsets_.readYaml(tuningData["ccms"], "ct", "offsets");
@@ -69,51 +61,13 @@ int Ccm::init([[maybe_unused]] IPAContext &context, const YamlObject &tuningData
return 0;
}
/**
* \copydoc libcamera::ipa::Algorithm::configure
*/
int Ccm::configure(IPAContext &context,
[[maybe_unused]] const IPACameraSensorInfo &configInfo)
{
auto &as = context.activeState;
as.ccm.manual = kIdentity3x3;
as.ccm.automatic = ccm_.getInterpolated(as.awb.automatic.temperatureK);
return 0;
}
void Ccm::queueRequest(IPAContext &context,
[[maybe_unused]] const uint32_t frame,
IPAFrameContext &frameContext,
const ControlList &controls)
{
/* Nothing to do here, the ccm will be calculated in prepare() */
if (frameContext.awb.autoEnabled)
return;
auto &ccm = context.activeState.ccm;
const auto &colourTemperature = controls.get(controls::ColourTemperature);
const auto &ccmMatrix = controls.get(controls::ColourCorrectionMatrix);
if (ccmMatrix) {
ccm.manual = Matrix<float, 3, 3>(*ccmMatrix);
LOG(RkISP1Ccm, Debug)
<< "Setting manual CCM from CCM control to " << ccm.manual;
} else if (colourTemperature) {
ccm.manual = ccm_.getInterpolated(*colourTemperature);
LOG(RkISP1Ccm, Debug)
<< "Setting manual CCM from CT control to " << ccm.manual;
}
frameContext.ccm.ccm = ccm.manual;
}
void Ccm::setParameters(struct rkisp1_cif_isp_ctk_config &config,
const Matrix<float, 3, 3> &matrix,
const Matrix<int16_t, 3, 1> &offsets)
{
/*
* 4 bit integer and 7 bit fractional, ranging from -8 (0x400) to
* +7.9921875 (0x3ff)
* +7.992 (0x3ff)
*/
for (unsigned int i = 0; i < 3; i++) {
for (unsigned int j = 0; j < 3; j++)
@@ -134,16 +88,14 @@ void Ccm::setParameters(struct rkisp1_cif_isp_ctk_config &config,
void Ccm::prepare(IPAContext &context, const uint32_t frame,
IPAFrameContext &frameContext, RkISP1Params *params)
{
if (!frameContext.awb.autoEnabled) {
auto config = params->block<BlockType::Ctk>();
config.setEnabled(true);
setParameters(*config, frameContext.ccm.ccm, Matrix<int16_t, 3, 1>());
return;
}
uint32_t ct = context.activeState.awb.temperatureK;
uint32_t ct = frameContext.awb.temperatureK;
/*
* \todo The colour temperature will likely be noisy, add filtering to
* avoid updating the CCM matrix all the time.
*/
if (frame > 0 && ct == ct_) {
frameContext.ccm.ccm = context.activeState.ccm.automatic;
frameContext.ccm.ccm = context.activeState.ccm.ccm;
return;
}
@@ -151,7 +103,7 @@ void Ccm::prepare(IPAContext &context, const uint32_t frame,
Matrix<float, 3, 3> ccm = ccm_.getInterpolated(ct);
Matrix<int16_t, 3, 1> offsets = offsets_.getInterpolated(ct);
context.activeState.ccm.automatic = ccm;
context.activeState.ccm.ccm = ccm;
frameContext.ccm.ccm = ccm;
auto config = params->block<BlockType::Ctk>();
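The `getInterpolated()` calls above look up a CCM and an offset vector keyed by colour temperature from the tuning-file calibration points. A scalar stand-in for that piecewise-linear lookup (the function shown is a sketch, not the actual `Interpolator` implementation):

```cpp
#include <cassert>
#include <cmath>
#include <map>

/*
 * Piecewise-linear interpolation over calibration points keyed by colour
 * temperature: a scalar stand-in for the matrix interpolation performed by
 * ccm_.getInterpolated() above. Values outside the calibrated range clamp
 * to the nearest endpoint.
 */
double getInterpolated(const std::map<unsigned int, double> &points,
		       unsigned int ct)
{
	auto hi = points.lower_bound(ct);
	if (hi == points.begin())
		return hi->second;
	if (hi == points.end())
		return std::prev(hi)->second;

	auto lo = std::prev(hi);
	double t = double(ct - lo->first) / double(hi->first - lo->first);
	return lo->second * (1.0 - t) + hi->second * t;
}
```

With calibration points at 2800 K and 6500 K, a query at the midpoint 4650 K returns the average of the two calibrated values; queries below 2800 K or above 6500 K clamp to the nearest endpoint.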

View file

@@ -26,12 +26,6 @@ public:
~Ccm() = default;
int init(IPAContext &context, const YamlObject &tuningData) override;
int configure(IPAContext &context,
const IPACameraSensorInfo &configInfo) override;
void queueRequest(IPAContext &context,
const uint32_t frame,
IPAFrameContext &frameContext,
const ControlList &controls) override;
void prepare(IPAContext &context, const uint32_t frame,
IPAFrameContext &frameContext,
RkISP1Params *params) override;

View file

@@ -39,17 +39,6 @@ LOG_DEFINE_CATEGORY(RkISP1Filter)
static constexpr uint32_t kFiltLumWeightDefault = 0x00022040;
static constexpr uint32_t kFiltModeDefault = 0x000004f2;
/**
* \copydoc libcamera::ipa::Algorithm::init
*/
int Filter::init(IPAContext &context,
[[maybe_unused]] const YamlObject &tuningData)
{
auto &cmap = context.ctrlMap;
cmap[&controls::Sharpness] = ControlInfo(0.0f, 10.0f, 1.0f);
return 0;
}
/**
* \copydoc libcamera::ipa::Algorithm::queueRequest
*/

View file

@@ -21,7 +21,6 @@ public:
Filter() = default;
~Filter() = default;
int init(IPAContext &context, const YamlObject &tuningData) override;
void queueRequest(IPAContext &context, const uint32_t frame,
IPAFrameContext &frameContext,
const ControlList &controls) override;

View file

@@ -404,12 +404,12 @@ void LensShadingCorrection::copyTable(rkisp1_cif_isp_lsc_config &config,
/**
* \copydoc libcamera::ipa::Algorithm::prepare
*/
void LensShadingCorrection::prepare([[maybe_unused]] IPAContext &context,
void LensShadingCorrection::prepare(IPAContext &context,
[[maybe_unused]] const uint32_t frame,
IPAFrameContext &frameContext,
[[maybe_unused]] IPAFrameContext &frameContext,
RkISP1Params *params)
{
uint32_t ct = frameContext.awb.temperatureK;
uint32_t ct = context.activeState.awb.temperatureK;
if (std::abs(static_cast<int>(ct) - static_cast<int>(lastAppliedCt_)) <
kColourTemperatureChangeThreshhold)
return;

View file

@@ -191,36 +191,22 @@ namespace libcamera::ipa::rkisp1 {
* \var IPAActiveState::awb
* \brief State for the Automatic White Balance algorithm
*
* \struct IPAActiveState::awb::AwbState
* \brief Struct for the AWB regulation state
*
* \var IPAActiveState::awb::AwbState.gains
* \struct IPAActiveState::awb.gains
* \brief White balance gains
*
* \var IPAActiveState::awb::AwbState.temperatureK
* \brief Color temperature
* \var IPAActiveState::awb.gains.manual
* \brief Manual white balance gains (set through requests)
*
* \var IPAActiveState::awb.manual
* \brief Manual regulation state (set through requests)
* \var IPAActiveState::awb.gains.automatic
* \brief Automatic white balance gains (computed by the algorithm)
*
* \var IPAActiveState::awb.automatic
* \brief Automatic regulation state (computed by the algorithm)
* \var IPAActiveState::awb.temperatureK
* \brief Estimated color temperature
*
* \var IPAActiveState::awb.autoEnabled
* \brief Whether the Auto White Balance algorithm is enabled
*/
/**
* \var IPAActiveState::ccm
* \brief State for the Colour Correction Matrix algorithm
*
* \var IPAActiveState::ccm.manual
* \brief Manual CCM (set through requests)
*
* \var IPAActiveState::ccm.automatic
* \brief Automatic CCM (computed by the algorithm)
*/
/**
* \var IPAActiveState::cproc
* \brief State for the Color Processing algorithm
@@ -360,23 +346,12 @@ namespace libcamera::ipa::rkisp1 {
* \brief White balance gains
*
* \var IPAFrameContext::awb.temperatureK
* \brief Color temperature used for processing this frame
*
* This does not match the color temperature estimated for this frame as the
* measurements were taken on a previous frame.
* \brief Estimated color temperature
*
* \var IPAFrameContext::awb.autoEnabled
* \brief Whether the Auto White Balance algorithm is enabled
*/
/**
* \var IPAFrameContext::ccm
* \brief Colour Correction Matrix parameters for this frame
*
* \struct IPAFrameContext::ccm.ccm
* \brief Colour Correction Matrix
*/
/**
* \var IPAFrameContext::cproc
* \brief Color Processing parameters for this frame

View file

@@ -89,20 +89,17 @@ struct IPAActiveState {
} agc;
struct {
struct AwbState {
RGB<double> gains;
unsigned int temperatureK;
};
AwbState manual;
AwbState automatic;
struct {
RGB<double> manual;
RGB<double> automatic;
} gains;
unsigned int temperatureK;
bool autoEnabled;
} awb;
struct {
Matrix<float, 3, 3> manual;
Matrix<float, 3, 3> automatic;
Matrix<float, 3, 3> ccm;
} ccm;
struct {

View file

@@ -115,7 +115,10 @@ const IPAHwSettings ipaHwSettingsV12{
/* List of controls handled by the RkISP1 IPA */
const ControlInfoMap::Map rkisp1Controls{
{ &controls::AwbEnable, ControlInfo(false, true) },
{ &controls::ColourGains, ControlInfo(0.0f, 3.996f, 1.0f) },
{ &controls::DebugMetadataEnable, ControlInfo(false, true, false) },
{ &controls::Sharpness, ControlInfo(0.0f, 10.0f, 1.0f) },
{ &controls::draft::NoiseReductionMode, ControlInfo(controls::draft::NoiseReductionModeValues) },
};

View file

@@ -58,24 +58,23 @@ const ControlInfoMap::Map ipaControls{
/* \todo Move this to the Camera class */
{ &controls::AeEnable, ControlInfo(false, true, true) },
{ &controls::ExposureTimeMode,
ControlInfo({ { ControlValue(controls::ExposureTimeModeAuto),
ControlValue(controls::ExposureTimeModeManual) } },
ControlValue(controls::ExposureTimeModeAuto)) },
ControlInfo(static_cast<int32_t>(controls::ExposureTimeModeAuto),
static_cast<int32_t>(controls::ExposureTimeModeManual),
static_cast<int32_t>(controls::ExposureTimeModeAuto)) },
{ &controls::ExposureTime,
ControlInfo(1, 66666, static_cast<int32_t>(defaultExposureTime.get<std::micro>())) },
{ &controls::AnalogueGainMode,
ControlInfo({ { ControlValue(controls::AnalogueGainModeAuto),
ControlValue(controls::AnalogueGainModeManual) } },
ControlValue(controls::AnalogueGainModeAuto)) },
ControlInfo(static_cast<int32_t>(controls::AnalogueGainModeAuto),
static_cast<int32_t>(controls::AnalogueGainModeManual),
static_cast<int32_t>(controls::AnalogueGainModeAuto)) },
{ &controls::AnalogueGain, ControlInfo(1.0f, 16.0f, 1.0f) },
{ &controls::AeMeteringMode, ControlInfo(controls::AeMeteringModeValues) },
{ &controls::AeConstraintMode, ControlInfo(controls::AeConstraintModeValues) },
{ &controls::AeExposureMode, ControlInfo(controls::AeExposureModeValues) },
{ &controls::ExposureValue, ControlInfo(-8.0f, 8.0f, 0.0f) },
{ &controls::AeFlickerMode,
ControlInfo({ { ControlValue(controls::FlickerOff),
ControlValue(controls::FlickerManual) } },
ControlValue(controls::FlickerOff)) },
{ &controls::AeFlickerMode, ControlInfo(static_cast<int>(controls::FlickerOff),
static_cast<int>(controls::FlickerManual),
static_cast<int>(controls::FlickerOff)) },
{ &controls::AeFlickerPeriod, ControlInfo(100, 1000000) },
{ &controls::Brightness, ControlInfo(-1.0f, 1.0f, 0.0f) },
{ &controls::Contrast, ControlInfo(0.0f, 32.0f, 1.0f) },
@@ -233,6 +232,25 @@ int32_t IpaBase::configure(const IPACameraSensorInfo &sensorInfo, const ConfigPa
agcStatus.analogueGain = defaultAnalogueGain;
applyAGC(&agcStatus, ctrls);
/*
* Set the lens to the default (typically hyperfocal) position
* on first start.
*/
if (lensPresent_) {
RPiController::AfAlgorithm *af =
dynamic_cast<RPiController::AfAlgorithm *>(controller_.getAlgorithm("af"));
if (af) {
float defaultPos =
ipaAfControls.at(&controls::LensPosition).def().get<float>();
ControlList lensCtrl(lensCtrls_);
int32_t hwpos;
af->setLensPosition(defaultPos, &hwpos);
lensCtrl.set(V4L2_CID_FOCUS_ABSOLUTE, hwpos);
result->lensControls = std::move(lensCtrl);
}
}
}
result->sensorControls = std::move(ctrls);
@@ -262,20 +280,8 @@ int32_t IpaBase::configure(const IPACameraSensorInfo &sensorInfo, const ConfigPa
ctrlMap.merge(ControlInfoMap::Map(ipaColourControls));
/* Declare Autofocus controls, only if we have a controllable lens */
if (lensPresent_) {
if (lensPresent_)
ctrlMap.merge(ControlInfoMap::Map(ipaAfControls));
RPiController::AfAlgorithm *af =
dynamic_cast<RPiController::AfAlgorithm *>(controller_.getAlgorithm("af"));
if (af) {
double min, max, dflt;
af->getLensLimits(min, max);
dflt = af->getDefaultLensPosition();
ctrlMap[&controls::LensPosition] =
ControlInfo(static_cast<float>(min),
static_cast<float>(max),
static_cast<float>(dflt));
}
}
result->controlInfo = ControlInfoMap(std::move(ctrlMap), controls::controls);
@@ -313,35 +319,14 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
/* Make a note of this as it tells us the HDR status of the first few frames. */
hdrStatus_ = agcStatus.hdr;
/*
* AF: If no lens position was specified, drive lens to a default position.
* This had to be deferred (not initialised by a constructor) until here
* to ensure that exactly ONE starting position is sent to the lens driver.
* It should be the static API default, not dependent on AF range or mode.
*/
if (firstStart_ && lensPresent_) {
RPiController::AfAlgorithm *af = dynamic_cast<RPiController::AfAlgorithm *>(
controller_.getAlgorithm("af"));
if (af && !af->getLensPosition()) {
int32_t hwpos;
double pos = af->getDefaultLensPosition();
if (af->setLensPosition(pos, &hwpos, true)) {
ControlList lensCtrls(lensCtrls_);
lensCtrls.set(V4L2_CID_FOCUS_ABSOLUTE, hwpos);
setLensControls.emit(lensCtrls);
}
}
}
/*
* Initialise frame counts, and decide how many frames must be hidden or
* "mistrusted", which depends on whether this is a startup from cold,
* or merely a mode switch in a running system.
*/
unsigned int agcConvergenceFrames = 0, awbConvergenceFrames = 0;
frameCount_ = 0;
if (firstStart_) {
invalidCount_ = helper_->hideFramesStartup();
dropFrameCount_ = helper_->hideFramesStartup();
mistrustCount_ = helper_->mistrustFramesStartup();
/*
@@ -351,6 +336,7 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
* (mistrustCount_) that they won't see. But if zero (i.e.
* no convergence necessary), no frames need to be dropped.
*/
unsigned int agcConvergenceFrames = 0;
RPiController::AgcAlgorithm *agc = dynamic_cast<RPiController::AgcAlgorithm *>(
controller_.getAlgorithm("agc"));
if (agc) {
@@ -359,6 +345,7 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
agcConvergenceFrames += mistrustCount_;
}
unsigned int awbConvergenceFrames = 0;
RPiController::AwbAlgorithm *awb = dynamic_cast<RPiController::AwbAlgorithm *>(
controller_.getAlgorithm("awb"));
if (awb) {
@@ -366,18 +353,15 @@ void IpaBase::start(const ControlList &controls, StartResult *result)
if (awbConvergenceFrames)
awbConvergenceFrames += mistrustCount_;
}
dropFrameCount_ = std::max({ dropFrameCount_, agcConvergenceFrames, awbConvergenceFrames });
LOG(IPARPI, Debug) << "Drop " << dropFrameCount_ << " frames on startup";
} else {
invalidCount_ = helper_->hideFramesModeSwitch();
dropFrameCount_ = helper_->hideFramesModeSwitch();
mistrustCount_ = helper_->mistrustFramesModeSwitch();
}
result->startupFrameCount = std::max({ agcConvergenceFrames, awbConvergenceFrames });
result->invalidFrameCount = invalidCount_;
invalidCount_ = std::max({ invalidCount_, agcConvergenceFrames, awbConvergenceFrames });
LOG(IPARPI, Debug) << "Startup frames: " << result->startupFrameCount
<< " Invalid frames: " << result->invalidFrameCount;
result->dropFrameCount = dropFrameCount_;
firstStart_ = false;
lastRunTimestamp_ = 0;
@@ -457,7 +441,7 @@ void IpaBase::prepareIsp(const PrepareParams &params)
/* Allow a 10% margin on the comparison below. */
Duration delta = (frameTimestamp - lastRunTimestamp_) * 1.0ns;
if (lastRunTimestamp_ && frameCount_ > invalidCount_ &&
if (lastRunTimestamp_ && frameCount_ > dropFrameCount_ &&
delta < controllerMinFrameDuration * 0.9 && !hdrChange) {
/*
* Ensure we merge the previous frame's metadata with the current
@@ -962,17 +946,6 @@ void IpaBase::applyControls(const ControlList &controls)
break;
}
case controls::AE_ENABLE: {
/*
* The AeEnable control is now just a wrapper that will already have been
* converted to ExposureTimeMode and AnalogueGainMode equivalents, so there
* would be nothing to do here. Nonetheless, "handle" the control so as to
* avoid warnings from the "default:" clause of the switch statement.
*/
break;
}
case controls::AE_FLICKER_MODE: {
RPiController::AgcAlgorithm *agc = dynamic_cast<RPiController::AgcAlgorithm *>(
controller_.getAlgorithm("agc"));
@@ -1579,8 +1552,7 @@ void IpaBase::applyFrameDurations(Duration minFrameDuration, Duration maxFrameDu
RPiController::AgcAlgorithm *agc = dynamic_cast<RPiController::AgcAlgorithm *>(
controller_.getAlgorithm("agc"));
if (agc)
agc->setMaxExposureTime(maxExposureTime);
agc->setMaxExposureTime(maxExposureTime);
}
void IpaBase::applyAGC(const struct AgcStatus *agcStatus, ControlList &ctrls)

View file

@@ -115,8 +115,8 @@ private:
/* How many frames we should avoid running control algos on. */
unsigned int mistrustCount_;
/* Number of frames that need to be marked as dropped on startup. */
unsigned int invalidCount_;
/* Number of frames that need to be dropped on startup. */
unsigned int dropFrameCount_;
/* Frame timestamp for the last run of the controller. */
uint64_t lastRunTimestamp_;

View file

@@ -33,10 +33,6 @@ public:
*
* getMode() is provided mainly for validating controls.
* getLensPosition() is provided for populating DeviceStatus.
*
* getDefaultLensPosition() and getLensLimits() were added for
* populating ControlInfoMap. They return the static API limits
* which should be independent of the current range or mode.
*/
enum AfRange { AfRangeNormal = 0,
@@ -70,9 +66,7 @@ public:
}
virtual void setMode(AfMode mode) = 0;
virtual AfMode getMode() const = 0;
virtual double getDefaultLensPosition() const = 0;
virtual void getLensLimits(double &min, double &max) const = 0;
virtual bool setLensPosition(double dioptres, int32_t *hwpos, bool force = false) = 0;
virtual bool setLensPosition(double dioptres, int32_t *hwpos) = 0;
virtual std::optional<double> getLensPosition() const = 0;
virtual void triggerScan() = 0;
virtual void cancelScan() = 0;

View file

@@ -46,8 +46,6 @@ Af::SpeedDependentParams::SpeedDependentParams()
: stepCoarse(1.0),
stepFine(0.25),
contrastRatio(0.75),
retriggerRatio(0.75),
retriggerDelay(10),
pdafGain(-0.02),
pdafSquelch(0.125),
maxSlew(2.0),
@@ -62,7 +60,6 @@
confThresh(16),
confClip(512),
skipFrames(5),
checkForIR(false),
map()
{
}
@@ -90,8 +87,6 @@ void Af::SpeedDependentParams::read(const libcamera::YamlObject &params)
readNumber<double>(stepCoarse, params, "step_coarse");
readNumber<double>(stepFine, params, "step_fine");
readNumber<double>(contrastRatio, params, "contrast_ratio");
readNumber<double>(retriggerRatio, params, "retrigger_ratio");
readNumber<uint32_t>(retriggerDelay, params, "retrigger_delay");
readNumber<double>(pdafGain, params, "pdaf_gain");
readNumber<double>(pdafSquelch, params, "pdaf_squelch");
readNumber<double>(maxSlew, params, "max_slew");
@@ -142,7 +137,6 @@ int Af::CfgParams::read(const libcamera::YamlObject &params)
readNumber<uint32_t>(confThresh, params, "conf_thresh");
readNumber<uint32_t>(confClip, params, "conf_clip");
readNumber<uint32_t>(skipFrames, params, "skip_frames");
readNumber<bool>(checkForIR, params, "check_for_ir");
if (params.contains("map"))
map = params["map"].get<ipa::Pwl>(ipa::Pwl{});
@@ -182,38 +176,27 @@ Af::Af(Controller *controller)
useWindows_(false),
phaseWeights_(),
contrastWeights_(),
awbWeights_(),
scanState_(ScanState::Idle),
initted_(false),
irFlag_(false),
ftarget_(-1.0),
fsmooth_(-1.0),
prevContrast_(0.0),
oldSceneContrast_(0.0),
prevAverage_{ 0.0, 0.0, 0.0 },
oldSceneAverage_{ 0.0, 0.0, 0.0 },
prevPhase_(0.0),
skipCount_(0),
stepCount_(0),
dropCount_(0),
sameSignCount_(0),
sceneChangeCount_(0),
scanMaxContrast_(0.0),
scanMinContrast_(1.0e9),
scanStep_(0.0),
scanData_(),
reportState_(AfState::Idle)
{
/*
* Reserve space for data structures, to reduce memory fragmentation.
* It's too early to query the size of the PDAF sensor data, so guess.
* Reserve space for data, to reduce memory fragmentation. It's too early
* to query the size of the PDAF (from camera) and Contrast (from ISP)
* statistics, but these are plausible upper bounds.
*/
windows_.reserve(1);
phaseWeights_.w.reserve(16 * 12);
contrastWeights_.w.reserve(getHardwareConfig().focusRegions.width *
getHardwareConfig().focusRegions.height);
contrastWeights_.w.reserve(getHardwareConfig().awbRegions.width *
getHardwareConfig().awbRegions.height);
scanData_.reserve(32);
}
@@ -252,14 +235,13 @@ void Af::switchMode(CameraMode const &cameraMode, [[maybe_unused]] Metadata *met
<< statsRegion_.height;
invalidateWeights();
if (scanState_ >= ScanState::Coarse1 && scanState_ < ScanState::Settle) {
if (scanState_ >= ScanState::Coarse && scanState_ < ScanState::Settle) {
/*
* If a scan was in progress, re-start it, as CDAF statistics
* may have changed. Though if the application is just about
* to take a still picture, this will not help...
*/
startProgrammedScan();
updateLensPosition();
}
skipCount_ = cfg_.skipFrames;
}
@@ -325,7 +307,6 @@ void Af::invalidateWeights()
{
phaseWeights_.sum = 0;
contrastWeights_.sum = 0;
awbWeights_.sum = 0;
}
bool Af::getPhase(PdafRegions const &regions, double &phase, double &conf)
@@ -347,8 +328,9 @@ bool Af::getPhase(PdafRegions const &regions, double &phase, double &conf)
if (c >= cfg_.confThresh) {
if (c > cfg_.confClip)
c = cfg_.confClip;
c -= (cfg_.confThresh >> 1);
c -= (cfg_.confThresh >> 2);
sumWc += w * c;
c -= (cfg_.confThresh >> 2);
sumWcp += (int64_t)(w * c) * (int64_t)data.phase;
}
}
@@ -382,54 +364,6 @@ double Af::getContrast(const FocusRegions &focusStats)
return (contrastWeights_.sum > 0) ? ((double)sumWc / (double)contrastWeights_.sum) : 0.0;
}
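getPhase() above forms a confidence-weighted average of the per-region PDAF phase, ignoring regions whose confidence falls below the threshold. A simplified sketch of that weighting (the struct and helper names are hypothetical, and the per-region spatial weights and confidence clipping of the real code are omitted):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

/* Hypothetical per-region PDAF sample: defocus phase plus confidence. */
struct PdafSample {
	double phase;
	double conf;
};

/*
 * Confidence-weighted mean phase over the regions whose confidence clears
 * the threshold; returns 0.0 if no region qualifies. weightedPhase() is a
 * hypothetical helper, not the actual Af::getPhase() implementation.
 */
double weightedPhase(const std::vector<PdafSample> &samples,
		     double confThresh)
{
	double sumC = 0.0, sumCp = 0.0;

	for (const PdafSample &s : samples) {
		if (s.conf < confThresh)
			continue;
		sumC += s.conf;
		sumCp += s.conf * s.phase;
	}

	return sumC > 0.0 ? sumCp / sumC : 0.0;
}
```

High-confidence regions dominate the estimate, so a single noisy low-confidence region cannot drag the lens off target.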
/*
* Get the average R, G, B values in AF window[s] (from AWB statistics).
* Optionally, check if all of {R,G,B} are within 4:5 of each other
* across more than 50% of the counted area and within the AF window:
* for an RGB sensor this strongly suggests that IR lighting is in use.
*/
bool Af::getAverageAndTestIr(const RgbyRegions &awbStats, double rgb[3])
{
libcamera::Size size = awbStats.size();
if (size.height != awbWeights_.rows ||
size.width != awbWeights_.cols || awbWeights_.sum == 0) {
LOG(RPiAf, Debug) << "Recompute RGB weights " << size.width << 'x' << size.height;
computeWeights(&awbWeights_, size.height, size.width);
}
uint64_t sr = 0, sg = 0, sb = 0, sw = 1;
uint64_t greyCount = 0, allCount = 0;
for (unsigned i = 0; i < awbStats.numRegions(); ++i) {
uint64_t r = awbStats.get(i).val.rSum;
uint64_t g = awbStats.get(i).val.gSum;
uint64_t b = awbStats.get(i).val.bSum;
uint64_t w = awbWeights_.w[i];
if (w) {
sw += w;
sr += w * r;
sg += w * g;
sb += w * b;
}
if (cfg_.checkForIR) {
if (4 * r < 5 * b && 4 * b < 5 * r &&
4 * r < 5 * g && 4 * g < 5 * r &&
4 * b < 5 * g && 4 * g < 5 * b)
greyCount += awbStats.get(i).counted;
allCount += awbStats.get(i).counted;
}
}
rgb[0] = sr / (double)sw;
rgb[1] = sg / (double)sw;
rgb[2] = sb / (double)sw;
return (cfg_.checkForIR && 2 * greyCount > allCount &&
4 * sr < 5 * sb && 4 * sb < 5 * sr &&
4 * sr < 5 * sg && 4 * sg < 5 * sr &&
4 * sb < 5 * sg && 4 * sg < 5 * sb);
}
void Af::doPDAF(double phase, double conf)
{
/* Apply loop gain */
@@ -476,7 +410,7 @@ void Af::doPDAF(double phase, double conf)
bool Af::earlyTerminationByPhase(double phase)
{
if (scanData_.size() > 0 &&
scanData_[scanData_.size() - 1].conf >= cfg_.confThresh) {
scanData_[scanData_.size() - 1].conf >= cfg_.confEpsilon) {
double oldFocus = scanData_[scanData_.size() - 1].focus;
double oldPhase = scanData_[scanData_.size() - 1].phase;
@@ -485,12 +419,11 @@ bool Af::earlyTerminationByPhase(double phase)
* Interpolate/extrapolate the lens position for zero phase.
* Check that the extrapolation is well-conditioned.
*/
if ((ftarget_ - oldFocus) * (phase - oldPhase) * cfg_.speeds[speed_].pdafGain < 0.0) {
if ((ftarget_ - oldFocus) * (phase - oldPhase) > 0.0) {
double param = phase / (phase - oldPhase);
if ((-2.5 <= param || mode_ == AfModeContinuous) && param <= 3.0) {
LOG(RPiAf, Debug) << "ETBP: param=" << param;
param = std::max(param, -2.5);
if (-3.0 <= param && param <= 3.5) {
ftarget_ += param * (oldFocus - ftarget_);
LOG(RPiAf, Debug) << "ETBP: param=" << param;
return true;
}
}
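The interpolation in earlyTerminationByPhase() solves for the lens position at which the PDAF phase crosses zero, using the current and previous (focus, phase) samples. A minimal sketch of that linear inter/extrapolation, without the conditioning checks of the real code (the helper name `zeroPhaseFocus` is hypothetical):

```cpp
#include <cassert>
#include <cmath>

/*
 * Linearly inter/extrapolate the lens position of zero PDAF phase from two
 * (focus, phase) samples, as earlyTerminationByPhase() does above.
 * zeroPhaseFocus() is a hypothetical name for illustration only; the real
 * code additionally checks that the extrapolation is well-conditioned.
 */
double zeroPhaseFocus(double focus, double phase,
		      double oldFocus, double oldPhase)
{
	double param = phase / (phase - oldPhase);
	return focus + param * (oldFocus - focus);
}
```

For samples (0.0 dioptres, phase -2) and (1.0 dioptres, phase +2), the line crosses zero phase at 0.5 dioptres, so the scan can jump straight there instead of stepping onwards.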
@@ -503,28 +436,15 @@ double Af::findPeak(unsigned i) const
{
double f = scanData_[i].focus;
if (scanData_.size() >= 3) {
/*
* Given the sample with the highest contrast score and its two
* neighbours either side (or same side if at the end of a scan),
* solve for the best lens position by fitting a parabola.
* Adapted from awb.cpp: interpolateQuadratic()
*/
if (i == 0)
i++;
else if (i + 1 >= scanData_.size())
i--;
double abx = scanData_[i - 1].focus - scanData_[i].focus;
double aby = scanData_[i - 1].contrast - scanData_[i].contrast;
double cbx = scanData_[i + 1].focus - scanData_[i].focus;
double cby = scanData_[i + 1].contrast - scanData_[i].contrast;
double denom = 2.0 * (aby * cbx - cby * abx);
if (std::abs(denom) >= (1.0 / 64.0) && denom * abx > 0.0) {
f = (aby * cbx * cbx - cby * abx * abx) / denom;
f = std::clamp(f, std::min(abx, cbx), std::max(abx, cbx));
f += scanData_[i].focus;
if (i > 0 && i + 1 < scanData_.size()) {
double dropLo = scanData_[i].contrast - scanData_[i - 1].contrast;
double dropHi = scanData_[i].contrast - scanData_[i + 1].contrast;
if (0.0 <= dropLo && dropLo < dropHi) {
double param = 0.3125 * (1.0 - dropLo / dropHi) * (1.6 - dropLo / dropHi);
f += param * (scanData_[i - 1].focus - f);
} else if (0.0 <= dropHi && dropHi < dropLo) {
double param = 0.3125 * (1.0 - dropHi / dropLo) * (1.6 - dropHi / dropLo);
f += param * (scanData_[i + 1].focus - f);
}
}
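Both sides of the findPeak() hunk refine the best-contrast lens position from three neighbouring scan samples; the master version does it by fitting a parabola and taking its vertex. A standalone sketch of that vertex computation (the helper name `parabolaPeak` is hypothetical, and the `denom * abx > 0.0` concavity check of the real code is reduced to a magnitude test):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

/*
 * Vertex of the parabola through three (focus, contrast) samples, clamped
 * to the sampled interval, mirroring the quadratic peak-finding above.
 * parabolaPeak() is a hypothetical helper for illustration only.
 */
double parabolaPeak(double xa, double ya, double xb, double yb,
		    double xc, double yc)
{
	double abx = xa - xb, aby = ya - yb;
	double cbx = xc - xb, cby = yc - yb;
	double denom = 2.0 * (aby * cbx - cby * abx);

	if (std::abs(denom) < 1.0 / 64.0)
		return xb; /* Ill-conditioned: keep the best sample. */

	double f = (aby * cbx * cbx - cby * abx * abx) / denom;
	f = std::clamp(f, std::min(abx, cbx), std::max(abx, cbx));
	return xb + f;
}
```

For a symmetric triple such as contrasts (-1, 0, -1) at focus 1, 2, 3 the vertex lands on the middle sample; for contrasts (-2.25, -0.25, -0.25) it lands midway between the two equal samples, at 2.5.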
@@ -538,49 +458,36 @@ void Af::doScan(double contrast, double phase, double conf)
if (scanData_.empty() || contrast > scanMaxContrast_) {
scanMaxContrast_ = contrast;
scanMaxIndex_ = scanData_.size();
if (scanState_ != ScanState::Fine)
std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
}
if (contrast < scanMinContrast_)
scanMinContrast_ = contrast;
scanData_.emplace_back(ScanRecord{ ftarget_, contrast, phase, conf });
if ((scanStep_ >= 0.0 && ftarget_ >= cfg_.ranges[range_].focusMax) ||
(scanStep_ <= 0.0 && ftarget_ <= cfg_.ranges[range_].focusMin) ||
(scanState_ == ScanState::Fine && scanData_.size() >= 3) ||
contrast < cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
double pk = findPeak(scanMaxIndex_);
/*
* Finished a scan, by hitting a limit or due to contrast dropping off.
* If this is a first coarse scan and we didn't bracket the peak, reverse!
* If this is a fine scan, or no fine step was defined, we've finished.
* Otherwise, start fine scan in opposite direction.
*/
if (scanState_ == ScanState::Coarse1 &&
scanData_[0].contrast >= cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
scanStep_ = -scanStep_;
scanState_ = ScanState::Coarse2;
} else if (scanState_ == ScanState::Fine || cfg_.speeds[speed_].stepFine <= 0.0) {
ftarget_ = pk;
if (scanState_ == ScanState::Coarse) {
if (ftarget_ >= cfg_.ranges[range_].focusMax ||
contrast < cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
/*
* Finished coarse scan, or termination based on contrast.
* Jump to just after max contrast and start fine scan.
*/
ftarget_ = std::min(ftarget_, findPeak(scanMaxIndex_) +
2.0 * cfg_.speeds[speed_].stepFine);
scanState_ = ScanState::Fine;
scanData_.clear();
} else
ftarget_ += cfg_.speeds[speed_].stepCoarse;
} else { /* ScanState::Fine */
if (ftarget_ <= cfg_.ranges[range_].focusMin || scanData_.size() >= 5 ||
contrast < cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
/*
* Finished fine scan, or termination based on contrast.
* Use quadratic peak-finding to find best contrast position.
*/
ftarget_ = findPeak(scanMaxIndex_);
scanState_ = ScanState::Settle;
} else if (scanState_ == ScanState::Coarse1 &&
scanData_[0].contrast >= cfg_.speeds[speed_].contrastRatio * scanMaxContrast_) {
scanStep_ = -scanStep_;
scanState_ = ScanState::Coarse2;
} else if (scanStep_ >= 0.0) {
ftarget_ = std::min(pk + cfg_.speeds[speed_].stepFine,
cfg_.ranges[range_].focusMax);
scanStep_ = -cfg_.speeds[speed_].stepFine;
scanState_ = ScanState::Fine;
} else {
ftarget_ = std::max(pk - cfg_.speeds[speed_].stepFine,
cfg_.ranges[range_].focusMin);
scanStep_ = cfg_.speeds[speed_].stepFine;
scanState_ = ScanState::Fine;
}
scanData_.clear();
} else
ftarget_ += scanStep_;
} else
ftarget_ -= cfg_.speeds[speed_].stepFine;
}
stepCount_ = (ftarget_ == fsmooth_) ? 0 : cfg_.speeds[speed_].stepFrames;
}
@@ -594,70 +501,26 @@ void Af::doAF(double contrast, double phase, double conf)
return;
}
/* Count frames for which PDAF phase has had same sign */
if (phase * prevPhase_ <= 0.0)
sameSignCount_ = 0;
else
sameSignCount_++;
prevPhase_ = phase;
if (mode_ == AfModeManual)
return; /* nothing to do */
if (scanState_ == ScanState::Pdaf) {
/*
* Use PDAF closed-loop control whenever available, in both CAF
* mode and (for a limited number of iterations) when triggered.
* If PDAF fails (due to poor contrast, noise or large defocus)
* for at least dropoutFrames, fall back to a CDAF-based scan
* immediately (in triggered-auto) or on scene change (in CAF).
* If PDAF fails (due to poor contrast, noise or large defocus),
* fall back to a CDAF-based scan. To avoid "nuisance" scans,
* scan only after a number of frames with low PDAF confidence.
*/
if (conf >= cfg_.confEpsilon) {
if (mode_ == AfModeAuto || sameSignCount_ >= 3)
doPDAF(phase, conf);
if (conf > (dropCount_ ? 1.0 : 0.25) * cfg_.confEpsilon) {
doPDAF(phase, conf);
if (stepCount_ > 0)
stepCount_--;
else if (mode_ != AfModeContinuous)
scanState_ = ScanState::Idle;
oldSceneContrast_ = contrast;
std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
sceneChangeCount_ = 0;
dropCount_ = 0;
return;
} else {
dropCount_++;
if (dropCount_ < cfg_.speeds[speed_].dropoutFrames)
return;
if (mode_ != AfModeContinuous) {
startProgrammedScan();
return;
}
/* else fall through to waiting for a scene change */
}
}
if (scanState_ < ScanState::Coarse1 && mode_ == AfModeContinuous) {
/*
* In CAF mode, not in a scan, and PDAF is unavailable.
* Wait for a scene change, followed by stability.
*/
if (contrast + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneContrast_ ||
oldSceneContrast_ + 1.0 < cfg_.speeds[speed_].retriggerRatio * contrast ||
prevAverage_[0] + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneAverage_[0] ||
oldSceneAverage_[0] + 1.0 < cfg_.speeds[speed_].retriggerRatio * prevAverage_[0] ||
prevAverage_[1] + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneAverage_[1] ||
oldSceneAverage_[1] + 1.0 < cfg_.speeds[speed_].retriggerRatio * prevAverage_[1] ||
prevAverage_[2] + 1.0 < cfg_.speeds[speed_].retriggerRatio * oldSceneAverage_[2] ||
oldSceneAverage_[2] + 1.0 < cfg_.speeds[speed_].retriggerRatio * prevAverage_[2]) {
oldSceneContrast_ = contrast;
std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
sceneChangeCount_ = 1;
} else if (sceneChangeCount_)
sceneChangeCount_++;
if (sceneChangeCount_ >= cfg_.speeds[speed_].retriggerDelay)
} else if (++dropCount_ == cfg_.speeds[speed_].dropoutFrames)
startProgrammedScan();
} else if (scanState_ >= ScanState::Coarse1 && fsmooth_ == ftarget_) {
} else if (scanState_ >= ScanState::Coarse && fsmooth_ == ftarget_) {
/*
* CDAF-based scanning sequence.
* Scanning sequence. This means PDAF has become unavailable.
* Allow a delay between steps for CDAF FoM statistics to be
* updated, and a "settling time" at the end of the sequence.
* [A coarse or fine scan can be abandoned if two PDAF samples
@@ -676,14 +539,11 @@ void Af::doAF(double contrast, double phase, double conf)
scanState_ = ScanState::Pdaf;
else
scanState_ = ScanState::Idle;
dropCount_ = 0;
sceneChangeCount_ = 0;
oldSceneContrast_ = std::max(scanMaxContrast_, prevContrast_);
scanData_.clear();
} else if (conf >= cfg_.confThresh && earlyTerminationByPhase(phase)) {
std::copy(prevAverage_, prevAverage_ + 3, oldSceneAverage_);
} else if (conf >= cfg_.confEpsilon && earlyTerminationByPhase(phase)) {
scanState_ = ScanState::Settle;
stepCount_ = (mode_ == AfModeContinuous) ? 0 : cfg_.speeds[speed_].stepFrames;
stepCount_ = (mode_ == AfModeContinuous) ? 0
: cfg_.speeds[speed_].stepFrames;
} else
doScan(contrast, phase, conf);
}
@@ -713,8 +573,7 @@ void Af::updateLensPosition()
void Af::startAF()
{
/* Use PDAF if the tuning file allows it; else CDAF. */
if (cfg_.speeds[speed_].pdafGain != 0.0 &&
cfg_.speeds[speed_].dropoutFrames > 0 &&
if (cfg_.speeds[speed_].dropoutFrames > 0 &&
(mode_ == AfModeContinuous || cfg_.speeds[speed_].pdafFrames > 0)) {
if (!initted_) {
ftarget_ = cfg_.ranges[range_].focusDefault;
@@ -724,30 +583,16 @@ void Af::startAF()
scanState_ = ScanState::Pdaf;
scanData_.clear();
dropCount_ = 0;
oldSceneContrast_ = 0.0;
sceneChangeCount_ = 0;
reportState_ = AfState::Scanning;
} else {
} else
startProgrammedScan();
updateLensPosition();
}
}
void Af::startProgrammedScan()
{
if (!initted_ || mode_ != AfModeContinuous ||
fsmooth_ <= cfg_.ranges[range_].focusMin + 2.0 * cfg_.speeds[speed_].stepCoarse) {
ftarget_ = cfg_.ranges[range_].focusMin;
scanStep_ = cfg_.speeds[speed_].stepCoarse;
scanState_ = ScanState::Coarse2;
} else if (fsmooth_ >= cfg_.ranges[range_].focusMax - 2.0 * cfg_.speeds[speed_].stepCoarse) {
ftarget_ = cfg_.ranges[range_].focusMax;
scanStep_ = -cfg_.speeds[speed_].stepCoarse;
scanState_ = ScanState::Coarse2;
} else {
scanStep_ = -cfg_.speeds[speed_].stepCoarse;
scanState_ = ScanState::Coarse1;
}
ftarget_ = cfg_.ranges[range_].focusMin;
updateLensPosition();
scanState_ = ScanState::Coarse;
scanMaxContrast_ = 0.0;
scanMinContrast_ = 1.0e9;
scanMaxIndex_ = 0;
@@ -788,7 +633,7 @@ void Af::prepare(Metadata *imageMetadata)
uint32_t oldSt = stepCount_;
if (imageMetadata->get("pdaf.regions", regions) == 0)
getPhase(regions, phase, conf);
doAF(prevContrast_, phase, irFlag_ ? 0 : conf);
doAF(prevContrast_, phase, conf);
updateLensPosition();
LOG(RPiAf, Debug) << std::fixed << std::setprecision(2)
<< static_cast<unsigned int>(reportState_)
@@ -798,8 +643,7 @@ void Af::prepare(Metadata *imageMetadata)
<< " ft" << oldFt << "->" << ftarget_
<< " fs" << oldFs << "->" << fsmooth_
<< " cont=" << (int)prevContrast_
<< " phase=" << (int)phase << " conf=" << (int)conf
<< (irFlag_ ? " IR" : "");
<< " phase=" << (int)phase << " conf=" << (int)conf;
}
/* Report status and produce new lens setting */
@@ -812,8 +656,6 @@ void Af::prepare(Metadata *imageMetadata)
if (mode_ == AfModeAuto && scanState_ != ScanState::Idle)
status.state = AfState::Scanning;
else if (mode_ == AfModeManual)
status.state = AfState::Idle;
else
status.state = reportState_;
status.lensSetting = initted_ ? std::optional<int>(cfg_.map.eval(fsmooth_))
@@ -825,7 +667,6 @@ void Af::process(StatisticsPtr &stats, [[maybe_unused]] Metadata *imageMetadata)
{
(void)imageMetadata;
prevContrast_ = getContrast(stats->focusRegions);
irFlag_ = getAverageAndTestIr(stats->awbRegions, prevAverage_);
}
/* Controls */
@@ -874,23 +715,11 @@ void Af::setWindows(libcamera::Span<libcamera::Rectangle const> const &wins)
invalidateWeights();
}
double Af::getDefaultLensPosition() const
{
return cfg_.ranges[AfRangeNormal].focusDefault;
}
void Af::getLensLimits(double &min, double &max) const
{
/* Limits for manual focus are set by map, not by ranges */
min = cfg_.map.domain().start;
max = cfg_.map.domain().end;
}
bool Af::setLensPosition(double dioptres, int *hwpos, bool force)
bool Af::setLensPosition(double dioptres, int *hwpos)
{
bool changed = false;
if (mode_ == AfModeManual || force) {
if (mode_ == AfModeManual) {
LOG(RPiAf, Debug) << "setLensPosition: " << dioptres;
ftarget_ = cfg_.map.domain().clamp(dioptres);
changed = !(initted_ && fsmooth_ == ftarget_);
@@ -934,7 +763,7 @@ void Af::setMode(AfAlgorithm::AfMode mode)
pauseFlag_ = false;
if (mode == AfModeContinuous)
scanState_ = ScanState::Trigger;
else if (mode != AfModeAuto || scanState_ < ScanState::Coarse1)
else if (mode != AfModeAuto || scanState_ < ScanState::Coarse)
goIdle();
}
}
@@ -950,14 +779,12 @@ void Af::pause(AfAlgorithm::AfPause pause)
if (mode_ == AfModeContinuous) {
if (pause == AfPauseResume && pauseFlag_) {
pauseFlag_ = false;
if (scanState_ < ScanState::Coarse1)
if (scanState_ < ScanState::Coarse)
scanState_ = ScanState::Trigger;
} else if (pause != AfPauseResume && !pauseFlag_) {
pauseFlag_ = true;
if (pause == AfPauseImmediate || scanState_ < ScanState::Coarse1) {
scanState_ = ScanState::Idle;
scanData_.clear();
}
if (pause == AfPauseImmediate || scanState_ < ScanState::Coarse)
goIdle();
}
}
}


@@ -15,28 +15,20 @@
/*
* This algorithm implements a hybrid of CDAF and PDAF, favouring PDAF.
*
* Whenever PDAF is available (and reports sufficiently high confidence),
* it is used for continuous feedback control of the lens position. When
* triggered in Auto mode, we enable the loop for a limited number of frames
* (it may terminate sooner if the phase becomes small). In CAF mode, the
* PDAF loop runs continuously. Very small lens movements are suppressed.
* Whenever PDAF is available, it is used in a continuous feedback loop.
* When triggered in auto mode, we simply enable AF for a limited number
* of frames (it may terminate early if the delta becomes small enough).
*
* When PDAF confidence is low (due e.g. to low contrast or extreme defocus)
* or PDAF data are absent, fall back to CDAF with a programmed scan pattern.
* A coarse and fine scan are performed, using the ISP's CDAF contrast FoM
* to estimate the lens position with peak contrast. (This is slower due to
* extra latency in the ISP, and requires a settling time between steps.)
* The scan may terminate early if PDAF recovers and allows the zero-phase
* lens position to be interpolated.
* A coarse and fine scan are performed, using ISP's CDAF focus FoM to
* estimate the lens position with peak contrast. This is slower due to
* extra latency in the ISP, and requires a settling time between steps.
*
* In CAF mode, the fallback to a CDAF scan is triggered when PDAF fails to
* report high confidence and a configurable number of frames have elapsed
* since the last image change since either PDAF was working or a previous
* scan found peak contrast. Image changes are detected using both contrast
* and AWB statistics (within the AF window[s]).
* Some hysteresis is applied to the switch between PDAF and CDAF, to avoid
* "nuisance" scans. During each interval where PDAF is not working, only
* ONE scan will be performed; CAF cannot track objects using CDAF alone.
*
* IR lighting can interfere with the correct operation of PDAF, so we
* optionally try to detect it (from AWB statistics).
*/
namespace RPiController {
@@ -62,9 +54,7 @@ public:
void setWindows(libcamera::Span<libcamera::Rectangle const> const &wins) override;
void setMode(AfMode mode) override;
AfMode getMode() const override;
double getDefaultLensPosition() const override;
void getLensLimits(double &min, double &max) const override;
bool setLensPosition(double dioptres, int32_t *hwpos, bool force) override;
bool setLensPosition(double dioptres, int32_t *hwpos) override;
std::optional<double> getLensPosition() const override;
void triggerScan() override;
void cancelScan() override;
@@ -75,8 +65,7 @@ private:
Idle = 0,
Trigger,
Pdaf,
Coarse1,
Coarse2,
Coarse,
Fine,
Settle
};
@@ -91,11 +80,9 @@ private:
};
struct SpeedDependentParams {
double stepCoarse; /* in dioptres; used for scans */
double stepFine; /* in dioptres; used for scans */
double stepCoarse; /* used for scans */
double stepFine; /* used for scans */
double contrastRatio; /* used for scan termination and reporting */
double retriggerRatio; /* contrast and RGB ratio for re-triggering */
uint32_t retriggerDelay; /* frames of stability before re-triggering */
double pdafGain; /* coefficient for PDAF feedback loop */
double pdafSquelch; /* PDAF stability parameter (device-specific) */
double maxSlew; /* limit for lens movement per frame */
@@ -114,7 +101,6 @@ private:
uint32_t confThresh; /* PDAF confidence cell min (sensor-specific) */
uint32_t confClip; /* PDAF confidence cell max (sensor-specific) */
uint32_t skipFrames; /* frames to skip at start or modeswitch */
bool checkForIR; /* Set this if PDAF is unreliable in IR light */
libcamera::ipa::Pwl map; /* converts dioptres -> lens driver position */
CfgParams();
@@ -143,7 +129,6 @@ private:
void invalidateWeights();
bool getPhase(PdafRegions const &regions, double &phase, double &conf);
double getContrast(const FocusRegions &focusStats);
bool getAverageAndTestIr(const RgbyRegions &awbStats, double rgb[3]);
void doPDAF(double phase, double conf);
bool earlyTerminationByPhase(double phase);
double findPeak(unsigned index) const;
@@ -165,20 +150,15 @@ private:
bool useWindows_;
RegionWeights phaseWeights_;
RegionWeights contrastWeights_;
RegionWeights awbWeights_;
/* Working state. */
ScanState scanState_;
bool initted_, irFlag_;
bool initted_;
double ftarget_, fsmooth_;
double prevContrast_, oldSceneContrast_;
double prevAverage_[3], oldSceneAverage_[3];
double prevPhase_;
double prevContrast_;
unsigned skipCount_, stepCount_, dropCount_;
unsigned sameSignCount_;
unsigned sceneChangeCount_;
unsigned scanMaxIndex_;
double scanMaxContrast_, scanMinContrast_, scanStep_;
double scanMaxContrast_, scanMinContrast_;
std::vector<ScanRecord> scanData_;
AfState reportState_;
};


@@ -717,7 +717,7 @@ static double computeInitialY(StatisticsPtr &stats, AwbStatus const &awb,
/* Factor in the AWB correction if needed. */
if (stats->agcStatsPos == Statistics::AgcStatsPos::PreWb)
sum *= RGB<double>{ { awb.gainR, awb.gainG, awb.gainB } };
sum *= RGB<double>{ { awb.gainR, awb.gainR, awb.gainB } };
double ySum = ipa::rec601LuminanceFromRGB(sum);


@@ -165,6 +165,7 @@ int AwbConfig::read(const libcamera::YamlObject &params)
bayes = false;
}
}
fast = params["fast"].get<int>(bayes); /* default to fast for Bayesian, otherwise slow */
whitepointR = params["whitepoint_r"].get<double>(0.0);
whitepointB = params["whitepoint_b"].get<double>(0.0);
if (bayes == false)


@@ -43,6 +43,7 @@ struct AwbConfig {
uint16_t startupFrames;
unsigned int convergenceFrames; /* approx number of frames to converge */
double speed; /* IIR filter speed applied to algorithm results */
bool fast; /* "fast" mode uses a 16x16 rather than 32x32 grid */
libcamera::ipa::Pwl ctR; /* function maps CT to r (= R/G) */
libcamera::ipa::Pwl ctB; /* function maps CT to b (= B/G) */
libcamera::ipa::Pwl ctRInverse; /* inverse of ctR */

File diff suppressed because it is too large


@@ -1139,27 +1139,11 @@
"step_coarse": 1.0,
"step_fine": 0.25,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.016,
"pdaf_squelch": 0.125,
"max_slew": 1.5,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
},
"fast":
{
"step_coarse": 1.25,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.02,
"pdaf_squelch": 0.125,
"max_slew": 2.0,
"pdaf_frames": 16,
"dropout_frames": 4,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 4
}
},
@@ -1167,7 +1151,6 @@
"conf_thresh": 16,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": false,
"map": [ 0.0, 445, 15.0, 925 ]
}
},
@@ -1284,4 +1267,4 @@
}
}
]
}
}


@@ -1156,27 +1156,11 @@
"step_coarse": 1.0,
"step_fine": 0.25,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.016,
"pdaf_squelch": 0.125,
"max_slew": 1.5,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
},
"fast":
{
"step_coarse": 1.25,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.02,
"pdaf_squelch": 0.125,
"max_slew": 2.0,
"pdaf_frames": 16,
"dropout_frames": 4,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 4
}
},
@@ -1184,7 +1168,6 @@
"conf_thresh": 16,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": true,
"map": [ 0.0, 445, 15.0, 925 ]
}
},
@@ -1247,4 +1230,4 @@
}
}
]
}
}


@@ -1148,27 +1148,23 @@
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio" : 0.8,
"retrigger_delay" : 10,
"pdaf_gain": -0.03,
"pdaf_squelch": 0.2,
"max_slew": 3.0,
"max_slew": 4.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
"step_frames": 4
},
"fast":
{
"step_coarse": 2.5,
"step_fine": 0.0,
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio" : 0.8,
"retrigger_delay" : 8,
"pdaf_gain": -0.05,
"pdaf_squelch": 0.2,
"max_slew": 4.0,
"max_slew": 5.0,
"pdaf_frames": 16,
"dropout_frames": 4,
"dropout_frames": 6,
"step_frames": 4
}
},
@@ -1176,7 +1172,6 @@
"conf_thresh": 12,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": false,
"map": [ 0.0, 420, 35.0, 920 ]
}
},
@@ -1295,4 +1290,4 @@
}
}
]
}
}


@@ -1057,27 +1057,23 @@
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio" : 0.8,
"retrigger_delay" : 10,
"pdaf_gain": -0.03,
"pdaf_squelch": 0.2,
"max_slew": 3.0,
"max_slew": 4.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
"step_frames": 4
},
"fast":
{
"step_coarse": 2.5,
"step_fine": 0.0,
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio" : 0.8,
"retrigger_delay" : 8,
"pdaf_gain": -0.05,
"pdaf_squelch": 0.2,
"max_slew": 4.0,
"max_slew": 5.0,
"pdaf_frames": 16,
"dropout_frames": 4,
"dropout_frames": 6,
"step_frames": 4
}
},
@@ -1085,7 +1081,6 @@
"conf_thresh": 12,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": true,
"map": [ 0.0, 420, 35.0, 920 ]
}
},
@@ -1150,4 +1145,4 @@
}
}
]
}
}


@@ -3,7 +3,6 @@
conf_files = files([
'imx219.json',
'imx219_noir.json',
'imx283.json',
'imx290.json',
'imx296.json',
'imx296_mono.json',


@@ -14,25 +14,25 @@
{
"rpi.lux":
{
"reference_shutter_speed": 10857,
"reference_gain": 1.49,
"reference_shutter_speed": 2461,
"reference_gain": 1.0,
"reference_aperture": 1.0,
"reference_lux": 1050,
"reference_Y": 13959
"reference_lux": 1148,
"reference_Y": 13314
}
},
{
"rpi.noise":
{
"reference_constant": 0,
"reference_slope": 2.147
"reference_slope": 2.204
}
},
{
"rpi.geq":
{
"offset": 249,
"slope": 0.02036
"offset": 199,
"slope": 0.01947
}
},
{
@@ -104,35 +104,19 @@
{
"lo": 5500,
"hi": 6500
},
"cloudy":
{
"lo": 6000,
"hi": 6800
}
},
"bayes": 1,
"ct_curve":
[
2500.0, 0.9429, 0.2809,
2820.0, 0.8488, 0.3472,
2830.0, 0.8303, 0.3609,
2885.0, 0.8177, 0.3703,
3601.0, 0.6935, 0.4705,
3615.0, 0.6918, 0.4719,
3622.0, 0.6894, 0.4741,
4345.0, 0.5999, 0.5546,
4410.0, 0.5942, 0.5601,
4486.0, 0.5878, 0.5661,
4576.0, 0.5779, 0.5756,
5672.0, 0.5211, 0.6318,
5710.0, 0.5168, 0.6362,
6850.0, 0.4841, 0.6702
2213.0, 0.9607, 0.2593,
5313.0, 0.4822, 0.5909,
6237.0, 0.4739, 0.6308
],
"sensitivity_r": 1.0,
"sensitivity_b": 1.0,
"transverse_pos": 0.02601,
"transverse_neg": 0.0246
"transverse_pos": 0.0144,
"transverse_neg": 0.01
}
},
{
@@ -225,136 +209,7 @@
{
"omega": 1.3,
"n_iter": 100,
"luminance_strength": 0.8,
"calibrations_Cr": [
{
"ct": 2940,
"table":
[
1.021, 1.026, 1.028, 1.029, 1.031, 1.029, 1.029, 1.029, 1.029, 1.031, 1.031, 1.028, 1.027, 1.022, 1.013, 1.008,
1.022, 1.026, 1.027, 1.028, 1.027, 1.026, 1.026, 1.025, 1.026, 1.026, 1.027, 1.027, 1.027, 1.022, 1.014, 1.009,
1.023, 1.026, 1.026, 1.027, 1.026, 1.025, 1.024, 1.024, 1.024, 1.025, 1.026, 1.027, 1.026, 1.023, 1.017, 1.012,
1.024, 1.026, 1.026, 1.026, 1.025, 1.024, 1.024, 1.023, 1.023, 1.024, 1.025, 1.026, 1.026, 1.024, 1.018, 1.013,
1.024, 1.026, 1.026, 1.026, 1.025, 1.024, 1.023, 1.023, 1.023, 1.023, 1.024, 1.026, 1.026, 1.025, 1.019, 1.013,
1.025, 1.026, 1.026, 1.026, 1.025, 1.024, 1.023, 1.023, 1.023, 1.023, 1.024, 1.026, 1.026, 1.025, 1.018, 1.013,
1.025, 1.027, 1.027, 1.027, 1.026, 1.025, 1.024, 1.023, 1.023, 1.024, 1.024, 1.026, 1.026, 1.024, 1.018, 1.013,
1.025, 1.027, 1.028, 1.028, 1.027, 1.026, 1.025, 1.024, 1.024, 1.024, 1.025, 1.026, 1.026, 1.024, 1.017, 1.012,
1.024, 1.027, 1.029, 1.029, 1.028, 1.027, 1.026, 1.026, 1.025, 1.025, 1.026, 1.026, 1.025, 1.022, 1.014, 1.009,
1.024, 1.027, 1.029, 1.031, 1.031, 1.029, 1.028, 1.028, 1.028, 1.028, 1.027, 1.026, 1.025, 1.021, 1.011, 1.007,
1.022, 1.026, 1.031, 1.031, 1.031, 1.032, 1.031, 1.031, 1.029, 1.029, 1.028, 1.026, 1.022, 1.017, 1.007, 1.003,
1.019, 1.024, 1.029, 1.031, 1.032, 1.032, 1.032, 1.031, 1.029, 1.029, 1.027, 1.024, 1.019, 1.013, 1.003, 1.001
]
},
{
"ct": 4000,
"table":
[
1.027, 1.035, 1.039, 1.041, 1.043, 1.043, 1.043, 1.043, 1.044, 1.044, 1.044, 1.041, 1.041, 1.034, 1.021, 1.014,
1.029, 1.035, 1.038, 1.039, 1.039, 1.039, 1.039, 1.039, 1.041, 1.041, 1.041, 1.041, 1.041, 1.035, 1.024, 1.017,
1.029, 1.034, 1.036, 1.038, 1.038, 1.038, 1.039, 1.039, 1.039, 1.039, 1.039, 1.041, 1.039, 1.036, 1.027, 1.021,
1.031, 1.034, 1.036, 1.036, 1.037, 1.037, 1.038, 1.037, 1.037, 1.038, 1.038, 1.039, 1.039, 1.037, 1.029, 1.021,
1.031, 1.034, 1.035, 1.036, 1.037, 1.037, 1.037, 1.037, 1.037, 1.037, 1.038, 1.038, 1.038, 1.037, 1.029, 1.022,
1.031, 1.034, 1.035, 1.036, 1.037, 1.037, 1.037, 1.036, 1.036, 1.036, 1.037, 1.038, 1.038, 1.037, 1.029, 1.022,
1.031, 1.035, 1.036, 1.037, 1.037, 1.037, 1.037, 1.036, 1.036, 1.036, 1.037, 1.038, 1.038, 1.036, 1.028, 1.021,
1.031, 1.034, 1.036, 1.037, 1.037, 1.037, 1.036, 1.036, 1.036, 1.036, 1.036, 1.037, 1.037, 1.035, 1.026, 1.019,
1.028, 1.034, 1.037, 1.037, 1.037, 1.037, 1.037, 1.036, 1.036, 1.036, 1.037, 1.037, 1.037, 1.033, 1.022, 1.016,
1.028, 1.034, 1.037, 1.038, 1.039, 1.038, 1.037, 1.037, 1.037, 1.037, 1.037, 1.037, 1.035, 1.031, 1.017, 1.011,
1.025, 1.031, 1.036, 1.039, 1.039, 1.039, 1.038, 1.038, 1.038, 1.038, 1.038, 1.036, 1.031, 1.024, 1.011, 1.006,
1.021, 1.028, 1.034, 1.037, 1.039, 1.039, 1.039, 1.038, 1.038, 1.038, 1.036, 1.033, 1.027, 1.019, 1.006, 1.001
]
},
{
"ct": 6000,
"table":
[
1.026, 1.037, 1.048, 1.054, 1.057, 1.058, 1.059, 1.059, 1.061, 1.059, 1.059, 1.056, 1.049, 1.038, 1.019, 1.013,
1.031, 1.039, 1.049, 1.054, 1.057, 1.058, 1.059, 1.059, 1.059, 1.059, 1.059, 1.056, 1.051, 1.042, 1.026, 1.018,
1.033, 1.044, 1.051, 1.054, 1.057, 1.058, 1.059, 1.059, 1.059, 1.059, 1.058, 1.058, 1.055, 1.046, 1.031, 1.023,
1.035, 1.045, 1.051, 1.055, 1.057, 1.059, 1.059, 1.059, 1.059, 1.059, 1.059, 1.058, 1.056, 1.049, 1.035, 1.026,
1.037, 1.046, 1.052, 1.055, 1.058, 1.059, 1.059, 1.059, 1.059, 1.059, 1.059, 1.058, 1.057, 1.051, 1.037, 1.027,
1.037, 1.047, 1.053, 1.056, 1.059, 1.059, 1.061, 1.059, 1.059, 1.059, 1.059, 1.058, 1.057, 1.051, 1.037, 1.027,
1.037, 1.047, 1.053, 1.057, 1.059, 1.059, 1.061, 1.061, 1.059, 1.059, 1.059, 1.058, 1.056, 1.049, 1.036, 1.026,
1.037, 1.047, 1.054, 1.057, 1.059, 1.059, 1.061, 1.061, 1.059, 1.059, 1.059, 1.058, 1.056, 1.048, 1.034, 1.025,
1.034, 1.045, 1.054, 1.057, 1.059, 1.059, 1.059, 1.059, 1.059, 1.059, 1.058, 1.057, 1.053, 1.045, 1.029, 1.021,
1.032, 1.043, 1.052, 1.057, 1.058, 1.059, 1.059, 1.059, 1.059, 1.059, 1.058, 1.055, 1.049, 1.041, 1.022, 1.013,
1.028, 1.037, 1.048, 1.053, 1.057, 1.059, 1.059, 1.059, 1.059, 1.058, 1.056, 1.051, 1.044, 1.032, 1.013, 1.007,
1.021, 1.033, 1.044, 1.051, 1.055, 1.058, 1.059, 1.059, 1.058, 1.057, 1.052, 1.047, 1.039, 1.026, 1.007, 1.001
]
}
],
"calibrations_Cb": [
{
"ct": 2940,
"table":
[
1.002, 1.012, 1.031, 1.042, 1.051, 1.056, 1.058, 1.058, 1.058, 1.058, 1.057, 1.055, 1.045, 1.033, 1.017, 1.016,
1.011, 1.026, 1.041, 1.048, 1.056, 1.063, 1.066, 1.067, 1.067, 1.066, 1.064, 1.061, 1.051, 1.045, 1.028, 1.017,
1.016, 1.033, 1.047, 1.056, 1.063, 1.067, 1.071, 1.072, 1.072, 1.071, 1.068, 1.064, 1.061, 1.051, 1.033, 1.024,
1.021, 1.038, 1.051, 1.061, 1.067, 1.071, 1.073, 1.075, 1.075, 1.074, 1.071, 1.068, 1.063, 1.054, 1.036, 1.025,
1.023, 1.041, 1.054, 1.063, 1.069, 1.073, 1.075, 1.077, 1.077, 1.076, 1.074, 1.069, 1.064, 1.055, 1.038, 1.027,
1.023, 1.043, 1.055, 1.063, 1.069, 1.074, 1.076, 1.078, 1.078, 1.077, 1.075, 1.071, 1.065, 1.056, 1.039, 1.028,
1.023, 1.043, 1.055, 1.063, 1.069, 1.074, 1.076, 1.077, 1.078, 1.076, 1.074, 1.071, 1.065, 1.056, 1.039, 1.028,
1.023, 1.041, 1.052, 1.062, 1.068, 1.072, 1.074, 1.076, 1.076, 1.075, 1.073, 1.069, 1.064, 1.055, 1.038, 1.028,
1.021, 1.038, 1.051, 1.059, 1.066, 1.069, 1.072, 1.074, 1.074, 1.073, 1.069, 1.067, 1.062, 1.052, 1.036, 1.027,
1.018, 1.032, 1.046, 1.055, 1.061, 1.066, 1.069, 1.069, 1.069, 1.069, 1.067, 1.062, 1.057, 1.047, 1.031, 1.021,
1.011, 1.023, 1.039, 1.049, 1.056, 1.061, 1.062, 1.064, 1.065, 1.064, 1.062, 1.058, 1.049, 1.038, 1.021, 1.016,
1.001, 1.019, 1.035, 1.046, 1.053, 1.058, 1.061, 1.062, 1.062, 1.062, 1.059, 1.053, 1.043, 1.033, 1.016, 1.011
]
},
{
"ct": 4000,
"table":
[
1.001, 1.003, 1.011, 1.016, 1.019, 1.019, 1.021, 1.021, 1.019, 1.019, 1.019, 1.017, 1.017, 1.013, 1.007, 1.006,
1.003, 1.011, 1.015, 1.021, 1.024, 1.026, 1.027, 1.027, 1.027, 1.026, 1.025, 1.023, 1.022, 1.016, 1.012, 1.007,
1.007, 1.015, 1.021, 1.024, 1.027, 1.029, 1.031, 1.031, 1.031, 1.029, 1.028, 1.026, 1.024, 1.022, 1.015, 1.011,
1.011, 1.017, 1.023, 1.027, 1.029, 1.032, 1.033, 1.033, 1.033, 1.033, 1.031, 1.028, 1.026, 1.024, 1.016, 1.011,
1.012, 1.019, 1.025, 1.029, 1.032, 1.033, 1.034, 1.035, 1.035, 1.034, 1.033, 1.031, 1.028, 1.025, 1.018, 1.014,
1.013, 1.021, 1.026, 1.031, 1.033, 1.034, 1.036, 1.036, 1.036, 1.035, 1.034, 1.032, 1.029, 1.026, 1.019, 1.015,
1.013, 1.021, 1.026, 1.031, 1.033, 1.035, 1.036, 1.037, 1.037, 1.036, 1.034, 1.032, 1.029, 1.027, 1.019, 1.016,
1.013, 1.021, 1.026, 1.031, 1.033, 1.035, 1.036, 1.036, 1.036, 1.036, 1.035, 1.033, 1.031, 1.027, 1.021, 1.016,
1.013, 1.021, 1.025, 1.029, 1.032, 1.034, 1.035, 1.035, 1.036, 1.035, 1.034, 1.032, 1.031, 1.027, 1.021, 1.015,
1.012, 1.019, 1.024, 1.027, 1.029, 1.032, 1.034, 1.034, 1.034, 1.034, 1.032, 1.031, 1.029, 1.026, 1.019, 1.015,
1.009, 1.015, 1.022, 1.025, 1.028, 1.029, 1.031, 1.032, 1.032, 1.031, 1.031, 1.029, 1.026, 1.023, 1.017, 1.015,
1.005, 1.014, 1.021, 1.025, 1.027, 1.029, 1.029, 1.031, 1.031, 1.031, 1.029, 1.029, 1.024, 1.021, 1.016, 1.015
]
},
{
"ct": 6000,
"table":
[
1.001, 1.001, 1.006, 1.007, 1.008, 1.009, 1.009, 1.009, 1.009, 1.009, 1.009, 1.011, 1.011, 1.011, 1.009, 1.008,
1.001, 1.005, 1.008, 1.011, 1.012, 1.013, 1.014, 1.014, 1.014, 1.013, 1.013, 1.014, 1.014, 1.012, 1.011, 1.009,
1.004, 1.008, 1.011, 1.012, 1.014, 1.016, 1.016, 1.016, 1.016, 1.016, 1.015, 1.015, 1.015, 1.014, 1.012, 1.011,
1.005, 1.009, 1.012, 1.014, 1.016, 1.017, 1.018, 1.018, 1.018, 1.018, 1.017, 1.016, 1.016, 1.015, 1.012, 1.011,
1.006, 1.011, 1.013, 1.015, 1.017, 1.018, 1.018, 1.019, 1.019, 1.019, 1.018, 1.017, 1.016, 1.015, 1.012, 1.011,
1.007, 1.011, 1.013, 1.015, 1.017, 1.018, 1.019, 1.019, 1.019, 1.019, 1.019, 1.018, 1.017, 1.016, 1.013, 1.011,
1.007, 1.012, 1.013, 1.015, 1.017, 1.018, 1.019, 1.019, 1.019, 1.019, 1.019, 1.018, 1.018, 1.017, 1.014, 1.013,
1.007, 1.012, 1.013, 1.015, 1.016, 1.018, 1.019, 1.019, 1.019, 1.019, 1.019, 1.018, 1.018, 1.017, 1.015, 1.014,
1.007, 1.011, 1.012, 1.014, 1.016, 1.017, 1.018, 1.018, 1.019, 1.019, 1.019, 1.018, 1.018, 1.018, 1.016, 1.015,
1.007, 1.011, 1.012, 1.013, 1.015, 1.016, 1.017, 1.017, 1.018, 1.018, 1.018, 1.018, 1.018, 1.017, 1.016, 1.015,
1.006, 1.009, 1.012, 1.013, 1.014, 1.015, 1.015, 1.016, 1.017, 1.017, 1.017, 1.017, 1.017, 1.017, 1.017, 1.016,
1.005, 1.009, 1.012, 1.013, 1.015, 1.015, 1.015, 1.015, 1.016, 1.017, 1.017, 1.017, 1.017, 1.017, 1.017, 1.017
]
}
],
"luminance_lut":
[
1.223, 1.187, 1.129, 1.085, 1.061, 1.049, 1.046, 1.046, 1.046, 1.051, 1.061, 1.089, 1.134, 1.212, 1.359, 1.367,
1.188, 1.141, 1.098, 1.065, 1.048, 1.037, 1.029, 1.029, 1.034, 1.036, 1.046, 1.066, 1.095, 1.158, 1.269, 1.359,
1.158, 1.109, 1.073, 1.049, 1.035, 1.025, 1.019, 1.016, 1.017, 1.022, 1.033, 1.047, 1.072, 1.127, 1.219, 1.269,
1.147, 1.092, 1.058, 1.039, 1.026, 1.017, 1.011, 1.007, 1.009, 1.015, 1.022, 1.035, 1.058, 1.107, 1.191, 1.236,
1.144, 1.082, 1.051, 1.033, 1.021, 1.011, 1.005, 1.002, 1.004, 1.009, 1.017, 1.031, 1.051, 1.097, 1.177, 1.232,
1.144, 1.081, 1.049, 1.031, 1.018, 1.008, 1.002, 1.001, 1.001, 1.006, 1.015, 1.029, 1.048, 1.096, 1.177, 1.232,
1.144, 1.084, 1.051, 1.032, 1.018, 1.009, 1.004, 1.001, 1.002, 1.009, 1.016, 1.029, 1.051, 1.098, 1.183, 1.232,
1.149, 1.096, 1.057, 1.037, 1.022, 1.014, 1.008, 1.005, 1.007, 1.012, 1.019, 1.033, 1.059, 1.113, 1.205, 1.248,
1.166, 1.117, 1.071, 1.046, 1.031, 1.021, 1.014, 1.012, 1.014, 1.019, 1.029, 1.045, 1.078, 1.141, 1.247, 1.314,
1.202, 1.151, 1.096, 1.061, 1.044, 1.031, 1.023, 1.021, 1.022, 1.029, 1.044, 1.067, 1.109, 1.182, 1.314, 1.424,
1.242, 1.202, 1.134, 1.088, 1.061, 1.045, 1.038, 1.036, 1.039, 1.048, 1.066, 1.103, 1.157, 1.248, 1.424, 1.532,
1.318, 1.238, 1.162, 1.111, 1.078, 1.059, 1.048, 1.048, 1.049, 1.063, 1.089, 1.133, 1.189, 1.296, 1.532, 1.606
],
"sigma": 0.00175,
"sigma_Cb": 0.00268
"luminance_strength": 0.7
}
},
{
@@ -404,138 +259,48 @@
{
"ccms": [
{
"ct": 2500,
"ct": 2213,
"ccm":
[
1.82257, -0.40941, -0.41316,
-0.52091, 1.83005, -0.30915,
0.22503, -1.41259, 2.18757
1.91264, -0.27609, -0.63655,
-0.65708, 2.11718, -0.46009,
0.03629, -1.38441, 2.34811
]
},
{
"ct": 2820,
"ct": 2255,
"ccm":
[
1.80564, -0.47587, -0.32977,
-0.47385, 1.83075, -0.35691,
0.21369, -1.22609, 2.01239
1.90369, -0.29309, -0.61059,
-0.64693, 2.08169, -0.43476,
0.04086, -1.29999, 2.25914
]
},
{
"ct": 2830,
"ct": 2259,
"ccm":
[
1.80057, -0.51479, -0.28578,
-0.64031, 2.16074, -0.52044,
0.11794, -0.95667, 1.83873
1.92762, -0.35134, -0.57628,
-0.63523, 2.08481, -0.44958,
0.06754, -1.32953, 2.26199
]
},
{
"ct": 2885,
"ct": 5313,
"ccm":
[
1.78452, -0.49769, -0.28683,
-0.63651, 2.13634, -0.49983,
0.08547, -0.86501, 1.77954
1.75924, -0.54053, -0.21871,
-0.38159, 1.88671, -0.50511,
-0.00747, -0.53492, 1.54239
]
},
{
"ct": 3601,
"ct": 6237,
"ccm":
[
1.85165, -0.57008, -0.28156,
-0.56249, 2.08321, -0.52072,
0.03724, -0.70964, 1.67239
]
},
{
"ct": 3615,
"ccm":
[
1.87611, -0.60772, -0.26839,
-0.55497, 2.07257, -0.51761,
0.04151, -0.70635, 1.66485
]
},
{
"ct": 3622,
"ccm":
[
1.85505, -0.58542, -0.26963,
-0.55053, 2.05981, -0.50928,
0.04005, -0.69302, 1.65297
]
},
{
"ct": 4345,
"ccm":
[
1.81872, -0.57511, -0.24361,
-0.49071, 2.16621, -0.67551,
0.02641, -0.67838, 1.65196
]
},
{
"ct": 4410,
"ccm":
[
1.83689, -0.60178, -0.23512,
-0.48204, 2.14729, -0.66525,
0.02773, -0.67615, 1.64841
]
},
{
"ct": 4486,
"ccm":
[
1.85101, -0.60733, -0.24368,
-0.47635, 2.13101, -0.65465,
0.02229, -0.66412, 1.64183
]
},
{
"ct": 4576,
"ccm":
[
1.84076, -0.59449, -0.24626,
-0.47307, 2.13369, -0.66062,
0.01984, -0.65788, 1.63804
]
},
{
"ct": 5657,
"ccm":
[
1.84536, -0.57827, -0.26709,
-0.44532, 2.04086, -0.59554,
-0.01738, -0.52806, 1.54544
]
},
{
"ct": 5672,
"ccm":
[
1.84251, -0.57486, -0.26765,
-0.44925, 2.04615, -0.59689,
-0.03179, -0.51748, 1.54928
]
},
{
"ct": 5710,
"ccm":
[
1.84081, -0.58127, -0.25953,
-0.44169, 2.03593, -0.59424,
-0.02503, -0.52696, 1.55199
]
},
{
"ct": 6850,
"ccm":
[
1.80426, -0.22567, -0.57859,
-0.48629, 2.49024, -1.00395,
-0.10865, -0.63841, 1.74705
2.19299, -0.74764, -0.44536,
-0.51678, 2.27651, -0.75972,
-0.06498, -0.74269, 1.80767
]
}
]


@@ -638,27 +638,11 @@
"step_coarse": 1.0,
"step_fine": 0.25,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.016,
"pdaf_squelch": 0.125,
"max_slew": 1.5,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
},
"fast":
{
"step_coarse": 1.25,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.02,
"pdaf_squelch": 0.125,
"max_slew": 2.0,
"pdaf_frames": 16,
"dropout_frames": 4,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 4
}
},
@@ -666,7 +650,6 @@
"conf_thresh": 16,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": false,
"map": [ 0.0, 445, 15.0, 925 ]
}
},
@@ -685,4 +668,4 @@
}
}
]
}
}


@@ -737,27 +737,11 @@
"step_coarse": 1.0,
"step_fine": 0.25,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.016,
"pdaf_squelch": 0.125,
"max_slew": 1.5,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
},
"fast":
{
"step_coarse": 1.25,
"step_fine": 0.0,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.02,
"pdaf_squelch": 0.125,
"max_slew": 2.0,
"pdaf_frames": 16,
"dropout_frames": 4,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 4
}
},
@@ -765,7 +749,6 @@
"conf_thresh": 16,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": true,
"map": [ 0.0, 445, 15.0, 925 ]
}
},
@@ -784,4 +767,4 @@
}
}
]
}
}


@@ -637,27 +637,23 @@
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.03,
"pdaf_squelch": 0.2,
"max_slew": 3.0,
"max_slew": 4.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
"step_frames": 4
},
"fast":
{
"step_coarse": 2.5,
"step_fine": 0.0,
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.05,
"pdaf_squelch": 0.2,
"max_slew": 4.0,
"max_slew": 5.0,
"pdaf_frames": 16,
"dropout_frames": 4,
"dropout_frames": 6,
"step_frames": 4
}
},
@@ -665,7 +661,6 @@
"conf_thresh": 12,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": false,
"map": [ 0.0, 420, 35.0, 920 ]
}
},
@@ -684,4 +679,4 @@
}
}
]
}
}


@@ -628,27 +628,23 @@
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 10,
"pdaf_gain": -0.03,
"pdaf_squelch": 0.2,
"max_slew": 3.0,
"max_slew": 4.0,
"pdaf_frames": 20,
"dropout_frames": 6,
"step_frames": 5
"step_frames": 4
},
"fast":
{
"step_coarse": 2.5,
"step_fine": 0.0,
"step_coarse": 2.0,
"step_fine": 0.5,
"contrast_ratio": 0.75,
"retrigger_ratio": 0.8,
"retrigger_delay": 8,
"pdaf_gain": -0.05,
"pdaf_squelch": 0.2,
"max_slew": 4.0,
"max_slew": 5.0,
"pdaf_frames": 16,
"dropout_frames": 4,
"dropout_frames": 6,
"step_frames": 4
}
},
@@ -656,7 +652,6 @@
"conf_thresh": 12,
"conf_clip": 512,
"skip_frames": 5,
"check_for_ir": true,
"map": [ 0.0, 420, 35.0, 920 ]
}
},
@@ -675,4 +670,4 @@
}
}
]
}
}


@@ -40,7 +40,6 @@ void Awb::prepare(IPAContext &context,
[[maybe_unused]] DebayerParams *params)
{
auto &gains = context.activeState.awb.gains;
/* Just report, the gains are applied in LUT algorithm. */
frameContext.gains.red = gains.r();
frameContext.gains.blue = gains.b();
}


@@ -3,7 +3,7 @@
* Copyright (C) 2024, Ideas On Board
* Copyright (C) 2024-2025, Red Hat Inc.
*
* Color correction matrix + saturation
* Color correction matrix
*/
#include "ccm.h"
@@ -13,8 +13,6 @@
#include <libcamera/control_ids.h>
#include "libcamera/internal/matrix.h"
namespace {
constexpr unsigned int kTemperatureThreshold = 100;
@@ -37,77 +35,28 @@ int Ccm::init([[maybe_unused]] IPAContext &context, const YamlObject &tuningData
}
context.ccmEnabled = true;
context.ctrlMap[&controls::Saturation] = ControlInfo(0.0f, 2.0f, 1.0f);
return 0;
}
int Ccm::configure(IPAContext &context,
[[maybe_unused]] const IPAConfigInfo &configInfo)
{
context.activeState.knobs.saturation = std::optional<double>();
return 0;
}
void Ccm::queueRequest(typename Module::Context &context,
[[maybe_unused]] const uint32_t frame,
[[maybe_unused]] typename Module::FrameContext &frameContext,
const ControlList &controls)
{
const auto &saturation = controls.get(controls::Saturation);
if (saturation.has_value()) {
context.activeState.knobs.saturation = saturation;
LOG(IPASoftCcm, Debug) << "Setting saturation to " << saturation.value();
}
}
void Ccm::applySaturation(Matrix<float, 3, 3> &ccm, float saturation)
{
/* https://en.wikipedia.org/wiki/YCbCr#ITU-R_BT.601_conversion */
const Matrix<float, 3, 3> rgb2ycbcr{
{ 0.256788235294, 0.504129411765, 0.0979058823529,
-0.148223529412, -0.290992156863, 0.439215686275,
0.439215686275, -0.367788235294, -0.0714274509804 }
};
const Matrix<float, 3, 3> ycbcr2rgb{
{ 1.16438356164, 0, 1.59602678571,
1.16438356164, -0.391762290094, -0.812967647235,
1.16438356164, 2.01723214285, 0 }
};
const Matrix<float, 3, 3> saturationMatrix{
{ 1, 0, 0,
0, saturation, 0,
0, 0, saturation }
};
ccm = ycbcr2rgb * saturationMatrix * rgb2ycbcr * ccm;
}
void Ccm::prepare(IPAContext &context, const uint32_t frame,
IPAFrameContext &frameContext, [[maybe_unused]] DebayerParams *params)
{
auto &saturation = context.activeState.knobs.saturation;
const unsigned int ct = context.activeState.awb.temperatureK;
/* Change CCM only on saturation or bigger temperature changes. */
/* Change CCM only on bigger temperature changes. */
if (frame > 0 &&
utils::abs_diff(ct, lastCt_) < kTemperatureThreshold &&
saturation == lastSaturation_) {
utils::abs_diff(ct, lastCt_) < kTemperatureThreshold) {
frameContext.ccm.ccm = context.activeState.ccm.ccm;
context.activeState.ccm.changed = false;
return;
}
lastCt_ = ct;
lastSaturation_ = saturation;
Matrix<float, 3, 3> ccm = ccm_.getInterpolated(ct);
if (saturation)
applySaturation(ccm, saturation.value());
context.activeState.ccm.ccm = ccm;
frameContext.ccm.ccm = ccm;
frameContext.saturation = saturation;
context.activeState.ccm.changed = true;
}
@ -118,9 +67,6 @@ void Ccm::process([[maybe_unused]] IPAContext &context,
ControlList &metadata)
{
metadata.set(controls::ColourCorrectionMatrix, frameContext.ccm.ccm.data());
const auto &saturation = frameContext.saturation;
metadata.set(controls::Saturation, saturation.value_or(1.0));
}
REGISTER_IPA_ALGORITHM(Ccm, "Ccm")

View file

@ -7,8 +7,6 @@
#pragma once
#include <optional>
#include "libcamera/internal/matrix.h"
#include <libipa/interpolator.h>
@ -26,12 +24,6 @@ public:
~Ccm() = default;
int init(IPAContext &context, const YamlObject &tuningData) override;
int configure(IPAContext &context,
const IPAConfigInfo &configInfo) override;
void queueRequest(typename Module::Context &context,
const uint32_t frame,
typename Module::FrameContext &frameContext,
const ControlList &controls) override;
void prepare(IPAContext &context,
const uint32_t frame,
IPAFrameContext &frameContext,
@ -42,10 +34,7 @@ public:
ControlList &metadata) override;
private:
void applySaturation(Matrix<float, 3, 3> &ccm, float saturation);
unsigned int lastCt_;
std::optional<float> lastSaturation_;
Interpolator<Matrix<float, 3, 3>> ccm_;
};

View file

@ -122,7 +122,7 @@ void Lut::prepare(IPAContext &context,
Matrix<float, 3, 3> gainCcm = { { gains.r(), 0, 0,
0, gains.g(), 0,
0, 0, gains.b() } };
auto ccm = context.activeState.ccm.ccm * gainCcm;
auto ccm = gainCcm * context.activeState.ccm.ccm;
auto &red = params->redCcm;
auto &green = params->greenCcm;
auto &blue = params->blueCcm;

View file

@ -63,7 +63,6 @@ struct IPAActiveState {
struct {
/* 0..2 range, 1.0 = normal */
std::optional<double> contrast;
std::optional<float> saturation;
} knobs;
};
@ -76,14 +75,11 @@ struct IPAFrameContext : public FrameContext {
int32_t exposure;
double gain;
} sensor;
struct {
double red;
double blue;
} gains;
std::optional<double> contrast;
std::optional<float> saturation;
};
struct IPAContext {

View file

@ -690,9 +690,8 @@ LogSeverity Logger::parseLogLevel(std::string_view level)
unsigned int severity = LogInvalid;
if (std::isdigit(level[0])) {
const char *levelEnd = level.data() + level.size();
auto [end, ec] = std::from_chars(level.data(), levelEnd, severity);
if (ec != std::errc() || end != levelEnd || severity > LogFatal)
auto [end, ec] = std::from_chars(level.data(), level.data() + level.size(), severity);
if (ec != std::errc() || *end != '\0' || severity > LogFatal)
severity = LogInvalid;
} else {
for (unsigned int i = 0; i < std::size(names); ++i) {

View file

@ -488,7 +488,7 @@ std::size_t CameraConfiguration::size() const
*
* \return A CameraConfiguration::Status value that describes the validation
* status.
* \retval CameraConfiguration::Adjusted The configuration has been adjusted
* \retval CameraConfigutation::Adjusted The configuration has been adjusted
* and is now valid. The color space of some or all of the streams may have
* been changed. The caller shall check the color spaces carefully.
* \retval CameraConfiguration::Valid The configuration was already valid and

View file

@ -1,230 +0,0 @@
/* SPDX-License-Identifier: LGPL-2.1-or-later */
/*
* Copyright (C) 2024, Raspberry Pi Ltd
*
* Clock recovery algorithm
*/
#include "libcamera/internal/clock_recovery.h"
#include <time.h>
#include <libcamera/base/log.h>
/**
* \file clock_recovery.h
* \brief Clock recovery - deriving one clock from another independent clock
*/
namespace libcamera {
LOG_DEFINE_CATEGORY(ClockRec)
/**
* \class ClockRecovery
* \brief Recover an output clock from an input clock
*
* The ClockRecovery class derives an output clock from an input clock,
* modelling the output clock as being linearly related to the input clock.
* For example, we may use it to derive wall clock timestamps from timestamps
* measured by the internal system clock which counts local time since boot.
*
* When pairs of corresponding input and output timestamps are available,
* they should be submitted to the model with addSample(). The model will
* update, and output clock values for known input clock values can be
* obtained using getOutput().
*
* As a convenience, if the input clock is indeed the time since boot, and the
* output clock represents a real wallclock time, then addSample() can be
* called with no arguments, and a pair of timestamps will be captured at
* that moment.
*
* The configure() function accepts some configuration parameters to control
* the linear fitting process.
*/
/**
* \brief Construct a ClockRecovery
*/
ClockRecovery::ClockRecovery()
{
configure();
reset();
}
/**
* \brief Set configuration parameters
* \param[in] numSamples The approximate duration for which the state of the model
* is persistent
* \param[in] maxJitter New output samples are clamped to no more than this
* amount of jitter, to prevent sudden swings from having a large effect
* \param[in] minSamples The fitted clock model is not used to generate outputs
* until this many samples have been received
* \param[in] errorThreshold If the accumulated differences between input and
* output clocks reaches this amount over a few frames, the model is reset
*/
void ClockRecovery::configure(unsigned int numSamples, unsigned int maxJitter,
unsigned int minSamples, unsigned int errorThreshold)
{
LOG(ClockRec, Debug)
<< "configure " << numSamples << " " << maxJitter << " " << minSamples << " " << errorThreshold;
numSamples_ = numSamples;
maxJitter_ = maxJitter;
minSamples_ = minSamples;
errorThreshold_ = errorThreshold;
}
/**
* \brief Reset the clock recovery model and start again from scratch
*/
void ClockRecovery::reset()
{
LOG(ClockRec, Debug) << "reset";
lastInput_ = 0;
lastOutput_ = 0;
xAve_ = 0;
yAve_ = 0;
x2Ave_ = 0;
xyAve_ = 0;
count_ = 0;
error_ = 0.0;
/*
* Setting slope_ and offset_ to zero initially means that the clocks
* advance at exactly the same rate.
*/
slope_ = 0.0;
offset_ = 0.0;
}
/**
* \brief Add a sample point to the clock recovery model, for recovering a wall
* clock value from the internal system time since boot
*
* This is a convenience function to make it easy to derive a wall clock value
* (using the Linux CLOCK_REALTIME) from the time since the system started
* (measured by CLOCK_BOOTTIME).
*/
void ClockRecovery::addSample()
{
LOG(ClockRec, Debug) << "addSample";
struct timespec bootTime1;
struct timespec bootTime2;
struct timespec wallTime;
/* Get boot and wall clocks in microseconds. */
clock_gettime(CLOCK_BOOTTIME, &bootTime1);
clock_gettime(CLOCK_REALTIME, &wallTime);
clock_gettime(CLOCK_BOOTTIME, &bootTime2);
uint64_t boot1 = bootTime1.tv_sec * 1000000ULL + bootTime1.tv_nsec / 1000;
uint64_t boot2 = bootTime2.tv_sec * 1000000ULL + bootTime2.tv_nsec / 1000;
uint64_t boot = (boot1 + boot2) / 2;
uint64_t wall = wallTime.tv_sec * 1000000ULL + wallTime.tv_nsec / 1000;
addSample(boot, wall);
}
/**
* \brief Add a sample point to the clock recovery model, specifying the exact
* input and output clock values
* \param[in] input The input clock value
* \param[in] output The value of the output clock at the same moment, as far
* as possible, that the input clock was sampled
*
* This function should be used for corresponding clocks other than the Linux
* BOOTTIME and REALTIME clocks.
*/
void ClockRecovery::addSample(uint64_t input, uint64_t output)
{
LOG(ClockRec, Debug) << "addSample " << input << " " << output;
if (count_ == 0) {
inputBase_ = input;
outputBase_ = output;
}
/*
* We keep an eye on cumulative drift over the last several frames. If this exceeds a
* threshold, then probably the system clock has been updated and we're going to have to
* reset everything and start over.
*/
if (lastOutput_) {
int64_t inputDiff = getOutput(input) - getOutput(lastInput_);
int64_t outputDiff = output - lastOutput_;
error_ = error_ * 0.95 + (outputDiff - inputDiff);
if (std::abs(error_) > errorThreshold_) {
reset();
inputBase_ = input;
outputBase_ = output;
}
}
lastInput_ = input;
lastOutput_ = output;
/*
* Never let the new output value be more than maxJitter_ away from what
* we would have expected. This is just to reduce the effect of sudden
* large delays in the measured output.
*/
uint64_t expectedOutput = getOutput(input);
output = std::clamp(output, expectedOutput - maxJitter_, expectedOutput + maxJitter_);
/*
* We use x, y, x^2 and x*y sums to calculate the best fit line. Here we
* update them by pretending we have count_ samples at the previous fit,
* and now one new one. Gradually the effect of the older values gets
* lost. This is a very simple way of updating the fit (there are much
* more complicated ones!), but it works well enough. Using averages
* instead of sums makes the relative effect of old values and the new
* sample clearer.
*/
double x = static_cast<int64_t>(input - inputBase_);
double y = static_cast<int64_t>(output - outputBase_) - x;
unsigned int count1 = count_ + 1;
xAve_ = (count_ * xAve_ + x) / count1;
yAve_ = (count_ * yAve_ + y) / count1;
x2Ave_ = (count_ * x2Ave_ + x * x) / count1;
xyAve_ = (count_ * xyAve_ + x * y) / count1;
/*
* Don't update slope and offset until we've seen "enough" sample
* points. Note that the initial settings for slope_ and offset_
* ensures that the wallclock advances at the same rate as the realtime
* clock (but with their respective initial offsets).
*/
if (count_ > minSamples_) {
/* These are the standard equations for least squares linear regression. */
slope_ = (count1 * count1 * xyAve_ - count1 * xAve_ * count1 * yAve_) /
(count1 * count1 * x2Ave_ - count1 * xAve_ * count1 * xAve_);
offset_ = yAve_ - slope_ * xAve_;
}
/*
* Don't increase count_ above numSamples_, as this controls the long-term
* amount of the residual fit.
*/
if (count1 < numSamples_)
count_++;
}
/**
* \brief Calculate the output clock value according to the model from an input
* clock value
* \param[in] input The input clock value
*
* \return Output clock value
*/
uint64_t ClockRecovery::getOutput(uint64_t input)
{
double x = static_cast<int64_t>(input - inputBase_);
double y = slope_ * x + offset_;
uint64_t output = y + x + outputBase_;
LOG(ClockRec, Debug) << "getOutput " << input << " " << output;
return output;
}
} /* namespace libcamera */

View file

@ -212,7 +212,7 @@ controls:
description: |
Exposure time for the frame applied in the sensor device.
This value is specified in microseconds.
This value is specified in micro-seconds.
This control will only take effect if ExposureTimeMode is Manual. If
this control is set when ExposureTimeMode is Auto, the value will be
@ -1268,20 +1268,4 @@ controls:
description: |
Enable or disable the debug metadata.
- FrameWallClock:
type: int64_t
direction: out
description: |
This timestamp corresponds to the same moment in time as the
SensorTimestamp, but is represented as a wall clock time as measured by
the CLOCK_REALTIME clock. Like SensorTimestamp, the timestamp value is
expressed in nanoseconds.
Being a wall clock measurement, it can be used to synchronise timing
across different devices.
\sa SensorTimestamp
The FrameWallClock control can only be returned in metadata.
...

View file

@ -71,116 +71,4 @@ controls:
\sa StatsOutputEnable
- SyncMode:
type: int32_t
direction: in
description: |
Enable or disable camera synchronisation ("sync") mode.
When sync mode is enabled, a camera will synchronise frames temporally
with other cameras, either attached to the same device or a different
one. There should be one "server" device, which broadcasts timing
information to one or more "clients". Communication is one-way, from
server to clients only, and it is only clients that adjust their frame
timings to match the server.
Sync mode requires all cameras to be running at (as far as possible) the
same fixed framerate. Clients may continue to make adjustments to keep
their cameras synchronised with the server for the duration of the
session, though any updates after the initial ones should remain small.
\sa SyncReady
\sa SyncTimer
\sa SyncFrames
enum:
- name: SyncModeOff
value: 0
description: Disable sync mode.
- name: SyncModeServer
value: 1
description: |
Enable sync mode, act as server. The server broadcasts timing
messages to any clients that are listening, so that the clients can
synchronise their camera frames with the server's.
- name: SyncModeClient
value: 2
description: |
Enable sync mode, act as client. A client listens for any server
messages, and arranges for its camera frames to synchronise as
closely as possible with the server's. Many clients can listen out
for the same server. Clients can also be started ahead of any
servers, causing them merely to wait for the server to start.
- SyncReady:
type: bool
direction: out
description: |
When using the camera synchronisation algorithm, the server broadcasts
timing information to the clients. This also includes the time (some
number of frames in the future, called the "ready time") at which the
server will signal its controlling application, using this control, to
start using the image frames.
The client receives the "ready time" from the server, and will signal
its application to start using the frames at this same moment.
While this control value is false, applications (on both client and
server) should continue to wait, and not use the frames.
Once this value becomes true, it means that this is the first frame
where the server and its clients have agreed that they will both be
synchronised and that applications should begin consuming frames.
Thereafter, this control will continue to signal the value true for
the rest of the session.
\sa SyncMode
\sa SyncTimer
\sa SyncFrames
- SyncTimer:
type: int64_t
direction: out
description: |
This reports the amount of time, in microseconds, until the "ready
time", at which the server and client will signal their controlling
applications that the frames are now synchronised and should be
used. The value may be refined slightly over time, becoming more precise
as the "ready time" approaches.
Servers always report this value, whereas clients will omit this control
until they have received a message from the server that enables them to
calculate it.
Normally the value will start positive (the "ready time" is in the
future), and decrease towards zero, before becoming negative (the "ready
time" has elapsed). So there should be just one frame where the timer
value is, or is very close to, zero - the one for which the SyncReady
control becomes true. At this moment, the value indicates how closely
synchronised the client believes it is with the server.
But note that if frames are being dropped, then the "near zero" valued
frame, or indeed any other, could be skipped. In these cases the timer
value allows an application to deduce that this has happened.
\sa SyncMode
\sa SyncReady
\sa SyncFrames
- SyncFrames:
type: int32_t
direction: in
description: |
The number of frames the server should wait, after enabling
SyncModeServer, before signalling (via the SyncReady control) that
frames should be used. This therefore determines the "ready time" for
all synchronised cameras.
This control value should be set only for the device that is to act as
the server, before or at the same moment at which SyncModeServer is
enabled.
\sa SyncMode
\sa SyncReady
\sa SyncTimer
...

Some files were not shown because too many files have changed in this diff.