ipa: raspberrypi: Change to C style code comments

As part of the ongoing refactoring effort for the source files in
src/ipa/raspberrypi/, switch all C++ style comments to C style comments.

Signed-off-by: Naushir Patuck <naush@raspberrypi.com>
Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Author:    Naushir Patuck, 2022-07-27 09:55:18 +01:00
Committer: Laurent Pinchart
parent 177df04d2b
commit acd5d9979f
55 changed files with 887 additions and 630 deletions

View file

@@ -21,50 +21,52 @@
 namespace RPiController {
-// The CamHelper class provides a number of facilities that anyone trying
-// to drive a camera will need to know, but which are not provided by the
-// standard driver framework. Specifically, it provides:
-//
-// A "CameraMode" structure to describe extra information about the chosen
-// mode of the driver. For example, how it is cropped from the full sensor
-// area, how it is scaled, whether pixels are averaged compared to the full
-// resolution.
-//
-// The ability to convert between number of lines of exposure and actual
-// exposure time, and to convert between the sensor's gain codes and actual
-// gains.
-//
-// A function to return the number of frames of delay between updating exposure,
-// analogue gain and vblanking, and for the changes to take effect. For many
-// sensors these take the values 2, 1 and 2 respectively, but sensors that are
-// different will need to over-ride the default function provided.
-//
-// A function to query if the sensor outputs embedded data that can be parsed.
-//
-// A function to return the sensitivity of a given camera mode.
-//
-// A parser to parse the embedded data buffers provided by some sensors (for
-// example, the imx219 does; the ov5647 doesn't). This allows us to know for
-// sure the exposure and gain of the frame we're looking at. CamHelper
-// provides functions for converting analogue gains to and from the sensor's
-// native gain codes.
-//
-// Finally, a set of functions that determine how to handle the vagaries of
-// different camera modules on start-up or when switching modes. Some
-// modules may produce one or more frames that are not yet correctly exposed,
-// or where the metadata may be suspect. We have the following functions:
-// HideFramesStartup(): Tell the pipeline handler not to return this many
-// frames at start-up. This can also be used to hide initial frames
-// while the AGC and other algorithms are sorting themselves out.
-// HideFramesModeSwitch(): Tell the pipeline handler not to return this
-// many frames after a mode switch (other than start-up). Some sensors
-// may produce innvalid frames after a mode switch; others may not.
-// MistrustFramesStartup(): At start-up a sensor may return frames for
-// which we should not run any control algorithms (for example, metadata
-// may be invalid).
-// MistrustFramesModeSwitch(): The number of frames, after a mode switch
-// (other than start-up), for which control algorithms should not run
-// (for example, metadata may be unreliable).
+/*
+ * The CamHelper class provides a number of facilities that anyone trying
+ * to drive a camera will need to know, but which are not provided by the
+ * standard driver framework. Specifically, it provides:
+ *
+ * A "CameraMode" structure to describe extra information about the chosen
+ * mode of the driver. For example, how it is cropped from the full sensor
+ * area, how it is scaled, whether pixels are averaged compared to the full
+ * resolution.
+ *
+ * The ability to convert between number of lines of exposure and actual
+ * exposure time, and to convert between the sensor's gain codes and actual
+ * gains.
+ *
+ * A function to return the number of frames of delay between updating exposure,
+ * analogue gain and vblanking, and for the changes to take effect. For many
+ * sensors these take the values 2, 1 and 2 respectively, but sensors that are
+ * different will need to over-ride the default function provided.
+ *
+ * A function to query if the sensor outputs embedded data that can be parsed.
+ *
+ * A function to return the sensitivity of a given camera mode.
+ *
+ * A parser to parse the embedded data buffers provided by some sensors (for
+ * example, the imx219 does; the ov5647 doesn't). This allows us to know for
+ * sure the exposure and gain of the frame we're looking at. CamHelper
+ * provides functions for converting analogue gains to and from the sensor's
+ * native gain codes.
+ *
+ * Finally, a set of functions that determine how to handle the vagaries of
+ * different camera modules on start-up or when switching modes. Some
+ * modules may produce one or more frames that are not yet correctly exposed,
+ * or where the metadata may be suspect. We have the following functions:
+ * HideFramesStartup(): Tell the pipeline handler not to return this many
+ * frames at start-up. This can also be used to hide initial frames
+ * while the AGC and other algorithms are sorting themselves out.
+ * HideFramesModeSwitch(): Tell the pipeline handler not to return this
+ * many frames after a mode switch (other than start-up). Some sensors
+ * may produce innvalid frames after a mode switch; others may not.
+ * MistrustFramesStartup(): At start-up a sensor may return frames for
+ * which we should not run any control algorithms (for example, metadata
+ * may be invalid).
+ * MistrustFramesModeSwitch(): The number of frames, after a mode switch
+ * (other than start-up), for which control algorithms should not run
+ * (for example, metadata may be unreliable).
+ */
 class CamHelper
 {

@@ -110,8 +112,10 @@ private:
 	unsigned int frameIntegrationDiff_;
 };
-// This is for registering camera helpers with the system, so that the
-// CamHelper::Create function picks them up automatically.
+/*
+ * This is for registering camera helpers with the system, so that the
+ * CamHelper::Create function picks them up automatically.
+ */
 typedef CamHelper *(*CamHelperCreateFunc)();
 struct RegisterCamHelper

@@ -120,4 +124,4 @@ struct RegisterCamHelper
 			CamHelperCreateFunc createFunc);
 };
-} // namespace RPi
+} /* namespace RPi */

View file

@@ -16,7 +16,7 @@ class AgcAlgorithm : public Algorithm
 {
 public:
 	AgcAlgorithm(Controller *controller) : Algorithm(controller) {}
-	// An AGC algorithm must provide the following:
+	/* An AGC algorithm must provide the following: */
 	virtual unsigned int getConvergenceFrames() const = 0;
 	virtual void setEv(double ev) = 0;
 	virtual void setFlickerPeriod(libcamera::utils::Duration flickerPeriod) = 0;

@@ -28,4 +28,4 @@ public:
 	virtual void setConstraintMode(std::string const &contraintModeName) = 0;
 };
-} // namespace RPiController
+} /* namespace RPiController */

View file

@@ -8,20 +8,24 @@
 #include <libcamera/base/utils.h>
-// The AGC algorithm should post the following structure into the image's
-// "agc.status" metadata.
+/*
+ * The AGC algorithm should post the following structure into the image's
+ * "agc.status" metadata.
+ */
 #ifdef __cplusplus
 extern "C" {
 #endif
-// Note: total_exposure_value will be reported as zero until the algorithm has
-// seen statistics and calculated meaningful values. The contents should be
-// ignored until then.
+/*
+ * Note: total_exposure_value will be reported as zero until the algorithm has
+ * seen statistics and calculated meaningful values. The contents should be
+ * ignored until then.
+ */
 struct AgcStatus {
-	libcamera::utils::Duration totalExposureValue; // value for all exposure and gain for this image
-	libcamera::utils::Duration targetExposureValue; // (unfiltered) target total exposure AGC is aiming for
+	libcamera::utils::Duration totalExposureValue; /* value for all exposure and gain for this image */
+	libcamera::utils::Duration targetExposureValue; /* (unfiltered) target total exposure AGC is aiming for */
 	libcamera::utils::Duration shutterTime;
 	double analogueGain;
 	char exposureMode[32];

View file

@@ -31,7 +31,7 @@ void Algorithm::process([[maybe_unused]] StatisticsPtr &stats,
 {
 }
-// For registering algorithms with the system:
+/* For registering algorithms with the system: */
 static std::map<std::string, AlgoCreateFunc> algorithms;
 std::map<std::string, AlgoCreateFunc> const &RPiController::getAlgorithms()

View file

@@ -6,8 +6,10 @@
 */
 #pragma once
-// All algorithms should be derived from this class and made available to the
-// Controller.
+/*
+ * All algorithms should be derived from this class and made available to the
+ * Controller.
+ */
 #include <string>
 #include <memory>

@@ -19,7 +21,7 @@
 namespace RPiController {
-// This defines the basic interface for all control algorithms.
+/* This defines the basic interface for all control algorithms. */
 class Algorithm
 {

@@ -48,8 +50,10 @@ private:
 	bool paused_;
 };
-// This code is for automatic registration of Front End algorithms with the
-// system.
+/*
+ * This code is for automatic registration of Front End algorithms with the
+ * system.
+ */
 typedef Algorithm *(*AlgoCreateFunc)(Controller *controller);
 struct RegisterAlgorithm {

@@ -57,4 +61,4 @@ struct RegisterAlgorithm {
 };
 std::map<std::string, AlgoCreateFunc> const &getAlgorithms();
-} // namespace RPiController
+} /* namespace RPiController */

View file

@@ -6,8 +6,10 @@
 */
 #pragma once
-// The ALSC algorithm should post the following structure into the image's
-// "alsc.status" metadata.
+/*
+ * The ALSC algorithm should post the following structure into the image's
+ * "alsc.status" metadata.
+ */
 #ifdef __cplusplus
 extern "C" {

View file

@@ -14,10 +14,10 @@ class AwbAlgorithm : public Algorithm
 {
 public:
 	AwbAlgorithm(Controller *controller) : Algorithm(controller) {}
-	// An AWB algorithm must provide the following:
+	/* An AWB algorithm must provide the following: */
 	virtual unsigned int getConvergenceFrames() const = 0;
 	virtual void setMode(std::string const &modeName) = 0;
 	virtual void setManualGains(double manualR, double manualB) = 0;
 };
-} // namespace RPiController
+} /* namespace RPiController */

View file

@@ -6,8 +6,10 @@
 */
 #pragma once
-// The AWB algorithm places its results into both the image and global metadata,
-// under the tag "awb.status".
+/*
+ * The AWB algorithm places its results into both the image and global metadata,
+ * under the tag "awb.status".
+ */
 #ifdef __cplusplus
 extern "C" {

View file

@@ -6,14 +6,14 @@
 */
 #pragma once
-// The "black level" algorithm stores the black levels to use.
+/* The "black level" algorithm stores the black levels to use. */
 #ifdef __cplusplus
 extern "C" {
 #endif
 struct BlackLevelStatus {
-	uint16_t blackLevelR; // out of 16 bits
+	uint16_t blackLevelR; /* out of 16 bits */
 	uint16_t blackLevelG;
 	uint16_t blackLevelB;
 };

View file

@@ -10,9 +10,11 @@
 #include <libcamera/base/utils.h>
-// Description of a "camera mode", holding enough information for control
-// algorithms to adapt their behaviour to the different modes of the camera,
-// including binning, scaling, cropping etc.
+/*
+ * Description of a "camera mode", holding enough information for control
+ * algorithms to adapt their behaviour to the different modes of the camera,
+ * including binning, scaling, cropping etc.
+ */
 #ifdef __cplusplus
 extern "C" {

@@ -21,27 +23,27 @@ extern "C" {
 #define CAMERA_MODE_NAME_LEN 32
 struct CameraMode {
-	// bit depth of the raw camera output
+	/* bit depth of the raw camera output */
 	uint32_t bitdepth;
-	// size in pixels of frames in this mode
+	/* size in pixels of frames in this mode */
 	uint16_t width, height;
-	// size of full resolution uncropped frame ("sensor frame")
+	/* size of full resolution uncropped frame ("sensor frame") */
 	uint16_t sensorWidth, sensorHeight;
-	// binning factor (1 = no binning, 2 = 2-pixel binning etc.)
+	/* binning factor (1 = no binning, 2 = 2-pixel binning etc.) */
 	uint8_t binX, binY;
-	// location of top left pixel in the sensor frame
+	/* location of top left pixel in the sensor frame */
 	uint16_t cropX, cropY;
-	// scaling factor (so if uncropped, width*scaleX is sensorWidth)
+	/* scaling factor (so if uncropped, width*scaleX is sensorWidth) */
 	double scaleX, scaleY;
-	// scaling of the noise compared to the native sensor mode
+	/* scaling of the noise compared to the native sensor mode */
 	double noiseFactor;
-	// line time
+	/* line time */
 	libcamera::utils::Duration lineLength;
-	// any camera transform *not* reflected already in the camera tuning
+	/* any camera transform *not* reflected already in the camera tuning */
 	libcamera::Transform transform;
-	// minimum and maximum fame lengths in units of lines
+	/* minimum and maximum fame lengths in units of lines */
 	uint32_t minFrameLength, maxFrameLength;
-	// sensitivity of this mode
+	/* sensitivity of this mode */
 	double sensitivity;
 };

View file

@@ -14,8 +14,8 @@ class CcmAlgorithm : public Algorithm
 {
 public:
 	CcmAlgorithm(Controller *controller) : Algorithm(controller) {}
-	// A CCM algorithm must provide the following:
+	/* A CCM algorithm must provide the following: */
 	virtual void setSaturation(double saturation) = 0;
 };
-} // namespace RPiController
+} /* namespace RPiController */

View file

@@ -6,7 +6,7 @@
 */
 #pragma once
-// The "ccm" algorithm generates an appropriate colour matrix.
+/* The "ccm" algorithm generates an appropriate colour matrix. */
 #ifdef __cplusplus
 extern "C" {

View file

@@ -14,9 +14,9 @@ class ContrastAlgorithm : public Algorithm
 {
 public:
 	ContrastAlgorithm(Controller *controller) : Algorithm(controller) {}
-	// A contrast algorithm must provide the following:
+	/* A contrast algorithm must provide the following: */
 	virtual void setBrightness(double brightness) = 0;
 	virtual void setContrast(double contrast) = 0;
 };
-} // namespace RPiController
+} /* namespace RPiController */

View file

@@ -6,8 +6,10 @@
 */
 #pragma once
-// The "contrast" algorithm creates a gamma curve, optionally doing a little bit
-// of contrast stretching based on the AGC histogram.
+/*
+ * The "contrast" algorithm creates a gamma curve, optionally doing a little bit
+ * of contrast stretching based on the AGC histogram.
+ */
 #ifdef __cplusplus
 extern "C" {

View file

@@ -89,8 +89,10 @@ Metadata &Controller::getGlobalMetadata()
 Algorithm *Controller::getAlgorithm(std::string const &name) const
 {
-	// The passed name must be the entire algorithm name, or must match the
-	// last part of it with a period (.) just before.
+	/*
+	 * The passed name must be the entire algorithm name, or must match the
+	 * last part of it with a period (.) just before.
+	 */
 	size_t nameLen = name.length();
 	for (auto &algo : algorithms_) {
 		char const *algoName = algo->name();

View file

@@ -6,9 +6,11 @@
 */
 #pragma once
-// The Controller is simply a container for a collecting together a number of
-// "control algorithms" (such as AWB etc.) and for running them all in a
-// convenient manner.
+/*
+ * The Controller is simply a container for a collecting together a number of
+ * "control algorithms" (such as AWB etc.) and for running them all in a
+ * convenient manner.
+ */
 #include <vector>
 #include <string>

@@ -25,10 +27,12 @@ class Algorithm;
 typedef std::unique_ptr<Algorithm> AlgorithmPtr;
 typedef std::shared_ptr<bcm2835_isp_stats> StatisticsPtr;
-// The Controller holds a pointer to some global_metadata, which is how
-// different controllers and control algorithms within them can exchange
-// information. The Prepare function returns a pointer to metadata for this
-// specific image, and which should be passed on to the Process function.
+/*
+ * The Controller holds a pointer to some global_metadata, which is how
+ * different controllers and control algorithms within them can exchange
+ * information. The Prepare function returns a pointer to metadata for this
+ * specific image, and which should be passed on to the Process function.
+ */
 class Controller
 {

@@ -51,4 +55,4 @@ protected:
 	bool switchModeCalled_;
 };
-} // namespace RPiController
+} /* namespace RPiController */

View file

@@ -16,8 +16,8 @@ class DenoiseAlgorithm : public Algorithm
 {
 public:
 	DenoiseAlgorithm(Controller *controller) : Algorithm(controller) {}
-	// A Denoise algorithm must provide the following:
+	/* A Denoise algorithm must provide the following: */
 	virtual void setMode(DenoiseMode mode) = 0;
 };
-} // namespace RPiController
+} /* namespace RPiController */

View file

@@ -6,7 +6,7 @@
 */
 #pragma once
-// This stores the parameters required for Denoise.
+/* This stores the parameters required for Denoise. */
 #ifdef __cplusplus
 extern "C" {

View file

@@ -6,14 +6,14 @@
 */
 #pragma once
-// The "DPC" algorithm sets defective pixel correction strength.
+/* The "DPC" algorithm sets defective pixel correction strength. */
 #ifdef __cplusplus
 extern "C" {
 #endif
 struct DpcStatus {
-	int strength; // 0 = "off", 1 = "normal", 2 = "strong"
+	int strength; /* 0 = "off", 1 = "normal", 2 = "strong" */
 };
 #ifdef __cplusplus

View file

@@ -8,9 +8,11 @@
 #include <linux/bcm2835-isp.h>
-// The focus algorithm should post the following structure into the image's
-// "focus.status" metadata. Recall that it's only reporting focus (contrast)
-// measurements, it's not driving any kind of auto-focus algorithm!
+/*
+ * The focus algorithm should post the following structure into the image's
+ * "focus.status" metadata. Recall that it's only reporting focus (contrast)
+ * measurements, it's not driving any kind of auto-focus algorithm!
+ */
 #ifdef __cplusplus
 extern "C" {

View file

@@ -6,7 +6,7 @@
 */
 #pragma once
-// The "GEQ" algorithm calculates the green equalisation thresholds
+/* The "GEQ" algorithm calculates the green equalisation thresholds */
 #ifdef __cplusplus
 extern "C" {

View file

@@ -30,13 +30,13 @@ double Histogram::quantile(double q, int first, int last) const
 		last = cumulative_.size() - 2;
 	assert(first <= last);
 	uint64_t items = q * total();
-	while (first < last) // binary search to find the right bin
+	while (first < last) /* binary search to find the right bin */
 	{
 		int middle = (first + last) / 2;
 		if (cumulative_[middle + 1] > items)
-			last = middle; // between first and middle
+			last = middle; /* between first and middle */
 		else
-			first = middle + 1; // after middle
+			first = middle + 1; /* after middle */
 	}
 	assert(items >= cumulative_[first] && items <= cumulative_[last + 1]);
 	double frac = cumulative_[first + 1] == cumulative_[first] ? 0

@@ -59,6 +59,6 @@ double Histogram::interQuantileMean(double qLo, double qHi) const
 		sumBinFreq += bin * freq;
 		cumulFreq += freq;
 	}
-	// add 0.5 to give an average for bin mid-points
+	/* add 0.5 to give an average for bin mid-points */
 	return sumBinFreq / cumulFreq + 0.5;
 }

View file

@@ -10,8 +10,10 @@
 #include <vector>
 #include <cassert>
-// A simple histogram class, for use in particular to find "quantiles" and
-// averages between "quantiles".
+/*
+ * A simple histogram class, for use in particular to find "quantiles" and
+ * averages between "quantiles".
+ */
 namespace RPiController {

@@ -29,16 +31,18 @@ public:
 	}
 	uint32_t bins() const { return cumulative_.size() - 1; }
 	uint64_t total() const { return cumulative_[cumulative_.size() - 1]; }
-	// Cumulative frequency up to a (fractional) point in a bin.
+	/* Cumulative frequency up to a (fractional) point in a bin. */
 	uint64_t cumulativeFreq(double bin) const;
-	// Return the (fractional) bin of the point q (0 <= q <= 1) through the
-	// histogram. Optionally provide limits to help.
+	/*
+	 * Return the (fractional) bin of the point q (0 <= q <= 1) through the
+	 * histogram. Optionally provide limits to help.
+	 */
 	double quantile(double q, int first = -1, int last = -1) const;
-	// Return the average histogram bin value between the two quantiles.
+	/* Return the average histogram bin value between the two quantiles. */
 	double interQuantileMean(double qLo, double qHi) const;
 private:
 	std::vector<uint64_t> cumulative_;
 };
-} // namespace RPiController
+} /* namespace RPiController */

View file

@@ -6,14 +6,16 @@
 */
 #pragma once
-// The "lux" algorithm looks at the (AGC) histogram statistics of the frame and
-// estimates the current lux level of the scene. It does this by a simple ratio
-// calculation comparing to a reference image that was taken in known conditions
-// with known statistics and a properly measured lux level. There is a slight
-// problem with aperture, in that it may be variable without the system knowing
-// or being aware of it. In this case an external application may set a
-// "current_aperture" value if it wishes, which would be used in place of the
-// (presumably meaningless) value in the image metadata.
+/*
+ * The "lux" algorithm looks at the (AGC) histogram statistics of the frame and
+ * estimates the current lux level of the scene. It does this by a simple ratio
+ * calculation comparing to a reference image that was taken in known conditions
+ * with known statistics and a properly measured lux level. There is a slight
+ * problem with aperture, in that it may be variable without the system knowing
+ * or being aware of it. In this case an external application may set a
+ * "current_aperture" value if it wishes, which would be used in place of the
+ * (presumably meaningless) value in the image metadata.
+ */
 #ifdef __cplusplus
 extern "C" {

View file

@@ -6,7 +6,7 @@
 */
 #pragma once
-// A simple class for carrying arbitrary metadata, for example about an image.
+/* A simple class for carrying arbitrary metadata, for example about an image. */
 #include <any>
 #include <map>

@@ -81,8 +81,10 @@ public:
 	template<typename T>
 	T *getLocked(std::string const &tag)
 	{
-		// This allows in-place access to the Metadata contents,
-		// for which you should be holding the lock.
+		/*
+		 * This allows in-place access to the Metadata contents,
+		 * for which you should be holding the lock.
+		 */
 		auto it = data_.find(tag);
 		if (it == data_.end())
 			return nullptr;

@@ -92,13 +94,15 @@ public:
 	template<typename T>
 	void setLocked(std::string const &tag, T const &value)
 	{
-		// Use this only if you're holding the lock yourself.
+		/* Use this only if you're holding the lock yourself. */
 		data_[tag] = value;
 	}
-	// Note: use of (lowercase) lock and unlock means you can create scoped
-	// locks with the standard lock classes.
-	// e.g. std::lock_guard<RPiController::Metadata> lock(metadata)
+	/*
+	 * Note: use of (lowercase) lock and unlock means you can create scoped
+	 * locks with the standard lock classes.
+	 * e.g. std::lock_guard<RPiController::Metadata> lock(metadata)
+	 */
 	void lock() { mutex_.lock(); }
 	void unlock() { mutex_.unlock(); }

@@ -107,4 +111,4 @@ private:
 	std::map<std::string, std::any> data_;
 };
-} // namespace RPiController
+} /* namespace RPiController */

View file

@@ -6,7 +6,7 @@
 */
 #pragma once
-// The "noise" algorithm stores an estimate of the noise profile for this image.
+/* The "noise" algorithm stores an estimate of the noise profile for this image. */
 #ifdef __cplusplus
 extern "C" {

View file

@@ -66,11 +66,15 @@ double Pwl::eval(double x, int *spanPtr, bool updateSpan) const
 int Pwl::findSpan(double x, int span) const
 {
-	// Pwls are generally small, so linear search may well be faster than
-	// binary, though could review this if large PWls start turning up.
+	/*
+	 * Pwls are generally small, so linear search may well be faster than
+	 * binary, though could review this if large PWls start turning up.
+	 */
 	int lastSpan = points_.size() - 2;
-	// some algorithms may call us with span pointing directly at the last
-	// control point
+	/*
+	 * some algorithms may call us with span pointing directly at the last
+	 * control point
+	 */
 	span = std::max(0, std::min(lastSpan, span));
 	while (span < lastSpan && x >= points_[span + 1].x)
 		span++;

@@ -87,7 +91,7 @@ Pwl::PerpType Pwl::invert(Point const &xy, Point &perp, int &span,
 	for (span = span + 1; span < (int)points_.size() - 1; span++) {
 		Point spanVec = points_[span + 1] - points_[span];
 		double t = ((xy - points_[span]) % spanVec) / spanVec.len2();
-		if (t < -eps) // off the start of this span
+		if (t < -eps) /* off the start of this span */
 		{
 			if (span == 0) {
 				perp = points_[span];

@@ -96,14 +100,14 @@ Pwl::PerpType Pwl::invert(Point const &xy, Point &perp, int &span,
 				perp = points_[span];
 				return PerpType::Vertex;
 			}
-		} else if (t > 1 + eps) // off the end of this span
+		} else if (t > 1 + eps) /* off the end of this span */
 		{
 			if (span == (int)points_.size() - 2) {
 				perp = points_[span + 1];
 				return PerpType::End;
 			}
 			prevOffEnd = true;
-		} else // a true perpendicular
+		} else /* a true perpendicular */
 		{
 			perp = points_[span] + spanVec * t;
 			return PerpType::Perpendicular;

@@ -133,9 +137,11 @@ Pwl Pwl::inverse(bool *trueInverse, const double eps) const
 			neither = true;
 	}
-	// This is not a proper inverse if we found ourselves putting points
-	// onto both ends of the inverse, or if there were points that couldn't
-	// go on either.
+	/*
+	 * This is not a proper inverse if we found ourselves putting points
+	 * onto both ends of the inverse, or if there were points that couldn't
+	 * go on either.
+	 */
 	if (trueInverse)
 		*trueInverse = !(neither || (appended && prepended));

@@ -154,8 +160,10 @@ Pwl Pwl::compose(Pwl const &other, const double eps) const
 			   otherSpan + 1 < (int)other.points_.size() &&
 			   points_[thisSpan + 1].y >=
 				   other.points_[otherSpan + 1].x + eps) {
-			// next control point in result will be where this
-			// function's y reaches the next span in other
+			/*
+			 * next control point in result will be where this
+			 * function's y reaches the next span in other
+			 */
 			thisX = points_[thisSpan].x +
 				(other.points_[otherSpan + 1].x -
 				 points_[thisSpan].y) *

@@ -164,15 +172,17 @@ Pwl Pwl::compose(Pwl const &other, const double eps) const
 		} else if (abs(dy) > eps && otherSpan > 0 &&
 			   points_[thisSpan + 1].y <=
 				   other.points_[otherSpan - 1].x - eps) {
-			// next control point in result will be where this
-			// function's y reaches the previous span in other
+			/*
+			 * next control point in result will be where this
+			 * function's y reaches the previous span in other
+			 */
 			thisX = points_[thisSpan].x +
 				(other.points_[otherSpan + 1].x -
 				 points_[thisSpan].y) *
 				dx / dy;
 			thisY = other.points_[--otherSpan].x;
 		} else {
-			// we stay in the same span in other
+			/* we stay in the same span in other */
 			thisSpan++;
 			thisX = points_[thisSpan].x,
 			thisY = points_[thisSpan].y;

View file

@@ -63,44 +63,56 @@ public:
	Interval domain() const;
	Interval range() const;
	bool empty() const;
	/*
	 * Evaluate Pwl, optionally supplying an initial guess for the
	 * "span". The "span" may optionally be updated. If you want to know
	 * the "span" value but don't have an initial guess you can set it to
	 * -1.
	 */
	double eval(double x, int *spanPtr = nullptr,
		    bool updateSpan = true) const;
	/*
	 * Find perpendicular closest to xy, starting from span+1 so you can
	 * call it repeatedly to check for multiple closest points (set span to
	 * -1 on the first call). Also returns "pseudo" perpendiculars; see
	 * PerpType enum.
	 */
	enum class PerpType {
		None, /* no perpendicular found */
		Start, /* start of Pwl is closest point */
		End, /* end of Pwl is closest point */
		Vertex, /* vertex of Pwl is closest point */
		Perpendicular /* true perpendicular found */
	};
	PerpType invert(Point const &xy, Point &perp, int &span,
			const double eps = 1e-6) const;
	/*
	 * Compute the inverse function. Indicate if it is a proper (true)
	 * inverse, or only a best effort (e.g. input was non-monotonic).
	 */
	Pwl inverse(bool *trueInverse = nullptr, const double eps = 1e-6) const;
	/* Compose two Pwls together, doing "this" first and "other" after. */
	Pwl compose(Pwl const &other, const double eps = 1e-6) const;
	/* Apply function to (x,y) values at every control point. */
	void map(std::function<void(double x, double y)> f) const;
	/*
	 * Apply function to (x, y0, y1) values wherever either Pwl has a
	 * control point.
	 */
	static void map2(Pwl const &pwl0, Pwl const &pwl1,
			 std::function<void(double x, double y0, double y1)> f);
	/*
	 * Combine two Pwls, meaning we create a new Pwl where the y values are
	 * given by running f wherever either has a knot.
	 */
	static Pwl
	combine(Pwl const &pwl0, Pwl const &pwl1,
		std::function<double(double x, double y0, double y1)> f,
		const double eps = 1e-6);
	/*
	 * Make "this" match (at least) the given domain. Any extension may be
	 * clipped or linear.
	 */
	void matchDomain(Interval const &domain, bool clip = true,
			 const double eps = 1e-6);
	Pwl &operator*=(double d);
@@ -111,4 +123,4 @@ private:
	std::vector<Point> points_;
};

} /* namespace RPiController */
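The `eval()` declaration above hints at a span cache: the caller can hand back the segment index found last time so that nearby lookups need not search from scratch. A minimal sketch of that idea, using a hypothetical stand-in for the `Pwl` control-point list (not the class's actual implementation):

```cpp
#include <cassert>
#include <vector>

/* Hypothetical stand-in for a Pwl: control points sorted by x. */
struct Point {
	double x, y;
};

/*
 * Linear interpolation between control points. An optional span hint (the
 * segment index found by a previous call) is walked forwards or backwards
 * until x lies within pts[span]..pts[span + 1], then updated for the caller.
 */
double pwlEval(const std::vector<Point> &pts, double x, int *spanPtr = nullptr)
{
	int span = (spanPtr && *spanPtr >= 0) ? *spanPtr : 0;
	while (span + 2 < (int)pts.size() && x > pts[span + 1].x)
		span++;
	while (span > 0 && x < pts[span].x)
		span--;
	if (spanPtr)
		*spanPtr = span;
	const Point &a = pts[span], &b = pts[span + 1];
	return a.y + (x - a.x) * (b.y - a.y) / (b.x - a.x);
}
```

Repeated calls with the same `span` variable (initialised to -1, as the comment in the header suggests) amortise the segment search for monotonically increasing x.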


@@ -28,7 +28,7 @@ LOG_DEFINE_CATEGORY(RPiAgc)

#define NAME "rpi.agc"

#define PIPELINE_BITS 13 /* seems to be a 13-bit pipeline */

void AgcMeteringMode::read(boost::property_tree::ptree const &params)
{
@@ -150,7 +150,7 @@ void AgcConfig::read(boost::property_tree::ptree const &params)
	convergenceFrames = params.get<unsigned int>("convergence_frames", 6);
	fastReduceThreshold = params.get<double>("fast_reduce_threshold", 0.4);
	baseEv = params.get<double>("base_ev", 1.0);
	/* Start with quite a low value as ramping up is easier than ramping down. */
	defaultExposureTime = params.get<double>("default_exposure_time", 1000) * 1us;
	defaultAnalogueGain = params.get<double>("default_analogueGain", 1.0);
}
@@ -170,8 +170,10 @@ Agc::Agc(Controller *controller)
	  maxShutter_(0s), fixedShutter_(0s), fixedAnalogueGain_(0.0)
{
	memset(&awb_, 0, sizeof(awb_));
	/*
	 * Setting status_.totalExposureValue_ to zero initially tells us
	 * it's not been calculated yet (i.e. Process hasn't yet run).
	 */
	memset(&status_, 0, sizeof(status_));
	status_.ev = ev_;
}
@ -185,16 +187,18 @@ void Agc::read(boost::property_tree::ptree const &params)
{ {
LOG(RPiAgc, Debug) << "Agc"; LOG(RPiAgc, Debug) << "Agc";
config_.read(params); config_.read(params);
// Set the config's defaults (which are the first ones it read) as our /*
// current modes, until someone changes them. (they're all known to * Set the config's defaults (which are the first ones it read) as our
// exist at this point) * current modes, until someone changes them. (they're all known to
* exist at this point)
*/
meteringModeName_ = config_.defaultMeteringMode; meteringModeName_ = config_.defaultMeteringMode;
meteringMode_ = &config_.meteringModes[meteringModeName_]; meteringMode_ = &config_.meteringModes[meteringModeName_];
exposureModeName_ = config_.defaultExposureMode; exposureModeName_ = config_.defaultExposureMode;
exposureMode_ = &config_.exposureModes[exposureModeName_]; exposureMode_ = &config_.exposureModes[exposureModeName_];
constraintModeName_ = config_.defaultConstraintMode; constraintModeName_ = config_.defaultConstraintMode;
constraintMode_ = &config_.constraintModes[constraintModeName_]; constraintMode_ = &config_.constraintModes[constraintModeName_];
// Set up the "last shutter/gain" values, in case AGC starts "disabled". /* Set up the "last shutter/gain" values, in case AGC starts "disabled". */
status_.shutterTime = config_.defaultExposureTime; status_.shutterTime = config_.defaultExposureTime;
status_.analogueGain = config_.defaultAnalogueGain; status_.analogueGain = config_.defaultAnalogueGain;
} }
@@ -218,8 +222,10 @@ void Agc::resume()

unsigned int Agc::getConvergenceFrames() const
{
	/*
	 * If shutter and gain have been explicitly set, there is no
	 * convergence to happen, so no need to drop any frames - return zero.
	 */
	if (fixedShutter_ && fixedAnalogueGain_)
		return 0;
	else
@@ -244,14 +250,14 @@ void Agc::setMaxShutter(Duration maxShutter)
void Agc::setFixedShutter(Duration fixedShutter)
{
	fixedShutter_ = fixedShutter;
	/* Set this in case someone calls Pause() straight after. */
	status_.shutterTime = clipShutter(fixedShutter_);
}

void Agc::setFixedAnalogueGain(double fixedAnalogueGain)
{
	fixedAnalogueGain_ = fixedAnalogueGain;
	/* Set this in case someone calls Pause() straight after. */
	status_.analogueGain = fixedAnalogueGain;
}
@@ -280,30 +286,32 @@ void Agc::switchMode(CameraMode const &cameraMode,
	Duration fixedShutter = clipShutter(fixedShutter_);
	if (fixedShutter && fixedAnalogueGain_) {
		/* We're going to reset the algorithm here with these fixed values. */
		fetchAwbStatus(metadata);
		double minColourGain = std::min({ awb_.gainR, awb_.gainG, awb_.gainB, 1.0 });
		ASSERT(minColourGain != 0.0);

		/* This is the equivalent of computeTargetExposure and applyDigitalGain. */
		target_.totalExposureNoDG = fixedShutter_ * fixedAnalogueGain_;
		target_.totalExposure = target_.totalExposureNoDG / minColourGain;

		/* Equivalent of filterExposure. This resets any "history". */
		filtered_ = target_;

		/* Equivalent of divideUpExposure. */
		filtered_.shutter = fixedShutter;
		filtered_.analogueGain = fixedAnalogueGain_;
	} else if (status_.totalExposureValue) {
		/*
		 * On a mode switch, various things could happen:
		 * - the exposure profile might change
		 * - a fixed exposure or gain might be set
		 * - the new mode's sensitivity might be different
		 * We cope with the last of these by scaling the target values. After
		 * that we just need to re-divide the exposure/gain according to the
		 * current exposure profile, which takes care of everything else.
		 */

		double ratio = lastSensitivity_ / cameraMode.sensitivity;
		target_.totalExposureNoDG *= ratio;
@@ -313,29 +321,31 @@ void Agc::switchMode(CameraMode const &cameraMode,
		divideUpExposure();
	} else {
		/*
		 * We come through here on startup, when at least one of the shutter
		 * or gain has not been fixed. We must still write those values out so
		 * that they will be applied immediately. We supply some arbitrary defaults
		 * for any that weren't set.
		 */

		/* Equivalent of divideUpExposure. */
		filtered_.shutter = fixedShutter ? fixedShutter : config_.defaultExposureTime;
		filtered_.analogueGain = fixedAnalogueGain_ ? fixedAnalogueGain_ : config_.defaultAnalogueGain;
	}

	writeAndFinish(metadata, false);

	/* We must remember the sensitivity of this mode for the next SwitchMode. */
	lastSensitivity_ = cameraMode.sensitivity;
}

void Agc::prepare(Metadata *imageMetadata)
{
	status_.digitalGain = 1.0;
	fetchAwbStatus(imageMetadata); /* always fetch it so that Process knows it's been done */

	if (status_.totalExposureValue) {
		/* Process has run, so we have meaningful values. */
		DeviceStatus deviceStatus;
		if (imageMetadata->get("device.status", deviceStatus) == 0) {
			Duration actualExposure = deviceStatus.shutterSpeed *
@@ -343,14 +353,16 @@ void Agc::prepare(Metadata *imageMetadata)
			if (actualExposure) {
				status_.digitalGain = status_.totalExposureValue / actualExposure;
				LOG(RPiAgc, Debug) << "Want total exposure " << status_.totalExposureValue;
				/*
				 * Never ask for a gain < 1.0, and also impose
				 * some upper limit. Make it customisable?
				 */
				status_.digitalGain = std::max(1.0, std::min(status_.digitalGain, 4.0));
				LOG(RPiAgc, Debug) << "Actual exposure " << actualExposure;
				LOG(RPiAgc, Debug) << "Use digitalGain " << status_.digitalGain;
				LOG(RPiAgc, Debug) << "Effective exposure "
						   << actualExposure * status_.digitalGain;
				/* Decide whether AEC/AGC has converged. */
				updateLockStatus(deviceStatus);
			}
		} else
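The digital gain computed in `prepare()` is simply the shortfall between the exposure the algorithm asked for and the exposure the sensor actually delivered, clamped as the comment describes. A sketch of just that clamp, with microsecond inputs standing in for the `Duration` type:

```cpp
#include <algorithm>
#include <cassert>

/*
 * Digital gain makes up the difference between the requested total exposure
 * and what the sensor delivered: never less than 1.0, and capped (here at
 * 4.0, matching the hard-coded limit the comment suggests making
 * customisable).
 */
double computeDigitalGain(double totalExposureUs, double actualExposureUs)
{
	double dg = totalExposureUs / actualExposureUs;
	return std::max(1.0, std::min(dg, 4.0));
}
```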
@@ -362,44 +374,52 @@ void Agc::prepare(Metadata *imageMetadata)

void Agc::process(StatisticsPtr &stats, Metadata *imageMetadata)
{
	frameCount_++;
	/*
	 * First a little bit of housekeeping, fetching up-to-date settings and
	 * configuration, that kind of thing.
	 */
	housekeepConfig();
	/* Get the current exposure values for the frame that's just arrived. */
	fetchCurrentExposure(imageMetadata);
	/* Compute the total gain we require relative to the current exposure. */
	double gain, targetY;
	computeGain(stats.get(), imageMetadata, gain, targetY);
	/* Now compute the target (final) exposure which we think we want. */
	computeTargetExposure(gain);
	/*
	 * Some of the exposure has to be applied as digital gain, so work out
	 * what that is. This function also tells us whether it's decided to
	 * "desaturate" the image more quickly.
	 */
	bool desaturate = applyDigitalGain(gain, targetY);
	/* The results have to be filtered so as not to change too rapidly. */
	filterExposure(desaturate);
	/*
	 * The last thing is to divide up the exposure value into a shutter time
	 * and analogue gain, according to the current exposure mode.
	 */
	divideUpExposure();
	/* Finally advertise what we've done. */
	writeAndFinish(imageMetadata, desaturate);
}

void Agc::updateLockStatus(DeviceStatus const &deviceStatus)
{
	const double errorFactor = 0.10; /* make these customisable? */
	const int maxLockCount = 5;
	/* Reset "lock count" when we exceed this multiple of errorFactor */
	const double resetMargin = 1.5;

	/* Add 200us to the exposure time error to allow for line quantisation. */
	Duration exposureError = lastDeviceStatus_.shutterSpeed * errorFactor + 200us;
	double gainError = lastDeviceStatus_.analogueGain * errorFactor;
	Duration targetError = lastTargetExposure_ * errorFactor;

	/*
	 * Note that we don't know the exposure/gain limits of the sensor, so
	 * the values we keep requesting may be unachievable. For this reason
	 * we only insist that we're close to values in the past few frames.
	 */
	if (deviceStatus.shutterSpeed > lastDeviceStatus_.shutterSpeed - exposureError &&
	    deviceStatus.shutterSpeed < lastDeviceStatus_.shutterSpeed + exposureError &&
	    deviceStatus.analogueGain > lastDeviceStatus_.analogueGain - gainError &&
@@ -430,7 +450,7 @@ static void copyString(std::string const &s, char *d, size_t size)

void Agc::housekeepConfig()
{
	/* First fetch all the up-to-date settings, so no one else has to do it. */
	status_.ev = ev_;
	status_.fixedShutter = clipShutter(fixedShutter_);
	status_.fixedAnalogueGain = fixedAnalogueGain_;
@@ -438,8 +458,10 @@ void Agc::housekeepConfig()
	LOG(RPiAgc, Debug) << "ev " << status_.ev << " fixedShutter "
			   << status_.fixedShutter << " fixedAnalogueGain "
			   << status_.fixedAnalogueGain;
	/*
	 * Make sure the "mode" pointers point to the up-to-date things, if
	 * they've changed.
	 */
	if (strcmp(meteringModeName_.c_str(), status_.meteringMode)) {
		auto it = config_.meteringModes.find(meteringModeName_);
		if (it == config_.meteringModes.end())
@@ -491,7 +513,7 @@ void Agc::fetchCurrentExposure(Metadata *imageMetadata)

void Agc::fetchAwbStatus(Metadata *imageMetadata)
{
	awb_.gainR = 1.0; /* in case not found in metadata */
	awb_.gainG = 1.0;
	awb_.gainB = 1.0;
	if (imageMetadata->get("awb.status", awb_) != 0)
@@ -502,8 +524,10 @@ static double computeInitialY(bcm2835_isp_stats *stats, AwbStatus const &awb,
			      double weights[], double gain)
{
	bcm2835_isp_stats_region *regions = stats->agc_stats;
	/*
	 * Note how the calculation below means that equal weights give you
	 * "average" metering (i.e. all pixels equally important).
	 */
	double rSum = 0, gSum = 0, bSum = 0, pixelSum = 0;
	for (int i = 0; i < AGC_STATS_SIZE; i++) {
		double counted = regions[i].counted;
@@ -525,11 +549,13 @@ static double computeInitialY(bcm2835_isp_stats *stats, AwbStatus const &awb,
	return ySum / pixelSum / (1 << PIPELINE_BITS);
}

/*
 * We handle extra gain through EV by adjusting our Y targets. However, you
 * simply can't monitor histograms once they get very close to (or beyond!)
 * saturation, so we clamp the Y targets to this value. It does mean that EV
 * increases don't necessarily do quite what you might expect in certain
 * (contrived) cases.
 */
#define EV_GAIN_Y_TARGET_LIMIT 0.9
@@ -546,18 +572,22 @@ void Agc::computeGain(bcm2835_isp_stats *statistics, Metadata *imageMetadata,
		      double &gain, double &targetY)
{
	struct LuxStatus lux = {};
	lux.lux = 400; /* default lux level to 400 in case no metadata found */
	if (imageMetadata->get("lux.status", lux) != 0)
		LOG(RPiAgc, Warning) << "Agc: no lux level found";
	Histogram h(statistics->hist[0].g_hist, NUM_HISTOGRAM_BINS);
	double evGain = status_.ev * config_.baseEv;
	/*
	 * The initial gain and target_Y come from some of the regions. After
	 * that we consider the histogram constraints.
	 */
	targetY = config_.yTarget.eval(config_.yTarget.domain().clip(lux.lux));
	targetY = std::min(EV_GAIN_Y_TARGET_LIMIT, targetY * evGain);

	/*
	 * Do this calculation a few times as brightness increase can be
	 * non-linear when there are saturated regions.
	 */
	gain = 1.0;
	for (int i = 0; i < 8; i++) {
		double initialY = computeInitialY(statistics, awb_, meteringMode_->weights, gain);
@@ -565,7 +595,7 @@ void Agc::computeGain(bcm2835_isp_stats *statistics, Metadata *imageMetadata,
		gain *= extraGain;
		LOG(RPiAgc, Debug) << "Initial Y " << initialY << " target " << targetY
				   << " gives gain " << gain;
		if (extraGain < 1.01) /* close enough */
			break;
	}
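The loop above iterates because a scene with saturated regions does not brighten linearly with gain: measured Y stops rising once pixels clip, so a single division would under- or over-shoot. The sketch below reproduces the fixed-point iteration under a deliberately crude saturation model (measured Y clips at 1.0); the model and the 10x cap are illustrative assumptions, not the actual statistics pipeline:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

/*
 * Iterate gain towards a target Y. Each pass measures the (modelled)
 * scene brightness at the current gain and applies the multiplicative
 * correction, stopping once the correction is "close enough" to 1.0,
 * mirroring the 8-iteration loop and 1.01 threshold above.
 */
double iterateGain(double sceneY, double targetY)
{
	double gain = 1.0;
	for (int i = 0; i < 8; i++) {
		/* Crude model: measured Y saturates at 1.0. */
		double measuredY = std::min(1.0, sceneY * gain);
		double extraGain = std::min(10.0, targetY / (measuredY + 0.001));
		gain *= extraGain;
		if (extraGain < 1.01) /* close enough */
			break;
	}
	return gain;
}
```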
@@ -592,20 +622,23 @@ void Agc::computeGain(bcm2835_isp_stats *statistics, Metadata *imageMetadata,

void Agc::computeTargetExposure(double gain)
{
	if (status_.fixedShutter && status_.fixedAnalogueGain) {
		/*
		 * When ag and shutter are both fixed, we need to drive the
		 * total exposure so that we end up with a digital gain of at least
		 * 1/minColourGain. Otherwise we'd desaturate channels causing
		 * white to go cyan or magenta.
		 */
		double minColourGain = std::min({ awb_.gainR, awb_.gainG, awb_.gainB, 1.0 });
		ASSERT(minColourGain != 0.0);
		target_.totalExposure =
			status_.fixedShutter * status_.fixedAnalogueGain / minColourGain;
	} else {
		/*
		 * The statistics reflect the image without digital gain, so the final
		 * total exposure we're aiming for is:
		 */
		target_.totalExposure = current_.totalExposureNoDG * gain;
		/* The final target exposure is also limited to what the exposure mode allows. */
		Duration maxShutter = status_.fixedShutter
					      ? status_.fixedShutter
					      : exposureMode_->shutter.back();
@@ -625,17 +658,21 @@ bool Agc::applyDigitalGain(double gain, double targetY)
	double minColourGain = std::min({ awb_.gainR, awb_.gainG, awb_.gainB, 1.0 });
	ASSERT(minColourGain != 0.0);
	double dg = 1.0 / minColourGain;
	/*
	 * I think this pipeline subtracts black level and rescales before we
	 * get the stats, so no need to worry about it.
	 */
	LOG(RPiAgc, Debug) << "after AWB, target dg " << dg << " gain " << gain
			   << " target_Y " << targetY;
	/*
	 * Finally, if we're trying to reduce exposure but the target_Y is
	 * "close" to 1.0, then the gain computed for that constraint will be
	 * only slightly less than one, because the measured Y can never be
	 * larger than 1.0. When this happens, demand a large digital gain so
	 * that the exposure can be reduced, de-saturating the image much more
	 * quickly (and we then approach the correct value more quickly from
	 * below).
	 */
	bool desaturate = targetY > config_.fastReduceThreshold &&
			  gain < sqrt(targetY);
	if (desaturate)
void Agc::filterExposure(bool desaturate) void Agc::filterExposure(bool desaturate)
{ {
double speed = config_.speed; double speed = config_.speed;
// AGC adapts instantly if both shutter and gain are directly specified /*
// or we're in the startup phase. * AGC adapts instantly if both shutter and gain are directly specified
* or we're in the startup phase.
*/
if ((status_.fixedShutter && status_.fixedAnalogueGain) || if ((status_.fixedShutter && status_.fixedAnalogueGain) ||
frameCount_ <= config_.startupFrames) frameCount_ <= config_.startupFrames)
speed = 1.0; speed = 1.0;
@ -658,15 +697,19 @@ void Agc::filterExposure(bool desaturate)
filtered_.totalExposure = target_.totalExposure; filtered_.totalExposure = target_.totalExposure;
filtered_.totalExposureNoDG = target_.totalExposureNoDG; filtered_.totalExposureNoDG = target_.totalExposureNoDG;
} else { } else {
// If close to the result go faster, to save making so many /*
// micro-adjustments on the way. (Make this customisable?) * If close to the result go faster, to save making so many
* micro-adjustments on the way. (Make this customisable?)
*/
if (filtered_.totalExposure < 1.2 * target_.totalExposure && if (filtered_.totalExposure < 1.2 * target_.totalExposure &&
filtered_.totalExposure > 0.8 * target_.totalExposure) filtered_.totalExposure > 0.8 * target_.totalExposure)
speed = sqrt(speed); speed = sqrt(speed);
filtered_.totalExposure = speed * target_.totalExposure + filtered_.totalExposure = speed * target_.totalExposure +
filtered_.totalExposure * (1.0 - speed); filtered_.totalExposure * (1.0 - speed);
// When desaturing, take a big jump down in totalExposureNoDG, /*
// which we'll hide with digital gain. * When desaturing, take a big jump down in totalExposureNoDG,
* which we'll hide with digital gain.
*/
if (desaturate) if (desaturate)
filtered_.totalExposureNoDG = filtered_.totalExposureNoDG =
target_.totalExposureNoDG; target_.totalExposureNoDG;
@ -675,9 +718,11 @@ void Agc::filterExposure(bool desaturate)
speed * target_.totalExposureNoDG + speed * target_.totalExposureNoDG +
filtered_.totalExposureNoDG * (1.0 - speed); filtered_.totalExposureNoDG * (1.0 - speed);
} }
// We can't let the totalExposureNoDG exposure deviate too far below the /*
// total exposure, as there might not be enough digital gain available * We can't let the totalExposureNoDG exposure deviate too far below the
// in the ISP to hide it (which will cause nasty oscillation). * total exposure, as there might not be enough digital gain available
* in the ISP to hide it (which will cause nasty oscillation).
*/
if (filtered_.totalExposureNoDG < if (filtered_.totalExposureNoDG <
filtered_.totalExposure * config_.fastReduceThreshold) filtered_.totalExposure * config_.fastReduceThreshold)
filtered_.totalExposureNoDG = filtered_.totalExposure * config_.fastReduceThreshold; filtered_.totalExposureNoDG = filtered_.totalExposure * config_.fastReduceThreshold;
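The filtering in `filterExposure()` is a first-order IIR (exponential smoothing) step: each frame the filtered exposure moves a fraction `speed` of the way to the target, so requested exposure ramps rather than jumps. A one-line sketch of that update, with arbitrary illustrative values:

```cpp
#include <cassert>

/*
 * One step of exponential smoothing: speed = 1.0 adopts the target
 * instantly (the startup / fixed-values case above), smaller speeds
 * converge geometrically towards it.
 */
double filterStep(double filtered, double target, double speed)
{
	return speed * target + filtered * (1.0 - speed);
}
```

Repeated application halves the remaining error each frame at `speed = 0.5`, which is why the code briefly switches to `sqrt(speed)` when already close: fewer, coarser micro-adjustments.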
@ -687,9 +732,11 @@ void Agc::filterExposure(bool desaturate)
void Agc::divideUpExposure() void Agc::divideUpExposure()
{ {
// Sending the fixed shutter/gain cases through the same code may seem /*
// unnecessary, but it will make more sense when extend this to cover * Sending the fixed shutter/gain cases through the same code may seem
// variable aperture. * unnecessary, but it will make more sense when extend this to cover
* variable aperture.
*/
Duration exposureValue = filtered_.totalExposureNoDG; Duration exposureValue = filtered_.totalExposureNoDG;
Duration shutterTime; Duration shutterTime;
double analogueGain; double analogueGain;
@ -721,18 +768,22 @@ void Agc::divideUpExposure()
} }
LOG(RPiAgc, Debug) << "Divided up shutter and gain are " << shutterTime << " and " LOG(RPiAgc, Debug) << "Divided up shutter and gain are " << shutterTime << " and "
<< analogueGain; << analogueGain;
// Finally adjust shutter time for flicker avoidance (require both /*
// shutter and gain not to be fixed). * Finally adjust shutter time for flicker avoidance (require both
* shutter and gain not to be fixed).
*/
if (!status_.fixedShutter && !status_.fixedAnalogueGain && if (!status_.fixedShutter && !status_.fixedAnalogueGain &&
status_.flickerPeriod) { status_.flickerPeriod) {
int flickerPeriods = shutterTime / status_.flickerPeriod; int flickerPeriods = shutterTime / status_.flickerPeriod;
if (flickerPeriods) { if (flickerPeriods) {
Duration newShutterTime = flickerPeriods * status_.flickerPeriod; Duration newShutterTime = flickerPeriods * status_.flickerPeriod;
analogueGain *= shutterTime / newShutterTime; analogueGain *= shutterTime / newShutterTime;
// We should still not allow the ag to go over the /*
// largest value in the exposure mode. Note that this * We should still not allow the ag to go over the
// may force more of the total exposure into the digital * largest value in the exposure mode. Note that this
// gain as a side-effect. * may force more of the total exposure into the digital
* gain as a side-effect.
*/
analogueGain = std::min(analogueGain, exposureMode_->gain.back()); analogueGain = std::min(analogueGain, exposureMode_->gain.back());
shutterTime = newShutterTime; shutterTime = newShutterTime;
} }
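The flicker-avoidance step above can be sketched in isolation: round the shutter down to a whole number of flicker periods and compensate with analogue gain, capped at the exposure mode's maximum (`maxGain` below is a stand-in for `exposureMode_->gain.back()`; microsecond units are illustrative):

```cpp
#include <algorithm>

/*
 * Sketch of flicker avoidance: quantise the shutter time down to a whole
 * number of flicker periods and scale the analogue gain up to keep the
 * total exposure constant, without exceeding maxGain.
 */
void avoidFlicker(double &shutterUs, double &gain, double flickerPeriodUs,
		  double maxGain)
{
	int flickerPeriods = static_cast<int>(shutterUs / flickerPeriodUs);
	if (!flickerPeriods)
		return; /* shutter shorter than one period: leave it unchanged */
	double newShutterUs = flickerPeriods * flickerPeriodUs;
	gain = std::min(gain * shutterUs / newShutterUs, maxGain);
	shutterUs = newShutterUs;
}
```

Note that when the gain is capped, some of the total exposure migrates into digital gain, exactly the side-effect the comment in the hunk warns about.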
@ -749,8 +800,10 @@ void Agc::writeAndFinish(Metadata *imageMetadata, bool desaturate)
status_.targetExposureValue = desaturate ? 0s : target_.totalExposureNoDG; status_.targetExposureValue = desaturate ? 0s : target_.totalExposureNoDG;
status_.shutterTime = filtered_.shutter; status_.shutterTime = filtered_.shutter;
status_.analogueGain = filtered_.analogueGain; status_.analogueGain = filtered_.analogueGain;
// Write to metadata as well, in case anyone wants to update the camera /*
// immediately. * Write to metadata as well, in case anyone wants to update the camera
* immediately.
*/
imageMetadata->set("agc.status", status_); imageMetadata->set("agc.status", status_);
LOG(RPiAgc, Debug) << "Output written, total exposure requested is " LOG(RPiAgc, Debug) << "Output written, total exposure requested is "
<< filtered_.totalExposure; << filtered_.totalExposure;
@ -765,7 +818,7 @@ Duration Agc::clipShutter(Duration shutter)
return shutter; return shutter;
} }
// Register algorithm with the system. /* Register algorithm with the system. */
static Algorithm *create(Controller *controller) static Algorithm *create(Controller *controller)
{ {
return (Algorithm *)new Agc(controller); return (Algorithm *)new Agc(controller);

View file

@ -15,10 +15,12 @@
#include "../agc_status.h" #include "../agc_status.h"
#include "../pwl.hpp" #include "../pwl.hpp"
// This is our implementation of AGC. /* This is our implementation of AGC. */
// This is the number actually set up by the firmware, not the maximum possible /*
// number (which is 16). * This is the number actually set up by the firmware, not the maximum possible
* number (which is 16).
*/
#define AGC_STATS_SIZE 15 #define AGC_STATS_SIZE 15
@ -73,7 +75,7 @@ public:
Agc(Controller *controller); Agc(Controller *controller);
char const *name() const override; char const *name() const override;
void read(boost::property_tree::ptree const &params) override; void read(boost::property_tree::ptree const &params) override;
// AGC handles "pausing" for itself. /* AGC handles "pausing" for itself. */
bool isPaused() const override; bool isPaused() const override;
void pause() override; void pause() override;
void resume() override; void resume() override;
@ -115,17 +117,17 @@ private:
libcamera::utils::Duration shutter; libcamera::utils::Duration shutter;
double analogueGain; double analogueGain;
libcamera::utils::Duration totalExposure; libcamera::utils::Duration totalExposure;
libcamera::utils::Duration totalExposureNoDG; // without digital gain libcamera::utils::Duration totalExposureNoDG; /* without digital gain */
}; };
ExposureValues current_; // values for the current frame ExposureValues current_; /* values for the current frame */
ExposureValues target_; // calculate the values we want here ExposureValues target_; /* calculate the values we want here */
ExposureValues filtered_; // these values are filtered towards target ExposureValues filtered_; /* these values are filtered towards target */
AgcStatus status_; AgcStatus status_;
int lockCount_; int lockCount_;
DeviceStatus lastDeviceStatus_; DeviceStatus lastDeviceStatus_;
libcamera::utils::Duration lastTargetExposure_; libcamera::utils::Duration lastTargetExposure_;
double lastSensitivity_; // sensitivity of the previous camera mode double lastSensitivity_; /* sensitivity of the previous camera mode */
// Below here the "settings" that applications can change. /* Below here the "settings" that applications can change. */
std::string meteringModeName_; std::string meteringModeName_;
std::string exposureModeName_; std::string exposureModeName_;
std::string constraintModeName_; std::string constraintModeName_;
@ -136,4 +138,4 @@ private:
double fixedAnalogueGain_; double fixedAnalogueGain_;
}; };
} // namespace RPiController } /* namespace RPiController */

View file

@ -14,7 +14,7 @@
#include "../awb_status.h" #include "../awb_status.h"
#include "alsc.hpp" #include "alsc.hpp"
// Raspberry Pi ALSC (Auto Lens Shading Correction) algorithm. /* Raspberry Pi ALSC (Auto Lens Shading Correction) algorithm. */
using namespace RPiController; using namespace RPiController;
using namespace libcamera; using namespace libcamera;
@ -68,7 +68,7 @@ static void generateLut(double *lut, boost::property_tree::ptree const &params)
double r2 = (dx * dx + dy * dy) / R2; double r2 = (dx * dx + dy * dy) / R2;
lut[num++] = lut[num++] =
(f1 * r2 + f2) * (f1 * r2 + f2) / (f1 * r2 + f2) * (f1 * r2 + f2) /
(f2 * f2); // this reproduces the cos^4 rule (f2 * f2); /* this reproduces the cos^4 rule */
} }
} }
} }
@ -171,7 +171,7 @@ void Alsc::initialise()
frameCount2_ = frameCount_ = framePhase_ = 0; frameCount2_ = frameCount_ = framePhase_ = 0;
firstTime_ = true; firstTime_ = true;
ct_ = config_.defaultCt; ct_ = config_.defaultCt;
// The lambdas are initialised in the SwitchMode. /* The lambdas are initialised in the SwitchMode. */
} }
void Alsc::waitForAysncThread() void Alsc::waitForAysncThread()
@ -188,8 +188,10 @@ void Alsc::waitForAysncThread()
static bool compareModes(CameraMode const &cm0, CameraMode const &cm1) static bool compareModes(CameraMode const &cm0, CameraMode const &cm1)
{ {
// Return true if the modes crop from the sensor significantly differently, /*
// or if the user transform has changed. * Return true if the modes crop from the sensor significantly differently,
* or if the user transform has changed.
*/
if (cm0.transform != cm1.transform) if (cm0.transform != cm1.transform)
return true; return true;
int leftDiff = abs(cm0.cropX - cm1.cropX); int leftDiff = abs(cm0.cropX - cm1.cropX);
@ -198,9 +200,11 @@ static bool compareModes(CameraMode const &cm0, CameraMode const &cm1)
cm1.cropX - cm1.scaleX * cm1.width); cm1.cropX - cm1.scaleX * cm1.width);
int bottomDiff = fabs(cm0.cropY + cm0.scaleY * cm0.height - int bottomDiff = fabs(cm0.cropY + cm0.scaleY * cm0.height -
cm1.cropY - cm1.scaleY * cm1.height); cm1.cropY - cm1.scaleY * cm1.height);
// These thresholds are a rather arbitrary amount chosen to trigger /*
// when carrying on with the previously calculated tables might be * These thresholds are a rather arbitrary amount chosen to trigger
// worse than regenerating them (but without the adaptive algorithm). * when carrying on with the previously calculated tables might be
* worse than regenerating them (but without the adaptive algorithm).
*/
int thresholdX = cm0.sensorWidth >> 4; int thresholdX = cm0.sensorWidth >> 4;
int thresholdY = cm0.sensorHeight >> 4; int thresholdY = cm0.sensorHeight >> 4;
return leftDiff > thresholdX || rightDiff > thresholdX || return leftDiff > thresholdX || rightDiff > thresholdX ||
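The comparison logic in this hunk can be illustrated on one axis: two crops are "significantly" different if either edge moves by more than a sixteenth of the sensor dimension (the `sensorWidth >> 4` threshold). A hypothetical single-axis reduction:

```cpp
#include <cmath>
#include <cstdlib>

/*
 * Sketch of the mode comparison along the X axis only: compare the left
 * and right crop edges of two modes against a threshold of 1/16th of the
 * sensor width.
 */
bool cropsDiffer(int crop0X, double scale0X, int width0,
		 int crop1X, double scale1X, int width1, int sensorWidth)
{
	int leftDiff = std::abs(crop0X - crop1X);
	int rightDiff = static_cast<int>(std::fabs(crop0X + scale0X * width0 -
						   crop1X - scale1X * width1));
	int threshold = sensorWidth >> 4;
	return leftDiff > threshold || rightDiff > threshold;
}
```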
@ -210,28 +214,34 @@ static bool compareModes(CameraMode const &cm0, CameraMode const &cm1)
void Alsc::switchMode(CameraMode const &cameraMode, void Alsc::switchMode(CameraMode const &cameraMode,
[[maybe_unused]] Metadata *metadata) [[maybe_unused]] Metadata *metadata)
{ {
// We're going to start over with the tables if there's any "significant" /*
// change. * We're going to start over with the tables if there's any "significant"
* change.
*/
bool resetTables = firstTime_ || compareModes(cameraMode_, cameraMode); bool resetTables = firstTime_ || compareModes(cameraMode_, cameraMode);
// Believe the colour temperature from the AWB, if there is one. /* Believe the colour temperature from the AWB, if there is one. */
ct_ = getCt(metadata, ct_); ct_ = getCt(metadata, ct_);
// Ensure the other thread isn't running while we do this. /* Ensure the other thread isn't running while we do this. */
waitForAysncThread(); waitForAysncThread();
cameraMode_ = cameraMode; cameraMode_ = cameraMode;
// We must resample the luminance table like we do the others, but it's /*
// fixed so we can simply do it up front here. * We must resample the luminance table like we do the others, but it's
* fixed so we can simply do it up front here.
*/
resampleCalTable(config_.luminanceLut, cameraMode_, luminanceTable_); resampleCalTable(config_.luminanceLut, cameraMode_, luminanceTable_);
if (resetTables) { if (resetTables) {
// Upon every "table reset", arrange for something sensible to be /*
// generated. Construct the tables for the previous recorded colour * Upon every "table reset", arrange for something sensible to be
// temperature. In order to start over from scratch we initialise * generated. Construct the tables for the previous recorded colour
// the lambdas, but the rest of this code then echoes the code in * temperature. In order to start over from scratch we initialise
// doAlsc, without the adaptive algorithm. * the lambdas, but the rest of this code then echoes the code in
* doAlsc, without the adaptive algorithm.
*/
for (int i = 0; i < XY; i++) for (int i = 0; i < XY; i++)
lambdaR_[i] = lambdaB_[i] = 1.0; lambdaR_[i] = lambdaB_[i] = 1.0;
double calTableR[XY], calTableB[XY], calTableTmp[XY]; double calTableR[XY], calTableB[XY], calTableTmp[XY];
@ -244,7 +254,7 @@ void Alsc::switchMode(CameraMode const &cameraMode,
addLuminanceToTables(syncResults_, asyncLambdaR_, 1.0, asyncLambdaB_, addLuminanceToTables(syncResults_, asyncLambdaR_, 1.0, asyncLambdaB_,
luminanceTable_, config_.luminanceStrength); luminanceTable_, config_.luminanceStrength);
memcpy(prevSyncResults_, syncResults_, sizeof(prevSyncResults_)); memcpy(prevSyncResults_, syncResults_, sizeof(prevSyncResults_));
framePhase_ = config_.framePeriod; // run the algo again asap framePhase_ = config_.framePeriod; /* run the algo again asap */
firstTime_ = false; firstTime_ = false;
} }
} }
@ -260,7 +270,7 @@ void Alsc::fetchAsyncResults()
double getCt(Metadata *metadata, double defaultCt) double getCt(Metadata *metadata, double defaultCt)
{ {
AwbStatus awbStatus; AwbStatus awbStatus;
awbStatus.temperatureK = defaultCt; // in case nothing found awbStatus.temperatureK = defaultCt; /* in case nothing found */
if (metadata->get("awb.status", awbStatus) != 0) if (metadata->get("awb.status", awbStatus) != 0)
LOG(RPiAlsc, Debug) << "no AWB results found, using " LOG(RPiAlsc, Debug) << "no AWB results found, using "
<< awbStatus.temperatureK; << awbStatus.temperatureK;
@ -282,18 +292,22 @@ static void copyStats(bcm2835_isp_stats_region regions[XY], StatisticsPtr &stats
regions[i].g_sum = inputRegions[i].g_sum / gTable[i]; regions[i].g_sum = inputRegions[i].g_sum / gTable[i];
regions[i].b_sum = inputRegions[i].b_sum / bTable[i]; regions[i].b_sum = inputRegions[i].b_sum / bTable[i];
regions[i].counted = inputRegions[i].counted; regions[i].counted = inputRegions[i].counted;
// (don't care about the uncounted value) /* (don't care about the uncounted value) */
} }
} }
void Alsc::restartAsync(StatisticsPtr &stats, Metadata *imageMetadata) void Alsc::restartAsync(StatisticsPtr &stats, Metadata *imageMetadata)
{ {
LOG(RPiAlsc, Debug) << "Starting ALSC calculation"; LOG(RPiAlsc, Debug) << "Starting ALSC calculation";
// Get the current colour temperature. It's all we need from the /*
// metadata. Default to the last CT value (which could be the default). * Get the current colour temperature. It's all we need from the
* metadata. Default to the last CT value (which could be the default).
*/
ct_ = getCt(imageMetadata, ct_); ct_ = getCt(imageMetadata, ct_);
// We have to copy the statistics here, dividing out our best guess of /*
// the LSC table that the pipeline applied to them. * We have to copy the statistics here, dividing out our best guess of
* the LSC table that the pipeline applied to them.
*/
AlscStatus alscStatus; AlscStatus alscStatus;
if (imageMetadata->get("alsc.status", alscStatus) != 0) { if (imageMetadata->get("alsc.status", alscStatus) != 0) {
LOG(RPiAlsc, Warning) LOG(RPiAlsc, Warning)
@ -317,8 +331,10 @@ void Alsc::restartAsync(StatisticsPtr &stats, Metadata *imageMetadata)
void Alsc::prepare(Metadata *imageMetadata) void Alsc::prepare(Metadata *imageMetadata)
{ {
// Count frames since we started, and since we last poked the async /*
// thread. * Count frames since we started, and since we last poked the async
* thread.
*/
if (frameCount_ < (int)config_.startupFrames) if (frameCount_ < (int)config_.startupFrames)
frameCount_++; frameCount_++;
double speed = frameCount_ < (int)config_.startupFrames double speed = frameCount_ < (int)config_.startupFrames
@ -331,12 +347,12 @@ void Alsc::prepare(Metadata *imageMetadata)
if (asyncStarted_ && asyncFinished_) if (asyncStarted_ && asyncFinished_)
fetchAsyncResults(); fetchAsyncResults();
} }
// Apply IIR filter to results and program into the pipeline. /* Apply IIR filter to results and program into the pipeline. */
double *ptr = (double *)syncResults_, double *ptr = (double *)syncResults_,
*pptr = (double *)prevSyncResults_; *pptr = (double *)prevSyncResults_;
for (unsigned int i = 0; i < sizeof(syncResults_) / sizeof(double); i++) for (unsigned int i = 0; i < sizeof(syncResults_) / sizeof(double); i++)
pptr[i] = speed * ptr[i] + (1.0 - speed) * pptr[i]; pptr[i] = speed * ptr[i] + (1.0 - speed) * pptr[i];
// Put output values into status metadata. /* Put output values into status metadata. */
AlscStatus status; AlscStatus status;
memcpy(status.r, prevSyncResults_[0], sizeof(status.r)); memcpy(status.r, prevSyncResults_[0], sizeof(status.r));
memcpy(status.g, prevSyncResults_[1], sizeof(status.g)); memcpy(status.g, prevSyncResults_[1], sizeof(status.g));
@ -346,8 +362,10 @@ void Alsc::prepare(Metadata *imageMetadata)
void Alsc::process(StatisticsPtr &stats, Metadata *imageMetadata) void Alsc::process(StatisticsPtr &stats, Metadata *imageMetadata)
{ {
// Count frames since we started, and since we last poked the async /*
// thread. * Count frames since we started, and since we last poked the async
* thread.
*/
if (framePhase_ < (int)config_.framePeriod) if (framePhase_ < (int)config_.framePeriod)
framePhase_++; framePhase_++;
if (frameCount2_ < (int)config_.startupFrames) if (frameCount2_ < (int)config_.startupFrames)
@ -415,8 +433,10 @@ void getCalTable(double ct, std::vector<AlscCalibration> const &calibrations,
void resampleCalTable(double const calTableIn[XY], void resampleCalTable(double const calTableIn[XY],
CameraMode const &cameraMode, double calTableOut[XY]) CameraMode const &cameraMode, double calTableOut[XY])
{ {
// Precalculate and cache the x sampling locations and phases to save /*
// recomputing them on every row. * Precalculate and cache the x sampling locations and phases to save
* recomputing them on every row.
*/
int xLo[X], xHi[X]; int xLo[X], xHi[X];
double xf[X]; double xf[X];
double scaleX = cameraMode.sensorWidth / double scaleX = cameraMode.sensorWidth /
@ -434,7 +454,7 @@ void resampleCalTable(double const calTableIn[XY],
xHi[i] = X - 1 - xHi[i]; xHi[i] = X - 1 - xHi[i];
} }
} }
// Now march over the output table generating the new values. /* Now march over the output table generating the new values. */
double scaleY = cameraMode.sensorHeight / double scaleY = cameraMode.sensorHeight /
(cameraMode.height * cameraMode.scaleY); (cameraMode.height * cameraMode.scaleY);
double yOff = cameraMode.cropY / (double)cameraMode.sensorHeight; double yOff = cameraMode.cropY / (double)cameraMode.sensorHeight;
@ -461,7 +481,7 @@ void resampleCalTable(double const calTableIn[XY],
} }
} }
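The resampling in this hunk is bilinear: each output cell samples the input table at a fractional position and blends the two nearest cells, clamping at the edges. A one-dimensional sketch (the real code precomputes the `xLo`/`xHi`/`xf` values per column and applies this along both axes; `offsetCells` expresses the crop offset in input-cell units and is an illustrative simplification):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

/*
 * 1-D sketch of cal-table resampling: linearly interpolate the input
 * table at each output cell's centre, clamping indices at the edges.
 */
std::vector<double> resample1d(const std::vector<double> &in, int outSize,
			       double scale, double offsetCells)
{
	std::vector<double> out(outSize);
	int n = in.size();
	for (int i = 0; i < outSize; i++) {
		double x = (i + 0.5) * scale + offsetCells - 0.5;
		int lo = static_cast<int>(std::floor(x));
		double f = x - lo;
		int hi = std::clamp(lo + 1, 0, n - 1);
		lo = std::clamp(lo, 0, n - 1);
		out[i] = in[lo] * (1.0 - f) + in[hi] * f;
	}
	return out;
}
```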
// Calculate chrominance statistics (R/G and B/G) for each region. /* Calculate chrominance statistics (R/G and B/G) for each region. */
static_assert(XY == AWB_REGIONS, "ALSC/AWB statistics region mismatch"); static_assert(XY == AWB_REGIONS, "ALSC/AWB statistics region mismatch");
static void calculateCrCb(bcm2835_isp_stats_region *awbRegion, double cr[XY], static void calculateCrCb(bcm2835_isp_stats_region *awbRegion, double cr[XY],
double cb[XY], uint32_t minCount, uint16_t minG) double cb[XY], uint32_t minCount, uint16_t minG)
@ -512,8 +532,10 @@ void compensateLambdasForCal(double const calTable[XY],
printf("]\n"); printf("]\n");
} }
// Compute weight out of 1.0 which reflects how similar we wish to make the /*
// colours of these two regions. * Compute weight out of 1.0 which reflects how similar we wish to make the
* colours of these two regions.
*/
static double computeWeight(double Ci, double Cj, double sigma) static double computeWeight(double Ci, double Cj, double sigma)
{ {
if (Ci == InsufficientData || Cj == InsufficientData) if (Ci == InsufficientData || Cj == InsufficientData)
@ -522,11 +544,11 @@ static double computeWeight(double Ci, double Cj, double sigma)
return exp(-diff * diff / 2); return exp(-diff * diff / 2);
} }
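The weight function above returns 1.0 for identical colour statistics and falls off as a Gaussian. The hunk elides how `diff` is normalised, so the sketch below simply divides the difference by `sigma` (an assumption), with `InsufficientData` as an illustrative sentinel:

```cpp
#include <cmath>

constexpr double InsufficientData = -1.0; /* illustrative sentinel value */

/*
 * Weight between the colour statistics of two neighbouring regions: 1.0
 * when they match, falling off as a Gaussian of the difference. The
 * normalisation of diff is an assumption; the hunk elides that line.
 */
double computeWeight(double Ci, double Cj, double sigma)
{
	if (Ci == InsufficientData || Cj == InsufficientData)
		return 0.0;
	double diff = (Ci - Cj) / sigma;
	return std::exp(-diff * diff / 2);
}
```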
// Compute all weights. /* Compute all weights. */
static void computeW(double const C[XY], double sigma, double W[XY][4]) static void computeW(double const C[XY], double sigma, double W[XY][4])
{ {
for (int i = 0; i < XY; i++) { for (int i = 0; i < XY; i++) {
// Start with neighbour above and go clockwise. /* Start with neighbour above and go clockwise. */
W[i][0] = i >= X ? computeWeight(C[i], C[i - X], sigma) : 0; W[i][0] = i >= X ? computeWeight(C[i], C[i - X], sigma) : 0;
W[i][1] = i % X < X - 1 ? computeWeight(C[i], C[i + 1], sigma) : 0; W[i][1] = i % X < X - 1 ? computeWeight(C[i], C[i + 1], sigma) : 0;
W[i][2] = i < XY - X ? computeWeight(C[i], C[i + X], sigma) : 0; W[i][2] = i < XY - X ? computeWeight(C[i], C[i + X], sigma) : 0;
@ -534,17 +556,19 @@ static void computeW(double const C[XY], double sigma, double W[XY][4])
} }
} }
// Compute M, the large but sparse matrix such that M * lambdas = 0. /* Compute M, the large but sparse matrix such that M * lambdas = 0. */
static void constructM(double const C[XY], double const W[XY][4], static void constructM(double const C[XY], double const W[XY][4],
double M[XY][4]) double M[XY][4])
{ {
double epsilon = 0.001; double epsilon = 0.001;
for (int i = 0; i < XY; i++) { for (int i = 0; i < XY; i++) {
// Note how, if C[i] == INSUFFICIENT_DATA, the weights will all /*
// be zero so the equation is still set up correctly. * Note how, if C[i] == INSUFFICIENT_DATA, the weights will all
* be zero so the equation is still set up correctly.
*/
int m = !!(i >= X) + !!(i % X < X - 1) + !!(i < XY - X) + int m = !!(i >= X) + !!(i % X < X - 1) + !!(i < XY - X) +
!!(i % X); // total number of neighbours !!(i % X); /* total number of neighbours */
// we'll divide the diagonal out straight away /* we'll divide the diagonal out straight away */
double diagonal = (epsilon + W[i][0] + W[i][1] + W[i][2] + W[i][3]) * C[i]; double diagonal = (epsilon + W[i][0] + W[i][1] + W[i][2] + W[i][3]) * C[i];
M[i][0] = i >= X ? (W[i][0] * C[i - X] + epsilon / m * C[i]) / diagonal : 0; M[i][0] = i >= X ? (W[i][0] * C[i - X] + epsilon / m * C[i]) / diagonal : 0;
M[i][1] = i % X < X - 1 ? (W[i][1] * C[i + 1] + epsilon / m * C[i]) / diagonal : 0; M[i][1] = i % X < X - 1 ? (W[i][1] * C[i + 1] + epsilon / m * C[i]) / diagonal : 0;
@ -553,9 +577,11 @@ static void constructM(double const C[XY], double const W[XY][4],
} }
} }
// In the compute_lambda_ functions, note that the matrix coefficients for the /*
// left/right neighbours are zero down the left/right edges, so we don't * In the compute_lambda_ functions, note that the matrix coefficients for the
// need to test the i value to exclude them. * left/right neighbours are zero down the left/right edges, so we don't
* need to test the i value to exclude them.
*/
static double computeLambdaBottom(int i, double const M[XY][4], static double computeLambdaBottom(int i, double const M[XY][4],
double lambda[XY]) double lambda[XY])
{ {
@ -585,7 +611,7 @@ static double computeLambdaTopEnd(int i, double const M[XY][4],
return M[i][0] * lambda[i - X] + M[i][3] * lambda[i - 1]; return M[i][0] * lambda[i - X] + M[i][3] * lambda[i - 1];
} }
// Gauss-Seidel iteration with over-relaxation. /* Gauss-Seidel iteration with over-relaxation. */
static double gaussSeidel2Sor(double const M[XY][4], double omega, static double gaussSeidel2Sor(double const M[XY][4], double omega,
double lambda[XY], double lambdaBound) double lambda[XY], double lambdaBound)
{ {
@ -610,8 +636,10 @@ static double gaussSeidel2Sor(double const M[XY][4], double omega,
} }
lambda[i] = computeLambdaTopEnd(i, M, lambda); lambda[i] = computeLambdaTopEnd(i, M, lambda);
lambda[i] = std::clamp(lambda[i], min, max); lambda[i] = std::clamp(lambda[i], min, max);
// Also solve the system from bottom to top, to help spread the updates /*
// better. * Also solve the system from bottom to top, to help spread the updates
* better.
*/
lambda[i] = computeLambdaTopEnd(i, M, lambda); lambda[i] = computeLambdaTopEnd(i, M, lambda);
lambda[i] = std::clamp(lambda[i], min, max); lambda[i] = std::clamp(lambda[i], min, max);
for (i = XY - 2; i >= XY - X; i--) { for (i = XY - 2; i >= XY - X; i--) {
@ -637,7 +665,7 @@ static double gaussSeidel2Sor(double const M[XY][4], double omega,
return maxDiff; return maxDiff;
} }
// Normalise the values so that the smallest value is 1. /* Normalise the values so that the smallest value is 1. */
static void normalise(double *ptr, size_t n) static void normalise(double *ptr, size_t n)
{ {
double minval = ptr[0]; double minval = ptr[0];
@ -647,7 +675,7 @@ static void normalise(double *ptr, size_t n)
ptr[i] /= minval; ptr[i] /= minval;
} }
// Rescale the values so that the average value is 1. /* Rescale the values so that the average value is 1. */
static void reaverage(Span<double> data) static void reaverage(Span<double> data)
{ {
double sum = std::accumulate(data.begin(), data.end(), 0.0); double sum = std::accumulate(data.begin(), data.end(), 0.0);
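The `reaverage` helper is cut short by the hunk after the `std::accumulate` call; it presumably finishes by scaling every entry so the mean becomes exactly 1. A sketch under that assumption, on a `std::vector` rather than a `Span`:

```cpp
#include <numeric>
#include <vector>

/*
 * Sketch of reaverage: rescale the table so its average value is 1.0.
 * The body after the accumulate call is elided by the hunk, so the
 * scaling step here is an assumption.
 */
void reaverage(std::vector<double> &data)
{
	double sum = std::accumulate(data.begin(), data.end(), 0.0);
	double ratio = data.size() / sum;
	for (double &d : data)
		d *= ratio;
}
```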
@ -670,15 +698,17 @@ static void runMatrixIterations(double const C[XY], double lambda[XY],
<< "Stop after " << i + 1 << " iterations"; << "Stop after " << i + 1 << " iterations";
break; break;
} }
// this happens very occasionally (so make a note), though /*
// doesn't seem to matter * this happens very occasionally (so make a note), though
* doesn't seem to matter
*/
if (maxDiff > lastMaxDiff) if (maxDiff > lastMaxDiff)
LOG(RPiAlsc, Debug) LOG(RPiAlsc, Debug)
<< "Iteration " << i << ": maxDiff gone up " << "Iteration " << i << ": maxDiff gone up "
<< lastMaxDiff << " to " << maxDiff; << lastMaxDiff << " to " << maxDiff;
lastMaxDiff = maxDiff; lastMaxDiff = maxDiff;
} }
// We're going to normalise the lambdas so the total average is 1. /* We're going to normalise the lambdas so the total average is 1. */
reaverage({ lambda, XY }); reaverage({ lambda, XY });
} }
@ -712,41 +742,49 @@ void addLuminanceToTables(double results[3][Y][X], double const lambdaR[XY],
void Alsc::doAlsc() void Alsc::doAlsc()
{ {
double cr[XY], cb[XY], wr[XY][4], wb[XY][4], calTableR[XY], calTableB[XY], calTableTmp[XY]; double cr[XY], cb[XY], wr[XY][4], wb[XY][4], calTableR[XY], calTableB[XY], calTableTmp[XY];
// Calculate our R/B ("Cr"/"Cb") colour statistics, and assess which are /*
// usable. * Calculate our R/B ("Cr"/"Cb") colour statistics, and assess which are
* usable.
*/
calculateCrCb(statistics_, cr, cb, config_.minCount, config_.minG); calculateCrCb(statistics_, cr, cb, config_.minCount, config_.minG);
// Fetch the new calibrations (if any) for this CT. Resample them in /*
// case the camera mode is not full-frame. * Fetch the new calibrations (if any) for this CT. Resample them in
* case the camera mode is not full-frame.
*/
getCalTable(ct_, config_.calibrationsCr, calTableTmp); getCalTable(ct_, config_.calibrationsCr, calTableTmp);
resampleCalTable(calTableTmp, cameraMode_, calTableR); resampleCalTable(calTableTmp, cameraMode_, calTableR);
getCalTable(ct_, config_.calibrationsCb, calTableTmp); getCalTable(ct_, config_.calibrationsCb, calTableTmp);
resampleCalTable(calTableTmp, cameraMode_, calTableB); resampleCalTable(calTableTmp, cameraMode_, calTableB);
// You could print out the cal tables for this image here, if you're /*
// tuning the algorithm... * You could print out the cal tables for this image here, if you're
// Apply any calibration to the statistics, so the adaptive algorithm * tuning the algorithm...
// makes only the extra adjustments. * Apply any calibration to the statistics, so the adaptive algorithm
* makes only the extra adjustments.
*/
applyCalTable(calTableR, cr); applyCalTable(calTableR, cr);
applyCalTable(calTableB, cb); applyCalTable(calTableB, cb);
// Compute weights between zones. /* Compute weights between zones. */
computeW(cr, config_.sigmaCr, wr); computeW(cr, config_.sigmaCr, wr);
computeW(cb, config_.sigmaCb, wb); computeW(cb, config_.sigmaCb, wb);
// Run Gauss-Seidel iterations over the resulting matrix, for R and B. /* Run Gauss-Seidel iterations over the resulting matrix, for R and B. */
runMatrixIterations(cr, lambdaR_, wr, config_.omega, config_.nIter, runMatrixIterations(cr, lambdaR_, wr, config_.omega, config_.nIter,
config_.threshold, config_.lambdaBound); config_.threshold, config_.lambdaBound);
runMatrixIterations(cb, lambdaB_, wb, config_.omega, config_.nIter, runMatrixIterations(cb, lambdaB_, wb, config_.omega, config_.nIter,
config_.threshold, config_.lambdaBound); config_.threshold, config_.lambdaBound);
// Fold the calibrated gains into our final lambda values. (Note that on /*
// the next run, we re-start with the lambda values that don't have the * Fold the calibrated gains into our final lambda values. (Note that on
// calibration gains included.) * the next run, we re-start with the lambda values that don't have the
* calibration gains included.)
*/
compensateLambdasForCal(calTableR, lambdaR_, asyncLambdaR_); compensateLambdasForCal(calTableR, lambdaR_, asyncLambdaR_);
compensateLambdasForCal(calTableB, lambdaB_, asyncLambdaB_); compensateLambdasForCal(calTableB, lambdaB_, asyncLambdaB_);
// Fold in the luminance table at the appropriate strength. /* Fold in the luminance table at the appropriate strength. */
addLuminanceToTables(asyncResults_, asyncLambdaR_, 1.0, addLuminanceToTables(asyncResults_, asyncLambdaR_, 1.0,
asyncLambdaB_, luminanceTable_, asyncLambdaB_, luminanceTable_,
config_.luminanceStrength); config_.luminanceStrength);
} }
// Register algorithm with the system. /* Register algorithm with the system. */
static Algorithm *create(Controller *controller) static Algorithm *create(Controller *controller)
{ {
return (Algorithm *)new Alsc(controller); return (Algorithm *)new Alsc(controller);

View file

@ -15,7 +15,7 @@
namespace RPiController { namespace RPiController {
// Algorithm to generate automagic LSC (Lens Shading Correction) tables. /* Algorithm to generate automagic LSC (Lens Shading Correction) tables. */
struct AlscCalibration { struct AlscCalibration {
double ct; double ct;
@ -23,11 +23,11 @@ struct AlscCalibration {
}; };
struct AlscConfig { struct AlscConfig {
// Only repeat the ALSC calculation every "this many" frames /* Only repeat the ALSC calculation every "this many" frames */
uint16_t framePeriod; uint16_t framePeriod;
// number of initial frames for which speed taken as 1.0 (maximum) /* number of initial frames for which speed taken as 1.0 (maximum) */
uint16_t startupFrames; uint16_t startupFrames;
// IIR filter speed applied to algorithm results /* IIR filter speed applied to algorithm results */
double speed; double speed;
double sigmaCr; double sigmaCr;
double sigmaCb; double sigmaCb;
@ -39,9 +39,9 @@ struct AlscConfig {
double luminanceStrength; double luminanceStrength;
std::vector<AlscCalibration> calibrationsCr; std::vector<AlscCalibration> calibrationsCr;
std::vector<AlscCalibration> calibrationsCb; std::vector<AlscCalibration> calibrationsCb;
double defaultCt; // colour temperature if no metadata found double defaultCt; /* colour temperature if no metadata found */
double threshold; // iteration termination threshold double threshold; /* iteration termination threshold */
double lambdaBound; // upper/lower bound for lambda from a value of 1 double lambdaBound; /* upper/lower bound for lambda from a value of 1 */
}; };
class Alsc : public Algorithm class Alsc : public Algorithm
@ -57,41 +57,45 @@ public:
void process(StatisticsPtr &stats, Metadata *imageMetadata) override; void process(StatisticsPtr &stats, Metadata *imageMetadata) override;
private: private:
// configuration is read-only, and available to both threads /* configuration is read-only, and available to both threads */
AlscConfig config_; AlscConfig config_;
bool firstTime_; bool firstTime_;
CameraMode cameraMode_; CameraMode cameraMode_;
double luminanceTable_[ALSC_CELLS_X * ALSC_CELLS_Y]; double luminanceTable_[ALSC_CELLS_X * ALSC_CELLS_Y];
std::thread asyncThread_; std::thread asyncThread_;
void asyncFunc(); // asynchronous thread function void asyncFunc(); /* asynchronous thread function */
std::mutex mutex_; std::mutex mutex_;
// condvar for async thread to wait on /* condvar for async thread to wait on */
std::condition_variable asyncSignal_; std::condition_variable asyncSignal_;
// condvar for synchronous thread to wait on /* condvar for synchronous thread to wait on */
std::condition_variable syncSignal_; std::condition_variable syncSignal_;
// for sync thread to check if async thread finished (requires mutex) /* for sync thread to check if async thread finished (requires mutex) */
bool asyncFinished_; bool asyncFinished_;
// for async thread to check if it's been told to run (requires mutex) /* for async thread to check if it's been told to run (requires mutex) */
bool asyncStart_; bool asyncStart_;
// for async thread to check if it's been told to quit (requires mutex) /* for async thread to check if it's been told to quit (requires mutex) */
bool asyncAbort_; bool asyncAbort_;
// The following are only for the synchronous thread to use: /*
// for sync thread to note it has asked async thread to run * The following are only for the synchronous thread to use:
 * for sync thread to note it has asked async thread to run
*/
bool asyncStarted_; bool asyncStarted_;
// counts up to framePeriod before restarting the async thread /* counts up to framePeriod before restarting the async thread */
int framePhase_; int framePhase_;
// counts up to startupFrames /* counts up to startupFrames */
int frameCount_; int frameCount_;
// counts up to startupFrames for Process function /* counts up to startupFrames for Process function */
int frameCount2_; int frameCount2_;
double syncResults_[3][ALSC_CELLS_Y][ALSC_CELLS_X]; double syncResults_[3][ALSC_CELLS_Y][ALSC_CELLS_X];
double prevSyncResults_[3][ALSC_CELLS_Y][ALSC_CELLS_X]; double prevSyncResults_[3][ALSC_CELLS_Y][ALSC_CELLS_X];
void waitForAysncThread(); void waitForAysncThread();
// The following are for the asynchronous thread to use, though the main /*
// thread can set/reset them if the async thread is known to be idle: * The following are for the asynchronous thread to use, though the main
* thread can set/reset them if the async thread is known to be idle:
*/
void restartAsync(StatisticsPtr &stats, Metadata *imageMetadata); void restartAsync(StatisticsPtr &stats, Metadata *imageMetadata);
// copy out the results from the async thread so that it can be restarted /* copy out the results from the async thread so that it can be restarted */
void fetchAsyncResults(); void fetchAsyncResults();
double ct_; double ct_;
bcm2835_isp_stats_region statistics_[ALSC_CELLS_Y * ALSC_CELLS_X]; bcm2835_isp_stats_region statistics_[ALSC_CELLS_Y * ALSC_CELLS_X];
@ -103,4 +107,4 @@ private:
double lambdaB_[ALSC_CELLS_X * ALSC_CELLS_Y]; double lambdaB_[ALSC_CELLS_X * ALSC_CELLS_Y];
}; };
} // namespace RPiController } /* namespace RPiController */


@ -21,8 +21,10 @@ LOG_DEFINE_CATEGORY(RPiAwb)
#define AWB_STATS_SIZE_X DEFAULT_AWB_REGIONS_X #define AWB_STATS_SIZE_X DEFAULT_AWB_REGIONS_X
#define AWB_STATS_SIZE_Y DEFAULT_AWB_REGIONS_Y #define AWB_STATS_SIZE_Y DEFAULT_AWB_REGIONS_Y
// todo - the locking in this algorithm needs some tidying up as has been done /*
// elsewhere (ALSC and AGC). * todo - the locking in this algorithm needs some tidying up as has been done
* elsewhere (ALSC and AGC).
*/
void AwbMode::read(boost::property_tree::ptree const &params) void AwbMode::read(boost::property_tree::ptree const &params)
{ {
@ -107,11 +109,11 @@ void AwbConfig::read(boost::property_tree::ptree const &params)
bayes = false; bayes = false;
} }
} }
fast = params.get<int>("fast", bayes); // default to fast for Bayesian, otherwise slow fast = params.get<int>("fast", bayes); /* default to fast for Bayesian, otherwise slow */
whitepointR = params.get<double>("whitepoint_r", 0.0); whitepointR = params.get<double>("whitepoint_r", 0.0);
whitepointB = params.get<double>("whitepoint_b", 0.0); whitepointB = params.get<double>("whitepoint_b", 0.0);
if (bayes == false) if (bayes == false)
sensitivityR = sensitivityB = 1.0; // nor do sensitivities make any sense sensitivityR = sensitivityB = 1.0; /* nor do sensitivities make any sense */
} }
Awb::Awb(Controller *controller) Awb::Awb(Controller *controller)
@ -147,16 +149,18 @@ void Awb::read(boost::property_tree::ptree const &params)
void Awb::initialise() void Awb::initialise()
{ {
frameCount_ = framePhase_ = 0; frameCount_ = framePhase_ = 0;
// Put something sane into the status that we are filtering towards, /*
// just in case the first few frames don't have anything meaningful in * Put something sane into the status that we are filtering towards,
// them. * just in case the first few frames don't have anything meaningful in
* them.
*/
if (!config_.ctR.empty() && !config_.ctB.empty()) { if (!config_.ctR.empty() && !config_.ctB.empty()) {
syncResults_.temperatureK = config_.ctR.domain().clip(4000); syncResults_.temperatureK = config_.ctR.domain().clip(4000);
syncResults_.gainR = 1.0 / config_.ctR.eval(syncResults_.temperatureK); syncResults_.gainR = 1.0 / config_.ctR.eval(syncResults_.temperatureK);
syncResults_.gainG = 1.0; syncResults_.gainG = 1.0;
syncResults_.gainB = 1.0 / config_.ctB.eval(syncResults_.temperatureK); syncResults_.gainB = 1.0 / config_.ctB.eval(syncResults_.temperatureK);
} else { } else {
// random values just to stop the world blowing up /* random values just to stop the world blowing up */
syncResults_.temperatureK = 4500; syncResults_.temperatureK = 4500;
syncResults_.gainR = syncResults_.gainG = syncResults_.gainB = 1.0; syncResults_.gainR = syncResults_.gainG = syncResults_.gainB = 1.0;
} }
@ -171,7 +175,7 @@ bool Awb::isPaused() const
void Awb::pause() void Awb::pause()
{ {
// "Pause" by fixing everything to the most recent values. /* "Pause" by fixing everything to the most recent values. */
manualR_ = syncResults_.gainR = prevSyncResults_.gainR; manualR_ = syncResults_.gainR = prevSyncResults_.gainR;
manualB_ = syncResults_.gainB = prevSyncResults_.gainB; manualB_ = syncResults_.gainB = prevSyncResults_.gainB;
syncResults_.gainG = prevSyncResults_.gainG; syncResults_.gainG = prevSyncResults_.gainG;
@ -186,8 +190,10 @@ void Awb::resume()
unsigned int Awb::getConvergenceFrames() const unsigned int Awb::getConvergenceFrames() const
{ {
// If not in auto mode, there is no convergence /*
// to happen, so no need to drop any frames - return zero. * If not in auto mode, there is no convergence
* to happen, so no need to drop any frames - return zero.
*/
if (!isAutoEnabled()) if (!isAutoEnabled())
return 0; return 0;
else else
@ -201,11 +207,13 @@ void Awb::setMode(std::string const &modeName)
void Awb::setManualGains(double manualR, double manualB) void Awb::setManualGains(double manualR, double manualB)
{ {
// If any of these are 0.0, we switch back to auto. /* If any of these are 0.0, we switch back to auto. */
manualR_ = manualR; manualR_ = manualR;
manualB_ = manualB; manualB_ = manualB;
// If not in auto mode, set these values into the syncResults which /*
// means that Prepare() will adopt them immediately. * If not in auto mode, set these values into the syncResults which
* means that Prepare() will adopt them immediately.
*/
if (!isAutoEnabled()) { if (!isAutoEnabled()) {
syncResults_.gainR = prevSyncResults_.gainR = manualR_; syncResults_.gainR = prevSyncResults_.gainR = manualR_;
syncResults_.gainG = prevSyncResults_.gainG = 1.0; syncResults_.gainG = prevSyncResults_.gainG = 1.0;
@ -216,8 +224,10 @@ void Awb::setManualGains(double manualR, double manualB)
void Awb::switchMode([[maybe_unused]] CameraMode const &cameraMode, void Awb::switchMode([[maybe_unused]] CameraMode const &cameraMode,
Metadata *metadata) Metadata *metadata)
{ {
// On the first mode switch we'll have no meaningful colour /*
// temperature, so try to dead reckon one if in manual mode. * On the first mode switch we'll have no meaningful colour
* temperature, so try to dead reckon one if in manual mode.
*/
if (!isAutoEnabled() && firstSwitchMode_ && config_.bayes) { if (!isAutoEnabled() && firstSwitchMode_ && config_.bayes) {
Pwl ctRInverse = config_.ctR.inverse(); Pwl ctRInverse = config_.ctR.inverse();
Pwl ctBInverse = config_.ctB.inverse(); Pwl ctBInverse = config_.ctB.inverse();
@ -226,7 +236,7 @@ void Awb::switchMode([[maybe_unused]] CameraMode const &cameraMode,
prevSyncResults_.temperatureK = (ctR + ctB) / 2; prevSyncResults_.temperatureK = (ctR + ctB) / 2;
syncResults_.temperatureK = prevSyncResults_.temperatureK; syncResults_.temperatureK = prevSyncResults_.temperatureK;
} }
// Let other algorithms know the current white balance values. /* Let other algorithms know the current white balance values. */
metadata->set("awb.status", prevSyncResults_); metadata->set("awb.status", prevSyncResults_);
firstSwitchMode_ = false; firstSwitchMode_ = false;
} }
@ -241,8 +251,10 @@ void Awb::fetchAsyncResults()
LOG(RPiAwb, Debug) << "Fetch AWB results"; LOG(RPiAwb, Debug) << "Fetch AWB results";
asyncFinished_ = false; asyncFinished_ = false;
asyncStarted_ = false; asyncStarted_ = false;
// It's possible manual gains could be set even while the async /*
// thread was running, so only copy the results if still in auto mode. * It's possible manual gains could be set even while the async
* thread was running, so only copy the results if still in auto mode.
*/
if (isAutoEnabled()) if (isAutoEnabled())
syncResults_ = asyncResults_; syncResults_ = asyncResults_;
} }
@ -250,9 +262,9 @@ void Awb::fetchAsyncResults()
void Awb::restartAsync(StatisticsPtr &stats, double lux) void Awb::restartAsync(StatisticsPtr &stats, double lux)
{ {
LOG(RPiAwb, Debug) << "Starting AWB calculation"; LOG(RPiAwb, Debug) << "Starting AWB calculation";
// this makes a new reference which belongs to the asynchronous thread /* this makes a new reference which belongs to the asynchronous thread */
statistics_ = stats; statistics_ = stats;
// store the mode as it could technically change /* store the mode as it could technically change */
auto m = config_.modes.find(modeName_); auto m = config_.modes.find(modeName_);
mode_ = m != config_.modes.end() mode_ = m != config_.modes.end()
? &m->second ? &m->second
@ -284,7 +296,7 @@ void Awb::prepare(Metadata *imageMetadata)
if (asyncStarted_ && asyncFinished_) if (asyncStarted_ && asyncFinished_)
fetchAsyncResults(); fetchAsyncResults();
} }
// Finally apply IIR filter to results and put into metadata. /* Finally apply IIR filter to results and put into metadata. */
memcpy(prevSyncResults_.mode, syncResults_.mode, memcpy(prevSyncResults_.mode, syncResults_.mode,
sizeof(prevSyncResults_.mode)); sizeof(prevSyncResults_.mode));
prevSyncResults_.temperatureK = speed * syncResults_.temperatureK + prevSyncResults_.temperatureK = speed * syncResults_.temperatureK +
@ -304,17 +316,17 @@ void Awb::prepare(Metadata *imageMetadata)
void Awb::process(StatisticsPtr &stats, Metadata *imageMetadata) void Awb::process(StatisticsPtr &stats, Metadata *imageMetadata)
{ {
// Count frames since we last poked the async thread. /* Count frames since we last poked the async thread. */
if (framePhase_ < (int)config_.framePeriod) if (framePhase_ < (int)config_.framePeriod)
framePhase_++; framePhase_++;
LOG(RPiAwb, Debug) << "frame_phase " << framePhase_; LOG(RPiAwb, Debug) << "frame_phase " << framePhase_;
// We do not restart the async thread if we're not in auto mode. /* We do not restart the async thread if we're not in auto mode. */
if (isAutoEnabled() && if (isAutoEnabled() &&
(framePhase_ >= (int)config_.framePeriod || (framePhase_ >= (int)config_.framePeriod ||
frameCount_ < (int)config_.startupFrames)) { frameCount_ < (int)config_.startupFrames)) {
// Update any settings and any image metadata that we need. /* Update any settings and any image metadata that we need. */
struct LuxStatus luxStatus = {}; struct LuxStatus luxStatus = {};
luxStatus.lux = 400; // in case no metadata luxStatus.lux = 400; /* in case no metadata */
if (imageMetadata->get("lux.status", luxStatus) != 0) if (imageMetadata->get("lux.status", luxStatus) != 0)
LOG(RPiAwb, Debug) << "No lux metadata found"; LOG(RPiAwb, Debug) << "No lux metadata found";
LOG(RPiAwb, Debug) << "Awb lux value is " << luxStatus.lux; LOG(RPiAwb, Debug) << "Awb lux value is " << luxStatus.lux;
@ -366,15 +378,21 @@ static void generateStats(std::vector<Awb::RGB> &zones,
void Awb::prepareStats() void Awb::prepareStats()
{ {
zones_.clear(); zones_.clear();
// LSC has already been applied to the stats in this pipeline, so stop /*
// any LSC compensation. We also ignore config_.fast in this version. * LSC has already been applied to the stats in this pipeline, so stop
* any LSC compensation. We also ignore config_.fast in this version.
*/
generateStats(zones_, statistics_->awb_stats, config_.minPixels, generateStats(zones_, statistics_->awb_stats, config_.minPixels,
config_.minG); config_.minG);
// we're done with these; we may as well relinquish our hold on the /*
// pointer. * we're done with these; we may as well relinquish our hold on the
* pointer.
*/
statistics_.reset(); statistics_.reset();
// apply sensitivities, so values appear to come from our "canonical" /*
// sensor. * apply sensitivities, so values appear to come from our "canonical"
* sensor.
*/
for (auto &zone : zones_) { for (auto &zone : zones_) {
zone.R *= config_.sensitivityR; zone.R *= config_.sensitivityR;
zone.B *= config_.sensitivityB; zone.B *= config_.sensitivityB;
@ -383,14 +401,16 @@ void Awb::prepareStats()
double Awb::computeDelta2Sum(double gainR, double gainB) double Awb::computeDelta2Sum(double gainR, double gainB)
{ {
// Compute the sum of the squared colour error (non-greyness) as it /*
// appears in the log likelihood equation. * Compute the sum of the squared colour error (non-greyness) as it
* appears in the log likelihood equation.
*/
double delta2Sum = 0; double delta2Sum = 0;
for (auto &z : zones_) { for (auto &z : zones_) {
double deltaR = gainR * z.R - 1 - config_.whitepointR; double deltaR = gainR * z.R - 1 - config_.whitepointR;
double deltaB = gainB * z.B - 1 - config_.whitepointB; double deltaB = gainB * z.B - 1 - config_.whitepointB;
double delta2 = deltaR * deltaR + deltaB * deltaB; double delta2 = deltaR * deltaR + deltaB * deltaB;
//LOG(RPiAwb, Debug) << "deltaR " << deltaR << " deltaB " << deltaB << " delta2 " << delta2; /* LOG(RPiAwb, Debug) << "deltaR " << deltaR << " deltaB " << deltaB << " delta2 " << delta2; */
delta2 = std::min(delta2, config_.deltaLimit); delta2 = std::min(delta2, config_.deltaLimit);
delta2Sum += delta2; delta2Sum += delta2;
} }
@ -399,15 +419,17 @@ double Awb::computeDelta2Sum(double gainR, double gainB)
Pwl Awb::interpolatePrior() Pwl Awb::interpolatePrior()
{ {
// Interpolate the prior log likelihood function for our current lux /*
// value. * Interpolate the prior log likelihood function for our current lux
* value.
*/
if (lux_ <= config_.priors.front().lux) if (lux_ <= config_.priors.front().lux)
return config_.priors.front().prior; return config_.priors.front().prior;
else if (lux_ >= config_.priors.back().lux) else if (lux_ >= config_.priors.back().lux)
return config_.priors.back().prior; return config_.priors.back().prior;
else { else {
int idx = 0; int idx = 0;
// find which two we lie between /* find which two we lie between */
while (config_.priors[idx + 1].lux < lux_) while (config_.priors[idx + 1].lux < lux_)
idx++; idx++;
double lux0 = config_.priors[idx].lux, double lux0 = config_.priors[idx].lux,
@ -424,8 +446,10 @@ Pwl Awb::interpolatePrior()
static double interpolateQuadatric(Pwl::Point const &a, Pwl::Point const &b, static double interpolateQuadatric(Pwl::Point const &a, Pwl::Point const &b,
Pwl::Point const &c) Pwl::Point const &c)
{ {
// Given 3 points on a curve, find the extremum of the function in that /*
// interval by fitting a quadratic. * Given 3 points on a curve, find the extremum of the function in that
* interval by fitting a quadratic.
*/
const double eps = 1e-3; const double eps = 1e-3;
Pwl::Point ca = c - a, ba = b - a; Pwl::Point ca = c - a, ba = b - a;
double denominator = 2 * (ba.y * ca.x - ca.y * ba.x); double denominator = 2 * (ba.y * ca.x - ca.y * ba.x);
@ -434,17 +458,17 @@ static double interpolateQuadatric(Pwl::Point const &a, Pwl::Point const &b,
double result = numerator / denominator + a.x; double result = numerator / denominator + a.x;
return std::max(a.x, std::min(c.x, result)); return std::max(a.x, std::min(c.x, result));
} }
// has degenerated to straight line segment /* has degenerated to straight line segment */
return a.y < c.y - eps ? a.x : (c.y < a.y - eps ? c.x : b.x); return a.y < c.y - eps ? a.x : (c.y < a.y - eps ? c.x : b.x);
} }
double Awb::coarseSearch(Pwl const &prior) double Awb::coarseSearch(Pwl const &prior)
{ {
points_.clear(); // assume doesn't deallocate memory points_.clear(); /* assume doesn't deallocate memory */
size_t bestPoint = 0; size_t bestPoint = 0;
double t = mode_->ctLo; double t = mode_->ctLo;
int spanR = 0, spanB = 0; int spanR = 0, spanB = 0;
// Step down the CT curve evaluating log likelihood. /* Step down the CT curve evaluating log likelihood. */
while (true) { while (true) {
double r = config_.ctR.eval(t, &spanR); double r = config_.ctR.eval(t, &spanR);
double b = config_.ctB.eval(t, &spanB); double b = config_.ctB.eval(t, &spanB);
@ -462,13 +486,15 @@ double Awb::coarseSearch(Pwl const &prior)
bestPoint = points_.size() - 1; bestPoint = points_.size() - 1;
if (t == mode_->ctHi) if (t == mode_->ctHi)
break; break;
// for even steps along the r/b curve scale them by the current t /* for even steps along the r/b curve scale them by the current t */
t = std::min(t + t / 10 * config_.coarseStep, mode_->ctHi); t = std::min(t + t / 10 * config_.coarseStep, mode_->ctHi);
} }
t = points_[bestPoint].x; t = points_[bestPoint].x;
LOG(RPiAwb, Debug) << "Coarse search found CT " << t; LOG(RPiAwb, Debug) << "Coarse search found CT " << t;
// We have the best point of the search, but refine it with a quadratic /*
// interpolation around its neighbours. * We have the best point of the search, but refine it with a quadratic
* interpolation around its neighbours.
*/
if (points_.size() > 2) { if (points_.size() > 2) {
unsigned long bp = std::min(bestPoint, points_.size() - 2); unsigned long bp = std::min(bestPoint, points_.size() - 2);
bestPoint = std::max(1UL, bp); bestPoint = std::max(1UL, bp);
@ -496,17 +522,21 @@ void Awb::fineSearch(double &t, double &r, double &b, Pwl const &prior)
Pwl::Point transverse(bDiff, -rDiff); Pwl::Point transverse(bDiff, -rDiff);
if (transverse.len2() < 1e-6) if (transverse.len2() < 1e-6)
return; return;
// unit vector orthogonal to the b vs. r function (pointing outwards /*
// with r and b increasing) * unit vector orthogonal to the b vs. r function (pointing outwards
* with r and b increasing)
*/
transverse = transverse / transverse.len(); transverse = transverse / transverse.len();
double bestLogLikelihood = 0, bestT = 0, bestR = 0, bestB = 0; double bestLogLikelihood = 0, bestT = 0, bestR = 0, bestB = 0;
double transverseRange = config_.transverseNeg + config_.transversePos; double transverseRange = config_.transverseNeg + config_.transversePos;
const int maxNumDeltas = 12; const int maxNumDeltas = 12;
// a transverse step approximately every 0.01 r/b units /* a transverse step approximately every 0.01 r/b units */
int numDeltas = floor(transverseRange * 100 + 0.5) + 1; int numDeltas = floor(transverseRange * 100 + 0.5) + 1;
numDeltas = numDeltas < 3 ? 3 : (numDeltas > maxNumDeltas ? maxNumDeltas : numDeltas); numDeltas = numDeltas < 3 ? 3 : (numDeltas > maxNumDeltas ? maxNumDeltas : numDeltas);
// Step down CT curve. March a bit further if the transverse range is /*
// large. * Step down CT curve. March a bit further if the transverse range is
* large.
*/
nsteps += numDeltas; nsteps += numDeltas;
for (int i = -nsteps; i <= nsteps; i++) { for (int i = -nsteps; i <= nsteps; i++) {
double tTest = t + i * step; double tTest = t + i * step;
@ -514,10 +544,10 @@ void Awb::fineSearch(double &t, double &r, double &b, Pwl const &prior)
prior.eval(prior.domain().clip(tTest)); prior.eval(prior.domain().clip(tTest));
double rCurve = config_.ctR.eval(tTest, &spanR); double rCurve = config_.ctR.eval(tTest, &spanR);
double bCurve = config_.ctB.eval(tTest, &spanB); double bCurve = config_.ctB.eval(tTest, &spanB);
// x will be distance off the curve, y the log likelihood there /* x will be distance off the curve, y the log likelihood there */
Pwl::Point points[maxNumDeltas]; Pwl::Point points[maxNumDeltas];
int bestPoint = 0; int bestPoint = 0;
// Take some measurements transversely *off* the CT curve. /* Take some measurements transversely *off* the CT curve. */
for (int j = 0; j < numDeltas; j++) { for (int j = 0; j < numDeltas; j++) {
points[j].x = -config_.transverseNeg + points[j].x = -config_.transverseNeg +
(transverseRange * j) / (numDeltas - 1); (transverseRange * j) / (numDeltas - 1);
@ -533,8 +563,10 @@ void Awb::fineSearch(double &t, double &r, double &b, Pwl const &prior)
if (points[j].y < points[bestPoint].y) if (points[j].y < points[bestPoint].y)
bestPoint = j; bestPoint = j;
} }
// We have NUM_DELTAS points transversely across the CT curve, /*
// now let's do a quadratic interpolation for the best result. * We have NUM_DELTAS points transversely across the CT curve,
* now let's do a quadratic interpolation for the best result.
*/
bestPoint = std::max(1, std::min(bestPoint, numDeltas - 2)); bestPoint = std::max(1, std::min(bestPoint, numDeltas - 2));
Pwl::Point rbTest = Pwl::Point(rCurve, bCurve) + Pwl::Point rbTest = Pwl::Point(rCurve, bCurve) +
transverse * interpolateQuadatric(points[bestPoint - 1], transverse * interpolateQuadatric(points[bestPoint - 1],
@ -560,12 +592,16 @@ void Awb::fineSearch(double &t, double &r, double &b, Pwl const &prior)
void Awb::awbBayes() void Awb::awbBayes()
{ {
// May as well divide out G to save computeDelta2Sum from doing it over /*
// and over. * May as well divide out G to save computeDelta2Sum from doing it over
* and over.
*/
for (auto &z : zones_) for (auto &z : zones_)
z.R = z.R / (z.G + 1), z.B = z.B / (z.G + 1); z.R = z.R / (z.G + 1), z.B = z.B / (z.G + 1);
// Get the current prior, and scale according to how many zones are /*
// valid... not entirely sure about this. * Get the current prior, and scale according to how many zones are
* valid... not entirely sure about this.
*/
Pwl prior = interpolatePrior(); Pwl prior = interpolatePrior();
prior *= zones_.size() / (double)(AWB_STATS_SIZE_X * AWB_STATS_SIZE_Y); prior *= zones_.size() / (double)(AWB_STATS_SIZE_X * AWB_STATS_SIZE_Y);
prior.map([](double x, double y) { prior.map([](double x, double y) {
@ -577,19 +613,23 @@ void Awb::awbBayes()
LOG(RPiAwb, Debug) LOG(RPiAwb, Debug)
<< "After coarse search: r " << r << " b " << b << " (gains r " << "After coarse search: r " << r << " b " << b << " (gains r "
<< 1 / r << " b " << 1 / b << ")"; << 1 / r << " b " << 1 / b << ")";
// Not entirely sure how to handle the fine search yet. Mostly the /*
// estimated CT is already good enough, but the fine search allows us to * Not entirely sure how to handle the fine search yet. Mostly the
// wander transversely off the CT curve. Under some illuminants, where * estimated CT is already good enough, but the fine search allows us to
// there may be more or less green light, this may prove beneficial, * wander transversely off the CT curve. Under some illuminants, where
// though I probably need more real datasets before deciding exactly how * there may be more or less green light, this may prove beneficial,
// this should be controlled and tuned. * though I probably need more real datasets before deciding exactly how
* this should be controlled and tuned.
*/
fineSearch(t, r, b, prior); fineSearch(t, r, b, prior);
LOG(RPiAwb, Debug) LOG(RPiAwb, Debug)
<< "After fine search: r " << r << " b " << b << " (gains r " << "After fine search: r " << r << " b " << b << " (gains r "
<< 1 / r << " b " << 1 / b << ")"; << 1 / r << " b " << 1 / b << ")";
// Write results out for the main thread to pick up. Remember to adjust /*
// the gains from the ones that the "canonical sensor" would require to * Write results out for the main thread to pick up. Remember to adjust
// the ones needed by *this* sensor. * the gains from the ones that the "canonical sensor" would require to
* the ones needed by *this* sensor.
*/
asyncResults_.temperatureK = t; asyncResults_.temperatureK = t;
asyncResults_.gainR = 1.0 / r * config_.sensitivityR; asyncResults_.gainR = 1.0 / r * config_.sensitivityR;
asyncResults_.gainG = 1.0; asyncResults_.gainG = 1.0;
@ -599,10 +639,12 @@ void Awb::awbBayes()
void Awb::awbGrey() void Awb::awbGrey()
{ {
LOG(RPiAwb, Debug) << "Grey world AWB"; LOG(RPiAwb, Debug) << "Grey world AWB";
// Make a separate list of the derivatives for each of red and blue, so /*
// that we can sort them to exclude the extreme gains. We could * Make a separate list of the derivatives for each of red and blue, so
// consider some variations, such as normalising all the zones first, or * that we can sort them to exclude the extreme gains. We could
// doing an L2 average etc. * consider some variations, such as normalising all the zones first, or
* doing an L2 average etc.
*/
std::vector<RGB> &derivsR(zones_); std::vector<RGB> &derivsR(zones_);
std::vector<RGB> derivsB(derivsR); std::vector<RGB> derivsB(derivsR);
std::sort(derivsR.begin(), derivsR.end(), std::sort(derivsR.begin(), derivsR.end(),
@ -613,7 +655,7 @@ void Awb::awbGrey()
[](RGB const &a, RGB const &b) { [](RGB const &a, RGB const &b) {
return a.G * b.B < b.G * a.B; return a.G * b.B < b.G * a.B;
}); });
// Average the middle half of the values. /* Average the middle half of the values. */
int discard = derivsR.size() / 4; int discard = derivsR.size() / 4;
RGB sumR(0, 0, 0), sumB(0, 0, 0); RGB sumR(0, 0, 0), sumB(0, 0, 0);
for (auto ri = derivsR.begin() + discard, for (auto ri = derivsR.begin() + discard,
@ -622,7 +664,7 @@ void Awb::awbGrey()
sumR += *ri, sumB += *bi; sumR += *ri, sumB += *bi;
double gainR = sumR.G / (sumR.R + 1), double gainR = sumR.G / (sumR.R + 1),
gainB = sumB.G / (sumB.B + 1); gainB = sumB.G / (sumB.B + 1);
asyncResults_.temperatureK = 4500; // don't know what it is asyncResults_.temperatureK = 4500; /* don't know what it is */
asyncResults_.gainR = gainR; asyncResults_.gainR = gainR;
asyncResults_.gainG = 1.0; asyncResults_.gainG = 1.0;
asyncResults_.gainB = gainB; asyncResults_.gainB = gainB;
@ -645,7 +687,7 @@ void Awb::doAwb()
} }
} }
// Register algorithm with the system. /* Register algorithm with the system. */
static Algorithm *create(Controller *controller) static Algorithm *create(Controller *controller)
{ {
return (Algorithm *)new Awb(controller); return (Algorithm *)new Awb(controller);


@ -16,63 +16,71 @@
namespace RPiController { namespace RPiController {
// Control algorithm to perform AWB calculations. /* Control algorithm to perform AWB calculations. */
struct AwbMode { struct AwbMode {
void read(boost::property_tree::ptree const &params); void read(boost::property_tree::ptree const &params);
double ctLo; // low CT value for search double ctLo; /* low CT value for search */
double ctHi; // high CT value for search double ctHi; /* high CT value for search */
}; };
struct AwbPrior { struct AwbPrior {
void read(boost::property_tree::ptree const &params); void read(boost::property_tree::ptree const &params);
double lux; // lux level double lux; /* lux level */
Pwl prior; // maps CT to prior log likelihood for this lux level Pwl prior; /* maps CT to prior log likelihood for this lux level */
}; };
struct AwbConfig { struct AwbConfig {
AwbConfig() : defaultMode(nullptr) {} AwbConfig() : defaultMode(nullptr) {}
void read(boost::property_tree::ptree const &params); void read(boost::property_tree::ptree const &params);
// Only repeat the AWB calculation every "this many" frames /* Only repeat the AWB calculation every "this many" frames */
uint16_t framePeriod; uint16_t framePeriod;
// number of initial frames for which speed taken as 1.0 (maximum) /* number of initial frames for which speed taken as 1.0 (maximum) */
uint16_t startupFrames; uint16_t startupFrames;
unsigned int convergenceFrames; // approx number of frames to converge unsigned int convergenceFrames; /* approx number of frames to converge */
double speed; // IIR filter speed applied to algorithm results double speed; /* IIR filter speed applied to algorithm results */
bool fast; // "fast" mode uses a 16x16 rather than 32x32 grid bool fast; /* "fast" mode uses a 16x16 rather than 32x32 grid */
Pwl ctR; // function maps CT to r (= R/G) Pwl ctR; /* function maps CT to r (= R/G) */
Pwl ctB; // function maps CT to b (= B/G) Pwl ctB; /* function maps CT to b (= B/G) */
// table of illuminant priors at different lux levels /* table of illuminant priors at different lux levels */
std::vector<AwbPrior> priors; std::vector<AwbPrior> priors;
// AWB "modes" (determines the search range) /* AWB "modes" (determines the search range) */
std::map<std::string, AwbMode> modes; std::map<std::string, AwbMode> modes;
AwbMode *defaultMode; // mode used if no mode selected AwbMode *defaultMode; /* mode used if no mode selected */
// minimum proportion of pixels counted within AWB region for it to be /*
// "useful" * minimum proportion of pixels counted within AWB region for it to be
* "useful"
*/
double minPixels; double minPixels;
// minimum G value of those pixels, to be regarded a "useful" /* minimum G value of those pixels, to be regarded a "useful" */
uint16_t minG; uint16_t minG;
// number of AWB regions that must be "useful" in order to do the AWB /*
// calculation * number of AWB regions that must be "useful" in order to do the AWB
* calculation
*/
uint32_t minRegions; uint32_t minRegions;
// clamp on colour error term (so as not to penalise non-grey excessively) /* clamp on colour error term (so as not to penalise non-grey excessively) */
double deltaLimit; double deltaLimit;
// step size control in coarse search /* step size control in coarse search */
double coarseStep; double coarseStep;
// how far to wander off CT curve towards "more purple" /* how far to wander off CT curve towards "more purple" */
double transversePos; double transversePos;
// how far to wander off CT curve towards "more green" /* how far to wander off CT curve towards "more green" */
double transverseNeg; double transverseNeg;
// red sensitivity ratio (set to canonical sensor's R/G divided by this /*
// sensor's R/G) * red sensitivity ratio (set to canonical sensor's R/G divided by this
* sensor's R/G)
*/
double sensitivityR; double sensitivityR;
// blue sensitivity ratio (set to canonical sensor's B/G divided by this /*
// sensor's B/G) * blue sensitivity ratio (set to canonical sensor's B/G divided by this
* sensor's B/G)
*/
double sensitivityB; double sensitivityB;
// The whitepoint (which we normally "aim" for) can be moved. /* The whitepoint (which we normally "aim" for) can be moved. */
double whitepointR; double whitepointR;
double whitepointB; double whitepointB;
bool bayes; // use Bayesian algorithm bool bayes; /* use Bayesian algorithm */
}; };
class Awb : public AwbAlgorithm class Awb : public AwbAlgorithm
@ -83,7 +91,7 @@ public:
char const *name() const override; char const *name() const override;
void initialise() override; void initialise() override;
void read(boost::property_tree::ptree const &params) override; void read(boost::property_tree::ptree const &params) override;
// AWB handles "pausing" for itself. /* AWB handles "pausing" for itself. */
bool isPaused() const override; bool isPaused() const override;
void pause() override; void pause() override;
void resume() override; void resume() override;
@ -108,35 +116,39 @@ public:
private: private:
bool isAutoEnabled() const; bool isAutoEnabled() const;
// configuration is read-only, and available to both threads /* configuration is read-only, and available to both threads */
AwbConfig config_; AwbConfig config_;
std::thread asyncThread_; std::thread asyncThread_;
void asyncFunc(); // asynchronous thread function void asyncFunc(); /* asynchronous thread function */
std::mutex mutex_; std::mutex mutex_;
// condvar for async thread to wait on /* condvar for async thread to wait on */
std::condition_variable asyncSignal_; std::condition_variable asyncSignal_;
// condvar for synchronous thread to wait on /* condvar for synchronous thread to wait on */
std::condition_variable syncSignal_; std::condition_variable syncSignal_;
// for sync thread to check if async thread finished (requires mutex) /* for sync thread to check if async thread finished (requires mutex) */
bool asyncFinished_; bool asyncFinished_;
// for async thread to check if it's been told to run (requires mutex) /* for async thread to check if it's been told to run (requires mutex) */
bool asyncStart_; bool asyncStart_;
// for async thread to check if it's been told to quit (requires mutex) /* for async thread to check if it's been told to quit (requires mutex) */
bool asyncAbort_; bool asyncAbort_;
// The following are only for the synchronous thread to use: /*
// for sync thread to note it has asked async thread to run * The following are only for the synchronous thread to use:
* for sync thread to note it has asked async thread to run
*/
bool asyncStarted_; bool asyncStarted_;
// counts up to framePeriod before restarting the async thread /* counts up to framePeriod before restarting the async thread */
int framePhase_; int framePhase_;
int frameCount_; // counts up to startup_frames int frameCount_; /* counts up to startup_frames */
AwbStatus syncResults_; AwbStatus syncResults_;
AwbStatus prevSyncResults_; AwbStatus prevSyncResults_;
std::string modeName_; std::string modeName_;
// The following are for the asynchronous thread to use, though the main /*
// thread can set/reset them if the async thread is known to be idle: * The following are for the asynchronous thread to use, though the main
* thread can set/reset them if the async thread is known to be idle:
*/
void restartAsync(StatisticsPtr &stats, double lux); void restartAsync(StatisticsPtr &stats, double lux);
// copy out the results from the async thread so that it can be restarted /* copy out the results from the async thread so that it can be restarted */
void fetchAsyncResults(); void fetchAsyncResults();
StatisticsPtr statistics_; StatisticsPtr statistics_;
AwbMode *mode_; AwbMode *mode_;
@ -152,11 +164,11 @@ private:
void fineSearch(double &t, double &r, double &b, Pwl const &prior); void fineSearch(double &t, double &r, double &b, Pwl const &prior);
std::vector<RGB> zones_; std::vector<RGB> zones_;
std::vector<Pwl::Point> points_; std::vector<Pwl::Point> points_;
// manual r setting /* manual r setting */
double manualR_; double manualR_;
// manual b setting /* manual b setting */
double manualB_; double manualB_;
bool firstSwitchMode_; // is this the first call to SwitchMode? bool firstSwitchMode_; /* is this the first call to SwitchMode? */
}; };
static inline Awb::RGB operator+(Awb::RGB const &a, Awb::RGB const &b) static inline Awb::RGB operator+(Awb::RGB const &a, Awb::RGB const &b)
@ -176,4 +188,4 @@ static inline Awb::RGB operator*(Awb::RGB const &rgb, double d)
return d * rgb; return d * rgb;
} }
} // namespace RPiController } /* namespace RPiController */
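The flags and condition variables above coordinate a synchronous caller with a background worker thread. A minimal standalone sketch of that handshake, with illustrative names (this is not the actual Awb class), might look like:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

/* Illustrative sketch of the sync/async handshake the fields support. */
struct Handshake {
	std::mutex mutex;
	std::condition_variable asyncSignal, syncSignal;
	bool asyncStart = false, asyncFinished = false, asyncAbort = false;

	/* Called from the synchronous thread to kick the worker. */
	void runAsync()
	{
		std::lock_guard<std::mutex> lock(mutex);
		asyncStart = true;
		asyncSignal.notify_one();
	}

	/* Called from the synchronous thread to wait for completion. */
	void waitFinished()
	{
		std::unique_lock<std::mutex> lock(mutex);
		syncSignal.wait(lock, [this] { return asyncFinished; });
		asyncFinished = false;
	}

	/* Body of the asynchronous worker thread. */
	void workerLoop()
	{
		while (true) {
			{
				std::unique_lock<std::mutex> lock(mutex);
				asyncSignal.wait(lock, [this] { return asyncStart || asyncAbort; });
				if (asyncAbort)
					return;
				asyncStart = false;
			}
			/* ... the expensive calculation would run here, unlocked ... */
			std::lock_guard<std::mutex> lock(mutex);
			asyncFinished = true;
			syncSignal.notify_one();
		}
	}
};
```

All flags are read and written only under the mutex, which is why the comments above repeatedly note "(requires mutex)".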
@@ -34,7 +34,7 @@ char const *BlackLevel::name() const
void BlackLevel::read(boost::property_tree::ptree const &params)
{
uint16_t blackLevel = params.get<uint16_t>(
"black_level", 4096); /* 64 in 10 bits scaled to 16 bits */
blackLevelR_ = params.get<uint16_t>("black_level_r", blackLevel);
blackLevelG_ = params.get<uint16_t>("black_level_g", blackLevel);
blackLevelB_ = params.get<uint16_t>("black_level_b", blackLevel);

@@ -46,8 +46,10 @@ void BlackLevel::read(boost::property_tree::ptree const &params)
void BlackLevel::prepare(Metadata *imageMetadata)
{
/*
 * Possibly we should think about doing this in a switchMode or
 * something?
 */
struct BlackLevelStatus status;
status.blackLevelR = blackLevelR_;
status.blackLevelG = blackLevelG_;

@@ -55,7 +57,7 @@ void BlackLevel::prepare(Metadata *imageMetadata)
imageMetadata->set("black_level.status", status);
}

/* Register algorithm with the system. */
static Algorithm *create(Controller *controller)
{
return new BlackLevel(controller);
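The default of 4096 noted in the comment is the 10-bit black level 64 expressed on a 16-bit scale: shifting up by the 6 extra bits multiplies by 64. A small illustrative helper (not part of the patch) makes the relationship explicit:

```cpp
#include <cstdint>

/* Scale a black level expressed at bitDepth bits up to a 16-bit range. */
uint16_t scaleBlackLevelTo16Bits(uint16_t level, unsigned bitDepth)
{
	return level << (16 - bitDepth); /* e.g. 64 << 6 == 4096 */
}
```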
@@ -9,7 +9,7 @@
#include "../algorithm.hpp"
#include "../black_level_status.h"

/* This is our implementation of the "black level algorithm". */

namespace RPiController {

@@ -27,4 +27,4 @@ private:
double blackLevelB_;
};

} /* namespace RPiController */
@@ -19,11 +19,13 @@ using namespace libcamera;
LOG_DEFINE_CATEGORY(RPiCcm)

/*
 * This algorithm selects a CCM (Colour Correction Matrix) according to the
 * colour temperature estimated by AWB (interpolating between known matrices as
 * necessary). Additionally the amount of colour saturation can be controlled
 * both according to the current estimated lux level and according to a
 * saturation setting that is exposed to applications.
 */

#define NAME "rpi.ccm"

@@ -125,11 +127,11 @@ void Ccm::prepare(Metadata *imageMetadata)
{
bool awbOk = false, luxOk = false;
struct AwbStatus awb = {};
awb.temperatureK = 4000; /* in case no metadata */
struct LuxStatus lux = {};
lux.lux = 400; /* in case no metadata */
{
/* grab mutex just once to get everything */
std::lock_guard<Metadata> lock(*imageMetadata);
awbOk = getLocked(imageMetadata, "awb.status", awb);
luxOk = getLocked(imageMetadata, "lux.status", lux);

@@ -162,7 +164,7 @@ void Ccm::prepare(Metadata *imageMetadata)
imageMetadata->set("ccm.status", ccmStatus);
}

/* Register algorithm with the system. */
static Algorithm *create(Controller *controller)
{
return (Algorithm *)new Ccm(controller);
@@ -13,7 +13,7 @@
namespace RPiController {

/* Algorithm to calculate colour matrix. Should be placed after AWB. */

struct Matrix {
Matrix(double m0, double m1, double m2, double m3, double m4, double m5,

@@ -72,4 +72,4 @@ private:
double saturation_;
};

} /* namespace RPiController */
@@ -18,11 +18,13 @@ using namespace libcamera;
LOG_DEFINE_CATEGORY(RPiContrast)

/*
 * This is a very simple control algorithm which simply retrieves the results of
 * AGC and AWB via their "status" metadata, and applies digital gain to the
 * colour channels in accordance with those instructions. We take care never to
 * apply less than unity gains, as that would cause fully saturated pixels to go
 * off-white.
 */

#define NAME "rpi.contrast"

@@ -38,15 +40,15 @@ char const *Contrast::name() const
void Contrast::read(boost::property_tree::ptree const &params)
{
/* enable adaptive enhancement by default */
config_.ceEnable = params.get<int>("ce_enable", 1);
/* the point near the bottom of the histogram to move */
config_.loHistogram = params.get<double>("lo_histogram", 0.01);
/* where in the range to try and move it to */
config_.loLevel = params.get<double>("lo_level", 0.015);
/* but don't move by more than this */
config_.loMax = params.get<double>("lo_max", 500);
/* equivalent values for the top of the histogram... */
config_.hiHistogram = params.get<double>("hi_histogram", 0.95);
config_.hiLevel = params.get<double>("hi_level", 0.95);
config_.hiMax = params.get<double>("hi_max", 2000);

@@ -81,8 +83,10 @@ static void fillInStatus(ContrastStatus &status, double brightness,
void Contrast::initialise()
{
/*
 * Fill in some default values as Prepare will run before Process gets
 * called.
 */
fillInStatus(status_, brightness_, contrast_, config_.gammaCurve);
}

@@ -97,8 +101,10 @@ Pwl computeStretchCurve(Histogram const &histogram,
{
Pwl enhance;
enhance.append(0, 0);
/*
 * If the start of the histogram is rather empty, try to pull it down a
 * bit.
 */
double histLo = histogram.quantile(config.loHistogram) *
(65536 / NUM_HISTOGRAM_BINS);
double levelLo = config.loLevel * 65536;

@@ -109,13 +115,17 @@ Pwl computeStretchCurve(Histogram const &histogram,
LOG(RPiContrast, Debug)
<< "Final values " << histLo << " -> " << levelLo;
enhance.append(histLo, levelLo);
/*
 * Keep the mid-point (median) in the same place, though, to limit the
 * apparent amount of global brightness shift.
 */
double mid = histogram.quantile(0.5) * (65536 / NUM_HISTOGRAM_BINS);
enhance.append(mid, mid);
/*
 * If the top of the histogram is empty, try to pull the pixel values
 * there up.
 */
double histHi = histogram.quantile(config.hiHistogram) *
(65536 / NUM_HISTOGRAM_BINS);
double levelHi = config.hiLevel * 65536;

@@ -149,22 +159,30 @@ void Contrast::process(StatisticsPtr &stats,
[[maybe_unused]] Metadata *imageMetadata)
{
Histogram histogram(stats->hist[0].g_hist, NUM_HISTOGRAM_BINS);
/*
 * We look at the histogram and adjust the gamma curve in the following
 * ways: 1. Adjust the gamma curve so as to pull the start of the
 * histogram down, and possibly push the end up.
 */
Pwl gammaCurve = config_.gammaCurve;
if (config_.ceEnable) {
if (config_.loMax != 0 || config_.hiMax != 0)
gammaCurve = computeStretchCurve(histogram, config_).compose(gammaCurve);
/*
 * We could apply other adjustments (e.g. partial equalisation)
 * based on the histogram...?
 */
}
/*
 * 2. Finally apply any manually selected brightness/contrast
 * adjustment.
 */
if (brightness_ != 0 || contrast_ != 1.0)
gammaCurve = applyManualContrast(gammaCurve, brightness_, contrast_);
/*
 * And fill in the status for output. Use more points towards the bottom
 * of the curve.
 */
ContrastStatus status;
fillInStatus(status, brightness_, contrast_, gammaCurve);
{

@@ -173,7 +191,7 @@ void Contrast::process(StatisticsPtr &stats,
}
}

/* Register algorithm with the system. */
static Algorithm *create(Controller *controller)
{
return (Algorithm *)new Contrast(controller);
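The "move a histogram point, but not by more than lo_max/hi_max" step described in the comments can be sketched as a clamped pull. This is a hypothetical standalone helper to illustrate the idea, not the libcamera `Pwl` API:

```cpp
#include <algorithm>

/*
 * Move a histogram level toward a target level, clamping the size of the
 * move to maxMove in either direction (illustrative, names assumed).
 */
double pullTowards(double point, double target, double maxMove)
{
	return std::clamp(target, point - maxMove, point + maxMove);
}
```

In `computeStretchCurve()` the resulting (input, output) pairs are appended to a piecewise-linear curve, which is then composed with the configured gamma curve.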
@@ -13,8 +13,10 @@
namespace RPiController {

/*
 * Back End algorithm to apply correct digital gain. Should be placed after
 * Back End AWB.
 */

struct ContrastConfig {
bool ceEnable;

@@ -47,4 +49,4 @@ private:
std::mutex mutex_;
};

} /* namespace RPiController */
@@ -14,8 +14,10 @@ using namespace libcamera;
LOG_DEFINE_CATEGORY(RPiDpc)

/*
 * We use the lux status so that we can apply stronger settings in darkness (if
 * necessary).
 */

#define NAME "rpi.dpc"

@@ -39,13 +41,13 @@ void Dpc::read(boost::property_tree::ptree const &params)
void Dpc::prepare(Metadata *imageMetadata)
{
DpcStatus dpcStatus = {};
/* Should we vary this with lux level or analogue gain? TBD. */
dpcStatus.strength = config_.strength;
LOG(RPiDpc, Debug) << "strength " << dpcStatus.strength;
imageMetadata->set("dpc.status", dpcStatus);
}

/* Register algorithm with the system. */
static Algorithm *create(Controller *controller)
{
return (Algorithm *)new Dpc(controller);
@@ -11,7 +11,7 @@
namespace RPiController {

/* Back End algorithm to apply appropriate GEQ settings. */

struct DpcConfig {
int strength;

@@ -29,4 +29,4 @@ private:
DpcConfig config_;
};

} /* namespace RPiController */
@ -18,8 +18,10 @@ using namespace libcamera;
LOG_DEFINE_CATEGORY(RPiGeq) LOG_DEFINE_CATEGORY(RPiGeq)
// We use the lux status so that we can apply stronger settings in darkness (if /*
// necessary). * We use the lux status so that we can apply stronger settings in darkness (if
* necessary).
*/
#define NAME "rpi.geq" #define NAME "rpi.geq"
@ -50,7 +52,7 @@ void Geq::prepare(Metadata *imageMetadata)
if (imageMetadata->get("lux.status", luxStatus)) if (imageMetadata->get("lux.status", luxStatus))
LOG(RPiGeq, Warning) << "no lux data found"; LOG(RPiGeq, Warning) << "no lux data found";
DeviceStatus deviceStatus; DeviceStatus deviceStatus;
deviceStatus.analogueGain = 1.0; // in case not found deviceStatus.analogueGain = 1.0; /* in case not found */
if (imageMetadata->get("device.status", deviceStatus)) if (imageMetadata->get("device.status", deviceStatus))
LOG(RPiGeq, Warning) LOG(RPiGeq, Warning)
<< "no device metadata - use analogue gain of 1x"; << "no device metadata - use analogue gain of 1x";
@ -71,7 +73,7 @@ void Geq::prepare(Metadata *imageMetadata)
imageMetadata->set("geq.status", geqStatus); imageMetadata->set("geq.status", geqStatus);
} }
// Register algorithm with the system. /* Register algorithm with the system. */
static Algorithm *create(Controller *controller) static Algorithm *create(Controller *controller)
{ {
return (Algorithm *)new Geq(controller); return (Algorithm *)new Geq(controller);
@@ -11,12 +11,12 @@
namespace RPiController {

/* Back End algorithm to apply appropriate GEQ settings. */

struct GeqConfig {
uint16_t offset;
double slope;
Pwl strength; /* lux to strength factor */
};

class Geq : public Algorithm

@@ -31,4 +31,4 @@ private:
GeqConfig config_;
};

} /* namespace RPiController */
@@ -25,8 +25,10 @@ LOG_DEFINE_CATEGORY(RPiLux)
Lux::Lux(Controller *controller)
: Algorithm(controller)
{
/*
 * Put in some defaults as there will be no meaningful values until
 * Process has run.
 */
status_.aperture = 1.0;
status_.lux = 400;
}

@@ -71,7 +73,7 @@ void Lux::process(StatisticsPtr &stats, Metadata *imageMetadata)
sizeof(stats->hist[0].g_hist[0]);
for (int i = 0; i < numBins; i++)
sum += bin[i] * (uint64_t)i, num += bin[i];
/* add .5 to reflect the mid-points of bins */
double currentY = sum / (double)num + .5;
double gainRatio = referenceGain_ / currentGain;
double shutterSpeedRatio =

@@ -89,14 +91,16 @@ void Lux::process(StatisticsPtr &stats, Metadata *imageMetadata)
std::unique_lock<std::mutex> lock(mutex_);
status_ = status;
}
/*
 * Overwrite the metadata here as well, so that downstream
 * algorithms get the latest value.
 */
imageMetadata->set("lux.status", status);
} else
LOG(RPiLux, Warning) << ": no device metadata";
}

/* Register algorithm with the system. */
static Algorithm *create(Controller *controller)
{
return (Algorithm *)new Lux(controller);
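The hunks above compute a mean histogram level and compare it against a reference image captured at known shutter speed and gain. A simplified standalone sketch of that estimate (function name and parameter list are assumptions; the real code also accounts for aperture):

```cpp
#include <cstddef>
#include <cstdint>

/*
 * Estimate the scene lux by scaling the reference lux by the exposure
 * ratios and the ratio of mean histogram levels (illustrative sketch).
 */
double estimateLux(const uint32_t *bins, size_t numBins,
		   double currentShutter, double currentGain,
		   double refShutter, double refGain,
		   double refY, double refLux)
{
	uint64_t sum = 0, num = 0;
	for (size_t i = 0; i < numBins; i++) {
		sum += static_cast<uint64_t>(bins[i]) * i;
		num += bins[i];
	}
	/* add .5 to reflect the mid-points of bins */
	double currentY = static_cast<double>(sum) / num + 0.5;
	double gainRatio = refGain / currentGain;
	double shutterRatio = refShutter / currentShutter;
	return refLux * gainRatio * shutterRatio * currentY / refY;
}
```

A brighter histogram, a shorter shutter, or a lower gain than the reference all push the estimate up, matching the ratios in the diff.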
@@ -13,7 +13,7 @@
#include "../lux_status.h"
#include "../algorithm.hpp"

/* This is our implementation of the "lux control algorithm". */

namespace RPiController {

@@ -28,16 +28,18 @@ public:
void setCurrentAperture(double aperture);

private:
/*
 * These values define the conditions of the reference image, against
 * which we compare the new image.
 */
libcamera::utils::Duration referenceShutterSpeed_;
double referenceGain_;
double referenceAperture_; /* units of 1/f */
double referenceY_; /* out of 65536 */
double referenceLux_;
double currentAperture_;
LuxStatus status_;
std::mutex mutex_;
};

} /* namespace RPiController */
@@ -34,8 +34,10 @@ char const *Noise::name() const
void Noise::switchMode(CameraMode const &cameraMode,
[[maybe_unused]] Metadata *metadata)
{
/*
 * For example, we would expect a 2x2 binned mode to have a "noise
 * factor" of sqrt(2x2) = 2. (can't be less than one, right?)
 */
modeFactor_ = std::max(1.0, cameraMode.noiseFactor);
}

@@ -48,14 +50,16 @@ void Noise::read(boost::property_tree::ptree const &params)
void Noise::prepare(Metadata *imageMetadata)
{
struct DeviceStatus deviceStatus;
deviceStatus.analogueGain = 1.0; /* keep compiler calm */
if (imageMetadata->get("device.status", deviceStatus) == 0) {
/*
 * There is a slight question as to exactly how the noise
 * profile, specifically the constant part of it, scales. For
 * now we assume it all scales the same, and we'll revisit this
 * if it proves substantially wrong. NOTE: we may also want to
 * make some adjustments based on the camera mode (such as
 * binning), if we knew how to discover it...
 */
double factor = sqrt(deviceStatus.analogueGain) / modeFactor_;
struct NoiseStatus status;
status.noiseConstant = referenceConstant_ * factor;

@@ -68,7 +72,7 @@ void Noise::prepare(Metadata *imageMetadata)
LOG(RPiNoise, Warning) << " no metadata";
}

/* Register algorithm with the system. */
static Algorithm *create(Controller *controller)
{
return new Noise(controller);
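The scaling the comments describe, where both parts of the noise profile scale with the square root of the analogue gain and are divided by the mode's noise factor, can be sketched as a small standalone function (names are illustrative, not the libcamera API):

```cpp
#include <algorithm>
#include <cmath>

struct NoiseProfile {
	double constant;
	double slope;
};

/*
 * Scale a reference noise profile (measured at analogue gain 1.0) to the
 * current gain and camera mode; e.g. a 2x2 binned mode has factor 2.
 */
NoiseProfile scaleNoise(double refConstant, double refSlope,
			double analogueGain, double modeNoiseFactor)
{
	double modeFactor = std::max(1.0, modeNoiseFactor);
	double factor = std::sqrt(analogueGain) / modeFactor;
	return { refConstant * factor, refSlope * factor };
}
```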
@@ -9,7 +9,7 @@
#include "../algorithm.hpp"
#include "../noise_status.h"

/* This is our implementation of the "noise algorithm". */

namespace RPiController {

@@ -23,10 +23,10 @@ public:
void prepare(Metadata *imageMetadata) override;

private:
/* the noise profile for analogue gain of 1.0 */
double referenceConstant_;
double referenceSlope_;
double modeFactor_;
};

} /* namespace RPiController */
@@ -17,8 +17,10 @@ using namespace libcamera;
LOG_DEFINE_CATEGORY(RPiSdn)

/*
 * Calculate settings for the spatial denoise block using the noise profile in
 * the image metadata.
 */

#define NAME "rpi.sdn"

@@ -45,7 +47,7 @@ void Sdn::initialise()
void Sdn::prepare(Metadata *imageMetadata)
{
struct NoiseStatus noiseStatus = {};
noiseStatus.noiseSlope = 3.0; /* in case no metadata */
if (imageMetadata->get("noise.status", noiseStatus) != 0)
LOG(RPiSdn, Warning) << "no noise profile found";
LOG(RPiSdn, Debug)

@@ -65,11 +67,11 @@ void Sdn::prepare(Metadata *imageMetadata)
void Sdn::setMode(DenoiseMode mode)
{
/* We only distinguish between off and all other modes. */
mode_ = mode;
}

/* Register algorithm with the system. */
static Algorithm *create(Controller *controller)
{
return (Algorithm *)new Sdn(controller);
@@ -11,7 +11,7 @@
namespace RPiController {

/* Algorithm to calculate correct spatial denoise (SDN) settings. */

class Sdn : public DenoiseAlgorithm
{

@@ -29,4 +29,4 @@ private:
DenoiseMode mode_;
};

} /* namespace RPiController */
@@ -33,7 +33,7 @@ char const *Sharpen::name() const
void Sharpen::switchMode(CameraMode const &cameraMode,
[[maybe_unused]] Metadata *metadata)
{
/* can't be less than one, right? */
modeFactor_ = std::max(1.0, cameraMode.noiseFactor);
}

@@ -50,24 +50,30 @@ void Sharpen::read(boost::property_tree::ptree const &params)
void Sharpen::setStrength(double strength)
{
/*
 * Note that this function is how an application sets the overall
 * sharpening "strength". We call this the "user strength" field
 * as there already is a strength_ field - being an internal gain
 * parameter that gets passed to the ISP control code. Negative
 * values are not allowed - coerce them to zero (no sharpening).
 */
userStrength_ = std::max(0.0, strength);
}

void Sharpen::prepare(Metadata *imageMetadata)
{
/*
 * The userStrength_ affects the algorithm's internal gain directly, but
 * we adjust the limit and threshold less aggressively. Using a sqrt
 * function is an arbitrary but gentle way of accomplishing this.
 */
double userStrengthSqrt = sqrt(userStrength_);
struct SharpenStatus status;
/*
 * Binned modes seem to need the sharpening toned down with this
 * pipeline, thus we use the modeFactor_ here. Also avoid
 * divide-by-zero with the userStrengthSqrt.
 */
status.threshold = threshold_ * modeFactor_ /
std::max(0.01, userStrengthSqrt);
status.strength = strength_ / modeFactor_ * userStrength_;

@@ -77,7 +83,7 @@ void Sharpen::prepare(Metadata *imageMetadata)
imageMetadata->set("sharpen.status", status);
}

/* Register algorithm with the system. */
static Algorithm *create(Controller *controller)
{
return new Sharpen(controller);
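The strength/threshold scaling shown in `Sharpen::prepare()` can be isolated as a small pure function for clarity. This is a sketch using the same arithmetic as the hunk above, with assumed standalone names:

```cpp
#include <algorithm>
#include <cmath>

struct SharpenParams {
	double threshold;
	double strength;
};

/*
 * Apply the user strength: the gain scales directly with it, while the
 * threshold is relaxed only by its square root, floored at 0.01 to avoid
 * divide-by-zero. Binned modes (modeFactor > 1) tone the effect down.
 */
SharpenParams applyUserStrength(double threshold, double strength,
				double modeFactor, double userStrength)
{
	userStrength = std::max(0.0, userStrength); /* negative not allowed */
	double userStrengthSqrt = std::sqrt(userStrength);
	SharpenParams out;
	out.threshold = threshold * modeFactor /
			std::max(0.01, userStrengthSqrt);
	out.strength = strength / modeFactor * userStrength;
	return out;
}
```

A user strength of 1.0 in a full-resolution mode leaves both parameters unchanged, while 0.0 disables sharpening by zeroing the gain.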
@@ -9,7 +9,7 @@
#include "../sharpen_algorithm.hpp"
#include "../sharpen_status.h"

/* This is our implementation of the "sharpen algorithm". */

namespace RPiController {

@@ -31,4 +31,4 @@ private:
double userStrength_;
};

} /* namespace RPiController */
@@ -14,8 +14,8 @@ class SharpenAlgorithm : public Algorithm
{
public:
SharpenAlgorithm(Controller *controller) : Algorithm(controller) {}
/* A sharpness control algorithm must provide the following: */
virtual void setStrength(double strength) = 0;
};

} /* namespace RPiController */
@@ -6,20 +6,20 @@
 */
#pragma once

/* The "sharpen" algorithm stores the strength to use. */

#ifdef __cplusplus
extern "C" {
#endif

struct SharpenStatus {
/* controls the smallest level of detail (or noise!) that sharpening will pick up */
double threshold;
/* the rate at which the sharpening response ramps once above the threshold */
double strength;
/* upper limit of the allowed sharpening response */
double limit;
/* The sharpening strength requested by the user or application. */
double userStrength;
};
@@ -152,4 +152,4 @@ private:
OffsetMap offsets_;
};

} /* namespace RPi */