Solving Nonlinear Least Squares¶
Introduction¶
Effective use of Ceres Solver requires some familiarity with the basic components of a nonlinear least squares solver, so before we describe how to configure and use the solver, we will take a brief look at how some of the core optimization algorithms in Ceres Solver work.
Let \(x \in \mathbb{R}^n\) be an \(n\)-dimensional vector of variables, and \(F(x) = \left[f_1(x), ... , f_{m}(x) \right]^{\top}\) be an \(m\)-dimensional function of \(x\). We are interested in solving the optimization problem
\[\begin{split}\arg \min_{x} &\quad \frac{1}{2}\|F(x)\|^2\\ \text{such that} &\quad L \le x \le U\end{split}\]
Where, \(L\) and \(U\) are vector lower and upper bounds on the parameter vector \(x\). The inequality holds componentwise.
Since the efficient global minimization of (1) for general \(F(x)\) is an intractable problem, we will have to settle for finding a local minimum.
In the following, the Jacobian \(J(x)\) of \(F(x)\) is an \(m\times n\) matrix, where \(J_{ij}(x) = D_j f_i(x)\) and the gradient vector is \(g(x) = \nabla \frac{1}{2}\|F(x)\|^2 = J(x)^\top F(x)\).
The general strategy when solving nonlinear optimization problems is to solve a sequence of approximations to the original problem [NocedalWright]. At each iteration, the approximation is solved to determine a correction \(\Delta x\) to the vector \(x\). For nonlinear least squares, an approximation can be constructed by using the linearization \(F(x+\Delta x) \approx F(x) + J(x)\Delta x\), which leads to the following linear least squares problem:
\[\min_{\Delta x} \frac{1}{2}\|J(x)\Delta x + F(x)\|^2\]
Unfortunately, naively solving a sequence of these problems and updating \(x \leftarrow x+ \Delta x\) leads to an algorithm that may not converge. To get a convergent algorithm, we need to control the size of the step \(\Delta x\). Depending on how the size of the step \(\Delta x\) is controlled, nonlinear optimization algorithms can be divided into two major categories [NocedalWright].
Trust Region The trust region approach approximates the objective function using a model function (often a quadratic) over a subset of the search space known as the trust region. If the model function succeeds in minimizing the true objective function, the trust region is expanded; otherwise, it is contracted and the model optimization problem is solved again.
Line Search The line search approach first finds a descent direction along which the objective function will be reduced and then computes a step size that decides how far to move along that direction. The descent direction can be computed by various methods, such as gradient descent, Newton's method and the Quasi-Newton method. The step size can be determined either exactly or inexactly.
Trust region methods are in some sense dual to line search methods: trust region methods first choose a step size (the size of the trust region) and then a step direction while line search methods first choose a step direction and then a step size. Ceres Solver implements multiple algorithms in both categories.
Trust Region Methods¶
The basic trust region algorithm looks something like this.
Given an initial point \(x\) and a trust region radius \(\mu\).
Solve
\[\begin{split}\arg \min_{\Delta x}& \frac{1}{2}\|J(x)\Delta x + F(x)\|^2 \\ \text{such that} &\|D(x)\Delta x\|^2 \le \mu\\ &L \le x + \Delta x \le U.\end{split}\]
\(\rho = \frac{\displaystyle \|F(x + \Delta x)\|^2 - \|F(x)\|^2}{\displaystyle \|J(x)\Delta x + F(x)\|^2 - \|F(x)\|^2}\)
if \(\rho > \epsilon\) then \(x = x + \Delta x\).
if \(\rho > \eta_1\) then \(\mu = 2 \mu\)
else if \(\rho < \eta_2\) then \(\mu = 0.5 * \mu\)
Go to 2.
Here, \(\mu\) is the trust region radius, \(D(x)\) is some matrix used to define a metric on the domain of \(F(x)\) and \(\rho\) measures the quality of the step \(\Delta x\), i.e., how well did the linear model predict the decrease in the value of the nonlinear objective. The idea is to increase or decrease the radius of the trust region depending on how well the linearization predicts the behavior of the nonlinear objective, which in turn is reflected in the value of \(\rho\).
The key computational step in a trust-region algorithm is the solution of the constrained optimization problem
There are a number of different ways of solving this problem, each giving rise to a different concrete trust-region algorithm. Currently, Ceres implements two trust-region algorithms - Levenberg-Marquardt and Dogleg, each of which is augmented with a line search if bounds constraints are present [Kanzow]. The user can choose between them by setting Solver::Options::trust_region_strategy_type.
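For example, a minimal sketch of selecting the trust region strategy (assuming a ceres::Problem named problem has already been set up) might look like this:
ceres::Solver::Options options;
options.trust_region_strategy_type = ceres::DOGLEG;  // or ceres::LEVENBERG_MARQUARDT (the default)
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);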
Levenberg-Marquardt¶
The Levenberg-Marquardt algorithm [Levenberg] [Marquardt] is the most popular algorithm for solving nonlinear least squares problems. It was also the first trust region algorithm to be developed [Levenberg] [Marquardt]. Ceres implements an exact step [Madsen] and an inexact step variant of the Levenberg-Marquardt algorithm [WrightHolt] [NashSofer].
It can be shown that the solution to (3) can be obtained by solving an unconstrained optimization problem of the form
\[\arg\min_{\Delta x} \frac{1}{2}\|J(x)\Delta x + F(x)\|^2 + \lambda \|D(x)\Delta x\|^2\]
where \(\lambda\) is a Lagrange multiplier that is inversely related to \(\mu\). In Ceres, we solve for
\[\arg\min_{\Delta x} \frac{1}{2}\|J(x)\Delta x + F(x)\|^2 + \frac{1}{\mu} \|D(x)\Delta x\|^2\]
The matrix \(D(x)\) is a nonnegative diagonal matrix, typically the square root of the diagonal of the matrix \(J(x)^\top J(x)\).
Before going further, let us make some notational simplifications.
We will assume that the matrix \(\frac{1}{\sqrt{\mu}} D\) has been concatenated at the bottom of the matrix \(J(x)\) and a corresponding vector of zeroes has been added to the bottom of \(F(x)\), i.e.:
\[\begin{split}J(x) = \begin{bmatrix} J(x) \\ \frac{1}{\sqrt{\mu}} D \end{bmatrix},\quad F(x) = \begin{bmatrix} F(x) \\ 0 \end{bmatrix}.\end{split}\]
This allows us to rewrite (5) as
\[\arg\min_{\Delta x} \frac{1}{2} \|J(x)\Delta x + F(x)\|^2\]
and only talk about \(J(x)\) and \(F(x)\) going forward.
For all but the smallest problems the solution of (6) in each iteration of the Levenberg-Marquardt algorithm is the dominant computational cost. Ceres provides a number of different options for solving (6). There are two major classes of methods - factorization and iterative methods.
The factorization methods are based on computing an exact solution of (5) using a Cholesky or a QR factorization and lead to the so-called exact step Levenberg-Marquardt algorithm. But it is not clear if an exact solution of (5) is necessary at each step of the Levenberg-Marquardt algorithm. We have already seen evidence that this may not be the case, as (5) is itself a regularized version of (2). Indeed, it is possible to construct nonlinear optimization algorithms in which the linearized problem is solved approximately. These algorithms are known as inexact Newton or truncated Newton methods [NocedalWright].
An inexact Newton method requires two ingredients. First, a cheap method for approximately solving systems of linear equations. Typically an iterative linear solver like the Conjugate Gradients method is used for this purpose [NocedalWright]. Second, a termination rule for the iterative solver. A typical termination rule is of the form
\[\|H(x) \Delta x + g(x)\| \leq \eta_k \|g(x)\|.\]
Here, \(k\) indicates the Levenberg-Marquardt iteration number and \(0 < \eta_k <1\) is known as the forcing sequence. [WrightHolt] prove that a truncated Levenberg-Marquardt algorithm that uses an inexact Newton step based on (7) converges for any sequence \(\eta_k \leq \eta_0 < 1\) and the rate of convergence depends on the choice of the forcing sequence \(\eta_k\).
Ceres supports both exact and inexact step solution strategies. When the user chooses a factorization based linear solver, the exact step Levenberg-Marquardt algorithm is used. When the user chooses an iterative linear solver, the inexact step Levenberg-Marquardt algorithm is used.
We will talk more about the various linear solvers that you can use in Linear Solvers.
Dogleg¶
Another strategy for solving the trust region problem (3) was introduced by M. J. D. Powell. The key idea there is to compute two vectors
\[\begin{split}\Delta x^{\text{Gauss-Newton}} &= \arg \min_{\Delta x}\frac{1}{2} \|J(x)\Delta x + F(x)\|^2,\\ \Delta x^{\text{Cauchy}} &= -\frac{\|g(x)\|^2}{\|J(x)g(x)\|^2}g(x).\end{split}\]
Note that the vector \(\Delta x^{\text{Gauss-Newton}}\) is the solution to (2) and \(\Delta x^{\text{Cauchy}}\) is the vector that minimizes the linear approximation if we restrict ourselves to moving along the direction of the gradient. Dogleg methods find a vector \(\Delta x\) defined by \(\Delta x^{\text{Gauss-Newton}}\) and \(\Delta x^{\text{Cauchy}}\) that solves the trust region problem. Ceres supports two variants that can be chosen by setting Solver::Options::dogleg_type.
TRADITIONAL_DOGLEG as described by Powell, constructs two line segments using the Gauss-Newton and Cauchy vectors and finds the point farthest along this line shaped like a dogleg (hence the name) that is contained in the trust region. For more details on the exact reasoning and computations, please see Madsen et al [Madsen].
SUBSPACE_DOGLEG is a more sophisticated method that considers the entire two dimensional subspace spanned by these two vectors and finds the point that minimizes the trust region problem in this subspace [ByrdSchnabel].
The key advantage of the Dogleg over Levenberg-Marquardt is that if the step computation for a particular choice of \(\mu\) does not result in sufficient decrease in the value of the objective function, Levenberg-Marquardt solves the linear approximation from scratch with a smaller value of \(\mu\). Dogleg, on the other hand, only needs to compute the interpolation between the Gauss-Newton and the Cauchy vectors, as neither of them depend on the value of \(\mu\). As a result the Dogleg method only solves one linear system per successful step, while Levenberg-Marquardt may need to solve an arbitrary number of linear systems before it can make progress [LourakisArgyros].
A disadvantage of the Dogleg implementation in Ceres Solver is that it can only be used with exact factorization based linear solvers.
Inner Iterations¶
Some nonlinear least squares problems have additional structure in the way the parameter blocks interact, such that it is beneficial to modify the way the trust region step is computed. For example, consider the following regression problem
Given a set of pairs \(\{(x_i, y_i)\}\), the user wishes to estimate \(a_1, a_2, b_1, b_2\), and \(c_1\).
Notice that the expression on the left is linear in \(a_1\) and \(a_2\), and given any value for \(b_1, b_2\) and \(c_1\), it is possible to use linear regression to estimate the optimal values of \(a_1\) and \(a_2\). Indeed, it is possible to analytically eliminate the variables \(a_1\) and \(a_2\) from the problem entirely. Problems like these are known as separable least squares problems and the most famous algorithm for solving them is the Variable Projection algorithm invented by Golub & Pereyra [GolubPereyra].
Similar structure can be found in the matrix factorization with missing data problem. There the corresponding algorithm is known as Wiberg’s algorithm [Wiberg].
Ruhe & Wedin present an analysis of various algorithms for solving separable nonlinear least squares problems and refer to Variable Projection as Algorithm I in their paper [RuheWedin].
Implementing Variable Projection is tedious and expensive. Ruhe & Wedin present a simpler algorithm with comparable convergence properties, which they call Algorithm II. Algorithm II performs an additional optimization step to estimate \(a_1\) and \(a_2\) exactly after computing a successful Newton step.
This idea can be generalized to cases where the residual is not linear in \(a_1\) and \(a_2\), i.e.,
In this case, we solve for the trust region step for the full problem, and then use it as the starting point to further optimize just \(a_1\) and \(a_2\). For the linear case, this amounts to doing a single linear least squares solve. For nonlinear problems, any method for solving the \(a_1\) and \(a_2\) optimization problems will do. The only constraint on \(a_1\) and \(a_2\) (if they are two different parameter blocks) is that they do not co-occur in a residual block.
This idea can be further generalized, by not just optimizing \((a_1, a_2)\), but decomposing the graph corresponding to the Hessian matrix's sparsity structure into a collection of non-overlapping independent sets and optimizing each of them.
Setting Solver::Options::use_inner_iterations to true enables the use of this nonlinear generalization of Ruhe & Wedin's Algorithm II. This version of Ceres has a higher iteration complexity, but also displays better convergence behavior per iteration.
Setting Solver::Options::num_threads to the maximum number possible is highly recommended.
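A minimal sketch of enabling inner iterations (the thread count shown is an illustrative assumption; pick one appropriate for your machine):
ceres::Solver::Options options;
options.use_inner_iterations = true;  // nonlinear generalization of Ruhe & Wedin's Algorithm II
options.num_threads = 8;              // hypothetical value; use as many threads as are available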
Nonmonotonic Steps¶
Note that the basic trust-region algorithm described in Trust Region Methods is a descent algorithm in that it only accepts a point if it strictly reduces the value of the objective function.
Relaxing this requirement allows the algorithm to be more efficient in the long term at the cost of some local increase in the value of the objective function.
This is because allowing for non-decreasing objective function values in a principled manner allows the algorithm to jump over boulders, as the method is not restricted to move into narrow valleys, while preserving its convergence properties.
Setting Solver::Options::use_nonmonotonic_steps to true enables the non-monotonic trust region algorithm as described by Conn, Gould & Toint in [Conn].
Even though the value of the objective function may be larger than the minimum value encountered over the course of the optimization, the final parameters returned to the user are the ones corresponding to the minimum cost over all iterations.
The option to take nonmonotonic steps is available for all trust region strategies.
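A minimal sketch (the window size shown is just the default, not a recommendation):
ceres::Solver::Options options;
options.use_nonmonotonic_steps = true;
options.max_consecutive_nonmonotonic_steps = 5;  // window used by the step selection algorithm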
Line Search Methods¶
Note
The line search method in Ceres Solver cannot handle bounds constraints right now, so it can only be used for solving unconstrained problems.
The basic line search algorithm looks something like this:
Given an initial point \(x\)
\(\Delta x = -H^{-1}(x) g(x)\)
\(\arg \min_\mu \frac{1}{2} \| F(x + \mu \Delta x) \|^2\)
\(x = x + \mu \Delta x\)
Goto 2.
Here \(H(x)\) is some approximation to the Hessian of the objective function, and \(g(x)\) is the gradient at \(x\). Depending on the choice of \(H(x)\) we get a variety of different search directions \(\Delta x\).
The one dimensional optimization or line search along \(\Delta x\) in the third step is what gives this class of methods its name.
Different line search algorithms differ in their choice of the search direction \(\Delta x\) and the method used for one dimensional optimization along \(\Delta x\). The choice of \(H(x)\) is the primary source of computational complexity in these methods. Currently, Ceres Solver supports four choices of search directions, all aimed at large scale problems.
STEEPEST_DESCENT
This corresponds to choosing \(H(x)\) to be the identity matrix. This is not a good search direction for anything but the simplest of the problems. It is only included here for completeness.
NONLINEAR_CONJUGATE_GRADIENT
A generalization of the Conjugate Gradient method to nonlinear functions. The generalization can be performed in a number of different ways, resulting in a variety of search directions. Ceres Solver currently supports FLETCHER_REEVES, POLAK_RIBIERE and HESTENES_STIEFEL directions.
BFGS
A generalization of the Secant method to multiple dimensions in which a full, dense approximation to the inverse Hessian is maintained and used to compute a quasi-Newton step [NocedalWright]. BFGS and its limited memory variant LBFGS are currently the best known general quasi-Newton algorithms.
LBFGS
A limited memory approximation to the full BFGS method in which the last M iterations are used to approximate the inverse Hessian used to compute a quasi-Newton step [Nocedal], [ByrdNocedal].
Currently Ceres Solver supports both a backtracking and interpolation based Armijo line search algorithm (ARMIJO), and a sectioning / zoom interpolation (strong) Wolfe condition line search algorithm (WOLFE).
Note
In order for the assumptions underlying the BFGS and LBFGS methods to be satisfied the WOLFE algorithm must be used.
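A minimal sketch of configuring the line search minimizer (these particular choices are illustrative, not a recommendation):
ceres::Solver::Options options;
options.minimizer_type = ceres::LINE_SEARCH;
options.line_search_direction_type = ceres::LBFGS;
options.line_search_type = ceres::WOLFE;  // required for the assumptions underlying (L)BFGS to hold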
Linear Solvers¶
Observe that for both of the trust-region methods described above, the key computational cost is the solution of a linear least squares problem of the form
Let \(H(x)= J(x)^\top J(x)\) and \(g(x) = -J(x)^\top F(x)\). For notational convenience let us also drop the dependence on \(x\). Then it is easy to see that solving (8) is equivalent to solving the normal equations
\[H \Delta x = g.\]
Ceres provides a number of different options for solving (9).
DENSE_QR¶
For small problems (a couple of hundred parameters and a few thousand residuals) with relatively dense Jacobians, QR-decomposition is the method of choice [Bjorck]. Let \(J = QR\) be the QR-decomposition of \(J\), where \(Q\) is an orthonormal matrix and \(R\) is an upper triangular matrix [TrefethenBau]. Then it can be shown that the solution to (9) is given by
\[\Delta x^* = -R^{-1}Q^\top F.\]
You can use QR-decomposition by setting Solver::Options::linear_solver_type to DENSE_QR.
By default (Solver::Options::dense_linear_algebra_library_type = EIGEN) Ceres Solver will use Eigen's Householder QR factorization.
If Ceres Solver has been built with an optimized LAPACK implementation, then the user can also choose to use LAPACK's DGEQRF routine by setting Solver::Options::dense_linear_algebra_library_type to LAPACK. Depending on the LAPACK and the underlying BLAS implementation this may perform better than using Eigen's Householder QR factorization.
If an NVIDIA GPU is available and Ceres Solver has been built with CUDA support enabled, then the user can also choose to perform the QR-decomposition on the GPU by setting Solver::Options::dense_linear_algebra_library_type to CUDA. Depending on the GPU this can lead to a substantial speedup. Using CUDA only makes sense for moderate to large sized problems. This is because to perform the decomposition on the GPU the matrix \(J\) needs to be transferred from the CPU to the GPU and this incurs a cost. So unless the speedup from doing the decomposition on the GPU is large enough to also account for the time taken to transfer the Jacobian to the GPU, using CUDA will not be better than just doing the decomposition on the CPU.
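A minimal sketch of selecting dense QR and the dense linear algebra backend (the choice of CUDA here is illustrative and assumes Ceres was built with CUDA support):
ceres::Solver::Options options;
options.linear_solver_type = ceres::DENSE_QR;
options.dense_linear_algebra_library_type = ceres::CUDA;  // or ceres::EIGEN (default) / ceres::LAPACK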
DENSE_NORMAL_CHOLESKY¶
It is often the case that the number of rows in the Jacobian \(J\) is much larger than the number of columns. The complexity of QR factorization scales linearly with the number of rows, so beyond a certain size it is more efficient to solve (9) using a dense Cholesky factorization.
Let \(H = R^\top R\) be the Cholesky factorization of the normal equations, where \(R\) is an upper triangular matrix. Then the solution to (9) is given by
\[\Delta x^* = R^{-1} R^{-\top} g.\]
The observant reader will note that the \(R\) in the Cholesky factorization of \(H\) is the same upper triangular matrix \(R\) in the QR factorization of \(J\). Since \(Q\) is an orthonormal matrix, \(J=QR\) implies that \(J^\top J = R^\top Q^\top Q R = R^\top R\).
Unfortunately, forming the matrix \(H = J'J\) squares the condition number. As a result while the cost of forming \(H\) and computing its Cholesky factorization is lower than computing the QRfactorization of \(J\), we pay the price in terms of increased numerical instability and potential failure of the Cholesky factorization for illconditioned Jacobians.
You can use dense Cholesky factorization by setting Solver::Options::linear_solver_type to DENSE_NORMAL_CHOLESKY.
By default (Solver::Options::dense_linear_algebra_library_type = EIGEN) Ceres Solver will use Eigen's LLT factorization routine.
If Ceres Solver has been built with an optimized LAPACK implementation, then the user can also choose to use LAPACK's DPOTRF routine by setting Solver::Options::dense_linear_algebra_library_type to LAPACK. Depending on the LAPACK and the underlying BLAS implementation this may perform better than using Eigen's Cholesky factorization.
If an NVIDIA GPU is available and Ceres Solver has been built with CUDA support enabled, then the user can also choose to perform the Cholesky factorization on the GPU by setting Solver::Options::dense_linear_algebra_library_type to CUDA. Depending on the GPU this can lead to a substantial speedup. Using CUDA only makes sense for moderate to large sized problems. This is because to perform the decomposition on the GPU the matrix \(H\) needs to be transferred from the CPU to the GPU and this incurs a cost. So unless the speedup from doing the decomposition on the GPU is large enough to also account for the time taken to transfer the Jacobian to the GPU, using CUDA will not be better than just doing the decomposition on the CPU.
SPARSE_NORMAL_CHOLESKY¶
Large nonlinear least squares problems are usually sparse. In such cases, using a dense QR or Cholesky factorization is inefficient. For such problems, Cholesky factorization routines which treat \(H\) as a sparse matrix and compute a sparse factor \(R\) are better suited [Davis]. This can lead to substantial savings in memory and CPU time for large sparse problems.
You can use sparse Cholesky factorization by setting Solver::Options::linear_solver_type to SPARSE_NORMAL_CHOLESKY.
The use of this linear solver requires that Ceres is compiled with support for at least one of:
SuiteSparse (SUITE_SPARSE).
Apple's Accelerate framework (ACCELERATE_SPARSE).
Eigen's sparse linear solvers (EIGEN_SPARSE).
SuiteSparse and Accelerate offer high performance sparse Cholesky factorization routines as they use level 3 BLAS routines internally. Eigen's sparse Cholesky routines are simplicial and do not use dense linear algebra routines and as a result cannot compete with SuiteSparse and Accelerate, especially on large problems. As a result, to get the best performance out of SuiteSparse it should be linked to high quality BLAS and LAPACK implementations e.g. ATLAS, OpenBLAS or Intel MKL.
A critical part of a sparse Cholesky factorization routine is the use of a fill-reducing ordering. By default Ceres Solver uses the Approximate Minimum Degree (AMD) ordering, which usually performs well, but there are other options that may perform better depending on the actual sparsity structure of the Jacobian. See Ordering for more details.
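A minimal sketch of selecting this solver (the choice of SuiteSparse as the backend is illustrative and assumes Ceres was built with it):
ceres::Solver::Options options;
options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
options.sparse_linear_algebra_library_type = ceres::SUITE_SPARSE;  // or ACCELERATE_SPARSE / EIGEN_SPARSE
options.linear_solver_ordering_type = ceres::AMD;                  // default fill-reducing ordering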
CGNR¶
For general sparse problems, if the problem is too large for sparse Cholesky factorization or a sparse linear algebra library is not linked into Ceres, another option is the CGNR solver. This solver uses the Conjugate Gradients method (https://en.wikipedia.org/wiki/Conjugate_gradient_method) on the normal equations, but without forming the normal equations explicitly. It exploits the relation
\[H x = J^\top(J x).\]
Because CGNR never solves the linear system exactly, when the user chooses CGNR as the linear solver, Ceres automatically switches from the exact step algorithm to an inexact step algorithm. This also means that CGNR can only be used with the LEVENBERG_MARQUARDT and not with the DOGLEG trust region strategy.
CGNR by default runs on the CPU. However, if an NVIDIA GPU is available and Ceres Solver has been built with CUDA support enabled, then the user can also choose to run CGNR on the GPU by setting Solver::Options::sparse_linear_algebra_library_type to CUDA_SPARSE. The key complexity of CGNR comes from evaluating the two sparse matrix-vector products (SpMV) \(Jx\) and \(J'y\). GPUs are particularly well suited for doing sparse matrix-vector products. As a result, for large problems using a GPU can lead to a substantial speedup.
The convergence of Conjugate Gradients depends on the condition number \(\kappa(H)\). Usually \(H\) is quite poorly conditioned and a Preconditioner must be used to get reasonable performance. See the section on Preconditioners for more details.
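A minimal sketch of a CGNR configuration (the JACOBI preconditioner and CUDA_SPARSE backend are illustrative choices; the latter assumes a CUDA-enabled build):
ceres::Solver::Options options;
options.linear_solver_type = ceres::CGNR;
options.preconditioner_type = ceres::JACOBI;
options.sparse_linear_algebra_library_type = ceres::CUDA_SPARSE;  // run the SpMV products on the GPU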
DENSE_SCHUR & SPARSE_SCHUR¶
While it is possible to use SPARSE_NORMAL_CHOLESKY to solve bundle adjustment problems, they have a special sparsity structure that can be exploited to solve the normal equations more efficiently.
Suppose that the bundle adjustment problem consists of \(p\) cameras and \(q\) points and the variable vector \(x\) has the block structure \(x = [y_{1}, ... ,y_{p},z_{1}, ... ,z_{q}]\). Where, \(y\) and \(z\) correspond to camera and point parameters respectively. Further, let the camera blocks be of size \(c\) and the point blocks be of size \(s\) (for most problems \(c\) = \(6\)–9 and \(s = 3\)). Ceres does not impose any constancy requirement on these block sizes, but choosing them to be constant simplifies the exposition.
The key property of bundle adjustment problems which we will exploit is the fact that no term \(f_{i}\) in (1) includes two or more point blocks at the same time. This in turn implies that the matrix \(H\) is of the form
\[\begin{split}H = \left[ \begin{matrix} B & E\\ E^\top & C \end{matrix} \right],\end{split}\]
where \(B \in \mathbb{R}^{pc\times pc}\) is a block sparse matrix with \(p\) blocks of size \(c\times c\) and \(C \in \mathbb{R}^{qs\times qs}\) is a block diagonal matrix with \(q\) blocks of size \(s\times s\). \(E \in \mathbb{R}^{pc\times qs}\) is a general block sparse matrix, with a block of size \(c\times s\) for each observation. Let us now block partition \(\Delta x = [\Delta y,\Delta z]\) and \(g=[v,w]\) to restate (9) as the block structured linear system
\[\begin{split}\left[ \begin{matrix} B & E\\ E^\top & C \end{matrix} \right]\left[ \begin{matrix} \Delta y \\ \Delta z \end{matrix} \right] = \left[ \begin{matrix} v\\ w \end{matrix} \right],\end{split}\]
and apply Gaussian elimination to it. As we noted above, \(C\) is a block diagonal matrix, with small diagonal blocks of size \(s\times s\). Thus, calculating the inverse of \(C\) by inverting each of these blocks is cheap. This allows us to eliminate \(\Delta z\) by observing that \(\Delta z = C^{-1}(w - E^\top \Delta y)\), giving us
\[\left[B - EC^{-1}E^\top\right] \Delta y = v - EC^{-1}w.\]
The matrix
\[S = B - EC^{-1}E^\top\]
is the Schur complement of \(C\) in \(H\). It is also known as the reduced camera matrix, because the only variables participating in (12) are the ones corresponding to the cameras. \(S \in \mathbb{R}^{pc\times pc}\) is a block structured symmetric positive definite matrix, with blocks of size \(c\times c\). The block \(S_{ij}\) corresponding to the pair of images \(i\) and \(j\) is nonzero if and only if the two images observe at least one common point.
Now (11) can be solved by first forming \(S\), solving for \(\Delta y\), and then back-substituting \(\Delta y\) to obtain the value of \(\Delta z\). Thus, the solution of what was an \(n\times n\), \(n=pc+qs\) linear system is reduced to the inversion of the block diagonal matrix \(C\), a few matrix-matrix and matrix-vector multiplies, and the solution of block sparse \(pc\times pc\) linear system (12). For almost all problems, the number of cameras is much smaller than the number of points, \(p \ll q\), thus solving (12) is significantly cheaper than solving (11). This is the Schur complement trick [Brown].
This still leaves open the question of solving (12). As we discussed when considering the exact solution of the normal equations using Cholesky factorization, we have two options.
1. DENSE_SCHUR - The first is dense Cholesky factorization, where we store and factor \(S\) as a dense matrix. This method has \(O(p^2)\) space complexity and \(O(p^3)\) time complexity and is only practical for problems with up to a few hundred cameras.
2. SPARSE_SCHUR - For large bundle adjustment problems \(S\) is typically a fairly sparse matrix, as most images only see a small fraction of the scene. This leads us to the second option: sparse Cholesky factorization [Davis]. Here we store \(S\) as a sparse matrix, use row and column reordering algorithms to maximize the sparsity of the Cholesky decomposition, and focus their compute effort on the non-zero part of the factorization [Davis] [Chen]. Sparse direct methods, depending on the exact sparsity structure of the Schur complement, allow bundle adjustment algorithms to scale to scenes with thousands of cameras.
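For a bundle adjustment problem a minimal sketch might look like this (whether DENSE_SCHUR or SPARSE_SCHUR is the better choice depends on the number of cameras):
ceres::Solver::Options options;
options.linear_solver_type = ceres::SPARSE_SCHUR;  // or ceres::DENSE_SCHUR for a few hundred cameras
ceres::Solver::Summary summary;
ceres::Solve(options, &problem, &summary);  // assumes a ceres::Problem named problem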
ITERATIVE_SCHUR¶
Another option for bundle adjustment problems is to apply Conjugate Gradients to the reduced camera matrix \(S\) instead of \(H\). One reason to do this is that \(S\) is a much smaller matrix than \(H\), but more importantly, it can be shown that \(\kappa(S)\leq \kappa(H)\) [Agarwal].
Ceres implements Conjugate Gradients on \(S\) as the ITERATIVE_SCHUR solver. When the user chooses ITERATIVE_SCHUR as the linear solver, Ceres automatically switches from the exact step algorithm to an inexact step algorithm.
The key computational operation when using Conjugate Gradients is the evaluation of the matrix vector product \(Sx\) for an arbitrary vector \(x\). Because PCG only needs access to \(S\) via its product with a vector, one way to evaluate \(Sx\) is to observe that
Thus, we can run Conjugate Gradients on \(S\) with the same computational effort per iteration as Conjugate Gradients on \(H\), while reaping the benefits of a more powerful preconditioner. In fact, we do not even need to compute \(H\), (13) can be implemented using just the columns of \(J\).
Equation (13) is closely related to Domain Decomposition methods for solving large linear systems that arise in structural engineering and partial differential equations. In the language of Domain Decomposition, each point in a bundle adjustment problem is a domain, and the cameras form the interface between these domains. The iterative solution of the Schur complement then falls within the subcategory of techniques known as Iterative Substructuring [Saad] [Mathew].
While in most cases the above method for evaluating \(Sx\) is the way to go, for some problems it is better to compute the Schur complement \(S\) explicitly and then run Conjugate Gradients on it. This can be done by setting Solver::Options::use_explicit_schur_complement to true. This option can only be used with the SCHUR_JACOBI preconditioner.
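A minimal sketch of an ITERATIVE_SCHUR configuration (the explicit Schur complement option shown is only worthwhile for small to medium sized problems):
ceres::Solver::Options options;
options.linear_solver_type = ceres::ITERATIVE_SCHUR;
options.preconditioner_type = ceres::SCHUR_JACOBI;
options.use_explicit_schur_complement = true;  // only valid with SCHUR_JACOBI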
SCHUR_POWER_SERIES_EXPANSION¶
It can be shown that the inverse of the Schur complement can be written as an infinite power series [Weber] [Zheng]:
As a result a truncated version of this power series expansion can be used to approximate the inverse and therefore the solution to (12). Ceres allows the user to use Schur power series expansion in three ways.
As a linear solver. This is what [Weber] calls Power Bundle Adjustment and corresponds to using the truncated power series to approximate the inverse of the Schur complement. This is done by setting the following options.
Solver::Options::linear_solver_type = ITERATIVE_SCHUR
Solver::Options::preconditioner_type = IDENTITY
Solver::Options::use_spse_initialization = true
Solver::Options::max_linear_solver_iterations = 0;
// The following two settings are worth tuning for your application.
Solver::Options::max_num_spse_iterations = 5;
Solver::Options::spse_tolerance = 0.1;
As a preconditioner for ITERATIVE_SCHUR. Any method for approximating the inverse of a matrix can also be used as a preconditioner. This is enabled by setting the following options.
Solver::Options::linear_solver_type = ITERATIVE_SCHUR
Solver::Options::preconditioner_type = SCHUR_POWER_SERIES_EXPANSION;
Solver::Options::use_spse_initialization = false;
// This is worth tuning for your application.
Solver::Options::max_num_spse_iterations = 5;
As initialization for ITERATIVE_SCHUR with any preconditioner. This is a combination of the above two, where the Schur power series expansion is used to compute an initial estimate of the solution, which is then refined by ITERATIVE_SCHUR with the preconditioner of your choice.
Solver::Options::linear_solver_type = ITERATIVE_SCHUR
Solver::Options::preconditioner_type = ... // Preconditioner of your choice.
Solver::Options::use_spse_initialization = true
Solver::Options::max_linear_solver_iterations = 0;
// The following two settings are worth tuning for your application.
Solver::Options::max_num_spse_iterations = 5;
// This only affects the initialization but not the preconditioner.
Solver::Options::spse_tolerance = 0.1;
Mixed Precision Solves¶
Generally speaking Ceres Solver does all its arithmetic in double precision. Sometimes though, one can use single precision arithmetic to get substantial speedups. Currently, for linear solvers that perform Cholesky factorization (sparse or dense) the user has the option to cast the linear system to single precision and then use single precision Cholesky factorization routines to solve the resulting linear system. This can be enabled by setting Solver::Options::use_mixed_precision_solves to true.
Depending on the conditioning of the problem, the use of single precision factorization may lead to some loss of accuracy. Some of this accuracy can be recovered by performing Iterative Refinement. The number of iterations of iterative refinement is controlled by Solver::Options::max_num_refinement_iterations. The default value of this parameter is zero, which means if Solver::Options::use_mixed_precision_solves is true, then no iterative refinement is performed. Usually 2-3 refinement iterations are enough.
Mixed precision solves are available in the following linear solver configurations:
DENSE_NORMAL_CHOLESKY + EIGEN / LAPACK / CUDA.
DENSE_SCHUR + EIGEN / LAPACK / CUDA.
SPARSE_NORMAL_CHOLESKY + EIGEN_SPARSE / ACCELERATE_SPARSE.
SPARSE_SCHUR + EIGEN_SPARSE / ACCELERATE_SPARSE.
Mixed precision solves are not available when using SUITE_SPARSE
as the sparse linear algebra backend because SuiteSparse/CHOLMOD does
not support single precision solves.
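A minimal sketch of enabling mixed precision with iterative refinement (the refinement iteration count shown is just an example):
ceres::Solver::Options options;
options.linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
options.sparse_linear_algebra_library_type = ceres::EIGEN_SPARSE;  // SUITE_SPARSE is not supported here
options.use_mixed_precision_solves = true;
options.max_num_refinement_iterations = 3;  // 2-3 iterations are usually enough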
Preconditioners¶
The convergence rate of Conjugate Gradients for solving (9) depends on the distribution of eigenvalues of \(H\) [Saad]. A useful upper bound is \(\sqrt{\kappa(H)}\), where, \(\kappa(H)\) is the condition number of the matrix \(H\). For most nonlinear least squares problems, \(\kappa(H)\) is high and a direct application of Conjugate Gradients to (9) results in extremely poor performance.
The solution to this problem is to replace (9) with a preconditioned system. Given a linear system, \(Ax = b\) and a preconditioner \(M\) the preconditioned system is given by \(M^{-1}Ax = M^{-1}b\). The resulting algorithm is known as the Preconditioned Conjugate Gradients algorithm (PCG) and its worst case complexity now depends on the condition number of the preconditioned matrix \(\kappa(M^{-1}A)\).
The computational cost of using a preconditioner \(M\) is the cost of computing \(M\) and evaluating the product \(M^{-1}y\) for arbitrary vectors \(y\). Thus, there are two competing factors to consider: How much of \(H\)'s structure is captured by \(M\) so that the condition number \(\kappa(HM^{-1})\) is low, and the computational cost of constructing and using \(M\). The ideal preconditioner would be one for which \(\kappa(M^{-1}A) = 1\). \(M=A\) achieves this, but it is not a practical choice, as applying this preconditioner would require solving a linear system equivalent to the unpreconditioned problem. It is usually the case that the more information \(M\) has about \(H\), the more expensive it is to use. For example, Incomplete Cholesky factorization based preconditioners have much better convergence behavior than the Jacobi preconditioner, but are also much more expensive.
For a survey of the state of the art in preconditioning linear least squares problems with general sparsity structure see [GouldScott].
Ceres Solver comes with a number of preconditioners suited for problems with general sparsity as well as the special sparsity structure encountered in bundle adjustment problems.
IDENTITY¶
This is equivalent to using an identity matrix as a preconditioner, i.e. no preconditioner at all.
JACOBI¶
The simplest of all preconditioners is the diagonal or Jacobi
preconditioner, i.e., \(M=\operatorname{diag}(A)\), which for
block structured matrices like \(H\) can be generalized to the
block Jacobi preconditioner. The JACOBI
preconditioner in Ceres
when used with CGNR refers to the block diagonal of
\(H\) and when used with ITERATIVE_SCHUR refers to
the block diagonal of \(B\) [Mandel].
For detailed performance data about the performance of JACOBI
on
bundle adjustment problems see [Agarwal].
SCHUR_JACOBI¶
Another obvious choice for ITERATIVE_SCHUR is the block
diagonal of the Schur complement matrix \(S\), i.e, the block
Jacobi preconditioner for \(S\). In Ceres we refer to it as the
SCHUR_JACOBI
preconditioner.
For detailed performance data about the performance of
SCHUR_JACOBI
on bundle adjustment problems see [Agarwal].
CLUSTER_JACOBI and CLUSTER_TRIDIAGONAL¶
For bundle adjustment problems arising in reconstruction from community photo collections, more effective preconditioners can be constructed by analyzing and exploiting the camerapoint visibility structure of the scene.
The key idea is to cluster the cameras based on the visibility structure of the scene. The similarity between a pair of cameras \(i\) and \(j\) is given by:
\[S_{ij} = \frac{|V_i \cap V_j|}{|V_i \cup V_j|}\]
Here \(V_i\) is the set of scene points visible in camera
\(i\). This idea was first exploited by [KushalAgarwal] to create
the CLUSTER_JACOBI
and the CLUSTER_TRIDIAGONAL
preconditioners
which Ceres implements.
The performance of these two preconditioners depends on the speed and clustering quality of the clustering algorithm used when building the preconditioner. In the original paper, [KushalAgarwal] used the Canonical Views algorithm [Simon], which while producing high quality clusterings can be quite expensive for large graphs. So, Ceres supports two visibility clustering algorithms - CANONICAL_VIEWS and SINGLE_LINKAGE. The former is, as the name implies, the Canonical Views algorithm of [Simon]. The latter is the classic Single Linkage Clustering algorithm. The choice of clustering algorithm is controlled by Solver::Options::visibility_clustering_type.
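A minimal sketch of using a visibility based preconditioner (the specific preconditioner and clustering algorithm shown are illustrative choices):
ceres::Solver::Options options;
options.linear_solver_type = ceres::ITERATIVE_SCHUR;
options.preconditioner_type = ceres::CLUSTER_TRIDIAGONAL;    // or ceres::CLUSTER_JACOBI
options.visibility_clustering_type = ceres::SINGLE_LINKAGE;  // or ceres::CANONICAL_VIEWS (default)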
SCHUR_POWER_SERIES_EXPANSION¶
As explained in SCHUR_POWER_SERIES_EXPANSION, the Schur
complement matrix admits a power series expansion and a truncated
version of this power series can be used as a preconditioner for
ITERATIVE_SCHUR
. When used as a preconditioner
Solver::Options::max_num_spse_iterations
controls the number
of terms in the power series that are used.
SUBSET¶
This is a preconditioner for problems with general sparsity. Given a subset of residual blocks of a problem, it uses the corresponding subset of the rows of the Jacobian to construct a preconditioner [Dellaert].
Suppose the Jacobian \(J\) has been horizontally partitioned as
\[\begin{split}J = \begin{bmatrix} P \\ Q \end{bmatrix}\end{split}\]
where \(Q\) is the set of rows corresponding to the residual blocks in Solver::Options::residual_blocks_for_subset_preconditioner. The preconditioner is the matrix \((Q^\top Q)^{-1}\).
The efficacy of the preconditioner depends on how well the matrix \(Q\) approximates \(J^\top J\), or how well the chosen residual blocks approximate the full problem.
This preconditioner is NOT available when running CGNR using CUDA.
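A minimal sketch of setting up the SUBSET preconditioner, assuming a hypothetical cost function MyCostFunction and parameter block x; which residual blocks to put in the subset is entirely application specific:
ceres::Problem problem;
ceres::ResidualBlockId id = problem.AddResidualBlock(new MyCostFunction, nullptr, x);
ceres::Solver::Options options;
options.linear_solver_type = ceres::CGNR;
options.preconditioner_type = ceres::SUBSET;
options.residual_blocks_for_subset_preconditioner.insert(id);  // rows of J used to build (Q^T Q)^-1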
Ordering¶
The order in which variables are eliminated in a linear solver can have a significant impact on the efficiency and accuracy of the method. For example when doing sparse Cholesky factorization, there are matrices for which a good ordering will give a Cholesky factor with \(O(n)\) storage, whereas a bad ordering will result in a completely dense factor.
Ceres allows the user to provide varying amounts of hints to the solver about the variable elimination ordering to use. This can range from no hints, where the solver is free to decide the best possible ordering based on the user’s choices like the linear solver being used, to an exact order in which the variables should be eliminated, and a variety of possibilities in between.
The simplest thing to do is to just set Solver::Options::linear_solver_ordering_type to AMD (default) or NESDIS based on your understanding of the problem or empirical testing.
More information can be communicated by using an instance of the ParameterBlockOrdering class.
Formally an ordering is an ordered partitioning of the parameter blocks, i.e., each parameter block belongs to exactly one group, and each group has a unique non-negative integer associated with it, that determines its order in the set of groups.
e.g. Consider the linear system
There are two ways in which it can be solved. First eliminating \(x\) from the two equations, solving for \(y\) and then back substituting for \(x\), or first eliminating \(y\), solving for \(x\) and back substituting for \(y\). The user can construct three orderings here.
\(\{0: x\}, \{1: y\}\)  eliminate \(x\) first.
\(\{0: y\}, \{1: x\}\)  eliminate \(y\) first.
\(\{0: x, y\}\)  Solver gets to decide the elimination order.
Thus, to have Ceres determine the ordering automatically, put all the variables in group 0 and to control the ordering for every variable, create groups \(0 \dots N-1\), one per variable, in the desired order.
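A minimal sketch of constructing such an ordering, assuming two parameter blocks x and y (double arrays already added to the problem), where x is eliminated first:
auto ordering = std::make_shared<ceres::ParameterBlockOrdering>();
ordering->AddElementToGroup(x, 0);  // eliminate x first
ordering->AddElementToGroup(y, 1);
ceres::Solver::Options options;
options.linear_solver_ordering = ordering;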
linear_solver_ordering == nullptr and an ordering where all the parameter blocks are in one elimination group mean the same thing - the solver is free to choose what it thinks is the best elimination ordering using the ordering algorithm (specified using Solver::Options::linear_solver_ordering_type). Therefore in the following we will only consider the case where linear_solver_ordering != nullptr.
The exact interpretation of the linear_solver_ordering depends on the values of Solver::Options::linear_solver_ordering_type, Solver::Options::linear_solver_type, Solver::Options::preconditioner_type and Solver::Options::sparse_linear_algebra_library_type as we will explain below.
Bundle Adjustment¶
If the user is using one of the Schur solvers (DENSE_SCHUR, SPARSE_SCHUR, ITERATIVE_SCHUR) and chooses to specify an ordering, it must have one important property. The lowest numbered elimination group must form an independent set in the graph corresponding to the Hessian, or in other words, no two parameter blocks in the first elimination group should co-occur in the same residual block. For the best performance, this elimination group should be as large as possible. For standard bundle adjustment problems, this corresponds to the first elimination group containing all the 3d points, and the second containing the parameter blocks for all the cameras.
If the user leaves the choice to Ceres, then the solver uses an approximate maximum independent set algorithm to identify the first elimination group [LiSaad].
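For a bundle adjustment problem, a minimal sketch of such an ordering, assuming containers of pointers to the point and camera parameter blocks (the container names are hypothetical):
auto ordering = std::make_shared<ceres::ParameterBlockOrdering>();
for (double* point : point_blocks) ordering->AddElementToGroup(point, 0);     // eliminate points first
for (double* camera : camera_blocks) ordering->AddElementToGroup(camera, 1);
ceres::Solver::Options options;
options.linear_solver_type = ceres::SPARSE_SCHUR;
options.linear_solver_ordering = ordering;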
sparse_linear_algebra_library_type = SUITE_SPARSE
¶
linear_solver_ordering_type = AMD
A constrained Approximate Minimum Degree (CAMD) ordering is used where the parameter blocks in the lowest numbered group are eliminated first, and then the parameter blocks in the next lowest numbered group and so on. Within each group, CAMD is free to order the parameter blocks as it chooses.
linear_solver_ordering_type = NESDIS
linear_solver_type = SPARSE_NORMAL_CHOLESKY or linear_solver_type = CGNR and preconditioner_type = SUBSET
The value of linear_solver_ordering is ignored and a Nested Dissection algorithm is used to compute a fill reducing ordering.
linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR
ONLY the lowest group are used to compute the Schur complement, and Nested Dissection is used to compute a fill reducing ordering for the Schur Complement (or its preconditioner).
sparse_linear_algebra_library_type = EIGEN_SPARSE/ACCELERATE_SPARSE
¶
linear_solver_type = SPARSE_NORMAL_CHOLESKY or linear_solver_type = CGNR and preconditioner_type = SUBSET
The value of linear_solver_ordering is ignored and AMD or NESDIS is used to compute a fill reducing ordering as requested by the user.
linear_solver_type = SPARSE_SCHUR/DENSE_SCHUR/ITERATIVE_SCHUR
ONLY the lowest group are used to compute the Schur complement, and AMD or NESDIS is used to compute a fill reducing ordering for the Schur Complement (or its preconditioner) as requested by the user.
Solver::Options
¶

class Solver::Options¶
Solver::Options
controls the overall behavior of the solver. We list the various settings and their default values below.

bool Solver::Options::IsValid(std::string *error) const¶
Validates the values in the options struct and returns true on success. If there is a problem, the method returns false with
error
containing a textual description of the cause.

MinimizerType Solver::Options::minimizer_type¶
Default:
TRUST_REGION
Choose between
LINE_SEARCH
andTRUST_REGION
algorithms. See Trust Region Methods and Line Search Methods for more details.

LineSearchDirectionType Solver::Options::line_search_direction_type¶
Default:
LBFGS
Choices are
STEEPEST_DESCENT
,NONLINEAR_CONJUGATE_GRADIENT
,BFGS
andLBFGS
.See Line Search Methods for more details.

LineSearchType Solver::Options::line_search_type¶
Default:
WOLFE
Choices are
ARMIJO
andWOLFE
(strong Wolfe conditions). Note that in order for the assumptions underlying theBFGS
andLBFGS
line search direction algorithms to be satisfied, theWOLFE
line search must be used.See Line Search Methods for more details.

NonlinearConjugateGradientType Solver::Options::nonlinear_conjugate_gradient_type¶
Default:
FLETCHER_REEVES
Choices are
FLETCHER_REEVES
,POLAK_RIBIERE
andHESTENES_STIEFEL
.

int Solver::Options::max_lbfgs_rank¶
Default:
20
The LBFGS hessian approximation is a low rank approximation to the inverse of the Hessian matrix. The rank of the approximation determines (linearly) the space and time complexity of using the approximation. The higher the rank, the better the quality of the approximation. The increase in quality, however, is bounded for a number of reasons.
1. The method only uses secant information and not actual derivatives.
2. The Hessian approximation is constrained to be positive definite.
So increasing this rank to a large number will cost time and space complexity without the corresponding increase in solution quality. There are no hard and fast rules for choosing the maximum rank. The best choice usually requires some problem specific experimentation.
For more theoretical and implementation details of the LBFGS method, please see [Nocedal].

bool Solver::Options::use_approximate_eigenvalue_bfgs_scaling¶
Default:
false
As part of the BFGS update step / LBFGS right-multiply step, the initial inverse Hessian approximation is taken to be the Identity. However, [Oren] showed that using instead \(I * \gamma\), where \(\gamma\) is a scalar chosen to approximate an eigenvalue of the true inverse Hessian can result in improved convergence in a wide variety of cases. Setting use_approximate_eigenvalue_bfgs_scaling to true enables this scaling in BFGS (before first iteration) and LBFGS (at each iteration). Precisely, approximate eigenvalue scaling equates to
\[\gamma = \frac{y_k' s_k}{y_k' y_k}\]With:
\[y_k = \nabla f_{k+1} - \nabla f_k\]\[s_k = x_{k+1} - x_k\]Where \(f()\) is the line search objective and \(x\) the vector of parameter values [NocedalWright].
It is important to note that approximate eigenvalue scaling does not always improve convergence, and that it can in fact significantly degrade performance for certain classes of problem, which is why it is disabled by default. In particular it can degrade performance when the sensitivity of the problem to different parameters varies significantly, as in this case a single scalar factor fails to capture this variation and detrimentally downscales parts of the Jacobian approximation which correspond to lowsensitivity parameters. It can also reduce the robustness of the solution to errors in the Jacobians.

LineSearchInterpolationType Solver::Options::line_search_interpolation_type¶
Default:
CUBIC
Degree of the polynomial used to approximate the objective function. Valid values are
BISECTION
,QUADRATIC
andCUBIC
.

double Solver::Options::min_line_search_step_size¶
Default:
1e-9
The line search terminates if:
\[\|\Delta x_k\|_\infty < \text{min_line_search_step_size}\]where \(\|\cdot\|_\infty\) refers to the max norm, and \(\Delta x_k\) is the step change in the parameter values at the \(k\)-th iteration.

double Solver::Options::line_search_sufficient_function_decrease¶
Default:
1e-4
Solving the line search problem exactly is computationally prohibitive. Fortunately, line search based optimization algorithms can still guarantee convergence if instead of an exact solution, the line search algorithm returns a solution which decreases the value of the objective function sufficiently. More precisely, we are looking for a step size s.t.
\[f(\text{step_size}) \le f(0) + \text{sufficient_decrease} * [f'(0) * \text{step_size}]\]This condition is known as the Armijo condition.

double Solver::Options::max_line_search_step_contraction¶
Default:
1e-3
In each iteration of the line search,
\[\text{new_step_size} >= \text{max_line_search_step_contraction} * \text{step_size}\]Note that by definition, for contraction:
\[0 < \text{max_step_contraction} < \text{min_step_contraction} < 1\]

double Solver::Options::min_line_search_step_contraction¶
Default:
0.6
In each iteration of the line search,
\[\text{new_step_size} <= \text{min_line_search_step_contraction} * \text{step_size}\]Note that by definition, for contraction:
\[0 < \text{max_step_contraction} < \text{min_step_contraction} < 1\]

int Solver::Options::max_num_line_search_step_size_iterations¶
Default:
20
Maximum number of trial step size iterations during each line search, if a step size satisfying the search conditions cannot be found within this number of trials, the line search will stop.
The minimum allowed value is 0 for trust region minimizer and 1 otherwise. If 0 is specified for the trust region minimizer, then line search will not be used when solving constrained optimization problems.
As this is an 'artificial' constraint (one imposed by the user, not the underlying math), if WOLFE line search is being used, and points satisfying the Armijo sufficient (function) decrease condition have been found during the current search (in \(<=\) max_num_line_search_step_size_iterations), then the step size with the lowest function value which satisfies the Armijo condition will be returned as the new valid step, even though it does not satisfy the strong Wolfe conditions. This behaviour protects against early termination of the optimizer at a suboptimal point.

int Solver::Options::max_num_line_search_direction_restarts¶
Default:
5
Maximum number of restarts of the line search direction algorithm before terminating the optimization. Restarts of the line search direction algorithm occur when the current algorithm fails to produce a new descent direction. This typically indicates a numerical failure, or a breakdown in the validity of the approximations used.

double Solver::Options::line_search_sufficient_curvature_decrease¶
Default:
0.9
The strong Wolfe conditions consist of the Armijo sufficient decrease condition, and an additional requirement that the step size be chosen s.t. the magnitude (‘strong’ Wolfe conditions) of the gradient along the search direction decreases sufficiently. Precisely, this second condition is that we seek a step size s.t.
\[\|f'(\text{step_size})\| <= \text{sufficient_curvature_decrease} * \|f'(0)\|\]Where \(f()\) is the line search objective and \(f'()\) is the derivative of \(f\) with respect to the step size: \(\frac{d f}{d~\text{step size}}\).

double Solver::Options::max_line_search_step_expansion¶
Default:
10.0
During the bracketing phase of a Wolfe line search, the step size is increased until either a point satisfying the Wolfe conditions is found, or an upper bound for a bracket containing a point satisfying the conditions is found. Precisely, at each iteration of the expansion:
\[\text{new_step_size} <= \text{max_step_expansion} * \text{step_size}\]By definition for expansion
\[\text{max_step_expansion} > 1.0\]

TrustRegionStrategyType Solver::Options::trust_region_strategy_type¶
Default:
LEVENBERG_MARQUARDT
The trust region step computation algorithm used by Ceres. Currently
LEVENBERG_MARQUARDT
andDOGLEG
are the two valid choices. See LevenbergMarquardt and Dogleg for more details.

DoglegType Solver::Options::dogleg_type¶
Default:
TRADITIONAL_DOGLEG
Ceres supports two different dogleg strategies.
TRADITIONAL_DOGLEG
method by Powell and theSUBSPACE_DOGLEG
method described by [ByrdSchnabel] . See Dogleg for more details.

bool Solver::Options::use_nonmonotonic_steps¶
Default:
false
Relax the requirement that the trustregion algorithm take strictly decreasing steps. See Nonmonotonic Steps for more details.

int Solver::Options::max_consecutive_nonmonotonic_steps¶
Default:
5
The window size used by the step selection algorithm to accept nonmonotonic steps.

int Solver::Options::max_num_iterations¶
Default:
50
Maximum number of iterations for which the solver should run.

double Solver::Options::max_solver_time_in_seconds¶
Default:
1e9
Maximum amount of time for which the solver should run.

int Solver::Options::num_threads¶
Default:
1
Number of threads used by Ceres to evaluate the Jacobian.

double Solver::Options::initial_trust_region_radius¶
Default:
1e4
The size of the initial trust region. When the
LEVENBERG_MARQUARDT
strategy is used, the reciprocal of this number is the initial regularization parameter.

double Solver::Options::max_trust_region_radius¶
Default:
1e16
The trust region radius is not allowed to grow beyond this value.

double Solver::Options::min_trust_region_radius¶
Default:
1e-32
The solver terminates, when the trust region becomes smaller than this value.

double Solver::Options::min_relative_decrease¶
Default:
1e-3
Lower threshold for relative decrease before a trustregion step is accepted.

double Solver::Options::min_lm_diagonal¶
Default:
1e-6
The LEVENBERG_MARQUARDT strategy uses a diagonal matrix to regularize the trust region step. This is the lower bound on the values of this diagonal matrix.

double Solver::Options::max_lm_diagonal¶
Default:
1e32
The LEVENBERG_MARQUARDT strategy uses a diagonal matrix to regularize the trust region step. This is the upper bound on the values of this diagonal matrix.

int Solver::Options::max_num_consecutive_invalid_steps¶
Default:
5
The step returned by a trust region strategy can sometimes be numerically invalid, usually because of conditioning issues. Instead of crashing or stopping the optimization, the optimizer can go ahead and try solving with a smaller trust region/better conditioned problem. This parameter sets the number of consecutive retries before the minimizer gives up.

double Solver::Options::function_tolerance¶
Default:
1e-6
Solver terminates if
\[\frac{\Delta \text{cost}}{\text{cost}} <= \text{function_tolerance}\]where, \(\Delta \text{cost}\) is the change in objective function value (up or down) in the current iteration of LevenbergMarquardt.

double Solver::Options::gradient_tolerance¶
Default:
1e-10
Solver terminates if
\[\|x - \Pi \boxplus(x, -g(x))\|_\infty <= \text{gradient_tolerance}\]where \(\|\cdot\|_\infty\) refers to the max norm, \(\Pi\) is projection onto the bounds constraints and \(\boxplus\) is Plus operation for the overall manifold associated with the parameter vector.

double Solver::Options::parameter_tolerance¶
Default:
1e-8
Solver terminates if
\[\|\Delta x\| <= (\|x\| + \text{parameter_tolerance}) * \text{parameter_tolerance}\]where \(\Delta x\) is the step computed by the linear solver in the current iteration.

LinearSolverType Solver::Options::linear_solver_type¶
Default:
SPARSE_NORMAL_CHOLESKY
/DENSE_QR
Type of linear solver used to compute the solution to the linear least squares problem in each iteration of the LevenbergMarquardt algorithm. If Ceres is built with support for
SuiteSparse
orAccelerate
orEigen
’s sparse Cholesky factorization, the default isSPARSE_NORMAL_CHOLESKY
, it isDENSE_QR
otherwise.

PreconditionerType Solver::Options::preconditioner_type¶
Default:
JACOBI
The preconditioner used by the iterative linear solver. The default is the block Jacobi preconditioner. Valid values are (in increasing order of complexity)
IDENTITY
,JACOBI
,SCHUR_JACOBI
,CLUSTER_JACOBI
,CLUSTER_TRIDIAGONAL
,SUBSET
andSCHUR_POWER_SERIES_EXPANSION
. See Preconditioners for more details.

VisibilityClusteringType Solver::Options::visibility_clustering_type¶
Default:
CANONICAL_VIEWS
Type of clustering algorithm to use when constructing a visibility based preconditioner. The original visibility based preconditioning paper and implementation only used the canonical views algorithm.
This algorithm gives high quality results but for large dense graphs can be particularly expensive, as its worst case complexity is cubic in the size of the graph.
Another option is to use
SINGLE_LINKAGE
which is a simple thresholded single linkage clustering algorithm that only pays attention to tightly coupled blocks in the Schur complement. This is a fast algorithm that works well.The optimal choice of the clustering algorithm depends on the sparsity structure of the problem, but generally speaking we recommend that you try
CANONICAL_VIEWS
first and if it is too expensive trySINGLE_LINKAGE
.

std::unordered_set<ResidualBlockId> Solver::Options::residual_blocks_for_subset_preconditioner¶
SUBSET
preconditioner is a preconditioner for problems with general sparsity. Given a subset of residual blocks of a problem, it uses the corresponding subset of the rows of the Jacobian to construct a preconditioner.Suppose the Jacobian \(J\) has been horizontally partitioned as
\[\begin{split}J = \begin{bmatrix} P \\ Q \end{bmatrix}\end{split}\]Where, \(Q\) is the set of rows corresponding to the residual blocks in
Solver::Options::residual_blocks_for_subset_preconditioner
. The preconditioner is the matrix \((Q^\top Q)^{-1}\). The efficacy of the preconditioner depends on how well the matrix \(Q\) approximates \(J^\top J\), or how well the chosen residual blocks approximate the full problem.
If
Solver::Options::preconditioner_type == SUBSET
, thenresidual_blocks_for_subset_preconditioner
must be nonempty.

DenseLinearAlgebraLibrary Solver::Options::dense_linear_algebra_library_type¶
Default: EIGEN
Ceres supports using multiple dense linear algebra libraries for dense matrix factorizations. Currently EIGEN, LAPACK and CUDA are the valid choices. EIGEN is always available, LAPACK refers to the system BLAS + LAPACK library which may or may not be available, and CUDA refers to Nvidia's GPU based dense linear algebra library which may or may not be available.
This setting affects the DENSE_QR, DENSE_NORMAL_CHOLESKY and DENSE_SCHUR solvers. For small to moderate sized problems EIGEN is a fine choice but for large problems, an optimized LAPACK + BLAS or CUDA implementation can make a substantial difference in performance.

SparseLinearAlgebraLibrary Solver::Options::sparse_linear_algebra_library_type¶
Default: The highest available according to: SUITE_SPARSE > ACCELERATE_SPARSE > EIGEN_SPARSE > NO_SPARSE
Ceres supports the use of three sparse linear algebra libraries: SuiteSparse, which is enabled by setting this parameter to SUITE_SPARSE; Accelerate, which can be selected by setting this parameter to ACCELERATE_SPARSE; and Eigen, which is enabled by setting this parameter to EIGEN_SPARSE. Lastly, NO_SPARSE means that no sparse linear solver should be used; note that this is irrespective of whether Ceres was compiled with support for one.
SuiteSparse is a sophisticated sparse linear algebra library and should be used in general. On MacOS you may want to use the Accelerate framework.
If your needs/platforms prevent you from using SuiteSparse, consider using the sparse linear algebra routines in Eigen. The sparse Cholesky algorithms currently included with Eigen are not as sophisticated as the ones in SuiteSparse and Accelerate and as a result their performance is considerably worse.
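A hedged sketch of picking a sparse backend at runtime; it assumes the availability query ceres::IsSparseLinearAlgebraLibraryTypeAvailable from ceres/types.h is present in your Ceres build, and the function name is illustrative:

#include "ceres/ceres.h"

void ChooseSparseBackend(ceres::Solver::Options* options) {
  // Prefer SuiteSparse when it was compiled in, otherwise fall back
  // to Eigen's sparse Cholesky, which is always available.
  if (ceres::IsSparseLinearAlgebraLibraryTypeAvailable(ceres::SUITE_SPARSE)) {
    options->sparse_linear_algebra_library_type = ceres::SUITE_SPARSE;
  } else {
    options->sparse_linear_algebra_library_type = ceres::EIGEN_SPARSE;
  }
  options->linear_solver_type = ceres::SPARSE_NORMAL_CHOLESKY;
}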

LinearSolverOrderingType Solver::Options::linear_solver_ordering_type¶
Default: AMD
The order in which variables are eliminated in a linear solver can have a significant impact on the efficiency and accuracy of the method. e.g., when doing sparse Cholesky factorization, there are matrices for which a good ordering will give a Cholesky factor with \(O(n)\) storage, whereas a bad ordering will result in a completely dense factor.
Sparse direct solvers like SPARSE_NORMAL_CHOLESKY and SPARSE_SCHUR use a fill reducing ordering of the columns and rows of the matrix being factorized before computing the numeric factorization.
This enum controls the type of algorithm used to compute this fill reducing ordering. There is no single algorithm that works on all matrices, so determining which algorithm works better is a matter of empirical experimentation.

std::shared_ptr<ParameterBlockOrdering> Solver::Options::linear_solver_ordering¶
Default: nullptr
An instance of the ordering object informs the solver about the desired order in which parameter blocks should be eliminated by the linear solvers.
If nullptr, the solver is free to choose an ordering that it thinks is best.
See Ordering for more details.

bool Solver::Options::use_explicit_schur_complement¶
Default: false
Use an explicitly computed Schur complement matrix with ITERATIVE_SCHUR.
By default this option is disabled and ITERATIVE_SCHUR evaluates matrix-vector products between the Schur complement and a vector implicitly by exploiting the algebraic expression for the Schur complement. The cost of this evaluation scales with the number of nonzeros in the Jacobian.
For small to medium sized problems there is a sweet spot where computing the Schur complement is cheap enough that it is much more efficient to explicitly compute it and use it for evaluating the matrix-vector products.
Note
This option can only be used with the SCHUR_JACOBI preconditioner.
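A minimal sketch (illustrative function name) of enabling this mode, respecting the constraint that it only works with ITERATIVE_SCHUR and the SCHUR_JACOBI preconditioner:

#include "ceres/ceres.h"

void UseExplicitSchurComplement(ceres::Solver::Options* options) {
  // Explicitly form the Schur complement; only valid together with
  // ITERATIVE_SCHUR and the SCHUR_JACOBI preconditioner.
  options->linear_solver_type = ceres::ITERATIVE_SCHUR;
  options->preconditioner_type = ceres::SCHUR_JACOBI;
  options->use_explicit_schur_complement = true;
}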

bool Solver::Options::dynamic_sparsity¶
Default: false
Some nonlinear least squares problems are symbolically dense but numerically sparse, i.e., at any given state only a small number of Jacobian entries are nonzero, but the position and number of nonzeros is different depending on the state. For these problems it can be useful to factorize the sparse Jacobian at each solver iteration instead of including all of the zero entries in a single general factorization.
If your problem does not have this property (or you do not know), then it is probably best to keep this false, otherwise it will likely lead to worse performance.
This setting only affects the SPARSE_NORMAL_CHOLESKY solver.

bool Solver::Options::use_mixed_precision_solves¶
Default: false
If true, the Gauss-Newton matrix is computed in double precision, but its factorization is computed in single precision. This can result in significant time and memory savings at the cost of some accuracy in the Gauss-Newton step. Iterative refinement is used to recover some of this accuracy back.
If use_mixed_precision_solves is true, we recommend setting max_num_refinement_iterations to 2-3.
See Mixed Precision Solves for more details.

int Solver::Options::max_num_refinement_iterations¶
Default: 0
Number of steps of the iterative refinement process to run when computing the Gauss-Newton step, see Solver::Options::use_mixed_precision_solves.

int Solver::Options::min_linear_solver_iterations¶
Default: 0
Minimum number of iterations used by the linear solver. This only makes sense when the linear solver is an iterative solver, e.g., ITERATIVE_SCHUR or CGNR.

int Solver::Options::max_linear_solver_iterations¶
Default: 500
Maximum number of iterations used by the linear solver. This only makes sense when the linear solver is an iterative solver, e.g., ITERATIVE_SCHUR or CGNR.

int Solver::Options::max_num_spse_iterations¶
Default: 5
Maximum number of iterations performed by SCHUR_POWER_SERIES_EXPANSION. Each iteration corresponds to one more term in the power series expansion of the inverse of the Schur complement. This value controls the maximum number of iterations whether it is used as a preconditioner or just to initialize the solution for ITERATIVE_SCHUR.

bool Solver::Options::use_spse_initialization¶
Default: false
Use Schur power series expansion to initialize the solution for ITERATIVE_SCHUR. This option can be set true regardless of what preconditioner is being used.

double Solver::Options::spse_tolerance¶
Default: 0.1
When use_spse_initialization is true, this parameter along with max_num_spse_iterations controls the number of SCHUR_POWER_SERIES_EXPANSION iterations performed for initialization. It is not used to control the preconditioner.
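An illustrative configuration combining the three SPSE-related options above (the values shown are just the documented defaults, and the function name is hypothetical):

#include "ceres/ceres.h"

void EnableSpseInitialization(ceres::Solver::Options* options) {
  options->linear_solver_type = ceres::ITERATIVE_SCHUR;
  // Initialize the ITERATIVE_SCHUR solution with a truncated power
  // series expansion of the inverse Schur complement.
  options->use_spse_initialization = true;
  options->max_num_spse_iterations = 5;
  options->spse_tolerance = 0.1;
}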

double Solver::Options::eta¶
Default: 1e-1
Forcing sequence parameter. The truncated Newton solver uses this number to control the relative accuracy with which the Newton step is computed. This constant is passed to ConjugateGradientsSolver which uses it to terminate the iterations when
\[\frac{Q_i - Q_{i-1}}{Q_i} < \frac{\eta}{i}\]

bool Solver::Options::jacobi_scaling¶
Default: true
true means that the Jacobian is scaled by the norm of its columns before being passed to the linear solver. This improves the numerical conditioning of the normal equations.

bool Solver::Options::use_inner_iterations¶
Default: false
Use a nonlinear version of a simplified variable projection algorithm. Essentially this amounts to doing a further optimization on each Newton/Trust region step using a coordinate descent algorithm. For more details, see Inner Iterations.
Note
Inner iterations cannot be used with Problem objects that have an EvaluationCallback associated with them.

std::shared_ptr<ParameterBlockOrdering> Solver::Options::inner_iteration_ordering¶
Default: nullptr
If Solver::Options::use_inner_iterations is true, then the user has two choices.
Let the solver heuristically decide which parameter blocks to optimize in each inner iteration. To do this, set Solver::Options::inner_iteration_ordering to nullptr.
Specify a collection of ordered independent sets. The lower numbered groups are optimized before the higher numbered groups during the inner optimization phase. Each group must be an independent set. Not all parameter blocks need to be included in the ordering.
See Ordering for more details.
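A minimal sketch of the second choice (point1, point2 and camera are placeholders for parameter blocks already added to the problem, and the function name is illustrative):

#include <memory>
#include "ceres/ceres.h"

void SetInnerIterationOrdering(ceres::Solver::Options* options,
                               double* point1, double* point2,
                               double* camera) {
  options->use_inner_iterations = true;
  // Group 0 is optimized before group 1 during the inner phase;
  // each group must be an independent set.
  auto ordering = std::make_shared<ceres::ParameterBlockOrdering>();
  ordering->AddElementToGroup(point1, 0);
  ordering->AddElementToGroup(point2, 0);
  ordering->AddElementToGroup(camera, 1);
  options->inner_iteration_ordering = ordering;
}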

double Solver::Options::inner_iteration_tolerance¶
Default: 1e-3
Generally speaking, inner iterations make significant progress in the early stages of the solve and then their contribution drops down sharply, at which point the time spent doing inner iterations is not worth it.
Once the relative decrease in the objective function due to inner iterations drops below inner_iteration_tolerance, the use of inner iterations in subsequent trust region minimizer iterations is disabled.

LoggingType Solver::Options::logging_type¶
Default: PER_MINIMIZER_ITERATION
Valid values are SILENT and PER_MINIMIZER_ITERATION.

bool Solver::Options::minimizer_progress_to_stdout¶
Default: false
By default the Minimizer's progress is logged to STDERR depending on the vlog level. If this flag is set to true, and Solver::Options::logging_type is not SILENT, the logging output is sent to STDOUT.
For TRUST_REGION_MINIMIZER the progress display looks like

iter      cost      cost_change  gradient   step    tr_ratio  tr_radius  ls_iter  iter_time  total_time
   0  4.185660e+06    0.00e+00    1.09e+08  0.00e+00  0.00e+00  1.00e+04        0    7.59e-02    3.37e-01
   1  1.062590e+05    4.08e+06    8.99e+06  5.36e+02  9.82e-01  3.00e+04        1    1.65e-01    5.03e-01
   2  4.992817e+04    5.63e+04    8.32e+06  3.19e+02  6.52e-01  3.09e+04        1    1.45e-01    6.48e-01

Here
cost is the value of the objective function.
cost_change is the change in the value of the objective function if the step computed in this iteration is accepted.
gradient is the max norm of the gradient.
step is the change in the parameter vector.
tr_ratio is the ratio of the actual change in the objective function value to the change in the value of the trust region model.
tr_radius is the size of the trust region radius.
ls_iter is the number of linear solver iterations used to compute the trust region step. For direct/factorization based solvers it is always 1, for iterative solvers like ITERATIVE_SCHUR it is the number of iterations of the Conjugate Gradients algorithm.
iter_time is the time taken by the current iteration.
total_time is the total time taken by the minimizer.
For LINE_SEARCH_MINIMIZER the progress display looks like

0: f: 2.317806e+05 d: 0.00e+00 g: 3.19e-01 h: 0.00e+00 s: 0.00e+00 e:  0 it: 2.98e-02 tt: 8.50e-02
1: f: 2.312019e+05 d: 5.79e+02 g: 3.18e-01 h: 2.41e+01 s: 1.00e+00 e:  1 it: 4.54e-02 tt: 1.31e-01
2: f: 2.300462e+05 d: 1.16e+03 g: 3.17e-01 h: 4.90e+01 s: 2.54e-03 e:  1 it: 4.96e-02 tt: 1.81e-01

Here
f is the value of the objective function.
d is the change in the value of the objective function if the step computed in this iteration is accepted.
g is the max norm of the gradient.
h is the change in the parameter vector.
s is the optimal step length computed by the line search.
it is the time taken by the current iteration.
tt is the total time taken by the minimizer.

std::vector<int> Solver::Options::trust_region_minimizer_iterations_to_dump¶
Default: empty
List of iterations at which the trust region minimizer should dump the trust region problem. Useful for testing and benchmarking. If empty, no problems are dumped.

std::string Solver::Options::trust_region_problem_dump_directory¶
Default: /tmp
Directory to which the problems should be written. Should be non-empty if Solver::Options::trust_region_minimizer_iterations_to_dump is non-empty and Solver::Options::trust_region_problem_dump_format_type is not CONSOLE.

DumpFormatType Solver::Options::trust_region_problem_dump_format_type¶
Default: TEXTFILE
The format in which trust region problems should be logged when Solver::Options::trust_region_minimizer_iterations_to_dump is non-empty. There are three options:
CONSOLE prints the linear least squares problem in a human-readable format to stderr. The Jacobian is printed as a dense matrix. The vectors \(D\), \(x\) and \(f\) are printed as dense vectors. This should only be used for small problems.
TEXTFILE writes out the linear least squares problem to the directory pointed to by Solver::Options::trust_region_problem_dump_directory as text files which can be read into MATLAB/Octave. The Jacobian is dumped as a text file containing \((i,j,s)\) triplets, and the vectors \(D\), \(x\) and \(f\) are dumped as text files containing a list of their values. A MATLAB/Octave script called ceres_solver_iteration_???.m is also output, which can be used to parse and load the problem into memory.

bool Solver::Options::check_gradients¶
Default: false
Check all Jacobians computed by each residual block with finite differences. This is expensive since it involves computing the derivative by normal means (e.g. user specified, autodiff, etc), then also computing it using finite differences. The results are compared, and if they differ substantially, the optimization fails and the details are stored in the solver summary.

double Solver::Options::gradient_check_relative_precision¶
Default: 1e-8
Precision to check for in the gradient checker. If the relative difference between an element in a Jacobian exceeds this number, then the Jacobian for that cost term is dumped.

double Solver::Options::gradient_check_numeric_derivative_relative_step_size¶
Default: 1e-6
Note
This option only applies to the numeric differentiation used for checking the user provided derivatives when Solver::Options::check_gradients is true. If you are using NumericDiffCostFunction and are interested in changing the step size for numeric differentiation in your cost function, please have a look at NumericDiffOptions.
Relative shift used for taking numeric derivatives when Solver::Options::check_gradients is true.
For finite differencing, each dimension is evaluated at slightly shifted values, e.g., for forward differences, the numerical derivative is
\[\begin{split}\delta &= gradient\_check\_numeric\_derivative\_relative\_step\_size\\ \Delta f &= \frac{f((1 + \delta) x) - f(x)}{\delta x}\end{split}\]
The finite differencing is done along each dimension. The reason to use a relative (rather than absolute) step size is that this way, numeric differentiation works for functions where the arguments are typically large (e.g. \(10^9\)) and when the values are small (e.g. \(10^{-5}\)). It is possible to construct torture cases which break this finite difference heuristic, but they do not come up often in practice.

bool Solver::Options::update_state_every_iteration¶
Default: false
If update_state_every_iteration is true, then Ceres Solver will guarantee that at the end of every iteration and before any user IterationCallback is called, the parameter blocks are updated to the current best solution found by the solver. Thus the IterationCallback can inspect the values of the parameter blocks for purposes of computation, visualization or termination.
If update_state_every_iteration is false then there is no such guarantee, and user provided IterationCallbacks should not expect to look at the parameter blocks and interpret their values.

std::vector<IterationCallback*> Solver::Options::callbacks¶
Default: empty
Callbacks that are executed at the end of each iteration of the minimizer. They are executed in the order that they are specified in this vector.
By default, parameter blocks are updated only at the end of the optimization, i.e., when the minimizer terminates. This means that by default, if an IterationCallback inspects the parameter blocks, it will not see them changing in the course of the optimization.
To tell Ceres to update the parameter blocks at the end of each iteration and before calling the user's callback, set Solver::Options::update_state_every_iteration to true.
See examples/iteration_callback_example.cc for an example of an IterationCallback that uses Solver::Options::update_state_every_iteration to log changes to the parameter blocks over the course of the optimization.
The solver does NOT take ownership of these pointers.
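As a sketch (the callback passed in could be any IterationCallback subclass, such as the LoggingCallback shown later in this document; the function name is illustrative), callbacks are registered like this, keeping in mind that the raw pointer must outlive the call to Solve:

#include "ceres/ceres.h"

void AttachCallback(ceres::Solver::Options* options,
                    ceres::IterationCallback* callback) {
  // The solver does not take ownership; `callback` must outlive Solve().
  options->callbacks.push_back(callback);
  // Make the parameter blocks visible to the callback at every iteration.
  options->update_state_every_iteration = true;
}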
ParameterBlockOrdering¶

class ParameterBlockOrdering¶
ParameterBlockOrdering is a class for storing and manipulating an ordered collection of groups/sets with the following semantics:
Group IDs are non-negative integer values. Elements are any type that can serve as a key in a map or an element of a set.
An element can only belong to one group at a time. A group may contain an arbitrary number of elements.
Groups are ordered by their group id.

bool ParameterBlockOrdering::AddElementToGroup(const double *element, const int group)¶
Add an element to a group. If a group with this id does not exist, one is created. This method can be called any number of times for the same element. Group ids should be nonnegative numbers. Return value indicates if adding the element was a success.

void ParameterBlockOrdering::Clear()¶
Clear the ordering.

bool ParameterBlockOrdering::Remove(const double *element)¶
Remove the element, no matter what group it is in. If the element is not a member of any group, calling this method will result in a crash. Return value indicates if the element was actually removed.

void ParameterBlockOrdering::Reverse()¶
Reverse the order of the groups in place.

int ParameterBlockOrdering::GroupId(const double *element) const¶
Return the group id for the element. If the element is not a member of any group, return 1.

bool ParameterBlockOrdering::IsMember(const double *element) const¶
True if there is a group containing the parameter block.

int ParameterBlockOrdering::GroupSize(const int group) const¶
Number of elements in the group with the given id. This function always succeeds, i.e., implicitly there exists a (possibly empty) group for every integer.

int ParameterBlockOrdering::NumElements() const¶
Number of elements in the ordering.

int ParameterBlockOrdering::NumGroups() const¶
Number of groups with one or more elements.
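A small usage sketch of the interface above (x, y and z stand in for parameter blocks belonging to a Problem; the function name is illustrative):

#include "ceres/ceres.h"

void BuildOrdering(double* x, double* y, double* z,
                   ceres::ParameterBlockOrdering* ordering) {
  // Eliminate x and y first (group 0), then z (group 1).
  ordering->AddElementToGroup(x, 0);
  ordering->AddElementToGroup(y, 0);
  ordering->AddElementToGroup(z, 1);
  // At this point: NumGroups() == 2, NumElements() == 3,
  // and GroupId(z) == 1.
}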
IterationSummary¶

class IterationSummary¶
IterationSummary describes the state of the minimizer at the end of each iteration.

int IterationSummary::iteration¶
Current iteration number.

bool IterationSummary::step_is_valid¶
Step was numerically valid, i.e., all values are finite and the step reduces the value of the linearized model.
Note: IterationSummary::step_is_valid is false when IterationSummary::iteration = 0.

bool IterationSummary::step_is_nonmonotonic¶
Step did not reduce the value of the objective function sufficiently, but it was accepted because of the relaxed acceptance criterion used by the nonmonotonic trust region algorithm.
Note: IterationSummary::step_is_nonmonotonic is false when IterationSummary::iteration = 0.

bool IterationSummary::step_is_successful¶
Whether or not the minimizer accepted this step.
If the ordinary trust region algorithm is used, this means that the relative reduction in the objective function value was greater than Solver::Options::min_relative_decrease. However, if the non-monotonic trust region algorithm is used (Solver::Options::use_nonmonotonic_steps = true), then even if the relative decrease is not sufficient, the algorithm may accept the step and the step is declared successful.
Note: IterationSummary::step_is_successful is false when IterationSummary::iteration = 0.

double IterationSummary::cost¶
Value of the objective function.

double IterationSummary::cost_change¶
Change in the value of the objective function in this iteration. This can be positive or negative.

double IterationSummary::gradient_max_norm¶
Infinity norm of the gradient vector.

double IterationSummary::gradient_norm¶
2norm of the gradient vector.

double IterationSummary::step_norm¶
2norm of the size of the step computed in this iteration.

double IterationSummary::relative_decrease¶
For trust region algorithms, the ratio of the actual change in cost and the change in the cost of the linearized approximation.
This field is not used when a line search minimizer is used.

double IterationSummary::trust_region_radius¶
Size of the trust region at the end of the current iteration. For the Levenberg-Marquardt algorithm, the regularization parameter is 1.0 / IterationSummary::trust_region_radius.

double IterationSummary::eta¶
For the inexact step Levenberg-Marquardt algorithm, this is the relative accuracy with which the step is solved. This number is only applicable to the iterative solvers capable of solving linear systems inexactly. Factorization-based exact solvers always have an eta of 0.0.

double IterationSummary::step_size¶
Step size computed by the line search algorithm.
This field is not used when a trust region minimizer is used.

int IterationSummary::line_search_function_evaluations¶
Number of function evaluations used by the line search algorithm.
This field is not used when a trust region minimizer is used.

int IterationSummary::linear_solver_iterations¶
Number of iterations taken by the linear solver to solve for the trust region step.
Currently this field is not used when a line search minimizer is used.

double IterationSummary::iteration_time_in_seconds¶
Time (in seconds) spent inside the minimizer loop in the current iteration.

double IterationSummary::step_solver_time_in_seconds¶
Time (in seconds) spent inside the trust region step solver.

double IterationSummary::cumulative_time_in_seconds¶
Time (in seconds) since the user called Solve().
IterationCallback¶

class IterationCallback¶
Interface for specifying callbacks that are executed at the end of each iteration of the minimizer.

class IterationCallback {
 public:
  virtual ~IterationCallback() {}
  virtual CallbackReturnType operator()(const IterationSummary& summary) = 0;
};
The solver uses the return value of operator() to decide whether to continue solving or to terminate. The user can return three values.
SOLVER_ABORT indicates that the callback detected an abnormal situation. The solver returns without updating the parameter blocks (unless Solver::Options::update_state_every_iteration is set true). Solver returns with Solver::Summary::termination_type set to USER_FAILURE.
SOLVER_TERMINATE_SUCCESSFULLY indicates that there is no need to optimize anymore (some user specified termination criterion has been met). Solver returns with Solver::Summary::termination_type set to USER_SUCCESS.
SOLVER_CONTINUE indicates that the solver should continue optimizing.
The return values can be used to implement custom termination criteria that supersede the iteration/time/tolerance based termination implemented by Ceres.
For example, the following IterationCallback is used internally by Ceres to log the progress of the optimization.

class LoggingCallback : public IterationCallback {
 public:
  explicit LoggingCallback(bool log_to_stdout)
      : log_to_stdout_(log_to_stdout) {}

  ~LoggingCallback() {}

  CallbackReturnType operator()(const IterationSummary& summary) {
    const char* kReportRowFormat =
        "% 4d: f:% 8e d:% 3.2e g:% 3.2e h:% 3.2e "
        "rho:% 3.2e mu:% 3.2e eta:% 3.2e li:% 3d";
    string output = StringPrintf(kReportRowFormat,
                                 summary.iteration,
                                 summary.cost,
                                 summary.cost_change,
                                 summary.gradient_max_norm,
                                 summary.step_norm,
                                 summary.relative_decrease,
                                 summary.trust_region_radius,
                                 summary.eta,
                                 summary.linear_solver_iterations);
    if (log_to_stdout_) {
      cout << output << endl;
    } else {
      VLOG(1) << output;
    }
    return SOLVER_CONTINUE;
  }

 private:
  const bool log_to_stdout_;
};
See examples/evaluation_callback_example.cc for another example that uses
Solver::Options::update_state_every_iteration
to log changes to the parameter blocks over the course of the optimization.
CRSMatrix¶

class CRSMatrix¶
A compressed row sparse matrix used primarily for communicating the Jacobian matrix to the user.

std::vector<int> CRSMatrix::rows¶
CRSMatrix::rows is a CRSMatrix::num_rows + 1 sized array that points into the CRSMatrix::cols and CRSMatrix::values arrays.

std::vector<int> CRSMatrix::cols¶
CRSMatrix::cols contains as many entries as there are nonzeros in the matrix.
For each row i, cols[rows[i]] … cols[rows[i + 1] - 1] are the indices of the nonzero columns of row i.

std::vector<double> CRSMatrix::values¶
CRSMatrix::values contains as many entries as there are nonzeros in the matrix.
For each row i, values[rows[i]] … values[rows[i + 1] - 1] are the values of the nonzero columns of row i.
e.g., consider the 3x4 sparse matrix

0 10  0  4
0  2  3  2
1  2  0  0

The three arrays will be:

         row0    row1       row2
rows   = [ 0,     2,         5,    7]
cols   = [ 1, 3,  1, 2, 3,   0, 1]
values = [10, 4,  2, 3, 2,   1, 2]
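A short sketch of walking this representation, e.g. to compute the product y = A x from a CRSMatrix (such as the Jacobian optionally returned by Problem::Evaluate); the function name is illustrative:

#include <vector>
#include "ceres/crs_matrix.h"

std::vector<double> MultiplyCRS(const ceres::CRSMatrix& A,
                                const std::vector<double>& x) {
  std::vector<double> y(A.num_rows, 0.0);
  for (int i = 0; i < A.num_rows; ++i) {
    // The nonzeros of row i live in the index range [rows[i], rows[i + 1]).
    for (int idx = A.rows[i]; idx < A.rows[i + 1]; ++idx) {
      y[i] += A.values[idx] * x[A.cols[idx]];
    }
  }
  return y;
}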
Solver::Summary¶

std::string Solver::Summary::BriefReport() const¶
A brief one line description of the state of the solver after termination.

std::string Solver::Summary::FullReport() const¶
A full multiline description of the state of the solver after termination.

bool Solver::Summary::IsSolutionUsable() const¶
Whether the solution returned by the optimization algorithm can be relied on to be numerically sane. This will be the case if Solver::Summary::termination_type is set to CONVERGENCE, USER_SUCCESS or NO_CONVERGENCE, i.e., either the solver converged by meeting one of the convergence tolerances or because the user indicated that it had converged or it ran to the maximum number of iterations or time.
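For context, a typical way these reports and checks are used (a standard Ceres calling pattern; the problem setup is elided and the function name is illustrative):

#include <iostream>
#include "ceres/ceres.h"

void SolveAndReport(ceres::Problem* problem,
                    const ceres::Solver::Options& options) {
  ceres::Solver::Summary summary;
  ceres::Solve(options, problem, &summary);
  // Print the full multiline report after the solve.
  std::cout << summary.FullReport() << "\n";
  if (!summary.IsSolutionUsable()) {
    std::cout << "Solve failed: " << summary.BriefReport() << "\n";
  }
}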

double Solver::Summary::initial_cost¶
Cost of the problem (value of the objective function) before the optimization.

double Solver::Summary::final_cost¶
Cost of the problem (value of the objective function) after the optimization.

double Solver::Summary::fixed_cost¶
The part of the total cost that comes from residual blocks that were held fixed by the preprocessor because all the parameter blocks that they depend on were fixed.

std::vector<IterationSummary> Solver::Summary::iterations¶
IterationSummary
for each minimizer iteration in order.

int Solver::Summary::num_successful_steps¶
Number of minimizer iterations in which the step was accepted. Unless
Solver::Options::use_nonmonotonic_steps
is true this is also the number of steps in which the objective function value/cost went down.

int Solver::Summary::num_unsuccessful_steps¶
Number of minimizer iterations in which the step was rejected either because it did not reduce the cost enough or the step was not numerically valid.

int Solver::Summary::num_line_search_steps¶
Total number of iterations inside the line search algorithm across all invocations. We call these iterations “steps” to distinguish them from the outer iterations of the line search and trust region minimizer algorithms which call the line search algorithm as a subroutine.

double Solver::Summary::postprocessor_time_in_seconds¶
Time (in seconds) spent in the post processor.

double Solver::Summary::linear_solver_time_in_seconds¶
Time (in seconds) spent in the linear solver computing the trust region step.

int Solver::Summary::num_linear_solves¶
Number of times the Newton step was computed by solving a linear system. This does not include linear solves used by inner iterations.

double Solver::Summary::residual_evaluation_time_in_seconds¶
Time (in seconds) spent evaluating the residual vector.

double Solver::Summary::jacobian_evaluation_time_in_seconds¶
Time (in seconds) spent evaluating the Jacobian matrix.

int Solver::Summary::num_jacobian_evaluations¶
Number of times only the Jacobian and the residuals were evaluated.

double Solver::Summary::inner_iteration_time_in_seconds¶
Time (in seconds) spent doing inner iterations.

int Solver::Summary::num_effective_parameters¶
Dimension of the tangent space of the problem (or the number of columns in the Jacobian for the problem). This is different from Solver::Summary::num_parameters if a parameter block is associated with a Manifold.

int Solver::Summary::num_parameter_blocks_reduced¶
Number of parameter blocks in the problem after the inactive and constant parameter blocks have been removed. A parameter block is inactive if no residual block refers to it.

int Solver::Summary::num_effective_parameters_reduced¶
Dimension of the tangent space of the reduced problem (or the number of columns in the Jacobian for the reduced problem). This is different from Solver::Summary::num_parameters_reduced if a parameter block in the reduced problem is associated with a Manifold.

int Solver::Summary::num_threads_given¶
Number of threads specified by the user for Jacobian and residual evaluation.

int Solver::Summary::num_threads_used¶
Number of threads actually used by the solver for Jacobian and residual evaluation.

LinearSolverType Solver::Summary::linear_solver_type_given¶
Type of the linear solver requested by the user.

LinearSolverType Solver::Summary::linear_solver_type_used¶
Type of the linear solver actually used. This may be different from
Solver::Summary::linear_solver_type_given
if Ceres determines that the problem structure is not compatible with the linear solver requested or if the linear solver requested by the user is not available, e.g., the user requested SPARSE_NORMAL_CHOLESKY but no sparse linear algebra library was available.

std::vector<int> Solver::Summary::linear_solver_ordering_given¶
Size of the elimination groups given by the user as hints to the linear solver.

std::vector<int> Solver::Summary::linear_solver_ordering_used¶
Size of the parameter groups used by the solver when ordering the columns of the Jacobian. This may be different from Solver::Summary::linear_solver_ordering_given if the user left Solver::Summary::linear_solver_ordering_given blank and asked for an automatic ordering, or if the problem contains some constant or inactive parameter blocks.

std::string Solver::Summary::schur_structure_given¶
For Schur type linear solvers, this string describes the template specialization which was detected in the problem and should be used.

std::string Solver::Summary::schur_structure_used¶
For Schur type linear solvers, this string describes the template specialization that was actually instantiated and used. This will differ from Solver::Summary::schur_structure_given when the corresponding template specialization does not exist.
Template specializations can be added to Ceres by editing internal/ceres/generate_template_specializations.py

bool Solver::Summary::inner_iterations_given¶
True if the user asked for inner iterations to be used as part of the optimization.

bool Solver::Summary::inner_iterations_used¶
True if the user asked for inner iterations to be used as part of the optimization and the problem structure was such that they were actually performed. For example, in a problem with just one parameter block, inner iterations are not performed.

std::vector<int> Solver::Summary::inner_iteration_ordering_given¶
Size of the parameter groups given by the user for performing inner iterations.

std::vector<int> Solver::Summary::inner_iteration_ordering_used¶
Size of the parameter groups used by the solver for performing inner iterations. This may be different from Solver::Summary::inner_iteration_ordering_given if the user left Solver::Summary::inner_iteration_ordering_given blank and asked for an automatic ordering, or if the problem contains some constant or inactive parameter blocks.

PreconditionerType Solver::Summary::preconditioner_type_given¶
Type of the preconditioner requested by the user.

PreconditionerType Solver::Summary::preconditioner_type_used¶
Type of the preconditioner actually used. This may be different from
Solver::Summary::linear_solver_type_given
if Ceres determines that the problem structure is not compatible with the linear solver requested or if the linear solver requested by the user is not available.

VisibilityClusteringType Solver::Summary::visibility_clustering_type¶
Type of clustering algorithm used for visibility based preconditioning. Only meaningful when the Solver::Summary::preconditioner_type_used is CLUSTER_JACOBI or CLUSTER_TRIDIAGONAL.

DoglegType Solver::Summary::dogleg_type¶
Type of dogleg strategy used for solving the trust region problem.

DenseLinearAlgebraLibraryType Solver::Summary::dense_linear_algebra_library_type¶
Type of the dense linear algebra library used.

SparseLinearAlgebraLibraryType Solver::Summary::sparse_linear_algebra_library_type¶
Type of the sparse linear algebra library used.

LineSearchDirectionType Solver::Summary::line_search_direction_type¶
Type of line search direction used.

LineSearchInterpolationType Solver::Summary::line_search_interpolation_type¶
When performing line search, the degree of the polynomial used to approximate the objective function.