Ipopt Documentation  
 
Ipopt::TNLP Class Reference [abstract]

Base class for all NLP's that use standard triplet matrix form and dense vectors. More...

#include <IpTNLP.hpp>

Inheritance diagram for Ipopt::TNLP

Public Types

enum  LinearityType { LINEAR , NON_LINEAR }
 Linearity-types of variables and constraints. More...
 
enum  IndexStyleEnum { C_STYLE = 0 , FORTRAN_STYLE = 1 }
 
typedef std::map< std::string, std::vector< std::string > > StringMetaDataMapType
 
typedef std::map< std::string, std::vector< Index > > IntegerMetaDataMapType
 
typedef std::map< std::string, std::vector< Number > > NumericMetaDataMapType
 

Public Member Functions

 DECLARE_STD_EXCEPTION (INVALID_TNLP)
 
Constructors/Destructors
 TNLP ()
 
virtual ~TNLP ()
 Default destructor.
 
Methods to gather information about the NLP
virtual bool get_nlp_info (Index &n, Index &m, Index &nnz_jac_g, Index &nnz_h_lag, IndexStyleEnum &index_style)=0
 Method to request the initial information about the problem.
 
virtual bool get_var_con_metadata (Index n, StringMetaDataMapType &var_string_md, IntegerMetaDataMapType &var_integer_md, NumericMetaDataMapType &var_numeric_md, Index m, StringMetaDataMapType &con_string_md, IntegerMetaDataMapType &con_integer_md, NumericMetaDataMapType &con_numeric_md)
 Method to request meta data for the variables and the constraints.
 
virtual bool get_bounds_info (Index n, Number *x_l, Number *x_u, Index m, Number *g_l, Number *g_u)=0
 Method to request bounds on the variables and constraints.
 
virtual bool get_scaling_parameters (Number &obj_scaling, bool &use_x_scaling, Index n, Number *x_scaling, bool &use_g_scaling, Index m, Number *g_scaling)
 Method to request scaling parameters.
 
virtual bool get_variables_linearity (Index n, LinearityType *var_types)
 Method to request the variables linearity.
 
virtual bool get_constraints_linearity (Index m, LinearityType *const_types)
 Method to request the constraints linearity.
 
virtual bool get_starting_point (Index n, bool init_x, Number *x, bool init_z, Number *z_L, Number *z_U, Index m, bool init_lambda, Number *lambda)=0
 Method to request the starting point before iterating.
 
virtual bool get_warm_start_iterate (IteratesVector &warm_start_iterate)
 Method to provide an Ipopt warm start iterate which is already in the form Ipopt requires it internally for warm starts.
 
virtual bool eval_f (Index n, const Number *x, bool new_x, Number &obj_value)=0
 Method to request the value of the objective function.
 
virtual bool eval_grad_f (Index n, const Number *x, bool new_x, Number *grad_f)=0
 Method to request the gradient of the objective function.
 
virtual bool eval_g (Index n, const Number *x, bool new_x, Index m, Number *g)=0
 Method to request the constraint values.
 
virtual bool eval_jac_g (Index n, const Number *x, bool new_x, Index m, Index nele_jac, Index *iRow, Index *jCol, Number *values)=0
 Method to request either the sparsity structure or the values of the Jacobian of the constraints.
 
virtual bool eval_h (Index n, const Number *x, bool new_x, Number obj_factor, Index m, const Number *lambda, bool new_lambda, Index nele_hess, Index *iRow, Index *jCol, Number *values)
 Method to request either the sparsity structure or the values of the Hessian of the Lagrangian.
 
Methods for quasi-Newton approximation.

If the second derivatives are approximated by Ipopt, it is better to do this only in the space of nonlinear variables.

The following methods are called by Ipopt if the quasi-Newton approximation is selected.

virtual Index get_number_of_nonlinear_variables ()
 Return the number of variables that appear nonlinearly in the objective function or in at least one constraint function.
 
virtual bool get_list_of_nonlinear_variables (Index num_nonlin_vars, Index *pos_nonlin_vars)
 Return the indices of all nonlinear variables.
 
Solution Methods
virtual void finalize_solution (SolverReturn status, Index n, const Number *x, const Number *z_L, const Number *z_U, Index m, const Number *g, const Number *lambda, Number obj_value, const IpoptData *ip_data, IpoptCalculatedQuantities *ip_cq)=0
 This method is called when the algorithm has finished (successfully or not) so the TNLP can digest the outcome, e.g., store/write the solution, if any.
 
virtual void finalize_metadata (Index n, const StringMetaDataMapType &var_string_md, const IntegerMetaDataMapType &var_integer_md, const NumericMetaDataMapType &var_numeric_md, Index m, const StringMetaDataMapType &con_string_md, const IntegerMetaDataMapType &con_integer_md, const NumericMetaDataMapType &con_numeric_md)
 This method returns any metadata collected during the run of the algorithm.
 
virtual bool intermediate_callback (AlgorithmMode mode, Index iter, Number obj_value, Number inf_pr, Number inf_du, Number mu, Number d_norm, Number regularization_size, Number alpha_du, Number alpha_pr, Index ls_trials, const IpoptData *ip_data, IpoptCalculatedQuantities *ip_cq)
 Intermediate Callback method for the user.
 
bool get_curr_iterate (const IpoptData *ip_data, IpoptCalculatedQuantities *ip_cq, bool scaled, Index n, Number *x, Number *z_L, Number *z_U, Index m, Number *g, Number *lambda) const
 Get primal and dual variable values of the current iterate.
 
bool get_curr_violations (const IpoptData *ip_data, IpoptCalculatedQuantities *ip_cq, bool scaled, Index n, Number *x_L_violation, Number *x_U_violation, Number *compl_x_L, Number *compl_x_U, Number *grad_lag_x, Index m, Number *nlp_constraint_violation, Number *compl_g) const
 Get primal and dual infeasibility of the current iterate.
 
Public Member Functions inherited from Ipopt::ReferencedObject
 ReferencedObject ()
 
virtual ~ReferencedObject ()
 
Index ReferenceCount () const
 
void AddRef (const Referencer *referencer) const
 
void ReleaseRef (const Referencer *referencer) const
 

Private Member Functions

Default Compiler Generated Methods

(Hidden to avoid implicit creation/calling).

These methods are not implemented and we do not want the compiler to implement them for us, so we declare them private and do not define them. This ensures that they will not be implicitly created/called.

 TNLP (const TNLP &)
 Copy Constructor.
 
void operator= (const TNLP &)
 Default Assignment Operator.
 

Detailed Description

Base class for all NLP's that use standard triplet matrix form and dense vectors.

This is the standard base class for all NLP's that use the standard triplet matrix form and dense vectors. The class TNLPAdapter then converts this interface to an interface that can be used directly by Ipopt.

This interface presents the problem form:

\begin{eqnarray*} \mathrm{min} && f(x) \\ \mathrm{s.t.} && g_L \leq g(x) \leq g_U \\ && x_L \leq x \leq x_U \end{eqnarray*}

In order to specify an equality constraint, set \(g_{L,i} = g_{U,i}\). The value that indicates "infinity" for the bounds (i.e. the variable or constraint has no lower bound (-infinity) or upper bound (+infinity)) is set through the options nlp_lower_bound_inf and nlp_upper_bound_inf, respectively. To indicate that a variable or constraint has no lower or upper bound, set the corresponding bound to a value at or below nlp_lower_bound_inf or at or above nlp_upper_bound_inf (by default \(-10^{19}\) and \(+10^{19}\)).

Definition at line 47 of file IpTNLP.hpp.
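For orientation, here is a minimal sketch of a concrete TNLP subclass. The class name ExampleTNLP and the toy problem (minimize \(x_0^2 + x_1^2\) subject to \(x_0 x_1 \geq 1\), \(0 \leq x_i \leq 5\)) are hypothetical and serve only to illustrate the interface; optional methods discussed further below would be declared analogously. The method implementations are sketched in the corresponding sections of this page.

    #include <IpTNLP.hpp>
    using namespace Ipopt;

    // Hypothetical toy problem:  min x0^2 + x1^2   s.t.  x0*x1 >= 1,  0 <= x0, x1 <= 5
    class ExampleTNLP : public TNLP
    {
    public:
       // the pure virtual methods every TNLP must implement
       bool get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
                         Index& nnz_h_lag, IndexStyleEnum& index_style) override;
       bool get_bounds_info(Index n, Number* x_l, Number* x_u,
                            Index m, Number* g_l, Number* g_u) override;
       bool get_starting_point(Index n, bool init_x, Number* x,
                               bool init_z, Number* z_L, Number* z_U,
                               Index m, bool init_lambda, Number* lambda) override;
       bool eval_f(Index n, const Number* x, bool new_x, Number& obj_value) override;
       bool eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f) override;
       bool eval_g(Index n, const Number* x, bool new_x, Index m, Number* g) override;
       bool eval_jac_g(Index n, const Number* x, bool new_x, Index m, Index nele_jac,
                       Index* iRow, Index* jCol, Number* values) override;
       bool eval_h(Index n, const Number* x, bool new_x, Number obj_factor, Index m,
                   const Number* lambda, bool new_lambda, Index nele_hess,
                   Index* iRow, Index* jCol, Number* values) override;
       void finalize_solution(SolverReturn status, Index n, const Number* x,
                              const Number* z_L, const Number* z_U, Index m,
                              const Number* g, const Number* lambda, Number obj_value,
                              const IpoptData* ip_data,
                              IpoptCalculatedQuantities* ip_cq) override;
    };

Such an object would typically be handed to Ipopt as SmartPtr<TNLP> nlp = new ExampleTNLP(); and solved via IpoptApplication::OptimizeTNLP(nlp).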

Member Typedef Documentation

◆ StringMetaDataMapType

typedef std::map<std::string, std::vector<std::string> > Ipopt::TNLP::StringMetaDataMapType

Definition at line 70 of file IpTNLP.hpp.

◆ IntegerMetaDataMapType

typedef std::map<std::string, std::vector<Index> > Ipopt::TNLP::IntegerMetaDataMapType

Definition at line 71 of file IpTNLP.hpp.

◆ NumericMetaDataMapType

typedef std::map<std::string, std::vector<Number> > Ipopt::TNLP::NumericMetaDataMapType

Definition at line 72 of file IpTNLP.hpp.

Member Enumeration Documentation

◆ LinearityType

Linearity-types of variables and constraints.

Enumerator
LINEAR 

Constraint/Variable is linear.

NON_LINEAR 

Constraint/Variable is non-linear.

Definition at line 52 of file IpTNLP.hpp.

◆ IndexStyleEnum

Enumerator
C_STYLE 
FORTRAN_STYLE 

Definition at line 74 of file IpTNLP.hpp.

Constructor & Destructor Documentation

◆ TNLP() [1/2]

Ipopt::TNLP::TNLP ( )
inline

Definition at line 60 of file IpTNLP.hpp.

◆ ~TNLP()

virtual Ipopt::TNLP::~TNLP ( )
inlinevirtual

Default destructor.

Definition at line 64 of file IpTNLP.hpp.

◆ TNLP() [2/2]

Ipopt::TNLP::TNLP ( const TNLP & )
private

Copy Constructor.

Member Function Documentation

◆ DECLARE_STD_EXCEPTION()

Ipopt::TNLP::DECLARE_STD_EXCEPTION ( INVALID_TNLP  )

◆ get_nlp_info()

virtual bool Ipopt::TNLP::get_nlp_info ( Index &n,
Index &m,
Index &nnz_jac_g,
Index &nnz_h_lag,
IndexStyleEnum &index_style 
)
pure virtual

Method to request the initial information about the problem.

Ipopt uses this information when allocating the arrays that it will later ask you to fill with values. Be careful in this method since incorrect values will cause memory bugs which may be very difficult to find.

Parameters
n(out) Storage for the number of variables \(x\)
m(out) Storage for the number of constraints \(g(x)\)
nnz_jac_g(out) Storage for the number of nonzero entries in the Jacobian
nnz_h_lag(out) Storage for the number of nonzero entries in the Hessian
index_style(out) Storage for the index style, the numbering style used for row/col entries in the sparse matrix format (TNLP::C_STYLE: 0-based, TNLP::FORTRAN_STYLE: 1-based; see also Triplet Format for Sparse Matrices)

Implemented in Ipopt::AmplTNLP, Ipopt::StdInterfaceTNLP, and Ipopt::TNLPReducer.
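As an illustration only, continuing the hypothetical ExampleTNLP from the Detailed Description, this method might look as follows:

    bool ExampleTNLP::get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
                                   Index& nnz_h_lag, IndexStyleEnum& index_style)
    {
       n = 2;                 // two variables x0, x1
       m = 1;                 // one constraint g0(x) = x0*x1
       nnz_jac_g = 2;         // dg0/dx0 and dg0/dx1 are both structurally nonzero
       nnz_h_lag = 3;         // lower triangle of the dense 2x2 Hessian: (0,0), (1,0), (1,1)
       index_style = C_STYLE; // 0-based row/column indices in eval_jac_g and eval_h
       return true;
    }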

◆ get_var_con_metadata()

virtual bool Ipopt::TNLP::get_var_con_metadata ( Index  n,
StringMetaDataMapType &var_string_md,
IntegerMetaDataMapType &var_integer_md,
NumericMetaDataMapType &var_numeric_md,
Index  m,
StringMetaDataMapType &con_string_md,
IntegerMetaDataMapType &con_integer_md,
NumericMetaDataMapType &con_numeric_md 
)
inlinevirtual

Method to request meta data for the variables and the constraints.

This method is used to pass meta data about variables or constraints to Ipopt. The data can be either of integer, numeric, or string type. Ipopt passes this data on to its internal problem representation. The meta data type is a std::map with std::string as key type and a std::vector as value type. So far, Ipopt itself only makes use of string meta data under the key idx_names. With this key, variable and constraint names can be passed to Ipopt, which are shown when printing internal vector or matrix data structures if Ipopt is run with a high value for option print_level. This allows a user to identify the original variables and constraints corresponding to Ipopt's internal problem representation.

If this method is not overloaded, the default implementation does not set any meta data and returns false.

Reimplemented in Ipopt::SensAmplTNLP, and Ipopt::AmplTNLP.

Definition at line 126 of file IpTNLP.hpp.
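If, for instance, the hypothetical ExampleTNLP wanted to attach names to its variables and its constraint, it could override this method as sketched below (not required, purely illustrative):

    bool ExampleTNLP::get_var_con_metadata(Index n,
       StringMetaDataMapType& var_string_md, IntegerMetaDataMapType& var_integer_md,
       NumericMetaDataMapType& var_numeric_md, Index m,
       StringMetaDataMapType& con_string_md, IntegerMetaDataMapType& con_integer_md,
       NumericMetaDataMapType& con_numeric_md)
    {
       // names show up in internal vector/matrix printouts at high print_level
       var_string_md["idx_names"] = std::vector<std::string>{"x0", "x1"};
       con_string_md["idx_names"] = std::vector<std::string>{"g0"};
       return true;   // meta data was provided
    }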

◆ get_bounds_info()

virtual bool Ipopt::TNLP::get_bounds_info ( Index  n,
Number *x_l,
Number *x_u,
Index  m,
Number *g_l,
Number *g_u 
)
pure virtual

Method to request bounds on the variables and constraints.

Parameters
n(in) the number of variables \(x\) in the problem
x_l(out) the lower bounds \(x^L\) for the variables \(x\)
x_u(out) the upper bounds \(x^U\) for the variables \(x\)
m(in) the number of constraints \(g(x)\) in the problem
g_l(out) the lower bounds \(g^L\) for the constraints \(g(x)\)
g_u(out) the upper bounds \(g^U\) for the constraints \(g(x)\)
Returns
true if success, false otherwise.

The values of n and m that were specified in TNLP::get_nlp_info are passed here for debug checking. Setting a lower bound to a value less than or equal to the value of the option nlp_lower_bound_inf will cause Ipopt to assume no lower bound. Likewise, specifying an upper bound greater than or equal to the value of the option nlp_upper_bound_inf will cause Ipopt to assume no upper bound. These options have default values of \(-10^{19}\) and \(+10^{19}\), respectively, but may be changed.

Implemented in Ipopt::SensAmplTNLP, Ipopt::AmplTNLP, Ipopt::StdInterfaceTNLP, and Ipopt::TNLPReducer.
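A sketch for the hypothetical ExampleTNLP (bounds \(0 \leq x_i \leq 5\) and \(x_0 x_1 \geq 1\)):

    #include <cassert>

    bool ExampleTNLP::get_bounds_info(Index n, Number* x_l, Number* x_u,
                                      Index m, Number* g_l, Number* g_u)
    {
       assert(n == 2 && m == 1);   // values from get_nlp_info, passed for debug checking
       for( Index i = 0; i < n; i++ )
       {
          x_l[i] = 0.0;            // 0 <= x_i
          x_u[i] = 5.0;            // x_i <= 5
       }
       g_l[0] = 1.0;               // x0*x1 >= 1
       g_u[0] = 2e19;              // >= nlp_upper_bound_inf (default 1e19): no upper bound
       return true;
    }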

◆ get_scaling_parameters()

virtual bool Ipopt::TNLP::get_scaling_parameters ( Number &obj_scaling,
bool &use_x_scaling,
Index  n,
Number *x_scaling,
bool &use_g_scaling,
Index  m,
Number *g_scaling 
)
inlinevirtual

Method to request scaling parameters.

This is only called if the options are set to retrieve user scaling, that is, if nlp_scaling_method is chosen as "user-scaling". The method should provide scaling factors for the objective function as well as for the optimization variables and/or constraints. The return value should be true, unless an error occurred, and the program is to be aborted.

The value returned in obj_scaling determines how Ipopt should internally scale the objective function. For example, if this number is chosen to be 10, then Ipopt solves internally an optimization problem that has 10 times the value of the original objective function provided by the TNLP. In particular, if this value is negative, then Ipopt will maximize the objective function instead of minimizing it.

The scaling factors for the variables can be returned in x_scaling, which has the same length as x in the other TNLP methods, and the factors are ordered like x. use_x_scaling needs to be set to true, if Ipopt should scale the variables. If it is false, no internal scaling of the variables is done. Similarly, the scaling factors for the constraints can be returned in g_scaling, and this scaling is activated by setting use_g_scaling to true.

As a guideline, we suggest scaling the optimization problem (either directly in the original formulation, or by using scaling factors) so that all sensitivities, i.e., all non-zero first partial derivatives, are typically of the order 0.1 to 10.

Reimplemented in Ipopt::AmplTNLP, Ipopt::StdInterfaceTNLP, and Ipopt::TNLPReducer.

Definition at line 211 of file IpTNLP.hpp.
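Purely as an illustration (only consulted when nlp_scaling_method is set to "user-scaling"), the hypothetical ExampleTNLP could report scaling factors like this:

    bool ExampleTNLP::get_scaling_parameters(Number& obj_scaling,
       bool& use_x_scaling, Index n, Number* x_scaling,
       bool& use_g_scaling, Index m, Number* g_scaling)
    {
       obj_scaling   = 1.0;    // keep the objective as is (a negative value would maximize)
       use_x_scaling = false;  // no variable scaling; x_scaling is then ignored
       use_g_scaling = true;   // scale the single constraint:
       g_scaling[0]  = 0.5;    // Ipopt internally works with 0.5 * g0(x)
       return true;
    }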

◆ get_variables_linearity()

virtual bool Ipopt::TNLP::get_variables_linearity ( Index  n,
LinearityType *var_types 
)
inlinevirtual

Method to request the variables linearity.

This method is never called by Ipopt, but is used by Bonmin to get information about which variables occur only in linear terms. Ipopt passes the array var_types of length at least n, which should be filled with the appropriate linearity type of the variables (TNLP::LINEAR or TNLP::NON_LINEAR).

The default implementation just returns false and does not fill the array.

Reimplemented in Ipopt::TNLPReducer.

Definition at line 243 of file IpTNLP.hpp.

◆ get_constraints_linearity()

virtual bool Ipopt::TNLP::get_constraints_linearity ( Index  m,
LinearityType *const_types 
)
inlinevirtual

Method to request the constraints linearity.

This method is never called by Ipopt, but is used by Bonmin to get information about which constraints are linear. Ipopt passes the array const_types of size m, which should be filled with the appropriate linearity type of the constraints (TNLP::LINEAR or TNLP::NON_LINEAR).

The default implementation just returns false and does not fill the array.

Reimplemented in Ipopt::AmplTNLP, and Ipopt::TNLPReducer.

Definition at line 265 of file IpTNLP.hpp.
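A minimal sketch of both linearity callbacks for the hypothetical ExampleTNLP (only relevant when the TNLP is used through Bonmin):

    bool ExampleTNLP::get_variables_linearity(Index n, LinearityType* var_types)
    {
       for( Index i = 0; i < n; i++ )
          var_types[i] = NON_LINEAR;    // both x0 and x1 enter nonlinear terms
       return true;
    }

    bool ExampleTNLP::get_constraints_linearity(Index m, LinearityType* const_types)
    {
       for( Index i = 0; i < m; i++ )
          const_types[i] = NON_LINEAR;  // g0(x) = x0*x1 is nonlinear
       return true;
    }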

◆ get_starting_point()

virtual bool Ipopt::TNLP::get_starting_point ( Index  n,
bool  init_x,
Number *x,
bool  init_z,
Number *z_L,
Number *z_U,
Index  m,
bool  init_lambda,
Number *lambda 
)
pure virtual

Method to request the starting point before iterating.

Parameters
n(in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
init_x(in) if true, this method must provide an initial value for \(x\)
x(out) the initial values for the primal variables \(x\)
init_z(in) if true, this method must provide an initial value for the bound multipliers \(z^L\) and \(z^U\)
z_L(out) the initial values for the bound multipliers \(z^L\)
z_U(out) the initial values for the bound multipliers \(z^U\)
m(in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
init_lambda(in) if true, this method must provide an initial value for the constraint multipliers \(\lambda\)
lambda(out) the initial values for the constraint multipliers, \(\lambda\)
Returns
true if success, false otherwise.

The boolean variables indicate whether the algorithm requires x, z_L/z_U, and lambda to be initialized, respectively. If, for some reason, the algorithm requires initializations that cannot be provided, false should be returned and Ipopt will stop. The default options only require initial values for the primal variables \(x\).

Note that the initial values for bound multiplier components corresponding to absent bounds ( \(x^L_i=-\infty\) or \(x^U_i=\infty\)) are ignored.

Implemented in Ipopt::AmplTNLP, Ipopt::StdInterfaceTNLP, and Ipopt::TNLPReducer.
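A sketch for the hypothetical ExampleTNLP, which can only supply primal starting values:

    bool ExampleTNLP::get_starting_point(Index n, bool init_x, Number* x,
                                         bool init_z, Number* z_L, Number* z_U,
                                         Index m, bool init_lambda, Number* lambda)
    {
       if( init_x )
       {
          x[0] = 2.0;       // feasible w.r.t. x0*x1 >= 1 and the variable bounds
          x[1] = 2.0;
       }
       if( init_z || init_lambda )
          return false;     // we cannot provide multiplier estimates; Ipopt would stop
       return true;
    }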

◆ get_warm_start_iterate()

virtual bool Ipopt::TNLP::get_warm_start_iterate ( IteratesVector &warm_start_iterate )
inlinevirtual

Method to provide an Ipopt warm start iterate which is already in the form Ipopt requires it internally for warm starts.

This method is only for expert users. The default implementation does not provide a warm start iterate and returns false.

Parameters
warm_start_iterate storage for the warm start iterate in the form Ipopt requires it internally

Reimplemented in Ipopt::TNLPReducer.

Definition at line 322 of file IpTNLP.hpp.

◆ eval_f()

virtual bool Ipopt::TNLP::eval_f ( Index  n,
const Number *x,
bool  new_x,
Number &obj_value 
)
pure virtual

Method to request the value of the objective function.

Parameters
n(in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
x(in) the values for the primal variables \(x\) at which the objective function \(f(x)\) is to be evaluated
new_x(in) false if any evaluation method (eval_*) was previously called with the same values in x, true otherwise. This can be helpful when users have efficient implementations that calculate multiple outputs at once. Ipopt internally caches results from the TNLP and generally, this flag can be ignored.
obj_value(out) storage for the value of the objective function \(f(x)\)
Returns
true if success, false otherwise.

Implemented in Ipopt::AmplTNLP, Ipopt::StdInterfaceTNLP, and Ipopt::TNLPReducer.
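For the hypothetical ExampleTNLP objective \(f(x) = x_0^2 + x_1^2\), a sketch:

    bool ExampleTNLP::eval_f(Index n, const Number* x, bool new_x, Number& obj_value)
    {
       // new_x can usually be ignored; Ipopt caches evaluation results itself
       obj_value = x[0] * x[0] + x[1] * x[1];
       return true;
    }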

◆ eval_grad_f()

virtual bool Ipopt::TNLP::eval_grad_f ( Index  n,
const Number *x,
bool  new_x,
Number *grad_f 
)
pure virtual

Method to request the gradient of the objective function.

Parameters
n(in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
x(in) the values for the primal variables \(x\) at which the gradient \(\nabla f(x)\) is to be evaluated
new_x(in) false if any evaluation method (eval_*) was previously called with the same values in x, true otherwise; see also TNLP::eval_f
grad_f(out) array to store values of the gradient of the objective function \(\nabla f(x)\). The gradient array is in the same order as the \(x\) variables (i.e., the gradient of the objective with respect to x[2] should be put in grad_f[2]).
Returns
true if success, false otherwise.

Implemented in Ipopt::AmplTNLP, Ipopt::StdInterfaceTNLP, and Ipopt::TNLPReducer.
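Continuing the ExampleTNLP sketch, the gradient of \(f(x) = x_0^2 + x_1^2\), ordered like \(x\):

    bool ExampleTNLP::eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f)
    {
       grad_f[0] = 2.0 * x[0];   // df/dx0
       grad_f[1] = 2.0 * x[1];   // df/dx1
       return true;
    }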

◆ eval_g()

virtual bool Ipopt::TNLP::eval_g ( Index  n,
const Number *x,
bool  new_x,
Index  m,
Number *g 
)
pure virtual

Method to request the constraint values.

Parameters
n(in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
x(in) the values for the primal variables \(x\) at which the constraint functions \(g(x)\) are to be evaluated
new_x(in) false if any evaluation method (eval_*) was previously called with the same values in x, true otherwise; see also TNLP::eval_f
m(in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
g(out) array to store constraint function values \(g(x)\), do not add or subtract the bound values \(g^L\) or \(g^U\).
Returns
true if success, false otherwise.

Implemented in Ipopt::AmplTNLP, Ipopt::StdInterfaceTNLP, and Ipopt::TNLPReducer.
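A sketch of the single constraint function \(g_0(x) = x_0 x_1\) of the hypothetical ExampleTNLP (the bounds \(g^L\), \(g^U\) are not applied here):

    bool ExampleTNLP::eval_g(Index n, const Number* x, bool new_x, Index m, Number* g)
    {
       g[0] = x[0] * x[1];   // raw constraint value, without g^L or g^U
       return true;
    }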

◆ eval_jac_g()

virtual bool Ipopt::TNLP::eval_jac_g ( Index  n,
const Number *x,
bool  new_x,
Index  m,
Index  nele_jac,
Index *iRow,
Index *jCol,
Number *values 
)
pure virtual

Method to request either the sparsity structure or the values of the Jacobian of the constraints.

The Jacobian is the matrix of derivatives where the derivative of constraint function \(g_i\) with respect to variable \(x_j\) is placed in row \(i\) and column \(j\). See Triplet Format for Sparse Matrices for a discussion of the sparse matrix format used in this method.

Parameters
n(in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
x(in) first call: NULL; later calls: the values for the primal variables \(x\) at which the constraint Jacobian \(\nabla g(x)^T\) is to be evaluated
new_x(in) false if any evaluation method (eval_*) was previously called with the same values in x, true otherwise; see also TNLP::eval_f
m(in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
nele_jac(in) the number of nonzero elements in the Jacobian; it will have the same value that was specified in TNLP::get_nlp_info
iRow(out) first call: array of length nele_jac to store the row indices of entries in the Jacobian of the constraints; later calls: NULL
jCol(out) first call: array of length nele_jac to store the column indices of entries in the Jacobian of the constraints; later calls: NULL
values(out) first call: NULL; later calls: array of length nele_jac to store the values of the entries in the Jacobian of the constraints
Returns
true if success, false otherwise.
Note
The arrays iRow and jCol only need to be filled once. If the iRow and jCol arguments are not NULL (first call to this function), then Ipopt expects that the sparsity structure of the Jacobian (the row and column indices only) are written into iRow and jCol. At this call, the arguments x and values will be NULL. If the arguments x and values are not NULL, then Ipopt expects that the value of the Jacobian as calculated from array x is stored in array values (using the same order as used when specifying the sparsity structure). At this call, the arguments iRow and jCol will be NULL.

Implemented in Ipopt::AmplTNLP, Ipopt::StdInterfaceTNLP, and Ipopt::TNLPReducer.
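The two-phase protocol (structure first, values later) could look as follows for the hypothetical ExampleTNLP, whose Jacobian is the 1x2 matrix \((x_1, x_0)\):

    bool ExampleTNLP::eval_jac_g(Index n, const Number* x, bool new_x, Index m,
                                 Index nele_jac, Index* iRow, Index* jCol, Number* values)
    {
       if( values == NULL )
       {
          // first call: report the sparsity structure only (C_STYLE, i.e., 0-based)
          iRow[0] = 0; jCol[0] = 0;   // entry for dg0/dx0
          iRow[1] = 0; jCol[1] = 1;   // entry for dg0/dx1
       }
       else
       {
          // later calls: report the values, in the same order as the structure above
          values[0] = x[1];           // dg0/dx0
          values[1] = x[0];           // dg0/dx1
       }
       return true;
    }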

◆ eval_h()

virtual bool Ipopt::TNLP::eval_h ( Index  n,
const Number *x,
bool  new_x,
Number  obj_factor,
Index  m,
const Number *lambda,
bool  new_lambda,
Index  nele_hess,
Index *iRow,
Index *jCol,
Number *values 
)
inlinevirtual

Method to request either the sparsity structure or the values of the Hessian of the Lagrangian.

The Hessian matrix that Ipopt uses is

\[ \sigma_f \nabla^2 f(x_k) + \sum_{i=1}^m\lambda_i\nabla^2 g_i(x_k) \]

for the given values for \(x\), \(\sigma_f\), and \(\lambda\). See Triplet Format for Sparse Matrices for a discussion of the sparse matrix format used in this method.

Parameters
n(in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
x(in) first call: NULL; later calls: the values for the primal variables \(x\) at which the Hessian is to be evaluated
new_x(in) false if any evaluation method (eval_*) was previously called with the same values in x, true otherwise; see also TNLP::eval_f
obj_factor(in) factor \(\sigma_f\) in front of the objective term in the Hessian
m(in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
lambda(in) the values for the constraint multipliers \(\lambda\) at which the Hessian is to be evaluated
new_lambda(in) false if any evaluation method was previously called with the same values in lambda, true otherwise
nele_hess(in) the number of nonzero elements in the Hessian; it will have the same value that was specified in TNLP::get_nlp_info
iRow(out) first call: array of length nele_hess to store the row indices of entries in the Hessian; later calls: NULL
jCol(out) first call: array of length nele_hess to store the column indices of entries in the Hessian; later calls: NULL
values(out) first call: NULL; later calls: array of length nele_hess to store the values of the entries in the Hessian
Returns
true if success, false otherwise.
Note
The arrays iRow and jCol only need to be filled once. If the iRow and jCol arguments are not NULL (first call to this function), then Ipopt expects that the sparsity structure of the Hessian (the row and column indices only) are written into iRow and jCol. At this call, the arguments x, lambda, and values will be NULL. If the arguments x, lambda, and values are not NULL, then Ipopt expects that the value of the Hessian as calculated from arrays x and lambda are stored in array values (using the same order as used when specifying the sparsity structure). At this call, the arguments iRow and jCol will be NULL.
Attention
As this matrix is symmetric, Ipopt expects that only the lower diagonal entries are specified.

A default implementation is provided, in case the user wants to use quasi-Newton approximations to estimate the second derivatives and therefore does not need to implement this method.

Reimplemented in Ipopt::AmplTNLP, Ipopt::StdInterfaceTNLP, and Ipopt::TNLPReducer.

Definition at line 472 of file IpTNLP.hpp.
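For the hypothetical ExampleTNLP, \(\sigma_f \nabla^2 f + \lambda_0 \nabla^2 g_0 = \sigma_f \,\mathrm{diag}(2,2) + \lambda_0 \begin{pmatrix}0&1\\1&0\end{pmatrix}\); only the lower triangle is reported:

    bool ExampleTNLP::eval_h(Index n, const Number* x, bool new_x, Number obj_factor,
                             Index m, const Number* lambda, bool new_lambda,
                             Index nele_hess, Index* iRow, Index* jCol, Number* values)
    {
       if( values == NULL )
       {
          // first call: lower-triangular sparsity structure of the symmetric Hessian
          iRow[0] = 0; jCol[0] = 0;
          iRow[1] = 1; jCol[1] = 0;
          iRow[2] = 1; jCol[2] = 1;
       }
       else
       {
          // later calls: values in the same order as the structure above
          values[0] = 2.0 * obj_factor;   // d^2 L / dx0^2
          values[1] = lambda[0];          // d^2 L / (dx1 dx0), from the constraint x0*x1
          values[2] = 2.0 * obj_factor;   // d^2 L / dx1^2
       }
       return true;
    }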

◆ get_number_of_nonlinear_variables()

virtual Index Ipopt::TNLP::get_number_of_nonlinear_variables ( )
inlinevirtual

Return the number of variables that appear nonlinearly in the objective function or in at least one constraint function.

If -1 is returned as the number of nonlinear variables, Ipopt assumes that all variables are nonlinear. Otherwise, it calls get_list_of_nonlinear_variables with an array into which the indices of the nonlinear variables should be written; the array has length num_nonlin_vars, which is identical to the return value of get_number_of_nonlinear_variables(). It is assumed that the indices are counted starting with 1 for FORTRAN_STYLE and with 0 for C_STYLE.

The default implementation returns -1, i.e., all variables are assumed to be nonlinear.

Reimplemented in Ipopt::AmplTNLP, and Ipopt::TNLPReducer.

Definition at line 526 of file IpTNLP.hpp.

◆ get_list_of_nonlinear_variables()

virtual bool Ipopt::TNLP::get_list_of_nonlinear_variables ( Index  num_nonlin_vars,
Index *pos_nonlin_vars 
)
inlinevirtual

Return the indices of all nonlinear variables.

This method is called only if the limited-memory quasi-Newton option is used and get_number_of_nonlinear_variables() returned a positive number. This number is provided in the parameter num_nonlin_vars.

The method must store the indices of all nonlinear variables in pos_nonlin_vars, where the numbering starts with 0 or 1, depending on the numbering style determined in get_nlp_info.

Reimplemented in Ipopt::AmplTNLP, and Ipopt::TNLPReducer.

Definition at line 543 of file IpTNLP.hpp.
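A sketch for the hypothetical ExampleTNLP, where both variables are nonlinear (these overrides matter only with the limited-memory quasi-Newton option):

    Index ExampleTNLP::get_number_of_nonlinear_variables()
    {
       return 2;   // both x0 and x1 appear nonlinearly
    }

    bool ExampleTNLP::get_list_of_nonlinear_variables(Index num_nonlin_vars,
                                                      Index* pos_nonlin_vars)
    {
       // indices follow the style chosen in get_nlp_info (C_STYLE: 0-based)
       pos_nonlin_vars[0] = 0;
       pos_nonlin_vars[1] = 1;
       return true;
    }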

◆ finalize_solution()

virtual void Ipopt::TNLP::finalize_solution ( SolverReturn  status,
Index  n,
const Number *x,
const Number *z_L,
const Number *z_U,
Index  m,
const Number *g,
const Number *lambda,
Number  obj_value,
const IpoptData *ip_data,
IpoptCalculatedQuantities *ip_cq 
)
pure virtual

This method is called when the algorithm has finished (successfully or not) so the TNLP can digest the outcome, e.g., store/write the solution, if any.

Parameters
status(in) gives the status of the algorithm
  • SUCCESS: Algorithm terminated successfully at a locally optimal point, satisfying the convergence tolerances (can be specified by options).
  • MAXITER_EXCEEDED: Maximum number of iterations exceeded (can be specified by an option).
  • CPUTIME_EXCEEDED: Maximum number of CPU seconds exceeded (can be specified by an option).
  • STOP_AT_TINY_STEP: Algorithm proceeds with very little progress.
  • STOP_AT_ACCEPTABLE_POINT: Algorithm stopped at a point that was converged, not to "desired" tolerances, but to "acceptable" tolerances (see the acceptable-... options).
  • LOCAL_INFEASIBILITY: Algorithm converged to a point of local infeasibility. Problem may be infeasible.
  • USER_REQUESTED_STOP: The user call-back function TNLP::intermediate_callback returned false, i.e., the user code requested a premature termination of the optimization.
  • DIVERGING_ITERATES: It seems that the iterates diverge.
  • RESTORATION_FAILURE: Restoration phase failed, algorithm doesn't know how to proceed.
  • ERROR_IN_STEP_COMPUTATION: An unrecoverable error occurred while Ipopt tried to compute the search direction.
  • INVALID_NUMBER_DETECTED: Algorithm received an invalid number (such as NaN or Inf) from the NLP (see also option check_derivatives_for_nan_inf).
  • INTERNAL_ERROR: An unknown internal error occurred.
n(in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
x(in) the final values for the primal variables
z_L(in) the final values for the lower bound multipliers
z_U(in) the final values for the upper bound multipliers
m(in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info
g(in) the final values of the constraint functions
lambda(in) the final values of the constraint multipliers
obj_value(in) the final value of the objective function
ip_data(in) provided for expert users
ip_cq(in) provided for expert users

Implemented in Ipopt::SensAmplTNLP, Ipopt::AmplTNLP, Ipopt::StdInterfaceTNLP, and Ipopt::TNLPReducer.
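As a sketch, the hypothetical ExampleTNLP might simply print the result:

    #include <iostream>

    void ExampleTNLP::finalize_solution(SolverReturn status, Index n, const Number* x,
       const Number* z_L, const Number* z_U, Index m, const Number* g,
       const Number* lambda, Number obj_value,
       const IpoptData* ip_data, IpoptCalculatedQuantities* ip_cq)
    {
       if( status == SUCCESS || status == STOP_AT_ACCEPTABLE_POINT )
       {
          std::cout << "f(x*) = " << obj_value << std::endl;
          for( Index i = 0; i < n; i++ )
             std::cout << "x[" << i << "] = " << x[i] << std::endl;
       }
       else
       {
          std::cout << "Ipopt did not find a locally optimal point (status "
                    << status << ")" << std::endl;
       }
    }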

◆ finalize_metadata()

virtual void Ipopt::TNLP::finalize_metadata ( Index  n,
const StringMetaDataMapType &var_string_md,
const IntegerMetaDataMapType &var_integer_md,
const NumericMetaDataMapType &var_numeric_md,
Index  m,
const StringMetaDataMapType &con_string_md,
const IntegerMetaDataMapType &con_integer_md,
const NumericMetaDataMapType &con_numeric_md 
)
inlinevirtual

This method returns any metadata collected during the run of the algorithm.

This method is called just before finalize_solution is called. The returned data includes the metadata provided by TNLP::get_var_con_metadata. Each piece of metadata can be of type string, integer, or numeric. It can be associated with either the variables or the constraints. The metadata that was associated with the primal variable vector is stored in var_..._md. The metadata associated with the constraint multipliers is stored in con_..._md. The metadata associated with the bound multipliers is stored in var_..._md, with the suffixes "_z_L" and "_z_U" denoting lower and upper bounds, respectively.

If the user doesn't overload this method in her implementation of the class derived from TNLP, the default implementation does nothing.

Reimplemented in Ipopt::SensAmplTNLP.

Definition at line 619 of file IpTNLP.hpp.

◆ intermediate_callback()

virtual bool Ipopt::TNLP::intermediate_callback ( AlgorithmMode  mode,
Index  iter,
Number  obj_value,
Number  inf_pr,
Number  inf_du,
Number  mu,
Number  d_norm,
Number  regularization_size,
Number  alpha_du,
Number  alpha_pr,
Index  ls_trials,
const IpoptData *ip_data,
IpoptCalculatedQuantities *ip_cq 
)
inlinevirtual

Intermediate Callback method for the user.

This method is called once per iteration (during the convergence check), and can be used to obtain information about the optimization status while Ipopt solves the problem, and also to request a premature termination.

The information provided by the entities in the argument list corresponds to what Ipopt prints in the iteration summary (see also Ipopt Output), except for inf_pr, which by default corresponds to the original problem in the log but to the scaled internal problem in this callback. Further information can be obtained from the ip_data and ip_cq objects. The current iterate and violations of feasibility and optimality can be accessed via the methods Ipopt::TNLP::get_curr_iterate() and Ipopt::TNLP::get_curr_violations(). These methods translate values for the internal representation of the problem from the ip_data and ip_cq objects into the TNLP representation.

Returns
If this method returns false, Ipopt will terminate with the User_Requested_Stop status.

It is not required to implement (overload) this method. The default implementation always returns true.

Reimplemented in Ipopt::StdInterfaceTNLP, and Ipopt::TNLPReducer.

Definition at line 665 of file IpTNLP.hpp.
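A sketch: the hypothetical ExampleTNLP could log some of the iteration quantities and request termination after a fixed number of iterations:

    #include <iostream>

    bool ExampleTNLP::intermediate_callback(AlgorithmMode mode, Index iter,
       Number obj_value, Number inf_pr, Number inf_du, Number mu, Number d_norm,
       Number regularization_size, Number alpha_du, Number alpha_pr, Index ls_trials,
       const IpoptData* ip_data, IpoptCalculatedQuantities* ip_cq)
    {
       std::cout << "iteration " << iter << ": objective " << obj_value
                 << ", primal infeasibility " << inf_pr << std::endl;
       // returning false makes Ipopt terminate with status USER_REQUESTED_STOP
       return iter < 100;
    }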

◆ get_curr_iterate()

bool Ipopt::TNLP::get_curr_iterate ( const IpoptData *ip_data,
IpoptCalculatedQuantities *ip_cq,
bool  scaled,
Index  n,
Number *x,
Number *z_L,
Number *z_U,
Index  m,
Number *g,
Number *lambda 
) const

Get primal and dual variable values of the current iterate.

This method can be used to get the values of the current iterate, e.g., during intermediate_callback(). The method expects the number of variables (dimension of x), number of constraints (dimension of g(x)), and allocated arrays of appropriate lengths as input.

The method translates the x(), c(), d(), y_c(), y_d(), z_L(), and z_U() vectors from IpoptData::curr() of the internal NLP representation into the form used by the TNLP. For the correspondence between scaled and unscaled solutions, see the detailed description of OrigIpoptNLP. If Ipopt is in restoration mode, it maps the current iterate of restoration NLP (see RestoIpoptNLP) back to the original TNLP.

If there are fixed variables and fixed_variable_treatment=make_parameter, then requesting z_L and z_U can trigger a reevaluation of the gradient of the objective function and the Jacobian of the constraint functions.

Parameters
ip_data(in) Ipopt Data
ip_cq(in) Ipopt Calculated Quantities
scaled(in) whether to retrieve scaled or unscaled iterate
n(in) the number of variables \(x\) in the problem; can be arbitrary if skipping x, z_L, and z_U
x(out) buffer to store value of primal variables \(x\), must have length at least n; pass NULL to skip retrieving x
z_L(out) buffer to store the lower bound multipliers \(z_L\), must have length at least n; pass NULL to skip retrieving z_L and z_U
z_U(out) buffer to store the upper bound multipliers \(z_U\), must have length at least n; pass NULL to skip retrieving z_U
m(in) the number of constraints \(g(x)\); can be arbitrary if skipping g and lambda
g(out) buffer to store the constraint values \(g(x)\), must have length at least m; pass NULL to skip retrieving g
lambda(out) buffer to store the constraint multipliers \(\lambda\), must have length at least m; pass NULL to skip retrieving lambda
Returns
Whether Ipopt has successfully filled the given arrays
Since
3.14.0
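For illustration, a hypothetical fragment inside ExampleTNLP::intermediate_callback() that retrieves the unscaled primal iterate and the constraint multiplier of the toy problem (all other outputs are skipped by passing NULL):

    Number x[2];        // n = 2 entries
    Number lambda[1];   // m = 1 entry
    if( get_curr_iterate(ip_data, ip_cq, /*scaled=*/false, /*n=*/2,
                         x, NULL, NULL, /*m=*/1, NULL, lambda) )
    {
       std::cout << "current x = (" << x[0] << ", " << x[1]
                 << "), lambda = " << lambda[0] << std::endl;
    }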

◆ get_curr_violations()

bool Ipopt::TNLP::get_curr_violations ( const IpoptData *ip_data,
IpoptCalculatedQuantities *ip_cq,
bool  scaled,
Index  n,
Number *x_L_violation,
Number *x_U_violation,
Number *compl_x_L,
Number *compl_x_U,
Number *grad_lag_x,
Index  m,
Number *nlp_constraint_violation,
Number *compl_g 
) const

Get primal and dual infeasibility of the current iterate.

This method can be used to get the violations of constraints and optimality conditions at the current iterate, e.g., during intermediate_callback(). The method expects the number of variables (dimension of x), number of constraints (dimension of g(x)), and allocated arrays of appropriate lengths as input.

The method makes the vectors behind (unscaled_)curr_orig_bounds_violation(), (unscaled_)curr_nlp_constraint_violation(), (unscaled_)curr_dual_infeasibility(), and (unscaled_)curr_complementarity() from IpoptCalculatedQuantities of the internal NLP representation available in the form used by the TNLP. If Ipopt is in restoration mode, it maps the current iterate of the restoration NLP (see RestoIpoptNLP) back to the original TNLP.

Note
If in restoration phase, then requesting grad_lag_x can trigger a call to eval_grad_f().
Ipopt by default relaxes variable bounds (option bound_relax_factor > 0.0). x_L_violation and x_U_violation report the violation of a solution w.r.t. the original unrelaxed bounds. However, compl_x_L and compl_x_U use the relaxed variable bounds to calculate the complementarity.
Parameters
ip_data(in) Ipopt Data
ip_cq(in) Ipopt Calculated Quantities
scaled(in) whether to retrieve scaled or unscaled violations
n(in) the number of variables \(x\) in the problem; can be arbitrary if skipping compl_x_L, compl_x_U, and grad_lag_x
x_L_violation(out) buffer to store violation of original lower bounds on variables (max(orig_x_L-x,0)), must have length at least n; pass NULL to skip retrieving orig_x_L
x_U_violation(out) buffer to store violation of original upper bounds on variables (max(x-orig_x_U,0)), must have length at least n; pass NULL to skip retrieving orig_x_U
compl_x_L(out) buffer to store violation of complementarity for lower bounds on variables ( \((x-x_L)z_L\)), must have length at least n; pass NULL to skip retrieving compl_x_L
compl_x_U(out) buffer to store violation of complementarity for upper bounds on variables ( \((x_U-x)z_U\)), must have length at least n; pass NULL to skip retrieving compl_x_U
grad_lag_x(out) buffer to store gradient of Lagrangian w.r.t. variables \(x\), must have length at least n; pass NULL to skip retrieving grad_lag_x
m(in) the number of constraints \(g(x)\); can be arbitrary if skipping lambda
nlp_constraint_violation(out) buffer to store violation of constraints \(max(g_l-g(x),g(x)-g_u,0)\), must have length at least m; pass NULL to skip retrieving constraint_violation
compl_g(out) buffer to store violation of complementarity of constraint ( \((g(x)-g_l)*\lambda^+ + (g_l-g(x))*\lambda^-\), where \(\lambda^+=max(0,\lambda)\) and \(\lambda^-=max(0,-\lambda)\) (componentwise)), must have length at least m; pass NULL to skip retrieving compl_g
Returns
Whether Ipopt has successfully filled the given arrays
Since
3.14.0
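For illustration, a hypothetical fragment inside ExampleTNLP::intermediate_callback() that queries only the unscaled constraint violation and the gradient of the Lagrangian (all other outputs skipped by passing NULL):

    Number grad_lag_x[2];   // n = 2 entries
    Number constr_viol;     // m = 1 entry
    if( get_curr_violations(ip_data, ip_cq, /*scaled=*/false, /*n=*/2,
                            NULL, NULL, NULL, NULL, grad_lag_x,
                            /*m=*/1, &constr_viol, NULL) )
    {
       // e.g., request termination once the unscaled constraint violation is tiny
       if( constr_viol < 1e-10 )
          return false;     // Ipopt stops with status USER_REQUESTED_STOP
    }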

◆ operator=()

void Ipopt::TNLP::operator= ( const TNLP & )
private

Default Assignment Operator.


The documentation for this class was generated from the following file: IpTNLP.hpp