Implementation of a TNLP for the Standard C interface. More...
#include <IpStdInterfaceTNLP.hpp>
Private Member Functions  
Default Compiler Generated Methods  
(Hidden to avoid implicit creation/calling.) These methods are declared private and never defined: the compiler cannot generate them implicitly, and any accidental use fails to compile.
StdInterfaceTNLP ()  
Default Constructor.  
StdInterfaceTNLP (const StdInterfaceTNLP &)  
Copy Constructor.  
void  operator= (const StdInterfaceTNLP &) 
Default Assignment Operator.  
Private Attributes  
SmartPtr< const Journalist >  jnlst_ 
Journalist.  
Number *  non_const_x_ 
A non-const copy of x; this is kept up to date in apply_new_x.
Information about the problem  
const Index  n_var_ 
Number of variables.  
const Index  n_con_ 
Number of constraints.  
const Number *  x_L_ 
Pointer to Number array containing lower bounds for variables.  
const Number *  x_U_ 
Pointer to Number array containing upper bounds for variables.  
const Number *  g_L_ 
Pointer to Number array containing lower bounds for constraints.  
const Number *  g_U_ 
Pointer to Number array containing upper bounds for constraints.  
const Index  nele_jac_ 
Number of nonzero elements in the constraint Jacobian.  
const Index  nele_hess_ 
Number of nonzero elements in the Hessian.  
const Index  index_style_ 
Starting value of the iRow and jCol parameters for matrices.  
const Number *  start_x_ 
Pointer to Number array containing starting point for variables.  
const Number *  start_lam_ 
Pointer to Number array containing starting values for constraint multipliers.  
const Number *  start_z_L_ 
Pointer to Number array containing starting values for lower bound multipliers.  
const Number *  start_z_U_ 
Pointer to Number array containing starting values for upper bound multipliers.  
Eval_F_CB  eval_f_ 
Pointer to callback function evaluating value of objective function.  
Eval_G_CB  eval_g_ 
Pointer to callback function evaluating value of constraints.  
Eval_Grad_F_CB  eval_grad_f_ 
Pointer to callback function evaluating gradient of objective function.  
Eval_Jac_G_CB  eval_jac_g_ 
Pointer to callback function evaluating Jacobian of constraints.  
Eval_H_CB  eval_h_ 
Pointer to callback function evaluating Hessian of Lagrangian.  
Intermediate_CB  intermediate_cb_ 
Pointer to intermediate callback function giving control to user.  
UserDataPtr  user_data_ 
Pointer to user data.  
Number  obj_scaling_ 
Objective scaling factor.  
const Number *  x_scaling_ 
Scaling factors for variables (if not NULL)  
const Number *  g_scaling_ 
Scaling factors for constraints (if not NULL)  
Pointers to the user provided vectors for solution  
Number *  x_sol_ 
Number *  z_L_sol_ 
Number *  z_U_sol_ 
Number *  g_sol_ 
Number *  lambda_sol_ 
Number *  obj_sol_ 
Temporary pointers to IpoptData and IpoptCalculatedQuantities  
For implementation of GetIpoptCurrentIterate() and GetIpoptCurrentViolations() (without API change).  
const IpoptData *  ip_data_ 
IpoptCalculatedQuantities *  ip_cq_ 
void  apply_new_x (bool new_x, Index n, const Number *x) 
Update the internal state if the x value changes.  
Additional Inherited Members  
Public Types inherited from Ipopt::TNLP  
enum  LinearityType { LINEAR , NON_LINEAR } 
Linearity types of variables and constraints. More...
enum  IndexStyleEnum { C_STYLE = 0 , FORTRAN_STYLE = 1 } 
typedef std::map< std::string, std::vector< std::string > >  StringMetaDataMapType 
typedef std::map< std::string, std::vector< Index > >  IntegerMetaDataMapType 
typedef std::map< std::string, std::vector< Number > >  NumericMetaDataMapType 
Implementation of a TNLP for the Standard C interface.
The standard C interface is exposed to the user as a single C function that is given the problem dimensions, starting points, and pointers to the functions that evaluate the objective function, constraints, and derivatives.
Definition at line 28 of file IpStdInterfaceTNLP.hpp.
Ipopt::StdInterfaceTNLP::StdInterfaceTNLP  (  Index  n_var, 
const Number *  x_L,  
const Number *  x_U,  
Index  n_con,  
const Number *  g_L,  
const Number *  g_U,  
Index  nele_jac,  
Index  nele_hess,  
Index  index_style,  
const Number *  start_x,  
const Number *  start_lam,  
const Number *  start_z_L,  
const Number *  start_z_U,  
Eval_F_CB  eval_f,  
Eval_G_CB  eval_g,  
Eval_Grad_F_CB  eval_grad_f,  
Eval_Jac_G_CB  eval_jac_g,  
Eval_H_CB  eval_h,  
Intermediate_CB  intermediate_cb,  
Number *  x_sol,  
Number *  z_L_sol,  
Number *  z_U_sol,  
Number *  g_sol,  
Number *  lam_sol,  
Number *  obj_sol,  
UserDataPtr  user_data,  
Number  obj_scaling = 1 , 

const Number *  x_scaling = NULL , 

const Number *  g_scaling = NULL 

) 
Constructor, given dimensions of problem, function pointers for evaluation callback functions, and starting points.
Note that the constructor does not make a copy of any of the Number arrays, i.e., it is up to the caller to keep them alive for the lifetime of this object.
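Because only the pointers are stored, a common pattern is to keep all arrays in a structure that outlives the solve. A minimal sketch (the `ProblemData` struct and its member names are illustrative, not part of the Ipopt API; `Number`/`Index` are mirrored locally to keep the sketch self-contained):

```cpp
#include <vector>

// Illustrative only: Ipopt defines Number/Index in its own headers;
// we mirror them here so the sketch compiles on its own.
typedef double Number;
typedef int Index;

// The caller owns the storage; only raw pointers are handed to the TNLP.
struct ProblemData
{
    std::vector<Number> x_L, x_U;   // variable bounds
    std::vector<Number> g_L, g_U;   // constraint bounds
    std::vector<Number> start_x;    // starting point

    ProblemData(Index n_var, Index n_con)
        : x_L(n_var), x_U(n_var), g_L(n_con), g_U(n_con), start_x(n_var)
    {}
};

// Pointers obtained via x_L.data() etc. stay valid exactly as long as
// the ProblemData instance does -- so it must outlive the solve call.
```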

virtual 
Default destructor.

private 
Default Constructor.

private 
Copy Constructor.

virtual 
Method to request the initial information about the problem.
Ipopt uses this information when allocating the arrays that it will later ask you to fill with values. Be careful in this method since incorrect values will cause memory bugs which may be very difficult to find.
n  (out) Storage for the number of variables \(x\) 
m  (out) Storage for the number of constraints \(g(x)\) 
nnz_jac_g  (out) Storage for the number of nonzero entries in the Jacobian 
nnz_h_lag  (out) Storage for the number of nonzero entries in the Hessian 
index_style  (out) Storage for the index style, the numbering style used for row/col entries in the sparse matrix format (TNLP::C_STYLE: 0-based, TNLP::FORTRAN_STYLE: 1-based; see also Triplet Format for Sparse Matrices) 
Implements Ipopt::TNLP.
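As a concrete illustration, for the classic hs071 example problem shipped with Ipopt (4 variables, 2 constraints, a dense 2x4 Jacobian, and a dense lower-triangular 4x4 Hessian), the counts this method reports would be as follows (a sketch, with `Index` mirrored locally):

```cpp
typedef int Index;

// Counts get_nlp_info would report for the hs071 example problem:
// 4 variables, 2 constraints, a dense 2x4 Jacobian (8 nonzeros), and
// the lower triangle of a symmetric 4x4 Hessian (10 nonzeros).
void hs071_nlp_info(Index& n, Index& m, Index& nnz_jac_g, Index& nnz_h_lag)
{
    n = 4;
    m = 2;
    nnz_jac_g = 2 * 4;           // every constraint depends on every variable
    nnz_h_lag = 4 * (4 + 1) / 2; // lower triangle of a symmetric 4x4 matrix
}
```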

virtual 
Method to request bounds on the variables and constraints.
n  (in) the number of variables \(x\) in the problem 
x_l  (out) the lower bounds \(x^L\) for the variables \(x\) 
x_u  (out) the upper bounds \(x^U\) for the variables \(x\) 
m  (in) the number of constraints \(g(x)\) in the problem 
g_l  (out) the lower bounds \(g^L\) for the constraints \(g(x)\) 
g_u  (out) the upper bounds \(g^U\) for the constraints \(g(x)\) 
The values of n and m that were specified in TNLP::get_nlp_info are passed here for debug checking. Setting a lower bound to a value less than or equal to the value of the option nlp_lower_bound_inf will cause Ipopt to assume no lower bound. Likewise, setting an upper bound to a value greater than or equal to the value of the option nlp_upper_bound_inf will cause Ipopt to assume no upper bound. These options default to -10^{19} and 10^{19}, respectively, but may be changed.
Implements Ipopt::TNLP.
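The +/-10^{19} "no bound" convention can be sketched for a toy two-variable problem (the problem itself and the function name are illustrative; `Number`/`Index` are mirrored locally):

```cpp
typedef double Number;
typedef int Index;

// Anything at or beyond +/-1e19 is treated as "no bound" under the
// default settings of nlp_lower_bound_inf / nlp_upper_bound_inf.
const Number NO_LOWER_BOUND = -1e19;
const Number NO_UPPER_BOUND = +1e19;

// Toy problem: x_0 in [0, 5], x_1 free; one inequality g_0(x) >= 1.
void fill_bounds(Index n, Number* x_l, Number* x_u,
                 Index m, Number* g_l, Number* g_u)
{
    x_l[0] = 0.0;             x_u[0] = 5.0;
    x_l[1] = NO_LOWER_BOUND;  x_u[1] = NO_UPPER_BOUND;  // x_1 unbounded
    g_l[0] = 1.0;             g_u[0] = NO_UPPER_BOUND;  // g_0(x) >= 1
    (void)n; (void)m;  // sizes would be checked against get_nlp_info
}
```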

virtual 
Method to request scaling parameters.
This is only called if the options are set to retrieve user scaling, that is, if nlp_scaling_method is chosen as "user-scaling". The method should provide scaling factors for the objective function as well as for the optimization variables and/or constraints. The return value should be true, unless an error occurred and the program is to be aborted.
The value returned in obj_scaling determines how Ipopt should internally scale the objective function. For example, if this number is chosen to be 10, then Ipopt internally solves an optimization problem whose objective is 10 times the original objective provided by the TNLP. In particular, if this value is negative, then Ipopt will maximize the objective function instead of minimizing it.
The scaling factors for the variables can be returned in x_scaling, which has the same length as x in the other TNLP methods, and the factors are ordered like x. use_x_scaling needs to be set to true if Ipopt should scale the variables; if it is false, no internal scaling of the variables is done. Similarly, the scaling factors for the constraints can be returned in g_scaling, and this scaling is activated by setting use_g_scaling to true.
As a guideline, we suggest to scale the optimization problem (either directly in the original formulation, or after using scaling factors) so that all sensitivities, i.e., all nonzero first partial derivatives, are typically of the order 0.1 to 10.
Reimplemented from Ipopt::TNLP.
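The effect of the scaling factors can be sketched in two lines: internally Ipopt works with obj_scaling * f(x) and g_scaling[i] * g_i(x), so a negative obj_scaling flips minimization into maximization (function names are illustrative; `Number` is mirrored locally):

```cpp
typedef double Number;

// Sketch of how the user scaling factors act on the internal problem.
Number scaled_objective(Number obj_scaling, Number f_value)
{
    // obj_scaling = -1 makes Ipopt maximize the original objective.
    return obj_scaling * f_value;
}

Number scaled_constraint(Number g_scaling_i, Number g_value)
{
    return g_scaling_i * g_value;
}
```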

virtual 
Method to request the starting point before iterating.
n  (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
init_x  (in) if true, this method must provide an initial value for \(x\) 
x  (out) the initial values for the primal variables \(x\) 
init_z  (in) if true, this method must provide an initial value for the bound multipliers \(z^L\) and \(z^U\) 
z_L  (out) the initial values for the bound multipliers \(z^L\) 
z_U  (out) the initial values for the bound multipliers \(z^U\) 
m  (in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
init_lambda  (in) if true, this method must provide an initial value for the constraint multipliers \(\lambda\) 
lambda  (out) the initial values for the constraint multipliers, \(\lambda\) 
The boolean arguments indicate whether the algorithm requires x, z_L/z_U, and lambda to be initialized, respectively. If, for some reason, the algorithm requires an initialization that cannot be provided, return false; Ipopt will then stop. The default options require initial values only for the primal variables \(x\).
Note that the initial values for bound multiplier components corresponding to absent bounds ( \(x^L_i=-\infty\) or \(x^U_i=\infty\)) are ignored.
Implements Ipopt::TNLP.

virtual 
Method to request the value of the objective function.
n  (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
x  (in) the values for the primal variables \(x\) at which the objective function \(f(x)\) is to be evaluated 
new_x  (in) false if any evaluation method (eval_* ) was previously called with the same values in x, true otherwise. This can be helpful when users have efficient implementations that calculate multiple outputs at once. Ipopt internally caches results from the TNLP and generally, this flag can be ignored. 
obj_value  (out) storage for the value of the objective function \(f(x)\) 
Implements Ipopt::TNLP.
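For the hs071 example problem, the objective is f(x) = x0*x3*(x0+x1+x2) + x2, and an evaluation routine in the spirit of this method looks as follows (a self-contained sketch; the real TNLP override also takes the new_x flag and writes through an out-parameter):

```cpp
typedef double Number;
typedef int Index;

// Objective of the hs071 example: f(x) = x0*x3*(x0 + x1 + x2) + x2.
// Returns false to signal an evaluation error, as eval_f does.
bool hs071_eval_f(Index n, const Number* x, Number& obj_value)
{
    if (n != 4)
        return false;
    obj_value = x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2];
    return true;
}
```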

virtual 
Method to request the gradient of the objective function.
n  (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
x  (in) the values for the primal variables \(x\) at which the gradient \(\nabla f(x)\) is to be evaluated 
new_x  (in) false if any evaluation method (eval_* ) was previously called with the same values in x, true otherwise; see also TNLP::eval_f 
grad_f  (out) array to store values of the gradient of the objective function \(\nabla f(x)\). The gradient array is in the same order as the \(x\) variables (i.e., the gradient of the objective with respect to x[2] should be put in grad_f[2] ). 
Implements Ipopt::TNLP.

virtual 
Method to request the constraint values.
n  (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
x  (in) the values for the primal variables \(x\) at which the constraint functions \(g(x)\) are to be evaluated 
new_x  (in) false if any evaluation method (eval_* ) was previously called with the same values in x, true otherwise; see also TNLP::eval_f 
m  (in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
g  (out) array to store constraint function values \(g(x)\), do not add or subtract the bound values \(g^L\) or \(g^U\). 
Implements Ipopt::TNLP.

virtual 
Method to request either the sparsity structure or the values of the Jacobian of the constraints.
The Jacobian is the matrix of derivatives where the derivative of constraint function \(g_i\) with respect to variable \(x_j\) is placed in row \(i\) and column \(j\). See Triplet Format for Sparse Matrices for a discussion of the sparse matrix format used in this method.
n  (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
x  (in) first call: NULL; later calls: the values for the primal variables \(x\) at which the constraint Jacobian \(\nabla g(x)^T\) is to be evaluated 
new_x  (in) false if any evaluation method (eval_* ) was previously called with the same values in x, true otherwise; see also TNLP::eval_f 
m  (in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
nele_jac  (in) the number of nonzero elements in the Jacobian; it will have the same value that was specified in TNLP::get_nlp_info 
iRow  (out) first call: array of length nele_jac to store the row indices of entries in the Jacobian of the constraints; later calls: NULL 
jCol  (out) first call: array of length nele_jac to store the column indices of entries in the Jacobian of the constraints; later calls: NULL 
values  (out) first call: NULL; later calls: array of length nele_jac to store the values of the entries in the Jacobian of the constraints 
If the arguments iRow and jCol are not NULL (first call to this function), then Ipopt expects that the sparsity structure of the Jacobian (the row and column indices only) is written into iRow and jCol. At this call, the arguments x and values will be NULL. If the arguments x and values are not NULL, then Ipopt expects that the value of the Jacobian as calculated from array x is stored in array values (using the same order as used when specifying the sparsity structure). At this call, the arguments iRow and jCol will be NULL.
Implements Ipopt::TNLP.

virtual 
Method to request either the sparsity structure or the values of the Hessian of the Lagrangian.
The Hessian matrix that Ipopt uses is
\[ \sigma_f \nabla^2 f(x_k) + \sum_{i=1}^m\lambda_i\nabla^2 g_i(x_k) \]
for the given values for \(x\), \(\sigma_f\), and \(\lambda\). See Triplet Format for Sparse Matrices for a discussion of the sparse matrix format used in this method.
n  (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
x  (in) first call: NULL; later calls: the values for the primal variables \(x\) at which the Hessian is to be evaluated 
new_x  (in) false if any evaluation method (eval_* ) was previously called with the same values in x, true otherwise; see also TNLP::eval_f 
obj_factor  (in) factor \(\sigma_f\) in front of the objective term in the Hessian 
m  (in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
lambda  (in) the values for the constraint multipliers \(\lambda\) at which the Hessian is to be evaluated 
new_lambda  (in) false if any evaluation method was previously called with the same values in lambda, true otherwise 
nele_hess  (in) the number of nonzero elements in the Hessian; it will have the same value that was specified in TNLP::get_nlp_info 
iRow  (out) first call: array of length nele_hess to store the row indices of entries in the Hessian; later calls: NULL 
jCol  (out) first call: array of length nele_hess to store the column indices of entries in the Hessian; later calls: NULL 
values  (out) first call: NULL; later calls: array of length nele_hess to store the values of the entries in the Hessian 
If the arguments iRow and jCol are not NULL (first call to this function), then Ipopt expects that the sparsity structure of the Hessian (the row and column indices only) is written into iRow and jCol. At this call, the arguments x, lambda, and values will be NULL. If the arguments x, lambda, and values are not NULL, then Ipopt expects that the value of the Hessian as calculated from arrays x and lambda is stored in array values (using the same order as used when specifying the sparsity structure). At this call, the arguments iRow and jCol will be NULL.
A default implementation is provided for the case where the user wants to use quasi-Newton approximations to estimate the second derivatives and thus does not need to implement this method.
Reimplemented from Ipopt::TNLP.
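The assembly of \(\sigma_f \nabla^2 f + \sum_i \lambda_i \nabla^2 g_i\) over the lower triangle can be sketched for a tiny two-variable problem with f(x) = x0^2 + x0*x1 and one constraint g0(x) = x0*x1 (the problem and function name are illustrative, chosen so the Hessians are constant):

```cpp
typedef double Number;

// Lagrangian Hessian sigma_f * H_f + lambda_0 * H_g for the toy problem
// f(x) = x0^2 + x0*x1, g0(x) = x0*x1. Only the lower triangle is stored,
// as triplet entries in the fixed order (0,0), (1,0), (1,1).
void tiny_eval_h(Number obj_factor, const Number* lambda, Number* values)
{
    // Hessian of f: [[2, 1], [1, 0]];  Hessian of g0: [[0, 1], [1, 0]].
    values[0] = obj_factor * 2.0;                    // entry (0,0)
    values[1] = obj_factor * 1.0 + lambda[0] * 1.0;  // entry (1,0)
    values[2] = 0.0;                                 // entry (1,1)
}
```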

virtual 
Intermediate Callback method for the user.
This method is called once per iteration (during the convergence check), and can be used to obtain information about the optimization status while Ipopt solves the problem, and also to request a premature termination.
The information provided by the entities in the argument list correspond to what Ipopt prints in the iteration summary (see also Ipopt Output), except for inf_pr, which by default corresponds to the original problem in the log but to the scaled internal problem in this callback. Further information can be obtained from the ip_data and ip_cq objects. The current iterate and violations of feasibility and optimality can be accessed via the methods Ipopt::TNLP::get_curr_iterate() and Ipopt::TNLP::get_curr_violations(). These methods translate values for the internal representation of the problem from the ip_data and ip_cq objects into the TNLP representation.
It is not required to implement (overload) this method. The default implementation always returns true.
Reimplemented from Ipopt::TNLP.
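The termination protocol (returning false requests that Ipopt stop) can be sketched with a toy driver loop that honors the callback's return value (all names here are illustrative, not Ipopt API):

```cpp
typedef int Index;

// Toy stand-in for the intermediate callback: keep going while the
// iteration count is below a user limit; false requests termination.
bool keep_iterating(Index iter, Index max_iter)
{
    return iter < max_iter;
}

// Toy driver loop: calls the callback once per "iteration" and stops
// as soon as it returns false, as Ipopt does.
Index run_toy_loop(Index max_iter)
{
    Index iter = 0;
    while (keep_iterating(iter, max_iter))
        ++iter;
    return iter;  // iterations completed before the callback said stop
}
```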

virtual 
This method is called when the algorithm has finished (successfully or not) so the TNLP can digest the outcome, e.g., store/write the solution, if any.
status  (in) gives the status of the algorithm

n  (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
x  (in) the final values for the primal variables 
z_L  (in) the final values for the lower bound multipliers 
z_U  (in) the final values for the upper bound multipliers 
m  (in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info 
g  (in) the final values of the constraint functions 
lambda  (in) the final values of the constraint multipliers 
obj_value  (in) the final value of the objective function 
ip_data  (in) provided for expert users 
ip_cq  (in) provided for expert users 
Implements Ipopt::TNLP.
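In this class, the method's job is essentially to copy the final iterate into the user-provided solution buffers (x_sol_, obj_sol_, etc.). A simplified sketch of that copy-out step, guarding against NULL destinations (the function and parameter names are illustrative):

```cpp
#include <cstring>

typedef double Number;
typedef int Index;

// Simplified copy-out of the final solution into caller-owned buffers,
// skipping any buffer the caller did not provide.
void store_solution(Index n, const Number* x, Number* x_sol,
                    Number obj_value, Number* obj_sol)
{
    if (x_sol != NULL)
        std::memcpy(x_sol, x, n * sizeof(Number));
    if (obj_sol != NULL)
        *obj_sol = obj_value;
}
```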

inline 
get_curr_iterate() to be called by GetIpoptCurrentIterate()
Definition at line 202 of file IpStdInterfaceTNLP.hpp.

inline 
get_curr_violations() to be called by GetIpoptCurrentViolations()
Definition at line 218 of file IpStdInterfaceTNLP.hpp.
Update the internal state if the x value changes.

private 
Default Assignment Operator.

private 
Definition at line 236 of file IpStdInterfaceTNLP.hpp.
Number of variables.
Definition at line 241 of file IpStdInterfaceTNLP.hpp.
Number of constraints.
Definition at line 243 of file IpStdInterfaceTNLP.hpp.
Pointer to Number array containing lower bounds for variables.
Definition at line 245 of file IpStdInterfaceTNLP.hpp.
Pointer to Number array containing upper bounds for variables.
Definition at line 247 of file IpStdInterfaceTNLP.hpp.
Pointer to Number array containing lower bounds for constraints.
Definition at line 249 of file IpStdInterfaceTNLP.hpp.
Pointer to Number array containing upper bounds for constraints.
Definition at line 251 of file IpStdInterfaceTNLP.hpp.
Number of nonzero elements in the constraint Jacobian.
Definition at line 253 of file IpStdInterfaceTNLP.hpp.
Number of nonzero elements in the Hessian.
Definition at line 255 of file IpStdInterfaceTNLP.hpp.
Starting value of the iRow and jCol parameters for matrices.
Definition at line 257 of file IpStdInterfaceTNLP.hpp.
Pointer to Number array containing starting point for variables.
Definition at line 259 of file IpStdInterfaceTNLP.hpp.
Pointer to Number array containing starting values for constraint multipliers.
Definition at line 261 of file IpStdInterfaceTNLP.hpp.
Pointer to Number array containing starting values for lower bound multipliers.
Definition at line 263 of file IpStdInterfaceTNLP.hpp.
Pointer to Number array containing starting values for upper bound multipliers.
Definition at line 265 of file IpStdInterfaceTNLP.hpp.

private 
Pointer to callback function evaluating value of objective function.
Definition at line 267 of file IpStdInterfaceTNLP.hpp.

private 
Pointer to callback function evaluating value of constraints.
Definition at line 269 of file IpStdInterfaceTNLP.hpp.

private 
Pointer to callback function evaluating gradient of objective function.
Definition at line 271 of file IpStdInterfaceTNLP.hpp.

private 
Pointer to callback function evaluating Jacobian of constraints.
Definition at line 273 of file IpStdInterfaceTNLP.hpp.

private 
Pointer to callback function evaluating Hessian of Lagrangian.
Definition at line 275 of file IpStdInterfaceTNLP.hpp.

private 
Pointer to intermediate callback function giving control to user.
Definition at line 277 of file IpStdInterfaceTNLP.hpp.

private 
Pointer to user data.
Definition at line 279 of file IpStdInterfaceTNLP.hpp.

private 
Objective scaling factor.
Definition at line 281 of file IpStdInterfaceTNLP.hpp.
Scaling factors for variables (if not NULL)
Definition at line 283 of file IpStdInterfaceTNLP.hpp.
Scaling factors for constraints (if not NULL)
Definition at line 285 of file IpStdInterfaceTNLP.hpp.

private 
A non-const copy of x; this is kept up to date in apply_new_x.
Definition at line 289 of file IpStdInterfaceTNLP.hpp.

private 
Definition at line 293 of file IpStdInterfaceTNLP.hpp.

private 
Definition at line 294 of file IpStdInterfaceTNLP.hpp.

private 
Definition at line 295 of file IpStdInterfaceTNLP.hpp.

private 
Definition at line 296 of file IpStdInterfaceTNLP.hpp.

private 
Definition at line 297 of file IpStdInterfaceTNLP.hpp.

private 
Definition at line 298 of file IpStdInterfaceTNLP.hpp.
Definition at line 305 of file IpStdInterfaceTNLP.hpp.

private 
Definition at line 306 of file IpStdInterfaceTNLP.hpp.