Ampl Interface, implemented as a TNLP.
#include <AmplTNLP.hpp>
Protected Attributes

SmartPtr<const Journalist> jnlst_
    Journalist.
ASL_pfgh* asl_
    pointer to the main ASL structure
Number obj_sign_
    Sign of the objective fn (1 for min, -1 for max)
void* Oinfo_ptr_
    Pointer to the Oinfo structure.
void* nerror_
    nerror flag passed to ampl calls; set to NULL to halt on error
SmartPtr<AmplSuffixHandler> suffix_handler_
    Suffix Handler.
StringMetaDataMapType var_string_md_
    meta data to pass on to TNLP
IntegerMetaDataMapType var_integer_md_
NumericMetaDataMapType var_numeric_md_
StringMetaDataMapType con_string_md_
IntegerMetaDataMapType con_integer_md_
NumericMetaDataMapType con_numeric_md_

Problem Size Data

Index nz_h_full_
    number of nonzeros in the full_x Hessian

Internal copies of solution vectors

Number* x_sol_
Number* z_L_sol_
Number* z_U_sol_
Number* g_sol_
Number* lambda_sol_
Number obj_sol_

Flags to track internal state

bool objval_called_with_current_x_
    whether the objective value has been calculated with the current x
bool conval_called_with_current_x_
    whether the constraint values have been calculated with the current x; set to false in apply_new_x, and set to true in internal_conval
bool hesset_called_
    whether we have called hesset
bool set_active_objective_called_
    whether set_active_objective has been called

Private Member Functions

void gutsOfConstructor(const SmartPtr<RegisteredOptions> regoptions, const SmartPtr<OptionsList> options, const char* const* argv, bool allow_discrete, SmartPtr<AmplOptionsList> ampl_options_list, const char* ampl_option_string, const char* ampl_invokation_string, const char* ampl_banner_string, std::string* nl_file_content)

Default Compiler Generated Methods
(Hidden to avoid implicit creation/calling.) These methods are not implemented and we do not want the compiler to implement them for us, so we declare them private and do not define them. This ensures that they will not be implicitly created/called.

AmplTNLP()
    Default Constructor.
AmplTNLP(const AmplTNLP&)
    Copy Constructor.
void operator=(const AmplTNLP&)
    Default Assignment Operator.

Additional Inherited Members

Public Types inherited from Ipopt::TNLP

enum LinearityType { LINEAR, NON_LINEAR }
    Linearity-types of variables and constraints.
enum IndexStyleEnum { C_STYLE = 0, FORTRAN_STYLE = 1 }
typedef std::map<std::string, std::vector<std::string>> StringMetaDataMapType
typedef std::map<std::string, std::vector<Index>> IntegerMetaDataMapType
typedef std::map<std::string, std::vector<Number>> NumericMetaDataMapType
Ampl Interface, implemented as a TNLP.
Definition at line 316 of file AmplTNLP.hpp.
Ipopt::AmplTNLP::AmplTNLP(
    const SmartPtr<const Journalist>& jnlst,
    const SmartPtr<RegisteredOptions> regoptions,
    const SmartPtr<OptionsList> options,
    const char* const* argv,
    SmartPtr<AmplSuffixHandler> suffix_handler = NULL,
    bool allow_discrete = false,
    SmartPtr<AmplOptionsList> ampl_options_list = NULL,
    const char* ampl_option_string = NULL,
    const char* ampl_invokation_string = NULL,
    const char* ampl_banner_string = NULL,
    std::string* nl_file_content = NULL
)
Constructor.
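A typical use, mirroring Ipopt's own AMPL solver executable, is to construct the AmplTNLP from an initialized IpoptApplication and the program arguments (which name the .nl file), then hand it to the application. A minimal sketch, assuming a standard IpoptApplication setup:

#include "IpIpoptApplication.hpp"
#include "AmplTNLP.hpp"

using namespace Ipopt;

int main(int argc, char** argv)
{
   SmartPtr<IpoptApplication> app = new IpoptApplication();
   app->Initialize();

   // The constructor reads the .nl file named on the command line (argv)
   // and processes any AMPL options.
   SmartPtr<TNLP> ampl_tnlp = new AmplTNLP(ConstPtr(app->Jnlst()),
                                           app->RegOptions(),
                                           app->Options(),
                                           argv);

   ApplicationReturnStatus status = app->OptimizeTNLP(ampl_tnlp);
   return (int) status;
}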
IPOPT_DEPRECATED Ipopt::AmplTNLP::AmplTNLP(
    const SmartPtr<const Journalist>& jnlst,
    const SmartPtr<OptionsList> options,
    char**& argv,
    SmartPtr<AmplSuffixHandler> suffix_handler = NULL,
    bool allow_discrete = false,
    SmartPtr<AmplOptionsList> ampl_options_list = NULL,
    const char* ampl_option_string = NULL,
    const char* ampl_invokation_string = NULL,
    const char* ampl_banner_string = NULL,
    std::string* nl_file_content = NULL
)
Constructor without RegisteredOptions.
Ipopt::AmplTNLP::~AmplTNLP() [virtual]
Default destructor.
Ipopt::AmplTNLP::AmplTNLP() [private]
Default Constructor.
Ipopt::AmplTNLP::DECLARE_STD_EXCEPTION(NONPOSITIVE_SCALING_FACTOR)
Exceptions.
get_nlp_info() [virtual]
Method to request the initial information about the problem.
Ipopt uses this information when allocating the arrays that it will later ask you to fill with values. Be careful in this method since incorrect values will cause memory bugs which may be very difficult to find.
n | (out) Storage for the number of variables \(x\) |
m | (out) Storage for the number of constraints \(g(x)\) |
nnz_jac_g | (out) Storage for the number of nonzero entries in the Jacobian |
nnz_h_lag | (out) Storage for the number of nonzero entries in the Hessian |
index_style | (out) Storage for the index style, the numbering style used for row/col entries in the sparse matrix format (TNLP::C_STYLE: 0-based, TNLP::FORTRAN_STYLE: 1-based; see also Triplet Format for Sparse Matrices) |
Implements Ipopt::TNLP.
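For comparison, a minimal sketch of this method in a hand-written TNLP subclass (a hypothetical MyTNLP; AmplTNLP itself fills these values in from the ASL data) for a 2-variable, 1-constraint problem:

bool MyTNLP::get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
                          Index& nnz_h_lag, IndexStyleEnum& index_style)
{
   n = 2;                       // number of variables x
   m = 1;                       // number of constraints g(x)
   nnz_jac_g = 2;               // nonzeros in the constraint Jacobian
   nnz_h_lag = 3;               // nonzeros in the lower triangle of the Hessian
   index_style = TNLP::C_STYLE; // 0-based row/column indices
   return true;
}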
get_var_con_metadata() [virtual]
Method to request meta data for the variables and the constraints.
This method is used to pass meta data about variables or constraints to Ipopt. The data can be of integer, numeric, or string type. Ipopt passes this data on to its internal problem representation. The meta data type is a std::map with std::string as key type and a std::vector as value type. So far, Ipopt itself only makes use of string meta data under the key idx_names. With this key, variable and constraint names can be passed to Ipopt, which are shown when printing internal vector or matrix data structures if Ipopt is run with a high value for the print_level option. This allows a user to identify the original variables and constraints corresponding to Ipopt's internal problem representation.
If this method is not overloaded, the default implementation does not set any meta data and returns false.
Reimplemented from Ipopt::TNLP.
Reimplemented in Ipopt::SensAmplTNLP.
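A minimal sketch of supplying idx_names from the hypothetical TNLP subclass MyTNLP (the variable and constraint names are illustrative):

bool MyTNLP::get_var_con_metadata(Index n, StringMetaDataMapType& var_string_md,
                                  IntegerMetaDataMapType& /*var_integer_md*/,
                                  NumericMetaDataMapType& /*var_numeric_md*/,
                                  Index m, StringMetaDataMapType& con_string_md,
                                  IntegerMetaDataMapType& /*con_integer_md*/,
                                  NumericMetaDataMapType& /*con_numeric_md*/)
{
   std::vector<std::string> var_names(n);
   var_names[0] = "x1";
   var_names[1] = "x2";
   var_string_md["idx_names"] = var_names;

   std::vector<std::string> con_names(m, "c1");
   con_string_md["idx_names"] = con_names;

   return true;   // meta data was provided
}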
get_bounds_info() [virtual]
Returns bounds of the NLP.
Overloaded from TNLP.
Implements Ipopt::TNLP.
Reimplemented in Ipopt::SensAmplTNLP.
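A minimal sketch for the hypothetical MyTNLP used in the other sketches; by default Ipopt treats bound values of magnitude 1e19 or larger as infinity:

bool MyTNLP::get_bounds_info(Index n, Number* x_l, Number* x_u,
                             Index m, Number* g_l, Number* g_u)
{
   x_l[0] = 0.0;   x_u[0] = 5.0;    // 0 <= x1 <= 5
   x_l[1] = 0.0;   x_u[1] = 1e19;   // x2 >= 0 (1e19 acts as +infinity)
   g_l[0] = -1e19; g_u[0] = 4.0;    // g1(x) <= 4
   return true;
}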
get_constraints_linearity() [virtual]
Method to request the constraints linearity.
This method is never called by Ipopt, but is used by Bonmin to get information about which constraints are linear. Ipopt passes the array const_types of size m, which should be filled with the appropriate linearity type of the constraints (TNLP::LINEAR or TNLP::NON_LINEAR).
The default implementation just returns false and does not fill the array.
Reimplemented from Ipopt::TNLP.
get_starting_point() [virtual]
Method to request the starting point before iterating.
n | (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
init_x | (in) if true, this method must provide an initial value for \(x\) |
x | (out) the initial values for the primal variables \(x\) |
init_z | (in) if true, this method must provide an initial value for the bound multipliers \(z^L\) and \(z^U\) |
z_L | (out) the initial values for the bound multipliers \(z^L\) |
z_U | (out) the initial values for the bound multipliers \(z^U\) |
m | (in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
init_lambda | (in) if true, this method must provide an initial value for the constraint multipliers \(\lambda\) |
lambda | (out) the initial values for the constraint multipliers, \(\lambda\) |
The boolean variables indicate whether the algorithm requires x, z_L/z_U, and lambda to be initialized, respectively. If, for some reason, the algorithm requires initializations that cannot be provided, false should be returned and Ipopt will stop. The default options only require initial values for the primal variables \(x\).
Note that the initial values for bound multiplier components corresponding to absent bounds ( \(x^L_i=-\infty\) or \(x^U_i=\infty\)) are ignored.
Implements Ipopt::TNLP.
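A minimal sketch for the hypothetical MyTNLP that provides only a primal starting point:

bool MyTNLP::get_starting_point(Index n, bool init_x, Number* x,
                                bool init_z, Number* z_L, Number* z_U,
                                Index m, bool init_lambda, Number* lambda)
{
   // this sketch can only supply x; refuse anything else so Ipopt stops
   // instead of running with uninitialized multipliers
   if( init_z || init_lambda )
      return false;

   if( init_x )
   {
      x[0] = 1.0;
      x[1] = 2.0;
   }
   return true;
}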
eval_f() [virtual]
Method to request the value of the objective function.
n | (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
x | (in) the values for the primal variables \(x\) at which the objective function \(f(x)\) is to be evaluated |
new_x | (in) false if any evaluation method (eval_* ) was previously called with the same values in x, true otherwise. This can be helpful when users have efficient implementations that calculate multiple outputs at once. Ipopt internally caches results from the TNLP and generally, this flag can be ignored. |
obj_value | (out) storage for the value of the objective function \(f(x)\) |
Implements Ipopt::TNLP.
eval_grad_f() [virtual]
Method to request the gradient of the objective function.
n | (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
x | (in) the values for the primal variables \(x\) at which the gradient \(\nabla f(x)\) is to be evaluated |
new_x | (in) false if any evaluation method (eval_* ) was previously called with the same values in x, true otherwise; see also TNLP::eval_f |
grad_f | (out) array to store values of the gradient of the objective function \(\nabla f(x)\). The gradient array is in the same order as the \(x\) variables (i.e., the gradient of the objective with respect to x[2] should be put in grad_f[2] ). |
Implements Ipopt::TNLP.
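A minimal sketch of eval_f and eval_grad_f together, for the illustrative objective f(x) = x1*x2 in the hypothetical MyTNLP:

bool MyTNLP::eval_f(Index n, const Number* x, bool new_x, Number& obj_value)
{
   obj_value = x[0] * x[1];   // f(x) = x1*x2
   return true;
}

bool MyTNLP::eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f)
{
   grad_f[0] = x[1];   // df/dx1
   grad_f[1] = x[0];   // df/dx2
   return true;
}

The new_x flag is ignored here; since Ipopt caches TNLP results itself, recomputation is merely redundant, not incorrect.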
eval_g() [virtual]
Method to request the constraint values.
n | (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
x | (in) the values for the primal variables \(x\) at which the constraint functions \(g(x)\) are to be evaluated |
new_x | (in) false if any evaluation method (eval_* ) was previously called with the same values in x, true otherwise; see also TNLP::eval_f |
m | (in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
g | (out) array to store constraint function values \(g(x)\), do not add or subtract the bound values \(g^L\) or \(g^U\). |
Implements Ipopt::TNLP.
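A minimal sketch for the illustrative constraint g1(x) = x1^2 + x2^2 in the hypothetical MyTNLP (the bounds g^L and g^U are reported in get_bounds_info, not here):

bool MyTNLP::eval_g(Index n, const Number* x, bool new_x, Index m, Number* g)
{
   g[0] = x[0] * x[0] + x[1] * x[1];   // g1(x) = x1^2 + x2^2
   return true;
}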
eval_jac_g() [virtual]
Method to request either the sparsity structure or the values of the Jacobian of the constraints.
The Jacobian is the matrix of derivatives where the derivative of constraint function \(g_i\) with respect to variable \(x_j\) is placed in row \(i\) and column \(j\). See Triplet Format for Sparse Matrices for a discussion of the sparse matrix format used in this method.
n | (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
x | (in) first call: NULL; later calls: the values for the primal variables \(x\) at which the constraint Jacobian \(\nabla g(x)^T\) is to be evaluated |
new_x | (in) false if any evaluation method (eval_* ) was previously called with the same values in x, true otherwise; see also TNLP::eval_f |
m | (in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
nele_jac | (in) the number of nonzero elements in the Jacobian; it will have the same value that was specified in TNLP::get_nlp_info |
iRow | (out) first call: array of length nele_jac to store the row indices of entries in the Jacobian of the constraints; later calls: NULL |
jCol | (out) first call: array of length nele_jac to store the column indices of entries in the Jacobian of the constraints; later calls: NULL |
values | (out) first call: NULL; later calls: array of length nele_jac to store the values of the entries in the Jacobian of the constraints |
If iRow and jCol are not NULL, Ipopt wants to obtain the sparsity structure of the Jacobian; at this call, the arguments x and values will be NULL. If the arguments x and values are not NULL, then Ipopt expects that the value of the Jacobian as calculated from array x is stored in array values (using the same order as used when specifying the sparsity structure); at this call, the arguments iRow and jCol will be NULL.
Implements Ipopt::TNLP.
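A minimal sketch of the structure/values protocol for the illustrative constraint g1(x) = x1^2 + x2^2 in the hypothetical MyTNLP, using C_STYLE (0-based) indexing:

bool MyTNLP::eval_jac_g(Index n, const Number* x, bool new_x, Index m,
                        Index nele_jac, Index* iRow, Index* jCol, Number* values)
{
   if( values == NULL )
   {
      // structure call: fill iRow/jCol only
      iRow[0] = 0; jCol[0] = 0;   // dg1/dx1
      iRow[1] = 0; jCol[1] = 1;   // dg1/dx2
   }
   else
   {
      // value call: fill values in the same order as the structure
      values[0] = 2.0 * x[0];
      values[1] = 2.0 * x[1];
   }
   return true;
}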
eval_h() [virtual]
Method to request either the sparsity structure or the values of the Hessian of the Lagrangian.
The Hessian matrix that Ipopt uses is
\[ \sigma_f \nabla^2 f(x_k) + \sum_{i=1}^m\lambda_i\nabla^2 g_i(x_k) \]
for the given values for \(x\), \(\sigma_f\), and \(\lambda\). See Triplet Format for Sparse Matrices for a discussion of the sparse matrix format used in this method.
n | (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
x | (in) first call: NULL; later calls: the values for the primal variables \(x\) at which the Hessian is to be evaluated |
new_x | (in) false if any evaluation method (eval_* ) was previously called with the same values in x, true otherwise; see also TNLP::eval_f |
obj_factor | (in) factor \(\sigma_f\) in front of the objective term in the Hessian |
m | (in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
lambda | (in) the values for the constraint multipliers \(\lambda\) at which the Hessian is to be evaluated |
new_lambda | (in) false if any evaluation method was previously called with the same values in lambda, true otherwise |
nele_hess | (in) the number of nonzero elements in the Hessian; it will have the same value that was specified in TNLP::get_nlp_info |
iRow | (out) first call: array of length nele_hess to store the row indices of entries in the Hessian; later calls: NULL |
jCol | (out) first call: array of length nele_hess to store the column indices of entries in the Hessian; later calls: NULL |
values | (out) first call: NULL; later calls: array of length nele_hess to store the values of the entries in the Hessian |
If iRow and jCol are not NULL, Ipopt wants to obtain the sparsity structure of the Hessian; at this call, the arguments x, lambda, and values will be NULL. If the arguments x, lambda, and values are not NULL, then Ipopt expects that the value of the Hessian as calculated from arrays x and lambda is stored in array values (using the same order as used when specifying the sparsity structure); at this call, the arguments iRow and jCol will be NULL.
A default implementation is provided, in case the user wants to set quasi-Newton approximations to estimate the second derivatives and does not need to implement this method.
Reimplemented from Ipopt::TNLP.
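A minimal sketch of the lower-triangular Hessian of the Lagrangian for the illustrative f(x) = x1*x2 and g1(x) = x1^2 + x2^2 used above, in the hypothetical MyTNLP:

bool MyTNLP::eval_h(Index n, const Number* x, bool new_x, Number obj_factor,
                    Index m, const Number* lambda, bool new_lambda,
                    Index nele_hess, Index* iRow, Index* jCol, Number* values)
{
   if( values == NULL )
   {
      // structure call: lower-left triangle only
      iRow[0] = 0; jCol[0] = 0;   // (x1,x1)
      iRow[1] = 1; jCol[1] = 0;   // (x2,x1)
      iRow[2] = 1; jCol[2] = 1;   // (x2,x2)
   }
   else
   {
      // f = x1*x2 contributes only to the off-diagonal entry;
      // g1 = x1^2 + x2^2 contributes 2 to each diagonal entry
      values[0] = lambda[0] * 2.0;    // (x1,x1)
      values[1] = obj_factor * 1.0;   // (x2,x1)
      values[2] = lambda[0] * 2.0;    // (x2,x2)
   }
   return true;
}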
get_scaling_parameters() [virtual]
Method to request scaling parameters.
This is only called if the options are set to retrieve user scaling, that is, if nlp_scaling_method is chosen as "user-scaling". The method should provide scaling factors for the objective function as well as for the optimization variables and/or constraints. The return value should be true, unless an error occurred, and the program is to be aborted.
The value returned in obj_scaling determines, how Ipopt should internally scale the objective function. For example, if this number is chosen to be 10, then Ipopt solves internally an optimization problem that has 10 times the value of the original objective function provided by the TNLP. In particular, if this value is negative, then Ipopt will maximize the objective function instead of minimizing it.
The scaling factors for the variables can be returned in x_scaling, which has the same length as x in the other TNLP methods, and the factors are ordered like x. use_x_scaling needs to be set to true, if Ipopt should scale the variables. If it is false, no internal scaling of the variables is done. Similarly, the scaling factors for the constraints can be returned in g_scaling, and this scaling is activated by setting use_g_scaling to true.
As a guideline, we suggest to scale the optimization problem (either directly in the original formulation, or after using scaling factors) so that all sensitivities, i.e., all non-zero first partial derivatives, are typically of the order 0.1-10.
Reimplemented from Ipopt::TNLP.
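A minimal sketch for the hypothetical MyTNLP that scales only the objective (the factor is illustrative, and the method is only consulted when nlp_scaling_method is set to "user-scaling"):

bool MyTNLP::get_scaling_parameters(Number& obj_scaling,
                                    bool& use_x_scaling, Index n, Number* x_scaling,
                                    bool& use_g_scaling, Index m, Number* g_scaling)
{
   obj_scaling = 10.0;     // Ipopt internally minimizes 10*f(x)

   use_x_scaling = false;  // leave the variables unscaled
   use_g_scaling = false;  // leave the constraints unscaled
   return true;
}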
finalize_solution() [virtual]
This method is called when the algorithm has finished (successfully or not) so the TNLP can digest the outcome, e.g., store/write the solution, if any.
status | (in) gives the status of the algorithm |
n | (in) the number of variables \(x\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
x | (in) the final values for the primal variables |
z_L | (in) the final values for the lower bound multipliers |
z_U | (in) the final values for the upper bound multipliers |
m | (in) the number of constraints \(g(x)\) in the problem; it will have the same value that was specified in TNLP::get_nlp_info |
g | (in) the final values of the constraint functions |
lambda | (in) the final values of the constraint multipliers |
obj_value | (in) the final value of the objective function |
ip_data | (in) provided for expert users |
ip_cq | (in) provided for expert users |
Implements Ipopt::TNLP.
Reimplemented in Ipopt::SensAmplTNLP.
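A minimal sketch for the hypothetical MyTNLP that simply stores the final primal point; x_final_ (a std::vector<Number>) and obj_final_ are assumed members, not part of AmplTNLP:

void MyTNLP::finalize_solution(SolverReturn status, Index n, const Number* x,
                               const Number* z_L, const Number* z_U,
                               Index m, const Number* g, const Number* lambda,
                               Number obj_value,
                               const IpoptData* ip_data,
                               IpoptCalculatedQuantities* ip_cq)
{
   // keep a copy of the primal solution and the final objective for later use
   x_final_.assign(x, x + n);
   obj_final_ = obj_value;
}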
get_number_of_nonlinear_variables() [virtual]
Return the number of variables that appear nonlinearly in the objective function or in at least one constraint function.
If -1 is returned as the number of nonlinear variables, Ipopt assumes that all variables are nonlinear. Otherwise, it calls get_list_of_nonlinear_variables with an array into which the indices of the nonlinear variables should be written; the array has length num_nonlin_vars, which is identical to the return value of get_number_of_nonlinear_variables(). The indices are counted starting with 1 for FORTRAN_STYLE and with 0 for C_STYLE.
The default implementation returns -1, i.e., all variables are assumed to be nonlinear.
Reimplemented from Ipopt::TNLP.
get_list_of_nonlinear_variables() [virtual]
Return the indices of all nonlinear variables.
This method is called only if the limited-memory quasi-Newton option is used and get_number_of_nonlinear_variables() returned a positive number. This number is provided in parameter num_nonlin_vars.
The method must store the indices of all nonlinear variables in pos_nonlin_vars, where the numbering starts with 0 or 1, depending on the numbering style determined in get_nlp_info.
Reimplemented from Ipopt::TNLP.
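A minimal sketch for the hypothetical MyTNLP in which only the first variable enters nonlinearly (relevant only with the limited-memory quasi-Newton option):

Index MyTNLP::get_number_of_nonlinear_variables()
{
   return 1;
}

bool MyTNLP::get_list_of_nonlinear_variables(Index num_nonlin_vars,
                                             Index* pos_nonlin_vars)
{
   // indices follow the style reported in get_nlp_info (C_STYLE here)
   pos_nonlin_vars[0] = 0;
   return true;
}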
[inline]
Return the ampl solver object (ASL*)
Definition at line 498 of file AmplTNLP.hpp.
Write the solution file.
This is a wrapper for AMPL's write_sol.
void Ipopt::AmplTNLP::get_discrete_info(
    Index& nlvb_,
    Index& nlvbi_,
    Index& nlvc_,
    Index& nlvci_,
    Index& nlvo_,
    Index& nlvoi_,
    Index& nbv_,
    Index& niv_
) const
Give the number of binary and integer variables.
AMPL orders the variables like (continuous, binary, integer). For details, see Tables 3 and 4 in "Hooking Your Solver to AMPL"
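A usage sketch, assuming ampl_tnlp is a SmartPtr<AmplTNLP> constructed as in the constructor sketch above:

Index nlvb, nlvbi, nlvc, nlvci, nlvo, nlvoi, nbv, niv;
ampl_tnlp->get_discrete_info(nlvb, nlvbi, nlvc, nlvci, nlvo, nlvoi, nbv, niv);
// nbv and niv report the binary and (other) integer variables; the nlv* counts
// describe the nonlinear variable blocks as defined in "Hooking Your Solver to AMPL".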
set_active_objective()
A method for setting the index of the objective function to be considered.
This method must be called after the constructor, and before anything else is called. It can only be called once, and if there is more than one objective function in the AMPL model, it MUST be called.
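A usage sketch, assuming ampl_tnlp is a freshly constructed SmartPtr<AmplTNLP> for a multi-objective AMPL model (the argument is the objective index described above; check the header for the exact convention):

ampl_tnlp->set_active_objective(1);   // select an objective before any other call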
[inline]
Definition at line 546 of file AmplTNLP.hpp.
[inline]
Definition at line 554 of file AmplTNLP.hpp.
[inline]
Definition at line 562 of file AmplTNLP.hpp.
[inline]
Definition at line 570 of file AmplTNLP.hpp.
[inline]
Definition at line 578 of file AmplTNLP.hpp.
[inline]
Definition at line 586 of file AmplTNLP.hpp.
[inline]
Method for returning the suffix handler.
Definition at line 596 of file AmplTNLP.hpp.
internal_objval() [private]
Make the objective call to AMPL.
internal_conval() [private]
Make the constraint call to AMPL.
apply_new_x() [private]
Internal function to update the internal and AMPL state if the x value changes.
[protected]
Method for obtaining the name of the NL file and the options set from AMPL.
regoptions | Registered Ipopt options |
options | Options |
ampl_options_list | AMPL options list |
ampl_option_string | AMPL options string |
ampl_invokation_string | AMPL invokation string |
ampl_banner_string | AMPL banner string |
argv | Program arguments |
[inline, protected]
Method for obtaining the name of the NL file and the options set from AMPL.
Definition at line 733 of file AmplTNLP.hpp.
[protected]
Calls the hesset ASL function.
SmartPtr<const Journalist> jnlst_ [protected]
Journalist.
Definition at line 640 of file AmplTNLP.hpp.

ASL_pfgh* asl_ [protected]
pointer to the main ASL structure
Definition at line 643 of file AmplTNLP.hpp.

Number obj_sign_ [protected]
Sign of the objective fn (1 for min, -1 for max)
Definition at line 646 of file AmplTNLP.hpp.

Index nz_h_full_ [protected]
number of nonzeros in the full_x Hessian
Definition at line 651 of file AmplTNLP.hpp.

Number* x_sol_ [protected]
Definition at line 656 of file AmplTNLP.hpp.

Number* z_L_sol_ [protected]
Definition at line 657 of file AmplTNLP.hpp.

Number* z_U_sol_ [protected]
Definition at line 658 of file AmplTNLP.hpp.

Number* g_sol_ [protected]
Definition at line 659 of file AmplTNLP.hpp.

Number* lambda_sol_ [protected]
Definition at line 660 of file AmplTNLP.hpp.

Number obj_sol_ [protected]
Definition at line 661 of file AmplTNLP.hpp.

bool objval_called_with_current_x_ [protected]
whether the objective value has been calculated with the current x; set to false in apply_new_x, and set to true in internal_objval
Definition at line 670 of file AmplTNLP.hpp.

bool conval_called_with_current_x_ [protected]
whether the constraint values have been calculated with the current x; set to false in apply_new_x, and set to true in internal_conval
Definition at line 674 of file AmplTNLP.hpp.

bool hesset_called_ [protected]
whether we have called hesset
Definition at line 676 of file AmplTNLP.hpp.

bool set_active_objective_called_ [protected]
whether set_active_objective has been called
Definition at line 678 of file AmplTNLP.hpp.

void* Oinfo_ptr_ [protected]
Pointer to the Oinfo structure.
Definition at line 682 of file AmplTNLP.hpp.

void* nerror_ [protected]
nerror flag passed to ampl calls; set to NULL to halt on error
Definition at line 685 of file AmplTNLP.hpp.

SmartPtr<AmplSuffixHandler> suffix_handler_ [protected]
Suffix Handler.
Definition at line 688 of file AmplTNLP.hpp.

StringMetaDataMapType var_string_md_ [protected]
meta data to pass on to TNLP
Definition at line 755 of file AmplTNLP.hpp.

IntegerMetaDataMapType var_integer_md_ [protected]
Definition at line 756 of file AmplTNLP.hpp.

NumericMetaDataMapType var_numeric_md_ [protected]
Definition at line 757 of file AmplTNLP.hpp.

StringMetaDataMapType con_string_md_ [protected]
Definition at line 758 of file AmplTNLP.hpp.

IntegerMetaDataMapType con_integer_md_ [protected]
Definition at line 759 of file AmplTNLP.hpp.

NumericMetaDataMapType con_numeric_md_ [protected]
Definition at line 760 of file AmplTNLP.hpp.