This document is a guide to using Ipopt. It includes instructions on how to obtain and compile Ipopt, a description of the interface, user options, etc., as well as a tutorial on how to solve a nonlinear optimization problem with Ipopt. The documentation consists of the following pages:
The Ipopt project website is https://github.com/coin-or/Ipopt.
Ipopt (Interior Point Optimizer, pronounced "Eye-Pea-Opt") is an open source software package for large-scale nonlinear optimization. It can be used to solve general nonlinear programming problems of the form
\begin{align} \min_{x\in\mathbb{R}^n} && f(x) \nonumber \\ \text{s.t.} \; && g^L \leq g(x) \leq g^U \tag{NLP} \\ && x^L \leq x \leq x^U, \nonumber \end{align}
where \(x \in \mathbb{R}^n\) are the optimization variables (possibly with lower and upper bounds, \(x^L\in(\mathbb{R}\cup\{-\infty\})^n\) and \(x^U\in(\mathbb{R}\cup\{+\infty\})^n\)), \(f:\mathbb{R}^n \to \mathbb{R}\) is the objective function, and \(g:\mathbb{R}^n \to \mathbb{R}^m\) are the general nonlinear constraints. The functions \(f(x)\) and \(g(x)\) can be linear or nonlinear and convex or non-convex (but should be twice continuously differentiable). The constraints, \(g(x)\), have lower and upper bounds, \(g^L\in(\mathbb{R}\cup\{-\infty\})^m\) and \(g^U\in(\mathbb{R}\cup\{+\infty\})^m\). Note that equality constraints of the form \(g_i(x)=\bar g_i\) can be specified by setting \(g^L_{i}=g^U_{i}=\bar g_i\).
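As a concrete illustration of this format, consider the small test problem HS071, which is used as the example problem in the Ipopt tutorials:

\begin{align} \min_{x\in\mathbb{R}^4} \quad & x_1 x_4 (x_1 + x_2 + x_3) + x_3 \nonumber \\ \text{s.t.} \quad & 25 \leq x_1 x_2 x_3 x_4 \nonumber \\ & x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40 \nonumber \\ & 1 \leq x_i \leq 5, \quad i=1,\dots,4. \nonumber \end{align}

Here \(n=4\), \(m=2\), \(x^L=(1,1,1,1)\), and \(x^U=(5,5,5,5)\). Following the convention above, the inequality constraint is encoded with \(g^L_1=25\) and \(g^U_1=+\infty\), while the equality constraint is encoded by setting \(g^L_2=g^U_2=40\).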
Ipopt implements an interior point line search filter method that aims to find a local solution of (NLP). The mathematical details of the algorithm can be found in several publications [7], [12], [11], [10], [9].
The Ipopt package is available from COIN-OR under the EPL (Eclipse Public License) open-source license and includes the source code for Ipopt. This means it is available free of charge, including for commercial purposes. However, if you distribute software that includes Ipopt code (in source code or binary form) and you have made changes to the Ipopt source code, you are required to make those changes public and to clearly indicate which modifications you made. After all, the goal of open source software is the continuous development and improvement of software. For details, please refer to the Eclipse Public License.
Also, if you are using Ipopt to obtain results for a publication, we politely ask you to point out in your paper that you used Ipopt and to cite the publication [11]. Writing high-quality numerical software takes a lot of time and effort, and it does not usually translate into a large number of publications, so we believe this request is only fair :). We also have space in the Ipopt wiki where we list publications, projects, etc., in which Ipopt has been used. We would be very happy to hear about your experiences.
In order to build Ipopt, some third-party components are required:
BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra PACKage). Many vendors of compilers and operating systems provide precompiled and optimized libraries for these dense linear algebra subroutines. You can also get the source code for a simple reference implementation from http://www.netlib.org. However, it is strongly recommended to use optimized BLAS and LAPACK implementations; for large problems this can make a runtime difference of an order of magnitude!
Examples for efficient BLAS implementations are:
You can find more information on the web.
Note: BLAS libraries distributed with Linux are often not optimized.
A sparse symmetric indefinite linear solver. Ipopt needs to obtain the solution of sparse, symmetric, indefinite linear systems, and for this it relies on third-party code.
Currently, the following linear solvers can be used:
You must include at least one of the linear solvers above in order to run Ipopt, and if you want to be able to switch easily between different alternatives, you can compile Ipopt with all of them.
The Ipopt library also has mechanisms to load the linear solvers MA27, MA57, HSL_MA77, HSL_MA86, HSL_MA97, and Pardiso from a shared library at runtime if the library has not been compiled with them; see Using the Linear Solver Loader.
Ipopt's performance and robustness depend on your choice of linear solver. The best choice depends on your application, and it makes sense to try different options. Most of the solvers also rely on efficient BLAS code (see above), so you should use a good BLAS library tailored to your system. Please keep this in mind, particularly when you are comparing Ipopt with other optimization codes.

If you are compiling MA57, HSL_MA77, HSL_MA86, HSL_MA97, or MUMPS within the Ipopt build system, you should also include the METIS linear system ordering package.
Interfaces to other linear solvers might be added in the future; if you are interested in contributing such an interface please contact us! Note that Ipopt requires that the linear solver is able to provide the inertia (number of positive and negative eigenvalues) of the symmetric matrix that is factorized.
Ipopt can also use the HSL package MC19 to scale the linear systems before they are passed to the linear solver. This may be particularly useful if Ipopt is used with MA27 or MA57. However, MC19 is not required to compile Ipopt; if this routine is missing, the scaling is never performed.

ASL (AMPL Solver Library), which can be built within Ipopt's build system. NOTE: This is only required if you want to use Ipopt from AMPL and want to compile the Ipopt AMPL solver executable.

For more information on third-party components and how to obtain them, see Download, build, and install dependencies.
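Once Ipopt is built with one or more of the components above, the linear solver (and, if available, the MC19 scaling) can be selected at runtime through an `ipopt.opt` options file in the working directory. A minimal sketch, assuming your build includes MA57 and MC19:

```text
# ipopt.opt -- read by Ipopt from the current working directory at startup
linear_solver          ma57    # must be compiled in or loadable at runtime
linear_system_scaling  mc19    # only available if MC19 was included in the build
```

If the requested solver is not available in your build, Ipopt reports an error at startup, which makes this a quick way to check which alternatives your installation actually supports.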
Since the Ipopt code is written in C++, you will need a C++ compiler to build the Ipopt library. We have tried hard to write the code as platform- and compiler-independent as possible.

In addition, the configuration script also searches for a Fortran compiler. If all third-party dependencies are available as self-contained libraries and no Ipopt/Fortran interface needs to be built, a Fortran compiler is not necessary.

When using GNU compilers, we recommend using the same version numbers for gcc, g++, and gfortran. Further, mixing clang (for C/C++) and gfortran has been problematic and should be avoided.
If desired, the Ipopt distribution generates an executable for the modeling environment AMPL. You can also link your problem statement with Ipopt using interfaces for C++, C, Java, or Fortran. Ipopt can be used with most Linux/Unix environments, and on Windows using Msys2/MinGW. The section Interfacing your NLP to Ipopt demonstrates how to solve problems using Ipopt, including installation and compilation of Ipopt for use with AMPL as well as linking with your own code.

Additionally, the Ipopt distribution includes an interface for the R project for statistical computing; see The R Interface ipoptr.
There is also software maintained by others that facilitates the use of Ipopt, among them:
ADOL-C (automatic differentiation)
ADOL-C facilitates the evaluation of first and higher derivatives of vector functions that are defined by computer programs written in C or C++. It comes with examples that show how to use it in connection with Ipopt.
AIMMS (modeling environment)
The AIMMSlinks project on COIN-OR, maintained by Marcel Hunting, provides an interface for Ipopt within the AIMMS modeling tool.
MATLAB, Python, and Web Interface to Ipopt for Android, Linux, Mac OS X, and Windows.
CasADi is a symbolic framework for automatic differentiation and numeric optimization and comes with Ipopt.
CppAD (automatic differentiation)
Given a C++ algorithm that computes function values, CppAD generates an algorithm that computes corresponding derivative values (of arbitrary order using either forward or reverse mode). It comes with an example that shows how to use it in connection with Ipopt.
Interfacing Ipopt from .NET languages such as C#, F# and Visual Basic.NET.
GAMS (modeling environment)
The GAMSlinks project on COIN-OR includes a GAMS interface for Ipopt.
Modern, light-weight (~1k loc), Eigen-based C++ interface to Ipopt and Snopt.
Interfacing Ipopt from Python.
Julia is a high-level, high-performance dynamic programming language for technical computing. JuliaOpt is an umbrella group for Julia-based optimization-related projects. It includes the algebraic modeling language JuMP and an interface to Ipopt.
MADOPT (Modelling and Automatic Differentiation for Optimisation)
Light-weight C++ and Python modelling interfaces implementing expression building using operator overloading and automatic differentiation.
A Matlab (mex) interface to use Ipopt from Matlab.
OPTimization Interface (OPTI) Toolbox
OPTI is a free Matlab toolbox for constructing and solving linear, nonlinear, continuous, and discrete optimization problems and comes with Ipopt, including binaries.
The Optimization Services (OS) project provides a set of standards for representing optimization instances, results, solver options, and communication between clients and solvers, including Ipopt, in a distributed environment using Web Services.
An interface to the Python language.
Scilab (free Matlab-like environment):
A Scilab interface is available at http://forge.scilab.org/index.php/p/sci-ipopt.
An issue tracking system and a wiki can be found at the Ipopt homepage, https://github.com/coin-or/Ipopt.
Ipopt is an open source project, and we encourage people to contribute code (such as interfaces to appropriate linear solvers, modeling environments, or even algorithmic features). If you are interested in contributing code, please have a look at the COIN-OR contributions webpage and contact the Ipopt project leader.
There is also a mailing list for Ipopt, available from the webpage http://list.coin-or.org/mailman/listinfo/ipopt, where you can subscribe to get notified of updates, to ask general questions regarding installation and usage, or to share your experience with Ipopt. You might want to look at the archives before posting a question. An easy way to search the archive with Google is to specify site:http://list.coin-or.org/pipermail/ipopt in addition to your keywords in the search string.
We try to answer questions posted to the mailing list in a reasonable manner. Please understand that we cannot answer all questions in detail, and because of time constraints, we are not able to help you model and debug your particular optimization problem.
A short tutorial on getting started with Ipopt is also available [13].
The original Ipopt (Fortran version) was a product of the dissertation research of Andreas Wächter [12], under the supervision of Lorenz T. Biegler at the Chemical Engineering Department at Carnegie Mellon University. The code was made open source and distributed by the COIN-OR initiative, which is now a non-profit corporation. Ipopt has been actively developed under COIN-OR since 2002.
To continue the natural extension of the code and allow easy addition of new features, IBM Research decided to invest in an open source re-write of Ipopt in C++. With the help of Carl Laird, who came to the Mathematical Sciences Department at IBM Research as a summer intern in 2004 and 2005 during his PhD studies, the code was re-implemented from scratch.
The new C++ version of the Ipopt optimization code (Ipopt 3.0.0 and beyond) was maintained at IBM Research and remains part of the COIN-OR initiative. Development of the Fortran version has ceased, but the source code can still be downloaded from https://github.com/coin-or/Ipopt/tree/stable/2.3.
The initial version of this document was created by Yoshiaki Kawajiri (then Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA) as a course project for 47852 Open Source Software for Optimization, taught by Prof. François Margot at the Tepper School of Business, Carnegie Mellon University. After this, Carl Laird (then Department of Chemical Engineering, Carnegie Mellon University, Pittsburgh, PA) added significant portions, including the very nice tutorials. The current version is maintained by Stefan Vigerske (GAMS Software GmbH) and Andreas Wächter (Department of Industrial Engineering and Management Sciences, Northwestern University).
The following names used in this document are trademarks or registered trademarks: Apple, AMPL, IBM, Intel, Matlab, Microsoft, MKL, Visual Studio C++, Visual Studio C++ .NET