
Optimize an ADFun Object Tape

Syntax
f.optimize()
f.optimize(options)
flag = f.exceed_collision_limit()

Purpose
The operation sequence corresponding to an ADFun object can be very large and involve many operations; see the size functions in fun_property . The f.optimize procedure reduces the number of operations, and thereby the time and the memory, required to compute function and derivative values.
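For example, the following minimal sketch (not one of the CppAD examples; the recorded function is chosen only for illustration) records the same sub-expression twice and then lets the optimizer remove the duplicate:

    # include <cassert>
    # include <cppad/cppad.hpp>
    int main(void)
    {   using CppAD::AD;
        // record ay[0] = (ax[0] * ax[0]) + (ax[0] * ax[0])
        CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(1);
        ax[0] = 0.5;
        CppAD::Independent(ax);
        AD<double> s1 = ax[0] * ax[0];
        AD<double> s2 = ax[0] * ax[0];   // same expression recorded again
        ay[0]         = s1 + s2;
        CppAD::ADFun<double> f(ax, ay);
        //
        size_t n_var_before = f.size_var();
        f.optimize();                    // the duplicate multiply is removed
        assert( f.size_var() <= n_var_before );
        return 0;
    }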

f
The object f has prototype
    ADFun<Base> f

options
This argument has prototype
    const std::string& options
The default for options is the empty string. If it is present, it must consist of one or more of the options below separated by a single space character.
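For example, the following call (the combination is chosen just for illustration) turns off both conditional skip operators and comparison operators:

    f.optimize("no_conditional_skip no_compare_op");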

no_conditional_skip
The optimize function can create conditional skip operators to improve the speed of conditional expressions; see optimize . If the sub-string no_conditional_skip appears in options , conditional skip operations will not be generated. This may make the optimize routine use significantly less memory and take less time to optimize f . If conditional skip operations are generated, they may save a significant amount of time when using f for forward or reverse mode calculations; see number_skip .

no_compare_op
If the sub-string no_compare_op appears in options , comparison operators will be removed from the optimized function. These operators are necessary for the compare_change functions to be meaningful. On the other hand, they are not necessary, and take extra time, when the compare_change functions are not used.

no_print_for_op
If the sub-string no_print_for_op appears in options , PrintFor operations will be removed from the optimized function. These operators are useful for reporting problems evaluating derivatives at independent variable values different from those used to record a function.

no_cumulative_sum_op
If this sub-string appears, no cumulative sum operations will be generated during the optimization; see optimize_cumulative_sum.cpp .

collision_limit=value
If this sub-string appears, where value is a sequence of decimal digits, the optimizer's hash code collision limit will be set to value . When the collision limit is reached, the expressions with that hash code are removed and a new list of expressions with that hash code is started. The larger the value , the more identical expressions the optimizer can recognize, but the slower the optimizer may run. The default for value is 10.
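For example, the following call (the limit 50 is arbitrary) raises the collision limit so that more identical expressions can be recognized:

    f.optimize("collision_limit=50");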

Re-Optimize
Before 2019-06-28, optimizing twice was not supported and would fail if cumulative sum operators were present after the first optimization. This is now supported but it is not expected to have much benefit. If you find a case where it does have a benefit, please inform the CppAD developers of this.

Efficiency
If a zero order forward calculation is done during the construction of f , it will require more memory and time than it would after the optimization procedure. In addition, it will need to be redone after the optimization. For this reason, it is more efficient to use
    ADFun<Base> f;
    f.Dependent(x, y);
    f.optimize();
instead of
    ADFun<Base> f(x, y);
    f.optimize();
See the discussion about sequence constructors .
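As a concrete sketch of the more efficient pattern above (the recorded function exp is chosen only for illustration):

    CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(1);
    ax[0] = 1.0;
    CppAD::Independent(ax);
    ay[0] = exp( ax[0] );
    //
    CppAD::ADFun<double> f;   // default constructor: no zero order forward
    f.Dependent(ax, ay);      // store the operation sequence in f
    f.optimize();             // optimize before any forward calculations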

Taylor Coefficients
Any Taylor coefficients in the function object are lost; i.e., f.size_order() after the optimization is zero. (See the discussion about efficiency above.)
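For example, continuing a sketch like the one above:

    CPPAD_TESTVECTOR(double) x(1);
    x[0] = 2.0;
    f.Forward(0, x);               // stores the zero order coefficients
    assert( f.size_order() == 1 ); // order zero is now present
    f.optimize();
    assert( f.size_order() == 0 ); // the Taylor coefficients are lost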

Speed Testing
You can run the CppAD speed tests and see the corresponding changes in number of variables and execution time. Note that there is an interaction between using optimize and onetape . If onetape is true and optimize is true, the optimized tape will be reused many times. If onetape is false and optimize is true, the tape will be re-optimized for each test.

Atomic Functions
There are some subtle issues with optimized atomic functions @(@ v = g(u) @)@:

rev_sparse_jac
The atomic_two_rev_sparse_jac function is used to determine which components of u affect the dependent variables of f . For each atomic operation, the current atomic_sparsity setting determines whether pack_sparsity_enum, bool_sparsity_enum, or set_sparsity_enum is used for the dependency relations between argument and result variables.

nan
If u[i] does not affect the value of the dependent variables for f , the value of u[i] is set to nan .

Checking Optimization
If NDEBUG is not defined, and f.size_order() is greater than zero, a forward_zero calculation is done using the optimized version of f and the results are checked to see that they are the same as before. If they are not the same, the ErrorHandler is called with a known error message related to f.optimize() .

exceed_collision_limit
If the return value flag is true (false), the previous call to f.optimize exceeded (did not exceed) the collision_limit .
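For example (assuming f has already been recorded):

    f.optimize();                           // default collision limit is 10
    bool flag = f.exceed_collision_limit();
    if( flag )
    {   // a larger collision_limit might recognize more identical
        // expressions, at the cost of a slower optimizer
    }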

Examples
optimize_twice.cpp Optimizing Twice: Example and Test
optimize_forward_active.cpp Optimize Forward Activity Analysis: Example and Test
optimize_reverse_active.cpp Optimize Reverse Activity Analysis: Example and Test
optimize_compare_op.cpp Optimize Comparison Operators: Example and Test
optimize_print_for.cpp Optimize Print Forward Operators: Example and Test
optimize_conditional_skip.cpp Optimize Conditional Expressions: Example and Test
optimize_nest_conditional.cpp Optimize Nested Conditional Expressions: Example and Test
optimize_cumulative_sum.cpp Optimize Cumulative Sum Operations: Example and Test

Input File: include/cppad/core/optimize.hpp