
@(@\newcommand{\W}[1]{ \; #1 \; } \newcommand{\R}[1]{ {\rm #1} } \newcommand{\B}[1]{ {\bf #1} } \newcommand{\D}[2]{ \frac{\partial #1}{\partial #2} } \newcommand{\DD}[3]{ \frac{\partial^2 #1}{\partial #2 \partial #3} } \newcommand{\Dpow}[2]{ \frac{\partial^{#1}}{\partial {#2}^{#1}} } \newcommand{\dpow}[2]{ \frac{ {\rm d}^{#1}}{{\rm d}\, {#2}^{#1}} }@)@
The CppAD Wish List

See Also
research

Purpose
The items on this list are improvements and extensions to CppAD that are currently being considered.

base2ad
It would be nice if base2ad functioned as expected with VecAD operations; see base2vec_ad.cpp .

Dynamic Parameters

Comparison Operators
The comparisons for dynamic parameters are not being included when record_compare is true. This should be fixed, and these comparisons should be included in the number returned by f.compare_change . In addition, we will need a way to know whether op_index corresponds to a variable operator or a dynamic parameter operator.

VecAD Vectors
Currently, when a VecAD vector depends only on dynamic parameters, it becomes a variable; see efficiency . This is a simple solution to the problem of having to pass the state of the vector, when it becomes a variable, from a dynamic parameter sweep to a zero order forward mode sweep.

Graph Operators
The missing operators should be implemented so that they can be included in conversions between ADFun objects and the AD graphs; see cpp_ad_graph , json_ad_graph .

Reverse Mode
Reverse mode calculation of the function @(@ f : \B{R} \rightarrow \B{R} @)@ defined by @(@ y = f(x) @)@ where
    
    ay[0] = pow(ax[0], 0.5) * pow(ax[0], 0.5)
yields the result zero when @(@ x_0 = 0 @)@; see the file bug/pow.sh. This is a feature of using azmul to select which components of a function are differentiated. This enables one component to yield nan for a partial derivative while another might not. If a separate flag were used to indicate which variables are selected, reverse mode multiplications would not need to be converted to azmul, and the function above would return nan for the derivative value. This may also be faster than using azmul.
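
The following minimal sketch (not from the CppAD sources; it only uses the documented Independent, ADFun, Forward, and Reverse interfaces) reproduces the behavior described above:

    # include <iostream>
    # include <cppad/cppad.hpp>

    int main(void)
    {   using CppAD::AD;
        //
        // record y = pow(x, 0.5) * pow(x, 0.5)
        CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(1);
        ax[0] = 0.0;
        CppAD::Independent(ax);
        ay[0] = pow(ax[0], 0.5) * pow(ax[0], 0.5);
        CppAD::ADFun<double> f(ax, ay);
        //
        // zero order forward mode at x0 = 0
        CPPAD_TESTVECTOR(double) x(1), w(1), dw(1);
        x[0] = 0.0;
        f.Forward(0, x);
        //
        // first order reverse mode: dw[0] = partial of y w.r.t. x at x0 = 0
        w[0] = 1.0;
        dw   = f.Reverse(1, w);
        //
        // per the entry above, this prints 0 (because of azmul), not nan
        std::cout << "dy/dx at x0 = 0 is " << dw[0] << "\n";
        return 0;
    }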

Atomic Examples
Convert the remaining atomic_two_examples and atomic_three_examples to use the atomic_four interface.

Abs-normal

Atomic Functions
The abs_normal_fun conversion does not handle atomic functions. This should be fixed.

Return Functions
Change the abs_normal_fun to return the functions @(@ z(x, u) @)@ and @(@ y(x, u) @)@ instead of @(@ g(x, u) @)@ and @(@ a(x) @)@. We can add a utility that computes @(@ a(x) @)@ using @(@ z(x, u) @)@, the relation @(@ a_i (x) = | z_i (x, a(x) ) | @)@, and the fact that @(@ z_i @)@ does not depend on @(@ u_j @)@ for @(@ j \geq i @)@.
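
For example (a sketch based on the dependency property above, not part of the current API), the components of @(@ a(x) @)@ could be computed in order: @[@ a_1 (x) = | z_1 (x, u) | \; , \quad a_i (x) = | z_i ( x , a_1 (x) , \ldots , a_{i-1} (x) , u_i , \ldots ) | @]@ where the values @(@ u_i , \ldots @)@ do not matter because @(@ z_i @)@ does not depend on them.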

Cancellation
Avoid cancellation when computing the difference in the absolute value function between the current point @(@ \hat{x} @)@ and the displaced point @(@ x = \hat{x} + \Delta x @)@; i.e., @[@ | z_i ( x , \tilde{a} (x) ) | - | z_i ( \hat{x} , a( \hat{x} ) ) | @]@

cppad_lib

Requirement
Currently, the cppad_lib library is only needed if one uses colpack , json_ad_graph , cpp_ad_graph , or code_gen_fun .

inline
The C++ inline specifier is used to avoid multiple copies of functions (that are not templates) during the link process. Perhaps some of these functions should be regular functions and be part of the cppad_lib library.

Compilation Speed
Perhaps compilation speed when using AD<double> could be increased significantly by including some of its member functions in the cppad_lib library.

checkpoint


Tapeless AD
Perhaps there should be a version of the chkpoint_two class that uses a tapeless AD package to compute the derivative values. This would allow for algorithms where the operation sequence depends on the independent variable values. There is a question as to how sparsity patterns would be determined in this case. Perhaps they would be passed into the constructor. If the pattern were known to be constant, the user could compute it using CppAD. Otherwise, the user could provide a conservative estimate of the pattern that is guaranteed to be correct.

Re-taping
Perhaps the checkpoint class should allow for re-taping when computing derivative values. This would also allow for algorithms where the operation sequence depends on the independent variable values. Perhaps (as for the tapeless entry above) the sparsity pattern should be passed into the constructor.

Testing
There should be some examples and tests for both speed and memory use that demonstrate that checkpointing is useful.

Subgraph

Forward Mode
The subgraph_jac_rev routine computes sparsity patterns of Jacobians using reverse mode. It is possible that a forward mode version of this method would be better for some cases.

Sparsity
The subgraph_sparsity calculation treats each atomic function call as if all of its outputs depend on all of its inputs; see atomic function . These sparsity patterns could be made more efficient (could have fewer possibly non-zero entries) by using the sparsity patterns for the atomic functions.

check_finite
  1. Sometimes one only gets an infinite value during zero order forward mode, and nan when computing the corresponding derivatives. Change check_for_nan to check_finite (i.e., check that values are neither infinite nor nan) so that error detection happens during zero order forward mode instead of later.
  2. In addition, the current check_for_nan writes the corresponding zero order values to a temporary file. It would be nice if the check_finite routine made writing these zero order values optional.


test_boolofvoid
For general purpose use, the test_boolofvoid should be usable without including a memory check at the end.

Example
Split the example list into separate groups by the corresponding example subdirectory.

Optimization

Atomic Functions
There is some confusion as to the value of the Taylor coefficients for atomic function arguments and results that have been optimized out. See atomic functions in optimize, 02-16 in whats new for 2021, optimize in atomic_three, and optimize in atomic_four.

Taping
Perhaps some of the optimization done while taping forward mode should be delayed to the optimization step.

Special Operators
Add special operators that can be implemented more efficiently, e.g.,
    square(x) = x * x
and have the optimizer recognize when they should be used. (They could also be in the user API, but it would not be expected that the user would use them.)
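
As a hypothetical sketch (the name square below is not part of the CppAD API), such a helper might look like:

    # include <cppad/cppad.hpp>

    // hypothetical helper: currently just a multiplication, but a special
    // operator could record and evaluate this with less work, and the
    // optimizer could also recognize the pattern x * x and replace it
    template <class Base>
    CppAD::AD<Base> square(const CppAD::AD<Base>& x)
    {   return x * x; }

    int main(void)
    {   using CppAD::AD;
        //
        CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(1);
        ax[0] = 3.0;
        CppAD::Independent(ax);
        ay[0] = square(ax[0]);          // would map to the special operator
        CppAD::ADFun<double> f(ax, ay);
        return 0;
    }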

Base Requirements
Change the Base requirements to use template specialization instead of functions so that there is a default value for each function. The default would result in a known assert when the operation is used and not defined by the base class. An example of this type of template specialization can be found in the implementation of to_string .
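
A minimal sketch of this idea (base_sign and sign below are illustrative names, not CppAD identifiers): the primary template supplies a default that triggers a known assert when used, and a specialization for a particular Base type supplies the real definition.

    # include <cassert>

    // primary template: default definition asserts when the operation is used
    template <class Base>
    struct base_sign {
        static Base sign(const Base& /* x */)
        {   assert( false && "sign is not defined for this Base type" );
            return Base(0);
        }
    };

    // specialization: defines the operation for Base = double
    template <>
    struct base_sign<double> {
        static double sign(const double& x)
        {   if( x > 0.0 ) return  1.0;
            if( x < 0.0 ) return -1.0;
            return 0.0;
        }
    };

    int main(void)
    {   // uses the specialization
        assert( base_sign<double>::sign(-3.0) == -1.0 );
        // base_sign<float>::sign(1.0f) would hit the default assert
        return 0;
    }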

Adolc
Create a documentation page that shows how to convert Adolc commands to CppAD commands.

Forward Mode Recomputation
If the results of forward_order have already been computed and are still stored in the ADFun object (see size_order ), then they do not need to be recomputed and the results can just be returned.
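
A minimal sketch of the situation in question (it only uses the documented Forward and size_order members):

    # include <cassert>
    # include <cppad/cppad.hpp>

    int main(void)
    {   using CppAD::AD;
        //
        // record y = x * x
        CPPAD_TESTVECTOR( AD<double> ) ax(1), ay(1);
        ax[0] = 2.0;
        CppAD::Independent(ax);
        ay[0] = ax[0] * ax[0];
        CppAD::ADFun<double> f(ax, ay);
        //
        // zero and first order forward mode
        CPPAD_TESTVECTOR(double) x0(1), x1(1);
        x0[0] = 2.0;
        x1[0] = 1.0;
        f.Forward(0, x0);
        f.Forward(1, x1);
        //
        // orders 0 and 1 are now stored in f; the wish is that repeating the
        // calls above, with the same arguments, would reuse these results
        assert( f.size_order() == 2 );
        return 0;
    }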

Iterator Interface
All of the CppAD simple vector interfaces should also have an iterator version (see the sketch after this list) for the following reasons:
  1. It would not be necessary to copy information to simple vectors when it was originally stored in a different type of container.
  2. It would not be necessary to reallocate memory for a result that is repeatedly calculated (because an iterator for the result container would be passed in).
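
A hypothetical sketch of the difference (the function names below are illustrative only, not part of CppAD):

    # include <vector>
    # include <list>

    // current style: a SimpleVector result is sized and returned by value
    template <class Vector>
    Vector jacobian_vector(const Vector& x)
    {   Vector jac( x.size() );
        for(size_t i = 0; i < x.size(); ++i)
            jac[i] = 2.0 * x[i];
        return jac;
    }

    // wished-for style: results are written through an output iterator, so
    // any container (or preallocated memory) can receive them without a copy
    template <class InputIt, class OutputIt>
    void jacobian_iterator(InputIt x_begin, InputIt x_end, OutputIt jac_out)
    {   for(InputIt it = x_begin; it != x_end; ++it)
            *jac_out++ = 2.0 * (*it);
    }

    int main(void)
    {   std::list<double> x = { 1.0, 2.0, 3.0 };   // not a SimpleVector
        std::vector<double> jac(3);                // preallocated result
        jacobian_iterator(x.begin(), x.end(), jac.begin());
        return 0;
    }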


Tracing
Add tracing of the operation sequence to the user API and documentation. Tracing the operation sequence is currently done by changing the CppAD source code. Use the command
 
    grep '^# *define *CPPAD_.*_TRACE' cppad/local/sweep*.hpp
to find all the possible tracing flags.

atan2
The atan2 function could be made faster by adding a special operator for it.