CONOPT is a solver for large-scale nonlinear optimization (NLP) developed and maintained by ARKI Consulting & Development A/S in Bagsvaerd, Denmark. It has been under continuous development for over 25 years and is one of the few solvers on the market that can handle large nonlinear models. Models with over 10,000 constraints are routinely solved, and specialized models with up to one million constraints have also been solved with CONOPT.
CONOPT is a feasible path solver based on the proven Generalized Reduced Gradient (GRG) method with many newer extensions. CONOPT has been designed to be efficient and reliable for a broad class of models. The original GRG method provides reliability and speed for models with a large degree of nonlinearity. Extensions to the GRG method, such as preprocessing, a special phase 0, linear mode iterations, and sequential linear programming and sequential quadratic programming components, make CONOPT efficient on easier and mildly nonlinear models as well. The multi-method architecture of CONOPT, combined with built-in logic for dynamic selection of the most appropriate method, makes CONOPT a strong all-round NLP solver.
CONOPT incorporates a preprocessor that scans for pre-triangular equations, fixes the corresponding variables, and removes those constraints from further consideration. This can lead to an effective reduction in the size of the problem at hand. In the latest version, new second-derivative-based methods have been implemented that make highly nonlinear models easier and faster to solve.
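For illustration, consider a constraint of the following form, where Capacity is a data constant and StartLevel is a decision variable (both names are hypothetical). Once the data are known, the constraint contains only one unknown, so it is pre-triangular: the preprocessor can solve it for StartLevel, fix that variable, and drop the constraint from the problem:

SUBJECT TO
   SetStart: StartLevel = 0.25 * Capacity;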
To give CONOPT, or indeed any other nonlinear solver, the best chance of finding good solutions quickly, good modeling practices should be followed. Bounding is extremely important in nonlinear problems: setting tight and realistic bounds on variables restricts the search space the algorithm has to explore (a BOUNDS sketch follows the example below). Simplifying terms also makes derivative calculations easier, so for complex nonlinear expressions it is best, where possible, to use intermediate variables to simplify the terms, e.g.:
Elasticity = EXP(SUM(I: (X-Y)^2));
This can be rewritten as:
IntTerm = SUM(I: (X-Y)^2);
Elasticity = EXP(IntTerm);
The second formulation has much simpler derivatives, as EXP is applied to a single variable rather than to a combination of variables.
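Returning to the point about bounds: tight, realistic bounds can be stated in MPL's BOUNDS section. A minimal sketch, where Flow, arc, and MaxFlow are hypothetical names for a decision variable, an index, and a data vector:

BOUNDS
   Flow[arc] >= 0;
   Flow[arc] <= MaxFlow[arc];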
The progress of the algorithm relies on good directional information, so whenever possible provide good initial starting values. Initial values can be set in MPL as follows:
VARIABLE P[i] INITIAL P0;
Note that both data vectors and numeric constants can be used.
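For instance, following the pattern above, both of the following forms should be accepted (Price, product, Flow, arc, and FlowStart are hypothetical names used only for illustration; FlowStart is a data vector indexed over arc):

VARIABLE Price[product] INITIAL 125.0;
VARIABLE Flow[arc] INITIAL FlowStart;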
CONOPT is designed to adapt during the search process; many of its tolerances are dynamic, so in most cases the default settings suffice. There are, however, some settings worth trying if the defaults give inadequate results. By default, CONOPT uses a steepest-edge procedure in the linear mode phases of its algorithm. As with LPs, this can be computationally expensive; if a large amount of time is spent in the linear mode phases (phases 1 and 3), consider turning this option ("ActivateSteepestEdge") off (an option-setting sketch follows the next paragraph). Steepest edge typically works best for models with fewer than 5,000 constraints.
CONOPT can use exact second-order derivative information, which is incorporated into its Sequential Quadratic Programming (SQP) procedure to improve the directional search. Highly nonlinear models generally perform very well with the SQP procedure. For these types of models, activating SQP should be considered by setting the option "SDEvalUse" to one.
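As an illustration only, the sketch below assumes that these CONOPT option names can be assigned in the model's OPTIONS section and that off/on map to 0/1; the exact mechanism and accepted values are documented on the Conopt Option Parameters page:

OPTIONS
   ActivateSteepestEdge = 0;   ! assumed form: turn steepest-edge pricing off
   SDEvalUse = 1;              ! assumed form: use exact 2nd derivatives in the SQP step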
CONOPT reports progress on the optimization at regular intervals. MPL displays this information in the message window, and it can also be relayed to a log file. A typical log has the following appearance:
CONOPT:
CONOPT:    C O N O P T 3   version 3.14D
CONOPT:    Copyright (C) ARKI Consulting and Development A/S
CONOPT:                  Bagsvaerdvej 246 A
CONOPT:                  DK-2880 Bagsvaerd, Denmark
CONOPT:
CONOPT:    Using default options.
CONOPT:
CONOPT:
CONOPT:    Iter Phase Ninf   Infeasibility   RGmax    NSB   Step InItr MX OK
CONOPT:       0   0        1.5293000000E+05               (Input point)
CONOPT:                    Pre-triangular equations:    0
CONOPT:                    Post-triangular equations:   1
CONOPT:       1   0        1.2718820000E+05               (After pre-processing)
CONOPT:       2   0        1.9885058594E+01               (After scaling)
CONOPT:
CONOPT:  ** Feasible solution. Value of objective =   6528635.36973
CONOPT:
CONOPT:    Iter Phase Ninf     Objective     RGmax    NSB   Step InItr MX OK
CONOPT:      11   3      4.5660359865E+06 1.0E+06     8 7.7E-01     8 F  F
CONOPT:      21   4      4.0932649919E+06 8.9E+02     4 1.0E+00       F  T
CONOPT:      31   4      4.0918938925E+06 9.5E+01     5 1.0E+00       F  T
CONOPT:      39   4      4.0918937239E+06 2.8E-09     5
CONOPT:
CONOPT:  ** Optimal solution. Reduced gradient less than tolerance.
CONOPT:
CONOPT:
CONOPT:  CONOPT time Total                          0.094 seconds
CONOPT:    of which: Function evaluations           0.000 =  0.0%
CONOPT:              1st Derivative evaluations     0.000 =  0.0%
CONOPT:

Solver Statistics
Solver name:       Conopt
Objective value:   4091893.72391576
Iterations:        39
Solution time:     0.11 sec
Result code:       2

STATUS: Locally optimal solution found
The log first states which version of CONOPT is being used and then shows the progress of the search through the various phases. The model is infeasible during phases 0 through 2, so the log reports the sum of infeasibilities at those iterations. In phases 3 and 4 the model is feasible, and the objective value is shown in the fourth column. RGmax is the largest reduced gradient among the non-optimal variables; ideally it converges to zero. Upon completion of the optimization, CONOPT displays a summary of the solve, reporting the total time and the amount spent on the various evaluations.
For a full description of all the Conopt parameters that are supported in MPL, please go to the Conopt Option Parameters page.