Optimization is an activity that aims at finding the best (i.e., optimal) solution to a problem. For optimization to be meaningful there must be an OBJECTIVE FUNCTION (see below) to be optimized and there must exist more than one FEASIBLE SOLUTION, i.e., a solution which does not violate the constraints. The term optimization does not usually apply when the number of solutions permits the best to be chosen by inspection, using an appropriate criterion (see DECISION THEORY). One distinguishes SINGLE OBJECTIVE and MULTIOBJECTIVE optimization. In the first case, the objective is SCALAR-VALUED (it can be measured by a single number); in the second, the objective is VECTOR-VALUED (its value is expressed by an n-tuple of numbers). In mathematical terms, the formulation of an optimization problem involves decision variables, X1, X2, ..., Xn, the objective function,

Q = f(X1, X2, ..., Xn)

and constraint relations, usually of the form

Gi(X1, X2, ..., Xn) >= 0,   i = 1, 2, ..., m.
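This formulation can be illustrated with a minimal sketch; the particular objective f, the constraints g1 and g2, and the brute-force grid search below are hypothetical choices for illustration, not from the source.

```python
# Hypothetical instance of the general formulation:
# maximize Q = f(x1, x2) subject to gi(x1, x2) >= 0.

def f(x1, x2):
    # Objective function Q = f(x1, x2); an arbitrary illustrative choice.
    return x1 + 2 * x2

def g1(x1, x2):
    # Constraint relation g1 >= 0, equivalent to x1 + x2 <= 4.
    return 4 - (x1 + x2)

def g2(x1, x2):
    # Constraint relation g2 >= 0, i.e. nonnegativity of x1.
    return x1

def is_feasible(x1, x2):
    # A FEASIBLE SOLUTION violates none of the constraints.
    return g1(x1, x2) >= 0 and g2(x1, x2) >= 0 and x2 >= 0

# Brute-force search over a coarse grid for the optimal solution.
best = max(
    ((x1, x2) for x1 in range(5) for x2 in range(5) if is_feasible(x1, x2)),
    key=lambda p: f(*p),
)
```

Here the maximum of f over the feasible grid points is attained at x1 = 0, x2 = 4, where Q = 8; such exhaustive search is only workable for tiny problems, which is why the numerical techniques named below exist.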

The OPTIMAL SOLUTION (or "solution to the optimization problem") is a set of values of the decision variables x1, x2, ..., xn that satisfy the constraints and for which the objective function attains a maximum (or a minimum, in a minimization problem). Very few optimization problems can be solved analytically, that is, by means of explicit formulae. In most practical cases appropriate computational techniques of optimization (numerical procedures of optimization) must be used. Among those techniques, LINEAR PROGRAMMING permits the solution of problems in which the objective function and all constraint relations are linear. NONLINEAR PROGRAMMING does not have this restriction, but can handle many fewer decision variables and constraints. INTEGER PROGRAMMING serves to solve problems where the decision variables can take only integer values. STOCHASTIC or PROBABILISTIC PROGRAMMING must be used for problems where the objective function or the constraint relations contain random-valued parameters (in the latter case, the problem is referred to as a CHANCE-CONSTRAINED PROBLEM).

A special case is dynamic optimization problems, where the decision variables are not real numbers or integers but functions of one or more independent variables -- functions of time or space coordinates, for example. Dynamic optimization problems are sometimes referred to as "optimal control problems." There exist special techniques to solve such problems; they often make use of DISCRETIZATION of the independent variables, for example dividing the time axis into a number of intervals and considering the solutions to be constant over those intervals.

A single-objective optimization problem may have (and usually does have) a single-valued, unique solution.
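The discretization idea for dynamic problems can be sketched as follows; the dynamics x' = u, the control alphabet {-1, 0, 1}, and the quadratic terminal cost are hypothetical choices for illustration, not from the source.

```python
# Illustrative sketch of DISCRETIZATION: the time axis is divided into N
# intervals and the control u(t) is held constant over each interval, which
# turns a dynamic optimization problem into a finite search over N-tuples.

from itertools import product

N = 3          # number of time intervals
DT = 1.0       # length of each interval
TARGET = 2.0   # desired final state

def cost(controls, x0=0.0):
    # Simulate x' = u with piecewise-constant u, then penalize the
    # squared distance of the final state from TARGET (minimization).
    x = x0
    for u in controls:
        x += u * DT
    return (x - TARGET) ** 2

# Brute-force search over all piecewise-constant controls from {-1, 0, 1}.
best = min(product((-1, 0, 1), repeat=N), key=cost)
```

Any control sequence whose steps sum to the target (e.g. one idle interval and two unit steps) drives the cost to zero; with finer discretization the search space grows exponentially, which is why practical optimal-control solvers use gradient-based or dynamic-programming methods instead of enumeration.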
The solution to a multiobjective problem is, as a rule, not a particular value but a set of values of the decision variables such that, for each element of this set, none of the objective functions can be further increased without a decrease in some of the remaining objective functions (every such value of the decision variables is referred to as PARETO-OPTIMAL). (IIASA)
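The Pareto-optimality condition above can be sketched directly; the candidate objective vectors below are hypothetical data for illustration, not from the source.

```python
# Illustrative sketch of extracting the Pareto-optimal set for a
# maximization problem with two objectives: a point is Pareto-optimal if no
# other point is at least as good in every objective and strictly better
# in at least one.

def dominates(a, b):
    # True if objective vector a is at least as good as b everywhere
    # and strictly better somewhere.
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_set(points):
    # Keep only the points that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical objective vectors (f1, f2); (1, 1) is dominated by (2, 2),
# while (3, 1) and (1, 3) trade one objective against the other.
candidates = [(1, 1), (2, 2), (3, 1), (1, 3)]
```

Note that the result is a set of mutually non-dominated points rather than a single optimum: moving from (3, 1) to (1, 3) increases f2 only at the expense of f1, exactly the trade-off the definition describes.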
