Optimization: In mathematics, computer science, economics, or management science, mathematical optimization (alternatively, optimization or mathematical programming) is the selection of a best element (with regard ...
Faster Optimization: Optimization problems are everywhere in engineering: balancing design tradeoffs is an optimization problem, as are scheduling and logistical planning. The theory (and sometimes the implementation) o...
Optimization problem: In mathematics and computer science, an optimization problem is the problem of finding the best solution from all feasible solutions. Optimization problems can be divided into two categories depending...
Multiobjective optimization: Multi-objective optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, multiattribute optimization, or Pareto optimization) is an area of multiple cri...
Karush–Kuhn–Tucker conditions: In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions (also known as the Kuhn–Tucker conditions) are first-order necessary conditions for a solution in nonlinear programming to be opti...
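As a tiny illustration of the stationarity part of these conditions, consider minimizing x² + y² subject to x + y = 1 (an example chosen here for illustration, not drawn from the source): at the minimizer, the gradient of the objective must be a multiple of the constraint gradient, and the point must be feasible.

```python
# A sketch of a KKT stationarity check for: minimize x^2 + y^2
# subject to g(x, y) = x + y - 1 = 0. The known minimizer is
# (1/2, 1/2) with multiplier lambda = 1 (illustrative example).

def grad_f(x, y):
    return (2 * x, 2 * y)   # gradient of the objective x^2 + y^2

def grad_g(x, y):
    return (1.0, 1.0)       # gradient of the constraint x + y - 1

x, y, lam = 0.5, 0.5, 1.0
gf, gg = grad_f(x, y), grad_g(x, y)
# Stationarity: grad f = lambda * grad g, componentwise.
stationary = all(abs(gf[i] - lam * gg[i]) < 1e-12 for i in range(2))
# Primal feasibility: the equality constraint holds.
feasible = abs(x + y - 1) < 1e-12
print(stationary and feasible)  # True
```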
Critical point (mathematics): In mathematics, a critical point or stationary point of a differentiable function of a real or complex variable is any value in its domain where its derivative is 0 or undefined. For a differentiable...
Differential calculus: In mathematics, differential calculus is a subfield of calculus concerned with the study of the rates at which quantities change. It is one of the two traditional divisions of calculus, the other bein...
Gradient: In mathematics, the gradient is a generalization of the usual concept of derivative of a function in one dimension to a function in several dimensions. If f(x1, ..., xn) is a differentiable, scalar-va...
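A minimal sketch of this idea in code, using only the standard library: the gradient of a differentiable scalar-valued f can be approximated componentwise by central differences. The test function and step size below are illustrative choices.

```python
# Approximate the gradient of f: R^n -> R at a point x by perturbing
# one coordinate at a time (central differences, step h).

def grad(f, x, h=1e-6):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def f(v):
    x, y = v
    return x * x + 3 * y  # analytic gradient: (2x, 3)

print(grad(f, [2.0, 1.0]))  # close to [4.0, 3.0]
```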
Hessian matrix: In mathematics, the Hessian matrix or Hessian is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of man...
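The same second-order partials can be estimated numerically; a rough sketch (test function and step size are illustrative) builds the matrix entry by entry from second-order central differences.

```python
# Numerical Hessian of f: R^n -> R via second-order central differences.
# For f(x, y) = x^2 * y the exact Hessian at (1, 2) is [[2y, 2x], [2x, 0]]
# = [[4, 2], [2, 0]].

def hessian(f, x, h=1e-4):
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp, xpm, xmp, xmm = list(x), list(x), list(x), list(x)
            xpp[i] += h; xpp[j] += h
            xpm[i] += h; xpm[j] -= h
            xmp[i] -= h; xmp[j] += h
            xmm[i] -= h; xmm[j] -= h
            # Mixed second difference; reduces to the usual second
            # difference (with step 2h) on the diagonal i == j.
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4 * h * h)
    return H

def f(v):
    x, y = v
    return x * x * y

print(hessian(f, [1.0, 2.0]))  # approximately [[4, 2], [2, 0]]
```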
Positive definite matrix: In linear algebra, a symmetric n × n real matrix M is said to be positive definite if z^T M z is positive for every non-zero column vector z of n real numbers. Here z^T denotes the transpose of z. More ...
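One standard way to test this property in code: a symmetric real matrix is positive definite exactly when it admits a Cholesky factorization M = L Lᵀ with strictly positive diagonal. A pure-Python sketch (the example matrices are illustrative):

```python
import math

# Attempt a Cholesky factorization; a non-positive pivot means the
# matrix is not positive definite.

def is_positive_definite(M):
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = M[i][i] - s
                if d <= 0:
                    return False  # pivot not positive
                L[i][j] = math.sqrt(d)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return True

print(is_positive_definite([[2.0, -1.0], [-1.0, 2.0]]))  # True
print(is_positive_definite([[1.0, 2.0], [2.0, 1.0]]))    # False
```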
Lipschitz continuity: In mathematical analysis, Lipschitz continuity, named after Rudolf Lipschitz, is a strong form of uniform continuity for functions. Intuitively, a Lipschitz continuous function is limited in how fast ...
Rademacher's theorem: In mathematical analysis, Rademacher's theorem, named after Hans Rademacher, states the following: if U is an open subset of R^n and f : U → R^m is Lipschitz continuous, then f is differentiable almost everywhere in U.
Convex function: In mathematics, a real-valued function f(x) defined on an interval is called convex (or convex downward, or concave upward) if the line segment between any two points on the graph of the function lies...
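The chord condition can be probed numerically: for a convex f, f(t·a + (1 − t)·b) ≤ t·f(a) + (1 − t)·f(b) for all a, b and t in [0, 1]. A rough sketch (sampling can only refute convexity, never prove it; the test functions are illustrative):

```python
# Check the chord inequality at t = 0.5 over sampled pairs of points;
# a violation certifies the function is NOT convex on the sampled range.

def violates_convexity(f, points, t=0.5):
    for a in points:
        for b in points:
            mid = t * a + (1 - t) * b
            if f(mid) > t * f(a) + (1 - t) * f(b) + 1e-12:
                return True
    return False

xs = [i / 10 for i in range(-30, 31)]
print(violates_convexity(lambda x: x * x, xs))   # False: x^2 is convex
print(violates_convexity(lambda x: -x * x, xs))  # True: -x^2 is concave
```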
Convex analysis: Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory...
Nonlinear programming: In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem defined by a system of equalities and inequalities, collectively termed constraints, over a set of unknown...
Iterative method: In computational mathematics, an iterative method is a mathematical procedure that generates a sequence of improving approximate solutions for a class of problems. A specific implementation of an iter...
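A concrete instance of such a procedure is fixed-point iteration, which repeatedly applies x ← g(x) and converges when |g′| < 1 near the fixed point. A minimal sketch (the equation cos(x) = x, tolerance, and starting point are illustrative choices):

```python
import math

# Fixed-point iteration: generate x, g(x), g(g(x)), ... until two
# successive approximations agree to within tol.

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

root = fixed_point(math.cos, 1.0)  # solves cos(x) = x
print(root)  # about 0.739085
```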
Newton's method: In numerical analysis, Newton's method (also known as the Newton–Raphson method), named after Isaac Newton and Joseph Raphson, is a method for finding successively better approximations to the roots (...
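The update rule is x ← x − f(x)/f′(x), repeated until the step is negligible. A minimal sketch, with f and its derivative supplied explicitly; the example (√2 as a root of x² − 2) is an illustrative choice:

```python
# Newton's method for root finding: each iteration linearizes f at the
# current point and jumps to the zero of that linearization.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # about 1.41421356
```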
Quasi-Newton method: Quasi-Newton methods are used to find either zeroes or local maxima and minima of functions, as an alternative to Newton's method. They can be used if the Jacobian or Hessian is unavailable or...
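In one dimension, the simplest scheme in this family is the secant method: it replaces the derivative in Newton's update with a difference quotient built from the last two iterates, so no analytic f′ is required. A sketch (the test equation and starting points are illustrative):

```python
# Secant method: Newton's update with f'(x) approximated by
# (f(x1) - f(x0)) / (x1 - x0) from the two most recent iterates.

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1 - f0) < 1e-300:
            break  # flat secant line; cannot continue
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f(x2)
    return x1

root = secant(lambda x: x ** 3 - x - 2, 1.0, 2.0)
print(root)  # about 1.52138
```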
Finite difference: A finite difference is a mathematical expression of the form f(x + b) − f(x + a). If a finite difference is divided by b − a, one gets a difference ...
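The expression above, made concrete: dividing the finite difference f(x + b) − f(x + a) by b − a gives a difference quotient that approximates f′(x) for small steps. The choices a = 0 and b = 1e-6 below are illustrative.

```python
import math

# Difference quotient built directly from the definition in the text.

def difference_quotient(f, x, a, b):
    return (f(x + b) - f(x + a)) / (b - a)

approx = difference_quotient(math.sin, 0.0, 0.0, 1e-6)
print(approx)  # close to cos(0) = 1
```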
Approximation theory: In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. Note that wh...
Numerical Analysis: Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete ...
Heuristic algorithm: In computer science, artificial intelligence, and mathematical optimization, a heuristic is a technique designed for solving a problem more quickly when classic methods are too slow, or for finding an...
List of optimization software: Given a transformation between input and output values, described by a mathematical function f, optimization deals with generating and selecting a best solution from some set of available alternatives,...
Bihari's inequality: Bihari's inequality, proved by the Hungarian mathematician Imre Bihari (1915–1998), is the following nonlinear generalization of Grönwall's lemma. Let u and f be non-negative continuous functions defi...
Optimal control: Optimal control theory, an extension of the calculus of variations, is a mathematical optimization method for deriving control policies. The method is largely due to the work of Lev Pontryagin and his...
Differential Galois theory: In mathematics, differential Galois theory studies the Galois groups of differential equations. Whereas algebraic Galois theory studies extensions of algebraic fields, differential Galois theory s...
Radon–Nikodym theorem: In mathematics, the Radon–Nikodym theorem is a result in measure theory which states that, given a measurable space (X, Σ), if a σ-finite measure ν on (X, Σ) is absolutely continuous with respect to a σ-finite m...
Replicator equation: In mathematics, the replicator equation is a deterministic, monotone, non-linear, and non-innovative game dynamic used in evolutionary game theory. The replicator equation differs from other equations us...
Lie theory: Lie theory (/ˈliː/ LEE) is one of the areas of mathematics, developed initially by Sophus Lie and worked out by Wilhelm Killing and Élie Cartan. The foundation of Lie theory is the exponential map rel...
Differentiation in Fréchet spaces: In mathematics, in particular in functional analysis and nonlinear analysis, it is possible to define the derivative of a function between two Fréchet spaces. This notion of differentiation is signif...
Logistic function: A logistic function or logistic curve is a common "S" shape (sigmoid curve), with equation f(x) = L / (1 + e^(−k(x − x0))), where e = the natural logarithm base (also known as Euler's number), x0 = the x-value of the sigmoid's midpoin...
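The curve is straightforward to implement; a sketch with the conventional parameter names L (ceiling), k (steepness), and x0 (midpoint), which follow the usual convention rather than any particular library:

```python
import math

# Standard logistic curve f(x) = L / (1 + e^{-k(x - x0)}).
# With L = 1, k = 1, x0 = 0 this is the familiar sigmoid.

def logistic(x, L=1.0, k=1.0, x0=0.0):
    return L / (1.0 + math.exp(-k * (x - x0)))

print(logistic(0.0))  # 0.5 at the midpoint
print(logistic(6.0))  # close to the ceiling L = 1
```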