aka convex programming
Convex optimization refers to an Optimization problem in which the objective function and the inequality constraint functions are convex (and any equality constraints are affine). This implies that the feasible set is a convex set. Convexity makes optimization in some sense "easier" than the general case: for example, any local minimum must be a global minimum. It is a generalization of Linear programming.
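A quick way to see the local-equals-global property numerically: minimize a convex function from several starting points and check that every run lands at the same minimizer. A minimal sketch using numpy and scipy (the objective and data are made up for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# A convex objective: f(x) = ||Ax - b||^2 (a least-squares problem).
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
b = rng.normal(size=20)

def f(x):
    r = A @ x - b
    return r @ r

# Since f is convex, any local minimum the solver finds is global:
# widely separated starting points all converge to the same point.
for _ in range(3):
    x0 = rng.normal(scale=10.0, size=3)
    print(minimize(f, x0).x)  # identical up to solver tolerance
```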
See slides, and these notes (in particular Part I)
https://www.stats.ox.ac.uk/~lienart/blog_optimization.html
https://www.wikiwand.com/en/Convex_optimization
Convex optimization problems include least-squares fitting (see Linear regression) and Linear programming problems.
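A minimal sketch of both problem classes, using numpy's least-squares solver and scipy's LP solver (the data is made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Least squares: minimize ||Ax - b||^2 (unconstrained, convex).
A = rng.normal(size=(30, 4))
b = rng.normal(size=30)
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# Linear program: minimize c^T x  s.t.  x1 + x2 >= 1,  x >= 0.
c = np.array([1.0, 2.0])
G = np.array([[-1.0, -1.0]])  # constraint rewritten as -x1 - x2 <= -1
h = np.array([-1.0])
lp = linprog(c, A_ub=G, b_ub=h, bounds=[(0, None), (0, None)])

print(x_ls, lp.x)  # lp.x should be (1, 0)
```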
See Learning theory, Andrew Ng video
Dual problem; weak duality follows from the Max–min inequality.
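Spelled out, for minimize f(x) subject to g_i(x) <= 0 (the standard construction, not taken from the slides):

```latex
% Lagrangian and dual function:
L(x, \lambda) = f(x) + \textstyle\sum_i \lambda_i g_i(x), \qquad
d(\lambda) = \inf_x L(x, \lambda)

% Weak duality is exactly the max-min inequality (for \lambda \ge 0):
\sup_{\lambda \ge 0} \inf_x L(x, \lambda)
  \;\le\; \inf_x \sup_{\lambda \ge 0} L(x, \lambda)
  \;=\; \inf_{x \,:\, g(x) \le 0} f(x)
```

The dual problem maximizes d(λ) over λ ≥ 0; its optimal value lower-bounds the primal optimum (weak duality), with equality under conditions such as Slater's (strong duality).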
Log-barrier methods. See page 51 here
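A toy sketch of the idea (presumably what the page above covers properly): fold the inequality constraints into the objective via -log of the slacks, then re-solve while increasing the weight t, so the iterates trace the central path toward the constrained optimum. The problem and solver choice below are made up for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Toy LP: minimize c^T x  subject to  x >= 0  and  x1 + x2 >= 1.
c = np.array([1.0, 2.0])

def barrier_obj(x, t):
    # t * (c^T x) minus a log-barrier for each inequality's slack.
    slacks = np.array([x[0], x[1], x[0] + x[1] - 1.0])
    if np.any(slacks <= 0):
        return np.inf  # outside the strictly feasible region
    return t * (c @ x) - np.log(slacks).sum()

x = np.array([0.5, 0.5])  # strictly feasible starting point
for t in [1.0, 10.0, 100.0, 1000.0]:
    x = minimize(barrier_obj, x, args=(t,), method="Nelder-Mead").x
    print(t, x)  # approaches the LP optimum (1, 0) as t grows
```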
Projected subgradients, conditional gradients
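Conditional gradients (Frank–Wolfe) solve a linear subproblem over the feasible set instead of projecting; the sketch below shows the projected variant, where projection onto a box is just coordinate-wise clipping (problem data made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

# Minimize ||Ax - b||^2 subject to 0 <= x <= 1: take a gradient step,
# then project back onto the feasible box.
x = np.zeros(5)
step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)  # 1/L for this objective
for _ in range(500):
    grad = 2 * A.T @ (A @ x - b)
    x = np.clip(x - step * grad, 0.0, 1.0)    # projection = clipping
print(x)
```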
Newton's method, quasi-Newton
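A sketch of the Newton iteration on a smooth, strongly convex objective (ridge-regularized least squares, so the Hessian is constant and positive definite; data made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
lam = 0.1

# f(x) = ||Ax - b||^2 + lam ||x||^2: gradient and (constant) Hessian.
H = 2 * (A.T @ A + lam * np.eye(5))
def grad(x):
    return 2 * (A.T @ (A @ x - b) + lam * x)

x = np.zeros(5)
for _ in range(5):
    d = np.linalg.solve(H, -grad(x))  # Newton step: solve, don't invert
    x = x + d  # a quadratic is minimized exactly in one Newton step
print(x)
```

Quasi-Newton methods (e.g. BFGS) replace the exact Hessian solve with an approximation built from successive gradient differences.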
Conjugate gradient
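A sketch of the classic CG loop for Hx = b with H symmetric positive definite, which is the same thing as minimizing the convex quadratic ½xᵀHx - bᵀx (scipy.sparse.linalg.cg is the library version):

```python
import numpy as np

def conjugate_gradient(H, b, tol=1e-10):
    """Solve H x = b for symmetric positive-definite H."""
    x = np.zeros(len(b))
    r = b - H @ x              # residual = negative gradient
    p = r.copy()               # first search direction
    rs = r @ r
    for _ in range(len(b)):    # exact in at most n steps (in theory)
        Hp = H @ p
        alpha = rs / (p @ Hp)  # exact line search along p
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # keep directions H-conjugate
        rs = rs_new
    return x

rng = np.random.default_rng(4)
M = rng.normal(size=(5, 5))
H = M @ M.T + 5 * np.eye(5)    # SPD by construction
b = rng.normal(size=5)
print(np.allclose(conjugate_gradient(H, b), np.linalg.solve(H, b)))
```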
Bundle algorithms
Cutting-plane algorithms
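Both build a piecewise-linear model of the objective from subgradients; a Kelley-style cutting-plane sketch (toy problem and box made up for illustration, with scipy's LP solver as the master problem):

```python
import numpy as np
from scipy.optimize import linprog

def f(x):
    return np.sum(x ** 2) + np.abs(x[0] - 1.0)  # convex, nonsmooth

def subgrad(x):
    g = 2 * x
    g[0] += np.sign(x[0] - 1.0)  # a subgradient of the |.| term
    return g

n, lo, hi = 2, -2.0, 2.0
x = np.full(n, 2.0)
cuts = []  # each cut encodes: v >= f(x_k) + g_k @ (x - x_k)
for _ in range(30):
    g = subgrad(x)
    cuts.append((g, g @ x - f(x)))
    # Master LP over (x, v): min v  s.t.  g_k @ x - v <= g_k @ x_k - f(x_k)
    A_ub = np.array([np.append(gk, -1.0) for gk, _ in cuts])
    b_ub = np.array([ck for _, ck in cuts])
    c = np.append(np.zeros(n), 1.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(lo, hi)] * n + [(None, None)])
    x = res.x[:n]  # minimizer of the current piecewise-linear model
print(x, f(x))  # approaches the true minimizer (0.5, 0)
```

Bundle methods stabilize this with a proximal term around the best iterate so the model cannot throw the next point far away.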
Interior-point methods, like in linear programming.
See here
See Stochastic gradient descent and Gradient descent
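A sketch of both on the same least-squares objective: full-batch gradient descent with a constant 1/L step, then a stochastic variant that estimates the gradient from random mini-batches (all data made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(100, 5))
b = rng.normal(size=100)

def grad(x, idx=slice(None)):
    # Gradient of ||Ax - b||^2, restricted to rows idx (for SGD).
    return 2 * A[idx].T @ (A[idx] @ x - b[idx])

# Gradient descent: constant step 1/L, where L = 2 ||A||_2^2.
x = np.zeros(5)
step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)
for _ in range(2000):
    x -= step * grad(x)

# SGD: rescaled mini-batch gradient (unbiased) with a decaying step;
# converges to a neighborhood of the optimum, not the exact point.
x_sgd = np.zeros(5)
for t in range(2000):
    idx = rng.integers(0, len(b), size=10)
    x_sgd -= step / np.sqrt(t + 1) * grad(x_sgd, idx) * (len(b) / 10)

print(x, x_sgd)  # both near the least-squares solution
```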
See more on this theory in Convex optimization heuristics for linear inverse problems and Linear inverse problem.