where the objective function is the Lagrange dual function. Provided that the functions $f$ and $g_1, \ldots, g_m$ are convex and continuously differentiable, the infimum occurs where the gradient is equal to zero. The problem

$$\max_{x, u} \; f(x) + u^{\mathrm{T}} g(x)$$
$$\text{subject to } \nabla f(x) + \sum_{j=1}^{m} u_j \, \nabla g_j(x) = 0, \quad u \geq 0$$
is called the Wolfe dual problem.[2] This problem uses the KKT conditions as constraints. Moreover, the equality constraint $\nabla f(x) + \sum_{j} u_j \nabla g_j(x) = 0$ is nonlinear in general, so the Wolfe dual problem may be a nonconvex optimization problem. In any case, weak duality holds.[3]
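To make the construction concrete, the following sketch works through a small example of my own (not from the article): the convex QP $\min_x x^2$ subject to $1 - x \leq 0$. The stationarity constraint $f'(x) + u\,g'(x) = 2x - u = 0$ lets us eliminate the multiplier, and a simple grid search then illustrates that the Wolfe dual value never exceeds the primal optimum (weak duality), and coincides with it here since the problem is convex.

```python
# Hypothetical illustration (not from the article): primal problem
#   minimize f(x) = x^2   subject to g(x) = 1 - x <= 0.
# Stationarity: f'(x) + u*g'(x) = 2x - u = 0, hence u = 2x, and the
# Wolfe dual objective reduces to
#   f(x) + u*g(x) = x^2 + 2x*(1 - x) = 2x - x^2,
# maximized over x with the dual-feasibility requirement u = 2x >= 0.

def wolfe_dual_objective(x):
    """Lagrangian evaluated on the stationarity manifold u = 2x."""
    u = 2.0 * x
    return x**2 + u * (1.0 - x)

# Coarse grid search over x >= 0 (the slice where u = 2x is feasible).
xs = [i / 1000.0 for i in range(3001)]
dual_best = max(wolfe_dual_objective(x) for x in xs)

primal_opt = 1.0  # f(x*) = 1 at the primal solution x* = 1

# Weak duality: dual value <= primal optimum; equality here by convexity.
print(dual_best, primal_opt)  # → 1.0 1.0
```

In this one-dimensional case the multiplier can be eliminated in closed form; in general the Wolfe dual is a joint maximization over $(x, u)$ with the gradient condition kept as an explicit (and typically nonlinear) constraint.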
Geoffrion, Arthur M. (1971). "Duality in Nonlinear Programming: A Simplified Applications-Oriented Development". SIAM Review. 13 (1): 1–37. doi:10.1137/1013001. JSTOR 2028848.