For nonconvex optimization problems, zero duality gap and saddle-point properties can be established by means of a generalized Lagrangian function that satisfies suitable properties. This fact was originally proved by Rockafellar and Wets in 2007 in finite dimensions and has been extended in various ways over the last decade. A main advantage of this approach is that the resulting dual problem is convex, and hence tractable by standard techniques. In this way, the optimal value, and sometimes even a solution, of the original problem can be obtained by solving the dual problem with nonsmooth convex techniques.
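The scheme can be summarized as follows (a minimal sketch in standard notation; the symbols X, Y, f, L, q are generic and not taken from the talk):

```latex
% Primal problem, expressed through a generalized Lagrangian L on X x Y:
\min_{x \in X} f(x), \qquad \text{where } f(x) = \sup_{y \in Y} L(x,y).
% Dual function and dual problem:
q(y) = \inf_{x \in X} L(x,y), \qquad \max_{y \in Y} q(y).
% Since q is a pointwise infimum of functions concave in y, the dual
% problem is convex regardless of any convexity of the primal.
% Zero duality gap means the two optimal values coincide:
\inf_{x \in X} \sup_{y \in Y} L(x,y) \;=\; \sup_{y \in Y} \inf_{x \in X} L(x,y).
```

The convexity of the dual is what makes it tractable by standard (nonsmooth) convex methods, even when the primal problem is nonconvex.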
In the first part of the talk, we will recall some recent advances and applications of this fact in nonconvex duality. We will show how techniques from nonsmooth convex analysis can be incorporated into this duality scheme to provide a solution of the original (nonconvex/nonsmooth) problem.
In the second part of the talk, we will report on some new results involving a sequence of dual problems that converge (in a suitable sense) to a given dual problem (called the asymptotic dual problem). This model can be useful within an iterative scheme in which (i) we use a sequence of smooth approximations of a nonsmooth Lagrangian, or (ii) we want to incorporate current information to update the Lagrangian at each iteration. For this asymptotic duality, we establish hypotheses under which a zero duality gap holds. We illustrate the new results in the context of equality-constrained problems.
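For the equality-constrained setting, one natural instance of this framework (an illustration under assumed notation, not necessarily the construction used in the talk) is the classical augmented Lagrangian:

```latex
% Equality-constrained problem:
\min_{x} \; f_0(x) \quad \text{s.t.} \quad h(x) = 0 \in \mathbb{R}^m,
% with augmented Lagrangian, for a penalty parameter r > 0:
L_r(x,y) = f_0(x) + \langle y, h(x) \rangle + \tfrac{r}{2}\,\|h(x)\|^2.
% A sequence of dual problems may arise, for instance, from a sequence
% of Lagrangians L_{r_k} with r_k increasing, or from smooth
% approximations L^{(k)} of a nonsmooth L; the asymptotic dual problem
% is then the limiting problem \sup_{y} \inf_{x} L(x,y).
```

In such schemes, each iteration updates the Lagrangian (via the parameter or the approximation) before solving, or partially solving, the corresponding dual problem.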