Nov 7, 2024
Accelerating First-Order Algorithms for High-Dimensional Minimax Optimization
This study introduces two first-order algorithms for high-dimensional minimax optimization: Accelerated Momentum Descent Ascent (AMDA) and Accelerated Variance-Reduced Gradient Descent Ascent (AVRGDA). These methods aim to address common challenges
in nonconvex optimization, such as slow convergence and high computational cost. AMDA uses momentum-based updates to smooth the optimization trajectory, reducing oscillations and accelerating convergence, particularly on nonconvex-strongly-concave problems.
AVRGDA incorporates adaptive learning rates that adjust dynamically with the gradient norm, making its variance reduction more efficient and better suited to high-dimensional optimization tasks. In experiments on adversarial training and large-scale
logistic regression, both methods deliver shorter training times, greater robustness, and lower computational cost than traditional first-order methods. Theoretical analysis shows that AMDA and AVRGDA achieve convergence rates of O(ϵ^{-3}) and O(ϵ^{-2.5}), respectively, on high-dimensional, nonconvex minimax problems, confirming their efficiency and robustness in practical applications.
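To make the momentum-driven update concrete, the sketch below shows a generic momentum descent-ascent loop for min_x max_y f(x, y). It is a minimal illustration of the idea described above, not the paper's AMDA algorithm: the function names, step sizes, and momentum coefficient are assumptions chosen for readability.

```python
import numpy as np

def momentum_descent_ascent(grad_x, grad_y, x0, y0,
                            eta_x=1e-3, eta_y=1e-2, beta=0.9, iters=1000):
    """Hypothetical momentum descent-ascent loop for min_x max_y f(x, y).

    grad_x(x, y) and grad_y(x, y) return the partial gradients of f.
    A momentum buffer per variable smooths successive updates, which is
    the momentum-driven behaviour described in the abstract; the step
    sizes and beta are illustrative defaults, not the paper's values.
    """
    x, y = x0.copy(), y0.copy()
    m_x = np.zeros_like(x)
    m_y = np.zeros_like(y)
    for _ in range(iters):
        m_x = beta * m_x + (1.0 - beta) * grad_x(x, y)
        m_y = beta * m_y + (1.0 - beta) * grad_y(x, y)
        x -= eta_x * m_x  # descent step on the minimization variable
        y += eta_y * m_y  # ascent step on the maximization variable
    return x, y
```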
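Similarly, a variance-reduced descent-ascent step with gradient-norm-dependent step sizes can be sketched as follows. This is only an SVRG-style approximation of the behaviour the abstract describes; the estimator, the eta0 / (norm + delta) scaling, and all constants are assumptions rather than AVRGDA's actual update rules.

```python
import numpy as np

def variance_reduced_descent_ascent(grad_x_i, grad_y_i, n, x0, y0,
                                    eta0=1e-2, delta=1e-8,
                                    epochs=10, inner=50, batch=8):
    """Hypothetical variance-reduced descent-ascent loop (SVRG-style).

    grad_x_i(x, y, idx) and grad_y_i(x, y, idx) return mini-batch gradients
    over the sample indices idx; n is the number of samples. A full-gradient
    snapshot anchors the variance-reduced estimator, and each step size is
    rescaled by the estimator's norm, mirroring the gradient-norm-adaptive
    behaviour described in the abstract. Names and constants are illustrative.
    """
    rng = np.random.default_rng(0)
    x, y = x0.copy(), y0.copy()
    for _ in range(epochs):
        # Full-gradient snapshot at the reference point.
        x_ref, y_ref = x.copy(), y.copy()
        full_gx = grad_x_i(x_ref, y_ref, np.arange(n))
        full_gy = grad_y_i(x_ref, y_ref, np.arange(n))
        for _ in range(inner):
            idx = rng.choice(n, size=batch, replace=False)
            # SVRG-style variance-reduced gradient estimators.
            vx = grad_x_i(x, y, idx) - grad_x_i(x_ref, y_ref, idx) + full_gx
            vy = grad_y_i(x, y, idx) - grad_y_i(x_ref, y_ref, idx) + full_gy
            # Step sizes shrink when the estimated gradients are large.
            x -= eta0 / (np.linalg.norm(vx) + delta) * vx
            y += eta0 / (np.linalg.norm(vy) + delta) * vy
    return x, y
```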