ZHAO Tingting (趙婷婷), WANG Xiangmei (王湘美)
Abstract  The subgradient method is one of the classical and effective methods for solving large-scale convex optimization problems, and the choice of step sizes plays a crucial role in its convergence. Goffin and Kiwiel (1999) proposed a subgradient algorithm with a dynamic step size rule; by modifying one of its parameters, we propose an improved dynamic-step-size subgradient algorithm and prove its convergence. Finally, numerical experiments show that the improved algorithm is more effective than the original one.
Keywords  Computational mathematics; Convex optimization; Subgradient algorithm; Dynamic step size
CLC number O224    Document code A
Abstract  The subgradient algorithm is one of the classical and important algorithms for solving large-scale convex optimization problems, and it is well known that its convergence depends heavily on the choice of the step sizes. A modified version of the dynamic step size rule proposed by Goffin and Kiwiel (1999) is presented, and the convergence of the resulting algorithm is established. Numerical experiments illustrate that the new algorithm is more effective than the original one.
Key words  Computational mathematics; Convex optimization; Subgradient method; Dynamic step size rule
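To make the role of a dynamic step size concrete, the following is a minimal Python sketch of a subgradient method with a Goffin-Kiwiel-style level-controlled step size. It is an illustration only, not the modified algorithm of this paper; the function name subgradient_level and the control parameters delta0, B and beta are assumed for the example.

    import numpy as np

    def subgradient_level(f, subgrad, x0, delta0=1.0, B=10.0, beta=0.5, max_iter=1000):
        # x_rec, f_rec: the record (best) point and objective value found so far
        x = np.asarray(x0, dtype=float)
        x_rec, f_rec = x.copy(), f(x)
        delta = delta0     # gap between the record value and the target level
        path = 0.0         # path length travelled since delta was last updated
        for _ in range(max_iter):
            fx = f(x)
            if fx < f_rec:
                x_rec, f_rec = x.copy(), fx
            g = np.asarray(subgrad(x), dtype=float)
            gnorm2 = float(g @ g)
            if gnorm2 == 0.0:              # 0 is a subgradient, so x is optimal
                break
            f_lev = f_rec - delta          # target level below the record value
            t = (fx - f_lev) / gnorm2      # dynamic step size
            step = t * g
            x = x - step
            path += float(np.linalg.norm(step))
            if path > B:                   # path budget exhausted without reaching the level:
                delta *= beta              # shrink the gap and restart the path count
                path = 0.0
        return x_rec, f_rec

    # Example: minimize the nonsmooth convex function f(x) = |x1| + 2|x2|.
    f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
    subgrad = lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])
    x_best, f_best = subgradient_level(f, subgrad, x0=[3.0, -2.0])

In this sketch the level f_rec - delta plays the role of the unknown optimal value in a Polyak-type step, and shrinking delta whenever the path budget B is exhausted is one common way of making the level adaptive; the specific parameter modified in this paper may differ.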
References
[1] ERMOL'EV Y M. Methods of solution of nonlinear extremal problems[J]. Cybernetics, 1966, 2(4):1-14.
[2] SHOR N Z. Minimization Methods for Non-Differentiable Functions[M]. Berlin: Springer-Verlag, 1985.
[3] POLJAK B T. Minimization of nonsmooth functionals[J]. USSR Computational Mathematics and Mathematical Physics, 1969, 9(3):14-29.
[4] KIM S, AHN H. Convergence of a generalized subgradient method for nondifferentiable convex optimization[J]. Math. Program., 1991, 50(1-3):75-80.
[5] GOFFIN J L, KIWIEL K C. Convergence of a simple subgradient level method[J]. Math. Program., 1999, 85(1):207-211.
[6] SHI Shuzhong (史樹(shù)中). Convex Analysis (凸分析)[M]. Shanghai: Shanghai Scientific and Technical Publishers, 1990.
[7] LONG Q, LI J Y. Numerical performance of subgradient methods in solving nonsmooth optimization problems[J]. Journal of Chongqing Normal University, 2013, 30(6):25-30.