On the nonmonotonicity degree of nonmonotone line searches

Abstract

The nonmonotone globalization technique is useful for difficult nonlinear problems, because it may help the iterates escape from steep-sided valleys and may improve both the likelihood of finding the global optimum and the rate of convergence. This paper discusses the nonmonotonicity degree of nonmonotone line searches for unconstrained optimization. Specifically, we analyse some popular nonmonotone line search methods and explore, from a computational point of view, the relations between the efficiency of a nonmonotone line search and its nonmonotonicity degree. We attempt to answer the question of how to control the degree of nonmonotonicity of line search rules in order to obtain a more efficient algorithm. To this end, two adaptive nonmonotone rules based on the morphology of the objective function are proposed. The global convergence and the convergence rate of the proposed methods are analysed under mild assumptions. Numerical experiments are carried out on a set of unconstrained optimization test problems from the CUTEr collection (Gould et al. in ACM Trans Math Softw 29:373–394, 2003). The performance data are first analysed through the performance profiles of Dolan and Moré (Math Program 91:201–213, 2002), and then in terms of the increasing dimension of the test problems.
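To make the notion of a nonmonotonicity degree concrete, the following Python sketch implements the nonmonotone Armijo rule of Zhang and Hager [34], in which a single parameter eta in [0, 1] fixes how nonmonotone the search is: eta = 0 recovers the classical monotone Armijo rule [4], while values close to 1 allow the objective to increase substantially between iterations. This is only an illustration of the general mechanism, not the adaptive rules proposed in this paper; the function names, the fixed parameter values, and the use of a plain steepest-descent direction on the Rosenbrock function [29] are choices made for the example.

```python
import numpy as np

def nonmonotone_armijo(f, grad, x, d, C, Q, eta=0.85, delta=1e-4, rho=0.5, alpha0=1.0):
    """One backtracking line search along d with the Zhang-Hager
    nonmonotone Armijo rule [34].  eta in [0, 1] is the nonmonotonicity
    degree: eta = 0 gives the monotone Armijo rule, eta close to 1 a
    highly nonmonotone one.  C and Q are the running reference value and
    its weight, initialised as C_0 = f(x_0), Q_0 = 1."""
    slope = float(grad(x) @ d)            # directional derivative, assumed < 0
    alpha = alpha0
    for _ in range(60):                   # safeguarded backtracking
        if f(x + alpha * d) <= C + delta * alpha * slope:
            break
        alpha *= rho
    x_new = x + alpha * d
    # Update the nonmonotone reference value: a weighted average of all
    # previous function values, with eta controlling the memory length.
    Q_new = eta * Q + 1.0
    C_new = (eta * Q * C + f(x_new)) / Q_new
    return x_new, C_new, Q_new

# Illustrative use: steepest descent on the Rosenbrock function [29].
def rosen(x):
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def rosen_grad(x):
    return np.array([
        -400.0 * x[0] * (x[1] - x[0] ** 2) - 2.0 * (1.0 - x[0]),
        200.0 * (x[1] - x[0] ** 2),
    ])

x = np.array([-1.2, 1.0])
C, Q = rosen(x), 1.0
for k in range(5000):
    if np.linalg.norm(rosen_grad(x)) < 1e-6:
        break
    x, C, Q = nonmonotone_armijo(rosen, rosen_grad, x, -rosen_grad(x), C, Q)
```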

References

  1. Ahookhosh, M., Amini, K., Peyghami, M.R.: A nonmonotone trust-region line search method for large-scale unconstrained optimization. Appl. Math. Model. 36, 411–422 (2010)

  2. Ahookhosh, M., Amini, K., Bahrami, S.: A class of nonmonotone Armijo-type line search methods for unconstrained optimization. Optimization 61(4), 387–404 (2012)

  3. Amini, K., Ahookhosh, M., Nosratipour, H.: An inexact line search approach using modified nonmonotone strategy for unconstrained optimization. Numer. Algorithms 60(1), 49–78 (2014)

  4. Armijo, L.: Minimization of functions having Lipschitz-continuous first partial derivatives. Pac. J. Math. 16, 1–3 (1966)

  5. Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8, 141–148 (1988)

  6. Birgin, E.G., Chambouleyron, I., Martinez, J.M.: Estimation of the optical constants and the thickness of thin films using unconstrained optimization. J. Comput. Phys. 151, 862–880 (1999)

  7. Dai, Y.H.: On the nonmonotone line search. J. Optim. Theory Appl. 112(2), 315–330 (2002)

  8. Dai, Y.H., Zhang, H.: An adaptive two-point stepsize gradient algorithm. Numer. Algorithms 27(4), 377–385 (2001)

  9. Dolan, E., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)

  10. Fasano, G., Lucidi, S.: A nonmonotone truncated Newton–Krylov method exploiting negative curvature directions, for large scale unconstrained optimization. Optim. Lett. 3(4), 521–535 (2009)

  11. Gao, H., Zhang, H.B., Li, Z.B., Tadjouddine, E.: A nonmonotone inexact Newton method for unconstrained optimization. Optim. Lett. (2015). doi:10.1007/s11590-015-0976-2

  12. Goldstein, A.A.: On steepest descent. SIAM J. Control 3, 147–151 (1965)

  13. Gould, N.I.M., Orban, D., Toint, P.L.: CUTEr: a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29, 373–394 (2003)

  14. Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM J. Numer. Anal. 23, 707–716 (1986)

  15. Grippo, L., Lampariello, F., Lucidi, S.: A truncated Newton method with nonmonotone line search for unconstrained optimization. J. Optim. Theory Appl. 60, 401–419 (1989)

  16. Grippo, L., Lampariello, F., Lucidi, S.: A class of nonmonotone stabilization methods in unconstrained optimization. Numer. Math. 59(1), 779–805 (1991)

  17. Grippo, L., Lampariello, F., Lucidi, S.: Vector performance criteria in the convergence analysis of optimization algorithms. Optim. Methods Softw. 3(1–3), 77–92 (1994)

  18. Hager, W.W., Zhang, H.: A new active set algorithm for box constrained optimization. SIAM J. Optim. 17, 526–557 (2006)

  19. Huang, S., Wan, Z., Chen, Z.: A new nonmonotone line search technique for unconstrained optimization. Numer. Algorithms 68, 671–689 (2015)

  20. Kimiaei, M.: A new class of nonmonotone adaptive trust-region methods for nonlinear equations with box constraints. Calcolo (2016). doi:10.1007/s10092-016-0208-x

  21. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Math. Program. 45, 503–528 (1989)

  22. Magoulas, G., Vrahatis, M.N.: Adaptive algorithms for neural network supervised learning: a deterministic optimization approach. Int. J. Bifurc. Chaos 16(7), 1929–1950 (2006)

  23. Ng, C.K., Liao, L.Z., Li, D.: A globally convergent and efficient method for unconstrained discrete-time optimal control. J. Glob. Optim. 22, 401–421 (2002)

  24. Nocedal, J.: Updating quasi-Newton matrices with limited storage. Math. Comput. 35, 773–782 (1980)

  25. Nosratipour, H., Fard, O.S., Borzabadi, A.H., Sarani, F.: Stable equilibrium configuration of two bar truss by an efficient nonmonotone global Barzilai-Borwein gradient method in a fuzzy environment. Afr. Mat. (2016). doi:10.1007/s13370-016-0451-y

  26. Panier, E.R., Tits, A.L.: Avoiding the Maratos effect by means of a nonmonotone linesearch. SIAM J. Numer. Anal. 28, 1183–1195 (1991)

  27. Peng, C.C., Magoulas, G.: Advanced adaptive nonmonotone conjugate gradient training algorithms for recurrent neural networks. Int. J. Artif. Intell. Tools 17(5), 963–984 (2008)

  28. Plagianakos, V.P., Magoulas, G., Vrahatis, M.N.: Deterministic nonmonotone strategies for effective training of multilayer perceptrons. IEEE Trans. Neural Netw. 13(6), 1268–1284 (2002)

  29. Rosenbrock, H.H.: An automatic method for finding the greatest or least value of a function. Comput. J. 3, 175–184 (1960)

  30. Toint, P.L.: An assessment of non-monotone line search techniques for unconstrained optimization. SIAM J. Sci. Comput. 17, 725–739 (1996)

  31. Wan, Z., Hu, C.M., Yang, Z.L.: A spectral PRP conjugate gradient method for nonconvex optimization problems based on modified line search. Discret. Contin. Dyn. Syst. Ser. B 16, 1157–1169 (2011)

  32. Wan, Z., Teo, K.L., Shen, X.L., Hu, C.M.: New BFGS method for unconstrained optimization problem based on modified Armijo line search. Optimization 63(2), 285–304 (2014)

  33. Wolfe, P.: Convergence conditions for ascent methods. SIAM Rev. 11, 226–235 (1969)

  34. Zhang, H.C., Hager, W.W.: A nonmonotone line search technique and its application to unconstrained optimization. SIAM J. Optim. 14(4), 1043–1056 (2004)

Author information

Corresponding author

Correspondence to Akbar Hashemi Borzabadi.

Test problems

See Table 3 for the list of test problems from the CUTEr collection.

Table 3 Specifications of the test functions

About this article

Cite this article

Nosratipour, H., Hashemi Borzabadi, A. & Solaymani Fard, O. On the nonmonotonicity degree of nonmonotone line searches. Calcolo 54, 1217–1242 (2017). https://doi.org/10.1007/s10092-017-0226-3
