Numerical methods for linear algebra and optimisation

Additional Info

  • ECTS credits: 6
  • Semester: 2
  • University: University of L'Aquila
  • Prerequisites:

    Basic Numerical Analysis and Linear Algebra.

  • Objectives:

    The aim of this course is to provide the student with knowledge of Numerical Linear Algebra and Numerical Optimisation, together with the ability to analyse the theoretical properties of the proposed schemes and to design mathematical software based on them.

    On successful completion of this module, the student should:
    - have a thorough knowledge and understanding of the most relevant numerical methods in Numerical Linear Algebra and Numerical Optimisation, and of the design of accurate and efficient mathematical software;
    - demonstrate skill in choosing the most suitable method for the problem at hand, and the ability to provide both a theoretical analysis and mathematical software based on the proposed schemes;
    - demonstrate the capacity to read and understand further texts on related topics.

  • Topics:

    MATRIX FACTORIZATIONS
    LU decomposition, Cholesky decomposition. Singular value decomposition and applications (image processing, recommender systems). QR decomposition and least squares. Householder triangularization. Conditioning and stability in the case of linear systems.
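
    As a small illustration of QR-based least squares, the following NumPy sketch fits a line to data (the data points and line model are invented for illustration; `numpy.linalg.qr` computes the factorisation via Householder reflections):

```python
import numpy as np

# Fit y ≈ c0 + c1*t by solving the least-squares problem min ||A c - y||_2
# via a reduced QR factorisation A = Q R.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 + 0.5 * t                          # exact data on the line y = 2 + 0.5 t
A = np.column_stack([np.ones_like(t), t])  # design matrix [1, t]
Q, R = np.linalg.qr(A)                     # reduced QR: Q is 4x2, R is 2x2
c = np.linalg.solve(R, Q.T @ y)            # back-substitute R c = Q^T y
```

    With exact data the recovered coefficients are c = (2, 0.5); with noisy data the same two lines return the least-squares fit.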

    EIGENVALUE PROBLEMS
    Approximation of the spectral radius. Power method and its variants. Reduction to Hessenberg form. Rayleigh quotient, inverse iteration. QR algorithm with and without shift. Jacobi method. Givens-Householder algorithm. Google PageRank.
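
    The power method above can be sketched in a few lines of NumPy (the tolerance, iteration cap, and diagonal test matrix are illustrative choices, not part of the syllabus):

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    """Approximate the dominant eigenvalue/eigenvector of A by power iteration."""
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])  # arbitrary starting vector
    lam = 0.0
    for _ in range(max_iter):
        x = A @ x
        x /= np.linalg.norm(x)        # normalise to avoid overflow/underflow
        lam_new = x @ A @ x           # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:  # stop when the estimate stagnates
            break
        lam = lam_new
    return lam_new, x

# the dominant eigenvalue of diag(3, 1, 0.5) is 3
lam, v = power_method(np.diag([3.0, 1.0, 0.5]))
```

    Replacing `A @ x` with a shifted-and-inverted solve turns this into inverse iteration, one of the variants covered in the course.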

    ITERATIVE METHODS FOR LINEAR SYSTEMS
    Overview of iterative methods. Arnoldi iteration, Krylov iterations. GMRES. Lanczos method. Conjugate gradient. Preconditioners. Preconditioned conjugate gradient.
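
    A minimal sketch of the (unpreconditioned) conjugate gradient method for a symmetric positive definite system, following the standard textbook algorithm (the 2x2 test system is invented for illustration):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A by conjugate gradients."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x              # initial residual
    p = r.copy()               # first search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # exact minimiser along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # beta = rs_new / rs keeps directions A-conjugate
        rs = rs_new
    return x
```

    In exact arithmetic the iteration terminates in at most n steps; a preconditioned variant replaces `r` with the preconditioned residual in the direction update.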

    NUMERICAL OPTIMISATION
    Continuous versus discrete optimization. Constrained and unconstrained optimization. Global and local optimization. Overview of optimization algorithms. Convexity.
    Line search methods. Convergence of line search methods. Rate of convergence. Steepest descent, quasi-Newton methods. Step-length selection algorithms.
    Trust region methods. Cauchy point and related algorithms. Dogleg method. Global convergence. Algorithms based on nearly exact solutions.
    Conjugate gradient methods. Basic properties. Rate of convergence. Preconditioning. Nonlinear conjugate gradient methods: Fletcher-Reeves method, Polak-Ribière method.
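
    A line search method of the kind treated in the course can be sketched as steepest descent with a backtracking (Armijo) step-length selection; the constants `rho` and `c` are the usual textbook choices and the quadratic test function is invented for illustration:

```python
import numpy as np

def steepest_descent(f, grad, x0, alpha0=1.0, rho=0.5, c=1e-4,
                     tol=1e-8, max_iter=500):
    """Minimise f by gradient descent with a backtracking Armijo line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # stationary point reached
            break
        alpha = alpha0
        # shrink the step until the Armijo sufficient-decrease condition holds:
        # f(x - alpha g) <= f(x) - c * alpha * ||g||^2
        while f(x - alpha * g) > f(x) - c * alpha * (g @ g):
            alpha *= rho
        x = x - alpha * g
    return x

# minimise the convex quadratic f(x) = ||x - (1, 2)||^2
target = np.array([1.0, 2.0])
x_min = steepest_descent(lambda x: np.sum((x - target) ** 2),
                         lambda x: 2.0 * (x - target),
                         np.zeros(2))
```

    Replacing the direction `-g` with a quasi-Newton direction, or the backtracking loop with an interpolation-based step-length algorithm, gives the other line search variants listed above.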
  • Books:

    - J. Stoer, R. Bulirsch, Introduction to Numerical Analysis, Springer, 2002.
    - J. Nocedal, S. J. Wright, Numerical Optimization, Springer, 1999.
    - A. Quarteroni, R. Sacco, F. Saleri, P. Gervasio, Numerical Mathematics, Springer, 2014.

Last modified on Tuesday, 19 May 2020 14:17