# Brief description:

The goal of this course is to provide firm foundations in linear algebra and optimization techniques, enabling students to analyze and solve problems arising in various areas of computer science, especially computer vision, robotics, machine learning, computer graphics, embedded systems, and market engineering and systems. Students will acquire firm theoretical knowledge of these concepts and tools. They will also learn to use these tools in practice by tackling various judiciously chosen projects (from computer vision, among other areas). This course will serve as a basis for more advanced courses in computer vision, convex optimization, machine learning, robotics, computer graphics, embedded systems, and market engineering and systems.

Topics covered include: fundamentals of linear algebra (basic concepts; solving linear systems; eigenvalues and eigenvectors; introduction to the finite element method; singular value decomposition, pseudo-inverses, PCA); basics of quadratic optimization; the Rayleigh-Ritz ratio; methods for computing eigenvalues (power iteration, the QR method, etc.); elementary spectral graph theory; and applications to graph clustering using normalized cuts.

If time permits, I will also discuss methods using Krylov subspaces (Arnoldi, Lanczos); Hadamard matrices and their applications; and basics of optimization, including a review of analysis (derivatives, gradient, Hessian, Lagrange multipliers).
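To make one of the listed eigenvalue methods concrete, here is a minimal power-iteration sketch in plain Python. This is my own illustrative example, not part of the course materials; the function name and the sample matrix are my inventions.

```python
import math

def power_iteration(A, iters=200):
    """Estimate the dominant eigenvalue and eigenvector of a square
    matrix A (given as a list of rows) by repeatedly multiplying a
    vector by A and normalizing."""
    n = len(A)
    x = [1.0] * n  # arbitrary nonzero starting vector
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]
    # Rayleigh quotient x^T A x (with x now a unit vector) estimates
    # the dominant eigenvalue
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    lam = sum(x[i] * Ax[i] for i in range(n))
    return lam, x

# The symmetric matrix [[2, 1], [1, 2]] has dominant eigenvalue 3
# with eigenvector (1, 1)/sqrt(2)
lam, v = power_iteration([[2.0, 1.0], [1.0, 2.0]])
```

Each iteration amplifies the component of x along the dominant eigenvector; convergence is geometric with rate given by the ratio of the two largest eigenvalue magnitudes.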

# Syllabus:

(*) means: if time permits.
1. Linear Algebra;
• Basics
• Motivations: Linear independence, rank
• Vector Spaces
• Linear independence, Bases
• Linear maps
• Image (range) and Kernel (nullspace), rank
• Linear maps and Matrices
• Affine maps
• Direct products, sums and direct sums
• The dual space E^* and linear forms
• Hyperplanes and linear forms
• Transpose of a linear map and of a matrix
• The four fundamental subspaces
• Determinants
• Permutations, signature of a permutation
• Alternating multilinear maps
• Definition of a determinant
• Inverse matrices and determinants
• Systems of linear equations and determinants
• Determinant of a linear map
• The Cayley-Hamilton theorem
• Solving Linear Systems
• Gaussian Elimination
• LU factorization
• Gaussian elimination of tridiagonal systems
• Cholesky Decomposition
• Vector Norms and Matrix norms
• Normed vector spaces
• Matrix norms
• Condition numbers of matrices
• (*) Iterative Methods for Solving Linear Systems
• Convergence of Sequences of Vectors and Matrices
• Convergence of Iterative Methods
• Description of the Methods of Jacobi, Gauss-Seidel, and Relaxation
• Convergence of the Methods of Jacobi, Gauss-Seidel, and Relaxation
• Euclidean spaces
• Inner products
• Orthogonality, duality
• Adjoint of a linear map
• Linear isometries and Orthogonal Matrices
• The orthogonal group
• QR-Decomposition for invertible matrices
• Orthogonal reflections
• QR-decomposition using Householder matrices
• Hermitian Spaces
• Hermitian spaces, pre-Hilbert spaces
• Orthogonality, duality, adjoint of a linear map
• Linear isometries and unitary matrices
• The unitary group
• Eigenvalues and Eigenvectors
• Basic Definitions
• Algebraic and Geometric multiplicity
• Reduction to upper triangular form; the Schur decomposition
• Spectral Theorems
• Symmetric matrices, Normal matrices
• Self-adjoint and other special linear maps
• Normal and other special matrices
• (*) Introduction to the Finite Element Method (Ritz, Galerkin, Rayleigh-Ritz)
• A one-dimensional problem: bending of a beam
• A two-dimensional problem: an elastic membrane
• Time-dependent boundary problems: the wave equation
• Singular Value Decomposition (SVD) and Polar Form
• Symmetric Positive Definite Matrices
• Symmetric Positive Semidefinite Matrices
• SVD for square matrices
• SVD for rectangular matrices
• Applications of SVD and Pseudo-Inverses
• Least squares problems and the Pseudo-Inverse
• Data compression and SVD
• The Rayleigh-Ritz ratio and the Courant-Fischer theorem
• Principal component analysis (PCA)
• Best affine approximation
• Basic Quadratic Optimization
• Minimizing x^T A x + 2 x^T b, where A is a symmetric positive definite matrix
• (*) Minimizing x^T A x + 2 x^T b, where A is a symmetric positive semidefinite matrix, using a pseudo-inverse
• (*) Minimizing x^T A x on the unit sphere subject to linear constraints
• (*) The Schur complement
• (*) Applications to computer vision (finding contours)
• Methods for Computing Eigenvalues
• Reduction to Hessenberg or tridiagonal form
• Power Iteration
• Inverse power iteration
• Basic QR algorithm
• QR algorithm with shifts
• (*) Methods using Krylov Subspaces
• The Companion matrix
• Arnoldi Iteration
• How Arnoldi finds eigenvalues (Ritz values)
• Using Arnoldi iteration to solve linear systems: GMRES
• The symmetric case: Lanczos iteration
• (*) Conjugate gradient
• (*) Annihilating polynomials; primary decomposition, Jordan form
• Annihilating polynomials and the minimal polynomial
• Minimal polynomials of diagonalizable linear maps
• The primary decomposition theorem
• Nilpotent linear maps and Jordan form
• (*) Hadamard matrices
• When do Hadamard matrices exist?
• Applications of Hadamard matrices (error-correcting codes, ...)
• (*) Positive and Nonnegative Matrices
• Positive matrices and the Perron-Frobenius Theorem
• Nonnegative matrices and the Perron-Frobenius Theorem
• Stochastic matrices
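In the positive definite case, the basic quadratic optimization problem listed above reduces to a linear system: the gradient of f(x) = x^T A x + 2 x^T b is 2Ax + 2b, so the unique minimizer is x* = -A^{-1} b. A minimal sketch in plain Python, with a hypothetical 2x2 example of my own (Cramer's rule stands in for a general linear solver):

```python
def solve_2x2(A, c):
    """Solve A x = c for a 2x2 matrix A by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    x0 = (c[0] * A[1][1] - A[0][1] * c[1]) / det
    x1 = (A[0][0] * c[1] - c[0] * A[1][0]) / det
    return [x0, x1]

# Minimize f(x) = x^T A x + 2 x^T b with A symmetric positive definite:
# grad f(x) = 2 A x + 2 b = 0, hence the unique minimizer x* = -A^{-1} b.
A = [[2.0, 0.0], [0.0, 4.0]]
b = [2.0, -4.0]
x_star = solve_2x2(A, [-b[0], -b[1]])  # x* = [-1, 1]
```

In the semidefinite case, A^{-1} is replaced by the pseudo-inverse A^+, as in the starred item above; the pseudo-inverse picks the minimum-norm minimizer.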
2. (*) Basics of Optimization;
• Review of Analysis
• First and second derivative of a function