CIS 515, Fall 2016
Brief description:
The goal of this course is to provide
firm foundations in linear algebra and optimization techniques
that will enable students to analyze and solve problems
arising in various areas of computer science, especially
computer vision, robotics, machine learning, computer graphics,
embedded systems, and market engineering and systems.
Students will acquire firm theoretical knowledge of these
concepts and tools. They will also learn how to use these
tools in practice by tackling various judiciously chosen projects
(from computer vision, etc.).
This course will serve as a basis for more advanced courses
in computer vision, convex optimization, machine learning,
robotics, computer graphics, embedded systems,
and market engineering and systems.
Topics covered include:
Fundamentals of linear algebra:
Basic concepts; solving linear systems; eigenvalues and
eigenvectors; introduction to the finite element method;
singular value decomposition, pseudo-inverses, PCA.
Basics of quadratic optimization; the
Rayleigh-Ritz ratio.
Methods for computing eigenvalues (power iteration, QR method, etc.).
Elementary spectral graph theory.
Applications to graph clustering using normalized cuts.
If time permits, I will discuss:
methods using Krylov subspaces (Arnoldi, Lanczos);
Hadamard matrices and applications;
basics of optimization: review of analysis (derivatives, gradient, Hessian,
Lagrange multipliers).
Syllabus:
(*) means: if time permits.

Linear Algebra

Basics

Motivations: Linear independence, rank

Vector Spaces

Linear independence, Bases

Linear maps

Image (range) and Kernel (nullspace), rank

Linear maps and Matrices

Affine maps

Direct products, sums and direct sums

The dual space E^* and linear forms

Hyperplanes and linear forms

Transpose of a linear map and of a matrix

The four fundamental subspaces

Determinants

Permutations, signature of a permutation

Alternating multilinear maps

Definition of a determinant

Inverse matrices and determinants

Systems of linear equations and determinants

Determinant of a linear map

The Cayley-Hamilton theorem

Solving Linear Systems

Gaussian Elimination

LU factorization

Gaussian elimination of tridiagonal systems

Cholesky Decomposition
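
As a small illustration of this section (my own sketch, not part of the
course materials), the Cholesky factorization A = L L^T of a symmetric
positive definite matrix can be computed in a few lines; the example
matrix is chosen so the factor has simple entries.

```python
import math

def cholesky(A):
    """Cholesky factorization A = L L^T of a symmetric positive
    definite matrix A (given as a list of rows); returns the
    lower-triangular factor L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for j in range(n):
        s = A[j][j] - sum(L[j][k] ** 2 for k in range(j))
        L[j][j] = math.sqrt(s)  # s <= 0 would mean A is not positive definite
        for i in range(j + 1, n):
            L[i][j] = (A[i][j]
                       - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
    return L

A = [[4.0, 2.0], [2.0, 3.0]]
L = cholesky(A)  # L = [[2, 0], [1, sqrt(2)]], and L L^T = A
```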

Vector Norms and Matrix norms

Normed vector spaces

Matrix norms

Condition numbers of matrices

(*) Iterative Methods for Solving Linear Systems

Convergence of Sequences of Vectors and Matrices

Convergence of Iterative Methods

Description of the Methods of Jacobi, Gauss-Seidel, and Relaxation

Convergence of the Methods of Jacobi, Gauss-Seidel, and Relaxation
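
A minimal sketch of Jacobi's method (illustrative only; the matrix and
iteration count are my own choices): each update solves the i-th
equation for x_i using the previous iterate, and the iteration converges
when, for instance, A is strictly diagonally dominant.

```python
def jacobi(A, b, iters=100):
    """Jacobi iteration for A x = b: writing A = D + R with D the
    diagonal of A, iterate x_{k+1} = D^{-1} (b - R x_k)."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Strictly diagonally dominant system with exact solution (1, 2).
A = [[4.0, 1.0], [2.0, 5.0]]
b = [6.0, 12.0]
x = jacobi(A, b)
```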

Euclidean spaces

Inner products

Orthogonality, duality

Adjoint of a linear map

Linear isometries and Orthogonal Matrices

The orthogonal group

QR-decomposition for invertible matrices
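
For an invertible matrix, the QR-decomposition can be obtained by
applying Gram-Schmidt to the columns; the sketch below (my own
illustration; Householder-based QR, covered later, is preferred
numerically) stores the orthonormal columns of Q one by one.

```python
import math

def qr_gram_schmidt(A):
    """QR-decomposition of a square matrix with linearly independent
    columns via classical Gram-Schmidt.  Returns (Q, R) where Q is a
    list of orthonormal columns and R is upper triangular with
    positive diagonal entries."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]  # column view
    Q = []  # Q[k] is the k-th orthonormal column
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for k in range(j):
            R[k][j] = sum(Q[k][i] * cols[j][i] for i in range(n))
            v = [v[i] - R[k][j] * Q[k][i] for i in range(n)]  # subtract projections
        R[j][j] = math.sqrt(sum(t * t for t in v))  # norm of the residual
        Q.append([t / R[j][j] for t in v])
    return Q, R

A = [[3.0, 1.0], [4.0, 2.0]]  # invertible 2x2 example
Q, R = qr_gram_schmidt(A)
```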

Orthogonal reflections

QR-decomposition using Householder matrices

Hermitian Spaces

Hermitian spaces, pre-Hilbert spaces

Orthogonality, duality, adjoint of a linear map

Linear isometries and unitary matrices

The unitary group

Eigenvalues and Eigenvectors

Basic Definitions

Algebraic and Geometric multiplicity

Reduction to upper triangular form; the Schur decomposition

Spectral Theorems

Symmetric matrices, Normal matrices

Self-adjoint and other special linear maps

Normal and other special matrices

(*) Introduction to the Finite Element Method
(Ritz, Galerkin, Rayleigh-Ritz)

A one-dimensional problem: bending of a beam

A two-dimensional problem: an elastic membrane

Time-dependent boundary problems: the wave equation

Singular Value Decomposition (SVD) and Polar Form

Symmetric Positive Definite Matrices

Symmetric Positive Semidefinite Matrices

SVD for square matrices

SVD for rectangular matrices

Applications of SVD and Pseudo-Inverses

Least squares problems and the Pseudo-Inverse
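
As a concrete instance (my own illustration), fitting a line y ≈ a + b t
to data points is a least squares problem min ||A x - y||, where A has
rows (1, t_i); its solution satisfies the normal equations
A^T A x = A^T y, which for two parameters is a 2x2 system solvable in
closed form.

```python
def lstsq_line(ts, ys):
    """Least squares line fit y ≈ a + b t via the normal equations
    A^T A x = A^T y, where A has rows (1, t_i).  The 2x2 normal
    system is solved in closed form by Cramer's rule."""
    n = len(ts)
    s_t = sum(ts)
    s_tt = sum(t * t for t in ts)
    s_y = sum(ys)
    s_ty = sum(t * y for t, y in zip(ts, ys))
    det = n * s_tt - s_t * s_t  # nonzero as long as the t_i are not all equal
    a = (s_y * s_tt - s_t * s_ty) / det
    b = (n * s_ty - s_t * s_y) / det
    return a, b

# Four data points that no line fits exactly; the least squares
# solution is a = -0.4, b = 1.6.
a, b = lstsq_line([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 5.0])
```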

Data compression and SVD

The Rayleigh-Ritz ratio and the Courant-Fischer theorem

Principal component analysis (PCA)

Best affine approximation

Basic Quadratic Optimization

Minimizing x^T A x + 2 x^T b,
where A is a symmetric positive definite matrix.

(*) Minimizing x^T A x + 2 x^T b,
where A is a symmetric positive semidefinite matrix,
using a pseudo-inverse

(*) Minimizing x^T A x on the unit sphere
subject to linear constraints

(*) The Schur complement

(*) Applications to computer vision (finding contours)

Methods for Computing Eigenvalues

Reduction to Hessenberg or tridiagonal form

Power Iteration
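
The power iteration can be sketched in a few lines (an illustration of
my own, with an arbitrary small symmetric example): repeatedly apply A
and normalize; when A has a unique eigenvalue of largest modulus, the
iterates converge to a dominant eigenvector, and the eigenvalue is
estimated by the Rayleigh quotient x^T A x (for unit x).

```python
import math

def power_iteration(A, iters=200):
    """Power iteration: x_{k+1} = A x_k / ||A x_k||.  Returns a
    Rayleigh-quotient estimate of the dominant eigenvalue and the
    final unit vector."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(t * t for t in y))
        x = [t / norm for t in y]
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    lam = sum(x[i] * Ax[i] for i in range(n))  # Rayleigh quotient x^T A x
    return lam, x

# Symmetric matrix with eigenvalues 3 (eigenvector (1,1)) and 1.
A = [[2.0, 1.0], [1.0, 2.0]]
lam, x = power_iteration(A)
```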

Inverse power iteration

Basic QR algorithm

QR algorithm with shifts

(*) Methods using Krylov Subspaces

The Companion matrix

Arnoldi Iteration

How does Arnoldi iteration find eigenvalues (Ritz values)?

Using Arnoldi iteration to solve linear systems: GMRES

The symmetric case: Lanczos iteration

(*) Conjugate gradient

(*) Annihilating polynomials; primary decomposition, Jordan form

Annihilating polynomials and the minimal polynomial

Minimal polynomials of diagonalizable linear maps

The primary decomposition theorem

Nilpotent linear maps and Jordan form

(*) Hadamard matrices

When do Hadamard matrices exist?

Applications of Hadamard matrices (error-correcting codes, ...)

(*) Positive and Nonnegative Matrices

Positive matrices and the Perron-Frobenius Theorem

Nonnegative matrices and the Perron-Frobenius Theorem

Stochastic matrices

(*) Basics of Optimization

Review of Analysis

First and second derivative of a function

Gradient, Hessian

Extrema of real functions

Lagrangians and Lagrange multipliers

Extrema of real functions: using second derivatives

Extrema of real functions: using convexity

Newton's Method
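
In one dimension, Newton's method for minimization iterates
x_{k+1} = x_k - f'(x_k)/f''(x_k) (the multivariate version replaces
f'' by the Hessian).  A small sketch of my own, on a function whose
minimizer is known exactly:

```python
def newton_min(df, d2f, x0, iters=20):
    """Newton's method for 1-D minimization:
    x_{k+1} = x_k - f'(x_k) / f''(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - df(x) / d2f(x)
    return x

# Minimize f(x) = x - log(x) on x > 0: f'(x) = 1 - 1/x and
# f''(x) = 1/x^2, so the unique minimizer is x = 1, and the
# iteration exhibits the usual quadratic convergence.
x_star = newton_min(lambda x: 1.0 - 1.0 / x, lambda x: 1.0 / (x * x), 0.5)
```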

Elementary spectral graph theory

Applications to graph clustering using normalized cuts