Optimization Theory
Table of Contents
Overview
1. Convex Sets and Convex Functions
1.1 Convex Sets
1.2 Convex Functions
1.3 Convex Optimization Problems
2. Unconstrained Optimization
2.1 Gradient Descent
2.2 Learning Rate Selection
2.3 Gradient Descent Variants
3. Constrained Optimization
3.1 Lagrange Multiplier Method
3.2 KKT Conditions
3.3 Constrained Optimization Example
4. Duality Theory
4.1 Lagrangian Duality
4.2 Applications of Duality
5. Newton's Method and Quasi-Newton Methods
5.1 Newton's Method
5.2 Quasi-Newton Methods (BFGS)
5.3 Method Comparison
6. Special Convex Optimization Problems
6.1 Linear Programming (LP)
6.2 Quadratic Programming (QP)
6.3 Semidefinite Programming (SDP)
7. Non-Convex Optimization
7.1 Challenges
7.2 Non-Convex Optimization in Deep Learning
8. Connection Between Optimization Theory and Machine Learning
References