Optimization
Optimization is the process of finding the best solution to a problem by choosing the best values for the input variables from a set of possible inputs. It is an essential mathematical tool for dealing with many practical problems.
The Meta-Problem of optimization: How to choose the optimization method?
This is a really important topic that was barely introduced in my Calculus class.
Optimization plays a role in a wide range of disciplines, including the physical sciences, economics, and biology. For example, scientists have studied how migrating birds choose an optimal velocity that maximizes the distance they can travel without stopping, given the energy they can store as body fat.
In many optimization problems, the first step is to write down the objective function. This is the function whose minimum or maximum we seek.
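As a toy illustration (the open-top box and its numbers here are hypothetical, not from any specific problem), an objective function is just a function we can evaluate at candidate inputs:

```python
def material_used(width, depth, height):
    """Objective: surface area of an open-top box (this is what we want to minimize)."""
    return width * depth + 2 * height * (width + depth)

# Evaluate the objective at a candidate input.
print(material_used(2.0, 2.0, 1.0))  # 12.0
```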
However, there is this saying in Software Engineering:
Premature optimization is the root of all evil. — commonly attributed to Donald Knuth
If you optimize the individual components, you will probably ruin the overall system performance — a lesson from Systems Engineering.
General steps
- Define the objective function
- Define the constraints
- Set up the optimization problem
- Solve the optimization problem (Different optimization methods use different techniques to solve the problem, such as gradient descent, linear programming, dynamic programming, and numerical methods)
- Evaluate the solution: Once a solution has been found, it must be evaluated to ensure that it meets the requirements and constraints.
- Implement the solution: The solution is implemented in the real world, and the results are monitored to see whether it meets expectations.
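A minimal end-to-end sketch of these steps in Python, assuming SciPy is available; the problem itself (minimize x² + y² subject to x + y ≥ 1) is a made-up example:

```python
from scipy.optimize import minimize

# 1. Objective function: f(x, y) = x^2 + y^2
def objective(v):
    x, y = v
    return x**2 + y**2

# 2. Constraint: x + y >= 1 (SciPy expects inequality constraints as g(v) >= 0)
constraints = [{"type": "ineq", "fun": lambda v: v[0] + v[1] - 1}]

# 3./4. Set up and solve the problem (SLSQP handles smooth constrained problems)
result = minimize(objective, x0=[0.0, 0.0], method="SLSQP", constraints=constraints)

# 5. Evaluate the solution: check that it converged and respects the constraint
assert result.success and result.x[0] + result.x[1] >= 1 - 1e-6
print(result.x)  # roughly [0.5, 0.5]
```

By symmetry, the analytic answer is x = y = 0.5, which is a useful sanity check for the evaluation step.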
The way humans do optimization is through simultaneous learning and inference. But do humans optimize in the first place?
Various Optimizations
- Genetic Algorithm
- Model Predictive Control
- Gradient Descent
- Quadratic Programming
- Linear Quadratic Regulator
Related
Optimization in Calculus
See Curve Sketching and Multivariate Function when it comes to unconstrained optimization.
For constrained optimization, we use the Method of Lagrange.
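A quick worked example (standard textbook material, not specific to this note): maximize f(x, y) = xy subject to x + y = 10. With g(x, y) = x + y − 10, setting ∇f = λ∇g gives y = λ and x = λ, so x = y = 5 and the constrained maximum is f(5, 5) = 25.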
Optimization in AI
Choosing the best option from a set of options.
Local Search
Search algorithms that maintain a single node and search by moving to a neighboring node. This differs from the other Searching Algorithms: in the maze problem, the solution (goal state) was obvious, and the work was in finding a path to it; in local search, figuring out what the best solution even is becomes the heart of the challenge.
Hill Climbing
The simplest algorithm: just look at the neighbors and move to the best one, repeating until no neighbor improves on the current state.
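A minimal steepest-ascent sketch in Python; the objective (maximize f(x) = −(x − 7)², which peaks at x = 7) and the one-dimensional neighbor function are made-up stand-ins for whatever the real problem defines:

```python
def value(x):
    # Hypothetical objective to maximize: peaks at x = 7.
    return -(x - 7) ** 2

def neighbors(x):
    # Neighboring states: one step left or right.
    return [x - 1, x + 1]

def hill_climb(start):
    current = start
    while True:
        # Steepest ascent: consider only the best neighbor.
        best = max(neighbors(current), key=value)
        if value(best) <= value(current):
            return current  # no neighbor improves on the current state
        current = best

print(hill_climb(0))  # 7
```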
However, we can run into the issue of local minima and maxima, a very common problem in AI / ML.
Hill Climbing Variants
Due to the limitations of Hill Climbing, multiple variants have been thought of to overcome the problem of being stuck in local minima and maxima. What all variations of the algorithm have in common is that, no matter the strategy, each one still has the potential of ending up in local minima and maxima, with no means to continue optimizing. The algorithms below are phrased such that a higher value is better, but they also apply to cost functions, where the goal is to minimize cost.
- Steepest-ascent: choose the highest-valued neighbor. This is the standard variation that we discussed above.
- Stochastic: choose randomly from higher-valued neighbors. Doing this, we choose to go in any direction that improves over our value. This makes sense if, for example, the highest-valued neighbor leads to a local maximum while another neighbor leads to a global maximum.
- First-choice: choose the first higher-valued neighbor.
- Random-restart: conduct hill climbing multiple times. Each time, start from a random state. Compare the maxima from every trial, and choose the highest amongst those.
- Local Beam Search: chooses the k highest-valued neighbors. This is unlike most local search algorithms in that it uses multiple nodes for the search, and not just one.
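To make the random-restart idea concrete, here is a small self-contained sketch (the bumpy objective and the 0–40 search range are arbitrary choices for illustration); any of the other variants could be substituted for the inner climb:

```python
import math
import random

def value(x):
    # Hypothetical objective with several local maxima (global peak near x ~ 20).
    return 10 * math.sin(x) - abs(x - 20)

def hill_climb(start, step=0.1):
    # Plain steepest-ascent climb; it can get stuck on any local peak.
    current = start
    while True:
        best = max((current - step, current + step), key=value)
        if value(best) <= value(current):
            return current
        current = best

def random_restart(trials=20):
    # Run hill climbing from several random starts and keep the best of the maxima found.
    starts = [random.uniform(0.0, 40.0) for _ in range(trials)]
    return max((hill_climb(s) for s in starts), key=value)

print(random_restart())
```

Each individual climb can still end on a local peak, but keeping the best result across many restarts makes finding the global maximum much more likely.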