
Local and Global Optimum in Uni-variate Optimization

Univariate optimization refers to a specific case of nonlinear optimization, where the problem does not involve any constraints. It focuses on finding the optimal value for a single decision variable. In univariate optimization, the objective is to maximize or minimize a function without any limitations on the decision variable’s range or relationship to other variables.

min f(x) such that x ∈ R

where,
f(x) = objective function
x = decision variable

So, when you look at this optimization problem you typically write it in the above form: you minimize f(x), and this function is called the objective function. The variable we adjust to minimize the function is called the decision variable, and x is a continuous variable that can take any value on the real number line. Since this is a univariate optimization problem, x is a scalar variable, not a vector.
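As a minimal sketch of such a problem, the following solves min f(x) for a simple single-variable function using ternary search. The objective f(x) = (x - 2)² + 1, the search interval, and the tolerance are illustrative choices, not taken from the article:

```python
# Illustrative univariate minimization: minimize f(x) = (x - 2)^2 + 1.
# This f is unimodal, which is what ternary search requires.

def f(x):
    """Objective function of a single (scalar) decision variable x."""
    return (x - 2) ** 2 + 1

def ternary_search_min(func, lo, hi, tol=1e-9):
    """Shrink the interval [lo, hi] around the minimizer of a unimodal func."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if func(m1) < func(m2):
            hi = m2  # minimizer lies in [lo, m2]
        else:
            lo = m1  # minimizer lies in [m1, hi]
    return (lo + hi) / 2

x_star = ternary_search_min(f, -10.0, 10.0)
print(round(x_star, 6), round(f(x_star), 6))  # minimizer near x = 2, value near 1
```

Ternary search is just one simple choice here; any one-dimensional minimizer (e.g. golden-section or Brent's method) would do the same job.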

Local and Global Optimum

We know an optimum refers to the best possible solution, i.e., an extreme value of the objective function. Sometimes, however, it is not possible to find the best value over the entire domain; instead, we find a point that is better than all of its neighbors. Such a point is called a local optimum.

Local Optima 

A local optimum is a point within the domain of a function where the function attains its lowest (or highest) value over a local neighborhood. Mathematically, a point x* is a local minimum (maximum) if there exists a neighborhood around x* such that f(x*) ≤ f(x) (or f(x*) ≥ f(x)) for all x within that neighborhood. In other words, a local optimum is a point where the function value is no worse than at its neighboring points.

Global Optima 

The global optimum of a function is the point within its domain where the function attains its lowest (or highest) value over the entire domain. In other words, it is the overall best solution to the optimization problem.
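To make the local-vs-global distinction concrete, here is a small numeric sketch using f(x) = x⁴ − 3x² + x, a function with two valleys. The function, the grid, and the interval are illustrative assumptions; on a discrete grid, a point counts as a local minimum if it is no higher than both neighbors, and the global minimum is simply the lowest value overall:

```python
# Illustrative: f has two local minima; only one of them is global.
def f(x):
    return x ** 4 - 3 * x ** 2 + x

xs = [i / 1000 for i in range(-3000, 3001)]  # grid on [-3, 3]
ys = [f(x) for x in xs]

# Discrete local minima: no higher than either grid neighbor.
local_minima = [xs[i] for i in range(1, len(xs) - 1)
                if ys[i] <= ys[i - 1] and ys[i] <= ys[i + 1]]

# Global minimum: the lowest function value on the whole grid.
global_min_x = min(xs, key=f)

print(local_minima)    # two local minimizers, one in each valley
print(global_min_x)    # the deeper (left) valley wins
```

Both entries of `local_minima` satisfy the local definition, but only the one matching `global_min_x` is the global minimum.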

Types of Objective Functions 

In optimization, an objective function is a function that represents the quantity to be maximized or minimized. It measures the performance or quality of a solution. Depending on the problem, the objective function can take different forms.

Convex Objective Function

We say a function is convex if, for any two points within its domain, the line segment connecting the two points lies on or above the graph of the function. In other words, a function f(x) is convex if, for any x1 and x2 within its domain and for any value t in the range [0, 1], the following condition holds:

f(tx1 + (1-t)x2) ≤ tf(x1) + (1-t)f(x2)

A convex function has a bowl-shaped graph that opens upward; its slope is non-decreasing as we move from left to right along the graph.
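The defining inequality f(tx1 + (1−t)x2) ≤ tf(x1) + (1−t)f(x2) can be checked numerically. The sketch below tests it on f(x) = x², a standard convex function; the endpoints, step count, and tolerance are illustrative assumptions:

```python
# Numeric check of the convexity inequality for f(x) = x^2:
# the chord between (x1, f(x1)) and (x2, f(x2)) must not dip below the graph.

def f(x):
    return x * x

def satisfies_convexity(func, x1, x2, steps=100):
    """Sample t in [0, 1] and verify f(blend) <= blend of f values."""
    for i in range(steps + 1):
        t = i / steps
        lhs = func(t * x1 + (1 - t) * x2)        # function at the blend point
        rhs = t * func(x1) + (1 - t) * func(x2)  # corresponding chord value
        if lhs > rhs + 1e-12:                    # small tolerance for rounding
            return False
    return True

print(satisfies_convexity(f, -3.0, 5.0))  # True: chord lies on or above the graph
```

Passing this sampled check does not prove convexity in general, but a single failing t is enough to disprove it.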

Non-convex Objective Function

We say a function is non-convex if it does not satisfy the property of convexity. A non-convex function can have various shapes and curvatures, such as multiple peaks, valleys, or irregular patterns. These functions do not satisfy the property that the line segment connecting any two points on the graph lies on or above the graph itself.
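The same chord-versus-graph test can be used to show a violation. Here g(x) = x⁴ − 3x², a two-valley function chosen for illustration, fails the convexity inequality for a suitable pair of endpoints:

```python
# Counter-check: for a non-convex g, some chord dips below the graph,
# i.e. g(t*x1 + (1-t)*x2) > t*g(x1) + (1-t)*g(x2) for some t in [0, 1].

def g(x):
    return x ** 4 - 3 * x ** 2

def violates_convexity(func, x1, x2, steps=100):
    """Return True if any sampled t breaks the convexity inequality."""
    for i in range(steps + 1):
        t = i / steps
        lhs = func(t * x1 + (1 - t) * x2)
        rhs = t * func(x1) + (1 - t) * func(x2)
        if lhs > rhs + 1e-12:
            return True
    return False

# g(-1.5) = g(1.5) = -1.6875, but g(0) = 0 sits above the chord midpoint.
print(violates_convexity(g, -1.5, 1.5))  # True
```

By contrast, the same test returns False for a convex function such as x², no matter which endpoints are chosen.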

Local and Global Optimum in Convex and Non-Convex function 

Convex Function 

In a convex (or concave) function, the graph is bowl-shaped, so the local optimum and the global optimum are the same point. In the graph below, the curve has only one valley, so we easily find the optimum value. Depending on the need, we can transform the objective function from convex to concave (by negating it) if we want to find a maximum instead of a minimum, and vice versa.

Local and global minimum in a convex function

What we have here: on the x-axis are different values of the decision variable x, and on the y-axis is the function value. When you plot this, you can easily spot the marked point at which the function attains its minimum value.

So there is no question of choosing among multiple minima; there is only one minimum here, and it is marked in the graph. In this case, that minimum is both a local minimum and a global minimum.

In fact, it is a local minimum because, in the vicinity of this point, it is the best solution you can get. And since the best solution in the vicinity of this point is also the best solution globally, we also call it the global minimum.

Non-convex Function

In a non-convex function, the graph can take many shapes and can therefore have multiple valleys, so it becomes hard to find the global minimum.

Now, take a look at the graph below. Again it is a univariate optimization problem: on the x-axis are different values of the decision variable, and on the y-axis we plot the function. Notice that there are two points where the function attains a minimum. When we say "minimum" here, we initially mean only a local minimum: if you look at the point x1* in the graph, the function cannot take any better value in the vicinity of that point, from a minimization viewpoint.

In other words, if I am at x1* and the function takes this value, then moving to the right increases the function value, which is bad because we are trying to find the minimum; and moving to the left also increases the function value, which is equally bad.
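This "which valley you roll into depends on where you start" behavior can be sketched with plain gradient descent on a two-valley function. The function f(x) = x⁴ − 3x² + x, the step size, and the starting points are illustrative assumptions, not from the article:

```python
# Illustrative: gradient descent on a non-convex f(x) = x^4 - 3x^2 + x
# converges to different local minima depending on the starting point.

def grad(x):
    """Derivative of f(x) = x^4 - 3x^2 + x."""
    return 4 * x ** 3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, steps=5000):
    """Follow the negative gradient from x0 with a fixed step size."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = gradient_descent(-2.0)   # rolls into the left valley (the global minimum)
right = gradient_descent(2.0)   # rolls into the right valley (a local minimum only)
print(round(left, 3), round(right, 3))
```

Both runs satisfy the local-minimum condition (the gradient vanishes and the function increases in either direction), yet only the left one is globally optimal; this is exactly why non-convex training can get "stuck".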

Local and global minimum in a non-convex function

Why is this concept important for data science?

Let’s make a connection between this concept and data science. The problem of finding the global minimum has been a real issue in several data science algorithms. For example, in the 90s there was a lot of excitement and interest in neural networks, and for a few years a lot of research went into them. In many cases, however, finding the globally optimal solution turned out to be very difficult: the networks trained to local optima that were not good enough for the types of problems being solved. That became a real issue for neural networks. In recent years, the problem has been revisited, and there are now much better algorithms, functional forms, and training strategies, so that some notion of global optimality can be achieved. That is a key reason these algorithms have made a comeback and become so useful.

More on univariate optimization and its implementation can be found here 
