# Optimization Pushups
The spirit of these exercises is to learn how to write simple solution algorithms. Test that each algorithm works, using simple test functions whose solutions are known.
- Write a function `fixed_point(f::Function, x0::Float64)` which computes the fixed point of `f` starting from initial point `x0`.
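One possible sketch of the fixed-point iteration, repeating `x ← f(x)` until successive iterates are close. The `tol` and `maxit` keyword arguments are illustrative additions, not part of the required signature:

```julia
function fixed_point(f::Function, x0::Float64; tol=1e-10, maxit=1000)
    x = x0
    for _ in 1:maxit
        xn = f(x)
        # Stop when successive iterates agree to within tol
        abs(xn - x) < tol && return xn
        x = xn
    end
    error("fixed_point did not converge")
end
```

A classic test is `fixed_point(cos, 1.0)`, which converges to the Dottie number, the unique solution of `cos(x) = x`.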
- Write a function `bisection(f::Function, a::Float64, b::Float64)` which computes a zero of function `f` within `(a, b)` using a bisection method.
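A minimal bisection sketch: keep halving the bracket while preserving the sign change. The `tol` keyword is an illustrative addition:

```julia
function bisection(f::Function, a::Float64, b::Float64; tol=1e-10)
    fa, fb = f(a), f(b)
    fa * fb <= 0 || error("f(a) and f(b) must have opposite signs")
    while b - a > tol
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0
            b = m            # zero lies in the left half
        else
            a, fa = m, fm    # zero lies in the right half
        end
    end
    return (a + b) / 2
end
```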
- Write a function `golden(f::Function, a::Float64, b::Float64)` which computes a zero of function `f` within `(a, b)` using a golden ratio method.
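The phrase "golden ratio method" most often refers to minimization, so the intended root-finding variant is open to interpretation; one reading, sketched here, keeps the bisection bracketing but splits the interval at the golden ratio instead of the midpoint. The `tol` keyword is an illustrative addition:

```julia
function golden(f::Function, a::Float64, b::Float64; tol=1e-10)
    invphi = (sqrt(5) - 1) / 2   # 1/φ ≈ 0.618
    fa = f(a)
    fa * f(b) <= 0 || error("f(a) and f(b) must have opposite signs")
    while b - a > tol
        # Split at the golden ratio rather than the midpoint
        m = a + (1 - invphi) * (b - a)
        fm = f(m)
        if fa * fm <= 0
            b = m
        else
            a, fa = m, fm
        end
    end
    return (a + b) / 2
end
```

The bracket still shrinks by a factor of at most `1/φ ≈ 0.618` per iteration, so the loop terminates, just more slowly than bisection's factor of `0.5`.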
- Write a function `zero_newton(f::Function, x0::Float64)` which computes the zero of function `f` starting from initial point `x0`.
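A Newton sketch under the assumption that `f` returns only a scalar, so the derivative is approximated by a forward difference; the `tol`, `maxit`, and step `h` are illustrative choices:

```julia
function zero_newton(f::Function, x0::Float64; tol=1e-10, maxit=100)
    x = x0
    for _ in 1:maxit
        fx = f(x)
        abs(fx) < tol && return x
        h = 1e-8
        fp = (f(x + h) - fx) / h   # forward-difference derivative (assumption)
        x -= fx / fp               # Newton step
    end
    error("zero_newton did not converge")
end
```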
- Add an option `zero_newton(f::Function, x0::Float64, backtracking=true)` which computes the zero of function `f` starting from initial point `x0`, using backtracking in each iteration.
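One way to sketch the backtracking variant: halve the Newton step until the residual actually decreases. As above, a forward-difference derivative is assumed, and `tol`/`maxit` are illustrative:

```julia
function zero_newton(f::Function, x0::Float64, backtracking::Bool=true;
                     tol=1e-10, maxit=100)
    x = x0
    for _ in 1:maxit
        fx = f(x)
        abs(fx) < tol && return x
        h = 1e-8
        fp = (f(x + h) - fx) / h
        d = -fx / fp        # full Newton step
        t = 1.0
        if backtracking
            # Halve the step until |f| decreases (simple backtracking line search)
            while abs(f(x + t * d)) >= abs(fx) && t > 1e-10
                t /= 2
            end
        end
        x += t * d
    end
    error("zero_newton did not converge")
end
```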
- Write a function `min_gd(f::Function, x0::Float64)` which computes the minimum of function `f` using gradient descent. Assume `f` returns a scalar and a gradient.
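A gradient-descent sketch, following the stated convention that `f` returns a `(value, gradient)` pair. The fixed learning rate `lr` and the stopping rule on the gradient norm are illustrative choices:

```julia
function min_gd(f::Function, x0::Float64; lr=0.1, tol=1e-8, maxit=10_000)
    x = x0
    for _ in 1:maxit
        v, g = f(x)            # f returns (value, gradient)
        abs(g) < tol && return x
        x -= lr * g            # step against the gradient
    end
    error("min_gd did not converge")
end
```

For example, with `f(x) = ((x - 3)^2, 2 * (x - 3))` the iteration converges to the minimizer `x = 3`.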
- Write a function `min_nr(f::Function, x0::Float64)` which computes the minimum of function `f` using the Newton-Raphson method. Assume `f` returns a scalar, a gradient, and a Hessian.
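A Newton-Raphson sketch for minimization, following the convention that `f` returns `(value, gradient, hessian)`; `tol` and `maxit` are illustrative:

```julia
function min_nr(f::Function, x0::Float64; tol=1e-8, maxit=100)
    x = x0
    for _ in 1:maxit
        v, g, h = f(x)         # f returns (value, gradient, hessian)
        abs(g) < tol && return x
        x -= g / h             # Newton step on the gradient
    end
    error("min_nr did not converge")
end
```

On a quadratic such as `f(x) = ((x - 3)^2, 2 * (x - 3), 2.0)` this converges in a single step.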
- Write a method `zero_newton(f::Function, x0::Vector{Float64})` which computes the zero of a vector-valued function `f` starting from initial point `x0`.
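A multivariate Newton sketch. Since the exercise only gives `f` itself, the Jacobian is approximated column by column with forward differences; `tol`, `maxit`, and `h` are illustrative:

```julia
using LinearAlgebra

function zero_newton(f::Function, x0::Vector{Float64}; tol=1e-10, maxit=100)
    x = copy(x0)
    n = length(x)
    for _ in 1:maxit
        fx = f(x)
        norm(fx) < tol && return x
        J = zeros(n, n)
        h = 1e-8
        for j in 1:n
            e = zeros(n); e[j] = h
            J[:, j] = (f(x + e) - fx) / h   # finite-difference Jacobian column
        end
        x -= J \ fx                         # Newton step: solve J d = f(x)
    end
    error("zero_newton did not converge")
end
```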
- Add a method `zero_newton(f::Function, x0::Vector{Float64}, backtracking=true)` which computes the zero of function `f` starting from initial point `x0`, using backtracking in each iteration.
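The same backtracking idea as in the scalar case carries over: halve the Newton step until the residual norm decreases. The finite-difference Jacobian and the `tol`/`maxit` values are illustrative assumptions:

```julia
using LinearAlgebra

function zero_newton(f::Function, x0::Vector{Float64}, backtracking::Bool=true;
                     tol=1e-10, maxit=100)
    x = copy(x0)
    n = length(x)
    for _ in 1:maxit
        fx = f(x)
        norm(fx) < tol && return x
        J = zeros(n, n)
        h = 1e-8
        for j in 1:n
            e = zeros(n); e[j] = h
            J[:, j] = (f(x + e) - fx) / h
        end
        d = -(J \ fx)
        t = 1.0
        if backtracking
            # Halve the step until the residual norm decreases
            while norm(f(x + t * d)) >= norm(fx) && t > 1e-10
                t /= 2
            end
        end
        x += t * d
    end
    error("zero_newton did not converge")
end
```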
- Add a method `zero_newton(f::Function, x0::Vector{Float64}, backtracking=true, lb::Vector{Float64})` which computes the zero of function `f` starting from initial point `x0`, taking the complementarity constraint `x >= lb` into account using the Fischer-Burmeister method.
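One sketch of the Fischer-Burmeister approach: replace the complementarity conditions with the smooth residual `φ(a, b) = a + b − sqrt(a² + b²)`, which is zero exactly when `a ≥ 0`, `b ≥ 0`, and `ab = 0`, applied componentwise to `f(x)` and `x - lb`, then run the backtracking Newton iteration on that residual. The finite-difference Jacobian and tolerances are illustrative assumptions:

```julia
using LinearAlgebra

# Fischer-Burmeister function: fb(a, b) = 0  ⇔  a ≥ 0, b ≥ 0, a*b = 0
fb(a, b) = a + b - sqrt(a^2 + b^2)

function zero_newton(f::Function, x0::Vector{Float64}, backtracking::Bool,
                     lb::Vector{Float64}; tol=1e-10, maxit=100)
    Phi(x) = fb.(f(x), x .- lb)   # componentwise complementarity residual
    x = copy(x0)
    n = length(x)
    for _ in 1:maxit
        Fx = Phi(x)
        norm(Fx) < tol && return x
        J = zeros(n, n)
        h = 1e-7
        for j in 1:n
            e = zeros(n); e[j] = h
            J[:, j] = (Phi(x + e) - Fx) / h
        end
        d = -(J \ Fx)
        t = 1.0
        if backtracking
            while norm(Phi(x + t * d)) >= norm(Fx) && t > 1e-10
                t /= 2
            end
        end
        x += t * d
    end
    error("zero_newton did not converge")
end
```

As a check, for `f(x) = [x[1] - 1, x[2] + 1]` with `lb = [0, 0]` the complementarity solution is `x = [1, 0]`: the first component is an interior zero of `f`, while the second is pinned at its lower bound with `f₂(x) > 0`.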