Quick Ex#1: Brachistochrone

Solved by: Johann Bernoulli, Jakob Bernoulli, Newton, Leibniz, and L'Hôpital

Given:

A particle sliding without friction along an unknown track in a gravitational field

Dynamic Constraints

\[\dot{x}_1(t)=x_3(t)\sin(u_1(t))\]
\[\dot{x}_2(t)=-x_3(t)\cos(u_1(t))\]
\[\dot{x}_3(t)=g\cos(u_1(t))\]

Boundary Conditions

\[x_1(0)=0 \qquad x_1(t_f)=2\]
\[x_2(0)=0 \qquad x_2(t_f)=-2\]
\[x_3(0)=0 \qquad x_3(t_f)=\text{free}\]

Find:

The track that minimizes time

\[J=t_f\]

Solution:

This is the classical brachistochrone problem from the calculus of variations.

Packages that will be used

using NLOptControl, PrettyPlots

Define the Problem

Next let's write down the boundary conditions into an array:

X0=[0.0,0.0,0.0]
XF=[2.,-2.,NaN]

Notice:

  1. The numbers put into the expression are Float64; for now this is required!

  2. A NaN is put into the boundary constraint for the third state; if any of the state boundary constraints are free, pass a NaN

Now that we have the basic problem written down, we can call the define() function as:

n=define(numStates=3,numControls=1,X0=X0,XF=XF);

Basics

Variable      Description
n             object that holds the entire optimal control problem
de            array of differential equation expressions
numStates     the number of states
numControls   the number of controls
X0            initial state constraint
XF            final state constraint

Also, not in this problem, but

Variable   Description
XL         lower state bound
XU         upper state bound
CL         lower control bound
CU         upper control bound

The above bounds can be set in the same fashion as the initial and final state constraints. i.e. in an Array.
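For instance, a sketch of how such bounds could be passed (the bound values here are hypothetical and not part of this example; this assumes define() accepts the XL, XU, CL, and CU keyword arguments alongside X0 and XF):

```julia
# Hypothetical bounds, for illustration only
XL = [-10., -10., 0.]     # lower state bounds (e.g., keep speed non-negative)
XU = [ 10.,  10., 25.]    # upper state bounds
CL = [-pi/2]              # lower control bound
CU = [ pi/2]              # upper control bound
n = define(numStates=3, numControls=1, X0=X0, XF=XF, XL=XL, XU=XU, CL=CL, CU=CU);
```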

State and Control Names (optional)

states!(n,[:x,:y,:v],descriptions=["x(t)","y(t)","v(t)"]);
controls!(n,[:u],descriptions=["u(t)"]);

Differential Equations

Now we need to write all of the given information out. Let's start with the differential equation, that is written as an array of expressions:

dx=[:(v[j]*sin(u[j])),:(-v[j]*cos(u[j])),:(9.81*cos(u[j]))]
dynamics!(n,dx)

Configure the Problem

Now that the basic optimal control problem has been defined, the next step is to configure!() with additional options.

configure!(n;(:Nck=>[100]),(:finalTimeDV=>true));

Settings:

Key            Description
:Nck           array that holds the number of collocation points within each interval
:finalTimeDV   bool to indicate if the final time is a design variable

Notice:

  1. Final time is a design variable; we are trying to minimize it

  2. We defined this as a single interval problem with 100 points
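The mesh is not limited to a single interval. As a sketch (the interval sizes here are hypothetical), a three-interval mesh could be configured by passing a longer :Nck array:

```julia
# Three mesh intervals with 40, 30, and 30 collocation points, respectively
# (hypothetical values, for illustration only)
configure!(n; (:Nck=>[40,30,30]), (:finalTimeDV=>true));
```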

Objective Function

Finally, the objective function needs to be defined. For this, we use the JuMP macro @NLobjective() directly as:

@NLobjective(n.ocp.mdl,Min,n.ocp.tf)

with,

Variable     Description
n.ocp.mdl    object that holds the JuMP model
Min          for a minimization problem
n.ocp.tf     a reference to the final time
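For objectives that are not simply the final time, NLOptControl also provides an integrate!() function that builds a quadrature approximation of an integral expression. A sketch (this assumes integrate!() with this signature, as used in other NLOptControl examples; the control-effort objective is hypothetical here):

```julia
# Hypothetical alternative objective: minimize the integrated control effort
obj = integrate!(n, :(u[j]^2));
@NLobjective(n.ocp.mdl, Min, obj);
```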

Optimize

Now that the entire optimal control problem has been defined we can optimize!() it as:

optimize!(n)

Post Process

Make sure that you are not running the code in a folder that contains an important folder named results, because it will be deleted! Now that the problem has been optimized, we can quickly visualize the solution using allPlots() as:

allPlots(n)

Optional plot settings

Many of the plot settings can be modified using the plotSettings() function. For instance;

plotSettings(;(:size=>(700,700)));

allPlots() automatically plots the solution to all of the state and control variables. In this problem, we may be interested in comparing two states against one another which can be done using the statePlot() function as:

statePlot(n,1,1,2)

For this case, there are four things that need to be passed to statePlot():

Argument   Name   Description
1          n      object that holds the entire optimal control problem
2          idx    reference to the solution number, used when solving MPC problems
3          st1    state number for the x-axis
4          st2    state number for the y-axis

Data Organization

All of the state, control, and time data are stored in an array of DataFrames called n.r.ocp.dfs

n.r.ocp.dfs
1-element Array{Any,1}:
 101×5 DataFrames.DataFrame
│ Row │ t           │ x           │ y            │ v          │ u           │
├─────┼─────────────┼─────────────┼──────────────┼────────────┼─────────────┤
│ 1   │ 0.0         │ 0.0         │ 0.0          │ 0.0        │ 5.82398e-14 │
│ 2   │ 0.000302536 │ 1.32471e-10 │ -4.48945e-7  │ 0.00296788 │ 0.000442609 │
│ 3   │ 0.0010139   │ 4.98629e-9  │ -5.04231e-6  │ 0.00994636 │ 0.00148333  │
│ 4   │ 0.00213113  │ 4.63039e-8  │ -2.2277e-5   │ 0.0209063  │ 0.00311783  │
│ 5   │ 0.00365302  │ 2.33209e-7  │ -6.54545e-5  │ 0.035836   │ 0.00534436  │
│ 6   │ 0.00557807  │ 8.30305e-7  │ -0.000152615 │ 0.0547203  │ 0.00816071  │
│ 7   │ 0.00790437  │ 2.36255e-6  │ -0.000306446 │ 0.0775401  │ 0.0115641   │
│ 8   │ 0.0106296   │ 5.74545e-6  │ -0.000554166 │ 0.104272   │ 0.0155511   │
⋮
│ 93  │ 0.812177    │ 1.92932     │ -1.97229     │ 6.22063    │ 1.18821     │
│ 94  │ 0.815101    │ 1.94622     │ -1.97905     │ 6.23128    │ 1.19249     │
│ 95  │ 0.817627    │ 1.96087     │ -1.98484     │ 6.24039    │ 1.19619     │
│ 96  │ 0.819753    │ 1.97323     │ -1.98968     │ 6.24799    │ 1.1993      │
│ 97  │ 0.821477    │ 1.98328     │ -1.99357     │ 6.25411    │ 1.20182     │
│ 98  │ 0.822796    │ 1.99098     │ -1.99654     │ 6.25877    │ 1.20375     │
│ 99  │ 0.823711    │ 1.99633     │ -1.9986      │ 6.26198    │ 1.20509     │
│ 100 │ 0.824219    │ 1.9993      │ -1.99973     │ 6.26377    │ 1.20583     │
│ 101 │ 0.824339    │ 2.0         │ -2.0         │ 6.26418    │ NaN         │

It is an array because the problem is designed to be solved multiple times in a receding time horizon. The variables can be accessed like this:

n.r.ocp.dfs[1][:x][1:4]
4-element Array{Float64,1}:
 0.0
 1.32471e-10
 4.98629e-9
 4.63039e-8

Optimization Data

n.r.ocp.dfsOpt
1×4 DataFrames.DataFrame
│ Row │ tSolve  │ objVal   │ status  │ iterNum │
├─────┼─────────┼──────────┼─────────┼─────────┤
│ 1   │ 4.08226 │ 0.824339 │ Optimal │ 2       │

The salient optimization data is stored in the table above

Variable   Description
tSolve     CPU time for the optimization problem
objVal     objective function value
status     return status of the solver
iterNum    a variable for a higher-level algorithm; often these problems are nested
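As a sanity check, the brachistochrone has a classical closed-form solution: the optimal track is a cycloid. Parameterizing it as

\[x(\theta)=a(\theta-\sin\theta) \qquad y(\theta)=-a(1-\cos\theta) \qquad t(\theta)=\theta\sqrt{a/g}\]

the boundary conditions x(t_f)=2 and y(t_f)=-2 give \(\theta_f \approx 2.412\) and \(a \approx 1.146\), so

\[t_f=\theta_f\sqrt{a/g}\approx 2.412\sqrt{1.146/9.81}\approx 0.8243\]

which agrees with the reported objVal of 0.824339.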

One thing that may be noticed is the long time it takes to solve the problem. This is typical for the first optimization; after that, even if the problem is modified, the optimization time is greatly reduced.

For instance, let's re-run the optimization:

optimize!(n);
n.r.ocp.dfsOpt[:tSolve]
2-element Array{Any,1}:
 4.08226
 0.293274

Costate visualization

For :ps (pseudospectral) methods, the costates can also be calculated as:

X0=[0.0,0.0,0.0]
XF=[2.,-2.,NaN]
n=define(numStates=3,numControls=1,X0=X0,XF=XF)
states!(n,[:x,:y,:v],descriptions=["x(t)","y(t)","v(t)"])
controls!(n,[:u],descriptions=["u(t)"])
dx=[:(v[j]*sin(u[j])),:(-v[j]*cos(u[j])),:(9.81*cos(u[j]))]
dynamics!(n,dx)
n.s.ocp.evalCostates = true
configure!(n;(:Nck=>[100]),(:finalTimeDV=>true));
@NLobjective(n.ocp.mdl,Min,n.ocp.tf)
optimize!(n);
using PrettyPlots
allPlots(n)

Notice how the control jumps down for a bit; that is because the dynamics depend on u only through sin(u) and cos(u), which are unchanged when u is shifted by 2*pi*n for any integer n.
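For reference (standard Pontryagin conventions, not specific to NLOptControl), the costates plotted above are the multipliers \(\lambda_1,\lambda_2,\lambda_3\) of the Hamiltonian

\[H=\lambda_1 v\sin u-\lambda_2 v\cos u+\lambda_3 g\cos u \qquad \dot{\lambda}_i=-\frac{\partial H}{\partial x_i}\]

and, because the dynamics are autonomous, H is constant along the optimal trajectory.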

Save results

While some results are saved automatically, state, control, and costate (if applicable) data (about the collocation points and the Lagrange polynomial that runs through them) can be saved with the saveData() function as:

saveData(n)