Cheers,

Cesar


Thank you for the detailed tutorial. It bridges the gap between the knowledge provided in the original paper and the thorough understanding required for coding. However, I was wondering about the gradient evaluation ‘g’ in the provided algorithm. Although in the L-M algorithm it is Transpose(Jacobian) * residual, in the Bayesian regularization formulation shouldn’t it be [beta * Transpose(Jacobian) * residual + alpha * Weights], because of the modification in the objective function F = beta*SSE + alpha*SSW, where SSE and SSW are the sum of squared errors and the sum of squared weights? I may be wrong. A brief derivation is provided below.

Starting from the Taylor series: F(W + del_W) = F(W) + del_W * F’(W) + (1/2)(del_W)^2 * F”(W)

we have del_W = inv(F”) * F’ (this is the well-known Newton approximation); here del_W is the step size.

In the current context, the derivative of F w.r.t. W is F’ = beta * SSE’ + alpha * W,

and the second derivative of F w.r.t. W is F” = beta * SSE” + alpha * I;

=> del_W = inv( beta * SSE” + alpha * I ) * ( beta * SSE’ + alpha * W ); here SSE’ is Transpose(Jacobian) * residual and SSE” is Transpose(J) * J.

In other words, del_W = inv(H) * [ beta * Transpose(J) * residual + alpha * W ].

The Hessian H is straightforward, as provided in your code. I am not sure why Transpose(J) * e is used for the ‘g’ evaluation instead of [ beta * Transpose(J) * e + alpha * W ], where e is the residual.

Please correct me if there are any assumptions I am missing in the derivation.
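To make the question concrete, here is a small NumPy sketch of the step the derivation above describes (this is not the tutorial’s actual code; the toy values for J, e, W, alpha, and beta are made up for illustration, and the factor of 2 from differentiating the squares is dropped, as in the derivation):

```python
import numpy as np

def lm_step(J, e, W, alpha, beta):
    """One step under the regularized objective F = beta*SSE + alpha*SSW.

    g     = beta * J^T e + alpha * W   (gradient of F w.r.t. W, as derived above)
    H     = beta * J^T J + alpha * I   (Gauss-Newton Hessian of F)
    del_W = inv(H) * g                 (Newton-style step)
    """
    g = beta * (J.T @ e) + alpha * W
    H = beta * (J.T @ J) + alpha * np.eye(W.size)
    return np.linalg.solve(H, g)

# Toy problem: 3 residuals, 2 weights.
J = np.array([[1.0, 2.0],
              [0.5, -1.0],
              [2.0, 0.0]])
e = np.array([0.1, -0.2, 0.05])
W = np.array([0.3, -0.4])

# Plain L-M gradient, as used in the tutorial's code for 'g':
g_plain = J.T @ e
# Regularized gradient, which additionally carries the alpha*W term:
del_W = lm_step(J, e, W, alpha=0.5, beta=1.0)
```

With alpha = 0 and beta = 1 the two gradients coincide, which is exactly why the question arises only once the regularization term alpha*SSW enters the objective.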

Thank you

x1, x2, …, xn -> y1, y2, …, yn

or just

x -> y1, y2, …, yn

What is still missing before the Accord.Video.FFMPEG.x64 package (Install-Package Accord.Video.FFMPEG.x64 -Version 3.8.2-alpha) can be released as a non-alpha version?

Thank you
