
Curve Fitting and Optimization
Specific Instructional Objectives (TIK)
• Students should understand and be able to apply linear regression to geophysical and geological data
Main Goal of Applying Geophysical Methods
• The major task of geophysics is to make quantitative statements about the interior of the earth (the model) from observations (the data)
Linear and Nonlinear Regression
What is the difference between linear and nonlinear regression?

Linear Case
• Linear regression requires a linear model.
• A model is linear when each term is either a constant or the product of a parameter and a predictor variable.
• Example: Response = constant + parameter * predictor + ... + parameter * predictor
$$Y = b_0 + b_1 X_1 + b_2 X_2 + \dots + b_k X_k$$
• In statistics, a regression equation (or function) is linear when it is linear in the parameters.
$$Y = b_0 + b_1 X_1 + b_2 X_1^2$$
• This model is still linear in the parameters even though the predictor variable is squared.
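To make the distinction concrete, here is a minimal sketch (not from the slides; the data and coefficients are hypothetical) fitting the squared-predictor model above with ordinary least squares, which works precisely because the model is linear in the parameters:

```python
import numpy as np

# Hypothetical data from Y = b0 + b1*X + b2*X^2 with noise. The model is
# linear in the parameters b0, b1, b2, so ordinary least squares applies.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 10.0, 50)
Y = 2.0 + 0.5 * X - 0.1 * X**2 + rng.normal(0.0, 0.2, X.size)

# Design matrix: one column per term (constant, X, X^2).
G = np.column_stack([np.ones_like(X), X, X**2])

# Solve the least-squares problem min ||G b - Y||^2.
b, *_ = np.linalg.lstsq(G, Y, rcond=None)
# b should be close to the true coefficients [2.0, 0.5, -0.1]
```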
Nonlinear Case
• Nonlinear equations can take many different forms.
• Literally, it's not linear.
• If the equation doesn't meet the criteria above for a linear equation, it's nonlinear.
• Example:
Mobility = (1288.14 + 1491.08 * Density Ln + 583.238 * Density Ln^2 + 75.4167 * Density Ln^3) / (1 + 0.966295 * Density Ln + 0.397973 * Density Ln^2 + 0.0497273 * Density Ln^3)
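A model like the Mobility example, with parameters in the denominator, cannot be written as a design matrix times a parameter vector, so it must be fit iteratively. Below is a minimal sketch using SciPy's `curve_fit` on a hypothetical rational model; the function form, data, and starting values are all assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# A rational model is nonlinear in its parameters: b1 appears in the
# denominator, so no design matrix exists and the fit must be iterative.
def rational(x, a0, a1, b1):
    return (a0 + a1 * x) / (1.0 + b1 * x)

# Hypothetical synthetic data generated from known parameters.
rng = np.random.default_rng(1)
x = np.linspace(0.1, 5.0, 60)
y = rational(x, 2.0, 3.0, 0.5) + rng.normal(0.0, 0.01, x.size)

# curve_fit solves the nonlinear least-squares problem iteratively.
p, _ = curve_fit(rational, x, y, p0=[1.0, 1.0, 0.1])
```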
Linear Regression (Straight-Line Regression)
• Suppose temperature T varies linearly with depth z, so that it can be expressed by the equation T = a + b z
Straight-Line Regression
• T at a given z can be predicted if a and b are known.
• If T is measured at several depths z, the model parameters a and b can be estimated → inverse modeling.
• This is done by minimizing the "distance" between T_i^cal (calculated values) and T_i^obs (observed values).
• Least-squares method
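The least-squares line fit can be sketched directly from the normal equations; the temperature-depth values below are hypothetical:

```python
import numpy as np

# Straight-line regression T = a + b z by minimizing
# E = sum_i (a + b z_i - T_i)^2.
# Hypothetical temperature-depth data (z in m, T in deg C).
z = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
T = np.array([20.1, 23.0, 25.9, 29.2, 32.0])

N = z.size
# Closed-form least-squares estimates for a straight line.
b = (N * np.sum(z * T) - np.sum(z) * np.sum(T)) / (N * np.sum(z**2) - np.sum(z)**2)
a = (np.sum(T) - b * np.sum(z)) / N
# For this data: a = 20.04, b = 0.03
```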

$$E = \sum_{i=1}^{N} \left( T_i^{cal} - T_i^{obs} \right)^2 = \sum_{i=1}^{N} e_i^2$$
Straight-Line Regression
$$E = \sum_{i=1}^{N} \left( T_i^{cal} - T_i^{obs} \right)^2 = \sum_{i=1}^{N} \left( a + b z_i - T_i^{obs} \right)^2$$

• If E is at its minimum, its derivatives with respect to the model parameters a and b are equal to zero:
$$\frac{\partial E}{\partial a} = 0; \qquad \frac{\partial E}{\partial b} = 0$$
• Two equations with the unknowns a and b; a and b can then be computed → solution
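The step from the two derivative conditions to the solution can be checked symbolically; here is a minimal sketch with SymPy on a tiny, exactly linear data set (the data are hypothetical):

```python
import sympy as sp

# Solve dE/da = 0 and dE/db = 0 symbolically for a small data set that
# lies exactly on the line T = 1 + 2 z, so the fit should be exact.
a, b = sp.symbols('a b')
z_data = [0, 1, 2]
T_data = [1, 3, 5]

# Least-squares objective E(a, b).
E = sum((a + b*z - T)**2 for z, T in zip(z_data, T_data))

# The two normal equations come from setting both partial derivatives to zero.
sol = sp.solve([sp.diff(E, a), sp.diff(E, b)], [a, b])
# sol recovers a = 1, b = 2
```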
Straight-Line Regression
• Misfit or objective function: min E, meaning minimize E
Solution: Straight-Line Regression
Straight-Line Regression in Matrix Notation
• Relation between the observed temperature-depth data and the model parameters.
• The objective function is to minimize the error.
• The error is the difference between the observed data and the data calculated from the model.
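In matrix notation the same fit reads d = G m, with m = [a, b]^T; a minimal sketch (hypothetical data) solving the normal equations G^T G m = G^T d:

```python
import numpy as np

# Matrix form of straight-line regression: d = G m, with
#   d = observed temperatures, m = [a, b]^T,
#   G = [[1, z_1], [1, z_2], ...] (the kernel/design matrix).
z = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
d = np.array([20.1, 23.0, 25.9, 29.2, 32.0])

G = np.column_stack([np.ones_like(z), z])

# Least-squares solution of the normal equations: (G^T G) m = G^T d.
m = np.linalg.solve(G.T @ G, G.T @ d)
a, b = m
# For this data: a = 20.04, b = 0.03
```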
Summary
Example: Inversion of a Line Model, T = a + b z
Example: Inversion of a Parabola Model, T = a + b z + c z^2
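The parabola model is still linear in (a, b, c), so the same matrix machinery applies with a three-column kernel; a minimal sketch on hypothetical noise-free data:

```python
import numpy as np

# Parabolic model T = a + b z + c z^2: still linear in the parameters
# (a, b, c), so the normal equations apply with a three-column G.
z = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
T = 10.0 + 2.0 * z + 0.5 * z**2        # hypothetical noise-free data

G = np.column_stack([np.ones_like(z), z, z**2])
m = np.linalg.solve(G.T @ G, G.T @ T)
# m recovers [10.0, 2.0, 0.5] up to round-off
```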
METHODS FOR NON-LINEAR LEAST SQUARES PROBLEMS
The Gauss–Newton Method
• Given a vector function
$$\mathbf{f}: \mathbb{R}^n \rightarrow \mathbb{R}^m$$
with m ≥ n, we want to minimize ‖f(x)‖, or equivalently to find
$$\mathbf{x}^* = \arg\min_{\mathbf{x}} F(\mathbf{x})$$
where
$$F(\mathbf{x}) = \frac{1}{2} \sum_{i=1}^{m} f_i(\mathbf{x})^2 = \frac{1}{2} \|\mathbf{f}(\mathbf{x})\|^2 = \frac{1}{2} \mathbf{f}(\mathbf{x})^T \mathbf{f}(\mathbf{x})$$

• Least squares problems can be solved by general optimization methods, but we shall present special methods that are more efficient.
• Provided that f has continuous second partial derivatives, we can write its Taylor expansion as
$$\mathbf{f}(\mathbf{x} + \mathbf{h}) = \mathbf{f}(\mathbf{x}) + \mathbf{J}(\mathbf{x})\,\mathbf{h} + O(\|\mathbf{h}\|^2)$$
• where $\mathbf{J} \in \mathbb{R}^{m \times n}$ is the Jacobian, a matrix containing the first partial derivatives of the function components,
$$\left( \mathbf{J}(\mathbf{x}) \right)_{ij} = \frac{\partial f_i}{\partial x_j}(\mathbf{x})$$
and
$$\frac{\partial F}{\partial x_j}(\mathbf{x}) = \sum_{i=1}^{m} f_i(\mathbf{x}) \frac{\partial f_i}{\partial x_j}(\mathbf{x})$$
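The Jacobian definition can be sanity-checked numerically; the sketch below compares an analytic Jacobian against forward finite differences for a hypothetical exponential residual function (not from the slides):

```python
import numpy as np

# Sketch (hypothetical example): residuals of an exponential model
# f_i(x) = x1 * exp(x2 * t_i) - y_i and their Jacobian J_ij = df_i/dx_j.
t = np.array([0.0, 0.5, 1.0])
y = np.array([1.0, 2.0, 4.0])

def residuals(x):
    return x[0] * np.exp(x[1] * t) - y

def jacobian(x):
    # Columns: df_i/dx1 = exp(x2 t_i),  df_i/dx2 = x1 t_i exp(x2 t_i)
    e = np.exp(x[1] * t)
    return np.column_stack([e, x[0] * t * e])

# Forward-difference check of the analytic Jacobian at a test point.
x0 = np.array([1.0, 1.0])
h = 1e-6
J_fd = np.column_stack([
    (residuals(x0 + h * np.eye(2)[j]) - residuals(x0)) / h
    for j in range(2)
])
```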
The Gauss–Newton Method
• It is based on implemented first derivatives of the components of the vector function.
• In special cases it can give quadratic convergence, as the Newton method does for general optimization.
• The Gauss–Newton method is based on a linear approximation to the components of f (a linear model of f) in the neighbourhood of x: for small ‖h‖,
$$\mathbf{f}(\mathbf{x} + \mathbf{h}) = \mathbf{f}(\mathbf{x}) + \mathbf{J}(\mathbf{x})\,\mathbf{h} + O(\|\mathbf{h}\|^2)$$
where $\mathbf{J} \in \mathbb{R}^{m \times n}$ is the Jacobian, so that
$$\mathbf{f}(\mathbf{x} + \mathbf{h}) \approx \boldsymbol{\ell}(\mathbf{h}) \equiv \mathbf{f}(\mathbf{x}) + \mathbf{J}(\mathbf{x})\,\mathbf{h}$$
The Gauss–Newton Method
• Substituting $\mathbf{f}(\mathbf{x}+\mathbf{h}) \approx \boldsymbol{\ell}(\mathbf{h}) \equiv \mathbf{f}(\mathbf{x}) + \mathbf{J}(\mathbf{x})\,\mathbf{h}$ into
$$F(\mathbf{x}) = \frac{1}{2} \sum_{i=1}^{m} f_i(\mathbf{x})^2 = \frac{1}{2} \|\mathbf{f}(\mathbf{x})\|^2 = \frac{1}{2} \mathbf{f}(\mathbf{x})^T \mathbf{f}(\mathbf{x})$$
gives
$$F(\mathbf{x}+\mathbf{h}) \approx L(\mathbf{h}) \equiv \frac{1}{2} \boldsymbol{\ell}(\mathbf{h})^T \boldsymbol{\ell}(\mathbf{h}) = F(\mathbf{x}) + \mathbf{h}^T \mathbf{J}^T \mathbf{f} + \frac{1}{2} \mathbf{h}^T \mathbf{J}^T \mathbf{J}\,\mathbf{h}$$
(with f = f(x) and J = J(x)). The Gauss–Newton step $\mathbf{h}_{gn}$ minimizes L(h).
The Gauss–Newton Method
• It is easily seen that the gradient and the Hessian of L are
$$L'(\mathbf{h}) = \mathbf{J}^T \mathbf{f} + \mathbf{J}^T \mathbf{J}\,\mathbf{h}, \qquad L''(\mathbf{h}) = \mathbf{J}^T \mathbf{J}$$
• Further, we see that the matrix $L''(\mathbf{h})$ is independent of h.
The Gauss–Newton Method
• This implies that L(h) has a unique minimizer, which can be found by solving
$$\left( \mathbf{J}^T \mathbf{J} \right) \mathbf{h}_{gn} = -\mathbf{J}^T \mathbf{f}$$
The next iterate is $\mathbf{x} := \mathbf{x} + \alpha\,\mathbf{h}_{gn}$, where α is found by line search. The classical Gauss–Newton method uses α = 1 in all steps.
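The whole iteration can be sketched in a few lines; the model, data, and starting guess below are assumptions for illustration, and a simple backtracking line search stands in for the line search mentioned above:

```python
import numpy as np

# Minimal Gauss-Newton sketch (hypothetical example): fit the model
# y = x1 * exp(x2 * t), i.e. residuals f_i(x) = x1 * exp(x2 * t_i) - y_i.
t = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(1.5 * t)              # hypothetical noise-free data

def f(x):                              # residual vector f(x)
    return x[0] * np.exp(x[1] * t) - y

def J(x):                              # Jacobian of the residuals
    e = np.exp(x[1] * t)
    return np.column_stack([e, x[0] * t * e])

def F(x):                              # objective F = 0.5 * ||f||^2
    r = f(x)
    return 0.5 * r @ r

x = np.array([1.0, 1.0])               # starting guess
for _ in range(100):
    Jx, fx = J(x), f(x)
    # Gauss-Newton step: solve (J^T J) h_gn = -J^T f
    h = np.linalg.solve(Jx.T @ Jx, -Jx.T @ fx)
    # Backtracking line search for alpha (the classical method uses alpha = 1).
    alpha = 1.0
    while F(x + alpha * h) > F(x) and alpha > 1e-8:
        alpha *= 0.5
    x = x + alpha * h
    if np.linalg.norm(alpha * h) < 1e-12:
        break
```

For this noise-free problem the iterates should approach the true parameters (2.0, 1.5).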
THANK YOU
