Defining mean and standard deviation via moments.
I'm reading a paper that defines the standard deviation as $$\sigma = \left(\mu_2 - \bar{L}^2\right)^{1/2}$$ where $$\bar{L} = \frac{\mu_2}{\mu_1}.$$ The moments are defined as $\mu_k = \int_0^\infty L^k \, n(L,t)\,dL$.
The paper gives a distribution of $$n(L) = 0.0399\exp\bigg(\frac{-(L-50)^2}{200}\bigg)$$ as well as a table of moments with $\mu_0 = 1.00$, $\mu_1 = 50.12$, and $\mu_2 = 2606.12$. I get similar values to these when computing the moments with either a trapezoidal rule or WolframAlpha.
However, with the definition above we have $$\bar{L}^2 = \bigg(\frac{2606.12}{50.12}\bigg)^2 = 51.9976^2 = 2703.75,$$ which is larger than $\mu_2$, making the variance negative and the standard deviation undefined.
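For concreteness, this is the sort of check I ran (a minimal sketch using NumPy's trapezoidal rule; the grid and its upper limit are my own choices, not from the paper):

    import numpy as np

    # distribution given in the paper, integrated on a grid over [0, 150]
    L = np.linspace(0.0, 150.0, 100_001)
    n = 0.0399 * np.exp(-(L - 50.0) ** 2 / 200.0)

    mu0 = np.trapz(n, L)           # ~1.00
    mu1 = np.trapz(L * n, L)       # ~50
    mu2 = np.trapz(L**2 * n, L)    # ~2600

    var_paper = mu2 - (mu2 / mu1) ** 2   # paper's definition: comes out negative
    var_usual = mu2 - mu1 ** 2           # usual definition: ~100, i.e. sigma ~ 10

    print(f"mu0={mu0:.4f}, mu1={mu1:.4f}, mu2={mu2:.2f}")
    print(f"paper's variance mu2 - (mu2/mu1)^2 = {var_paper:.2f}")
    print(f"usual variance   mu2 - mu1^2       = {var_usual:.2f}")

With $\bar{L} = \mu_1$ the variance comes out to roughly $100$, which matches the $\sigma = 10$ Gaussian the distribution was clearly built from, whereas the paper's $\bar{L} = \mu_2/\mu_1$ gives a negative value.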
Shouldn't the mean be defined as $\bar{L} = \mu_1$, as usual? Isn't it a mistake to have $\bar{L}$ defined as they do?
standard-deviation variance
asked Aug 10 at 20:23
Derek Handwerk
I would agree with you. The definition the paper gives for standard deviation is strange to say the least. – herb steinberg, Aug 10 at 21:39