Type: | Package |
Title: | Goodness-of-Fit Tests for the Inverse Gaussian Distribution |
Version: | 1.0 |
Author: | Bruno Ebner [aut, cre], Jaco Visagie [aut], Steffen Betsch [aut], James Allison [aut], Lucas Iglesias [ctb] |
Maintainer: | Bruno Ebner <bruno.ebner@kit.edu> |
Description: | We implement various tests of the composite hypothesis of fit to the family of inverse Gaussian distributions. Included are methods presented by Allison, J.S., Betsch, S., Ebner, B., and Visagie, I.J.H. (2022) <doi:10.48550/arXiv.1910.14119>, as well as two tests from Henze and Klar (2002) <doi:10.1023/A:1022442506681>. Additionally, the package implements a test proposed by Baringhaus and Gaigall (2015) <doi:10.1016/j.jmva.2015.05.013>. For each test a parametric bootstrap procedure is implemented. |
License: | CC BY 4.0 |
Depends: | R (≥ 3.5.0) |
Imports: | pracma, rmutil |
Encoding: | UTF-8 |
NeedsCompilation: | no |
Packaged: | 2024-10-29 07:57:37 UTC; lucas |
RoxygenNote: | 7.3.2 |
Repository: | CRAN |
Date/Publication: | 2024-11-01 14:30:12 UTC |
The first Allison-Betsch-Ebner-Visagie test statistic
Description
This function computes the first test statistic of the goodness-of-fit tests for the inverse Gaussian family due to Allison et al. (2022). Two different estimation procedures are implemented, namely the method of moments and the maximum likelihood method.
Usage
ABEV1(data, a = 10, meth = "MME")
Arguments
data | a vector of positive numbers.
a | positive tuning parameter.
meth | method of estimation used. Possible values are "MME" (method of moments estimation, the default) and "MLE" (maximum likelihood estimation).
Details
The numerically stable test statistic for the first Allison-Betsch-Ebner-Visagie test is defined as:
ABEV1_{n,a} = \frac{1}{4n} \sum_{j,k=1}^{n} \Bigg[ \left( \hat{\varphi}_n + \frac{3}{Y_{n,j}} - \frac{\hat{\varphi}_n}{Y_{n,j}^2} \right) \left( \hat{\varphi}_n + \frac{3}{Y_{n,k}} - \frac{\hat{\varphi}_n}{Y_{n,k}^2} \right) h_{1,a}(Y_{n,j}, Y_{n,k})
- 2 \left( \hat{\varphi}_n + \frac{3}{Y_{n,j}} - \frac{\hat{\varphi}_n}{Y_{n,j}^2} \right) h_{2,a}(Y_{n,j}, Y_{n,k})
- 2 \left( \hat{\varphi}_n + \frac{3}{Y_{n,k}} - \frac{\hat{\varphi}_n}{Y_{n,k}^2} \right) h_{2,a}(Y_{n,k}, Y_{n,j})
+ \frac{4}{a} e^{-a \max(Y_{n,j}, Y_{n,k})} \Bigg],
with \hat{\varphi}_n = \hat{\lambda}_n / \hat{\mu}_n, where \hat{\mu}_n, \hat{\lambda}_n are consistent estimators of \mu, \lambda, respectively, the parameters of the inverse Gaussian distribution. Furthermore, Y_{n,j} = X_j / \hat{\mu}_n, j = 1, ..., n, for (X_j)_{j = 1,...,n}, a sequence of independent observations of a positive random variable X. The functions h_{i,a}(s,t), i = 1, 2, are defined in Allison et al. (2022), Section 5.1.
The null hypothesis is rejected for large values of the test statistic ABEV1_{n,a}.
Value
value of the test statistic.
References
Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2022) "On Testing the Adequacy of the Inverse Gaussian Distribution". doi:10.48550/arXiv.1910.14119
Examples
ABEV1(rmutil::rinvgauss(20,2,1),a=10,meth='MLE')
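Since the statistic depends on the data only through the scaled observations Y_{n,j} and the scale-free estimate \hat{\varphi}_n, it should be invariant under rescaling of the sample. A quick check of this property, assuming only the documented interface:

set.seed(1)
x <- rmutil::rinvgauss(20, 2, 1)
ABEV1(x, a = 10)      # statistic for the original sample
ABEV1(5 * x, a = 10)  # rescaled sample: the value should be essentially unchanged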
The second Allison-Betsch-Ebner-Visagie test statistic
Description
This function computes the second test statistic of the goodness-of-fit tests for the inverse Gaussian family due to Allison et al. (2022). Two different estimation procedures are implemented, namely the method of moments and the maximum likelihood method.
Usage
ABEV2(data, a = 10, meth = "MME")
Arguments
data | a vector of positive numbers.
a | positive tuning parameter.
meth | method of estimation used. Possible values are "MME" (method of moments estimation, the default) and "MLE" (maximum likelihood estimation).
Details
The numerically stable test statistic for the second Allison-Betsch-Ebner-Visagie test is defined as:
ABEV2_{n,a} = \frac{1}{4n} \sum_{j,k=1}^{n} \Bigg[ \left( \hat{\varphi}_n + \frac{3}{Y_{n,j}} - \frac{\hat{\varphi}_n}{Y_{n,j}^2} \right) \left( \hat{\varphi}_n + \frac{3}{Y_{n,k}} - \frac{\hat{\varphi}_n}{Y_{n,k}^2} \right) \tilde{h}_{1,a}(Y_{n,j}, Y_{n,k})
- 2 \left( \hat{\varphi}_n + \frac{3}{Y_{n,j}} - \frac{\hat{\varphi}_n}{Y_{n,j}^2} \right) \tilde{h}_{2,a}(Y_{n,j}, Y_{n,k})
- 2 \left( \hat{\varphi}_n + \frac{3}{Y_{n,k}} - \frac{\hat{\varphi}_n}{Y_{n,k}^2} \right) \tilde{h}_{2,a}(Y_{n,k}, Y_{n,j})
+ 4 \sqrt{\frac{\pi}{a}} \, \Phi \left( - \sqrt{2a} \max(Y_{n,j}, Y_{n,k}) \right) \Bigg],
with \hat{\varphi}_n = \hat{\lambda}_n / \hat{\mu}_n, where \hat{\mu}_n, \hat{\lambda}_n are consistent estimators of \mu, \lambda, respectively, the parameters of the inverse Gaussian distribution. Furthermore, Y_{n,j} = X_j / \hat{\mu}_n, j = 1, ..., n, for (X_j)_{j = 1,...,n}, a sequence of independent observations of a positive random variable X. The functions \tilde{h}_{i,a}(s,t), i = 1, 2, are defined in Allison et al. (2022), Section 5.1, and \Phi denotes the distribution function of the standard normal distribution.
The null hypothesis is rejected for large values of the test statistic ABEV2_{n,a}.
Value
value of the test statistic.
References
Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2022) "On Testing the Adequacy of the Inverse Gaussian Distribution". doi:10.48550/arXiv.1910.14119
Examples
ABEV2(rmutil::rinvgauss(20,2,1),a=10,meth='MLE')
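The tuning parameter a controls the weight function and hence the value of the statistic. A small sketch of a sweep over several candidate values, assuming only the documented interface:

set.seed(1)
x <- rmutil::rinvgauss(20, 2, 1)
sapply(c(1, 5, 10), function(a) ABEV2(x, a = a, meth = "MME"))  # one statistic per tuning parameter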
The Anderson-Darling test statistic
Description
This function computes the test statistic of the goodness-of-fit test for the inverse Gaussian family in the spirit of Anderson and Darling.
Usage
AD(data)
Arguments
data | a vector of positive numbers.
Details
Let X_{(j)} denote the j-th order statistic of X_1, \ldots, X_n, a sequence of independent observations of a positive random variable X. Furthermore, let \hat{F}(x) = F(x; \hat{\mu}_n, \hat{\lambda}_n), where F is the distribution function of the inverse Gaussian distribution and \hat{\mu}_n, \hat{\lambda}_n are the maximum likelihood estimators of \mu and \lambda, respectively, the parameters of the inverse Gaussian distribution.
The null hypothesis is rejected for large values of the test statistic:
AD = -n - \frac{1}{n} \sum_{j=1}^{n} \left[ (2j-1) \log \hat{F}(X_{(j)}) + (2(n-j) + 1) \log \left( 1 - \hat{F}(X_{(j)}) \right) \right].
Value
value of the test statistic.
References
Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2022) "On Testing the Adequacy of the Inverse Gaussian Distribution". doi:10.48550/arXiv.1910.14119
Examples
AD(rmutil::rinvgauss(20,2,1))
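As a cross-check, the statistic can be recomputed from its definition using the closed-form maximum likelihood estimators and distribution function of the inverse Gaussian law; both are standard results and not taken from this package's internals, so small numerical differences are possible (and the exp(2*lambda/mu) term may overflow for large \hat{\varphi}_n):

ad_by_hand <- function(x) {
  n      <- length(x)
  mu     <- mean(x)                  # MLE of mu
  lambda <- n / sum(1 / x - 1 / mu)  # MLE of lambda
  # distribution function F(q; mu, lambda) of the inverse Gaussian law
  Fhat <- function(q) pnorm(sqrt(lambda / q) * (q / mu - 1)) +
    exp(2 * lambda / mu) * pnorm(-sqrt(lambda / q) * (q / mu + 1))
  Fj <- Fhat(sort(x))
  j  <- seq_len(n)
  -n - mean((2 * j - 1) * log(Fj) + (2 * (n - j) + 1) * log(1 - Fj))
}
x <- rmutil::rinvgauss(20, 2, 1)
ad_by_hand(x)  # should agree with AD(x)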
The Baringhaus-Gaigall test statistic
Description
This function computes the test statistic of the goodness-of-fit test for the inverse Gaussian family due to Baringhaus and Gaigall (2015).
Usage
BG(data)
Arguments
data | a vector of positive numbers.
Details
The test statistic of the Baringhaus-Gaigall test is defined as:
BG_{n} = \frac{n}{(n(n-1))^5} \sum_{\mu, \nu = 1, \mu \neq \nu}^{n} \left( N_1(\mu, \nu)N_4(\mu, \nu) - N_2(\mu, \nu)N_3(\mu, \nu) \right)^2,
where
N_1(\mu, \nu) = \sum_{i,j = 1, i \neq j}^{n} \mathbf{1} \left\{ \tilde{Y}_{i,j} \leq \tilde{Y}_{\mu, \nu}, \tilde{Z}_{i,j} \leq \tilde{Z}_{\mu, \nu} \right\},
N_2(\mu, \nu) = \sum_{i,j = 1, i \neq j}^{n} \mathbf{1} \left\{ \tilde{Y}_{i,j} \leq \tilde{Y}_{\mu, \nu}, \tilde{Z}_{i,j} > \tilde{Z}_{\mu, \nu} \right\},
N_3(\mu, \nu) = \sum_{i,j = 1, i \neq j}^{n} \mathbf{1} \left\{ \tilde{Y}_{i,j} > \tilde{Y}_{\mu, \nu}, \tilde{Z}_{i,j} \leq \tilde{Z}_{\mu, \nu} \right\},
N_4(\mu, \nu) = \sum_{i,j = 1, i \neq j}^{n} \mathbf{1} \left\{ \tilde{Y}_{i,j} > \tilde{Y}_{\mu, \nu}, \tilde{Z}_{i,j} > \tilde{Z}_{\mu, \nu} \right\},
with \mathbf{1} being the indicator function. Let f(X_i, X_j) = (X_i + X_j)/2 and g(X_i, X_j) = (X_i^{-1} + X_j^{-1})/2 - f(X_i, X_j)^{-1}, with X_1, ..., X_n positive, independent and identically distributed random variables with finite moments \mathbb{E}[X_1^2] and \mathbb{E}[X_1^{-1}]. Then (\tilde{Y}_{i,j}, \tilde{Z}_{i,j}) = (f(X_i, X_j), g(X_i, X_j)), 1 \leq i, j \leq n, i \neq j. Note that \tilde{Y}_{i,j} and \tilde{Z}_{i,j} are independent if, and only if, X_1, ..., X_n are realized from an inverse Gaussian distribution.
Value
value of the test statistic.
References
Baringhaus, L. and Gaigall, D. (2015). "On an independence test approach to the goodness-of-fit problem", Journal of Multivariate Analysis, 140, 193-208. doi:10.1016/j.jmva.2015.05.013
Examples
BG(rmutil::rinvgauss(20,2,1))
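The characterization behind the test can be illustrated directly: under the inverse Gaussian null hypothesis the constructed pairs should show (approximately) no association. A rough sketch; the package computes the counts N_1, ..., N_4 internally:

set.seed(1)
x   <- rmutil::rinvgauss(50, 2, 1)
idx <- expand.grid(i = seq_along(x), j = seq_along(x))
idx <- idx[idx$i != idx$j, ]
f <- (x[idx$i] + x[idx$j]) / 2                  # the pairs Y~_{i,j}
g <- (1 / x[idx$i] + 1 / x[idx$j]) / 2 - 1 / f  # the pairs Z~_{i,j}
cor(f, g, method = "kendall")  # roughly zero under the null hypothesis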
The Cramer-von Mises test statistic
Description
This function computes the value of the test statistic of the goodness-of-fit test for the inverse Gaussian family in the spirit of Cramer and von Mises. Note that this tests the composite hypothesis of fit to the family of inverse Gaussian distributions.
Usage
CM(data)
Arguments
data | a vector of positive numbers.
Details
Let X_{(j)} denote the j-th order statistic of X_1, \ldots, X_n, a sequence of independent observations of a positive random variable X. Furthermore, let \hat{F}(x) = F(x; \hat{\mu}_n, \hat{\lambda}_n), where F is the distribution function of the inverse Gaussian distribution and \hat{\mu}_n, \hat{\lambda}_n are the maximum likelihood estimators of \mu and \lambda, respectively, the parameters of the inverse Gaussian distribution.
The null hypothesis is rejected for large values of the test statistic:
CM = \frac{1}{12n} + \sum_{j=1}^{n} \left( \hat{F}(X_{(j)}) - \frac{2j-1}{2n} \right)^2.
Value
value of the test statistic.
References
Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2022) "On Testing the Adequacy of the Inverse Gaussian Distribution". doi:10.48550/arXiv.1910.14119
Examples
CM(rmutil::rinvgauss(20,2,1))
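Because the parameters are estimated, fixed critical values are not available; this is why the package's test.CM() uses a parametric bootstrap. A minimal sketch of how a 5% critical value could be approximated, assuming that rmutil's dispersion parameter s corresponds to 1/lambda (this should be verified against rmutil's documentation):

set.seed(1)
x   <- rmutil::rinvgauss(30, 2, 1)
mu  <- mean(x)
lam <- length(x) / sum(1 / x - 1 / mu)  # maximum likelihood estimates
cm_boot <- replicate(300, CM(rmutil::rinvgauss(length(x), m = mu, s = 1 / lam)))
quantile(cm_boot, 0.95)  # approximate 5% critical value under the fitted null
CM(x)                    # reject if the observed statistic exceeds it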
The first Henze-Klar test statistic
Description
This function computes the first test statistic of the goodness-of-fit test for the inverse Gaussian family due to Henze and Klar (2002).
Usage
HK1(data, a = 0)
Arguments
data | a vector of positive numbers.
a | nonnegative tuning parameter.
Details
The representation of the first Henze-Klar test statistic used for computation is given by:
HK_{n,a}^{(1)}= \frac{\hat{\varphi}_n}{n} \sum_{j,k=1}^{n} \hat{Z}_{jk}^{-1} \left\{ 1 - (Y_j + Y_k) \left( 1 + \sqrt{\frac{\pi}{2\hat{Z}_{jk}}} \text{erfce}\left( \sqrt{\frac{\hat{Z}_{jk}}{2}} \right) \right) + \left( 1 + \frac{2}{\hat{Z}_{jk}} \right) Y_j Y_k \right\},
with \hat{\varphi}_n = \hat{\lambda}_n / \hat{\mu}_n, where \hat{\mu}_n, \hat{\lambda}_n are the maximum likelihood estimators of \mu and \lambda, respectively, the parameters of the inverse Gaussian distribution. Furthermore, \hat{Z}_{jk} = \hat{\varphi}_n (Y_j + Y_k + a), where Y_i = X_i / \hat{\mu}_n for (X_i)_{i = 1,...,n}, a sequence of independent observations of a positive random variable X. To ensure numerical stability of the implementation, the exponentially scaled complementary error function \text{erfce}(x) = \exp(x^2)\,\text{erfc}(x) is used, with \text{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty \exp(-t^2)\,dt.
The null hypothesis is rejected for large values of the test statistic HK_{n,a}^{(1)}.
Value
value of the test statistic.
References
Henze, N. and Klar, B. (2002) "Goodness-of-fit tests for the inverse Gaussian distribution based on the empirical Laplace transform", Annals of the Institute of Statistical Mathematics, 54(2):425-444. doi:10.1023/A:1022442506681
Examples
HK1(rmutil::rinvgauss(20,2,1))
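A brief illustration of why the scaled function matters: pracma (an import of this package) provides erfcx(), while the naive product exp(x^2) * erfc(x) breaks down for moderately large arguments. This sketch assumes pracma's erfcx() and erfc(), which exist in current pracma versions:

x <- c(1, 5, 10, 30)
pracma::erfcx(x)            # numerically stable erfce(x)
exp(x^2) * pracma::erfc(x)  # naive version: overflows to NaN for x = 30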
The second Henze-Klar test statistic
Description
This function computes the test statistic of the second goodness-of-fit test for the inverse Gaussian family due to Henze and Klar (2002).
Usage
HK2(data)
Arguments
data | a vector of positive numbers.
Details
The representation of the second Henze-Klar test statistic used for computation (with a = 0) is given by:
HK_{n,0}^{(2)} = \frac{1}{n} \sum_{j,k=1}^{n} Z_{jk}^{-1} - 2 \sum_{j=1}^{n} Z_j^{-1} \left\{ 1 - \sqrt{\frac{\pi \hat{\varphi}_n}{2 Z_j}} \, \mathrm{erfce} \left( \frac{\hat{\varphi}_n^{1/2} (Z_j + 1)}{(2 Z_j)^{1/2}} \right) \right\} + n\frac{1 + 2 \hat{\varphi}_n}{4 \hat{\varphi}_n}
with \hat{\varphi}_n = \hat{\lambda}_n / \hat{\mu}_n, where \hat{\mu}_n, \hat{\lambda}_n are the maximum likelihood estimators of \mu and \lambda, respectively, the parameters of the inverse Gaussian distribution. Furthermore, Z_{jk} = Y_j + Y_k and Z_j = Y_j, where Y_i = X_i / \hat{\mu}_n for (X_i)_{i = 1,...,n}, a sequence of independent observations of a positive random variable X. To ensure numerical stability of the implementation, the exponentially scaled complementary error function \text{erfce}(x) = \exp(x^2)\,\text{erfc}(x) is used, with \text{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty \exp(-t^2)\,dt.
The null hypothesis is rejected for large values of the test statistic HK_{n,0}^{(2)}.
Value
value of the test statistic.
References
Henze, N. and Klar, B. (2002) "Goodness-of-fit tests for the inverse Gaussian distribution based on the empirical Laplace transform", Annals of the Institute of Statistical Mathematics, 54(2):425-444. doi:10.1023/A:1022442506681
Examples
HK2(rmutil::rinvgauss(20,2,1))
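Since the null hypothesis is rejected for large values, the statistic should tend to be larger for data away from the inverse Gaussian family. A quick, informal comparison:

set.seed(1)
HK2(rmutil::rinvgauss(50, 2, 1))  # data from the null model
HK2(rgamma(50, 1))                # off-model data: typically a larger value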
The Kolmogorov-Smirnov test statistic
Description
This function computes the test statistic of the goodness-of-fit test for the inverse Gaussian family in the spirit of Kolmogorov and Smirnov. Note that this tests the composite hypothesis of fit to the family of inverse Gaussian distributions.
Usage
KS(data)
Arguments
data | a vector of positive numbers.
Details
Let X_{(j)} denote the j-th order statistic of X_1, \ldots, X_n, a sequence of independent observations of a positive random variable X. Furthermore, let \hat{F}(x) = F(x; \hat{\mu}_n, \hat{\lambda}_n), where F is the distribution function of the inverse Gaussian distribution and \hat{\mu}_n, \hat{\lambda}_n are the maximum likelihood estimators of \mu and \lambda, respectively, the parameters of the inverse Gaussian distribution.
The null hypothesis is rejected for large values of the test statistic:
KS = \max(D^+, D^-),
where
D^+ = \max_{j=1,\ldots,n} \left( \frac{j}{n} - \hat{F}(X_{(j)}) \right)
and
D^- = \max_{j=1,\ldots,n} \left( \hat{F}(X_{(j)}) - \frac{j-1}{n} \right).
Value
value of the test statistic.
References
Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2022) "On Testing the Adequacy of the Inverse Gaussian Distribution". doi:10.48550/arXiv.1910.14119
Examples
KS(rmutil::rinvgauss(20,2,1))
Print method for tests of the inverse Gaussian distribution
Description
Printing objects of class "gofIG".
Usage
## S3 method for class 'gofIG'
print(x, ...)
Arguments
x | object of class "gofIG".
... | further arguments to be passed to or from methods.
Details
A gofIG object is a named list of numbers and character strings, supplemented with test (the name of the test statistic). test is displayed as a title, and the remaining elements are given in an aligned "name = value" format.
Value
the argument x, invisibly, as for all print methods.
Examples
print(test.ABEV1(rgamma(20,1)))
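Because the print method returns its argument invisibly, the printed object can still be captured and its documented elements inspected:

res <- print(test.AD(rmutil::rinvgauss(20, 2, 1), B = 100))
res$p.value  # the returned list remains accessible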
The first Allison-Betsch-Ebner-Visagie goodness-of-fit test for the inverse Gaussian family
Description
This function computes the goodness-of-fit test for the inverse Gaussian family due to Allison et al. (2019). Two different estimation procedures are implemented, namely the method of moments and the maximum likelihood method.
Usage
test.ABEV1(data, a = 10, meth = "MME", B = 500)
Arguments
data | a vector of positive numbers.
a | positive tuning parameter.
meth | method of estimation used. Possible values are "MME" (method of moments estimation, the default) and "MLE" (maximum likelihood estimation).
B | number of bootstrap iterations used to obtain the p value.
Details
The test is of weighted L^2-type and uses a characterization of the distribution function of the inverse Gaussian distribution. The p value is obtained by a parametric bootstrap procedure.
Value
a list containing the name of the test statistic, the value of the tuning parameter, the estimation method used, the value of the test statistic, the bootstrap p value, the values of the estimators, and the number of bootstrap iterations:
$Test
the name of the test statistic used.
$parameter
the value of the tuning parameter.
$est.method
the estimation method used.
$T.value
the value of the test statistic.
$p.value
the approximated p value.
$par.est
the estimated parameters.
$boot.run
number of bootstrap iterations.
References
Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2019) "New weighted L^2-type tests for the inverse Gaussian distribution", arXiv:1910.14119. doi:10.48550/arXiv.1910.14119
Examples
test.ABEV1(rmutil::rinvgauss(20,2,1),B=100)
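As a complement, a minimal sketch of the parametric bootstrap that produces the p value; test.ABEV1() performs this internally and its details may differ. The moment estimator shown is one common choice (from Var(X) = mu^3/lambda), and the sketch assumes that rmutil's dispersion parameter s corresponds to 1/lambda:

boot_p <- function(x, a = 10, B = 200) {
  n   <- length(x)
  mu  <- mean(x)        # method-of-moments estimate of mu
  lam <- mu^3 / var(x)  # method-of-moments estimate of lambda
  t0  <- ABEV1(x, a = a, meth = "MME")
  tb  <- replicate(B, ABEV1(rmutil::rinvgauss(n, m = mu, s = 1 / lam),
                            a = a, meth = "MME"))
  mean(tb >= t0)        # proportion of bootstrap statistics at least as large
}
boot_p(rmutil::rinvgauss(20, 2, 1))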
The second Allison-Betsch-Ebner-Visagie goodness-of-fit test for the inverse Gaussian family
Description
This function computes the goodness-of-fit test for the inverse Gaussian family due to Allison et al. (2019). Two different estimation procedures are implemented, namely the method of moments and the maximum likelihood method.
Usage
test.ABEV2(data, a = 10, meth = "MME", B = 500)
Arguments
data | a vector of positive numbers.
a | positive tuning parameter.
meth | method of estimation used. Possible values are "MME" (method of moments estimation, the default) and "MLE" (maximum likelihood estimation).
B | number of bootstrap iterations used to obtain the p value.
Details
The test is of weighted L^2-type and uses a characterization of the distribution function of the inverse Gaussian distribution. The p value is obtained by a parametric bootstrap procedure.
Value
a list containing the name of the test statistic, the value of the tuning parameter, the estimation method used, the value of the test statistic, the bootstrap p value, the values of the estimators, and the number of bootstrap iterations:
$Test
the name of the test statistic used.
$parameter
the value of the tuning parameter.
$est.method
the estimation method used.
$T.value
the value of the test statistic.
$p.value
the approximated p value.
$par.est
the estimated parameters.
$boot.run
number of bootstrap iterations.
References
Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2019) "New weighted L^2-type tests for the inverse Gaussian distribution", arXiv:1910.14119. doi:10.48550/arXiv.1910.14119
Examples
test.ABEV2(rmutil::rinvgauss(20,2,1),B=100)
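Both estimation methods can be applied to the same data; their bootstrap p values will generally differ somewhat. Only the documented arguments and the $p.value element are assumed:

set.seed(42)
x <- rmutil::rinvgauss(30, 2, 1)
test.ABEV2(x, meth = "MME", B = 100)$p.value  # method of moments estimation
test.ABEV2(x, meth = "MLE", B = 100)$p.value  # maximum likelihood estimation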
The Anderson-Darling goodness-of-fit test for the inverse Gaussian family
Description
This function computes the goodness-of-fit test for the inverse Gaussian family in the spirit of Anderson and Darling. Note that this tests the composite hypothesis of fit to the family of inverse Gaussian distributions, i.e., a bootstrap procedure is implemented to perform the test.
Usage
test.AD(data, B = 500)
Arguments
data | a vector of positive numbers.
B | number of bootstrap iterations used to obtain the p value.
Details
The Anderson-Darling test is computed as described in Allison et al. (2019). The p value is obtained by a parametric bootstrap procedure.
Value
a list containing the name of the test statistic, the value of the test statistic, the bootstrap p value, the values of the estimators, and the number of bootstrap iterations:
$Test
the name of the test statistic used.
$T.value
the value of the test statistic.
$p.value
the approximated p value.
$par.est
the estimated parameters.
$boot.run
number of bootstrap iterations.
References
Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2019) "New weighted L^2-type tests for the inverse Gaussian distribution", arXiv:1910.14119. doi:10.48550/arXiv.1910.14119
Examples
test.AD(rmutil::rinvgauss(20,2,1),B=100)
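A quick, informal check of the bootstrap test on data from inside and outside the null family:

set.seed(7)
test.AD(rmutil::rinvgauss(30, 2, 1), B = 100)$p.value  # null data: p value not systematically small
test.AD(rgamma(30, 1), B = 100)$p.value                # off-model data: typically a small p value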
The Baringhaus-Gaigall goodness-of-fit test for the inverse Gaussian family
Description
This function computes the goodness-of-fit test for the inverse Gaussian family due to Baringhaus and Gaigall (2015).
Usage
test.BG(data, B)
Arguments
data | a vector of positive numbers.
B | number of bootstrap iterations used to obtain the p value.
Value
a list containing the name of the test statistic, the value of the test statistic, the bootstrap p value, the values of the estimators, and the number of bootstrap iterations:
$Test
the name of the test statistic used.
$T.value
the value of the test statistic.
$p.value
the approximated p value.
$par.est
the estimated parameters.
$boot.run
number of bootstrap iterations.
References
Baringhaus, L. and Gaigall, D. (2015). "On an independence test approach to the goodness-of-fit problem", Journal of Multivariate Analysis, 140, 193-208. doi:10.1016/j.jmva.2015.05.013
Examples
test.BG(rmutil::rinvgauss(20,2,1),B=100)
The Cramer-von Mises goodness-of-fit test for the inverse Gaussian family
Description
This function computes the goodness-of-fit test for the inverse Gaussian family in the spirit of Cramer and von Mises. Note that this tests the composite hypothesis of fit to the family of inverse Gaussian distributions, i.e., a bootstrap procedure is implemented to perform the test.
Usage
test.CM(data, B = 500)
Arguments
data | a vector of positive numbers.
B | number of bootstrap iterations used to obtain the p value.
Details
The Cramer-von Mises test is computed as described in Allison et al. (2019). The p value is obtained by a parametric bootstrap procedure.
Value
a list containing the name of the test statistic, the value of the test statistic, the bootstrap p value, the values of the estimators, and the number of bootstrap iterations:
$Test
the name of the test statistic used.
$T.value
the value of the test statistic.
$p.value
the approximated p value.
$par.est
the estimated parameters.
$boot.run
number of bootstrap iterations.
References
Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2019) "New weighted L^2-type tests for the inverse Gaussian distribution", arXiv:1910.14119. doi:10.48550/arXiv.1910.14119
Examples
test.CM(rmutil::rinvgauss(20,2,1),B=100)
The first Henze-Klar goodness-of-fit test for the inverse Gaussian family
Description
This function computes the goodness-of-fit test for the inverse Gaussian family due to Henze and Klar (2002).
Usage
test.HK1(data, a = 0, B = 500)
Arguments
data | a vector of positive numbers.
a | nonnegative tuning parameter.
B | number of bootstrap iterations used to obtain the p value.
Details
The test statistic is a weighted integral over the squared modulus of some measure of deviation of the empirical distribution of the given data from the family of inverse Gaussian laws, expressed by means of the empirical Laplace transform. The p value is obtained by a parametric bootstrap procedure.
Value
a list containing the name of the test statistic, the value of the tuning parameter, the value of the test statistic, the bootstrap p value, the values of the estimators, and the number of bootstrap iterations:
$Test
the name of the test statistic used.
$parameter
the value of the tuning parameter.
$T.value
the value of the test statistic.
$p.value
the approximated p value.
$par.est
the estimated parameters.
$boot.run
number of bootstrap iterations.
References
Henze, N. and Klar, B. (2002) "Goodness-of-fit tests for the inverse Gaussian distribution based on the empirical Laplace transform", Annals of the Institute of Statistical Mathematics, 54(2):425-444. doi:10.1023/A:1022442506681
Examples
test.HK1(rmutil::rinvgauss(20,2,1),B=100)
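A small sketch of how the bootstrap p value can be examined across several tuning parameters, using only the documented interface:

set.seed(3)
x <- rmutil::rinvgauss(30, 2, 1)
sapply(c(0, 1, 5), function(a) test.HK1(x, a = a, B = 100)$p.value)  # one p value per tuning parameter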
The second Henze-Klar goodness-of-fit test for the inverse Gaussian family
Description
This function computes the goodness-of-fit test for the inverse Gaussian family due to Henze and Klar (2002).
Usage
test.HK2(data, B)
Arguments
data | a vector of positive numbers.
B | number of bootstrap iterations used to obtain the p value.
Details
The test statistic is a weighted integral over the squared modulus of some measure of deviation of the empirical distribution of the given data from the family of inverse Gaussian laws, expressed by means of the empirical Laplace transform. The p value is obtained by a parametric bootstrap procedure.
Value
a list containing the name of the test statistic, the value of the test statistic, the bootstrap p value, the values of the estimators, and the number of bootstrap iterations:
$Test
the name of the test statistic used.
$T.value
the value of the test statistic.
$p.value
the approximated p value.
$par.est
the estimated parameters.
$boot.run
number of bootstrap iterations.
References
Henze, N. and Klar, B. (2002) "Goodness-of-fit tests for the inverse Gaussian distribution based on the empirical Laplace transform", Annals of the Institute of Statistical Mathematics, 54(2):425-444. doi:10.1023/A:1022442506681
Examples
test.HK2(rmutil::rinvgauss(20,2,1),B=100)
The Kolmogorov-Smirnov goodness-of-fit test for the inverse Gaussian family
Description
This function computes the goodness-of-fit test for the inverse Gaussian family in the spirit of Kolmogorov and Smirnov. Note that this tests the composite hypothesis of fit to the family of inverse Gaussian distributions, i.e., a bootstrap procedure is implemented to perform the test.
Usage
test.KS(data, B = 500)
Arguments
data | a vector of positive numbers.
B | number of bootstrap iterations used to obtain the p value.
Details
The Kolmogorov-Smirnov test is computed as described in Allison et al. (2019). The p value is obtained by a parametric bootstrap procedure.
Value
a list containing the name of the test statistic, the value of the test statistic, the bootstrap p value, the values of the estimators, and the number of bootstrap iterations:
$Test
the name of the test statistic used.
$T.value
the value of the test statistic.
$p.value
the approximated p value.
$par.est
the estimated parameters.
$boot.run
number of bootstrap iterations.
References
Allison, J.S., Betsch, S., Ebner, B., Visagie, I.J.H. (2019) "New weighted L^2-type tests for the inverse Gaussian distribution", arXiv:1910.14119. doi:10.48550/arXiv.1910.14119
Examples
test.KS(rmutil::rinvgauss(20,2,1),B=100)
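To see why the bootstrap is needed, compare with stats::ks.test() applied with the fitted distribution function plugged in: the statistic should agree (up to implementation details), but the ks.test() p value is not valid under the composite null hypothesis because it ignores the parameter estimation. The closed-form maximum likelihood estimators and distribution function below are standard results, not taken from this package's internals:

set.seed(5)
x   <- rmutil::rinvgauss(30, 2, 1)
mu  <- mean(x)
lam <- length(x) / sum(1 / x - 1 / mu)  # maximum likelihood estimates
Fhat <- function(q) pnorm(sqrt(lam / q) * (q / mu - 1)) +
  exp(2 * lam / mu) * pnorm(-sqrt(lam / q) * (q / mu + 1))
ks.test(x, Fhat)$statistic   # should agree with KS(x)
test.KS(x, B = 100)$p.value  # bootstrap p value, valid for the composite null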