This vignette gives a high-level overview of how to use the R package LHD to generate different types of efficient Latin hypercube designs (LHDs) with flexible sizes. Popular optimality criteria, along with other useful functions, are introduced as well.
Load the library first.
devtools::load_all()
#> i Loading LHD
library(LHD)
The overview starts with basic functions. The first function to be introduced is “rLHD”, which generates a random LHD with dimension \(n\) by \(k\). Suppose we want a random LHD with 6 rows and 3 columns, denoted by X. We can run the following code.
set.seed(1)
X=rLHD(n=6,k=3);X
#> [,1] [,2] [,3]
#> [1,] 1 3 5
#> [2,] 4 2 2
#> [3,] 3 6 6
#> [4,] 6 4 4
#> [5,] 2 1 3
#> [6,] 5 5 1
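Conceptually, a random \(n\) by \(k\) LHD is just \(k\) independent random permutations of \(1, \ldots, n\) stacked as columns. The following is a minimal sketch of that idea with our own helper name `my_rLHD`; it is not the package's internal code, which may differ.

```r
# A random LHD: each column is an independent random permutation of 1:n,
# so every level appears exactly once per column (hypothetical helper).
my_rLHD <- function(n, k) sapply(seq_len(k), function(j) sample(n))

set.seed(1)
D <- my_rLHD(n = 6, k = 3)
apply(D, 2, sort)  # every column sorts to 1, 2, ..., 6
```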
If we want to know the inter-site distance between row \(i\) and row \(j\) of X (\(i \neq j\)), we can use the function “dij”. For example, the inter-site distance between the 2nd and the 4th rows of X can be obtained via the following code.
dij(X=X,i=2,j=4) #The default setting is the rectangular distance.
#> [1] 6
#If Euclidean distance is desired, run the following
dij(X=X,i=2,j=4,q=2)
#> [1] 3.464102
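As a sanity check, both distances can be reproduced by hand from rows 2 and 4 of the X shown above:

```r
xi <- c(4, 2, 2)  # row 2 of X
xj <- c(6, 4, 4)  # row 4 of X
sum(abs(xi - xj))       # rectangular (q = 1) distance: 6
sqrt(sum((xi - xj)^2))  # Euclidean (q = 2) distance: 3.464102
```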
The formula of “dij” is given by the following
\[\begin{equation} d(\boldsymbol{x_i}, \boldsymbol{x_j})= (\sum_{l=1}^{k}|x_{il}-x_{jl}|^q)^{1/q}, \end{equation}\]
where \(\boldsymbol{x_i}\) and \(\boldsymbol{x_j}\) denote two design points from LHD matrix X. Having all the inter-site distances calculated, we could further calculate the \(\phi_p\) criterion of X according to the formula from Jin, Chen and Sudjianto (2005).
\[\begin{equation} \phi_{p}= \bigg\{\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}d(\boldsymbol{x_i}, \boldsymbol{x_j})^{-p} \bigg\} ^{1/p}. \end{equation}\]
The function “phi_p” computes the above criterion. We can run the following code.
phi_p(X=X,p=15)
#> [1] 0.2517586
#If Euclidean distance is desired, run the following
phi_p(X=X,p=15,q=2)
#> [1] 0.4102696
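The \(\phi_p\) formula is easy to re-implement directly from its definition; the following sketch, using our own helper name `my_phi_p` and the X matrix shown above, reproduces the package's value.

```r
# The 6 by 3 random LHD X shown earlier in this vignette
X <- matrix(c(1,3,5, 4,2,2, 3,6,6, 6,4,4, 2,1,3, 5,5,1),
            nrow = 6, byrow = TRUE)

# A minimal re-implementation of the phi_p criterion (hypothetical helper)
my_phi_p <- function(X, p = 15, q = 1) {
  n <- nrow(X)
  s <- 0
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      d <- sum(abs(X[i, ] - X[j, ])^q)^(1 / q)  # inter-site distance d(x_i, x_j)
      s <- s + d^(-p)
    }
  }
  s^(1 / p)
}

my_phi_p(X, p = 15)  # 0.2517586, matching phi_p(X = X, p = 15) above
```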
The \(\phi_p\) criterion is popular for maximin distance LHDs. When orthogonal or nearly orthogonal LHDs are desired, the average absolute correlation and the maximum absolute correlation are widely used (Georgiou (2009)). Their formulas are given by the following
\[\begin{equation} ave(|q|) = \frac{2 \sum_{i=1}^{k-1} \sum_{j=i+1}^{k}|q_{ij}|}{k(k-1)}, \, \, max(|q|)= max_{ij} |q_{ij}|, \end{equation}\]
where \(q_{ij}\) is the columnwise correlation. The functions “AvgAbsCor” and “MaxAbsCor” compute the above criteria. We can run the following code.
AvgAbsCor(X=X) #The average absolute correlation of X
#> [1] 0.3714286
MaxAbsCor(X=X) #The maximum absolute correlation of X
#> [1] 0.4285714
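Both criteria are just summaries of the off-diagonal entries of the correlation matrix of X, so they can also be checked with base R's `cor`, assuming the X matrix shown earlier:

```r
X <- matrix(c(1,3,5, 4,2,2, 3,6,6, 6,4,4, 2,1,3, 5,5,1),
            nrow = 6, byrow = TRUE)
r <- abs(cor(X)[upper.tri(cor(X))])  # |q_ij| for each pair of columns
mean(r)  # average absolute correlation: 0.3714286
max(r)   # maximum absolute correlation: 0.4285714
```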
Another optimality criterion to be introduced is the maximum projection criterion (Joseph, Gul and Ba (2015)), which focuses on the projection property of the design matrix. Its formula is given by the following
\[\begin{equation} \psi = \Bigg\{ \frac{1}{{n \choose 2}} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{1}{\Pi_{l=1}^{k}(x_{il}-x_{jl})^2} \Bigg\}^{1/k}. \end{equation}\]
The function “MaxProCriterion” computes the above criterion. We can run the following code.
MaxProCriterion(X=X) #The maximum projection criterion of X
#> [1] 0.3539991
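This criterion can likewise be reproduced directly from its definition; a short sketch with our own helper name `my_maxpro`, assuming the X matrix shown earlier:

```r
X <- matrix(c(1,3,5, 4,2,2, 3,6,6, 6,4,4, 2,1,3, 5,5,1),
            nrow = 6, byrow = TRUE)

# A minimal re-implementation of the maximum projection criterion
my_maxpro <- function(X) {
  n <- nrow(X); k <- ncol(X); s <- 0
  for (i in 1:(n - 1)) for (j in (i + 1):n)
    s <- s + 1 / prod((X[i, ] - X[j, ])^2)  # product over all k columns
  (s / choose(n, 2))^(1 / k)
}

my_maxpro(X)  # 0.3539991, matching MaxProCriterion(X = X) above
```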
A random LHD usually does not guarantee good space-filling properties, orthogonality, or projection properties. Morris and Mitchell (1995) proposed a version of the simulated annealing (SA) algorithm for generating maximin distance LHDs. Our function “SA” implements their algorithm, and we added an OC (optimality criterion) input argument to it. Therefore, “SA” not only generates maximin distance LHDs, but also orthogonal and maximum projection LHDs. The same is true for the other algorithms in our package.
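The accept/reject loop at the heart of simulated annealing can be sketched as follows. This is a simplified schematic under our own naming (`anneal_sketch`), not the package's implementation; the actual “SA” follows Morris and Mitchell's temperature schedule and stopping rules.

```r
# Schematic SA for LHDs: repeatedly swap two elements within a random column,
# always accept improvements, and accept worse moves with a probability that
# shrinks as the temperature cools. "criterion" is any function to minimize
# (e.g. a phi_p-style or correlation-based measure).
anneal_sketch <- function(X, criterion, n_iter = 100, temp = 1, cool = 0.95) {
  best <- X
  for (it in seq_len(n_iter)) {
    j <- sample(ncol(X), 1)           # pick a random column
    rows <- sample(nrow(X), 2)        # pick two random rows
    Xnew <- X
    Xnew[rows, j] <- X[rev(rows), j]  # swap the two elements (stays an LHD)
    delta <- criterion(Xnew) - criterion(X)
    if (delta < 0 || runif(1) < exp(-delta / temp)) X <- Xnew
    if (criterion(X) < criterion(best)) best <- X
    temp <- temp * cool               # cool down
  }
  best
}
```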
The function “exchange” is a building block of “SA”: it swaps two randomly chosen elements within a given column of X. The following code shows examples of both “exchange” and “SA”.
#Suppose we want to exchange two random elements from the 1st column of X.
Xnew=exchange(X=X,j=1)
#Look and compare
X;Xnew
#> [,1] [,2] [,3]
#> [1,] 1 3 5
#> [2,] 4 2 2
#> [3,] 3 6 6
#> [4,] 6 4 4
#> [5,] 2 1 3
#> [6,] 5 5 1
#> [,1] [,2] [,3]
#> [1,] 2 3 5
#> [2,] 4 2 2
#> [3,] 3 6 6
#> [4,] 6 4 4
#> [5,] 1 1 3
#> [6,] 5 5 1
#The SA function with default setting
set.seed(1)
trySA=SA(n=6,k=3)
#> [1] "average CPU time per iteration is: 0.02 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 6"
trySA
#> [,1] [,2] [,3]
#> [1,] 6 2 3
#> [2,] 1 6 4
#> [3,] 5 4 6
#> [4,] 4 5 2
#> [5,] 2 3 1
#> [6,] 3 1 5
#The phi_p of trySA is much smaller than the phi_p of X, which was 0.2517586
phi_p(X=trySA,p=15)
#> [1] 0.2046486
#Now we try SA function with a different optimality criterion
set.seed(1)
trySA2=SA(n=6,k=3,OC="AvgAbsCor")
#> [1] "average CPU time per iteration is: 0.03 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 6"
AvgAbsCor(trySA2) #The average absolute correlation is about 0.03
#> [1] 0.02857143
Leary, Bhaskar and Keane (2003) modified the work of Morris and Mitchell (1995), showing that starting SA from an orthogonal-array-based LHD (OALHD) converges faster and yields better designs. Our function “OASA” implements their algorithm.
The function “OA2LHD” is a building block of “OASA”: it transforms an orthogonal array (OA) into an LHD of corresponding size, based on the work of Tang (1993). The following code shows examples of both “OA2LHD” and “OASA”.
#create an OA(9,2,3,2)
OA=matrix(c(rep(1:3,each=3),rep(1:3,times=3)),ncol=2,nrow=9,byrow = F);OA
#> [,1] [,2]
#> [1,] 1 1
#> [2,] 1 2
#> [3,] 1 3
#> [4,] 2 1
#> [5,] 2 2
#> [6,] 2 3
#> [7,] 3 1
#> [8,] 3 2
#> [9,] 3 3
#Transfer the OA above into a 9 by 2 LHD
tryOA=OA2LHD(OA=OA)
tryOA
#> [,1] [,2]
#> [1,] 2 1
#> [2,] 3 5
#> [3,] 1 8
#> [4,] 5 2
#> [5,] 4 4
#> [6,] 6 7
#> [7,] 8 3
#> [8,] 9 6
#> [9,] 7 9
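The core of Tang (1993)'s construction is simple: within each column, the \(n/s\) rows carrying OA level \(l\) are relabeled with a random permutation of the integers \((l-1)\,n/s + 1, \ldots, l\,n/s\). A minimal sketch under our own helper name `oa_to_lhd` (not the package's code):

```r
# Expand each OA level into a block of distinct LHD levels, column by column
oa_to_lhd <- function(OA, s) {
  n <- nrow(OA)
  r <- n / s                    # each level appears r times per column
  L <- OA
  for (j in seq_len(ncol(OA))) {
    for (l in seq_len(s)) {
      rows <- which(OA[, j] == l)
      L[rows, j] <- (l - 1) * r + sample(r)  # relabel level l with distinct values
    }
  }
  L
}

OA <- matrix(c(rep(1:3, each = 3), rep(1:3, times = 3)), ncol = 2)
L <- oa_to_lhd(OA, s = 3)
apply(L, 2, sort)  # each column is a permutation of 1:9
```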
#The OASA function
set.seed(1)
tryOASA=OASA(OA=OA)
#> [1] "average CPU time per iteration is: 0.04 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 9"
tryOASA
#> [,1] [,2]
#> [1,] 3 3
#> [2,] 1 5
#> [3,] 2 9
#> [4,] 5 1
#> [5,] 6 4
#> [6,] 4 7
#> [7,] 8 2
#> [8,] 9 6
#> [9,] 7 8
#The phi_p of tryOASA is smaller than the phi_p of SA
phi_p(X=tryOASA,p=15); phi_p(X=SA(n=9,k=2),p=15)
#> [1] 0.2899805
#> [1] "average CPU time per iteration is: 0.04 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 9"
#> [1] 0.3356489
#Now we try OASA function with a different optimality criterion
tryOASA2=OASA(OA=OA,OC="AvgAbsCor")
#> [1] "average CPU time per iteration is: 0.01 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 9"
AvgAbsCor(tryOASA2) #The average absolute correlation is 0
#> [1] 0
Joseph and Hung (2008) modified the work of Morris and Mitchell (1995) in another way, reducing both \(\phi_{p}\) and the column-wise correlations at the same time. Our function “SA2008” implements their algorithm. See the following code and example.
set.seed(1)
trySA2008_63=SA2008(n=6,k=3)
#> [1] "average CPU time per iteration is: 0.06 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 6"
trySA2008_63
#> [,1] [,2] [,3]
#> [1,] 4 1 5
#> [2,] 1 2 3
#> [3,] 3 6 2
#> [4,] 6 4 4
#> [5,] 2 5 6
#> [6,] 5 3 1
phi_p(X=trySA2008_63,p=15)
#> [1] 0.203588
#Another example with different n and k
trySA2008_92=SA2008(n=9,k=2)
#> [1] "average CPU time per iteration is: 0.08 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 9"
trySA2008_92
#> [,1] [,2]
#> [1,] 8 2
#> [2,] 3 6
#> [3,] 5 9
#> [4,] 1 8
#> [5,] 4 1
#> [6,] 7 7
#> [7,] 9 5
#> [8,] 6 4
#> [9,] 2 3
phi_p(X=trySA2008_92,p=15)
#> [1] 0.2899805
#Now we try SA2008 function with a different optimality criterion
set.seed(1)
trySA2008=SA2008(n=6,k=3,OC="AvgAbsCor")
#> [1] "average CPU time per iteration is: 0.06 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 6"
AvgAbsCor(trySA2008) #The average absolute correlation is about 0.03
#> [1] 0.02857143
Ba, Myers and Brenneman (2015) extended the idea of sliced Latin hypercube designs (SLHDs) from Qian (2012) and modified the SA of Morris and Mitchell (1995), calling their algorithm the “improved two-stage algorithm”. Our function “SLHD” implements their algorithm. See the following code and example.
set.seed(1)
trySLHD_63F=SLHD(n=6,k=3,t=2) #The default setting (stage2 is FALSE)
#> [1] "average CPU time per iteration for Stage I is: 0.01 seconds"
#> [1] "the number of iterations completed for Stage I is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 6"
trySLHD_63F
#> [,1] [,2] [,3]
#> [1,] 6 4 5
#> [2,] 1 2 4
#> [3,] 3 5 2
#> [4,] 4 3 3
#> [5,] 2 6 6
#> [6,] 5 1 1
phi_p(X=trySLHD_63F,p=15)
#> [1] 0.2517316
#If the second stage is desired, run the following
trySLHD_63T=SLHD(n=6,k=3,t=2,stage2=TRUE)
#> [1] "average CPU time per iteration for Stage I is: 0.01 seconds"
#> [1] "average CPU time per iteration for Stage II is: 0.01 seconds"
#> [1] "the number of iterations completed for Stage I is: 10"
#> [1] "the number of iterations completed for Stage II is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 6"
trySLHD_63T
#> [,1] [,2] [,3]
#> [1,] 5 1 2
#> [2,] 4 4 4
#> [3,] 2 6 5
#> [4,] 3 5 1
#> [5,] 1 2 3
#> [6,] 6 3 6
phi_p(X=trySLHD_63T,p=15) #Result is improved (phi_p is smaller than above)
#> [1] 0.21651
#Now we try SLHD function with a different optimality criterion
set.seed(1)
trySLHD=SLHD(n=6,k=3,OC="AvgAbsCor",stage2=TRUE)
#> [1] "average CPU time per iteration for Stage I is: 0.01 seconds"
#> [1] "average CPU time per iteration for Stage II is: 0.02 seconds"
#> [1] "the number of iterations completed for Stage I is: 10"
#> [1] "the number of iterations completed for Stage II is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 6"
AvgAbsCor(trySLHD) #The average absolute correlation is about 0.05
#> [1] 0.04761905
Chen et al. (2013) followed the particle swarm optimization (PSO) framework and proposed a version of it, called LaPSO, for constructing maximin distance LHDs. Our function “LaPSO” implements their algorithm. See the following code and example.
set.seed(1)
tryLaPSO_63=LaPSO(n=6,k=3)
#> [1] "average CPU time per iteration is: 0 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 6"
tryLaPSO_63
#> [,1] [,2] [,3]
#> [1,] 3 6 2
#> [2,] 4 3 1
#> [3,] 1 1 3
#> [4,] 6 5 4
#> [5,] 2 4 5
#> [6,] 5 2 6
phi_p(X=tryLaPSO_63,p=15)
#> [1] 0.2047011
#Another example with different n and k
tryLaPSO_92=LaPSO(n=9,k=2)
#> [1] "average CPU time per iteration is: 0.01 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 9"
tryLaPSO_92
#> [,1] [,2]
#> [1,] 1 9
#> [2,] 7 8
#> [3,] 2 2
#> [4,] 9 6
#> [5,] 6 5
#> [6,] 5 1
#> [7,] 4 7
#> [8,] 3 4
#> [9,] 8 3
phi_p(X=tryLaPSO_92,p=15)
#> [1] 0.3361717
#Now we try LaPSO function with a different optimality criterion
set.seed(1)
tryLaPSO=LaPSO(n=6,k=3,OC="AvgAbsCor")
#> [1] "average CPU time per iteration is: 0 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 6"
AvgAbsCor(tryLaPSO) #The average absolute correlation is about 0.09
#> [1] 0.08571429
Liefvendahl and Stocki (2006) proposed a version of the genetic algorithm (GA) that uses an elitist strategy, focusing directly on the global best, for constructing maximin distance LHDs. Our function “GA” implements their algorithm. See the following code and example.
set.seed(1)
tryGA_63=GA(n=6,k=3)
#> [1] "average CPU time per iteration is: 0 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 6"
tryGA_63
#> [,1] [,2] [,3]
#> [1,] 2 5 6
#> [2,] 5 3 1
#> [3,] 3 6 2
#> [4,] 6 4 4
#> [5,] 4 1 5
#> [6,] 1 2 3
phi_p(X=tryGA_63,p=15)
#> [1] 0.203588
#Another example with different n and k.
tryGA_92=GA(n=9,k=2)
#> [1] "average CPU time per iteration is: 0 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 9"
tryGA_92
#> [,1] [,2]
#> [1,] 8 6
#> [2,] 2 1
#> [3,] 3 7
#> [4,] 5 9
#> [5,] 6 5
#> [6,] 1 4
#> [7,] 4 3
#> [8,] 7 2
#> [9,] 9 8
phi_p(X=tryGA_92,p=15)
#> [1] 0.3501968
#Now we try GA function with a different optimality criterion
set.seed(1)
tryGA=GA(n=6,k=3,OC="AvgAbsCor")
#> [1] "average CPU time per iteration is: 0 seconds"
#> [1] "the number of iterations completed is: 10"
#> [1] "the elements in design matrix is scaled to be 1 to 6"
AvgAbsCor(tryGA) #The average absolute correlation is about 0.07
#> [1] 0.06666667
In addition to the above algorithms, Wang, Xiao and Xu (2018) proposed a construction method for maximin \(L_1\) distance LHDs based on good lattice point designs. One big advantage is that their method gives instant solutions, which is very helpful for practitioners who seek immediate results. See the following code and example.
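The starting point of their construction is a good lattice point (GLP) design, whose \((i,j)\) entry is simply \(ij \bmod n\); the method then applies further modifications on top of it. A bare GLP design (our own `glp` helper, shown here for prime \(n\), not the package's code) can be sketched as:

```r
# A plain good lattice point design: entry (i, j) is i*j mod n.
# When n is prime, every column is a permutation of 0, 1, ..., n-1.
glp <- function(n) outer(1:n, 1:(n - 1)) %% n

glp(7)  # a 7 by 6 GLP design
```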
#n by n design when 2n+1 is prime
try=FastMmLHD(8,8)
try
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#> [1,] 0 1 2 3 4 5 6 7
#> [2,] 1 3 5 7 6 4 2 0
#> [3,] 2 5 7 4 1 0 3 6
#> [4,] 3 7 4 0 2 6 5 1
#> [5,] 4 6 1 2 7 3 0 5
#> [6,] 5 4 0 6 3 1 7 2
#> [7,] 6 2 3 5 0 7 1 4
#> [8,] 7 0 6 1 5 2 4 3
phi_p(try) #calculate the phi_p of "try".
#> [1] 0.05203145
#n by n design when n+1 is prime
try2=FastMmLHD(12,12)
try2
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
#> [1,] 6 0 8 2 9 11 3 4 5 10 7 1
#> [2,] 3 9 0 10 4 8 2 7 1 11 5 6
#> [3,] 11 5 7 0 1 4 8 6 2 9 3 10
#> [4,] 2 4 9 11 7 0 10 5 6 8 1 3
#> [5,] 7 10 2 1 11 3 5 8 9 6 0 4
#> [6,] 8 1 5 9 6 7 0 3 10 4 2 11
#> [7,] 1 8 11 3 0 10 6 9 7 2 4 5
#> [8,] 10 7 4 8 5 9 11 1 3 0 6 2
#> [9,] 4 2 3 5 10 6 7 11 0 1 8 9
#> [10,] 5 11 10 6 8 2 1 0 4 3 9 7
#> [11,] 9 3 6 7 2 1 4 10 8 5 11 0
#> [12,] 0 6 1 4 3 5 9 2 11 7 10 8
phi_p(try2) #calculate the phi_p of "try2".
#> [1] 0.02575383
#n by n-1 design when n is prime
try3=FastMmLHD(7,6)
try3
#> [,1] [,2] [,3] [,4] [,5] [,6]
#> [1,] 3 0 1 6 2 4
#> [2,] 1 6 2 4 3 0
#> [3,] 0 1 6 2 4 3
#> [4,] 2 4 3 0 1 6
#> [5,] 4 3 0 1 6 2
#> [6,] 6 2 4 3 0 1
#> [7,] 5 5 5 5 5 5
phi_p(try3) #calculate the phi_p of "try3".
#> [1] 0.07656459
#General cases
try4=FastMmLHD(24,8)
try4
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#> [1,] 11 7 16 12 19 0 4 23
#> [2,] 22 17 17 22 6 1 6 1
#> [3,] 8 20 3 15 15 3 8 20
#> [4,] 5 10 10 5 10 5 10 5
#> [5,] 19 0 23 4 11 7 12 16
#> [6,] 14 9 9 14 14 9 14 9
#> [7,] 0 19 4 23 7 11 16 12
#> [8,] 13 18 18 13 18 13 18 13
#> [9,] 20 8 15 3 3 15 20 8
#> [10,] 6 1 1 6 22 17 22 17
#> [11,] 7 11 12 16 0 19 23 4
#> [12,] 21 21 21 21 21 21 21 21
#> [13,] 12 16 7 11 4 23 19 0
#> [14,] 1 6 6 1 17 22 17 22
#> [15,] 15 3 20 8 8 20 15 3
#> [16,] 18 13 13 18 13 18 13 18
#> [17,] 4 23 0 19 12 16 11 7
#> [18,] 9 14 14 9 9 14 9 14
#> [19,] 23 4 19 0 16 12 7 11
#> [20,] 10 5 5 10 5 10 5 10
#> [21,] 3 15 8 20 20 8 3 15
#> [22,] 17 22 22 17 1 6 1 6
#> [23,] 16 12 11 7 23 4 0 19
#> [24,] 2 2 2 2 2 2 2 2
phi_p(try4) #calculate the phi_p of "try4".
#> [1] 0.03314641
Besides maximin distance LHDs, orthogonal Latin hypercube designs (OLHDs) are popular as well. Ye (1998) proposed a construction method for generating OLHDs. Our function “OLHD.Y1998” implements this method. See the following code and example.
#create an orthogonal LHD with m=4. So n=2^m+1=17 and k=2*m-2=6
tryOLHD=OLHD.Y1998(m=4)
tryOLHD
#> [,1] [,2] [,3] [,4] [,5] [,6]
#> [1,] 5 -4 -3 -6 8 2
#> [2,] 4 5 -7 -8 -6 -1
#> [3,] 7 -3 4 -1 -2 8
#> [4,] 3 7 5 -2 1 -6
#> [5,] 2 -1 -6 3 7 -5
#> [6,] 1 2 -8 7 -3 4
#> [7,] 8 -6 1 4 -5 -7
#> [8,] 6 8 2 5 4 3
#> [9,] 0 0 0 0 0 0
#> [10,] -5 4 3 6 -8 -2
#> [11,] -4 -5 7 8 6 1
#> [12,] -7 3 -4 1 2 -8
#> [13,] -3 -7 -5 2 -1 6
#> [14,] -2 1 6 -3 -7 5
#> [15,] -1 -2 8 -7 3 -4
#> [16,] -8 6 -1 -4 5 7
#> [17,] -6 -8 -2 -5 -4 -3
MaxAbsCor(tryOLHD) #zero columnwise correlations
#> [1] 0
Cioppa and Lucas (2007) extended Ye’s idea, and their method is able to accommodate more factors with the same run size. Our function “OLHD.C2007” implements this method. See the following code and example.
#create an orthogonal LHD with m=4. So n=2^m+1=17 and k=4+3=7
tryOLHD=OLHD.C2007(m=4)
tryOLHD
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7]
#> [1,] 1 -2 -4 -8 3 7 5
#> [2,] 2 1 -3 -7 -4 -8 6
#> [3,] 3 -4 2 -6 -1 5 -7
#> [4,] 4 3 1 -5 2 -6 -8
#> [5,] 5 -6 -8 4 7 -3 -1
#> [6,] 6 5 -7 3 -8 4 -2
#> [7,] 7 -8 6 2 -5 -1 3
#> [8,] 8 7 5 1 6 2 4
#> [9,] 0 0 0 0 0 0 0
#> [10,] -1 2 4 8 -3 -7 -5
#> [11,] -2 -1 3 7 4 8 -6
#> [12,] -3 4 -2 6 1 -5 7
#> [13,] -4 -3 -1 5 -2 6 8
#> [14,] -5 6 8 -4 -7 3 1
#> [15,] -6 -5 7 -3 8 -4 2
#> [16,] -7 8 -6 -2 5 1 -3
#> [17,] -8 -7 -5 -1 -6 -2 -4
MaxAbsCor(tryOLHD) #zero columnwise correlations
#> [1] 0
Lin et al. (2009) proposed coupling OLHDs with OAs to generate OLHDs with a large number of factors. Our function “OLHD.L2009” implements this method. See the following code and example.
#create a 5 by 2 OLHD
OLHD=OLHD.C2007(m=2)
#create an OA(25,6,5,2)
OA=matrix(c(2,2,2,2,2,1,2,1,5,4,3,5,3,2,1,5,4,5,1,5,4,3,2,5,
4,1,3,5,2,3,1,2,3,4,5,2,1,3,5,2,4,3,1,1,1,1,1,1,4,3,2,1,5,5,
5,5,5,5,5,1,4,4,4,4,4,1,3,1,4,2,5,4,3,3,3,3,3,1,3,5,2,4,1,3,
3,4,5,1,2,2,5,4,3,2,1,5,2,3,4,5,1,2,2,5,3,1,4,4,1,4,2,5,3,4,
4,2,5,3,1,4,2,4,1,3,5,3,5,3,1,4,2,4,5,2,4,1,3,3,5,1,2,3,4,2,
4,5,1,2,3,2),ncol=6,nrow=25,byrow=T)
#Construct a 25 by 12 OLHD
tryOLHD=OLHD.L2009(OLHD,OA)
tryOLHD
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10] [,11] [,12]
#> [1,] 12 -8 12 -8 7 -9 6 -4 6 -4 -9 -7
#> [2,] 7 -9 -7 9 -10 -2 -9 -7 9 7 -5 -1
#> [3,] 10 2 -9 -7 -11 3 5 1 -7 9 -3 -11
#> [4,] -9 -7 -1 5 -8 -12 -7 9 2 -10 -4 -6
#> [5,] 4 6 -10 -2 2 -10 -8 -12 -5 -1 1 -5
#> [6,] 11 -3 -5 -1 8 12 3 11 10 2 4 6
#> [7,] 1 -5 8 12 -1 5 -2 10 4 6 2 -10
#> [8,] 6 -4 6 -4 6 -4 -12 8 -12 8 -12 8
#> [9,] -1 5 7 -9 -12 8 2 -10 -9 -7 -6 4
#> [10,] -12 8 -12 8 3 11 -6 4 -6 4 -11 3
#> [11,] -6 4 -6 4 4 6 12 -8 12 -8 -8 -12
#> [12,] 5 1 9 7 -7 9 -10 -2 7 -9 9 7
#> [13,] 0 0 0 0 5 1 0 0 0 0 -10 -2
#> [14,] -10 -2 -3 -11 1 -5 -5 -1 11 -3 -2 10
#> [15,] -5 -1 3 11 12 -8 10 2 -11 3 6 -4
#> [16,] -7 9 10 2 -9 -7 9 7 5 1 -7 9
#> [17,] 2 -10 -11 3 11 -3 1 -5 -3 -11 3 11
#> [18,] -8 -12 5 1 -6 4 -4 -6 -10 -2 12 -8
#> [19,] -4 -6 -8 -12 -5 -1 8 12 -4 -6 10 2
#> [20,] 9 7 -2 10 -4 -6 7 -9 -1 5 8 12
#> [21,] -3 -11 1 -5 -2 10 11 -3 -2 10 -1 5
#> [22,] -2 10 -4 -6 -3 -11 -1 5 8 12 11 -3
#> [23,] 8 12 4 6 0 0 4 6 -8 -12 0 0
#> [24,] 3 11 2 -10 9 7 -11 3 1 -5 7 -9
#> [25,] -11 3 11 -3 10 2 -3 -11 3 11 5 1
MaxAbsCor(tryOLHD) #zero columnwise correlations
#> [1] 0
Sun et al. (2010) proposed constructing OLHDs with flexible run sizes via matrix manipulations. Our function “OLHD.S2010” implements this method. See the following code and example.
#create an orthogonal LHD with C=3, r=3, type="odd".
#So n=3*2^4+1=49 and k=2^3=8
tryOLHD1=OLHD.S2010(C=3,r=3,type="odd")
tryOLHD1
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#> [1,] 1 2 3 4 5 6 7 8
#> [2,] 2 -1 -4 3 6 -5 -8 7
#> [3,] 3 4 -1 -2 -7 -8 5 6
#> [4,] 4 -3 2 -1 -8 7 -6 5
#> [5,] 5 6 7 8 -1 -2 -3 -4
#> [6,] 6 -5 -8 7 -2 1 4 -3
#> [7,] 7 8 -5 -6 3 4 -1 -2
#> [8,] 8 -7 6 -5 4 -3 2 -1
#> [9,] 9 10 11 12 13 14 15 16
#> [10,] 10 -9 -12 11 14 -13 -16 15
#> [11,] 11 12 -9 -10 -15 -16 13 14
#> [12,] 12 -11 10 -9 -16 15 -14 13
#> [13,] 13 14 15 16 -9 -10 -11 -12
#> [14,] 14 -13 -16 15 -10 9 12 -11
#> [15,] 15 16 -13 -14 11 12 -9 -10
#> [16,] 16 -15 14 -13 12 -11 10 -9
#> [17,] 17 18 19 20 21 22 23 24
#> [18,] 18 -17 -20 19 22 -21 -24 23
#> [19,] 19 20 -17 -18 -23 -24 21 22
#> [20,] 20 -19 18 -17 -24 23 -22 21
#> [21,] 21 22 23 24 -17 -18 -19 -20
#> [22,] 22 -21 -24 23 -18 17 20 -19
#> [23,] 23 24 -21 -22 19 20 -17 -18
#> [24,] 24 -23 22 -21 20 -19 18 -17
#> [25,] 0 0 0 0 0 0 0 0
#> [26,] -1 -2 -3 -4 -5 -6 -7 -8
#> [27,] -2 1 4 -3 -6 5 8 -7
#> [28,] -3 -4 1 2 7 8 -5 -6
#> [29,] -4 3 -2 1 8 -7 6 -5
#> [30,] -5 -6 -7 -8 1 2 3 4
#> [31,] -6 5 8 -7 2 -1 -4 3
#> [32,] -7 -8 5 6 -3 -4 1 2
#> [33,] -8 7 -6 5 -4 3 -2 1
#> [34,] -9 -10 -11 -12 -13 -14 -15 -16
#> [35,] -10 9 12 -11 -14 13 16 -15
#> [36,] -11 -12 9 10 15 16 -13 -14
#> [37,] -12 11 -10 9 16 -15 14 -13
#> [38,] -13 -14 -15 -16 9 10 11 12
#> [39,] -14 13 16 -15 10 -9 -12 11
#> [40,] -15 -16 13 14 -11 -12 9 10
#> [41,] -16 15 -14 13 -12 11 -10 9
#> [42,] -17 -18 -19 -20 -21 -22 -23 -24
#> [43,] -18 17 20 -19 -22 21 24 -23
#> [44,] -19 -20 17 18 23 24 -21 -22
#> [45,] -20 19 -18 17 24 -23 22 -21
#> [46,] -21 -22 -23 -24 17 18 19 20
#> [47,] -22 21 24 -23 18 -17 -20 19
#> [48,] -23 -24 21 22 -19 -20 17 18
#> [49,] -24 23 -22 21 -20 19 -18 17
#create an orthogonal LHD with C=3, r=3, type="even".
#So n=3*2^4=48 and k=2^3=8
tryOLHD2=OLHD.S2010(C=3,r=3,type="even")
tryOLHD2
#> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
#> [1,] 0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5
#> [2,] 1.5 -0.5 -3.5 2.5 5.5 -4.5 -7.5 6.5
#> [3,] 2.5 3.5 -0.5 -1.5 -6.5 -7.5 4.5 5.5
#> [4,] 3.5 -2.5 1.5 -0.5 -7.5 6.5 -5.5 4.5
#> [5,] 4.5 5.5 6.5 7.5 -0.5 -1.5 -2.5 -3.5
#> [6,] 5.5 -4.5 -7.5 6.5 -1.5 0.5 3.5 -2.5
#> [7,] 6.5 7.5 -4.5 -5.5 2.5 3.5 -0.5 -1.5
#> [8,] 7.5 -6.5 5.5 -4.5 3.5 -2.5 1.5 -0.5
#> [9,] 8.5 9.5 10.5 11.5 12.5 13.5 14.5 15.5
#> [10,] 9.5 -8.5 -11.5 10.5 13.5 -12.5 -15.5 14.5
#> [11,] 10.5 11.5 -8.5 -9.5 -14.5 -15.5 12.5 13.5
#> [12,] 11.5 -10.5 9.5 -8.5 -15.5 14.5 -13.5 12.5
#> [13,] 12.5 13.5 14.5 15.5 -8.5 -9.5 -10.5 -11.5
#> [14,] 13.5 -12.5 -15.5 14.5 -9.5 8.5 11.5 -10.5
#> [15,] 14.5 15.5 -12.5 -13.5 10.5 11.5 -8.5 -9.5
#> [16,] 15.5 -14.5 13.5 -12.5 11.5 -10.5 9.5 -8.5
#> [17,] 16.5 17.5 18.5 19.5 20.5 21.5 22.5 23.5
#> [18,] 17.5 -16.5 -19.5 18.5 21.5 -20.5 -23.5 22.5
#> [19,] 18.5 19.5 -16.5 -17.5 -22.5 -23.5 20.5 21.5
#> [20,] 19.5 -18.5 17.5 -16.5 -23.5 22.5 -21.5 20.5
#> [21,] 20.5 21.5 22.5 23.5 -16.5 -17.5 -18.5 -19.5
#> [22,] 21.5 -20.5 -23.5 22.5 -17.5 16.5 19.5 -18.5
#> [23,] 22.5 23.5 -20.5 -21.5 18.5 19.5 -16.5 -17.5
#> [24,] 23.5 -22.5 21.5 -20.5 19.5 -18.5 17.5 -16.5
#> [25,] -0.5 -1.5 -2.5 -3.5 -4.5 -5.5 -6.5 -7.5
#> [26,] -1.5 0.5 3.5 -2.5 -5.5 4.5 7.5 -6.5
#> [27,] -2.5 -3.5 0.5 1.5 6.5 7.5 -4.5 -5.5
#> [28,] -3.5 2.5 -1.5 0.5 7.5 -6.5 5.5 -4.5
#> [29,] -4.5 -5.5 -6.5 -7.5 0.5 1.5 2.5 3.5
#> [30,] -5.5 4.5 7.5 -6.5 1.5 -0.5 -3.5 2.5
#> [31,] -6.5 -7.5 4.5 5.5 -2.5 -3.5 0.5 1.5
#> [32,] -7.5 6.5 -5.5 4.5 -3.5 2.5 -1.5 0.5
#> [33,] -8.5 -9.5 -10.5 -11.5 -12.5 -13.5 -14.5 -15.5
#> [34,] -9.5 8.5 11.5 -10.5 -13.5 12.5 15.5 -14.5
#> [35,] -10.5 -11.5 8.5 9.5 14.5 15.5 -12.5 -13.5
#> [36,] -11.5 10.5 -9.5 8.5 15.5 -14.5 13.5 -12.5
#> [37,] -12.5 -13.5 -14.5 -15.5 8.5 9.5 10.5 11.5
#> [38,] -13.5 12.5 15.5 -14.5 9.5 -8.5 -11.5 10.5
#> [39,] -14.5 -15.5 12.5 13.5 -10.5 -11.5 8.5 9.5
#> [40,] -15.5 14.5 -13.5 12.5 -11.5 10.5 -9.5 8.5
#> [41,] -16.5 -17.5 -18.5 -19.5 -20.5 -21.5 -22.5 -23.5
#> [42,] -17.5 16.5 19.5 -18.5 -21.5 20.5 23.5 -22.5
#> [43,] -18.5 -19.5 16.5 17.5 22.5 23.5 -20.5 -21.5
#> [44,] -19.5 18.5 -17.5 16.5 23.5 -22.5 21.5 -20.5
#> [45,] -20.5 -21.5 -22.5 -23.5 16.5 17.5 18.5 19.5
#> [46,] -21.5 20.5 23.5 -22.5 17.5 -16.5 -19.5 18.5
#> [47,] -22.5 -23.5 20.5 21.5 -18.5 -19.5 16.5 17.5
#> [48,] -23.5 22.5 -21.5 20.5 -19.5 18.5 -17.5 16.5
MaxAbsCor(tryOLHD1) #zero columnwise correlations
#> [1] 0
MaxAbsCor(tryOLHD2) #zero columnwise correlations
#> [1] 0
Butler (2001) proposed constructing OLHDs using the Williams transformation. Our function “OLHD.B2001” implements this method. See the following code and example.
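The Williams transformation at the heart of this construction maps \(0, 1, \ldots, n-1\) to \(0, 2, 4, \ldots\) and back down through the odd values, which tends to improve the distance properties of linear permutations. A one-line sketch with our own helper name `williams`:

```r
# Williams transformation: 2x when 2x < n, otherwise 2(n - x) - 1
williams <- function(x, n) ifelse(2 * x < n, 2 * x, 2 * (n - x) - 1)

williams(0:10, n = 11)  # 0 2 4 6 8 10 9 7 5 3 1
```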
#create an orthogonal LHD with n=11 and k=5
OLHD.B2001(n=11,k=5)
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 3 4 2 1 5
#> [2,] 11 2 3 7 8
#> [3,] 4 1 7 10 3
#> [4,] 5 3 11 4 10
#> [5,] 10 5 8 3 1
#> [6,] 2 7 4 9 11
#> [7,] 7 9 1 8 2
#> [8,] 8 11 5 2 9
#> [9,] 1 10 9 5 4
#> [10,] 9 8 10 11 7
#> [11,] 6 6 6 6 6
#create an orthogonal LHD with n=7 and k=6
OLHD.B2001(n=7,k=6)
#> [,1] [,2] [,3] [,4] [,5] [,6]
#> [1,] 2 1 3 7 3 5
#> [2,] 1 5 6 2 5 6
#> [3,] 3 6 1 5 7 2
#> [4,] 5 2 7 4 6 3
#> [5,] 7 3 2 3 4 7
#> [6,] 6 7 5 6 2 4
#> [7,] 4 4 4 1 1 1