Int. J. Anal. Appl. (2023), 21:51
https://doi.org/10.28924/2291-8639-21-2023-51

New Computer Experiment Designs Using Continuum Random Cluster Point Process

Hichem Elmossaoui*, Nadia Oukid

Department of Mathematics, Faculty of Science, University Saad Dahleb, Blida, Algeria

*Correspondence: elmossaoui.hichem@yahoo.com

Received: Apr. 24, 2023.

2020 Mathematics Subject Classification. 05B30, 60G55, 62K99.

Key words and phrases. design of experiments; computer experiment designs; point process; continuum random cluster model; connected component process; Markov point processes; nearest-neighbour Markov point process; Strauss process; Markov Chain Monte Carlo (MCMC).

Β© 2023 the author(s). ISSN: 2291-8639

ABSTRACT. In this paper, we propose a new approach to building computer experiment designs using the continuum random cluster point process, also referred to as the connected component Markov point process. Designs are generated with the Markov Chain Monte Carlo (MCMC) method and the Random Walk Metropolis-Hastings (RWMH) algorithm, and the construction can easily be scaled to meet various objectives. We conduct a comprehensive study of the convergence of the Markov chain and compare our approach with existing computer experiment designs. Overall, the approach offers a novel and flexible way of constructing computer experiment designs.

1. INTRODUCTION

The development of modelling techniques, boosted by increased computing power, has led to simulators of daunting complexity. The main difficulties stem from the high computational cost of the simulator and from the size of the problem to be dealt with. For some applications, direct use of the simulator may be impossible even after the size of the system has been significantly reduced. The alternative is then to replace the simulator with one or more simpler functions.
These functions are generally relatively simple and can be obtained by approximation or interpolation from computer experiment designs. In order to explore the parameter space more thoroughly and obtain information throughout the entire experimental region, we propose a method for constructing computer experiment designs whose points are uniformly distributed in the unit hypercube. To this end, we employ the Continuum Random Cluster Process (CRCP) ([1], [2], [3], [4]) to simulate the n computer experiments that make up the design. Although the process is not Markovian in the sense of Ripley-Kelly [5], it is Markovian with respect to a new neighbourhood relationship that depends on the configuration x (as defined in Definition 2.1) and on the nearest-neighbour connection relation ~x (as defined in Definition 2.4). Franco (2008) [6] introduced computer experiment designs based on Strauss point processes, which incorporate the concept of interaction between pairs of points. Elmossaoui et al. (2020) ([7], [8]) proposed computer experiment designs using marked Strauss point processes that can achieve multiple objectives simultaneously: the first objective is related to the distribution of the points, while the second concerns the specification of their marks. In contrast to these approaches, CRCPs allow the creation of models with more or less regular spatial distributions without constraints on the parameters. To generate the designs proposed in this work, we use simulation techniques via MCMC methods and the Metropolis-Hastings algorithm ([9], [10]). There are several sub-types of Metropolis-Hastings algorithms, depending on the chosen transition density q.
In this regard, we develop the algorithm so as to generate a Markov chain of random-walk type, in which the density q is centred on the current value of the chain and is symmetric. The proposed value takes the form x_{t+1} = x_t + Ξ΅, where Ξ΅ is a random perturbation that follows the density q and is independent of x_t.

This paper is organised as follows: Section 2 gathers some general definitions and notations. Section 3 concerns the construction of new computer experiment designs based on the CRCP, by means of the MCMC method and the Metropolis-Hastings algorithm. Section 4 studies the convergence of the algorithm. Finally, in Section 5 we compare our results with existing designs.

2. PRELIMINARIES AND GENERAL DEFINITIONS

Let (Ξ©, ℬ, P) be a probability space that models the random aspects of the experiments. Let Ο‡ be a nonempty set equipped with a Euclidean distance d, making it a complete and separable metric space. In most cases, Ο‡ will be [0,1]^p (a subset of ℝ^p), where p is the number of continuous factors of interest (p β‰₯ 1). We write ΞΌ for the Lebesgue measure on this space, considered with its Borel Οƒ-algebra ℬ.

Definition 2.1. A configuration x of a point process X on Ο‡ is any locally finite collection of points of Ο‡, x = (x_1, x_2, …, x_n), with x_i ∈ (Ο‡, d) and i ∈ β„•. In other words, it is a subset x βŠ‚ Ο‡ such that x ∩ B is finite for every bounded Borel set B of ℬ. Let N_lf denote the set of locally finite point configurations; x, y, … are configurations on Ο‡, and x_i, y_i, … are points of these configurations.

Definition 2.2.
A point process on Ο‡ is a mapping X from a probability space (Ξ©, ℬ, P) into the set N_lf of locally finite point configurations, such that for every bounded Borel set B, the number N(B) = N_X(B) of points of X falling in B is a finite discrete random variable. In this definition, Ο‡ can be replaced with a general complete metric space. However, it is important to note that a realization of a point process is at most countable and has no accumulation points. If Ο‡ is bounded, then N_X(Ο‡) is almost surely finite and the point process is said to be finite. We consider here only simple point processes, which do not allow repeated points; in that case a realization x of the point process coincides with a subset of Ο‡.

Definition 2.3. Let ∼ be a symmetric and reflexive binary relation on Ο‡. Two points x_i and x_i' are said to be neighbours if x_i ∼ x_i'. The neighbourhood of y βŠ‚ x is given by:

βˆ‚(y|x) = { x_i ∈ x, x_i βˆ‰ y : βˆƒ x_i' ∈ y such that x_i' ∼ x_i }

For example, x_i ∼ x_i' if d(x_i, x_i') ≀ r is the r-neighbourhood relation on (Ο‡, d), r > 0 being a fixed radius.

Definition 2.4. Let B(x) = { ⋃_{x_i ∈ x} B(x_i, r/2) } ∩ Ο‡ be the union of the balls of radius r/2 centred at the points x_i of x, restricted to Ο‡. Two points x_i and x_i' of x are said to be connected for x if x_i and x_i' lie in the same connected component of B(x). This relation is denoted x_i ~x x_i'.

Definition 2.5. Let X be a point process with density Ο€ with respect to a Poisson point process with law Ο€_Ξ½(βˆ™) and intensity Ξ½(βˆ™). The process X is a nearest-neighbour Markov point process with respect to the relation ~x if, for every configuration x ∈ N_lf with Ο€(x) > 0, the following hold:

(i) Ο€(y) > 0 for every y βŠ‚ x.
(ii) for every x_i ∈ Ο‡, the ratio

Ξ»(x_i, x) = Ο€(x βˆͺ {x_i}) / Ο€(x)

depends only on x_i, on βˆ‚({x_i} | x βˆͺ {x_i}) ∩ x, and on the two relations ~x and ~_{x βˆͺ {x_i}}. The quotient Ξ»(x_i, x) is the Papangelou conditional intensity.

3. COMPUTER EXPERIMENT DESIGN USING NEAREST-NEIGHBOUR MARKOV POINT PROCESS

As in ([7], [8]), we consider each experiment to be a point, or particle, defined on [0,1]^p, and each configuration x to be an experiment design. We thus simulate n experiments to implement a continuum random cluster process. It is worth noting that the continuum random cluster process has interaction potentials; these interactions are defined through the neighbourhood properties of the nearest-neighbour Markov relation [3]. The interaction potential used here is the connectedness interaction. Such object processes are important in modelling repulsive phenomena. The probability density of the process is given by:

Ο€(x) = k Ξ²^{n(x)} Ξ³^{h(x)}     (3.1)

where k is a positive normalization constant that makes Ο€ a density, Ξ² is a positive scaling parameter, n(x) denotes the number of points of the configuration x, Ξ³ is a repulsion parameter with 0 < Ξ³ < 1, and h(x) = βˆ’a(x), where a(x) is the area of the union of balls B(x), or h(x) = βˆ’c(x), where c(x) is the number of connected components of B(x). In this article we study the connected component process, with probability density:

Ο€(x) = k Ξ²^{n(x)} Ξ³^{βˆ’c(x)}     (3.2)

We note that the connected component process is not a Markov process in the sense of Ripley and Kelly [5]. Indeed, two points of Ο‡ can be neighbours for the relation ~x while being arbitrarily far from one another in Ο‡ with respect to the Euclidean distance.
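As an illustration, the count c(x) entering the density Ο€(x) = k Ξ²^{n(x)} Ξ³^{βˆ’c(x)} can be computed with a union-find pass over the pairwise-distance graph: two balls of radius r/2 overlap exactly when the distance between their centres is at most r. This is a minimal sketch; the function names are ours.

```python
import math

def connected_components(points, r):
    """c(x): number of connected components of the union of balls B(x).
    Two balls of radius r/2 overlap exactly when the distance between
    their centres is at most r, so we run a union-find over those pairs."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= r:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj
    return len({find(i) for i in range(n)})

def unnormalised_density(points, beta, gamma, r):
    """pi(x) up to the constant k: beta^n(x) * gamma^(-c(x))."""
    return beta ** len(points) * gamma ** (-connected_components(points, r))
```

In the Metropolis-Hastings acceptance ratio used later, only the difference c(x) βˆ’ c(y) matters, so the normalization constant k never needs to be evaluated.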
Connectedness through connected components links two points whenever there exists a chain of overlapping balls of B(x), centred at points x_i of x, joining them. The Papangelou conditional intensity of the process with connectedness interaction is given by [11]:

Ξ»(x_i, x) = Ξ² Ξ³^{c(x) βˆ’ c(x βˆͺ {x_i})}

Nevertheless, Ξ»(x_i, x) can depend on a point x_i' ∈ x arbitrarily far from x_i in Ο‡: one can select a configuration y such that, for x = y βˆͺ {x_i'}, c(y) = 2 while c(x) = c(x βˆͺ {x_i}) = 1. Hence, for every r > 0, the process is not Markovian in the sense of Ripley-Kelly for the usual r-neighbourhood relation. However, if x_i' ∈ x is not connected to x_i in B(x βˆͺ {x_i}), then x_i' does not contribute to the difference c(x) βˆ’ c(x βˆͺ {x_i}), and Ξ»(x_i, x) does not depend on x_i'; in this case the process is a nearest-neighbour Markov point process for the nearest-neighbour relation ~x.

3.1. Generating computer experiment designs via the connected component process using the MCMC method and the RWMH algorithm

The MCMC method allows the simulation of a density Ο€ by means of an ergodic Markov chain {X_0, X_1, …, X_{N_MCMC}} admitting Ο€ as its stationary distribution. To build such a chain we use the RWMH algorithm. The basic idea is to construct a transition kernel P_MH that is Ο€-reversible; P_MH is then Ο€-invariant. This is done in two steps:

β€’ Change proposal: we first propose a change x β†’ y according to a density q(x, βˆ™).
β€’ Change acceptance: the change is then accepted with probability a(x, y), where a : Ξ© Γ— Ξ© β†’ [0,1].

The RWMH algorithm uses an instrumental Markov chain whose transition density depends on the current state; more precisely, it satisfies q(x, y) = q(y βˆ’ x).
In addition, we can choose an instrumental density N(x, σ²). Given a current state x, the algorithm generates a candidate state y from a normal distribution centred at x with variance σ². If the new state is accepted, the next candidate is generated according to N(y, σ²); otherwise x is kept as the current state and another candidate y is proposed according to N(x, σ²). The algorithm thus defined generates a Markov chain whose transition probabilities are given by [12]:

P_MH(x, y) = a(x, y) q(x, y) + 1(x = y) [ 1 βˆ’ ∫_Ξ© a(x, z) q(x, z) dz ]

P_MH is Ο€-reversible if and only if the following balance equation is satisfied:

βˆ€ x, y ∈ Ξ© : Ο€(x) q(x, y) a(x, y) = Ο€(y) q(y, x) a(y, x)

For a pair (x, y), the Metropolis-Hastings acceptance probability is defined as:

a(x, y) = min ( 1, [Ο€(y) q(y, x)] / [Ο€(x) q(x, y)] )

The RWMH algorithm with proposal N(x, σ²) has an additional property conferred by the symmetry of the instrumental density, namely q(x, y) = q(y, x). This simplifies the calculation of a:

a(x, y) = min ( 1, Ο€(y)/Ο€(x) ) = min ( 1, [k Ξ²^{n(y)} Ξ³^{βˆ’c(y)}] / [k Ξ²^{n(x)} Ξ³^{βˆ’c(x)}] ) = min ( 1, Ξ³^{c(x) βˆ’ c(y)} )

3.2. Algorithm for the construction of the proposed experiment designs

The computer experiment designs proposed in this work, referred to as connected component designs (CCD), are generated through the algorithm presented below, a variant of the random walk Metropolis-Hastings algorithm.

Algorithm
β€’ Initialisation
  – Choose an initial experiment design x = (x_1, x_2, …, x_n) according to a given probability distribution, for instance the normal distribution.
  – Set X_0 = x.
β€’ For N = 1, 2, …
, N_MCMC
  For k = 1, …, n
  – Choose an index s uniformly at random in {1, …, n}.
  – Simulate an experiment y_s according to the proposal distribution q ~ N(x_s, σ²), and take as new configuration:
    y = (x_1, x_2, …, x_{sβˆ’1}, y_s, x_{s+1}, …, x_n)
  – Compute the acceptance probability:
    a(x, y) = min ( 1, Ξ³^{c(x) βˆ’ c(y)} )
  – Set x = y with probability a, and keep x with probability 1 βˆ’ a.
  End for k
  Set X_N = x.
End for N

For N_MCMC = 1000, Figure 1 shows the convergence towards a configuration satisfying the connected component property, starting from an initial configuration of 30 points drawn from a normal distribution with mean 0 and variance 1.

Figure 1. Left, an initial configuration of 30 points with c(x) = 6. Right, a final configuration with c(x) = 30 (Ξ³ = 0.01, r = 0.19 and Οƒ = 0.05).

3.3. Influence of the parameters

The figure below shows the influence of the parameter r on the final distribution. The choice of the radius proves to be important: a radius that is too small generates a distribution without interaction, but with numerous gaps, whereas a radius that is too large leads to a distribution with clusters.

Figure 2. Left, a configuration of 30 points for Ξ³ = 0.01 and r = 0.1. Right, a configuration for Ξ³ = 0.01 and r = 0.3.

As with the interaction radius, the attraction parameter Ξ³ must be set adequately. The figure below shows that it is easier to generate a distribution that meets the space-filling criteria with a strong attraction parameter.

Figure 3. Left, a configuration of 30 points for Ξ³ = 0.01 and r = 0.19. Right, a configuration for Ξ³ = 0.1 and r = 0.19.

4. STUDY OF CONVERGENCE

In this section, we prove the convergence of the sequence of computer experiment designs {X_0, X_1, …, X_{N_MCMC}} generated by the construction algorithm introduced above towards the invariant distribution Ο€ of the connected component process.
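Before turning to the proof, the construction algorithm of Section 3.2 can be sketched in a few lines of code. This is an illustrative implementation under our own conventions: the function names are ours, the initial design is drawn uniformly in [0,1]^p rather than from a normal distribution, and the Gaussian proposals are clipped to the unit hypercube.

```python
import math
import random

def count_components(x, r):
    # c(x): connected components of the union of balls, via depth-first
    # search on the graph linking points at distance <= r.
    n, seen, comps = len(x), set(), 0
    for i in range(n):
        if i in seen:
            continue
        comps += 1
        stack = [i]
        while stack:
            a = stack.pop()
            if a in seen:
                continue
            seen.add(a)
            stack.extend(b for b in range(n)
                         if b not in seen and math.dist(x[a], x[b]) <= r)
    return comps

def ccd_design(n, p, gamma, r, sigma, n_mcmc, seed=0):
    """Random Walk Metropolis-Hastings sweeps for the connected component process."""
    rng = random.Random(seed)
    x = [tuple(rng.random() for _ in range(p)) for _ in range(n)]  # initial design
    c_x = count_components(x, r)
    for _ in range(n_mcmc):
        for _ in range(n):
            s = rng.randrange(n)                       # index chosen uniformly
            y_s = tuple(min(1.0, max(0.0, u + rng.gauss(0.0, sigma)))
                        for u in x[s])                 # proposal ~ N(x_s, sigma^2)
            y = x[:s] + [y_s] + x[s + 1:]
            c_y = count_components(y, r)
            if rng.random() < min(1.0, gamma ** (c_x - c_y)):  # a(x, y)
                x, c_x = y, c_y
    return x
```

With 0 < Ξ³ < 1, the ratio Ξ³^{c(x) βˆ’ c(y)} exceeds 1 whenever a move increases the number of connected components, which is what drives the points apart as in Figure 1.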
This sequence is the realization of a Markov chain with transition kernel:

P(x, y) = P_MH^n(x, y)

Moreover, it is important to know whether the distribution of the last generated design X_{N_MCMC} is close to the distribution Ο€. To this end, let us state the main results of interest here.

Proposition 4.1. On a finite space, the transition kernel P = P_MH^n of the Markov chain (X_N)_{N β‰₯ 0} obtained from the construction algorithm is irreducible and positive recurrent. The distribution Ο€ is the unique stationary distribution of P; P is aperiodic and a primitive kernel.

Proof of Proposition 4.1. Let us prove that the transition mechanism P_MH satisfies the three following conditions with respect to the distribution Ο€ of the connected component process defined in (3.2): Ο€-reversibility, Ο€-stationarity and Ο€-irreducibility.

The chain is reversible with respect to the target distribution Ο€(βˆ™) if its transition kernel satisfies:

βˆ€ x, y ∈ Ξ©, B ∈ ℬ: ∫_Ξ© 1_B(x, y) Ο€(x) P_MH(x, y) dx = ∫_Ξ© 1_B(x, y) Ο€(y) P_MH(y, x) dy

Let x, y ∈ Ξ© and B ∈ ℬ. Then we have:

∫_Ξ© 1_B(x, y) Ο€(x) P_MH(x, y) dx
= ∫_Ξ© 1_B(x, y) Ο€(x) a(x, y) q(x, y) dx + ∫_Ξ© 1_B(x, y) Ο€(x) [ 1 βˆ’ ∫_Ξ© a(x, z) q(x, z) dz ] Ξ΄_x(y) dx
= ∫_Ξ© 1_B(x, y) Ο€(x) a(x, y) q(x, y) dx + ∫_Ξ© 1_B(x, x) Ο€(x) [ 1 βˆ’ ∫_Ξ© a(x, z) q(x, z) dz ] dx

By construction, a and q satisfy:

Ο€(x) a(x, y) q(x, y) = k Ξ²^{n(x)} Ξ³^{βˆ’c(x)} min{1, Ξ³^{c(x) βˆ’ c(y)}} q(x, y) = k min{ Ξ²^{n(x)} Ξ³^{βˆ’c(x)}, Ξ²^{n(x)} Ξ³^{βˆ’c(y)} } q(x, y)

And since n(x) = n(y) and q(x, y) = q(y, x).
Then we have:

Ο€(x) a(x, y) q(x, y) = k min{ Ξ²^{n(x)} Ξ³^{βˆ’c(x)}, Ξ²^{n(y)} Ξ³^{βˆ’c(y)} } q(y, x) = k Ξ²^{n(y)} Ξ³^{βˆ’c(y)} min{ Ξ³^{c(y) βˆ’ c(x)}, 1 } q(y, x) = Ο€(y) a(y, x) q(y, x).

Finally,

∫_Ξ© 1_B(x, y) Ο€(x) P_MH(x, y) dx
= ∫_Ξ© 1_B(x, y) Ο€(y) a(y, x) q(y, x) dx + ∫_Ξ© 1_B(y, y) Ο€(y) [ 1 βˆ’ ∫_Ξ© a(y, z) q(y, z) dz ] dy
= ∫_Ξ© 1_B(x, y) Ο€(y) P_MH(y, x) dy.

This is the Ο€-reversibility condition for the transition mechanism P_MH.

A measure Ο€(βˆ™) is stationary for the transition kernel P_MH if:

βˆ€ x, y ∈ Ξ©; B, A ∈ ℬ: ∫_Ξ© 1_B(x, y) Ο€(x) P_MH(x, A) dx = ∫_Ξ© 1_B(x, y) Ο€(x) dx

Let x ∈ Ξ© and B ∈ ℬ. Then for every Borel set A of ℬ we have:

∫_Ξ© 1_B(x, y) Ο€(x) P_MH(x, A) dx
= ∫_Ξ© 1_B(x, y) Ο€(x) [ ∫_Ξ© a(x, y) q(x, y) dy ] dx + ∫_Ξ© 1_B(x, y) Ο€(x) [ 1 βˆ’ ∫_Ξ© a(x, z) q(x, z) dz ] Ξ΄_x(y) dx
= ∫_Ξ© 1_B(x, y) Ο€(x) [ ∫_Ξ© a(x, y) q(x, y) dy ] dx + ∫_Ξ© 1_B(x, x) Ο€(x) [ 1 βˆ’ ∫_Ξ© a(x, z) q(x, z) dz ] dx
= ∫_Ξ© ∫_Ξ© 1_B(x, y) Ο€(x) a(x, y) q(x, y) dy dx + ∫_Ξ© 1_B(x, x) Ο€(x) dx βˆ’ ∫_Ξ© ∫_Ξ© Ο€(x) a(x, z) q(x, z) dz dx
= ∫_Ξ© 1_B(x, x) Ο€(x) dx.

Consequently, the chain admits Ο€ as a stationary distribution.

A measure Ο€(βˆ™) is irreducible for the transition kernel P_MH of a Markov chain if:

βˆ€ A ∈ ℬ with Ο€(A) > 0, βˆƒ t : P_MH^t(x, A) > 0

Let A be a Borel set of ℬ; for t = 1 we have:

∫_Ξ© 1_B(x, A) P_MH(x, A) dx
= ∫_Ξ© 1_B(x, A) a(x, A) q(x, A) dx + ∫_Ξ© 1_B(x, A) [ 1 βˆ’ ∫_Ξ© a(x, z) q(x, z) dz ] Ξ΄_x(A) dx
= ∫_Ξ© 1_B(x, A) a(x, A) q(x, A) dx + ∫_Ξ© 1_B(x, x) [ 1 βˆ’ ∫_Ξ© a(x, z) q(x, z) dz ] dx
= ∫_Ξ© 1_B(x, A) a(x, A) q(x, A) dx + 1 βˆ’ ∫_Ξ© ∫_Ξ© a(x, z) q(x, z) dz dx.
Since π‘Ž(π‘₯, A) = π‘šπ‘–π‘› (1; 𝛾𝑐(π‘₯)βˆ’π‘(A)) and π‘Ž(π‘₯, 𝑧) = π‘šπ‘–π‘› (1; 𝛾𝑐(π‘₯)βˆ’π‘(𝑧)), then four possible cases can be distinguished: β€’ If π‘Ž(π‘₯, A) = 1 and π‘Ž(π‘₯, 𝑧) = 1 then: ∫ 1𝐡(π‘₯,A)𝑃𝑀𝐻 (π‘₯, A)𝑑π‘₯ 𝛺 = ∫ 1𝐡(π‘₯,A)π‘ž(π‘₯, A)𝑑π‘₯ 𝛺 + 1 βˆ’ ∫ ∫ π‘ž(π‘₯, 𝑧)𝑑𝑧 𝛺𝛺 𝑑π‘₯ = ∫ 1𝐡(π‘₯,A)π‘ž(π‘₯, A)𝑑π‘₯ 𝛺 > 0. β€’ If π‘Ž(π‘₯, A) = 1 and π‘Ž(π‘₯, 𝑧) = 𝛾𝑐(π‘₯)βˆ’π‘(𝑧) then: ∫ 1𝐡(π‘₯,A)𝑃𝑀𝐻 (π‘₯, A)𝑑π‘₯ 𝛺 = ∫ 1𝐡(π‘₯,A)π‘ž(π‘₯, A)𝑑π‘₯ 𝛺 + 1 βˆ’ ∫ ∫ 𝛾𝑐(π‘₯)βˆ’π‘(𝑧)π‘ž(π‘₯, 𝑧)𝑑𝑧 𝛺𝛺 𝑑π‘₯ = ∫ 1𝐡(π‘₯,A)π‘ž(π‘₯, A)𝑑π‘₯ 𝛺 + 1 βˆ’ 𝛾𝑐(π‘₯)βˆ’π‘(𝑧) > 0. β€’ If π‘Ž(π‘₯, A) = 𝛾𝑐(π‘₯)βˆ’π‘(A) and π‘Ž(π‘₯, 𝑧) = 1 then: 11 Int. J. Anal. Appl. (2023), 21:51 ∫ 1𝐡(π‘₯,A)𝑃𝑀𝐻(π‘₯, A)𝑑π‘₯ 𝛺 = ∫ 1𝐡(π‘₯,A)𝛾 𝑐(π‘₯)βˆ’π‘(A)π‘ž(π‘₯, A)𝑑π‘₯ 𝛺 + 1 βˆ’ ∫ ∫ π‘ž(π‘₯, 𝑧)𝑑𝑧 𝛺𝛺 𝑑π‘₯ = 𝛾𝑐(π‘₯)βˆ’π‘(A) ∫ 1𝐡(π‘₯,A)π‘ž(π‘₯, A)𝑑π‘₯ > 0. 𝛺 β€’ If π‘Ž(π‘₯, A) = 𝛾𝑐(π‘₯)βˆ’π‘(𝐴) and π‘Ž(π‘₯, 𝑧) = 𝛾𝑐(π‘₯)βˆ’π‘(𝑧) then : ∫ 1𝐡(π‘₯,A)𝑃𝑀𝐻(π‘₯, A)𝑑π‘₯ 𝛺 = ∫ 1𝐡(π‘₯,A)𝛾 𝑐(π‘₯)βˆ’π‘(A)π‘ž(π‘₯, A)𝑑π‘₯ 𝛺 + 1 βˆ’ ∫ ∫ 𝛾𝑐(π‘₯)βˆ’π‘(𝑧)π‘ž(π‘₯, 𝑧)𝑑𝑧 𝛺𝛺 𝑑π‘₯ = 𝛾𝑐(π‘₯)βˆ’π‘(A) ∫ 1𝐡(π‘₯,𝐴)π‘ž(π‘₯, A)𝑑π‘₯ 𝛺 + 1 βˆ’ 𝛾𝑐(π‘₯)βˆ’π‘(𝑧) ∫ ∫ π‘ž(π‘₯, 𝑧)𝑑𝑧 𝛺𝛺 𝑑π‘₯ = 𝛾𝑐(π‘₯)βˆ’π‘(A) ∫ 1𝐡(π‘₯,𝐴)π‘ž(π‘₯, A)𝑑π‘₯𝛺 + 1 βˆ’ 𝛾 𝑐(π‘₯)βˆ’π‘(𝑧) > 0. So ∫ 1𝐡(π‘₯,A)𝑃𝑀𝐻 𝑑 (π‘₯, A)𝑑π‘₯ 𝛺 > 0 βˆ€ 𝑑 β‰₯ 0, then 𝑃𝑀𝐻 is πœ‹ βˆ’irreducible. Since πœ‹ is the invariant distribution of 𝑃𝑀𝐻, then it remains so with 𝑃. As a matter of fact, πœ‹π‘ƒπ‘€π» = πœ‹, and by recurrence on 𝑛 we get: πœ‹π‘ƒπ‘€π» = πœ‹π‘ƒπ‘€π» 2 = πœ‹π‘ƒπ‘€π» 3 = β‹― = πœ‹π‘ƒπ‘€π» 𝑛 = πœ‹ Yet 𝑃 = 𝑃𝑀𝐻 𝑛 , then we get: πœ‹π‘ƒ = πœ‹. 
On the other hand, the Ο€-reversibility of P_MH leads to the Ο€-reversibility of P, i.e.:

Ο€(x) P_MH(x, y) = Ο€(y) P_MH(y, x) β‡’ Ο€(x) P(x, y) = Ο€(y) P(y, x)

Indeed, the reversibility of P_MH extends to its iterates, so that Ο€(x) P_MH^n(x, y) = Ο€(y) P_MH^n(y, x); and since P_MH^n = P, it follows that Ο€(x) P(x, y) = Ο€(y) P(y, x).

By construction of P = P_MH^n, the Ο€-irreducibility of P_MH leads to the Ο€-irreducibility of P. If P is Ο€-irreducible and admits Ο€ as an invariant distribution, then P is positive recurrent and Ο€ is the unique invariant distribution of P [13] (see Proposition 1).

Furthermore, the chain generated by the construction algorithm is aperiodic provided there exists at least one pair of configurations (x, y) such that a(x, y) < 1, for then P(x, x) > 0. The chain is indeed aperiodic, since the event X_{N+1} = X_N has positive probability at every iteration: each state can be visited at two successive iterations, so P^1(x, x) > 0 and the period is equal to 1. Since the chain generated by the algorithm is irreducible and aperiodic, its transition kernel P is primitive (a common characterization of a primitive Markov kernel in probability theory is precisely that it is irreducible and aperiodic [14]).

Theorem 4.1. The Markov chain (X_k)_{k β‰₯ 0} obtained from the construction algorithm is uniformly ergodic, and its kernel P realizes the simulation of the connected component process with density Ο€(x) = k Ξ²^{n(x)} Ξ³^{βˆ’c(x)}, i.e.
π‘£π‘ƒπ‘š converges to πœ‹ as π‘š approaches infinity; where 𝑣 is an initial distribution, and we have : β€–π‘£π‘ƒπ‘š βˆ’ πœ‹β€– β†’ 0, π‘š β†’ ∞ Proof of theorem 4.1 Let 𝑣 be an initial distribution, for every integer π‘š and βˆ€ π‘₯ ∈ 𝑁𝑓 we have: β€–π‘£π‘ƒπ‘š (π‘₯,βˆ™) βˆ’ πœ‹β€– = β€–π‘£π‘ƒπ‘š βˆ’ πœ‹π‘ƒπ‘š β€– On the other hand, β€–π‘£π‘ƒπ‘š βˆ’ πœ‹π‘ƒπ‘š β€– ≀ 2𝐢(π‘ƒπ‘š ) and 𝐢(π‘ƒπ‘š ) ≀ (𝐢(𝑃))π‘š [15] (see lemma 4.2.2, P.71), this implies that: β€–π‘£π‘ƒπ‘š (π‘₯,βˆ™) βˆ’ πœ‹β€– ≀ 2(𝐢(𝑃))π‘š Where, 𝐢(𝑃)is the Dobrushin contraction coefficient of 𝑃 [16]. According to proposition 4.1, the kernel 𝑃 is primitive, then 𝐢(𝑃) < 1 [15] (see lemma 4.2.3, P.72), so when π‘š approaches infinity, β€–π‘£π‘ƒπ‘š βˆ’ πœ‹β€– tends towards zero. Hence, the chain is uniformly ergodic and converges towards the distribution defined in (3.2). 5. SIMULATION STUDY AND DISCUSSION In this section, we will compare the quality of point distributions in the proposed designs using standard criteria to evaluate the degree of filling of the experimental space and the quality of the uniform distribution. β€’ Criterion of distance (Mindist): it is about maximizing the minimal distance between two points of the design [17]. 𝑀𝑖𝑛𝑑𝑖𝑠𝑑 = min 𝑖 min 𝑗≠𝑖 𝑑(π‘₯𝑖 , π‘₯𝑗 ) Where, 𝑑(π‘₯𝑖 , π‘₯𝑗 ) is the Euclidean distance between the points π‘₯𝑖 and π‘₯𝑗. β€’ The coverage criterion (Cov): it helps to measure the gap between the points of the design and those of a regular grid. This criterion is invalid for a regular grid. Our goal, here, is to minimize this gap to get closer to a regular grid, hence ensuring the filling of the space without, however, reaching it to comply with a uniform distribution, mainly in projection on factorial axes [18]. 13 Int. J. Anal. Appl. (2023), 21:51 π‘π‘œπ‘£ = 1 𝛿̅ √ 1 𝑛 βˆ‘(𝛿𝑖 βˆ’ 𝛿̅) 2 𝑛 𝑖=1 Where 𝛿𝑖 = min 𝑖≠𝑗 (𝑑(π‘₯𝑖 , π‘₯𝑗 )) and οΏ½Μ…οΏ½ = 1 𝑛 βˆ‘ 𝛿𝑖 𝑛 𝑖=1 . For regular grid, 𝛿1 = 𝛿2 = β‹― = 𝛿𝑛, then Cov=0. 
β€’ Mesh ratio (R): the ratio between the maximal and minimal distances separating the points of the experimental design. For a regular grid R = 1; thus, the closer R is to 1, the closer the points are to those of a regular grid:

R = max_{i=1…n} Ξ΄_i / min_{i=1…n} Ξ΄_i

β€’ Discrepancy criterion (Disc): the discrepancy measures the difference between the empirical distribution function of the design points and that of the uniform distribution. Unlike the previous three criteria, it does not depend on the distances between points. There are different ways of measuring discrepancy; in this study we use the L2 norm [19]:

Disc = (1/3)^p βˆ’ (2^{1βˆ’p}/n) Ξ£_{i=1}^{n} ∏_{j=1}^{p} (1 βˆ’ (x_i^j)Β²) + (1/nΒ²) Ξ£_{i=1}^{n} Ξ£_{k=1}^{n} ∏_{j=1}^{p} (1 βˆ’ max(x_i^j, x_k^j))

In Table 1 we present the results for the discrepancy criterion for the proposed designs and compare them with low-discrepancy sequences (the Halton sequence [20], the Sobol sequence [21] and the Faure sequence [22]). It is interesting to note that the proposed designs reach discrepancies as low as those of the low-discrepancy sequences.

Table 1. Discrepancy values for CCD designs and for the low-discrepancy Halton, Sobol and Faure sequences, in dimensions three, five, seven and ten. For each dimension, the lowest value is marked with an asterisk.

Dimension | Number of points | CCD       | Halton Sequence | Sobol Sequence | Faure Sequence
3         | 20               | 0.001224  | 0.00173285      | 0.001382617    | 0.00113882*
5         | 40               | 0.000634  | 0.00040004      | 0.000260329*   | 0.00028458
7         | 50               | 0.000257  | 0.00012451      | 0.000074156*   | 0.00012455
10        | 60               | 0.0000140 | 0.00005601      | 0.000007428*   | 0.00001327

The designs presented in this article are also compared with designs commonly used in computer experiments, beyond low-discrepancy sequences. To ensure meaningful results, the criteria have been tested on 80 designs.
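The four criteria above can be evaluated with the short helpers below. This is a sketch; the function names are ours, and the discrepancy routine implements the L2 formula quoted above.

```python
import math

def _nearest(points):
    # delta_i: distance from each point to its nearest neighbour
    return [min(math.dist(p, q) for j, q in enumerate(points) if j != i)
            for i, p in enumerate(points)]

def mindist(points):
    return min(_nearest(points))

def cov(points):
    d = _nearest(points)
    mean = sum(d) / len(d)
    return math.sqrt(sum((di - mean) ** 2 for di in d) / len(d)) / mean

def mesh_ratio(points):
    d = _nearest(points)
    return max(d) / min(d)

def l2_discrepancy(points):
    # Squared L2 discrepancy (the Disc formula above)
    n, p = len(points), len(points[0])
    s1 = sum(math.prod(1.0 - u * u for u in x) for x in points)
    s2 = sum(math.prod(1.0 - max(xi, yi) for xi, yi in zip(x, y))
             for x in points for y in points)
    return 3.0 ** (-p) - 2.0 ** (1 - p) / n * s1 + s2 / (n * n)
```

On a regular one-dimensional grid these helpers recover the values stated above: Cov = 0 and R = 1.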
The designs considered in this section are the following:

β€’ Random Designs (RD).
β€’ Latin Hypercube Sampling (LHS) [23].
β€’ Maximin LHS Designs (mLHS) [24].
β€’ Strauss Designs (SD) [6].
β€’ Maximal Entropy Designs (Dmax) [25].
β€’ Marked Strauss Designs (MSD) ([7], [8]).
β€’ Connected Component Designs (CCD).

Figures 4 and 5 present the criteria results as box plots for 80 designs in 10 and 5 dimensions, respectively. The plots clearly illustrate the distribution of the data and allow an easy comparison between designs.

Figure 4. Usual criteria computed on 80 designs with 60 points in dimension 10.

Figure 5. Usual criteria computed on 80 designs with 40 points in dimension 5.

Some remarks on the figures above are worth noting. Maximal entropy designs, Strauss designs, marked Strauss designs, as well as connected component designs all score well with regard to the discrepancy criterion. It is interesting to note that, among these, connected component designs also have very good distance criteria. The coverage criterion shows that the proposed designs offer the best results in this respect.

6. CONCLUSION

The use of the connected component process and the Markov Chain Monte Carlo (MCMC) method is instrumental in creating novel computer experiment designs, customized through the distribution of a connected component model. The method is highly versatile: it allows us to shape the distribution so as to impose certain properties, such as space filling. Furthermore, the MCMC methodology is an appealing alternative to the classical statistical approach, which typically involves independent realizations of the same distribution.
Ultimately, the designs developed in this study were compared to those commonly used in computer experiments, and the outcomes are highly satisfactory.

Conflicts of Interest: The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

[1] D.J. Daley, D. Vere-Jones, An Introduction to the Theory of Point Processes. Volume I: Elementary Theory and Methods, 2nd edn, Springer, Berlin, 1988.
[2] A.J. Baddeley, J. MΓΈller, Nearest Neighbour Markov Point Processes and Random Sets, Int. Stat. Rev. 57 (1989), 89–121. https://doi.org/10.2307/1403381.
[3] A.J. Baddeley, M.N.M. van Lieshout, Area-Interaction Point Processes, Ann. Inst. Stat. Math. 47 (1995), 601–619. https://doi.org/10.1007/bf01856536.
[4] Y.C. Chin, A.J. Baddeley, On Connected Component Markov Point Processes, Adv. Appl. Probab. 31 (1999), 279–282. https://doi.org/10.1239/aap/1029955135.
[5] B.D. Ripley, F.P. Kelly, Markov Point Processes, J. Lond. Math. Soc. s2-15 (1977), 188–192. https://doi.org/10.1112/jlms/s2-15.1.188.
[6] J. Franco, Planification d'ExpΓ©riences NumΓ©riques en Phase Exploratoire pour des Codes de Calculs Simulant des PhΓ©nomΓ¨nes Complexes, Doctoral Thesis, Γ‰cole Nationale SupΓ©rieure des Mines de Saint-Γ‰tienne, France, 2008.
[7] H. Elmossaoui, N. Oukid, F. Hannane, Construction of Computer Experiment Designs Using Marked Point Processes, Afr. Mat. 31 (2020), 917–928. https://doi.org/10.1007/s13370-020-00770-9.
[8] H. Elmossaoui, Contribution Γ  la MΓ©thodologie de la Recherche ExpΓ©rimentale, Doctoral Thesis, University Saad Dahleb, Blida, Algeria, 2020.
[9] N. Metropolis, A.W. Rosenbluth, M.N. Rosenbluth, A.H. Teller, E. Teller, Equation of State Calculations by Fast Computing Machines, J. Chem. Phys. 21 (1953), 1087–1092. https://doi.org/10.1063/1.1699114.
[10] W.K. Hastings, Monte Carlo Sampling Methods Using Markov Chains and Their Applications, Biometrika 57 (1970), 97–109. https://doi.org/10.1093/biomet/57.1.97.
[11] A.
Baddeley, J. MΓΈller, A.G. Pakes, Properties of Residuals for Spatial Point Processes, Ann. Inst. Stat. Math. 60 (2007), 627–649. https://doi.org/10.1007/s10463-007-0116-6.
[12] S. Chib, E. Greenberg, Understanding the Metropolis-Hastings Algorithm, Amer. Stat. 49 (1995), 327–335. https://doi.org/10.2307/2684568.
[13] S. Chib, E. Greenberg, Markov Chain Monte Carlo Simulation Methods in Econometrics, Econ. Theory 12 (1996), 409–431. https://doi.org/10.1017/s0266466600006794.
[14] E. Seneta, Non-Negative Matrices and Markov Chains, 2nd edn, Springer, New York, 1981.
[15] G. Winkler, Image Analysis, Random Fields and Dynamic Monte Carlo Methods, Springer, Berlin, 1995.
[16] R.L. Dobrushin, Central Limit Theorem for Nonstationary Markov Chains. I, Theory Probab. Appl. 1 (1956), 65–80. https://doi.org/10.1137/1101006.
[17] M. Gunzburger, J. Burkardt, Uniformity Measures for Point Samples in Hypercubes, 2004. https://people.sc.fsu.edu/~jburkardt/publications/gb_2004.pdf.
[18] M.E. Johnson, L.M. Moore, D. Ylvisaker, Minimax and Maximin Distance Designs, J. Stat. Plan. Inference 26 (1990), 131–148. https://doi.org/10.1016/0378-3758(90)90122-b.
[19] T.T. Warnock, Computational Investigations of Low-Discrepancy Point Sets II, in: H. Niederreiter, P.J.S. Shiue (eds), Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Lecture Notes in Statistics 106, Springer, New York, 1995.
[20] J.H. Halton, On the Efficiency of Certain Quasi-Random Sequences of Points in Evaluating Multi-Dimensional Integrals, Numer. Math. 2 (1960), 84–90. https://doi.org/10.1007/bf01386213.
[21] I.M. Sobol, Uniformly Distributed Sequences with an Additional Uniform Property, USSR Comput. Math. Math. Phys. 16 (1976), 236–242. https://doi.org/10.1016/0041-5553(76)90154-3.
[22] H. Faure, DiscrΓ©pance de Suites AssociΓ©es Γ  un SystΓ¨me de NumΓ©ration (en Dimension s), Acta Arith. 41 (1982), 337–351. https://doi.org/10.4064/aa-41-4-337-351.
[23] W.L. Loh, On Latin Hypercube Sampling, Ann. Stat. 24 (1996), 2058–2080. https://doi.org/10.1214/aos/1069362310.
[24] M.D. Morris, T.J. Mitchell, Exploratory Designs for Computational Experiments, J. Stat. Plan. Inference 43 (1995), 381–402. https://doi.org/10.1016/0378-3758(94)00035-t.
[25] M.C. Shewry, H.P. Wynn, Maximum Entropy Sampling, J. Appl. Stat. 14 (1987), 165–170. https://doi.org/10.1080/02664768700000020.