
appropriate proposal distribution is crucial for an efficient probabilistic sampling: suboptimal choices of the proposal can result in a persistent rejection of models or in an entrapment in local optima of the target pdf. An acceptance rate (the ratio between accepted and proposed models) between 0.2 and 0.4 usually indicates an optimal choice of the proposal.

The first method we employ is the standard Random Walk Metropolis (RWM), which adopts a fixed proposal distribution for the entire MCMC sampling. For this reason, the performance of this method depends on the selected proposal. The second and third methods overcome this limitation by automatically tuning the proposal during the MCMC sampling. These methods are the Adaptive Metropolis (AM) algorithm and the Metropolis algorithm with adaptation of the scaling factor (AM_sd). They use two different strategies to continuously adapt the covariance of the proposal based on the accepted samples of the chain.

All the previously considered MCMC algorithms can employ multiple chains to sample the parameter space more efficiently, but these chains do not communicate with each other; in other words, they do not share any information about the sampled models and do not exploit this information to direct the sampling toward the most promising regions of the model space (e.g. regions with high likelihood values). This limitation often results in a decreased exploration capability and in an inefficient sampling when the modes of the target pdf are separated by high energy barriers (low-probability regions). Many improvements of the previous MCMC methods have been proposed to overcome this issue. In particular, mixing the information (sampled models) carried by the different chains considerably increases the efficiency and the convergence of the MCMC sampling. For this reason, the fourth method we consider is the Differential Evolution Markov Chain (DEMC), which uses differential evolution as a genetic algorithm for population evolution, with a Metropolis selection rule to decide whether candidate points should replace their parents or not. In DEMC, multiple Markov chains are run in parallel and multivariate proposals are generated on the fly from the collection of chains using differential evolution principles. Finally, the fifth method we analyze is the DiffeRential Evolution Adaptive Metropolis (DREAM), which has its roots in DEMC but uses subspace sampling and outlier chain correction to speed up convergence to the target distribution. Subspace sampling is implemented in DREAM by updating only randomly selected dimensions of the model vector each time a proposal is generated. More detailed mathematical information about all these methods can be found in Vrugt (2016).

Results on analytical pdfs. In the first analytical example we use the five algorithms to sample from a 1D Gaussian-mixture distribution in which two modes are close to each other. For each MCMC algorithm we employ 10 chains running for 10000 iterations with a burn-in period of 5000 samples. Fig. 1a shows the final results in the form of a comparison between the target pdf and the final sampled pdfs, together with the acceptance rate for each method and the L2 norm distance between the target pdf and the pdfs derived from the ensemble of sampled models (E2 values in Fig. 1a). We note that all the algorithms successfully converge toward the target pdf at the end of the iterations.
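To make the Metropolis acceptance rule and the role of the acceptance rate concrete, the following Python sketch runs a Random Walk Metropolis sampler on a 1D two-mode Gaussian-mixture target similar in spirit to the first analytical example. It is only an illustration, not the implementation used in this work: the mixture parameters, the proposal standard deviation and the random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_target(x):
    """Log of a 1D two-mode Gaussian mixture (illustrative parameters,
    not those of the paper): equal-weight modes at -1 and +1."""
    p = 0.5 * np.exp(-0.5 * (x + 1.0) ** 2) + 0.5 * np.exp(-0.5 * (x - 1.0) ** 2)
    return np.log(p)

def rwm(log_pdf, x0, n_iter=10_000, prop_std=1.0):
    """Random Walk Metropolis with a fixed Gaussian proposal.
    Returns the chain and the final acceptance rate."""
    x = x0
    lp = log_pdf(x)
    chain = np.empty(n_iter)
    accepted = 0
    for i in range(n_iter):
        x_new = x + prop_std * rng.standard_normal()   # symmetric proposal
        lp_new = log_pdf(x_new)
        # Metropolis rule: accept with probability min(1, p_new / p_old)
        if np.log(rng.uniform()) < lp_new - lp:
            x, lp = x_new, lp_new
            accepted += 1
        chain[i] = x
    return chain, accepted / n_iter

chain, acc_rate = rwm(log_target, x0=0.0, n_iter=10_000, prop_std=2.4)
samples = chain[5_000:]                     # discard the burn-in period
print(f"acceptance rate = {acc_rate:.2f}")  # aim for roughly [0.2, 0.4]
```

With a fixed proposal, as in RWM, the printed acceptance rate is the quantity one would tune by hand (here by changing prop_std) until it falls in the optimal interval of roughly [0.2, 0.4]; AM and AM_sd automate exactly this kind of tuning during the sampling.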
All the algorithms show final acceptance rates that lie in the optimal interval of [0.2, 0.4]. This confirms an optimal choice of the proposal for the RWM algorithm and shows that the AM and AM_sd algorithms are able to optimally adapt the statistical properties of the proposal. If we consider the L2 distance between the target and estimated pdfs, we note that AM and AM_sd outperform the RWM algorithm, and that DEMC and DREAM outperform all the other methods. In particular, AM and AM_sd yield very similar outcomes, as do DEMC and DREAM. Fig. 1b displays the evolution of the PSRF values after the burn-in period. On the one hand, RWM, AM and AM_sd converge toward the target pdf after about 3000, 2000 and 2500 post-burn-in iterations, respectively. On the other hand, both DEMC and DREAM converge after just 1400 iterations. This clearly demonstrates that the automatic adaptation of the proposal used by AM and AM_sd guarantees a slightly more efficient sampling than the standard RWM, and that the mixing of the chains performed by DEMC and DREAM significantly speeds up the sampling procedure.
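The convergence assessment in Fig. 1b relies on the Potential Scale Reduction Factor computed across the parallel chains. The sketch below is a minimal scalar version of the Gelman-Rubin PSRF (the exact variant used in the paper may differ); the input array is a stand-in for real post-burn-in MCMC output, not data from this study.

```python
import numpy as np

def psrf(chains):
    """Potential Scale Reduction Factor (Gelman-Rubin R-hat) for a
    scalar parameter, given post-burn-in chains of shape (m, n):
    m chains, n samples per chain."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    B = n * chain_means.var(ddof=1)        # between-chain variance
    W = chain_vars.mean()                  # within-chain variance
    var_hat = (n - 1) / n * W + B / n      # pooled variance estimate
    return np.sqrt(var_hat / W)

# Hypothetical usage: 10 chains, e.g. produced by independent runs of the
# RWM sketch above started from different points (random data used here).
rng = np.random.default_rng(0)
fake_chains = rng.standard_normal((10, 5_000))   # stand-in for MCMC output
print(f"PSRF = {psrf(fake_chains):.3f}")          # ~1.0 when chains agree
```

PSRF values close to 1 indicate that the within-chain and between-chain variances agree, i.e. that all chains are sampling the same target pdf; monitoring this quantity along the iterations yields convergence curves of the kind shown in Fig. 1b.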
