"Statistical abstraction for multi-scale spatio-temporal systems" proposes a methodology that supports the analysis of large-scale spatio-temporal systems. These are represented via a set of agents whose behaviour depends on a perceived field. The approach rests on a novel simulation strategy built on a statistical abstraction of the agents. The abstraction makes use of Gaussian Processes, a powerful class of non-parametric regression techniques from Bayesian machine learning, to estimate the agents' behaviour given the environmental input. The authors use two biological case studies to show how the proposed technique can speed up simulations and provide further insights into model behaviour. This Replicated Computational Results (RCR) report focuses on the scripts used in the paper to perform such analysis. The required software was straightforward to install and use. All the experimental results from the paper have been reproduced.
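Since the abstraction hinges on Gaussian-process regression, a minimal sketch of that machinery may help the reader; this is generic GP code under an assumed RBF kernel and illustrative parameter names, not the authors' scripts:

```python
import numpy as np

def gp_posterior_mean(X, y, Xs, ell=1.0, sf=1.0, noise=1e-4):
    """Posterior mean of GP regression with an RBF kernel:
    m(Xs) = K(Xs, X) [K(X, X) + noise * I]^{-1} y."""
    def k(A, B):
        d = A[:, None] - B[None, :]
        return sf**2 * np.exp(-0.5 * (d / ell) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

# Example: recover a smooth response from 20 observations.
X = np.linspace(0.0, 3.0, 20)
mean = gp_posterior_mean(X, np.sin(X), np.array([1.5]))
```

With densely sampled smooth data, the posterior mean closely interpolates the underlying response between observation points.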
In adaptive importance sampling, and other contexts, we have unbiased and uncorrelated estimates of a common quantity, and the variance of the k-th estimate is thought to decay like k^(-y) for an unknown rate parameter y ∈ [0, 1]. If we combine the estimates as though y = 1/2, then the resulting estimate attains the optimal variance rate with a constant that is too large by a factor of at most 9/8, for any y and any number K of estimates combined.
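The 9/8 bound lends itself to a numerical sanity check. Assuming estimate variances proportional to k^(-y) and weights chosen as though y = 1/2 (i.e. proportional to k^(1/2)), the variance of the weighted combination never exceeds 9/8 times that of the optimal inverse-variance weighting; the script below is an illustrative verification over a grid of (y, K), not the paper's proof:

```python
def variance_ratio(y, K):
    """Variance of the k^(1/2)-weighted combination of K estimates with
    Var_k proportional to k^(-y), relative to the optimal combination."""
    w = [k ** 0.5 for k in range(1, K + 1)]   # weights as though y = 1/2
    v = [k ** -y for k in range(1, K + 1)]    # true variances
    achieved = sum(wk**2 * vk for wk, vk in zip(w, v)) / sum(w) ** 2
    optimal = 1.0 / sum(1.0 / vk for vk in v)  # inverse-variance weighting
    return achieved / optimal

worst = max(variance_ratio(y / 10, K)
            for y in range(11) for K in range(1, 201))
```

The worst case occurs at y = 0 and approaches 9/8 as K grows, while at y = 1/2 the ratio is exactly 1.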
We consider numerical schemes for root finding of noisy responses through generalizing the Probabilistic Bisection Algorithm (PBA) to the more practical context where the sampling distribution is unknown and location-dependent. As in standard PBA, we rely on a knowledge state for the approximate posterior of the root location. To implement the corresponding Bayesian updating, we also carry out inference of oracle accuracy, namely learning the probability of correct response. To this end we utilize batched querying in combination with a variety of frequentist and Bayesian estimators based on majority vote, as well as the underlying functional responses, if available. For guiding sampling selection we investigate both Information Directed sampling and Quantile sampling. Our numerical experiments show that these strategies perform quite differently; in particular we demonstrate the efficiency of randomized quantile sampling, which is reminiscent of Thompson sampling. Our work is motivated by the root-finding sub-routine in pricing of Bermudan financial derivatives, illustrated in the last section of the paper.
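The core Bayesian update of PBA is compact. The sketch below works on a discretized prior and assumes the oracle accuracy p is known and constant; the paper's point is precisely that p is unknown and location-dependent, so treat this only as the baseline update the authors generalize (all names are illustrative):

```python
import numpy as np

def pba_step(grid, posterior, x, response, p):
    """One probabilistic-bisection update: the oracle claims the root lies
    right (+1) or left (-1) of x and is correct with probability p. Scale
    the indicated side by p, the other side by 1 - p, then renormalize."""
    right = grid >= x
    side = right if response > 0 else ~right
    posterior = np.where(side, p * posterior, (1 - p) * posterior)
    return posterior / posterior.sum()

# Toy run: noisy sign oracle around a root at 0.3, accuracy p = 0.7.
rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 1001)
post = np.full(grid.size, 1.0 / grid.size)
root, p = 0.3, 0.7
for _ in range(300):
    x = grid[np.searchsorted(np.cumsum(post), 0.5)]   # query posterior median
    truth = 1 if root > x else -1
    response = truth if rng.random() < p else -truth  # oracle lies w.p. 1 - p
    post = pba_step(grid, post, x, response, p)
median = grid[np.searchsorted(np.cumsum(post), 0.5)]
```

Querying at the posterior median is the classical PBA choice; the posterior concentrates geometrically around the root despite the 30% error rate.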
The widespread adoption of Modelling and Simulation (M&S) techniques hinges on the availability of tools supporting each phase in the M&S-based workflow. This includes tasks such as specifying, experimenting with, and verifying and validating simulation models. Recently, research efforts have been directed towards providing debugging support for simulation models using the same abstractions as the formalism(s) in which those models were specified. We have previously developed a technique where advanced debugging environments are generated from an explicit behavioral model of the user interface and the simulator. These models are extracted from the code of existing modelling environments and simulators, and instrumented with debugging operations. This technique can be reused for a large family of modelling formalisms. We adapt and apply this approach to accommodate dynamic-structure formalisms. As a representative example, we choose Dynamic-Structure DEVS (DSDEVS), a formalism that combines characteristics of the discrete-event and agent-based modelling paradigms. We observe that, to effectively debug DSDEVS models, domain-specific visualizations developed by the modeller should be (re)used for debugging tasks. To this end, we present a modular, reusable approach, which includes an architecture and a workflow. We provide a concrete example of a minimal, but sufficiently complex, simulation system modelled in the DSDEVS formalism that can be successfully debugged using the generated debugging environment.
In this paper we present Three-Valued Spatio-Temporal Logic (TSTL), which enriches the available spatio-temporal analysis of properties expressed in Signal Spatio-Temporal Logic (SSTL), to give further insight into the dynamic behaviour of systems. Our novel analysis starts from the estimation of satisfaction probabilities of given SSTL properties and allows the analysis of their temporal and spatial evolution. Moreover, in our verification procedure, we use a three-valued approach to include the intrinsic and unavoidable uncertainty related to the simulation-based statistical evaluation of the estimates; this can also be used to assess the appropriate number of simulations to use depending on the analysis needs. We present the syntax and three-valued semantics of TSTL and a specific extended monitoring algorithm to check the validity of TSTL formulas. We introduce a reliability requirement for TSTL monitoring and an automatic procedure to verify it. Two case studies demonstrate how TSTL broadens the application of spatio-temporal logics in realistic scenarios, enabling analysis of threat monitoring and privacy preservation based on spatial stochastic population models.
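The three-valued idea of flagging statistically inconclusive estimates can be illustrated with a toy verdict function; this is a generic confidence-interval check, not the TSTL monitoring algorithm itself:

```python
import math

def three_valued_verdict(k, n, threshold, z=1.96):
    """Illustrative three-valued check: estimate a satisfaction
    probability from k successes in n simulation runs, build a
    normal-approximation confidence interval, and return
    True / False / None ('unknown')."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    if p - half > threshold:
        return True
    if p + half < threshold:
        return False
    return None  # inconclusive: more simulations needed
```

An "unknown" verdict signals that the Monte Carlo error still dominates, which is exactly the situation in which one would increase the number of simulations.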
Introduction to the QEST Special Issue
A number of perfect simulation algorithms for multi-server First Come First Served queues have recently been developed. Those of Connor and Kendall (2015) and Blanchet, Pei, and Sigman (2015) use dominated Coupling from the Past (domCFTP) to sample from the equilibrium distribution of the Kiefer-Wolfowitz workload vector for stable M/G/c and GI/GI/c queues respectively, using random assignment queues as dominating processes. In this note we answer a question posed by Connor and Kendall (2015), by demonstrating how these algorithms may be modified in order to carry out domCFTP simultaneously for a range of values of c (the number of servers).
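For readers unfamiliar with the mechanism, vanilla Coupling From The Past on a monotone chain shows the coalescence idea in a few lines. The sketch below is a textbook illustration for a lazy birth-death walk, far simpler than the dominated CFTP the note modifies, and the function name is ours:

```python
import random

def cftp_birth_death(p_up, n_states, seed=0):
    """Monotone CFTP on {0, ..., n_states-1}: run coupled paths from the
    bottom and top states from time -T to 0, reusing the same uniforms;
    if they coalesce, the common value is an exact stationary draw."""
    rng = random.Random(seed)
    us = []                                  # shared randomness; us[0] is time -1
    T = 1
    while True:
        while len(us) < T:
            us.append(rng.random())
        lo, hi = 0, n_states - 1
        for t in range(T - 1, -1, -1):       # steps from time -T up to time 0
            u = us[t]
            lo = min(lo + 1, n_states - 1) if u < p_up else max(lo - 1, 0)
            hi = min(hi + 1, n_states - 1) if u < p_up else max(hi - 1, 0)
        if lo == hi:
            return lo                        # exact sample, no burn-in bias
        T *= 2                               # extend further into the past
```

The key detail is that each doubling of T reuses the old uniforms for the times nearest zero; for p_up = 1/2 with lazy boundaries, the stationary law of this chain is uniform.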
The existing performance evaluation methods for discrete-state stochastic models such as Petri nets either generate the reachability graph followed by a numerical solution of equations, or use some variant of simulation. Both methods have characteristic advantages and disadvantages depending on the size of the reachability graph and the type of performance measure. The paper proposes a hybrid performance evaluation algorithm for Generalized Stochastic Petri Nets that integrates elements of both methods. It automatically adapts its behavior depending on the available size of main memory and the number of model states. As such, the algorithm unifies simulation and numerical analysis in a joint framework. It is proved to yield an unbiased estimator whose variance tends to zero with increasing simulation time. The paper extends earlier results with an algorithm variant that starts with a small maximum number of particles and increases it during the run, improving efficiency in cases that are rapidly solved by regular simulation. The algorithm's applicability is demonstrated through case studies.
"Analysis of spatio-temporal properties of stochastic systems using TSTL" proposes a three-valued spatio-temporal logic that enriches the analysis framework for Signal Spatio-Temporal Logic previously developed by the authors. This makes it possible to reason about the evolution of the satisfaction of properties expressed in a spatio-temporal logic, providing additional insight into the behaviour of the studied system. The approach has been validated on two case studies: a fire spread and evacuation model, and a novel case study on privacy in a communication network. This Replicated Computational Results (RCR) report focuses on the artifact accompanying the paper, consisting of a prototypical tool implementation of the techniques presented in the paper, together with all files necessary to replicate the analyses performed therein. The artifact is available at https://ludovicalv.github.io/TOMACS/. After a few iterations with the authors, I found that the artifact meets the guidelines on availability (Artifacts Available) and replicability (Results Replicated) set out at https://www.acm.org/publications/policies/artifact-review-badging. The software was made available in an accessible archival repository, and thanks to the instructions provided in the accompanying webpage it was straightforward to replicate the experimental results from the paper.
When simulating a complex stochastic system, the behavior of the output response depends on input parameters estimated from finite real-world data, and the finiteness of the data brings input uncertainty into the system. The quantification of the impact of input uncertainty on the output response has been extensively studied. Most of the existing literature focuses on providing inferences on the mean response at the true but unknown input parameter, including point estimation and confidence interval construction. Risk quantification of the mean response under input uncertainty often plays an important role in system evaluation and control, because it provides inferences on extreme scenarios of the mean response over all possible input models. To the best of our knowledge, it has rarely been systematically studied in the literature. In this paper, we first introduce risk measures of mean response under input uncertainty and propose a nested Monte Carlo simulation approach to estimate them. Then we develop asymptotic properties, such as consistency and asymptotic normality, for the proposed nested risk estimators. Finally, we study the associated budget allocation problem for efficient nested risk simulation.
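A minimal version of the nested scheme: the outer level draws input parameters from a model of input uncertainty, the inner level estimates the mean response at each draw, and a risk measure (here an empirical quantile, VaR-style) is taken over the outer sample. Function names and the specific risk measure are illustrative, not the paper's estimators:

```python
import math
import random
import statistics

def nested_risk_estimate(sample_theta, simulate, alpha=0.9, B=200, n=100):
    """Nested Monte Carlo: B outer draws of the input parameter,
    n inner replications per draw to estimate the mean response,
    then the empirical alpha-quantile of the B estimated means."""
    means = []
    for _ in range(B):
        theta = sample_theta()                       # outer: input uncertainty
        runs = [simulate(theta) for _ in range(n)]   # inner: simulation noise
        means.append(statistics.fmean(runs))
    means.sort()
    return means[math.ceil(alpha * B) - 1]

# Toy check: mean response equals theta ~ N(0, 1), so the 0.9-risk of the
# mean response is roughly the 0.9-quantile of N(0, 1), about 1.28.
random.seed(1)
est = nested_risk_estimate(lambda: random.gauss(0.0, 1.0),
                           lambda t: t + random.gauss(0.0, 0.1))
```

The budget trade-off mentioned in the abstract is visible here: the total cost is B times n, and both the outer quantile error and the inner mean-estimation error must shrink for the nested estimator to converge.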
Markov models have a long tradition in modeling and simulation of dynamic systems. In this paper, we look at certain properties of a discrete time Markov chain, including entropy, trace, and the second-largest eigenvalue, to better understand their role in time series analysis. We simulate a number of possible input signals, fit a discrete time Markov chain, and explore the properties with the help of Sobol indices, partial correlation coefficients, and the Morris elementary effects screening method. Our analysis suggests that the presence of a trend, periodicity, and autocorrelation impacts entropy, trace, and the second-largest eigenvalue to varying degrees, but not independently of each other, and with Markov chain parameter settings as further influencing factors. The properties of interest show promise for distinguishing time series data, as evidenced for the entropy measure by recent results in the analysis of cell development for Xenopus laevis in cell biology.
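The three chain properties can be computed directly from a fitted transition matrix. Below is a generic sketch, assuming the series has already been discretized into integer states; it is not the authors' analysis pipeline:

```python
import numpy as np

def dtmc_properties(states, n_states):
    """Fit a DTMC to a symbol sequence by transition counting, then
    return (entropy rate in bits, trace, modulus of the second-largest
    eigenvalue)."""
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1.0
    P /= P.sum(axis=1, keepdims=True)              # row-stochastic
    # Stationary distribution: left eigenvector for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()
    logs = np.zeros_like(P)
    mask = P > 0
    logs[mask] = np.log2(P[mask])
    entropy = -np.sum(pi[:, None] * P * logs)      # entropy rate per step
    lam2 = np.sort(np.abs(np.linalg.eigvals(P)))[-2]
    return entropy, np.trace(P), lam2
```

For a strictly alternating (periodic) series the entropy rate is 0 and the second eigenvalue modulus is 1, while for an i.i.d. fair-coin series the entropy rate is 1 bit and the second eigenvalue is 0, so the measures do separate such signals.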