ACM Transactions on Modeling and Computer Simulation (TOMACS)

Latest Articles

Infinite Swapping using IID Samples

We propose a new method for estimating rare event probabilities when independent samples are available. It is assumed that the underlying probability measures satisfy a large deviation principle with a scaling parameter ϵ that we call temperature. We show how, by combining samples at different temperatures, one can construct an estimator... (more)

Predicting the Simulation Budget in Ranking and Selection Procedures

The goal of ranking and selection (R&S) procedures is to identify the best among a finite set of alternative systems evaluated by stochastic... (more)

New Performance Modeling Methods for Parallel Data Processing Applications

Predicting the performance of an application running on parallel computing platforms is becoming increasingly important because of its influence on... (more)

Keddah: Network Evaluation Powered by Simulating Distributed Application Traffic

As a distributed system, Hadoop heavily relies on the network to complete data-processing jobs. While the traffic generated by Hadoop jobs is critical for job execution performance, the actual behaviour of Hadoop network traffic is still poorly understood. This lack of understanding greatly complicates research relying on Hadoop workloads. In this... (more)

A Data Assimilation Framework for Discrete Event Simulations

Discrete event simulation (DES) is traditionally used as an offline tool to help users to carry out analysis for complex systems. As real-time sensor... (more)

Exact-Differential Simulation: Differential Processing of Large-Scale Discrete Event Simulations

Using computer simulation to analyze large-scale discrete event systems requires repeated executions with various scenarios or parameters. Such repeated executions can induce significant redundancy in event processing when the modification from a prior scenario to a new scenario is relatively minor, and when the altered scenario influences only a... (more)


New TOMACS Editor in Chief

August/September 2019

We are pleased to report that the Editor-In-Chief search for ACM Transactions on Modeling and Computer Simulation (TOMACS) has been completed. The new Editor-In-Chief of ACM TOMACS will be Francesco Quaglia.



Reproducibility Board

June 2018

TOMACS has established a reproducibility board. Its members will take an active role in reproducing computational results and in evaluating modeling and simulation artifacts, as well as in fine-tuning the overall process.


MS:STAROC: "A Survey of Statistical Model Checking" published

February 2018

In the TOMACS series "State of the Art and Open Challenges" (MS:STAROC), the paper "A Survey of Statistical Model Checking" by Gul Agha and Karl Palmskog has been published.


Special Issue: Toward an Ecosystem of Models and Data

January 2018

A special issue based on the 2017 Informs Simulation Society Research Workshop (I-Sim 2017) will appear in 2019, guest edited by Peter J. Haas and Georgios Theodoropoulos.


Forthcoming Articles
RCR Report for Statistical abstraction for multi-scale spatio-temporal systems

"Statistical abstraction for multi-scale spatio-temporal systems" proposes a methodology that supports analysis of large-scale spatio-temporal systems. These are represented via a set of agents whose behaviour depends on a perceived field. The proposed approach rests on a novel simulation strategy based on a statistical abstraction of the agents. The abstraction makes use of Gaussian Processes, a powerful class of non-parametric regression techniques from Bayesian Machine Learning, to estimate the agents' behaviour given the environmental input. The authors use two biological case studies to show how the proposed technique can be used to speed up simulations and provide further insights into model behaviour. This Replicated Computational Results (RCR) report focuses on the scripts used in the paper to perform this analysis. The required software was straightforward to install and use. All the experimental results from the paper have been reproduced.

Parallel Data Distribution Management on Shared-Memory Multiprocessors

The problem of identifying intersections between two sets of d-dimensional axis-parallel rectangles appears frequently in the context of agent-based simulation studies. For this reason, the High Level Architecture (HLA) specification -- a standard framework for interoperability among simulators -- includes a Data Distribution Management (DDM) service whose responsibility is to report all intersections between a set of subscription and update regions. The algorithms at the core of the DDM service are CPU-intensive, and could greatly benefit from the large computing power of modern multi-core processors. In this paper we propose two parallel solutions to the DDM problem that can operate effectively on shared-memory multiprocessors. The first solution is based on a data structure (the Interval Tree) that allows concurrent computation of intersections between subscription and update regions. The second solution is based on a novel parallel extension of the Sort Based Matching algorithm, whose sequential version is considered among the most efficient solutions to the DDM problem. An extensive experimental evaluation of the proposed algorithms confirms their effectiveness in taking advantage of multiple execution units in a shared-memory architecture.
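The matching test at the core of a DDM service can be sketched in a few lines; the helper names below are illustrative, and the quadratic baseline stands in for the sequential reference computation that the paper's parallel algorithms accelerate (a sketch, not the authors' implementation):

```python
def rects_intersect(a, b):
    # Two axis-parallel rectangles, given as per-dimension (lo, hi)
    # extents, overlap iff their extents overlap in every dimension:
    # lo1 <= hi2 and lo2 <= hi1 on each axis.
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(a, b))

def brute_force_matches(subscriptions, updates):
    # Quadratic baseline: report every (i, j) pair of subscription and
    # update regions that intersect.
    return [(i, j)
            for i, s in enumerate(subscriptions)
            for j, u in enumerate(updates)
            if rects_intersect(s, u)]
```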

The square root rule for adaptive importance sampling

In adaptive importance sampling, and other contexts, we have unbiased and uncorrelated estimates of a common quantity, and the variance of the k-th estimate is thought to decay like k^(-y) for an unknown rate parameter y ∈ [0, 1]. If we combine the estimates as though y = 1/2, then the resulting estimate attains the optimal variance rate with a constant that is too large by at most 9/8 for any y and any number K of estimates to combine.
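The combination rule the abstract describes can be sketched directly: acting as though y = 1/2 means weighting the k-th estimate proportionally to k^(1/2), since optimal weights would be inversely proportional to the (unknown) variances k^(-y). The function name is illustrative:

```python
import math

def combine_sqrt_rule(estimates):
    # Combine unbiased, uncorrelated estimates assuming their variances
    # decay like k^(-1/2): weight the k-th estimate by sqrt(k), then
    # normalize. Later (lower-variance) estimates receive more weight.
    weights = [math.sqrt(k) for k in range(1, len(estimates) + 1)]
    total = sum(weights)
    return sum(w * e for w, e in zip(weights, estimates)) / total
```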

Generalized Probabilistic Bisection for Stochastic Root-Finding

We consider numerical schemes for root finding of noisy responses by generalizing the Probabilistic Bisection Algorithm (PBA) to the more practical context where the sampling distribution is unknown and location-dependent. As in standard PBA, we rely on a knowledge state for the approximate posterior of the root location. To implement the corresponding Bayesian updating, we also carry out inference of oracle accuracy, namely learning the probability of a correct response. To this end we utilize batched querying in combination with a variety of frequentist and Bayesian estimators based on majority vote, as well as the underlying functional responses, if available. To guide sampling selection, we investigate both Information Directed Sampling and Quantile Sampling. Our numerical experiments show that these strategies perform quite differently; in particular, we demonstrate the efficiency of randomized quantile sampling, which is reminiscent of Thompson sampling. Our work is motivated by the root-finding sub-routine in the pricing of Bermudan financial derivatives, illustrated in the last section of the paper.
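The basic PBA knowledge-state update, on a discretized root domain and with a known probability p of a correct oracle response, can be sketched as follows (the paper's contributions, batched querying and learning p, sit on top of this step; names are illustrative):

```python
def pba_update(density, q_idx, says_right, p):
    # Bayesian update of a discretized posterior over the root location
    # after querying at grid index q_idx. The side the oracle points to
    # is scaled by p, the other side by (1 - p), then renormalized.
    scaled = []
    for i, d in enumerate(density):
        on_right = i > q_idx
        weight = p if on_right == says_right else (1 - p)
        scaled.append(d * weight)
    total = sum(scaled)
    return [d / total for d in scaled]
```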

Statistical abstraction for multi-scale spatio-temporal systems

Modelling spatio-temporal systems exhibiting multi-scale behaviour is a powerful tool in many branches of science, yet it still presents significant challenges. Here we consider a general two-layer (agent-environment) modelling framework, where spatially distributed agents behave according to external inputs and internal computation; this behaviour may include influencing their immediate environment, creating a medium over which agent-agent interaction signals can be transmitted. We propose a novel simulation strategy based on a statistical abstraction of the agent layer, which typically contains the most detailed components of the model and can incur significant computational cost in simulation. The abstraction makes use of Gaussian Processes, a powerful class of non-parametric regression techniques from Bayesian Machine Learning, to estimate the agent's behaviour given the environmental input. We show on two biological case studies how this technique can be used to speed up simulations and provide further insights into model behaviour.

Extending Explicitly Modelled Simulation Debugging Environments with Dynamic Structure

The widespread adoption of Modelling and Simulation (M&S) techniques hinges on the availability of tools supporting each phase in the M&S-based workflow. This includes tasks such as specifying, experimenting with, as well as verifying and validating simulation models. Recently, research efforts have been directed towards providing debugging support for simulation models using the same abstractions as the formalism(s) in which those models were specified. We have previously developed a technique where advanced debugging environments are generated from an explicit behavioral model of the user interface and the simulator. These models are extracted from the code of existing modelling environments and simulators, and instrumented with debugging operations. This technique can be reused for a large family of modelling formalisms. We adapt and apply this approach to accommodate dynamic-structure formalisms. As a representative example, we choose Dynamic-Structure DEVS (DSDEVS), a formalism that includes the characteristics of discrete-event and agent-based modelling paradigms. We observe that to effectively debug DSDEVS models, domain-specific visualizations developed by the modeller should be (re)used for debugging tasks. To this end, we present a modular, reusable approach, which includes an architecture and a workflow. We provide a concrete example of a minimal, but sufficiently complex simulation system modelled in the DSDEVS formalism that can be successfully debugged using the generated debugging environment.

Exact Simulation of Truncated Lévy Subordinator

A truncated Lévy subordinator is a Lévy subordinator in R+ with Lévy measure restricted from above by a certain level b. In this paper, we study the path and distribution properties of this type of process in detail and set up an exact simulation framework based on a marked renewal process. In particular, we focus on a typical specification of the truncated Lévy subordinator, namely the truncated stable process. We establish an exact simulation algorithm for the truncated stable process, which is very accurate and efficient. Compared to the existing algorithm suggested in Chi (2012), our algorithm outperforms it over all parameter settings. Using a distribution decomposition technique, we also develop an exact simulation algorithm for the truncated tempered stable process and other related processes. We illustrate an application of our algorithm as a valuation tool for stochastic hyperbolic discounting, and numerical analyses are provided to demonstrate the accuracy and effectiveness of our methods.

Introduction to the QEST Special Issue

An Adaptive Persistence and Work-stealing Combined Algorithm for Load Balancing on PDES

Load imbalance has always been a crucial challenge in PDES. In the past few years, we have witnessed increased interest in multithreaded PDES on multicore platforms. In multithreaded PDES, migrating LPs and coordinating threads are much more convenient and incur lower overhead, which provides a better setting for load balancing. However, current algorithms, including persistence-based and work-stealing-based schemes, have their drawbacks. On one hand, persistence-based load balancers, which use historical data to predict the future, inevitably make prediction errors. On the other hand, the work-stealing scheme ignores application-specific characteristics, which may limit the potential performance improvement. In this paper, we propose an adaptive persistence and work-stealing combined dynamic load balancing algorithm (APWS). The algorithm detects load imbalance, adaptively rebalances the LP distribution based on an evaluation of whether there are benefits, and uses a greedy lock-free work-stealing scheme to eliminate bias at runtime. We analyze the performance of the APWS algorithm through a series of experiments. Results demonstrate that our APWS algorithm achieves better performance in different scenarios.
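The per-thread queue discipline behind work stealing can be sketched with a double-ended queue: the owner works at one end, and an idle thread steals from the other, minimizing contention. This is only an illustrative sketch (and not lock-free, unlike the paper's scheme); all names are hypothetical:

```python
from collections import deque

class WorkerQueue:
    # Per-thread task deque for a work-stealing scheduler sketch.
    def __init__(self, tasks=()):
        self.dq = deque(tasks)

    def pop_local(self):
        # The owning thread takes work from the front of its own deque.
        return self.dq.popleft() if self.dq else None

    def steal(self):
        # An idle thread steals from the back of a victim's deque,
        # so owner and thief rarely touch the same end.
        return self.dq.pop() if self.dq else None
```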

Enhancing Response Predictions with a Joint Gaussian Process Model

The stochastic Gaussian process model has been widely used in stochastic simulation metamodeling. In practice, the performance of this model can be largely affected by the noise in the observations. In this article, we propose an approach to mitigate the impact of noisy observations by jointly modeling the response of interest with a correlated but less noisy auxiliary response. The main idea is to leverage and learn from the correlated, more accurate response to improve the prediction. To achieve this, we extend the existing deterministic multi-response model for stochastic simulation to jointly model the two responses, use some simplified examples to show the benefit of the proposed model, and investigate the input estimation of this model. Quantile prediction is used to illustrate the efficiency of the proposed approach by jointly modeling the quantile with the expectation, which typically has a less noisy estimator. Several numerical experiments are conducted, and the results show that the joint model can provide better performance. These promising results illustrate the potential of this joint model, especially in situations where the response of interest is much noisier or when observations are scarce. We further propose a two-stage design approach that balances the selection of design points for different responses under different costs, so as to utilize a limited computing budget more efficiently and improve predictions. These designs also show the benefits of the joint model, where cheaper auxiliary response observations can be used to improve predictions of the response of interest.

Omnithermal Perfect Simulation for Multi-server Queues

A number of perfect simulation algorithms for multi-server First Come First Served queues have recently been developed. Those of Connor and Kendall (2015) and Blanchet, Pei, and Sigman (2015) use dominated Coupling from the Past (domCFTP) to sample from the equilibrium distribution of the Kiefer-Wolfowitz workload vector for stable M/G/c and GI/GI/c queues respectively, using random assignment queues as dominating processes. In this note we answer a question posed by Connor and Kendall (2015), by demonstrating how these algorithms may be modified in order to carry out domCFTP simultaneously for a range of values of c (the number of servers).

Fidelity and Performance of State Fast-Forwarding in Microscopic Traffic Simulations

Traditionally, the model time in agent-based simulations is advanced in fixed time steps. However, a purely time-stepped execution is inefficient in situations where the states of individual agents are independent of other agents and thus predictable far into the simulated future. In this work, we propose a method to accelerate microscopic traffic simulations based on identifying independence among agent state updates. Instead of iteratively updating an agent's state throughout a sequence of time steps, a computationally inexpensive "fast-forward" function advances the agent's state to the time of its earliest possible interaction with other agents. To demonstrate the approach in practice, we present an algorithm to efficiently determine intervals of independence in microscopic traffic simulations and derive fast-forward functions for several well-known traffic simulation models. In contrast to existing approaches based on reducing the level of detail, our approach retains the microscopic nature of the simulation. A performance evaluation is carried out for a synthetic scenario and on the road network of the city of Singapore. At low traffic densities, maximum speedup factors of about 2.6 and 1.6 were achieved for the two scenarios, while at the highest considered densities, only few opportunities for fast-forwarding exist. We show that the deviation from a time-stepped execution can be reduced to a minimum by choosing an adequate numerical integration scheme to execute the time-stepped updates. Our validation results show that the overall deviation in vehicle travel times is marginal.
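The idea of replacing a sequence of time-stepped updates with a single analytic jump can be illustrated with the simplest possible case, a vehicle at constant speed over its interval of independence (the paper derives fast-forward functions for well-known car-following models; this sketch is purely illustrative):

```python
def step_update(pos, speed, dt):
    # One fixed-size time step for a constant-speed vehicle.
    return pos + speed * dt

def fast_forward(pos, speed, horizon):
    # Advance the vehicle's state analytically over the whole interval
    # of independence, skipping the intermediate time steps entirely.
    return pos + speed * horizon

# Equivalence check: 100 small steps match one analytic jump here.
pos = 0.0
for _ in range(100):
    pos = step_update(pos, 10.0, 0.1)
```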

An Integrated Method for Simultaneous Calibration and Parameter Selection in Computer Models

For many large and complex computer models, there usually exists a large number of unknown parameters. To improve the computer model's predictive performance for more reliable and confident decision making, two important issues have to be addressed: the first is to calibrate the computer model, and the second is to select the most influential set of parameters. However, these two issues are often addressed independently, which may waste computational effort. In this paper, a Gaussian process based Bayesian method is first proposed to simultaneously calibrate and select parameters in stochastic computer models. The whole procedure is made more efficient by sharing data information between these two steps. To further ease the computational burden, an approximation approach based on a weighted normal approximation is proposed to evaluate the posteriors in the proposed Bayesian method. A sequential approximation procedure is further proposed to improve the approximation accuracy by allocating the sequential design points more appropriately. The efficiency and accuracy of the proposed approaches are compared on a building energy model and a pandemic influenza simulation model.

RCR Report for Analysis of spatio-temporal properties of stochastic systems using TSTL

"Analysis of spatio-temporal properties of stochastic systems using TSTL" proposes a three-valued spatio-temporal logic to enrich the analysis framework for Signal Spatio-Temporal Logic previously developed by the authors. This allows one to reason about the evolution of the satisfaction of properties expressed in a spatio-temporal logic, providing additional insight into the behaviour of the studied system. The approach has been validated on two case studies: a fire spread and evacuation model, and a novel case study on privacy in a communication network. This Replicated Computational Results (RCR) report focuses on the artifact accompanying the paper, consisting of a prototypical tool implementation of the techniques presented in the paper, together with all files necessary to replicate the analysis performed therein. After a few iterations with the authors, I found that the artifact meets the guidelines on availability (Artifact Available) and replicability (Results Replicated). The software was made available in an accessible archival repository, and thanks to the instructions provided in the accompanying webpage it was straightforward to replicate the experimental results from the paper.

Simulation Study to Identify the Characteristics of Markov Chain Properties

Markov models have a long tradition in modeling and simulation of dynamic systems. In this paper, we look at certain properties of a discrete time Markov chain, including entropy, trace, and second-largest eigenvalue, to better understand their role in time series analysis. We simulate a number of possible input signals, fit a discrete time Markov chain, and explore the properties with the help of Sobol indices, partial correlation coefficients, and the Morris elementary effects screening method. Our analysis suggests that the presence of a trend, periodicity, and autocorrelation impacts entropy, trace, and second-largest eigenvalue to varying degrees, but not independently of each other and with Markov chain parameter settings as other influencing factors. The properties of interest are promising for distinguishing time series data, as evidenced for the entropy measure by recent results in the analysis of cell development for Xenopus laevis in cell biology.
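For a two-state chain with transition matrix P = [[1-a, a], [b, 1-b]], the three properties mentioned are available in closed form, which makes the setup easy to illustrate; `chain_properties` is a hypothetical helper, not the authors' code:

```python
import math

def chain_properties(a, b):
    # Closed-form properties of the two-state chain P = [[1-a, a], [b, 1-b]]:
    # trace, second-largest eigenvalue (the eigenvalues are 1 and 1-a-b),
    # and entropy rate under the stationary distribution pi = (b, a)/(a+b).
    trace = (1 - a) + (1 - b)
    second_eig = 1 - a - b
    pi = (b / (a + b), a / (a + b))

    def h(p):
        # Binary entropy in bits.
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    entropy_rate = pi[0] * h(a) + pi[1] * h(b)
    return entropy_rate, trace, second_eig
```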
