ACM Transactions on Modeling and Computer Simulation (TOMACS)

Latest Articles

Introduction to the Special Issue on Quest 2017

RCR Report for Analysis of Spatiotemporal Properties of Stochastic Systems Using TSTL

“Analysis of Spatiotemporal Properties of Stochastic Systems Using TSTL” [1] proposes a three-valued spatiotemporal logic to...

Integrating Simulation and Numerical Analysis in the Evaluation of Generalized Stochastic Petri Nets

The standard existing performance evaluation methods for discrete-state stochastic models such as...

Sequential Schemes for Frequentist Estimation of Properties in Statistical Model Checking

Statistical Model Checking (SMC) is an approximate verification method that overcomes the state space explosion problem for probabilistic systems by...

Data-driven Model-based Detection of Malicious Insiders via Physical Access Logs

The risk posed by insider threats has usually been approached by analyzing the behavior of users solely in the cyber domain. In this article, we show...

Interval Markov Decision Processes with Multiple Objectives: From Robust Strategies to Pareto Curves

Accurate modelling of a real-world system with probabilistic behaviour is a difficult task. Sensor noise and statistical estimations, among other...


New TOMACS Editor in Chief

August/September 2019

We are pleased to report that the Editor-In-Chief search for ACM Transactions on Modeling and Computer Simulation (TOMACS) has been completed. The new Editor-In-Chief of ACM TOMACS will be Francesco Quaglia.



Reproducibility Board

June 2018

TOMACS has established a reproducibility board. Its members will take an active role in reproducing computational results, in evaluating modeling and simulation artifacts, and in fine-tuning the overall process.


MS:STAROC "A Survey of Statistical Model Checking" published

February 2018

In the TOMACS series "State of the Art and Open Challenges" (MS:STAROC), the paper "A Survey of Statistical Model Checking" by Gul Agha and Karl Palmskog has been published.


Special Issue: Toward an Ecosystem of Models and Data

January 2018

A special issue based on the 2017 Informs Simulation Society Research Workshop (I-Sim 2017) will appear in 2019, guest edited by Peter J. Haas and Georgios Theodoropoulos.


Forthcoming Articles
Parallel Data Distribution Management on Shared-Memory Multiprocessors

The problem of identifying intersections between two sets of d-dimensional axis-parallel rectangles appears frequently in the context of agent-based simulation studies. For this reason, the High Level Architecture (HLA) specification, a standard framework for interoperability among simulators, includes a Data Distribution Management (DDM) service whose responsibility is to report all intersections between a set of subscription and update regions. The algorithms at the core of the DDM service are CPU-intensive, and could greatly benefit from the large computing power of modern multi-core processors. In this paper we propose two parallel solutions to the DDM problem that can operate effectively on shared-memory multiprocessors. The first solution is based on a data structure (the Interval Tree) that allows concurrent computation of intersections between subscription and update regions. The second solution is based on a novel parallel extension of the Sort-Based Matching algorithm, whose sequential version is considered among the most efficient solutions to the DDM problem. An extensive experimental evaluation of the proposed algorithms confirms their effectiveness in taking advantage of multiple execution units in a shared-memory architecture.
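
The sort-based matching idea is easy to convey in one dimension: sort all region endpoints once, then sweep, matching each region that opens against the regions of the other kind currently open. The following is a minimal sequential sketch of that idea (function and variable names are ours), not the paper's parallel d-dimensional extension:

```python
def sort_based_matching(subscriptions, updates):
    """Report all overlapping (subscription, update) index pairs by
    sweeping the sorted endpoint list once. Sequential 1-D sketch."""
    events = []
    for i, (lo, hi) in enumerate(subscriptions):
        events += [(lo, 0, "S", i), (hi, 1, "S", i)]
    for j, (lo, hi) in enumerate(updates):
        events += [(lo, 0, "U", j), (hi, 1, "U", j)]
    events.sort()                      # opens (0) sort before closes (1)
    open_s, open_u, matches = set(), set(), set()
    for _, kind, side, idx in events:
        if kind == 0:                  # a region begins: match it against
            if side == "S":            # every open region of the other kind
                matches.update((idx, u) for u in open_u)
                open_s.add(idx)
            else:
                matches.update((s, idx) for s in open_s)
                open_u.add(idx)
        else:                          # a region ends
            (open_s if side == "S" else open_u).discard(idx)
    return matches
```

Because open events sort before close events at the same coordinate, touching intervals are reported as overlapping; one sort plus one linear sweep replaces the quadratic all-pairs test.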

The square root rule for adaptive importance sampling

In adaptive importance sampling, and in other contexts, we have unbiased and uncorrelated estimates of a common quantity, and the variance of the k-th estimate is thought to decay like k^(-y) for an unknown rate parameter y ∈ [0, 1]. If we combine the estimates as though y = 1/2, then the resulting estimate attains the optimal variance rate with a constant that is too large by at most 9/8, for any y and any number K of estimates to combine.
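
The claimed bound is easy to probe numerically for the weighted-average estimator. The sketch below assumes independent estimates with variance proportional to k^(-y) and compares the square-root-rule weights w_k = k^(1/2) against the oracle weights w_k = k^y (function names are ours):

```python
import numpy as np

def combined_variance(weight_fn, y, K):
    """Variance of the weighted average of K independent unbiased
    estimates whose k-th variance is proportional to k**(-y)."""
    k = np.arange(1, K + 1)
    w = weight_fn(k)
    return np.sum(w**2 * k**(-y)) / np.sum(w)**2

K = 200
ratios = []
for y in np.linspace(0.0, 1.0, 21):
    v_sqrt = combined_variance(np.sqrt, y, K)        # square root rule: w_k = k**0.5
    v_opt = combined_variance(lambda k: k**y, y, K)  # oracle weights: w_k = k**y
    ratios.append(v_sqrt / v_opt)

worst = max(ratios)  # stays below 9/8 = 1.125 for every y in [0, 1]
```

The worst ratio is approached near the endpoints y = 0 and y = 1, consistent with the 9/8 constant in the abstract.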

Generalized Probabilistic Bisection for Stochastic Root-Finding

We consider numerical schemes for root finding of noisy responses, generalizing the Probabilistic Bisection Algorithm (PBA) to the more practical context where the sampling distribution is unknown and location-dependent. As in standard PBA, we rely on a knowledge state for the approximate posterior of the root location. To implement the corresponding Bayesian updating, we also carry out inference of oracle accuracy, namely learning the probability of a correct response. To this end we utilize batched querying in combination with a variety of frequentist and Bayesian estimators based on majority vote, as well as on the underlying functional responses, if available. For guiding sampling selection we investigate both Information Directed sampling and Quantile sampling. Our numerical experiments show that these strategies perform quite differently; in particular, we demonstrate the efficiency of randomized quantile sampling, which is reminiscent of Thompson sampling. Our work is motivated by the root-finding sub-routine in the pricing of Bermudan financial derivatives, illustrated in the last section of the paper.
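
For orientation, the classical PBA that the paper generalizes can be sketched in a few lines: keep a posterior over the root, query the posterior median, and tilt the posterior toward the side a noisy oracle indicates. This assumes a known oracle accuracy p (the paper's contribution is precisely to learn p); the root location, p, and grid below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def pba(oracle, p, grid, n_queries):
    """Standard PBA with known oracle accuracy p > 1/2; the posterior
    over the root location is kept on a discrete grid."""
    post = np.full(len(grid), 1.0 / len(grid))
    for _ in range(n_queries):
        x = grid[np.searchsorted(np.cumsum(post), 0.5)]  # posterior median
        says_right = oracle(x)          # claim: "the root lies right of x"
        right = grid > x
        # Bayes update: multiply the indicated side by p, the other by 1-p
        post = np.where(right == says_right, p * post, (1 - p) * post)
        post /= post.sum()
    return grid[np.argmax(post)]

# noisy sign oracle for g(x) = x - 0.3, correct with probability 0.7
root, p = 0.3, 0.7
oracle = lambda x: (x < root) == (rng.random() < p)
estimate = pba(oracle, p, np.linspace(0.0, 1.0, 1001), 300)
```

When p is unknown and varies with x, as in the paper, the update probability itself must be estimated, e.g. from batched majority votes at each query point.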

Exact Simulation of Truncated Lévy Subordinator

A truncated Lévy subordinator is a Lévy subordinator on R+ with Lévy measure restricted from above by a certain level b. In this paper, we study the path and distribution properties of this type of process in detail and set up an exact simulation framework based on a marked renewal process. In particular, we focus on a typical specification of the truncated Lévy subordinator, namely the truncated stable process. We establish an exact simulation algorithm for the truncated stable process, which is very accurate and efficient. Our algorithm outperforms the existing algorithm suggested in Chi (2012) over all parameter settings. Using a distribution decomposition technique, we also develop an exact simulation algorithm for the truncated tempered stable process and other related processes. We illustrate an application of our algorithm as a valuation tool for stochastic hyperbolic discounting, and numerical analyses are provided to demonstrate the accuracy and effectiveness of our methods.

An Adaptive Persistence and Work-stealing Combined Algorithm for Load Balancing on PDES

Load imbalance has always been a crucial challenge in PDES. In the past few years, we have witnessed an increased interest in multithreaded PDES on multicore platforms. In multithreaded PDES, migrating LPs and coordinating threads are much more convenient and cause lower overhead, which provides a better setting for load balancing. However, current algorithms, including persistence-based and work-stealing-based schemes, have their drawbacks. On one hand, persistence-based load balancers, which use historical data to predict the future, will inevitably make errors. On the other hand, the work-stealing scheme ignores application-related characteristics, which may limit the potential performance improvement. In this paper, we propose an adaptive persistence and work-stealing combined dynamic load balancing algorithm (APWS). The algorithm detects load imbalance, adaptively rebalances the LP distribution based on an evaluation of whether there are benefits, and uses a greedy lock-free work-stealing scheme to eliminate bias at runtime. We analyze the performance of the APWS algorithm through a series of experiments. Results demonstrate that our APWS algorithm achieves better performance in different scenarios.

Enhancing Response Predictions with a Joint Gaussian Process Model

The stochastic Gaussian process model has been widely used in stochastic simulation metamodeling. In practice, the performance of this model can be largely affected by the noise of the observations. In this article, we propose an approach to mitigate the impact of noisy observations by jointly modeling the response of interest with a correlated but less noisy auxiliary response. The main idea is to leverage and learn from the correlated and more accurate response to improve the prediction. To achieve this, we extend the existing deterministic multi-response model for stochastic simulation to jointly model the two responses, use some simplified examples to show the benefit of the proposed model, and investigate the input estimation of this model. Quantile prediction is used to illustrate the efficiency of the proposed approach, by jointly modeling the quantile with the expectation, which typically has a less noisy estimator. Several numerical examples are conducted and the results show that the joint model can provide better performance. These promising results illustrate the potential of this joint model, especially in situations where the response of interest is much noisier or when observations are scarce. We further propose a two-stage design approach that balances the selection of design points for the different responses under different costs, to more efficiently utilize a limited computing budget and improve predictions. These designs also show the benefits of the joint model, where cheaper auxiliary response observations can be used to improve predictions of the response of interest.

Omnithermal Perfect Simulation for Multi-server Queues

A number of perfect simulation algorithms for multi-server First Come First Served queues have recently been developed. Those of Connor and Kendall (2015) and Blanchet, Pei, and Sigman (2015) use dominated Coupling from the Past (domCFTP) to sample from the equilibrium distribution of the Kiefer-Wolfowitz workload vector for stable M/G/c and GI/GI/c queues respectively, using random assignment queues as dominating processes. In this note we answer a question posed by Connor and Kendall (2015), by demonstrating how these algorithms may be modified in order to carry out domCFTP simultaneously for a range of values of c (the number of servers).

ChunkedTejas: A Chunking-based Approach to Parallelizing a Trace-Driven Architectural Simulator

Research in computer architecture is commonly done using software simulators. The simulation speed of such simulators is therefore critical to the rate of progress in research. One of the less commonly used ways to increase simulation speed is to decompose the benchmark's execution into contiguous chunks of instructions and simulate these chunks in parallel. Two issues arise from this approach. The first is correctness, as each chunk (other than the first) starts from an incorrect state. The second is performance: the decomposition must be done in such a way that the simulation of all chunks finishes at nearly the same time, allowing for maximum speedup. In this paper, we study these two aspects and compare three different chunking approaches (two of them novel) and two warmup approaches (one of them novel). We demonstrate that average speedups of up to 5.39X can be achieved (while employing 8 parallel instances), while constraining the error to 0.2% on average.
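
The decomposition itself is simple to illustrate. Below is a generic sketch of equal-size chunking with a warmup prefix (our own illustration, not Tejas's actual partitioning or warmup scheme): every chunk after the first re-simulates a warmup window before its measured region, so that caches and predictors start from an approximately correct state while statistics from the warmup are discarded.

```python
def make_chunks(trace_len, n_workers, warmup):
    """Split instruction indices [0, trace_len) into contiguous chunks.
    Each chunk except the first is preceded by a warmup window that is
    simulated only to reconstruct an approximate start state."""
    bounds = [round(i * trace_len / n_workers) for i in range(n_workers + 1)]
    chunks = []
    for i, (start, end) in enumerate(zip(bounds, bounds[1:])):
        warm_start = start if i == 0 else max(0, start - warmup)
        chunks.append((warm_start, start, end))  # (warmup from, measure from, to)
    return chunks
```

The measured regions tile the trace exactly once; the warmup windows overlap the preceding chunk and represent the extra work traded for accuracy. Load-aware chunking, as studied in the paper, would replace the equal-size bounds with ones balancing expected simulation time.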

Inhomogeneous CTMC Models of Birth-and-Death Systems with Steady-State Detection

Time-inhomogeneous queuing models play an important role in service systems modeling. Although the transient solutions of the corresponding continuous-time Markov chains are more precise than methods using stationary approximations, most authors consider their computational costs prohibitive for practical application. This paper presents a new variant of the uniformization algorithm which utilizes a modified steady-state detection technique. The presented algorithm is applicable to continuous-time Markov chains whose stationary solution can be efficiently calculated in advance, particularly for many practically applicable birth-and-death models of limited size. It significantly improves computational efficiency through an early prediction of the occurrence of a steady state, using the properties of the convergence function of an embedded discrete-time Markov chain. Moreover, in the case of an inhomogeneous continuous-time Markov chain solved in consecutive time steps, the modification guarantees that the error of the computed probability distribution vector is strictly bounded at each point of the considered time interval.
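
A homogeneous special case conveys the idea: uniformization writes p(t) = p(0)exp(Qt) as a Poisson mixture of powers of an embedded DTMC, and steady-state detection lumps the remaining Poisson tail once successive DTMC iterates stop changing. The following is a bare-bones sketch of that mechanism (the tolerances and the birth-and-death example are ours), not the paper's refined error-bounded variant:

```python
import numpy as np

def uniformization(Q, p0, t, tol=1e-10, ss_tol=1e-12):
    """Transient distribution p0 @ expm(Q t) via uniformization,
    cutting the series short once a steady state is detected."""
    rate = 1.05 * max(-np.diag(Q))      # uniformization rate > max exit rate
    P = np.eye(len(p0)) + Q / rate      # embedded DTMC kernel
    phi = p0.astype(float)              # phi_n = p0 @ P^n
    w = np.exp(-rate * t)               # Poisson(rate*t) weight at n = 0
    acc, n = 0.0, 0
    result = np.zeros_like(phi)
    while acc < 1.0 - tol:
        result += w * phi
        acc += w
        nxt = phi @ P
        if np.abs(nxt - phi).max() < ss_tol:
            result += (1.0 - acc) * nxt  # steady state detected: assign the
            break                        # whole remaining Poisson mass to it
        phi = nxt
        n += 1
        w *= rate * t / n
    return result / result.sum()

# birth-and-death example: M/M/1 queue with capacity 5
lam, mu, c = 1.0, 5.0, 5
Q = np.zeros((c + 1, c + 1))
for i in range(c + 1):
    if i < c:
        Q[i, i + 1] = lam
    if i > 0:
        Q[i, i - 1] = mu
    Q[i, i] = -Q[i].sum()
pt = uniformization(Q, np.eye(c + 1)[0], t=50.0)
```

For large rate*t, the detection fires after only a few dozen DTMC iterations, far fewer than the hundreds of series terms a plain uniformization would need.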

Fidelity and Performance of State Fast-Forwarding in Microscopic Traffic Simulations

Traditionally, the model time in agent-based simulations is advanced in fixed time steps. However, a purely time-stepped execution is inefficient in situations where the states of individual agents are independent of other agents and thus predictable far into the simulated future. In this work, we propose a method to accelerate microscopic traffic simulations based on identifying independence among agent state updates. Instead of iteratively updating an agent's state throughout a sequence of time steps, a computationally inexpensive "fast-forward" function advances the agent's state to the time of its earliest possible interaction with other agents. To demonstrate the approach in practice, we present an algorithm to efficiently determine intervals of independence in microscopic traffic simulations and derive fast-forward functions for several well-known traffic simulation models. In contrast to existing approaches based on reducing the level of detail, our approach retains the microscopic nature of the simulation. A performance evaluation is carried out for a synthetic scenario and on the road network of the city of Singapore. At low traffic densities, maximum speedup factors of about 2.6 and 1.6 were achieved for the two scenarios, while at the highest considered densities, only few opportunities for fast-forwarding exist. We show that the deviation from a time-stepped execution can be reduced to a minimum by choosing an adequate numerical integration scheme to execute the time-stepped updates. Our validation results show that the overall deviation in vehicle travel times is marginal.
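
The core idea, and why the numerical integration scheme matters, can be shown on a hypothetical free-driving model: a vehicle accelerates at a until it reaches v_max, stepped with explicit Euler. The fast-forward function below is a closed form constructed to reproduce the stepped updates exactly; both the model and the functions are our illustration, not the paper's derivations:

```python
import math

def euler_steps(x, v, a, v_max, dt, n):
    """Reference time-stepped update: n explicit-Euler steps."""
    for _ in range(n):
        x += v * dt                      # position uses the old speed
        v = min(v_max, v + a * dt)
    return x, v

def fast_forward(x, v, a, v_max, dt, n):
    """Closed-form advance over n steps, matching the explicit-Euler
    scheme above term by term."""
    m = max(0, math.ceil((v_max - v) / (a * dt)))  # steps until speed caps
    k = min(n, m)
    x += dt * (k * v + a * dt * k * (k - 1) / 2)   # accelerating phase
    x += (n - k) * v_max * dt                      # cruising phase
    return x, min(v_max, v + k * a * dt)
```

Had the closed form integrated the continuous dynamics instead (e.g. x + v*t + a*t^2/2), it would deviate from the stepped trajectory, which is the abstract's point about choosing an adequate integration scheme.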

An Integrated Method for Simultaneous Calibration and Parameter Selection in Computer Models

For many large and complex computer models, there usually exists a large number of unknown parameters. To improve the computer model's predictive performance for more reliable and confident decision making, two important issues have to be addressed. The first is to calibrate the computer model and the second is to select the most influential set of parameters. However, these two issues are often addressed independently, which may waste computational effort. In this paper, a Gaussian process based Bayesian method is first proposed to simultaneously calibrate and select parameters in stochastic computer models. The whole procedure can be made more efficient by sharing data information between these two steps. To further ease the computational burden, an approximation approach based on a weighted normal approximation is proposed to evaluate the posteriors in the proposed Bayesian method. A sequential approximation procedure is further proposed to improve the approximation accuracy by allocating the sequential design points more appropriately. The efficiency and accuracy of the proposed approaches are compared on a building energy model and a pandemic influenza simulation model.

A Distributed Shared-Memory Middleware for Speculative Parallel Discrete Event Simulation

The advent and wide diffusion of multi-core machines has pushed research in the field of Parallel Discrete Event Simulation (PDES) towards new programming paradigms. These new paradigms simplify the way models are coded, via the exploitation of shared memory. On the other side, the advent of Cloud computing, and the possibility to group together many (low-cost) virtual machines to form a distributed-memory cluster capable of hosting simulation applications, has raised the need to bridge shared-memory programming and seamless distributed execution. In this article we present the design of a distributed middleware that transparently allows a PDES application coded for shared memory to run on clusters of (Cloud) resources. The shared-memory programming paradigm we tackle is Event & Cross State (ECS). It allows cross-simulation-object access by event handlers, thus representing a powerful tool for the development of various types of PDES applications. With our ECS middleware we can take advantage of the expressiveness of the ECS programming model without giving up the possibility to deploy PDES applications in the Cloud. We also provide data for an experimental assessment of our middleware architecture, which has been integrated into the open source ROOT-Sim speculative PDES platform.

Simulation Study to Identify the Characteristics of Markov Chain Properties

Markov models have a long tradition in the modeling and simulation of dynamic systems. In this paper, we look at certain properties of a discrete-time Markov chain, including entropy, trace, and the second-largest eigenvalue, to better understand their role in time series analysis. We simulate a number of possible input signals, fit a discrete-time Markov chain, and explore the properties with the help of Sobol indices, partial correlation coefficients, and the Morris elementary-effects screening method. Our analysis suggests that the presence of a trend, periodicity, and autocorrelation impacts entropy, trace, and the second-largest eigenvalue to varying degrees, but not independently of each other and with the Markov chain parameter settings as further influencing factors. The properties of interest are promising for distinguishing time series data, as evidenced for the entropy measure by recent results in the analysis of cell development for Xenopus laevis in cell biology.
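
The three properties are straightforward to compute once a chain has been fitted from data. The sketch below (the count-based estimator and the two-state example are ours, not the paper's experimental setup) fits a maximum-likelihood transition matrix and reports entropy rate, trace, and the second-largest eigenvalue modulus:

```python
import numpy as np

def fit_dtmc(seq, n_states):
    """Maximum-likelihood transition matrix from an observed state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq, seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows == 0, 1.0, rows)

def chain_properties(P):
    """Entropy rate (bits), trace, and second-largest eigenvalue modulus."""
    pi = np.full(len(P), 1.0 / len(P))
    for _ in range(10_000):              # power iteration for the
        pi = pi @ P                      # stationary distribution
    logP = np.where(P > 0, np.log2(np.where(P > 0, P, 1.0)), 0.0)
    entropy = -np.sum(pi[:, None] * P * logP)
    lam2 = np.sort(np.abs(np.linalg.eigvals(P)))[-2]
    return entropy, np.trace(P), lam2

rng = np.random.default_rng(1)
P_true = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
seq = [0]
for _ in range(20_000):
    seq.append(rng.choice(2, p=P_true[seq[-1]]))
entropy, trace, lam2 = chain_properties(fit_dtmc(seq, 2))
```

For this sticky two-state chain the true values are an entropy rate of about 0.55 bits, trace 1.7, and second eigenvalue 0.7, which the fitted chain recovers up to sampling noise.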
