
ACM Transactions on Modeling and Computer Simulation (TOMACS)

Latest Articles

Towards Data-Driven Simulation Modeling for Mobile Agent-Based Systems

Simulation models are widely used to study complex systems. Current simulation models are generally handcrafted using expert knowledge... (more)

Ranking and Selection: A New Sequential Bayesian Procedure for Use with Common Random Numbers

We introduce a new sampling scheme for selecting the best alternative out of a given set of systems that are evaluated with respect to their expected performances. We assume that the systems are simulated on a computer and that a joint observation of all systems has a multivariate normal distribution with unknown mean and unknown covariance matrix.... (more)

Fast Random Integer Generation in an Interval

In simulations, probabilistic algorithms, and statistical tests, we often generate random integers in an interval (e.g., [0,s)). For example, random integers in an interval are essential to the Fisher-Yates random shuffle. Consequently, popular languages such as Java, Python, C++, Swift and Go include ranged random integer generation functions as... (more)
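The multiply-and-reject idea behind fast ranged generation can be sketched in a few lines (a simplified Python rendering for illustration; the paper works with a nearly divisionless C implementation and measured benchmarks, which this sketch does not reproduce):

```python
import random

def bounded_rand(rng: random.Random, s: int) -> int:
    """Uniform integer in [0, s) from 64-bit random words.

    The multiply-shift (x * s) >> 64 maps a 64-bit word x into [0, s);
    a rejection step on the low 64 bits of the product removes the bias
    that arises whenever s does not divide 2^64.
    """
    assert 0 < s <= 1 << 64
    mask = (1 << 64) - 1
    x = rng.getrandbits(64)
    m = x * s                       # 128-bit product
    low = m & mask
    if low < s:                     # bias possible: check the rejection threshold
        threshold = (1 << 64) % s   # number of "overfull" low values per bucket
        while low < threshold:
            x = rng.getrandbits(64)
            m = x * s
            low = m & mask
    return m >> 64
```

For most values of s the inner branch, and hence the expensive modulo, is never taken, which is why this approach beats the classical division-based rejection loops found in standard libraries.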

Replicated Computational Results (RCR) Report for “Fast Random Integer Generation in an Interval”

The article “Fast Random Integer Generation in an Interval” by Lemire (2018)... (more)

Visual Analytics to Identify Temporal Patterns and Variability in Simulations from Cellular Automata

Cellular Automata (CA) are discrete simulation models, thus producing spatio-temporal data through... (more)

A Variational Inference-Based Heteroscedastic Gaussian Process Approach for Simulation Metamodeling

In this article, we propose a variational Bayesian inference-based Gaussian process metamodeling... (more)

An Introduction to Multiobjective Simulation Optimization

The multiobjective simulation optimization (MOSO) problem is a nonlinear multiobjective optimization... (more)

NEWS

Reproducibility Board

June 2018

TOMACS has established a reproducibility board. Its members will take an active role in reproducing computational results, evaluating modeling and simulation artifacts, and fine-tuning the overall process.


MS:STAROC: "A Survey of Statistical Model Checking" published

February 2018

In the TOMACS series "State of the Art and Open Challenges" (MS:STAROC), the paper "A Survey of Statistical Model Checking" by Gul Agha and Karl Palmskog has been published.


Special Issue: Toward an Ecosystem of Models and Data

January 2018

A special issue based on the 2017 Informs Simulation Society Research Workshop (I-Sim 2017) will appear in 2019, guest edited by Peter J. Haas and Georgios Theodoropoulos.


Forthcoming Articles
PDES-A: Accelerators for Parallel Discrete Event Simulation implemented on FPGAs

In this paper, we present our experiences implementing a general Parallel Discrete Event Simulation (PDES) accelerator on an FPGA. The accelerator can be specialized to any particular simulation model by defining the event handling code, which is then synthesized into a custom accelerator. The accelerator consists of several event processors that can process events in parallel while maintaining the dependencies between them. It supports optimistic simulation by automatically keeping track of event history and supporting rollbacks. The architecture's local scalability is limited by the communication bandwidth of its different structures; however, it is designed to allow multiple accelerators to be connected together to scale up the simulation. We evaluate the design and explore tradeoffs and optimizations. We show that the accelerator scales to 64 concurrent event processors with high performance, at which point scalability becomes limited by contention on the shared structures within the datapath. To alleviate this, we also develop a new version of the datapath that partitions the state and event space of the simulation but allows the partitions to share the event processors. The new design substantially reduces contention and improves the performance with 64 processors from 49x to 62x relative to a single-processor design. We went through design iterations, first using Verilog and then Chisel, and report observations on the differences between the two languages. PDES-A outperforms the ROSS simulator running on a 12-core Intel Xeon machine by a factor of 3.2x at 20% of the power consumption.

Data-Driven Model-Based Detection of Malicious Insiders via Physical Access Logs

The risk posed by insider threats has usually been approached by analyzing the behavior of users solely in the cyber domain. In this paper, we show the viability of using physical movement logs, collected via a building access control system, together with an understanding of the layout of the building housing the system's assets, to detect malicious insider behavior that manifests itself in the physical domain. In particular, we propose a systematic framework that uses contextual knowledge about the system and its users, learned from historical data gathered from a building access control system, to select suitable models for representing movement behavior. We suggest two different models of movement behavior in this paper and evaluate their ability to represent normal user movement. We then explore the online usage of the learned models, together with knowledge about the layout of the building being monitored, to detect malicious insider behavior. Finally, we show the effectiveness of the developed framework using real-life data traces of user movement in railway transit stations.
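One simple way to realize a learned movement model of the kind described above is a first-order Markov chain over zone transitions, scored by per-transition negative log-likelihood (an illustrative sketch only: the zone names, smoothing, and scoring below are invented for the example and are not the authors' framework):

```python
from collections import defaultdict
import math

def learn_transitions(traces, alpha=1.0):
    """Estimate P(next zone | current zone) from historical movement traces,
    with add-alpha smoothing so unseen transitions keep nonzero probability."""
    counts = defaultdict(lambda: defaultdict(float))
    zones = set()
    for trace in traces:
        zones.update(trace)
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1.0
    probs = {}
    for a in zones:
        total = sum(counts[a].values()) + alpha * len(zones)
        probs[a] = {b: (counts[a][b] + alpha) / total for b in zones}
    return probs

def surprise(probs, trace):
    """Average negative log-likelihood per transition;
    high values flag movement unlike the historical data."""
    nll = sum(-math.log(probs[a][b]) for a, b in zip(trace, trace[1:]))
    return nll / max(1, len(trace) - 1)
```

A trace that takes transitions never seen in the historical logs (say, lobby directly to lab) scores much higher than one that follows the usual routes, giving a threshold-based detector.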

Keddah: Network Evaluation Powered by Simulating Distributed Application Traffic

As a distributed system, Hadoop relies heavily on the network to complete data processing jobs. While the traffic generated by Hadoop jobs is critical for job execution performance, the actual behaviour of Hadoop network traffic is still poorly understood. This lack of understanding greatly complicates research relying on Hadoop workloads. In this paper, we explore Hadoop traffic through empirical traces. We analyse the traffic generated by multiple types of MapReduce jobs with varying input sizes and cluster configuration parameters. We present Keddah, a toolchain for capturing, modelling, and reproducing Hadoop traffic for use with network simulators, to better capture the behaviour of Hadoop. By imitating the Hadoop traffic generation process and accounting for YARN resource allocation, Keddah can be used to create Hadoop traffic workloads, enabling reproducible Hadoop research in more realistic scenarios.

Mean-Payoff Optimization in Continuous-Time Markov Chains with Parametric Alarms

Continuous-time Markov chains with alarms (ACTMCs) allow for alarm events that can be non-exponentially distributed. In parametric ACTMCs, the parameters of alarm-event distributions are not given explicitly and can be subject to parameter synthesis. An algorithm is presented that solves the ε-optimal parameter synthesis problem for parametric ACTMCs with long-run average optimization objectives. The approach is based on reducing the problem to finding long-run average optimal policies in semi-Markov decision processes (semi-MDPs) over a sufficiently discretized parameter (i.e., action) space. Since the set of actions in the discretized semi-MDP can be very large, a straightforward approach based on explicit action-space construction fails to solve even simple instances of the problem. The algorithm presented instead uses enhanced policy iteration on symbolic representations of the action space. Soundness of the algorithm is established for parametric ACTMCs whose alarm-event distributions satisfy four mild assumptions, fulfilled by many kinds of distributions; proofs that these requirements are satisfied are provided for the Dirac, uniform, exponential, Erlang, and Weibull distributions in particular. An experimental implementation shows that the symbolic technique substantially improves the efficiency of the synthesis algorithm and makes it possible to solve instances of realistic size.

Exposing Inter-Process Information for Efficient PDES of Spatial Stochastic Systems

We present a new, efficient approach to parallelizing discrete event simulators for multicore computers, based on exposing and disseminating essential information between processes. We aim specifically at simulation models with a spatial structure, where time intervals between successive events are highly variable and without lower bounds. In Parallel Discrete Event Simulation (PDES), the model is distributed onto parallel logical processes (LPs). A key challenge in PDES is that each process must continuously decide when to pause its local simulation in order to reduce the risk of expensive rollbacks caused by future "delayed" incoming events from other processes. A process could make such decisions optimally if it knew the timestamps of future incoming events; unfortunately, this information is often not available in PDES algorithms. We present an approach to designing efficient PDES algorithms in which an existing natural parallelization of PDES is restructured to expose and disseminate more precise information about future incoming events to each LP. We have implemented our approach in a parallel simulator for spatially extended Markovian processes, intended for simulating, e.g., chemical reactions and biological and epidemiological processes. On 32 cores, our implementation exhibits speedup that significantly outweighs the overhead incurred by the refinement. We also show that our resulting simulator is superior in performance to existing simulators, relative to an efficient sequential implementation.

Infinite Swapping using IID Samples

We propose a new method for estimating rare event probabilities when independent samples are available. It is assumed that the underlying probability measures satisfy a large deviations principle with scaling parameter ε, which we call temperature. We show that by combining samples at different temperatures, one can construct an estimator with greatly reduced variance. Although the method as presented here is not as broadly applicable as other rare event simulation methods, such as splitting or importance sampling, it does not require any problem-dependent constructions that can lead to worse performance than ordinary Monte Carlo if done incorrectly.

Exact-Differential Simulation: Differential Processing of Large-Scale Discrete Event Simulations

Using computer simulation to analyze large-scale discrete event systems requires repeated executions with various scenarios or parameters. Such repeated executions can induce significant redundancy in event processing when the modification from a prior scenario to a new one is relatively minor and the altered scenario influences only a small part of the simulation. For example, in a city-scale traffic simulation, an altered scenario that blocks one junction may affect only a small part of the city for a considerable length of time, yet traditional simulation approaches would still repeat the simulation for the whole city even when the changes are minor. In this paper, we propose a new redundancy reduction technique for large-scale discrete event simulations, called exact-differential simulation, which simulates only the altered portions of scenarios and their influences in repeated executions while still achieving the same results as re-executing the entire simulation. The paper presents the main concepts of exact-differential simulation, the design of its algorithm, and a method to build an exact-differential simulation middleware that supports multiple discrete event simulation applications. We also evaluate our approach using two case studies: the PHOLD benchmark and a traffic simulation of Tokyo.

New Performance Modeling Methods for Parallel Data Processing Applications

Predicting the performance of an application running on parallel computing platforms is increasingly important because of its influence on development time and resource management. However, predicting performance with respect to parallel processes is complex for iterative, multi-stage applications. This research proposes a performance approximation approach, FiM, that predicts the calculation time (FiM-Cal) and communication time (FiM-Com) of an application running on a master-compute distributed framework. FiM-Cal consists of two coupled components: (1) a stochastic Markov model that captures non-deterministic runtime, which often depends on parallel resources such as the number of processes, and (2) a machine learning model that extrapolates the parameters for calibrating the Markov model when application parameters, such as the dataset, change. Along with parallel calculation time, these platforms also incur data transfer time for communication between master and compute nodes; FiM-Com uses a simulation queuing model to quickly estimate this communication time. Our modeling approach considers different design choices along multiple dimensions, namely (i) process-level parallelism, (ii) distribution of cores on a multi-processor platform, (iii) application-related parameters, and (iv) characteristics of datasets. The major contribution of our prediction approach is that FiM provides accurate predictions of parallel computation time for datasets much larger than the training datasets.

A Data Assimilation Framework for Discrete Event Simulations

Discrete event simulation (DES) is traditionally used as an offline tool to help users analyze complex systems. As real-time sensor data becomes increasingly available, there is growing interest in assimilating real-time data into DES to achieve online simulation in support of real-time decision making. This paper presents a data assimilation framework that works with DES models and proposes solutions to the unique challenges of data assimilation for DES. A tutorial example of discrete event road traffic simulation is developed to demonstrate the framework as well as the principles of data assimilation in general. The paper contributes to the DES community a new data assimilation framework and a concrete example that helps readers grasp the details of data assimilation for DES.
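A common workhorse for assimilating sensor data into a running simulation is the bootstrap particle filter: advance an ensemble of simulation states, weight them by how well they explain the observation, and resample. The sketch below illustrates that cycle on an invented toy traffic-queue model; it is not the paper's framework, and the dynamics and sensor model are assumptions made for the example:

```python
import random

def particle_filter_step(particles, observation, step_model, likelihood, rng):
    """One bootstrap-filter cycle: advance each particle through the
    simulation model, weight by the sensor likelihood, then resample."""
    advanced = [step_model(p, rng) for p in particles]
    weights = [likelihood(observation, p) for p in advanced]
    total = sum(weights)
    if total == 0:                    # all particles inconsistent: keep them
        return advanced
    return rng.choices(advanced, weights=weights, k=len(particles))

def step_model(q, rng):
    """Toy queue dynamics (invented): net change of -1..+2 vehicles."""
    return max(0, q + rng.choice([-1, 0, 1, 2]))

def likelihood(obs, q):
    """Toy sensor model (invented): counts are trusted to within +/-1."""
    return 1.0 if abs(obs - q) <= 1 else 0.01
```

Feeding in a stream of observations pulls the ensemble toward states consistent with the sensors, so the ensemble mean tracks the observed queue length even though each particle evolves by the simulation model alone.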

Interval Markov Decision Processes with Multiple Objectives: from Robust Strategies to Pareto Curves

Accurately modelling a real-world system with probabilistic behavior is a difficult task: sensor noise and statistical estimation, among other sources of error, make exact probability values impossible to obtain. In this paper, we consider Interval Markov Decision Processes (IMDPs), which generalise classical MDPs by having interval-valued transition probabilities. They provide a powerful modelling tool for probabilistic systems with an additional variation or uncertainty that prevents knowledge of the exact transition probabilities. We investigate the problem of robust multi-objective synthesis for IMDPs and the analysis of its Pareto curve. We study how to find a robust (randomized) strategy that satisfies multiple objectives involving rewards, reachability, and omega-regular properties against all possible resolutions of the transition probability uncertainties, as well as how to generate an approximate Pareto curve providing an explicit view of the trade-offs between the objectives. We show that the multi-objective synthesis problem is PSPACE-hard, and we provide a value-iteration-based decision algorithm to approximate the Pareto set of achievable points. Finally, we demonstrate the practical effectiveness of our approaches by applying them to several case studies using a prototype tool.
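The core primitive behind robustness "against all possible resolutions" of interval probabilities is an adversarial Bellman backup: within the intervals, an opponent shifts as much probability mass as possible toward low-value successors. A minimal sketch of that single backup for one state (an illustrative greedy resolution, not the paper's full synthesis algorithm; state names and intervals are invented):

```python
def worst_case_backup(values, intervals):
    """Adversarial resolution of interval transition probabilities:
    pick a distribution within the intervals minimizing expected value.

    values:    {successor: value}
    intervals: {successor: (low, high)}, assuming sum(low) <= 1 <= sum(high)
    """
    # Start every successor at its lower bound, then hand out the
    # remaining mass to successors in ascending order of value.
    order = sorted(intervals, key=lambda s: values[s])
    prob = {s: lo for s, (lo, _hi) in intervals.items()}
    mass = 1.0 - sum(prob.values())
    for s in order:
        lo, hi = intervals[s]
        bump = min(hi - lo, mass)
        prob[s] += bump
        mass -= bump
    return sum(prob[s] * values[s] for s in prob)
```

Because the minimizing distribution is extremal, it can be found by this sort-and-fill pass rather than a linear program, which is what makes robust value iteration over IMDPs tractable per step.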

Sequential Schemes for Frequentist Estimation of Properties in Statistical Model Checking

Statistical Model Checking (SMC) is an approximate verification method that overcomes the state space explosion problem for probabilistic systems through Monte Carlo simulation. Simulation can, however, be costly if many samples are required. It is thus necessary to implement efficient algorithms that reduce the sample size while preserving precision and accuracy. In the literature, sequential schemes have been proposed for estimating the probability of a property based on a predefined confidence and an absolute or relative error. Nevertheless, these algorithms remain conservative and may require huge sample sizes when the required precision is demanding. In this article, we compare several useful bounds and sequential methods, and we propose outperforming, rigorous alternative schemes based on Massart bounds and robust confidence intervals. Our theoretical and empirical analyses show that our proposals reduce the sample size while providing guarantees on the error bounds.
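The fixed-size baseline that such sequential schemes improve on is the a-priori sample size from the Chernoff-Hoeffding (Okamoto) bound: n ≥ ln(2/δ) / (2ε²) samples guarantee absolute error ε with confidence 1 − δ, independent of the unknown probability. A minimal sketch (the Massart-based schemes in the article are adaptive and need fewer samples; this is only the conservative baseline):

```python
import math
import random

def okamoto_sample_size(eps: float, delta: float) -> int:
    """Samples needed so that P(|estimate - p| > eps) <= delta,
    by the Chernoff-Hoeffding (Okamoto) bound: n >= ln(2/delta) / (2 eps^2)."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))

def estimate_probability(sample_property, eps, delta, rng):
    """Fixed-size Monte Carlo estimate of P(property holds on a random run).

    sample_property(rng) -> bool stands in for executing one simulation
    run and checking the property on the resulting trace.
    """
    n = okamoto_sample_size(eps, delta)
    hits = sum(1 for _ in range(n) if sample_property(rng))
    return hits / n
```

For ε = 0.01 and δ = 0.05 this prescribes 18,445 runs regardless of how easy the property is, which illustrates why sequential schemes that stop early are worth the extra machinery.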

