
ACM Transactions on Modeling and Computer Simulation (TOMACS)

Latest Articles

On Automated Memoization in the Field of Simulation Parameter Studies

Processes in computer simulations tend to be highly repetitive. In particular, parameter studies further exacerbate the situation, as the same model is... (more)
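
As a rough illustration of the idea (not the paper's mechanism), the sketch below memoizes a hypothetical deterministic simulation kernel on its parameter tuple, so repeated points in a parameter sweep are served from a cache instead of being re-simulated; simulate() and its parameters are invented.

from functools import lru_cache

@lru_cache(maxsize=None)
def simulate(arrival_rate, service_rate, horizon):
    # Hypothetical deterministic simulation kernel; identical parameter
    # tuples return the cached result instead of re-running the model.
    queue, clock, served = 0.0, 0.0, 0.0
    while clock < horizon:
        clock += 1.0 / arrival_rate
        queue += 1
        served += min(queue, service_rate)
        queue -= min(queue, service_rate)
    return served

# Parameter study: the third call repeats the first and hits the cache.
for rate in (1.0, 2.0, 1.0):
    simulate(rate, 1.5, 100.0)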

Combining Simulation and Emulation Systems for Smart Grid Planning and Evaluation

Software-defined networking (SDN) enables efficient network management. As the technology matures, utilities are looking to integrate those benefits... (more)

A Role-Dependent Data-Driven Approach for High-Density Crowd Behavior Modeling

In this article, we propose a role-dependent (RD) data-driven modeling approach to simulate pedestrians’ motion in high-density scenes. It is... (more)

Modeling Large-Scale Slim Fly Networks Using Parallel Discrete-Event Simulation

As supercomputers approach exascale performance, the increased number of processors translates to an increased demand on the underlying network... (more)

NeMo: A Massively Parallel Discrete-Event Simulation Model for Neuromorphic Architectures

Neuromorphic computing is a broad category of non–von Neumann architectures that mimic biological nervous systems using hardware. Current research shows that this class of computing can execute data classification algorithms using only a tiny fraction of the power conventional CPUs require. This raises the larger research question: How... (more)

NEWS

Reproducibility Board

June 2018

TOMACS has established a reproducibility board. Its members will take an active role in reproducing computational results and in evaluating modeling and simulation artifacts, as well as in fine-tuning the overall process.


MS:STAROC: "A Survey of Statistical Model Checking" published

February 2018

In the TOMACS series "State of the Art and Open Challenges" (MS:STAROC), the paper "A Survey of Statistical Model Checking" by Gul Agha and Karl Palmskog has been published.


Special Issue: Toward an Ecosystem of Models and Data

January 2018

A special issue based on the 2017 INFORMS Simulation Society Research Workshop (I-Sim 2017) will appear in 2019, guest edited by Peter J. Haas and Georgios Theodoropoulos.


Forthcoming Articles
PDES-A: Accelerators for Parallel Discrete Event Simulation implemented on FPGAs

In this paper, we present experiences implementing a general Parallel Discrete Event Simulation (PDES) accelerator on an FPGA. The accelerator can be specialized to any particular simulation model by defining the event handling code, which is then synthesized into a custom accelerator. The accelerator consists of several event processors that can process events in parallel while maintaining the dependencies between them. The accelerator supports optimistic simulation by automatically keeping track of event history and supporting rollbacks. The architecture's local scalability is limited by the communication bandwidth of different structures. However, it is designed to allow multiple accelerators to be connected together to scale up the simulation. We evaluate the design and explore tradeoffs and optimizations. We show that the accelerator can scale to 64 concurrent event processors with high performance. At that point, scalability becomes limited by contention on the shared structures within the datapath. To alleviate this, we also develop a new version of the datapath that partitions the state and event space of the simulation but allows the partitions to share the event processors. The new design substantially reduces contention and improves the performance with 64 processors from 49x to 62x relative to a single-processor design. We went through design iterations, first using Verilog and then using Chisel, and report observations on the differences between these two languages. PDES-A outperforms the ROSS simulator running on a 12-core Intel Xeon machine by a factor of 3.2x at 20% of the power consumption.
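
The sketch below is a minimal software analogue (in Python, not the FPGA design itself) of the optimistic event processing the abstract describes: each event saves a state snapshot so that a straggler event with an older timestamp triggers a rollback and re-execution. The OptimisticLP class, the toy model, and the event payloads are illustrative only.

import heapq

class OptimisticLP:
    """Toy Time Warp logical process: saves state before each event so a
    straggler (an event older than local virtual time) triggers a rollback."""

    def __init__(self, handler, state):
        self.handler = handler          # user-defined event handling code
        self.state = state
        self.lvt = 0.0                  # local virtual time
        self.pending = []               # min-heap of (timestamp, payload)
        self.history = []               # (timestamp, saved_state, payload)

    def schedule(self, ts, payload):
        if ts < self.lvt:               # straggler: roll back past it
            while self.history and self.history[-1][0] >= ts:
                old_ts, saved, old_payload = self.history.pop()
                self.state = saved
                heapq.heappush(self.pending, (old_ts, old_payload))
            self.lvt = self.history[-1][0] if self.history else 0.0
        heapq.heappush(self.pending, (ts, payload))

    def step(self):
        ts, payload = heapq.heappop(self.pending)
        self.history.append((ts, dict(self.state), payload))
        self.handler(self.state, payload)
        self.lvt = ts

# Hypothetical model: events increment a counter.
lp = OptimisticLP(lambda st, ev: st.update(count=st["count"] + ev), {"count": 0})
for t, ev in [(1.0, 1), (2.0, 1)]:
    lp.schedule(t, ev)
lp.step(); lp.step()
lp.schedule(1.5, 5)   # straggler arriving late forces a rollback
while lp.pending:
    lp.step()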

Replicated Computational Results (RCR) Report for "An Introduction to Multi-Objective Simulation Optimization"

An Introduction to Multi-Objective Simulation Optimization

We provide an introduction to the multi-objective simulation optimization (MOSO) problem aimed at researchers and practitioners who wish to begin working in this nascent and under-developed area. The MOSO problem is a nonlinear multi-objective optimization (MOO) problem in which multiple simultaneous and conflicting objective functions can only be observed with stochastic error. The solution to this problem is called the Pareto set, that is, the set of decision points for which no other feasible point is better on all objectives. We focus on the so-called a posteriori or vector optimization methods that characterize the entire Pareto set as input to the decision-making process. Since these methods enable the decision-maker to articulate solution preferences after the optimization is conducted, their popularity in the MOO literature has increased with the increase in available computing power. Likewise, recent MOSO application papers demonstrate an interest in returning several or all points in the Pareto set to the decision-maker. The recent availability of mature and efficient single-objective simulation optimization algorithms, coupled with ubiquitously available parallel computing power, makes identifying the Pareto set as the solution to a MOSO problem seem like an increasingly realistic goal. However, the methodological and theoretical development of specialized a posteriori MOSO algorithms remains in its infancy. To assist researchers beginning to work in this area, we provide an introduction to the current MOSO literature, review existing MOSO methods, and discuss some of the key open questions that remain in this emerging field.
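
A small worked example of the Pareto-set definition used above, purely illustrative and not one of the surveyed MOSO algorithms: a point is kept only if no other point is at least as good on every objective and strictly better on at least one. Objectives are assumed to be minimized, and the noisy objective values would in practice be sample means.

def pareto_set(points):
    """Return the non-dominated subset of a list of objective vectors
    (minimization). points: list of tuples, one entry per objective."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Estimated (cost, delay) for four simulated designs; (1, 5) and (3, 2)
# are non-dominated, while (4, 4) and (2, 6) are dominated.
print(pareto_set([(1, 5), (3, 2), (4, 4), (2, 6)]))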

Data-Driven Model-Based Detection of Malicious Insiders via Physical Access Logs

The risk posed by insider threats has usually been approached by analyzing the behavior of users solely in the cyber domain. In this paper, we show the viability of using physical movement logs, collected via a building access control system, together with an understanding of the layout of the building housing the system's assets, to detect malicious insider behavior that manifests itself in the physical domain. In particular, we propose a systematic framework that uses contextual knowledge about the system and its users, learned from historical data gathered from a building access control system, to select suitable models for representing movement behavior. We suggest two different models of movement behavior in this paper and evaluate their ability to represent normal user movement. We then explore the online usage of the learned models, together with knowledge about the layout of the building being monitored, to detect malicious insider behavior. Finally, we show the effectiveness of the developed framework using real-life data traces of user movement in railway transit stations.
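
As a hedged sketch of the kind of movement model such a framework could select (not necessarily one of the two proposed in the paper), the snippet below learns a first-order Markov chain of room-to-room transitions from historical access-log traces and flags a new trace whose log-likelihood falls below a threshold; the room names and threshold are invented.

import math
from collections import defaultdict

def train_transition_model(traces, smoothing=1.0):
    """Estimate P(next_room | room) from historical access-log traces."""
    counts = defaultdict(lambda: defaultdict(float))
    rooms = {r for trace in traces for r in trace}
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            counts[a][b] += 1.0
    model = {}
    for a in rooms:
        total = sum(counts[a].values()) + smoothing * len(rooms)
        model[a] = {b: (counts[a][b] + smoothing) / total for b in rooms}
    return model

def log_likelihood(model, trace):
    return sum(math.log(model[a][b]) for a, b in zip(trace, trace[1:]))

history = [["lobby", "office", "lab", "office", "lobby"],
           ["lobby", "office", "lobby"]]
model = train_transition_model(history)
suspect = ["lobby", "lab", "lab", "lab"]          # unusual movement pattern
if log_likelihood(model, suspect) < -4.0:          # illustrative threshold
    print("flag for review")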

Keddah: Network Evaluation Powered by Simulating Distributed Application Traffic

As a distributed system, Hadoop heavily relies on the network to complete data processing jobs. While the traffic generated by Hadoop jobs is critical for job execution performance, the actual behaviour of Hadoop network traffic is still poorly understood. This lack of understanding greatly complicates research relying on Hadoop workloads. In this paper, we explore Hadoop traffic through empirical traces. We analyse the traffic generated by multiple types of MapReduce jobs with varying input sizes and cluster configuration parameters. We present Keddah, a toolchain for capturing, modelling and reproducing Hadoop traffic, for use with network simulators to better capture the behaviour of Hadoop. By imitating the Hadoop traffic generation process and considering YARN resource allocation, Keddah can be used to create Hadoop traffic workloads, enabling reproducible Hadoop research in more realistic scenarios.
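
A minimal sketch of the general approach, under the assumption (ours, not the paper's) that a MapReduce job's shuffle traffic can be approximated from its input size and mapper/reducer counts; the function below emits per-flow byte counts that a network simulator could consume as application-level demands. Keddah's actual traffic models are richer than this.

import random

def hadoop_job_flows(input_bytes, n_maps, n_reduces, shuffle_ratio=0.4, seed=0):
    """Generate (src, dst, bytes) shuffle flows for one MapReduce job.
    shuffle_ratio approximates map-output size relative to input size."""
    rng = random.Random(seed)
    per_map_output = input_bytes * shuffle_ratio / n_maps
    flows = []
    for m in range(n_maps):
        # Each mapper partitions its output across all reducers.
        weights = [rng.random() for _ in range(n_reduces)]
        total = sum(weights)
        for r, w in enumerate(weights):
            flows.append((f"map-{m}", f"reduce-{r}", per_map_output * w / total))
    return flows

# Feed these flows into a network simulator as application-level demands.
for flow in hadoop_job_flows(input_bytes=10 * 2**30, n_maps=4, n_reduces=2):
    print(flow)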

Towards Data-Driven Simulation Modeling for Mobile Agent-Based Systems

Simulation models are widely used to study complex systems. Current simulation models are generally handcrafted using expert knowledge (knowledge-driven); however, this process is slow and introduces modeler bias. This paper presents an approach towards data-driven simulation modeling by developing a framework that discovers simulation models in an automated way for mobile agent-based applications. The framework is composed of three components: 1) a model space specification, 2) a search method (genetic algorithm), and 3) framework measurement metrics. The model space specification provides a formal specification for the general model structure from which various models can be generated. The search method is used to efficiently search the model space for candidate models that exhibit desired behavior. Five framework measurement metrics (flexibility, comprehensibility, controllability, composability, and robustness) are developed to evaluate the overall framework. The results demonstrate that it is possible to discover a variety of interesting models using the framework.
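
A toy genetic-algorithm skeleton for the search component, assuming a hypothetical model space (a small parameter vector), a stand-in simulate() function, and a fitness that compares simulated output to observed data; none of these stand in for the paper's actual model space specification.

import random

random.seed(1)
OBSERVED = [0.2, 0.5, 0.8]           # hypothetical observed behaviour summary

def simulate(model):
    # Stand-in for running an agent-based simulation with this candidate model.
    return [min(1.0, g * 1.1) for g in model]

def fitness(model):
    out = simulate(model)
    return -sum((o - t) ** 2 for o, t in zip(out, OBSERVED))

def evolve(pop_size=20, genes=3, generations=30):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)          # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                 # mutation
                child[random.randrange(genes)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())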

Exposing Inter-Process Information for Efficient PDES of Spatial Stochastic Systems

We present a new efficient approach to the parallelization of discrete event simulators for multicore computers, based on exposing and disseminating essential information between processes. We aim specifically at simulation models with a spatial structure, where time intervals between successive events are highly variable and without lower bounds. In Parallel Discrete Event Simulation (PDES), the model is distributed onto parallel processes. A key challenge in PDES is that each process must continuously decide when to pause its local simulation in order to reduce the risk of expensive rollbacks caused by future "delayed" incoming events from other processes. A process could make such decisions optimally if it knew the timestamps of future incoming events. Unfortunately, this information is often not available in PDES algorithms. We present an approach to designing efficient PDES algorithms in which an existing natural parallelization is restructured in order to expose and disseminate more precise information about future incoming events to each logical process (LP). We have implemented our approach in a parallel simulator for spatially extended Markovian processes, intended for simulating, e.g., chemical reactions and biological and epidemiological processes. On 32 cores, our implementation exhibits speedup that significantly outweighs the overhead incurred by the refinement. We also show that our resulting simulator is superior in performance to existing simulators, relative to an efficient sequential implementation.
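
As a simplified sketch of the core decision (not the paper's algorithm), the snippet below lets a logical process advance its local simulation only up to the earliest timestamp its neighbours have promised for future incoming events, which is the kind of information the approach exposes and disseminates; the LP class, promise values, and event data are invented.

import heapq

class LP:
    def __init__(self, name):
        self.name = name
        self.pending = []                 # local (timestamp, event) heap
        self.promises = {}                # neighbour -> earliest future send time

    def safe_time(self):
        # Process local events only up to the earliest promised incoming event.
        return min(self.promises.values(), default=float("inf"))

    def advance(self):
        executed = []
        while self.pending and self.pending[0][0] <= self.safe_time():
            executed.append(heapq.heappop(self.pending))
        return executed

lp = LP("cell-0")
for ev in [(1.0, "react"), (2.5, "diffuse"), (4.0, "react")]:
    heapq.heappush(lp.pending, ev)
lp.promises["cell-1"] = 3.0               # neighbour promises nothing before t=3.0
print(lp.advance())                       # executes events at t=1.0 and t=2.5 only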

Exact-Differential Simulation: Differential Processing of Large-Scale Discrete Event Simulations

Using computer simulation to analyze large-scale discrete event systems requires repeated executions with various scenarios or parameters. Such repeated executions can induce significant redundancy in event processing when the modification from a prior scenario to a new scenario is relatively minor, and when the altered scenario influences only a small part of the simulation. For example, in a city-scale traffic simulation, an altered scenario of blocking one junction may only affect a small part of the city for a considerable length of time. However, traditional simulation approaches would still repeat the simulation for the whole city even when the changes are minor. In this paper, we propose a new redundancy reduction technique for large-scale discrete event simulations, called exact-differential simulation, which simulates only the altered portions of scenarios and their influences in repeated executions while still achieving the same results as re-executing the entire simulations. The paper presents the main concepts of exact-differential simulation, the design of its algorithm, and a method to build an exact-differential simulation middleware that supports multiple applications of discrete event simulation. We also evaluate our approach by using two case studies, the PHOLD benchmark and a traffic simulation of Tokyo.
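
A toy illustration of the exact-differential idea, under our own simplifications: results of a baseline run are cached keyed on (timestamp, event, incoming state), and an altered scenario reuses cached results until the first divergence, reprocessing only what changed. The traffic model and events are invented.

def run(events, state_fn, cache=None):
    """Process timestamp-ordered events; reuse cached results whose
    (timestamp, event, incoming state) match the prior run exactly."""
    cache = cache or {}
    state, results, reused = 0, {}, 0
    for ts, ev in sorted(events):
        key = (ts, ev, state)
        if key in cache:
            state = cache[key]
            reused += 1
        else:
            state = state_fn(state, ev)
        results[key] = state
    return state, results, reused

block_junction = lambda s, e: s + (2 if e == "jam" else 1)   # toy traffic model
base_events = [(1, "car"), (2, "car"), (3, "car")]
_, history, _ = run(base_events, block_junction)
# Altered scenario: only the prefix before the change can be reused.
altered = [(1, "car"), (2, "jam"), (3, "car")]
state, _, reused = run(altered, block_junction, cache=history)
print(state, "events reused:", reused)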

New Performance Modeling Methods for Parallel Data Processing Applications

Predicting the performance of an application running on parallel computing platforms is increasingly important because of its influence on development time and resource management. However, predicting performance with respect to parallel processes is complex for iterative, multi-stage applications. This research proposes a performance approximation approach, FiM, to predict the calculation time (FiM-Cal) and communication time (FiM-Com) of an application running on a master-compute distributed framework. FiM-Cal consists of two coupled components: 1) a stochastic Markov model to capture non-deterministic runtime that often depends on parallel resources, e.g., the number of processes, and 2) a machine learning model that extrapolates the parameters for calibrating the Markov model when application parameters such as the dataset change. Along with the parallel calculation time, these platforms also consume data transfer time to communicate between master and compute nodes. FiM-Com consists of a simulation queuing model to quickly estimate communication time. Our modeling approach considers different design choices along multiple dimensions, namely (i) process-level parallelism, (ii) distribution of cores on a multi-processor platform, (iii) application-related parameters, and (iv) characteristics of datasets. The major contribution of our prediction approach is that FiM can provide an accurate prediction of parallel computation time for datasets that are much larger than the training datasets.
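
A heavily simplified stand-in (not FiM itself) combining the two ingredients the abstract names: a fitted extrapolation model for calculation time over dataset size and process count, and a textbook M/M/1 queueing estimate for communication delay. The calibration numbers below are invented.

import numpy as np

# Calibration data: (dataset_MB, n_processes) -> measured calculation seconds.
X = np.array([[100, 4], [200, 4], [100, 8], [200, 8]], dtype=float)
y = np.array([12.0, 23.5, 7.1, 13.8])

# Least-squares fit of calc_time ~ a*(size/nproc) + b (a crude extrapolation model).
features = np.column_stack([X[:, 0] / X[:, 1], np.ones(len(X))])
coef, *_ = np.linalg.lstsq(features, y, rcond=None)

def predict_calc_time(size_mb, nproc):
    return coef[0] * size_mb / nproc + coef[1]

def predict_comm_time(arrival_rate, service_rate):
    # M/M/1 mean time in system for master-worker message exchange.
    return 1.0 / (service_rate - arrival_rate)

print(predict_calc_time(400, 16), predict_comm_time(arrival_rate=50, service_rate=80))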

Efficient Parallel Simulation over Large-scale Social Contact Networks

A social contact network (SCN) models the daily contacts between people in real life. It consists of agents and locations: when agents visit a location at the same time, social interactions can be established among them. Simulations over SCNs have been employed to study social dynamics such as disease spread in a population. Because of the scale of SCNs and the execution time requirements, the simulations are usually run in parallel. However, a challenge for parallel simulation is that the structure of an SCN is naturally skewed, with a few hub locations that have far more visitors than others. These hub locations can cause load imbalance and heavy communication between partitions, which degrades simulation performance. This paper proposes a comprehensive solution to address this challenge. Firstly, the hub locations are decomposed into smaller locations, so that the SCN can be divided into partitions with better balanced workloads. Secondly, the agents are decomposed to exploit data locality, so that the overall communication across partitions can be greatly reduced. Thirdly, two enhanced execution mechanisms are designed for locations and agents, respectively, to improve simulation parallelism. To evaluate the efficiency of the proposed solution, an epidemic simulation was developed and extensive experiments were conducted on two computer clusters using three SCN data sets of different scales. The results demonstrate that our approach can significantly improve the execution performance of the simulation.
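
A small sketch of the first step, under our own assumptions about the data layout: locations whose visitor count exceeds a threshold are split into numbered sub-locations in round-robin fashion, so a partitioner can balance the resulting workload; the visit data and threshold are invented.

from collections import defaultdict

def decompose_hubs(visits, max_visitors=2):
    """visits: list of (agent, location). Hub locations with more than
    max_visitors visitors are split into numbered sub-locations."""
    by_location = defaultdict(list)
    for agent, loc in visits:
        by_location[loc].append(agent)
    decomposed = []
    for loc, agents in by_location.items():
        if len(agents) <= max_visitors:
            decomposed.extend((a, loc) for a in agents)
        else:
            n_subs = -(-len(agents) // max_visitors)   # ceiling division
            for i, a in enumerate(agents):             # round-robin split
                decomposed.append((a, f"{loc}#{i % n_subs}"))
    return decomposed

visits = [("a1", "mall"), ("a2", "mall"), ("a3", "mall"),
          ("a4", "mall"), ("a5", "school")]
print(decompose_hubs(visits))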

A Data Assimilation Framework for Discrete Event Simulations

Discrete event simulation (DES) is traditionally used as an offline tool to help users carry out analysis of complex systems. As real-time sensor data becomes more and more available, there is increasing interest in assimilating real-time data into DES to achieve online simulation in support of real-time decision making. This paper presents a data assimilation framework that works with DES models. Solutions are proposed to address the unique challenges associated with data assimilation for DES. A tutorial example of discrete event road traffic simulation is developed to demonstrate the framework as well as the principles of data assimilation in general. This paper contributes to the DES community a new data assimilation framework and a concrete example that helps readers grasp the details of data assimilation for DES.
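
One standard way to assimilate streaming sensor data into a stochastic simulation is a bootstrap particle filter, sketched below on a toy queue-length model. This is only a generic illustration of the predict-weight-resample cycle, not the framework proposed in the paper, and the model and noise assumptions are invented.

import random
random.seed(0)

def step_model(queue_len):
    # Toy DES stand-in: random arrivals and departures over one interval.
    return max(0, queue_len + random.choice([-1, 0, 1, 2]))

def likelihood(observed, particle):
    # Sensor counts assumed accurate within +/-1 vehicle.
    return 1.0 if abs(observed - particle) <= 1 else 0.05

def assimilate(particles, observation):
    particles = [step_model(p) for p in particles]             # predict
    weights = [likelihood(observation, p) for p in particles]  # weight
    total = sum(weights)
    return random.choices(particles, [w / total for w in weights],
                          k=len(particles))                    # resample

particles = [random.randint(0, 5) for _ in range(200)]
for obs in [3, 4, 6]:                      # streaming sensor readings
    particles = assimilate(particles, obs)
print(sum(particles) / len(particles))     # assimilated queue-length estimate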

A Variational Inference-based Heteroscedastic Gaussian Process Approach for Simulation Metamodeling

In this paper, we propose a variational inference-based Gaussian process metamodeling approach (VBGP) that is suitable for design and analysis of stochastic simulation experiments. This approach enables statistically and computationally efficient approximations to the mean and variance response surfaces implied by a stochastic simulation, while taking into full account the uncertainty in the heteroscedastic variance; furthermore, it can accommodate the situation where either one or multiple simulation replications are available at every design point. We demonstrate the superior performance of VBGP compared to existing simulation metamodeling methods through two numerical examples.
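
The sketch below is not VBGP; it is a simpler plug-in heteroscedastic baseline in the same spirit, fitting a Gaussian process to replication means with per-design-point noise set to the estimated variance of each sample mean. It assumes scikit-learn is available and uses an invented test function.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
design = np.linspace(0, 1, 8).reshape(-1, 1)           # design points
reps = [rng.normal(np.sin(6 * x), 0.1 + 0.4 * x, size=10) for x in design.ravel()]

y_bar = np.array([r.mean() for r in reps])
noise = np.array([r.var(ddof=1) / len(r) for r in reps])  # variance of each sample mean

# Per-point noise on the diagonal gives a heteroscedastic (but plug-in,
# not fully Bayesian) treatment of the simulation variance.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=noise)
gp.fit(design, y_bar)
mean, std = gp.predict(np.array([[0.5]]), return_std=True)
print(mean, std)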

Visual Analytics to Identify Temporal Patterns and Variability in Simulations from Cellular Automata

Cellular Automata (CA) models are discrete simulation models in which a collection of cells placed in a regular spatial configuration change state over a number of time steps. Running an experiment produces spatio-temporal data and multi-run data (as multiple runs are employed for stochastic models). Visually navigating such data proves arduous, as the commonly employed slider-based visualizations offer little support to identify temporal trends or assess where a model's uncertainty may be excessive. In this paper, we designed and implemented a new visualization environment for the spatio-temporal, multi-run data produced by CA. This new environment uses several linked visualizations and focuses on identifying temporal trends and uncertainty. We conducted an empirical evaluation of this new environment to (i) assess whether important tasks for modelers can be performed efficiently with this environment, (ii) examine how performance is influenced by key simulation factors, and (iii) identify whether modelers can use the familiar slider-based visualization together with our new environment. Our results show that participants were confident in the results obtained using our new environment. They were also able to accomplish tasks without taking longer than they would with current solutions. Our qualitative analysis found that some participants saw value in switching between our proposed visualization and the commonly used slider-based version. In addition, we noted that errors were affected not only by the type of visualization but also by specific features of the simulations. Future work should explore combining and adapting these visualizations depending on salient parameters of the simulations.

Interval Markov Decision Processes with Multiple Objectives: from Robust Strategies to Pareto Curves

Accurately modelling a real-world system with probabilistic behavior is a difficult task. Sensor noise and statistical estimations, among other errors, make the exact probability values impossible to obtain. In this paper, we consider Interval Markov Decision Processes (IMDPs), which generalise classical MDPs by having interval-valued transition probabilities. They provide a powerful modelling tool for probabilistic systems with an additional variation or uncertainty that prevents knowledge of the exact transition probabilities. We investigate the problem of robust multi-objective synthesis for IMDPs and its Pareto curve analysis. We study how to find a robust (randomized) strategy that satisfies multiple objectives involving rewards, reachability, and omega-regular properties against all possible resolutions of the transition probability uncertainties, as well as how to generate an approximate Pareto curve providing an explicit view of the trade-offs between the multiple objectives. We show that the multi-objective synthesis problem is PSPACE-hard and then provide a value-iteration-based decision algorithm to approximate the Pareto set of achievable points. We finally demonstrate the practical effectiveness of the proposed approaches by applying them to several case studies using a prototypical tool.
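
A minimal sketch of interval-robust value iteration for a single reachability objective (far simpler than the paper's multi-objective synthesis): the inner step computes the worst-case expectation over all distributions consistent with the transition-probability intervals. The example IMDP is invented.

def worst_case_expectation(values, intervals):
    """Minimise sum_i p_i * values[i] over distributions p with
    intervals[i][0] <= p_i <= intervals[i][1] and sum_i p_i = 1."""
    p = [lo for lo, _ in intervals]
    remaining = 1.0 - sum(p)
    for i in sorted(range(len(values)), key=lambda i: values[i]):
        room = intervals[i][1] - p[i]
        give = min(room, remaining)
        p[i] += give
        remaining -= give
    return sum(pi * v for pi, v in zip(p, values))

def robust_reach_value_iteration(succ, goal, n_states, iters=100):
    """succ[s][a] = list of (next_state, (low, high)); returns the maximal
    probability of reaching `goal` under the worst-case interval resolution."""
    v = [1.0 if s in goal else 0.0 for s in range(n_states)]
    for _ in range(iters):
        new = list(v)
        for s in range(n_states):
            if s in goal or not succ.get(s):
                continue
            new[s] = max(
                worst_case_expectation([v[t] for t, _ in trans],
                                       [iv for _, iv in trans])
                for trans in succ[s].values())
        v = new
    return v

# Two actions from state 0; state 1 is the goal, state 2 is a dead end.
succ = {0: {"a": [(1, (0.3, 0.7)), (2, (0.3, 0.7))],
            "b": [(1, (0.5, 0.6)), (2, (0.4, 0.5))]}}
print(robust_reach_value_iteration(succ, goal={1}, n_states=3))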

Managing Pending Events in Sequential & Parallel Simulations using 3-Tier Heap & 2-Tier-Ladder Queue

The performance of sequential and parallel Discrete Event Simulation (DES) is strongly influenced by the data structure used for managing and processing pending events. Accordingly, we propose and evaluate the effectiveness of our multi-tiered (2- and 3-tier) data structures and our 2-tier Ladder Queue for both sequential and optimistic parallel simulations on distributed memory platforms. Our experiments compare the performance of our data structures against a performance-tuned version of the Ladder Queue, which has been shown to outperform many other data structures for DES. The core simulation-based empirical assessments are in C++ and are based on 2,500 configurations of the well-established PHOLD and PCS benchmarks. We have conducted analyses on two computing clusters with different hardware to ensure our results are reproducible. Moreover, to fully establish the robustness of our analysis and data structures, we have also implemented the pertinent queues in Java and verified consistent, reproducible performance characteristics. Collectively, our analyses show that our 3-tier heap and 2-tier ladder queue outperform the Ladder Queue by 60× in some simulations, particularly those with higher concurrency per Logical Process (LP), in both sequential and Time Warp synchronized parallel simulations.
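
Not the paper's data structures, but a toy two-tier pending-event set that conveys the general idea: far-future events sit in coarse, unsorted time buckets, and only the current bucket is kept fully sorted; the bucket width and events are invented.

import heapq
from collections import defaultdict

class TwoTierQueue:
    """Toy two-tier pending-event set: events far in the future sit in coarse
    time buckets; only the current bucket is kept fully sorted in a heap."""

    def __init__(self, bucket_width=1.0):
        self.width = bucket_width
        self.buckets = defaultdict(list)        # top tier: unsorted buckets
        self.bottom = []                        # bottom tier: sorted heap
        self.current = 0                        # index of bucket feeding bottom

    def insert(self, ts, event):
        b = int(ts / self.width)
        if b == self.current:
            heapq.heappush(self.bottom, (ts, event))
        else:
            self.buckets[b].append((ts, event))

    def pop(self):
        while not self.bottom:
            self.current += 1                   # refill from the next bucket
            for item in self.buckets.pop(self.current, []):
                heapq.heappush(self.bottom, item)
        return heapq.heappop(self.bottom)

q = TwoTierQueue()
for ts in [0.4, 2.7, 0.9, 2.1, 5.3]:
    q.insert(ts, f"ev@{ts}")
print([q.pop() for _ in range(5)])              # events come out in timestamp order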

Ranking and Selection: A New Sequential Bayesian Procedure for Use with Common Random Numbers

We introduce a new concept for selecting the best alternative out of a given set of systems that are evaluated with respect to their expected performances. We assume that the systems are simulated on a computer and that a joint observation of all systems has a multivariate normal distribution with unknown mean and unknown covariance matrix. In particular, the observations of the systems may be stochastically dependent, as is the case when common random numbers are used for the simulation. The main application we have in mind is heuristic stochastic optimization, where the systems are different solutions to an optimization problem with random inputs. We repeatedly allocate a fixed budget of simulation runs to the alternatives. We use a Bayesian setup with an uninformative prior and allocate the simulation budget based on the posterior distribution of the observations until the ranking and selection decision is correct with a given high probability. We introduce a new, simple allocation strategy that is directly connected to the error probabilities calculated beforehand. The necessary posterior distributions can only be approximated, but we give extensive empirical evidence that the error made is well below the given bounds. Our extensive test results show that our procedure, BayesRS, uses fewer simulations than comparable procedures from the literature in different correlation scenarios. At the same time, BayesRS needs no additional prior parameters and can cope with different types of ranking and selection tasks.
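
A much-simplified sketch of the selection loop under common random numbers (not the BayesRS procedure): posterior uncertainty about the mean vector is approximated with a normal distribution built from the sample mean and sample covariance, the probability of correct selection is estimated from posterior draws, and more replications are allocated until it exceeds a target. The simulation model, batch sizes, and target are invented.

import numpy as np

rng = np.random.default_rng(0)
TRUE_MEANS = (1.0, 1.2, 0.8)

def simulate_batch(n_reps):
    """Toy joint simulation: common random numbers induce positive correlation."""
    common = rng.normal(size=(n_reps, 1))
    return np.hstack([common + rng.normal(mu, 0.5, size=(n_reps, 1))
                      for mu in TRUE_MEANS])

def prob_correct_selection(obs, n_draws=5000):
    """Approximate P(sample-best system is truly best) from posterior draws of
    the mean vector, using a normal approximation with the sample covariance."""
    n = len(obs)
    mean = obs.mean(axis=0)
    cov = np.cov(obs, rowvar=False) / n            # covariance of the sample mean
    best = int(np.argmax(mean))
    draws = rng.multivariate_normal(mean, cov, size=n_draws)
    return best, float((draws.argmax(axis=1) == best).mean())

obs = simulate_batch(20)
best, pcs = prob_correct_selection(obs)
while pcs < 0.95:                                  # keep allocating simulation budget
    obs = np.vstack([obs, simulate_batch(10)])
    best, pcs = prob_correct_selection(obs)
print("selected system", best, "with estimated PCS", round(pcs, 3))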
