"Green Simulation: Reusing the Output of Repeated Experiments'' by Feng and Staum describes methods based on likelihood-ratio or importance-sampling theory for reusing the outputs of simulation experiments at previous parameter settings to augment and improve (by reducing the estimator variance) simulation experiments at new parameter settings. The paper presents empirical results for two realistic examples in the area of finance; Matlab code for these examples was made available by the authors. The examples were straightforward to run without extensive knowledge of Matlab, and both experiment and scenario parameters can be altered easily. All experiment results in the paper were reproduced.
This article presents Sequem, a fully sequential procedure for computing point and confidence-interval (CI) estimators for extreme steady-state quantiles of a simulation output process. The method is an enhancement of the Sequest procedure proposed by Alexopoulos et al. in 2014 for estimating nonextreme steady-state quantiles. Sequem exploits a combination of batching, sectioning, and the maximum transformation technique to achieve the following: (i) reduction in point-estimator bias arising from initial conditions or inadequate simulation run length; and (ii) adjustment of the CI half-length to compensate for the effects of skewness or autocorrelation on the corresponding quantile point estimators obtained from nonoverlapping batches of observations. The CIs delivered by Sequem satisfy user-specified requirements concerning their coverage probability and their absolute or relative precision. An experimental evaluation based on seven ``stress-testing'' processes revealed that Sequem exhibited good performance when used in difficult applications.
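The batching-and-sectioning building block is easy to illustrate. Below is a minimal Python sketch of a sectioning-style CI for a steady-state quantile computed from nonoverlapping batches; it omits Sequem's bias reduction, run-length control, and maximum transformation, and the batch count, quantile level, and AR(1) test process are illustrative choices, not the paper's.

```python
import numpy as np
from scipy import stats

def sectioning_quantile_ci(y, p=0.95, b=8, alpha=0.05):
    """Sectioning-style CI for the p-quantile from b nonoverlapping batches.

    Centers the interval at the full-sample quantile and estimates its
    variability from the spread of the batch quantiles. This is a simplified
    building block of procedures such as Sequem, not the procedure itself.
    """
    y = np.asarray(y)
    n = (len(y) // b) * b              # truncate so all batches have equal size
    batches = y[:n].reshape(b, -1)
    q_full = np.quantile(y[:n], p)     # full-sample point estimator
    q_batch = np.quantile(batches, p, axis=1)
    # Sectioning: deviations of batch quantiles about the full-sample quantile
    s2 = np.sum((q_batch - q_full) ** 2) / (b - 1)
    hw = stats.t.ppf(1 - alpha / 2, b - 1) * np.sqrt(s2 / b)
    return q_full, (q_full - hw, q_full + hw)

# Example: autocorrelated AR(1) output stream
rng = np.random.default_rng(1)
x = np.zeros(100_000)
for t in range(1, len(x)):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
print(sectioning_quantile_ci(x, p=0.95))
```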
The classical Adomian decomposition method, frequently used to solve linear and nonlinear algebraic, ordinary, and partial integro-differential equations, is revisited. The technique is rewritten in an elegant form in which a so-called convergence control parameter is embedded to control both the convergence and the rate of convergence of the method. In addition to constant level curves for identifying suitable values, an effective approach for obtaining the best possible convergence control parameter is devised based on the squared residual error of the problem under study. The optimum Adomian decomposition method is proved to converge to the true solution in cases where the classical Adomian decomposition method fails to converge. When both methods converge, the present algorithm is observed to accelerate the rate of convergence. Moreover, the restricted interval over which the classical Adomian method yields a convergent physical solution is shown to be greatly extended by the optimum Adomian decomposition method. The new scheme is justified on several mathematical and physical examples selected from the open literature. Finally, an example demonstrates the better accuracy of the optimum Adomian decomposition method compared with the recently popular homotopy analysis method.
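The residual-based selection of the convergence control parameter can be stated compactly. The following is a generic sketch in which the notation is illustrative, not the paper's: the nonlinearity is expanded via the standard Adomian polynomials, and the optimum parameter minimizes the squared residual of the truncated series.

```latex
% Generic sketch (notation illustrative, not the paper's). For a problem
% N[u] = 0 on a domain \Omega, let \phi_m(x;c) denote the m-term Adomian
% partial sum with embedded convergence control parameter c, where the
% nonlinearity is expanded via the standard Adomian polynomials
\[
  A_n \;=\; \frac{1}{n!}\,
  \frac{\mathrm{d}^n}{\mathrm{d}\lambda^n}
  N\!\Bigl(\sum_{k=0}^{\infty} \lambda^k u_k\Bigr)\Bigg|_{\lambda=0}.
\]
% The optimum c^* minimizes the squared residual error of the truncation:
\[
  E_m(c) \;=\; \int_{\Omega} \bigl(N[\phi_m(x;c)]\bigr)^2 \,\mathrm{d}x,
  \qquad
  \frac{\mathrm{d}E_m}{\mathrm{d}c}\bigg|_{c=c^\ast} = 0.
\]
```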
The paper ``MNOPQRS: Max Nonnegativity Ordering Piecewise-Quadratic Rate Smoothing'' by Chen and Schmeiser constructs a smooth piecewise-quadratic rate estimate for a nonhomogeneous Poisson process based on event counts over k adjacent time intervals. Event times can be generated either by generating a Poisson process with unit rate and inverting the cumulative rate function or by the thinning technique. The overall algorithm has O(k^2) time complexity and O(k) space requirements in the number of intervals k. This replicated computation report focuses on the reproducibility of the experimental results in the aforementioned paper.
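For concreteness, the thinning technique mentioned above can be sketched in a few lines of Python; the rate function and its upper bound below are illustrative inputs, not those used by Chen and Schmeiser.

```python
import numpy as np

def nhpp_thinning(lambda_fn, lam_max, t_end, rng=None):
    """Generate event times of a nonhomogeneous Poisson process on [0, t_end]
    by thinning: propose events from a homogeneous process with rate lam_max
    (an upper bound on lambda_fn) and accept each with prob lambda_fn(t)/lam_max.
    """
    if rng is None:
        rng = np.random.default_rng()
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)    # next candidate event time
        if t > t_end:
            return np.array(times)
        if rng.uniform() < lambda_fn(t) / lam_max:
            times.append(t)                    # accept with prob lambda(t)/lam_max

# Example: sinusoidal rate bounded above by 2.0
events = nhpp_thinning(lambda t: 1.0 + np.sin(t), lam_max=2.0, t_end=10.0)
print(len(events), "events")
```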
Simulations are often driven by input models estimated from finite real-world data. When we use simulations to assess the performance of a stochastic system, there are two sources of uncertainty in the performance estimates: input uncertainty and simulation estimation uncertainty. In this paper, we develop a design of experiments that efficiently employs a potentially tight simulation budget to construct a percentile confidence interval quantifying the impact of input uncertainty on the system performance estimates, while controlling the simulation estimation error. Specifically, the nonparametric bootstrap is used to generate samples of input models that quantify both input distribution family and parameter-value uncertainty. Direct simulation is then used to propagate the input uncertainty to the output response. Since each simulation run can be computationally expensive, given a tight simulation budget we develop a sequential design of experiments that finds the optimal combination of the number of bootstrapped samples of the input distributions and the allocation of replications to these samples. It controls both the finite-sampling error introduced by using finitely many bootstrapped samples to quantify the input uncertainty and the system response estimation error introduced by using finitely many replications at each bootstrapped sample. Our approach is supported by a rigorous theoretical study, and an empirical study demonstrates that it has better and more robust finite-sample performance than direct bootstrapping.
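As a hedged, non-sequential sketch of the basic loop the paper refines (the sequential allocation of bootstrap samples and replications is the paper's contribution and is not reproduced here), the following Python fragment bootstraps an empirical input model and propagates it through a toy simulator to a percentile CI; the data, simulator, and sample sizes are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(input_model, n_reps):
    """Toy stochastic simulation: mean of i.i.d. draws from the input model.
    Stands in for an expensive simulator driven by the input distribution."""
    return np.mean([np.mean(input_model(50)) for _ in range(n_reps)])

# Real-world data of unknown distribution (synthetic here, for illustration)
data = rng.lognormal(mean=0.0, sigma=0.5, size=100)

B, n_reps = 200, 10            # bootstrap samples and replications per sample
perf = np.empty(B)
for b in range(B):
    boot = rng.choice(data, size=len(data), replace=True)  # nonparametric bootstrap
    # Empirical-distribution input model induced by this bootstrap sample
    input_model = lambda n, boot=boot: rng.choice(boot, size=n, replace=True)
    perf[b] = simulate(input_model, n_reps)   # propagate input uncertainty

# Percentile CI quantifying input uncertainty in the performance estimate
lo, hi = np.percentile(perf, [2.5, 97.5])
print(f"95% percentile CI: [{lo:.3f}, {hi:.3f}]")
```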
We introduce a new paradigm in simulation experiment design and analysis, called ``green simulation,'' for the setting in which experiments are performed repeatedly with the same simulation model. Green simulation means reusing outputs from previous experiments to answer the question currently being asked of the simulation model. As one method for green simulation, we propose estimators that reuse outputs from previous experiments by weighting them with likelihood ratios, when parameters of distributions in the simulation model differ across experiments. We analyze convergence of these estimators as more experiments are repeated, while a stochastic process changes the parameters used in each experiment. As another method for green simulation, we propose an estimator based on stochastic kriging. We find that green simulation can reduce mean squared error by more than an order of magnitude in examples involving catastrophe bond pricing and credit risk evaluation.
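The likelihood-ratio reuse idea admits a compact sketch. Assuming for concreteness a normal input distribution whose mean is the experiment parameter (the distribution, output function, and sample sizes below are illustrative, not the paper's catastrophe-bond or credit-risk examples), stored outputs can be reweighted as follows in Python:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g = lambda x: np.maximum(x - 1.0, 0.0)   # illustrative simulation output

# Outputs stored from earlier experiments at parameters theta_1, ..., theta_k
past_thetas = [0.0, 0.5, 1.0]
n = 1000
past_runs = [(th, rng.normal(th, 1.0, n)) for th in past_thetas]

def green_estimate(theta_now):
    """Reuse all stored outputs to estimate E_{theta_now}[g(X)] by weighting
    each output with the likelihood ratio f(x; theta_now) / f(x; theta_i)."""
    total = 0.0
    for th_i, x in past_runs:
        lr = stats.norm.pdf(x, theta_now, 1.0) / stats.norm.pdf(x, th_i, 1.0)
        total += np.sum(lr * g(x))
    return total / (len(past_runs) * n)

print(green_estimate(0.8))                  # reuses all three experiments
print(np.mean(g(rng.normal(0.8, 1.0, n))))  # fresh-run-only comparison
```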