Once the base simulation passes the validation process, we can assume it is a valid simulation model and use it as a reference for comparison with the desired cases. This is where the study itself is carried out, through an experimental design built according to the study objectives.
The main objective of experimental design for simulations is to determine which parameters have the greatest effect on the system output, using a limited number of simulations [1]. In this method, however, the priority is not to discover the highest-impact parameter, but to verify whether a given condition does in fact have an impact on system performance.
To analyze and present the results, we opt for a 2^k factorial experimental design (where k is the number of parameters under study) with statistical analysis by confidence intervals [1] (step 9), similar to what was done in the statistical validation (step 6).
The cost of this factorial design grows as the number of parameters and experiments increases, but it is very straightforward when working with a limited number of parameters simultaneously.
Let Rj be the simulation result (evaluation parameter) of interest for each experiment j out of n total experiments. For each target parameter or algorithm, we choose two levels (values).
Example: we may adopt a maximum robot velocity of 1 m/s as the "-" level and 2 m/s as the "+" level, or use two different control strategies, "A" from the reference simulation ("-" level) and "B" for the new algorithm ("+" level), and compare the results under the same initial conditions.
Assuming we want to evaluate three parameters simultaneously in the simulation, we first need to determine the influence of each factor on the result Rj by calculating the effects e (1). We can obtain these by building an auxiliary table (also known as a design matrix) containing all level combinations. Each effect e is obtained by multiplying the respective parameter's sign column in the table by the corresponding Rj (for each factor level combination), summing the products, and dividing the sum by 2^(k-1), where k is the total number of factors.
- Auxiliary table for experimental design
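The main-effect calculation above can be sketched in a few lines of Python. The design matrix rows and the result values Rj below are hypothetical placeholders, not data from the original study:

```python
from itertools import product

# Rows of a 2^3 design matrix: one sign combination (-1 or +1) per factor A, B, C
k = 3
design = list(product([-1, +1], repeat=k))  # 2^k rows: (-1,-1,-1), (-1,-1,+1), ...

# Hypothetical simulation results Rj, one per row of the design matrix
R = [8.0, 9.5, 7.8, 9.9, 8.4, 10.1, 8.1, 10.4]

# Effect of factor i -- equation (1): multiply each row's sign for that factor
# by its Rj, sum over all rows, and divide by 2^(k-1)
def main_effect(i):
    return sum(row[i] * r for row, r in zip(design, R)) / 2 ** (k - 1)

for i, name in enumerate("ABC"):
    print(f"e_{name} = {main_effect(i):+.3f}")
```

A large effect for one factor relative to the others suggests that factor drives the output; whether it is statistically significant is checked later with confidence intervals.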
The cross-factor influence between two or more factors is calculated in the same table, this time by multiplying the signs of all the columns of interest, resulting in (2).
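The cross-factor (interaction) calculation differs from the main effect only in the sign used per row: the product of the chosen columns' signs instead of a single column's sign. A minimal self-contained sketch, again with hypothetical Rj values:

```python
from itertools import product
from math import prod

k = 3
design = list(product([-1, +1], repeat=k))  # all 2^3 sign combinations
R = [8.0, 9.5, 7.8, 9.9, 8.4, 10.1, 8.1, 10.4]  # hypothetical results Rj

# Interaction effect -- equation (2): for each row, multiply the signs of all
# columns of interest, weight Rj by that product, sum, and divide by 2^(k-1)
def interaction_effect(factors):
    return sum(prod(row[i] for i in factors) * r
               for row, r in zip(design, R)) / 2 ** (k - 1)

print(interaction_effect((0, 2)))  # cross-factor influence of factors A and C
```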
We adopt as the "-" level the parameter values or algorithms used in the reference simulation, and as the "+" level the values or algorithms whose impact we want to observe.
The implementation step is a straightforward procedure: we simulate all the possible combinations with the same initial conditions and store the results in a file or document. However, since we want to perform a statistical analysis in the next step, we must already decide how many samples we will use for it.
The total number of experiments can increase quickly as the number of factors, samples, and "+" levels (if more than one is needed) grows.
Example: an experimental design with 2 levels, 3 factors, and 10 samples results in (2^3)*10 = 80 total simulations.
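The run count and the subsequent confidence-interval analysis can be sketched together. The per-replication effect estimates below are hypothetical, and the critical value 2.262 is the two-sided 95% Student's t quantile for 9 degrees of freedom (10 samples):

```python
import statistics

# Total run count for a 2-level, 3-factor design with 10 replications
levels, factors, samples = 2, 3, 10
total_runs = levels ** factors * samples
print(total_runs)  # (2^3) * 10 = 80 simulations

# Hypothetical effect estimates, one computed from each of the 10 replications
effects = [1.9, 2.1, 1.7, 2.0, 1.8, 2.2, 1.9, 2.0, 1.6, 2.1]

mean = statistics.mean(effects)
half_width = 2.262 * statistics.stdev(effects) / len(effects) ** 0.5  # t(9, 97.5%)
print(f"effect = {mean:.3f} +/- {half_width:.3f}")
# If the interval excludes zero, the factor has a statistically
# significant impact at the chosen confidence level
```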
References
[1] Law, A. M. "Simulation Modeling and Analysis", McGraw-Hill, 5th Edition, pp. 804, 2015.
[2] Klugl, F. et al. “A validation methodology for agent-based simulations”, in Proceedings of the 2008 ACM Symposium on Applied Computing (SAC), DOI: 10.1145/1363686.1363696, 2008.
[3] Balci, O. "Introduction to Modeling and Simulation". Class Slides, ACM SIGSIM, Available in <http://www.acm-sigsim-mskr.org/Courseware/Balci/introToMS.htm>, 2013.
[4] Sargent, R. G. “Verification and validation of simulation models”, in Journal of Simulation, 7, 12-24, 2013.
[5] Siegfried, R. "Modeling and Simulation of Complex Systems", Springer Vieweg, ISBN 978-3-658-07528-6, 2014.
Page Release: 10/05/17
Last Update: 10/05/17