Tuesday, January 7, 2014

Monte-Carlo (sort of) Simulation in eVSM

I have been talking a little about new enhancements to the calculation engine in eVSM.  One very exciting development here is what we are calling Variational Solve.  We added this because we know that variation is a huge source of waste in most value streams.  We thought that being able to visualize variation on a map would help in reducing or eliminating it.

So there are a few different parts to the variational solve.  The first is somehow getting variation data into eVSM for the calculation engine to use.  The second is actually calculating variation in the map.  And the last part is visualizing it.  I'll go into all three in this post.

We wanted to make variation data input as easy as possible, and we settled on using Distribution shapes, which you glue onto variables in eVSM.  We included Normal, Uniform, Triangular, and Exponential distributions out of the box.  For those with simulation backgrounds, this is a very small subset of what's available in most commercial simulation packages, but we felt that most of our users wouldn't want to bother with distribution fitting software to feed into eVSM.  We kept it simple and hope that you can model whatever you need with the distributions we've made available.  For everything else, we've also provided a List distribution, which simply stores a list of values inside the shape.

So to use any of these distributions, you drag out one of the distribution shapes and glue it onto the variable you're applying the distribution to.


Each distribution has a set of parameters, which are just sub-shapes of the distribution shape.  You can hold your mouse over the sub-shape for any parameter, and Visio will display the parameter name.  I've also summarized the parameters below:
  • Uniform - the top value is the minimum value, bottom is the maximum
  • Triangular - a is the central value (the mode), b is the maximum, and c is the minimum
  • Exponential - l is the lambda (rate) parameter; the mean of an exponential distribution is 1/lambda
  • Normal - m is the mean, sd is the standard deviation
The List Distribution is handled differently.  To enter values into the list distribution, right-click it and select 'Edit List Values'.  There you can enter values one by one, or paste them from Excel.
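To make the parameter mapping concrete, here is a rough sketch in Python/NumPy (not eVSM itself) of what sampling from each of these distributions amounts to.  The parameter values are placeholders, and how eVSM interprets the exponential's l (rate vs. mean) is an assumption flagged in the comment:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 1000  # samples per distribution; the parameter values below are placeholders

    samples = {
        # Uniform: top value = minimum, bottom value = maximum
        "uniform": rng.uniform(low=8.0, high=12.0, size=n),
        # Triangular: c = minimum, a = central value (taken here as the mode), b = maximum
        "triangular": rng.triangular(left=8.0, mode=10.0, right=14.0, size=n),
        # Exponential: NumPy's exponential takes the scale (the mean); if l is a rate, use scale = 1 / l
        "exponential": rng.exponential(scale=10.0, size=n),
        # Normal: m = mean, sd = standard deviation
        "normal": rng.normal(loc=10.0, scale=1.5, size=n),
        # List: resample (with replacement) from the values pasted into the shape
        "list": rng.choice([9.0, 9.5, 10.0, 11.0, 12.5], size=n, replace=True),
    }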

Next, you would want to decide what calculated values to measure the variation on.  For instance, you might want to know how the lead time varies with random inputs on inventories or cycle times.  So you would drag an Output Variation shape out from the main eVSM stencil and glue it onto the Lead Time NVU in a Time Summary.  When you run a simulation, eVSM will then store every observation of the calculated value within the Output Variation shape, and make that data available for analysis.

Now, with your variable inputs and your outputs defined, you run the Variational Solve by clicking the button of the same name in the eVSM ribbon.  This brings up the Variational Solve dialog, where you enter the number of iterations to run.  The rule of thumb here is: the more variation in the system, the more iterations you should run.

Variational Solve icon
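Conceptually, each iteration re-samples every glued-on input distribution, re-runs the map calculations, and appends the result to any Output Variation shapes.  Here is a toy sketch of that loop in Python, with a made-up two-process, two-inventory map and an invented lead-time formula; none of this is eVSM's internal code:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical map: two processes with cycle-time variation and two
    # inventories (in days of demand) with their own variation.
    def sample_inputs():
        return {
            "CT_A0070": rng.normal(10.0, 4.0),       # minutes, wide spread
            "CT_A0030": rng.normal(20.0, 0.5),       # minutes, tight spread
            "INV_1": rng.triangular(1.0, 2.0, 4.0),  # days
            "INV_2": rng.triangular(0.5, 1.0, 3.0),  # days
        }

    def lead_time_days(x):
        # Simplified lead-time ladder: inventory days plus cycle times converted to days
        minutes_per_day = 8 * 60
        return x["INV_1"] + x["INV_2"] + (x["CT_A0070"] + x["CT_A0030"]) / minutes_per_day

    iterations = 2000
    # What an Output Variation shape glued to the Lead Time NVU would accumulate:
    lead_times = np.array([lead_time_days(sample_inputs()) for _ in range(iterations)])

    print(f"Lead time over {iterations} iterations: mean = {lead_times.mean():.2f} days, "
          f"5th pct = {np.percentile(lead_times, 5):.2f}, 95th pct = {np.percentile(lead_times, 95):.2f}")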


So after you run a simulation with some number of replications, you want to analyze and visualize the results.  As I mentioned before, the solve engine will store all observations of calculated values into glued-on Output Variation shapes.  We also store the samples used for each distribution within the distribution shape.  Lastly, you can turn an Output Variation shape into an input distribution by right-clicking the shape and selecting that option.

Any distribution or Output Variation shape has a right-mouse option for plotting a histogram of observations, too.  This allows you to look at the variability of the distribution: how big the spread is and also what shape the data observations take.
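If you'd rather do the same kind of plot outside eVSM (say, after exporting the samples), a histogram of the stored observations is only a few lines in something like matplotlib.  The observation data here is just a placeholder:

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder for the observations an Output Variation shape would hold
    observations = np.random.default_rng(0).normal(loc=5.2, scale=0.8, size=2000)

    plt.hist(observations, bins=30, edgecolor="black")
    plt.xlabel("Lead Time (days)")
    plt.ylabel("Number of observations")
    plt.title("Spread and shape of the Lead Time observations")
    plt.show()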

You can also right-click on any variable shape and plot the distributions of all NVUs with that name.  An example of this can be seen below, where process A0070 has a cycle time with a mean of 10 minutes and A0030 has one with a mean of 20 minutes.  You can see, though, that the 20-minute cycle time is probably more desirable, since its variability is so much smaller, even though it's a longer time.  Instead of trying to reduce the 20-minute time, it would serve the value stream better to reduce the amount of variation on the 10-minute cycle time.
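To put rough numbers on that reasoning (the two means come from the example; the spreads are assumed purely for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000

    ct_a0070 = rng.normal(10.0, 4.0, n)   # shorter mean, assumed wide spread
    ct_a0030 = rng.normal(20.0, 0.5, n)   # longer mean, assumed tight spread

    for name, ct in [("A0070", ct_a0070), ("A0030", ct_a0030)]:
        print(f"{name}: mean = {ct.mean():5.1f} min, std = {ct.std():4.1f}, "
              f"95th pct = {np.percentile(ct, 95):5.1f} min")

    # A0070's spread is large relative to its mean, so it drives most of the
    # total variation even though its average time is shorter.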



You can also view a list of all samples, either for a distribution or an Output Variation shape, and export that to Excel for other analysis.

We've also provided a shape called the Variation Percentile, which is actually for use in static calculations.  What it does is sample the input values for a calculated value at a given percentile.  So if, for instance, you wanted to know the minimum sum of cycle times on a map, you could do that with the Variation Percentile shape.  You would write a managed equation that sums up all the Cycle Time values on the map, sampling, say, the 5th percentile value for each one.
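In other words, the calculation is "take the 5th percentile of each Cycle Time's samples, then add them up".  A minimal sketch with made-up sample data (this is not the managed-equation syntax itself):

    import numpy as np

    rng = np.random.default_rng(2)

    # Made-up cycle-time samples for each process on the map (one array per NVU)
    cycle_time_samples = {
        "A0010": rng.normal(12.0, 2.0, 1000),
        "A0030": rng.normal(20.0, 0.5, 1000),
        "A0070": rng.normal(10.0, 4.0, 1000),
    }

    # Take the 5th percentile of each process's samples, then sum them -- the
    # "minimum-ish" total cycle time described above.
    p5_sum = sum(np.percentile(s, 5) for s in cycle_time_samples.values())
    print(f"Sum of 5th-percentile cycle times: {p5_sum:.1f} min")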

Rather than write a managed equation, though, you can instead use a Data Target shape, since each NVU is an implicit data source.  So this is all you'd have to do to get the sum of all 5th percentile NVU values for cycle time:

So these variational tools were created in the hopes of letting you easily add variation data to a value stream map and visualize it.  We don't want to turn eVSM into a full-on simulation tool, since even the modern, well-developed ones like Simio are still pretty hard to understand and use.  We wanted to start with a minimum set of functionality that most of our users can pick up and do something useful with.

One thing to keep in mind with the Variational Solve is that any of these calculated values can be sampling independent random variables from across the map.  For instance, if you're capturing the output variation for Lead Time and you have a bunch of inventory centers, those inventories are going to fluctuate randomly, independently of each other.  In real life this may not be the case: inventories can be dependent on one another, and the Variational Solve might then not give a very valid result.  So one thing you'll have to think about is whether this limitation makes the answer more conservative or more optimistic.
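Here is a toy illustration (invented numbers only) of why that matters: if two inventories tend to rise and fall together, the spread of their total is wider than independent sampling would predict, so in that case the independent answer is the optimistic one.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 10_000

    # Independent case: each inventory (in days) is sampled on its own.
    independent_total = rng.normal(2.0, 0.5, n) + rng.normal(3.0, 0.5, n)

    # Positively correlated case: a shared disturbance moves both together.
    shared = rng.normal(0.0, 0.5, n)
    correlated_total = (2.0 + shared + rng.normal(0.0, 0.2, n)) \
                     + (3.0 + shared + rng.normal(0.0, 0.2, n))

    print(f"independent: std of total inventory = {independent_total.std():.2f} days")
    print(f"correlated:  std of total inventory = {correlated_total.std():.2f} days")
    # Positive correlation widens the total's spread, so assuming independence
    # understates the variation here; negative correlation would do the opposite.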

So if you do find that you're mapping a system with some variability to address, try the variational capabilities in eVSM v6.  Visualize the sources of variation on the map, use that to start working out how to reduce the variation, and see how that can affect your future state.  Let us know how it goes, and if you run into any limits, we will be happy to work out how to move past them.
