Thursday, January 30, 2014

IFTTT as a Lean Enabling Tool

A few weeks ago we had a bad cold snap in Connecticut, and a few of my pipes froze.  The pipes froze not just because of the sub-zero (Fahrenheit) temperatures, but also because I had my wood stove running pretty hard through the cold snap.  Because the wood stove kept the house so warm, the thermostat in my hall never turned the furnace on, which would have circulated hot water through my baseboard heating system.  Instead, the water sat in the pipes, cooled down, and eventually froze.

While I waited for the pipes to thaw, and while I repaired the one that had burst and started spewing hot water into my basement, I thought about how to prevent this from happening again.

I could have bought electric heaters for the pipes, which would have cost a fair amount of money and time to buy and install.  I could also just stop running my wood stove, which would mean burning more oil than I had been, which would also be pretty expensive.

Instead of spending money on the problem, I decided to try to implement some standard work for handling the cold in my house.  I know how silly that sounds, but bear with me. It doesn't get this cold very often in CT (although it has happened fairly often this winter), so I knew I'd get careless some night or not pay attention to the forecast, and I'd be back to doing extremely bad pipe soldering jobs (actually, if it happens again I'm going to use PEX).

So I turned to a very cool web-based tool called IFTTT (If This Then That), a service that monitors any of a huge number of different "conditions."  A condition can come from (currently) around 70 different web services like Facebook, Gmail, or Pinboard.  When a condition is satisfied (like a new tweet to you, or a new email), you can have some action run, again against that huge list of web services.

The really cool part of IFTTT is that they have some links into the real world, and not just messaging services.  One example is the WeMo switch system, so you can have a lamp turn on when someone sends you an email (who wouldn't want that?).  They're also hooked into the weather forecast, which is why I mention the pipes freezing.

So what I ended up doing was setting up a trigger condition to send me an email when the forecast calls for less than 8 degrees Fahrenheit, which is a few degrees above where I've ever had problems with pipes freezing.  I also built a list of "tasks" to do when the weather is going to be cold, basically:

  1. Make sure my cedar closet gets good airflow from the rest of the basement (that's where the burst happened)
  2. Make sure I turn the thermostat on before I go to sleep, to warm the pipes
  3. Lay off the wood stove a little bit, so the thermostat is able to come on through the night
Now I don't have to spend any mental effort tracking the weather; I just let the IFTTT service keep an eye on things and tell me only on the days when I have to worry about it.  I actually set it up to send me an alert when the forecast is for less than 8 degrees, and then again when the temperature actually dips that low.
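If it helps to picture what the service is doing for me, the logic amounts to something like the sketch below.  This is purely illustrative: the forecast lookup and email helper are hypothetical stand-ins I made up, not anything IFTTT actually exposes.

```python
# Illustrative only: the callables passed in are hypothetical stand-ins for
# the weather and email channels that IFTTT wires together for you.

FREEZE_ALERT_F = 8  # a few degrees above where I've ever had pipes freeze

def check_cold_snap(get_forecast_low, get_current_temp, send_email):
    """Mimics the two recipes: alert on a cold forecast, and again when the
    temperature actually dips that low."""
    if get_forecast_low() < FREEZE_ALERT_F:
        send_email("Cold snap in the forecast: run the pipe-freeze checklist tonight")
    if get_current_temp() < FREEZE_ALERT_F:
        send_email("It's below 8F right now: make sure the furnace can cycle")
```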

And really, I feel like the optimal setup for me would be something like the Nest thermostat, but controlled via the IFTTT service or something similar.  I'd like it to go into a different mode when the temperature outside my house drops, and no matter what the temperature in the house, just circulate the water in the pipes every hour or two.  If only there were an IFTTT trigger for when Nest integrates with IFTTT. And I just looked, and they actually have a trigger for new channels.  Crazy.

So after all that, my point is: have a look at IFTTT.com and think about whether there is some trigger you could use so you don't miss something important.  Or maybe set up a trigger so that when your most important client sends you a message, that lamp really does turn on.  Why not?  How much is it worth to make them happy?  Also, the WeMo system has a motion detector, so you could set IFTTT up to text you when someone trips that sensor.

I've actually thought about this topic quite a lot over the last few years, after noticing that a lot of the reason organizations struggle with change is that they let processes stagnate for years regardless of changes in conditions.  So I've always wanted a tool that would let me record the conditions that fed into some business decision and, once those conditions change, tell me to re-evaluate the decision and update my processes.  That's what's exciting about IFTTT: as they add more and more of these trigger channels, the tool I want so badly gets closer and closer to fruition.

So that's why I say IFTTT is a lean-enabling tool.  It lets you focus on other things, but makes sure you don't miss out on something important.

Tuesday, January 7, 2014

Monte-Carlo (sort of) Simulation in eVSM

I have been talking a little about new enhancements to the calculation engine in eVSM.  One very exciting development here is what we are calling Variational Solve.  We added this because we know that variation is a huge source of waste in most value streams.  We thought that being able to visualize variation on a map would help in reducing or eliminating it.

So there are a few different parts to the variational solve.  The first is somehow getting variation data into eVSM for the calculation engine to use.  The second is actually calculating variation in the map.  And the last part is visualizing it.  I'll go into all three in this post.

We wanted to make variation data input as easy as possible, and we settled on using Distribution shapes, which you glue onto variables in eVSM.  We included Normal, Uniform, Triangular, and Exponential distributions out of the box.  For those with simulation backgrounds, this is a very small subset of what's available in most commercial simulation packages, but we felt that most of our users wouldn't want to bother with distribution fitting software to feed into eVSM.  We kept it simple and hope that you can model whatever you need with the distributions we've made available.  For everything else, we've also provided a List distribution, which simply stores a list of values inside the shape.

So to use any of these distributions, you drag out one of the distribution shapes and glue it onto the variable you're applying the distribution to.


Each distribution has a set of parameters, which are just sub-shapes of the distribution shape.  You can hold your mouse over the sub-shape for any parameter, and Visio will display the parameter name.  I've also summarized the parameters below:
  • Uniform - the top value is the minimum value, bottom is the maximum
  • Triangular - a is the median/central tendency, b is the maximum, and c is the minimum
  • Exponential - l is the lambda parameter, or the mean
  • Normal - m is the mean, sd is the standard deviation
The List Distribution is handled differently.  To enter values into the list distribution, right-click it and select 'Edit List Values'.  There you can enter values one by one, or paste them from Excel.
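For readers who think in code, the distribution shapes behave roughly like ordinary random samplers.  Here is a minimal sketch in Python with NumPy; it is purely illustrative, not how eVSM works internally, the parameter mapping is approximate, and the exponential line assumes l is the mean.

```python
import numpy as np

rng = np.random.default_rng()

# Rough NumPy equivalents of the distribution shapes (illustrative only).
normal_sample      = rng.normal(loc=10.0, scale=2.0)                  # m = mean, sd = standard deviation
uniform_sample     = rng.uniform(low=8.0, high=12.0)                  # top = minimum, bottom = maximum
triangular_sample  = rng.triangular(left=8.0, mode=10.0, right=15.0)  # c = min, a = central value, b = max
exponential_sample = rng.exponential(scale=10.0)                      # assuming l is the mean
list_sample        = rng.choice([9.5, 10.2, 11.0, 10.7])              # List distribution: draw from stored values
```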

Next, you would want to decide what calculated values to measure the variation on.  For instance, you might want to know how the lead time varies with random inputs on inventories or cycle times.  So you would drag an Output Variation shape out from the main eVSM stencil and glue it onto the Lead Time NVU in a Time Summary.  When you run a simulation, eVSM will then store every observation of the calculated value within the Output Variation shape, and make that data available for analysis.
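Conceptually, the Output Variation shape ends up holding one observation of the calculated value per iteration.  A minimal sketch of that idea, assuming a made-up lead time that is just the sum of a few varying cycle times (none of this is eVSM's actual machinery):

```python
import numpy as np

rng = np.random.default_rng()

def sample_lead_time():
    # Hypothetical map: three activities with varying cycle times (minutes).
    cycle_times = [
        rng.normal(10.0, 2.0),
        rng.triangular(15.0, 20.0, 25.0),
        rng.uniform(5.0, 8.0),
    ]
    return sum(cycle_times)

# Each iteration of the variational solve produces one sample; the Output
# Variation shape plays the role of this list of observations.
n_iterations = 1000
lead_time_observations = [sample_lead_time() for _ in range(n_iterations)]

print(np.mean(lead_time_observations), np.percentile(lead_time_observations, 95))
```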

Now, with your variable inputs and your outputs defined, you run the Variational Solve by clicking the button of the same name in the eVSM ribbon.  This brings up the Variational Solve dialog, where you enter the number of iterations to run.  The rule of thumb here is to increase the number of iterations as the amount of variation in the system increases.

Variational Solve icon


So after you run a simulation with some number of replications, you want to analyze and visualize the results.  As I mentioned before, the solve engine will store all observations of calculated values into glued-on Output Variation shapes.  We also store the samples used for each distribution within the distribution shape.  Lastly, you can turn an Output Variation shape into an input distribution, by right clicking the shape and selecting that option.

Any distribution or Output Variation shape has a right-mouse option for plotting a histogram of observations, too.  This allows you to look at the variability of the distribution: how big the spread is and also what shape the data observations take.

You can also right-click on any variable shape and plot the distributions of all NVUs with that name.  An example can be seen below, where process A0070 has a cycle time with a mean of 10 minutes, and A0030 has one with a mean of 20 minutes.  You can see, though, that the 20-minute cycle time is probably more desirable, since its variability is so much smaller, even though it's a longer time.  Instead of trying to reduce the 20-minute time, it would serve the value stream better to reduce the amount of variation on the 10-minute cycle time.
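The same comparison is easy to see in plain numbers.  A small sketch with made-up figures: a 10-minute cycle time with a lot of spread versus a 20-minute one with very little (the means match the example above, the spreads are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng()

a0070 = rng.normal(10.0, 4.0, size=10_000)   # mean 10 min, wide spread (illustrative)
a0030 = rng.normal(20.0, 0.5, size=10_000)   # mean 20 min, tight spread (illustrative)

for name, samples in [("A0070", a0070), ("A0030", a0030)]:
    lo, hi = np.percentile(samples, [5, 95])
    print(f"{name}: mean {samples.mean():.1f} min, 5th to 95th percentile {lo:.1f} to {hi:.1f} min")

# A0030 takes twice as long on average, but its output is far more predictable;
# attacking the spread on A0070 may help the value stream more than shaving its mean.
```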



You can also view a list of all samples, either for a distribution or output variation shape, and export that to Excel for other analysis.

We've also provided a shape called the Variation Percentile, which is actually for use in static calculations.  What it does is sample the input values for a calculated value at a certain percentile.  So if, for instance, you wanted to know the minimum sum of cycle times on a map, you could do that with the Variation Percentile shape: write a managed equation that sums up all the Cycle Time values on the map and samples, say, the 5th percentile value for each one.
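The arithmetic behind that is simple: take each cycle time's stored samples, pick off the 5th percentile, and add those up.  A quick sketch with made-up sample data (nothing here is eVSM's own API):

```python
import numpy as np

# Hypothetical per-process cycle time samples (minutes), e.g. from List distributions.
cycle_time_samples = {
    "A0030": [19.2, 20.1, 20.4, 19.8, 20.0],
    "A0070": [6.5, 9.8, 12.1, 10.4, 14.0],
}

# 5th percentile of each process's samples, then the sum across the map.
p5_values = {name: np.percentile(obs, 5) for name, obs in cycle_time_samples.items()}
print("Sum of 5th percentile cycle times:", sum(p5_values.values()))
```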

Rather than write a managed equation, though, you can instead use a Data Target shape, since each NVU is an implicit data source.  So this is all you'd have to do to get the sum of all 5th percentile NVU values for cycle time:

So these variational tools were created in the hope of letting you easily add variation data to a value stream map and visualize it.  We don't want to turn eVSM into a full-on simulation tool, since even the modern, well-developed ones like Simio are still pretty hard to understand and use.  We wanted to start with a minimal set of functionality that most of our users can pick up and do something with.

One thing to keep in mind with the variational solve is that any calculated value may be sampling independent random variables from across the map.  For instance, if you're collecting output variation for Lead Time and you have a bunch of inventory centers, those inventories are going to fluctuate randomly, independently of each other.  In real life this may not be the case: inventories can be dependent on one another, and the variational solve might then not give a very valid result.  So one thing you'll have to think about is whether this limitation makes the answer more conservative or more optimistic.
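One way to get a feel for what the independence assumption does: compare the spread of a lead time built from inventories that vary independently with one where the inventories all swing together.  A rough sketch with invented numbers (three buffers of about 5 days each):

```python
import numpy as np

rng = np.random.default_rng()
n = 10_000

# Independent case: each buffer fluctuates on its own, so highs and lows partly cancel.
independent = rng.normal(5.0, 1.0, size=(n, 3)).sum(axis=1)

# Correlated case: all three buffers move together, so the swings stack up.
common = rng.normal(5.0, 1.0, size=n)
correlated = common[:, None].repeat(3, axis=1).sum(axis=1)

print("independent lead time std:", independent.std())   # roughly 1.7 days
print("correlated  lead time std:", correlated.std())    # roughly 3.0 days

# If the real inventories move together, the independent model understates the
# spread in lead time, i.e. it paints an optimistic picture of the variation.
```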

So if you do find that you're mapping a system and have some variability to address, try using the variational capabilities in eVSM v6.  Visualize the sources of variation on the map, and use that to start working out how to reduce variation, and see how that can affect your future state.  Let us know how it goes, and if there are any limits you run into, and we will be happy to work out how to move past them.

Friday, January 3, 2014

Lean guitar-making

I'm probably late hearing this, but I just listened to the Alton Brown Cast where he interviewed Bob Taylor from Taylor Guitars.  They talk about quite a lot, including sustainability and lean manufacturing for guitars.  Bob talks about how, when they started building guitars, they would try to build a batch of 10 guitars to optimize for setup time, but would usually come in much later than planned, with only 5 guitars finished.

So they visited a luthier who had maybe 10 guitars going at once, but at different stages, so that instead of finishing 10 guitars every 10 days, he got one guitar done every day.  This meant he had to do more setups, but repeating the setup so often actually let him get really good at the setup job.

The thing I really like about Taylor's approach is that it's lean without having to call it lean: they're doing the right things and just getting things done.  That's the best way to do it.

It's a great listen, especially if you're a fan of Alton Brown, like me.

Alton Brown Cast