Wednesday, February 22, 2012

Winter Simulation Conference 2011

Had another good year at the Winter Simulation Conference last year in Tempe, AZ. I didn't get to play golf (which is lucky for everyone else on the course), and there was more rain than I'd like, but the conference itself was probably the best I've been to (out of six). It didn't hurt that they held the conference mere miles from an In-N-Out Burger. I also felt that manufacturing is starting to get a little more popular at the conference, which I like.

I was also blown away by Simio for the fourth year in a row, this time with the announcement of the Risk-based Planning & Scheduling tool they've released. I also took their practitioner training course, which showed me even more how powerful their product is becoming.

Forio was there again and was as interesting as always, as they now support running discrete event simulations in "the cloud." Very cool technology, which I think has a lot of potential for making simulation more of an everyday tool, on top of the education arena where they already seem to be popular.

For the first time, I chaired a session of presentations, and I was very impressed with the papers presented on auto-generated simulation models. I also presented in the Sustainability track on manufacturing sustainability (I think...it's been a couple of months, and I guess I should have posted this sooner). Anyway, it was probably nothing terribly new for anyone who reads this blog, but I feel it was probably the last I'll talk about my work on CMSD, at least for a while:


If anyone has any comments or questions, I'm always happy to respond. Though the "contact me" link on the top right seems to reach me much faster than the comments. At least until I find a way to automatically subscribe to all comments...

Interactive MT Connect Simulator

In the series of posts I made on MT Connect and a viewer I created for consuming some of this data, I also put up a post on a simulator for feeding data into MT Connect without having an actual machine. The simulator was built in Visio: I could draw a "toolpath," and a little shape would follow the path, spitting out coordinates and breaking down intermittently, all the while pushing data to an MT Connect agent.

This simulator didn't work well for developing an MT Connect client I've been working on, because it was just a bit too random. I could never really demonstrate the software the way I wanted, since I had to wait an indeterminate amount of time for whatever I wanted to happen to actually happen.

So I decided to mess around with building an interactive tool I could use to give the demo a more predictable feel. I wanted to be able to, for instance, put a machine into a broken-down state and take it out of that state at will. I wanted to preempt a process if it was taking too long, or if I needed to get another part onto the machine.

The premise for how this simulator works is that I have a bank of machines, as well as some number of part types (three for now). Each part type has its own processing sequence and takes a different path through the machines. Each process also has a processing time distribution assigned to it, so processing times are somewhat random (normally distributed, I believe).
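To make that concrete, here's a minimal sketch of what that data model could look like in JavaScript. The part type names, routings, and distribution parameters are made up for illustration; they're not taken from the actual simulator.

```javascript
// Hypothetical data model: three part types, each with its own routing
// (sequence of machines) and a normal processing-time distribution per step.
var partTypes = {
  A: { route: ["M1", "M3", "M2"], times: [{ mean: 60, sd: 10 }, { mean: 45, sd: 5 }, { mean: 30, sd: 8 }] },
  B: { route: ["M2", "M1"],       times: [{ mean: 90, sd: 15 }, { mean: 20, sd: 4 }] },
  C: { route: ["M3", "M2", "M1"], times: [{ mean: 40, sd: 6 },  { mean: 40, sd: 6 }, { mean: 40, sd: 6 }] }
};

// Box-Muller transform: sample a normally distributed processing time.
function sampleNormal(mean, sd) {
  var u1 = 1 - Math.random();   // avoid log(0)
  var u2 = Math.random();
  var z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  return Math.max(0, mean + sd * z);   // don't allow negative times
}

// Sampled processing time for step i of a given part type.
function processingTime(type, i) {
  var t = partTypes[type].times[i];
  return sampleNormal(t.mean, t.sd);
}
```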

To use the simulator, you grab a part from one of the in queues and drop it onto a machine. The machine takes over, runs for a time, and spits the part out to the right of the machine. You then pick up the part and drop it in the out queue, and it travels to the next queue in its sequence.
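Here's a rough sketch, not the actual simulator code, of how that drag-and-drop could be wired up with Raphael. The shapes, sizes, and the startProcess helper are hypothetical; it reuses the processingTime function from the earlier sketch.

```javascript
// "canvas" is assumed to be an empty <div id="canvas"> on the page.
var paper = Raphael("canvas", 600, 300);

var machine = paper.rect(250, 100, 80, 80).attr({ fill: "#ccc" });
var part = paper.circle(50, 140, 12).attr({ fill: "#39f" });

// Standard Raphael drag pattern: remember the start position, move by the
// drag offset, and on drop check whether the part landed on the machine.
part.drag(
  function (dx, dy) {                // onmove
    this.attr({ cx: this.ox + dx, cy: this.oy + dy });
  },
  function () {                      // onstart
    this.ox = this.attr("cx");
    this.oy = this.attr("cy");
  },
  function () {                      // onend: was it dropped on the machine?
    var b = machine.getBBox();
    var cx = this.attr("cx"), cy = this.attr("cy");
    if (cx >= b.x && cx <= b.x2 && cy >= b.y && cy <= b.y2) {
      startProcess(machine, this);
    }
  }
);

// Hypothetical process: hold the part for a sampled time, then eject it
// to the right of the machine.
function startProcess(machine, part) {
  var b = machine.getBBox();
  part.attr({ cx: b.x + b.width / 2, cy: b.y + b.height / 2 });
  setTimeout(function () {
    part.attr({ cx: b.x2 + 30 });
  }, processingTime("A", 0) * 1000);
}
```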

Behind the scenes, I have JavaScript that sends an HTTP PUT to an MT Connect agent (it has to go through a PHP script because of the browser's same-origin restrictions on cross-site requests) whenever some event occurs (part on machine, part off, machine state change, etc.). Double-clicking the left arrow for a machine will make the machine "break down," and double-clicking the right arrow will restart it. Double-clicking a machine with a part on it will preempt the process and kick the part off the machine; it'll have to finish on that machine later. Double-clicking the out queue will also preempt the process, but the part is considered "finished" and moves on to its next process.
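A rough sketch of what that event reporting might look like follows. The proxy.php URL, the device and data item names, and the leftArrow/rightArrow/outQueue element names are all placeholders, since the actual PUT interface depends on how your agent is configured; the proxy only exists to get around the same-origin restriction and forwards the request to the agent as an HTTP PUT.

```javascript
// Push an event to the agent through the (hypothetical) PHP proxy.
function reportEvent(dataItem, value) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "proxy.php?device=M1&" +
      encodeURIComponent(dataItem) + "=" + encodeURIComponent(value), true);
  xhr.send();
}

// Double-click handlers wired to hypothetical Raphael elements
// (leftArrow, rightArrow, outQueue) alongside the machine from the
// previous sketch.
leftArrow.dblclick(function () {
  machine.attr({ fill: "#f33" });              // show the machine as down
  reportEvent("availability", "UNAVAILABLE");
});

rightArrow.dblclick(function () {
  machine.attr({ fill: "#ccc" });              // bring it back up
  reportEvent("availability", "AVAILABLE");
});

machine.dblclick(function () {
  reportEvent("execution", "INTERRUPTED");     // preempt; finish on the machine later
});

outQueue.dblclick(function () {
  reportEvent("execution", "READY");           // preempt, but treat the part as finished
});
```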

You can play with it below, and muck around in the source code if you want. It's all made using HTML and JavaScript, utilizing the Raphael scalable vector graphics library to make the graphics easy. The cool thing is, this more or less works in any browser (thanks to Raphael's fallback to VML for older versions of IE). At some point I hope to turn this into an open-source project so others can add new data items. It'd be good, too, to be able to point this thing at other agents, so anyone can use it. Right now it's hooked to the .NET agent running on our network, so you need a secret code to start submitting to it.

Any comments or suggestions would be more than welcome.




Monday, February 20, 2012

Process Summary Read/Write

QUEST does not provide a BCL command to give you a process cycle time. Instead, there's an SCL routine called sample_cycle_time that does just that: gives you a sample. If your cycle time is constant, then fine, that will return the cycle time for the process. However, if you want to get the distribution name as well as any of its parameters, you're on your own.

Years ago I put together an SCL script that reads the mdl file for the current model and parses out any process definitions it finds. You could pretty easily modify this routine to use an arbitrary mdl file, if you want.

The script can be found here. It saves the definitions out to a tab-delimited text file and tries to open it in Excel. I have a constant named EXCEL_LOCATION that points to the Excel.exe file, which the launch_excel procedure uses to open an Excel file for you, so you'd have to provide that constant as well.

I have another script that allows me to read the tab-delimited file back in, assuming you've made some modifications to the cycle times you want to apply to the model. At some point I'll post that up.