Wednesday, July 13, 2011

HTML5/JavaScript MT Connect Viewer

In my last few posts, I've talked about what MT Connect is, how it works, and why CCAT is excited about it. Now I want to talk about a proof-of-concept web-based viewer I've created for viewing the real-time status of our machine lab.

As I said before, an MT Connect agent provides the current status of the devices talking to it by returning an XML string with all the details in it. Since the advent of AJAX it's been very common for developers to create web-based applications that hit a server for XML data and, on loading it, modify what's being shown on a web page (using DOM manipulation). This is basically the strategy I wanted to take, with the added feature of using the HTML canvas element included in modern HTML5-compliant web browsers. The canvas basically allows you to draw in 2D on a portion of a web page. The great thing about using HTML and JavaScript is that it works on any platform that supports HTML5 features (including iPad/iPhone/Android browsers).

One reason I chose the canvas element was because we had built a flash-based viewer a few years ago for a single machine, which just listed out some of the MT Connect data we were getting in real time (spindle speed, etc...). But now we have three machines (and maybe more in the future) that we'll want to be able to view at the same time, so I thought having an overhead view of our layout would be a cool way to get heads-up info on what's happening in our lab.

Luckily I had built a basic JavaScript library for creating and drawing 2D shapes on a canvas while learning JavaScript. So all I really had to do was write some code to interrogate the agent, extract data on the machines I wanted to know about, update the 2D shape objects with the right information, and let my graphics library handle the drawing.

To start, I used a VBA macro in Visio to export my 2D layout to JavaScript code that creates the corresponding 2D shape objects in my graphics library's format. At this point I had a canvas element that would draw our machine lab layout, which was a great start.
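The exported code is just a series of calls into my shape library. As a rough sketch (the ShapeCanvas/Rect names and coordinates here are made up for illustration; the real library API isn't shown in this post), it looks something like this:

var lab = new ShapeCanvas(document.getElementById("labCanvas"));  // canvas element that holds the lab layout
lab.addShape(new Rect("haas",  40, 120, 80, 60));   // name, x, y, width, height (layout units from Visio)
lab.addShape(new Rect("hurco", 160, 120, 80, 60));
lab.addShape(new Rect("yasda", 280, 120, 80, 60));
lab.draw();                                          // draw the static layout once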

I then added a JavaScript function that uses jQuery to get an XML string from my MT Connect agent detailing the current status of my machines. I also used a jQuery plugin to convert the XML output into JSON, which turns the XML data into native JavaScript objects, and then stepped through that tree to find what I need.
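Roughly, that function looks like the sketch below. The current.php name is just a placeholder for the PHP proxy I describe further down (see the cross-site note below), and the xml2json call stands in for whichever XML-to-JSON plugin you use:

function getCurrentStatus(callback) {
    $.ajax({
        url: "current.php",                    // proxies the agent's /current request
        dataType: "xml",
        success: function (xmlDoc) {
            var status = $.xml2json(xmlDoc);   // XML-to-JSON plugin; the exact call depends on the plugin
            callback(status);                  // hand the native JavaScript objects to the caller
        }
    });
}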

All that was required now was a function that would get the current status information as JSON data, map that data to my 2D shape objects, and then call itself after some interval (using setTimeout). So I wrote that function, and it works pretty well, just turning machines different colors depending on their current state. You can click on a machine and see some more detailed information pop up on the side.
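Stripped down, the polling loop is something like this (machineShapes, findDevice, and colorForState are placeholders for however your shape library and status-parsing code are organized):

function updateLab() {
    getCurrentStatus(function (status) {
        for (var name in machineShapes) {                           // machineShapes maps machine name -> 2D shape
            var device = findDevice(status, name);                  // walk the JSON tree for this device
            machineShapes[name].fillColor = colorForState(device);  // e.g. green = working, red = down
        }
        lab.draw();                          // redraw the layout with the new colors
        setTimeout(updateLab, 2000);         // poll the agent again in a couple of seconds
    });
}
updateLab();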

I've also played with storing the last 150 spindle locations and drawing them on top of the machine. The problem there is converting the spindle locations into relative positions within the bounds of the machine; otherwise the toolpath usually wanders outside the machine's footprint. But if you watch long enough, you can see it follow the shape I set in my MT Connect machine simulator.
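The coordinate fix amounts to scaling the reported X/Y into the machine's on-screen footprint, along these lines (the travel limits here are invented for illustration):

function toCanvasPoint(x, y, shape) {
    var travel = { xMin: 0, xMax: 500, yMin: 0, yMax: 400 };   // machine travel limits (made-up numbers)
    return {
        x: shape.x + ((x - travel.xMin) / (travel.xMax - travel.xMin)) * shape.width,
        y: shape.y + ((y - travel.yMin) / (travel.yMax - travel.yMin)) * shape.height
    };
}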

I've also started playing with pulling in other information, like temperature, and displaying that. Right now my simulator doesn't modify the temperature but that should be easy enough. I can see it working where there's a little thermometer shape next to the machine that changes color with increasing temperature. Or even a spindle shape that moves around drawing the toolpath line and changing color with temperature. There's also a power consumption aspect to this all, where maybe I draw bar gadgets next to the machines so you can see at a glance which machines are using the most electricity.

Regardless, MT Connect can provide some very interesting data, but there's a lot of it. At some point the really hard part will be in gleaning out what a user wants to see and only showing them that.

One issue I ran into while building this tool had to do with cross-site request restrictions. An MT Connect agent is basically a web server, except that it only serves XML strings based on what URL you go to. It can't serve arbitrary files (like JavaScript/HTML files). This isn't an issue if you're writing an application in VB/A, for instance, because the XMLHTTPRequest object provided by Microsoft doesn't care where the request is going. However, XMLHTTPRequest does care when it's being called in a web browser. Basically, because of the browser's same-origin policy, any AJAX request you send has to be served by the same server that gave you the HTML document your JavaScript is running in.

To get past this issue (which is a showstopper if you can't actually get anything out of the agent), I used a PHP script that, when called, retrieves /current from the agent and returns it to the caller, keeping the request on the same domain as far as the browser knows. That works well enough, it seems, but it means anyone who wants to do something similar in the browser also has to have PHP running (or some server-side scripting that can do the same thing).

One other issue I'd like to mention is that I have to use a jQuery plugin to convert the agent's XML output string into JSON for creating native JavaScript objects. It'd be very cool if JSON were an output option on the agent. Maybe that should be something I work on building, and if successful, contribute to the project and try to get included in a release.

For now, you can see the viewer at ccat.us/mtconnect. At some point, I hope the simulated data goes away and we can start showing the real status of our machines. I'd also like to work on automating the creation of these HTML pages so that other MT Connect users can view their shops using the same code I've written. We'll see where this all goes. In the meantime, I cobbled together another version of the viewer that shows the status of the simulated machine at agent.mtconnect.org, as well as the toolpath the spindle is following (right now it only keeps about 20 points, and kinda looks like the game "snake", but I might up that value to show more points).

Contact me at jfournier@ccat.us if you have any questions about this.

Machine Simulator for MT Connect

In my last post, I talked about how MT Connect works, and how our machine lab isn't yet hooked up to an MT Connect agent (though we're working on it). While I'm waiting for the machines to be connected (and to provide some more lively data) I've created a very basic machine simulator with a fake MT Connect adapter built into the simulation, to provide some action to an agent, ultimately letting me view that action in a web-based tool.

I built the simulator using Microsoft Visio. Basically, I created a fake toolpath for a spindle to follow by drawing a bunch of node shapes and connecting them with splines. Then I created a circle representing the spindle of a machine. The device/spindle shape also holds custom ShapeSheet attributes mapping device attributes to the device/data item IDs for the machines defined on my agent.

The spindle shape just follows the path of the lines (using a Visio animation routine I built a while back), and at each animation update (set to 2 seconds) it sends our MT Connect agent the current location of the spindle, as well as the current status of the virtual machine (working or broken down). You can see a screen shot of the simulator in action below:


To handle reporting the state of the machine, I just hold a "Next State Change" time on each device object, and on every update see if that time has elapsed. If it has, I flip states (from working to broken or vice versa) and then schedule the next state change event for that machine. Very basic stuff.
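Sketched out (in JavaScript rather than the simulator's VBA, just to show the idea; the durations are invented):

function maybeFlipState(device, now) {
    if (now >= device.nextStateChange) {
        device.state = (device.state === "WORKING") ? "DOWN" : "WORKING";   // flip the machine's state
        var holdMinutes = 1 + Math.random() * 10;                           // made-up hold time
        device.nextStateChange = now + holdMinutes * 60 * 1000;             // schedule the next change
    }
}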

I mentioned in my last post that the two agents I've used (available at https://github.com/mtconnect) both allow you to use an HTTP "PUT" request to store information about a machine. So, every time I want to update my MT Connect agent, I just create an XMLHTTPRequest object in VBA (from the Microsoft XML library reference) and open a URL that basically looks like this: http://mtagent/storeSample?timeStamp=(current time)&dataItemID=(data item id for x location, or whatever)&value=(whatever value to store)
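For reference, here's the same call sketched in JavaScript (the simulator actually does this from VBA with the Microsoft XML library; in a browser, the same-origin caveat from the viewer post would apply):

function storeSample(agentUrl, dataItemId, value) {
    var url = agentUrl + "/storeSample" +
        "?timeStamp=" + encodeURIComponent(new Date().toISOString()) +
        "&dataItemID=" + encodeURIComponent(dataItemId) +
        "&value=" + encodeURIComponent(value);
    var req = new XMLHttpRequest();
    req.open("PUT", url);   // the agents I've used accept PUT (and even a plain GET) to this URL
    req.send();
}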

With the simulator created, I just set up three device shapes in Visio: one for our Haas machine, one for our Hurco, and one for our Yasda. These machines were already defined in the Devices.xml file on our MTAgent server, which is where our MT Connect agent runs.

By using the simulator to populate data for our real machines, I was able to build a viewer that would dynamically update and show the latest status of all our machines (even though the status was fake). Now, when we get the machines hooked up for real, the same viewer will be able to show the actual status of our machines, which will include a lot more than just spindle X/Y location and if the machine is running or not.

In my next and final post on MT Connect (for the next few weeks at least) I will talk about the HTML/JavaScript viewer I wrote for viewing the current state of our machine lab.

MT Connect Basics

So CCAT has been tracking this standard/protocol called MT Connect for a while. I've been very interested in it, especially from the standpoint of consuming all this great data that machine shops with MT Connect can generate. I think we're still a long way from having this tool be as ubiquitous as general CNC machining, but I'm hoping CCAT is there with some incredible tools when that does happen (and I think it's inevitable).

So in the next few posts I'd like to talk about how I constructed a web-based viewer for MT Connect data. But first, I'd like to talk briefly about how MT Connect works, as well as how I'm generating data for this proof-of-concept.

MT Connect has been created as a data protocol, and really nothing else. My understanding is that it exists to standardize/define things like: for a milling machine the spindle is where the tool is, and the spindle turns at a rate called SpindleSpeed, and there are certain units that can be measured in, etc... The protocol also defines several key terms: agents, adapters, and devices.

A device is just a discrete unit that can be connected to MT Connect. The most important devices typically are the machines/controllers themselves, but I know the standard involves more than that, including bar stock feeders, tool holders, all kinds of stuff.

An agent is basically a software executable that sits somewhere and listens to what devices want to tell it. The agent can also be interrogated by client software that wants to know things about those devices. The agents I've dealt with so far (both are open source and free to use: https://github.com/mtconnect) are implemented over TCP/IP, meaning they work over the internet/web.

The two agents implement HTTP web servers, which allow you to interrogate them using regular web technologies (e.g. a "GET" XMLHTTPRequest). When you interrogate an agent, it spits back your data as an XML document, which is very easy to program for. The two agents I've used also allow you to use HTTP "PUT" requests to store information about devices, meaning you can update an agent just by visiting a URL in your web browser. Very cool, and easy to do.

So, probably the most difficult item here is the adapter. The adapter is what allows a device to talk to the agent. Machine tools typically use their own proprietary protocols and data formats and terminologies within their controllers. A Fanuc controller can put out completely different data streams than a Haas controller, even if they're saying basically the same things.

So an adapter has to be made to convert one controller's "language" (proprietary format/protocol) into the standard MT Connect protocol and send the result to the agent. That's the adapter's job. The adapter can live on the controller as a separate piece of software (if the controller allows that), or it can live on a computer that sits near the machine and communicates with it over a parallel or serial cable. Either way, the adapter provides a stream of data about its machine in a standard format.

So CCAT has this Advanced Manufacturing Center where we have a few machines on consignment from Yasda and Hurco, as well as housing our Laser Applications Lab. We want to have these machines hooked up through MT Connect, but we're still working to install the correct adapters for these machines. In the next post, I'll talk about how I built a machine simulator that also acts as an adapter, sending the status of a simulated machine to an MT Connect agent.

Tuesday, July 12, 2011

MT Connect and CCAT

MT Connect is a protocol/standard developed for the purpose of allowing machine tools to communicate information about themselves to other computer systems. For instance, this allows a machine to report its current status - whether it's broken down, working, or idle. It can also report its current spindle speed, feed rate, location, and even the current line of g-code that it's running.

All in all, it's a pretty great way to collect activation/utilization data on any machine, regardless of the brand, make, or model of controller running it. The obvious upside for simulation users is that you can process the data from a machine and get a list of state-change events and the times they happened at. That data can be run through a distribution-fitting tool to spit out mean-time-between-failure and mean-time-to-repair distributions. There's a very good paper on this topic here.
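As a simplified sketch of the idea (just means rather than fitted distributions, and assuming a machine that is either working or down), the processing might look like this in JavaScript:

function mtbfMttr(events) {   // events: [{ time: <ms>, state: "WORKING" | "DOWN" }, ...] in time order
    var up = [], down = [];
    for (var i = 1; i < events.length; i++) {
        var span = events[i].time - events[i - 1].time;
        if (events[i - 1].state === "WORKING") up.push(span);   // uptime that ended in a failure
        else down.push(span);                                    // time spent broken down
    }
    var mean = function (a) { return a.reduce(function (s, v) { return s + v; }, 0) / a.length; };
    return { mtbf: mean(up), mttr: mean(down) };
}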

The standard is still evolving. Right now it really only covers the machine resources on a shop floor, but at some point I hope it will include labor resources, and part/process tracking. This would allow us to get accurate cycle time/setup time observations on a part-process-machine level, and get us to the point where simulation models can be initialized to the current state of the system being modeled, and then run to predict the next few hours or days or weeks worth of production.

MT Connect can help turn simulation users into something like weather forecasters, though instead of predicting the weather, they're predicting when jobs will be completed and planning contingencies in case a machine goes down. Powerful stuff.

And CCAT is hoping to help make this a reality. We have been attending the MT Connect standards meetings, and I just finished developing a proof-of-concept JavaScript/HTML-based viewer for MT Connect, which I'll detail more in a post to come.

Thursday, June 23, 2011

SCL Time Measurement

Lately I've been working on some models that are pretty involved, both in terms of the scale of what's going on and in making some calls that seem to run slowly (creating new parts, packing parts, allocating lists and structures in memory). I wasn't sure what was taking so long, and with SCL the way it is, it's very hard to tell.

I could have instrumented my code by hand, with a subroutine call to store the start time of every subroutine in a logic file, and another call at the end of the subroutine (before the End line or any Return lines) that logs how long that subroutine took to run.
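The pattern, sketched in JavaScript just to make it concrete (the real tool emits equivalent SCL calls into each subroutine):

var timeLog = [];

function logStart() {
    return Date.now();                   // the SCL version records the start time in the log file
}

function logEnd(name, startedAt) {
    var elapsed = Date.now() - startedAt;
    if (elapsed > 1) timeLog.push(name + "\t" + elapsed);   // only keep calls that took more than 1 ms
}

function some_subroutine() {
    var t0 = logStart();                 // inserted right after Begin
    // ... original subroutine body ...
    logEnd("some_subroutine", t0);       // inserted before End (and before any Return)
}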

Doing so manually would take quite a bit of effort, so I tried my hand at writing something in VB6 that would do the trick, and I think I've succeeded.

I created a standalone executable you can call from any command shell (on Windows). You pass in the full path to the SCL file to instrument; optionally, you can pass a second argument that is the name of the init logic in your model, and an optional third argument specifying a string pattern for commenting out any existing instrumentation you've added to the logic.

Here's an example call I've used:
scltimeinstrument.exe "C:/Delmia/Questlib/LOGICS/CMSD/CMSD.scl" cmsd_model_init_logic *writer(*,*)*

So the first argument, again, is just the path to the SCL file to instrument. The second argument is cmsd_model_init_logic, so the instrumenter program will find the cmsd_model_init_logic subroutine as it's defined in the file, and the first line after Begin in that subroutine will become a call to an included subroutine that initializes the time log file (hardcoded as C:/Tmp/scl_times_log.txt).

The third argument, "*writer(*,*)*", tells the program to find any line that contains the text "writer(" followed by anything, then a comma, then anything, then a closing paren, with anything allowed outside the parens (to allow comments). This pattern works well for me because I have one routine called writer, where I pass some string message as the first argument and the second argument says what the call means (i.e. error message, standard debug, or starting/ending some subroutine). So this pattern matches my usage, because my code is instrumented with this single call, and I can now easily wipe those calls out.

The output file is just a flat text file (saved to C:/Tmp/scl_times_log.txt) that shows the time (in milliseconds) each subroutine call took, every time it was called (so long as that time was more than 1 millisecond). It's up to you to analyze the data (I use AVERAGEIF, etc. in Excel to summarize it).

Anyway, enough of my blabbing; here's a link to the tool.

Monday, April 11, 2011

Associative Arrays / Hash Tables / Collections in SCL

I've been using QUEST for just over five years now, and over time I've become more and more bothered by its lack of a hash table data structure, like I get out of the box in VBA (I know, VBA is pretty ancient stuff itself, but compared to SCL, VBA is a dream, in my opinion).

A hash table is a data structure that is basically a string array which, instead of being indexed by numbers, is essentially indexed by strings. In VBA, a hash table is called a Collection, and it provides methods for adding, removing, and iterating over its contents. Most implementations of hash tables allow you to put anything in the "bucket" of stuff, keyed by a string.

The purpose of a hash table, as far as I can tell, is to provide a somewhat faster mechanism for finding items than just looking at every item in an array one by one. Call that the slowest possible case. The fastest possible case would be to convert a string into one big binary number and have a gigantic string array with enough indexes to hold any length string we could throw at it. That's probably the fastest lookup (I think fetching an item from an array is done in constant time), but it's not really feasible, especially with QUEST SCL, because we don't have all the memory in the world to hold this huge array, which, by the way, would be almost entirely empty.

So enter the hash table. According to the Algorithms in C++ book I picked up at the library book sale for 75 cents, we can instead create a small array, with each array slot holding a linked list of items with a specific hash value. The linked lists are there because, by limiting the size of our array, different keys can end up with the same hash.

The idea then, for searching for a key value, is that you hash the key to get a number between 1 and 111 and get the base node of the linked list at that array index. Then we just search through each of the nodes in that linked list until we find our exact key value and return the stored string.

This kind of thing seems pretty easy to pull off with SCL, as it's just a matter of providing the hashing function (nicely included in the book), and a way to hold an array of linked lists for each of our 111 possible hash values.
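Conceptually (sketched here in JavaScript rather than SCL, with a toy hash function), it boils down to this:

var NUM_BUCKETS = 111;                     // same bucket count as above

function hashKey(key) {                    // fold the character codes into a bucket index
    var h = 0;
    for (var i = 0; i < key.length; i++) h = (h * 31 + key.charCodeAt(i)) % NUM_BUCKETS;
    return h;
}

function newCollection() {
    return { buckets: new Array(NUM_BUCKETS) };
}

function colAdd(col, key, value) {
    var b = hashKey(key);
    col.buckets[b] = { key: key, value: value, next: col.buckets[b] };   // push onto this bucket's chain
}

function colItem(col, key) {
    for (var node = col.buckets[hashKey(key)]; node; node = node.next) {
        if (node.key === key) return node.value;    // walk the chain until the exact key matches
    }
    return null;
}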

I've put together a hash table utility in SCL, though I called it a collection. It's available for download here with an example for using it here. I put the hash_table.scl file into my directory of always compiled utility routines.

To use the hash table, you have to include the SCL file (I had to specify the full path to the file, so you'll have to modify that to accommodate your system). To create a collection, call the new_collection routine, which initializes a collection structure. Use col_add to add an item, remove_item_from_collection to remove an item, and col_item_exists to check whether an item is already in the collection. The kill_collection procedure deallocates all the memory for the collection, so make sure to call it when you're done, or you could end up eating a lot of memory, depending on how you use this.

One other note: the collection also keeps a linked list of every item it holds, so (assuming you call your collection col) you can loop through every item in the collection by starting at the head of that list and walking through it.

I haven't tested it terribly well, but the example file seems to work as I intend, so unless I start using it and find it broken, this is how it'll stay.

I'm not sure how much faster this is, computationally, than just looping through each item in an array and checking it. It's probably a little slower for small collections, and I imagine that as the thing grows, the hash table becomes faster than just looping, but I don't know. I do think it'll make some lookups a bit easier, at least from a programmer's productivity perspective, but then again, what do I know? Either way, I'll have to try to implement more of these algorithms, especially the section on regular expressions.

Thursday, March 10, 2011

Labor Popups (or, Demystifying the Labor System in QUEST)

Labor logic can be really difficult to understand without first getting the proper grounding in just what's going on. I was lucky enough to get some training from Martin Barnes at DELMIA, who stepped me through the entire process, including how popups play into things.

Basically, the labor system uses a controller to select a labor to fulfill some request, then it passes information about the request through a command. The labor then picks up the command and acts on it. This seems simple enough, but reading through the code it doesn't seem so straightforward, as you can't see the labor actually doing anything. This is because of popups.

Popups in QUEST allow a user to write incredibly flexible (reusable) logics at the expense of readability. In the default labor process logic, you can see the labor just constantly sits there getting the next pending command to work on, and then does a switch block to see what kind of command it is. It then gets a handle to the popup it should use to execute the command, and runs that popup.

So in the case of laborers, a popup is simply an object that lets us run some SCL code without knowing ahead of time precisely which SCL routine/procedure to run. You just query the popup that a user has selected, and run it, and assume that logic is taking care of business.

When we apply this thinking to the labor system, it demystifies things a little bit (at least for me). The regular labor process logic then becomes fairly simplistic...do a switch against the command type, and based on command type, get a handle to the logic to run, then run that logic.

The real meat of the labor logics, and the thing that kept me from understanding how they work, is in the default popup implementations.

In the labor Logics window in QUEST, you'll see a few options beyond the regular process and init logics. These are all popups, just a way of selecting a procedure/routine that's going to get called in the default labor logics. So to see what these default selections look like, we can just look at a labor's properties and see the routine names and file paths for the selected popups. The default labor load popup is notify_after_all_loads and can be found in QUESTlib\SYSDEF\LOGICS\agv_load.scl.src.

If you load that file up in an SCL editor, you can see that notify_after_all_loads is a procedure that takes an agv_cmd handle. (AGVs and labors are pretty similar, as you can see by digging into their logics. My understanding is that laborers were essentially copied and pasted from AGVs, back in the day at Deneb. Old/ex-Deneb people can be dangerously misleading, but interesting nonetheless.)

The flow of this logic is much, much more readable than the labor process logic, again, because this is where we actually DO stuff. Here we can see that we do load processes if necessary and all that, but eventually we just do a REQUIRE PART EXACT the_part, where the_part is just a handle passed in as cmd->part_handle.

The interesting thing about this line of code is that it's the same thing a machine element uses to get parts (or buffers, whatever). If you look at the unload popups, you see it's just transferring a part to whatever element it's at. There is nothing the least bit magical or mystifying here. In fact, this makes the labor system seem to make a lot of sense.

This may have been obvious to you, but I was never able to get a handle on this until someone walked through the whole thing with me. I know I can't do as good a job as Martin did, but I hope this gives newer QUEST users who don't know the labor system the ability to go in and see just what's going on. Ever since I "saw the light" on how this all works, I've been able to write my own labor controllers and labor logics. With that understanding comes the ability to get very detailed control of your labors, as well as the ability to write some really really bad code. Be careful.

Here's a quick example of a custom labor load popup I recently had to write. The labor would load a part and immediately destroy it, with the controller having already gotten what it needs from the part. Keep in mind I'm using a custom controller, so I don't need to notify the controller that the labor destroyed the part. Your mileage may vary.

procedure custom_labor_load_popup( cmd : Agv_Cmd )
Var
    the_part : Part
Begin
    the_part = cmd->part_handle
    require part exact the_part
    destroy( the_part )
End