Enabling Automation Podcast: S3 E4

We’re excited to bring you our first-ever podcast series, Enabling Automation. This monthly podcast series will bring together industry leaders from across ATS Automation to discuss the latest industry trends, new innovations and more!

In our fourth episode of season 3, host Simon Drexler is joined by Steve Wardell to discuss why vision systems should be included in your automation project.

What we discuss:

  • What makes vision a critical aspect of an automation project?
  • How can you train AI?
  • When are AI tools versus traditional tools beneficial on a line?
  • How do vision systems work to identify defects?

Host: Simon Drexler, ATS Corporation (ATS Products Group)

Simon has been in the automation industry for approximately 15 years in a variety of roles, ranging from application engineering to business leadership, as well as serving several different industries and phases of the automation lifecycle.

Guest: Steve Wardell, ATS Corporation (ATS Life Sciences)

Steve is the Manager, Imaging for the Vision Engineering division of ATS Life Sciences. He has over 35 years of experience in automation with the last 15 years focusing on vision.

——Full Transcript of Enabling Automation: S3, E4——

SD: Welcome to the Enabling Automation Podcast, where we bring experts from across the ATS Corporation to discuss topics relevant to those using automation within their businesses. I’m your host, Simon Drexler. I’ve been a part of the automation industry for more than 17 years in various roles at both large and small companies. I’m passionate about applying technology to issues that exist in companies looking to scale, where automation is a critical technology for scaling manufacturing. I’m very happy to be joined today by Steve Wardell for episode four of our third season. Steve is an expert in automation vision technologies with a career delivering vision solutions around the world, and he’s also a board member for the A3. The focus of our discussion today is why vision systems should be a critical component of your automation project. Steve, can you take a moment to introduce yourself to our listeners before we get started?

SW: Well, I certainly can. Thank you, Simon, happy to be here. Happy to talk about vision. This is what I do. That’s what I’ve been doing specifically for almost the last 15 years, and I put in another 20 years before that in the automation industry here at ATS. I’m a career guy in automation. I’ve been doing it for 35 years now, focusing on vision for the last 15. And there’s a reason that vision is, you know, a passion for me right now: it just keeps growing and growing. Everywhere we look, everywhere we plan, everywhere we design, we’re putting more and more vision into our systems. And I’m hoping to share some examples of what we do there.

SD: That’s great. Thank you, Steve. You and I get to talk fairly frequently, maybe not as frequently as I would like. You’ve obviously had a long career in automation; what keeps you so passionate about coming into work every day?

SW: Well, it’s what I see out on the floor when I come in. Have you ever seen that show How It’s Made? I love that show. And I get to go out on the floor and see that show every day. You know, when I hear a machine out there chunking away and doing what it’s supposed to do, I’ll take a walk out and I’ll stand there for 15 minutes and watch it do what it’s supposed to do. I still get excited about that. And here at ATS, we’re building different machines all the time. Every machine is new. It’s like a new episode every week, watching a new machine being set up out there, so that keeps it interesting for me. When I first started, ATS was much smaller than it is now. It’s gone through so many different changes and growth spurts over the years that I’ve been privileged to be a part of, and it’s given me, I would say, four or five different careers within one company. So I haven’t had to go elsewhere to get the variety that I need.

SD: That’s great. And you’re right about the amount of change. It’s an interesting framing to call them different episodes of How It’s Made, but it’s also different problems to solve every single time, right?

SW: Yes. Yeah, it keeps it interesting.

SD: Steve, you’ve had a long career in vision, a long career in automation. I’d be remiss if I didn’t ask you, do you have a favorite vision application?

SW: Yes, I’ve had a long career. It started out on the controls side of things. I was a programmer coming out of school, and one of the cool machines that I worked on initially here at ATS, about 20 years ago, was an automated inspection system. This was before I really got into the vision side of things here at ATS. I was working on this system that did automated visual inspection of freeze-dried vaccines in little vials. We made this machine for a customer that was producing vaccines for the general public and needed to make sure, before it went out to the doctors to inject into their patients, that it was a quality vaccine: there were no defects in it, the containers that were holding the vaccine didn’t have cracks, and the lids that were on them didn’t have any kind of defects either. The reason it was so interesting is that it was running at 500 parts per minute. That’s 8 or 9 vials a second going past 25 different cameras, with 12 different inspections at each camera. It was a vast amount of information coming out on each and every vial that went by the system, at a really high rate. When it was running, these vials were coming out the back end of it, just pop, pop, pop, pop, and you knew each one was a good vial. The product was proper. It had been fully inspected. You had confidence that you were delivering to your customers what you needed to deliver. I really enjoyed working on that program from a controls perspective. Then years later, as I got into the vision side of things, we were still building these types of machines for our customer, and now I was responsible for the camera side of it, not the movement or the data management, but the cameras, the inspections, and the algorithms, so I was able to get more into that part of it too. So it’s an application that has kind of come with me from both sides, from both types of engineering that I’ve done, as well as from a great customer over many years, with many deliveries of the same type of machine. That’s one favorite application. It’s not my only favorite; there have been a whole lot more. But day to day, every application is new, it’s different, and it still offers up some interest.

SD: That’s a great example, because you’ve seen it from two different sides of the technical world, but you’ve also seen the evolution over the course of your career of how that application has changed.

SW: Yeah, yeah. It’s really like my baby.

SD: Yeah, I can understand why. That’s really interesting.

SD: So Steve, let’s start broad with our conversation. You’ve had a long career in automation, lots of different projects, lots of different challenges. What makes vision such a critical aspect of pretty much every automation project?

SW: Well, I like to say that the vision systems we deploy in our automation give our machines eyes, and by having eyes, they can see what they’re doing and thus be able to do it better. I’ve seen a growth in the usage of vision applications over the years, from the really hard stuff to the really easy stuff and everything in between. What vision really does is give feedback to the automation itself so that it can be better. Every day, when the automation is doing what it’s supposed to be doing, if it doesn’t know how well it’s doing, then it doesn’t know what can be done to make itself better. Vision is its feedback loop. It gives the system information so that it knows when things start going off the rails, when it’s creating poor or bad product, and even when things are getting close to perfect; it provides that kind of information too. The information you get out of a vision system is thousands and thousands of times richer than what you get from a simple sensor, like a through-beam sensor or a presence-absence type of sensor. Those are great on the automation side of things, but from an image or video feedback perspective, you can get so much more out of vision. More data, more information. Sometimes too much information. I think of the many times customers would come to us with an idea of how the automation should go, and they’d want to put a vision system in to do something where maybe they don’t need one. It’s too much information; all you really need in that case is a simple through-beam or presence-absence sensor. But for the most part these days, vision systems have become so affordable and so easy to deploy for sensor-type applications that even for those simple applications where you’re just looking for some basic feedback, you can get extra feedback and then use it to determine how you could improve the system beyond that.

SD: I get the benefit of working with a lot of customers that are in the early stages of their automation journey, maybe looking to automate something for the first time. And so that feedback loop becomes really important for the process they’re trying to automate. Generally, vision can be used as that validation step: we put in a robot, maybe it’s a collaborative lean cell or maybe a big piece of automation, but the vision is there to say, yep, we did what we said we were going to do. Is that an accurate framing for somebody who might be doing this for the first time?

SW: Yeah, for sure. A lot of times we’re working on applications that haven’t been automated. Maybe the process has been handled manually; it’s being done by people, using manual methods. But while they’re doing it, they’re always using their eyes to verify that they’re doing it properly. That feedback, that validation that it’s being done properly, is inherent in the body, and it’s happening whether they realize it or not. So when they get to the automation point, they say, okay, if this is how the process is going to be handled in an automated fashion, how am I going to know that it actually did it right? If I’m taking two parts, putting them together and screwing them down, how do I know that it’s actually been screwed down far enough? Or how do I know that it’s not cocked off at an angle in a manner in which it shouldn’t be? Vision sensors and feedback, those kinds of things that you naturally get with your eyes, can be handled just as easily with a vision system, provided the things you’re looking for are well defined. Oftentimes, especially on the manual side of things, when we’re looking at trying to automate processes that are manually handled now, inspections are done by trained operators who have been looking at what’s good and bad for years and years. Some of those are very difficult to replace with a vision system in a fully automated fashion, because what the brain is telling the person is right or wrong is not always easily adaptable to something automated; they can’t even define it themselves. You may look at a part that’s got a defect on it, maybe a scratch on a surface of something you’re building. They would look at one scratch and say, well, that one’s okay, but this one over here, that one’s not. And if you ask them why, they’ll say, well, that one’s going to cause more damage. But if you were to put what they see into actual technical terms, it’s hard for them to come back, write it down, and say, okay, this scratch is a lighter color than that, it’s this size, this much bigger, this length, and put some parameters around it that we could then duplicate. Without that, we run into those types of situations all the time where we would look at it and say, we just don’t know how we’re going to automate it.
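
To make that concrete, here is a minimal sketch, in Python with OpenCV, of what “putting parameters around” a scratch could look like. The spec values, the image path, and the assumption that scratches appear lighter than the surface are all illustrative placeholders for this example, not a description of ATS’s actual tooling.

```python
# Hypothetical sketch: turning an operator's scratch judgment into explicit,
# written-down parameters that an automated check can duplicate.
import cv2
import numpy as np

SCRATCH_SPEC = {
    "max_length_px": 120,   # scratches longer than this are rejects (illustrative)
    "min_contrast": 40,     # how much lighter than the surface a pixel must be
}

img = cv2.imread("surface.png", cv2.IMREAD_GRAYSCALE)    # hypothetical image
surface_level = float(np.median(img))                    # typical surface brightness

# Mark pixels noticeably lighter than the surrounding surface as scratch candidates.
mask = (img.astype(np.float32) - surface_level) > SCRATCH_SPEC["min_contrast"]
mask = mask.astype(np.uint8) * 255

# Measure each candidate and apply the written-down spec.
_n, _labels, stats, _cents = cv2.connectedComponentsWithStats(mask)
for x, y, w, h, area in stats[1:]:                       # row 0 is the background
    length = max(w, h)                                   # crude length estimate
    verdict = "REJECT" if length > SCRATCH_SPEC["max_length_px"] else "accept"
    print(f"candidate at ({x},{y}), length {length} px -> {verdict}")
```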

SW: Now I’ll step back a little bit. That was maybe five years ago: we would look at it and say, we don’t know how to automate it. But with the advancements in AI, with that person doing the work sharing what they know in their brain with the system that we train, and with the AI capabilities, we can now do those types of things as well.

SD: And we’ve jumped ahead to one of the most interesting things going on in the automation world, and in a number of different industries right now: the application of AI. It’s sort of become this big nebulous idea to say, well, you’ve got to use AI. One of the things I like so much about vision is that there is a very direct and very practical application of AI. And you just gave a great example: when you’re translating that tribal knowledge, that experience from somebody who has been working on a production line or a process for a number of years, and you can’t quite write down the specification of what they do, AI has the capability to fill that gap, correct?

SW: Yes. With traditional training methods, you can take that person who has that tribal knowledge and help them, or use them, to provide the training information to the system so that the system can learn it and then duplicate it in an automated fashion.

SD: So you have the operator and the vision system working side by side in that case, is that what you mean by traditional training methods?

SW: Yes, yeah. It’s a supervised training, where what we would do is initially put cameras into the systems. Instead of having operators look at what they normally looked at by eye, we would have a camera look at it and then provide that information, the image from the camera itself, to the operator. And they would say, yes, that image is good, that image is bad. That’s a labeling process they would use for the supervised training, which could be fed into training a machine learning model. Then, once it’s gotten enough information, the model kind of takes over, and from that learning it’s able to duplicate what the operator did themselves. There’s been so much advancement in this area over the years that it is becoming a daily thing we talk about on the automation side and the machine vision side, where we’re applying it. I think machine vision is the most advanced of any of the areas, because a lot of these AI developments were initially done on the computer vision side of things, just dealing with images: companies like Google determining that these are pictures of faces and birds and cars and those types of things. That development, done by these big companies for those purposes, and by all of the researchers coming out of schools, is directly reflected in the tools that are available to us these days in our automation and our solutions.
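
As a rough illustration of the labeling-and-training loop Steve describes, the sketch below trains a small classifier on operator-labeled images. The folder layout, the image format, and the choice of model (a classical SVM standing in for the neural networks more typical in production) are all assumptions made for the example.

```python
# Rough sketch of supervised training from operator-labeled images. Assumes a
# hypothetical folder layout (labeled/good/*.png, labeled/bad/*.png) in which
# an operator has already sorted camera images by eye.
from pathlib import Path

import cv2                                   # pip install opencv-python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def load_labeled_images(root: str, size=(64, 64)):
    """Load operator-labeled images; label 1 = good part, 0 = bad part."""
    features, labels = [], []
    for folder, label in (("good", 1), ("bad", 0)):
        for path in Path(root, folder).glob("*.png"):
            img = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
            img = cv2.resize(img, size)
            features.append(img.flatten() / 255.0)   # raw pixels as features
            labels.append(label)
    return np.array(features), np.array(labels)

X, y = load_labeled_images("labeled")                # hypothetical path
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = SVC(kernel="rbf")
model.fit(X_train, y_train)                          # "learning" the operator's judgment
print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")
```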

SD: I’d never really stepped back and thought about why vision was so advanced in the AI space, but that makes a ton of sense.

SW: Yeah, yeah. Well, if you ask anybody what they know about AI, it’s usually got something to do with how images are processed.

SD: Interesting. What I’ve seen, especially with those that are early in their automation journey, is that one of the most difficult things for those companies is to actually write down their process in detail. Because when you remove the operator, you remove not only the flexibility of the person but also the perception of the person, and you need to move into a world where you train a robot to do the same thing every time. Translating that process information can be quite challenging. And it looks like, at least in the vision space, there’s a very real solution to that problem today.

SW: Yeah, but it’s not a solution for everything. I consider it a tool in our toolbox that we now make wise use of, whereas five years ago it was a tool that was just starting to be used, or usable. Now it’s quite usable, but it still remains a tool. It’s not a stopgap that fixes everything, and it’s not the initial way to go about every application. In fact, we do thousands of applications here a year, and I would say right now we’ll use that kind of toolset in 10% of those applications at the most. The problem with it, and it’s not so much a problem as a limitation, is that the ability to validate and verify what is happening with that approach is not nearly as straightforward as it is with more traditional means. And on the life sciences side of things, where we’re doing a lot of our work, verification and validation is very, very important to what we do. Even that capability, though, is advancing daily.

SD: I would imagine, just based on the development in other places, it’s changing rapidly. For those that are listening, could you give some guidance on where the line is? If AI tool sets apply to, say, 10% of the applications that we do, where is the line between where they’re good and where they aren’t?

SW: What I’m talking about here, when I say traditional methods versus more AI-based methods, is learning techniques versus more programmatic types of techniques. It all comes down to how we work with images on our systems. We have cameras, and as the products are being built, the cameras take pictures of them. These images have to be processed for some sort of information. If the information you see in that image can be well described as a set of steps, like, okay, there’s a line over here; from that line, if you go to the right, you look for kind of a blob of information; within that blob of information, you do this, and you find your way through this kind of step-by-step process of looking at that image and pulling out the information, that’s still a more robust way of putting together a solution. One, it’s explainable: when it fails, you know why it failed and where it failed. And two, it’s more maintainable: when it fails and you know where it failed, you know where to go to fix it. On the other side of things, if you were to take an AI approach where you just say, okay, this is a good part and this is a bad part, and you feed it all sorts of information with just those kinds of labels, then when the system starts to fail, you don’t know why. You just know that, hey, it really should have caught that, but you don’t know why it didn’t catch it. The algorithm behind an AI machine learning approach is still very black-box-ish, we’ll call it. There are a lot of bits and pieces behind these algorithms, in these neural networks, that you can’t go back to and say, okay, this is the step that failed, and fix that step. So you have to train it with more information, more data. And you don’t always have that data to train it with.
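
Here is a hypothetical sketch of that programmatic style: each named step either succeeds or reports exactly where the inspection failed, which is the explainability Steve contrasts with black-box models. The geometry (find a reference line, search to its right for a blob) follows his description, but every threshold, region size, and path is a made-up placeholder.

```python
# Rule-based ("programmatic") inspection sketch: each named step either
# succeeds or tells you exactly where the pipeline failed.
import cv2
import numpy as np

def inspect(image_path: str) -> tuple[bool, str]:
    # Step 1: acquire the image.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return False, "step 1 failed: could not read image"

    # Step 2: find the vertical reference line (the darkest column).
    col_means = img.mean(axis=0)
    line_x = int(np.argmin(col_means))
    if col_means[line_x] > 100:              # a real line should be clearly dark
        return False, "step 2 failed: reference line not found"

    # Step 3: to the right of the line, look for a blob of dark pixels.
    roi = img[:, line_x + 10 : line_x + 110]
    _, binary = cv2.threshold(roi, 128, 255, cv2.THRESH_BINARY_INV)
    _n, _labels, stats, _cents = cv2.connectedComponentsWithStats(binary)
    blobs = [s for s in stats[1:] if s[cv2.CC_STAT_AREA] > 50]
    if not blobs:
        return False, "step 3 failed: expected feature blob missing"

    # Step 4: check the blob against an (illustrative) size limit.
    area = max(int(s[cv2.CC_STAT_AREA]) for s in blobs)
    if area > 2000:
        return False, f"step 4 failed: blob too large ({area} px)"

    return True, "all steps passed"

ok, reason = inspect("part.png")             # hypothetical image path
print(ok, "-", reason)
```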

SD: Or you don’t know where to point it, right? Exactly. What data does it need to be more accurate?

SD: It’s interesting. We jumped right into the new stuff, the emerging trend of AI, but I think that was a really good framing of the traditional approach having value even in a world with advanced models and advanced computational capability. For those listeners who would be new to vision, new to automation, how would you describe how a vision system works to identify defects? You touched on it a little with the traditional means, and maybe tool sets and inspections, but you’re going to say it a lot better than I would. How would you answer the question: how does a vision system work to find a defect?

SW: What we do, if we have a system where we want to look at a part and say whether it’s defective or not, is start with some sort of requirement that says, okay, this part is defective when… For instance, I was talking earlier about putting something together: you take two parts, you put them together, maybe you screw them together, and it’s only acceptable once it’s been screwed fully closed, so there can be no gap between the two parts. In that kind of situation, if we wanted to put a vision system in to make sure that’s the case, we would add a camera system to look at the parts so that we could see where that gap was. And the really critical part is that we don’t even think about software at this point. What we think about is how good the image is going to look, because it’s the image that the camera takes that holds all the information. If that image is properly designed, and we have the luxury of designing what that image is going to look like, then we can easily pull that information out of it, and we can do that robustly. So how do we properly design an image? We use the right camera systems and sensors, the right optics, things like lenses and mirrors and filters, and the right lighting, which is really critical, so that the area we’re looking at is properly contrasted. If I’m looking for a gap, I need a lot of information around that gap that tells me, okay, this is where it is. Maybe I have a white background and the part itself is black, and as soon as the gap starts opening up, white starts shining through. That contrast between the white background and the black part, being opposites in intensity, would give me lots of information, so I could easily find what I’m looking for. So we do a lot of crafting and designing of the system around how well we’re going to get an image that has the information we need. And then once we’ve got that image, we start talking about software. We start talking about how we’re going to process that image, how we’re going to look at the pixels and pull out the information we need. That’s where we go to our toolbox of different types of tools. That toolbox has all sorts of different algorithms to process those pixels, to process that image and pull information out of it. And AI machine learning is just another one of those tools in that toolbox.
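
A minimal sketch of the gap check described above, assuming the black-part-on-white-background setup Steve outlines; the seam coordinates, threshold, pixel tolerance, and image path are invented for illustration.

```python
# Gap check sketch: a black part against a white backlight, so an open gap
# lets bright background pixels show through at the seam.
import cv2

img = cv2.imread("assembly.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image

# Region of interest covering the seam where the two parts meet.
seam = img[200:220, 100:500]

# With well-designed lighting the histogram is bimodal: background pixels sit
# near 255 and part pixels near 0, so a high threshold isolates shine-through.
_, bright = cv2.threshold(seam, 200, 255, cv2.THRESH_BINARY)
gap_pixels = cv2.countNonZero(bright)

MAX_GAP_PIXELS = 25                # illustrative tolerance for "fully closed"
print("PASS" if gap_pixels <= MAX_GAP_PIXELS else f"FAIL: gap of {gap_pixels} px")
```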

SD: So it circles back around: you have a good image, you have lots of information, and once you have the information, there’s a variety of tools to pull out where issues or defects may be. AI is just one of those tools.

SW: That’s right.

SD: Steve, I’m so grateful that you agreed to join me for this conversation today and impart some of your experience and knowledge to our podcast, and help those that are listening to automate. Just before we close off, would you have any closing words of guidance for those that are looking to apply vision systems to their automation project?

SW: Don’t delay, and don’t be afraid to do so. The worst thing that could happen is you get a system on there that doesn’t necessarily give you the information that you need, if it’s not done well, or, you know, maybe things have changed by the time you initially start. The one thing about a vision system is that it’s usually not part of the actual automation process itself. All it does is, more or less, take information in. So if it doesn’t necessarily work, sometimes people will turn them off. I don’t promote that, because they’re there for a purpose. But you’re not taking as big a risk as you would by trying to put in some machinery that’s doing a bunch of stuff other than gathering information. So don’t fear that if the information that comes back is wrong, you can’t do anything about it. That’s the one thing about these vision systems: once they go in, they can always be modified, usually on the software side of things, to get more information, to alter what you originally planned for them, to update them to do something new, usually with very little cost on the hardware side, or sometimes none at all, because all you’re dealing with is a different scene; the way you get the scene is the same. It really is very flexible that way. So, yeah, don’t be afraid to throw a camera at something and start playing with it, because you get a lot of information, a big bang for your buck.

SD: I think that’s a great place to end. In an industry that is very utilitarian, it’s bang for your buck, it’s ROI, and there are few better examples than applying an inspection station to your process. You get some understanding, you get some improved quality, but more importantly, you get information and data that you can work with and adapt over time.

SD: Steve, thank you so much for joining our podcast today. To those that listened to this episode, I hope you took away a lot of learning and a lot of information about why you should apply vision systems in the first place, and why they’re such a critical component of your automation system. As always, thank you so much for listening to our discussion. If you liked what you heard, our fifth episode will be Reducing Risk in the Automation Process, and we’ll be joined by Ryan Babel. Thank you so much for joining us today.