Enabling Automation Podcast: S3 E1

We’re excited to bring you our first-ever podcast series, Enabling Automation. This monthly podcast series will bring together industry leaders from across ATS Automation to discuss the latest industry trends, new innovations and more!

In our first episode of season 3, host Simon Drexler is joined by Paul Dragan to discuss Making AI Real.

What we discuss:

  • How would you define AI?
  • Practical examples where AI is making a difference in the manufacturing world
  • AI analyzing a problem vs a human analyzing a problem

Host: Simon Drexler, ATS Corporation (ATS Products Group)

Simon has been in the automation industry for approximately 15 years in a variety of roles, ranging from application engineering to business leadership, as well as serving several different industries and phases of the automation lifecycle.

Guest: Paul Dragan, ATS Corporation (ATS Life Sciences)

Paul Dragan leads the digital pillar within innovation at ATS Life Sciences. Prior to ATS, he spent a decade wearing multiple hats and leading digital services initiatives.

——Full Transcript of Enabling Automation: S3, E1——

SD: Welcome to the Enabling Automation Podcast, where we’re so excited to be kicking off our third season. So thank you so much to the listeners that tune in as we bring experts from across the ATS organization to discuss topics that are relevant to those that are trying to get started on their automation journey, or to scale technology within their operations. I’m your host, Simon Drexler. I’ve been a part of the automation industry for more than 17 years now, with varying roles at both large and small companies. I’m really passionate about applying technology to the problems facing growing and scaling businesses, and automation is just so critical to scaling manufacturing, and that’s what we’re here to talk about today. The topic of our first episode is Making AI Real. It’s a current buzzword inside the industry and really the hot topic right now. So we’re so fortunate to have Paul join us from ATS. He’s driving a significant portion of our roadmap and our development around artificial intelligence, and how we’re applying it inside the machine building organizations that we have. Paul, can you give a quick introduction of yourself to our listener base?

PD: Absolutely. So truly appreciate the opportunity to be on here today, Simon. My name, as Simon alluded, is Paul Dragan, and my role here today within ATS is leading our digital pillar within innovation at Life Sciences. Previous to my life at ATS, I’ve been very fortunate in my career. I’ve had an exciting opportunity over the last decade really wearing multiple hats and leading a very similar initiative to what we do today within Illuminate at ATS. For the listeners who don’t know what Illuminate is, it’s essentially a standardized data collection system that’s capable of collecting data sets such as OEE, parts history, quality data, and really ideally driving manufacturing results on the floor. Over the last decade, I personally flew to and helped integrate over 60 factories, you know, across 12 different countries, and thousands of assets have been connected. My responsibilities there were really architecting our solution, owning it, working with C-level executives on approvals, budgeting, building a culture around data, selling the solution, and then helping standardize data around equipment and our processes, training around things like constraint management, and really then focusing on the OT/IT space. A big part of this was that many of our factories were just not ready for the Industry 4.0 initiative. So a lot of it was, you know, regearing our factories, redoing network topologies, focusing on security and so forth. And my goal now at ATS is really taking those learnings and digitizing equipment here within our company.

SD: Paul, what a great background to have this conversation. I’m so excited to learn from you. You mentioned Industry 4.0 as part of your introduction, and to me, the buzz around AI feels quite similar to Industry 4.0 about ten years ago, where it’s the buzz, it’s the primary talking point, but it means so many different things to so many different people. It’s a big topic. So how would you define AI? Let’s start there for our listener base. We’ll get solid on what AI means to you.

PD: You’re right. Industry 4.0 has been around for a while now. We’ve been talking a lot about it, and really it’s kind of funny, there’s a new buzzword in town and, you know, we’re starting to talk about Industry 5.0. So, in general, what Industry 4.0 is to us today is really the digitization of equipment, really the whole connected assets piece, big data, leveraging AI. And the move to Industry 5.0, which I think is slowly going to start happening, is where we start seeing the digital part of our equipment intersect with either physical equipment, humans and operators. This is where we start seeing things like robots and AI working side by side together. And really, this is once again an interaction where we start seeing things like humanoids on a floor one day, you know, operating equipment, you know, with us, and building those systems around that to help us interact. So I think what’s important to understand, Simon, is historically, when we typically developed applications, a lot of times programmers would try to outline all the potential results and what might happen in the environment itself, and we’d try to, you know, program around that. And the reality is when we get to more sophisticated problems, it’s just impossible to develop solutions around that. So the goal of AI here is to start enabling us with intelligence, start identifying these, you know, situations and scenarios, and start making decisions for us, where we have more adaptable programs, more adaptable software, and really an environment where I don’t have to be involved in every single decision on the floor, as long as the AI bot itself has been programmed to, to kind of learn around that and make decisions similar to how I would myself. You know, a good example of this today is vision. Historically, once again, I’d take a photo of, you know, some sort of mechanical environment, a part, and I would outline specific components that I want to isolate and view. And this is great.
We’ve done this, you know, for years and years on the floor. You know, AI has the capability to adapt as a manufacturing process changes. And anyone in the field today knows every day is a different day on the floor. The reality is AI can adapt. So not only can it identify certain things in a part, but the AI itself can start identifying other components that visually an operator would identify as possibly incorrect, and we could start producing insights on that as well. So really here, it’s not just focusing on the core problems we’ve tasked the AI with, but starting to build that out and really function like you and I would on the floor itself.
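The learn-what-good-looks-like idea Paul describes can be sketched in a few lines. This is a hypothetical illustration, not ATS code: the feature names, numbers, and three-sigma threshold are all invented, and a real vision system would learn from image data rather than hand-picked measurements, but the principle is the same: learn normal, flag deviation, even defects nobody enumerated.

```python
# Hypothetical sketch: instead of hard-coding every defect, learn what a
# "good" part looks like and flag anything that deviates from that baseline.
from statistics import mean, stdev

def fit_baseline(good_parts):
    """Learn per-feature mean and standard deviation from known-good parts."""
    cols = list(zip(*good_parts))
    return [(mean(c), stdev(c)) for c in cols]

def anomaly_score(baseline, part):
    """Largest z-score across features: how far this part strays from normal."""
    return max(abs(x - m) / s for (m, s), x in zip(baseline, part))

def is_anomalous(baseline, part, threshold=3.0):
    # Flags gouges, holes, or anything else unusual -- including defect
    # types that were never explicitly programmed in, which is Paul's point.
    return anomaly_score(baseline, part) > threshold

# Illustrative feature vectors (e.g. seal width, roughness, edge position).
good = [(10.0, 0.5, 2.0), (10.1, 0.6, 2.1), (9.9, 0.5, 1.9), (10.0, 0.4, 2.0)]
base = fit_baseline(good)
print(is_anomalous(base, (10.0, 0.5, 2.0)))  # typical part -> False
print(is_anomalous(base, (10.0, 0.5, 9.0)))  # unexpected deviation -> True
```

The classic approach would only catch the features an engineer listed; the baseline approach reacts to anything outside the learned envelope.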

SD: And I’m so happy that you brought up vision as an example. I think at least today, it’s the most practical and real example of the application of AI to think or behave like a human would, where it’s learning and taking in so many variables that it combats that problem that you highlighted, where sometimes we just can’t program everything, you can’t write everything down. You had mentioned that it becomes too hard to program, even in the vision world, to have the camera behave the same way that a person would. Is it really just around the variables that come into the problem and the problem definition? Or is there another reason that people can’t write a program in the same way that AI approaches the problem?

PD: Typically when we’re automating, you know, equipment, we’re focused on one task at hand, right? You know, we want to put in a station of some sort that helps isolate some sort of defect or some sort of problem, and leverage software once again to automate that task and not necessarily have an operator there consistently doing that. One of the advantages of having an operator, though, is that the operator typically is aware of more than just that scenario itself. So once again, if we’re just looking at a part to find a specific seal on the part itself and, you know, we identified a massive gouge in the part itself and there’s a hole, you know, those are things that an operator would typically detect and say, there’s something wrong here, you know, stop the line, go look, you know, either upstream or downstream, identify the problem. The goal here now, once again, is that the capability AI potentially, you know, holds for us is that it can start handling multi-variable problems for us and start learning, you know, what a part actually looks like. It’s going to start understanding what a good part is, what a bad part is, and not just based on the characteristics that we identified. You know, we can start looking at the entire ecosystem at once and make, once again, better decisions as we automate those processes.

SD: And in that example where you have an AI engine in the vision space, how do people interact with that vision station? Do they interact? Do they help with the learning? Or is the model and the AI engine operating in isolation?

PD: The reality today is, it is a bit of a black box. So we spent a lot of time developing algorithms. These algorithms are trained on, you know, the parameters that we want to solve, and we have a desired output. Once that’s kind of in production, it is very difficult to kind of interact and see what’s happening. We’re heavily reliant on the decision itself that the algorithm outputs. So to your point, you know, once this is in production, it’s running. There’s a lot of trust today in sitting beside the system and understanding, hey, I have to trust that it’s making the correct decisions and, you know, outputting the correct results.

SD: That’s one of the areas of concern that I’ve heard around AI, the fact that it is a black box. What would you say to those that are listening for whom that’s a concern? They provide the inputs, they get the outputs, but don’t necessarily know what’s going on inside of that black box. What guidance would you give them to combat that concern?

PD: Yeah. Transparency. That’s where it starts, right? Really training everyone around that ecosystem and how the actual model itself works. I think that’s highly dependent on having proper validation checkpoints in place to ensure that, hey, every several parts, we just ensure that, you know, everything is working as intended, and things like that are very important. I think a big part of it is, once again, trust. I think the common fallacy in industry today is forgetting that operators make errors as well. After having an operator look at parts for 10 or 15 minutes, you know their attention to detail has degraded significantly, right? So the reality is we’re prone to this in the industry. We see this all the time. We see parts that get through our checks and balances that are incorrect, that have quality defects. So, the goal here isn’t perfection. AI isn’t here to make it perfect. The goal is to ensure that it’s, you know, a lot better than what we see on our floor, and then ideally reallocate someone’s time to more advanced tasks, beyond those we can actually automate today.

SD: I’m so happy that you took the discussion there, because I think that’s one of the challenges that AI has in general, is that there’s this expectation that once you put this tool in place, everything is perfect, and the idea is not to make it perfect. The idea is to make it better than it is today. And then that model and the use of the tool will continue to evolve over time. But you’re right. If you compare it to the baseline of what exists today, AI generally provides at least an incremental improvement. It might not be perfect, but it’s better. This conversation reminds me of another one that I was having. I have a friend who runs an AI company, and when he describes AI and AI models, you know, at their most basic and fundamental level, he says that it’s basically a pattern recognition tool. It’s a really smart, multi-variable pattern recognition tool. Do you agree with that framing as a way to explain what the AI engine is doing?

PD: Yeah. So, that’s an open-ended question. There are so many different variants today, subsets of AI, that help feed into what an AI ecosystem is. So pattern recognition, machine learning, that’s all one subset of what empowers it today. I think we’ve seen some large advancements, huge advancements, across industry in, you know, recent times with natural language processing. So when you look at things like ChatGPT and such, that’s taken industry by storm. I think from the manufacturing side today, what we’re probably most interested in is still that process side, the patterns. Once again, we live in such a complex environment today, with so many variables out there, that we’re concentrating on identifying patterns that will help lead us to gains throughout the process, to, once again, optimize what we do and deliver a better product. Is that all of it? No, there’s a lot more to it today. But I think every segment, every industry today is using the different facets that are more ideal, that are suited for that industry. And today, I think within the manufacturing space, and even from the mathematical side, that pattern recognition, identifying correlations, is a huge opportunity for us to leverage in this space.

SD: So we had mentioned pattern recognition and machine learning in the vision space. Is there another area where you’re seeing, you know, practical examples in the work that you do where AI is making an impact on the manufacturing floor?

PD: Yeah, absolutely. Once again, I think the biggest one that’s taken the industry by storm today is the whole natural language processing side. I think the whole ChatGPT side has come so quickly that people, I think, misunderstood or didn’t fully understand the scale of what this can do. So we’re seeing it everywhere today. It’s automating a lot of tasks around the world, but really the value there today is the ability for it to take unstructured data, unstructured prompts, and create value out of that. That’s one of the biggest values today of ChatGPT. So, you know, today, if I have a problem on a piece of equipment and I have a fault, you know, fault 43, and I look at my pressure gauge and I see my PSI is, you know, 30, I still need to have some sort of context or some sort of understanding of the mechanical environment to go and isolate and identify what the problem is. And the power today of what ChatGPT can do is, you know, you could ask a question. You could ask, hey, I have this fault, this is the pressure reading I see right now. And the reality today is, as those natural language processing models start learning our environments, start learning our manuals, we could really simplify things and point the operator or the technician to that exact problem quickly and reliably. That’s a big problem that it’s going to solve, I think, across the world, and help streamline a lot of the things that we see today within our space.
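The retrieval step behind the assistant Paul imagines can be sketched very simply: match an operator's free-text question against manual excerpts. The fault codes, manual text, and keyword-overlap scoring here are all invented for illustration; a production system would use a language model with embeddings rather than word overlap, but the flow of "question in, relevant manual passage out" is the same.

```python
# Toy sketch: point an operator's question at the right manual excerpt.
def tokenize(text):
    for ch in ",.:;?":
        text = text.replace(ch, " ")
    return set(text.lower().split())

# Hypothetical manual excerpts keyed by fault code.
MANUAL = {
    "fault 43": "Fault 43: low air pressure. Check regulator; nominal supply is 60 PSI.",
    "fault 12": "Fault 12: door interlock open. Close guard door and reset.",
}

def best_match(question, manual=MANUAL):
    """Return the manual excerpt sharing the most keywords with the question."""
    q = tokenize(question)
    return max(manual.values(), key=lambda text: len(q & tokenize(text)))

answer = best_match("I have fault 43 and my pressure gauge reads 30 PSI")
print(answer)  # the fault 43 excerpt, including the nominal 60 PSI reference
```

Even this crude version shows the payoff Paul describes: the operator's unstructured prompt, fault code and pressure reading included, lands on the one passage that gives the diagnosis context.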

SD: I think that’s a great example of how AI can make a significant impact today to somebody who’s listening, who says, that’s what I need. I need that right now. What would be their process for implementing that within their operation?

PD: I think there are a ton of partners around the world that do this. So, I mean, without getting into the partner aspect, because once again, there are a lot of companies that are spending a lot of money today actually building out these natural language models, I think the foundation with everything we do today starts off with your data. So really understanding, hey, you know, what am I trying to automate? What answers am I looking for, or what am I trying to get better at? So in the case of what we talked about with machine prompts, you know, a big part of that is probably going to be around manuals, around processes, around your equipment, you know, PMs on your equipment, so getting maintenance involved. So step one is getting that data available. I think that’s the biggest part. So identifying where data is, identifying what data doesn’t exist, and either creating it somehow or finding a way to make synthetic data around that. But part one is getting that data before we can start training anything.

SD: And just before we go to part two. So part one, when you say data in this context, you’re talking about written language. You’re talking about manuals and processes for fixing the machine. Is that correct?

PD: Correct, correct. You know, once again, if we’re focused on machine-related problems, absolutely. I think there are other data sets out there today that we can help integrate into our equipment in the future. But I think for step one, that’s probably the largest data set we have. I mean, every single device, every single piece of equipment we buy, you know, as much as there’s a lot of customized equipment out there, there are some sort of manuals. There are some sort of teachings that integrators would leave, you know, after the equipment shipped, that have important details but are either, A, very difficult to find, or, you know, I bought a piece of equipment ten years ago and don’t know where the manual is. Step one, once again, is if I could take those documents and virtualize them, bring them into a, you know, digital world. That’s, you know, the biggest bang for your buck today. And then automating that, once again, there are hundreds of partners out there today that do that very well.

SD: The path doesn’t sound particularly different than the industry 4.0 implementation, where it starts with having a strong data set, having a foundation of information that you can apply tools to. Is that fair?

PD: Yeah, absolutely. I think foundationally that’s one of the things that we’re discovering here too. I talk a lot throughout ATS about data. Our biggest goal right now, you know, some of the things we’re talking about today are things like lights-out: we want to get to a point where we want to start automating equipment. And there are several shortages today in industry, right? Our equipment is getting more complex. The products we make are much more complex. We have challenges with offshoring of equipment, labor and skill set shortages. They’re all abundant. They’re all around us today. The goal today is, how can we leverage some of these tools to help offset some of those challenges? And part of that starts off with having more data. One of the analogies we use today in a lot of our presentations is, you know, around cities and such. So today, when you look at a normal city that you and I live in, one of the biggest problems around those cities is congestion, traffic everywhere. And no matter how much work we spend upgrading infrastructure, building new roads, subway systems, we still have a fundamental problem, and that is, you know, congestion, you know, traffic getting around the city. And the root cause of that is not necessarily the solutions we’re coming up with, but the nature of how our concept of cities started off hundreds of years ago. Our goal today, and one of the discussion points we have, is if we’re going to start talking about equipment that’s going to empower the future, we really need to start rethinking how we digitize our equipment and the strategy around that, right? We should start maybe focusing more around the digital fingerprint of a machine, you know, the architecture itself around how data would transfer, and then start tying that to process, right?

SD: That’s a fundamental gap I see today across industry. There’s an overlap there, right, with a very basic approach to lean process and lean process knowledge as a starting point to invest in automation and technology. It’s really about understanding the definition of each of the steps of the transformation that you’re trying to drive and where the information flows. Have you seen those two concepts overlap at all effectively in the implementation of AI?

PD: We have processes today around AI and ML. So step one is really identifying your problem and success criteria. Really try to identify what end result we want. And that takes a bit of time, right? Identifying what problem we want to solve. A lot of companies get stuck on the whole ROI piece: you know, is the data readily available? Is it not? How do we do that? So that itself is a fairly sophisticated problem where I see a lot of companies fail. They’re absolutely eager to start with AI. They’re eager to get out there, they’re ready to partner with people, but they can’t even identify a problem they want to solve. And then even if they do identify a problem that they want to solve, typically the data doesn’t exist to solve it. Once that data is there, we typically call the next step data engineering. This is probably one of the hardest parts. So a part of that is, once again, taking that data, identifying, cleansing and exploring it, finding correlations in the data. And about 80% of the work today is actually in the data engineering piece, even before we get into the AI and ML piece, right? So before we even build models, you really have to do all the engineering behind them. So, very similar to the process you allude to, there are today processes that are becoming more apparent in the industry, and we’re adapting to these processes. But once again, a big part of where I see a lot of companies fail within the AI space is identifying the problem, making sure the data is there and sufficient to solve that problem, and then actually spending the time to ensure that we’re doing data engineering around that data before we actually process it within an AI or ML model.
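The cleanse-then-explore step Paul calls data engineering can be illustrated in miniature. The sensor readings below are fabricated, and real pipelines use tools like pandas rather than hand-rolled Pearson correlation; the sketch only shows the workflow he describes: drop bad records first, then look for correlations before any model is built.

```python
# Minimal data-engineering sketch: cleanse raw records, then explore a
# correlation between a process variable and a quality outcome.
from math import sqrt

raw = [
    {"temp": 71.0, "defects": 1},
    {"temp": None, "defects": 0},   # sensor dropout -- must be cleansed out
    {"temp": 75.0, "defects": 3},
    {"temp": 73.0, "defects": 2},
    {"temp": 70.0, "defects": 0},
]

def cleanse(rows):
    """Drop rows with missing values -- part of the unglamorous 80%."""
    return [r for r in rows if all(v is not None for v in r.values())]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

clean = cleanse(raw)
r = pearson([r["temp"] for r in clean], [r["defects"] for r in clean])
print(f"temp vs defects: r = {r:.2f}")  # strong positive correlation
```

Only after this exploration, when a candidate relationship like temperature-versus-defects survives the cleansed data, does model building make sense.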

SD: Coming back to the example that we provided around maintenance manuals, and using natural language processing to provide a contextual and efficient feed of information to operators, what would the data engineering exercise look like in that example?

PD: Great question. I think step one is identifying what users we are trying to impact first, right? So, you know, maybe a use case is: am I trying to help maintenance technicians on the floor better support equipment today? Am I trying to empower operators? Right, that’s a big one. Am I trying to empower some sort of, you know, supervisors and management? I mean, those are all different use cases. And I say that because despite having a very similar data set, the questions that they’re going to ask, how we quantify those questions, and how those results are outputted are all going to be in vastly different formats, right? So, I think that’s probably the first part of that step: really identifying what we are trying to actually solve on the floor. If it is an operator, once again, just simple context I think is important today. So when I see faults that pop up on my, you know, machine, a lot of times you see operators struggle with identifying who do I call, what do I do. You know, I have an error message on my machine. It says replace this filter. I have no clue where this filter exists. What do I do? Right? And the reality is, somewhere, some sort of manual exists where someone has probably already highlighted that, you know, the filter is underneath this box right here and such. So the data is there today and we don’t actually have to contextualize that. I don’t need to have a pretty screen that pops up and shows, you know, in a 3D CAD model where to go. That’s, I think, you know, future state. Today, just being able to pop up some sort of manual and show that description of what to do is enough to solve so many headaches that we have on the floor. And then the operator could decide, hey, can I do this? Or, you know, am I capable of doing it? Do I even have a filter? Or should I just call someone really quickly and have them solve my problem for me? Right. So, that’s important.
From a technician side, I think that’s a vastly different way of how we solve it, right? And maybe a part of it is also learning skill sets within the company, learning what kinds of tools are already present within the environment, how we solve problems. So if it is a software glitch, you know, do we have a team that we can really call in to take a look at it? Can we contextualize the prompts? You know, so once again, I think that use case story is important first. And then you build out your process around that. And you can always expand, right? That’s the goal of what we do. But that initial step, just starting and not trying to solve all the world’s problems at once, is highly important.

SD: That’s a key theme of our Enabling Automation podcast: get started, find that first step. Don’t try to move from nothing to lights-out. Move from where you currently are and make an incremental step forward toward your end objective. We’ve given an example of machine learning with multi-variable problems. We’ve given an example of natural language processing and the challenge of distributing information effectively to the operator. We’ve talked about the computer vision example and the application of AI to build better vision inspection stations. Are there any other areas where we could highlight an example of AI causing transformation, or providing value to the shop floor?

PD: Providing insights. That’s the biggest one. So, you know, the way I look at it today, we do have a lot of data that exists in our systems currently. There’s a challenge around actually using that data, filtering through it and actually making it digestible. And we spend a lot of human hours, you know, actually digesting that and trying to take some sort of data-driven actions around it. And one of the problems I see is that we are doing it today in some sort of fashion, but the delay between when we identify a problem and when it actually happened on the floor could be hours, days, and in some cases weeks. You know, in my past life we would typically do constraint analysis over two weeks and identify, hey, there’s a problem here. And the reality is, if you go try to find that problem on the floor, you probably won’t find it, because that was a problem that happened two weeks ago and it doesn’t exist anymore, or the problematic part, process or issue has either been removed or mitigated, you know. So, I think our goal today is, how can we reliably provide data as quickly as possible to the decision makers, identify problems, and then potentially somehow prioritize those and show the impact, you know, an impact score itself. So, you know, with a lot of the data work we’ve been doing internally, our goal is, how can we explain to anyone, how can we show correlations to anyone? The goal isn’t that you have to be, you know, a subject matter expert to solve a problem on a machine. Can we just identify trends and insights on equipment today, surface those to our end operators, and then once again somehow quantify that with what we call an impact score?
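"Impact score" is Paul's term; he doesn't define a formula, so the one below (occurrence count times average minutes lost) is only a guessed-at, simple way to quantify it. The fault names and numbers are invented; the sketch just shows how a score lets a system rank problems for decision makers instead of handing them raw logs.

```python
# Hypothetical impact scoring: rank fault types by total production time lost.
events = [
    {"fault": "jam at feeder", "count": 40, "avg_minutes": 2.0},
    {"fault": "vision reject storm", "count": 5, "avg_minutes": 30.0},
    {"fault": "door interlock", "count": 12, "avg_minutes": 1.0},
]

def impact_score(e):
    """One plausible impact metric: frequency x average downtime per event."""
    return e["count"] * e["avg_minutes"]

ranked = sorted(events, key=impact_score, reverse=True)
for e in ranked:
    print(f'{e["fault"]}: {impact_score(e):.0f} min lost')
```

Note how the ranking differs from raw frequency: the feeder jam happens most often, but the rarer reject storms cost more total minutes, which is exactly the kind of prioritization a human sifting spreadsheets takes days to reach.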

SD: We’re applying this computational horsepower in a way that we haven’t been able to in the past to predict and prevent problems. You’re providing those insights before the machine goes down. For somebody who’s listening and they say, I know I need to use AI, but I don’t know how, I hear it everywhere, what would you recommend to them to just take a first step? How do they not get lost in the ocean and maybe just dip their toes into the water?

PD: So, I would probably use the same answer for anything you asked about AI today. I think step one is still around data. We love that big shiny object; we want to, you know, solve the world’s problems. But ultimately today, the fallacy of what I see in industry is the type of data we have. And it starts off with where we actually define data. So typically when we design a piece of equipment or we design some sort of process, we actually struggle. It actually takes quite a bit of, let’s say, energy today to define an ideal data set of what we want to actually collect off our equipment. And I think this is actually common across every industry; it’s not just ours. What’s different, I think, between our industry and, let’s say, the finance industry, is that the finance industry has been using the same terms, the same set of rules, for so many years. They helped define that early on, and it’s been fairly similar since then. And sure, changes do happen, but they have a fairly well-structured set of data. When you look at the environment you and I live in, everything we do is custom: you know, how we define cycle starts, where, you know, certain assets are placed, how they interact with each other. It’s such a different world daily, and we’re constantly bending rules, so the reality is we don’t really have a standardized data set. We do push hard today, so we have been able to group data by stations, by cells. I think we’ve done an excellent job there today.
But there are always those custom, you know, things that happen that bend those rules and cause problems with the data itself. The next part of it is, once again, where and how we define data. So typically when I’m designing a process, I’m really focused on getting my machine out. That’s what my goal is. I’m getting paid to make a product, not to focus on the data itself. So a lot of times it’s glossed over, and there’s some frustration that typically happens there too: what data do I collect, when do I collect it. And there’s a lot of back and forth, and a lot of conversations can get frustrating. You know, the integrator wants to know exactly what the customer wants, and the reality is the customer doesn’t exactly know what the integrator could provide, right? So eventually we just decide on a data set. And a lot of times historically, what I’ve seen is we will typically focus on a data set that’s based on the blueprint of a part. I’ll say, hey, Simon, give me a blueprint of your product. I’ll look at critical features and I’ll say, you know, Simon’s identified those as critical features, either tolerances on a part or, you know, certain mechanisms installed, and I’m going to collect data on that. And that’s awesome. That’s a great starting point. Our problem today is that as much as that data lends itself well to things like process understanding, and maybe quality, it is a very poor set of data for things like analytics and AI. So the problem I see today is we have a lot of data. I’ll see companies say, hey, we’ve invested millions of dollars, we’ve been collecting data for 20 years, can you just show up and make AI work? Well, the reality is, we’ve never really focused on having data that lends itself well to analytics. So in many use cases, all that data you have, although it might be critical to the business, doesn’t work well with an AI or an ML model.
So, for instance, I’ll give you an example, because I think this one lends itself well to how people think. I have a machine. I have a fault that happens: machine not home. You know, that’s a great indicator that my machine has not been able to home. It can’t get back to its original starting position. It can’t cycle a new part. That’s a pretty big problem. The reality is that typically, in most equipment, there are going to be several sets of criteria that drive why the machine was not capable of going home. It could be that the door didn’t close correctly or open. It might be a jammed sensor somewhere else. So despite having five years’ worth of data on, you know, the machine not being able to home, the reality is I have no context for understanding what drove that inability to go home. And that’s the data I need to really drive actual insights on my equipment. So that analogy, I think, plays out well with anything we do today. We have a lot of data, and I think that’s awesome, but trying to transform that data and use it for analytics, we’re going to struggle with today, because we never started off with that initial mindset of what data we actually need to drive correlations.
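The "machine not home" example can be re-imagined as data that does lend itself to analytics: log the driving condition alongside the fault rather than the bare fault code. The field names and causes below are illustrative only, not any real fault schema.

```python
# Sketch: a fault record that captures the missing context Paul describes.
from collections import Counter
from dataclasses import dataclass

@dataclass
class FaultEvent:
    code: str          # e.g. "machine_not_home" -- what we have historically
    cause: str         # the condition that drove it -- the missing context
    station: str

log = [
    FaultEvent("machine_not_home", "door_not_closed", "ST10"),
    FaultEvent("machine_not_home", "jammed_sensor", "ST10"),
    FaultEvent("machine_not_home", "door_not_closed", "ST10"),
    FaultEvent("machine_not_home", "door_not_closed", "ST20"),
]

# With the cause captured at the source, "why can't the machine home?"
# becomes a one-line query instead of a two-week investigation.
causes = Counter(e.cause for e in log if e.code == "machine_not_home")
print(causes.most_common(1))  # dominant driver of the fault
```

Five years of bare `machine_not_home` codes answer nothing; the same five years with `cause` attached would rank the door interlock as the dominant driver immediately.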

SD: That’s a really good overview of where we get started and why inside the AI world. So thank you very much. Now, I don’t normally do this, because we really try to center ourselves on the early part of automation, but I’m excited about where AI might take us. And so could you give a little bit of insight into what the roadmap to AI looks like in the manufacturing world, and what are you excited about for what five years from now looks like?

PD: So that’s a pretty interesting question. I think AI is a small piece of all this. You know, we’re looking at autonomous vehicles today. We’re looking at different technologies that start to augment, you know, screens becoming more connected. I think in that vision of the next five years, AI does play a big role, but it’s a small part next to all the other technologies that are going to advance for the next little while. So, you know, without spending too much time thinking, I think there are going to be a ton more things to leverage with AI around how we identify problems, how we, you know, sort through massive amounts of data, how we establish correlations and determine problems, how we optimize our actual processes, how we design equipment. AI plays a huge part in that today, too. Things like supply chain, I mean, there are so many opportunities in there in optimizing that, reducing costs, and finding the best suppliers or better ways to design components. So I think AI is going to impact all these pieces slowly. And I don’t know if a piece of equipment five years from now is going to look much different than what it is today, outside of it just potentially running better and being more optimized. It’s all the other technologies, I think, that we’re looking into today that are going to give it that more cyber, more futuristic feel, and that’s going to play a bigger role in that.

SD: Paul, continuing on that thread, what would success look like for you in your role five years from now? What’s the goal post that we would say, yes, we got there. We’ve achieved something great.

PD: That’s a great question. And we talk a lot about that internally today. Having the ability to have a closed-loop system, a system that sits side by side with the process, statistically analyzes what’s going on, and is able to provide that feedback, I think, is really a target of ours in the next five years.

SD: Thank you very much for joining us today, Paul, and providing your expertise and background inside this very large topic. I really appreciate you coming in and having this chat with me.

PD: Appreciate you guys having me on today, and hopefully we do this again another time.

SD: To those that are listening, as always, thank you so much for joining us for the first episode of our third season. We wouldn’t have a third season if it wasn’t for those tuning in and listening to our Enabling Automation podcast, and hopefully taking some really strong insights to help you with your own automation journey. Our next episode, episode two, will be about establishing mutually beneficial relationships, and that’ll be a lot around partnerships with external parties and how you can approach them and take the most value out of them, which may actually apply and be an extension of this AI discussion. I look forward to talking to you then. Thanks so much.